Are you intelligent enough to survive the AI Revolution? If it ever happens? Or if it doesn’t?
When we talk about intelligence, we can define it in many different ways. These range from the recall tasks we see in TV quiz shows such as Jeopardy, through winning zero-sum games, to the kinds of learning task that involve categorisation, association, prediction and a broader range of data-handling functions. But are we intelligent enough about intelligence? And do we need, and can we create, meta-intelligence, in ourselves and/or in machines?
In his book, Human Intelligence (2010), Earl Hunt returns to the hedgehog and fox story first recorded by Archilochus and re-popularised by Isaiah Berlin.
He uses this parable to suggest that sometimes we cannot develop a unifying theory of a phenomenon (in this case, human intelligence) because it simply has too many facets to model as one thing: what data scientists might call the curse of dimensionality. Indeed, Howard Gardner has defined intelligence as having seven such ‘facets’, whilst Robert Sternberg’s triarchic theory identifies just three. This leads to problems not just in defining intelligence within individuals but also in measuring it, and then, hopefully, in managing it upwards.
And the process becomes even more fraught when we start measuring the intelligence of groups or populations. Are doctors more intelligent than lawyers? Are postgraduates brighter than graduates? This matters, too, because we certainly seem to pay more for some groups than for others. The problem has bred a slightly different approach, one which measures not just intelligence but competence, which might be defined as ‘applied intelligence’.
And of course, we can see a similarly dispersed discourse in the field of Artificial Intelligence, which encompasses approaches such as machine learning and deep learning, transfer learning, reinforcement learning, GOFAI (‘good old-fashioned AI’), computational intelligence, swarm intelligence, and so on. Here we see an emphasis on pattern recognition, where responding to and acting upon patterns is treated as a critical element of intelligence.
But is it?
So, should we keep on trying to standardise something which, apparently, cannot be standardised?
This article suggests that intelligence, human or artificial, consists of a number of key operations (sketched in code after the list):
1. Insight-Driven Supervision. The ability to identify, monitor, adapt and control your own data-handling processes. These range from volition or conation (goal setting and goal-related activity) through to cognition (how we adapt to changes in our social and physical environment).
2. Interactional Model Design. The ability to model both internal and external context interactively using categorisation and association, amongst other elements.
3. Relevant Sortation. The ability to choose, combine and discard the right data to build models.
4. Interactional Modelling. The ability to build and test models.
5. Model Control Surface Deployment. The ability to select or design model control systems.
6. Model Control Surface Use. The ability to implement model controls.
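As a purely illustrative sketch, and not a working system, these six operations might be expressed as an abstract interface. Every name below is hypothetical, chosen only to mirror the list above:

```python
# Hypothetical sketch: the six operations as an abstract interface.
# All class, method and parameter names are illustrative only.
from abc import ABC, abstractmethod
from typing import Any, Iterable


class IntelligentSystem(ABC):
    @abstractmethod
    def supervise(self, goals: Iterable[Any]) -> None:
        """1. Insight-driven supervision: identify, monitor, adapt and
        control one's own data-handling processes against goals."""

    @abstractmethod
    def design_model(self, internal_context: Any, external_context: Any) -> Any:
        """2. Interactional model design: model internal and external
        context using categorisation and association."""

    @abstractmethod
    def sort_data(self, data: Iterable[Any]) -> Iterable[Any]:
        """3. Relevant sortation: choose, combine and discard data."""

    @abstractmethod
    def build_and_test(self, design: Any, data: Iterable[Any]) -> Any:
        """4. Interactional modelling: build and test models."""

    @abstractmethod
    def deploy_controls(self, model: Any) -> Any:
        """5. Model control surface deployment: select or design controls."""

    @abstractmethod
    def apply_controls(self, controls: Any, model: Any) -> Any:
        """6. Model control surface use: implement the controls."""
```

The point of the sketch is simply that the operations form a loop: supervision sets goals, which shape model design, data sortation, model building and, finally, control.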
We cannot emphasise too strongly the interactive nature of intelligence. Human systems seem, according to our research, to be self-regulating in the forms of stimulus they choose to respond to and modify. On the other hand, many AI systems still seem to rely heavily, although not completely, on the grand old Skinnerian paradigm of positive and negative reinforcement, using models such as those which rely on dopamine reward ‘systems’. Of course, reinforcement sensitivity varies from context to context and diminishes with experience.
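To make that last point concrete, here is a minimal, hypothetical sketch (the names and parameters are our own, not taken from any particular RL library) of reinforcement with habituation: the agent’s sensitivity to a reward decays the more often the rewarded action has been reinforced:

```python
# Minimal sketch of Skinnerian-style reinforcement with habituation:
# the same reward moves the value estimate less and less over time.
import random
from collections import defaultdict


class HabituatingBandit:
    def __init__(self, actions, lr=0.1, decay=0.05, epsilon=0.1):
        self.actions = actions
        self.lr = lr                        # learning rate for value updates
        self.decay = decay                  # how fast reward sensitivity fades
        self.epsilon = epsilon              # exploration rate
        self.values = defaultdict(float)    # estimated value per action
        self.exposures = defaultdict(int)   # reinforcement count per action

    def choose(self):
        # Epsilon-greedy: mostly exploit the best-valued action,
        # occasionally explore at random.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.values[a])

    def reinforce(self, action, reward):
        # Sensitivity shrinks with repeated exposure (habituation).
        sensitivity = 1.0 / (1.0 + self.decay * self.exposures[action])
        self.values[action] += self.lr * sensitivity * (reward - self.values[action])
        self.exposures[action] += 1


# Usage: reward action "a" repeatedly; its value estimate rises quickly
# at first, then plateaus as sensitivity decays.
agent = HabituatingBandit(["a", "b"])
for _ in range(100):
    act = agent.choose()
    agent.reinforce(act, reward=1.0 if act == "a" else 0.0)
print(agent.values["a"], agent.exposures["a"])
```

Run repeatedly, the value estimate for the rewarded action climbs steeply at first and then flattens, mimicking the diminishing reinforcement sensitivity described above.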
But it seems to us that there is a race currently taking place in AI: an interactive race in which the increasing ‘intelligence’ of the AI designer needs to be matched by the increasing ‘intelligence’ of the AI itself, and by the increasing intelligence of the human users.
Are we focusing on the right thing?