Hypercopying — A Permanent Revolution at Scale?

Peter Stannack
3 min read · Jan 8, 2018


Human beings aren’t particularly tool-making animals or language-using animals, and (sadly) we are not originating animals either. We are, like monkeys, mynah birds and budgies, copying animals.

Most of the things that make us human (learning, search, modelling) are, at base, elements of copying. Most of our behaviours are copied, although few of us understand just how, and when, we turned into our parents.

There are, of course, different types of copying. Copy-adapt and copy-maintain are two such types. Copy-adapt involves changing what we have copied to meet different contextual needs. Copy-maintain is straight mimicry, although it can produce exaptation where mimetic skills are poor.

But conscious copying is only a small fraction of our mimetic activity. Between 95 and 99% of our brain activity is unconscious. This means that what we often think is original has been copied and combined, using associative, abstractive-adaptive and mimetic information tools, from something we hold in the brain but do not consciously recall (not all the information we hold is held in memory).

This means that much of what we think we have ‘created’ or ‘invented’ is, in fact, borrowed and adapted. Of course, we give this copying different names, such as modelling or experimentation. But nothing starts from nothing.

So, when we work in any of the fields which have come to be known as “Artificial Intelligence” — Machine Learning, Machine Intelligence, Deep Learning and some aspects of Data Science — it’s important to recognise the debt we owe to nature when we build our projects and models.

Sometimes this debt is second-hand, owed by some of our progenitors in fields such as computational neuroscience and mathematical psychology, who also tried to create something ‘new’ by copying.

A good example here is the work of Hodgkin and Huxley, carried out in the early 1950s and described in a series of 1952 papers. It was an attempt to develop a framework for modelling the action potential of neurons. Their work won them the Nobel Prize in 1963, and laid some of the foundations of computational neuroscience and, ultimately, artificial intelligence.

Hodgkin and Huxley’s work quantitatively explained the process by which action potentials are formed: the voltage-dependent activation and subsequent inactivation of sodium channels, terminated by a delayed activation of potassium channels. They were able to reproduce the action potential of neurons, correctly calculate its velocity of propagation, analyse the refractory period, and account for the phenomenon of post-inhibitory rebound, or “anode break”.
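To make the copied machinery concrete, here is a minimal sketch of the Hodgkin-Huxley equations in Python, integrated with a simple forward-Euler loop. The parameters and gate rate functions are the standard textbook squid-axon values in the modern voltage convention, not figures quoted in this article, so treat it as an illustration rather than a reproduction of the 1952 papers.

import math

# Membrane capacitance (uF/cm^2) and maximal conductances (mS/cm^2)
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
# Reversal potentials (mV)
E_Na, E_K, E_L = 50.0, -77.0, -54.4

# Voltage-dependent opening (alpha) and closing (beta) rates for the gates
def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * math.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * math.exp(-(V + 65.0) / 80.0)

def simulate(I_ext=10.0, dt=0.01, t_max=50.0):
    """Integrate the equations under a constant stimulus current I_ext."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32  # approximate resting state
    trace = []
    for _ in range(int(t_max / dt)):
        # Sodium activates (m) then inactivates (h); the delayed
        # potassium current (n) terminates the spike.
        I_Na = g_Na * m ** 3 * h * (V - E_Na)
        I_K = g_K * n ** 4 * (V - E_K)
        I_L = g_L * (V - E_L)
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (alpha_m(V) * (1.0 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1.0 - h) - beta_h(V) * h)
        n += dt * (alpha_n(V) * (1.0 - n) - beta_n(V) * n)
        trace.append(V)
    return trace

print(max(simulate()))  # spike peak, roughly +40 mV

Run as-is, this produces repetitive spiking: the sodium gates drive the upstroke, and sodium inactivation plus the delayed potassium current bring the membrane back down, exactly the sequence described above.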

Their framework is part of what lets us build multilayer feedforward neural networks with multi-valued neurons today. But the important thing to remember is that their original research, their copying, was wrong, and it was not until they copied the voltage clamp from a tool developed by a physicist in the 1930s that they were able to determine how wrong that original copying had been. The process was slow. It took thirty years for genotypic adaptation theories to be accepted. We still copy-maintain much more than we copy-adapt. Thomas Kuhn’s work on the structure of scientific revolutions is, despite its weaknesses, still relevant here.
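For contrast with the biophysical model, here is what a multilayer feedforward pass reduces to in code. This sketch uses ordinary real-valued tanh neurons with arbitrary random weights; the multi-valued neurons mentioned above are a more elaborate variant, so take this as a loose illustration rather than an implementation of that idea.

import math, random

random.seed(0)

def layer(inputs, weights, biases):
    # One fully connected layer: tanh(W x + b), computed by hand.
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# A 3-input, 4-hidden-unit, 2-output network with random weights
W1 = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(4)]
b1 = [0.0] * 4
W2 = [[random.uniform(-1.0, 1.0) for _ in range(4)] for _ in range(2)]
b2 = [0.0] * 2

x = [0.5, -0.2, 0.8]
hidden = layer(x, W1, b1)    # first copy-and-transform of the input
output = layer(hidden, W2, b2)
print(output)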

It is useful to think about this when we plan the design and deployment of AI. Smart AI can, extensively, copy-adapt. Dumb AI tends to copy-maintain.

But in our highly connected world, there are many more things available to copy, and many more tools to copy with. Humans can copy faster and more accurately than ever. This means that being aware of what we are copying, and how and why, is critical. We may be moving from an era of copying relatively static models to an era of hypercopying, where models are copied and discarded without being tested and adapted.

Imagine what this could mean for individuals, communities and economies. Reference points (another thing we copy) could break down. Service and product market volatility will increase, with cycles becoming much shorter, possibly too short to react to. Social ties will weaken. Contexts will multiply. And AI, or at least the sort of AI we commonly design and deploy now, will have a similarly hard time keeping up.


Written by Peter Stannack

Just another person, probably quite a bit like you