16 Things to Think About Before You Post on Technology

Peter Stannack
5 min read · Jan 3, 2018


We all need reference points. And in their absence, we will create them. Unfortunately, once we have adopted a reference point, it becomes very difficult for us to move away from it. Indeed, arguments tend to force even scientists (!) to move closer to, and defend, their reference points. And that's when scientists seem to be as lost as the rest of us.

At the dawn of the new millennium, Bill Joy claimed, in an attention-grabbing and consequently widely shared and commented-on article, that the most powerful 21st-century technologies threaten to make humans an endangered species.

Indeed, since then a growing number of forecasters have developed 'visions' in which the claim is made more explicit. I say forecasters, because let's not forget that we are talking about the future, where scientists (even data scientists) are almost as blind as the rest of us.

These visions insist that accelerating progress in disruptive technologies such as artificial intelligence (machine intelligence, deep learning, data science), robotics, genetic engineering, and nanotechnology may lead to what they refer to as the technological singularity: an event or phase that will radically change human civilization, and perhaps even human nature itself, before the middle of the 21st century.

Such so-called 'singularity hypotheses' largely refer to one of two distinct and very different scenarios.

The first is driven by technology. It postulates the emergence of 'artificial super-intelligent agents', as suggested by Nick Bostrom and others. Such software-based synthetic minds would create the sort of 'intelligence explosion' posited in the 1960s: a process in which artificially intelligent minds enter a 'runaway reaction' of self-improvement cycles, with each new and more intelligent generation appearing faster than its predecessor.

The second is driven by people. This involves 'transhumanists' who expect progress in enhancement technologies, most notably the amplification of human cognitive capabilities, to lead to the emergence of a posthuman race. Posthumans would overcome, or escape, all existing physical and mental human limitations, and rise above or eliminate illness, ageing and death. Sometimes these enhancements are biologically based; sometimes they are constructed by 'reverse-engineering' the human brain and integrating it into machines or other structures.

Advocates of the technological singularity have developed a 'go faster' argument based on the extrapolation of trend curves seen in computing technology (such as Moore's Law) and in econometrics. The argument rests on the following premises:

(1) Time series data shows that technological progress has been accelerating for a number of centuries.

(2) There is no reason for this acceleration not to continue.

(3) As it continues, our technological achievements will become so great that our bodies, minds, societies, and economies will be radically transformed.

(4) This transformation will occur on such a scale as to lead to ‘disruption’.

Kurzweil sets the date at mid-century, around 2045, and poetically tells us that the change will create a 'rupture in the fabric of human history'.
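To make the structure of this argument concrete, here is a minimal sketch of how the extrapolation step typically works: fit an exponential curve to a handful of historical data points and simply project it forward. The 'capability index' figures and dates below are made-up illustrative numbers, not real measurements, and the whole exercise is an assumption of this sketch rather than anything the forecasters themselves have published.

```python
# A minimal sketch of the 'go faster' argument: fit an exponential trend
# to a few historical data points and extrapolate it forward.
# NOTE: the figures below are illustrative placeholders, not real measurements.
import math

# (year, capability index) -- hypothetical doubling roughly every two years
observations = [(2000, 1), (2004, 4), (2008, 16), (2012, 64), (2016, 256)]

# Least-squares fit of log2(capability) = slope * year + intercept,
# i.e. an exponential trend in the original units.
n = len(observations)
xs = [year for year, _ in observations]
ys = [math.log2(value) for _, value in observations]
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope_num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
slope_den = sum((x - mean_x) ** 2 for x in xs)
slope = slope_num / slope_den
intercept = mean_y - slope * mean_x

def extrapolate(year):
    """Project the fitted exponential trend to a future year."""
    return 2 ** (slope * year + intercept)

# Premise (2) in action: simply assume the curve keeps going.
for year in (2020, 2030, 2045):
    print(year, round(extrapolate(year)))
```

Notice where the forecasting work actually happens: the fit only describes the past, and premise (2), the bare assumption that the curve keeps going, does everything else.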

Critics of both these arguments suggest that they are deeply unsound and lack adequate empirical support. Further, it has been suggested that these arguments may be motivated by unexamined mysticism, covert religious mania or even greed.

It therefore seems that, as well as balanced reporters on technology, there are also many singularity junkies around. These individuals post regularly on social media sites to make claims about technology — specifically artificial intelligence, robotics, genetic engineering, and nanotechnology — that suggest such a rupture is approaching or already here.

So, since Lakatos suggested that the Kuhnian model of scientific revolutions amounted to 'mob psychology', perhaps we should develop a guide that can help us think about how we post on the subject of digital transformation and disruption in artificial intelligence (machine intelligence, deep learning, data science), robotics, genetic engineering, and nanotechnology.

As a starting point, I would propose an (incomplete) list of the questions you should ask yourself before you post on social media. These are:

1. What is the actual nature of the technology I am making the claim for? Can it be defined? What exactly is being claimed?

2. What is the empirical content of this conjecture? Can it be refuted or corroborated, and if so, how?

3. What is the ‘raw material’ of this technology? Can a shift be effectively measured?

4. What effect am I claiming? Is it a discontinuity on a par with, say, a phase transition, or a process on a par with Toffler's 'third wave'?

5. What metrics am I using to measure the effect?

6. Am I sure that my metrics correspond with the effects? Can I demonstrate this?

7. What evidence, taken for example from the history of technology and economic theories, supports these effects?

8. What, if anything, can be said to be accelerating? What evidence can reliably be said to support its existence?

9. Which metrics support the idea that 'artificial intelligence (machine intelligence, deep learning, data science), robotics, genetic engineering, and nanotechnology' are indeed accelerating?

10. In what areas are they accelerating? In what areas are they slowing?

11. What are the most likely milestones in the countdown to the shift or effect I am claiming?

12. What are the necessary and sufficient conditions for an intelligence or technological ‘explosion’ (a runaway effect)? What is the actual likelihood of such an event? How will it occur?

13. What evidence supports the claim that machine intelligence has been rising? How is machine intelligence measured? How is human intelligence measured? How effective are these measures? Do we have sufficient data to extrapolate reliably? (See the sketch after this list.)

14. What are the necessary and sufficient conditions for machine intelligence to be considered to be on a par with that of humans? As Turing said, what would it take for the 'general educated opinion [to] have altered so much that one will be able to speak of machines thinking without expecting to be contradicted'? What is thinking? How can we tell good from bad thinking objectively?

15. What does it mean to claim that biological evolution will be replaced or augmented by technological evolution?

16. What exactly are the expected effects of augmentation and enhancement, in particular on our cognitive abilities?
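On question 13, it is worth seeing why 'do we have sufficient data to extrapolate reliably?' has teeth. Here is a minimal sketch, using deliberately toy numbers: data drawn from the early phase of a saturating (logistic) process is almost indistinguishable from exponential growth, yet the two models tell wildly different stories about the future. The capacity, growth rate and time scale below are hypothetical assumptions, chosen only to make the divergence visible.

```python
# Why "can we extrapolate reliably?" matters: the early portion of a
# saturating (logistic) curve looks almost exactly like an exponential,
# yet the two models diverge wildly later on.
# NOTE: all numbers here are illustrative assumptions, not real measurements.
import math

CAPACITY = 1000.0   # hypothetical ceiling of the logistic process
RATE = 0.5          # hypothetical growth rate per time step

def logistic(t):
    """Growth that eventually saturates at CAPACITY (midpoint at t = 20)."""
    return CAPACITY / (1 + math.exp(-RATE * (t - 20)))

# 'Observe' only the early, exponential-looking phase (t = 0..10).
early = [(t, logistic(t)) for t in range(11)]

# Fit a pure exponential y = a * exp(b * t) to those early observations
# via least squares on log(y).
n = len(early)
xs = [t for t, _ in early]
ys = [math.log(y) for _, y in early]
mx, my = sum(xs) / n, sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = math.exp(my - b * mx)

# Both models agree on the observed range but disagree badly about the future.
for t in (10, 20, 30, 40):
    print(t, round(logistic(t), 1), round(a * math.exp(b * t), 1))
```

Any claim that machine intelligence, or anything else, is 'accelerating' runs into this model-selection problem long before it runs into a measurement problem.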

This is, as noted above, not a comprehensive set of questions, but, in the interests of accuracy, we need something when we are talking and posting about technology, on a large or small scale. In our attempts to overcome noise, it is easy to post articles that just add to the noise we are trying to escape.

Happy Posting!
