‘Mad, Bad and Dangerous to Know’: The Cultural Curse of Artificial Intelligence

Peter Stannack
4 min read · Sep 13, 2020


Can AI experience uncertainty?

And should it?

Since the nineteenth-century introduction of compulsory education across much of the developed and developing world, there has been a strong, but perhaps unsustainable, belief that education is strongly correlated with learning. And even if it isn’t, it is strongly correlated with socialisation and with the normative pressures inherent in social institutions to conform.

Of course, this ‘given’ is supported by people who have successfully passed through the education process, despite the fact that the process seems to leave them so exhausted that curiosity withers and dies within them for the rest of their existence. It is also, understandably, supported by teachers, educators and AI algorithm designers who, at the end of the day, like to have jobs that enable them to live in nice houses, eat good food, and so on.

And yet we port these paradigms into fields arbitrarily labelled, with no real reason, ‘machine learning’ or ‘deep learning’. After all, they sound so much more effective than machine teaching and deep teaching. And you can avoid responsibility for both failure and error.

Teaching machines to think is somewhere between hard and impossible. Teaching humans to think, some might say, is even harder. Cloud-based Artificial Intelligence (AI) seems to learn from its users across a range of parameters, and claims to understand users’ intent and to offer data and services in ways that allow their integration and composition.

Toolkits such as Spark NLP claim to achieve extraordinary accuracy in natural language processing at scale, and we see claims that tens of thousands of such capabilities and services are already integrated into the Google Assistant, Apple Siri, Amazon Alexa, the open-source Mycroft, Microsoft Cortana, Samsung Bixby, and others. Models such as GPT-3 generate ‘near-perfect’ written text samples at scale.
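None of this is magic. As a minimal sketch, assuming ‘Spark’ here refers to the Spark NLP library and its publicly documented ‘explain_document_dl’ pretrained pipeline, invoking such a capability looks roughly like this:

```python
# A minimal sketch, not anything from this article: assumes the Spark NLP
# library (pip install spark-nlp pyspark) and its documented pretrained pipeline.
import sparknlp
from sparknlp.pretrained import PretrainedPipeline

# Start a Spark session with Spark NLP on the classpath.
spark = sparknlp.start()

# Download a pretrained pipeline: tokenisation, POS tagging, NER and more.
pipeline = PretrainedPipeline("explain_document_dl", lang="en")

# The 'understanding' is just a dictionary of annotations.
result = pipeline.annotate("Teaching machines to think is hard.")
print(result["token"])  # the tokens the pipeline found
print(result["pos"])    # their part-of-speech tags
```

Note what the pipeline returns: labels, not comprehension.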

And many of these capabilities have been taught to AI by you. Without pay, or even thanks. Tough, huh?

But does this count as a culture of learning?

Culture is, justifiably, considered a “slippery concept” within most environments, whether in the home, the workplace or the school. To you and to AI, these are just contexts. But these ‘cultural’ contexts are both intercausative, in that ‘A’ causes ‘B’ whilst ‘B’ also causes ‘A’, and intracausative, in that ‘A’ simply causes ‘B’. Such an approach, of course, militates against the unquestioned control model, which either places the system controller outside the system being controlled or assumes that the system controller is unaffected by his, her or its control of the system.
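To make the distinction concrete, here is a toy sketch (my own illustration, not anything from the article) of an intercausative loop, in which the ‘controller’ is itself changed by the system it steers:

```python
# Toy illustration of intercausative coupling: each update to the system
# feeds back into the controller, so the controller sits inside the loop.
def step(teacher, classroom, k=0.1):
    classroom += k * (teacher - classroom)  # teacher nudges the classroom...
    teacher += k * (classroom - teacher)    # ...and the classroom nudges back
    return teacher, classroom

teacher, classroom = 1.0, 0.0
for _ in range(50):
    teacher, classroom = step(teacher, classroom)
print(round(teacher, 3), round(classroom, 3))  # both drift to a shared state
```

Delete the second update inside step() and you are back to the one-way control model: a controller that shapes the system while remaining untouched by it.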

Here, the teacher acts as the system controller for both the human and the machine system. But she also acts as part of the culture within the teaching and learning process. This should offer various environments for all the stakeholders, although they will experience and interpret them differently. We can, of course, consider learning as a mixed input-output platform with a range of interpersonal, non-role-based, interactions and outputs.

These might include collaborative and group learning, the selection and use of software, the selection and use of other learning materials, models of pedagogy, the impact of external pressures such as exam requirements, models of constructed conceptual development and cognitive knowledge, and the link between learning and learner interactions.

Normally, the teaching response to this is embedded in time, student numbers and other resource-based arguments. But even where ‘classroom’ culture is seen as essential for promoting academic resiliency, the classroom environment serves as a way of managing the tolerance for uncertainty intrinsic to effective learning.

The intention behind cultural engineering in human learning is to promote students’ trust relationships with their teachers while building competence, confidence and opportunity. This involves continually evaluating the curriculum to find opportunities for learners to escape uncertainty by engaging in problem-solving and the practical application of their learning. The ‘right’ learning culture might improve human students’ ownership of their learning and alleviate feelings of anger, anxiety, alienation and powerlessness.

All of which, if unresolved, seems a little like a blueprint for social unrest or mental health issues.

Both the constructivist psychological theories of Jean Piaget and the radical constructivism of von Glasersfeld emphasize the nature and importance of interactive processes among learners within the social context of learning. Since AI has no source of cultural context, it seems possible to suggest that it can develop no control over its own learning, because learning is a social process.

Learning motivation is deeply embedded in the sociocultural contexts of learning and the transitive processes occurring in those particular contexts. From a social constructivist perspective, the motivation for learning encompasses not only the sociocultural but also the interpersonal and intrapersonal components that promote the learner’s learning process. And, of course, learning as adaptation and learning as normativity are two different things, which may be related only in such contexts.

Conceptual frameworks addressing or touching on culture do exist. These include White’s effectance motivation, Weiner’s attribution theory, deCharms’s theory of personal causation, Glasser’s control theory, cognitive evaluation theory, and Deci and Ryan’s organismic integration theory.

Learning culture plays a pivotal role in learners’ satisfaction and achievement.

Dweck (2000) indicated that many students pay more attention to creating social affirmation than to learning content. So maybe our potential problems are less the lack of AI ethics and more the fact that those ethics are developed in isolation.

And maybe, until AI, like humans, develops its awareness through effective socialisation embedded in an effective learning culture, all the GPT-3-writing, Go-playing personal assistants in the world won’t fix the social and economic problems we face.

Because socialisation is a fluid solution for managing uncertainty.
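For contrast, the nearest thing a model has to ‘experiencing’ uncertainty is a number. A minimal sketch (my own illustration, assuming nothing beyond the standard library) of one common proxy, the entropy of a model’s predictive distribution:

```python
import math

def predictive_entropy(probs):
    """Shannon entropy of a model's output distribution, in bits.
    High entropy = the model is 'uncertain'; zero = a know-it-all."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(predictive_entropy([0.98, 0.01, 0.01]))  # ~0.16 bits: confident
print(predictive_entropy([1/3, 1/3, 1/3]))     # ~1.58 bits: maximally unsure
```

Whether driving that number to zero counts as managing uncertainty, rather than merely erasing it, is precisely the point at issue.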

Which is why none of us likes a know-it-all. And it opens the question of how sustainable and functional AI can be over time. And of whether the struggle with uncertainty is what makes us human.
