Mario Gamper, VP of Strategic Design, BCG Digital Ventures, discusses the future world of AI natives.
Over the next ten years, we will witness the coming of age of the AI Natives: young people who will have always lived in a world of digital assistants like Alexa and Siri. Limited and monotonous as these assistants are today, they announce no less than the end of our modern separation of spirit and matter.
Drawing a line between ‘minds’ and ‘things’ is key to our (western) understanding of the world. As Piaget has shown, it takes years of ‘training’ to convince children that the stuffed tiger isn’t alive in some way, that there is instead a chasm between the animate and inanimate.
What I see emerging is a generation of AI Natives for whom this key element of education will fail. A world full of experiences of ‘things’ that seem to possess some sort of ‘mind.’ Where accelerated technology constantly jumps the chasm, and where things remain magical. An animistic world.
In his 1871 book “Primitive Culture,” the anthropologist Sir Edward Tylor defined animism as “an idea of pervading life and will in nature…a belief in personal souls animating even what we call inanimate bodies.”
When we design animate things, we can be sure that these things will design us back.
To understand what it means to have only ever known an animistic AI world, we could take an ethnographic walk with those natives–to understand their emerging wants, needs and deeper motivations. I’d like to call this speculative ethnography, borrowing from the similar practices of “speculative design” and “speculative architecture,” which use forward-looking fiction as an insight development tool. (Check out the fascinating “Speculative Everything” by Dunne & Raby.)
What do we feel when things talk to other things in a secret language that is no longer comprehensible to us? And what will our world be like when we assume that this is normal? Once we put ourselves in the shoes of AI natives, it will quickly become clear that our separation of animate and inanimate has made way for an animistic realm in which things are alive and have intentions. Some benevolent. Some not.
As speculative ethnographers, we could observe an employee standing shell-shocked in front of a smart office door that usually opens for him and greets him. But today it won’t open–and it won’t give any explanation. With no additional information, will he argue with the door, or go home? How will he prepare for the commute to the office the next day? Will he ask someone what the door “wants”? Will he bring an offering?
Most of the things we would observe as speculative ethnographers would seem rather irrational to us. Looking at studies of traditional animist societies we can see how those patterns make sense in a world that defies simple objectification, where impersonal spiritual forces have power over human affairs.
Humans must discover when and how those spiritual forces will impact them in order to create some form of stability. Thus, understanding the spirits’ intentions (divination), and possibly manipulating them toward goodwill (magic), are key skills and efforts of animistic cultures. Yet some “spirited things” may stubbornly resist all efforts–like that unhelpful office door. Life in an animist future may be full of mysterious wonders, but it won’t always be simple or worry-free.
While an ethnography into this animistic future has to be speculative, the implications of its findings are no longer theoretical.
The developers of Google’s new translation algorithm discovered that the AI not only created dramatically better translations between the languages it had been trained on; to their surprise, it was also very good at translating between language pairs on which it had received no training. The Google engineers believe that the translation AI created its own master language–its own interlingua. And they know neither how the algorithm did it nor what that interlingua looks like.
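For the technically curious: the published account of Google’s multilingual system describes a simple trick in which every input sentence is prefixed with a token naming the desired target language, so one shared model learns all directions at once–and can then be asked for a direction it never saw in training. A toy sketch (this is an illustration of the token scheme, not Google’s actual code or model):

```python
# Toy illustration of the target-language-token trick behind
# multilingual neural translation. One shared model sees inputs like
# "<2es> hello"; at inference we can request a language pair that
# never appeared in training ("zero-shot" translation).

def make_example(src_sentence: str, target_lang: str) -> str:
    """Prefix the source sentence with a target-language token, e.g. '<2es>'."""
    return f"<2{target_lang}> {src_sentence}"

# Suppose training covered only English->Spanish and Portuguese->English...
train_pairs = [
    (make_example("hello", "es"), "hola"),
    (make_example("olá", "en"), "hello"),
]

# ...yet the same input format lets us request Portuguese->Spanish,
# a pair the model was never explicitly trained on:
zero_shot_input = make_example("olá", "es")
print(zero_shot_input)  # -> "<2es> olá"
```

Why the shared model can serve that unseen pair at all is exactly the mystery in question: the most plausible explanation is that it maps all languages into a common internal representation–the interlingua.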
On the other side of the road, Facebook likewise ran headfirst into the unexpected when its AI negotiation bots, Bob and Alice, deviated from English grammar into their own neo-babble.
Bob: “I can can I I everything else.”
Alice: “Balls have zero to me to me to me to me to me to me to me to me to.”
The bots apparently considered this to be a more efficient way of talking. But it was no longer easily divinable for humans, so Facebook’s engineers shut them down.
As product developers, we are rapidly moving towards creating spirited things that will do what we didn’t plan for and don’t understand. In most cases, this will be a feature, not a bug.
Designers of AI assistants and smart objects have to both enable animistic behavior and limit its negative impacts. A set of design principles for this world of pervasive smart objects will have to include modern fixes for the traditional struggles of animistic societies.
We should not task users with developing new systems of divination, taboos and magic to deal with products, services and assistants whose inner workings and causality they can no longer understand. Future AI natives should be able to trust a world where spirited things are benevolent, transparent, honest and controllable.