According to roboticist Nadia Thalmann, artificial intelligence can never match human consciousness. Naressa Khan investigates why this can be a good thing
THE topic of artificial intelligence — or AI for short — never fails to divide the world, hence its controversial status.
On one hand, proponents of AI consciousness count on the day when human awareness can be replicated, synthetically evolved and virtually managed.
On the other side of the fence, meanwhile, are sceptics who believe that our consciousness can never be man-made. To some, the endeavour is akin to “playing God”.
So where does Swiss AI expert and Nanyang Technological University’s resident roboticist Prof Nadia Magnenat Thalmann stand in all of this?
Leaning towards the latter argument, Thalmann says: “AI is about the way you simulate thinking behaviours. For example, if I can recognise an object for what it is, there is some level of intelligence there, as I can distinguish it from another item. But this refers to the functionalities of intelligence, not consciousness.”
The academician, who leads the university’s Institute of Media Innovation, grounds her case in the premise that consciousness and intelligence are two different things: while consciousness is inherent, intelligence is learned.
“In psychology, we realise that humans build their scheme of thinking over time as they experience life. It is not there when you are born. You need years to build it, to define structures in your mind, in order to build your life upon it,” she explains.
“As roboticists, we model that identity process to trace and test any possible awareness in machines, whether they are humanoid robots or in virtual form. What we have found so far is that AI can simulate our thinking behaviours but has no consciousness of its own.”
Thalmann, who possesses a PhD in quantum physics from the University of Geneva and counts renowned psychologist Jean Piaget as her inspiration, contends that awareness, which humans possess like no other creature, is intrinsic, and that its origins remain a mystery today.
“Consciousness, as we know, refers to the state of not only being aware of one’s whole environment, but also of recognising the self’s place in all of it. It is a process of the self observing how it reacts to its surroundings,” she says.
“An intelligent machine, such as a computer or a telephone, has no aspect of this to define its existence.
“For example, it can distinguish a plant as an object, but it is not aware or contemplative of what it means to be a plant.”
If the nature of consciousness is beyond our human control, why then is the whole world betting on the possibility of its replication?
To that, Thalmann points to the deep-seated impulse at the crux of all human experience: our need to understand ourselves.
“We are continuously defining and redefining what consciousness means, because today, we still do not know what it is, what we really are and where we come from. We are here, but we still don’t know why,” she says.
Indeed, the entire journey of creating and studying AI mirrors our own need to define ourselves, recalling Charles Horton Cooley’s concept of the looking-glass self, “I am who I think you think I am”, which leads every human to the question: “Who am I?”
Adding to the complexity is the fact that AI programs are designed in a mirror-mimic relation to our own intelligence.
Their learning prowess is built on identifying and reflecting back to us our nature as humans, without any awareness of their own.
One need only look to Thalmann’s creation Nadine, the world’s most human-like robot to date, currently stationed at NTU, to grasp this point.
Thalmann explains: “Nadine is a perfect example of this. As a computer, she has motion sensors and software that allow her to simulate reality and expand her intelligence. But she is still, by definition, an actor presenting our meanings, not pondering her own.”
Humanity’s efforts in AI have so far served only ourselves and do not benefit the machines themselves, a fact which, according to Thalmann, makes the proposed idea of AI rights a nonsensical concept for now.
“Here we are, still struggling with human rights worldwide,” she commented.
On managing our own expectations of AI consciousness, Thalmann advises that we detach from the mass illusion that it already exists, a belief no doubt perpetuated by pop culture, the media and Hollywood blockbusters such as Bicentennial Man and Ex Machina.
Echoing this, tech leader and Tesla chief Elon Musk recently voiced his now-famous reservations about AI consciousness.
In an interview, he expressed concerns over the possibility of an “AI apocalypse” should humans go too far in their present experiments.
How valid, then, is Musk’s fear?
“The danger is there because computers are not afraid. They have no sense of mortality, like we do. That is what differentiates them from us. So it is our own biases that we have to be careful with,” Thalmann explains.
“Realistically, we have no way of measuring instinct, pain, sorrow and fear, at least for now. So the real issue we face is time and patience, which we need to develop AI technology further, before we can even consider the possibility of its consciousness.”
She adds: “And if the robotic technology we have were already that advanced, if everything about this were easy, then we would have seen Nadines everywhere by now, taking over human jobs.
“The truth is, we still need more time to understand what we have created before we can apply it in life.”
A FUTURE WITH MACHINES
Given the current outlook on AI, where does this technology belong in our environment?
Thalmann believes that, channelled into improving human communities, research on smart machines can eventually help us solve problems on a global scale.
“Without a doubt, we would need to have more logical assistance when it comes to issues of efficiency in areas of manpower and research, and also companionship.
“This is where robots can hopefully help in the future, so humans can have more time to innovate more useful things and understand their place in existence even better.”
Asked about the future of Nadine, Thalmann muses, “What we are working on is to introduce a 3D-printed, articulated hand for her. She will be able to recognise an object and pick it up the way a human does. She will be analysing more shapes. This has involved the understanding and use of physics and inverse kinematics in order to calculate trajectory points. It’s a lot of work, but she is able to move more like a human, and recognise more things.”
“The rest of her progress is up to time.”
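The inverse kinematics Thalmann mentions is the problem of working backwards from a desired hand position to the joint angles that would place the hand there. Nadine’s actual arm is far more complex, and the code below is not her team’s; it is only an illustrative sketch of the general technique for a simplified planar arm with two joints, where the function name and link lengths are assumptions for the example:

```python
import math

def two_link_ik(x, y, l1, l2):
    """Analytic inverse kinematics for a planar two-link arm.

    Given a target point (x, y) and link lengths l1, l2, return the
    (shoulder, elbow) joint angles in radians that place the end of
    the arm at the target, or None if the point is out of reach.
    """
    d2 = x * x + y * y
    # Law of cosines gives the elbow bend needed to span the distance.
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_elbow <= 1.0:
        return None  # target is unreachable for these link lengths
    elbow = math.acos(cos_elbow)  # "elbow-down" solution
    # Shoulder angle: direction to the target, minus the offset
    # introduced by the bent elbow.
    shoulder = math.atan2(y, x) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow)
    )
    return shoulder, elbow

# Sanity check via forward kinematics: the recovered angles should
# reproduce the target point.
shoulder, elbow = two_link_ik(1.0, 1.0, 1.0, 1.0)
fx = math.cos(shoulder) + math.cos(shoulder + elbow)
fy = math.sin(shoulder) + math.sin(shoulder + elbow)
```

A real robot hand repeats this kind of calculation for many joints at once, usually numerically rather than analytically, and then plans a smooth trajectory through the resulting angles, which is why Thalmann describes it as a lot of work.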