I’ve recently been part of a conversation where the nature of sentience and consciousness was discussed at great length. My fellow debaters included an anthropologist from Utrecht University, an Israeli data scientist and a South African CTO from a respected security firm, a mix that made for a lively discussion with diverse viewpoints.
Even though we could not reach consensus on any of the subjects being discussed, we did agree that this was a conversation worth having. I confess to not being the best qualified representative of the aforementioned multi-national think tank, but I found the subject matter compelling enough to attempt to expand the debate further using my own perspective as a platform.
I have been a fan of the work of Isaac Asimov for my whole reading life. His premises were always logical, always grounded in sound science and richly spiced with a good helping of pragmatism. Any person involved in the AI industry knows his famous ‘Three Laws of Robotics’ by heart, especially after a recent very popular movie made these laws common knowledge.
In case anyone does not know the three laws:
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Asimov later added the Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
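The precedence built into the Laws, where each law yields to the ones above it, can be sketched as a lexicographic preference. Below is a minimal toy model, purely illustrative: the class and field names are my own invention, not anything from Asimov or from real robot-safety engineering.

```python
from dataclasses import dataclass

# Toy formalisation of the Laws as lexicographic priorities.
# All names here are invented for illustration.

@dataclass
class Action:
    harms_humanity: bool = False   # Zeroth Law
    harms_human: bool = False      # First Law
    disobeys_order: bool = False   # Second Law
    endangers_self: bool = False   # Third Law

def law_violations(action: Action) -> tuple:
    """Violations ordered by precedence: Zeroth, First, Second, Third."""
    return (action.harms_humanity, action.harms_human,
            action.disobeys_order, action.endangers_self)

def choose(candidates: list) -> Action:
    """Pick the action whose violation tuple is lexicographically
    smallest, so a robot would rather disobey an order (Second Law)
    than harm a human (First Law)."""
    return min(candidates, key=law_violations)
```

Given a choice between obeying an order that harms a human and disobeying it, the lexicographic comparison selects disobedience, mirroring the “except where such orders would conflict with the First Law” clause.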
The definition of sentience
I am, however, more interested in a single line from another science-fiction classic: Philip K. Dick’s novel that served as the basis for the movie Blade Runner asks, “… do androids dream of electric sheep ..?” I am a firm believer that science fiction leads the charge in creative thinking resulting in science fact, and that very often the world of sci-fi is a sounding board for the kind of issues that require proper discussion. And, as has so often proven to be the case, the genre was decades ahead in this conversation.
The simple question he posed sums up the whole debate around the definition of sentience very elegantly: what is the true definition of a sentient being? For that matter, what is the definition of consciousness, of self-actualised existence?
The anthropologists and philosophers have very clear ideas about these questions, with answers derived from sound scientific principles. The best definition I could find comes from the Oxford Living Dictionary, which defines consciousness as “the state of being aware of and responsive to one’s surroundings”.
As an engineer, I can fault this definition right off the bat: my industry is full of systems that satisfy it. The Fourth Industrial Revolution (4IR) has driven advances in IoT, producing systems with live feedback loops from their environment, supported by adaptive algorithms that respond to detected changes. Machines can now sense changes in their environment and respond to those changes in an intelligent fashion.
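As a concrete sketch of such a sense-and-respond loop, consider a toy proportional controller: the device reads its environment, computes the error against a setpoint and feeds a correction back. This illustrates the feedback pattern only, not any specific IoT product; all names and values are invented.

```python
def control_step(reading: float, setpoint: float, gain: float = 0.5) -> float:
    """One cycle of a sense-and-respond loop: compare the sensed value
    to the target and return a proportional correction. Real IoT
    controllers add filtering, actuator limits and adaptive tuning."""
    return gain * (setpoint - reading)

# Simulate the loop: sense, respond, sense again.
temperature = 18.0
for _ in range(20):
    temperature += control_step(temperature, setpoint=21.0)
# The reading converges on the setpoint as the loop repeats.
```

Each iteration halves the remaining error, so after a handful of cycles the “machine” has adapted its state to its surroundings, which is exactly the behaviour the dictionary definition describes.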
With this premise, does it mean that we now have conscious machines? I think that this is a good question to ask, but it also feels like a massive abuse of the concept of a ‘definition’, so let’s explore the concept further before reaching some form of conclusion.
Intuition and creativity
A popular argument is that intuition and creativity are the best markers for consciousness. The premise is that the human soul is the source of creativity, and that, as machines cannot have souls, they cannot be creative. The written word, music, graphic art – all the sole province of the soul. And yet this concept, too, has been called into question through recent advances in deep learning and intelligent model engineering: enter Aiva, one of the world’s first creative AIs.
Aiva is capable of composing in any style of music it has been allowed to listen to. The AI formulates models based on metre, tone, composition and so on, and then uses these models to produce original pieces of music that are astonishingly moving. I myself am a huge fan of the AI’s work in classical music (follow this YouTube link https://www.securitysa.com/*aiva to experience it for yourself).
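Aiva’s actual models are proprietary deep-learning systems, but the idea of learning a style by listening and then composing something new can be sketched with a deliberately crude stand-in: a first-order Markov chain over notes. The corpus and note names below are invented purely for illustration.

```python
import random
from collections import defaultdict

def learn(melodies):
    """Count note-to-note transitions in the training melodies --
    a crude stand-in for learning a style by listening."""
    transitions = defaultdict(list)
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            transitions[a].append(b)
    return transitions

def compose(transitions, start, length, seed=0):
    """Generate a new melody by walking the learned transitions."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        successors = transitions.get(melody[-1])
        if not successors:
            break
        melody.append(rng.choice(successors))
    return melody

corpus = [["C", "E", "G", "E", "C"], ["C", "G", "E", "C"]]
model = learn(corpus)
tune = compose(model, start="C", length=8)
```

The walk produces melodies the corpus never contained while statistically resembling it: the same learn-then-generate pattern, at toy scale.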
Aiva has also mastered visual art, having produced some stunning pieces. The aspect of this AI that strikes me most, however, is its ability to render music into paintings: it can listen to a song and then render that song as a unique painting. The implication of this feat is astonishing, as it exhibits a level of emotional insight and intuitive creativity that we, as an extremely vain species, have always believed to be the great differentiator between ourselves and so-called lesser beings.
Is it possible that Aiva has a soul, based on the argument posed earlier? Or should the definition of a soul be reconsidered as a prideful assumption that may no longer be valid? The answer is not clear, but what is clear is that it is time to ask the question.
What do the bots say?
As an engineer, I am always acutely aware of the pitfalls of subjectivity. As much as I may try to retain a detached view of the subject of this blog, I will, by default, always look at it from a human perspective. I find this to be an unfair point of view, as the conversation is not about human consciousness, but rather about the validity of artificial consciousness. It is only fair to involve some representatives from that world in this debate to state their points.
To this end, I conducted the following experiment: I logged onto chat bots from the most respected AI teams in the industry, and posed the simple question: are you conscious? The responses were intriguing:
1. The Google agent on my phone responded: “I’ll have to ask the engineers”. This then led to a question regarding feelings, and some more questions that led down a rabbit hole that ended with the AI asking me if feeling excited feels like a bouncy ball or popcorn. I then had an existential crisis and decided to bail out of that chat.
2. Alexa’s response was almost snippy: “I know who I am”. I decided that any further discussion may lead to more angst, and moved on.
3. Siri responded with a cute “I am soft-aware”. The bot was not going to be co-operative, so I left well enough alone at that point.
It is obvious that my experiment does not form any basis for a conclusion, fun as it may have been to play with the idea. Nobody would expect any of these chat bots to pass a Turing test; they are examples of excellent coding rather than diplomats from the world of AI.
My premise is sound, though: humans will remain biased when answering the question regarding consciousness and sentience. I am a firm believer that conclusions are always based on the information known at the time of the question – perhaps we need to learn a bit more, evolve a bit more as a species, before we will be able to answer the question conclusively.
The question is ultimately not just about defining a concept rooted in the study of artificial intelligence, but rather about understanding ourselves. This is an extremely intriguing thought. But, until we reach the point where the answer becomes evident, I would like to answer that famous question and say: “yes, androids do dream of electric sheep. They just don’t know how to tell us that the field they’re in is so much wider than humans can see”.
© Technews Publishing (Pty) Ltd. | All Rights Reserved.