Advances in technology, such as increasingly sophisticated machine learning algorithms and robots, have sparked considerable debate about artificial intelligence (AI) and artificial consciousness. While many of the tools created to date have achieved excellent results, there has been plenty of discussion about what sets them apart from human beings.
In particular, computer scientists and neuroscientists have speculated about the difference between intelligence and consciousness, wondering whether machines could ever achieve the latter.
Conscious machines are a staple of science fiction, often presented as an inevitable fact of the future, but this is far from certain.
What is artificial consciousness?
Artificial consciousness, also known as machine consciousness, synthetic consciousness or AI consciousness, refers to a non-biological, human-made machine that is aware of its own existence. If it is ever created, it will profoundly affect our understanding of what it means to be "alive."
Before considering artificial consciousness, we need to first study what we have now: artificial intelligence.
Artificial intelligence is a broad term covering everything from basic software automation to machine learning. It is still far from artificial consciousness.
One way to build artificial intelligence is with machine learning techniques such as neural networks: technologies that allow computers to be "trained" on data using complex algorithms. As a result of this training, these AI systems can perform useful tasks.
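The idea of "training" can be illustrated with a deliberately tiny sketch: a single perceptron, one of the simplest neural-network building blocks, learning the logical AND function from labeled examples. This is an illustrative toy under simplifying assumptions, not how production AI systems are built.

```python
# A minimal perceptron "trained" on labeled examples of logical AND.
# Illustrative toy only: real systems use far larger networks and data.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    # Step activation: output 1 if the weighted sum exceeds zero.
    total = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if total > 0 else 0

for epoch in range(10):            # repeated passes over the data
    for x, label in samples:
        error = label - predict(x)  # 0 if correct, +1 or -1 if wrong
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

print([predict(x) for x, _ in samples])  # learned AND: [0, 0, 0, 1]
```

After a handful of passes over the data, the weights settle and the network reproduces AND correctly; the "knowledge" lives entirely in a few learned numbers, which is the sense in which such systems are trained rather than explicitly programmed.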
Artificial intelligence is rapidly evolving and finding many uses in a wide variety of industries. From retail to manufacturing, banking and financial services, AI algorithms are being implemented to meet people's needs and desires, as well as improve user experience, efficiency and security.
When can artificial consciousness become real?
There is still no specific or agreed-upon time frame for when artificial consciousness might become a reality. Progress is unpredictable because we do not yet understand what technological leaps and bounds would be needed to achieve it.
Hybrid artificial consciousness
It has been suggested that by 2030 we may see a hybrid artificial consciousness, that is, a merger of human and machine.
Futurist Ray Kurzweil predicts that by this point humans will be a hybrid of biological and non-biological intelligence, increasingly dominated by a non-biological (artificial) component.
Once a two-way brain-technology interface is developed, augmented humans, combining the best of biological intelligence and emotion with the speed, processing power and storage capacity of artificial technology, will be able to drive significant new technological developments. After that point, the rate of change may increase exponentially.
What is consciousness?
When discussing artificial consciousness, there seems to be no consensus on what "consciousness" actually means. There is still debate regarding its definition, and neuroscientists and philosophers offer several different explanations. The difficulty is that, unlike intelligence, consciousness includes an element of subjective experience. This subjectivity is what David Chalmers called the "hard problem of consciousness": how do physical processes in the brain give rise to subjective experience? Subjective experience means, for example, what it is like to see red or to feel pain. Our perception is subjective and spans vision, smell, touch and emotion, shaped by many biochemical influences.
The concept of consciousness
One of the difficulties in creating artificial consciousness is that doing so requires a good understanding of what consciousness is and how it arises in the brain, especially from a scientific point of view. While the debate has largely been philosophical, neuroscience has recently taken an interest in consciousness and is determined to explain how it emerges in a physical sense.
Neuroscience suggests that whatever consciousness may be, it must arise physically from the brain and nervous system. The idea is that if something happens in the physical world, science can in principle replicate it. Engineers have already replicated the human heart, so why shouldn't the brain be reproduced in the form of artificial consciousness?
Identification of artificial consciousness
Swedish philosopher Nick Bostrom has said that, given our problems understanding "consciousness," we must accept the possibility that genuinely self-aware artificial consciousness could emerge long before machines reach human-level intelligence.
Bostrom suggests that in the future, humanity may unwittingly cause suffering to artificial consciousness.
What does it mean for a machine to act intelligently?
When discussing AI and artificial consciousness, the "gold standard" is humans and thus natural intelligence. Just as human intelligence is measured by problem-solving tests, a machine is considered intelligent if it can solve problems. The more complex the problems, the more intelligent the machine is considered to be.
If a machine can solve certain problems better than a human, it can be considered more intelligent in that domain. But this does not mean it is intelligent by every measure of intelligence.
Turing test
In 1950, Alan Turing proposed the "Turing Test," which assesses a machine's ability to exhibit intelligent behavior. It is based on the notion that if a machine behaves indistinguishably from a person, it can be considered intelligent. In the test, a human evaluator converses with two hidden parties, A and B, and must decide which of them is the machine and which is the person based solely on their answers. If the evaluator cannot reliably tell the person from the machine, the machine is deemed intelligent.
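The structure of the test, Turing's "imitation game," can be sketched in a few lines. The canned question/answer pairs and function names below are illustrative assumptions only; the real test uses free-form natural-language conversation.

```python
import random

# Toy sketch of the imitation game: two anonymous respondents, one
# human and one machine, answer the same questions. The evaluator
# sees only the transcript and must guess which is which.
def human_reply(question):
    answers = {"What is 2 + 2?": "Four, I think.",
               "Do you ever dream?": "Yes, often about strange things."}
    return answers.get(question, "I'm not sure.")

def machine_reply(question):
    answers = {"What is 2 + 2?": "4",
               "Do you ever dream?": "I do not sleep."}
    return answers.get(question, "Query not recognized.")

def imitation_game(questions, seed=0):
    """Anonymize the respondents as A and B and collect a transcript."""
    rng = random.Random(seed)
    players = [("human", human_reply), ("machine", machine_reply)]
    rng.shuffle(players)  # hide which respondent is which
    identities, transcript = {}, {}
    for label, (identity, reply) in zip("AB", players):
        identities[label] = identity
        transcript[label] = [(q, reply(q)) for q in questions]
    return identities, transcript

identities, transcript = imitation_game(
    ["What is 2 + 2?", "Do you ever dream?"])
```

The point of the setup is that the evaluator's judgment rests on behavior alone: if the machine's answers are indistinguishable from the human's, the test counts it as intelligent, regardless of what is happening inside.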
Machine intelligence has come a long way since 1950, and researchers have built systems that outperform humans in narrow domains, for example in chess, data mining and automated theorem proving. The Turing test has thus been criticized for its limited applicability in this new age of machine intelligence.
Chinese Room argument
In 1980, philosopher John Searle put forward one of the most famous arguments against strong AI: the "Chinese Room" argument. In this thought experiment, he imagines himself in a room where Chinese characters are passed to him under the door. Searle does not know Chinese, but, like a computer, he follows a program for manipulating the symbols and passes the corresponding Chinese characters back under the door. Because the program is followed successfully, people on the other side of the door would assume there is a Chinese speaker inside, when in fact there is none. Searle's point is that manipulating symbols according to rules is not the same as understanding them: a machine can behave intelligently without any genuine comprehension, so the claim of strong AI, that a suitably programmed computer literally has a mind, is mistaken.
Philosophy of artificial intelligence and artificial consciousness
There are some deep philosophical questions about what it means to be alive and therefore to what extent it can be replicated in technology.
Emotions and self-awareness by machines
Robots have been created that can mimic emotions. However, these "emotional robots" simply do what they are programmed to do and are a long way from possessing genuine intelligence or artificial consciousness.
However, AI is being developed to read emotions in humans. As social beings, humans and animals rely on emotion to interact with and understand each other. The ability to read these signals is called emotional intelligence and has been described as "critical to the success of human interactions." Such robots are designed to study and interpret human emotions from cues such as tone of voice, body language and facial expression. Still, no robot can yet reproduce or genuinely "feel" emotions.
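The gap between reading emotions and feeling them can be made concrete with a toy, rule-based sketch. Real affective-computing systems are trained on audio, video and physiological signals; the keyword lists below are illustrative assumptions, not part of any real system.

```python
import re

# Toy emotion "reader": counts emotion-associated keywords in text.
# It classifies without experiencing anything, which is precisely the
# distinction between detecting emotion and feeling it.
EMOTION_KEYWORDS = {
    "joy": {"happy", "glad", "love", "wonderful"},
    "anger": {"angry", "furious", "hate"},
    "sadness": {"sad", "unhappy", "miserable"},
}

def detect_emotion(text):
    words = set(re.findall(r"[a-z']+", text.lower()))
    scores = {emotion: len(words & keywords)
              for emotion, keywords in EMOTION_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

print(detect_emotion("I am so happy, this is wonderful"))  # joy
print(detect_emotion("The weather report for today"))      # neutral
```

However accurate such a classifier became, it would still only be mapping signals to labels, illustrating why recognizing emotion in others does not amount to possessing emotions, let alone consciousness.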
One reason the development of artificial consciousness has not yet reached the point where robots can feel emotions may be that there are few reasons to want them to. There is a strong argument that robots are more effective without artificial emotions or artificial consciousness: a robot that "feels" tired or emotional, and is thereby distracted from its core responsibilities, would have no advantage over a human.
Among the numerous definitions of consciousness, many scientists have found it useful to distinguish different levels or stages. Some argue that there are three levels and that self-awareness arises at the third, usually called introspection, or what psychologists call "metacognition." There is no evidence that robots have developed this level of consciousness or self-awareness.
Exploitation of artificial consciousness
The problem with using the word "conscious" to describe a typical AI is that it carries connotations of humanity. If machines were ever considered conscious, a number of ethical and possibly legal considerations would come into play.
Robot slaves
As mentioned above, artificial consciousness may exist long before we can identify it, and so humanity may unknowingly enslave a new form of life.
The idea of "robot rights" has long been explored in films and various articles. If we do manage to create a conscious machine, there are strong arguments that it should enjoy rights and legal protection. Humans, after all, do not deny people with lower IQs the same protections and rights; everyone holds them simply by virtue of being human. If future machines are genuinely intelligent and possess artificial consciousness, at what point should they be granted rights?
But if robots have been designed, programmed and built to serve humans, the idea of robot rights seems counterproductive.
However, Joanna J. Bryson argues in her article that robots should be considered slaves in the sense of being servants we own, and that it would be wrong to place them alongside humanity on our planet. Bryson notes that we already own robots, and that everything about them, from their appearance to their intelligence, is designed directly or indirectly by humans. Extending human empathy to AI, she argues, could be potentially dangerous.