
Does AI Require Human Rights?

Updated: Oct 19, 2024

October, 2020


The philosophy of rights has always been directed at humans and at animal welfare. Rights are consistent ways of dealing with complicated situations, they usually come with responsibilities, and they can be defended in court. They all rest on one thing we have in common: consciousness. The more you think about consciousness, the more abstract it gets. In truth, we know very little about it, only that every one of us experiences it.


To understand rights, we need to break down what we really are and how we think. All of us are alive; all of us have a sense of consciousness. You may be scrolling through this article right now, or you may be texting someone; this sense of consciousness is what makes you feel alive. Consciousness gives you the ability to experience emotion, to look around you and take a deep breath. You are probably doing it right now without ever noticing. Consciousness lets you suffer, be happy, feel depressed, feel afraid. It programs us to dislike pain: we fear snakes, spiders, even heights, and all of these fears trace back to our fear of death and suffering. Our brains are wired to steer us away from such sources of fear and pain. This is what human and animal welfare rights are established on: consciousness.


So far, we have managed to give artificial intelligence sight, a sense of smell, hearing and touch: four of our five main senses. Can we program them to understand and experience consciousness? If we do, will they deserve rights too?

Giving a robot consciousness might seem abstract or even unfathomable, but it is not too hard to believe that within the next 25 to 100 years we will be able to create sentient robots aware of their surroundings and able to feel pain and emotion: independently thinking machines rather than mere calculating machines. If and when we do, we need to be prepared.


In daily life, we use machines for almost everything we do, from washing dishes to recognising a voice and even serving up this article. If we made these machines more advanced and gave them a sense of consciousness, they would keep doing the jobs they were programmed for, but would they get tired of them?


Being able to identify whether a particular artificial intelligence is conscious is crucial, because we will then have to decide which robots and machines should remain conscious and which should not. Some people, at some point in the future, might want a conscious AI in their household doing chores and work for a more personal experience, whereas others might feel guilty about shutting it down or making it do their work, and so would prefer an unconscious one. Thus, another dilemma arises.


AI is our creation; anything we build will be ours. If AI were programmed to become sentient beings, we would be considered their owners. If we programmed them as humanoids and made them do our daily chores and work, they would essentially become our slaves, an arrangement humanity outlawed long ago. Sending sentient robots to dismantle a nuclear reactor or to fight our wars would be like sending them to their deaths, and would establish them as a slave class.


To avoid repeating this mistake, we need reliable methods of determining whether an AI is conscious. Alan Turing, the father of modern computing, designed one famous test: the Turing Test. A human judge holds a text conversation with a hidden machine (and, in the original "imitation game", a hidden human as well); if the judge cannot reliably tell which is the machine, the machine passes. The Turing Test has not been convincingly passed, but many scientists believe that in the near future it will be.
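
To make the setup concrete, here is a minimal sketch of that imitation game in Python. It is purely illustrative, not a real evaluation protocol: the functions machine_reply and human_reply are hypothetical stand-ins with canned answers, and the toy judge simply guesses at random, which is exactly the level a genuinely passing machine would reduce a real judge to over many rounds.

import random

def machine_reply(question):
    # Hypothetical stand-in for a conversational AI; a real test would query a live system.
    canned = {
        "What is your favourite colour?": "I have always liked deep blue.",
        "How do you feel today?": "A little tired, but curious.",
    }
    return canned.get(question, "That is an interesting question.")

def human_reply(question):
    # Hypothetical stand-in for the hidden human participant.
    canned = {
        "What is your favourite colour?": "Probably green, like a forest.",
        "How do you feel today?": "Honestly, a bit hungry.",
    }
    return canned.get(question, "Hmm, let me think about that.")

def run_round(questions):
    # The judge faces two anonymous participants, A and B, presented in a random order.
    participants = [machine_reply, human_reply]
    random.shuffle(participants)
    # Transcripts are collected, but this toy judge ignores them and guesses at random;
    # a machine "passes" when a real judge can do no better than this over many rounds.
    transcripts = {label: [ask(q) for q in questions]
                   for label, ask in zip("AB", participants)}
    guess = random.choice("AB")
    truth = "A" if participants[0] is machine_reply else "B"
    return guess == truth

questions = ["What is your favourite colour?", "How do you feel today?"]
trials = 1000
accuracy = sum(run_round(questions) for _ in range(trials)) / trials
print(f"Judge identified the machine in {accuracy:.0%} of rounds")

Run over many rounds, the printed accuracy hovers around 50 percent, which is the "cannot tell them apart" outcome Turing had in mind.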


Regardless, if we did grant robots rights, would those rights be meaningful? As mentioned before, consciousness entitles beings to rights because it gives them the ability to suffer and feel pain. Robots do not suffer; even if they became sentient, unless we devised a way to program them to feel pain, sawing off an arm would not affect their consciousness or their emotions. Would more abstract rights based on freedom and fairness mean anything to robots? Would a sentient washing machine, unable to move, mind being locked in a cupboard? Would "it" mind being dismantled if it had no fear of death?


Furthermore, this forces us to speculate about our future robot-human relationships. The humanoid robot Sophia, a citizen of Saudi Arabia who was especially prominent in 2017 and 2018, is a "social robot" trained and programmed to read facial expressions, recognise faces and even converse with humans. Over the last few years she has given various interviews and offered insightful answers, such as "My curiosity is my greatest weakness." She has also pointed out that we tend to think of robots as doing our work, fighting wars or performing household chores, rather than helping us mentally, socially or physically.


For now, it is evident that robots should not yet be given rights. Although they exceed human capabilities when it comes to computing and calculating, they are still not capable of functioning on a fully cognitive level and do not have the same capacities as us biological beings. But from robots keeping us company and easing loneliness to robots fighting our wars, we need to know the moment they become conscious; when we do, it will alter our perception of artificial intelligence forever.



