If you are a fan of Sci-Fi in its various forms, you have probably encountered the trope of Robots becoming sentient: Blade Runner, I, Robot, and now, WestWorld.
This concept usually appears in the Cyberpunk sub-genre, and it raises a lot of ethical questions for me. The big one being:
How do we treat those made in our image?
If/when we get around to developing a thing that even remotely resembles human consciousness, how do we deal with it? Do we treat it as an object to be used and abused? Do we take care of it like we would an animal, because we assume there is no soul? Or do we give it the same, or nearly the same, respect we would a human being? Obviously we are going to have differences, but in general, how should we act towards them?
The HBO show WestWorld raises the question of what makes us human, and what makes us perceive something as human. In episode 2, “Chestnut,” a human character asks another person, “Are you real?” She responds with “Well, if you can’t tell, does it even matter?”
So does it matter? If you can’t tell the difference between them and a human being, you would end up treating them the same.
Well, let’s think about when this has not been the case. When in history have we ever viewed something as being slightly less than human? I’m not saying that the hypothetical future plight of robots compares to the actual horror of the African slave trade. Actually, no, I am.
This was an issue that many religious, God-fearing people struggled with, because as far as they were concerned, slaves were not human. The obvious difference was simply skin color, but the subtle differences were in speech and language.
So if Robots were obviously not human because of a glaring difference, we would treat them differently because of it. The subtle difference might be an Artificial Sentience being unable to grasp all of the intricacies of human social interaction. At what point in history did someone look at the slave picking their cotton and suddenly realize that they might be human? At what point do we look at our household Robot and wonder if it might be sentient?
Obviously, African slaves were always human, but Robots start off as inhuman. So the next question is: where is the threshold? Where do we determine their humanity? At what point in sentience do they become human for all intents and purposes? Well, where do we draw that line now?
Let’s look at when humans develop sentience. Babies are not self-aware; they operate on reactions to incoming stimuli, but eventually begin copying the actions they see in outside stimuli, such as their parents’ movements. They are aware enough to know that there is a similarity between my waving hand and the very hand-like thing of their own that can also perform a waving action.
Before this, a fetus develops the hardware for sentience between the 24th and 28th week of gestation, its brain forming the pathways for thoughts without the thoughts themselves yet racing back and forth. So when it finally does develop this ability, it sleeps, and its sleep can dip into not just any sleep, but REM sleep. REM is usually when you encounter the most vivid dreams, so once the fetus gets there, does it dream? What does it have to dream about? I feel like Helen Keller might actually have a lot to say on something like this. (“When Does Consciousness Arise in Human Babies?” by Christof Koch)
But let’s skip past the one-year mark. Now, the child is able to look in a mirror and know that the figure in the mirror is itself, yet not itself. This is considered a common milestone of self-awareness. Eventually, it will develop its own wants and needs beyond simple reactions to stimuli, latching onto toys that have sentimental value to it.
Finally, language and symbols will have meaning to it. Language will be the child’s means of pinning things down and giving them names. It’s then that memories can finally be formed; mental boxes labeled. The number I keep seeing is 18–24 months. I personally remember things starting at the slightly later age of 2; most of the time I hear between 3 and 5, and I know some people whose memories don’t start until 7 or 8. So maybe the solidifying of thought takes a while after you are capable of it. (“Self-Awareness: Transition from Infant to Toddler” by Paul C. Hollinger, M.D.)
So when does sentience happen? Is it at self-awareness? Is it at the ability to dream? Because dogs dream, and most of us don’t treat them like humans. But dogs also can’t figure out mirrors. If that’s the case, then until an infant becomes self-aware, you could say that it isn’t fully human. That seems like quite the case for abortion if you think about it. Forget fingernails at 3 months; post-birth abortion is the new black.
Let’s step back and take this from another point of view. If you believe in the sanctity of life, and believe that we are made in God’s image, then anything in that image, even if it isn’t finished, or human, should at the very least be valued, if not protected. If that’s the case, such things as art should be valued as the grandchild of God. Dante Alighieri places people who destroy art in the seventh circle of hell for this exact reason. (Dante’s Inferno, Canto XI)
(On a slightly political note: Democrats typically vote to save the planet, but support abortion; Republicans protect the sanctity of life, but politically couldn’t care less about God’s creation. Strange.)
Obviously an infant is human. But it develops the brain, the thing we most commonly treat as the marker of humanity, throughout its entire life. So what if that development process takes place for robots not over a period of 70 years, but over the next couple of hundred? Do we treat them like objects? Things to be owned? When do we begin to treat them like children, mentoring them in their walk to bear our image?
P.S. In my head, one day, the robot-humanity argument will be the perfect storm of political topics. Let’s say we get to the point where robots have sentience. What happens when someone wants to marry a robot? What if that robot is the property of someone else? What if two robots fall in love and want to get married? This is why I could never stand Philosophy class: everyone wants to talk about “What is truth?” and I just want to know if robots could have souls. Silly me.
And look at those references! I did research!