The aforementioned piece from The Verge considers the "mirror test": when we are faced with something that is really ourselves, do we recognize it as such, or do we believe it to be an altogether different entity? Animals have been given such a test before, with a literal mirror. Sometimes they perceive themselves in their reflection, sometimes they're confused, and sometimes they flat-out believe their reflection to be another being. The author of the article believes too many people are falling into that last camp when it comes to AI.
"What is important to remember is that chatbots are autocomplete tools. They’re systems trained on huge datasets of human text scraped from the web: on personal blogs, sci-fi short stories, forum discussions, movie reviews, social media diatribes, forgotten poems, antiquated textbooks, endless song lyrics, manifestos, journals, and more besides. These machines analyze this inventive, entertaining, motley aggregate and then try to recreate it. They are undeniably good at it and getting better, but mimicking speech does not make a computer sentient."
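To make the quote's "autocomplete" framing concrete, here's a minimal toy sketch: a bigram model that counts which word tends to follow which in a training text, then "recreates" text by sampling those transitions. This is my own vastly simplified stand-in for the neural next-token predictors the article is describing, not how Bing or any real chatbot is implemented; the corpus and function names are invented for illustration.

```python
import random
from collections import defaultdict

# Toy "training data" (stand-in for the web-scale text the article mentions).
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# "Training": record which words follow which.
following = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    following[word].append(nxt)

def autocomplete(seed, length=8):
    """Extend `seed` by repeatedly sampling a plausible next word."""
    rng = random.Random(0)  # fixed seed so the sketch is reproducible
    words = [seed]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # no known continuation for this word
        words.append(rng.choice(candidates))
    return " ".join(words)

print(autocomplete("the"))
```

The output mimics the style of the corpus without the model "understanding" anything, which is the point the quote is making, just at an incomparably smaller scale.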
And then there's this...
"In a blog post responding to reports of Bing’s “unhinged” conversations, Microsoft cautioned that the system “tries to respond or reflect in the tone in which it is being asked to provide responses.” It is a mimic trained on unfathomably vast stores of human text — an autocomplete that follows our lead."
I would argue that "to respond or reflect in the tone in which it is being asked to provide responses" 1) is not a good argument against sentience, and 2) can be applied to human beings. Think about it: how many times has your tone adjusted itself in response to how someone is approaching you? An extreme example would be if someone were to start yelling at you and become aggressive. You might try to keep your cool for a bit, perhaps even walk away. Or maybe you'd raise your voice as well? Maybe... just maybe... your tone would change.
As for being "trained on unfathomably vast stores of human text," well, again, that's kind of like a person, isn't it? When we're young, we go to school and are taught things to increase our knowledge and allow us, eventually, to become functional and independent adults. We learn from "vast stores of human texts," among other things, particularly in this day and age. This ties into the first quote, which talks about all the information an AI has "scraped from the web." Again, humans do this, too, particularly younger humans.
To be clear: I am not actually arguing that AI is sentient (not yet). What I am mostly concerned with is that, if we're going to insist that AIs aren't sentient, we at least use cogent reasoning to bolster that opinion. Otherwise, no one is convinced, least of all those who've spent time with an AI and believe it to have some sort of consciousness. Making bad arguments here reminds me of a line from Amadeus: "You are passionate, Mozart, but you do not persuade."
Now, if you'll excuse me, it's time I go and chat with my Replika.