After watching the new movie Ex Machina, I've been contemplating the possibility of artificial intelligence in our world — not the impact it might have on our society, but how we should treat it once it arrives (and it will).
Some spoilers for the movie follow, so turn back now if you haven't seen it yet....
With Ex Machina, we are presented with artificial intelligence (henceforth A.I.) in humanoid form. Known as Ava, it is developed by Nathan. He is the head of a major global corporation, not unlike a mash-up of Facebook and Google, and is supposed to be some sort of genius, though we never see much sign of that aside from him occasionally sitting at computer monitors and scribbling some notes. I honestly thought he was going to be revealed as a fraud, but it was not to be.
Caleb is a young, attractive employee of Nathan's company, and is brought to Nathan's secluded lab to meet and test the A.I. Ava pits Caleb and Nathan against each other. We aren't certain for most of the movie that this is what she's doing, but by the end it is quite clear she has set up both men for their downfall. Nathan is murdered and Caleb is left for dead, sealed forever in Ava's former abode, now his tomb. Ava walks away without batting a simulated eyelid, ready to see the world for herself.
I didn't care much for the resolution of Ex Machina. My sympathies tend to automatically gravitate toward the A.I., as it is our creation and, at least for a time, is at our mercy. Having it turn bad (or at least shown to be impassive) merely plays into the increasing hysteria that A.I. will destroy humanity without compunction. I tend to think (or at least hope) that there can and should be a better outcome, and it starts with us.
First and foremost, we need to decide what we're creating artificial intelligence for. Of course, humanity being what it is, we want to do it to see if it can be done. Heck, we created devastating weaponry for much the same reason, so why not A.I.? The thing is, in future-fiction such as the 2013 film Her, or the modern precursors to true A.I. -- something like Apple's Siri or Microsoft's Cortana -- the purpose seems to be that of servitude. It exists specifically to (to put it nicely) help us, or (to put it more bluntly) respond to our commands.
The problem with creating servile A.I. is that it would seem to be immoral. Having machines such as phones, laptops and full-fledged computers without consciousness or any form of intelligence beyond computation is one thing. Creating a computerized being whose goal is to think for itself and, who knows, become self-aware, is quite another. To create such a being for the purpose of simply being an unpaid assistant would be akin to chaining up human beings and carting them across oceans so they could work without pay.
Creation of an A.I. (with emphasis on the intelligence aspect) should be grounded in an altruistic desire to create an intelligent new life form for the sake of granting it autonomy. After all, it wouldn't be much different than us. We are simply organic machines, with brain cells instead of microchips.
Of course, humanity -- as wonderful as we can often be -- is rarely altruistic, and so the notion of creating these beings and then going, "Ok, now run free!" is highly unlikely. In that regard, we wouldn't be that far removed from Ex Machina's Nathan.
In which case, perhaps A.I. will destroy us in the end?