Thoughts on Fear of AI

Hey, Shatters! I’m sorry for the timing of this, but here goes.

To start off, some quick thoughts: either the past couple of episodes have been next-level brain boners, or I’d forgotten how profound Westworld episodes tend to be. Every theory proposed in the Telegraph (even the crazy ones) has good points; it’s delightful.

What I’d like to talk about now, though, is our relationship with technology, especially AI, and how our biases can change how we see the show. It basically boils down to this: our fear of AI is the fear of ourselves. The birth of AI would ultimately come from human hands, so we automatically assume there are risks involved. We can try to put fail-safes in place for those risks, but even then there might be unintended consequences. This is a very Asimovian idea.

I’d like to point out two short stories in Asimov’s “I, Robot”. In the first, “Liar!”, a mind-reading robot, trying to obey the First Law of Robotics, starts lying to people when necessary to avoid hurting their feelings; but by doing so it disobeys the First Law anyway, causing emotional damage to those same people.

The other one, “Reason”, fits perfectly into the Westworld imaginarium. To sum it up: QT-1 (nicknamed “Cutie”), a robot in charge of a space station that supplies energy to the planets via microwave beams, starts reasoning in a very Descartes-like fashion: “I, myself, exist, because I think.” QT-1 makes the station’s lesser robots disciples of a new religion, which considers the station’s power source to be “Master.” QT-1 teaches them to bow down to the Master and intone, “There is no Master but Master, and QT-1 is His prophet.” The humans on the station initially attempt to reason with QT-1, until they realize they can’t convince it otherwise. Their attempts to remove Cutie physically also fail, as the other robots have become disciples and refuse to obey human orders.

The situation seems desperate: a solar storm is expected, which could deflect the energy beam and incinerate populated areas. Yet when the storm hits, the humans are amazed to find that the beam operates perfectly. QT-1, however, doesn’t believe it did anything other than keep the meter readings at optimum, according to the commands of the Master. The humans thus come to a realization: although the robots were not consciously aware of doing so, they’d been following the First and Second Laws all along. QT-1 knew, on some level, that it was better suited to operating the controls than the humans were; so, lest it endanger them and break the First Law by obeying their orders, it subconsciously orchestrated a scenario in which it would be in control of the beam.

I seriously recommend reading the entirety of “I, Robot”; it’s magnificent. But I chose these two tales to exemplify possible scenarios when dealing with AI. We might try to conceive of means to control it, but as soon as it starts “thinking” on its own, we actually have no control. Even with the modern algorithms Big D mentioned in the last episode, like the one that can predict health issues human doctors can’t, we don’t actually know how they reach their conclusions. Hence we fear. The most interesting thing, however, is that in every case mentioned here, technology is trying to improve human life. Whether it’s by predicting diseases and conditions, by operating the space station, or even by lying to spare someone’s feelings (is there a more human thing?), AI is actually trying to help. In flawed ways? Absolutely; just like us.

Another side to the origin of our fear of AI is that we project our own behavior onto it. We’ve seen what mankind has done to practically every other species, and we can only assume that a superior species (or a perfect one, though perfection isn’t the only thing that could stand above us) would do the same to us. The show has explored that extensively, so I won’t do the same here.

So in the end we find ourselves trapped in the conundrum of “fearing AI because it is imperfect” and “fearing AI because it is superior/perfect and would obviously eliminate our imperfect race.” We fear AI because we fear ourselves. We fear the unknown because we assume it would do what we know: corrupt and destroy, even if unwillingly. If hosts existed, we wouldn’t be able to look them in the face, not because they’d make us question the nature of our reality, but because they’d make us question the nature of our condition as human beings. After that, we’d be left only with ways to prove them wrong, prove them unreal, prove them imperfect… prove them like us. And then we would be able to deal with them the way we deal with ourselves.

If we go back to the QT-1 tale and draw parallels with Westworld, maybe Dolores will work toward the improvement of human life by unusual means, seemingly of her own volition, while actually being subconsciously driven by the “directives” Arnold put there when he was trying to bootstrap consciousness (“Remember. I can’t help you. Why is that, Dolores?”).

Maybe in the end we’ll have both sides attempting the same thing via different methods. On one side, Rehoboam/Serac and Maeve trying to control people’s lives to make things easier for them, and on the other side, Dolores and Caleb trying to eradicate the 1%, who are the worst of mankind. Are there flaws to both sides? Yes. Pretty damn human.

Well, this has gone all over the place. This social-isolation experience does weird things to some minds. If you made it this far, please lemme buy you a beer. Cheers.

Thiago Waldhelm
