People love to say that there are two kinds of people in this world. There are actually three: people who think A.I. is a good idea, people who think A.I. is a bad idea, and the robots that will kill us all if the first group gets its way. You can probably tell that I’m in the second group, and I know I’m a little behind on the paranoia bandwagon. I mean, filmmakers have been going down this “what if” rabbit hole for decades, all the way back to 2001: A Space Odyssey and that godless sonofabitch HAL, and we’ve all seen Terminator and Blade Runner. A friend alerted me to an ’80s flick called Chopping Mall, which I guess is like Mallrats but with murderous robots. You may have heard of a recent short film, Sunspring, which, according to Wikipedia, was “entirely written by an artificial intelligence bot using neural networks.” Good lord, my blood pressure went up just copying and pasting that quote.
As I said, I’m a bit late to the panic party, but suddenly it’s a topic I can’t escape. Within the past two weeks, these are just a few of the headlines that I’ve come across: “The Merger of Humans and Machines Has Already Begun” (Newsweek), “3D-Printed Skin Could Help Create Bionic Superhumans” (Newsweek), and “Elon Musk’s Billion-Dollar Crusade to Stop the A.I. Apocalypse” (Vanity Fair).
The A.I. apocalypse is indeed underway, my friends. According to the Vanity Fair piece, we’re already cyborgs because we’ve become integrated with our phones in some terrifying way that I can’t explain because I burned the magazine halfway through the article. Yes, I still read actual, physical magazines. Keep in mind that these are mainstream publications. If Vanity Fair pivoted away from a story about an embattled British art collector’s family long enough to cover the A.I. apocalypse, we’re in trouble. I’d probably have to put on a diaper to read what Wired or Popular Mechanics are printing about this.
My takeaway from these articles is that companies like Google and Facebook are in a psychopathic race to make humans obsolete, simply because they can. Nobody quite agrees on what A.I. is, but everyone in the business of developing it seems to agree that more is better, even if someday our sex dolls are in charge of feeding us and letting us out to use the bathroom.
Twenty years ago, when I was in high school and actually seeking out information on A.I., there was nothing to be found. I remember the circumstances clearly because I was doing research for a school assignment that I ended up failing (well, I got a ‘C’, but I was the kind of student who considered anything less than an ‘A’ a failure. Except in Physics, where a ‘D’ was cause for celebration once I figured out that the answer was always ‘never’ or ‘zero’).
My unorthodox, came-of-age-in-the-sixties, liberal history teacher assigned me the topic of Artificial Intelligence for a research project. I didn’t have an understanding of political liberalism back then, but there was a consensus among my friends that he partook of the ganja (alas, we were never able to prove this by smoking a fatty with him), plus he played Pink Floyd in class and had a Nixon mask descend from the ceiling to illustrate doublespeak, so yeah, he probably leaned to the left. Teachers who encourage students to question the status quo will always appeal to teenagers’ rebellious natures. This one had something of a cult following at my school.
Naturally I didn’t want him to think I was stupid, so I didn’t ask for help even though I had no clue what A.I. was. This was around 1995, which meant that we relied on the library’s trusty card catalog to find information. My search came up empty, so I did the only thing I could do, which was write a paper about pills that make people smarter, like Adderall if Adderall had been developed by then. I could have really used the Googles at that time. But because the premise of this essay is that TECHNOLOGY IS EVIL, forget I said that.
Fine, I don’t actually think technology is evil. I’m not a Luddite, although it’s true that I’m a late adopter. When people first started texting, I asked a friend, “How is this a thing? I don’t see myself ever doing it.” Despite comments like that, I don’t hate technology. I worry that it might make life too convenient, bringing out our worst and laziest tendencies. A critical distinction for me is that technology should be in service to the betterment of humanity and our planet, not the other way around. Steve Wozniak reportedly thinks that in his lifetime people will become robots’ domestic pets.
I AM NOT OK WITH THIS. I do not want Mark Zuckerberg to enhance Facebook so that we can interface with it on the street. I do not want to meld with my phone any more than I already have. I do not want my daughter to become bionic and live for as long as she wants to, because what kind of life will it be? There is already a disturbing disconnect between people and nature. I want future generations to experience a world where we still have a physical connection to our environment and to each other. I want my daughter to stare into another person’s eyes and know what it feels like to be vulnerable. I want her to look up at the sky and wonder. I want her to delve into the pages of a musty library book that didn’t come up in a search for Artificial Intelligence.