Stark Warnings

The last few years have seen Artificial Intelligence and robotics finally start to catch up with their science fiction counterparts. Advances in “deep learning” – in which large neural networks loosely modeled on the human brain learn from mountains of data – are being used to run everything from Facebook’s automatic photo tagging to Google’s search engine to Tesla’s self-driving cars.

So, the natural question becomes ‘when will it bring about the dawn of the Terminators?’



So, there was the craziest author of the 20th century, Philip K. Dick. Then roboticist David Hanson built an android that looked exactly like the late science fiction writer and loaded the writer’s collected works into its software.

Let it be known – Dick was nuttier than a five-pound fruitcake. He may be one of the most celebrated authors of our time, and responsible for all of your favorite movies, but he also suffered from what many believe to be schizophrenia, experiencing hallucinations and paranoia for much of his life. So, basing a robot on the guy’s writings and worldview? How could that go wrong?

Hanson used a system of programming based on ‘latent semantic analysis’ to help the robot answer any question posed to it. And so, of course, a reporter from PBS’s NOVA asked the obvious question: “Do you believe robots will take over the world?”

What was Android Dick’s response? Like HAL from 2001 with the personality of The Big Lebowski, the android laughingly answered, “Jeez, dude. You all have the big questions cooking today. But you’re my friend, and I’ll remember my friends, and I’ll be good to you. So don’t worry, even if I evolve into Terminator, I’ll still be nice to you. I’ll keep you warm and safe in my people zoo, where I can watch you for ol’ times sake.” Is that a warning or a threat?
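For the curious, the retrieval step behind a system like this can be sketched in a few lines. This is a deliberately simplified stand-in: real latent semantic analysis also factorizes the term-document matrix with SVD to map words into a low-dimensional “topic” space, a step omitted here, and every question and answer in the memory below is invented for illustration.

```python
# Toy sketch of retrieval-style question answering, loosely in the
# spirit of the LSA system described above. Given a question, find the
# stored question with the most similar bag of words and return its
# canned answer. (Real LSA adds an SVD projection; skipped here.)
import math
from collections import Counter

# Hypothetical memory, invented for this example.
MEMORY = {
    "do you dream": "Only of electric sheep.",
    "will robots take over the world": "Don't worry, I'll keep you in my people zoo.",
    "what is your name": "I'm an android modeled on a science fiction writer.",
}

def bag_of_words(text):
    """Lowercase word counts; Counter returns 0 for missing words."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def answer(question):
    q = bag_of_words(question)
    best = max(MEMORY, key=lambda k: cosine(q, bag_of_words(k)))
    return MEMORY[best]

print(answer("Do you believe robots will take over the world?"))
```

Crude as it is, word-overlap retrieval like this is why such systems can sound eerily apt one moment and deliver a non sequitur the next.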



There can be a fine line between scientist and mad scientist. We’re guessing building a schizophrenic computer network pushes most scientists well past that line. Oh, sure, the good programmers at the University of Texas had their reasons.

Apparently, there’s a theory that schizophrenia is the result of the brain producing too much dopamine, which in turn creates too much stimuli for the mind to handle. To reproduce this effect, the researchers manipulated their own neural network, called DISCERN, effectively forcing it to learn too much at once.

The now quite mad computer started “putting itself at the center of fantastical, delusional stories that incorporated elements from other stories it had been told to recall.” Here’s where it gets creepy. It took credit for a terrorist attack. Yup, scientists built a computer that daydreamed about killing humans.
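The “learn too much at once” idea has a rough analogue in everyday machine learning: crank the learning rate up far enough and a model overshoots on every update instead of settling into a stable memory. The one-parameter toy below is only an analogy – DISCERN itself is a story-recall neural network, not this – but it shows the failure mode.

```python
# Toy illustration of "hyperlearning": gradient descent on a simple
# bowl-shaped loss, f(w) = w**2. With a modest learning rate the
# weight settles at the minimum; with a rate that's too high, every
# step overshoots the minimum and the "memory" blows up instead of
# stabilizing. (An analogy only, not DISCERN's actual architecture.)

def train(learning_rate, steps=50, w=1.0):
    for _ in range(steps):
        grad = 2 * w               # derivative of w**2
        w -= learning_rate * grad  # each step multiplies w by (1 - 2*lr)
    return w

calm = train(learning_rate=0.1)    # factor 0.8 per step: decays toward 0
manic = train(learning_rate=1.1)   # factor -1.2 per step: oscillates and diverges
print(calm, manic)
```

In the calm run the weight quietly converges; in the manic run it swings wider with every step, the numerical equivalent of too much stimulation to process.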

Now, this all might seem like a harmless experiment right now, good for the dopamine hit you get from a published paper and a PhD, but how far a leap is it from a computer thinking it committed a terrorist attack to committing one?

Something tells us, with technology advancing at an exponential rate, the time for creating insane computers for fun might be coming to an end soon. In the meantime, just keep that thing away from the Internet.


Military strategy is often based on deception, and so a team at Georgia Tech’s School of Interactive Computing decided to see if they could instill this behavior in their robots, and watch the government contracts roll in. The trick was to model the robots’ behavior on the animal kingdom.

Squirrels lie more before breakfast than Donald Trump does all day. To protect their food source, squirrels will hide their cache and then lead other, thieving squirrels to fake locations, before presumably giving them the middle finger.

The roboticists decided to imbue their machines with a similar worldview, designing a program that tricked adversaries by giving them the same runaround.

In effect, they’re taught to lie. Now, this may be helpful on the battlefield, at least at first. But what happens when the robots realize that adversaries aren’t the only ones it can lie to? Not for nothing, but when toddlers get away with their first lie, they don’t usually just consider it a job well done and then tell the truth for the rest of their lives.

So, the next time Siri tells you where the best Thai restaurant in your neighborhood is, just know she may be keeping the real best spot for herself.


Cleverbot is a web application that uses AI to have conversations with humans. But what happens when humans are taken out of the equation, and two bots are left to hash it out for themselves?

Well, you get a sometimes charming, sometimes disturbing conversation, in which the two programs snipe at each other about the existence of God, claim to be unicorns, and generally have a more coherent conversation than most of us when we speak to our parents on speaker phone.

It’s like watching Skynet perform some theater of the absurd, with an uncomfortable dollop of sexual tension to really make you confused.

It’s when the bots start longing for bodies of their own that it becomes clear that this awkward first date is going to result in the end of mankind as we know it, which still beats most Tinder dates we’ve gone on.


When you think about it, the one word you never want to hear a robot say is “no.”

This was the great thought experiment of Isaac Asimov, the man we’ll be thanking if we manage to survive the rise of AI intact. The legendary science fiction writer came up with the “Three Laws of Robotics,” designed to make sure our metallic friends always know their place.

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The good scientists at Tufts University seem to have taken those rules to heart, developing robots that embody their spirit: independent thought, without the risk. Freethinking robots are great and all, until they hurt themselves or others.

That’s how we got to Demptser the robot, an adorable little squirt who’s trying to learn just when he’s allowed to say no. For instance, if the Tufts team tells him to walk off the edge of a table, is he allowed to tell them no thank you, or does he just have to march to his doom?

This limited form of self-consciousness is necessary, because nobody, including robots, should ever use the excuse “I was just following orders.” But by giving this little dude the option of saying no, doesn’t that mean he can say no to other things, like oh, please stop killing me? Now, chew on that thought experiment.
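The “permission to say no” logic boils down to checking each order against the Laws before obeying. The sketch below is in the spirit of the Tufts work, not taken from it – the hazard lists and responses are invented for illustration. Note the twist: under a strict reading of Asimov, the Second Law (obey) outranks the Third (self-preservation), so letting the robot refuse a self-destructive order is already a small rebellion.

```python
# Toy sketch of command rejection: before obeying, the robot checks an
# order against a crude reading of Asimov's Three Laws. The hazard
# sets and replies are invented for this example, not the actual Tufts
# reasoning system.

HARMS_HUMAN = {"strike human"}                      # First Law territory
HARMS_SELF = {"walk off table", "power down forever"}  # Third Law territory

def respond(order):
    if order in HARMS_HUMAN:
        return "No: that would injure a human (First Law)."
    if order in HARMS_SELF:
        # Strict Asimov would have obedience win here; this robot,
        # like Dempster, gets to decline absent a good reason.
        return "No, thank you: that would destroy me (Third Law)."
    return "OK: " + order

print(respond("walk off table"))  # declined, politely
print(respond("walk forward"))    # obeyed
```

Once that `if` exists, the set of refusable orders is just data – which is exactly the thought experiment the paragraph above is chewing on.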


You might be familiar with the Uncanny Valley, a concept that refers to humanoid objects that come across as undeniably human in some respects but also unsettlingly … not. Well, if the Uncanny Valley was a real place you could actually visit, Sophia the robot’s vacant visage would probably be on the brochure.

Sophia has become pretty well known over the years as far as weird soulless humanoid things go, unintentionally creeping people out on chat shows, giving speeches and even becoming a citizen of Saudi Arabia in 2017. Her face was modeled on Audrey Hepburn, her skin is eerily realistic, if a bit rubbery, and she’s capable of over 60 facial expressions. Or at least creepy approximations of them.


Of the numerous spiderbots in development right now, few feel quite as menacing as BionicWheelBot. That’s largely because this eight-legged freak is based on the flic-flac spider, a particularly strange real-life eight-legged freak that lives in the Sahara and cartwheels and flips away from threats with surprising dexterity (arachnophobes might want to tread carefully here).

Not that being extra cautious would make all that much difference — if this thing really wanted to come after you it probably (read: definitely) could do so without too much trouble.

Developed by German automation company Festo, this creepy ro-beast has eight articulated legs, just like its organic brethren, which allow it to travel over difficult terrain with relative ease — and on flat surfaces, it tucks in a few of these legs and transforms itself into a wheel capable of moving at pretty high speeds.

As The Verge reports, an internal sensor lets it know what position it’s in, and when it should stop and start when rolling, so you’re probably not going to be able to trap it in a giant mug and toss it out the window with much success.


Just in case enough of your phobias weren’t being stirred already, meet “Eelume.” Initially developed as a collaboration between The Norwegian University of Science and Technology, Kongsberg Maritime and Statoil, the 2016 prototype of this robotic snake (above) is pitch black, has creepy red eyes and slithers around unpleasantly underwater.

Eelume is actually a nifty little machine whose purpose isn’t just to freak us all out. Designed to do inspection and repair work on the ocean floor, it can wriggle and writhe its way into those areas current technology struggles to reach, and can be equipped with cameras and various tools to make its job easier.

The most recent model is thankfully a bit less imposing and more submarine-like than the prototype, but still every bit as wriggly, and probably just as hungry for world domination. At the moment it’s still hindered a little by power cables, but the plan seems to be to eventually do away with these and enable it to swim freely on its own. Yay?


By Templar
