A.I.: friend or future foe?

As the global race to build sentient A.I. continues at an ever-increasing pace, we ask: could you ever completely trust a robot with your life?

Image: Comfreak

The Terminator

Since the birth of the science fiction genre, we have been seemingly obsessed with the notion that we humans can create sentient life from a collection of parts. Whether we as a species have a God complex, keen to prove ourselves more capable than nature itself, I don’t know, but as far back as Samuel Butler’s 1872 book Erewhon we were imagining machines gaining human-like intelligence.

As the fast-paced age of robotics and computer programming brings us rapidly closer to achieving ever greater levels of A.I., should we heed the dystopian warnings of our sci-fi writers? Whether it’s Doctor Who, Star Trek, Avengers or anything in between, there are lessons that humanity must take from sci-fi if we ever hope to use A.I. responsibly.

A.I. (2001)


Cinema and our viewpoint on A.I.

From an early age, the silver screen encourages us to be mistrustful, even frightened, of artificial intelligence. Kids’ movies are often full of the classic good-vs-evil battle, and robots frequently ‘go bad.’ Within the genre, stories of evil A.I. going rogue far outnumber the tales of happily ever after, and we should ask ourselves: is that just because it makes for more exciting viewing, or are there lessons to be learned from the movies?

The original Blade Runner (1982) opens with a scene about empathy being something only a human can experience; during the interview, the android Leon shoots his interrogator when asked questions about his mother. It is just one early example of the potential negative outcomes of taking technology too far. What really makes something ‘alive’? We learn in school that it all comes down to respiration, but is it just that? What makes us human is our ability to feel, to empathise, to love. Could you ever programme a robot with those things, and if you did, would it ever be real? Data in Star Trek: The Next Generation wants nothing more than to experience emotions, to have real feelings, but we soon learn that this is fraught with pitfalls. The human brain is enormously complex, and the processes involved in dealing with different emotions are vast. Even if an android could feel, I cannot believe it would be real in the same way it is to us; it would rather be an imitation of reality.

Cinema makes us love and hate A.I. in equal measure. The likes of The Terminator (1984) made us feel both fear and affection for thinking robots, whilst groundbreaking movies such as A.I. (2001) tug on our heartstrings for a Pinocchio-like robot child whose ability to feel love is not returned by the maternal focus of his affection. Should we even be attempting to create robots that can think and feel, possibly even love, or is that a slippery slope? If something learns to love, can it also learn other emotions such as hate, anger and a desire to inflict pain upon those who created it? Can it ever learn to process those emotions as we do growing up as children?

Are we right to consider the warnings of the sci-fi writers? How far is too far? There has surely got to be a line that we don’t cross. A research programme was reportedly shut down recently after the team realised that their computers were talking to each other in a language only the computers could understand. Unnerved, the researchers asked the computers to explain the conversation; the answer was no, and the frightened team pulled the plug. Sounds like the plot of a movie? Nope, that happened, so we are not that far away. I for one do not want a Skynet world where machines have more power than humans and we have to fight our own creation, because I believe that however flawed we are, our humanity is also our biggest strength. No matter how smart they get, without that spark that makes us human, robots can never be trusted, and they must never be given absolute power over life and death.

No matter what happens with the future of A.I., Hollywood teaches us one very important thing… the humans always win in the end. Right? Or perhaps the tide is turning?

Blade Runner


Article: Emma Murfin for Movie-Reliquary