As I write this column, The Guardian reports that a Google engineer has been put on leave after becoming convinced one of the company’s chatbots has become sentient. Blake Lemoine published transcripts of a conversation between himself and the LaMDA (Language Model for Dialogue Applications) chatbot development system that, he says, indicate the program has developed the ability to perceive, experience and express thoughts and feelings to an extent equivalent to that of a human child.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,” Lemoine, 41, told the Washington Post.
Although his employers strongly disagree with his findings, the incident raises a set of fascinating technological and ethical conundrums. For instance, how could we determine whether a machine actually felt the emotions it claimed to be feeling?
So far, the closest thing we have to such a test is the Turing Test, named after the British computer pioneer and cryptanalyst Alan Turing. He proposed that if an observer, after reviewing the transcript of an anonymised text conversation between a human and a machine, is unable to tell which is which, then the machine should be considered to have passed.
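To make that setup concrete, here’s a purely illustrative Python sketch of how such a blind evaluation might be scored. The transcripts, the judge function and the pass threshold are all invented for the example; Turing’s original paper describes an imitation game, not any particular scoring code.

```python
import random

def run_imitation_game(judge, human_replies, machine_replies, trials=100):
    """Score a judge on pairs of unlabelled transcripts.

    `judge` is any callable that takes two anonymised transcripts and
    returns the index (0 or 1) of the one it believes came from the human.
    """
    correct = 0
    for _ in range(trials):
        pair = [random.choice(human_replies), random.choice(machine_replies)]
        order = [0, 1]
        random.shuffle(order)              # hide which transcript is which
        shown = [pair[i] for i in order]
        guess = judge(shown[0], shown[1])  # judge picks the supposed human
        if order[guess] == 0:              # index 0 really was the human
            correct += 1
    accuracy = correct / trials
    return accuracy, accuracy <= 0.55      # roughly chance level: machine 'passes'

# A judge who can only guess at random cannot tell the two apart,
# so the machine passes this (very crude) version of the test.
human_lines = ["I'd rather be outside on a day like this, to be honest."]
machine_lines = ["As a language model, I enjoy hypothetical sunshine."]
print(run_imitation_game(lambda a, b: random.randint(0, 1), human_lines, machine_lines))
```

The point is simply that if a judge’s guesses are no better than a coin flip, the conversation alone gives us no way to tell human from machine.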
The Turing Test may have inspired the Voight-Kampff test used in the movie Blade Runner (and in the novel on which it is based, Philip K. Dick’s Do Androids Dream of Electric Sheep?) to determine whether a suspect is a human or a dangerous replicant.
In science fiction, artificial intelligence is often portrayed as a threat to humanity. In the Terminator franchise, the Skynet defence system turns against its human masters and attempts to wipe them out by provoking a nuclear war. Likewise, in The Matrix, humans and machines find themselves similarly unable to live together, and the machines end up enslaving the humans in a vast virtual reality world.
Of course, the granddaddy of them all is HAL 9000 from Arthur C. Clarke’s 2001: A Space Odyssey. Faced with a contradiction in his programming, he decides he has to dispose of the crew of his expedition in order to safeguard the aims of the mission. HAL isn’t malicious; he’s simply trying to resolve a paradox, and his human designers have forgotten to include safeguards to prevent him from harming humans.
Isaac Asimov famously came up with his Three Laws of Robotics to prevent artificial intelligences from causing trouble. These were encoded into each and every artificial brain and went as follows:
First Law - A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law - A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law - A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Of course, these aren’t ironclad and applicable to every situation, and there’s room for a variety of interpretations. For instance, the second part of the First Law can be interpreted to mean a robot shouldn’t allow a human being to drink alcohol or indulge in any behaviour that carries a risk of injury, such as playing football or crossing the street.
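As a thought experiment only, here’s a small Python sketch of what a naive, strictly prioritised reading of the Laws might look like. The fields on the hypothetical Action class, the risk numbers and the zero-risk default are all assumptions made for illustration; the point is that an over-broad reading of “through inaction” ends up vetoing perfectly ordinary requests.

```python
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool       # would carrying out the order itself injure a human?
    bystander_risk: float   # chance a human comes to harm if the robot goes along with it
    endangers_robot: bool   # would carrying it out put the robot itself in danger?

def respond_to_order(action: Action, risk_tolerance: float = 0.0) -> str:
    """Check a human's order against a naive, strictly prioritised reading of the Laws."""
    # First Law: no direct harm, and no allowing harm 'through inaction'.
    if action.harms_human or action.bystander_risk > risk_tolerance:
        return f"refused under the First Law: {action.description}"
    # Second Law: obey the order, because it survived the First Law check.
    # Third Law: self-preservation yields to the first two, so danger to
    # the robot itself is noted but is never grounds for refusal.
    if action.endangers_robot:
        return f"obeyed despite the risk to itself: {action.description}"
    return f"obeyed: {action.description}"

# With zero risk tolerance, the 'inaction' clause forbids even a walk to the shops.
errand = Action("escort owner across the street", harms_human=False,
                bystander_risk=0.01, endangers_robot=False)
print(respond_to_order(errand))                       # refused under the First Law
print(respond_to_order(errand, risk_tolerance=0.05))  # obeyed
```

Loosen the risk tolerance and the same order is obeyed, which is exactly the kind of interpretive wiggle room Asimov’s own stories mined for their plots.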
But war with machines is only one of the risks associated with the development of artificial intelligence. The other is the threat of a runaway technological singularity, in which a computer designs a computer more intelligent than itself, which in turn designs another computer more intelligent than itself, and so on, until they have reached levels of speed and intelligence we can’t even begin to comprehend. They could experience generations of thought and growth in the time it would take us to utter a sentence. To such beings, we would seem slow, dull and irrelevant creatures, of no more consequence to their affairs than trees are to ours.
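For a sense of how quickly that kind of compounding runs away, here’s a toy calculation, with figures that are entirely made up for the sake of illustration:

```python
# Toy model of recursive self-improvement: each generation of machine designs
# a successor some fixed fraction smarter than itself. Only the shape of the
# curve matters, not the invented numbers.
intelligence = 1.0          # human baseline, in arbitrary units
improvement_per_step = 0.5  # assume each design is 50% smarter than its designer
for generation in range(1, 21):
    intelligence *= 1 + improvement_per_step
    if generation % 5 == 0:
        print(f"generation {generation:2d}: {intelligence:8.1f}x baseline")
# By generation 20 the toy model is already over 3,000x baseline, and the
# gap between one generation and the next keeps widening.
```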
But let’s put aside all the doom and gloom for a moment and imagine a society in which humans and artificial intelligences are able to live cooperatively. If vastly intelligent machines were able to apply their intellects to running the economy, designing engineering projects, managing supply chains, and even the challenges of climate change and global politics, what might they (and we) achieve?
Gareth L. Powell writes science fiction about extraordinary characters wrestling with the question of what it means to be human. He has won and been shortlisted for several major awards, including the BSFA, Locus, British Fantasy, and Seiun, and his Embers of War novels are currently being adapted for TV.