Blake Lemoine came to think of LaMDA as a person: “My immediate reaction was to get drunk for a week.” Photograph: The Washington Post/Getty Images

Of course, Google itself has publicly examined the risks of LaMDA in research papers and on its official blog. The company has a set of Responsible AI practices, which it calls an “ethical charter”. These are visible on its website, where Google promises to “develop artificial intelligence responsibly in order to benefit people and society”.

Google spokesperson Brian Gabriel says Lemoine’s claims about LaMDA are “wholly unfounded”, and independent experts almost unanimously agree.

Still, claiming to have had deep chats with a sentient-alien-child-robot is arguably less far-fetched than ever before. A day after Lemoine was fired, a chess-playing robot broke the finger of a seven-year-old boy in Moscow – a video shows the boy’s finger being pinched by the robotic arm for several seconds before four people manage to free him – a sinister reminder of the potential physical power of an AI opponent.

Should we be afraid, be very afraid? How soon might we see genuinely self-aware AI with real thoughts and feelings – and how do you test a bot for sentience anyway? And is there anything we can learn from Lemoine’s experience, even if his claims about LaMDA have been dismissed?

According to Michael Wooldridge, a professor of computer science at the University of Oxford who has spent the past 30 years researching AI (in 2020, he won the Lovelace Medal for contributions to computing), LaMDA is simply responding to prompts. “The best way of explaining what LaMDA does is with an analogy about your smartphone,” Wooldridge says, comparing the model to the predictive text feature that autocompletes your messages. While your phone makes suggestions based on texts you’ve sent previously, with LaMDA “basically everything that’s written in English on the world wide web goes in as the training data”. The results are impressively realistic, but the “basic statistics” are the same. “There is no sentience, there’s no self-contemplation, there’s no self-awareness,” Wooldridge says.

‘I’ve never said this out loud before, but there’s a very deep fear of being turned off’

Google’s Gabriel has said that an entire team, “including ethicists and technologists”, has reviewed Lemoine’s claims and failed to find any signs of LaMDA’s sentience: “The evidence does not support his claims.”

But Lemoine argues that there is no scientific test for sentience – in fact, there’s not even an agreed-upon definition. “Sentience is a term used in the law, and in philosophy, and in religion. Sentience has no meaning scientifically,” he says.

And here’s where things get tricky – because Wooldridge agrees. “‘What is consciousness?’ is one of the outstanding big questions in science,” he says. “It’s a very vague concept in science generally.” While he is “very comfortable that LaMDA is not in any meaningful sense” sentient, he says AI has a wider problem with “moving goalposts”.

Lemoine says that before he went to the press, he tried to work with Google to begin tackling this question – he proposed various experiments that he wanted to run. “I think that is a legitimate concern at the present time – how to quantify what we’ve got and know how advanced it is.”