According to an article in the Washington Post, a Google engineer said that after hundreds of interactions with a yet-to-be-released advanced artificial intelligence system called LaMDA, he believes the software has achieved a level of awareness.
In interviews and public statements, many in the AI community dismissed the engineer's claims, while some noted that his story highlights how technology can lead people to assign human traits to it.
But it can be argued that the belief that Google's artificial intelligence might be conscious says as much about our hopes and fears for this technology as it does about what the technology can actually do.
LaMDA, which stands for "Language Model for Dialogue Applications," is one of several large-scale AI systems that have been trained on large swaths of text from the Internet and can respond to written prompts.
Its core task is to find patterns in that text and predict which word or words should come next. Such systems are becoming increasingly good at answering questions and writing in ways that can seem convincingly human.
Engineer Blake Lemoine reportedly told The Washington Post that he had shared evidence with Google that LaMDA was sentient, but the company disagreed.
Google said its team, which includes ethicists and technologists, "has reviewed Blake's concerns in accordance with our AI principles and informed him that the evidence does not support his claims."
Lemoine posted on Medium on June 6 that Google had placed him on paid administrative leave "in connection with an investigation into ethical concerns about artificial intelligence I had been raising within the company" and that he might be fired "soon."
A Google spokesperson confirmed that Lemoine remains on administrative leave. According to The Washington Post, he was placed on leave for violating the company's confidentiality policy.
Lemoine was not available for comment on Monday.
The continued emergence of powerful computing programs trained on big data has raised concerns about the ethics that govern the development and use of this technology. Developments are sometimes viewed in terms of what might come, rather than what is currently possible.
Those in the AI community responded to Lemoine's experience on social media over the weekend, and generally came to the same conclusion: Google's AI is nowhere near self-aware.
Gary Marcus, founder and CEO of Geometric Intelligence, which was sold to Uber, and author of books including "Rebooting AI: Building Artificial Intelligence We Can Trust," described the idea of a sentient LaMDA as "nonsense on stilts."
In an interview Monday with CNN Business, Marcus said the best way to think of systems like LaMDA is as a "glorified version" of an autocomplete program you might use to predict the next word in a text message.
If you write "I'm really hungry so I want to go to," it might suggest "restaurant" as your next word. But that is a prediction made using statistics, not understanding.
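The statistical prediction Marcus describes can be sketched with a toy bigram model: count which word most often follows each word in a corpus, then suggest the most frequent follower. (The corpus and function names below are illustrative, not anything Google or Marcus actually uses; real systems like LaMDA use neural networks over vastly more data, but the principle of picking a statistically likely next word is the same.)

```python
from collections import Counter, defaultdict

# A tiny, made-up corpus standing in for the web-scale text such models train on.
corpus = [
    "i am really hungry so i want to go to a restaurant",
    "i want to go to a restaurant tonight",
    "we want to go to the park",
]

# Count how often each word follows each preceding word (a bigram model).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("to"))
```

The prediction is purely a frequency lookup: the program has no idea what a restaurant or a park is, which is the point of the "glorified autocomplete" analogy.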
Timnit Gebru, founder and CEO of the Distributed Artificial Intelligence Research Institute, or DAIR, said in an interview that Lemoine is a victim of companies claiming that conscious AI, or artificial general intelligence (a term for AI that can perform human-like tasks and interact with us in meaningful ways), is not far off.
"What happens is that there is such a race to use more data, more computing, to say you created this thing that everyone knows, answers all your questions or whatever... So how are you surprised when that person takes it to the extreme? the border?"