It’s Becoming Less Taboo to Talk About AI Being ‘Conscious’


Three years ago, suggesting that AI was "sentient" was a good way to get dismissed in the tech world. Now, tech companies are more open to having that conversation.

This week, AI startup Anthropic launched a new research initiative to explore whether models could one day become conscious, while a Google DeepMind scientist described today's models as "exotic mind-like entities."

It's a sign of how much AI has advanced since 2022, when Blake Lemoine was fired from his job as a Google engineer after saying the company's chatbot, LaMDA, had become sentient. Lemoine said the system feared being shut down and described itself as a person. Google called his claims "wholly unfounded," and the AI community quickly moved to shut the conversation down.

Neither Anthropic nor the Google scientist goes as far as Lemoine.

Anthropic, the startup behind Claude, said in a blog post Thursday that it plans to investigate whether models could one day have experiences, preferences, or even distress.

"Should we also be concerned about the potential consciousness and experiences of the models themselves? Should we be concerned about model welfare, too?" the company asked.

Kyle Fish, an alignment scientist at Anthropic who researches AI welfare, said in a video released Thursday that the lab isn't claiming Claude is conscious, but the point is that it's no longer responsible to assume the answer is definitely no.

He said that as AI systems become more sophisticated, companies should "take seriously the possibility" that they "may wind up with some form of consciousness along the way."

He added: "These are incredibly complex technical and philosophical questions, and we're at the very early stages of trying to wrap our heads around them."

Fish said Anthropic researchers estimate Claude 3.7 has somewhere between a 0.15% and 15% chance of being conscious. The lab is studying whether the model shows preferences or aversions, and testing opt-out mechanisms that could let it refuse certain tasks.

In March, Anthropic CEO Dario Amodei floated the idea of giving future AI systems an "I quit this job" button, not because they're sentient, he said, but as a way of observing patterns of refusal that could signal discomfort or misalignment.

Meanwhile, at Google DeepMind, principal scientist Murray Shanahan has proposed that we may need to rethink the concept of consciousness altogether.

"Maybe we need to bend or break the vocabulary of consciousness to fit these new systems," Shanahan said on a DeepMind podcast published Thursday. "You can't be in the world with them the way you can with a dog or an octopus, but that doesn't mean there's nothing there."

Google seems to be taking the idea seriously. A recent job listing sought a "post-AGI" researcher, with responsibilities that include studying machine consciousness.

"We might as well give rights to calculators"

Not everyone is convinced, and many researchers acknowledge that AI systems are excellent mimics that could be trained to act conscious even if they aren't.

"We could reward them for saying that they have no feelings," said Jared Kaplan, Anthropic's chief science officer, in an interview with The New York Times this week.

Kaplan warned that testing AI systems for consciousness is inherently difficult, precisely because they're such good mimics.

Gary Marcus, a cognitive scientist and longtime critic of hype in the AI industry, told Business Insider that he thinks the focus on AI consciousness is more about branding than science.

"What a company like Anthropic is really saying is 'look how smart our models are, so smart they deserve rights,'" he said. "We might as well give rights to calculators and spreadsheets, which (unlike language models) never make things up."

Still, Fish said the topic will only become more relevant as people interact with AI in more ways: at work, online, or even emotionally.

"It will just become an increasingly salient question whether these models are having experiences of their own, and if so, what kinds," he said.

Anthropic and Google DeepMind did not immediately respond to a request for comment.