Media coverage of Facebook AI malfunction irresponsible: Indian-origin researcher

San Francisco, Aug 2 (IANS) An Indian-origin researcher at Facebook AI Research (FAIR) has blamed the media for "irresponsible" coverage of Facebook shutting down one of its AI systems after its chatbots started communicating in their own language, calling such coverage "clickbaity".

Dhruv Batra, who works as a research scientist at FAIR, wrote on his Facebook page that while the idea of AI agents inventing their own language may sound alarming or unexpected to people outside the field, it is a well-established sub-field of AI, with publications dating back decades.

“Simply put, agents in environments attempting to solve a task will often find unintuitive ways to maximise reward. Analysing the reward function and changing the parameters of an experiment is NOT the same as ‘unplugging’ or ‘shutting down AI’,” Batra said in the post late on Tuesday.

“If that were the case, every AI researcher has been ‘shutting down AI’ every time they kill a job on a machine,” he added.

It was widely reported that the social media giant had to pull the plug on the AI system its researchers were working on “because things got out of hand”.

“The AI did not start shutting down computers worldwide or something of the sort, but it stopped using English and started using a language that it created,” media reports said.

Initially, the AI agents used English to communicate with each other, but they later created a new language that only the AI systems could understand, thus defying their purpose.

This reportedly led Facebook researchers to shut down the AI systems and then force them to speak to each other only in English.
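Batra's point about agents finding unintuitive ways to maximise reward can be illustrated with a toy sketch (the names and numbers here are hypothetical, not FAIR's actual code): if a negotiation reward scores only the task outcome and never rewards English-likeness, an agent that drifts into repetitive shorthand loses nothing by abandoning well-formed English.

```python
def negotiation_reward(items_won: dict, values: dict) -> int:
    """Reward = total value of the items the agent secured in the deal.

    Note that nothing here scores how the agent *talked* during the
    negotiation -- only what it ended up with.
    """
    return sum(values[item] * count for item, count in items_won.items())


# Hypothetical item values for a toy negotiation.
values = {"book": 3, "hat": 2, "ball": 1}

# Two dialogues that secure the same items earn identical reward,
# whether the agent said "I'll take a book and a hat" or a degenerate
# shorthand like "ball ball book hat hat hat".
english_outcome = {"book": 1, "hat": 1}
shorthand_outcome = {"book": 1, "hat": 1}

print(negotiation_reward(english_outcome, values))    # prints 5
print(negotiation_reward(shorthand_outcome, values))  # prints 5
```

Under such a reward, drifting away from English is not a malfunction but an unintuitive optimum, which is why, as reported, the researchers' fix was simply to constrain the agents to speak English rather than to "pull the plug".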

“I do not want to link to specific articles or provide specific responses for fear of continuing this cycle of quotes taken out of context, but I find such coverage clickbaity and irresponsible,” Batra posted.

In June, researchers from FAIR found that while they were busy trying to improve chatbots, the “dialogue agents” were creating their own language.

Soon, the bots began to deviate from the scripted norms and started communicating in an entirely new language which they created without human input, media reports said.


