Google's parent company lifting a longstanding ban on artificial intelligence (AI) being used for developing weapons and surveillance tools is "incredibly concerning", a leading human rights group has said.
Alphabet has rewritten its guidelines on how it will use AI, dropping a section which previously ruled out applications that were "likely to cause harm".
Human Rights Watch has criticised the decision, telling the BBC that AI can "complicate accountability" for battlefield decisions that "may have life or death consequences".
Google defended the change, arguing that businesses and democratic governments needed to work together on AI that "supports national security".
Experts say AI could be widely deployed on the battlefield - though there are also fears about its use, particularly with regard to autonomous weapons systems.
"For a global industry leader to abandon red lines it set for itself signals a concerning shift, at a time when we need responsible leadership in AI more than ever," said Anna Bacciarelli, senior AI researcher at Human Rights Watch.
The "unilateral" decision showed also showed "why voluntary principles are not an adequate substitute for regulation and binding law" she added.
In its blog, Alphabet said democracies should lead in AI development, guided by what it called "core values" like freedom, equality and respect for human rights.
"And we believe that companies, governments and organisations sharing these values should work together to create AI that protects people, promotes global growth and supports national security," it added.
The blog - written by senior vice president James Manyika and Sir Demis Hassabis, who leads the AI lab Google DeepMind - said the company's original AI principles published in 2018 needed to be updated as the technology had evolved.