Google vows not to allow its artificial intelligence software to be used in weapons
Google’s CEO laid out the company’s artificial intelligence objectives in a blog post after thousands of employees protested against its work with the US military.
Google said on Thursday that it would not allow its artificial intelligence programme to be used to develop weapons or for surveillance efforts that violate international laws. Chief Executive Officer Sundar Pichai set out seven objectives for the use of artificial intelligence in a blog post the same day.
The announcement follows protests by thousands of the search engine’s employees, who objected to the company’s collaboration with the United States military on identifying objects in drone videos, Reuters reported. Pichai said the company would “continue our work with governments and the military in many other areas”.
The company’s collaboration with governments will be in the fields of cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue. The chief executive officer said the company had the prerogative to reject applications that violated its principles.
Google’s goals for the programme are for its artificial intelligence to be socially beneficial, avoid creating or reinforcing unfair bias, be built and tested for safety, be accountable to people, incorporate privacy design principles, uphold high standards of scientific excellence, and be made available for uses that accord with these principles.
Pichai said Google would use artificial intelligence to make its products more useful. “From email that is spam-free and easier to compose to a digital assistant you can speak to naturally,” the company’s CEO said. “Beyond our products, we are using AI to help people tackle urgent problems. A pair of high school students are building AI-powered sensors to predict the risk of wildfires. Farmers are using it to monitor the health of their herds. Doctors are starting to use it to help diagnose cancer and prevent blindness.”