Google says it won't use artificial intelligence for weapons

Google vows to not allow its artificial intelligence software to be used in weapons

In a blog post, Google chief executive Sundar Pichai outlined the company's principles and policies on how its AI may be used. The new set of principles is in part an answer to questions raised over the past year.

The document, which also enshrines "relevant explanations" of how AI systems work, lays the groundwork for the rollout of Duplex, a human-sounding digital concierge that was shown off booking appointments with human receptionists at a Google developer conference in May. The company has now chosen to spell out a number of things it won't do, particularly with its artificial intelligence (AI) operations.

But it has also experienced some of the perils associated with AI, including YouTube recommendations pushing users toward extremist videos and Google Photos image-recognition software categorizing black people as gorillas. To make its position easier for users to understand, the company has laid out seven principles. Much as Google never defined exactly what its now-ditched "don't be evil" motto meant, the tech world seems satisfied with the principles themselves, if not always with what they mean in context.

He said Google is using AI "to help people tackle urgent problems" such as predicting wildfires, helping farmers, diagnosing disease and preventing blindness.


For brevity, only the broad strokes of the seven principles are summarized here; Pichai goes into more detail on each of them in the blog post.

The principles generally assert that artificial intelligence should be safe, socially beneficial, and should avoid creating or reinforcing unfair bias.

AI systems should be "built and tested for safety", be built with privacy in mind, and "uphold high standards of scientific excellence".

Will it have a significant impact?


Take one example, pharmaceuticals: there is nothing to stop medical research by Google from being weaponised without Google's involvement, so as worthy as all this is, it is impossible to control completely.

Though Google downplayed the work as simply "low-res object identification using AI", many Google employees saw the potentially darker side of the technology.

Also excluded are weapons and other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people, along with technologies that gather or use information for surveillance in violation of internationally accepted norms, and technologies whose objective contravenes widely accepted principles of international law and human rights. However, there is a contradiction in this.

"In other words, the company acknowledges that some AI developed for one goal may in fact be re-purposed in unintended ways, even by the military", she said Friday. The company points toward a variety of categories, including military training and cybersecurity, as areas where it will work with the government/military.

