Google's Sundar Pichai to stop AI weapon project

"We recognize that such powerful technology raises equally powerful questions about its use", he said in a blog post Thursday. It did say that it will continue its governmental and military work in other areas, however. The chief executive officer said it had the prerogative to reject applications that violated its principles. In the post, Google also acknowledges the "significant impact" AI will have on society and said that it feels a "deep responsibility" to get AI right.

Pichai's memo lists seven principles.

Google released the guidelines soon after it said it would stop working with the military on the controversial Project Maven.

Google plans to honor its commitment to the project through next March, a person familiar with the matter said last week, and the charter makes clear that Google will continue to pursue such government contracts: "These collaborations are important and we'll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe". A Google official acknowledged that enforcement would be hard because the company cannot track each use of its tools, some of which can be downloaded free of charge and used privately. But Google is joining other labs in saying it may hold back certain research if it believes others will misuse it. Civil liberties organizations recently called out Amazon for offering facial recognition technology to local police departments.

Google has pledged not to use its artificial intelligence technology in military weapons or in anything that might undermine human rights, in a set of principles announced on Thursday. The principles also state that the company will work to avoid "unjust impacts" in its AI algorithms, such as those caused by injecting racial, sexual or political bias into automated decision-making.

More than 4,000 Google employees signed a petition protesting Google's contract, and some staffers resigned over it. The proposed limit on the use of AI for surveillance is positive, the person said, but the language is too cautious.

A nine-employee committee drafted the AI principles, according to an internal email seen by Reuters; the employees asked not to be identified because they were criticizing their employer.

Among the AI applications Google says it will not pursue are technologies whose objective contravenes widely accepted principles of international law and human rights.

Though Google downplayed Project Maven as simply "low-res object identification using AI", many Google employees saw a potentially darker side of the technology.

No Google AI technology will ever be used as a weapon or for surveillance, the policy states. In addition, the company will refuse to develop any AI projects that "cause or are likely to cause overall harm". "But it's a start".

"It's so exciting that we're close to getting MAVEN!" Google seems to want to set the principles for their use of AI early, but also recognise that some of these may need to change, or be added to.
