Google has reportedly pulled out of a Pentagon cloud computing contract valued at as much as $10 billion, saying the project may conflict with its values.
Project JEDI (Joint Enterprise Defense Infrastructure) drew bids from Amazon, which built the CIA's private cloud, as well as Oracle, Microsoft, IBM, and Google, which has been under fire from employees for working with the military.
“We are not bidding on the JEDI contract because first, we couldn’t be assured that it would align with our AI Principles, and second, we determined that there were portions of the contract that were out of scope with our current government certifications,” a Google spokesman said in a statement made available by Bloomberg.
Google also objected to awarding the contract to a sole cloud provider, arguing it would have been better to split the work into parts so the company could bid on a portion of the project.
“Had the JEDI contract been open to multiple vendors, we would have submitted a compelling solution for portions of it. Google Cloud believes that a multi-cloud approach is in the best interest of government agencies, because it allows them to choose the right cloud for the right workload,” the Google spokesman said.
But the Department of Defense rejected the suggestion, stating that splitting the contract “could prevent DoD from rapidly delivering new capabilities and improved effectiveness to the warfighter that enterprise-level cloud computing can enable.”
The Department does, however, expect “to maintain contracts with numerous cloud providers to access specialized capabilities not available under the JEDI Cloud contract.”
Google had recently withdrawn from Project Maven, a Pentagon contract under which it used AI to analyze drone images, including images taken from conflict zones, for the military. That history may explain why Google wanted a multi-vendor cloud arrangement in which it could choose which part of the project to work on.
Google’s AI Principles do not allow it to deploy AI in the following areas:
- Technologies that cause or are likely to cause overall harm. (Subject to risk/benefit analysis.)
- Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
- Technologies that gather or use information for surveillance violating internationally accepted norms.
- Technologies whose purpose contravenes widely accepted principles of international law and human rights.
While Google will not support the military on weapons development, it remains open to collaborating with the military in other areas.
“While we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas,” Sundar Pichai, Google’s chief executive officer, wrote. “These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.”