Google CEO Sundar Pichai recently warned against the dangers of “rushing” into the regulation of AI, a key area for the future growth of his company.
During a recent interview with the Financial Times, Google CEO Sundar Pichai expressed his concerns about “rushing” into the regulation of artificial intelligence technology. “It is such a broad cross-cutting technology, so it’s important to look at [regulation] more in certain vertical situations,” Pichai said. “Rather than rushing into it in a way that prevents innovation and research, you actually need to solve some of the difficult problems.”
Pichai acknowledged issues with AI such as algorithmic bias and accountability, but still appeared hopeful about the technology's future. Google has faced backlash over certain AI-driven projects, even as AI remains a huge growth market for the firm. Its drone contract with the Department of Defense, Project Maven, drew fierce opposition from employees: thousands signed a petition calling on the firm to end the program, with many arguing it was unethical for Google to use AI in a military application. In March, Google allowed the contract to lapse.
But AI remains a key growth area for all of Silicon Valley, and for Google in particular. During a first-quarter earnings call this year, Pichai said that almost three-quarters of Google's advertising clients were using automated ad technologies that rely on machine learning and artificial intelligence. Given that Google makes the majority of its revenue from advertising, this is an extremely lucrative area for the firm.
Sandra Wachter, a professor at the Oxford Internet Institute, told Business Insider in an interview that how AI is utilized should shape how it is regulated. “I think it makes a lot of sense to look at sectorial regulation rather than just an overarching AI framework,” Wachter said. “Risks and benefits of AI will differ depending on the sector, the application, and the context. For example, AI used in criminal justice will pose different challenges than AI used in the health sector. It is crucial to assess if all the laws are fit for purpose.”
Former Google engineer Laura Nolan, who was recruited to Project Maven and resigned over ethical concerns, stated: “Algorithmic decision-making is definitely an area that I believe the US needs to regulate. Another major area seems to be automated biometric-based identification (i.e., facial recognition but also other biometrics like gait recognition and so on). These are areas where we see clear problems emerging due to new technological capabilities, and not to regulate would be to knowingly allow harms to occur.”
Nolan added: “The use of AI in warfare absolutely needs to be regulated — it is brittle and unpredictable, and it would be very dangerous to build weapons that use AI to perform the critical function of target selection.”