Google DeepMind director calls for clear and consistent AI regulation

In the wake of the California governor’s veto of sweeping AI safety legislation, a Google DeepMind executive is calling for consensus on what constitutes safe, responsible, and human-centric artificial intelligence.

“My hope for the field is that we can achieve consistency, so we can fully reap the benefits of this technology,” said Terra Terwilliger, director of strategic initiatives at Google DeepMind, during the Fortune Most Powerful Women Summit. She was joined by January AI CEO and co-founder Noosheen Hashemi, Eclipse Ventures general partner Aidan Madigan-Curtis, and Dipti Gulati, CEO for audit and assurance at Deloitte & Touche LLP U.S.

The group discussed SB-1047, the hotly debated California bill that would have required developers of the largest AI models to meet specific safety-testing and risk-mitigation standards. Madigan-Curtis suggested that if companies like OpenAI are truly building models as powerful as they claim, there should be legal obligations to develop them safely.

“That’s how our system operates, right? It’s about the push and pull,” Madigan-Curtis explained. “Part of what makes being a doctor scary is the possibility of being sued for medical malpractice.”

She pointed to the now-defunct bill’s “kill-switch” provision, which would have required companies to build in a way to shut down their model if it was being misused for catastrophic purposes, such as developing weapons of mass destruction.

“If a model is being used to terrorize a specific population, shouldn’t we have the ability to deactivate it or prevent its misuse?” she questioned.

Terwilliger emphasized the need for regulation that accounts for the different levels of the AI stack, noting that foundation models carry different responsibilities than the applications built on top of them.


“It’s crucial that we assist regulators in understanding these differences to establish stable and coherent regulations,” she stated.

Terwilliger argued, however, that the drive to develop AI responsibly should not rest on government intervention alone. Even as regulatory requirements evolve, she said, responsible development is vital to the technology’s long-term acceptance, from ensuring clean training data to implementing safeguards for the model at every level.

“We must recognize that responsibility can be a competitive advantage, and understanding how to act responsibly at every level of the stack will make a difference,” she concluded.