I. Overview: AI and Cybersecurity
More than six years ago, the Cybersecurity Tech Accord was launched as a pledge by the technology industry, drawing a line in the sand as a commitment to uphold foundational cybersecurity principles amid escalating conflict online. Since then, the importance of this commitment has only grown, mirrored by the rapid growth of our coalition from 34 original tech companies to over 160 signatories today. While cyber risk, malicious actors, and technology itself continuously evolve, the emergence of generative artificial intelligence (AI) over the past year and a half represents a groundbreaking leap forward that compels us to reassess and redefine the principles of the Cybersecurity Tech Accord in this new AI era.
While the Cybersecurity Tech Accord’s principles remain steadfast – to support (i) strong defense, (ii) no offense, (iii) capacity building, and (iv) collective action – it is important to consider what these commitments mean in a digital landscape where people everywhere will increasingly leverage and interface with generative AI in the years to come. How will AI be used to strengthen cybersecurity, and how can our industry limit the threats posed by malicious use? How might we work together, as an industry and as a broader multistakeholder community, to ensure that AI improves security not just in advanced nations but in emerging economies as well? These are the essential questions that drive this new report series.
As we kick off Cybersecurity Awareness Month this October, we hope you will follow along with this exploration of the multifaceted and evolving intersection of AI and cybersecurity. This series will examine the current landscape and capabilities of AI models, the implications for cyber risk, and the immense potential of widespread AI adoption for enhancing cybersecurity. Drawing on insights from our diverse signatories, it will articulate principles, best practices, gaps, barriers, and recommendations to equip the industry to harness the benefits and mitigate the risks posed by AI in cybersecurity. Furthermore, it will propose a framework for ensuring responsible and trustworthy AI-driven cybersecurity. The series will also delve into the demand for and supply of AI-based cyber defense solutions and skills, and discuss the pivotal role and responsibilities of the tech industry in fostering innovation and collaboration. Finally, it will delineate the roles and responsibilities of the various stakeholders within the AI cybersecurity ecosystem.
AI for cybersecurity vs. the security of AI:
As a crucial distinction, this report series will address how advancements in AI impact the cyber threat landscape – what we term “AI for cybersecurity.” This encompasses how AI models can reinforce cyber defenses and how they might be exploited by malicious actors to escalate offensive cyber activities, a pressing concern for the industry. This focus is separate from discussions about unique cyber threats targeting AI models themselves – such as their potential corruption or manipulation by nefarious actors. It also excludes analyses of threats stemming from AI systems related to fraud and manipulation (e.g., “deepfakes”) or augmenting non-cyber offensive operations (e.g., autonomous weapon systems). While these topics concerning the security and weaponization of AI are critical, they fall outside the scope of this series, which remains dedicated to cybersecurity.
We hope you will follow along throughout Cybersecurity Awareness Month as, each week, this series pulls back the curtain on new ways in which AI may impact cybersecurity going forward. If you have thoughts or comments, please reach out to the secretariat: [email protected].