Cybersecurity Tech Accord Principle #2: No Offense
The Cybersecurity Tech Accord’s second principle says, “we will oppose cyberattacks on innocent civilians and enterprises from anywhere.” In that spirit, it is critical to consider how we can help prevent AI from being used maliciously by attackers. While AI is a powerful tool for defending against cyber threats, it also poses new challenges and risks that require careful consideration and collaboration. Industry should work together to develop responsible guardrails and principles for deploying AI in cybersecurity, grounded in shared values and standards, to ensure maximum uptake by industry stakeholders. Industry should also foster trust and transparency among stakeholders, respect human rights and privacy, and ensure accountability and oversight for AI cybersecurity decisions and outcomes.
We propose the following framework for ensuring responsible and trustworthy AI cybersecurity deployment:
- Reliable and safe: AI cybersecurity systems should be robust, resilient, and secure, able to handle complex and uncertain situations without causing harm or errors. AI developers and deployers should follow rigorous testing and verification standards, and provide clear and accurate information about their AI capabilities and limitations.
- Private and secure: AI cybersecurity systems should respect the privacy and security of customers, partners, and users, and protect their data and identities from unauthorised access or misuse. AI developers and deployers should also enable their customers and users to control and manage their own data and preferences, and to exercise their rights and choices regarding their data and AI interactions.
- Transparent: AI cybersecurity systems should provide clear and understandable explanations of how they work, what they do, and why they make certain decisions or recommendations. AI developers and deployers should also disclose the sources and quality of their AI data and models, and the assumptions and trade-offs involved in their AI design and development.
- Accountable: Industry should take responsibility for AI cybersecurity systems and their outcomes, and ensure that they are aligned with its ethical values and legal obligations. It should also establish and enforce appropriate governance and oversight mechanisms, and provide ways for customers and users to report and resolve any issues or concerns related to AI systems.
This responsible framework serves as a foundational element for adopting a non-offensive stance. As signatories of the Cybersecurity Tech Accord, we endorse this framework and encourage others in the industry to align with it. The framework is particularly timely and important as threat actors increasingly use AI to enhance cyberattacks. For instance, AI is already fuelling a rise in phishing attacks; a 2023 report indicated that 75 percent of 650 surveyed cybersecurity professionals noted an increase in attacks over the previous year, with 85 percent attributing the uptick to AI. These capabilities pose new threats to banks, hospitals, and other critical infrastructure often targeted by state-backed hackers, as well as to sensitive data and intellectual property. By creating an environment where it is cheaper and easier to attack than to develop effective defenses, AI has the potential to intensify hybrid conflicts and cause substantial harm to civilians.
Below are the ways threat actors are currently leveraging AI to augment their capabilities across the stages of a cyberattack:
- Reconnaissance: Attackers gather information to help them choose their targets and plan their attacks. This data may involve details about individuals for social engineering purposes or technical insights regarding targeted networks and software systems.
- Unauthorized Access to Systems: The attacker gains access to the target’s information system, either by stealing user credentials or exploiting software vulnerabilities to create a backdoor.
- Privilege Escalation and Lateral Movement: After breaching a system, the attacker takes actions to obtain higher privileges or to move to other, more valuable systems.
Read how our Principles apply to AI in cyber:
Introduction to Cybersecurity Tech Accord in the Age of AI: A new series exploring challenges and opportunities for industry
Cybersecurity Tech Accord Principle #1: Strong Defense: Tilting the advantage towards cyber defenders
Cybersecurity Tech Accord Principle #3: Capacity Building: Building AI Cybersecurity Capacity
Cybersecurity Tech Accord Principle #4: Collective Action: A Multistakeholder Approach