Balancing risk: how extensive vulnerability reporting requirements undermine security

The Cybersecurity Tech Accord welcomes regulatory efforts to improve cybersecurity but has significant concerns about recent proposals to report or publicly disclose unpatched vulnerabilities in information technology (IT) products, which would leave those products more vulnerable to exploitation. In both the European Union’s Cyber Resilience Act (CRA) and the United States’ Federal Information Security Modernization Act of 2023 (FISMA), provisions mandating the reporting of unpatched vulnerabilities to government agencies would expand the number of individuals aware of those vulnerabilities, undermining product security and inadvertently putting governments, critical infrastructure, businesses, and consumers at risk. We strongly recommend that governments reconsider such provisions in light of existing best practices for vulnerability management.

We understand that governments have well-founded concerns surrounding vulnerabilities in IT products, especially those supporting critical infrastructure, driving a desire to know more about potential risk. However, to improve vulnerability management and reduce risk we would encourage policymakers to instead consider policies that support more secure software development practices and expedite the mitigation of vulnerabilities in a risk-based manner. To this end, the Cybersecurity Tech Accord has long promoted vendors adopting Coordinated Vulnerability Disclosure (CVD) policies for receiving information on vulnerabilities, coordinating information sharing, and disclosing vulnerabilities and their mitigations to different stakeholders.[1] CVD is a well-established industry best practice, reflected in ISO and NIST standards, which would be undermined by overzealous vulnerability reporting requirements.

Responsibly mitigating vulnerabilities

IT vulnerabilities are, generally speaking, errors in code that malicious actors can use to gain illicit access to a system. Reducing the overall number and severity of security vulnerabilities in IT products should be a top priority for industry and regulators alike. This can be accomplished through more secure software development practices that limit the likelihood and impact of vulnerabilities in the first place, and by patching vulnerabilities in products when they are discovered. Consistent with our first principle to support “strong defense,” the Cybersecurity Tech Accord has been a strong proponent of companies utilizing software development best practices to help minimize the impact of security vulnerabilities.

Unfortunately, even the most responsible engineering practices cannot eliminate every vulnerability in IT products beyond an irreducible minimum – errors will still occur. Vulnerabilities are generally uncovered by the company responsible for the product, by other stakeholders in the supply chain, or by independent security researchers who responsibly disclose them to the vendor to be patched. In each case, there must be processes in place for triaging vulnerabilities when they are discovered, and it is important that policymakers consider how requirements for reporting vulnerabilities would affect those processes. When a security vulnerability is discovered, the goal for all involved must be to limit the risk exposure of customers and business users, who should be appropriately informed, and to ensure that the vulnerability is patched in a timely fashion with minimal customer disruption.

Not all vulnerabilities are alike: some may pose no risk at all while others present a serious threat, and some may be simple to fix while others take more time. This is why vendors need discretion to patch on a case-by-case basis, prioritizing the most pressing vulnerabilities first. Regardless of a vulnerability’s relative severity, however, a key element is ensuring that the number of individuals and organizations aware of it remains small until it is fixed.

Mandated reporting or disclosure dramatically increases cyber risk

Regulations that require reporting vulnerability details – whether discovered by vendors, supply chain stakeholders, or security researchers – to a government agency (or agencies) before a patch has been developed, or before the vulnerability is actively being exploited, undermine the process outlined above. Such regulations introduce significantly more risk by enlarging the circle of people who know about a vulnerability, increasing the likelihood that it will be discovered by malicious actors before a patch is available. No government agency is in a position to help develop a patch or otherwise triage a newly discovered vulnerability. Reporting requirements therefore serve only to aggravate risk and complicate the security situation, introducing more variables and tying up vendor resources in compliance and reporting activities when their focus should be on vulnerability response.

None of this precludes the timely and safe private sharing of vulnerabilities or vulnerability advisories – even before a fix is available – among trusted stakeholders in the private and public sectors who can help mitigate a vulnerability, where immediate public sharing is not appropriate, so that the necessary remediation can be planned and implemented. However, the Cybersecurity Tech Accord recognizes several particular risks posed by requiring widespread reporting of unpatched vulnerabilities, including:

  • Leaking of vulnerabilities – The preeminent concern is that reporting a known vulnerability to a government authority, or to multiple uncoordinated authorities, before a patch is available immediately expands the number of people aware of it, any of whom may – wittingly or not – share that information, allowing it to be weaponized by a malicious actor before customers can be protected.
  • Proliferation of reporting requirements – Even where a government is equipped to handle knowledge of unpatched vulnerabilities responsibly and without risk of leaks, adopting such regulations in one market will encourage other, potentially less capable, governments to follow suit. Ultimately, regardless of how responsible a particular government is, growing the number of individuals and organizations who know about an unpatched vulnerability increases risk for customers.
  • Government abuse – While governments play an important role in supporting cyber defenses, they are also some of the most advanced threat actors, with agencies and departments that may also be interested in weaponizing any vulnerabilities that are reported – whether to combat criminals or target adversaries online. Some governments have even adopted, or are considering, policies requiring independent security researchers to report vulnerabilities to government agencies first before disclosing to vendors to be remediated, creating further risk of abuse.
  • Undermining good-faith security researchers – A healthy cybersecurity ecosystem greatly benefits from good-faith security researchers who independently discover vulnerabilities and intentionally and responsibly disclose them to vendors to be fixed. This is a tremendously helpful service and stands in stark contrast to selling vulnerabilities on the black market. If companies are required to report vulnerabilities to governments before they are fixed, security researchers will have less incentive to disclose such vulnerabilities responsibly to vendors to be patched.

We of course understand the good intentions of policymakers seeking a better understanding of the threat landscape and looking to protect critical infrastructure and sensitive data in particular from cyber risk. Moreover, industry should be held accountable for following up on vulnerabilities discovered in its products and for ensuring patches are developed using a risk-based approach. However, requiring rapid reporting of unpatched vulnerabilities will only weaken security and create a race to the bottom across jurisdictions on reporting requirements. Instead, we would encourage policymakers to consider other avenues for promoting industry accountability for mitigating vulnerabilities.

Such alternatives might include encouraging companies to adopt CVD policies in keeping with industry best practices, as outlined in the introduction and reflected in the more than 100 Tech Accord companies that have published CVD policies, which are currently listed on the Cybersecurity Tech Accord website. Such policies help ensure that there is a process in place for receiving vulnerability disclosures from security researchers so that vulnerabilities can be patched in a risk-based manner. Beyond that, policymakers might also consider what kind of after-action reporting could be required to ensure visibility into the timeline for vulnerability discovery, investigation, and remediation. This would allow governments to confirm that responsible practices are being implemented without disseminating the vulnerability or diverting resources from issuing a security response.
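As one concrete illustration of how a CVD policy is surfaced in practice, many vendors publish a machine-readable security.txt file (standardized in RFC 9116) that points researchers to their disclosure channel and published policy. A minimal sketch follows; the domain, addresses, and URLs are hypothetical placeholders, not a real vendor’s contact details:

```text
# Served at https://example.com/.well-known/security.txt (hypothetical domain)
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59Z
Policy: https://example.com/security/cvd-policy
Preferred-Languages: en
Canonical: https://example.com/.well-known/security.txt
```

A file like this gives good-faith researchers an unambiguous, private reporting path to the vendor, which is the first step in the coordinated disclosure process described above.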


[1] The CERT Guide to Coordinated Vulnerability Disclosure, Software Engineering Institute, Carnegie Mellon University.