On August 13, 2016, a mysterious organization calling itself the ShadowBrokers opened for business. Its product: a cache of cyber weapons stolen from the National Security Agency (NSA). “We hack Equation Group,” bragged the auctioneers in gleeful broken English, using a nickname for the NSA’s hacking unit coined by a Russian cyber-security firm. “We find many many Equation Group cyber weapons.”

The ShadowBrokers hack wasn’t the first time the NSA’s defenses were breached, but it represented something new. Previous leakers—such as Thomas Tamm and Edward Snowden—focused on policy, legal, and programmatic subjects in the information they gave to journalists. The ShadowBrokers made off, for the first time, with the NSA’s digital intrusion tools themselves—the computer code for digital weapons that the NSA uses to break into other people’s computing systems and networks. And the group did not hand off the stolen tools to reporters; it tried to sell them online.

Within days of the ShadowBrokers post, software companies acknowledged that the exploits worked against their products, and analysts linked them back to the NSA. Worse still, the cache included a slew of NSA-developed “zero-day” exploits. Zero-days are previously undiscovered security flaws that a hacker can exploit before the developer knows they exist, leaving the developer “zero days” to mount a defense. Zero-days are valuable (documents revealed by Edward Snowden show the NSA paid $25 million for zero-day exploits in 2013) and can take years to develop.

For criminal hackers and foreign spies, the NSA cache was a bonanza. In May of 2017, a party or parties unknown launched an attack known as “WannaCry” that delivered malware to hundreds of thousands of machines around the world by means of an NSA-developed exploit targeting Windows machines. The ransomware wreaked havoc across Europe, particularly in the British hospital system. In June, an even more destructive attack called “NotPetya,” built on the same stolen exploit, landed in Ukraine and began to spread. NSA cyberweapons were on the loose.

Zero-Days and the Vulnerability Equities Process

It has been just over a year since the ShadowBrokers auction, and not much has changed. The NSA still develops and buys zero-day exploits. Sometimes the government discloses the security flaws to companies like Microsoft or Adobe, which allows them to plug the holes in their products and protect the general public from criminal hackers. Sometimes the government keeps the zero-days secret for its own use, risking that others will discover or steal them for criminal purposes. The interagency process for deciding whether to retain or disclose new zero-days is known as the Vulnerability Equities Process (VEP). It isn’t working.

It took nearly two years to force the government to describe the process at all. The story begins in April 2014, when Bloomberg alleged that the NSA had deliberately withheld information about a damaging vulnerability known as “Heartbleed.” Special Assistant to the President and Cybersecurity Coordinator Michael Daniel denied that claim and asserted in a blog post that the government used a “rigorous and high-level decision-making process for vulnerability disclosure.” In May 2014, the Electronic Frontier Foundation filed a Freedom of Information Act request for records related to that process. After a protracted lawsuit, the government released a redacted description of the process in January 2016.

The document offers some insight into the process, but not much. In a nutshell, it shows that a previously unknown interagency group called the Equities Review Board (ERB) decides by majority vote whether to keep or disclose a vulnerability. The actual procedure for withholding a zero-day is blacked out, and the appeals process is heavily redacted. The government has not even released a list of the agencies that hold seats on the ERB.

When Michael Daniel hinted at the existence of the VEP in 2014, he acknowledged that there are “no hard and fast rules.” An overly rigid VEP would be cumbersome and ineffective, he argued. Daniel’s successor (and current chair of the VEP), Rob Joyce, holds a similar view. Officials are making “non-black-and-white decisions” about zero-days, he told a group of tech leaders in Boston last May. Releasing information on every bureaucratic step would not necessarily make the system better.

A Need for Reform

The concerns that Joyce and Daniel raise are legitimate. When it comes to intelligence operations, zero-days are the ace up the sleeve. They are particularly important in high-stakes operations, such as those dealing with matters of national security or state-level espionage. In such cases, targets of surveillance have hardened their defenses against intrusion, and zero-days may be the only viable way in. There are good reasons for the government not to publish the specifics of how it manages its arsenal.

On the other hand, the public has a strong interest in transparency. Without it, citizens and domestic businesses cannot know if the VEP adequately balances their interest in disclosure with the needs of the intelligence community. In fact, representative government writ large does not function without transparency. It derives its legitimacy from a system of codified and publicly understood laws. Secretive and informal processes, like the current VEP, are anathema to such a system. They blur the line between legal and illegal behavior, and undermine public trust in our government by insulating decision-makers from any meaningful oversight.

Biased for Disclosure?

Government officials have repeatedly assured the public that the VEP is biased toward disclosure. President Obama ordered a review of the VEP in 2013, which National Security Council spokesperson Caitlin Hayden told the New York Times resulted in a “reinvigorated” process that was “biased toward responsibly disclosing such vulnerabilities.” In the fall of 2015, the NSA reported that it “disclosed 91 percent of vulnerabilities discovered in products that have gone through our internal review process and that are made or used in the United States.”

This description raises as many questions as it answers. It does not, for example, say when the zero-days were disclosed. This means the NSA may exploit dangerous security holes for months or years before notifying software developers. A former White House official admitted that it would be “a reasonable assumption” that a large fraction of vulnerabilities are exploited before they are disclosed. The longer the window between discovery and disclosure, the more likely it is that malicious hackers have used the same exploit to steal passwords, identities, money, or valuable data.

The NSA statement’s careful phrasing also carves out several important exceptions. For example, the phrase “made or used in the United States” excludes zero-day flaws in software sold by foreign companies to customers overseas. That is little comfort to the U.S. citizens and businesses that use those products: malware deployed against foreign targets is notoriously difficult to control and can infect machines or systems that were not the original target. The Internet knows no geographic boundaries. The phrase “that have gone through our internal review process” also suggests that some zero-day exploits are not reviewed at all.

Absent reliable information indicating otherwise, it is possible that the government does not lean toward disclosure in practice. “Very rarely did we actually declassify and inform for defense,” reflected former cyber policy advisor Melissa Hathaway in an interview with FCW, “and we never actually, really, thought about the economic consequences if we didn’t actually share it.” Without independent oversight, there is no way to reconcile her account with the official version.

A Push Toward Meaningful Oversight

This is not to say the VEP is doomed to fail. Initial reforms should focus on increasing transparency. Researchers from Harvard’s Belfer Center suggest several changes, including naming the agencies involved and who has the final say; releasing the aggregate number of zero-days discovered, retained, and disclosed; releasing the average delay before disclosure; and establishing a periodic review process. These reforms would provide much-needed clarity without compromising ongoing intelligence operations.

The VEP must also ensure that agencies whose missions incline them to give greater weight to Internet security are full participants. This is especially important given the recent integration of the NSA’s defensive wing, the Information Assurance Directorate (IAD), into its much larger offensive counterpart, the Signals Intelligence Directorate. In the future, voices advocating for disclosure may be drowned out. One option is to have a different organization chair the VEP; the Belfer Center report suggests that the Department of Homeland Security (DHS) could assume that role. The review board could also include stronger advocates for disclosure when domestic interests are at risk; the PATCH Act, introduced by Representative Ted Lieu in May, proposes adding the Commerce Secretary to the board.

But these reforms alone are not enough. The ShadowBrokers hack proved that the NSA’s digital arsenal can be turned against U.S. companies and citizens. When these weapons were stolen, vulnerable companies were left in the dark until the ShadowBrokers announced the theft themselves. At a minimum, VEP reforms should require the NSA to notify affected developers as soon as it learns that its own tools have been stolen.

The VEP should also apply to “black box” exploits that the government purchases from third parties but does not fully understand. After the San Bernardino shooting, for example, the FBI refused to disclose the exploit used to unlock the shooter’s iPhone. It later emerged that the FBI had purchased the exploit from an Israeli company and did not know how it worked. Intelligence agencies can easily circumvent reforms that fail to cover such exploits.

Implementing these reforms will be difficult, but zero-day exploits are too dangerous to be stockpiled without meaningful oversight. As the digital age progresses, they are likely to become more common and dangerous. It is past time for honest and accountable deliberation over the American arsenal.