Vulnerability disclosure can be difficult. I had hoped that vulnerability disclosure programmes (VDPs) and bug bounty programmes would make things easier. That doesn’t appear to be the case in general, and often for unanticipated reasons.
However, it’s not all bad. Bug bounty programmes reward independent researchers for their efforts and encourage researchers and organisations to work together. They are also very useful for filtering out noise, allowing organisations to focus on critical vulnerabilities.
Perhaps the hardest part is helping the people behind a VDP drive change in an organisation that isn’t interested. Many organisations claim they take customer security seriously, but inaction and abdication of responsibility show that they don’t. Outsourcing to a bug bounty platform does not exempt the organisation from its responsibility to listen to and fix vulnerabilities.
I don’t want to shoot the messenger. It is neither fair nor right for a researcher to hold someone from the vendor’s Product Security Incident Response Team (PSIRT) responsible for all the vendor’s failures. But if they are my only point of contact, what options do I have?
If I can’t get a reasonable response through a VDP and the vulnerability is severe enough, I will simply go straight to the top.
Why?
VDP staff are seldom empowered to make the necessary changes.
They can submit fix requests and may even be able to raise their priority. But if the dev team’s priorities lie elsewhere, good luck getting it fixed.
I say “seldom” because there have been a few brilliant cases in which people listened, understood the threat to their customers and brand, and took swift action. They were empowered to mobilise resources and effect change.
Interestingly, there is little correlation between positive or negative outcomes and organisation size, or whether the organisation had a VDP at all. Our experience is only a small part of a larger picture, but it shows how inconsistent disclosure is and why it can be such a pain to do right.
What is the problem?
If the PSIRT and VDP team are not empowered, creating a VDP is pointless.
If the team can’t rapidly escalate to people who can make the quick decision to take live, revenue-generating systems down, they aren’t really empowered.
An example
SonicWall’s cloud management platform had a vulnerability that allowed remote, unauthorised account compromise, which in turn allowed remote compromise of any number of their customers’ networks. It was a CVSS 10 issue.
It was reported through the VDP. It was acknowledged. That was it. We chased for an update almost two weeks later; they deflected, and the vulnerability remained.
So I searched my LinkedIn network and found the SonicWall CEO. I sent him a message. He responded quickly and immediately brought in his CTO.
It was fixed in eight hours.
What baffled me was that they are a security vendor and run their own VDP. The CEO should never have had to intervene.
This is only one of many similar experiences that we have had.
The downside of empowerment
However, empowerment comes with risks and costs. If you want people who can make the call to shut down production servers, you will need to hire highly skilled (and expensive) security personnel.
Is dealing with bounty submissions the most productive use of those security personnel? No.
Triage
The problem is that PSIRT teams receive a large number of reports that are difficult to triage: beg bounty requests, scan output passed off as vulnerability reports, and people reporting “vulnerabilities” that aren’t vulnerabilities at all.
A VDP can do a few things to make triaging easier.
For example, you might ask the researcher to answer a list of 10 yes/no questions so the report can be scored. Something like:
- Does the vulnerability allow access to other users’ PII?
- Does it allow shell access on remote machines?
- …
A score from this type of triage could be used to push the report to the CTO/security team as a high priority.
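As a rough illustration, here is a minimal sketch of how such a questionnaire could be reduced to a score that decides whether a report gets escalated. The questions, weights, and threshold are hypothetical examples, not a standard:

```python
# Minimal sketch of questionnaire-based triage. The questions, weights,
# and escalation threshold below are made-up examples.

TRIAGE_QUESTIONS = [
    ("pii_access", "Does the vulnerability allow access to other users' PII?", 3),
    ("remote_shell", "Does it allow shell access on remote machines?", 3),
    ("no_auth", "Is it exploitable without authentication?", 2),
    ("internet_facing", "Is the affected system internet-facing?", 1),
    ("public_exploit", "Is exploit code publicly available?", 1),
]

ESCALATION_THRESHOLD = 5  # arbitrary cut-off for "push to CTO/security team"


def score_report(answers: dict[str, bool]) -> int:
    """Sum the weights of every question answered 'yes'."""
    return sum(weight for key, _question, weight in TRIAGE_QUESTIONS if answers.get(key))


def needs_escalation(answers: dict[str, bool]) -> bool:
    """True if the weighted score crosses the escalation threshold."""
    return score_report(answers) >= ESCALATION_THRESHOLD


if __name__ == "__main__":
    # Example: an unauthenticated, internet-facing issue exposing other users' PII.
    report = {"pii_access": True, "no_auth": True, "internet_facing": True}
    print(score_report(report), needs_escalation(report))  # -> 6 True
```

The point is not the exact weights, but that a handful of yes/no answers from the researcher gives the triage team a fast, consistent way to decide what goes straight to the top.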
No CVSS
Avoid asking researchers to score their findings using CVSS. This is a trap of the highest order: if bounty payments are linked to CVSS, it allows the researcher to tilt the risk in their favour.
Moreover, their inexperience might lead to significant risks not being flagged.
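To see how much room there is to tilt, consider how single metric choices move a CVSS v3.1 base score, and therefore any payout band tied to it. The vectors and base scores below are standard published values from the CVSS v3.1 calculator; the payout bands are purely hypothetical:

```python
# Standard CVSS v3.1 base scores for three closely related vectors, plus a
# purely hypothetical payout band, to show how much rides on a handful of
# judgement calls made by whoever does the scoring.

CVSS_EXAMPLES = {
    # Unauthenticated, no user interaction, scope unchanged: Critical.
    "AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H": 9.8,
    # Same issue argued as "scope changed": maximum score.
    "AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H": 10.0,
    # Same issue argued as "requires user interaction": drops to High.
    "AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H": 8.8,
}


def hypothetical_payout(score: float) -> int:
    """Arbitrary example payout bands, not from any real programme."""
    if score >= 9.0:
        return 10_000
    if score >= 7.0:
        return 3_000
    return 500


for vector, score in CVSS_EXAMPLES.items():
    print(f"{vector}  base {score:>4}  payout ${hypothetical_payout(score):,}")
```

A researcher arguing “scope changed” versus a vendor arguing “user interaction required” is the difference between 10.0 and 8.8, which is exactly the kind of argument you don’t want your bounty economics to incentivise.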
Bug bounty platforms
We are starting to see bug bounty platforms themselves posing problems for vulnerability disclosure. Organisations increasingly outsource their VDPs to bug bounty operations, which means the researcher trying to disclose an issue has to agree to the bug bounty platform’s terms and conditions.
Those terms may allow public disclosure only on the platform’s terms, which could mean the vulnerability is never disclosed to the wider world. That is understandable where money changes hands, but it assumes everyone who discloses a security problem wants payment.
I won’t sign up for Ts&Cs that limit our ability to publicly disclose, because it removes one of my few levers with which to press the organisation to do the right thing by their customers.
Whether to disclose publicly is an entirely different matter, subject to extensive internal ethical debate, but I won’t give up the option.
Bug bounty platforms that take on outsourced VDP management must accept this. They should offer ways to facilitate disclosure that don’t remove or limit the possibility of public disclosure in the future.
VDPs that don’t work
If disclosure via a VDP is delayed, or made impossible by restrictive terms, I will go straight to the top of the organisation: the CEO.
We report all sorts of vulnerabilities, including critical ones such as account compromise, whose exploitation would have a huge impact on a company’s brand.