Another Face of Ransomware: Abuse of Critical Vulnerabilities and Exposures Responses


Ransomware continues to evolve, often faster than we can prepare. In some cases, the goal of the offender is not immediately apparent. Traditionally, ransomware was about extortion; it has since morphed into other purposes, including a means of competitive attack. This paper is based on a very real situation that occurred this past year, in which the true purpose was never determined but is suspected by some to have been gaining a competitive advantage. The Crisis Team names and structure, other than mine, have been changed for obvious reasons. I do want to thank HPE Product Management for their solid handling of the situation illustrated in this paper and the VTN Team for managing the detection.

The Crisis

“Hello?” Steve looked at the clock on his phone… 06:00.

“Steve, it’s Jan.” The urgent tone in her voice was, unfortunately, familiar.

“What happened?” He already knew that this was not going to be a good day.

“CVE. Critical. Please get on the call.” She sent the details. A CVE is a Common Vulnerabilities and Exposures record, a publicly tracked security flaw. CVEs are scored using the Common Vulnerability Scoring System (CVSS) on a scale of 0 to 10, with 10 being the most severe. This one, we learned later, was opened at an initial 9.8.

“Alright, I’ll be on in 5.” And so, it began again. Another crisis. This time, a vulnerability. But why the urgency?

After getting an exceptionally large coffee, Steve went on the call. The Big Crisis Board (BCB) was being shared. At least half of all the critical applications and subsystems were red. What had happened? Worse, the vendor subsystem side of the board was about the same proportion. The Big Status said, “Triage.” Oh, joy.

The Zoom call was noisy. Everyone was trying to get a word in. One person looked like they were banging their head on an off-camera desk. Jan quickly reminded everyone of the standard meeting rules, muted everyone, and started talking. “What happened? What has been hit? How exposed are we?” Everyone’s hand went up. “Steve, what do we know?”

“This is just preliminary, but it looks like the YAML parser we use almost everywhere had a 9.8 CVE come out last night. We track who uses it on the BCB, which is why the board was all red. Even if there is a fix right now, this is going to take days.”

Em’s hand went down and up. “There is no fix yet. The project owner is investigating. There are three different CVEs, all 9.8. Details have not hit the pavement yet. All we have from NIST and Mitre are the case IDs, the library, the versions – all of them – the severity, and when the case was opened, which seems to be last week. Each CVE seems to take longer than the last to get from the reporter to us.” NIST is the National Institute of Standards and Technology, and Mitre is a not-for-profit organization that, with NIST, tracks vulnerabilities and exposures.

This went on for a while. Jan spoke up again. “Let’s look at the impact. Worst case is cost, time, and action.” One by one, the leads went on:

“Simple math on this. It’s going to take at least five people on each application to vet and approve any changes and push this out the door. Assuming a week, that’s about 15K per app, loaded, so 2.3 million after we have a fix.”

“We are not even there yet. What if there is no fix, or if the fix won’t work in our environment? This is going to take at least three person-weeks per application, so 10 million just to get through the analysis.” The BCB status changed to analysis pending. “Hey, what about resource assignments?”

Jan was not happy with her PMO. “You are all assigned to this as of right now. Let’s meet up in an hour, in this meeting, with preliminaries. We need to know the slippage from there. I will keep the meeting open if anything breaks.” There was a general groan as the realization was dawning with the Sun.

An hour later, the chat had gone a bit crazy, with the gist being that no one knew anything yet. “I am ending this for now. I want two people researching what NIST and Mitre have as well as one on the project GitHub. It might as well be Ron and Randall. They’ll do it anyway from home whether anyone assigns them or not. Use the same meeting ID as soon as NIST or Mitre updates.”

Follow-up

It took three days, give or take, for the exposure databases to be updated with incomplete information, but at least the CVE author put the source file and line numbers in the case – that might save weeks, or not, as we would find out. The project GitHub issues list was not pretty. The CVE author had opened three issues on GitHub, and to say that the project maintainer lost it would be putting it kindly. We had all entered Dispute World, a fictional place where we have all been, where the arguments are about whether problems are real. The maintainer intended to push back on all the CVEs, claiming that they were not valid. What I personally found suspicious was that no attack vector was described by which this vulnerability could be exploited. Back to the meeting:

I was very forceful on this point: “To be blunt, this is [expletive]. The CVE author decided to go anonymous, so we cannot ask them about their experience or intent. The vulnerability is a double memory release, which is not even in the project code. What we should do, as a matter of normal practice, is run our own code reviews, look for any occurrences of this situation in our code, and ask our vendors whether they have done the same. Or we could wait for a fix. Or contribute our own. In any event, doing a panic update of 57% of our entire code base is going to be expensive and could itself open other vulnerabilities, but it is not my call. It is up to Jan.”

Situation Summary

Ultimately, the three CVEs were declared invalid. The project maintainer worked with both NIST and Mitre to close the issues. The justification was that there was a potential bug, but only in clients of the library, not in the library itself. There was no way to exploit the defect to cause a problem in any of our code. The facts that the originator maintained their public anonymity (although we did find out who it was from the GitHub issue log), that there was no attack vector, and that there was no real defect convinced people that the report was invalid. We did contribute some code that guards against a double memory-free situation, but that is simply good practice, not a fix to a vulnerability.

The Real Situation

In the above scenario, three virtually identical CVEs were opened by an anonymous actor with the apparent intent of disrupting as many companies in as many countries as possible, without evidence or justification. The actual cost of this situation probably ran into the billions, given how broad the impact was and how many subsystems were affected. Like the project maintainer, I pushed back hard on these CVEs, which should have been rated minor at worst. The problem today is that the CVE reporting process puts the onus on the reporter to provide the initial severity level. NIST and Mitre do not change the severity rating until those who are vulnerable respond and the project owner/maintainer requests a change (either up or down). We never found out whether the irresponsible actor was doing this to block competition with a product of their own, whether it was mischief, or whether it was a lack of understanding of the process. The real effect was that hundreds of companies were forced to stop what they were doing and spend thousands of person-hours investigating what turned out to be a non-issue.

Here is the rub: Anyone from anywhere can do this anonymously. Anyone can hold up and seriously damage the software economy for frivolous reasons. This is ransomware. How? Ransomware is not always about extortion; rather, it has become about disruption. The initiator could easily have retracted the CVEs and made the problem go away quickly when notified that the situation was frivolous, but they did not. They aggressively dug their heels in and insisted on the severity of a questionable bug for their own motives. They forced the issue and stopped a lot of companies from delivering services to customers. We will likely never really know their true motives, but we know the effect: ridiculous amounts of money spent investigating and time lost.

How do we guard against this in the future?

Bluntly put, we cannot. Not until the CVE submission process is more robust. What we can do is have strong contacts with the most common open-source project maintainers and get into the conversation early. When a new CVE is opened, particularly a high-severity one, we should learn as much as we can as early as we can directly from the maintainer. Is the CVE legitimate? Can it be exploited? Is it a defect? Is it an overreaction by the reporter? Or is it economic sabotage? Look carefully at CVEs. Panic responses are our worst enemy, just ahead of lack of information.

What helps is having every single change accounted for, whether in your own source code or in vendor-delivered source and object code. Git and its ecosystem are a huge help here: you can find the relevant changes and search for instances of vulnerable code. Obviously, NSGit is also in the ecosystem for your Guardian code. Having continuous integration (CI), continuous deployment (CD), and a system that logs what flows from vendors and development through QA and into production can help you analyze the impact on vulnerable applications and subsystems.

Other major projects you probably use in your shops have experienced similar CVE situations in the past year. Fortunately, the maintainers of those projects were on top of the situation and fought aggressively against hostile submissions. In many cases, the reports were closed before there was any public awareness. In others, there were serious impacts. One CVE in the curl project would have disrupted every DevOps and DevSecOps IT department worldwide. In that case, the initial high severity was reduced to medium, but it took months to convince NIST and Mitre to close the invalid case.

Did the Crisis Team respond appropriately? Up to a point. It is nice to have a big crisis board, particularly if you participate in risk management. The information fed into such a summary needs to be carefully vetted. I am grateful that we were able to stop this one, because it would have had a serious impact on the credibility of our community.

Author

  • Randall Becker

    Randall is the Chief Architect for the NSGit (T1198) product enabling GUARDIAN access to the git distributed version control system (DVCS). As a member of the ITUGLIB Technical Committee, he is the designated NonStop platform maintainer for some popular Open Source packages including git and OpenSSL. He has been a regular speaker and author in various NonStop conferences and journals. Randall also runs Nexbridge Inc., the developers of the NSGit product.
