As reported in Information Week, several Black Hat researchers have noted that humans must qualify application vulnerability scanning results. It’s hard to see this statement as “news,” since I’ve been saying it publicly at conferences for at least four years. I’m not the only one. Ask the vendors’ Sales Engineers. They will tell you (they have certainly told me!) that one has to qualify testing results.
“At the end of the day, tools don’t find vulnerabilities. People do,” says Nathan Hamiel, one of the researchers.
Yes, Nathan, this has been the state of affairs for years.
The status quo is great for penetration testers. But it’s terrible for larger organizations. Why? Pen testers are expensive and slow.
I agree with the truisms stated by the researchers. If there were tens of thousands of competent penetration testing practitioners, the fact that the tools generally do NOT find vulnerabilities without human qualification would not matter.
But there are only thousands, perhaps even fewer than 3,000, penetration testers who can perform the required vulnerability qualification. These few must be apportioned across tens of millions of lines of vulnerable code, much of which is exposed to the hostile public Internet.
What’s more, human-qualified vulnerability testing is slow. A good tester can test maybe 50 applications each year. Against that throughput, organizations with a strong web presence may deploy 50 applications or upgrades each quarter. Many of the human-qualified scans become stale within a quarter or two.
Hence this situation is untenable for enterprises that have an imperative to remove whatever vulnerabilities may be found. Large organizations may have thousands of applications deployed. At $400–500/hour, which code do you scan? Even the least critical application might hand a malefactor a compromise.
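To make the mismatch concrete, here is a back-of-envelope sketch using the rough figures above. The hours-per-assessment and midpoint hourly rate are illustrative assumptions, not measured data:

```python
# Back-of-envelope illustration of the throughput and cost mismatch.
# All figures are rough estimates from the text; HOURS_PER_APP and the
# midpoint rate are assumptions made for illustration only.

TESTERS_AVAILABLE = 3_000          # generous upper bound on qualified testers
APPS_PER_TESTER_PER_YEAR = 50      # a good tester's annual throughput
APPS_DEPLOYED_PER_QUARTER = 50     # one large org's release rate
HOURLY_RATE_USD = 450              # midpoint of the $400-500/hour figure
HOURS_PER_APP = 40                 # assumed: roughly one work-week per assessment

# Total industry capacity vs. a single large enterprise's demand.
industry_capacity = TESTERS_AVAILABLE * APPS_PER_TESTER_PER_YEAR
one_org_demand = APPS_DEPLOYED_PER_QUARTER * 4

# What it would cost one organization to keep pace with its own releases.
annual_cost = one_org_demand * HOURS_PER_APP * HOURLY_RATE_USD

print(f"Industry capacity:       {industry_capacity:,} apps/year")
print(f"One org's demand:        {one_org_demand} apps/year")
print(f"Annual cost to keep up:  ${annual_cost:,}")
```

Even under these generous assumptions, a single enterprise shipping 200 applications a year would spend millions annually on qualification alone, and the entire qualified workforce could cover only a small fraction of the world’s deployed code.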
This is a fundamental mismatch.
The pen testers want to find everything that they can. All fine and good.
But enterprises must reduce their attack surface. I would argue that any reduction is valuable. We don’t have to find every vulnerability in every application. Without adequate testing, 100% of the vulnerabilities remain exposed. Reducing these by even 20% overall is a big win.
Further, it is not cost-effective to throw pen testers at this problem. Either the work must be completely automated, or it has to be done by developers or QA staff who lack the knowledge to qualify most results.
Indeed, web developers themselves must be able to at least test and remove some of the attack surface while developing. Due to the nature of custom-code testing, I’m afraid the automation is not yet here. That puts the burden on people who have little interest in security for security’s sake.
I’ve spoken to almost every vulnerability testing vendor. Some understand this problem. Some have focused on the expert tester exclusively. But, as near as I can tell, the state of the art is at best hindered by noisy results that typically require human qualification. At worst, some of these tools are simply unsuitable for the volume of code that is vulnerable.
As long as so much noise remains in the results, we, the security industry, need to think differently about this problem.
I’m speaking at SANS What Works in Security Architecture, August 29–30, Washington DC, USA. This is a gathering of security architecture practitioners from around the world. If your role includes architecture, let me urge you to join us as we show each other what’s worked and discuss our mutual problem areas.