CISA Gives Us A Starting Point

Everyone struggles with identifying the “right” vulnerabilities to patch. It’s a near universal problem. We are all wondering, “Which vulnerabilities in my long queue are going to get me?”

Most organizations calculate a CVSS score (base or amended), which a solid body of research, starting with Allodi & Massacci (2014), demonstrates is inadequate to the task.

The Exploit Prediction Scoring System (EPSS) is based on that research, so it could provide a solution. But the web calculator cannot score each open vulnerability in our queue over time: we need to watch the deltas, not a point in time. There is, unfortunately, no working EPSS solution, despite the promise.
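Watching the deltas, rather than a point-in-time score, could in principle look like the following sketch, which compares successive score snapshots. The CVE ids and scores are invented for illustration; no claim that a turnkey tool exists:

```python
def score_deltas(previous: dict, current: dict, threshold: float = 0.1) -> dict:
    """Flag CVEs whose predicted-exploitation score rose by more than threshold."""
    rising = {}
    for cve, score in current.items():
        delta = score - previous.get(cve, 0.0)
        if delta > threshold:
            rising[cve] = round(delta, 2)
    return rising

# Hypothetical snapshots of scores pulled on successive days.
yesterday = {"CVE-2021-0001": 0.02, "CVE-2021-0002": 0.50}
today = {"CVE-2021-0001": 0.40, "CVE-2021-0002": 0.52, "CVE-2021-0003": 0.15}
print(score_deltas(yesterday, today))
```

The point is the shape of the workflow: a vulnerability whose score is climbing day over day deserves attention that its absolute score alone would not trigger.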

CISA have listed 300 vulnerabilities "currently being exploited in the wild" in Binding Operational Directive 22-01.

Finally, CISA have given us a list that isn’t confined to a “Top 10”: Start with these!

Top 10 lists provide some guidance, but what if attackers exploit your #13?

300 is an addressable number, in my experience. Besides, you probably don't have all of them in your apps, so your count will be fewer than 300. We can all do that much.

The CISA list provides a baseline, a "fix these, now" starting point of actively exploited issues that cuts through the morass of CVSS, EPSS, your security engineer's discomfort, and the equating of total vulnerability counts with organizational "risk". (If you're running a programme based upon the total number of open vulnerabilities, you're being misled.)
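Operationalizing the list can be as simple as intersecting it with your open findings. A minimal sketch, using a snippet shaped like CISA's published JSON catalog (the field names follow my reading of the feed; verify against CISA's site):

```python
import json

# Illustrative snippet in the shape of CISA's known-exploited-vulnerabilities
# JSON feed (field names assumed; check them against the published catalog).
KEV_SAMPLE = json.loads("""
{
  "vulnerabilities": [
    {"cveID": "CVE-2021-44228", "requiredAction": "Apply updates"},
    {"cveID": "CVE-2021-26855", "requiredAction": "Apply updates"}
  ]
}
""")

def must_fix_now(kev_catalog: dict, open_findings: set) -> set:
    """Return the open findings that appear on the actively exploited list."""
    kev_ids = {v["cveID"] for v in kev_catalog["vulnerabilities"]}
    return open_findings & kev_ids

# A hypothetical vulnerability queue; only the Log4Shell entry is on the list.
queue = {"CVE-2021-44228", "CVE-2020-1234", "CVE-2019-9999"}
print(must_fix_now(KEV_SAMPLE, queue))
```

Whatever tooling you use, the intersection is the "fix these, now" queue; everything else can go through your normal prioritization.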

https://thehackernews.com/2021/12/why-everyone-needs-to-take-latest-cisa.html

Trojan Source

Trojan Source!

I first read about Trojan Source this morning (ugh, Yet Another Branded Vulnerability: YABV).

Yes, there is a continuing fire hose of vulnerability announcements. But, new techniques are actually fairly rare: 1-3/year, in my experience.

There is nothing new about hiding attack code in source. These are the "backdoors" every software security policy forbids. Hiding attack code in Unicode strings by abusing the "bidi override" IS new and important (IMVHO). Brilliant research, for which I'm grateful.

I'm a bit unclear about how attack code can supposedly be hidden through bidi override in comments. Every decent compiler strips comments during the transformation from source to whatever form the compiler generates: bytecode (Java), object code, etc. Comments are NOT part of any executable format, so far as I know. And they mustn't be (they'd take up needed space).

Still, attacks buried in string data are bad enough to take notice: every language MUST include string data; modern languages are required to handle Unicode, the many language scripts that must be represented, and methods for presenting each script (i.e., bidi override). Which means exactly what the researchers found: nearly every piece of software that parses source into something executable (19 compilers and equivalents) is going to have this problem.
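The published mitigation is straightforward to sketch: scan source text for the Unicode directional formatting characters the technique abuses. A minimal, illustrative detector (the embedded string mimics the style of the paper's examples; it is not taken from it):

```python
# Bidi control characters abused by Trojan Source (CVE-2021-42574).
BIDI_CONTROLS = {
    '\u202A': 'LRE', '\u202B': 'RLE', '\u202C': 'PDF',
    '\u202D': 'LRO', '\u202E': 'RLO',
    '\u2066': 'LRI', '\u2067': 'RLI', '\u2068': 'FSI', '\u2069': 'PDI',
}

def find_bidi_controls(source: str) -> list:
    """Return (line, column, name) for every bidi control character in source."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            if ch in BIDI_CONTROLS:
                hits.append((lineno, col, BIDI_CONTROLS[ch]))
    return hits

# A string literal carrying invisible direction overrides: rendered text and
# logical (compiled) text diverge, which is the heart of the attack.
suspicious = 'access_level = "user\u202E \u2066// Check if admin\u2069 \u2066"'
for lineno, col, name in find_bidi_controls(suspicious):
    print(f"line {lineno}, col {col}: {name}")
```

Flagging any of these characters in source review or CI is a cheap, high-signal check, since they almost never appear legitimately in code.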

The other interesting research finding is how vendors reacted to being told about Trojan Source and how quickly they are fixing, if at all.

Only 9 vendors have committed to a fix. That is really surprising to me. If you want people to trust your software security, you've got to be rigorously transparent and responsive. Doh! What's wrong with the other 10 vendors/makers? Huh? Let's get with the program. Refusing to deal, dragging your feet, or, worse, ignoring the report makes it appear that you don't care about the security of your product(s).

It'll be interesting to see whether the Trojan Source technique actually gets used by attackers, and how fast that occurs. I hope that one or more researchers pay attention to that piece of the puzzle. Depending upon the research findings, at least 75% (if not more) of reported vulnerabilities never get exploited. Will Trojan Source? Only time will tell.

Many thanks to Brian Krebs for the excellent write-up (below).

https://krebsonsecurity.com/2021/11/trojan-source-bug-threatens-the-security-of-all-code/

Cheers,

/brook

Is API Authentication Enough?

Reading a recent article, "Five Common Cloud Security Threats and Data Breaches", posted by Andreas Dann, I came across some advice for APIs that is worth exploring.

The table stakes (basic) security measures that Mr. Dann suggests are a must. I absolutely support each of the recommendations given in the article. In addition to Mr. Dann’s recommendations, I want to explore the security needs of highly exposed inputs like internet-available APIs.

Again and again, I see recommendations like Mr. Dann's: "…all APIs must be protected by adequate means of authentication and authorization". Furthermore, I have been to countless threat modelling sessions where the developers place great reliance on authorization, often sole reliance (unfortunately, as we shall see).

For APIs with large open source exposure, public access, or relatively diverse customer bases, authentication and authorization will be insufficient.

As True Positives' AppSec Assurance Strategist, I'd be remiss not to understand just what protection, if any, controls like authentication and authorization provide in each particular context.

For APIs, essential context can be found in something else Mr. Dann states: "APIs…act as the front door of the system, they are attacked and scanned continuously." One of the only certainties we have in digital security is that any IP address routable from the internet will be poked and prodded by attackers. Period.

That’s because, as Mr. Dann notes, the entire internet address space is continuously scanned for weaknesses by adversaries. Many adversaries, constantly. This has been true for a very long time.

So, we authenticate to ensure that entities who have no business accessing our systems don’t. Then, we authorize to ensure that entities can only perform those actions that they should and cannot perform actions which they mustn’t. That should do it, right?

Only if you can convince yourself that your authorized user base doesn’t include adversaries. There lies “the rub”, as it were. Only systems that can verify the trust level of authorized entities have that luxury. There actually aren’t very many cases involving relatively assured user trust.

As I explain in Securing Systems, Secrets Of A Cyber Security Architect, and Building In Security At Agile Speed, many common user bases will include attackers. That means that we must go further than authentication and authorization, as Mr. Dann implies with, “…the API of each system must follow established security guidelines”. Mr. Dann does not expand on his statement, so I will attempt to provide some detail.

How do attackers gain access?

For social media and freemium APIs (and services, for that matter), accounts are given away to any legal email address. One has only to glance at the number of cats, dogs, and birds who have Facebook accounts to see how trivial it is to get an account. I've never met a dog who actually uses, or is even interested in, Facebook. Maybe there's a (brilliant?) parrot somewhere? (I somehow doubt it.)

Retail browsing is essentially the same. A shop needs customers; all parties are invited. Even if a credit card is required for opening an account, which used to be the norm, but isn’t so much anymore, stolen credit cards are cheaply and readily available for account creation.

Businesses which require payment still don’t get a pass. The following scenario is from a real-world system for which I was responsible.

Customers paid for cloud processing. There were millions of customers, among which were numerous nation-state governments whose processing often included sensitive data. It would be wrong to assume that nation-states with spy agencies could not afford to purchase a processing account in order to poke and prod just in case a bug should allow that state to get at the sensitive data items of one of its adversaries. I believe (strongly) that it is a mistake to assume that well-funded adversaries won't simply purchase a license or subscription if the resulting payoff is sufficient. Spy agencies work slowly and tenaciously, often waiting years for an opportunity.

Hence, we must assume that our adversaries are authenticated and authorized. That should be enough, right? Authorization should prevent misuse. But does it?

All software fails: Rule #1. That includes our hard-won and carefully built authorization schemes.

"[E]stablished security guidelines" should expand to a robust and rigorous Security Development Lifecycle (SDL or S-SDLC), as James Ransome and I lay out in Building In Security At Agile Speed. There's a lot of work involved in "established security", or perhaps better termed, "consensus, industry standard software security". Among software security experts, there is a tremendous amount of agreement about what "robust and rigorous" software security entails.

(Please see NIST’s Secure Software Development Framework (SSDF) which is quite close to and validates the industry standard SDL in our latest book. I’ll note that our SDL includes operational security. I would guess that Mr. Dann was including typical administrative and operational controls.)

Most importantly, for exposed interfaces, I continue to recommend regular, or better, continual fuzz testing.

The art of software is constraining software behaviour to the desired and expected. Any unexpected behaviour is probably unintended and thus, will be considered a bug. Fuzzing is an excellent way to find unintended programme behaviour. Fuzzing tools, both open source and commercial, have gained maturity while at the same time, commercial tools’ price has been decreasing. The barriers to fuzzing are finally disappearing.
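As an illustration of how low the barrier really is, here is a minimal mutation fuzzer run against a deliberately buggy toy parser. This is a sketch only, nowhere near the coverage of a real fuzzing tool:

```python
import random

def mutate(data: bytes, rng: random.Random) -> bytes:
    """Apply a few random byte flips, inserts, or deletes — the simplest mutation fuzzing."""
    buf = bytearray(data)
    for _ in range(rng.randint(1, 4)):
        op = rng.choice(("flip", "insert", "delete"))
        if op == "flip" and buf:
            i = rng.randrange(len(buf))
            buf[i] ^= 1 << rng.randrange(8)
        elif op == "insert":
            buf.insert(rng.randrange(len(buf) + 1), rng.randrange(256))
        elif op == "delete" and buf:
            del buf[rng.randrange(len(buf))]
    return bytes(buf)

def fuzz(target, seeds, iterations=5000, seed=1):
    """Run target on mutated seeds; collect inputs that raise unexpected exceptions."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = mutate(rng.choice(seeds), rng)
        try:
            target(data)
        except ValueError:
            pass  # the parser rejecting bad input is expected behaviour
        except Exception as exc:
            crashes.append((data, exc))
    return crashes

def toy_parse(data: bytes) -> int:
    """A deliberately buggy parser: reads an offset without bounds checking."""
    if len(data) < 2:
        raise ValueError("too short")
    offset = data[0]
    return data[1 + offset]  # bug: IndexError when offset runs past the buffer

crashes = fuzz(toy_parse, [bytes([1, 0, 0, 0, 0])])
print(f"{len(crashes)} crashing inputs found")
```

Even this crude loop surfaces the unintended behaviour (the unchecked offset) within seconds: exactly the class of "unexpected behaviour is probably a bug" finding that fuzzing excels at.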

Rest assured that you probably have authorized adversaries amongst your users. That doesn’t imply that we abandon authentication and authorization. These controls continue to provide useful functions, if only to tie bad behaviour to entities.

Along with authentication and authorization, we must monitor for anomalous and abusive behaviour to catch the inevitable failures of our protective software. At the same time, we must do all we can to remove issues before release, using techniques like fuzzing to identify those unknown unknowns that will leak through.
 

Cheers,

/brook

Continuing Commentary on NIST AppSec Guidance

Continuing with a commentary on NIST #appsecurity guidance: again, NIST reiterates what we recommended in Core Software Security and then updated (for Agile, CI/CD, DevOps) in Building In Security At Agile Speed:

 

— Threat model

— Automate as much as possible

— Static analysis (or equivalent) SAST/IAST

— DAST (web app/API scan)

— Functional tests

— Negative testing

— Fuzz

— Check included 3rd-party software

 

NIST guidance calls out heuristic hardcoded-secrets discovery. Definitely a good idea. I maintain that it is best to design in protections for secrets through threat modelling. Secrets should never be hardcoded without a deep and rigorous risk analysis. (There are a few cases I've encountered where there was no better alternative; these are few and very far between.)
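A heuristic secrets scan of the kind NIST describes can be sketched in a few lines. The patterns below are illustrative only; real tools carry far richer rule sets (and entropy checks):

```python
import re

# Illustrative heuristics only — not a production rule set.
SECRET_PATTERNS = [
    (re.compile(r'(?i)\b(password|passwd|pwd)\s*=\s*["\'][^"\']{4,}["\']'),
     "hardcoded password"),
    (re.compile(r'(?i)\b(api[_-]?key|secret|token)\s*=\s*["\'][A-Za-z0-9+/=_\-]{16,}["\']'),
     "hardcoded key/token"),
]

def scan_for_secrets(source: str) -> list:
    """Return (line number, label) for each line matching a secret heuristic."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, label in SECRET_PATTERNS:
            if pattern.search(line):
                findings.append((lineno, label))
    return findings

sample = 'password = "hunter2-oops"\nlimit = 10\napi_key = "ABCDEF0123456789abcd"\n'
print(scan_for_secrets(sample))
```

Being heuristic, a scan like this will produce false positives and miss cleverly disguised secrets, which is precisely why designing secret handling in through threat modelling remains the better control.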

NIST guidance is essentially the same as our Industry Standard, Generic Security Development Lifecycle (SDL).

There has been an implicit industry consensus on what constitutes effective software security for quite a while. Let's stop arguing about it, ok? Call the activities whatever you like, but do them.

Especially, fuzz all inputs that will be exposed to heavy use (authenticated or not!). I've been saying this for years. I hope that fuzzing tools are finally coming up to our needs.

https://www.nist.gov/itl/executive-order-improving-nations-cybersecurity/recommended-minimum-standard-vendor-or-developer

MITRE D3FEND Is Your Friend

Defenders have to find the correct set of defences for each threat. Many attacks have no single, direct prevention, making the relationship many-to-many: threat:defence == M:N.

Certainly, some vulnerabilities can be prevented or mitigated with a single action. But other threats require an understanding of the issue and of which defences might be applied.

While prevention is the holy grail, often we must:

— Limit access (in various ways and at various levels in the stack)
— Mitigate effects
— Make exploitation more difficult (attacker cost)

MITRE's D3FEND maps defences to attacks:

“…tailor defenses against specific cyber threats…”

In my threat modelling classes, the 2 most difficult areas for participants are finding all the relevant attack scenarios and then putting together a reasonable set of defences for each credible attack scenario. Typically, newbie threat modellers will identify only a partial defence, or misapply one.

MITRE ATT&CK helps to find all the steps an attacker might take from initial contact to compromise.

MITRE D3FEND and the ontology between ATT&CK and D3FEND give defenders what we need to then build a decent defence.
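The M:N relationship can be sketched as a simple mapping plus its inverse. The ATT&CK technique ids below are real, but the defence names are my paraphrases, not exact D3FEND labels, and the coverage is nowhere near the real ontology's:

```python
# Illustrative attack-to-defence mapping; defence names are paraphrases only.
attack_to_defences = {
    "T1110 Brute Force": ["account locking", "strong credential policy", "MFA"],
    "T1566 Phishing": ["sender reputation analysis", "URL analysis", "user training"],
    "T1190 Exploit Public-Facing Application": ["application hardening", "patching", "MFA"],
}

def defences_for(technique: str) -> list:
    """The defences (plural — rarely just one) applicable to a technique."""
    return attack_to_defences.get(technique, [])

def techniques_countered_by(defence: str) -> list:
    """Invert the mapping: one defence often counters multiple attacks (hence M:N)."""
    return [t for t, ds in attack_to_defences.items() if defence in ds]

print(techniques_countered_by("MFA"))
```

The inverse view is what newer threat modellers tend to miss: a defence is rarely a one-to-one answer to a single attack, and a single attack rarely yields to a single defence.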

https://www.csoonline.com/article/3625470/mitre-d3fend-explained-a-new-knowledge-graph-for-cybersecurity-defenders.html#tk.rss_all