CISA Gives Us A Starting Point

Everyone struggles with identifying the “right” vulnerabilities to patch. It’s a near universal problem. We are all wondering, “Which vulnerabilities in my long queue are going to get me?”

Most organizations calculate a CVSS score (base or amended), which a solid body of research, starting with Allodi & Massacci (2014), demonstrates is inadequate to the task.

The Exploit Prediction Scoring System (EPSS) is grounded in that research, so it could provide a solution. But the web calculator cannot score each open vulnerability in our queue over time: we need to watch the deltas, not a point in time. Unfortunately, despite the promise, there is no working EPSS solution.
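To be clear about what a working solution would have to do: FIRST publishes the raw scores through a public API, so a homegrown delta-watcher is conceptually simple. The sketch below assumes FIRST's documented endpoint and field names (verify them against the current docs), and hard-codes a snapshot where a real tool would persist yesterday's scores:

```python
import json
import urllib.request

def epss_scores(cves):
    """Fetch current EPSS scores for a list of CVE IDs from FIRST's API."""
    url = "https://api.first.org/data/v1/epss?cve=" + ",".join(cves)
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)["data"]
    return {row["cve"]: float(row["epss"]) for row in data}

# A hypothetical queue, plus yesterday's snapshot (a real tool would persist this).
queue = ["CVE-2021-44228", "CVE-2021-34527"]
previous = {"CVE-2021-44228": 0.92, "CVE-2021-34527": 0.15}

for cve, score in epss_scores(queue).items():
    delta = score - previous.get(cve, 0.0)
    if abs(delta) >= 0.05:  # arbitrary alert threshold: watch the deltas
        print(f"{cve}: {previous.get(cve, 0.0):.2f} -> {score:.2f} (delta {delta:+.2f})")
```

The point is the loop, not the threshold: run it daily over the whole queue and act on movement, not on a single point-in-time score.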

CISA have listed 300 "vulnerabilities currently being exploited in the wild" in Binding Operational Directive 22-01.

Finally, CISA have given us a list that isn’t confined to a “Top 10”: Start with these!

Top 10 lists provide some guidance, but what if attackers exploit your #13?

300 is an addressable number, in my experience. Besides, you probably don't have all of them in your apps, so your own list will likely be fewer than 300. We all can do that much.

The CISA list provides a baseline, a "fix these, now" starting point of actively exploited issues that cuts through the morass of CVSS, EPSS, your security engineer's discomfort, and the equating of total vulnerability counts with organization "risk". (If you're running a programme based upon the total number of open vulnerabilities, you're being misled.)
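Operationalizing the list takes very little tooling. CISA publish the catalog as a JSON feed; the sketch below diffs it against a hypothetical queue of open findings (the feed URL and field names are as I know them, so verify before relying on this):

```python
import json
import urllib.request

# CISA's Known Exploited Vulnerabilities feed (verify the URL is current).
KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

with urllib.request.urlopen(KEV_URL) as resp:
    kev = {v["cveID"] for v in json.load(resp)["vulnerabilities"]}

# Hypothetical stand-in for your vulnerability queue.
open_findings = {"CVE-2021-44228", "CVE-2019-0708", "CVE-2020-99999"}

fix_now = open_findings & kev
print(f"{len(fix_now)} of {len(open_findings)} open findings are on the KEV list:")
for cve in sorted(fix_now):
    print("  fix now:", cve)
```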

https://thehackernews.com/2021/12/why-everyone-needs-to-take-latest-cisa.html

Continuing Commentary on NIST AppSec Guidance

Continuing my commentary on NIST #appsecurity guidance: once again, NIST reiterates what we recommended in Core Software Security and then updated (for Agile, CI/CD, DevOps) in Building In Security At Agile Speed:

  • Threat model
  • Automate as much as possible
  • Static analysis (or equivalent): SAST/IAST
  • DAST (web app/API scan)
  • Functional tests
  • Negative testing
  • Fuzz
  • Check included 3rd-party software

NIST guidance calls out heuristic discovery of hard-coded secrets. Definitely a good idea. I maintain that it is best to design in protections for secrets through threat modelling. Secrets should never be hard-coded without a deep and rigorous risk analysis. (There are a few cases I've encountered where there was no better alternative. These are few and very far between.)
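For a sense of what "heuristic discovery" amounts to, here is a minimal sketch: regular expressions run over a source tree. Real tools add entropy analysis and far more patterns; the three below are illustrative only:

```python
import re
from pathlib import Path

# Illustrative patterns only; production scanners carry hundreds of rules.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "hardcoded_credential": re.compile(
        r"(?i)(password|passwd|secret|api[_-]?key|token)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
}

def scan(root="."):
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix in {".png", ".jpg", ".zip"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            for name, pattern in PATTERNS.items():
                if pattern.search(line):
                    print(f"{path}:{lineno}: possible {name}")

scan()
```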

NIST guidance is essentially the same as our industry-standard, generic Security Development Lifecycle (SDL).

There has been an implicit industry consensus on what constitutes effective software security for quite a while. Let's stop arguing about it, OK? Call the activities whatever you like, but do them.

Especially, fuzz all inputs that will be exposed to heavy use (authenticated or not!). I've been saying this for years. I hope that fuzzing tools are finally coming up to our needs?
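For what it's worth, here is a minimal sketch of coverage-guided fuzzing in Python with Google's atheris (pip install atheris); json.loads stands in for whatever parser sits on your heavily used input:

```python
import sys
import atheris

with atheris.instrument_imports():
    import json  # instrument the code under test for coverage feedback

def TestOneInput(data: bytes):
    try:
        json.loads(data)
    except (ValueError, UnicodeDecodeError):
        pass  # malformed input is expected; crashes and hangs are the findings

atheris.Setup(sys.argv, TestOneInput)
atheris.Fuzz()
```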

https://www.nist.gov/itl/executive-order-improving-nations-cybersecurity/recommended-minimum-standard-vendor-or-developer

MITRE D3FEND Is Your Friend

Defenders have to find the correct set of defences for each threat. Many attacks have no single, direct prevention, making the relationship many-to-many. Threat:defence == M:N

Certainly, some vulnerabilities can be prevented or mitigated with a single action. But overall, other threats require an understanding of the issue and of which defences might be applied.

While prevention is the holy grail, often we must:

— Limit access (in various ways and at various levels in the stack)
— Mitigate effects
— Make exploitation more difficult (attacker cost)

MITRE's D3FEND maps defences to attacks:

“…tailor defenses against specific cyber threats…”

In my threat modelling classes, the two most difficult areas for participants are finding all the relevant attack scenarios and then putting together a reasonable set of defences for each credible attack scenario. Typically, newbie threat modellers will identify only a partial defence, or misapply one.

MITRE ATT&CK helps to find all the steps an attacker might take from initial contact to compromise.

MITRE D3FEND and the ontology between ATT&CK and D3FEND give defenders what we need to then build a decent defence.
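D3FEND's site exposes this mapping programmatically as well. The sketch below queries for an ATT&CK technique's defensive counterparts; the endpoint path and response shape are my recollection of the API, so treat them as assumptions and check the current documentation:

```python
import json
import urllib.request

def defenses_for(attack_id: str) -> dict:
    # Endpoint path is an assumption; confirm against https://d3fend.mitre.org
    url = f"https://d3fend.mitre.org/api/offensive-technique/attack/{attack_id}.json"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Example: map ATT&CK T1110 (Brute Force) to candidate D3FEND countermeasures.
mapping = defenses_for("T1110")
print(json.dumps(mapping, indent=2)[:800])  # inspect the returned graph of defences
```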

https://www.csoonline.com/article/3625470/mitre-d3fend-explained-a-new-knowledge-graph-for-cybersecurity-defenders.html#tk.rss_all

KRACKs Against Wi-Fi Serious But Not End of the World

On October 12, researcher Mathy Vanhoef announced a set of Wi-Fi attacks that he named KRACKs, for key reinstallation attacks. These attack scenarios target the WPA2 authentication and encryption key establishment portions of the most recent set of Wi-Fi protocols; the technique is key reinstallation.

The attack can potentially allow attackers to send attacker-controlled data to your Wi-Fi connected device. In some situations, the attacker can break the built-in Wi-Fi protocol's encryption, revealing users' Wi-Fi messages to the attacker.

However, key reinstallation depends either on working within the inherent timing of a Wi-Fi handshake, a discrete, somewhat rare (in computer terms) exchange, or on the attacker forcing the vulnerable exchange through some method (see below for examples). Both of these scenarios take a bit of attacker effort, perhaps more effort than using any one of the existing methods for attacking users over Wi-Fi?

For this reason, while KRACKs are a serious issue that should be fixed as soon as possible (see below), our collective digital lives probably won't experience a tectonic shift from Wi-Fi key reinstallation attacks.

Please read on for a technical analysis of the issue.

“… because messages may be lost or dropped, the Access Point (AP) will retransmit message 3 if it did not receive an appropriate response as acknowledgment. As a result, the client may receive message 3 multiple times. Each time it receives this message, it will reinstall the same encryption key, and thereby reset the incremental transmit packet number (nonce) and receive replay counter used by the encryption protocol. We show that an attacker can force these nonce resets by collecting and replaying retransmissions of message 3 of the 4-way handshake. By forcing nonce reuse in this manner, the encryption protocol can be attacked, e.g., packets can be replayed, decrypted, and/or forged.”

–Mathy Vanhoef, “Key Reinstallation Attacks: Forcing Nonce Reuse in WPA2”

The emphasis in the quotation above is Vanhoef's, not mine.

Depending upon which key establishment exchange is attacked, as Vanhoef notes, injection of messages might be the result. But, for some exchanges, decryption might also be possible.

For typical Wi-Fi traffic exchange, AES-CCMP is used between the client (user) and the access point (AP, the Wi-Fi router). Forgery of AES-CCMP packets using KRACKs is not thought to be possible, though, as Vanhoef notes below, replay and decryption are. And decryption is not the only thing to worry about.
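Before moving on to what injection enables, it's worth seeing concretely why a reset nonce is so damaging. Here is a small demonstration against the AES-CCM primitive that underlies WPA2's CCMP, using the pyca/cryptography package; this is not the Wi-Fi protocol itself, merely the cipher-level consequence of nonce reuse:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

key = AESCCM.generate_key(bit_length=128)
aesccm = AESCCM(key)
nonce = os.urandom(13)  # CCMP builds its 13-byte nonce from the packet number

pt1 = b"user=alice&password=hunter2"
pt2 = b"GET /inbox HTTP/1.1 Host: m"  # same length, for simplicity

ct1 = aesccm.encrypt(nonce, pt1, None)  # first use of the nonce: fine
ct2 = aesccm.encrypt(nonce, pt2, None)  # nonce REUSE: the condition KRACKs force

# CCM encrypts with a CTR keystream, so identical nonces mean identical
# keystreams; XORing the two ciphertexts cancels the keystream entirely.
xor = bytes(a ^ b for a, b in zip(ct1, ct2))[: len(pt1)]
assert xor == bytes(a ^ b for a, b in zip(pt1, pt2))

# Anyone who can guess one plaintext now reads the other outright.
recovered = bytes(x ^ p for x, p in zip(xor, pt2))
print(recovered)  # b'user=alice&password=hunter2'
```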

An attacker might craft a message that contains malware, or at least, a “dropper,” a small program that when run will attempt to install malware. Having received the dropper, even though the message may make no sense to the receiving program, the dropper is still retained in memory or temporary, perhaps even permanent, storage by the receiving computer. The attacker then has to figure out some way to run the dropper—in attack parlance, to detonate the exploit that has been set up through the Wi-Fi message forgery.

“Simplified, against AES-CCMP an adversary can replay and decrypt (but not forge) packets. This makes it possible to hijack TCP streams and inject malicious data into them. Against WPA-TKIP and GCMP the impact is catastrophic: packets can be replayed, decrypted, and forged.”

–Mathy Vanhoef, “Key Reinstallation Attacks: Forcing Nonce Reuse in WPA2”

Packet forgery is worse, as the receiver may consider the data legitimate, making it easier for an attacker to have the receiver accept unexpected data items, influencing the course of an exchange and establishing a basis upon which to conduct follow-on, next-step exploits.

In my opinion KRACKs are a serious problem to which the wise will wish to respond.

The essential problem with Wi-Fi is that it is free to intercept. Radio waves travel over the air, which, as we all know, is a shared medium. One need not “plug in” to capture Wi-Fi (or any radio-carried) packets. One simply must craft a receiver strong enough to “hear,” that is, receive the transmissions of interest.

But although certainly serious, are KRACKs the end of the known digital universe? I think not.

First, the attacker has to be present and alert to the four-way WPA2 handshake. If this has already passed successfully, the attacker's key reinstallation is no longer possible. The handshake must be interjected at exchange #3, which will require some precision. Or, the attacker must force the key exchange.

One could, I imagine, write an attack that would identify all WPA2 handshake exchanges within range, and inject the attacker’s message #3 to each handshake to deliver some sort of follow-on exploit. This would be a shotgun approach, and might on large, perhaps relatively open networks have potential attacker benefits.[1]

Supplicants (as Wi-Fi clients are called) do not constantly handshake. The handshake interchange takes place upon authentication and reauthentication, and in some networks upon shifting from one AP to another when the client changes location (a "handoff"). These are discrete triggers, not a constant flow of handshake message #3. Johnny or Janie attacker has to be ready and set up in anticipation of message #3. That is, right after message #3 goes by, the attacker's #3 must be sent. Once the completed handshake message #4 goes by, I believe that the attack opportunity closes.

I fear that the timing part might be tricky for the average criminal attacker. It would be much easier to employ readily available Wi-Fi sniffing tools to capture sufficient traffic to allow the attacker to crack weak Wi-Fi passwords.

There are well-understood methods for forcing Wi-Fi authentication (see DEAUTH, below). Since these methods can be used to reveal the password for the network or the user, using one of these may be a more productive attack?

There are other methods for obtaining a Wi-Fi password, as well: the password must be stored somewhere on each connecting device or the user would need to reenter the password upon every connection/reconnection. Any attack that gives attacker access to privileged information kept by any device’s operating system can be used to gain locally stored secrets like Wi-Fi passwords.

Wi-Fi password theft (whether WPA-Enterprise or WPA-Personal) through Wi-Fi deauthentication, "DEAUTH", has been around for some time; there are numerous tools available to make the attack relatively trivial and straightforward.

Nation-state attackers may have access to banks of supercomputers or massive parallel processing systems that they can apply for rapid password cracking of even complex passwords. A targeted nation-state–sponsored Wi-Fi password-cracking attack is difficult to entirely prevent with today’s technologies. There are likely other adversaries with access to sufficient resources to crack complex passwords in a reasonable amount of time, as well[2].

Obviously, as Vanhoef suggests, as soon as your operating system or Wi-Fi client vendor offers an update to this issue, patch it quickly. Problem, hopefully, solved. Please see https://www.krackattacks.com for updates from security vendors that are working with the discoverer. Question your vendor. If your vendor does not respond, pressure them; that’s my suggestion.

Other actions that organizations can couple with good Wi-Fi hygiene are to use good traffic and event analysis tools, such as modern Security Information Event Management (SIEM) software, network ingress and egress capture for anomaly analysis, and perhaps ingress/egress gateways that prevent many types of attack and forms of traffic.

“One can use VPN, SSH2 tunnels, etc., to ensure some safety from ARP poisoning attacks. Mathy Vanhoef also uses an SSL stripper that depends on badly configured web servers. Make sure your TLS implementation is correct!” – Carric Dooley, Global Lead, Foundstone Consulting Services

Organization Wi-Fi hygiene includes rapidly removing rogue access points, Wi-Fi authentication based upon the organization’s central authentication mechanism (thus don’t employ a WPA password), strong password construction rules, and certificates issued to each Wi-Fi client without which access to Wi-Fi will be denied. Add Wi-Fi event logs to SIEM collection and analysis. Find out whether organization access points generate an event on key reinstallation. This is a fairly rare event. If above-normal events are being generated, your Wi-Fi may be suffering from a KRACK.

None of the foregoing measures will directly prevent key reinstallation attacks. Reasonable security practices do make gaining access to Wi-Fi more difficult, which will prevent or slow other Wi-Fi attacks. Plus, physical security ought to pay attention to anyone sitting in the parking lot with a Wi-Fi antenna pointing at a campus building. But that was just as true before KRACKs were discovered.

For home users, use a complex password on your home Wi-Fi. Make cracking your password difficult. Remember, you usually must enter the password only once for each client device.

Maintain multiple Wi-Fi networks (probably multiple access points), each with different passwords: one for work or other sensitive use; another for smart TVs and other potentially vulnerable devices, such as Internet of Things devices; isolate those from your network shares, printers, and other sensitive data and trusted devices. Install a third Wi-Fi for your guests. That network should be set up so that each session is isolated from the others; all your guests need is to get to the internet. Each SSID's network must have a separate, highly varied password: numbers, lowercase, uppercase, symbols. Make it a long passphrase that resists dictionary attacks.
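If inventing three long, varied passphrases feels like a chore, a few lines of Python will do it. This sketch assumes a Unix-style wordlist at /usr/share/dict/words; any large file of one word per line works:

```python
import secrets
import string

# The wordlist path is an assumption; substitute any large word file.
with open("/usr/share/dict/words") as f:
    words = [w.strip() for w in f if w.strip().isalpha() and 4 <= len(w.strip()) <= 8]

# Five random words plus a varied suffix; WPA2 passphrases max out at
# 63 characters, and this stays well under that limit.
phrase = "-".join(secrets.choice(words) for _ in range(5))
suffix = "".join(secrets.choice(string.digits + "!@#$%^&*") for _ in range(4))
print(f"{phrase}{suffix}")  # e.g. maple-quartz-ridge-oboe-lunar7!3%
```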

Wi-Fi routers are a commodity; I do not suggest spending a great deal of money. How much speed do your occasional guests really need? Few Internet connections are as fast as even bottom-tier home Wi-Fi. Most of your visitors probably do not need access to your sensitive data, yes? Should your kids have their own Wi-Fi so that their indiscretions do not become yours?

Multiple Wi-Fi networks will help to slow and perhaps confound KRACKs cybercriminals. "Which network should I focus on?" You increase the reconnaissance the attacker must perform to promulgate a successful attack. Maybe they will move on to a simpler home network.

If you happen to notice someone sitting in a car in your neighborhood with an open laptop, be suspicious. If they have what looks like an antenna, maybe let law enforcement know.

Basic digital security practices can perhaps help? For instance, use a VPN, even over Wi-Fi. Treat Wi-Fi, particularly publicly available Wi-Fi, as an untrustable network. Be cognizant of where the Wi-Fi exists. WPA2 is vulnerable to decryption, so don't trust airports, hotels, cafes, anywhere where the implementers and maintainers of the network are unknown.

Users of some Android and Linux clients should be aware that an additional implementation error allows an attacker immediate access to the decryption of client and access point Wi-Fi traffic. Remember, all those smart TVs, smart scales, home automation centers, and thermostats are usually nothing more than specialized versions of Linux. Hence, each may be vulnerable. On the other hand, if these are segregated onto a separate, insecure Wi-Fi, at least attackers will not have readily gained your more sensitive network and devices. While waiting for a patch for your vulnerable Android device, perhaps it makes sense to put it on the untrusted home Wi-Fi as a precaution.

You may have noticed that I did not suggest changing the home Wi-Fi passwords. Doing so will not prevent this attack. Still, the harder your WPA2 password is to crack, the more likely common cybercriminals will move on to easier pickings.

As is often the case in these serious issues, reasonable security practices may help. At the same time, failure to patch quickly leaves one vulnerable for as long as it takes to patch. I try to remember that attackers will become ever more facile with these methods and what they can do with them. They will build tools, which, if these have value, will then become ever more widespread. It is a race between attacker capability and patching. To delay deploying patches is typically a dance with increasing risk of exploitation. In the meantime, the traffic from this attack will be anomalous. Watch for it, if you can.

Many thanks to Carric Dooley from Foundstone Professional Services for his suggestions to complete and round out this analysis as well as for keeping me technically accurate.

Cheers,

/brook

[1] “When a client joins a network, it […] will install this key after receiving message 3 of the 4-way handshake. Once the key is installed, it will be used to encrypt normal data frames using an encryption protocol. However, because messages may be lost or dropped, the Access Point (AP) will retransmit message 3 if it did not receive an appropriate response as acknowledgment. […] Each time it receives this message, it will reinstall the same encryption key, and thereby reset the incremental transmit packet number (nonce) and receive replay counter used by the encryption protocol. We show that an attacker can force these nonce resets by collecting and replaying retransmissions of message 3 of the 4-way handshake. By forcing nonce reuse in this manner, the encryption protocol can be attacked, e.g., packets can be replayed, decrypted, and/or forged.”

[2] Interestingly, my LinkedIn password was among those stolen during the 2012 LinkedIn breach. A research team attempted to crack the stolen passwords. After three months, they had cracked something like 75% of them. However, my highly varied, but merely 6-character, password was never cracked. While today's cracking is factors more sophisticated and rapid, a varied, non-dictionary password still slows the process down considerably.

Heartbleed Exposure, What Is It Really?

"Heap allocation patterns make private key exposure unlikely"

–Neel Mehta, discoverer of heartbleed

In the media, there's been a lot of discussion about what might be exposed by the heartbleed OpenSSL attack. It is certainly true that very sensitive items can be exposed, and over thousands of test runs, sensitive items like private keying materials have been returned by the heartbleed buffer over-read.

A very strong case can be made for doing exactly as industry due diligence suggests: teams should replace private keys on servers that had been vulnerable, once these are patched. But should every person on the Internet change every password? Let's examine that problem by digging into the details of exactly how heartbleed works.

First, heartbleed has been characterized as an "overflow" error: "Heartbleed is basically a buffer-overflow vulnerability." This, unfortunately, is a poor descriptor and somewhat inaccurate. It may make better media copy, but calling heartbleed an "overflow" is a poor technical description upon which to base a measured response.

Heartbleed is not a classic buffer overflow. No flow control or executable code may be injected via heartbleed. A read of attacker-chosen memory locations is not possible, as I will explain below. A better descriptor of heartbleed is a "buffer over-read": unintentionally, some data from memory is returned to the attacker. To be precise, heartbleed is a data leak, not a flow control error.

In order to understand what's possible to disclose, it's key to understand program "heap" memory. The heap is an area of memory that programs use to store data. Generally speaking, well-written programs (like OpenSSL) do not put executable code into heap (that is, data) memory[1]. Because data and execution are separated, the attacker has no way through this vulnerability to execute code. And that is key, as we shall see.

As a program runs, bits of data, large and small, temporary and more or less permanent for the run, are put into the heap[2]. Typically, data are put wherever is convenient at the moment of allocation, depending upon what memory is available.

Memory that’s been deallocated gets reused. If an available piece of memory happens to be larger than a requested size, the new sized piece will be filled with the new data, while adjacent to the new data will remain bits and pieces of whatever was there previously.

In other words, while not entirely random, the heap is filled with bits and pieces of data, a little from here, a little from there, a nice big chunk from this session, with a bit left over from some other session, all helter-skelter amongst each other. The heap is a jumble; taking random bits from the heap may be considered to be like attending a jumble sale.

Now, let’s return to heartbleed. The heartbleed bug returns whatever happens to be on the heap just above the 16 bytes that are required for the TLS heartbeat packet. The attacker may request as much as 64K bytes. That’s a nice big chunk of stuff from the heap; make no mistake about it. Anything might be in there. At the very least, decrypted  data intended for application processing will be returned to the attacker[3]. That’s certainly bad! It breaks the confidentiality supposedly gained through the TLS encryption. But getting a random bit is different than requesting an arbitrary memory location at the discretion of the attacker. And that is a very important statement to hold in mind as we respond to this very serious situation.
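To make those mechanics concrete, here is a toy model of the over-read in Python. This illustrates the vulnerable pattern, not OpenSSL's actual code; the planted "secret" stands in for whatever happens to be adjacent on the heap:

```python
# A simulated process heap: the attacker's tiny payload sits next to
# leftover data from other sessions, exactly as described above.
heap = bytearray(
    b"abc| user=alice; sess=9f3c |"
    b"-----BEGIN RSA PRIVATE KEY----- (some other session's key) ..."
)

def heartbeat_response(payload_offset: int, claimed_len: int) -> bytes:
    # The vulnerable pattern: trust the length field from the request and
    # copy that many bytes, running past the payload into adjacent heap.
    return bytes(heap[payload_offset : payload_offset + claimed_len])

# The attacker's actual payload is 3 bytes, but the request claims 64.
print(heartbeat_response(payload_offset=0, claimed_len=64))
```

The fix, of course, is the opposite discipline: copy only as many bytes as the payload actually contains, never as many as the requester claims.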

An analogy to heartbleed is fishing. Sometimes, we fish where we can clearly see the fish (mountain streams) or signs of fish (clearer lakes), or with a "fish finder" appliance that identifies fish under the surface when the fish aren't visible.

Heartbleed is a lot more like fishing for fish that are deep in a turbulent lake with no fish finding capability. The fisher is guessing. If she or he guesses correctly, fish for dinner. If not, it’s a long day holding onto the fishing rod.

In the same manner, the attacker, the “fisher” as it were, doesn’t know where the “fish”, the goodies are. The bait (the heartbleed request) is cast upon the “lake” (the program heap) in the hopes that a big fish will “bite” (secret “bytes” will get returned).

The attacker can heartbleed to her or his heart’s content (pun intended). That is, if left undiscovered, an attacker can continuously pound the other side of the connection with heartbleeds, perhaps thousands of times. Which means multiple chunks of memory will be returned to the attacker, as the heap allocates, deallocates, and moves data around.

Lots of different heap chunks will get returned. There will likely also be overlap between the chunks that are returned to the attacker. Somewhere within those memory chunks are likely to be some sensitive data. If the private key for a session happens to be in one of those chunks, it will be exposed to the attacker. If any particular session open through the OpenSSL library happens to contain a password that had been transmitted, it's been exposed. It won't take an engineering genius to do an ASCII dump of returned chunks of memory in order to go poking about to find interesting bits.

Still, and nonetheless, this is hunting for goodies in a bit of a haystack. Some people are quite good at that. Let’s acknowledge that outright. But that’s very different than a directed attack.

And should a wise and prepared security team, making good use of appropriate security tools, notice a heartbleed attack, they will most likely kill the connection before thousands of buffers can be read. Heartbleed over any particular connection is a linear process, one packet retrieved at a time. Retrieving lots of data takes some time. Time to respond. Of course, an unprotected and unaware site could allow many sessions to get opened by an attacker, each linearly heartbled, thus revealing far more of what’s on the heap than a single session might. Wouldn’t you notice such anomalous behaviour?

It’s important to note that the returns in the heartbleed packets are not necessarily tied to the attackers’ session. Again, it’s whatever happens to be on the heap, which will contain parts of other sessions. And any particular heartbleed packet is not necessarily connected to the data in a previous or subsequent packet. Which means that there’s no continuity of session nor any linearity between heartbleed retrievals. All session continuity must be pieced together by the attacker. That’s not rocket science. But it’s also work, perhaps significant work.

I’ll reiterate in closing, that this is a dangerous bug to which we must respond in an orderly fashion.

On the other hand, this bug does not give attackers free rein to go after all the juicy targets that may be available on any host, server, or endpoint that happens to have OpenSSL installed. Whatever happens to be on the heap of the process using the OpenSSL library, adjacent to the heartbeat buffer, will be returned. And the attack may only occur during a TLS session. Simply including the vulnerable library poses no risk at all. Many programs make use of OpenSSL for other functionality beyond TLS sessions.

This bug is not the unfettered keys to the kingdom, unless a “key to the kingdom” just happens to be on the heap and happens to get returned in the over-read. What gets returned is entirely due to the distribution of the heap at the moment of that particular heartbeat.

Cheers,

/brook

These assertions have been demonstrated in the lab through numerous runs of the heartbleed attack by a team that cannot be named here. My thanks to them for confirming this assessment. Sorry for not disclosing.

[1] There are plenty of specialized cases that break this rule. But typically, code doesn’t run from the heap; data goes onto the heap. And generally speaking, programs refrain from executing on the heap because it’s a poor security practice. Let’s make that assumption about OpenSSL (and there’s nothing to indicate that this is NOT true in this case), in order to make clear what’s going on with heartbleed.

[2] The libraries that support programs developed with the major development tools and running on the major operating systems have sophisticated heap management services that are consumed by the running application as it allocates and deallocates memory. While care must be exercised in languages like C/C++, the location of where data end up on the heap is controlled by these low-level services.

[3] That is, intended for the application that is using OpenSSL for TLS services.

The “Real World” of Developer-centric Security

My friend and colleague, Dr. James Ransome, invited me late last winter to write a chapter for his 10th book on computer security, Core Software Security (with co-author Anmol Misra), published by CRC Press. My chapter is "The SDL In The Real World" (SDL = Secure Development Lifecycle). The book was released December 9, 2013. You can get copies from the usual sources (no adverts here, as always).

It was an exciting process. James and I spent hours white boarding possible SDL approaches, which was very fun, indeed*. We collectively challenged ourselves to uncover current SDL assumptions, poke at the validity of these, and find better approaches, if possible.

Many of you already know that I’ve been working towards a different approach to the very difficult, multi-dimensional and multi-variate problem of designing and implementing secure software for a rather long time. Some of my earlier work has been presented to the industry on a regular basis.

Specifically, during the period of 2007-9, I talked about a new (then) approach to security verification that would be easy for developers to integrate into their workflow and which wouldn’t require a deep understanding of security vulnerabilities nor of security testing. At the time, this approach was a radical departure.

The proving ground for these ideas was my program at Cisco, Baseline Application Vulnerability Assessment, or BAVA, for short (“my” here does not exclude the many people who contributed greatly to BAVA’s structure and success. But it was more or less my idea and I was the technical leader for the program).

But, is ease and simplicity all that’s necessary? By now, many vendors have jumped on the bandwagon; BAVA’s tenets are hardly even newsworthy at this point**. Still, the dream has not been realized, as far as I can see. Vulnerability scanning still suffers from a slew of impediments from a developer’s view:

  • Results count vulnerabilities, not software errors
  • Results are noisy; often many variations of a single error are reported uniquely
  • Tools are hard to set up
  • Tools require considerable tool knowledge and experience, too much for developers' highly over-subscribed days
  • Qualification of results requires more in-depth security knowledge than even senior developers generally have (much less an average developer)

And that’s just the tool side of the problem. What about architecture and design? What about building security in during iterative, fast paced, and fast changing agile development practices? How about continuous integration?

As I was writing my chapter, something crystallized. I named it "developer-centric security", which then managed to get wrapped into the press release and marketing materials of the book. Think about this: how does the security picture change if we re-shape what we do by taking the developer's perspective rather than a security person's?

Developer-centric software security then reduces to a single, pointed question:

What am I doing to enable developers to innovate securely while they are designing and writing software?

Software development remains a creative and innovative activity. But so often, we on the security side try to put the brakes on innovation in favour of security. Policies, standards, etc., all try to set out the rules by which software should be produced. From an innovator’s view at least some of the time, developers are iterating through solutions to a new problem while searching for the best way to solve it. How might security folk enable that process? That’s the question I started to ask myself.

Enabling creativity, thinking like a developer, while integrating into her or his workflow is the essence of developer-centric security. Trust and verify. (I think we have to get rid of that old “but”)

Like all published works, the book represents a point-in-time. My thinking has accelerated since the chapter was completed. Write me if you’re intrigued, if you’d like more about developer-centric security.

Have a great day wherever you happen to be on this spinning orb we call home, Earth.

cheers,

/brook

*Several of the intermediate diagrams boggle the mind with their complexity and busyness. Like much software development, we had to work iteratively. Intermediate ideas grew and shifted as we worked. A creative process?

**At the time, after hearing BAVA's requirements, one vendor told me, "I'll call you back next year." Six months later on a vendor webcast, that same vendor was extolling the very tenets that I'd given them earlier. Sea change?

RSA 2013 & Risk

Monday, February 25th, I had the opportunity to participate in the Risk Seminar at RSA 2013.

My co-presenters/panelists are pursuing a number of avenues to strengthen not only their own practices, but the practice for all of us. It turns out that plenty of us practitioners have come to understand just how difficult a risk practice is. We work with an absence of hard longitudinal data, with literally thousands of variant scenarios, generally assessing ambiguous situations. Is that hard enough?

Further complicating our picture and methods is the lack of precision when we discuss risk. We all think we know what we mean, but do we? Threats get "prioritized" into risks. Vulnerabilities, taken by themselves with no context whatsoever, are assigned "risk levels". Vulnerability discovery tools' ratings don't help this discussion.

We all intuitively understand what risk is; we calculate risk every day. We also intuitively understand that risk has a threat component and a vulnerability component, that vulnerabilities have to be exploited, and that threats must have the capacity and access to exploit. There are mitigation strategies, and risk always involves some sort of impact or loss. But expressing these relationships is difficult. We want to shorthand all that complexity into simple, easily digestible statements.

Each RSA attendee received a magazine filled with short articles culled from the publisher’s risk archives. Within the first sentences risk was equated to threat (not that the writer bothered to define “threat”), to vulnerability, to exploit. Our risk language is a jumble!

Still, we plod on, doing the best that we can with the tools at hand, mostly our experience, our ability to correlate and analyze, and any wisdom that we may have picked up along the journey.

Doug Graham from EMC is having some success through focusing on business risk (thus bypassing all those arguments about whether that Cross-site Scripting Vulnerability is really, really dangerous). He has developed risk owners throughout his sister organizations much as I’ve used an extended team of security architects successfully at a number of organizations.

Summer Fowler from Carnegie-Mellon reported on basic research being done on just how we make risk decisions and how we can make our decisions more successful and relevant.

All great stuff, I think.

And me? I presented the risk calculation methodology that Vinay Bansal and I developed at Cisco, based on previous, formative work by Catherine Blackadder Nelson and Rakesh Bharania. Hopefully, someday, the SANS Institute will be able to publish the Smart Guide that Vinay and I wrote describing the method?*

That method, named "Just Good Enough Risk Rating" (JGERR), is a lightweight and repeatable process through which assessors can generate numbers that are comparable. That is one of the most difficult issues we face today; my risk rating is quite likely to be different from yours.
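JGERR's specifics await that Smart Guide, so by way of illustration only, here is a FAIR-flavoured sketch of the underlying idea: decompose risk into frequency and magnitude components, estimate each as a range rather than a point, and let a simple Monte Carlo produce the numbers. Every number below is hypothetical:

```python
import random

def fair_style_risk(tef_range, vuln, loss_range, trials=10_000):
    """Monte Carlo over a FAIR-style decomposition:
    loss event frequency = threat event frequency x vulnerability;
    annualized loss exposure = frequency x magnitude."""
    losses = []
    for _ in range(trials):
        tef = random.triangular(*tef_range)         # threat events/year: (low, high, mode)
        lef = tef * vuln                            # loss events/year
        magnitude = random.triangular(*loss_range)  # loss per event: (low, high, mode)
        losses.append(lef * magnitude)
    losses.sort()
    return {"median": losses[trials // 2], "p90": losses[int(trials * 0.9)]}

# Hypothetical scenario: internet-facing app, XSS-class issue.
print(fair_style_risk(tef_range=(1, 50, 10), vuln=0.3,
                      loss_range=(5_000, 200_000, 20_000)))
```

The point isn't the model's sophistication; it's that two assessors fed the same calibrated ranges produce comparable numbers, which is JGERR's goal as well.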

It’s difficult to get the assessor’s bias out of risk ratings. Part of that work, I would argue, should be done by every assessor. Instead of pretending that one can be completely rational (impossible and actually wrong-headed, since risk always involves value judgements), I call upon us all to do our personal homework by understanding our own, unique risk preferences. At least let your bias be conscious and understood.

Still, JGERR and similar methods (there are similar methods, good ones! I’m not selling anything here) help to isolate assessor bias and at the very least put some rigour into the assessment.

Thanks to Jack Jones' FAIR risk methodology, upon which JGERR is partly based, I believe that we can adopt a consistent risk lexicon, developing a more rigorous understanding of the relationships between the various components that make up a risk calculation. Thanks, Jack. Sorry that I missed you; you were speaking at the same time that I was, in the room next door to mine. Sigh.

As Chris Houlder, CISO of Autodesk, likes to say, “progress, not perfection”. Yeah, I think I see a few interesting paths to follow.

cheers

/brook

*In order to generate some demand, may I suggest that you contact SANS and tell them that you’d like the guide? The manuscript has been finished and reviewed for more than a year as of this post.

Alan Paller’s Security Architecture Call To Action

I was attending the 2nd SANS What Works in Security Architecture Summit, listening to Alan Paller, Director of Research at the SANS Institute, and Michele Guel, National Cybersecurity Award winner. Alan has given us all a call to action: namely, to build a collection of architectural solutions for security architecture practitioners. The body of work would be akin to UpToDate for medical clinicians.

I believe that creating these guides is one of the next critical steps for security architects as a profession. Why?

It's my contention that most experienced practitioners are carrying a bundle of solution patterns in their heads. Like a doctor, we can eyeball a system, view the right type of system diagram, get a few basic pieces of information, and then begin very quickly to assess which of those patterns may be applicable.

In assessing a system for security, of course, it is often like peeling an onion: more and more issues get exposed as the details are uncovered. Still, I've watched experienced architects very quickly zero in on the most relevant questions. They've seen so many systems that they intuitively understand where to dig deeper in order to assess attack vectors, and what is not important.

One of the reasons this can happen so quickly is knowledge that is almost entirely local to each practitioner's environment. Architects must know their environments intimately. That local knowledge will eliminate irrelevant threats and attack patterns. Plus, the architect also knows which security controls are pre-existing in supporting systems. These controls have already been vetted.

Along with the local knowledge, the experienced practitioner will have a sense of the organization's risk posture and security goals. These will highlight not only threats and classes of vulnerabilities, but also the value of assets under consideration.

Less obvious, a good security architecture practitioner brings a set of practical solutions, tested, tried, and true, that can be applied to the analysis.

As a discipline, I don't think we've been particularly good at building a collective knowledge base. There is no extant body of solutions to which a practitioner can turn. It's strictly the "school of hard knocks"; one learns by doing, by making mistakes, by missing important pieces, by internalizing the local solution set, and hopefully, through peer review, if available.

But what happens when a lone practitioner is confronted with a new situation or technology area with which he or she is unfamiliar? Without strong peer review and mentorship, to where does the architect turn?

Today, there is nothing at the right level.

One can research protocols, vendors, technical details. Once I’ve done my research, I usually have to infer the architectural patterns from these sources.

And, have you tried getting a vendor to convey a product's architecture? Reviewing a vendor's architecture is often fraught with difficulties: salespeople usually don't know, and sales engineers just want to make the product work, typically as a cookie-cutter recipe. How many times have I heard, "but no other customer requires this!"

There just aren’t many documented architectural patterns available. And, of the little that is available, most has not been vetted for quality and proof.

Alan Paller is calling on us to set down our solutions. SANS’ new Smart Guide series was created for this purpose. These are intended to be short, concise security architecture solutions that can be understood quickly. These won’t be white papers nor the results of research. The solutions must demonstrate a proven track record.

There are two tasks that must be accomplished for the Smart Guide series to be successful.

  • Authors need to write down the architectures of their successes
  • The series will need reviewers to vet guides so that we can rely on the solutions

I think that the Smart Guides are one of the key steps that will help us all mature security architecture practice. From the current state of Art (often based upon personality) we will force ourselves to:

  • Be succinct
  • Be understandable to a broad range of readers and experience
  • Be methodical
  • Set down all the steps required to implement – especially considering local assumptions that may not be true for others
  • Eliminate excess

And, by vetting these guides for worthwhile content, we should begin to deliver more consistency in the practice. I also think that we may discover that there's a good deal of consensus in what we do every day**. Smart Guides have to be juried by a body of peers in order for us to trust the content.

It's a Call to Action. Let's build a repository of best practice. Are you willing to answer the call?

cheers

/brook

** Simply getting together with a fair sampling of people who do what I do makes me believe that there is a fair amount of consensus. But that's entirely my anecdotal experience, in the absence of any better data.

SANS Security Architecture

There are plenty of security conferences, to be sure. You can get immersed in the nitty-gritty of attack exploits, learn to better analyze intrusion logs, even work on being a better manager of information security folks (like me).

In the plethora of conferences trying to get your attention, there is one area that doesn’t get much space: Security Architecture.

While the Open Group has finally recognized the Security Architect discipline, conferences dedicated to the practice have been too few and too far between. The SANS Institute hosted one last year in Las Vegas. It was such a success that they’ve decided to do it again.

SANS Security Architecture: Baking Security Into Applications And Networks

This year’s conference is intended to bring security architecture to the practical. The conference will be chock full of approaches and techniques that you can use “whole hog” or cherry pick for those portions which make sense for your organization.

Why do security architects need a conference of their own?

If you ask me (which you didn't!), security architecture is a difficult, complex practice. There are many branches of knowledge, technical, organizational, political, and personal, that get brought together by the successful architect. It's frustrating; it's tricky. It's often more Art than engineering.

We need each other, not just to commiserate, but to support each other, to help each other take this Art and make what we do more repeatable, more sustainable, more effective. And, many times when I’ve spoken about the discipline publicly, those who are new to security architecture want to know how to learn, what to learn, how to build a programme. From my completely unscientific experience, there seems a strong need for basic information on how to get started and how to grow.

Why do organizations need Information Security Architects? Well, first off, probably not every organization does need them. But certainly if your organization’s security universe is complex, your projects equally complex, then architecture may be of great help. My friend, Srikanth Narasimhan, has a great presentation on “The Long Tail of Architecture”, explaining where the rewards are found from Enterprise Architecture.

Architecture rewards are not reaped upfront. In fact, applying system architecture principles and processes will probably slow development down. But when a system is designed not just for present but for future requirements; when a system supports a larger strategy and is not entirely tactical; when careful consideration has been given to how seemingly disparate systems interact and depend upon each other as things change, that's where architecting systems delivers its rewards.

A prime example of building without architecture is the Winchester Mystery House in San Jose, California, USA. It’s a fun place to visit, with its stairwells without destination, windows opening to walls, and so on. Why? The simple answer is, “no architecture”.

So if your security systems seem like a version of the “Mystery House”, perhaps you need some architecture help?

Security Architecture brings the long tail of reward through planning, strategy, holism, risk practice, and a "top to bottom, front to back, side to side" view of the matrix of flows and system components, to build a true defense in depth. It's become too complex to simply deploy the next shiny technology and put it in without carefully considering how each part of the defense in depth depends upon, affects, and interacts with the others.

The security architecture contains a focus that addresses how seemingly disparate security technologies and processes fit into a whole. And solutions-focused security architects help to build holistic security into systems; security architecture is a fundamental domain of an Enterprise Architecture practice. I've come to understand that security architecture has (at least) these two foci, largely through attending last year's conference. I wonder what I'll gain this year?

Please join me  in Washington DC, September 28-30, 2011 for SANS Security Architecture.

cheers

/brook

 

Architecture or Engineering?

It's not exactly the great debate. There may not be that many folks on the planet who care what we call this emerging discipline in which I'm involved? Still, those of us who are involved are trying to work through our differences. We don't yet have a clear consensus definition, though I would offer that a consensus on the field of security architecture is beginning to coalesce within the practitioner community. (Call the discipline what you will.)

One of the lively discussions here at the SANS What Works In Security Architecture is, “What do we call ourselves?”

Clearly, there are at least two threads. One thread (my practice) is to call what I do “Information Security Architecture”. The other thread is “Systems Security Engineer”.

I read one of the formative documents from the "Systems Security Engineer" position last month. (It's not yet announced, so I'm not sure if I can say from whom, yet?) And what was described was pretty close to what I've been calling "Security Architecture".

Here's the interesting part to me: no matter the name, collectively we're realizing that we require this set of functions to be fulfilled. I'll try to make a succinct statement of the consensus task list that I think is emerging:

  • Study of a proposed system architecture for security needs
  • Discussion with the proposers of the system (“the business” in parlance) about what the system is meant to do and the required security posture
  • Translation of required security posture into security requirements
  • Translation of the security requirements into specific security controls within a system design
  • Given an architecture, assessing the security risk based upon a strong understanding of information security threats for that organization and that type of architecture

In some organizations, the security architect task is complete at delivery of the plan. In others, the architect is expected to help design the specific controls into the system. And then, possibly, the architect is also responsible for the due diligence task of assuring that the controls were actually implemented. In my experience, I am responsible for all of these tasks and I’m called a “Security Architect”.

From the other perspective, the role is called “Systems Security Engineer”. And, it is also expected that the upfront tasks of discussion and analysis are performed by a security architect who is no longer engaged after delivery of the high level items (listed above).

That's not how it works in my world. And, I would offer that this "leave it after analysis" model might lead to "Ivory Tower" type pronouncements. I'll never forget one of my early big errors. I was told that "all external SSL communications were required to be bi-directionally authenticated". OK, I'd specify that for every external SSL project.

Then projects started coming back to me, sometimes months later, having asked for bi-directional authentication and having been told that on the shared Java environment, "no can do".

The way that this environment was set up, it didn't support bi-directional authentication at an application granularity. And that was the requirement that I had given, not once but several times. I was "following the established standard", right?
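For concreteness, here is roughly what "bi-directionally authenticated" means at the connection level, sketched with Python's standard ssl module (the certificate file names are hypothetical). The shared environment in my story controlled the equivalent of the verify_mode setting platform-wide, which is exactly why a per-application requirement couldn't be met:

```python
import socket
import ssl

# Server side of a mutually authenticated ("bi-directional") TLS listener.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain(certfile="server.pem", keyfile="server.key")  # server authenticates to clients
ctx.load_verify_locations(cafile="trusted_clients_ca.pem")        # CA that signs client certs
ctx.verify_mode = ssl.CERT_REQUIRED  # the one setting that makes it bi-directional

with socket.create_server(("0.0.0.0", 8443)) as srv:
    with ctx.wrap_socket(srv, server_side=True) as tls:
        conn, addr = tls.accept()  # handshake fails unless the client presents a valid cert
        print("authenticated client:", conn.getpeercert().get("subject"))
```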

I submit to your comment and discussion that

  1. There’s more than one way to achieve similar security control (in this case, bi-directionally authenticated encrypted tunnels)
  2. It’s very important to specify requirements that can be met
  3. A security architect must understand the existing capabilities before writing requirements.

Instead of ivory tower pronouncements, if I believe that there is a technology gap, my job is to surface the gap and make it as transparent as possible to the eventual owners of the system under analysis ("the business"). How can I do this job if all I do is work from paper standards and then walk away? Frankly, I'd lose all credibility. The only thing that saved me early on was my willingness to admit my errors and then pitch in wholeheartedly to solve any problems that I'd helped create (or, really, anything that comes up!).

Some definitions for your consideration:

  • Architecture: the structure and organization of a computer’s hardware or system software
  • Engineering:  the discipline dealing with the art or science of applying scientific knowledge to practical problems

Engineers apply, say, cryptography (an algorithm) to solve problems. Architects plan that cryptography will be required for a particular set of transmissions or for storage of a set of data.

At least, these are my working definitions for the moment.

So, after listening to both sides, in my typically equivocal manner, I’m wondering if this role that I practice is really a good and full slice of architecture, combined with a good and full slice of engineering?

Practitioners, it seems to me from my experience, need both of these if they are to gain credibility with the various groups with whom they must interact. And “no”, I don’t want to perpetuate any ivory towers!

Send me your ideas for a better name, a combinatorial name, what have you.

cheers,

/brook

from Las Vegas airport