Finally Making JGERR Available

Originally conceived when I was at Cisco, Just Good Enough Risk Rating (JGERR) is a lightweight risk rating approach that attempts to solve some of the problems articulated by Jack Jones’ Factor Analysis Of Information Risk (FAIR). FAIR is a “real” methodology; JGERR might be said to be FAIR’s “poor cousin”.

FAIR, while relatively straightforward, requires some study. Vinay Bansal and I needed something that could be taught in a short time and applied during the sorts of risk-rating moments that regularly occur when assessing a system to uncover its risk posture and to produce a threat model.

Our security architects at Cisco were (probably still are?) very busy people who have to make a series of fast risk ratings during each assessment. A busy architect might have to assess more than one system in a day. That meant that whatever approach we developed had to be fast and easily understandable.

Vinay and I were building on Catherine Nelson and Rakesh Bharania’s Rapid Risk spreadsheet. But it had arithmetic problems, and it did not clearly separate risk impact from the terms that substitute for probability in a risk rating calculation. We had collected hundreds of Rapid Risk scores and were dissatisfied with the results.

Vinay and I developed a new spreadsheet and a new scoring method which actively followed FAIR’s example by separating out terms that need to be thought of (and calculated) separately. Just Good Enough Risk Rating (JGERR) was born. This was about 2008, if I recall correctly?
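
The spreadsheet itself isn’t reproduced here, but the structural idea we borrowed from FAIR can be shown in a few lines. To be clear, the term names, scales, and combination below are illustrative stand-ins, not JGERR’s actual terms or weights:

```python
# Illustrative only: these term names, scales, and the combination
# scheme are hypothetical stand-ins, not JGERR's actual method.

def rate_risk(likelihood_terms, impact_terms):
    """Rate the two groups of terms separately (0-5 each), then combine.

    likelihood_terms: scores for the terms that substitute for
        probability (e.g. exposure, ease of exploit, threat activity).
    impact_terms: scores for loss/impact (e.g. data sensitivity,
        business criticality).
    """
    likelihood = sum(likelihood_terms) / len(likelihood_terms)
    impact = sum(impact_terms) / len(impact_terms)
    # Keeping the two sub-scores separate until this final step is the
    # structural point borrowed from FAIR.
    return round(likelihood * impact, 1)

score = rate_risk(likelihood_terms=[4, 3, 5], impact_terms=[5, 4])
```

The point of the sketch is only the separation: an assessor reasons about “how likely” terms and “how bad” terms independently, and the arithmetic never mixes them until the end.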

In 2010, when I was on the steering committee for the SANS What Works in Security Architecture Summits (they are no longer offering these), one of Alan Paller’s ideas was to write a series of short works explaining successful security treatments for common problems. The concept was to model these on the short diagnostic and treatment booklets used by medical doctors to inform each other of standard approaches and techniques.

Michele Guel, Vinay, and I wrote a few of these as the first offerings. The works were to be peer-reviewed by qualified security architects, which all of our early attempts were. The first “Smart Guide” was published to coincide with a Summit held in September of 2011. However, SANS Institute has apparently cancelled not only the Summit series but also the Smart Guide idea. None of the guides seem to have been posted to the SANS online library.

Over the years, I’ve presented JGERR at various conferences, and it is the basis for Chapter 4 of Securing Systems. Cisco has by now collected hundreds of JGERR scores. I spoke to a Director who oversaw that programme a year or so ago, and she said that JGERR is still in use. I know that several companies have considered and/or adapted JGERR for their use.

Still, the JGERR Smart Guide was never published. I’ve been asked for a copy many times over the years. So, I’m making JGERR available from here at brookschoenfield.com should anyone continue to have interest.

cheers,

/brook schoenfield

Of Flaws, Requirements, And Templates

November 13th, 2013, I spoke at the annual BSIMM community get-together in Reston, Virginia, USA. I had been asked to co-present on Architecture Risk Assessment (ARA) and threat modeling alongside Jim Delgrosso, long-time Cigital threat modeler. We gave two talks:

  1. Introduction to Architecture Risk Assessment And Threat Modeling
  2. Scaling Architecture Risk Assessment And Threat Modeling To An Enterprise

Thanks much, BSIMM folk, for letting me share the tidbits that I’ve managed to pick up along my way. I hope that we delivered on some of the promises of our talks? One of the personal rewards of my participation in the conference was interacting with other experienced practitioners.

Make no mistake! ARA-threat modeling is and will remain an art. There is science involved, of course. But people who can do this well learn through experience and the inevitable mistakes and misses. It is a truism that it takes a fair amount of background knowledge (not easily gained):

  • Threat agents
  • Attack methods and goals used by each threat agent type
  • Local systems and infrastructure into which a system under analysis will go
  • Some form of fairly sophisticated risk rating methodology.

These specific knowledge sets sit on top of significant design ability and experience. The assessor has to be able to understand a system architecture and then to decompose it to an appropriate level.

The knowledge domains listed above are prerequisite to an actual assessment. That is, there are usually years of experience in system and/or software design, in computer architectures, in attack patterns, threat agents, security controls, etc., that the assessor brings to an ARA. One way of thinking about it is that ARA/threat modeling is applied computer security: computer security applied to the system under analysis.

Because ARA is a learned skill with many local variations, I find it fascinating to match my practice to practices that have been developed independently of mine. What is the same? Where do we have consensus? And, importantly, what’s different? What have I missed? How can I improve my practice based upon others’? These co-presentations and conversations are priceless. Interestingly, Jim and I agreed about the basic process we employ:

  • Understand the system architecture, especially the logical/functional representation
  • Uncover intended risk posture of the system and the risk tolerance of the organization
  • Understand the system’s communication flows, to the protocol interaction level
  • Get the data sensitivity of each flow and for each storage. Highest sensitivity rules any resulting security needs
  • Enumerate attack surfaces
  • Apply relevant active threat agents and their methodologies to attack surfaces
  • Filter out protected, insignificant, or unlikely attack vectors and scenarios
  • Output the security that the system or the proposed architecture and design are missing in order to fulfill the intended security posture
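
The steps above resist full automation; assessment is judgment. Still, the filtering steps can be caricatured in code. Every name below is hypothetical, a toy, not a real threat catalog:

```python
# A toy sketch of the filtering steps above; the agents, methods, and
# surfaces are invented for illustration. Real analysis is judgment.

THREAT_AGENTS = {
    "cybercriminal": {"methods": {"sql_injection", "credential_theft"}},
    "insider": {"methods": {"data_exfiltration"}},
}

def relevant_vectors(attack_surfaces, existing_controls):
    """Pair each attack surface with the agent methods that apply to
    it, then filter out vectors already covered by an existing,
    vetted control. What remains is the missing security."""
    vectors = []
    for surface, applicable_methods in attack_surfaces.items():
        for agent, profile in THREAT_AGENTS.items():
            for method in profile["methods"] & applicable_methods:
                if method not in existing_controls.get(surface, set()):
                    vectors.append((agent, method, surface))
    return vectors

gaps = relevant_vectors(
    attack_surfaces={"login_api": {"sql_injection", "credential_theft"}},
    existing_controls={"login_api": {"sql_injection"}},
)
# gaps -> [("cybercriminal", "credential_theft", "login_api")]
```

Even this caricature shows why the local knowledge matters: the `existing_controls` input is exactly the inherited, already-vetted infrastructure security that each practitioner carries in her or his head.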

There doesn’t seem to be much disagreement on what we do. That’s good. It means that this practice is coalescing. The places where we disagree or approach the problem differently are, I think, pretty interesting.

Gary McGraw calls security architecture misses “flaws”, as opposed to software “bugs”. Bugs can be described as errors in implementation, usually in coding. Flaws, then, are those security items which didn’t get considered during architecture and design, or which were not designed correctly (like poorly implemented authentication, authorization, or encryption). I would agree that implementing some sort of no-entropy scramble and then believing that you’ve built “encryption” is indeed both a design flaw and an implementation error*.

I respect Gary’s opinion greatly. So I carefully considered his argument. My personal “hit”, not really an opinion so much as a possible rationale, is that “flaw” gets more attention than, say, “requirement”? This may especially be true at the senior management level? Gary, feel free to comment…

Why do I prefer the term “requirement”? Because I’m typically attempting to fit security into an existing architecture practice. Architecture takes “requirements” and turns these into architecture “views” that will fulfill the requirements. So naturally, if I want security to get implemented, I will have to deliver requirements into that process.

Further, if I name security items that the other architects may have missed as “flaws”, I’m not likely to make any friends amongst my peers, the other architects working on a system. They may take umbrage at my describing their work as flawed? They bring me into an analysis in order to get a security view on the system, to uncover through my expertise security requirements that they don’t have the expertise to discover.

In other words, I have very good reasons, just as Gary does, for using the language of my peers so that my security work fits as seamlessly as possible into an existing architecture practice.

The same goes for architecture diagram preferences. Jim Delgrosso likes to proffer a diagram template for use. That’s a great idea! I could do more with this, absolutely.

But once again, I’m faced with an integration problem. If my peers prefer Data Flow Diagrams (DFD), then I’d better be able to analyze a system based upon its DFD. If architects use UML, I’d better be able to understand UML. Ultimately, if I prize integration, unless there’s no existing architecture approach with which to work, my best integration strategy is to make use of whatever is typical and normal, whatever is expected. Otherwise, if I demand that my peers use my diagram, they may see me as obstructive, not collaborative. I have (so far) focused on integration with existing practices and teams.

As I spend more time teaching (and writing a book about ARA), I’m finding that, having accepted whatever I’ve been given in an effort to integrate well, I haven’t created a definitive system ARA diagram template from which to work (though I have lots of samples). That may be a miss on my part? (An architectural miss?)

Some of the different practices I encountered may be due to differing organizational roles? Gary and Jim are hired as consultants. Because they are hired experts, perhaps they can prescribe more? Or, indeed, customers expect a prescriptive, “do it this way” approach? Since I’ve only consulted sparingly and have mostly been hired to seed and mentor security architecture practices from the inside, perhaps I don’t have enough experience to understand consultative demands and expectations? I do know that I’ve had far more success through integration than through prescription. Maybe that’s just my style?

You, my readers, can, of course, choose whatever works for you, depending upon role, maturity of your organization, and such.

Thanks for the ideas, Jim (and Gary). It was truly a great experience to kick practices around with you two.

cheers,

/brook

*We should be long past the point where anyone believes that a proprietary scramble protects much. (Except, of course, I occasionally do come across exactly this!).

Security Testing Is Dead. Rest In Peace? Not!

Apparently, some Google presenters are claiming that we can do away with the testing cycles in our software development life cycles? There’s been plenty of reaction to Alberto Savoia’s Keynote in 2011. But I ran into this again this Spring (2013)! So, I’m sorry to bring this up again, but I want to try for a security-focused statement…

The initial security posture of a piece of software is dependent upon the security requirements for that particular piece of software or system. In fact, the organizational business model influences an organization’s security requirements, which in turn influence the kinds of testing that any particular software or system will need before and after release. We don’t sell and deliver software in a vacuum.

Google certainly present a compelling case for user-led bug hunting. Their bounty programme works! But there are important reasons that Google can pull off user-led testing for some of their applications when other businesses might die trying.

It’s important to understand Google’s business model and infrastructure in order to understand a business driven bounty programme.

  • Google’s secured application infrastructure
  • Build it and they might come
  • If they come, how to make money?
  • If Google can’t monetize, can they build user base?

First and foremost, Google’s web application execution environment has got to have a tremendous amount of security built into it. Any application deployed to that infrastructure inherits many security controls. (I’ll exclude Android and mobility, since these are radically different.) Google applications don’t have to implement identity systems, authorization models, user profiles, document storage protection, and the panoply of administrative and network security systems that any commercial, industrial-strength cloud must deploy and run successfully. Indeed, I’m willing to guess that each application Google deploys runs within a certain amount of sandboxed isolation such that failure of that application cannot impact the security and performance of the other applications running on the infrastructure. In past lives, this is precisely how we built large application farms: sandbox and isolation. When a vulnerable application gets exploited, other applications sharing the infrastructure cannot be touched. We also made escape from the sandbox quite difficult in order to protect the underlying infrastructure. Google would be not only remiss, but clueless to allow buggy applications to run in any less isolating environment. And, I’ve met lots of very smart Google folk! Scary smart.

Further, from what I’ve been told, Google has long since implemented significant protections for our Google Docs. Any application that needs to store documents inherits this document storage security. (I’ve been told that Google employ some derivation of Shamir’s Threshold Scheme such that unless an attacker can obtain M of N stored versions of a document, the attacker gains no data whatsoever. This also thwarts the privileged insider attack.)
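
I can’t verify Google’s actual construction; but the M-of-N property described above is exactly what textbook Shamir secret sharing provides, and it can be sketched in a few lines of Python (the prime, share counts, and secret below are illustrative):

```python
import random

# A Mersenne prime comfortably larger than the secret being shared.
PRIME = 2**127 - 1

def split_secret(secret, n_shares, threshold, prime=PRIME):
    """Split a secret into n_shares points on a random polynomial of
    degree threshold-1 whose constant term is the secret."""
    coeffs = [secret] + [random.randrange(prime) for _ in range(threshold - 1)]
    return [
        (x, sum(c * pow(x, i, prime) for i, c in enumerate(coeffs)) % prime)
        for x in range(1, n_shares + 1)
    ]

def reconstruct(shares, prime=PRIME):
    """Lagrange-interpolate at x = 0 to recover the constant term.
    Any `threshold` shares suffice; fewer reveal nothing."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % prime
                den = (den * (xi - xj)) % prime
        # pow(den, prime - 2, prime) is the modular inverse (Fermat).
        secret = (secret + yi * num * pow(den, prime - 2, prime)) % prime
    return secret

shares = split_secret(secret=42, n_shares=5, threshold=3)
assert reconstruct(shares[:3]) == 42  # any 3 of the 5 recover it
```

The insider-thwarting property falls out of the math: any two shares alone are consistent with every possible secret, so a single compromised storage node (or administrator) learns nothing.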

My simple point is that Google is NOT entirely relying upon its external testers, its bug bounty programme. A fair amount of security is inherent and inherited by Google’s web application experiments.

And, we must understand Google’s business model. As near as I can tell from the outside, Google throws a lot of application “spaghetti” onto the Web. If an application “sticks”, that is, attracts a user base, then Google figure out how to monetize the application. If the application can’t be monetized, Google may still support the application for marketing (popularity, brand enhancement) purposes. Applications that don’t generate interest are summarily terminated.

In my opinion, Google’s business model leaves a lot of wiggle room for buggy software. Many of these experiments have low expectations, perhaps no expectations at the outset? This gives Google time to clean up the code as the application builds user base and penetration. If nobody is dependent upon an application, then there’s not a very high security posture requirement. In other words, Google can take time to find the “right product”. This is entirely opposite for security functions that must deliver protection independent of any support (like on an endpoint that can be offline). Users expect security software to be correct on installation: the product has to be built “right”, right from the start.

And, the guts of Google are most likely protected from any nasty vulnerabilities. So, user testing makes a lot of business sense and does not pose a brand risk.

Compare this with an endpoint security product from an established and trusted brand. Not only must the software actually protect the customer’s endpoint, it’s got to work when the endpoint is not connected to anything, much less the Internet (i.e., can’t “phone home”). Additionally, security software must meet a very high standard for not degrading the posture of the target system. That is, the software shouldn’t install vulnerabilities that can be abused alongside the software’s intended functionality. Security software must meet a very high standard of security quality. That’s the nature of the business model.

I would argue that security software vendors don’t have a great deal of wiggle room for user-discovery of vulnerabilities. Neither do medical records software, nor financials. These applications must try to be as clean as possible from the get go. Imagine if your online banking site left its vulnerability discovery to the user community. I think it’s not too much of a leap to imagine the loss in customer confidence that this approach might entail?

I’ll state the obvious: different businesses demand different security postures and have different periods of grace for security bugs. Any statement that makes a claim across these differences is likely spurious.

Google, in light of these obvious differences, may I ask your pundits to speak for your own business, rather than assuming that you may speak for all business models or trumpeting a “new world order”? Everyone, may I encourage us to pay attention to the assumptions inherent in claims? Not all software is created equally, and that’s a “Good Thing” ™.

By the way, Brook Schoenfield is an active Google+ user. I don’t intend to slam Google’s products in any manner. Thank you, Google, for the great software that I use every day.

cheers

/brook


Agile & Security, Enemies For Life?

Are Agile software development and security permanent enemies?

I think not. In fact, I have participated in SCRUM development where building security into the results was more effective than classic waterfall. I’m a Secure Agile believer!

Dwayne Melancon, CTO of Tripwire, opined for BrightTALK on successful Agile for security.

Dwayne, I wish it was that simple! “Engage security early”. How often have I said that? “Prioritize vulnerabilities based on business impact”. Didn’t I say that at RSA? I hope I did?

Yes, these are important points. But they’re hardly news to security practitioners in the trenches building programmes.

Producing secure software when using an Agile method is not quite as simple as “architect and design early”. Yes, that’s important, of course. We’ve been saying “build security in, don’t bolt it on” for years now. That has not changed.

I believe that the key is to integrate security into the Agile process from architecture through testing. Recognize that we have to trust, work with, and make use of the Agile process rather than fighting it. After all, one of the keys that makes SCRUM so effective is trust. Trust and collaboration are key to Agile success. I argue that trust and collaboration are keys to Agile that produces secure software.

In SCRUM, what is going to be built is considered during user story creation. That’s the “early” part. A close relationship with the Product Owner is critical to get security user stories onto the backlog and critical during user story prioritization. And, the security person should be familiar with any existing architecture during user story creation. That way, security can be considered in context and not ivory tower.

I’ve seen security teams develop a basic set of user stories that fit a particular class or type of project. These can be templated and simply added in at the beginning, perhaps tweaked for local variation.
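
For illustration, such a template set might look like the sketch below; the project classes and story wording are invented, not a standard:

```python
# Hypothetical security user-story templates, keyed by project class.
# Both the classes and the story texts are illustrative inventions.

SECURITY_STORY_TEMPLATES = {
    "web_app": [
        "As a user, my session expires after inactivity, so that a "
        "stolen browser session has limited value.",
        "As an operator, authentication events are logged, so that "
        "abuse can be investigated.",
    ],
    "service_api": [
        "As a service owner, every request is authenticated, so that "
        "only authorized callers reach the API.",
    ],
}

def seed_backlog(project_class, backlog, tweaks=()):
    """Add this project class's template stories to the backlog,
    then append any local variations (tweaks)."""
    backlog.extend(SECURITY_STORY_TEMPLATES.get(project_class, []))
    backlog.extend(tweaks)
    return backlog

backlog = seed_backlog("web_app", backlog=[])
```

The templating is the point: the Product Owner sees security arrive as ordinary user stories, prioritized through the ordinary backlog process, rather than as an external mandate.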

At the beginning of each Sprint, stories are chosen for development out of the backlog. During this process, considerable design takes place. Make security an integral part of that process, either through direct participation or by proxy.

Really, in my experience, all the key voices should be a part of this selection and refinement process. Quality people can offer why a particular approach is easier to test, architects can offer whether a story has been accounted for in the architecture, etc. Security is one of the important voices, but certainly not the only one.

Security experts need to make themselves available throughout a Sprint to answer questions about implementation details, the correct way to securely build each user story under development. Partnership. Help SCRUM members be security “eyes and ears” on the ground.

Finally, since writing secure code is very much a discipline and practice, appropriate testing and vulnerability assurance steps need to be a part of every sprint. I think that these need to be part of Definition of Done.

Everyone is involved in security in Agile. Security folk can’t toss security “over the wall” and expect secure results. We have to get our hands dirty, get some implementation grease under the proverbial fingernails in order to earn the trust of the SCRUM teams.

Trust and collaboration are success factors, but these are also security factors. If the entire team are considering security throughout the process, they can at the very least call for help when there is any question. In my experience, over time, many teams will develop their team security expertise, allowing them to be far more self-sufficient – which is part of the essence of Agile.

We security folk are going to have to give up control and instead become collaborators, partners. I don’t always get security built the way that I might think about it, but it gets built in. And I learn lots of interesting, creative, innovative approaches from my colleagues along the way.

cheers

/brook

RSA 2013 & Risk

Monday, February 25th, I had the opportunity to participate in the Risk Seminar at RSA 2013.

My co-presenters/panelists are pursuing a number of avenues to strengthen not only their own practices, but the practice for all of us. It turns out, as I have long believed, that plenty of us practitioners have come to understand just how difficult a risk practice is. We work with an absence of hard longitudinal data, with literally thousands of variant scenarios, generally assessing ambiguous situations. Is that hard enough?

Further complicating our picture and methods is the lack of precision when we discuss risk. We all think we know what we mean, but do we? Threats get “prioritized” into risks. Vulnerabilities, taken by themselves with no context whatsoever, are assigned “risk levels”. Vulnerability discovery tools’ ratings don’t help this discussion.

We all intuitively understand what risk is; we calculate risk every day. We also intuitively understand that risk has a threat component and a vulnerability component, that vulnerabilities have to be exploited, and that threats must have the capacity and the access to exploit. There are mitigation strategies, and risk always involves some sort of impact or loss. But expressing these relationships is difficult. We want to shorthand all that complexity into simple, easily digestible statements.

Each RSA attendee received a magazine filled with short articles culled from the publisher’s risk archives. Within the first sentences risk was equated to threat (not that the writer bothered to define “threat”), to vulnerability, to exploit. Our risk language is a jumble!

Still, we plod on, doing the best that we can with the tools at hand, mostly our experience, our ability to correlate and analyze, and any wisdom that we may have picked up along the journey.

Doug Graham from EMC is having some success through focusing on business risk (thus bypassing all those arguments about whether that Cross-site Scripting Vulnerability is really, really dangerous). He has developed risk owners throughout his sister organizations much as I’ve used an extended team of security architects successfully at a number of organizations.

Summer Fowler from Carnegie-Mellon reported on basic research being done on just how we make risk decisions and how we can make our decisions more successful and relevant.

All great stuff, I think.

And me? I presented the risk calculation methodology that Vinay Bansal and I developed at Cisco, based on previous, formative work by Catherine Blackadder Nelson and Rakesh Bharania. Hopefully, someday, the SANS Institute will be able to publish the Smart Guide that Vinay and I wrote describing the method?*

That method, named “Just Good Enough Risk Rating” (JGERR), is a lightweight and repeatable process through which assessors can generate numbers that are comparable. That is one of the most difficult issues we face today; my risk rating is quite likely to be different from yours.

It’s difficult to get the assessor’s bias out of risk ratings. Part of that work, I would argue, should be done by every assessor. Instead of pretending that one can be completely rational (impossible and actually wrong-headed, since risk always involves value judgements), I call upon us all to do our personal homework by understanding our own, unique risk preferences. At least let your bias be conscious and understood.

Still, JGERR and similar methods (there are similar methods, good ones! I’m not selling anything here) help to isolate assessor bias and at the very least put some rigour into the assessment.

Thanks to Jack Jones’ FAIR risk methodology, upon which JGERR is partly based, I believe that we can adopt a consistent risk lexicon, developing a more rigorous understanding of the relationships between the various components that make up a risk calculation. Thanks, Jack. Sorry that I missed you; you were speaking at the same time that I was, in the room next door to mine. Sigh.

As Chris Houlder, CISO of Autodesk, likes to say, “progress, not perfection”. Yeah, I think I see a few interesting paths to follow.

cheers

/brook

*In order to generate some demand, may I suggest that you contact SANS and tell them that you’d like the guide? The manuscript has been finished and reviewed for more than a year as of this post.

Seriously? Product Security?

Seriously? You responded to my security due diligence question with that?

Hopefully, there’s a lesson in this tale of woe about what not to do when asked about product security?

This incident has been sticking in my craw for about a year. I think it’s time to get it off my chest. If for no other reason, I want to stop thinking about this terrible customer experience. And yes, for once, I’m going to name the guilty company. I wasn’t under NDA in this situation, as far as I know?

There I was, Enterprise Security Architect for a mid-size company (which shall not be named; no gossip, ever, from this blog). Part of my job was to ensure that vendors’ product security was strong enough to protect my company’s security posture. There’s a due diligence responsibility assigned to most infosec people. In order to fulfill this responsibility, it has become typical practice to research software vendors’ product security practices. Based upon the results, one either mitigates uncovered risks to policy and industry standards or raises the risk to organizational decision makers (and there are always risks, right?).

Every software vendor goes through these due diligence investigations on a regular basis. And I do mean “every”.

I’ve lived on both sides of this fence, conducting the investigations and having my company’s software go through many investigations. This process is now a part of the fabric of doing secure business. There should be nothing surprising about the questions. In past positions, we had a vendor questionnaire, a risk scale based upon the expected responses, and standards against which to measure the vendor. These tools help to build a repeatable process. One of these processes is documented in a SANS Institute Smart Guide released in 2011 and was published by Cisco, as well.
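
A minimal caricature of such a repeatable rating process follows; the questions, weights, and thresholds are invented for illustration:

```python
# Sketch of a repeatable vendor-questionnaire rating. The questions,
# weights, and risk thresholds are invented for illustration only.

QUESTIONS = {
    "has_secure_sdlc": 3,            # question -> weight
    "performs_pen_testing": 2,
    "has_vuln_disclosure_policy": 1,
}

def rate_vendor(responses):
    """Score affirmative answers against the standard. Any unanswered
    question scores zero: in the face of no information, the safe
    default is high risk."""
    max_score = sum(QUESTIONS.values())
    score = sum(w for q, w in QUESTIONS.items() if responses.get(q) is True)
    if score == max_score:
        return "low risk"
    return "medium risk" if score >= max_score / 2 else "high risk"

rating = rate_vendor({"performs_pen_testing": True})  # -> "high risk"
```

The structural point is that the questionnaire, the expected responses, and the scale exist before any vendor conversation starts, which is what makes the process repeatable rather than a matter of each assessor’s mood.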

Now, I’m going to name names. Sorry, Google, I’m going to detail just what your Docs sales team said to me. Shame on you!

When I asked about Google Docs product security here is the answer, almost verbatim, that I received from the sales team:

“We’re Google. We can hire Vint Cerf if we want. That is enough.”

Need I point out to my brilliant readers that Dr. Vint Cerf, as far as I know, has never claimed to be an information security expert? I’m sure he knows far more about the design of TCP/IP than I do (though I remind readers that I used to write TCP/IP stacks, so I’m not entirely clueless, either). And Dr. Cerf probably knows a thing or two about Internet security, since he runs ICANN?

Still, I can tell you authoritatively that TCP/IP security and Domain Name Registry security are only two (fairly small) areas of an information security due diligence process that is now standard for software vendors to pass.

Besides, the sales team didn’t answer my questions. This was a classic “Appeal to Authority”. And, unfortunately, they didn’t even bother to appeal to a security authority. Sorry, Vint; they took your name in vain. I suppose this sort of thing happens to someone of your fame all the time?

Behind the scenes, my counter-part application architect and I immediately killed any possible engagement with Google Docs. Google Sales Team, you lost the sale through that single response. The discussion was over; the possibility of a sale was out, door firmly closed.

One of the interesting results from the wide adoption of The Web has been the need for open and transparent engagement. Organizations that engage honestly gain trust through their integrity, even in the face of organizational mis-steps and faux pas. Organizations that attempt the old-fashioned paradigm, “control all communications”, lose trust, and lose it rapidly and profoundly. Commercial companies, are you paying attention? This is what democracy looks like (at least in part. But that’s a different post, I think?).

Product security is difficult and demanding. There are many facets that must complement each other to deliver acceptable risk. And, even with the best intentions and execution, one will still have unexpected vulnerabilities crop up from time to time. We only have to look at the development of Microsoft’s product security programme to understand how one of the best in the industry (in my humble opinion) will not catch everything. Do Microsoft bugs surface? Yes. Is the vulnerability level today anywhere near what it was 10 years ago? Not even close. Kudos, Microsoft.

It’s long past the time that any company can coast on reputation. I believe that Google do some very interesting things towards the security of their customers. But their sales team need to learn a few lessons in humility and transparency. Brand offers very little demonstrable protection. Google, you have to answer those due diligence questionnaires honestly and transparently. Otherwise the infosec person on the other side has nothing against which to base her/his risk rating. And, in the face of no information, the safest bet is to rate “high risk”. Default deny rule.

It’s a big world out there and if your undiscovered vulnerabilities don’t get’cha now, they will eventually. Beware; be patient; be humble; remain inquisitive; work slowly and carefully. You can quote me on that.

cheers,

/brook

Alan Paller’s Security Architecture Call To Action

I was attending the 2nd SANS What Works in Security Architecture Summit, listening to Alan Paller, Director of Research at the SANS Institute, and Michele Guel, National Cybersecurity Award winner. Alan has given us all a call to action, namely, to build a collection of architectural solutions for security architecture practitioners. The body of work would be akin to Up To Date for medical clinicians.

I believe that creating these guides is one of the next critical steps for security architects as a profession. Why?

It’s my contention that most experienced practitioners are carrying a bundle of solution patterns in their heads. Like a doctor, we can eyeball a system, view the right type of system diagram, get a few basic pieces of information, and then begin very quickly to assess which of those patterns may be applicable.

In assessing a system for security, of course, it is often like peeling an onion: more and more issues surface as the details are uncovered. Still, I’ve watched experienced architects very quickly zero in on the most relevant questions. They’ve seen so many systems that they intuitively understand where to dig deeper in order to assess attack vectors, and what is not important.

One of the reasons this can happen so quickly is knowledge that is almost entirely local to each practitioner’s environment. Architects must know their environments intimately. That local knowledge will eliminate irrelevant threats and attack patterns. Plus, the architect also knows which security controls are pre-existing in supporting systems. These controls have already been vetted.

Along with the local knowledge, the experienced practitioner will have a sense of the organization’s risk posture and security goals. These will highlight not only threats and classes of vulnerabilities, but also the value of assets under consideration.

Less obvious, a good security architecture practitioner brings a set of practical solutions that are tested, tried, and true that can be applied to the analysis.

As a discipline, I don’t think we’ve been particularly good at building a collective knowledge base. There is no extant body of solutions to which a practitioner can turn. It’s strictly the “school of hard knocks”; one learns by doing, by making mistakes, by missing important pieces, by internalizing the local solution set, and hopefully, through peer review, if available.

But what happens when a lone practitioner is confronted with a new situation or technology area with which he or she is unfamiliar? Without strong peer review and mentorship, where does the architect turn?

Today, there is nothing at the right level.

One can research protocols, vendors, technical details. Once I’ve done my research, I usually have to infer the architectural patterns from these sources.

And, have you tried getting a vendor to convey a product’s architecture? Reviewing a vendor’s architecture is often fraught with difficulties: salespeople usually don’t know, and sales engineers just want to make the product work, typically as a cookie cutter recipe. How many times have I heard, “but no other customer requires this!”

There just aren’t many documented architectural patterns available. And, of the little that is available, most has not been vetted for quality and proof.

Alan Paller is calling on us to set down our solutions. SANS’ new Smart Guide series was created for this purpose. These are intended to be short, concise security architecture solutions that can be understood quickly. These won’t be white papers nor the results of research. The solutions must demonstrate a proven track record.

There are two tasks that must be accomplished for the Smart Guide series to be successful.

  • Authors need to write down the architectures of their successes
  • The series will need reviewers to vet guides so that we can rely on the solutions

I think that the Smart Guides are one of the key steps that will help us all mature security architecture practice. From the current state of the art (often based upon personality), we will force ourselves to:

  • Be succinct
  • Be understandable to a broad range of readers and experience levels
  • Be methodical
  • Set down all the steps required to implement – especially considering local assumptions that may not be true for others
  • Eliminate excess

And, by vetting these guides for worthwhile content, we should begin to deliver more consistency in the practice. I also think that we may discover that there’s a good deal of consensus in what we do every day**. Smart Guides have to be juried by a body of peers in order for us to trust the content.

It’s a Call to Action. Let’s build a repository of best practice. Are you willing to answer the call?

cheers

/brook

** Simply getting together with a fair sampling of people who do what I do makes me believe that there is a fair amount of consensus. But that’s entirely my anecdotal experience in the absence of any better data.

What is this thing called “Security Architecture”?

In February 2009, while I was attending an Open Group Architecture Conference in Bangalore, India, several of us got into a conversation about how security fits into enterprise architecture. A couple of the main folks from the Open Group were involved (sorry, I don’t remember who). They’d been struggling with this same problem: Is high level security participation in the process architectural, as defined for IT architecture?

Participants in that particular conversation had considered several possibilities. Maybe there is such a thing as a security architect? Maybe security participation is as a Subject Matter Expert (SME)? Maybe it’s all design and engineering?

Interesting questions.

I think how an organization views security may depend on how security is engaged and for what purposes. If security is engaged at an engineering level only,

  • “please approve my ACLs”
  • “what is the script for hardening this system? Has it been hardened sufficiently?”
  • “is this encryption algorithm acceptable?”
  • “is this password strength within policy?”
  • etc.

Then that organization will not likely have security architects. In fact (and I’ve seen this), there may be absolutely no concept of security architecture. “Security architecture” might be like saying, “when pigs fly”.

If your security folks are policy & governance wonks, then too, there may not be much of a security architecture practice.

In organizations where security folks are brought in to help projects achieve an appropriate security posture, and also to vet that a proposal won’t in some way degrade the current security posture of the organization, my guess is that these driving tasks foster the development of a security architecture practice?

However, in my definition of security architecture, I wouldn’t limit the practice to vetting project security. There can be more to it than that.

  • security strategy
  • threat landscape, (emerging term: “threatscape”)
  • technology patterns, blueprints, standards, guides
  • risk analysis, assessment, communication and decisions
  • emerging technology trends and how these affect security

A strong practitioner will be well versed in a few of these and at least conversant in all at some level. Risk analysis is mandatory.

I’ve held the title “Security Architect” for almost 10 years now. In that time, my practice has changed quite a lot, as my understanding, and my team’s understanding, have grown. Additionally, we’ve partnered alongside an emerging enterprise architecture team and practice, which has helped me understand not only the practice of system architecture, but my place in that practice as a specialist in security.

Interestingly, I’ve seen security architecture lead or become a practice before enterprise architecture! Why is that? To say so, to observe so, seems entirely counter-intuitive – which is where the Open Group folks were coming from, I fear. They are focused deeply on IT architecture. Security may be a side-show in their minds, perhaps even, “oh, those guys who maintain the IDS systems”?

Still, it remains in my experience that security may certainly lead architecture practice. Your security architects will have to learn to take a more holistic approach to systems than your development and design community may require. There’s a fundamental reason for this!

You can’t apply the right security controls for a system that you do not understand from bottom to top, front to back, side to side, all flows, all protocols, highest sensitivity of data. Period.

Security controls tend to be very specific; they protect particular points of a system or flow. Since controls tend to be specific, we generally overlap them, building our defense out of multiple controls, the classic, “defense-in-depth”. In order to apply the correct set of controls, in order not to specify more than is necessary to achieve an appropriate security posture, each control needs to be understood in the context of the complete system and its requirements. No control can be taken in isolation from the rest of the system. No control is a panacea. No control in isolation is unbreakable. The architectural application of controls is what brings the system to an appropriate level of risk.

But project information by itself isn’t really enough to get the job done, it turns out. By now I’ve mentored a few folks just starting out in security architecture. While we’re certainly not all the same, teaching the security part is the easy part, it turns out (in my experience). The harder parts to acquire are a working understanding of:

  • Knowledge of the local systems and infrastructures, how these work, what are their security strengths and weak points
  • The local policies and standards, and a feel for industry standard approaches
  • What can IT build easily, what will be difficult?
  • Understanding within all the control domains of security at least at a gross level
  • Design skills, ability to see patterns of application across disparate systems
  • Ability to see relationships across systems
  • Ability to apply specific controls to protect complex flows
  • Depth in at least 2 technological areas
  • A good feel for working with people

Due to the need to view complex systems holistically in order to apply appropriate security, the successful security architects quickly get a handle on the complexity and breadth of a system, all the major components, and how all of these will communicate with each other. Security architects need to get proficient at ignoring implementation details and other “noise”, say the details of user interface design (which only rarely, I find, have security implications).

The necessity of viewing a system at an architectural level and then applying patterns where applicable is the bread and butter of an architectural approach. In other words, security must be applied in an architectural manner; practitioners have no choice if they are to see all the likely attack vectors and mitigate them.

Building a strong defense in depth I think demands architectural thinking. That, at least is what we discovered 9-10 years ago on our way to a security architecture practice.

And, it’s important to note that security architects deal in “mis-use” cases, not “use cases”. We have to think like an attacker. I’ve often heard a system designer complain, “but the system isn’t built to work that way”. That may be, but if the attacker can find a way, s/he will if that vector delivers any advantage.

One of my working mantras is, “think like an attacker, but question like a teammate”. In a subsequent post, I’d like to take this subject up more deeply as it is at the heart of what we have to do.

And, as I wrote above, because of this necessity to apply security architecturally, security architects may be among your first system architects! “Necessity is the mother of invention”, as they say.

Security architecture may lead other domains or the enterprise practice. I remember in the early days that we often grumbled to each other about having to step into a broader architecture role simply because it was missing. In order to do our job, we needed architecture. In the absence of such a practice, we would have to do it for projects simply because we, the security architects, couldn’t get our work completed. We’d step into the breach. This may not be true in your organization? But in my experience, this has been a dirty little secret in more than one.

So, if you’re looking to get architecture started in your IT shop, see if the security folks have already put their toes in the water.

In the last year, I believe that a consensus is beginning to emerge within the security industry about security architecture. I’ve seen a couple of papers recently that are attempts to codify our body of practice, the skill set, the process of security architecture. It seems an exciting and auspicious time. Is security architecture finally beginning to come of age?

The time seems ripe for a summit of practitioners in order to talk about these very issues. It’s great that SANS is pulling us together next week in Las Vegas, What Works In Security Architecture. Yours truly will be speaking a couple of times at the summit.

Please join me next week, May 24-26, 2010 in Las Vegas. Engage me in lively and spirited dialog about security architecture.

cheers

/brook

Executive Information Security Risk Dialog

Ah, sigh, Marcus and I almost never agree! And, this is often a matter of lexicon and scoping. In this case, Marcus stays away from the thorny issue of risk methodology. But this, I think, is precisely the point to discuss.

Think about the word “risk”. How many definitions have you seen? The word is so overloaded and imprecise (and undefined in Marcus’ rant) that saying “risk” without definition will almost surely undermine the technical/C-level management dialog!

I would argue that the failure of the security-to-executive risk dialog has less to do with C-level inability to grasp security issues than it has to do with our imprecision in describing risk for them!

Take Jack Jones. When at Nationwide, he found himself mired within Ranum’s rant. I’ve been there, too. Who hasn’t?

To solve the problem and feel more effective, he spent 5 years designing a usable, repeatable, standardized risk methodology. (FAIR. And yes, you’ve heard me rant about it before)

Jack described to me just how dramatically his C-level conversations changed. Suddenly, crisp interactions were taking place. What changed?

I see the failure of the dialog as having these components:

  • poor risk methodology
  • poor communication about risk
  • not treating executives as executives

Let me explain.

Poor risk methodology is the leading cause of inaccurate and unproductive risk dialog. When we say “risk” what is our agreed upon meaning?

I’ve participated in many discussions that treat risk as equivalent to an open vulnerability. Period.

But, of course, that’s wrong, and actually highly misleading. Is “risk” a vulnerability added to some force that has motivation to exercise the vulnerability (“threat” – I prefer “threat agent” for this, by the way)? That again is incomplete – but that is often what we mean when we say information security risk. And, again, it’s wrong.

In fact, as I understand it, threat can be conceived as the interaction between the components that create a credible threat agent who has sufficient motivation and access in order to exercise an exposed vulnerability. This is quite complex. And, unfortunately, the mathematics involved may be beyond many security practitioners? Certainly, when in the field, these mathematics may be too much?

Most of us security professionals are pretty good on the vulnerability side. We’re good at assessing whether a vulnerability is serious or not, whether its impact can be far-reaching or localized, how much is exposed. It seems to me that it’s the threat calculation that’s the difficult piece. I’ve seen so many “methodologies” that don’t go deep enough about threat. Certainly there are reasonable lists of threat agent categories. And, maybe the methodology includes threat motivation. Maybe they include capability? But motivation, capability, risk appetite, access? Very few. And how do you calculate these?

And then there’s the issue of impact. In my practice, I like to get people close enough to the business to assess impact. That doesn’t mean I don’t have a part in the calculation. Often times the security person has a better understanding of the potential scope of vulnerability and the threats that may exercise it than a business person. Still, a business person understands what the loss of their data will mean. However impact is assessed, it has a relationship to the threat and vulnerability side. Together these are the components of risk.

I think in this space, it probably doesn’t make sense to go into the various calculations that might go into actually coming up with a risk calculation? Indeed, this is a nontrivial matter, though I have made the case that any consistent number between zero and one on threat and vulnerability side of the equation can effectively work. (But that’s another post about the hour at an RSA conference presentation that helped me to understand this little bit of risk calculation trickery.)
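
To make that little bit of trickery concrete, here is a minimal sketch, entirely my own illustration: the factor names, the normalization to [0, 1], and the multiplicative combination are assumptions for the example, not FAIR’s actual taxonomy or arithmetic.

```python
# Illustrative risk rating sketch -- an assumption-laden example,
# NOT FAIR's method. Every term is normalized to [0, 1]; the
# probability-substituting terms (threat x vulnerability) scale impact.

def risk_rating(motivation: float, capability: float, access: float,
                vulnerability: float, impact: float) -> float:
    """Combine normalized [0, 1] terms into a single [0, 1] rating."""
    for term in (motivation, capability, access, vulnerability, impact):
        if not 0.0 <= term <= 1.0:
            raise ValueError("each term must be normalized to [0, 1]")
    # A credible threat agent needs motivation, capability, AND access,
    # so multiply rather than average: any missing component zeroes it.
    threat = motivation * capability * access
    return threat * vulnerability * impact

# A motivated, capable agent with easy access to a well-exposed
# vulnerability on a high-impact asset:
high = risk_rating(0.9, 0.8, 1.0, 0.9, 0.9)
# The same vulnerability and asset, but almost no agent access:
low = risk_rating(0.9, 0.8, 0.1, 0.9, 0.9)
print(high, low)
```

The point of the sketch is only that consistent numbers between zero and one compose into a defensible relative ordering; the ratings mean nothing absolutely, but the high-access scenario reliably scores an order of magnitude above the low-access one.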

In any event, hopefully you can see that expressing some reasonable indication of risk is a nontrivial exercise?

Simply telling an executive that there is a serious vulnerability, or that there’s a flaw in an architecture that “might” lead to serious harm, waters down the risk conversation. When we do this, we set ourselves up for failure. We are likely to get “I accept the risk”, or an argument about how serious the risk is. Neither of these is what we want.

And this is why I mention “poor communication of risk” as an important factor in the failure of risk conversations. Part of this is our risk methodology, as I’ve written. In addition, there is the clarity of communication. We want to keep the conversation focused on business risk. We do not want to bog down in the rightness or wrongness of a particular project or approach, or an argument about the risk calculation. (And these are what Ranum describes) In my opinion, each of these must be avoided in order to get the dialog that is needed. We need a conversation about the acceptable business risks for the organization.

Which brings me to “treating executives as executives”. When we ask executives to come down into the weeds and help us make technical decisions we are not only inviting trouble, but we are also depriving ourselves and the organization of the vision, strategy, and scope that lies at the executive level. And that’s a loss for everyone involved.

I think we make a big mistake asking the executive to assess our risk calculation. We should have solid methodologies. It must be understood that the risk picture we’re giving is the best possible given the information at hand. There needs to be respect from the executive toward information security that the risk calculations are valid (as much as we can make them).

No FUD! Don’t ask your executives to decide minor technical details. Don’t ask your executives to dispute risk calculations. Instead, ask them to make major risk decisions, major strategic decisions for your organization. Treat them as your executives. Speak to them in business language, about business risks, which include your assessment about how the technical issues will impact the organization in business terms.

I’ve been surprised at how my own shift with respect to these issues has changed my conversation away from pointless arguments to productive decision-making. And yes, Marcus, amazingly enough, bad projects do get killed when I’ve expressed risk clearly and in business terms.

Sometimes it makes business sense to take calculated risks. One of the biggest changes I’ve had to make as a security professional is to avoid prejudice about risk in my arguments. Like many security folk, I have a bias that all risk should be removed. When I started to shift away from that bias, I started to get more real dialog. Not before.

So, Marcus, we disagree once again. But we disagree more about focus than about substance? I like to focus on the things that I can change, rather than repeating the loops of the past endlessly.

Cheers,

Brook