Book Signing At RSA

I’m signing books along with James Ransome and Anmol Misra at 11:30 AM at Digital Guru, the bookstore at RSA in San Francisco. If you’re at the conference, please do come by and say “hello”.

James and Anmol were kind enough to give me a few pages in their book, Core Software Security, so that I could ramble on about Secure Development Lifecycles (SDL).

cheers,

/brook

The “Real World” of Developer-centric Security

My friend and colleague, Dr. James Ransome, invited me late last Winter to write a chapter for his 10th book on computer security, Core Software Security (with co-author Anmol Misra), published by CRC Press. My chapter is “The SDL In The Real World” (SDL = “Secure Development Lifecycle”). The book was released December 9, 2013. You can get copies from the usual sources (no adverts here, as always).

It was an exciting process. James and I spent hours whiteboarding possible SDL approaches, which was very fun, indeed*. We collectively challenged ourselves to uncover current SDL assumptions, poke at the validity of these, and find better approaches, if possible.

Many of you already know that I’ve been working for a rather long time towards a different approach to the very difficult, multi-dimensional, multi-variate problem of designing and implementing secure software. Some of my earlier work has been presented to the industry on a regular basis.

Specifically, during the period of 2007-9, I talked about a new (then) approach to security verification that would be easy for developers to integrate into their workflow and which wouldn’t require a deep understanding of security vulnerabilities nor of security testing. At the time, this approach was a radical departure.

The proving ground for these ideas was my program at Cisco, Baseline Application Vulnerability Assessment, or BAVA, for short (“my” here does not exclude the many people who contributed greatly to BAVA’s structure and success. But it was more or less my idea and I was the technical leader for the program).

But, is ease and simplicity all that’s necessary? By now, many vendors have jumped on the bandwagon; BAVA’s tenets are hardly even newsworthy at this point**. Still, the dream has not been realized, as far as I can see. Vulnerability scanning still suffers from a slew of impediments from a developer’s view:

  • Results count vulnerabilities, not software errors
  • Results are noisy; many variations of a single error are often reported as separate findings
  • Tools are hard to set up
  • Tools require considerable tool knowledge and experience, too much for developers’ highly over-subscribed days
  • Qualification of results requires more in-depth security knowledge than even senior developers generally have (much less an average developer)

And that’s just the tool side of the problem. What about architecture and design? What about building security in during iterative, fast paced, and fast changing agile development practices? How about continuous integration?

As I was writing my chapter, something crystallized. I named it “developer-centric security”, which then managed to get wrapped into the press release and marketing materials of the book. Think about this: how does the security picture change if we re-shape what we do by taking the developer’s perspective rather than a security person’s?

Developer-centric software security then reduces to a single, pointed question:

What am I doing to enable developers to innovate securely while they are designing and writing software?

Software development remains a creative and innovative activity. But so often, we on the security side try to put the brakes on innovation in favour of security. Policies, standards, and the like all try to set out the rules by which software should be produced. From an innovator’s view, at least some of the time developers are iterating through solutions to a new problem while searching for the best way to solve it. How might security folk enable that process? That’s the question I started to ask myself.

Enabling creativity, thinking like a developer, while integrating into her or his workflow is the essence of developer-centric security. Trust and verify. (I think we have to get rid of that old “but”)

Like all published works, the book represents a point-in-time. My thinking has accelerated since the chapter was completed. Write me if you’re intrigued, if you’d like more about developer-centric security.

Have a great day wherever you happen to be on this spinning orb we call home, Earth.

cheers,

/brook

*Several of the intermediate diagrams boggle the mind with their complexity and busyness. Like much software development, we had to work iteratively. Intermediate ideas grew and shifted as we worked. A creative process?

**At the time, after hearing BAVA’s requirements, one vendor told me, “I’ll call you back next year.” Six months later, on a vendor webcast, that same vendor was extolling the very tenets that I’d given them earlier. Sea change?

Of Flaws, Requirements, And Templates

November 13th, 2013, I spoke at the annual BSIMM community get-together in Reston, Virginia, USA. I had been asked to co-present on Architecture Risk Assessment (ARA) and threat modeling alongside Jim Delgrosso, long-time Cigital threat modeler. We gave two talks:

  1. Introduction to Architecture Risk Assessment And Threat Modeling
  2. Scaling Architecture Risk Assessment And Threat Modeling To An Enterprise

Thanks much, BSIMM folk, for letting me share the tidbits that I’ve managed to pick up along my way. I hope that we delivered on some of the promises of our talks? One of my personal values from my participation in the conference was interacting with other experienced practitioners.

Make no mistake! ARA-threat modeling is and will remain an art. There is science involved, of course. But people who can do this well learn through experience and the inevitable mistakes and misses. It is a truism that it takes a fair amount of background knowledge (not easily gained):

  • Threat agents
  • Attack methods and goals used by each threat agent type
  • Local systems and infrastructure into which a system under analysis will go
  • Some form of fairly sophisticated risk rating methodology.

These specific knowledge sets sit on top of significant design ability and experience. The assessor has to be able to understand a system architecture and then to decompose it to an appropriate level.

The knowledge domains listed above are pre-requisite to an actual assessment. That is, there are usually years of experience in system and/or software design, in computer architectures, in attack patterns, threat agents, security controls, etc., that the assessor brings to an ARA. One way of thinking about it is that ARA/threat modeling is applied computer security, computer security applied to the system under analysis.

Because ARA is a learned skill with many local variations, I find it fascinating to match my practice to practices that have been developed independently of mine. What is the same? Where do we have consensus? And, importantly, what’s different? What have I missed? How can I improve my practice based upon others’? These co-presentations and conversations are priceless. Interestingly, Jim and I agreed about the basic process we employ:

  • Understand the system architecture, especially the logical/functional representation
  • Uncover intended risk posture of the system and the risk tolerance of the organization
  • Understand the system’s communication flows, to the protocol interaction level
  • Get the data sensitivity of each flow and for each storage. Highest sensitivity rules any resulting security needs
  • Enumerate attack surfaces
  • Apply relevant active threat agents and their methodologies to attack surfaces
  • Filter out protected, insignificant, or unlikely attack vectors and scenarios
  • Output the security that the system or the proposed architecture and design are missing in order to fulfill the intended security posture
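
To make that flow concrete, here is a minimal toy sketch in Python. It is my illustration only, not a formal methodology or tool: the component names, sensitivity levels, threat agents, and the control lookup are all invented, and the “highest sensitivity rules” step is reduced to a simple table.

```python
# Toy sketch of the assessment flow above (illustrative only): each attack
# surface carries the data flows that cross it, the highest flow sensitivity
# drives the needed protection, reachable threat agents are applied, covered
# or unreachable vectors are filtered out, and what remains is emitted as
# missing security requirements.
from dataclasses import dataclass, field
from typing import List

SENSITIVITY_RANK = {"public": 0, "internal": 1, "restricted": 2}

# Protection each sensitivity level calls for in this toy model.
NEEDED_CONTROLS = {
    "public": set(),
    "internal": {"authentication"},
    "restricted": {"authentication", "encryption in transit"},
}

# Which exposures each active threat agent can reach.
THREAT_AGENTS = {
    "cybercriminal": {"internet"},
    "malicious insider": {"internet", "internal network"},
}


@dataclass
class AttackSurface:
    component: str
    exposure: str                          # "internet" or "internal network"
    flow_sensitivities: List[str]          # sensitivity of each crossing flow
    existing_controls: set = field(default_factory=set)


def missing_requirements(surfaces):
    requirements = []
    for s in surfaces:
        # Highest sensitivity of any flow rules the security needs.
        top = max(s.flow_sensitivities, key=SENSITIVITY_RANK.get)
        needed = NEEDED_CONTROLS[top] - s.existing_controls
        agents = [a for a, reach in THREAT_AGENTS.items() if s.exposure in reach]
        if needed and agents:              # filter out covered or unreachable vectors
            requirements.append((s.component, sorted(needed), agents))
    return requirements


if __name__ == "__main__":
    surfaces = [
        AttackSurface("web front end", "internet",
                      ["public", "restricted"], {"authentication"}),
        AttackSurface("admin console", "internal network",
                      ["internal"], {"authentication"}),
    ]
    for component, needed, agents in missing_requirements(surfaces):
        print(f"{component}: needs {needed} against {agents}")
```

In a real assessment every one of those tables is replaced by judgement and local knowledge; the sketch only shows how the listed steps feed one another.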

There doesn’t seem to be much disagreement on what we do. That’s good. It means that this practice is coalescing. The places where we disagree or approach the problem differently are, I think, pretty interesting.

Gary McGraw calls security architecture misses “flaws”, as opposed to software bugs. Bugs can be described as errors in implementation, usually coding. Flaws would then be those security items which didn’t get considered during architecture and design, or which were not designed correctly (like poorly implemented authentication, authorization, or encryption). I would agree that implementing some sort of no-entropy scramble and then believing that you’ve built “encryption” is indeed both a design flaw and an implementation error*.

I respect Gary’s opinion greatly. So I carefully considered his argument. My personal “hit”, not really an opinion so much as a possible rationale, is that “flaw” gets more attention than, say, “requirement”? This may especially be true at the senior management level? Gary, feel free to comment…

Why do I prefer the term “requirement”? Because I’m typically attempting to fit security into an existing architecture practice. Architecture takes “requirements” and turns these into architecture “views” that will fulfill the requirements. So naturally, if I want security to get implemented, I will have to deliver requirements into that process.

Further, if I name security items that the other architects may have missed as “flaws”, I’m not likely to make any friends amongst my peers, the other architects working on a system. They may take umbrage at my describing their work as flawed? They bring me into analysis in order to get a security view on the system, to uncover through my expertise security requirements that they don’t have the expertise to discover.

In other words, I have very good reasons, just as Gary does, for using the language of my peers so that my security work fits as seamlessly as possible into an existing architecture practice.

The same goes for architecture diagram preferences. Jim Delgrosso likes to proffer a diagram template for use. That’s a great idea! I could do more with this, absolutely.

But once again, I’m faced with an integration problem. If my peers prefer Data Flow Diagrams (DFD), then I’d better be able to analyze a system based upon its DFD. If architects use UML, I’d better be able to understand UML. Ultimately, if I prize integration, unless there’s no existing architecture approach with which to work, my best integration strategy is to make use of whatever is typical and normal, whatever is expected. Otherwise, if I demand that my peers use my diagram, they may see me as obstructive, not collaborative. I have (so far) focused on integration with existing practices and teams.

As I spend more time teaching (and writing a book about ARA), I’m finding that having accepted whatever I’ve been given in an effort to integrate well, I haven’t created a definitive system ARA diagram template from which to work (though I have lots of samples). That may be a miss on my part? (architectural miss?)

Some of the different practices I encountered may be due to differing organizational roles? Gary and Jim are hired as consultants. Because they are hired experts, perhaps they can prescribe more? Or, indeed, customers expect a prescriptive, “do it this way” approach? Since I’ve only consulted sparingly and have mostly been hired to seed and mentor security architecture practices from the inside, perhaps I don’t have enough experience to understand consultative demands and expectations? I do know that I’ve had far more success through integration than through prescription. Maybe that’s just my style?

You, my readers, can, of course, choose whatever works for you, depending upon role, maturity of your organization, and such.

Thanks for the ideas, Jim (and Gary). It was truly a great experience to kick practices around with you two.

cheers,

/brook

*We should be long past the point where anyone believes that a proprietary scramble protects much. (Except, of course, I occasionally do come across exactly this!).

Security Testing Is Dead. Rest In Peace? Not!

Apparently, some Google presenters are claiming that we can do away with the testing cycles in our software development life cycles? There’s been plenty of reaction to Alberto Savoia’s Keynote in 2011. But I ran into this again this Spring (2013)! So, I’m sorry to bring this up again, but I want to try for a security-focused statement…

The initial security posture of a piece of software is dependent upon the security requirements for that particular piece of software or system. In fact, the organizational business model influences an organization’s security requirements, which in turn influence the kinds of testing that any particular software or system will need before and after release. We don’t sell and deliver software in a vacuum.

Google certainly present a compelling case for user-led bug hunting. Their bounty programme works! But there are important reasons that Google can pull off user-led testing for some of their applications when other businesses might die trying.

It’s important to understand Google’s business model and infrastructure in order to understand a business driven bounty programme.

  • Google’s secured application infrastructure
  • Build it and they might come
  • If they come, how to make money?
  • If Google can’t monetize, can they build user base?

First and foremost, Google’s web application execution environment has got to have a tremendous amount of security built into it. Any application deployed to that infrastructure inherits many security controls. (I’ll exclude Android and mobility, since these are radically different.) Google applications don’t have to implement identity systems, authorization models, user profiles, document storage protection, and the panoply of administrative and network security systems that any commercial, industrial-strength cloud must deploy and run successfully. Indeed, I’m willing to guess that each application Google deploys runs within a certain amount of sandboxed isolation such that failure of that application cannot impact the security and performance of the other applications running on the infrastructure.

In past lives, this is precisely how we built large application farms: sandbox and isolation. When a vulnerable application gets exploited, other applications sharing the infrastructure cannot be touched. We also made escape from the sandbox quite difficult in order to protect the underlying infrastructure. Google would be not only remiss, but clueless to allow buggy applications to run in any less isolating environment. And, I’ve met lots of very smart Google folk! Scary smart.

Further, from what I’ve been told, Google has long since implemented significant protections for our Google Docs. Any application that needs to store documents inherits this document storage security. (I’ve been told that Google employ some derivation of Shamir’s Threshold Scheme such that unless an attacker can obtain M of N stored versions of a document, the attacker gains no data whatsoever. This also thwarts the privileged insider attack.)
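
For readers unfamiliar with threshold schemes, here is a minimal, illustrative sketch of the classic Shamir construction: any M of N shares reconstruct the secret, while fewer reveal nothing. It is a toy over a single prime field, written purely for clarity; I have no knowledge of Google’s actual implementation, and the secret, prime, and share counts below are arbitrary.

```python
# Toy Shamir threshold secret sharing (k-of-n), for illustration only.
import random

PRIME = 2**127 - 1  # a Mersenne prime, comfortably larger than the toy secret


def split_secret(secret, k, n):
    """Split `secret` into n shares such that any k of them reconstruct it."""
    # Random polynomial of degree k-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]

    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME

    return [(x, poly(x)) for x in range(1, n + 1)]


def recover_secret(shares):
    """Lagrange interpolation at x = 0 recovers the constant term (the secret)."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        # Modular inverse via Fermat's little theorem (PRIME is prime).
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret


if __name__ == "__main__":
    shares = split_secret(secret=123456789, k=3, n=5)
    print(recover_secret(shares[:3]))   # any 3 of the 5 shares: 123456789
    print(recover_secret(shares[2:]))   # a different 3 shares: 123456789
```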

My simple point is that Google is NOT entirely relying upon its external testers, its bug bounty programme. A fair amount of security is inherent and inherited by Google’s web application experiments.

And, we must understand Google’s business model. As near as I can tell from the outside, Google throws a lot of application “spaghetti” onto the Web. If an application “sticks”, that is, attracts a user base, then Google figure out how to monetize the application. If the application can’t be monetized, Google may still support the application for marketing (popularity, brand enhancement) purposes. Applications that don’t generate interest are summarily terminated.

In my opinion, Google’s business model leaves a lot of wiggle room for buggy software. Many of these experiments have low expectations, perhaps no expectation at the outset? This gives Google time to clean the code as the application builds user base and penetration. If nobody is dependent upon an application, then there’s not a very high security posture requirement. In other words, Google can take time to find the “right product”. This is entirely opposite for security functions that must deliver protection independent of any support (like on an endpoint that can be offline). Users expect security software to be correct on installation: the product has to be built “right”, right from the start.

And, the guts of Google are most likely protected from any nasty vulnerabilities. So, user testing makes a lot of business sense and does not pose a brand risk.

Compare this with an endpoint security product from an established and trusted brand. Not only must the software actually protect the customer’s endpoint, it’s got to work when the endpoint is not connected to anything, much less the Internet (i.e., can’t “phone home”). Additionally, security software must meet a very high standard for not degrading the posture of the target system. That is, the software shouldn’t install vulnerabilities that can be abused alongside the software’s intended functionality. Security software must meet a very high standard of security quality. That’s the nature of the business model.

I would argue that security software vendors don’t have a great deal of wiggle room for user-discovery of vulnerabilities. Neither do medical records software, nor financials. These applications must try to be as clean as possible from the get go. Imagine if your online banking site left its vulnerability discovery to the user community. I think it’s not too much of a leap to imagine the loss in customer confidence that this approach might entail?

I’ll state the obvious: different businesses demand different security postures and have different periods of grace for security bugs. Any statement that makes a claim across these differences is likely spurious.

Google, in light of these obvious differences, may I ask your pundits to speak for your own business, rather than assuming that you may speak for all business models or trumpeting a “new world order”? Everyone, may I encourage us to pay attention to the assumptions inherent in claims? Not all software is created equally, and that’s a “Good Thing” ™.

By the way, Brook Schoenfield is an active Google+ user. I don’t intend to slam Google’s products in any manner. Thank you, Google, for the great software that I use every day.

cheers

/brook


Agile & Security, Enemies For Life?

Are Agile software development and security permanent enemies?

I think not. In fact, I have participated in SCRUM development where building security into the results was more effective than classic waterfall. I’m a Secure Agile believer!

Dwayne Melancon, CTO of Tripwire, opined for BrightTALK on successful Agile for security.

Dwayne, I wish it was that simple! “Engage security early”. How often have I said that? “Prioritize vulnerabilities based on business impact”. Didn’t I say that at RSA? I hope I did?

Yes, these are important points. But they’re hardly news to security practitioners in the trenches building programmes.

Producing secure software when using an Agile method is not quite as simple as “architect and design early”. Yes, that’s important, of course. We’ve been saying “build security in, don’t bolt it on” for years now. That has not changed.

I believe that the key is to integrate security into the Agile process from architecture through testing. Recognize that we have to trust, work with, and make use of the agile process rather than fighting it. After all, one of the key values that makes SCRUM work is trust. Trust and collaboration are key to Agile success. I argue that trust and collaboration are keys to Agile that produces secure software.

In SCRUM, what is going to be built is considered during user story creation. That’s the “early” part. A close relationship with the Product Owner is critical to get security user stories onto the backlog and critical during user story prioritization. And, the security person should be familiar with any existing architecture during user story creation. That way, security can be considered in context rather than from an ivory tower.

I’ve seen security develop a basic set of user stories that fit a particular class or type of project. These can be templated and simply added in at the beginning, perhaps tweaked for local variation.
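
As a sketch of what such a template set might look like, the snippet below seeds a backlog from a project class. The project classes and story wording are invented for illustration, not a published standard.

```python
# Toy illustration of templated security user stories per project class.
SECURITY_STORY_TEMPLATES = {
    "public web application": [
        "As a user, I authenticate before reaching any non-public page.",
        "As a user, my session is invalidated on logout or after a timeout.",
        "As an operator, all authentication failures are logged for review.",
    ],
    "internal service": [
        "As a calling service, my credentials are verified on every request.",
        "As an operator, service-to-service traffic is encrypted in transit.",
    ],
}


def seed_backlog(project_class, product_owner_tweaks=()):
    """Copy the class's security stories onto a new backlog, then apply local tweaks."""
    backlog = list(SECURITY_STORY_TEMPLATES.get(project_class, []))
    backlog.extend(product_owner_tweaks)
    return backlog


if __name__ == "__main__":
    stories = seed_backlog(
        "public web application",
        ["As a user, my uploaded files are scanned before sharing."],
    )
    for story in stories:
        print("-", story)
```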

At the beginning of each Sprint, stories are chosen for development from the backlog. During this process, considerable design takes place. Make security an integral part of that process, either through direct participation or by proxy.

Really, in my experience, all the key voices should be a part of this selection and refinement process. Quality people can offer why a particular approach is easier to test, architects can offer whether a story has been accounted for in the architecture, etc. Security is one of the important voices, but certainly not the only one.

Security experts need to make themselves available throughout a Sprint to answer questions about implementation details, the correct way to securely build each user story under development. Partnership. Help SCRUM members be security “eyes and ears” on the ground.

Finally, since writing secure code is very much a discipline and practice, appropriate testing and vulnerability assurance steps need to be a part of every sprint. I think that these need to be part of Definition of Done.
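
As a small sketch of folding security into a Definition of Done, the snippet below gates “done” on both functional and security criteria. The check names and structure are invented for illustration, not a prescribed SCRUM artifact.

```python
# Toy Definition of Done gate with security criteria (illustrative only).
SECURITY_DONE_CRITERIA = [
    "static analysis run; no new high-severity findings",
    "security-relevant user stories have passing tests",
    "third-party dependencies checked against known-vulnerability feeds",
    "peer review included a security-focused reviewer",
]


def story_is_done(functional_checks_passed, security_checks):
    """A story is 'done' only when functional AND all security criteria pass."""
    return functional_checks_passed and all(security_checks.values())


if __name__ == "__main__":
    security_checks = {criterion: True for criterion in SECURITY_DONE_CRITERIA}
    security_checks["static analysis run; no new high-severity findings"] = False
    print("done?", story_is_done(True, security_checks))  # False: not yet done
```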

Everyone is involved in security in Agile. Security folk can’t toss security “over the wall” and expect secure results. We have to get our hands dirty, get some implementation grease under the proverbial fingernails in order to earn the trust of the SCRUM teams.

Trust and collaboration are success factors, but these are also security factors. If the entire team are considering security throughout the process, they can at the very least call for help when there is any question. In my experience, over time, many teams will develop their team security expertise, allowing them to be far more self-sufficient – which is part of the essence of Agile.

We security folk are going to have to give up control and instead become collaborators, partners. I don’t always get security built the way that I might think about it, but it gets built in. And, I learn lots of interesting, creative, innovative approaches from my colleagues along the way.

cheers

/brook

RSA 2013 & Risk

Monday, February 25th, I had the opportunity to participate in the Risk Seminar at RSA 2013.

My co-presenters/panelists are pursuing a number of avenues to strengthen not only their own practices, but everyone’s. It turns out, as I’ve long believed, that plenty of us practitioners have come to understand just how difficult a risk practice is. We work with an absence of hard longitudinal data, with literally thousands of variant scenarios, generally assessing ambiguous situations. Is that hard enough?

Further complicating our picture and methods is the lack of precision when we discuss risk. We all think we know what we mean, but do we? Threats get “prioritized” into risks. Vulnerabilities, taken by themselves with no context whatsoever, are assigned “risk levels”. Vulnerability discovery tools’ ratings don’t help this discussion.

We all intuitively understand what risk is; we calculate risk every day. We also intuitively understand that risk has a threat component and a vulnerability component, that vulnerabilities have to be exploited, and that threats must have the capacity and access to exploit. There are mitigation strategies, and risk always involves some sort of impact or loss. But expressing these relationships is difficult. We want to shorthand all that complexity into simple, easily digestible statements.
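
To make those relationships concrete, here is a deliberately simplistic illustration of how the components might combine. It is not JGERR, FAIR, nor any published method; the factor names and the bare multiplication are mine, purely to show that a threat needs capability and access, a vulnerability must actually be exploitable, mitigation reduces likelihood, and impact turns likelihood into loss.

```python
# Toy relationship between risk components (illustrative only, not a real method).
def toy_risk(threat_capability, threat_access, exploitability, mitigation, impact):
    """All factors are 0.0-1.0 except impact, an estimated loss figure."""
    likelihood = threat_capability * threat_access * exploitability * (1 - mitigation)
    return likelihood * impact  # expected loss for this single scenario


if __name__ == "__main__":
    # Internet-facing flaw, skilled attacker, partial mitigation, $500k impact.
    print(toy_risk(threat_capability=0.8, threat_access=1.0,
                   exploitability=0.6, mitigation=0.5, impact=500_000))
```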

Each RSA attendee received a magazine filled with short articles culled from the publisher’s risk archives. Within the first sentences risk was equated to threat (not that the writer bothered to define “threat”), to vulnerability, to exploit. Our risk language is a jumble!

Still, we plod on, doing the best that we can with the tools at hand, mostly our experience, our ability to correlate and analyze, and any wisdom that we may have picked up along the journey.

Doug Graham from EMC is having some success through focusing on business risk (thus bypassing all those arguments about whether that Cross-site Scripting Vulnerability is really, really dangerous). He has developed risk owners throughout his sister organizations much as I’ve used an extended team of security architects successfully at a number of organizations.

Summer Fowler from Carnegie Mellon reported on basic research being done on just how we make risk decisions and how we can make our decisions more successful and relevant.

All great stuff, I think.

And me? I presented the risk calculation methodology that Vinay Bansal and I developed at Cisco, based on previous, formative work by Catherine Blackadder Nelson and Rakesh Bharania. Hopefully, someday, the SANS Institute will be able to publish the Smart Guide that Vinay and I wrote describing the method?*

That method, named “Just Good Enough Risk Rating” (JGERR), is a lightweight and repeatable process through which assessors can generate numbers that are comparable. That is one of the most difficult issues we face today; my risk rating is quite likely to be different than yours.

It’s difficult to get the assessor’s bias out of risk ratings. Part of that work, I would argue, should be done by every assessor. Instead of pretending that one can be completely rational (impossible and actually wrong-headed, since risk always involves value judgements), I call upon us all to do our personal homework by understanding our own, unique risk preferences. At least let your bias be conscious and understood.

Still, JGERR and similar methods (there are similar methods, good ones! I’m not selling anything here) help to isolate assessor bias and at the very least put some rigour into the assessment.

Thanks to Jack Jones’ FAIR risk methodology, upon which JGERR is partly based, I believe that we can adopt a consistent risk lexicon, developing a more rigorous understanding of the relationships between the various components that make up a risk calculation. Thanks, Jack. Sorry that I missed you; you were speaking at the same time that I was, in the room next door to mine. Sigh.

As Chris Houlder, CISO of Autodesk, likes to say, “progress, not perfection”. Yeah, I think I see a few interesting paths to follow.

cheers

/brook

*In order to generate some demand, may I suggest that you contact SANS and tell them that you’d like the guide? The manuscript has been finished and reviewed for more than a year as of this post.

Seriously? Product Security?

Seriously? You responded to my security due diligence question with that?

Hopefully, there’s a lesson in this tale of woe about what not to do when asked about product security?

This incident has been sticking in my craw for about a year. I think it’s time to get it off my chest. If for no other reason, I want to stop thinking about this terrible customer experience. And yes, for once, I’m going to name the guilty company. I wasn’t under NDA in this situation, as far as I know?

There I was, Enterprise Security Architect for a mid-size company (who shall not be named. No gossip, ever, from this blog). Part of my job was to ensure that vendors’ product security was strong enough to protect my company’s security posture. There’s a due diligence responsibility assigned to most infosec people. In order to fulfill this responsibility, it has become a typical practice to research software vendors’ product security practices. Based upon the results, either mitigate uncovered risks to policy and industry standards or raise the risk to organizational decision makers (and there are always risks, right?).

Every software vendor goes through these due diligence investigations on a regular basis. And I do mean “every”.

I’ve lived on both sides of this fence, conducting the investigations and having my company’s software go through many investigations. This process is now a part of the fabric of doing secure business. There should be nothing surprising about the questions. In past positions, we had a vendor questionnaire, a risk scale based upon the expected responses, and standards against which to measure the vendor. These tools help to build a repeatable process. One of these processes is documented in a SANS Institute Smart Guide released in 2011 and was published by Cisco, as well.
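
As a sketch of the questionnaire-and-risk-scale idea, the snippet below scores a vendor’s responses. The questions, weights, and threshold are invented for illustration; they are not the actual questionnaire nor the Smart Guide’s content.

```python
# Toy vendor due-diligence scoring (illustrative only).
QUESTIONNAIRE = {
    "Do you run a secure development lifecycle (SDL)?": 3,
    "Do you perform independent penetration testing before release?": 2,
    "Do you have a documented vulnerability disclosure and patch process?": 3,
    "Is customer data encrypted at rest and in transit?": 2,
}


def vendor_risk(answers):
    """Sum the weights of every 'no' or unanswered question; higher = riskier.

    In the absence of information, a question counts against the vendor,
    which is the 'default deny' stance mentioned later in this post."""
    return sum(weight for q, weight in QUESTIONNAIRE.items()
               if not answers.get(q, False))


if __name__ == "__main__":
    answers = {
        "Do you run a secure development lifecycle (SDL)?": True,
        "Is customer data encrypted at rest and in transit?": True,
        # The other questions went unanswered.
    }
    score = vendor_risk(answers)
    print("risk score:", score, "->", "high risk" if score >= 5 else "acceptable")
```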

Now, I’m going to name names. Sorry, Google, I’m going to detail just what your Docs sales team said to me. Shame on you!

When I asked about Google Docs product security, here is the answer, almost verbatim, that I received from the sales team:

“We’re Google. We can hire Vint Cerf if we want. That is enough.”

Need I point out to my brilliant readers that Dr. Vint Cerf, as far as I know, has never claimed to be an information security expert? I’m sure he knows far more about the design of TCP/IP than I? (but I remind readers that I used to write TCP/IP stacks, so I’m not entirely clueless, either). And, Dr. Cerf probably knows a thing or two about Internet Security, since he runs ICANN?

Still, I can tell you authoritatively that TCP/IP security and Domain Name Registry security are only two (fairly small) areas of an information security due diligence process that is now standard for software vendors to pass.

Besides, the sales team didn’t answer my questions. This was a classic “Appeal to Authority”. And, unfortunately, they didn’t even bother to appeal to a security authority. Sorry, Vint; they took your name in vain. I suppose this sort of thing happens to someone of your fame all the time?

Behind the scenes, my counter-part application architect and I immediately killed any possible engagement with Google Docs. Google Sales Team, you lost the sale through that single response. The discussion was over; the possibility of a sale was out, door firmly closed.

One of the interesting results from the wide adoption of The Web has been the need for open and transparent engagement. Organizations that engage honestly gain trust through their integrity, even in the face of organizational mis-steps and faux pas. Organizations that attempt the old-fashioned paradigm, “control all communications”, lose trust, and lose it rapidly and profoundly. Commercial companies, are you paying attention? This is what democracy looks like (at least in part. But that’s a different post, I think?).

Product security is difficult and demanding. There are many facets that must complement each other to deliver acceptable risk. And, even with the best intentions and execution, one will still have unexpected vulnerabilities crop up from time to time. We only have to look at the development of Microsoft’s product security programme to understand how one of the best in the industry (in my humble opinion) will not catch everything. Do Microsoft bugs surface? Yes. Is the vulnerability level today anywhere near what it was 10 years ago? Not even close. Kudos, Microsoft.

It’s long past the time that any company can coast on reputation. I believe that Google do some very interesting things towards the security of their customers. But their sales team need to learn a few lessons in humility and transparency. Brand offers very little demonstrable protection. Google, you have to answer those due diligence questionnaires honestly and transparently. Otherwise the Infosec person on the other side has nothing against which to base her/his risk rating. And, in the face of no information, the safest bet is to rate “high risk”. Default deny rule.

It’s a big world out there and if your undiscovered vulnerabilities don’t get’cha now, they will eventually. Beware; be patient; be humble; remain inquisitive; work slowly and carefully. You can quote me on that.

cheers,

/brook

Alan Paller’s Security Architecture Call To Action

I was attending the 2nd SANS What Works in Security Architecture Summit, listening to Alan Paller, Director of Research at the SANS Institute, and Michele Guel, National Cybersecurity Award winner. Alan has given us all a call to action, namely, to build a collection of architectural solutions for security architecture practitioners. The body of work would be akin to UpToDate for medical clinicians.

I believe that creating these guides is one of the next critical steps for security architects as a profession. Why?

It’s my contention that most experienced practitioners are carrying a bundle of solution patterns in their heads. Like a doctor, we can eyeball a system, view the right type of system diagram, get a few basic pieces of information, and then begin very quickly to assess which of those patterns may be applicable.

In assessing a system for security, of course, it is often like peeling an onion. More and more issues get uncovered as the details are revealed. Still, I’ve watched experienced architects very quickly zero in on the most relevant questions. They’ve seen so many systems that they intuitively understand where to dig deeper in order to assess attack vectors and what is not important.

One of the reasons this can happen so quickly is knowledge that is almost entirely local to each practitioner’s environment. Architects must know their environments intimately. That local knowledge will eliminate irrelevant threats and attack patterns. Plus, the architect also knows which security controls are pre-existing in supporting systems. These controls have already been vetted.

Along with the local knowledge, the experienced practitioner will have a sense of the organization’s risk posture and security goals. These will highlight not only threats and classes of vulnerabilities, but also the value of assets under consideration.

Less obviously, a good security architecture practitioner brings a set of practical solutions that are tested, tried, and true that can be applied to the analysis.

As a discipline, I don’t think we’ve been particularly good at building a collective knowledge base. There is no extant body of solutions to which a practitioner can turn. It’s strictly the “school of hard knocks”; one learns by doing, by making mistakes, by missing important pieces, by internalizing the local solution set, and hopefully, through peer review, if available.

But what happens when a lone practitioner is confronted with a new situation or technology area with which he or she is unfamiliar? Without strong peer review and mentorship, to where does the architect turn?

Today, there is nothing at the right level.

One can research protocols, vendors, technical details. Once I’ve done my research, I usually have to infer the architectural patterns from these sources.

And, have you tried getting a vendor to convey a product’s architecture? Reviewing a vendor’s architecture is often fraught with difficulties: salespeople usually don’t know, and sales engineers just want to make the product work, typically as a cookie-cutter recipe. How many times have I heard, “but no other customer requires this!”

There just aren’t many documented architectural patterns available. And, of the little that is available, most has not been vetted for quality and proof.

Alan Paller is calling on us to set down our solutions. SANS’ new Smart Guide series was created for this purpose. These are intended to be short, concise security architecture solutions that can be understood quickly. These won’t be white papers nor the results of research. The solutions must demonstrate a proven track record.

There are two tasks that must be accomplished for the Smart Guide series to be successful.

  • Authors need to write down the architectures of their successes
  • The series will need reviewers to vet guides so that we can rely on the solutions

I think that the Smart Guides are one of the key steps that will help us all mature security architecture practice. From the current state of the Art (often based upon personality) we will force ourselves to:

  • Be succinct
  • Be understandable to a broad range of readers and experience
  • Be methodical
  • Set down all the steps required to implement – especially considering local assumptions that may not be true for others
  • Eliminate excess

And, by vetting these guides for worthwhile content, we should begin to deliver more consistency in the practice. I also think that we may discover that there’s a good deal of consensus in what we do every day**. Smart Guides have to be juried by a body of peers in order for us to trust the content.

It’s a Call to Action. Let’s build a repository of best practice. Are you willing to answer the call?

cheers

/brook

** Simply getting together with a fair sampling of people who do what I do makes me believe that there is a fair amount of consensus. But that’s entirely my anecdotal experience in the absence of any better data.

SANS Security Architecture

There are plenty of security conferences, to be sure. You can get immersed in the nitty-gritty of attack exploits, learn to better analyze intrusion logs, even work on being a better manager of information security folks (like me).

In the plethora of conferences trying to get your attention, there is one area that doesn’t get much space: Security Architecture.

While the Open Group has finally recognized the Security Architect discipline, conferences dedicated to the practice have been too few and too far between. The SANS Institute hosted one last year in Las Vegas. It was such a success that they’ve decided to do it again.

SANS Security Architecture: Baking Security Into Applications And Networks

This year’s conference is intended to bring security architecture to the practical. The conference will be chock full of approaches and techniques that you can use “whole hog” or cherry pick for those portions which make sense for your organization.

Why do security architects need a conference of their own?

If you ask me (which you didn’t!), security architecture is a difficult, complex practice. There are many branches of knowledge (technical, organizational, political, personal) that get brought together by the successful architect. It’s frustrating; it’s tricky. It’s often more Art than engineering.

We need each other, not just to commiserate, but to support each other, to help each other take this Art and make what we do more repeatable, more sustainable, more effective. And, many times when I’ve spoken about the discipline publicly, those who are new to security architecture want to know how to learn, what to learn, how to build a programme. From my completely unscientific experience, there seems a strong need for basic information on how to get started and how to grow.

Why do organizations need Information Security Architects? Well, first off, probably not every organization does need them. But certainly if your organization’s security universe is complex, your projects equally complex, then architecture may be of great help. My friend, Srikanth Narasimhan, has a great presentation on “The Long Tail of Architecture”, explaining where the rewards are found from Enterprise Architecture.

Architecture rewards are not reaped upfront. In fact, applying system architecture principles and processes will probably slow development down. But when a system is designed not just for the present but for future requirements; when a system supports a larger strategy and is not entirely tactical; when careful consideration has been given to how seemingly disparate systems interact and depend on each other as things change, that’s where architecting systems delivers its rewards.

A prime example of building without architecture is the Winchester Mystery House in San Jose, California, USA. It’s a fun place to visit, with its stairwells without destination, windows opening to walls, and so on. Why? The simple answer is, “no architecture”.

So if your security systems seem like a version of the “Mystery House”, perhaps you need some architecture help?

Security Architecture brings the long tail of reward through planning, strategy, holism, risk practice, and a “top to bottom, front to back, side to side” view of the matrix of flows and system components to build a true defense in depth. It’s become too complex to simply deploy the next shiny technology and put it in without carefully considering how each part of the defense in depth depends upon, affects, and interacts with the others.

The security architecture contains a focus that addresses how seemingly disparate security technologies and processes fit into a whole. And, solutions-focused security architects help to build holistic security into systems; security architecture is a fundamental domain of an Enterprise Architecture practice. I’ve come to understand that security architecture has (at least) these two foci, largely through attending last year’s conference. I wonder what I’ll gain this year?

Please join me in Washington, DC, September 28-30, 2011 for SANS Security Architecture.

cheers

/brook


Architecture or Engineering?

It’s not exactly the great debate. There may not be that many folks on the planet who care what we call this emerging discipline in which I’m involved? Still, those of us who are involved are trying to work through our differences. We don’t yet have a clear consensus definition. Though I would offer that a consensus on the field of security architecture is beginning to coalesce within the practitioner community. (Call the discipline what you will.)

One of the lively discussions here at the SANS What Works In Security Architecture is, “What do we call ourselves?”

Clearly, there are at least two threads. One thread (my practice) is to call what I do “Information Security Architecture”. The other thread is “Systems Security Engineer”.

I read one of the formative documents from the “Systems Security Engineer” position last month (it’s not yet announced, so I’m not sure if I can say from whom yet?). And what was described was pretty close to what I’ve been calling “Security Architecture”.

Here’s the interesting part to me: that no matter the name, collectively, we’re realizing that we require this set of functions to be fulfilled. I’ll try to make a succinct statement about the consensus task list that I think is emerging:

  • Study of a proposed system architecture for security needs
  • Discussion with the proposers of the system (“the business” in parlance) about what the system is meant to do and the required security posture
  • Translation of required security posture into security requirements
  • Translation of the security requirements into specific security controls within a system design
  • Given an architecture, assessing the security risk based upon a strong understanding of information security threats for that organization and that type of architecture

In some organizations, the security architect task is complete at delivery of the plan. In others, the architect is expected to help design the specific controls into the system. And then, possibly, the architect is also responsible for the due diligence task of assuring that the controls were actually implemented. In my experience, I am responsible for all of these tasks and I’m called a “Security Architect”.

From the other perspective, the role is called “Systems Security Engineer”. And, it is also expected that the upfront tasks of discussion and analysis are performed by a security architect who is no longer engaged after delivery of the high level items (listed above).

That’s not how it works in my world. And, I would offer that this “leave it after analysis” model might lead to “Ivory Tower” type pronouncements. I’ll never forget one of my early big errors. I was told that “all external SSL communications were required to be bi-directionally authenticated”. OK – I’d specify that for every external SSL project. Smile

Then projects started coming back to me, sometimes months later, having asked for bi-directional authentication and having been told that on the shared Java environment, “no can do”.

The way that this environment was set up, it didn’t support bi-directional authentication at an application granularity. And that was the requirement that I had given, not once but several times. I was “following the established standard”, right?
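
For readers who haven’t had to build it, here is a minimal Python sketch of what bi-directional (mutual) SSL/TLS authentication means on the server side: the server presents its own certificate and also demands and verifies one from the client. The file names and port are placeholders; the shared environment in my story could not enforce this per application, which was exactly the problem.

```python
# Minimal mutual-TLS server sketch (placeholder file names, illustrative only).
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.crt", keyfile="server.key")
context.verify_mode = ssl.CERT_REQUIRED          # demand a client certificate
context.load_verify_locations(cafile="trusted_client_ca.crt")

with socket.create_server(("0.0.0.0", 8443)) as listener:
    with context.wrap_socket(listener, server_side=True) as tls_listener:
        # The handshake fails unless the client presents a cert signed by the trusted CA.
        conn, addr = tls_listener.accept()
        print("mutually authenticated connection from", addr)
        conn.close()
```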

I submit to your comment and discussion that

  1. There’s more than one way to achieve similar security control (in this case, bi-directionally authenticated encrypted tunnels)
  2. It’s very important to specify requirements that can be met
  3. A security architect must understand the existing capabilities before writing requirements.

Instead of ivory tower pronouncements, if I believe that there is a technology gap, my job is to surface and make the gap as transparent as possible to the eventual owners of the system under analysis (“the business”).
How can I do this job if all I do is work from paper standards and then walk away? Frankly, I’d lose all credibility. The only thing that saved me early on was my willingness to admit my errors and then pitch in wholeheartedly to solve any problems that I’d helped create (or, really, anything that comes up!).

Some definitions for your consideration:

  • Architecture: the structure and organization of a computer’s hardware or system software
  • Engineering:  the discipline dealing with the art or science of applying scientific knowledge to practical problems

Engineers apply, say, cryptography (an algorithm) to solve problems. Architects plan that cryptography will be required for a particular set of transmissions or for storage of a set of data.

At least, these are my working definitions for the moment.

So, after listening to both sides, in my typically equivocal manner, I’m wondering if this role that I practice is really a good and full slice of architecture, combined with a good and full slice of engineering?

Practitioners, it seems to me from my experience, need both of these if they are to gain credibility with the various groups with whom they must interact. And “no”, I don’t want to perpetuate any ivory towers!

Send me your ideas for a better name, a combinatorial name, what-have-you.

cheers,

/brook

from Las Vegas airport