RSA 2013 & Risk

Monday, February 25th, I had the opportunity to participate in the Risk Seminar at RSA 2013.

My co-presenters/panelists are pursuing a number of avenues to strengthen not only their own practices, but the practice for all of us. It turns out that plenty of us practitioners have come to understand just how difficult a risk practice is. We work with an absence of hard longitudinal data, with literally thousands of variant scenarios, generally assessing ambiguous situations. Is that hard enough?

Further complicating our picture and methods is the lack of precision when we discuss risk. We all think we know what we mean, but do we? Threats get “prioritized” into risks. Vulnerabilities, taken by themselves with no context whatsoever, are assigned “risk levels”. Vulnerability discovery tools’ ratings don’t help this discussion.

We all intuitively understand what risk is; we calculate risk every day. We also intuitively understand that risk has a threat component and a vulnerability component, that vulnerabilities have to be exploited, and that threats must have the capacity and access to exploit. There are mitigation strategies, and risk always involves some sort of impact or loss. But expressing these relationships is difficult. We want to shorthand all that complexity into simple, easily digestible statements.
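To make those relationships concrete, here is a toy sketch (my own illustration, nothing more) of the components named above and how they chain together:

    from dataclasses import dataclass

    @dataclass
    class Risk:
        vulnerability: bool        # is there an exploitable weakness?
        threat_capability: bool    # does the threat have the capacity to exploit it?
        threat_access: bool        # can the threat actually reach the weakness?
        mitigated: bool            # does a mitigation break the chain?
        impact: float              # the loss, should the exploit succeed

        def realizable(self) -> bool:
            # Every link must hold. A vulnerability with no capable,
            # positioned threat, or with an effective mitigation, is not a risk.
            return (self.vulnerability and self.threat_capability
                    and self.threat_access and not self.mitigated)

Even this toy makes the point: drop any one component and you are no longer talking about risk, just a fragment of it.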

Each RSA attendee received a magazine filled with short articles culled from the publisher’s risk archives. Within the first few sentences, risk was equated to threat (not that the writer bothered to define “threat”), to vulnerability, and to exploit. Our risk language is a jumble!

Still, we plod on, doing the best that we can with the tools at hand, mostly our experience, our ability to correlate and analyze, and any wisdom that we may have picked up along the journey.

Doug Graham from EMC is having some success through focusing on business risk (thus bypassing all those arguments about whether that Cross-site Scripting Vulnerability is really, really dangerous). He has developed risk owners throughout his sister organizations much as I’ve used an extended team of security architects successfully at a number of organizations.

Summer Fowler from Carnegie-Mellon reported on basic research being done on just how we make risk decisions and how we can make our decisions more successful and relevant.

All great stuff, I think.

And me? I presented the risk calculation methodology that Vinay Bansal and I developed at Cisco, based on previous, formative work by Catherine Blackadder Nelson and Rakesh Bharania. Hopefully, someday, the SANS Institute will be able to publish the Smart Guide that Vinay and I wrote describing the method?*

That method, named “Just Good Enough Risk Rating” (JGERR), is a lightweight and repeatable process through which assessors can generate numbers that are comparable. That is one of the most difficult issues we face today: my risk rating is quite likely to be different from yours.
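I won’t reproduce JGERR here (the Smart Guide does that properly), but the general shape of any lightweight, repeatable rating looks something like the sketch below. Fixed scales and fixed arithmetic are what make two assessors’ numbers comparable. To be clear, this is an illustration only, not the published method:

    # Fixed ordinal scales: every assessor maps the same term to the same number.
    SCALE = {"low": 1, "medium": 2, "high": 3}

    def rating(exposure: str, exploitability: str, impact: str) -> int:
        """Same inputs always produce the same, comparable score."""
        return SCALE[exposure] * SCALE[exploitability] * SCALE[impact]

    print(rating("high", "medium", "high"))   # 18 out of a possible 27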

It’s difficult to get the assessor’s bias out of risk ratings. Part of that work, I would argue, should be done by every assessor. Instead of pretending that one can be completely rational (impossible and actually wrong-headed, since risk always involves value judgements), I call upon us all to do our personal homework by understanding our own, unique risk preferences. At least let your bias be conscious and understood.

Still, JGERR and similar methods (there are similar methods, good ones! I’m not selling anything here) help to isolate assessor bias and at the very least put some rigour into the assessment.

Thanks to Jack Jones’ FAIR risk methodology, upon which JGERR is partly based, I believe that we can adopt a consistent risk lexicon, developing a more rigorous understanding of the relationships between the various components that make up a risk calculation. Thanks, Jack. Sorry that I missed you; you were speaking at the same time as I was, in the room next door to mine. Sigh.

As Chris Houlder, CISO of Autodesk, likes to say, “progress, not perfection”. Yeah, I think I see a few interesting paths to follow.

cheers

/brook

*In order to generate some demand, may I suggest that you contact SANS and tell them that you’d like the guide? The manuscript has been finished and reviewed for more than a year as of this post.

Seriously? Product Security?

Seriously? You responded to my security due diligence question with that?

Hopefully, there’s a lesson in this tale of woe about what not to do when asked about product security?

This incident has been sticking in my craw for about a year. I think it’s time to get it off my chest. If for no other reason, I want to stop thinking about this terrible customer experience. And yes, for once, I’m going to name the guilty company. I wasn’t under NDA in this situation, as far as I know?

There I was, Enterprise Security Architect for a mid-size company (who shall not be named. No gossip, ever, from this blog). Part of my job was to ensure that vendors’ product security was strong enough to protect my company’s security posture. There’s a due diligence responsibility assigned to most infosec people. In order to fulfill this responsibility, it has become a typical practice to research software vendors’ product security practices. Based upon the results, one either mitigates uncovered risks to meet policy and industry standards or raises the risk to organizational decision makers (and there are always risks, right?).

Every software vendor goes through these due diligence investigations on a regular basis. And I do mean “every”.

I’ve lived on both sides of this fence, conducting the investigations and having my company’s software go through many investigations. This process is now a part of the fabric of doing secure business. There should be nothing surprising about the questions. In past positions, we had a vendor questionnaire, a risk scale based upon the expected responses, and standards against which to measure the vendor. These tools help to build a repeatable process. One of these processes is documented in a SANS Institute Smart Guide released in 2011, which was published by Cisco as well.
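As a sketch of what I mean (the questions and weights here are invented for illustration, not drawn from any real questionnaire), such a questionnaire-and-scale pairing can be as simple as:

    # Hypothetical questionnaire entries and weights; not any actual standard.
    QUESTIONNAIRE = [
        ("Do you follow a secure development lifecycle?", 3),
        ("Is customer data encrypted in transit and at rest?", 3),
        ("Are releases penetration tested?", 2),
        ("Is there a documented vulnerability response process?", 2),
    ]

    def vendor_risk(answers: dict) -> str:
        # An unanswered question counts as a gap: no information means high risk.
        gaps = sum(w for q, w in QUESTIONNAIRE if not answers.get(q, False))
        if gaps == 0:
            return "low"
        return "medium" if gaps <= 3 else "high"

The point is repeatability: the same responses always land on the same rating, whoever conducts the investigation.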

Now, I’m going to name names. Sorry, Google, I’m going to detail just what your Docs sales team said to me. Shame on you!

When I asked about Google Docs product security, here is the answer, almost verbatim, that I received from the sales team:

“We’re Google. We can hire Vint Cerf if we want. That is enough.”

Need I point out to my brilliant readers that Dr. Vint Cerf, as far as I know, has never claimed to be an information security expert? I’m sure he knows far more about the design of TCP/IP than I? (but I remind readers that I used to write TCP/IP stacks, so I’m not entirely clueless, either). And, Dr. Cerf probably knows a thing or two about Internet Security, since he runs ICANN?

Still, I can tell you authoritatively that TCP/IP security and Domain Name Registry security are only two (fairly small) areas of an information security due diligence process that is now standard for software vendors to pass.

Besides, the sales team didn’t answer my questions. This was a classic “Appeal to Authority”. And, unfortunately, they didn’t even bother to appeal to a security authority. Sorry, Vint; they took your name in vain. I suppose this sort of thing happens to someone of your fame all the time?

Behind the scenes, my counterpart application architect and I immediately killed any possible engagement with Google Docs. Google Sales Team, you lost the sale through that single response. The discussion was over; the possibility of a sale was out, door firmly closed.

One of the interesting results from the wide adoption of The Web has been the need for open and transparent engagement. Organizations that engage honestly gain trust through their integrity, even in the face of organizational mis-steps and faux pas. Organizations that attempt the old-fashioned paradigm, “control all communications”, lose trust, and lose it rapidly and profoundly. Commercial companies, are you paying attention? This is what democracy looks like (at least in part. But that’s a different post, I think?).

Product security is difficult and demanding. There are many facets that must complement each other to deliver acceptable risk. And, even with the best intentions and execution, one will still have unexpected vulnerabilities crop up from time to time. We only have to look at the development of Microsoft’s product security programme to understand how one of the best in the industry (in my humble opinion) will not catch everything. Do Microsoft bugs surface? Yes. Is the vulnerability level today anywhere near what it was 10 years ago? Not even close. Kudos, Microsoft.

It’s long past the time that any company can coast on reputation. I believe that Google do some very interesting things towards the security of their customers. But their sales team need to learn a few lessons in humility and transparency. Brand offers very little demonstrable protection. Google, you have to answer those due diligence questionnaires honestly and transparently. Otherwise the infosec person on the other side has nothing on which to base her/his risk rating. And, in the face of no information, the safest bet is to rate “high risk”. Default deny rule.

It’s a big world out there and if your undiscovered vulnerabilities don’t get’cha now, they will eventually. Beware; be patient; be humble; remain inquisitive; work slowly and carefully. You can quote me on that.

cheers,

/brook

Human Assisted Automated Analysis

I think that we’re beginning to see a pattern emerging for solving some of information security’s most pressing, complex problems.

I get a lot of vendor announcements. That comes with my territory; I feel that I have to stay tuned as new developments appear. Another motivation, of course, is the raft of new attack types, combined attacks, carefully orchestrated attacks: it’s a dangerous Internet out there. It should be no surprise to any of my readers that there are individuals and groups who want what you have, irrespective of your values and your politics (and, of course, in some cases, because of your values and your politics. We all get attacked out of multiple motivations).

So what’s a poor security guy to do? Well, I do my day job, of course. And, I try to stay informed. Plus, as many of you know, I know a few folks in the industry, many of whom are a lot smarter than yours truly. Lucky me, these very smart people sometimes tell me about what problems they’re working on and how they’re approaching that work.

A couple of days ago, I received a link to an article in Computerworld purportedly about “DLP1 in the Cloud” from BEW Global. OK, everyone wants to jump on the “cloud” bandwagon. It’s true. The hype machine is definitely running2. But the cloud claim was not what caught my eye.

Data Loss Prevention (DLP) is one of the trickier problems to solve for unstructured and/or poorly normalized data. We’ve had functioning regular expression or signature-based identification DLP software for quite a while. This is nothing new. In my experience, and talking to others running DLP programmes, the existing toolset is accurate enough when pointed at highly predictable data types: government numbers, credit card numbers, any field whose form can be normalized. This is what computer programmes are good at, yes?
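For instance, a serviceable credit card detector needs little more than a regular expression plus the Luhn checksum. A minimal sketch (my illustration, not any particular product’s implementation):

    import re

    # 13-16 digit candidates, allowing common space/dash separators.
    CARD_RE = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

    def luhn_ok(digits: str) -> bool:
        """Standard Luhn checksum; weeds out most random digit runs."""
        total, parity = 0, len(digits) % 2
        for i, ch in enumerate(digits):
            d = int(ch)
            if i % 2 == parity:        # double every second digit from the right
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    def find_cards(text: str) -> list:
        return [m.group() for m in CARD_RE.finditer(text)
                if luhn_ok(re.sub(r"[ -]", "", m.group()))]

    print(find_cards("order ref 4111 1111 1111 1111"))   # the classic test number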

But try to write a regular expression to identify code written in any one of several languages, each with a distinct syntax. That is a problem. When I have broached this problem with vendors, they admit that it’s a tough nut to crack. I fear that one would have to come fairly close to writing a language parser3; usually not a trivial problem.

So, what are BEW Global bringing to the problem that’s new? “Firm leverages cloud, human capital to offer data loss prevention services”. Their new element is the addition of a human analyst.

But BEW aren’t the only ones trying to do this on an enterprise scale. Whitehat Security have been doing this for a number of years. Within the last year, I’ve also heard this approach from a few other companies that are adding human analysts to supplement automated techniques4.

The idea as I understand it is that every variation that can be identified through programmatic techniques simply gets cataloged by the software. Outliers, exceptions, unique patterns get punted off to humans for their analysis as a part of the normal operation of the product.

There is, obviously, nothing new about human powered analysis5. Who else is going to create the algorithms initially? That’s really the only approach when tackling entirely new problems. Additionally, most security tools are enhanced cyclically; this is most often done through human programming. However, the norm has been that the running tool functions entirely by its programming6.

The emerging new architectural pattern includes analysts as a part of the running solution. Humans are not confined to updating the solution (additions, adjustments, new capabilities). Analysts are employed as a part of the running product, augmenting processing power with brain power.
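In the abstract, the pattern is simple enough to sketch (this is my generalization, not any vendor’s design):

    from queue import Queue

    analyst_queue = Queue()   # ambiguous items wait here for a human analyst

    def signature_confidence(event: dict) -> float:
        """Stand-in for whatever signature/regex engine a product runs."""
        return event.get("confidence", 0.5)

    def triage(event: dict) -> str:
        """Catalog what automation can decide; punt the grey zone to people."""
        score = signature_confidence(event)
        if score >= 0.9:
            return "violation"       # confident match: handled automatically
        if score <= 0.1:
            return "benign"          # confident non-match: cataloged and dropped
        analyst_queue.put(event)     # outliers and unique patterns: human review
        return "pending-human-review"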

Truly difficult problems apparently require highly trained people. It seems obvious in its way. This idea has taken a while to emerge. Whitehat were the first7 that I’d heard of sending outliers to human staff as a part of their standard analysis.

Apparently, we’re starting to see this idea catch on?

Thoughts? Other instances? Have I missed something?

Cheers

/brook
1 Data Loss Prevention
2 I sometimes see that the “cloud” is a couple of servers hosted at one of the big providers. Hmmm… ?!?
3 The usual front end for a compiler.
4 Vendor mention must in no manner be construed as a product or company endorsement by Brook Schoenfield whatsoever, without restriction. My understanding of any one of these products may be faulty. I have listed these as examples only, to the best of my understanding of what has been told to me. I have performed no due diligence on the statements of these organizations. Brook Schoenfield makes no warranty or representation whatsoever on the truthfulness of company claims nor on the fitness of any product.
5 Even if the humans are employing automated tools.
6 I’m including updating signatures and similar data upon which the automation will base its decisions.
7 I haven’t done any research on who was first. I remember who told me about this pattern the first time.

Alan Paller’s Security Architecture Call To Action

I was attending the 2nd SANS What Works in Security Architecture Summit, listening to Alan Paller, Director of Research at the SANS Institute, and Michele Guel, National Cybersecurity Award winner. Alan has given us all a call to action, namely, to build a collection of architectural solutions for security architecture practitioners. The body of work would be akin to UpToDate for medical clinicians.

I believe that creating these guides is one of the next critical steps for security architects as a profession. Why?

It’s my contention that most experienced practitioners are carrying a bundle of solution patterns in their heads. Like a doctor, we can eyeball a system, view the right type of system diagram, get a few basic pieces of information, and then begin very quickly to assess which of those patterns may be applicable.

In assessing a system for security, of course, it is often like peeling an onion. More and more issues get uncovered as the details emerge. Still, I’ve watched experienced architects very quickly zero in on the most relevant questions. They’ve seen so many systems that they intuitively understand where to dig deeper in order to assess attack vectors, and what is not important.

One of the reasons this can happen so quickly is knowledge that is almost entirely local to each practitioner’s environment. Architects must know their environments intimately. That local knowledge will eliminate irrelevant threats and attack patterns. Plus, the architect also knows which security controls are pre-existing in supporting systems. These controls have already been vetted.

Along with the local knowledge, the experienced practitioner will have a sense of the organization’s risk posture and security goals. These will highlight not only threats and classes of vulnerabilities, but also the value of assets under consideration.

Less obvious, a good security architecture practitioner brings a set of tried, tested, and true practical solutions that can be applied to the analysis.

As a discipline, I don’t think we’ve been particularly good at building a collective knowledge base. There is no extant body of solutions to which a practitioner can turn. It’s strictly the “school of hard knocks”; one learns by doing, by making mistakes, by missing important pieces, by internalizing the local solution set, and hopefully, through peer review, if available.

But what happens when a lone practitioner is confronted with a new situation or technology area with which he or she is unfamiliar? Without strong peer review and mentorship, where does the architect turn?

Today, there is nothing at the right level.

One can research protocols, vendors, technical details. Once I’ve done my research, I usually have to infer the architectural patterns from these sources.

And, have you tried getting a vendor to convey a product’s architecture? Reviewing a vendor’s architecture is often fraught with difficulties: salespeople usually don’t know, and sales engineers just want to make the product work, typically as a cookie cutter recipe. How many times have I heard, “but no other customer requires this!”

There just aren’t many documented architectural patterns available. And, of the little that is available, most has not been vetted for quality and proof.

Alan Paller is calling on us to set down our solutions. SANS’ new Smart Guide series was created for this purpose. These are intended to be short, concise security architecture solutions that can be understood quickly. These won’t be white papers or research results. The solutions must demonstrate a proven track record.

There are two tasks that must be accomplished for the Smart Guide series to be successful.

  • Authors need to write down the architectures of their successes
  • The series will need reviewers to vet guides so that we can rely on the solutions

I think that the Smart Guides are one of the key steps that will help us all mature security architecture practice. From the current state of Art (often based upon personality) we will force ourselves to:

  • Be succinct
  • Be understandable to a broad range of readers and experience
  • Be methodical
  • Set down all the steps required to implement – especially considering local assumptions that may not be true for others
  • Eliminate excess

And, by vetting these guides for worthwhile content, we should begin to deliver more consistency in the practice. I also think that we may discover that there’s a good deal of consensus in what we do every day**. Smart Guides have to be juried by a body of peers in order for us to trust the content.

It’s a Call to Action. Let’s build a repository of best practice. Are you willing to answer the call?

cheers

/brook

** Simply getting together with a fair sampling of people who do what I do makes me believe that there is a fair amount of consensus. But that’s entirely my anecdotal experience in the absence of any better data.

SANS Security Architecture

There are plenty of security conferences, to be sure. You can get immersed in the nitty-gritty of attack exploits, learn to better analyze intrusion logs, even work on being a better manager of information security folks (like me).

In the plethora of conferences trying to get your attention, there is one area that doesn’t get much space: Security Architecture.

While the Open Group has finally recognized the Security Architect discipline, conferences dedicated to the practice have been too few and too far between. The SANS Institute hosted one last year in Las Vegas. It was such a success that they’ve decided to do it again.

SANS Security Architecture: Baking Security Into Applications And Networks

This year’s conference is intended to bring security architecture to the practical. The conference will be chock full of approaches and techniques that you can use “whole hog” or cherry pick for those portions which make sense for your organization.

Why do security architects need a conference of their own?

If you ask me (which you didn’t!), security architecture is a difficult, complex practice. There are many branches of knowledge (technical, organizational, political, personal) that get brought together by the successful architect. It’s frustrating; it’s tricky. It’s often more Art than engineering.

We need each other, not just to commiserate, but to support each other, to help each other take this Art and make what we do more repeatable, more sustainable, more effective. And, many times when I’ve spoken about the discipline publicly, those who are new to security architecture want to know how to learn, what to learn, how to build a programme. From my completely unscientific experience, there seems a strong need for basic information on how to get started and how to grow.

Why do organizations need Information Security Architects? Well, first off, probably not every organization does need them. But certainly if your organization’s security universe is complex, your projects equally complex, then architecture may be of great help. My friend, Srikanth Narasimhan, has a great presentation on “The Long Tail of Architecture”, explaining where the rewards are found from Enterprise Architecture.

Architecture rewards are not reaped upfront. In fact, applying system architecture principles and processes will probably slow development down. But when a system is designed not just for the present but for future requirements; when a system supports a larger strategy and is not entirely tactical; when careful consideration has been given to how seemingly disparate systems interact and depend upon each other as things change; that’s where architecting systems delivers its rewards.

A prime example of building without architecture is the Winchester Mystery House in San Jose, California, USA. It’s a fun place to visit, with its stairwells without destination, windows opening to walls, and so on. Why? The simple answer is, “no architecture”.

So if your security systems seem like a version of the “Mystery House”, perhaps you need some architecture help?

Security Architecture brings the long tail of reward through planning, strategy, holism, risk practice, and a “top to bottom, front to back, side to side” view of the matrix of flows and system components, to build a true defense in depth. It’s become too complex to simply deploy the next shiny technology and put it in without carefully considering how each part of the defense in depth depends upon, affects, and interacts with the others.

The security architecture contains a focus that addresses how seemingly disparate security technologies and processes fit into a whole. And, solutions-focused security architects help to build holistic security into systems; security architecture is a fundamental domain of an Enterprise Architecture practice. I’ve come to understand that security architecture has (at least) these two foci, largely through attending last year’s conference. I wonder what I’ll gain this year?

Please join me  in Washington DC, September 28-30, 2011 for SANS Security Architecture.

cheers

/brook


Pen Testers Cannot Scale To Enterprise Need!

As reported in Information Week, several Black Hat researchers have noted that humans must qualify application vulnerability scanning results. It’s hard to see this statement as “news”, since I’ve been saying it publicly, at conferences, for at least 4 years. I’m not the only one. Ask the vendors’ Sales Engineers. They will tell you (they have certainly told me!) that one has to qualify testing results.

“At the end of the day, tools don’t find vulnerabilities. People do,” says Nathan Hamiel, one of the researchers. 

Yes, Nathan, this has been the state of affairs for years.

The status quo is great for penetration testers. But it’s terrible for larger organizations. Why? Pen testers are expensive and slow.

I agree with the truisms stated by the researchers. If there were tens of thousands of competent penetration testing practitioners, the fact that the tools generally do NOT find vulnerabilities without human qualification would not matter.

But there are only thousands, perhaps even fewer than 3000, penetration testers who can perform the required vulnerability qualification. These few must be apportioned across tens of millions of lines of vulnerable code, much of which is exposed to the hostile public Internet.

Plus, human-qualified vulnerability testing is slow. A good tester can test maybe 50 applications each year. Against that throughput, organizations with a strong web presence may deploy 50 applications or upgrades each quarter. Many of the human-qualified scans become stale within a quarter or two.

Hence this situation is untenable for enterprises that have an imperative to remove whatever vulnerabilities may be found. Large organizations may have thousands of applications deployed. At $400-500/hour, which code do you scan? Even the least critical application might hand a malefactor a compromise.
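A back-of-envelope calculation makes the mismatch plain (I’m assuming a 2,000-hour work year; the other numbers come from above):

    apps_per_tester_year = 50
    hours_per_app = 2000 / apps_per_tester_year   # roughly 40 hours each
    rate = 450                                    # midpoint of $400-500/hour
    per_app = hours_per_app * rate                # ~$18,000 per application
    print(f"${per_app * 1000:,.0f} to hand-test 1,000 applications once")

Eighteen million dollars for a single pass, and that’s before next quarter’s deployments go out.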

This is a fundamental mis-match.

The pen testers want to find everything that they can. All fine and good.

But enterprises must reduce their attack surface. I would argue that any reduction is valuable. We don’t have to find every vulnerability in every application. Without adequate testing, 100% of the vulnerabilities remain exposed. Reducing these by even 20% overall is a big win.

Further, it is not cost effective to throw pen testers at this problem. It has to either be completely automated or the work has to be done by developers or QA folk who do not have the knowledge to qualify most results.

Indeed, the web developer him/herself must be able to at least test and remove some of the attack surface while developing. Due to the nature of custom code testing, I’m afraid that the automation is not yet here. That puts the burden on people who have little interest in security for security’s sake.

I’ve spoken to almost every vulnerability testing vendor. Some understand this problem. Some have focused on the expert tester exclusively. But, as near as I can tell, the state of the art is at best hindered by noisy results that typically require human qualification. At worst, some of these tools are simply unsuitable for the volume of code that is vulnerable.

As long as so much noise remains in the results, we, the security industry, need to think differently about this problem.

I’m speaking at SANS What Works in Security Architecture, August 29-30, Washington DC, USA. This is a gathering of Security Architecture practitioners from around the world. If your role includes architecture, let me urge you to join us as we show each other what’s worked and as we discuss our mutual problem areas.

cheers

/brook

What Is Our Professional Responsibility?

Four times within about a month, I’ve had to deal with “security issues” that were reported as “emergencies”. These appeared as high-priority vulnerabilities requiring immediate “fixing” from my team.

Except, none of these were really security issues. Certainly none of these was an emergency.

None of these were bugs or vulnerabilities. In fact, if the security engineer reporting the issue had done even a modicum of investigation, these would never have been reported. False positives.

In one instance, a security engineer had browsed information on a few web pages of a SaaS application and then decided, without any further investigation, that the web product had “no security”. She or he even went so far as to “ban” all corporate use of that product. Wow! That’s a pretty drastic consequence for a product whose security controls were largely turned off by that customer’s IT department. Don’t you think the security engineer should have checked with IT first? I do.

A few days later a competitor of that product pointed a security engineer to an instance that was also configured with few security controls by the customer. The competitor claimed that the “product has no security”. The engineer promptly reported a “security deficiency”.

Obviously, a mature product should have the capability to enforce the security policies of its users, whatever those happen to be. That’s one of the most important tenets of SaaS security: give the customer sufficient tools to enact customer policies. Do not decide for the customer what their appropriate policies must be; let the customer implement policy as required.

Since the customer has the power to enact customer policies, the chosen posture may be wide open or locked down. The security posture depends upon the business needs of each particular customer. Don’t we all know that? Isn’t this obvious? (maybe not?)

In both the cases that I’ve described (and the other two), I would have thought that the engineer would first investigate?

  • What is the actual problem?
  • How does the application work?
  • What are the application’s capabilities?
  • Is there a misconfiguration?
  • Is there a functionality gap?
  • Is there a bug?

When I was a programmer, the rule was, “don’t report a bug in infrastructure, library, or compiler until you have a small working program that positively demonstrates the bug1”. In other words, we had to investigate a problem, thoroughly understand it, isolate it, and provide a working proof before we could call technical support.

Apparently, some security engineers feel no compulsion for this kind of technical precision?

I went to the CSO and asked, “If I failed to investigate a problem, would you be upset? If I did it repeatedly, would you fire me?” Answer: “Yes, on all counts.”

Security managers, what’s our accountability? Are your engineers accountable for the issues that they report?

Are we so hungry for performance metrics that we are mistakenly tracking “incidents reported”? (which is, IMHO, not a very good measure of anything). To what are we holding security investigators accountable?

My understanding of my professional ethics requires me to be as sure as I possibly can before running an issue up the flagpole. Further, I like to present all unknowns as clearly as I can. That way, false positives are minimized. I certainly wouldn’t want to stake the precious trust that I’ve carefully built up on a mere assumption which easily might be a mistake.

Of course, it’s always possible to believe one has discovered a vulnerability that turns out to be a misunderstanding or misconfiguration. That can happen to any one of us. Securing multiple technologies across multiple use cases is difficult. Mistakes are far too easy to make. Because of the ease of error, I expect to get one or two poorly qualified vulnerabilities each year. But four in a month? What?

Let’s all try to be precise and not get carried away in the excitement of the moment. That holds true whether one thinks one has discovered a vulnerability or is taking a bug report. I believe that information security professionals should be seen as “truth tellers”. We must live up to the trust that is placed in us.

By the way, there’s a very exciting conference upcoming at the end of August (2011), Security Architecture: Baking Security into Applications and Networks 2011. This conference is particularly relevant for Security Architects and any practitioner who must design security into systems, or who is charged with building a practice for security architecture and designs. I’ll write more about this conference later. Stay tuned.

cheers,

/brook

1) The small program could not include any of the functionality required by the program that was being written. It had to demonstrate the bug without any ancillary code whatsoever. That meant that one had to understand the bug thoroughly before reporting.

Architecture or Engineering?

It’s not exactly the great debate. There may not be that many folks on the planet who care what we call this emerging discipline in which I’m involved? Still, those of us who are involved are trying to work through our differences. We don’t yet have a clear consensus definition. Though I would offer that a consensus on the field of security architecture is beginning to coalesce within the practitioner community. (call the discipline what-you-will)

One of the lively discussions here at the SANS What Works In Security Architecture is, “What do we call ourselves?”

Clearly, there are at least two threads. One thread (my practice) is to call what I do “Information Security Architecture”. The other thread is “Systems Security Engineer”.

I read one of the formative documents from the “Systems Security Engineer” position last month (it’s not yet announced, so I’m not sure if I can say from whom, yet?). And what was described was pretty close to what I’ve been calling “Security Architecture”.

Here’s the interesting part to me: that no matter the name, collectively, we’re realizing that we require this set of functions to be fulfilled. I’ll try to make a succinct statement about the consensus task list that I think is emerging:

  • Study of a proposed system architecture for security needs
  • Discussion with the proposers of the system (“the business” in parlance) about what the system is meant to do and the required security posture
  • Translation of required security posture into security requirements
  • Translation of the security requirements into specific security controls within a system design
  • Given an architecture, assessing the security risk based upon a strong understanding of information security threats for that organization and that type of architecture

In some organizations, the security architect task is complete at delivery of the plan. In others, the architect is expected to help design the specific controls into the system. And then, possibly, the architect is also responsible for the due diligence task of assuring that the controls were actually implemented. In my experience, I am responsible for all of these tasks and I’m called a “Security Architect”.

From the other perspective, the role is called “Systems Security Engineer”. And, it is also expected that the upfront tasks of discussion and analysis are performed by a security architect who is no longer engaged after delivery of the high level items (listed above).

That’s not how it works in my world. And, I would offer that this “leave it after analysis” model might lead to “Ivory Tower” type pronouncements. I’ll never forget one of my early big errors. I was told that “all external SSL communications were required to be bi-directionally authenticated”. OK, I’d specify that for every external SSL project.

Then projects started coming back to me, sometimes months later, having asked for bi-directional authentication and having been told that on the shared Java environment, “no can do”.

The way that this environment was set up, it didn’t support bi-directional authentication at an application granularity. And that was the requirement that I had given, not once but several times. I was “following the established standard”, right?

I submit to your comment and discussion that

  1. There’s more than one way to achieve similar security control (in this case, bi-directionally authenticated encrypted tunnels; see the sketch after this list)
  2. It’s very important to specify requirements that can be met
  3. A security architect must understand the existing capabilities before writing requirements.
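For the record, here’s a minimal sketch of one such tunnel, a bi-directionally authenticated TLS connection in Python (the host and certificate file names are hypothetical):

    import socket
    import ssl

    # Server side: demand a client certificate; this is what makes the
    # authentication bi-directional rather than server-only.
    server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    server_ctx.load_cert_chain("server-cert.pem", "server-key.pem")
    server_ctx.verify_mode = ssl.CERT_REQUIRED
    server_ctx.load_verify_locations(cafile="client-ca.pem")

    # Client side: verify the server and present our own certificate.
    client_ctx = ssl.create_default_context(cafile="server-ca.pem")
    client_ctx.load_cert_chain("client-cert.pem", "client-key.pem")

    with socket.create_connection(("service.example.com", 8443)) as raw:
        with client_ctx.wrap_socket(raw, server_hostname="service.example.com") as tls:
            tls.sendall(b"hello")   # the handshake has already authenticated both ends

Note that both sides must be built for this, which is exactly the capability the shared environment above could not provide at application granularity.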

Instead of ivory tower pronouncements, if I believe that there is a technology gap, my job is to surface the gap and make it as transparent as possible to the eventual owners of the system under analysis (“the business”).
How can I do this job if all I do is work from paper standards and then walk away? Frankly, I’d lose all credibility. The only thing that saved me early on was my willingness to admit my errors and then pitch in wholeheartedly to solve any problems that I’d helped create (or, really, anything that comes up!)

Some definitions for your consideration:

  • Architecture: the structure and organization of a computer’s hardware or system software
  • Engineering:  the discipline dealing with the art or science of applying scientific knowledge to practical problems

Engineers apply, say, cryptography (an algorithm) to solve problems. Architects plan that cryptography will be required for a particular set of transmissions or for storage of a set of data.

At least, these are my working definitions for the moment.

So, after listening to both sides, in my typically equivocal manner, I’m wondering if this role that I practice is really a good and full slice of architecture, combined with a good and full slice of engineering?

Practitioners, it seems to me from my experience, need both of these if they are to gain credibility with the various groups with whom they must interact. And “no”, I don’t want to perpetuate any ivory towers!

Send me your ideas for a better name, a combinatorial name, what-have-you.

cheers,

/brook

from Las Vegas airport

What is this thing called “Security Architecture”?

What is this thing called “Security Architecture”?

In February 2009, while I was attending an Open Group Architecture Conference in Bangalore, India, several of us got into a conversation about how security fits into enterprise architecture. A couple of the main folks from the Open Group were involved (sorry, I don’t remember who). They’d been struggling with this same problem: Is high level security participation in the process architectural, as defined for IT architecture?

Participants to that particular conversation had considered several possibilities. Maybe there is such a thing as a security architect? Maybe security participation is as a Subject Matter Expert (SME)? Maybe it’s all design and engineering?

Interesting questions.

I think how an organization views security may depend on how security is engaged and for what purposes. If security is engaged at an engineering level only,

  • “please approve my ACLs”
  • “what is the script for hardening this system? Has it been hardened sufficiently?”
  • “is this encryption algorithm acceptable?”
  • “is this password strength within policy?”
  • etc.

Then that organization will not likely have security architects. In fact (and I’ve seen this), there may be absolutely no concept of security architecture. “Security architecture” might be like saying, “when pigs fly”.

If your security folks are policy & governance wonks, then too, there may not be much of a security architecture practice.

In organizations where security folks are brought in to help projects achieve an appropriate security posture, and also to vet that a proposal won’t in some way degrade the organization’s current security posture, my guess is that these are among the driving tasks for developing a security architecture practice?

However, in my definition of security architecture, I wouldn’t limit the practice to vetting project security. There can be more to it than that.

  • security strategy
  • threat landscape, (emerging term: “threatscape”)
  • technology patterns, blueprints, standards, guides
  • risk analysis, assessment, communication and decisions
  • emerging technology trends and how these affect security

A strong practitioner will be well versed in a few of these and at least conversant in all at some level. Risk analysis is mandatory.

I’ve held the title “Security Architect” for almost 10 years now. In that time, my practice has changed quite a lot, as my understanding, and my team’s, has grown. Additionally, we’ve partnered alongside an emerging enterprise architecture team and practice, which has helped me understand not only the practice of system architecture, but my place in that practice as a specialist in security.

Interestingly, I’ve seen security architecture lead or become a practice before enterprise architecture! Why is that? To say so, to observe so, seems entirely counter-intuitive; which is where the Open Group folks were coming from, I fear. They are focused deeply on IT architecture. Security may be a side-show in their minds, perhaps even, “oh, those guys who maintain the IDS systems”?

Still, it remains in my experience that security may certainly lead architecture practice. Your security architects will have to learn to take a more holistic approach to systems than your development and design community may require. There’s a fundamental reason for this!

You can’t apply the right security controls for a system that you do not understand from bottom to top, front to back, side to side, all flows, all protocols, highest sensitivity of data. Period.

Security controls tend to be very specific; they protect particular points of a system or flow. Since controls tend to be specific, we generally overlap them, building our defense out of multiple controls, the classic “defense-in-depth”. In order to apply the correct set of controls, and in order not to specify more than is necessary to achieve an appropriate security posture, each control needs to be understood in the context of the complete system and its requirements. No control can be taken in isolation from the rest of the system. No control is a panacea. No control in isolation is unbreakable. It is the architectural application of controls that achieves the appropriate level of risk.

But project information by itself isn’t really enough to get the job done, it turns out. By now I’ve mentored a few folks just starting out in security architecture. While we’re certainly not all the same, teaching the security part is the easy part, it turns out (in my experience). Beyond the security knowledge itself, the practitioner needs:

  • Knowledge of the local systems and infrastructures, how these work, what are their security strengths and weak points
  • The local policies and standards, and a feel for industry standard approaches
  • What can IT build easily, what will be difficult?
  • Understanding within all the control domains of security at least at a gross level
  • Design skills, ability to see patterns of application across disparate systems
  • Ability to see relationships across systems
  • Ability to apply specific controls to protect complex flows
  • Depth in at least 2 technological areas
  • A good feel for working with people

Due to the need to view complex systems holistically in order to apply appropriate security, the successful security architects quickly get a handle on the complexity and breadth of a system, all the major components, and how all of these will communicate with each other. Security architects need to get proficient at ignoring implementation details and other “noise”, say the details of user interface design (which only rarely, I find, have security implications).

The necessity of viewing a system at an architectural level and then applying patterns where applicable is the bread and butter of an architectural approach. In other words, security must be applied in an architectural manner; practitioners have no choice if they are to see all the likely attack vectors and mitigate them.

Building a strong defense in depth, I think, demands architectural thinking. That, at least, is what we discovered 9-10 years ago on our way to a security architecture practice.

And, it’s important to note that security architects deal in “mis-use” cases, not “use cases”. We have to think like an attacker. I’ve often heard a system designer complain, “but the system isn’t built to work that way”. That may be, but if the attacker can find a way, s/he will if that vector delivers any advantage.

One of my working mantras is, “think like an attacker, but question like a teammate”. In a subsequent post, I’d like to take this subject up more deeply, as it is at the heart of what we have to do.

And, as I wrote above, because of this necessity to apply security architecturally, security architects may be among your first system architects! “Necessity is the mother of invention”, as they say.

Security architecture may lead other domains or the enterprise practice. I remember in the early days that we often grumbled to each other about having to step into a broader architecture role simply because it was missing. In order to do our job, we needed architecture. In the absence of such a practice, we would have to do it for projects simply because we, the security architects, couldn’t get our work completed. We’d step into the breach. This may not be true in your organization? But in my experience, this has been a dirty little secret in more than one.

So, if you’re looking to get architecture started in your IT shop, see if the security folks have already put their toes in the water.

In the last year, I believe that a consensus is beginning to emerge within the security industry about security architecture. I’ve seen a couple of papers recently that are attempts to codify our body of practice, the skill set, the process of security architecture. It seems an exciting and auspicious time. Is security architecture finally beginning to come of age?

The time seems ripe for a summit of practitioners in order to talk about these very issues. It’s great that SANS is pulling us together next week in Las Vegas, What Works In Security Architecture. Yours truly will be speaking a couple of times at the summit.

Please join me next week, May 24-26, 2010 in Las Vegas. Engage me in lively and spirited dialog about security architecture.

cheers

/brook

Missed Points, Wrong Message?

Today, as every day, I was bombarded with articles to read. You, too?

Today, “Testing for the Masses: More affordable assessment services reveal a new focus on application security” caught my eye.

As some of my readers may know, I’ve spent a lot of time and effort on re-creating the customer web application security testing viewpoint. And, of course, Markus and I often don’t agree. So, I trundled off to read what he’d opined on the subject.

Actually, it’s great to see that ideas that were bleeding edge a few years ago are now being championed in the press!

* Build security into the design
* Code securely
* Verify both the design and the code

The “design” part of the problem requires reasonable maturity of a security architecture practice.

And, web application code must be tested and fixed before deployment, just as this article suggests. All good. If we could achieve these goals on a broad scale, it would be a much better world than what we’ve got, which has been to deploy more than 100 million lines of vulnerable code to the web (see Jeremiah Grossman’s white papers on this topic).

Further, the writers suggested that price points of web application testing tools must fall. And here’s where the ball got dropped. Why?

The question of price is related to one of the main limitations on getting at least the simple and egregious bugs out of our web code.

The tool set has until recently been made for and focused on people like Markus Ranum: security experts, penetration testing gurus. These tools have emphasized broad coverage and deep analysis. And that emphasis has been very much at the expense of:

* tool cost
* tool complexity
* noisy results that must be hand qualified by an expert

There aren’t that many Markus Ranums in the world. In fact, as far as I know, only one! Seriously, there are not that many experienced web application vulnerability analysts. My estimate of approximately 5000 was knocked down by some pundit (I don’t remember who?) at a SANS conference at which I was speaking. His guesstimate? 1500. Sigh. Imagine how many lines of code each has to analyze to even make a dent in those 100 million lines deployed.

Each of the major web application vulnerability scanners assumed that the user was going to become an expert user in the tool. User interfaces are complex. Setting the scope and test suite is a seriously non-trivial exercise.

And finally, there is the noise in the results – all those false positives. These tools all assume that the results will be analyzed by a security expert.

Taken together, we have what I call the “lint problem”. For those who remember programming in the C language, there is a terrifically powerful analyzer called “lint”. This tool can work through a body of code and find all kinds of additional errors that the compiler will miss. Why doesn’t everyone use lint, then? Actually, most C programmers never use lint.

Lint is too hard and resource-intensive to tune. For projects that are more complex than one or two source files, the cost of tuning lint far outweighs its value. So, only a few programmers ever used the tool. On a 3-month project, just about the time the project is over, lint is finally returning just the useful results. Not very timely, I’m afraid.

And so, custom web application vulnerability testing has remained in the hands of experts, people like Markus Ranum, who, indeed, makes his living by reviewing and fixing code (and rumour has it, a very good living, indeed).

So, of course, Markus probably isn’t going to suggest the one movement that will actually bring a significant shift this problem.

But I will.

The vulnerability check must be put into the hands of the developer! Radical idea?

And, that check can’t be a noisy, expert driven analysis.

* The scan must be no more difficult to run than a compiler or linker. (which are, after all, the tools in use by many web developers every day)
* The scan should integrate seamlessly into the existing workflow
* The scan must not add significant time to the developer’s workflow
* The results must be easy to interpret
* The results must be as accurate as a compiler’s. I.e., the developer must be able to trust the results with very high confidence
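Imagine something like the following in the developer’s build. The “webscan” command is hypothetical; I’m sketching the workflow, not describing a product:

    import subprocess
    import sys

    # Run the (hypothetical) scanner exactly like a compile step:
    # quiet on success, loud and build-breaking on a trustworthy finding.
    result = subprocess.run(
        ["webscan", "--fail-on", "high", "src/"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        print("Security scan failed; fix before commit:")
        print(result.stdout)
        sys.exit(result.returncode)   # break the build, just like a compile error
    print("Security scan clean.")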

Attack these bugs at the source. And, to achieve that, give the developer a tool set that s/he can use and trust.

And, the price point has to be in accordance with other, similar tools.

I happen to know of several organizations that have either experimented with this concept or have put programs into place based on these principles (can’t name them, sorry; NDA). And guess what? It works!

Hey, toolmakers, the web developer market is huge! There’s money to be made here, folks.

Markus, I think you missed a key point, and are pointing in the wrong direction (slightly).

cheers

/brook