Alan Paller’s Security Architecture Call To Action

I was attending the 2nd SANS What Works in Security Architecture Summit, listening to Alan Paller, Director of Research at the SANS Institute, and Michele Guel, National Cybersecurity Award winner. Alan has given us all a call to action, namely, to build a collection of architectural solutions for security architecture practitioners. The body of work would be akin to UpToDate for medical clinicians.

I believe that creating these guides is one of the next critical steps for security architects as a profession. Why?

It’s my contention that most experienced practitioners are carrying a bundle of solution patterns in their heads. Like doctors, we can eyeball a system, view the right type of system diagram, get a few basic pieces of information, and then begin very quickly to assess which of those patterns may be applicable.

In assessing a system for security, of course, it is often like peeling an onion. More and more issues surface as the details are uncovered. Still, I’ve watched experienced architects very quickly zero in on the most relevant questions. They’ve seen so many systems that they intuitively understand where to dig deeper in order to assess attack vectors, and what is not important.

One of the reasons this can happen so quickly is knowledge that is almost entirely local to each practitioner’s environment. Architects must know their environments intimately. That local knowledge will eliminate irrelevant threats and attack patterns. Plus, the architect also knows which security controls are pre-existing in supporting systems. These controls have already been vetted.

Along with the local knowledge, the experienced practitioner will have a sense of the organization’s risk posture and security goals. These will highlight not only threats and classes of vulnerabilities, but also the value of the assets under consideration.

Less obvious, a good security architecture practitioner brings a set of tried-and-true practical solutions that can be applied to the analysis.

As a discipline, I don’t think we’ve been particularly good at building a collective knowledge base. There is no extant body of solutions to which a practitioner can turn. It’s strictly the “school of hard knocks”; one learns by doing, by making mistakes, by missing important pieces, by internalizing the local solution set, and, hopefully, through peer review, if available.

But what happens when a lone practitioner is confronted with a new situation or technology area with which he or she is unfamiliar? Without strong peer review and mentorship, where does the architect turn?

Today, there is nothing at the right level.

One can research protocols, vendors, technical details. Once I’ve done my research, I usually have to infer the architectural patterns from these sources.

And, have you tried getting a vendor to convey a product’s architecture? Reviewing a vendor’s architecture is often fraught with difficulties: salespeople usually don’t know, and sales engineers just want to make the product work, typically as a cookie-cutter recipe. How many times have I heard, “but no other customer requires this!”

There just aren’t many documented architectural patterns available. And, of the little that is available, most has not been vetted for quality and proof.

Alan Paller is calling on us to set down our solutions. SANS’ new Smart Guide series was created for this purpose. These are intended to be short, concise security architecture solutions that can be understood quickly. These won’t be white papers nor the results of research. The solutions must demonstrate a proven track record.

There are two tasks that must be accomplished for the Smart Guide series to be successful.

  • Authors need to write down the architectures of their successes
  • The series will need reviewers to vet guides so that we can rely on the solutions

I think that the Smart Guides are one of the key steps that will help us all mature the practice of security architecture. From the current state of the Art (often based upon personality), we will force ourselves to:

  • Be succinct
  • Be understandable to a broad range of readers and experience
  • Be methodical
  • Set down all the steps required to implement – especially considering local assumptions that may not be true for others
  • Eliminate excess

And, by vetting these guides for worthwhile content, we should begin to deliver more consistency in the practice. I also think that we may discover that there’s a good deal of consensus in what we do every day**. Smart Guides have to be juried by a body of peers in order for us to trust the content.

It’s a Call to Action. Let’s build a repository of best practice. Are you willing to answer the call?

cheers

/brook

** Simply getting together with a fair sampling of people who do what I do makes me believe that there is a fair amount of consensus. But that’s entirely my anecdotal experience, in the absence of any better data.

SANS Security Architecture

There are plenty of security conferences, to be sure. You can get immersed in the nitty-gritty of attack exploits, learn to better analyze intrusion logs, even work on being a better manager of information security folks (like me).

In the plethora of conferences trying to get your attention, there is one area that doesn’t get much space: Security Architecture.

While the Open Group has finally recognized the Security Architect discipline, conferences dedicated to the practice have been too few and too far between. The SANS Institute hosted one last year in Las Vegas. It was such a success that they’ve decided to do it again.

SANS Security Architecture: Baking Security Into Applications And Networks

This year’s conference is intended to bring security architecture down to the practical. The conference will be chock-full of approaches and techniques that you can use “whole hog” or cherry-pick for those portions which make sense for your organization.

Why do security architects need a conference of their own?

If you ask me (which you didn’t!), security architecture is a difficult, complex practice. There are many branches of knowledge (technical, organizational, political, personal) that get brought together by the successful architect. It’s frustrating; it’s tricky. It’s often more Art than engineering.

We need each other, not just to commiserate, but to support each other, to help each other take this Art and make what we do more repeatable, more sustainable, more effective. And, many times when I’ve spoken about the discipline publicly, those who are new to security architecture want to know how to learn, what to learn, how to build a programme. From my completely unscientific experience, there seems to be a strong need for basic information on how to get started and how to grow.

Why do organizations need Information Security Architects? Well, first off, probably not every organization does need them. But certainly if your organization’s security universe is complex, your projects equally complex, then architecture may be of great help. My friend, Srikanth Narasimhan, has a great presentation on “The Long Tail of Architecture”, explaining where the rewards are found from Enterprise Architecture.

Architecture rewards are not reaped upfront. In fact, applying system architecture principles and processes will probably slow development down. But when a system is designed not just for present but for future requirements; when a system supports a larger strategy and is not entirely tactical; when careful consideration has been given to how seemingly disparate systems interact and depend upon each other as things change; that’s where architecting systems delivers its rewards.

A prime example of building without architecture is the Winchester Mystery House in San Jose, California, USA. It’s a fun place to visit, with its stairwells without destination, windows opening to walls, and so on. Why? The simple answer is, “no architecture”.

So if your security systems seem like a version of the “Mystery House”, perhaps you need some architecture help?

Security Architecture brings the long tail of reward through planning, strategy, holism, risk practice, and a “top to bottom, front to back, side to side” view of the matrix of flows and system components, to build a true defense in depth. It’s become too complex to simply deploy the next shiny technology without carefully considering how each part of the defense in depth depends upon, affects, and interacts with the others.

The security architecture contains a focus that addresses how seemingly disparate security technologies and processes fit into a whole. And, solutions-focused security architects help to build holistic security into systems; security architecture is a fundamental domain of an Enterprise Architecture practice. I’ve come to understand that security architecture has (at least) these two foci, largely through attending last year’s conference. I wonder what I’ll gain this year?

Please join me in Washington DC, September 28-30, 2011 for SANS Security Architecture.

cheers

/brook


Pen Testers Cannot Scale To Enterprise Need!

As reported in Information Week, several Black Hat researchers have noted that humans must qualify application vulnerability scanning results. It’s hard to see this statement as “news”, since I’ve been saying it publicly, at conferences, for at least 4 years. I’m not the only one. Ask the vendors’ Sales Engineers. They will tell you (they have certainly told me!) that one has to qualify testing results.

“At the end of the day, tools don’t find vulnerabilities. People do,” says Nathan Hamiel, one of the researchers. 

Yes, Nathan, this has been the state of affairs for years.

The status quo is great for penetration testers. But it’s terrible for larger organizations. Why? Pen testers are expensive and slow.

I agree with the truisms stated by the researchers. If there were tens of thousands of competent penetration testing practitioners, the fact that the tools generally do NOT find vulnerabilities without human qualification would not matter.

But there are only thousands, perhaps even fewer than 3,000, penetration testers who can perform the required vulnerability qualification. These few must be apportioned across tens of millions of lines of vulnerable code, much of which is exposed to the hostile Public Internet.

Plus which, human-qualified vulnerability testing is slow. A good tester can test maybe 50 applications each year. Against that throughput, organizations that have a strong web presence may deploy 50 applications or upgrades each quarter. Many of the human-qualified scans become stale within a quarter or two.

Hence, this situation is untenable for enterprises that have an imperative to remove whatever vulnerabilities may be found. Large organizations may have thousands of applications deployed. At $400-$500/hour, which code do you scan? Even the least critical application might hand a malefactor a compromise.
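To make the mismatch concrete, here’s the arithmetic from the estimates above as a quick sketch (the figures are this post’s rough guesses, not measurements):

```python
# Back-of-the-envelope arithmetic using this post's rough estimates.
qualified_testers = 3000             # generous upper bound on qualified testers
apps_per_tester_per_year = 50        # a good tester's annual throughput
total_tests_per_year = qualified_testers * apps_per_tester_per_year  # 150,000

# One web-heavy enterprise deploying 50 applications or upgrades per quarter:
one_org_demand_per_year = 50 * 4     # 200 tests per year, per organization

# Even if every qualified tester did nothing else, full coverage would
# reach only this many such organizations, worldwide:
print(total_tests_per_year // one_org_demand_per_year)  # 750
```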

This is a fundamental mismatch.

The pen testers want to find everything that they can. All fine and good.

But enterprises must reduce their attack surface. I would argue that any reduction is valuable. We don’t have to find every vulnerability in every application. Without adequate testing, 100% of the vulnerabilities remain exposed. Reducing these by even 20% overall is a big win.

Further, it is not cost-effective to throw pen testers at this problem. It either has to be completely automated, or the work has to be done by developers or QA folk who do not have the knowledge to qualify most results.

Indeed, the web developer him/herself must be able to at least test and remove some of the attack surface while developing. Due to the nature of custom code testing, I’m afraid that the automation is not yet here. That puts the burden on people who have little interest in security for security’s sake.

I’ve spoken to almost every vulnerability testing vendor. Some understand this problem. Some have focused on the expert tester exclusively. But, as near as I can tell, the state of the art is at best hindered by noisy results that typically require human qualification. At worst, some of these tools are simply unsuitable for the volume of code that is vulnerable.

As long as so much noise remains in the results, we, the security industry, need to think differently about this problem.

I’m speaking at SANS What Works in Security Architecture, August 29-30, Washington DC, USA. This is a gathering of Security Architecture practitioners from around the world. If your role includes Architecture, let me urge you to join us as we show each other what’s worked and as we discuss our mutual problem areas.

cheers

/brook

What Is Our Professional Responsibility?

Four times within about a month, I’ve had to deal with “security issues” that were reported as “emergencies”. These appeared as high priority vulnerabilities requiring immediate “fixing” from my team.

Except, none of these were really security issues. Certainly none of these was an emergency.

None of these were bugs or vulnerabilities. In fact, if the security engineer reporting the issue had done even a modicum of investigation, these would never have been reported. False positives.

In one instance, a security engineer had browsed information on a few web pages of a SaaS application and then decided, without any further investigation, that the web product had “no security”. She or he even went so far as to “ban” all corporate use of that product. Wow! That’s a pretty drastic consequence for a product whose security controls were largely turned off by that customer’s IT department. Don’t you think the security engineer should have checked with IT first? I do.

A few days later a competitor of that product pointed a security engineer to an instance that was also configured with few security controls by the customer. The competitor claimed that the “product has no security”. The engineer promptly reported a “security deficiency”.

Obviously, a mature product should have the capability to enforce the security policies of its users, whatever those happen to be. That’s one of the most important tenets of SaaS security: give the customer sufficient tools to enact customer policies. Do not decide for the customer what their appropriate policies must be; let the customer implement policy as required.

Since the customer has the power to enact customer policies, the chosen posture may be wide open or locked down. The security posture depends upon the business needs of each particular customer. Don’t we all know that? Isn’t this obvious? (maybe not?)

In both the cases that I’ve described (and the other two), I would have thought that the engineer would first investigate:

  • What is the actual problem?
  • How does the application work?
  • What are the application’s capabilities?
  • Is there a misconfiguration?
  • Is there a functionality gap?
  • Is there a bug?

When I was a programmer, the rule was, “don’t report a bug in infrastructure, library, or compiler until you have a small working program that positively demonstrates the bug (1)”. In other words, we had to investigate a problem, thoroughly understand it, isolate it, and provide a working proof before we could call technical support.

Apparently, some security engineers feel no compulsion toward this kind of technical precision?

I went to the CSO and asked, “If I failed to investigate a problem, would you be upset? If I did it repeatedly, would you fire me?” Answer: “Yes, on all counts.”

Security managers, what’s our accountability? Are your engineers accountable for the issues that they report?

Are we so hungry for performance metrics that we are mistakenly tracking “incidents reported” (which is, IMHO, not a very good measure of anything)? To what are we holding security investigators accountable?

My understanding of my professional ethics requires me to be as sure as I possibly can before running an issue up the flagpole. Further, I like to present all unknowns as clearly as I can. That way, false positives are minimized. I certainly wouldn’t want to stake the precious trust that I’ve carefully built up on a mere assumption which might easily be a mistake.

Of course, it’s always possible to believe one has discovered a vulnerability that turns out to be a misunderstanding or misconfiguration. That can happen to any one of us. Securing multiple technologies across multiple use cases is difficult. Mistakes are far too easy to make. Because of the ease of error, I expect to get one or two poorly qualified vulnerability reports each year. But four in a month? What?

Let’s all try to be precise and not get carried away in the excitement of the moment. That holds true whether one thinks one has discovered a vulnerability or is taking a bug report. I believe that information security professionals should be seen as “truth tellers”. We must live up to the trust that is placed in us.

By the way, there’s a very exciting conference upcoming at the end of August (2011), Security Architecture: Baking Security into Applications and Networks 2011. This conference is particularly relevant for Security Architects and any practitioner who must design security into systems, or who is charged with building a practice for security architecture and designs. I’ll write more about this conference later. Stay tuned.

cheers,

/brook

1) The small program could not include any of the functionality required by the program that was being written. It had to demonstrate the bug without any ancillary code whatsoever. That meant that one had to understand the bug thoroughly before reporting.

Architecture or Engineering?

It’s not exactly the great debate. There may not be that many folks on the planet who care what we call this emerging discipline in which I’m involved? Still, those of us who are involved are trying to work through our differences. We don’t yet have a clear consensus definition. Though I would offer that a consensus on the field of security architecture is beginning to coalesce within the practitioner community. (call the discipline what-you-will)

One of the lively discussions here at the SANS What Works In Security Architecture is, “What do we call ourselves?”

Clearly, there are at least two threads. One thread (my practice) is to call what I do “Information Security Architecture”. The other thread is “Systems Security Engineer”.

I read one of the formative documents from the “Systems Security Engineer” position last month (it’s not yet announced, so I’m not sure if I can say from whom, yet?). And what was described was pretty close to what I’ve been calling “Security Architecture”.

Here’s the interesting part to me: that no matter the name, collectively, we’re realizing that we require this set of functions to be fulfilled. I’ll try to make a succinct statement about the consensus task list that I think is emerging:

  • Study of a proposed system architecture for security needs
  • Discussion with the proposers of the system (“the business” in parlance) about what the system is meant to do and the required security posture
  • Translation of required security posture into security requirements
  • Translation of the security requirements into specific security controls within a system design
  • Given an architecture, assessing the security risk based upon a strong understanding of information security threats for that organization and that type of architecture

In some organizations, the security architect task is complete at delivery of the plan. In others, the architect is expected to help design the specific controls into the system. And then, possibly, the architect is also responsible for the due diligence task of assuring that the controls were actually implemented. In my experience, I am responsible for all of these tasks and I’m called a “Security Architect”.

From the other perspective, the role is called “Systems Security Engineer”. And, it is also expected that the upfront tasks of discussion and analysis are performed by a security architect who is no longer engaged after delivery of the high level items (listed above).

That’s not how it works in my world. And, I would offer that this “leave it after analysis” model might lead to “Ivory Tower” type pronouncements. I’ll never forget one of my early big errors. I was told that “all external SSL communications were required to be bi-directionally authenticated”. OK, I’d specify that for every external SSL project. Smile.

Then projects started coming back to me, sometimes months later, having asked for bi-directional authentication and having been told that, on the shared Java environment, “no can do”.

The way that this environment was set up, it didn’t support bi-directional authentication at an application granularity. And that was the requirement that I had given, not once but several times. I was “following the established standard”, right?
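For readers who haven’t bumped into it, here is a minimal sketch of what “bi-directionally authenticated” means in practice: the server demands and verifies a client certificate rather than authenticating only itself. This is illustrative only; the file names and port are hypothetical, and the shared environment in my story could not enforce this at an application granularity:

```python
# Minimal sketch of mutual (bi-directional) TLS authentication using
# Python's standard library. File names and port are hypothetical.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.crt", keyfile="server.key")

# The "bi-directional" part: require and verify a client certificate,
# rather than authenticating the server alone.
context.verify_mode = ssl.CERT_REQUIRED
context.load_verify_locations(cafile="trusted_client_ca.pem")

with socket.create_server(("0.0.0.0", 8443)) as listener:
    with context.wrap_socket(listener, server_side=True) as tls_listener:
        # The handshake fails unless the client presents a certificate
        # signed by the trusted CA loaded above.
        conn, addr = tls_listener.accept()
        print("mutually authenticated connection from", addr)
```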

I submit for your comment and discussion that:

  1. There’s more than one way to achieve similar security control (in this case, bi-directionally authenticated encrypted tunnels)
  2. It’s very important to specify requirements that can be met
  3. A security architect must understand the existing capabilities before writing requirements.

Instead of ivory tower pronouncements, if I believe that there is a technology gap, my job is to surface the gap and make it as transparent as possible to the eventual owners of the system under analysis (“the business”).

How can I do this job if all I do is work from paper standards and then walk away? Frankly, I’d lose all credibility. The only thing that saved me early on was my willingness to admit my errors and then pitch in wholeheartedly to solve any problems that I’d helped create (or, really, anything that comes up!).

Some definitions for your consideration:

  • Architecture: the structure and organization of a computer’s hardware or system software
  • Engineering: the discipline dealing with the art or science of applying scientific knowledge to practical problems

Engineers apply, say, cryptography (an algorithm) to solve problems. Architects plan that cryptography will be required for a particular set of transmissions or for storage of a set of data.

At least, these are my working definitions for the moment.

So, after listening to both sides, in my typically equivocal manner, I’m wondering if this role that I practice is really a good and full slice of architecture, combined with a good and full slice of engineering?

Practitioners, it seems to me from my experience, need both of these if they are to gain credibility with the various groups with whom they must interact. And “no”, I don’t want to perpetuate any ivory towers!

Send me your ideas for a better name, a combinatorial name, what-have-you.

cheers,

/brook

from Las Vegas airport

What is this thing called “Security Architecture”?

In February 2009, while I was attending an Open Group Architecture Conference in Bangalore, India, several of us got into a conversation about how security fits into enterprise architecture. A couple of the main folks from the Open Group were involved (sorry, I don’t remember who). They’d been struggling with this same problem: Is high level security participation in the process architectural, as defined for IT architecture?

Participants to that particular conversation had considered several possibilities. Maybe there is such a thing as a security architect? Maybe security participation is as a Subject Matter Expert (SME)? Maybe it’s all design and engineering?

Interesting questions.

I think how an organization views security may depend on how security is engaged and for what purposes. If security is engaged at an engineering level only,

  • “please approve my ACLs”
  • “what is the script for hardening this system? Has it been hardened sufficiently?”
  • “is this encryption algorithm acceptable?”
  • “is this password strength within policy?”
  • etc.

Then that organization will not likely have security architects. In fact (and I’ve seen this), there may be absolutely no concept of security architecture. “Security architecture” might be like saying, “when pigs fly”.

If your security folks are policy & governance wonks, then too, there may not be much of a security architecture practice.

In organizations where security folks are brought in to help projects achieve an appropriate security posture, and also to vet that a proposal won’t in some way degrade the current security posture of the organization, my guess is that these are among the driving tasks for developing a security architecture practice?

However, in my definition of security architecture, I wouldn’t limit the practice to vetting project security. There can be more to it than that.

  • security strategy
  • threat landscape (emerging term: “threatscape”)
  • technology patterns, blueprints, standards, guides
  • risk analysis, assessment, communication and decisions
  • emerging technology trends and how these affect security

A strong practitioner will be well versed in a few of these and at least conversant in all at some level. Risk analysis is mandatory.

I’ve held the title “Security Architect” for almost 10 years now. In that time, my practice has changed quite a lot, as my understanding and my team’s understanding have grown. Additionally, we’ve partnered alongside an emerging enterprise architecture team and practice, which has helped me understand not only the practice of system architecture, but my place in that practice as a specialist in security.

Interestingly, I’ve seen security architecture lead, or become a practice before, enterprise architecture! Why is that? To say so, to observe so, seems entirely counter-intuitive – which is where the Open Group folks were coming from, I fear. They are focused deeply on IT architecture. Security may be a side-show in their minds, perhaps even, “oh, those guys who maintain the IDS systems”?

Still, it remains in my experience that security may certainly lead architecture practice. Your security architects will have to learn to take a more holistic approach to systems than your development and design community may require. There’s a fundamental reason for this!

You can’t apply the right security controls for a system that you do not understand from bottom to top, front to back, side to side, all flows, all protocols, highest sensitivity of data. Period.

Security controls tend to be very specific; they protect particular points of a system or flow. Since controls tend to be specific, we generally overlap them, building our defense out of multiple controls: the classic “defense-in-depth”. In order to apply the correct set of controls, and in order not to specify more than is necessary to achieve an appropriate security posture, each control needs to be understood in the context of the complete system and its requirements. No control can be taken in isolation from the rest of the system. No control is a panacea. No control in isolation is unbreakable. The architectural application of controls builds to the appropriate level of risk.

But project information by itself isn’t really enough to get the job done, it turns out. By now I’ve mentored a few folks just starting out in security architecture. While we’re certainly not all the same, teaching the security part is the easy part, it turns out (in my experience). Beyond the security itself, the architect needs a working understanding of:

  • Knowledge of the local systems and infrastructures, how these work, what are their security strengths and weak points
  • The local policies and standards, and a feel for industry standard approaches
  • What can IT build easily, what will be difficult?
  • Understanding within all the control domains of security at least at a gross level
  • Design skills, ability to see patterns of application across disparate systems
  • Ability to see relationships across systems
  • Ability to apply specific controls to protect complex flows
  • Depth in at least 2 technological areas
  • A good feel for working with people

Due to the need to view complex systems holistically in order to apply appropriate security, the successful security architects quickly get a handle on the complexity and breadth of a system, all the major components, and how all of these will communicate with each other. Security architects need to get proficient at ignoring implementation details and other “noise”, say the details of user interface design (which only rarely, I find, have security implications).

The necessity of viewing a system at an architectural level and then applying patterns where applicable is the bread and butter of an architectural approach. In other words, security must be applied in an architectural manner; practitioners have no choice if they are to see all the likely attack vectors and mitigate them.

Building a strong defense in depth, I think, demands architectural thinking. That, at least, is what we discovered 9-10 years ago on our way to a security architecture practice.

And, it’s important to note that security architects deal in “mis-use” cases, not “use cases”. We have to think like an attacker. I’ve often heard a system designer complain, “but the system isn’t built to work that way”. That may be, but if the attacker can find a way, s/he will if that vector delivers any advantage.

One of my working mantras is, “think like an attacker, but question like a teammate”. In a subsequent post, I’d like to take this subject up more deeply, as it is at the heart of what we have to do.

And, as I wrote above, because of this necessity to apply security architecturally, security architects may be among your first system architects! “Necessity is the mother of invention”, as they say.

Security architecture may lead other domains or the enterprise practice. I remember in the early days that we often grumbled to each other about having to step into a broader architecture role simply because it was missing. In order to do our job, we needed architecture. In the absence of such a practice, we would have to do it for projects simply because we, the security architects, couldn’t get our work completed. We’d step into the breach. This may not be true in your organization? But in my experience, this has been a dirty little secret in more than one.

So, if you’re looking to get architecture started in your IT shop, see if the security folks have already put their toes in the water.

In the last year, I believe that a consensus is beginning to emerge within the security industry about security architecture. I’ve seen a couple of papers recently that are attempts to codify our body of practice, the skill set, the process of security architecture. It seems an exciting and auspicious time. Is security architecture finally beginning to come of age?

The time seems ripe for a summit of practitioners in order to talk about these very issues. It’s great that SANS is pulling us together next week in Las Vegas, What Works In Security Architecture. Yours truly will be speaking a couple of times at the summit.

Please join me next week, May 24-26, 2010 in Las Vegas. Engage me in lively and spirited dialog about security architecture.

cheers

/brook

What’s Wrong With Customer Support?

I know that it’s been a while since I’ve posted. Let me apologize for that. I’ve got a lot of topics that I want to start covering here soon. Stay tuned.

Still, this morning’s reply from Intuit customer support was just too rich to pass up. Let me offer this little laugh to you to brighten your day. And then, I want to say a few words about why these interchanges are endemic to customer support as practiced by most companies.

I’ve heard support folks called “case closing machines”. Not very nice. But, unfortunately, far too true. These are the metrics upon which customer support is rated: how many support cases can be moved through the queue within a time period.

The result of this rating, of course, is that the support person is far more interested in responding quickly to dispose of a request than in actually understanding what has been requested. And that, I find, wastes everyone’s time.

Let me give some context. I use Quicken 2007 on Macintosh for my home accounting. I’ve been a Quicken and Intuit customer for years. I was having trouble updating an account’s transactions from my financial institution. Heck, first thing to do is always update the software, right?

OK. I ran the R2 installer and got a weird error message claiming that I had “no valid copy” of the software. Huh? Since I was in a hurry (first mistake!), I fired off a support ticket. Ten minutes later, I checked the running version of my software and, lo and behold, I’d already installed R2. Uh, oh! Bad ticket, no need for support.

Well, there wasn’t any way to recall the issue. So I let it grind through support while I installed the R3 updater successfully. (This didn’t solve the original problem, unfortunately, but that’s another story.)

When I received the response to my ticket from Quicken, at first I was just going to ignore it. But the activist in me just can’t let lie!

I wrote back a quick note suggesting that the error message in the R2 updater might be just a little misleading.

Here’s my reply to Quicken Support. I’ll follow that with the first couple of lines of the second reply from Quicken Support:

“Yes – I figured your suggestion out about 15 minutes after putting in
the ticket. Sorry about that.

But, I would suggest that your error message could say something of the
nature:

“your quicken is already upgraded to R2”

Rather than “no valid quicken found” (or whatever the error says).

these are very different messaging. I would have had no need to file a
ticket (that’s money in intuit’s pockets, yes?)

Your programmers and QA staff need to do better, I think!

thanks”

I’ll admit that my suggestion might be a bit terse. It does say, I hope clearly, that I found my solution, yes? I even apologize for submitting the ticket: “Sorry about that”.

Then I go on to make a suggestion: “Change your error message. It’s misleading.”

Here’s what I got back from “Karuna, Quicken Customer Care”:

“Brook, in this case I would advise you to uninstall and reinstall Quicken using the download link which I am providing you with. The download link contains the complete software with the updated patch. Please follow the steps mentioned below to uninstall Quicken…”

The email interchange goes on in detail on how to accomplish the reinstallation of Quicken 2007, including a download of the entire installer! If I had followed these instructions, it would have wasted hours of my time; I wouldn’t be writing this post, smile.

It is true that filing the issue was my mistake; there wasn’t anything really wrong.

My mistake was based upon a very poorly written error message in the R2 installer for Quicken 2007 which suggested that I had no valid copy of Quicken 2007. Which, of course, I do**.

Intuit support didn’t understand my note – probably the person working the issue didn’t really read it? Can you spell “scan”?

My response to Intuit required no reply whatsoever. Rather, I suggested action on the part of Intuit Product Management and perhaps the development teams, should there be merit in my suggestion.

Quite obviously, keeping ridiculous metrics like “cases closed/time period” forces a support team to actually waste huge amounts of time sending useless and inappropriate replies to people who don’t need them and can’t use them. All in the name of productivity.

Most companies do it this way. It’s rare that I actually get a first reply that is on target, demonstrating true understanding! Support don’t actually read what’s been written. I get asked all the time to “please supply the version and machine information” which I have already very carefully listed in the request. How absurd is that?

Support assume:

  • the incompetence of the request
  • the poor articulation of the requester

Support prioritize:

  • speed of delivery over understanding
  • delivering an answer, any answer
  • the first lines of the question (which are often only context)
  • any answer that can be found in the existing knowledge base, no matter how inappropriate

If the support person had only truly read what I wrote, s/he would have fully known that:

  • I didn’t need a reply
  • I had made a suggestion to be passed to Product Management
  • my problem was solved before the first reply from Quicken last week

Scanning the email (especially if the reader’s English is a 2nd or 3rd language) is highly prone to this kind of misunderstanding.

Ugh!

I think I’ll try to find out who’s the executive in charge of Intuit’s support operations and talk to that person. Because this is just too ridiculous to let go. We in the software industry need to get a lot more professional if customers are going to trust us.

Treating our support people as “case closing machines” is demeaning and dehumanizing. Instead of valuing and training problem solvers, we’ve created a situation whereby the competent use the support role only as a stepping stone on the way to another position. Who’s left behind? Those with low self-esteem, low expectations, those who don’t care about the job. Is that who you want responding to your customers?

And, of course, these sorts of email interchange and frustration go back and forth every day wasting everyone’s time with useless solutions to problems that don’t exist.

cheers

/brook schoenfield

**I pay for my software – it’s ethical. And, as John Caron used to say to me, “People don’t pay for software. They register the software because of the bugs for which they’ll need support.” Tee, hee. All too true, John.

Missed Points, Wrong Message?

Today, as every day, I was bombarded with articles to read. You, too?

Today, “Testing for the Masses: More affordable assessment services reveal a new focus on application security” caught my eye.

As some of my readers may know, I’ve spent a lot of time and effort on re-creating the customer web application security testing viewpoint. And, of course, Marcus and I often don’t agree. So, I trundled off to read what he’d opined on the subject.

Actually, it’s great to see that ideas that were bleeding edge a few years ago are now being championed in the press!

* Build security in the design
* Code securely
* Verify both the design and the code

The “design” part of the problem requires reasonable maturity of a security architecture practice.

And, web application code must be tested and fixed before deployment, just as this article suggests. All good. If we could achieve these goals on a broad scale, it would be a much better world than what we’ve got, which has been to deploy more than 100 million lines of vulnerable code to the web (see Jeremiah Grossman’s white papers on this topic).

Further, the writers suggested that price points of web application testing tools must fall. And here’s where the ball got dropped. Why?

The question of price is related to one of the main limitations on getting at least the simple and egregious bugs out of our web code.

The tool set has until recently been made for and focused on people like Marcus Ranum: security experts, penetration testing gurus. These tools have emphasized broad coverage and deep analysis. And that emphasis has been very much at the expense of:

* tool cost
* tool complexity
* noisy results that must be hand-qualified by an expert

There aren’t that many Marcus Ranums in the world. In fact, as far as I know, only one! Seriously, there are not that many experienced web application vulnerability analysts. My estimate of approximately 5,000 was knocked down by some pundit (I don’t remember who?) at a SANS conference at which I was speaking. His guesstimate? 1,500. Sigh. Imagine how many lines of code each has to analyze to even make a dent in those 100 million lines deployed.

Each of the major web application vulnerability scanners assumed that the user was going to become an expert user of the tool. User interfaces are complex. Setting the scope and test suite is a seriously non-trivial exercise.

And finally, there is the noise in the results – all those false positives. These tools all assume that the results will be analyzed by a security expert.

Taken together, we have what I call the “lint problem”. For those who remember programming in the C language, there is a terrifically powerful analyzer called “lint”. This tool can work through a body of code and find all kinds of additional errors that the compiler will miss. Why doesn’t everyone use lint, then? Actually, most C programmers never use lint.

Lint is too hard and resource-intensive to tune. For projects that are more complex than one or two source files, the cost of tuning lint far outweighs its value. So, only a few programmers ever used the tool. On a 3-month project, lint finally returns just the useful results at about the time the project is over. Not very timely, I’m afraid.

And so, custom web application vulnerability testing has remained in the hands of experts, people like Marcus Ranum, who, indeed, makes his living by reviewing and fixing code (and, rumour has it, a very good living, indeed).

So, of course, Marcus probably isn’t going to suggest the one move that will actually bring a significant shift in this problem.

But I will.

The vulnerability check must be put into the hands of the developer! Radical idea?

And, that check can’t be a noisy, expert driven analysis.

* The scan must be no more difficult to run than a compiler or linker (which are, after all, the tools in use by many web developers every day)
* The scan should integrate seamlessly into the existing workflow
* The scan must not add significant time to the developer’s workflow
* The results must be easy to interpret
* The results must be as accurate as a compiler’s. That is, the developer must be able to trust the results with very high confidence (a sketch of this idea follows)
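As a purely hypothetical sketch of what “no more difficult than a compiler” could look like, imagine a scan step wired into the build that fails fast, and only on high-confidence findings. The scanner command and its JSON output format below are invented for illustration; no real tool’s interface is implied:

```python
# Hypothetical build-gate sketch: run a scanner like a compiler step and
# fail the build only on findings the tool rates as high confidence.
# The "scanner" command and its JSON schema are invented for illustration.
import json
import subprocess
import sys

result = subprocess.run(
    ["scanner", "--format", "json", "src/"],  # hypothetical CLI
    capture_output=True,
    text=True,
)
findings = json.loads(result.stdout or "[]")

# Surface only high-confidence findings, so the developer can trust the
# output the way they trust compiler errors.
blocking = [f for f in findings if f.get("confidence") == "high"]
for f in blocking:
    print(f"{f['file']}:{f['line']}: {f['rule']}: {f['message']}")

sys.exit(1 if blocking else 0)  # a non-zero exit fails the build
```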

Attack these bugs at the source. And, to achieve that, give the developer a tool set that s/he can use and trust.

And, the price point has to be in accordance with other, similar tools.

I happen to know of several organizations that have either experimented with this concept or have put programs into place based on these principles (can’t name them, sorry. NDA) And guess what? It works!

Hey, toolmakers, the web developer market is huge! There’s money to be made here, folks.

Marcus, I think you missed a key point, and are pointing in the wrong direction (slightly).

cheers

/brook

Executive Information Security Risk Dialog

Ah, sigh, Marcus and I almost never agree! And, this is often a matter of lexicon and scoping. In this case, Marcus stays away from the thorny issue of risk methodology. But this, I think, is precisely the point to discuss.

Think about the word “risk”. How many definitions have you seen? The word is so overloaded and imprecise (and undefined in Marcus’ rant) that saying “risk” without definition will almost surely undermine the technical/C-level management dialog!

I would argue that the reasons for failure of the security-to-executive risk dialog have less to do with C-level inability to grasp security issues than with our imprecision in describing risk for them!

Take Jack Jones. When at Nationwide, he found himself mired in exactly the situation of Ranum’s rant. I’ve been there, too. Who hasn’t?

To solve the problem and feel more effective, he spent 5 years designing a usable, repeatable, standardized risk methodology. (FAIR. And yes, you’ve heard me rant about it before)

Jack described to me just how dramatically his C-level conversations changed. Suddenly, crisp interactions were taking place. What changed?

I see the failure of the dialog as having these components:

  • poor risk methodology
  • poor communication about risk
  • not treating executives as executives

Let me explain.

Poor risk methodology is the leading cause of inaccurate and unproductive risk dialog. When we say “risk” what is our agreed upon meaning?

I’ve participated in many discussions that treat risk as equivalent with an open vulnerability. Period.

But, of course, that’s wrong, and actually highly misleading. Is “risk” a vulnerability added to some force that has motivation to exercise the vulnerability (“threat” – I prefer “threat agent” for this, by the way)? That again is incomplete – but that is often what we mean when we say information security risk. And, again, it’s wrong.

In fact, as I understand it, threat can be conceived as the interaction between the components that create a credible threat agent who has sufficient motivation and access to exercise an exposed vulnerability. This is quite complex. And, unfortunately, the mathematics involved may be beyond many security practitioners? Certainly, when in the field, these mathematics may be too much?

Most of us security professionals are pretty good on the vulnerability side. We’re good at assessing whether a vulnerability is serious or not, whether its impact can be far-reaching or localized, how much is exposed. It seems to me that it’s the threat calculation that’s the difficult piece. I’ve seen so many “methodologies” that don’t go deep enough on threat. Certainly, there are reasonable lists of threat agent categories. And, maybe the methodology includes threat motivation. Maybe it includes capability? But motivation, capability, risk appetite, and access? Very few. And how do you calculate these?

And then there’s the issue of impact. In my practice, I like to get the people close to the business to assess impact. That doesn’t mean I don’t have a part in the calculation. Oftentimes the security person has a better understanding of the potential scope of a vulnerability, and the threats that may exercise it, than a business person. Still, a business person understands what the loss of their data will mean. However impact is assessed, it has a relationship to the threat and vulnerability side. Together, these are the components of risk.

I think, in this space, it probably doesn’t make sense to go into the various calculations that might go into actually coming up with a risk rating? Indeed, this is a nontrivial matter, though I have made the case that any consistent number between zero and one on the threat and vulnerability side of the equation can effectively work. (But that’s another post, about the hour at an RSA conference presentation that helped me to understand this little bit of risk calculation trickery.)
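To illustrate what “any consistent number between zero and one” might look like, here is a toy sketch. The factor names and the bare multiplication are my illustrative assumptions for this post, not a published methodology (FAIR decomposes the factors far more carefully):

```python
# Toy risk-rating sketch: every factor is a judgment normalized to [0, 1].
# Factor names and the simple multiplication are illustrative assumptions.
def risk_score(capability, motivation, access, exposure, impact):
    # A credible threat requires a capable, motivated agent with access.
    threat = capability * motivation * access
    # Risk combines the credible threat with the exposed vulnerability,
    # scaled by the business impact of a successful exercise.
    return threat * exposure * impact

# Example: a highly capable, moderately motivated agent with good access,
# a broadly exposed vulnerability, and severe business impact.
print(round(risk_score(0.9, 0.5, 0.8, 0.7, 0.9), 2))  # 0.23 on a 0..1 scale
```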

In any event, hopefully you can see that expressing some reasonable indication of risk is a nontrivial exercise?

Simply telling an executive that there is a serious vulnerability, or that there’s a flaw in an architecture that “might” lead to serious harm, waters down the risk conversation. When we do this, we set ourselves up for failure. We are likely to get “I accept the risk”, or an argument about how serious the risk is. Neither of these is what we want.

And this is why I mention “poor communication of risk” as an important factor in the failure of risk conversations. Part of this is our risk methodology, as I’ve written. In addition, there is the clarity of communication. We want to keep the conversation focused on business risk. We do not want to bog down in the rightness or wrongness of a particular project or approach, or an argument about the risk calculation. (And these are what Ranum describes) In my opinion, each of these must be avoided in order to get the dialog that is needed. We need a conversation about the acceptable business risks for the organization.

Which brings me to “treating executives as executives”. When we ask executives to come down into the weeds and help us make technical decisions, we are not only inviting trouble, but we are also depriving ourselves and the organization of the vision, strategy, and scope that lie at the executive level. And that’s a loss for everyone involved.

I think we make a big mistake asking the executive to assess our risk calculation. We should have solid methodologies. It must be understood that the risk picture we’re giving is the best possible given the information at hand. There needs to be respect, from executive to information security, that the risk calculations are valid (as much as we can make them).

No FUD! Don’t ask your executives to decide minor technical details. Don’t ask your executives to dispute risk calculations. Instead, ask them to make major risk decisions, major strategic decisions for your organization. Treat them as your executives. Speak to them in business language, about business risks, which include your assessment about how the technical issues will impact the organization in business terms.

I’ve been surprised at how my own shift with respect to these issues has changed my conversation away from pointless arguments to productive decision-making. And yes, Marcus, amazingly enough, bad projects do get killed when I’ve expressed risk clearly and in business terms.

Sometimes it makes business sense to take calculated risks. One of the biggest changes I’ve had to make as a security professional is to avoid prejudice about risk in my arguments. Like many security folk, I have a bias that all risk should be removed. When I started to shift away from that bias, I started to get more real dialog. Not before.

So, Marcus, we disagree once again. But we disagree more about focus than about substance? I like to focus on the things that I can change, rather than repeating the loops of the past endlessly.

Cheers,

Brook

Architect Surprise!

“Architect Surprise” is what I had last Thursday.

No, it’s not a new recipe. Nor is it a building whose structure is shocking. Although most certainly I was surprised enough to sit up and take notice.

I had been attending a series of security meetings where we agonize over the information security issues that are most on our minds.

I’ve been working with this group for 3-4 years. I find baring my security soul to peers and betters deeply helps my practice. And, if I’ve learned something, I’m happy to share any tidbits that I’ve gleaned so that we don’t all make my mistakes. Information security is just plain hard! (my 2 cents)

I can’t tell you who attends these meetings or who runs them.

Still, the facilitators are amazing professionals who work to help us enlighten each other. Let’s just say that it’s the CISOs (Chief Information Security Officers) or their direct reports for a lot of companies you’ve heard of and a few that maybe you haven’t – but they’re all at enterprise level. We have a strict “what is said in the room stays in the room” policy. And I intend to fully honor both the letter and spirit of that agreement.

I’m often the lone techie in the room. I’m not at all sure why these folks tolerate me? But they have been wonderfully open to me, and I deeply appreciate their confidence, their help, and, in many cases, the friendship that has been given to me.

Thursday, we had a panel on “generation Y” in the workplace. An erudite and articulate group of folks was assembled to answer our questions. (Most of the usual attendees of the meetings are at least older than Gen-X, smile)

My architect surprise happened almost at the beginning. Two of these folks, one of whom has been in the workplace for 3-4 years and another person whom she recently helped to hire, are “Solutions Architects” at a big insurance company.

Wow! At the company for whom I work, “Solutions Architect” is almost as high as one can go, technically.

So, I thought, maybe they’re just using this term in a really different way than I’m used to? I let it go and got on with the panel.

Then, later, it became clear that indeed, they probably did use the term “systems architect” as I know it. I don’t know what they mean by “solutions” and expect that there is some semantic difference?

However, even having someone straight from school (Masters, EE) fulfill the role of architect is, I think, generally not a good idea.

I want to be clear that my worry has nothing to do with age, though level of experience is very key.

And here is why: I have long preached that systems architecture practice really requires someone to be deep in at least 2 technical areas. When I write “deep” I don’t mean “studied in school” or “majored” or “masters degree”.

I mean, “practiced successfully”. I mean, deep, as in, understands an area fully to a core level of practice and broadly across most common cases.

And, I’m sorry, young readers – that kind of experience is very hard to get without a practicum. It takes years. I don’t think it’s easy to short-cut the process.

Not that there can’t exist the odd talent who gets deep early and quickly. The extraordinary does manifest.

But most of us mortals have to live and work to gain our depth.

Why at least two areas?

Because I have seen that something magical and unexpected happens after the second deepening. Even more so on the third, the fourth. Besides, with each area, the next gets easier and faster.

It is the ability to identify and make use of patterns. And patterns are the basis for systems architecture. Not rules. Not following guidelines (architects create these from use patterns). Patterns. Design patterns. Data flow patterns. Communications patterns. Security control patterns.

I have watched great engineers founder as architects. Not everyone is cut out for noting and using abstract patterns, for identifying and working through the relationships between system components. (But that’s a different post.)

Systems architecture isn’t about code modules and interfaces, although those are a wonderful place to build (and deepen) design skills. It’s the “school of hard knocks” where I did a great deal of my learning. But systems architecture steps above that to a greater level of abstraction (and sometimes to a much greater level of complexity).

No, I wouldn’t push my fresh-out-of-school recruits into an architect position. It’s not fair to them. And it’s not fair to the practice.

One of the panelists was delighted to have been given the chance to be an architect straight out of school. But I think she’s missing something important – her practicum that will set her free to design well, perhaps even innovatively.

I believe that systems architecture requires breadth and depth. This can only be won through time, successes, and the all important mistakes and missteps.

cheers

/brook