What Is Our Professional Responsibility?

Four times within about a month, I’ve had to deal with “security issues” that were reported as “emergencies”. These appeared as high-priority vulnerabilities requiring an immediate “fix” from my team.

Except, none of these were really security issues. Certainly none of these was an emergency.

None of these were bugs or vulnerabilities. In fact, if the security engineer reporting the issue had done even a modicum of investigation, these would never have been reported. False positives.

In one instance, a security engineer had browsed information on a few web pages of a SaaS application and then decided, without any further investigation, that the web product had “no security”. She or he even went so far as to “ban” all corporate use of that product. Wow! That’s a pretty drastic consequence for a product whose security controls were largely turned off by that customer’s IT department. Don’t you think the security engineer should have checked with IT first? I do.

A few days later, a competitor of that product pointed a security engineer to an instance that had also been configured with few security controls by the customer. The competitor claimed that the “product has no security”. The engineer promptly reported a “security deficiency”.

Obviously, a mature product should have the capability to enforce the security policies of its users, whatever those happen to be. That’s one of the most important tenets of SaaS security: give the customer sufficient tools to enact customer policies. Do not decide for the customer what their appropriate policies must be; let the customer implement policy as required.

Since the customer has the power to enact customer policies, the chosen posture may be wide open or locked down. The security posture depends upon the business needs of each particular customer. Don’t we all know that? Isn’t this obvious? (maybe not?)
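To make that concrete, here’s a minimal sketch, in Python, of what “let the customer implement policy” might look like in a multi-tenant SaaS product. Every name and setting here is hypothetical; the point is only that the product enforces whatever posture the tenant configured, so a wide-open instance reflects a customer choice, not a product vulnerability.

```python
# Hypothetical illustration only: names and settings are invented, not any
# particular product's API. The product enforces whatever the tenant's IT
# department configured -- nothing more, nothing less.
from dataclasses import dataclass, field


@dataclass
class TenantSecurityPolicy:
    require_mfa: bool = False               # wide open by default...
    session_timeout_minutes: int = 480      # ...or locked down, per tenant
    allowed_ip_ranges: list[str] = field(default_factory=list)  # empty = allow all


def login_allowed(policy: TenantSecurityPolicy, used_mfa: bool, source_ip: str) -> bool:
    """Enforce the tenant's chosen posture at login time."""
    if policy.require_mfa and not used_mfa:
        return False
    if policy.allowed_ip_ranges and not any(
        source_ip.startswith(prefix) for prefix in policy.allowed_ip_ranges
    ):
        return False
    return True


# Two tenants, two postures: neither one is a defect in the product.
open_tenant = TenantSecurityPolicy()  # controls largely turned off
locked_tenant = TenantSecurityPolicy(require_mfa=True, allowed_ip_ranges=["10.0."])

print(login_allowed(open_tenant, used_mfa=False, source_ip="203.0.113.7"))    # True
print(login_allowed(locked_tenant, used_mfa=False, source_ip="203.0.113.7"))  # False
```

An investigator who stumbles onto the first tenant and concludes “the product has no security” has really only discovered that one customer chose an open posture.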

In both of the cases that I’ve described (and the other two), I would have thought that the engineer would first investigate:

  • What is the actual problem?
  • How does the application work?
  • What are the application’s capabilities?
  • Is there a misconfiguration?
  • Is there a functionality gap?
  • Is there a bug?

When I was a programmer, the rule was, “don’t report a bug in infrastructure, library, or compiler until you have a small working program that positively demonstrates the bug”.[1] In other words, we had to investigate a problem, thoroughly understand it, isolate it, and provide a working proof before we could call technical support.
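By way of illustration (my invented example, not one from that era), here is the shape such a reproducer takes, sketched in Python: the smallest possible program, with no application code at all, that demonstrates the suspect behavior. In this made-up case, writing the minimal program is exactly what reveals that the “bug” is ordinary floating-point behavior rather than a defect.

```python
# A minimal, self-contained reproducer: no application code, just the
# smallest script that demonstrates the suspect behavior. (Invented example;
# here the "bug" turns out to be expected IEEE-754 floating-point behavior,
# which is precisely the kind of false positive the isolation step catches.)
a = 0.1 + 0.2
expected = 0.3

print(f"0.1 + 0.2 == {a!r}")           # 0.30000000000000004
print(f"expected    {expected!r}")     # 0.3
print(f"equal?      {a == expected}")  # False -- not a compiler or library bug:
                                       # binary floating point cannot represent
                                       # 0.1, 0.2, or 0.3 exactly
```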

Apparently, some security engineers feel no obligation to exercise this kind of technical precision?

I went to the CSO and asked, “If I failed to investigate a problem, would you be upset? If I did it repeatedly, would you fire me?” Answer: “Yes, on all counts.”

Security managers, what’s our accountability? Are your engineers accountable for the issues that they report?

Are we so hungry for performance metrics that we are mistakenly tracking “incidents reported” (which is, IMHO, not a very good measure of anything)? To what are we holding security investigators accountable?

My understanding of my professional ethics requires me to be as sure as I possibly can before running an issue up the flagpole. Further, I like to present all unknowns as clearly as I can. That way, false positives are minimized. I certainly wouldn’t want to stake the precious trust that I’ve carefully built up on a mere assumption that might easily turn out to be a mistake.

Of course, it’s always possible to believe one has discovered a vulnerability that turns out to be a misunderstanding or misconfiguration. That can happen to any one of us. Securing multiple technologies across multiple use cases is difficult. Mistakes are far too easy to make. Because of the ease of error, I expect to see one or two poorly qualified vulnerabilities each year. But four in a month? What?

Let’s all try to be precise and not get carried away in the excitement of the moment. That holds true whether one thinks one has discovered a vulnerability or is taking a bug report. I believe that information security professionals should be seen as “truth tellers”. We must live up to the trust that is placed in us.

By the way, there’s a very exciting conference coming up at the end of August (2011), Security Architecture: Baking Security into Applications and Networks 2011. This conference is particularly relevant for Security Architects and any practitioner who must design security into systems, or who is charged with building a practice for security architecture and design. I’ll write more about this conference later. Stay tuned.

cheers,

/brook

[1] The small program could not include any of the functionality required by the program that was being written. It had to demonstrate the bug without any ancillary code whatsoever. That meant that one had to understand the bug thoroughly before reporting.