Is API Authentication Enough?

Reading a recent article, “Five Common Cloud Security Threats and Data Breaches”, posted by Andreas Dann, I came across some advice for APIs that is worth exploring.

The table stakes (basic) security measures that Mr. Dann suggests are a must. I absolutely support each of the recommendations given in the article. In addition to Mr. Dann’s recommendations, I want to explore the security needs of highly exposed inputs like internet-available APIs.

Again and again, I see recommendations like Mr. Dann’s: “…all APIs must be protected by adequate means of authentication and authorization”. Furthermore, I have been to countless threat modelling sessions where the developers place great reliance on authorization, often sole reliance (unfortunately, as we shall see).

For APIs that are open to the public, offered to large open source communities, or which serve relatively diverse customer bases, authentication and authorization will be insufficient.

As True Positives’ AppSec Assurance Strategist, I’d be remiss if I failed to understand just what protection, if any, controls like authentication and authorization provide in each particular context.

For APIs, essential context can be found in something else Mr. Dann states: “APIs…act as the front door of the system, they are attacked and scanned continuously.” One of the only certainties we have in digital security is that any IP address routable from the internet will be poked and prodded by attackers. Period.

That’s because, as Mr. Dann notes, the entire internet address space is continuously scanned for weaknesses by adversaries. Many adversaries, constantly. This has been true for a very long time.

So, we authenticate to ensure that entities who have no business accessing our systems don’t. Then, we authorize to ensure that entities can only perform those actions that they should and cannot perform actions which they mustn’t. That should do it, right?

Only if you can convince yourself that your authorized user base doesn’t include adversaries. There lies “the rub”, as it were. Only systems that can verify the trust level of authorized entities have that luxury. There actually aren’t very many cases involving relatively assured user trust.

As I explain in Securing Systems, Secrets Of A Cyber Security Architect, and Building In Security At Agile Speed, many common user bases will include attackers. That means that we must go further than authentication and authorization, as Mr. Dann implies with, “…the API of each system must follow established security guidelines”. Mr. Dann does not expand on his statement, so I will attempt to provide some detail.

How do attackers gain access?

For social media and freemium APIs (and services, for that matter), accounts are given away to anyone with a valid email address. One has only to glance at the number of cats, dogs, and birds who have Facebook accounts to see how trivial it is to get an account. I’ve never met a dog who actually uses, or is even interested in, Facebook. Maybe there’s a (brilliant?) parrot somewhere? (I somehow doubt it.)

Retail browsing is essentially the same. A shop needs customers; all parties are invited. Even if a credit card is required for opening an account, which used to be the norm but isn’t so much anymore, stolen credit cards are cheaply and readily available for account creation.

Businesses which require payment still don’t get a pass. The following scenario is from a real-world system for which I was responsible.

Customers paid for cloud processing. There were millions of customers, among which were numerous nation-state governments whose processing often included sensitive data. It would be wrong to assume that nation-states with spy agencies could not afford to purchase a processing account in order to poke and prod, just in case a bug should allow that state to get at the sensitive data items of one of its adversaries. I believe (strongly) that it is a mistake to assume that well-funded adversaries won’t simply purchase a license or subscription if the resulting payoff is sufficient. Spy agencies work slowly and tenaciously, often waiting years for an opportunity.

Hence, we must assume that our adversaries are authenticated and authorized. That should be enough, right? Authorization should prevent misuse. But does it?

Rule #1: all software fails, including our hard-won and carefully built authorization schemes.
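
To make that concrete, here is a deliberately simplified sketch (every name and record is invented) of the sort of authorization failure I see over and over: the endpoint authenticates the caller but never checks that the caller is entitled to the particular object requested, the classic insecure direct object reference, or broken object-level authorization.

```python
# Illustrative sketch only; data, names, and the "API" are invented.
DOCUMENTS = {
    101: {"owner": "alice", "body": "alice's tax filing"},
    102: {"owner": "bob", "body": "bob's medical record"},
}

def get_document_vulnerable(authenticated_user: str, doc_id: int) -> str:
    # Authentication happened upstream; authorization is assumed rather than checked.
    return DOCUMENTS[doc_id]["body"]

def get_document(authenticated_user: str, doc_id: int) -> str:
    doc = DOCUMENTS[doc_id]
    if doc["owner"] != authenticated_user:   # object-level authorization check
        raise PermissionError("not your document")
    return doc["body"]

# A paying, fully authenticated adversary simply walks the ID space:
print(get_document_vulnerable("mallory", 102))  # leaks bob's record
```

The fix is one line, but someone has to think of it during design and someone has to test for it; neither authentication nor a correct login flow will save you.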

“[E]stablished security guidelines” should expand to a robust and rigorous Security Development Lifecycle (SDL or S-SDLC), as James Ransome and I lay out in Building In Security At Agile Speed. There’s a lot of work involved in “established security”, or perhaps better termed, “consensus, industry standard software security”. Among software security experts, there is a tremendous amount of agreement on what “robust and rigorous” software security entails.

(Please see NIST’s Secure Software Development Framework (SSDF), which is quite close to, and validates, the industry standard SDL in our latest book. I’ll note that our SDL includes operational security. I would guess that Mr. Dann was including typical administrative and operational controls.)

Most importantly, for exposed interfaces, I continue to recommend regular, or better, continual fuzz testing.

The art of software is constraining software behaviour to the desired and expected. Any unexpected behaviour is probably unintended and thus will be considered a bug. Fuzzing is an excellent way to find unintended programme behaviour. Fuzzing tools, both open source and commercial, have matured while, at the same time, commercial tools’ prices have been decreasing. The barriers to fuzzing are finally disappearing.
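
For readers who haven’t run a fuzzer, here is a toy, mutation-based harness to make the idea concrete. Real fuzzing should use a coverage-guided tool (AFL++, libFuzzer, Atheris, or a commercial offering); the parser below is a stand-in I invented, with a deliberately planted out-of-range bug of the kind fuzzers excel at finding.

```python
import random

def parse_record(data: bytes) -> dict:
    # Stand-in target: a naive length-prefixed parser with a latent bug.
    length = data[0]
    payload = data[1:1 + length]
    checksum = data[1 + length]      # walks off the end when the length byte lies
    return {"payload": payload, "checksum": checksum}

def mutate(seed: bytes) -> bytes:
    data = bytearray(seed)
    for _ in range(random.randint(1, 4)):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

seed = bytes([4]) + b"ping" + bytes([0x1A])   # a well-formed record
for _ in range(10_000):
    sample = mutate(seed)
    try:
        parse_record(sample)
    except Exception as exc:                  # unintended behaviour: a finding
        print(f"crash on input {sample!r}: {exc!r}")
        break
```

Any input that produces behaviour the programmer never intended is a finding; whether it is exploitable comes later.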

Rest assured that you probably have authorized adversaries amongst your users. That doesn’t imply that we abandon authentication and authorization. These controls continue to provide useful functions, if only to tie bad behaviour to entities.

Along with authentication and authorization, we must monitor for anomalous and abusive behaviour to catch the inevitable failures of our protective software. At the same time, we must do all we can to remove issues before release, using techniques like fuzzing to identify those unknown unknowns that will leak through.
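
As a sketch of what “monitor for anomalous and abusive behaviour” can mean for an API, the fragment below counts authorization denials per authenticated principal over a sliding window and flags outliers. The event shape, window, and threshold are invented for illustration; a real deployment would feed a SIEM or purpose-built analytics rather than a dictionary in memory.

```python
import time
from collections import defaultdict, deque
from typing import Optional

WINDOW_SECONDS = 300        # assumption: 5-minute sliding window
DENIAL_THRESHOLD = 20       # assumption: tune to your real traffic

_denials = defaultdict(deque)   # principal -> timestamps of 403 responses

def looks_abusive(principal: str, status_code: int, now: Optional[float] = None) -> bool:
    """Record one API response; return True when this principal looks abusive."""
    now = time.time() if now is None else now
    if status_code != 403:
        return False
    events = _denials[principal]
    events.append(now)
    while events and now - events[0] > WINDOW_SECONDS:
        events.popleft()
    return len(events) >= DENIAL_THRESHOLD

# e.g., wired into request logging:
# if looks_abusive(user_id, response.status_code):
#     alert(f"{user_id}: probing from behind valid credentials?")
```

An authenticated adversary probing for authorization bugs tends to leave exactly this kind of trail: valid credentials, lots of denials.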
 

Cheers,

/brook

Security Research Is Threat Modeling’s Constructive Criticism

(the following is partially excerpted from my next book)

Adam Shostack, author of Threat Modeling: Designing for Security, has succinctly described a process of constructive critique within engineering[1]:

“The back and forth of design and critique is not only a critical part of how an individual design gets better, but fields in which such criticism is the norm advance faster.”

The Spectre/Meltdown issues are the result of a design critique such as Shostack describes in his pithy quote given above. In fact, one of the fundamental functions that I believe security research can play is providing designers with a constructive critique.

Using Spectre and Meltdown as an example of engineering critique, let’s look at some of the headlines from before the official issue announcement by the researchers:

“Kernel-memory-leaking Intel processor design flaw forces Linux, Windows redesign”, John Leyden and Chris Williams 2 Jan 2018 at 19:29, The Register, https://www.theregister.co.uk/2018/01/02/intel_cpu_design_flaw/

“A CRITICAL INTEL FLAW BREAKS BASIC SECURITY FOR MOST COMPUTERS”, Andy Greenberg, January 3, 2018, Wired Magazine, https://www.wired.com/story/critical-intel-flaw-breaks-basic-security-for-most-computers/

There were dozens of similar headlines (many merely repeating the first few, especially, The Register’s), all declaiming a “flaw” in CPUs. I want to draw the reader’s attention to the word, “flaw”. Are these issues “flaws”? Or, are they the result of something else?

The Register headline and that first article were based upon speculation that had been occurring amongst the open source community supporting the Linux kernel about a couple of apparently odd changes that had been made to kernel code. But the issues to which these changes responded were “embargoed”; that is, the reasoning behind the changes was known only to those making the changes.

Unlike typical open source changes, whose reasoning is public and often discussed by members of the community, these kernel changes had been made opaquely, without public comment, which of course set concerned kernel community members wondering.

To observers not familiar with the reasoning behind the changes, it was clear that something was amiss, likely in relation to CPU functions; anxious observers were left guessing at the motivation for those code changes.

Within the digital security universe, there exists an important dialog between security research aimed at discovering new attack techniques and the designers of the systems and protocols upon which that research is carried out. As Adam noted so very wryly, achieving solid designs, even great ones, and most importantly, resilient designs in the face of omnipresent attack requires an interchange of constructive critique. That is how Spectre and Meltdown were discovered and presented.

None of this collection of (at the time of announcement) new techniques involved exercising a flaw, that is, a design error – in other words, the headlines quoted just above were erroneous and rather misleading[2].

Speculative execution and the use of kernel mapped user memory pages by operating systems were intentional design choices that had been working as designed for more than 10 years. Taken together, at least some of the increases in CPU performance over that period can directly be tied to speculative execution design.

Furthermore, and quite importantly to this discussion, these design choices were made within the context of a rather different threat landscape. Some of today’s very active threat actors didn’t exist, or at least, were not nearly as active and certainly not as technically sophisticated circa 2005 as they are today, May 2018.

If I recall correctly (and I should be able to remember, since I was the technical lead for Cisco’s Web infrastructure and application security team at that time), in 2005, network attacks were being eclipsed by application focused attack methods, especially, web attack methods.

Today, web attacks are very “ho, hum”, very run-of-the-mill, garden variety. But in 2005, when the first CPU speculative execution pipelines were being released, web applications were targets of choice at the cutting edge of digital security. Endpoint worms and gaining entrance through poor network ingress controls had been security’s focus up until the web application attack boom (if I may title it so?). At about that time, targeting web applications was fast displacing network periphery concerns. Attackers were in the process of shifting to targets that used a standard protocol (HTTP) which was guaranteed to pass through organizational firewalls. Many of the new targets were becoming always available via the public Internet.

Since the web application attack boom, attacks and targets have continued to evolve. The threat landscape changed dramatically over the years since the initial design of speculative execution CPUs. Alongside the changes in types of attackers as well as their targets, attacker and researcher sophistication has grown, as has the attackers’ toolbox. 2018 is a different security world than 2005. I see no end to this curve of technical growth in my crystal ball.

The problem is, when threat modeling, whether in 2005 or 2018, one considers the attacks of the past, those of the moment, and then one must try one’s best to project from current understanding to attacks that might arise within the foreseeable future. Ten or twelve years seems an awfully long horizon of prescience, especially when considering the rate at which technical change continues to take place.

As new research begins to chew at the edges of any design, I believe that the wise and diligent practitioner revisits their existing threat models in light of developments.

If I were to fault the CPU and operating system makers whose products are subject to Spectre or Meltdown, it would be for a failure to anticipate where research might lead, as research has unfolded. CPU threat modelers could have taken into account advances in research indicating unexpected uses of cache memory.

Speculative execution leaves remnants of a speculated execution branch in cache memory when a branch has not been taken. It is those remnants that lie at the heart of this line of research.

A close examination of the unfolding research might very well have led those responsible for updating CPU threat models to consider the potential for something like Spectre and Meltdown. (Or, perhaps the threat models were updated, but other challenges prevented updates to CPU designs? CPU threat modelers, please tell us all what the real story is.)

I’ve found a publication chain during the previous three years that, to me, points towards the new techniques. Spectre and Meltdown are not stand-alone discoveries, but rest upon a body of CPU research that had been published regularly for several years.

As I wrote for McAfee’s Security Matters blog in January of 2018 (as a member of McAfee’s Advanced Threat Research Team),

“Meltdown and Spectre are new techniques that build upon previous work, such as “KASLR”  and other papers that discuss practical side-channel attacks. The current disclosures build upon such side-channels attacks through the innovative use of speculative execution….An earlier example of side-channel based upon memory caches was posted to Github in 2016 by one of the Spectre/Meltdown researchers, Daniel Gruss.” Daniel Gruss is one of the Spectre and Meltdown paper authors.

Reading these earlier papers, it appears to me that the work describing some of the parent techniques that would be used for the Spectre and Meltdown breakthroughs could have been read (should have been read?) by CPU security architects in order to re-evaluate the CPUs’ threat models. That previously published research was most certainly available.

Of course, hindsight is always 20/20; I had the Spectre and Meltdown papers in hand as I reviewed previous research. Going the other way might be more difficult?

Spectre and Meltdown did not just spring miraculously from the head of Zeus, as it were. They are the results of a fairly long and concerted effort to discover problems with and thus, hopefully, improve the designs of modern processors. Indeed, the researchers engaged in responsible disclosure, not wishing to publish until fixes could be made available.

To complete our story, the driver that tipped the researchers to an early, zero-day disclosure (that is, disclosure without available mitigations or repairs) was the speculative (if you’ll forgive the pun?) journalism (headlines quoted above) that gained traction based upon misleading (at best) or wrong conclusions. Claiming a major design “flaw” in millions of processors is certainly a reader-catching headline. But, unfortunately, these claims were vastly off the mark, since no flaw existed in the CPU or operating system designs.

While it may be more “interesting” to imagine a multi-year conspiracy to cover up known design issues by evil CPU makers, no such conspiracy appears to have taken place.

Rather, in the spirit of responsible disclosure, the researchers were waiting for mitigations to be made available to customers; CPU manufacturers and operating system coders were heads down at work figuring out what appropriate mitigations might be, and just how to implement these with the least amount of disruption. None of these parties was publicly discussing just why changes were being made, especially to the open source Linux kernel.

Which is precisely what one would expect in order to protect millions of CPU users: embargo the technical details to foil attackers. There is actually nothing unusual about such a process; it’s all very normal and typical, and unfortunately for news media, quite banal[3].

What we see through the foregoing example about Spectre and Meltdown is precisely the sort of rich dialog that should occur between designers and critics (researchers, in this case).

Designs are built against the backdrop and within the context of their security “moment”. Our designs cannot improve without collective critique amongst the designers; such dialog, internal to an organization or at least a development team, is essential. I have spoken about this process repeatedly at conferences: “It takes a village to threat model” (to misquote a famous USA politician).

But, there’s another level, if you will, that can reach for a greater constructive critique.

Once a design is made available to independent critics, that is, security researchers, research discoveries can, and I believe should, become part of an ongoing re-evaluation of the threat model, that is, of the security of the design. In this way, we can, as an industry, reach for the constructive critique called for by Adam Shostack.

[1]In my humble experience, Adam is particularly good at expressing complex processes briefly and clearly. One of his many gifts as a technologist and leader in the security architecture space.

[2]Though salacious headlines apparently increase readership and thus advertising revenue. Hence, the misleading but emotion plucking headlines.

[3]Disclosure: I’ve been involved in numerous embargoed issues over the years.

KRACKs Against Wi-Fi Serious But Not End of the World

On October 12, researcher Mathy Vanhoef announced a set of Wi-Fi attacks that he named KRACKs, for key reinstallation attacks. These attack scenarios target the WPA2 authentication and encryption key establishment portions of the most recent Wi-Fi protocols; the technique is key reinstallation.

The attack can potentially allow attackers to send attacker-controlled data to your Wi-Fi connected device. In some situations, the attacker can break the built-in Wi-Fi protocol’s encryption to reveal users’ Wi-Fi messages.

However, key reinstallation depends either upon working within the inherent timing of a Wi-Fi handshake during a discrete, somewhat rare (in computer terms) exchange, or upon the attacker forcing the vulnerable exchange through some method (see below for examples). Both of these scenarios take a bit of attacker effort, perhaps more effort than using any one of the existing methods for attacking users over Wi-Fi?

For this reason, while key reinstallation attacks (“KRACKs”) are a serious issue that should be fixed as soon as possible (see below), our collective digital lives probably won’t experience a tectonic shift from them.

Please read on for a technical analysis of the issue.

“… because messages may be lost or dropped, the Access Point (AP) will retransmit message 3 if it did not receive an appropriate response as acknowledgment. As a result, the client may receive message 3 multiple times. Each time it receives this message, it will reinstall the same encryption key, and thereby reset the incremental transmit packet number (nonce) and receive replay counter used by the encryption protocol. We show that an attacker can force these nonce resets by collecting and replaying retransmissions of message 3 of the 4-way handshake. By forcing nonce reuse in this manner, the encryption protocol can be attacked, e.g., packets can be replayed, decrypted, and/or forged.”

–Mathy Vanhoef, “Key Reinstallation Attacks: Forcing Nonce Reuse in WPA2”

The highlighted text above is Vanhoef’s, not mine.

Depending upon which key establishment exchange is attacked, as Vanhoef notes, injection of messages might be the result. But, for some exchanges, decryption might also be possible.

For typical Wi-Fi traffic exchange, AES-CCMP is used between the client (user) and the access point (AP, the Wi-Fi router). Against AES-CCMP, KRACKs are not thought to enable packet forgery, though replay and decryption are possible. But decryption is not the only thing to worry about.

An attacker might craft a message that contains malware, or at least a “dropper,” a small program that when run will attempt to install malware. Even though the message may make no sense to the receiving program, the dropper may still be retained in memory, or in temporary (perhaps even permanent) storage, by the receiving computer. The attacker then has to figure out some way to run the dropper—in attack parlance, to detonate the exploit that has been set up through the Wi-Fi message injection.

“Simplified, against AES-CCMP an adversary can replay and decrypt (but not forge) packets. This makes it possible to hijack TCP streams and inject malicious data into them. Against WPA-TKIP and GCMP the impact is catastrophic: packets can be replayed, decrypted, and forged.“

–Mathy Vanhoef, “Key Reinstallation Attacks: Forcing Nonce Reuse in WPA2”

Packet forgery is worse, as the receiver may consider that the data is legitimate, making it easier for an attacker to have the receiver accept unexpected data items, influencing the course of an exchange and establishing a basis upon which to conduct follow-on, next-step exploits.
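
To see why the nonce resets in Vanhoef’s quote matter so much, consider what key-and-nonce reuse does to any stream-style cipher mode. The sketch below uses AES-CTR from the Python cryptography package purely for illustration; CCMP’s actual CCM construction differs, but the keystream-reuse principle it relies upon is the same, and the plaintexts are invented.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, nonce = os.urandom(16), os.urandom(16)

def encrypt_with_reused_nonce(plaintext: bytes) -> bytes:
    # Deliberately reusing key and nonce: exactly what a forced key reinstallation causes.
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return enc.update(plaintext) + enc.finalize()

p1 = b"GET /account/balance HTTP/1.1   "   # imagine this packet is known or guessable
p2 = b"secret session cookie: 12345678 "   # the packet the attacker wants
c1, c2 = encrypt_with_reused_nonce(p1), encrypt_with_reused_nonce(p2)

xor_of_plaintexts = bytes(a ^ b for a, b in zip(c1, c2))     # identical keystream cancels out
recovered = bytes(a ^ b for a, b in zip(xor_of_plaintexts, p1))
assert recovered == p2     # known plaintext in one packet reveals the other
```

No key is ever “broken”; the attacker simply arranges for the same keystream to protect two different messages.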

In my opinion KRACKs are a serious problem to which the wise will wish to respond.

The essential problem with Wi-Fi is that it is free to intercept. Radio waves travel over the air, which, as we all know, is a shared medium. One need not “plug in” to capture Wi-Fi (or any radio-carried) packets. One simply must craft a receiver strong enough to “hear,” that is, receive the transmissions of interest.

But although certainly serious, are KRACKs the end of the known digital universe? I think not.

First, the attacker has to be present and alert to the four-way WPA2 handshake. If this has already passed successfully, the attacker’s key reinstallation is no longer possible. The attacker’s message must be interjected at handshake message #3, which will require some precision. Or, the attacker must force the key exchange.

One could, I imagine, write an attack that would identify all WPA2 handshake exchanges within range, and inject the attacker’s message #3 into each handshake to deliver some sort of follow-on exploit. This would be a shotgun approach, and might, on large or relatively open networks, have potential attacker benefits.[1]

Supplicants (as Wi-Fi clients are called) do not constantly handshake. The handshake interchange takes place upon authentication and reauthentication, and in some networks upon shifting from one AP to another when the client changes location (a “handoff”). These are discrete triggers, not a constant flow of handshake message #3. Johnny or Janie attacker has to be ready and set up in anticipation of message #3. That is, right after message #3 goes by, the attacker’s #3 must be sent. Once the completed handshake message #4 goes by, I believe that the attack opportunity closes.

I fear that the timing part might be tricky for the average criminal attacker. It would be much easier to employ readily available Wi-Fi sniffing tools to capture sufficient traffic to allow the attacker to crack weak Wi-Fi passwords.

There are well-understood methods for forcing Wi-Fi authentication (see DEAUTH, below). Since these methods can be used to reveal the password for the network or the user, using one of these may be a more productive attack?

There are other methods for obtaining a Wi-Fi password, as well: the password must be stored somewhere on each connecting device, or the user would need to reenter the password upon every connection/reconnection. Any attack that gives an attacker access to privileged information kept by any device’s operating system can be used to gain locally stored secrets like Wi-Fi passwords.

Wi-Fi password theft (whether WPA-Enterprise or WPA-Personal) through Wi-Fi deauthentication, “DEAUTH”, has been around for some time; there are numerous tools available to make the attack relatively trivial and straightforward.

Nation-state attackers may have access to banks of supercomputers or massive parallel processing systems that they can apply for rapid password cracking of even complex passwords. A targeted nation-state–sponsored Wi-Fi password-cracking attack is difficult to entirely prevent with today’s technologies. There are likely other adversaries with access to sufficient resources to crack complex passwords in a reasonable amount of time, as well[2].

Obviously, as Vanhoef suggests, as soon as your operating system or Wi-Fi client vendor offers an update to this issue, patch it quickly. Problem, hopefully, solved. Please see https://www.krackattacks.com for updates from security vendors that are working with the discoverer. Question your vendor. If your vendor does not respond, pressure them; that’s my suggestion.

Other actions that organizations can couple with good Wi-Fi hygiene are to use good traffic and event analysis tools, such as modern Security Information Event Management (SIEM) software, network ingress and egress capture for anomaly analysis, and perhaps ingress/egress gateways that prevent many types of attack and forms of traffic.

“One can use VPN, SSH2 tunnels, etc., to ensure some safety from ARP poisoning attacks. Mathy Vanhoef also uses an SSL stripper that depends on badly configured web servers. Make sure your TLS implementation is correct!” – Carric Dooley, Global Lead, Foundstone Consulting Services

Organization Wi-Fi hygiene includes rapidly removing rogue access points, Wi-Fi authentication based upon the organization’s central authentication mechanism (thus don’t employ a WPA password), strong password construction rules, and certificates issued to each Wi-Fi client without which access to Wi-Fi will be denied. Add Wi-Fi event logs to SIEM collection and analysis. Find out whether organization access points generate an event on key reinstallation. This is a fairly rare event. If above-normal events are being generated, your Wi-Fi may be suffering from a KRACK.
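
If your access points cannot emit a key reinstallation event, a rough proxy is to watch for clients that see far more EAPOL handshake frames than a normal four-message exchange would produce. The sketch below uses scapy and is conceptual only: interface setup, the threshold, and what counts as “above normal” are assumptions you would need to adapt to your environment, and a real deployment would feed the SIEM rather than print.

```python
from collections import Counter
from scapy.all import EAPOL, Ether, sniff

HANDSHAKE_FRAME_THRESHOLD = 8   # assumption: a clean 4-way handshake is only 4 frames

eapol_counts = Counter()

def on_frame(pkt):
    if not pkt.haslayer(EAPOL) or not pkt.haslayer(Ether):
        return
    pair = (pkt[Ether].src, pkt[Ether].dst)
    eapol_counts[pair] += 1
    if eapol_counts[pair] > HANDSHAKE_FRAME_THRESHOLD:
        print(f"unusual EAPOL volume between {pair}: {eapol_counts[pair]} frames")

# 0x888e is the EAPOL ethertype; store=False keeps memory flat during long captures.
sniff(filter="ether proto 0x888e", prn=on_frame, store=False)
```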

None of the foregoing measures will directly prevent key reinstallation attacks. Reasonable security practices do make gaining access to Wi-Fi more difficult, which will prevent or slow other Wi-Fi attacks. Plus, physical security ought to pay attention to anyone sitting in the parking lot with a Wi-Fi antenna pointing at a campus building. But that was just as true before KRACKs were discovered.

For home users, use a complex password on your home Wi-Fi. Make cracking your password difficult. Remember, you usually must enter the password only once for each client device.

Maintain multiple Wi-Fi networks (probably, multiple access points), each with different passwords: one for work or other sensitive use; another for smart TVs and other potentially vulnerable devices, such as Internet of Things devices—isolate those from your network shares, printers, and other sensitive data and trusted devices. Install a third Wi-Fi for your guests. That network should be set up so that each session is isolated from the others; all your guests need is to get to the internet. Each SSID network must have a separate, highly varied password: numbers, lowercase, uppercase, symbols. Make it a long passphrase that resists dictionary attacks.
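
For the passphrase advice, a small standard-library sketch shows both how to generate one and why length beats cleverness. The word list here is a tiny stand-in of my own invention; in practice use a large, diceware-style list of thousands of words.

```python
import math
import secrets

WORDS = ["gravel", "lantern", "orbit", "velvet", "thistle", "harbor",
         "quartz", "meadow", "cipher", "walnut", "ember", "fjord"]  # toy list

def passphrase(n_words: int = 6, wordlist=WORDS) -> str:
    # secrets, not random: cryptographically strong choices.
    return "-".join(secrets.choice(wordlist) for _ in range(n_words))

print(passphrase())
# Entropy is n_words * log2(len(wordlist)):
print(f"{6 * math.log2(len(WORDS)):.1f} bits from this toy list; "
      f"{6 * math.log2(7776):.1f} bits from 6 words of a 7776-word diceware list")
```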

Wi-Fi routers are a commodity; I do not suggest spending a great deal of money. How much speed do your occasional guests really need? Few Internet connections are as fast as even bottom-tier home Wi-Fi. Most of your visitors probably do not need access to your sensitive data, yes? Should your kids have their own Wi-Fi so that their indiscretions do not become yours?

Multiple Wi-Fi networks will help to slow and perhaps confound KRACKs cybercriminals. “Which network should I focus on?” You increase the reconnaissance the attacker must perform to promulgate a successful attack. Maybe they will move on to a simpler home network.

If you happen to notice someone sitting in a car in your neighborhood with an open laptop, be suspicious. If they have what looks like an antenna, maybe let law enforcement know.

Basic digital security practices can perhaps help? For instance, use a VPN even over Wi-Fi. Treat Wi-Fi, particularly, publicly available Wi-Fi as an untrustable network. Be cognizant of where the Wi-Fi exists. WPA2 is vulnerable to decryption, so don’t trust airports, hotels, cafes, anywhere where the implementers and maintainers of the network are unknown.

Users of some Android and Linux clients should be aware that an additional implementation error allows an attacker immediate access to the decryption of client and access point Wi-Fi traffic. Remember, all those smart TVs, smart scales, home automation centers, and thermostats are usually nothing more than specialized versions of Linux. Hence, each may be vulnerable. On the other hand, if these are segregated onto a separate, insecure Wi-Fi, at least attackers will not have readily gained your more sensitive network and devices. While waiting for a patch for your vulnerable Android device, perhaps it makes sense to put it on the untrusted home Wi-Fi as a precaution.

You may have noticed that I did not suggest changing the home Wi-Fi passwords. Doing so will not prevent this attack. Still, the harder your WPA2 password is to crack, the more likely common cybercriminals will move on to easier pickings.

As is often the case in these serious issues, reasonable security practices may help. At the same time, failure to patch quickly leaves one vulnerable for as long as it takes to patch. I try to remember that attackers will become ever more facile with these methods and what they can do with them. They will build tools, which, if these have value, will then become ever more widespread. It is a race between attacker capability and patching. To delay deploying patches is typically a dance with increasing risk of exploitation. In the meantime, the traffic from this attack will be anomalous. Watch for it, if you can.

Many thanks to Carric Dooley from Foundstone Professional Services for his suggestions to complete and round out this analysis as well as for keeping me technically accurate.

Cheers,

/brook

[1] “When a client joins a network, it […] will install this key after receiving message 3 of the 4-way handshake. Once the key is installed, it will be used to encrypt normal data frames using an encryption protocol. However, because messages may be lost or dropped, the Access Point (AP) will retransmit message 3 if it did not receive an appropriate response as acknowledgment. […] Each time it receives this message, it will reinstall the same encryption key, and thereby reset the incremental transmit packet number (nonce) and receive replay counter used by the encryption protocol. We show that an attacker can force these nonce resets by collecting and replaying retransmissions of message 3 of the 4-way handshake. By forcing nonce reuse in this manner, the encryption protocol can be attacked, e.g., packets can be replayed, decrypted, and/or forged.”

[2] Interestingly, my LinkedIn password was among those stolen during the 2011 LinkedIn breach. A research team attempted to crack passwords. After 3 months, they had cracked something like 75% of the passwords. However, my highly varied, but merely 6-character password was never cracked. While today’s cracking is factors more sophisticated and rapid, a varied, non-dictionary password still slows the process down considerably.

Why no “S” in IoT?

My friend, Chris Romeo, a security architect and innovator for whom I have deep respect and from whom I learn a great deal, just posted a blog about the lack of security (“S”) in the Internet of Things (IoT), “The S In IoT Stands For Security“.

Of course, I agree with Chris about his three problem areas:

  1. Lack of industry knowledge
  2. Developer’s lack of security knowledge and understanding
  3. Time to market pressure, particularly as it exists for startups

I don’t think one can argue with these; in fact, these problems have been and continue to be pervasive, well beyond and before the advent of IoT. Still, I would like to add two more points to Chris’ explanation of the problem space, if I may add to what I hope will be an ongoing and lively conversation within the security industry?

First, IoT products are not just being produced at startups; big players are building IoT, too. Where we place sensors and for what purposes is a universe of different solutions, all the way from moisture sensors in soil (for farming), through networkable household gadgets like light bulbs and thermostats, to the activity monitor on a human’s wrist, to wired-up athletes, to the by now famous mirai botnet camera.

Differing use cases involve varying types and levels (postures) of security. It is a mistake to lump these all together.

When at Intel, I personally reviewed any number of IoT projects that involved significant security analysis and implementation. I’m going to guess that most of the major activity monitoring server-side components should have had quite a lot of basic web and cloud security in-built? At least for some products, interaction between the device (IoT sensor) and any intermediary device, like a smart phone or personal computer, has also been given some significant security thought – and it turns out that this is an area where security has been improving as device capabilities have grown.

It’s hard to make an all-out statement on the state of IoT security, though I believe that there are troublesome areas, as Chris points out, most especially for startups. Another problem area continues to be medical devices, which have tended to lag badly because they’ve been produced with more open architectures, as has been presented at security conferences time and again. Then, there’s the Jeep Cherokee remote control hack.

Autonomous autos are going to push the envelope for security, since they sit at a nexus between machine learning, big (Big BIG) data, consumer convenience, and physical safety. The companies that put the cars together are generally larger companies, though many of these do not lead in the cyber security space. Naturally, the technology that goes into the cars comes from a plethora of concerns: huge, mid-sized, tiny, and startup. Security consciousness and delivery will range from clueful to completely clueless across those suppliers. A repeating problem area for security continues to be complex integrations where differing security postures are pasted together without sufficient thought about how the interactions change the overall security of the integrated system.

Given the foregoing, I believe that it’s important to qualify IoT somewhat, since context, use case, and semantics/structure matter.

Next, and perhaps more important (save the most important for last?) are the economics of development and delivery. This is highlighted best by the mirai camera: it’s not just startups who have pressure. In the camera manufacturer’s case (as I understand it) they have near zero economic incentive to deliver a secure product. And this is after the mirai attack!

What’s going on here? Importantly, the cameras are still being sold. The access point with a well-known, default password is presumably still on the market, though new cameras may have been remediated. Security people may tear their hair out with an emphatic, “Don’t ever do that!” But, consider the following, please.

  • Production occurs distant (China, in this case) from the consumption of the product
  • Production and use occur within different (vastly different, in this case) legal jurisdictions
  • The incentive is to produce product as cheaply as possible
  • Eventual users do not buy “security” (whatever that means to the customer) from the manufacturer
  • The eventual user buys from manufacturer’s customer or that customer’s customer (multiple entities between consumer and producer)

Where’s the incentive to even consider security in the above picture? I don’t see it.

Instead, I see, as a Korean partner of a company that I once worked for said about acquiring a TCP/IP stack, “Get Internet for free?” – meaning of course, use an open source network stack that could be downloaded from the Internet with no cost.

The economics dictate that the manufacturer download an operating system, usually some Linux variant. Add a working driver, perhaps based upon an existing one? Maybe there’s a configuration thingy tied to a web server that, perhaps, came with the Linux. Cheap. Fast. Let’s make some money.

We can try to regulate this up, down, sideways, but I do not believe that this will change much. Big companies will do the right thing, and charge more proportionately. Startups will try to work around the regulations. Companies operating under different jurisdictions or from places where adherence is not well policed or can be bribed away will continue to deliver default passwords to an open source operating system which delivers a host ripe for misuse (which is what mirai turns out to be).

Until we shift the economics of the situation, nothing will change. In the case of mirai, since the consumer of the botnet camera was likely not affected during that attack, she or he will not be applying pressure to the camera manufacturer, either. The manufacturer has little incentive to change; the consumer has little pressure to enforce via buying choices.

By the way, I don’t see consumers becoming more security knowledgeable. Concerned about digital security? Sure. Willing to change the default password on a camera or a wifi router? Many consumers don’t even go that far.

We’re in a bind here; the outlook does not look rosy to me. I’m open to suggestions.

Thanks, Chris, for furthering this discussion.

cheers,

/brook

RSA 2017: 5 Opportunities!

I feel incredibly grateful that RSA Conference 2017, Digital Guru and IOActive have given me so many opportunities this year to share with you, to meet you.

I will speak about the hard-earned lessons that I (we) have gained through years of threat modeling programmes and training on Wednesday morning, February 15th. The very same day, I will give a shortened version of the threat model class that I, along with a couple of Intel practitioners, have developed. And Justin Cragin, Intel Principal Engineer and cloud guru, and I will share some of our thoughts on DevOps as a vehicle for product security on Thursday morning (see below for the schedule).

IOActive have asked me to participate in a panel discussion Tuesday afternoon at 3PM at their usual RSA event, “IOAsis“, at Elan on Howard Street, across from Moscone West. I believe that you may need to register beforehand to attend IOAsis. IOActive’s programme is always chock full of interesting information and lively interchange.

Finally, once again, I’ll sign books at 2PM on Wednesday in front of the Digital Guru official RSA bookstore. In the past, it’s been located in the South Lobby of Moscone during the RSA conference. One doesn’t need a pass to get to the book store. Please stop by just to say “hi”; book purchase not required, though, of course, I’m happy to personally sign copies of my books.

My RSA talks schedule follows:

Tuesday

3:00 P.M. PST – Security Plan Development: Move to a Better Security Reality
Presented by: Brad Hegrat, IOActive
IOAsis, Elan Event Venue 
Talk Description

Wednesday

AIR-W02: Threat Modeling the Trenches to the Clouds
Marriott Marquis | Yerba Buena 8, 8:00 AM – 8:45 AM

LAB3-W04: Threat Modeling Demystified
Moscone West | 2022, 10:30 AM – 12:30 PM

Book Signing, RSA Bookstore, 2:00 PM – 2:30 PM

Thursday

ASD-R03: DevSecOps: Rapid Security for Rapid Delivery
Moscone West | 2005, 9:15 AM – 10:00 AM

“We sell Hammers” – Not Security!

Several former Home Depot employees said they were not surprised the company had been hacked. They said that over the years, when they sought new software and training, managers came back with the same response: “We sell hammers.”

Failure to understand the dependence that we have not just on our obvious digital devices – smart phone, laptop, tablet, fancy fitness bling on your wrist – but also on a matrix of interconnection tying all these devices and billions more together – will land you in the hot seat. For about three billion out of the seven billion people on this planet, we have long since passed the point where we are isolated entities who act alone and in some measure of unconnected global anonymity. For most of us, our lives are not just dependent upon technology itself, but also on the capabilities of innumerable, faceless business entities acting on our behalf.

Consider the following, common, but trivial example.

When I swipe my credit card at the pump to purchase petrol, that transaction passes through any number of computation devices and applications operated by a chain of business entities. The following is a typical scenario (an example flow – but not the only one, of course):

  • The point of sale device1, itself (likely supplied by a point of sale provider)
  • The networking equipment at the station2
  • The station’s Internet provider’s equipment (networking, security, applications – you have no idea!)
  • One or more telecom companies’ networking infrastructure across the Internet backbone
  • The point of sale company or their proxy
  • More networking equipment and Internet providers
  • A credit card payment processor
  • More networking equipment and Internet providers
  • The card issuer who must validate the card and agree to pay the transaction for me

And so on…. All just to fill my tank up. It’s seamless and invisible – the communications between entities usually bring up an encrypted tunnel, though the protection offered is not as solid as you may hope – invisible and seamless, except when the processing is not so invisible, like during a compromise and breach.

Every one of these invisible players has to have good enough security to protect me, and you, if you also use some sort of payment card for your petrol.

Home Depot, and Target before them (and who knows who’s next?), failed to understand that in order to sell a hammer in the Internet world, you’re participating in this huge web of digital interconnection. Even more so, if you’re large enough, your business network will have become an ecosystem of digital entities, many of whose security practices will affect your security posture in fairly profound ways. When 2 (or more) systems connect, each may affect the security posture of the other, sometimes profoundly.

And there be pirates in them waters, Matey. As I wrote in the introduction to my next book, Securing Systems: Applied security architecture and threat models:

“…as of this writing, we are engaged in a cyber arms race of extraordinary size, composition, complexity, and velocity.”

One of the biggest problems for security practitioners remains that the cyber “arms race” isn’t just between a couple of nation-states. Foremost, the nation-state cyber war has to cross the same digital ocean that we use for our daily lives and digital entertainment. The shared web makes every digital citizen potential “collateral damage”. But, there are more players than governments.

As can occur in a ground war, virtual “warlords” have private cyber armies marauding for loot, my loot, your loot. Those phishing spam messages don’t come from your friends, right3? Just trying to categorize the various entities engaged in cyber attacks could generate a couple of fine PhD theses and perhaps even provide years of follow-on papers? The number and varying loyalties of the many players who carry out cyber attacks increase the “size” of the problem, add to the “composition”, and generate a great deal of “complexity”. It’s enough to make a well-meaning box-store retailer bury its collective head in the virtual sand. Which is precisely what happened to that hammer seller, Home Depot.

But answering the “who” doesn’t complete the picture. There’s the macro “how”, as well4. The Internet seems to suffer from the “tragedy of the commons“.

In order to keep the Internet sufficiently interesting with compelling content such that we want to participate, it absolutely must remain neutral in character5. While Internet democracy certainly appears to be quite messy, the very thing that drives the diversity of content on the Internet is its level playing field6.

But leaving the Internet as an open field for all to enjoy means that some will take advantage of the many simply because the “pickings” are too rich to ignore. There is just too much to steal to let those resources lie untouched. And the pirates don’t! People actually do answer those “Nigerian Prince” scam emails. Really, someone does. People do buy those knock-off drugs. For the 3 billion of us who are digitally connected, it’s a dangerous digital day, every single day. Watch what you click!

In short, if you’re reading this on my blog site, you are perhaps an unwitting participant in that “cyber arms race of extraordinary size, composition, complexity, and velocity.” And so is every business that employs modern digital capabilities, whether for payments or any other task. Failure to understand just how dependent a business is upon this matrix of digital interaction will make one a Target (pun intended). CEOs, you may want to pay closer attention? Ignoring the current realities could cost you your job, perhaps even your career7!

If you think that you only sell objects and not some level of digital security, I fear that you are likely to be very sadly mistaken?

cheers,

/brook

  1. My friend and former colleague, Lucy McCoy, wrote the communications code in the first generation of gas pump payment terminals. At that time, terminals communicated via modem and phone line. She was a serial communications wiz. I remember the point of sale terminal laid out in her lab area. Lucy has since passed away. She was a brilliant engineer; she gave my code the best quality testing ever.
  2. The transactions have to get from station to payment processing, right? Who runs those cable modems and routers at the station? Could be the Internet provider, or maybe not. I run my own modem/routers/switches at home to which I have full admin access.
  3. I don’t know any spammers, as far as I know? Perhaps I make an unwarranted assumption that you don’t, either?
  4. The “what” and “why” of cyber attack seem pretty clear. Beyond money, attackers are after some other advantage: geo-political, business, just causes (pick your favourite or most hated cause), career enhancement, what-have-you. This is all pretty well documented. The security industry seems preoccupied with the “what”, i.e., the technical details of exploits. Again, these technical details seem pretty well documented.
  5. Imagine if your most hated or feared government had control over your Internet use, even the Internet itself, and proceeded to feed you exactly what they wanted you to know and prevented you from any other content. How would you like that?
  6. The richness and depth that is an emergent and continuing quality of the Internet, to me, demonstrates the absolute genius of the originators and early framers of the protocols and design.
  7. Of course, if I had a severance of $15.9 million, maybe I wouldn’t very much mind ending my career?

Of Flaws, Requirements, And Templates

November 13th, 2013, I spoke at the annual BSIMM community get-together in Reston, Virginia, USA. I had been asked to co-present on Architecture Risk Assessment (ARA) and threat modeling alongside Jim Delgrosso, long-time Cigital threat modeler. We gave two talks:

  1. Introduction to  Architecture Risk Assessment And Threat Modeling
  2. Scaling Architecture Risk Assessment And Threat Modeling To An Enterprise

Thanks much, BSIMM folk, for letting me share the tidbits that I’ve managed to pick up along my way. I hope that we delivered on some of  the promises of our talks? One of my personal values from my participation in the conference was interacting with other experienced practitioners.

Make no mistake! ARA-threat modeling is and will remain an art. There is science involved, of course. But people who can do this well learn through experience and the inevitable mistakes and misses. It is a truism that it takes a fair amount of background knowledge (not easily gained):

  • Threat agents
  • Attack methods and goals used by each threat agent type
  • Local systems and infrastructure into which a system under analysis will go
  • Some form of fairly sophisticated risk rating methodology.

These specific knowledge sets sit on top of significant design ability and experience. The assessor has to be able to understand a system architecture and then to decompose it to an appropriate level.

The knowledge domains listed above are pre-requisite to an actual assessment. That is, there are usually years of experience in system and/or software design, in computer architectures, in attack patterns, threat agents, security controls, etc., that the assessor  brings to an ARA. One way of thinking about it is that ARA/threat modeling is applied computer security, computer security applied to the system under analysis.

Because ARA is a learned skill with many local variations, I find it fascinating to match my practice to practices that have been developed independent of mine. What is the same? Where do we have consensus? And, importantly, what’s different? What have I missed? How can I improve my practice based upon others’? These co-presentations and conversations are priceless. Interestingly, Jim and I agreed about the basic process we employ (sketched very roughly in code after the list below):

  • Understand the system architecture, especially the logical/functional representation
  • Uncover intended risk posture of the system and the risk tolerance of the organization
  • Understand the system’s communication flows, to the protocol interaction level
  • Get the data sensitivity of each flow and each store; the highest sensitivity rules any resulting security needs
  • Enumerate attack surfaces
  • Apply relevant active threat agents and their methodologies to attack surfaces
  • Filter out protected, insignificant, or unlikely attack vectors and scenarios
  • Output the security that the system or the proposed architecture and design are missing in order to fulfill the intended security posture
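
The toy sketch below walks those steps as data and code, purely to make the shape of the process concrete. Every name, score, and threshold is invented for illustration; the real work is expert human judgement, not a script.

```python
from dataclasses import dataclass

@dataclass
class Flow:
    name: str
    protocol: str
    data_sensitivity: int        # 1 (public) .. 4 (restricted); highest rules
    internet_exposed: bool

@dataclass
class ThreatAgent:
    name: str
    capability: int              # 1 (opportunistic) .. 4 (well resourced)
    targets_internet_surfaces: bool

def attack_surfaces(flows):
    # Enumerate attack surfaces: here, simply the internet-exposed flows.
    return [f for f in flows if f.internet_exposed]

def security_requirements(flows, agents, risk_tolerance=6):
    reqs = []
    for surface in attack_surfaces(flows):
        for agent in agents:
            if not agent.targets_internet_surfaces:
                continue                     # filter out unlikely attack vectors
            risk = surface.data_sensitivity * agent.capability
            if risk >= risk_tolerance:       # intended posture vs. rated risk
                reqs.append(f"{surface.name}: counter {agent.name} (risk {risk})")
    return reqs

flows = [Flow("public REST API", "HTTPS", 3, True),
         Flow("internal admin bus", "AMQP", 4, False)]
agents = [ThreatAgent("cybercriminal", 3, True),
          ThreatAgent("curious insider", 2, False)]
print("\n".join(security_requirements(flows, agents)))
```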

There doesn’t seem to be much disagreement on what we do. That’s good. It means that this practice is coalescing. The places where we disagree or approach the problem differently are, I think, pretty interesting.

Gary McGraw calls security architecture misses “flaws”: flaws as opposed to software bugs. Bugs can be described as errors in implementation, usually coding. Flaws would then be those security items which didn’t get considered during architecture and design, or which were not designed correctly (like poorly implemented authentication, authorization, or encryption). I would agree that implementing some sort of low-entropy scramble and then believing that you’ve built “encryption” is, indeed, both a design flaw and an implementation error*.

I respect Gary’s opinion greatly. So I carefully considered his argument. My personal “hit”, not really an opinion so much as a possible rationale, is that “flaw” gets more attention than say, “requirement”? This may especially be true at the senior management level? Gary, feel free to comment…

Why do I prefer the term “requirement”? Because I’m typically attempting to fit security into an existing architecture practice. Architecture takes “requirements” and turns these into architecture “views” that will fulfill the requirements. So naturally, if I want security to get implemented, I will have to deliver requirements into that process.

Further, if I name security items that the other architects may have missed “flaws”, I’m not likely to make any friends amongst my peers, the other architects working on a system. They may take umbrage at my describing their work as flawed? They bring me into an analysis in order to get a security view on the system, to uncover through my expertise security requirements that they don’t have the expertise to discover.

In other words, I have very good reasons, just as Gary does, for using the language of my peers so that my security work fits as seamlessly as possible into an existing architecture practice.

The same goes for architecture diagram preferences. Jim Delgrosso likes to proffer a diagram template for use. That’s a great idea! I could do more with this, absolutely.

But once again, I’m faced with an integration problem. If my peers prefer Data Flow Diagrams (DFD), then I’d better be able to analyze a system based upon its DFD. If architects use UML, I’d better be able to understand UML. Ultimately, if I prize integration, unless there’s no existing architecture approach with which to work, my best integration strategy is to make use of whatever is typical and normal, whatever is expected. Otherwise, if I demand that my peers use my diagram, they may see me as obstructive, not collaborative. I have (so far) focused on integration with existing practices and teams.

As I spend more time teaching (and writing a book about ARA), I’m finding that having accepted whatever I’ve been given in an effort to integrate well, I haven’t created a definitive system ARA diagram template from which to work (though I have lots of samples). That may be a miss on my part? (An architectural miss?)

Some of the different practices I encountered may be due to differing organizational roles? Gary and Jim are hired as consultants. Because they are hired experts, perhaps they can prescribe more? Or, indeed, customers expect a prescriptive, “do it this way” approach? Since I’ve only consulted sparingly and have mostly been hired to seed and mentor security architecture practices from the inside, perhaps I don’t have enough experience to understand consultative demands and expectations? I do know that I’ve had far more success through integration than through prescription. Maybe that’s just my style?

You, my readers, can, of course, choose whatever works for you, depending upon role, maturity of your organization, and such.

Thanks for the ideas, Jim (and Gary). It was truly a great experience to kick practices around with you two.

cheers,

/brook

*We should be long past the point where anyone believes that a proprietary scramble protects much. (Except, of course, I occasionally do come across exactly this!).

Security Testing Is Dead. Rest In Peace? Not!

Apparently, some Google presenters are claiming that we can do away with the testing cycles in our software development life cycles? There’s been plenty of reaction to Alberto Savoia’s Keynote in 2011. But I ran into this again this Spring (2013)! So, I’m sorry to bring this up again, but I want to try for a security-focused statement…

The initial security posture of a piece of software is dependent upon the security requirements for that particular piece of software or system. In fact, the organizational business model influences an organization’s security requirements, which in turn influence the kinds of testing that any particular software or system will need before and after release. We don’t sell and deliver software in a vacuum.

Google certainly present a compelling case for user-led bug hunting. Their bounty programme works! But there are important reasons that Google can pull off user-led testing for some of their applications when other businesses might die trying.

It’s important to understand Google’s business model and infrastructure in order to understand a business driven bounty programme.

  • Google’s secured application infrastructure
  • Build it and they might come
  • If they come, how to make money?
  • If Google can’t monetize, can they build user base?

First and foremost, Google’s web application execution environment has got to have a tremendous amount of security built into it. Any application deployed to that infrastructure inherits many security controls. (I’ll exclude Android and mobility, since these are radically different.) Google applications don’t have to implement identity systems, authorization models, user profiles, document storage protection, and the panoply of administrative and network security systems that any commercial, industrial-strength cloud must deploy and run successfully. Indeed, I’m willing to guess that each application Google deploys runs within a certain amount of sandboxed isolation such that failure of that application cannot impact the security and performance of the other applications running on the infrastructure. In past lives, this is precisely how we built large application farms: sandbox and isolation. When a vulnerable application gets exploited, other applications sharing the infrastructure cannot be touched. We also made escape from the sandbox quite difficult in order to protect the underlying infrastructure. Google would be not only remiss, but clueless to allow buggy applications to run in any less isolating environment. And, I’ve met lots of very smart Google folk! Scary smart.

Further, from what I've been told, Google has long since implemented significant protections for our Google Docs. Any application that needs to store documents inherits this document storage security. (I've been told that Google employ some derivation of Shamir's Threshold Scheme, such that unless an attacker can obtain M of N stored versions of a document, the attacker gains no data whatsoever. This also thwarts the privileged-insider attack.)
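
I don't know the details of Google's actual scheme, but a toy M-of-N threshold split (Shamir's) makes the idea concrete. This sketch is illustrative only, certainly not production cryptography: any M shares reconstruct the secret, while fewer than M reveal nothing useful.

```python
# Toy Shamir M-of-N threshold sharing, to make the idea concrete.
# Illustrative only -- not Google's scheme; use an audited library for real secrets.
import random

PRIME = 2**127 - 1  # Mersenne prime; the field must be larger than the secret

def split(secret, m, n):
    """Split `secret` into n shares; any m of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(m - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = split(secret=424242, m=3, n=5)
assert recover(shares[:3]) == 424242  # any 3 of the 5 shares reconstruct the secret
assert recover(shares[2:]) == 424242  # ...any 3, not a particular 3
```

Store a document (or its key) as N such pieces and an attacker, or a privileged insider, who can reach fewer than M of them gains no data whatsoever.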

My simple point is that Google is NOT relying entirely upon its external testers, its bug bounty programme. A fair amount of security is inherent in, and inherited by, Google's web application experiments.

And, we must understand Google’s business model. As near as I can tell from the outside, Google throws a lot of application “spaghetti” onto the Web. If an application “sticks”, that is, attracts a user base, then Google figure out how to monetize the application. If the application can’t be monetized, Google may still support the application for marketing (popularity, brand enhancement) purposes. Applications that don’t generate interest are summarily terminated.

In my opinion, Google's business model leaves a lot of wiggle room for buggy software. Many of these experiments have low expectations, perhaps no expectation at the outset? This gives Google time to clean the code as the application builds user base and penetration. If nobody is dependent upon an application, then there's not a very high security posture requirement. In other words, Google can take time to find the "right product". This is entirely the opposite for a security function that must deliver protection independent of any support (like on an endpoint that can be offline). Users expect security software to be correct on installation: the product has to be built "right", right from the start.

And, the guts of Google are most likely protected from any nasty vulnerabilities. So, user testing makes a lot of business sense and does not pose a brand risk.

Compare this with an endpoint security product from an established and trusted brand. Not only must the software actually protect the customer’s endpoint, it’s got to work when the endpoint is not connected to anything, much less the Internet (i.e., can’t “phone home”). Additionally, security software must meet a very high standard for not degrading the posture of the target system. That is, the software shouldn’t install vulnerabilities that can be abused alongside the software’s intended functionality. Security software must meet a very high standard of security quality. That’s the nature of the business model.

I would argue that security software vendors don't have a great deal of wiggle room for user discovery of vulnerabilities. Neither do vendors of medical records software or financial systems. These applications must try to be as clean as possible from the get-go. Imagine if your online banking site left its vulnerability discovery to the user community. I think it's not too much of a leap to imagine the loss in customer confidence that this approach might entail?

I’ll state the obvious: different businesses demand different security postures and have different periods of grace for security bugs. Any statement that makes a claim across these differences is likely spurious.

Google, in light of these obvious differences, may I ask your pundits to speak for your own business, rather than assuming that you may speak for all business models and trumpeting a "new world order"? Everyone, may I encourage us to pay attention to the assumptions inherent in claims? Not all software is created equally, and that's a "Good Thing" ™.

By the way, Brook Schoenfield is an active Google+ user. I don’t intend to slam Google’s products in any manner. Thank you, Google, for the great software that I use every day.

cheers

/brook

Seriously? Product Security?

Seriously? You responded to my security due diligence question with that?

Hopefully, there’s a lesson in this tale of woe about what not to do when asked about product security?

This incident has been sticking in my craw for about a year. I think it’s time to get it off my chest. If for no other reason, I want to stop thinking about this terrible customer experience. And yes, for once, I’m going to name the guilty company. I wasn’t under NDA in this situation, as far as I know?

There I was, Enterprise Security Architect for a mid-size company (which shall not be named; no gossip, ever, from this blog). Part of my job was to ensure that vendors' product security was strong enough to protect my company's security posture. There's a due diligence responsibility assigned to most infosec people. In order to fulfill this responsibility, it has become a typical practice to research software vendors' product security practices. Based upon the results, either mitigate uncovered risks to policy and industry standards or raise the risk to organizational decision makers (and there are always risks, right?).

Every software vendor goes through these due diligence investigations on a regular basis. And I do mean “every”.

I've lived on both sides of this fence, conducting the investigations and having my company's software go through many investigations. This process is now a part of the fabric of doing secure business. There should be nothing surprising about the questions. In past positions, we had a vendor questionnaire, a risk scale based upon the expected responses, and standards against which to measure the vendor. These tools help to build a repeatable process. One of these processes is documented in a SANS Institute Smart Guide released in 2011, and was published by Cisco as well.
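
For readers who haven't built one, here's a hypothetical sketch of what a repeatable risk scale can look like. The questions, weights, and thresholds are entirely made up; this is not the SANS or Cisco process. Note that an unanswered question scores as worst case, foreshadowing the "default deny" point I'll come back to below.

```python
# Hypothetical sketch of a repeatable vendor-risk scale (illustrative only;
# the questions, weights, and thresholds are made up, not the SANS/Cisco process).
# Each question has a weight; answers score 0 (no evidence), 1 (partial), 2 (meets standard).
QUESTIONNAIRE = {
    "secure_sdlc": 3,          # documented secure development lifecycle?
    "vuln_disclosure": 2,      # published vulnerability disclosure and patch policy?
    "independent_pentest": 2,  # recent third-party penetration test?
    "data_encryption": 3,      # customer data encrypted at rest and in transit?
}

def score_vendor(answers: dict) -> str:
    """Weighted score; any unanswered question is treated as worst case ("default deny")."""
    possible = sum(weight * 2 for weight in QUESTIONNAIRE.values())
    earned = sum(weight * answers.get(q, 0) for q, weight in QUESTIONNAIRE.items())
    ratio = earned / possible
    if ratio >= 0.8:
        return "low risk"
    if ratio >= 0.5:
        return "medium risk: mitigate to policy and standards"
    return "high risk: raise to organizational decision makers"

# A vendor that answers only two of the four questions is penalized for its silence.
print(score_vendor({"secure_sdlc": 2, "data_encryption": 2}))  # -> "medium risk: ..."
```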

Now, I’m going to name names. Sorry, Google, I’m going to detail just what your Docs sales team said to me. Shame on you!

When I asked about Google Docs product security, here is the answer, almost verbatim, that I received from the sales team:

“We’re Google. We can hire Vint Cerf if we want. That is enough.”

Need I point out to my brilliant readers that Dr. Vint Cerf, as far as I know, has never claimed to be an information security expert? I'm sure he knows far more about the design of TCP/IP than I do. (But I remind readers that I used to write TCP/IP stacks, so I'm not entirely clueless, either.) And, Dr. Cerf probably knows a thing or two about Internet security, since he chaired ICANN for years?

Still, I can tell you authoritatively that TCP/IP security and Domain Name Registry security are only two (fairly small) areas of an information security due diligence process that is now standard for software vendors to pass.

Besides, the sales team didn’t answer my questions. This was a classic “Appeal to Authority“. And, unfortunately, they didn’t even bother to appeal to a security authority. Sorry Vint; they took your name in vain. I suppose this sort of thing happens to someone of your fame all the time?

Behind the scenes, my counterpart application architect and I immediately killed any possible engagement with Google Docs. Google Sales Team, you lost the sale through that single response. The discussion was over; the possibility of a sale was out, the door firmly closed.

One of the interesting results from the wide adoption of The Web has been the need for open and transparent engagement. Organizations that engage honestly gain trust through their integrity, even in the face of organizational missteps and faux pas. Organizations that attempt the old-fashioned paradigm, "control all communications", lose trust, and lose it rapidly and profoundly. Commercial companies, are you paying attention? This is what democracy looks like (at least in part. But that's a different post, I think?).

Product security is difficult and demanding. There are many facets that must complement each other to deliver acceptable risk. And, even with the best intentions and execution, one will still have unexpected vulnerabilities crop up from time to time. We only have to look at the development of Microsoft's product security programme to understand how one of the best in the industry (in my humble opinion) will not catch everything. Do Microsoft bugs surface? Yes. Is the vulnerability level today anywhere near what it was 10 years ago? Not even close. Kudos, Microsoft.

It's long past the time that any company can coast on reputation. I believe that Google do some very interesting things towards the security of their customers. But their sales team need to learn a few lessons in humility and transparency. Brand offers very little demonstrable protection. Google, you have to answer those due diligence questionnaires honestly and transparently. Otherwise, the infosec person on the other side has nothing on which to base her/his risk rating. And, in the face of no information, the safest bet is to rate "high risk". Default deny rule.

It’s a big world out there and if your undiscovered vulnerabilities don’t get’cha now, they will eventually. Beware; be patient; be humble; remain inquisitive; work slowly and carefully. You can quote me on that.

cheers,

/brook