Zero Trust Means AppSec

AT&T Cybersecurity sent me an invitation to a zero trust white paper that said:

“No user or application should be inherently trusted.

…key principles:

Principle #1: Connect users to applications and resources, not the corporate network

Principle #2: Make applications invisible to the internet, to eliminate the attack surface

Principle #3: Use a proxy architecture, not a passthrough firewall, for content inspection and security”

Anybody see what’s missing here? It’s in the 1st line: “no user should be…trusted”

Authentication and authorization are not magic!

“content inspection” has proven amazingly difficult. We can look for known signatures and typical abnormalities. Sure.

But message-encapsulated attacks, that is, attacks intended to exploit vulnerabilities in your processing code, tend to be incredibly specific. When they are, identification will lag exploitation by some period. No protection is perfect.

This implies that attacks can and will ride in through your authenticated/authorized users that your content protection will fail to identify.

The application must also be built “zero trust”: trust no input. Assume that exploitable conditions will be released despite best efforts. Back in the day, we called it “defensive programming”.

Defensive programming, or whatever hyped buzzword you want to call it today, must remain a critical line of defence in every application.
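To make "trust no input" concrete, here's a minimal sketch in Python. The field name and allow-list pattern are hypothetical, not from any particular system; the point is the shape: validate presence, type, and form against an explicit allow-list before any processing, and reject everything else.

```python
import re

# Hypothetical API field and allow-list pattern; adjust to your own input contract.
USERNAME_RE = re.compile(r"^[a-z0-9_]{3,32}$")

def handle_request(payload: dict) -> str:
    # Trust no input: check presence, type, and shape before any use.
    username = payload.get("username")
    if not isinstance(username, str) or not USERNAME_RE.fullmatch(username):
        raise ValueError("rejected: username fails allow-list validation")
    # Only validated data reaches the processing code below.
    return f"hello, {username}"

if __name__ == "__main__":
    print(handle_request({"username": "brook_s"}))  # accepted
    try:
        handle_request({"username": "<script>alert(1)</script>"})
    except ValueError as err:
        print(err)  # rejected before it can touch processing code
```

Allow-listing what is acceptable, rather than trying to filter out known-bad input, is the defensive posture that survives attacks nobody has identified yet.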

That is, software security matters. No amount of cool (expensive) defensive tech saves us from doing our best to prevent the worst in our code.

At today’s state of the art, it’s got to be layers for survivability.

/brook

 

Is API Authentication Enough?

Reading a recent article, "Five Common Cloud Security Threats and Data Breaches", posted by Andreas Dann, I came across some advice for APIs that is worth exploring.

The table stakes (basic) security measures that Mr. Dann suggests are a must. I absolutely support each of the recommendations given in the article. In addition to Mr. Dann’s recommendations, I want to explore the security needs of highly exposed inputs like internet-available APIs.

Again and again, I see recommendations like Mr. Dann's: "…all APIs must be protected by adequate means of authentication and authorization". Furthermore, I have been to countless threat modelling sessions where the developers place great reliance on authorization, often sole reliance (unfortunately, as we shall see).

For APIs serving large open source communities, offering public access, or involving relatively diverse customer bases, authentication and authorization will be insufficient.

As True Positives' AppSec Assurance Strategist, I'd be remiss if I misunderstood just what protection, if any, controls like authentication and authorization provide in each particular context.

For APIs, essential context can be found in something else Mr. Dann states: "APIs…act as the front door of the system, they are attacked and scanned continuously." One of the only certainties we have in digital security is that any IP address routable from the internet will be poked and prodded by attackers. Period.

That’s because, as Mr. Dann notes, the entire internet address space is continuously scanned for weaknesses by adversaries. Many adversaries, constantly. This has been true for a very long time.

So, we authenticate to ensure that entities who have no business accessing our systems don’t. Then, we authorize to ensure that entities can only perform those actions that they should and cannot perform actions which they mustn’t. That should do it, right?

Only if you can convince yourself that your authorized user base doesn't include adversaries. Therein lies "the rub", as it were. Only systems that can verify the trust level of authorized entities have that luxury. There actually aren't very many cases involving relatively assured user trust.

As I explain in Securing Systems, Secrets Of A Cyber Security Architect, and Building In Security At Agile Speed, many common user bases will include attackers. That means that we must go further than authentication and authorization, as Mr. Dann implies with, “…the API of each system must follow established security guidelines”. Mr. Dann does not expand on his statement, so I will attempt to provide some detail.

How do attackers gain access?

For social media and freemium APIs (and services, for that matter), accounts are given away to any legal email address. One has only to glance at the number of cats, dogs, and birds who have Facebook accounts to see how trivial it is to get an account. I've never met a dog who actually uses, or is even interested in, Facebook. Maybe there's a (brilliant?) parrot somewhere? (I somehow doubt it.)

Retail browsing is essentially the same. A shop needs customers; all parties are invited. Even if a credit card is required to open an account (which used to be the norm but isn't so much anymore), stolen credit cards are cheaply and readily available for account creation.

Businesses which require payment still don’t get a pass. The following scenario is from a real-world system for which I was responsible.

Customers paid for cloud processing. There were millions of customers, among which were numerous nation-state governments whose processing often included sensitive data. It would be wrong to assume that nation-states with spy agencies could not afford to purchase a processing account in order to poke and prod, just in case a bug should allow that state to get at the sensitive data items of one of its adversaries. I believe (strongly) that it is a mistake to assume that well-funded adversaries won't simply purchase a license or subscription if the resulting payoff is sufficient. Spy agencies work slowly and tenaciously, often waiting years for an opportunity.

Hence, we must assume that our adversaries are authenticated and authorized. That should be enough, right? Authorization should prevent misuse. But does it?

All software fails; that's Rule #1. That includes our hard-won and carefully built authorization schemes.

"[E]stablished security guidelines" should expand to a robust and rigorous Security Development Lifecycle (SDL or S-SDLC), as James Ransome and I lay out in Building In Security At Agile Speed. There's a lot of work involved in "established security", or perhaps better termed, "consensus, industry standard software security". Among software security experts, there is a tremendous amount of agreement about what "robust and rigorous" software security entails.

(Please see NIST's Secure Software Development Framework (SSDF), which is quite close to, and validates, the industry standard SDL given in our latest book. I'll note that our SDL includes operational security. I would guess that Mr. Dann was including typical administrative and operational controls.)

Most importantly, for exposed interfaces, I continue to recommend regular, or better, continual fuzz testing.

The art of software is constraining software behaviour to the desired and expected. Any unexpected behaviour is probably unintended and thus will be considered a bug. Fuzzing is an excellent way to find unintended programme behaviour. Fuzzing tools, both open source and commercial, have gained maturity, while at the same time commercial tools' prices have been decreasing. The barriers to fuzzing are finally disappearing.
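As a flavour of what fuzzing does, here is a toy mutation fuzzer, a sketch only: the "parser" and its planted bug are invented for illustration, and real fuzzers (coverage-guided tools and the like) are vastly more capable.

```python
import random

CATEGORIES = ["minor", "adult", "senior"]

def classify(data: bytes) -> str:
    # Toy target: parses "name|age" records and classifies by age bracket.
    text = data.decode("utf-8", errors="replace")
    name, age = text.split("|", 1)   # ValueError if the separator is missing
    bracket = int(age) // 30         # ValueError if age isn't numeric
    return CATEGORIES[bracket]       # planted bug: IndexError when age >= 90

def mutate(seed: bytes) -> bytes:
    # Flip a few random bytes of a known-good input.
    data = bytearray(seed)
    for _ in range(random.randint(1, 4)):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

if __name__ == "__main__":
    for i in range(100_000):
        sample = mutate(b"alice|42")
        try:
            classify(sample)
        except ValueError:
            pass  # expected rejection of malformed input
        except Exception as err:  # anything else is unintended behaviour: a bug
            print(f"iteration {i}: {err!r} on input {sample!r}")
            break
```

The fuzzer has no idea what the bug is; it simply hammers the input space until the programme does something its author never intended.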

Rest assured that you probably have authorized adversaries amongst your users. That doesn’t imply that we abandon authentication and authorization. These controls continue to provide useful functions, if only to tie bad behaviour to entities.

Along with authentication and authorization, we must monitor for anomalous and abusive behaviour to catch the inevitable failures of our protective software. We must also do all we can to remove issues before release, while using techniques like fuzzing to identify those unknown unknowns that will leak through.
 

Cheers,

/brook

MITRE D3FEND Is Your Friend

Defenders have to find the correct set of defences for each threat. Many attacks have no direct prevention, making for a many-to-many relationship. Threat:defence == M:N

Certainly, some vulnerabilities can be prevented or mitigated with a single action. But overall, other threats require an understanding of the issue and of which defences might be applied.

While prevention is the holy grail, often we must:

— Limit access (in various ways and at various levels in the stack)
— Mitigate effects
— Make exploitation more difficult (attacker cost)
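To illustrate that M:N shape in the simplest possible terms, here is a toy mapping in Python. The threat and defence names are hypothetical, not drawn from ATT&CK or D3FEND; D3FEND itself is a far richer knowledge graph.

```python
# Toy illustration of the many-to-many threat:defence relationship.
THREAT_DEFENCES: dict[str, list[str]] = {
    "credential stuffing": ["rate limiting", "MFA", "breach monitoring"],
    "SQL injection":       ["parameterized queries", "input validation", "WAF rules"],
    "session hijacking":   ["TLS everywhere", "short session lifetimes", "MFA"],
}

def defences_for(threat: str) -> list[str]:
    # One threat typically needs several defences...
    return THREAT_DEFENCES.get(threat, [])

def threats_mitigated_by(defence: str) -> list[str]:
    # ...and one defence typically applies to several threats: M:N.
    return [t for t, ds in THREAT_DEFENCES.items() if defence in ds]

print(defences_for("session hijacking"))
print(threats_mitigated_by("MFA"))
```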

MITRE's D3FEND maps defences to attacks:

“…tailor defenses against specific cyber threats…”

In my threat modelling classes, the 2 most difficult areas for participants are finding all the relevant attack scenarios and then putting together a reasonable set of defences for each credible attack scenario. Typically, newbie threat modellers will identify only a partial defence. Or, misapply a defence.

MITRE ATT&CK helps to find all the steps an attacker might take from initial contact to compromise.

MITRE D3FEND and the ontology between ATT&CK and D3FEND give defenders what we need to then build a decent defence.

https://www.csoonline.com/article/3625470/mitre-d3fend-explained-a-new-knowledge-graph-for-cybersecurity-defenders.html

Security Research Is Threat Modeling’s Constructive Criticism

(the following is partially excerpted from my next book)

Adam Shostack, author of Threat Modeling: Designing for Security, has succinctly described a process of constructive critique within engineering[1]:

“The back and forth of design and critique is not only a critical part of how an individual design gets better, but fields in which such criticism is the norm advance faster.”

The Spectre/Meltdown issues are the result of a design critique such as Shostack describes in his pithy quote given above. In fact, one of the fundamental functions that I believe security research can play is providing designers with a constructive critique.

Using Spectre and Meltdown as an example of engineering critique, let’s look at some of the headlines from before the official issue announcement by the researchers:

“Kernel-memory-leaking Intel processor design flaw forces Linux, Windows redesign”, John Leyden and Chris Williams 2 Jan 2018 at 19:29, The Register, https://www.theregister.co.uk/2018/01/02/intel_cpu_design_flaw/

“A CRITICAL INTEL FLAW BREAKS BASIC SECURITY FOR MOST COMPUTERS”, Andy Greenberg, January 3, 2018, Wired Magazine, https://www.wired.com/story/critical-intel-flaw-breaks-basic-security-for-most-computers/

There were dozens of similar headlines (many merely repeating the first few, especially, The Register’s), all declaiming a “flaw” in CPUs. I want to draw the reader’s attention to the word, “flaw”. Are these issues “flaws”? Or, are they the result of something else?

The Register headline and that first article were based upon speculation that had been occurring amongst the open source community supporting the Linux kernel, prompted by a couple of apparently odd changes that had been made to kernel code. But the issues to which these changes responded were "embargoed"; that is, the reasoning behind the changes was known only to those making the changes.

Unlike typical open source changes, whose reasoning is public and often discussed by members of the community, these kernel changes had been made opaquely, without public comment, which of course set concerned kernel community members wondering.

To observers not familiar with the reasoning behind the changes, it was clear that something was amiss, likely in relation to CPU functions; anxious observers were left guessing at the motivation for those code changes.

Within the digital security universe, there exists an important dialog between security research aimed at discovering new attack techniques and the designers of the systems and protocols upon which that research is carried out. As Adam noted so very wryly, achieving solid designs, even great ones, and most importantly, resilient designs in the face of omnipresent attack requires an interchange of constructive critique. That is how Spectre and Meltdown were discovered and presented.

Neither of these (at the time of announcement) new techniques involved exercising a flaw, that is, a design error. In other words, the headlines quoted just above were erroneous and rather misleading[2].

Speculative execution and the use of kernel mapped user memory pages by operating systems were intentional design choices that had been working as designed for more than 10 years. Taken together, at least some of the increases in CPU performance over that period can directly be tied to speculative execution design.

Furthermore, and quite importantly to this discussion, these design choices were made within the context of a rather different threat landscape. Some of today’s very active threat actors didn’t exist, or at least, were not nearly as active and certainly not as technically sophisticated circa 2005 as they are today, May 2018.

If I recall correctly (and I should be able to remember, since I was the technical lead for Cisco’s Web infrastructure and application security team at that time), in 2005, network attacks were being eclipsed by application focused attack methods, especially, web attack methods.

Today, web attacks are very "ho, hum", run of the ordinary, garden variety. But in 2005, when the first CPU speculative execution pipelines were being released, web applications were targets of choice at the cutting edge of digital security. Endpoint worms and gaining entrance through poor network ingress controls had been security's focus up until the web application attack boom (if I may title it so?). At about that time, targeting web applications was fast displacing network periphery concerns. Attackers were in the process of shifting to targets that used a standard protocol (HTTP) which was guaranteed to pass through organizational firewalls. Many of the new targets were becoming always available via the public Internet.

Since the web application attack boom, attacks and targets have continued to evolve. The threat landscape changed dramatically over the years since the initial design of speculative execution CPUs. Alongside the changes in types of attackers as well as their targets, attacker and researcher sophistication has grown, as has the attacker's toolbox. 2018 is a different security world than 2005. I see no end to this curve of technical growth in my crystal ball.

The problem is, when threat modeling, whether in 2005 or 2018, one considers the attacks of the past, those of moment, and then one must try one’s best to project from current understanding to attacks that might arise within the foreseeable future. Ten or twelve years seems an awfully long horizon of prescience, especially when considering the rate at which technical change continues to take place.

As new research begins to chew at the edges of any design, I believe that the wise and diligent practitioner revisits their existing threat models in light of developments.

If I were to fault the CPU and operating system makers whose products are subject to Spectre or Meltdown, it would be for a failure to anticipate where research might lead, as research has unfolded. CPU threat modelers could have taken into account advances in research indicating unexpected uses of cache memory.

Speculative execution leaves remnants of a speculated execution branch in cache memory when a branch has not been taken. It is those remnants that lie at the heart of this line of research.
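The following toy simulation sketches the principle only. Real attacks infer cache state by timing memory loads on actual hardware; here a Python set stands in for the cache, and the "speculation" is hand-rolled, not a real CPU pipeline.

```python
class ToyCache:
    def __init__(self) -> None:
        self.lines: set[int] = set()

    def load(self, addr: int) -> None:
        self.lines.add(addr)       # loading an address fills a cache line

    def is_cached(self, addr: int) -> bool:
        return addr in self.lines  # stand-in for a load-timing measurement

def speculate(cache: ToyCache, secret_bit: int) -> None:
    # The branch is never architecturally taken, but the CPU speculatively
    # executes it, loading an address that depends on the secret.
    predicted_taken = True
    if predicted_taken:
        cache.load(0x1000 + secret_bit * 64)
    # Speculation is then rolled back: registers are restored, but the
    # cache line remains filled. That remnant is the side channel.

cache = ToyCache()
speculate(cache, secret_bit=1)
# The attacker probes which of the two candidate lines is now "fast":
leaked = 1 if cache.is_cached(0x1000 + 64) else 0
print(f"leaked bit: {leaked}")
```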

A close examination of the unfolding research might very well have led those responsible for updating CPU threat models to consider the potential for something like Spectre and Meltdown. (Or, perhaps the threat models were updated, but other challenges prevented updates to CPU designs? CPU threat modelers, please tell us all what the real story is.)

I've found a publication chain during the previous three years that, to me, points towards the new techniques. Spectre and Meltdown are not stand-alone discoveries, but rest upon a body of CPU research that had been published regularly for several years.

As I wrote for McAfee’s Security Matters blog in January of 2018 (as a member of McAfee’s Advanced Threat Research Team),

“Meltdown and Spectre are new techniques that build upon previous work, such as “KASLR”  and other papers that discuss practical side-channel attacks. The current disclosures build upon such side-channels attacks through the innovative use of speculative execution….An earlier example of side-channel based upon memory caches was posted to Github in 2016 by one of the Spectre/Meltdown researchers, Daniel Gruss.” Daniel Gruss is one of the Spectre and Meltdown paper authors.

Reading these earlier papers, it appears to me that some of the parent techniques that would be used for the Spectre and Meltdown breakthroughs could have been read (should have been read?) by CPU security architects in order to re-evaluate the CPU’s threat model. That previously published research was most certainly available.

Of course, hindsight is always 20/20; I had the Spectre and Meltdown papers in hand as I reviewed previous research. Going the other way might be more difficult?

Spectre and Meltdown did not just spring miraculously from the head of Zeus, as it were. They are the results of a fairly long and concerted effort to discover problems with and thus, hopefully, improve the designs of modern processors. Indeed, the researchers engaged in responsible disclosure, not wishing to publish until fixes could be made available.

To complete our story, the driver that tipped the researchers into an early, zero-day disclosure (that is, disclosure without available mitigations or repairs) was the speculative (if you'll forgive the pun?) journalism, the headlines quoted above, which gained traction based upon misleading (at best) or wrong conclusions. Claiming a major design "flaw" in millions of processors certainly makes a reader-catching headline. But unfortunately, these claims were vastly off the mark, since no flaw existed in the CPU or operating system designs.

While it may be more “interesting” to imagine a multi-year conspiracy to cover up known design issues by evil CPU makers, no such conspiracy appears to have taken place.

Rather, in the spirit of responsible disclosure, the researchers were waiting for mitigations to be made available to customers; CPU manufacturers and operating system coders were heads down at work figuring out what appropriate mitigations might be, and just how to implement these with the least amount of disruption. None of these parties was publicly discussing just why changes were being made, especially to the open source Linux kernel.

Which is precisely what one would expect in order to protect millions of CPU users: embargo the technical details to foil attackers. There is actually nothing unusual about such a process; it’s all very normal and typical, and unfortunately for news media, quite banal[3].

What we see through the foregoing example about Spectre and Meltdown is precisely the sort of rich dialog that should occur between designers and critics (researchers, in this case).

Designs are built against the backdrop and within the context of their security “moment”. Our designs cannot improve without collective critique amongst the designers; such dialog internal to an organization or at least, a development team is essential. I have spoken about this process repeatedly at conferences: “It takes a village to threat model,” (to misquote a famous USA politician.)

But, there’s another level, if you will, that can reach for a greater constructive critique.

Once a design is made available to independent critics, that is, security researchers, research discoveries can, and I believe should, become part of an ongoing re-evaluation of the threat model, that is, the security of the design. In this way, we can, as an industry, reach for the constructive critique called for by Adam Shostack.

[1]In my humble experience, Adam is particularly good at expressing complex processes briefly and clearly. One of his many gifts as a technologist and leader in the security architecture space.

[2]Though salacious headlines apparently increase readership and thus advertising revenue. Hence, the misleading but emotion plucking headlines.

[3]Disclosure: I’ve been involved in numerous embargoed issues over the years.

KRACKs Against Wi-Fi Serious But Not End of the World

On October 12, researcher Mathy Vanhoef announced a set of Wi-Fi attacks that he named KRACKs, for key reinstallation attacks. These attack scenarios target the WPA2 authentication and encryption key establishment portions of the most recent set of Wi-Fi protocols. The technique is key reinstallation.

The attack can potentially allow attackers to send attacker-controlled data to your Wi-Fi connected device. In some situations, the attacker can break the Wi-Fi protocol's built-in encryption to reveal users' Wi-Fi messages to the attacker.

However, key reinstallation depends either upon working within the inherent timing of a Wi-Fi handshake, a discrete, somewhat rare (in computer terms) exchange, or upon the attacker forcing the vulnerable exchange through some method (see below for examples). Both of these scenarios take a bit of attacker effort, perhaps more effort than using any one of the existing methods for attacking users over Wi-Fi?

For this reason, while KRACK attacks are a serious issue that should be fixed as soon as possible (see below), our collective digital lives probably won't experience a tectonic shift from Wi-Fi key reinstallation attacks ("KRACKs").

Please read on for a technical analysis of the issue.

“… because messages may be lost or dropped, the Access Point (AP) will retransmit message 3 if it did not receive an appropriate response as acknowledgment. As a result, the client may receive message 3 multiple times. Each time it receives this message, it will reinstall the same encryption key, and thereby reset the incremental transmit packet number (nonce) and receive replay counter used by the encryption protocol. We show that an attacker can force these nonce resets by collecting and replaying retransmissions of message 3 of the 4-way handshake. By forcing nonce reuse in this manner, the encryption protocol can be attacked, e.g., packets can be replayed, decrypted, and/or forged.”

–Mathy Vanhoef, “Key Reinstallation Attacks: Forcing Nonce Reuse in WPA2”

The highlighted text above is Vanhoef’s, not mine.

Depending upon which key establishment exchange is attacked, as Vanhoef notes, injection of messages might be the result. But, for some exchanges, decryption might also be possible.
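To see why nonce reuse is so damaging, consider a toy stream cipher in Python. This is not the real WPA2 construction; it merely shows the property shared by all stream ciphers: if key and nonce repeat, the keystream repeats, and the XOR of two ciphertexts equals the XOR of the two plaintexts, with the key dropping out entirely.

```python
import hashlib

def keystream(key: bytes, nonce: int, length: int) -> bytes:
    # Toy keystream generator: key + nonce fully determine the output.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(
            key + nonce.to_bytes(8, "big") + counter.to_bytes(4, "big")
        ).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: int, plaintext: bytes) -> bytes:
    ks = keystream(key, nonce, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

key = b"wpa2-session-key"
m1, m2 = b"attack at dawn!!", b"meet at the pier"

# Key reinstallation resets the nonce, so two packets share one keystream:
c1 = encrypt(key, nonce=0, plaintext=m1)
c2 = encrypt(key, nonce=0, plaintext=m2)

# XOR of ciphertexts == XOR of plaintexts; knowing (or guessing) m1
# recovers m2 without ever touching the key.
x = bytes(a ^ b for a, b in zip(c1, c2))
print(bytes(a ^ b for a, b in zip(x, m1)))  # b'meet at the pier'
```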

For typical Wi-Fi traffic exchange, AES-CCMP is used between the client (user) and the access point (AP, the Wi-Fi router). Decryption using KRACKs is not thought to be possible. But decryption is not the only thing to worry about.

An attacker might craft a message that contains malware, or at least, a “dropper,” a small program that when run will attempt to install malware. Having received the dropper, even though the message may make no sense to the receiving program, the dropper is still retained in memory or temporary, perhaps even permanent, storage by the receiving computer. The attacker then has to figure out some way to run the dropper—in attack parlance, to detonate the exploit that has been set up through the Wi-Fi message forgery.

“Simplified, against AES-CCMP an adversary can replay and decrypt (but not forge) packets. This makes it possible to hijack TCP streams and inject malicious data into them. Against WPA-TKIP and GCMP the impact is catastrophic: packets can be replayed, decrypted, and forged.“

–Mathy Vanhoef, “Key Reinstallation Attacks: Forcing Nonce Reuse in WPA2”

Packet forgery is worse, as the receiver may consider that the data is legitimate, making it easier for an attacker to have the receiver accept unexpected data items, influence the course of an exchange, and establish a basis upon which to conduct follow-on, next-step exploits.

In my opinion, KRACKs are a serious problem to which the wise will wish to respond.

The essential problem with Wi-Fi is that it is free to intercept. Radio waves travel over the air, which, as we all know, is a shared medium. One need not “plug in” to capture Wi-Fi (or any radio-carried) packets. One simply must craft a receiver strong enough to “hear,” that is, receive the transmissions of interest.

But although certainly serious, are KRACK attacks the end of the known digital universe? I think not.

First, the attacker has to be present and alert to the four-way, WPA2 handshake. If this has already passed successfully, the attacker’s key reinstallation is no longer possible. The handshake must be interjected at exchange #3, which will require some precision. Or, the attacker must force the key exchange.

One could, I imagine, write an attack that would identify all WPA2 handshake exchanges within range, and inject the attacker’s message #3 to each handshake to deliver some sort of follow-on exploit. This would be a shotgun approach, and might on large, perhaps relatively open networks have potential attacker benefits.[1]

Supplicants (as Wi-Fi clients are called) do not constantly handshake. The handshake interchange takes place upon authentication and reauthentication, and in some networks upon shifting from one AP to another when the client changes location (a "handoff"). These are discrete triggers, not a constant flow of handshake message #3. Johnny or Janie attacker has to be ready and set up in anticipation of message #3. That is, right after message #3 goes by, the attacker's #3 must be sent. Once the completed handshake message #4 goes by, I believe that the attack opportunity closes.
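A toy state machine can make that window concrete. This sketch mirrors the behaviour described in Vanhoef's quote above and my reading of the window; it is heavily simplified, and real WPA2 state handling is considerably richer.

```python
class ToySupplicant:
    def __init__(self) -> None:
        self.tx_nonce = 0
        self.handshake_open = True   # window between message #3 and completion

    def on_message3(self) -> None:
        if not self.handshake_open:
            return                   # window closed: a replay has no effect
        # (Re)install the session key; per the quoted behaviour, the
        # transmit nonce resets each time. This reset is the KRACK lever.
        self.tx_nonce = 0

    def on_message4_sent(self) -> None:
        self.handshake_open = False  # handshake complete; window closes

s = ToySupplicant()
s.on_message3()     # legitimate message #3: key installed, nonce = 0
s.tx_nonce = 57     # ...some traffic is sent...
s.on_message3()     # attacker replays message #3: nonce resets
print(s.tx_nonce)   # 0 -> keystream reuse against the earlier packets
s.on_message4_sent()
s.on_message3()     # after completion, the replay is ignored (in this toy)
```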

I fear that the timing part might be tricky for the average criminal attacker. It would be much easier to employ readily available Wi-Fi sniffing tools to capture sufficient traffic to allow the attacker to crack weak Wi-Fi passwords.

There are well-understood methods for forcing Wi-Fi authentication (see DEAUTH, below). Since these methods can be used to reveal the password for the network or the user, using one of these may be a more productive attack?

There are other methods for obtaining a Wi-Fi password, as well: the password must be stored somewhere on each connecting device or the user would need to reenter the password upon every connection/reconnection. Any attack that gives attacker access to privileged information kept by any device’s operating system can be used to gain locally stored secrets like Wi-Fi passwords.

Wi-Fi password theft (whether WPA-Enterprise or WPA-Personal) through Wi-Fi deauthentication, "DEAUTH", has been around for some time; there are numerous tools available to make the attack relatively trivial and straightforward.

Nation-state attackers may have access to banks of supercomputers or massive parallel processing systems that they can apply for rapid password cracking of even complex passwords. A targeted nation-state–sponsored Wi-Fi password-cracking attack is difficult to entirely prevent with today’s technologies. There are likely other adversaries with access to sufficient resources to crack complex passwords in a reasonable amount of time, as well[2].

Obviously, as Vanhoef suggests, as soon as your operating system or Wi-Fi client vendor offers an update to this issue, patch it quickly. Problem, hopefully, solved. Please see https://www.krackattacks.com for updates from security vendors that are working with the discoverer. Question your vendor. If your vendor does not respond, pressure them; that’s my suggestion.

Other actions that organizations can couple with good Wi-Fi hygiene are to use good traffic and event analysis tools, such as modern Security Information Event Management (SIEM) software, network ingress and egress capture for anomaly analysis, and perhaps ingress/egress gateways that prevent many types of attack and forms of traffic.

“One can use VPN, SSH2 tunnels, etc., to ensure some safety from ARP poisoning attacks. Mathy Vanhoef also uses an SSL stripper that depends on badly configured web servers. Make sure your TLS implementation is correct!” – Carric Dooley, Global Lead, Foundstone Consulting Services

Organization Wi-Fi hygiene includes rapidly removing rogue access points, Wi-Fi authentication based upon the organization’s central authentication mechanism (thus don’t employ a WPA password), strong password construction rules, and certificates issued to each Wi-Fi client without which access to Wi-Fi will be denied. Add Wi-Fi event logs to SIEM collection and analysis. Find out whether organization access points generate an event on key reinstallation. This is a fairly rare event. If above-normal events are being generated, your Wi-Fi may be suffering from a KRACK.

None of the foregoing measures will directly prevent key reinstallation attacks. Reasonable security practices do make gaining access to Wi-Fi more difficult, which will prevent or slow other Wi-Fi attacks. Plus, physical security ought to pay attention to anyone sitting in the parking lot with a Wi-Fi antenna pointing at a campus building. But that was just as true before KRACKs were discovered.

For home users, use a complex password on your home Wi-Fi. Make cracking your password difficult. Remember, you usually must enter the password only once for each client device.

Maintain multiple Wi-Fi networks (probably, multiple access points), each with different passwords: one for work or other sensitive use; another for smart TVs and other potentially vulnerable devices, such as Internet of Things devices, isolating those from your network shares, printers, and other sensitive data and trusted devices. Install a third Wi-Fi for your guests. That network should be set up so that each session is isolated from the others; all your guests need is to get to the internet. Each SSID network must have a separate, highly varied password: numbers, lowercase, uppercase, symbols. Make it a long passphrase that resists dictionary attacks.
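If inventing several long, varied passwords feels burdensome, a few lines of Python will do it. A minimal sketch: the SSID names are placeholders, and the length and alphabet are choices you can adjust upward.

```python
import secrets
import string

# Generate a long, varied password per SSID, so that capturing one
# network's handshake doesn't help an attacker crack the others.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_=+"

def wifi_password(length: int = 24) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

for ssid in ("home-work", "home-iot", "home-guest"):  # placeholder SSIDs
    print(f"{ssid}: {wifi_password()}")
```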

Wi-Fi routers are a commodity; I do not suggest spending a great deal of money. How much speed do your occasional guests really need? Few Internet connections are as fast as even bottom-tier home Wi-Fi. Most of your visitors probably do not need access to your sensitive data, yes? Should your kids have their own Wi-Fi so that their indiscretions do not become yours?

Multiple Wi-Fi networks will help to slow and perhaps confound KRACK cybercriminals. "Which network should I focus on?" You increase the reconnaissance the attacker must perform to promulgate a successful attack. Maybe they will move on to a simpler home network.

If you happen to notice someone sitting in a car in your neighborhood with an open laptop, be suspicious. If they have what looks like an antenna, maybe let law enforcement know.

Basic digital security practices can perhaps help? For instance, use a VPN even over Wi-Fi. Treat Wi-Fi, particularly, publicly available Wi-Fi as an untrustable network. Be cognizant of where the Wi-Fi exists. WPA2 is vulnerable to decryption, so don’t trust airports, hotels, cafes, anywhere where the implementers and maintainers of the network are unknown.

Users of some Android and Linux clients should be aware that an additional implementation error allows an attacker immediate access to the decryption of client and access point Wi-Fi traffic. Remember, all those smart TVs, smart scales, home automation centers, and thermostats are usually nothing more than specialized versions of Linux. Hence, each may be vulnerable. On the other hand, if these are segregated onto a separate, insecure Wi-Fi, at least attackers will not have readily gained your more sensitive network and devices. While waiting for a patch for your vulnerable Android device, perhaps it makes sense to put it on the untrusted home Wi-Fi as a precaution.

You may have noticed that I did not suggest changing the home Wi-Fi passwords. Doing so will not prevent this attack. Still, the harder your WPA2 password is to crack, the more likely common cybercriminals will move on to easier pickings.

As is often the case in these serious issues, reasonable security practices may help. At the same time, failure to patch quickly leaves one vulnerable for as long as it takes to patch. I try to remember that attackers will become ever more facile with these methods and what they can do with them. They will build tools, which, if these have value, will then become ever more widespread. It is a race between attacker capability and patching. To delay deploying patches is typically a dance with increasing risk of exploitation. In the meantime, the traffic from this attack will be anomalous. Watch for it, if you can.

Many thanks to Carric Dooley from Foundstone Professional Services for his suggestions to complete and round out this analysis as well as for keeping me technically accurate.

Cheers,

/brook

[1] “When a client joins a network, it […] will install this key after receiving message 3 of the 4-way handshake. Once the key is installed, it will be used to encrypt normal data frames using an encryption protocol. However, because messages may be lost or dropped, the Access Point (AP) will retransmit message 3 if it did not receive an appropriate response as acknowledgment. […] Each time it receives this message, it will reinstall the same encryption key, and thereby reset the incremental transmit packet number (nonce) and receive replay counter used by the encryption protocol. We show that an attacker can force these nonce resets by collecting and replaying retransmissions of message 3 of the 4-way handshake. By forcing nonce reuse in this manner, the encryption protocol can be attacked, e.g., packets can be replayed, decrypted, and/or forged.”

[2] Interestingly, my LinkedIn password was among those stolen during the 2012 LinkedIn breach. A research team attempted to crack the stolen passwords. After three months, they had cracked something like 75% of them. However, my highly varied, but merely six-character, password was never cracked. While today's cracking is factors more sophisticated and rapid, a varied, non-dictionary password still slows the process down considerably.

Why no “S” in IoT?

My friend Chris Romeo, a security architect and innovator for whom I have deep respect and from whom I learn a great deal, has just posted a blog about the lack of security ("S") in the Internet of Things (IoT): "The S In IoT Stands For Security".

Of course, I agree with Chris about his three problem areas:

  1. Lack of industry knowledge
  2. Developers' lack of security knowledge and understanding
  3. Time to market pressure, particularly as it exists for startups

I don't think one can argue with these; in fact, these problems have been, and continue to be, pervasive, well beyond and before the advent of IoT. Still, I would like to add two more points to Chris' explanation of the problem space, as a contribution to what I hope will be an ongoing and lively conversation within the security industry.

First, IoT products are not just being produced at startups; big players are building IoT, too. Where we place sensors and for what purposes spans a universe of different solutions, all the way from moisture sensors in soil (for farming), through networkable household gadgets like light bulbs and thermostats, to the activity monitor on a human's wrist, to wired-up athletes, to the by-now famous mirai botnet camera.

Differing use cases involve varying types and levels (postures) of security. It is a mistake to lump these all together.

When at Intel, I personally reviewed any number of IoT projects that required significant security involvement, analysis, and implementation. I'm going to guess that most of the major activity-monitoring, server-side components should have had quite a lot of basic web and cloud security in-built? At least for some products, interaction between the device (IoT sensor) and any intermediary device, like a smart phone or personal computer, has also been given some significant security thought, and it turns out that this is an area where security has been improving as device capabilities have grown.

It's hard to make an all-out statement on the state of IoT security, though I believe that there are troublesome areas, as Chris points out, most especially for startups. Another problem area continues to be medical devices, which have tended to lag badly because they've been produced with more open architectures, as has been presented at security conferences time and again. Then, there's the Jeep Cherokee remote control hack.

Autonomous autos are going to push the envelope for security, since they're a nexus between machine learning, big (Big BIG) data, consumer convenience, and physical safety. The companies that put the cars together are generally larger companies, though many of these do not lead in the cyber security space. Naturally, the technology that goes into the cars comes from a plethora of concerns: huge, mid-sized, tiny, and startup. Security consciousness and delivery will range from clueful to completely clueless across that spectrum. A repeating problem area for security continues to be complex integrations where differing security postures are pasted together without sufficient thought about how the interactions change the overall security of the integrated system.

Given the foregoing, I believe that it’s important to qualify IoT somewhat, since context, use case, and semantics/structure matter.

Next, and perhaps more important (save the most important for last?), are the economics of development and delivery. This is highlighted best by the mirai camera: it's not just startups who feel the pressure. In the camera manufacturer's case (as I understand it), they have near zero economic incentive to deliver a secure product. And this is after the mirai attack!

What's going on here? Importantly, the cameras are still being sold. The access point with a well-known, default password is presumably still on the market, though new cameras may have been remediated. Security people may tear their hair out with an emphatic, "Don't ever do that!" But, consider the following, please.

  • Production occurs distant (China, in this case) from the consumption of the product
  • Production and use occur within different (vastly different, in this case) legal jurisdictions
  • The incentive is to produce product as cheaply as possible
  • Eventual users do not buy “security” (whatever that means to the customer) from the manufacturer
  • The eventual user buys from the manufacturer's customer, or that customer's customer (multiple entities between consumer and producer)

Where’s the incentive to even consider security in the above picture? I don’t see it.

Instead, I see, as a Korean partner of a company that I once worked for said about acquiring a TCP/IP stack, “Get Internet for free?” – meaning of course, use an open source network stack that could be downloaded from the Internet with no cost.

The economics dictate that the manufacturer download an operating system, usually some Linux variant; add a working driver, perhaps based upon an existing one; and maybe include a configuration thingy tied to a web server that perhaps came with the Linux. Cheap. Fast. Let's make some money.

We can try to regulate this up, down, sideways, but I do not believe that this will change much. Big companies will do the right thing, and charge more proportionately. Startups will try to work around the regulations. Companies operating under different jurisdictions or from places where adherence is not well policed or can be bribed away will continue to deliver default passwords to an open source operating system which delivers a host ripe for misuse (which is what mirai turns out to be).

Until we shift the economics of the situation, nothing will change. In the case of mirai, since the consumer of the botnet camera was likely not affected during that attack, she or he will not be applying pressure to the camera manufacturer, either. The manufacturer has little incentive to change; the consumer exerts little pressure via buying choices.

By the way, I don’t see consumers becoming more security knowledgeable. Concerned about digital security? Sure. Willing to change the default password on a camera or a wifi router? Many consumers don’t even go that far.

We're in a bind here; the outlook does not look rosy to me. I'm open to suggestions.

Thanks, Chris, for furthering this discussion.

cheers,

/brook

RSA 2017: 5 Opportunities!

I feel incredibly grateful that RSA Conference 2017, Digital Guru and IOActive have given me so many opportunities this year to share with you, to meet you.

I will speak about the hard-earned lessons that I've (we've) gained through years of threat modelling programmes and training on Wednesday morning, February 15th. The very same day, I will give a shortened version of the threat modelling class that I, along with a couple of Intel practitioners, have developed. And Justin Cragin, Intel Principal Engineer and cloud guru, and I will share some of our thoughts on DevOps as a vehicle for product security on Thursday morning (see below for the schedule).

IOActive have asked me to participate in a panel discussion Tuesday afternoon at 3PM at their usual RSA event, “IOAsis“, at Elan on Howard Street, across from Moscone West. I believe that you may need to register beforehand to attend IOAsis. IOActive’s programme is always chock full of interesting information and lively interchange.

Finally, once again, I’ll sign books at 2PM on Wednesday in front of the Digital Guru official RSA bookstore. In the past, it’s been located in the South Lobby of Moscone during the RSA conference. One doesn’t need a pass to get to the book store. Please stop by just to say “hi”; book purchase not required, though, of course, I’m happy to personally sign copies of my books.

My RSA talks schedule follows:

Tuesday

3:00 P.M. PST – Security Plan Development: Move to a Better Security Reality
Presented by: Brad Hegrat, IOActive
IOAsis, Elan Event Venue 
Talk Description

Wednesday

AIR-W02: Threat Modeling the Trenches to the Clouds
Marriott Marquis | Yerba Buena 8
8:00 AM – 8:45 AM

LAB3-W04: Threat Modeling Demystified
Moscone West | 2022
10:30 AM – 12:30 PM

Book Signing, RSA Bookstore, 2:00 – 2:30 PM

Thursday

ASD-R03: DevSecOps: Rapid Security for Rapid Delivery
Moscone West | 2005
9:15 AM – 10:00 AM

Finally Making JGERR Available

Originally conceived when I was at Cisco, Just Good Enough Risk Rating (JGERR) is a lightweight risk rating approach that attempts to solve some of the problems articulated by Jack Jones’ Factor Analysis Of Information Risk (FAIR). FAIR is a “real” methodology; JGERR might be said to be FAIR’s “poor cousin”.

FAIR, while relatively straightforward, requires some study. Vinay Bansal and I needed something that could be taught in a short time and applied to the sorts of risk rating moments that regularly occur when assessing a system to uncover its risk posture and to produce a threat model.

Our security architects at Cisco were (probably still are?) very busy people who have to make a series of fast risk ratings during each assessment. A busy architect might have to assess more than one system in a day. That meant that whatever approach we developed had to be fast and easily understandable.

Vinay and I were building on Catherine Nelson and Rakesh Bharania's Rapid Risk spreadsheet. But it had arithmetic problems, and it did not clearly separate risk impact from those terms that substitute for probability in a risk rating calculation. We had collected hundreds of Rapid Risk scores, and we were dissatisfied with the collected results.

Vinay and I developed a new spreadsheet and a new scoring method which actively followed FAIR’s example by separating out terms that need to be thought of (and calculated) separately. Just Good Enough Risk Rating (JGERR) was born. This was about 2008, if I recall correctly?
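To give a feel for that separation (and only the separation; JGERR's actual terms, scales, and arithmetic live in the guide and in Chapter 4 of Securing Systems, and are not reproduced here), a generic sketch might look like the following. The term names and 1-5 scales below are illustrative inventions, not JGERR's worksheet.

```python
def rate(terms: dict[str, int]) -> float:
    # Normalize a set of 1-5 ratings to a 0-1 score.
    return sum(terms.values()) / (5 * len(terms))

likelihood_terms = {   # terms that substitute for probability
    "exposure of the interface": 4,
    "attacker value of the target": 3,
    "ease of exploitation": 2,
}
impact_terms = {       # impact rated separately, FAIR-style
    "data sensitivity": 5,
    "business criticality": 4,
}

# Keep the two families of terms separate until the final combination.
risk = rate(likelihood_terms) * rate(impact_terms)
print(f"likelihood {rate(likelihood_terms):.2f} "
      f"x impact {rate(impact_terms):.2f} = risk {risk:.2f}")
```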

In 2010, when I was on the steering committee for the SANS What Works in Security Architecture Summits (they are no longer offering these), one of Alan Paller’s ideas was to write a series of short works explaining successful security treatments for common problems. The concept was to model these on the short diagnostic and treatment booklets used by medical doctors to inform each other of standard approaches and techniques.

Michele Guel, Vinay, and I wrote a few of these as the first offerings. The works were to be peer-reviewed by qualified security architects, as all of our early attempts were. The first "Smart Guide" was published to coincide with a Summit held in September of 2011. However, SANS Institute has apparently cancelled not only the Summit series but also the Smart Guide idea. None of the guides seem to have been posted to the SANS online library.

Over the years, I've presented JGERR at various conferences, and it is the basis for Chapter 4 of Securing Systems. Cisco has by now collected hundreds of JGERR scores. I spoke to a Director who oversaw that programme a year or so ago, and she said that JGERR is still in use. I know that several companies have considered and/or adapted JGERR for their use.

Still, the JGERR Smart Guide was never published. I’ve been asked for a copy many times over the years. So, I’m making JGERR available from here at brookschoenfield.com should anyone continue to have interest.

cheers,

/brook schoenfield

“We sell Hammers” – Not Security!

Several former Home Depot employees said they were not surprised the company had been hacked. They said that over the years, when they sought new software and training, managers came back with the same response: "We sell hammers."

Failure to understand the dependence that we have not just on our obvious digital devices – smart phone, laptop, tablet, fancy fitness bling on your wrist – but also on a matrix of interconnection tying all these devices and billions more together will land you in the hot seat. For about three billion out of the seven billion people on this planet, we have long since passed the point where we are isolated entities who act alone and in some measure of unconnected global anonymity. For most of us, our lives are not just dependent upon technology itself, but also on the capabilities of innumerable, faceless business entities acting on our behalf.

Consider the following, common, but trivial example.

When I swipe my credit card at the pump to purchase petrol, that transaction passes through any number of computation devices and applications operated by a chain of business entities. The following is a typical scenario (an example flow – but not the only one, of course):

  • The point of sale device[1], itself (likely supplied by a point of sale provider)
  • The networking equipment at the station[2]
  • The station’s Internet provider’s equipment (networking, security, applications – you have no idea!)
  • One or more telecom company’s networking infrastructure across the Internet backbone
  • The point of sale company or their proxy
  • More networking equipment and Internet providers
  • A credit card payment processor
  • More networking equipment and Internet providers
  • The card issuer who must validate the card and agree to pay the transaction for me

And so on…. All just to fill my tank up. It's seamless and invisible – the communications between entities usually bring up an encrypted tunnel, though the protection offered is not as solid as you may hope – invisible and seamless, except when the processing is not so invisible, like during a compromise and breach.

Every one of these invisible players has to have good enough security to protect me, and you, if you also use some sort of payment card for your petrol.
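Some toy arithmetic, with made-up assurance numbers, shows why the length of that chain matters: even when every entity is individually quite good, the chain as a whole is only as strong as the product of all its links.

```python
# Hypothetical per-hop probabilities of resisting attack; invented numbers.
hops = {
    "point of sale terminal": 0.99,
    "station network":        0.97,
    "ISP infrastructure":     0.99,
    "POS service provider":   0.98,
    "payment processor":      0.99,
    "card issuer":            0.99,
}

chain_assurance = 1.0
for entity, p in hops.items():
    chain_assurance *= p  # every hop must hold for the transaction to be safe

print(f"whole-chain assurance: {chain_assurance:.3f}")  # ~0.913
```

Every entity added to the chain is another multiplicative opportunity for failure, which is why the security posture of partners you never see still becomes your problem.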

Home Depot, and Target before them (and who knows who's next?), failed to understand that in order to sell a hammer in the Internet world, you're participating in this huge web of digital interconnection. Even more so: if you're large enough, your business network will have become an ecosystem of digital entities, many of whose security practices will affect your security posture in fairly profound ways. When two (or more) systems connect, each may affect the security posture of the other, sometimes profoundly.

And there be pirates in them waters, Matey. As I wrote in the introduction to my next book, Securing Systems: Applied security architecture and threat models:

“…as of this writing, we are engaged in a cyber arms race of extraordinary size, composition, complexity, and velocity.”

One of the biggest problems for security practitioners remains that the cyber "arms race" isn't just between a couple of nation-states. Foremost, the nation-state cyber war has to cross the same digital ocean that we use for our daily lives and digital entertainment. The shared web makes every digital citizen potential "collateral damage". But there are more players than governments.

As can occur in a ground war, virtual "warlords" have private cyber armies marauding for loot: my loot, your loot. Those phishing spam don't come from your friends, right[3]? Just trying to categorize the various entities engaged in cyber attacks could generate a couple of fine PhD theses and perhaps even provide years of follow-on papers? The number and varying loyalties of the many players who carry out cyber attacks increase the "size" of the problem, add to the "composition", and generate a great deal of "complexity". It's enough to make a well-meaning box-store retailer bury its collective head in the virtual sand. Which is precisely what happened to that hammer seller, Home Depot.

But answering the "who" doesn't complete the picture. There's the macro "how", as well[4]. The Internet seems to suffer the "tragedy of the commons".

In order to keep the Internet sufficiently interesting, with compelling content such that we want to participate, it absolutely must remain neutral in character[5]. While Internet democracy certainly appears to be quite messy, the very thing that drives the diversity of content on the Internet is its level playing field[6].

But leaving the Internet as an open field for all to enjoy means that some will take advantage of the many, simply because the "pickings" are too rich to ignore. There is just too much to steal to let those resources lie untouched. And the pirates don't! People actually do answer those "Nigerian Prince" scam emails. Really, someone does. People do buy those knock-off drugs. For the 3 billion of us who are digitally connected, it's a dangerous digital day, every single day. Watch what you click!

In short, if you're reading this on my blog site, you are perhaps an unwitting participant in that "cyber arms race of extraordinary size, composition, complexity, and velocity." And so is every business that employs modern digital capabilities, whether for payments or any other task. Failure to understand just how dependent a business is upon this matrix of digital interaction will make one a Target (pun intended). CEOs, you may want to pay closer attention? Ignoring the current realities could cost you your job, perhaps even your career[7]!

If you think that you only sell objects and not some level of digital security, I fear that you are likely to be very sadly mistaken?

cheers,

/brook

  1. My friend and former colleague, Lucy McCoy, wrote the communications code in the first generation of gas pump payment terminals. At that time, terminals communicated via modem and phone line. She was a serial communications wiz. I remember the point of sale terminal laid out in her lab area. Lucy has since passed away. She was a brilliant engineer; she gave my code the best quality testing ever.
  2. The transactions have to get from station to payment processing, right? Who runs those cable modems and routers at the station? Could be the Internet provider, or maybe not. I run my own modem/routers/switches at home to which I have full admin access.
  3. I don’t know any spammers, as far as I know? Perhaps I make an unwarranted assumption that you don’t, either?
  4. The “what” and “why” of cyber attack seem pretty clear. Beyond attackers after money, they are after some other advantage: geo-political, business, just causes (pick your favourite or most hated cause), career enhancement, what-have-you. This is all pretty well documented. The security industry seems preoccupied with the “what”, i.e., the technical details of exploits. Again, these technical details seem pretty well documented.
  5. Imagine if your most hated or feared government had control over your Internet use, even the Internet itself, and proceeded to feed you exactly what they wanted you to know and prevented you from any other content. How would you like that?
  6. The richness and depth that is an emergent and continuing quality of the Internet, to me, demonstrates the absolute genius of the originators and early framers of the protocols and design.
  7. Of course, if I had a severance of $15.9 million, maybe I wouldn’t very much mind ending my career?