Why no “S” in IoT?

My friend Chris Romeo, a security architect and innovator for whom I have deep respect and from whom I learn a great deal, just posted a blog about the lack of security ("S") in the Internet of Things (IoT), "The S In IoT Stands For Security".

Of course, I agree with Chris about his three problem areas:

  1. Lack of industry knowledge
  2. Developers' lack of security knowledge and understanding
  3. Time to market pressure, particularly as it exists for startups

I don't think one can argue with these; in fact, these problems have been and continue to be pervasive, well beyond and long before the advent of IoT. Still, I would like to add two more points to Chris' explanation of the problem space, as a contribution to what I hope will be an ongoing and lively conversation within the security industry.

First, IoT products are not just being produced at startups; big players are building IoT, too. Where we place sensors and for what purposes spans a universe of different solutions: from moisture sensors in soil (for farming), through networkable household gadgets like light bulbs and thermostats, to the activity monitor on a human's wrist, to wired-up athletes, to the by-now-famous Mirai botnet camera.

Differing use cases involve varying types and levels (postures) of security. It is a mistake to lump these all together.

When at Intel, I personally reviewed any number of IoT projects that involved significant security analysis and implementation. I'm going to guess that most of the major activity-monitoring, server-side components have had quite a lot of basic web and cloud security built in? At least for some products, interaction between the device (IoT sensor) and any intermediary device, like a smart phone or personal computer, has also been given significant security thought – and it turns out that this is an area where security has been improving as device capabilities have grown.

It's hard to make an all-out statement on the state of IoT security, though I believe that there are troublesome areas, as Chris points out, most especially for startups. Another problem area continues to be medical devices, which have tended to lag badly because they've been produced with quite open architectures, as has been presented at security conferences time and again. Then there's the famous remote-control hack of the Jeep Cherokee.

Autonomous autos are going to push the envelope for security, since they sit at a nexus between machine learning, big (Big, BIG) data, consumer convenience, and physical safety. The companies that put the cars together are generally larger ones, though many of these do not lead in the cyber security space. Naturally, the technology that goes into the cars comes from a plethora of concerns: huge, mid-sized, tiny, and startup. Security consciousness and delivery will range from clueful to completely clueless across that spectrum. A repeating problem area for security continues to be complex integrations in which differing security postures are pasted together without sufficient thought about how the interactions change the overall security of the integrated system.

Given the foregoing, I believe that it’s important to qualify IoT somewhat, since context, use case, and semantics/structure matter.

Next, and perhaps more important (save the most important for last?), are the economics of development and delivery. This is highlighted best by the Mirai camera: it's not just startups who feel the pressure. In the camera manufacturer's case (as I understand it), they have near-zero economic incentive to deliver a secure product. And this is after the Mirai attack!

What's going on here? Importantly, the cameras are still being sold. The access point with a well-known default password is presumably still on the market, though new cameras may have been remediated. Security people may tear their hair out with an emphatic, "Don't ever do that!" But consider the following, please.

  • Production occurs distant (in China, in this case) from the consumption of the product
  • Production and use occur within different (vastly different, in this case) legal jurisdictions
  • The incentive is to produce the product as cheaply as possible
  • Eventual users do not buy "security" (whatever that means to the customer) from the manufacturer
  • The eventual user buys from the manufacturer's customer, or that customer's customer (multiple entities between consumer and producer)

Where’s the incentive to even consider security in the above picture? I don’t see it.

Instead, I see what a Korean partner of a company I once worked for said about acquiring a TCP/IP stack: "Get Internet for free?" – meaning, of course, use an open source network stack that can be downloaded from the Internet at no cost.

The economics dictate that the manufacturer download an operating system, usually some Linux variant. Add a working driver, perhaps based upon an existing one? Maybe there's a configuration utility tied to a web server that came with the Linux. Cheap. Fast. Let's make some money.
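For contrast, the missing piece is tiny. Here's a minimal sketch (my own illustration, not any vendor's actual firmware) of a first-boot check that keeps network services down until the default credential has been changed. The file path and the bare-hash scheme are hypothetical stand-ins; real firmware would consult /etc/shadow or NVRAM, and would use a salted KDF rather than a bare hash.

```python
# Minimal sketch: refuse to start network-facing services while the
# shipped default credential is still in place. Illustrative only.
import hashlib
import sys

DEFAULT_HASH = hashlib.sha256(b"admin").hexdigest()  # the shipped default

def stored_password_hash(path="/data/passwd.sha256"):
    # Hypothetical location; a real device reads /etc/shadow or NVRAM.
    with open(path) as fh:
        return fh.read().strip()

if __name__ == "__main__":
    if stored_password_hash() == DEFAULT_HASH:
        print("Default password still set; keeping telnet and web UI down.")
        sys.exit(1)  # init retries after the user rotates the credential
    print("Credential rotated; services may start.")
```

A few lines in an init script. That's roughly the cost the economics below keep rejecting.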

We can try to regulate this up, down, and sideways, but I do not believe that regulation will change much. Big companies will do the right thing, and charge proportionately more. Startups will try to work around the regulations. Companies operating under different jurisdictions, or from places where adherence is not well policed or can be bribed away, will continue to deliver default passwords on an open source operating system, which delivers a host ripe for misuse (which is exactly what Mirai turned out to be).

Until we shift the economics of the situation, nothing will change. In the case of Mirai, since the consumer of the botnet camera was likely not affected during the attack, she or he will not be applying pressure to the camera manufacturer, either. The manufacturer has little incentive to change; the consumer has little pressure to apply via buying choices.

By the way, I don’t see consumers becoming more security knowledgeable. Concerned about digital security? Sure. Willing to change the default password on a camera or a wifi router? Many consumers don’t even go that far.

We're in a bind here; the outlook does not look rosy to me. I'm open to suggestions.

Thanks, Chris, for furthering this discussion.

cheers,

/brook

RSA 2017: 5 Opportunities!

I feel incredibly grateful that RSA Conference 2017, Digital Guru, and IOActive have given me so many opportunities this year to share with you and to meet you.

I will speak about the hard-earned lessons that we've gained through years of threat modeling programmes and training on Wednesday morning, February 15th. That very same day, I will give a shortened version of the threat modeling class that I, along with a couple of Intel practitioners, have developed. And Justin Cragin, Intel Principal Engineer and cloud guru, and I will share some of our thoughts on DevOps as a vehicle for product security on Thursday morning (see below for the schedule).

IOActive have asked me to participate in a panel discussion Tuesday afternoon at 3PM at their usual RSA event, "IOAsis", at Elan on Howard Street, across from Moscone West. I believe that you may need to register beforehand to attend IOAsis. IOActive's programme is always chock-full of interesting information and lively interchange.

Finally, once again, I'll sign books at 2PM on Wednesday in front of the Digital Guru official RSA bookstore. In the past, it's been located in the South Lobby of Moscone during the RSA conference. One doesn't need a conference pass to get to the bookstore. Please stop by just to say "hi"; book purchase not required, though, of course, I'm happy to personally sign copies of my books.

My RSA talks schedule follows:

Tuesday

3:00 P.M. PST – Security Plan Development: Move to a Better Security Reality
Presented by: Brad Hegrat, IOActive
IOAsis, Elan Event Venue 
Talk Description

Wednesday

AIR-W02 – Threat Modeling the Trenches to the Clouds
Marriott Marquis | Yerba Buena 8
8:00 AM – 8:45 AM

LAB3-W04 – Threat Modeling Demystified
Moscone West | 2022
10:30 AM – 12:30 PM

Book Signing, RSA Bookstore, 2:00–2:30 PM

Thursday

ASD-R03 – DevSecOps: Rapid Security for Rapid Delivery
Moscone West | 2005
9:15 AM – 10:00 AM

Finally Making JGERR Available

Originally conceived when I was at Cisco, Just Good Enough Risk Rating (JGERR) is a lightweight risk rating approach that attempts to solve some of the problems articulated by Jack Jones' Factor Analysis of Information Risk (FAIR). FAIR is a "real" methodology; JGERR might be said to be FAIR's "poor cousin".

FAIR, while relatively straightforward, requires some study. Vinay Bansal and I needed something that could be taught in a short time and applied to the sorts of risk assessment moments that regularly occur when assessing a system to uncover its risk posture and to produce a threat model.

Our security architects at Cisco were (and probably still are?) very busy people who have to make a series of fast risk ratings during each assessment. A busy architect might have to assess more than one system in a day. That meant that whatever approach we developed had to be fast and easily understandable.

Vinay and I built on Catherine Nelson and Rakesh Bharania's Rapid Risk spreadsheet. But it had arithmetic problems, and it did not clearly separate risk impact from the terms that substitute for probability in a risk rating calculation. We had collected hundreds of Rapid Risk scores, and we were dissatisfied with the results.

Vinay and I developed a new spreadsheet and a new scoring method that followed FAIR's example by separating out the terms that need to be thought of (and calculated) separately. Just Good Enough Risk Rating (JGERR) was born. This was about 2008, if I recall correctly?
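To make the "separate terms" idea concrete, here's a purely illustrative sketch. JGERR's actual terms and weightings live in the guide itself and aren't reproduced here; all the scales and the combining arithmetic below are invented. The shape is the point: rate the probability-substituting terms and the impact term separately, then combine, rather than mushing everything into one number.

```python
# Illustrative only: the FAIR-derived idea of keeping probability-like
# terms and impact apart until the final combination.
def risk_rating(threat_capability, exposure, impact):
    """Each term is rated 1 (low) to 5 (high) by the assessor."""
    likelihood_proxy = (threat_capability + exposure) / 2  # probability side
    return likelihood_proxy * impact                       # impact side

# An internet-exposed system, capable threat agents, serious business impact:
print(risk_rating(threat_capability=4, exposure=5, impact=4))  # -> 18.0
```

Because the terms stay separate until the end, two assessors can argue about exposure without re-litigating impact, which is much of what made the scores fast to produce and compare.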

In 2010, when I was on the steering committee for the SANS What Works in Security Architecture Summits (they are no longer offering these), one of Alan Paller’s ideas was to write a series of short works explaining successful security treatments for common problems. The concept was to model these on the short diagnostic and treatment booklets used by medical doctors to inform each other of standard approaches and techniques.

Michele Guel, Vinay, and I wrote a few of these as the first offerings. The works were to be peer-reviewed by qualified security architects, as all of our early attempts were. The first "Smart Guide" was published to coincide with a Summit held in September of 2011. However, SANS Institute has apparently cancelled not only the Summit series, but also the Smart Guide idea. None of the guides seem to have been posted to the SANS online library.

Over the years, I've presented JGERR at various conferences, and it is the basis for Chapter 4 of Securing Systems. Cisco has by now collected hundreds of JGERR scores. I spoke to a Director who oversees that programme a year or so ago, and she said that JGERR is still in use. I know that several companies have considered and/or adapted JGERR for their own use.

Still, the JGERR Smart Guide was never published. I've been asked for a copy many times over the years. So, I'm making JGERR available here at brookschoenfield.com, should anyone continue to have interest.

cheers,

/brook schoenfield

“We sell Hammers” – Not Security!


Several former Home Depot employees said they were not surprised the company had been hacked. They said that over the years, when they sought new software and training, managers came back with the same response: "We sell hammers."

Failure to understand our dependence, not just on our obvious digital devices – smart phone, laptop, tablet, fancy fitness bling on your wrist – but also on the matrix of interconnection tying all these devices and billions more together, will land you in the hot seat. For about three billion of the seven billion people on this planet, we have long since passed the point where we are isolated entities acting alone in some measure of unconnected, global anonymity. For most of us, our lives are not just dependent upon technology itself, but also upon the capabilities of innumerable, faceless business entities acting on our behalf.

Consider the following common, but trivial, example.

When I swipe my credit card at the pump to purchase petrol, that transaction passes through any number of computation devices and applications operated by a chain of business entities. The following is a typical scenario (an example flow – but not the only one, of course):

  • The point of sale device[1] itself (likely supplied by a point of sale provider)
  • The networking equipment at the station[2]
  • The station's Internet provider's equipment (networking, security, applications – you have no idea!)
  • One or more telecom companies' networking infrastructure across the Internet backbone
  • The point of sale company or their proxy
  • More networking equipment and Internet providers
  • A credit card payment processor
  • More networking equipment and Internet providers
  • The card issuer, who must validate the card and agree to pay the transaction for me

And so on… all just to fill my tank up. It's seamless and invisible – the communications between entities usually bring up an encrypted tunnel, though the protection offered is not as solid as you may hope. Invisible and seamless, that is, except when the processing becomes very visible indeed, during a compromise and breach.
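For the curious, each adjacent pair of entities in that chain negotiates a tunnel along something like the following lines (a sketch using Python's standard library; the hostname is a made-up stand-in). Note what the tunnel does and doesn't buy: the wire between two parties is protected, but each endpoint still processes the transaction in the clear, which is exactly where the breaches happen.

```python
# One encrypted hop, sketched with the standard library. Every pair of
# entities in the chain above does something like this between themselves.
import socket
import ssl

HOST = "payments.example-processor.test"  # hypothetical processor endpoint

context = ssl.create_default_context()  # verifies the peer against system CAs

with socket.create_connection((HOST, 443), timeout=5) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls:
        print(tls.version())                  # e.g. 'TLSv1.3'
        print(tls.getpeercert()["subject"])   # who we're actually talking to
```

Hop-by-hop encryption is table stakes, not a guarantee; it says nothing about the security of the systems at either end of each hop.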

Every one of these invisible players has to have good enough security to protect me, and you, if you also use some sort of payment card for your petrol.

Home Depot, and Target before them (and who knows who's next?), failed to understand that in order to sell a hammer in the Internet world, you're participating in this huge web of digital interconnection. Even more so: if you're large enough, your business network will have become an ecosystem of digital entities, many of whose security practices will affect your security posture in fairly profound ways. When two (or more) systems connect, each may affect the security posture of the other.

And there be pirates in them waters, Matey. As I wrote in the introduction to my next book, Securing Systems: Applied Security Architecture and Threat Models:

“…as of this writing, we are engaged in a cyber arms race of extraordinary size, composition, complexity, and velocity.”

One of the biggest problems for security practitioners remains that the cyber "arms race" isn't just between a couple of nation-states. Foremost, the nation-state cyber war has to cross the same digital ocean that we use for our daily lives and digital entertainment. The shared web makes every digital citizen potential "collateral damage". But there are more players than governments.

As can occur in a ground war, virtual "warlords" have private cyber armies marauding for loot – my loot, your loot. Those phishing spams don't come from your friends, right[3]? Just trying to categorize the various entities engaged in cyber attacks could generate a couple of fine PhD theses, and perhaps even provide years of follow-on papers. The number and varying loyalties of the many players who carry out cyber attacks increase the "size" of the problem, add to its "composition", and generate a great deal of "complexity". It's enough to make a well-meaning box-store retailer bury its collective head in the virtual sand. Which is precisely what happened to that hammer seller, Home Depot.

But answering the "who" doesn't complete the picture. There's the macro "how", as well[4]. The Internet seems to suffer the "tragedy of the commons".

In order to keep the Internet sufficiently interesting, with compelling content such that we want to participate, it absolutely must remain neutral in character[5]. While Internet democracy certainly appears to be quite messy, the very thing that drives the diversity of content on the Internet is its level playing field[6].

But leaving the Internet as an open field for all to enjoy means that some will take advantage of the many, simply because the "pickings" are too rich to ignore. There is just too much to steal to let those resources lie untouched. And the pirates don't! People actually do answer those "Nigerian Prince" scam emails. Really, someone does. People do buy those knock-off drugs. For the three billion of us who are digitally connected, it's a dangerous digital day, every single day. Watch what you click!

In short, if you're reading this on my blog site, you are perhaps an unwitting participant in that "cyber arms race of extraordinary size, composition, complexity, and velocity." And so is every business that employs modern digital capabilities, whether for payments or any other task. Failure to understand just how dependent a business is upon this matrix of digital interaction will make one a Target (pun intended). CEOs, you may want to pay closer attention? Ignoring the current realities could cost you your job, perhaps even your career[7]!

If you think that you only sell objects, and not some level of digital security, I fear that you are likely to be very sadly mistaken.

cheers,

/brook

  1. My friend and former colleague, Lucy McCoy, wrote the communications code in the first generation of gas pump payment terminals. At that time, terminals communicated via modem and phone line. She was a serial communications wiz. I remember the point of sale terminal laid out in her lab area. Lucy has since passed away. She was a brilliant engineer; she gave my code the best quality testing ever.
  2. The transactions have to get from station to payment processing, right? Who runs those cable modems and routers at the station? Could be the Internet provider, or maybe not. I run my own modem/routers/switches at home to which I have full admin access.
  3. I don’t know any spammers, as far as I know? Perhaps I make an unwarranted assumption that you don’t, either?
  4. The “what” and “why” of cyber attack seem pretty clear. Beyond attackers after money, they are after some other advantage: geo-political, business, just causes (pick your favourite or most hated cause), career enhancement, what-have-you. This is all pretty well documented. The security industry seems preoccupied with the “what”, i.e., the technical details of exploits. Again, these technical details seem pretty well documented.
  5. Imagine if your most hated or feared government had control over your Internet use, even the Internet itself, and proceeded to feed you exactly what they wanted you to know and prevented you from any other content. How would you like that?
  6. The richness and depth that is an emergent and continuing quality of the Internet, to me, demonstrates the absolute genius of the originators and early framers of the protocols and design.
  7. Of course, if I had a severance of $15.9 million, maybe I wouldn’t very much mind ending my career?

Book Signing At RSA

I’m signing books along with James Ransome and Anmol Misra at 11:30AM at the bookstore at RSA in San Francisco, Digital Guru. If you’re at the conference, please do come by and say “hello”.

James and Anmol were kind enough to give me a few of their book's pages, Core Software Security, so that I could ramble on about Secure Development Lifecycles (SDL).

cheers,

/brook

The “Real World” of Developer-centric Security

My friend and colleague, Dr. James Ransome, invited me late last winter to write a chapter for his 10th book on computer security, Core Software Security (with co-author Anmol Misra), published by CRC Press. My chapter is "The SDL in the Real World" (SDL = "Secure Development Lifecycle"). The book was released December 9, 2013. You can get copies from the usual sources (no adverts here, as always).

It was an exciting process. James and I spent hours white boarding possible SDL approaches, which was very fun, indeed*. We collectively challenged ourselves to uncover current SDL assumptions, poke at the validity of these, and find better approaches, if possible.

Many of you already know that I’ve been working towards a different approach to the very difficult, multi-dimensional and multi-variate problem of designing and implementing secure software for a rather long time. Some of my earlier work has been presented to the industry on a regular basis.

Specifically, during the period of 2007-9, I talked about a new (then) approach to security verification that would be easy for developers to integrate into their workflow and which wouldn’t require a deep understanding of security vulnerabilities nor of security testing. At the time, this approach was a radical departure.

The proving ground for these ideas was my program at Cisco: Baseline Application Vulnerability Assessment, or BAVA, for short. ("My" here does not exclude the many people who contributed greatly to BAVA's structure and success; but it was more or less my idea, and I was the technical leader for the program.)

But, is ease and simplicity all that’s necessary? By now, many vendors have jumped on the bandwagon; BAVA’s tenets are hardly even newsworthy at this point**. Still, the dream has not been realized, as far as I can see. Vulnerability scanning still suffers from a slew of impediments from a developer’s view:

  • Results count vulnerabilities, not software errors
  • Results are noisy; often, many variations of a single error are reported as unique findings (see the sketch after this list)
  • Tools are hard to set up
  • Tools require considerable tool knowledge and experience – too much for developers' highly over-subscribed days
  • Qualification of results requires more in-depth security knowledge than even senior developers generally have (much less an average developer)
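To illustrate just one of these impediments, the noise problem: collapsing findings that share a root cause before developers ever see them goes a long way. A toy sketch follows; the finding fields are hypothetical, since real scanner output formats vary widely.

```python
# Toy noise reduction: collapse raw findings that share a root cause
# into one developer-facing item. Field names are illustrative only.
from collections import defaultdict

def deduplicate(findings):
    """Group findings by (file, CWE), keeping one representative per group."""
    groups = defaultdict(list)
    for finding in findings:
        groups[(finding["file"], finding["cwe"])].append(finding)
    return [
        {**items[0], "occurrences": len(items)}  # one item per root cause
        for items in groups.values()
    ]

raw = [
    {"file": "auth.c", "cwe": "CWE-89", "line": 40},
    {"file": "auth.c", "cwe": "CWE-89", "line": 77},  # same flaw, another line
    {"file": "ui.js",  "cwe": "CWE-79", "line": 12},
]
print(deduplicate(raw))  # three raw findings become two actionable items
```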

And that’s just the tool side of the problem. What about architecture and design? What about building security in during iterative, fast paced, and fast changing agile development practices? How about continuous integration?

As I was writing my chapter, something crystallized. I named it "developer-centric security", which then managed to get wrapped into the press release and marketing materials of the book. Think about this: how does the security picture change if we re-shape what we do by taking the developer's perspective rather than a security person's?

Developer-centric software security then reduces to a single, pointed question:

What am I doing to enable developers to innovate securely while they are designing and writing software?

Software development remains a creative and innovative activity. But so often, we on the security side try to put the brakes on innovation in favour of security. Policies, standards, etc., all try to set out the rules by which software should be produced. From an innovator's view, at least some of the time, developers are iterating through solutions to a new problem while searching for the best way to solve it. How might security folk enable that process? That's the question I started to ask myself.

Enabling creativity and thinking like a developer, while integrating into her or his workflow, is the essence of developer-centric security. Trust and verify. (I think we have to get rid of that old "but".)

Like all published works, the book represents a point in time. My thinking has accelerated since the chapter was completed. Write me if you're intrigued, or if you'd like to hear more about developer-centric security.

Have a great day wherever you happen to be on this spinning orb we call home, Earth.

cheers,

/brook

*Several of the intermediate diagrams boggle with complexity and busy-ness. Like much software development, we had to work iteratively. Intermediate ideas grew and shifted as we worked. A creative process?

**At the time, after hearing BAVA's requirements, one vendor told me, "I'll call you back next year." Six months later, on a vendor webcast, that same vendor was extolling the very tenets that I'd given them earlier. Sea change?

Of Flaws, Requirements, And Templates

On November 13th, 2013, I spoke at the annual BSIMM community get-together in Reston, Virginia, USA. I had been asked to co-present on Architecture Risk Assessment (ARA) and threat modeling alongside Jim Delgrosso, long-time Cigital threat modeler. We gave two talks:

  1. Introduction to Architecture Risk Assessment And Threat Modeling
  2. Scaling Architecture Risk Assessment And Threat Modeling To An Enterprise

Thanks much, BSIMM folk, for letting me share the tidbits that I've managed to pick up along my way. I hope that we delivered on some of the promises of our talks? One of my personal values from participating in the conference was interacting with other experienced practitioners.

Make no mistake! ARA-threat modeling is and will remain an art. There is science involved, of course. But people who can do this well learn through experience and the inevitable mistakes and misses. It is a truism that it takes a fair amount of background knowledge (not easily gained):

  • Threat agents
  • Attack methods and goals used by each threat agent type
  • Local systems and infrastructure into which a system under analysis will go
  • Some form of fairly sophisticated risk rating methodology.

These specific knowledge sets sit on top of significant design ability and experience. The assessor has to be able to understand a system architecture and then to decompose it to an appropriate level.

The knowledge domains listed above are pre-requisite to an actual assessment. That is, there are usually years of experience in system and/or software design, in computer architectures, in attack patterns, threat agents, security controls, etc., that the assessor brings to an ARA. One way of thinking about it is that ARA/threat modeling is applied computer security – computer security applied to the system under analysis.

Because ARA is a learned skill with many local variations, I find it fascinating to match my practice against practices that have been developed independently of mine. What is the same? Where do we have consensus? And, importantly, what's different? What have I missed? How can I improve my practice based upon others'? These co-presentations and conversations are priceless. Interestingly, Jim and I agreed about the basic process we employ (sketched in code just after the list):

  • Understand the system architecture, especially the logical/functional representation
  • Uncover the intended risk posture of the system and the risk tolerance of the organization
  • Understand the system's communication flows, down to the protocol interaction level
  • Get the data sensitivity of each flow and each store; the highest sensitivity rules any resulting security needs
  • Enumerate attack surfaces
  • Apply relevant, active threat agents and their methodologies to the attack surfaces
  • Filter out protected, insignificant, or unlikely attack vectors and scenarios
  • Output the security requirements that the architecture and design are missing in order to fulfill the intended security posture
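For the programmatically inclined, here's a toy sketch of the tail end of that process: applying agents to surfaces, filtering, and emitting requirements. It is my own illustration, not a shared tool; the names and the filter logic are hypothetical.

```python
# Toy data shape for the last few steps of the process above.
from dataclasses import dataclass

@dataclass
class AttackVector:
    surface: str      # e.g. an exposed interface
    agent: str        # the threat agent type
    method: str       # the agent's attack method
    mitigated: bool   # already covered by an existing control?
    credible: bool    # active agent and plausible scenario?

def security_requirements(vectors):
    """Unmitigated, credible vectors each drive a security requirement."""
    return [
        f"Mitigate {v.method} by {v.agent} against {v.surface}"
        for v in vectors
        if v.credible and not v.mitigated
    ]

vectors = [
    AttackVector("admin REST interface", "cybercriminal",
                 "credential stuffing", mitigated=False, credible=True),
    AttackVector("internal metrics port", "curious insider",
                 "casual browsing", mitigated=True, credible=True),
]
print(security_requirements(vectors))  # only the unmitigated vector remains
```

Of course, the hard part is everything the dataclass hides: knowing which agents are active, which scenarios are credible, and which controls actually mitigate.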

There doesn't seem to be much disagreement on what we do. That's good. It means that this practice is coalescing. The places where we disagree, or approach the problem differently, are, I think, pretty interesting.

Gary McGraw calls security architecture misses "flaws", as opposed to software "bugs". Bugs can be described as errors in implementation, usually coding. Flaws, then, are those security items which didn't get considered during architecture and design, or which were not designed correctly (like poorly implemented authentication, authorization, or encryption). I would agree that implementing some sort of no-entropy scramble and then believing that you've built "encryption" is indeed both a design flaw and an implementation error*.

I respect Gary's opinion greatly, so I carefully considered his argument. My personal "hit" – not really an opinion so much as a possible rationale – is that "flaw" gets more attention than, say, "requirement"? This may especially be true at the senior management level? Gary, feel free to comment…

Why do I prefer the term “requirement”? Because I’m typically attempting to fit security into an existing architecture practice. Architecture takes “requirements” and turns these into architecture “views” that will fulfill the requirements. So naturally, if I want security to get implemented, I will have to deliver requirements into that process.

Further, if I name security items that the other architects may have missed as "flaws", I'm not likely to make any friends amongst my peers, the other architects working on a system. They may take umbrage at my describing their work as flawed. They bring me into an analysis in order to get a security view on the system – to uncover, through my expertise, security requirements that they don't have the expertise to discover.

In other words, I have very good reasons, just as Gary does, for using the language of my peers so that my security work fits as seamlessly as possible into an existing architecture practice.

The same goes for architecture diagram preferences. Jim Delgrosso likes to proffer a diagram template for use. That’s a great idea! I could do more with this, absolutely.

But once again, I’m faced with an integration problem. If my peers prefer Data Flow Diagrams (DFD), then I’d better be able to analyze a system based upon its DFD. If architects use UML, I’d better be able to understand UML. Ultimately, if I prize integration, unless there’s no existing architecture approach with which to work, my best integration strategy is to make use of whatever is typical and normal, whatever is expected. Otherwise, if I demand that my peers use my diagram, they may see me as obstructive, not collaborative. I have (so far) focused on integration with existing practices and teams.

As I spend more time teaching (and writing a book about ARA), I'm finding that, having accepted whatever I've been given in an effort to integrate well, I haven't created a definitive system ARA diagram template from which to work (though I have lots of samples). That may be a miss on my part? (An architectural miss?)

Some of the different practices I encountered may be due to differing organizational roles? Gary and Jim are hired as consultants. Because they are hired experts, perhaps they can prescribe more? Or, indeed, customers expect a prescriptive, “do it this way” approach? Since I’ve only consulted sparingly and have mostly been hired to seed and mentor security architecture practices from the inside, perhaps I don’t have enough experience to understand consultative demands and expectations? I do know that I’ve had far more success through integration than through prescription. Maybe that’s just my style?

You, my readers, can, of course, choose whatever works for you, depending upon role, maturity of your organization, and such.

Thanks for the ideas, Jim (and Gary). It was truly a great experience to kick practices around with you two.

cheers,

/brook

*We should be long past the point where anyone believes that a proprietary scramble protects much. (Except, of course, I occasionally do come across exactly this!).

Security Testing Is Dead. Rest In Peace? Not!

Apparently, some Google presenters are claiming that we can do away with the testing cycles in our software development life cycles? There's been plenty of reaction to Alberto Savoia's 2011 keynote, but I ran into this again this spring (2013)! So, I'm sorry to bring this up again, but I want to try for a security-focused statement…

The initial security posture of a piece of software is dependent upon the security requirements for that particular piece of software or system. In fact, the organizational business model influences an organization’s security requirements, which in turn influence the kinds of testing that any particular software or system will need before and after release. We don’t sell and deliver software in a vacuum.

Google certainly present a compelling case for user-led bug hunting. Their bounty programme works! But there are important reasons that Google can pull off user-led testing for some of their applications when other businesses might die trying.

It’s important to understand Google’s business model and infrastructure in order to understand a business driven bounty programme.

  • Google’s secured application infrastructure
  • Build it and they might come
  • If they come, how to make money?
  • If Google can’t monetize, can they build user base?

First and foremost, Google's web application execution environment has got to have a tremendous amount of security built into it. Any application deployed to that infrastructure inherits many security controls. (I'll exclude Android and mobility, since these are radically different.) Google applications don't have to implement identity systems, authorization models, user profiles, document storage protection, and the panoply of administrative and network security systems that any commercial, industrial-strength cloud must deploy and run successfully. Indeed, I'm willing to guess that each application Google deploys runs within a certain amount of sandboxed isolation, such that failure of that application cannot impact the security and performance of the other applications running on the infrastructure. In past lives, this is precisely how we built large application farms: sandbox and isolation. When a vulnerable application gets exploited, other applications sharing the infrastructure cannot be touched. We also made escape from the sandbox quite difficult, in order to protect the underlying infrastructure. Google would be not only remiss, but clueless, to allow buggy applications to run in any less isolating an environment. And I've met lots of very smart Google folk! Scary smart.

Further, from what I've been told, Google has long since implemented significant protections for our Google Docs. Any application that needs to store documents inherits this document storage security. (I've been told that Google employ some derivation of Shamir's threshold scheme, such that unless an attacker can obtain M of N stored versions of a document, the attacker gains no data whatsoever. This also thwarts the privileged-insider attack.)
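For readers unfamiliar with threshold schemes, here is a toy sketch of the textbook construction (not Google's derivation, which I haven't seen): split a secret into N shares such that any M of them reconstruct it, while fewer than M reveal nothing at all.

```python
# Toy Shamir threshold scheme: any M of N shares reconstruct the secret;
# fewer than M reveal nothing. Textbook version, NOT production crypto
# (production code would use the secrets module, not random).
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a small secret

def split(secret, m, n):
    """Split `secret` into n shares; any m of them reconstruct it."""
    # Random polynomial of degree m-1 whose constant term is the secret
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(m - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x=0 recovers the constant term (the secret)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse of den
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

if __name__ == "__main__":
    shares = split(secret=42, m=3, n=5)
    assert reconstruct(shares[:3]) == 42   # any 3 of the 5 shares suffice
    assert reconstruct(shares[1:4]) == 42
```

The privileged-insider point follows directly: an insider who can reach only one or two stores holds shares, not data.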

My simple point is that Google is NOT relying entirely upon its external testers, its bug bounty programme. A fair amount of security is inherent in, and inherited by, Google's web application experiments.

And we must understand Google's business model. As near as I can tell from the outside, Google throws a lot of application "spaghetti" onto the Web. If an application "sticks", that is, attracts a user base, then Google figures out how to monetize the application. If the application can't be monetized, Google may still support it for marketing (popularity, brand enhancement) purposes. Applications that don't generate interest are summarily terminated.

In my opinion, Google's business model leaves a lot of wiggle room for buggy software. Many of these experiments carry low expectations, perhaps no expectations at the outset? This gives Google time to clean up the code as the application builds user base and penetration. If nobody is dependent upon an application, then there isn't a very high security posture requirement. In other words, Google can take time to find the "right product". This is entirely opposite for security software that must deliver protection independent of any support (like on an endpoint that can be offline). Users expect security software to be correct on installation: the product has to be built "right", right from the start.

And, the guts of Google are most likely protected from any nasty vulnerabilities. So, user testing makes a lot of business sense and does not pose a brand risk.

Compare this with an endpoint security product from an established and trusted brand. Not only must the software actually protect the customer's endpoint, it's got to work when the endpoint is not connected to anything, much less the Internet (i.e., it can't "phone home"). Additionally, security software must not degrade the posture of the target system; that is, the software shouldn't install vulnerabilities that can be abused alongside its intended functionality. Security software must meet a very high standard of security quality. That's the nature of the business model.

I would argue that security software vendors don't have a great deal of wiggle room for user discovery of vulnerabilities. Neither does medical records software, nor financial software. These applications must be as clean as possible from the get-go. Imagine if your online banking site left its vulnerability discovery to the user community. I think it's not too much of a leap to imagine the loss in customer confidence that this approach would entail?

I’ll state the obvious: different businesses demand different security postures and have different periods of grace for security bugs. Any statement that makes a claim across these differences is likely spurious.

Google, in light of these obvious differences, may I ask your pundits to speak for your own business, rather than assuming that you may speak for all business models or trumpeting a "new world order"? Everyone, may I encourage us to pay attention to the assumptions inherent in claims? Not all software is created equally, and that's a "Good Thing"™.

By the way, Brook Schoenfield is an active Google+ user. I don’t intend to slam Google’s products in any manner. Thank you, Google, for the great software that I use every day.

cheers

/brook


Agile & Security, Enemies For Life?

Are Agile software development and security permanent enemies?

I think not. In fact, I have participated in SCRUM development where building security into the results was more effective than with classic waterfall. I'm a Secure Agile believer!

Dwayne Melancon, CTO of Tripwire, opined for BrightTALK on successful Agile for security.

Dwayne, I wish it were that simple! "Engage security early." How often have I said that? "Prioritize vulnerabilities based on business impact." Didn't I say that at RSA? I hope I did?

Yes, these are important points. But they’re hardly news to security practitioners in the trenches building programmes.

Producing secure software when using an Agile method is not quite as simple as "architect and design early". Yes, that's important, of course. We've been saying "build security in, don't bolt it on" for years now. That has not changed.

I believe that the key is to integrate security into the Agile process from architecture through testing. Recognize that we have to trust, work with, and make use of the Agile process rather than fighting it. After all, one of the key values that makes SCRUM so valuable is trust. Trust and collaboration are key to Agile success. I argue that trust and collaboration are also the keys to Agile that produces secure software.

In SCRUM, what is going to be built is considered during user story creation. That's the "early" part. A close relationship with the Product Owner is critical for getting security user stories onto the backlog, and critical during user story prioritization. And the security person should be familiar with any existing architecture during user story creation. That way, security can be considered in context, not from an ivory tower.

I've seen security develop a basic set of user stories that fit a particular class or type of project. These can be templated and simply added in at the beginning, perhaps tweaked for local variation (a sketch of what I mean follows).
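Something like the following, though your stories will of course differ; the two stories and the product name below are merely examples of the kind of template I mean, not a canonical list.

```python
# Example templated security stories, instantiated at backlog-seeding time.
SECURITY_STORY_TEMPLATES = [
    "As a user of {product}, I want my session invalidated on logout, "
    "so that a shared machine does not expose my account.",
    "As an operator of {product}, I want authentication events logged, "
    "so that incidents can be investigated.",
]

def seed_backlog(product):
    """Instantiate the templated security stories for one project."""
    return [story.format(product=product) for story in SECURITY_STORY_TEMPLATES]

for story in seed_backlog("AcmePay"):  # "AcmePay" is a made-up product
    print(story)
```

Because the stories arrive in the Product Owner's own format, they compete for priority on the backlog like any other story, which is exactly where security belongs.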

At the beginning of each Sprint, stories are chosen for development from the backlog. During this process, considerable design takes place. Make security an integral part of that process, either through direct participation or by proxy.

Really, in my experience, all the key voices should be a part of this selection and refinement process. Quality people can offer why a particular approach is easier to test, architects can offer whether a story has been accounted for in the architecture, and so on. Security is one of the important voices, but certainly not the only one.

Security experts need to make themselves available throughout a Sprint to answer questions about implementation details – the correct way to securely build each user story under development. Partnership. Help SCRUM members be security "eyes and ears" on the ground.

Finally, since writing secure code is very much a discipline and practice, appropriate testing and vulnerability assurance steps need to be a part of every Sprint. I think that these need to be part of the Definition of Done.

Everyone is involved in security in Agile. Security folk can't toss security "over the wall" and expect secure results. We have to get our hands dirty, get some implementation grease under the proverbial fingernails, in order to earn the trust of the SCRUM teams.

Trust and collaboration are success factors, but these are also security factors. If the entire team are considering security throughout the process, they can at the very least call for help when there is any question. In my experience, over time, many teams will develop their team security expertise, allowing them to be far more self-sufficient – which is part of the essence of Agile.

We security folk are going to have to give up control and instead become collaborators, partners. I don't always get security built exactly the way that I might think about it, but it gets built in. And I learn lots of interesting, creative, innovative approaches from my colleagues along the way.

cheers

/brook