All Logs: Bad!

I heard a rather distressing story last night. A major tech company’s security architects routinely hand down requirements like this one:

“Send all logs to security”

This is so bad on so many levels.

What does “all” mean?
— Every debugging instrument in the code? That would slow performance to a crawl. It might even log sensitive information and leave plenty of clues about how to exploit the code (“information disclosure”).
— Every possible item in every log that the code makes?
– debugging
– behaviour
– performance
– scale
– etc.
Astute readers will note that some of these are, at best, only marginally of security interest.

Consider what the inundation of log detail does to the poor analysts who must sift through a mountain of dross for indicators of compromise (IoCs). That’s not a job I want.

If you’re giving your developers requirements like “send all logs”, you are causing far more harm than good.

Any savvy developer is going to understand that most of what their code logs isn’t security relevant until, perhaps, post-attack analysis (it’s good practice to archive runtime logs “somewhere”).

Slamming developers with unspecific, perhaps impossible requirements burns precious influence in a useless firestorm. You’re letting your partners know that you don’t understand the problem, don’t care about them, and ultimately, do not offer value to their process.

As I’ve said many times, “Developers are very good at disinviting disruptive and valueless people to their meetings.”

Besides, such “requirements” are lazy. If you throw this kind of junk at people, you aren’t doing your job.

Our clients routinely ask us what they should monitor for security. We consider the tech, the architecture, and the platform very carefully. To start, we try to cherry-pick the most security-relevant items.

We usually offer a plan that goes from the essentials towards what should become a mature security operations practice. What to monitor from application and runtime logs typically requires a journey of discovery: start manageably and build.
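As a purely illustrative sketch of “start manageably” (the logger names, file name, and event set below are my assumptions, not a prescription), the idea is to route a small, cherry-picked set of security-relevant events to a dedicated sink, while the debugging, behaviour, performance, and scale detail stays in the ordinary application logs, archived for later analysis:

```python
import logging

# Everything the code wants to log (debugging, behaviour, performance, scale)
# goes to the ordinary application log, kept and archived for post-attack analysis.
logging.basicConfig(level=logging.DEBUG)
app_log = logging.getLogger("myapp")

# A separate, deliberately small channel for the handful of events the
# security team actually monitors to begin with.
security_log = logging.getLogger("myapp.security")
security_log.addHandler(logging.FileHandler("security-events.log"))
security_log.propagate = False   # keep the curated events out of the general noise
security_log.setLevel(logging.INFO)


def record_login_failure(username: str, source_ip: str) -> None:
    # Cherry-picked and security-relevant: repeated failures are a classic IoC.
    security_log.warning("login_failure user=%s ip=%s", username, source_ip)


def record_privilege_change(username: str, new_role: str) -> None:
    security_log.info("privilege_change user=%s new_role=%s", username, new_role)


# Ordinary debugging detail stays local; nobody ships this to the analysts.
app_log.debug("request handled in %d ms, cache entries=%d", 12, 1024)
```

The point isn’t this particular event list; it’s that the set is small, deliberate, and grows as the monitoring practice matures.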

“All” is a lazy, disinterested answer. Security people need to be better than that.

Cheers,

/brook

Zero Trust Means AppSec

AT&T Cybersecurity sent me an invitation to a zero trust white paper that said:

“No user or application should be inherently trusted.

…key principles:

Principle #1: Connect users to applications and resources, not the corporate network

Principle #2: Make applications invisible to the internet, to eliminate the attack surface

Principle #3: Use a proxy architecture, not a passthrough firewall, for content inspection and security”

Anybody see what’s missing here? It’s in the first line: “no user should be…trusted”

Authentication and authorization are not magic!

“Content inspection” has proven amazingly difficult. We can look for known signatures and typical abnormalities. Sure.

But message-encapsulated attacks, i.e., attacks intended to exploit vulnerabilities in your processing code, tend to be incredibly specific. When they are, identification will lag exploitation by some period. No protection is perfect.

This implies that attacks your content protection fails to identify can and will ride in through your authenticated, authorized users.

The application must also be built “zero trust”: trust no input. Assume that exploitable conditions will be released despite best efforts. Back in the day, we called it “defensive programming”.

Defensive programming, or whatever hyped buzzword you want to call it today, must remain a critical line of defence in every application.
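By way of illustration only (the message format, field names, and limits below are assumptions for the sketch, not anyone’s actual API), “trust no input” means validating every message before the processing code touches it, even when it arrives from an authenticated, authorized user over a zero-trust network:

```python
import json


class RejectedInput(ValueError):
    """Raised when a message fails validation: log it, don't process it."""


MAX_BODY_BYTES = 64 * 1024             # illustrative size limit
ALLOWED_ACTIONS = {"read", "update"}   # illustrative allow-list


def parse_request(raw: bytes) -> dict:
    # Defensive programming: treat the message as hostile until proven otherwise,
    # even though it rode in over an authenticated, "zero trust" connection.
    if len(raw) > MAX_BODY_BYTES:
        raise RejectedInput("body too large")
    try:
        msg = json.loads(raw)
    except (UnicodeDecodeError, json.JSONDecodeError):
        raise RejectedInput("not valid JSON")
    if not isinstance(msg, dict):
        raise RejectedInput("unexpected top-level type")

    action = msg.get("action")
    if action not in ALLOWED_ACTIONS:
        raise RejectedInput(f"unknown action {action!r}")

    item_id = msg.get("item_id")
    if not isinstance(item_id, str) or not item_id.isalnum():
        raise RejectedInput("malformed item_id")

    # Only validated, allow-listed fields cross into the processing code.
    return {"action": action, "item_id": item_id}
```

Allow-listing what the code will accept, rather than trying to block-list every attack, is the part that keeps working when the content-inspection box in front of it misses something new.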

That is, software security matters. No amount of cool (expensive) defensive tech saves us from doing our best to prevent the worst in our code.

At today’s state of the art, it’s got to be layers for survivability.

/brook