Separating truth from F.U.D. Part 2

Brian Minick

In part one of this series, I discussed how my experiences taught me that the new cyber security risks are not tied to technology problems, but to people. The attackers are people, not computer programs. The malicious programs used in attacks are just tools used by the attackers. Because we are trying to stop people, technology will never be the only solution. No technology can be created that other people will never figure out a way around. Given that, how is it that so many security vendors tell us their product will solve our problem?

In my opinion, the reason so many technologies can make claims about finding breaches, and have data to support it, is that either the definition of breach is very liberal or, quite frankly, there really are that many issues out there. When having adware installed on a PC is considered a breach, you know you are reaching. I am accustomed to talking about malware in two ways. The first is garden variety, which, although not good, is abundant and generally ends up on systems by chance or because someone went somewhere they shouldn’t have. In other words, an attacker did not directly target your system. We would clean it up when we had time, but if we didn’t get to it…oh well.

The second variety was what we cared about. These were tools intentionally placed on our systems. Advanced attackers who wanted to break into our systems used these tools. They were not random. They were not put there by coincidence. These were not used to push ads to our computers or track where we browsed. These were used to steal our intellectual property, the future of our company. These were the priority.

The next time a vendor shows you a compromise, try to discern whether that compromise was specifically targeted at you or if you just happened to win the malware lottery. If you won the lottery, congratulate the vendor on finding something you didn’t know about, but don’t lose your mind over it. If it was targeted, get help immediately.

Some of you may be saying, now hold on a minute, Brian, we tested a log management or analytics-based product and it found all kinds of scary things. How do I explain that?

I’m glad you asked. Another gimmick I liked was the vendor that put their technology in and then showed me all kinds of “suspicious activity” I didn’t know was happening. When they left, I had a list of strange things that I needed to look into and a sinking feeling that I had somehow been missing all this and absolutely needed to buy that product so I could keep seeing it. It was like taking a hit of crack cocaine. I saw things I never knew were there before, and I had to start paying in order to keep seeing them.

Unfortunately, also like drug-induced visions, most of the “suspicious activity” these systems identified turned out to be nothing. The findings may look interesting, but after you investigate, they disappear or are explained away.

We used to dedicate a number of resources to tracking down these anomalies in logs and other data sources. It was kind of like gambling. Every now and then we won, just often enough to keep us playing, but after stepping back and looking at it, we discovered that we wasted more resources sifting through false positives than anything else. We internally referred to these tools as work generation tools: we spent a lot of time investigating what they found, and often all they found were false positives. All they did was generate work. I was well resourced, but I certainly didn’t need systems that generated busy work for the team.

Here is my conclusion from these experiences. Many people who have never actually defended an enterprise are able, with the best of intentions, to point out some irrelevant risk and look smart doing it. Just because someone can find garbage risks doesn’t mean their technology is any good or that they know what they are doing…sorry.