Over the years I’ve worked with a lot of different security tools and solutions. From the network layer up through the application layer, there are many places for security solutions to solve unique and complex challenges. For every security solution, the security team weighs a few common factors. Here are several of the top considerations:
- Product Efficacy - Does the solution solve the intended problem?
- Coverage - Is this a narrow or wide solution?
- Precision - What is the true positive vs. false positive rate?
- Burden of installation and onboarding - Is this a multi-team many month installation process or a simple integration?
- Burden of use/maintenance - How complex is the tool? Do I need professional services plus a team of security engineers to babysit the tool each day?
I have plenty of opinions on each of these factors, but I want to focus on the debate between coverage and precision. Often products will offer high coverage or high precision, but not both. The question then becomes: which do you optimize for?
Tool A: Report more issues, even if many of them are false positives. We don’t want to miss any items.
Tool B: Report fewer issues total, but those that are reported are accurate. Low false positives.
On the surface, many folks will choose Tool A, which provides more coverage of issues. This may be why many early tools touted the thousands of issues they could find (even if half were false positives). It’s an understandable choice because no security team wants to miss a valid issue. However, this choice has a massive unintended consequence when it comes time to optimize and automate the security workflow.
Let’s consider tools A and B with more concrete numbers.
Tool A - Low Precision - More Issues found, but more mistakes (false positives) found too
Tool A generates a report of 100 security findings that break down as follows:
- 70 true positives (items correctly identified as bad)
- 30 false positives (items incorrectly identified as bad)
- 2 false negatives (bad items that were missed; these don’t appear in the total of 100 because the tool never identified them)
These numbers are what we’d expect of such a security tool: it found a lot of issues, a good portion are false positives, and only a few items were missed (false negatives).
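The counts above map directly onto the standard precision and recall metrics. A quick sketch in Python (the counts are the ones from the report; the formulas are the standard definitions):

```python
# Tool A's counts from the report above.
tp, fp, fn = 70, 30, 2

# Precision: what fraction of reported findings are real issues?
precision = tp / (tp + fp)
# Recall: what fraction of the real issues did the tool catch?
recall = tp / (tp + fn)

print(f"precision: {precision:.2f}")  # 0.70 -> roughly 1 in 3 reports is wrong
print(f"recall:    {recall:.2f}")     # 0.97 -> very few real issues missed
```

High recall but mediocre precision, which is exactly the coverage-first trade-off.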
Tool A within a security workflow
As the security team begins to address the 100 issues they’ll start to run into problems. The developers will receive the security reports from the security team and begin investigating to fix the issues. While the security team was excited at the coverage, the developers will begin to realize that nearly one out of every three reported issues is wrong. This will destroy the developers’ trust in the reported issues and likely cause them to reject the entire report outright.
Now what? The classic move of many security teams is to have a security analyst review all 100 findings and determine which are valid issues and which are false positives. Needless to say, you’ve destroyed the efficiency of your process and moved to a workflow gated by the speed of your limited security resources. It might work, but it will never scale.
Tool B - High Precision - Fewer Issues found, but those that are found are right
Tool B scans the same target and generates a report of 52 security findings that break down as follows:
- 50 true positives
- 2 false positives
- 22 false negatives
On one hand we have 22 false negatives instead of only 2 with Tool A, so we have missed a lot of valid items. However, because the precision is so high (only 2 false positives against 50 true positives), the output of our tool can be trusted. This is crucial in security programs, because with trust and accuracy you can automate and scale.
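Running the same calculation for Tool B shows the trade-off inverted:

```python
# Tool B's counts from the report above.
tp, fp, fn = 50, 2, 22

precision = tp / (tp + fp)  # fraction of reports that are real issues
recall = tp / (tp + fn)     # fraction of real issues that were caught

print(f"precision: {precision:.2f}")  # 0.96 -> reports can be trusted
print(f"recall:    {recall:.2f}")     # 0.69 -> a sizeable share of issues missed
```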
Tool B within a security workflow
Tool A required a dedicated security analyst to validate every finding before sending it to the developers. With Tool B you can send results from the tool directly to the developer (with proper context, of course) and not risk destroying your relationship. Developers will have faith that the results are accurate and will be thankful to have a helpful security team.
What about all of those missed items?
Here’s where we have to shift our thinking. Instead of settling for one tool that tries to do a lot of things with mediocre results, augmented by human effort, shift to a model where you use multiple accurate and reliable tools for specific focus areas. Odds are that tools of Type B will be good at particular types of issues and bad at others, so it’s important to identify Tools C and D that excel where B is weak while still holding to the high-precision approach. Combined together, you have coverage, accuracy, and a workflow that can be automated! Plus, your coveted security analysts and engineers aren’t a requirement for the process to function.
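To make the combination concrete, here is an illustrative sketch with hypothetical numbers (the Tool C figures are assumed for the example, not taken from the scenarios above): suppose a complementary high-precision Tool C catches 15 of the 22 issues Tool B missed, at the cost of 1 additional false positive, against the same 72 real issues.

```python
# Hypothetical combined pipeline: Tool B plus a complementary Tool C.
# Tool B: 50 TP, 2 FP. Tool C adds 15 new TP and 1 FP (assumed for illustration).
tp = 50 + 15          # 65 of the 72 real issues now caught
fp = 2 + 1            # still very few false alarms
fn = 72 - tp          # 7 real issues remain missed

precision = tp / (tp + fp)
recall = tp / (tp + fn)

print(f"precision: {precision:.2f}")  # 0.96 -> trust (and automation) preserved
print(f"recall:    {recall:.2f}")     # 0.90 -> coverage approaching Tool A's
```

Each additional focused tool raises coverage, while the per-tool precision keeps the overall false-positive rate low enough to keep the workflow automated.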
Don’t settle for security tooling that requires human validation in order to operationalize and integrate into workflows. You’ll find greater gains from using high-precision tools and orchestrating two or more of them together to obtain your desired coverage. This approach will enable your security team to deliver a series of security solutions for a huge net win to the company, and then move on to other important problems!
Scale at the speed of computers, not at the speed of humans!