At the risk of revealing my age, I can say that I have been in the security industry for almost fifteen years. I remember when Amazon was only an online bookstore. I have seen the rise of the cloud (we called it virtualization back then), server consolidation, the shift from fat clients to three-tiered architectures, and more. Over the years I must have talked to thousands of security pros at hundreds of organizations, and while the tactical challenges vary, at a high level there are a few simple principles they look for when selecting a vendor partner to help them defend and protect their key software infrastructure.
Operates in real-world diverse environments
Infrastructures are like cities. Some boroughs are built to the best architectural plans, with symmetry, consistent street and avenue naming, and a good balance of industrial and residential properties. Other areas are left over from villages and forts that were absorbed into the city. Still others were converted over time from meatpacking plants into sleek residences and artsy lofts. Similarly, in a medium-size IT environment one can find legacy Windows servers, three-tier client-server architectures, PHP and Ruby apps, containers, microservices, serverless functions, public cloud, and everything in between.
Our job as security industry pros is not to ask why; it is to protect what is there. The best tools work across a variety of architectures and deployment models.
Helps make security easy for developers and operations
There are always fewer security professionals than developers. Typical ratios today are something like 1 security engineer to 10 DevOps engineers to 100 developers. This leads to security pros acting more like coaches than quarterbacks: much of the day-to-day security responsibility is shifting to the developers themselves (secure coding practices) and to the DevOps team (security operations as part of the CI/CD and deployment lifecycle). According to a recent survey quoted by SD Times(*), as many as 68% of developers already use some sort of tool to help them manage security. But are those the right tools, and who is in charge of configuring and optimizing them for the threat surface of the application under development? The right security tool should be like a game-plan template: useful for the coach to explain their thinking, but also something the team can use during training sessions and execute during the actual home game. This way, the security team's expertise is amplified, and they can keep up with the pace of agile development.
Adaptable and self-learning without manual configuration
Nimble business practices, distributed teams, and diverse infrastructure mean that very few people, if any, have a complete picture of the current operating environment. Applications get deployed and deleted; shadow IT projects pop up here and there; hot patches are applied; API versions change without being documented. A security tool that requires precise and accurate documentation to be effective is doomed to fail. A good tool follows four simple steps:
- Discover: Identify the currently active elements of the environment.
- Learn: Categorize the elements and learn the patterns of behavior. Compare the actual with what has been documented or claimed.
- Adapt: Where possible, automatically adjust the security settings to reflect new threat vectors, block threats, or compensate for discovered vulnerabilities.
- Alert: Where discovered changes affect critical assets or remediation is not obvious, alert the security professionals. Humans know best!
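The four steps above can be sketched as a simple loop. This is a minimal illustration only: the `Asset` class, `security_pass` function, and the policy of auto-adapting non-critical finds while alerting on critical ones are all hypothetical names and choices of mine, not any vendor's actual API.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    """A hypothetical record for one active element of the environment."""
    name: str
    critical: bool = False

def security_pass(observed, inventory):
    """One discover -> learn -> adapt -> alert pass over the environment."""
    adapted, alerts = [], []
    for asset in observed:               # Discover: what is actually active
        if asset.name not in inventory:  # Learn: compare actual vs. documented
            if asset.critical:
                # Alert: undocumented critical assets go to a human
                alerts.append(f"undocumented critical asset: {asset.name}")
            else:
                # Adapt: automatically bring low-risk discoveries under policy
                adapted.append(asset.name)
    return adapted, alerts

adapted, alerts = security_pass(
    [Asset("billing-api"), Asset("shadow-crm"), Asset("payments-db", critical=True)],
    inventory={"billing-api"},
)
```

Here `shadow-crm` is quietly brought under policy, while the undocumented `payments-db` (a critical asset) is escalated to the security team, reflecting the "humans know best" rule for the cases that matter.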
Gone are the days when a good network firewall was all one needed for a solid security posture. The M&M security model (hard on the outside, soft on the inside) breaks down under the weight of intranets, satellite environments, public clouds, backend services, third-party APIs, employees working from home, untrusted on-premises contractors with BYOD devices, and much more.
In this new world, the zero-trust model is king because the perimeter is porous. Internal APIs and internal assets need to be monitored and protected as vigorously as the perimeter itself.
Monitors all stages of the kill chain
As we know from the model defined by MITRE, cyber-attack kill chains follow seven stages: recon, weaponize, deliver, exploit, control, maintain, and execute. The good news for defenders is that disrupting any one stage disrupts the entire attack. The bad news? If you disrupt an attack at a later stage without detecting the root cause, it is easy for the malicious actor to use the same foothold to try again. Moreover, according to a recent study sponsored by IBM (***), the average time to identify a data breach was 197 days, with another 69 days to contain it. The right security tool should cut off reconnaissance by flagging suspicious user activity, block weaponization with strict policies and validation, restrict delivery by monitoring lateral movement, hinder exploitation by staying close to developers, and block the remaining stages of the attack by monitoring application logic and behavior.
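One way to think about this coverage is as a stage-to-control mapping: the chain breaks if any one stage is disrupted, but stages left entirely uncovered are where an attacker can regain a foothold. The mapping below is purely illustrative; the control descriptions paraphrase this article and are not a standard taxonomy.

```python
# Hypothetical mapping of each kill-chain stage to a class of defensive
# control (wording mine, paraphrasing the article above).
KILL_CHAIN_CONTROLS = {
    "recon":     "flag suspicious user activity",
    "weaponize": "strict policies and validation",
    "deliver":   "monitor lateral (east-west) movement",
    "exploit":   "shift-left tooling close to developers",
    "control":   "monitor application logic and behavior",
    "maintain":  "monitor application logic and behavior",
    "execute":   "monitor application logic and behavior",
}

def uncovered_stages(deployed_controls):
    """Return the stages with no deployed control at all -- the places an
    attacker could use as a foundation to try again."""
    return [stage for stage, control in KILL_CHAIN_CONTROLS.items()
            if control not in deployed_controls]
```

For example, a team that has only deployed strict policies and validation covers weaponization but leaves recon, delivery, exploitation, and the execution stages open, which is exactly the "disrupted late, root cause undetected" failure mode described above.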
Traceable™ meets most of the requirements for a good 360-degree security tool.
It has deployment options and data-collection vehicles for anything presenting application logic on layer 7, from legacy monolithic PHP applications to Kubernetes instrumented with the Istio service mesh. It is optimized for DevSecOps and gives security pros a platform to find anomalies and teach developers to use the tool themselves. It has a rich data set and a powerful ML platform to learn the developer's intent, discover APIs and other assets, and adapt to the environment, all while improving observability for human decision making. It monitors both north-south and east-west APIs to provide as much zero-trust information as the team can handle. And it monitors application logic, user behavior, and data behavior in context, making it one of the most comprehensive application security monitoring tools available.
Once you try it, you will love it as much as I do!