Baked-In vs. Bolted-On Security
Security practitioners often describe security planning as either baked in or bolted on. This post unpacks that analogy and challenges the reflexive derision of bolting security on.
Baking Security In
The first lesson a new security practitioner learns is to “bake security in instead of bolting it on.” This article challenges that cliché as a tautology and explores how the analogy plays out in real life. I have watched firsthand as security teams delayed projects without justification. That approach produces several undesirable outcomes:
- undermined value of security professionals
- increased tension in the creative process, especially with business partners
- decreased revenue through delayed time to market
As security practitioners, we must balance time to market (read: availability) against confidentiality and integrity. The real question is how to strike that balance.
One strategy for balancing security with time to market is planning the bolt holes. When the architects designed the Internet, they could not have conceived of the security concerns ahead, yet they got it right. (Bear with me…) The Internet’s original use case was research collaboration among trusted partners, and those architects designed a framework that enables everything from search engines to Snapchat. Robert Kahn and Vint Cerf developed the TCP/IP protocol suite in the 1970s, and it gained wide adoption in 1983 (https://en.wikipedia.org/wiki/History_of_the_Internet). If you’re not familiar, TCP/IP is the set of protocols that moves information through the Internet and your home network. How did solving an academic data-exchange problem lead to the rise of Paris Hilton through social media? The simple answer is an architecture with bolt holes in the right places. The founding architects developed a model that packages data into layers. They planned for growth and for new ways to wrap data for transport. That layered architecture provided a way to enforce confidentiality and integrity, even though they were solving an availability problem.
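To make the layering concrete, here is a deliberately simplified sketch: each layer just wraps the payload handed down from the layer above. The bracketed header strings are toy placeholders, not real protocol headers.

```python
# Toy model of layered encapsulation (not real TCP/IP code): each layer
# prepends its own header to the payload it receives from the layer above.
def wrap(layer_name: str, payload: bytes) -> bytes:
    header = f"[{layer_name}]".encode()
    return header + payload

message = b"GET /index.html"    # application data
segment = wrap("TCP", message)  # transport layer
packet = wrap("IP", segment)    # network layer
frame = wrap("ETH", packet)     # link layer
```

Because each layer sees only the payload it is handed, a new layer can be bolted in between two existing ones without changing either, e.g. `wrap("TCP", wrap("TLS", message))`. That is the bolt hole SSL would later use.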
The simplest example of the beauty of this design arrived in 1995 with SSL. The first SSL implementations introduced widespread encryption on the Internet, which could now provide confidentiality and integrity for the information flowing through it. This bolt-on addition to the original architecture enabled online banking.
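Python’s standard library still reflects how directly TLS bolts onto the existing socket interface: a plain TCP socket is simply wrapped, and everything above the wrap is unchanged. In this sketch, `example.com` is just a placeholder hostname and no connection is actually made.

```python
import socket
import ssl

# TLS as a bolt-on: wrap an ordinary TCP socket in an encrypting layer.
ctx = ssl.create_default_context()  # modern defaults: certificate and hostname checks
plain = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tls = ctx.wrap_socket(plain, server_hostname="example.com")
# `tls` exposes the same send()/recv() interface as `plain`, so code written
# for the unencrypted Internet gains confidentiality and integrity unchanged.
```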
In plain and simple terms: the Internet was twelve years old before it was “secure,” and that is okay. If a security practitioner had insisted on cryptography for Internet transmissions before rollout, we would have indefinitely delayed the collaboration and demand that birthed the Internet. We learned a lot during those twelve years, and those lessons fed the next innovation. We had to bring the system online to discover the things we didn’t know. If we insist on securing every possible vulnerability before launch, projects will never finish. We have to revitalize the risk-analyst role within the security practitioner job code.
On a recent software development project, I found myself completely blocked while securing a container build process. I knew there was a vulnerability I had to handle, and I couldn’t move forward until I resolved it. After a frustrating hour, I relaxed all the controls and the build process started working again. That outcome seemed inconceivable. As I reinstated the controls one by one, a conflict between two of them became apparent, and I secured the build process. Yes, this is troubleshooting 101. But when we preoccupy ourselves with minutiae, we can’t see the forest for the trees. The vulnerability was mitigated by three other controls, and I was the only person who could have exploited it. I clearly had bolt holes in the right places but couldn’t swap my security hat for my operations hat long enough to get the build working.
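That one-by-one reinstatement is really a small search over control combinations. Here is a sketch of it; the control names and the `build_succeeds` predicate below are hypothetical stand-ins for a real build pipeline, not anything from my actual project.

```python
from itertools import combinations

def find_conflicting_pair(controls, build_succeeds):
    """Return the first pair of controls that breaks the build when
    enabled together, or None if every pair coexists peacefully."""
    for pair in combinations(controls, 2):
        if not build_succeeds(set(pair)):
            return pair
    return None

# Hypothetical example: pretend "apparmor" and "seccomp" conflict.
build = lambda enabled: enabled != {"apparmor", "seccomp"}
culprits = find_conflicting_pair(["selinux", "apparmor", "seccomp"], build)
```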
Enterprises flounder over action plans for low-severity security issues. Security practitioners insist on patching low-risk vulnerabilities on low-risk systems. Don’t get me wrong: I want to patch all the vulnerabilities too. My complaint is the misapplication of context. One way to contextualize the risk of a vulnerability is a penetration test. Penetration testing can be expensive, but it often proves high value when executed by knowledgeable testers. A good penetration test will come back with low-severity findings you should fix in the next iteration of the project (imagine an encryption cipher vulnerable to advanced attacks). A good penetration test might reveal critical vulnerabilities the enterprise wants to fix before anyone else finds them. And a good penetration test may reveal neither class of vulnerability, because neither is there. The key architectural concern I’m leading up to is this: the software development cycle must have the bolt holes to address vulnerabilities revealed through penetration tests, user feedback, and a myriad of other possible paths.
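The context-driven triage I’m advocating can be written down explicitly. This is a minimal sketch of one such decision table; the severity labels, system-risk labels, and disposition strings are my own hypothetical choices, not any standard taxonomy.

```python
# Hypothetical triage matrix: finding severity and system risk together
# decide whether a fix blocks release or waits for the next iteration.
def triage(severity: str, system_risk: str) -> str:
    if severity == "critical":
        return "fix before release"
    if severity == "low" and system_risk == "low":
        return "fix in next iteration"
    return "schedule within current cycle"
```

The point is not these particular rules; it is that writing the rules down forces the context conversation to happen once, instead of in every patching debate.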
Architects must plan the bolt holes instead of insisting we review every ingredient before we start baking. Security practitioners must bridge the gap between risk management, operations, and security if we want business leaders to value the role.
Photo credit: Pixabay