Despite our best efforts to write secure code, computer security breaches at major banks, retailers and government agencies are making front page headlines on a regular basis. Here are five reasons that explain why writing better code may only address a fraction of application security risk.
Limited Code Ownership
Modern applications combine code written by in-house developers with open source components, 3rd party libraries and development frameworks, much like manufacturers use a mix of in-house and sourced components in their finished products. Various industry studies suggest that only 10 to 30 percent of custom application code is actually written by a company's own developers. So even the most secure coding practices in the world, perfectly executed, will address at best 30 percent of the potential risk.
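The arithmetic behind this point can be made concrete with a quick sketch. The line counts below are made-up illustrative figures, not data from any real project:

```python
# Hypothetical illustration of "limited code ownership": even a modest
# service often ships far more third-party code than first-party code.
# All numbers here are invented for the example.

first_party_loc = 12_000           # code the in-house team wrote
third_party_components = {         # hypothetical dependency tree
    "web framework": 85_000,
    "ORM / database driver": 40_000,
    "crypto library": 25_000,
    "logging and utilities": 18_000,
}

third_party_loc = sum(third_party_components.values())
total = first_party_loc + third_party_loc
share = first_party_loc / total

# Secure coding practices only directly cover the first-party share.
print(f"first-party share of the codebase: {share:.0%}")
```

With these example numbers, in-house code is only about 7 percent of the deployed application, which is why perfect secure coding still leaves most of the attack surface untouched.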
No Clear Ownership
DevOps has dramatically changed the development landscape. With its focus on rapid deployment and continuous iteration, it's often unclear who is responsible for security, and to what extent. Automated tooling, where available, has also given rise to a false sense of security in the code being produced. Given the number of false positives these testing tools generate, many programmers may simply have stopped listening.
Reliance on 3rd Party Applications
The exact proportion varies by industry, but in most large organizations at least half of all software applications are purchased from a 3rd party vendor. While a small amount of customization may be possible, buyers rarely have access to the source code, so there is no way to verify whether these applications were developed with acceptable security best practices. As we've witnessed over the past two years, many high-profile security breaches have exploited vulnerabilities in 3rd party applications or the IT supply chain.
Legacy Applications
The best time to enforce rigorous software security standards is early in the development lifecycle. Unfortunately, many applications were developed before stringent security standards were a core element of the original design, and most can't be taken out of production and rewritten to remediate a security flaw, especially when remediation can take up to three months in a large organization. Organizations running major systems with known security flaws are hoping that some combination of virtual firewall patching and serendipity will prevent a catastrophic and highly public outcome.
Platform and Runtime Vulnerabilities
A further range of risks arises from the application platforms and runtime environments on which applications are deployed. No amount of secure development practice will protect against vulnerabilities in the application platform (e.g. Apache Tomcat, WebLogic, JBoss, WebSphere) or in the runtime environment itself (like Java, which is favored in the banking sector).
So while secure coding should remain our goal, even the most secure coding is not enough to prevent large scale disasters when it comes to application security. In addition to security awareness training for developers and secure coding policies and procedures, companies should consider instituting security controls at the application deployment phase. These include penetration testing and an emerging technology that analyst firm Gartner calls Runtime Application Self-Protection, or RASP. This approach implements security protection (not merely detection) within the execution environment, and it's a strong complement to secure coding practices.
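To make the "protection, not merely detection" distinction concrete, here is a toy sketch of the RASP idea. Real RASP products instrument the language runtime or application server itself; this hypothetical decorator, pattern, and exception are invented purely to illustrate a guard that lives inside the application process and blocks a suspicious payload before it reaches a vulnerable sink:

```python
import functools
import re

# Toy pattern for SQL-injection-style payloads (illustrative only;
# real products use far more sophisticated, context-aware analysis).
SQLI_PATTERN = re.compile(r"('|--|;|\bUNION\b|\bOR\b\s+1=1)", re.IGNORECASE)

class BlockedRequest(Exception):
    """Raised when the in-process guard rejects a payload."""

def rasp_guard(func):
    """Wrap a sensitive function with an in-process input check."""
    @functools.wraps(func)
    def wrapper(user_input, *args, **kwargs):
        if SQLI_PATTERN.search(user_input):
            # Protection, not just detection: the call never happens.
            raise BlockedRequest(f"blocked suspicious input: {user_input!r}")
        return func(user_input, *args, **kwargs)
    return wrapper

@rasp_guard
def lookup_account(account_id):
    # Stand-in for a database query an attacker is trying to reach.
    return f"SELECT balance FROM accounts WHERE id = '{account_id}'"
```

The key property this sketch illustrates is that the control executes inside the application at runtime, so it can stop an exploit against code the organization never wrote or reviewed, rather than merely logging it for later triage.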