In the Covid-19-hit economy, we have seen increased adoption of digital systems and processes across all sectors. While this trend has enabled business continuity, many organizations, in the quest to expand their digital ecosystems, have also increased their exposure to a myriad of cyber threats and vulnerabilities.
Every year, businesses strengthen their defenses against new breeds of malware, even as they keep falling for old-school social engineering and phishing scams. According to IBM's 2020 Cost of a Data Breach Report, the average cost of a data breach in 2020 was estimated at USD 3.86 million, with organizations taking around 280 days on average to detect and contain a breach. Verizon's 2020 Data Breach Investigations Report paints a clear picture of emerging threats and points out that web applications are a major vector for data breaches: around 43% of the breaches analyzed in the report targeted web apps, and web application attacks doubled compared to the previous year. Needless to say, organizations need to step in as early as possible to mitigate breaches caused by application vulnerabilities and avoid steep financial and reputational costs.
What Prevents Quick Mitigation of Incidents and Vulnerabilities?
While countering zero-day exploits is more complex, enterprises are often exposed through common, known vulnerabilities in their applications. According to the Verizon report, SQL injection and PHP injection vulnerabilities (which aren't new) are the most commonly exploited. Does this mean that development teams are not following best practices and are leaving their code exposed to such vulnerabilities?
The above question about developers' role and accountability in dealing with vulnerabilities is slightly misplaced, as the problem is more about process than people. The more appropriate line of inquiry is: why, even with the increasing prevalence of continuous testing and shift-left approaches in software development, do applications remain vulnerable, and why do organizations fail to resolve such incidents and vulnerabilities in time?
Here are some of the common reasons or ‘speed bumps’ that prevent organizations from quickly mitigating application vulnerabilities:
Complex Toolchain – As no two organizations employ the same software development practices, tools, and systems, things can go wrong for many reasons. Complex CI/CD toolchains make it difficult for delivery leaders to track all issues and changes across the planning, development, staging, and deployment stages. As a result, vulnerable code is sometimes inadvertently pushed into production.
Lack of Backwards Traceability – While most organizations focus on continuously improving their velocity and deployment frequency to achieve a quicker time to market, they still haven't cracked the code on tracing issues and vulnerabilities back to the modified files, source code, and author. Due to this lack of backwards traceability, when a vulnerability is detected, it is not easy to find its root cause or identify who is best placed to resolve it immediately.
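As a concrete, purely illustrative sketch of what backwards traceability involves, the snippet below maps a line of code to its last commit and author by parsing the output of `git blame --line-porcelain`. The sample output, file contents, and author names are fabricated for the example:

```python
import re

# Fabricated `git blame --line-porcelain` output for illustration; in practice
# this would come from running `git blame --line-porcelain <file>` via subprocess.
SAMPLE = (
    "1111111111111111111111111111111111111111 1 1 1\n"
    "author Alice\n"
    "\tquery = \"SELECT * FROM users WHERE id=\" + user_id\n"
    "2222222222222222222222222222222222222222 2 2 1\n"
    "author Bob\n"
    "\tcursor.execute(query)\n"
)

def blame_line(porcelain: str, line_no: int):
    """Return (commit, author) for a given final line number, or None.

    In --line-porcelain output, each line's block starts with
    '<sha> <orig_line> <final_line> ...', followed by headers such as
    'author <name>' and finally the tab-prefixed source line itself.
    """
    commit = author = final = None
    for line in porcelain.splitlines():
        header = re.match(r"^([0-9a-f]{7,40}) (\d+) (\d+)", line)
        if header:
            commit, final = header.group(1), int(header.group(3))
        elif line.startswith("author "):
            author = line[len("author "):]
        elif line.startswith("\t") and final == line_no:
            return commit, author
    return None
```

Joined with a scanner's file-and-line finding, a lookup like this immediately tells you who last touched the vulnerable line, which is exactly the routing information manual triage otherwise spends hours reconstructing.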
Partial Automation and Manual Triaging – While ChatOps and advanced ticketing tools enable operations teams to achieve a higher degree of automation and speed, manual processes haven't disappeared. In fact, in most organizations, teams still triage issues and defects manually, even when they use modern tools to collaborate. Such old habits prevent the fast, coordinated response required to deal with vulnerabilities.
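One way to chip away at manual triaging is to encode ownership rules so that new findings are routed automatically. The sketch below is a hypothetical CODEOWNERS-style router; the paths and team names are invented for the example:

```python
from fnmatch import fnmatch

# Hypothetical ownership rules, most specific last (CODEOWNERS-style).
OWNERS = [
    ("src/*", "platform-team"),
    ("src/payments/*", "payments-team"),
    ("src/auth/*", "identity-team"),
]

def route_vulnerability(file_path: str, default: str = "triage-queue") -> str:
    """Return the team that should own a finding in `file_path`.

    Later (more specific) rules win, mirroring how GitHub's CODEOWNERS
    resolves overlapping patterns; unmatched paths fall to a default queue.
    """
    owner = default
    for pattern, team in OWNERS:
        if fnmatch(file_path, pattern):
            owner = team
    return owner
```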
Lack of Context – Even when a vulnerability is detected and a ticket is created, manually or automatically, it often lacks sufficient context for teams to act on quickly. They have to spend crucial hours understanding the issue before actual troubleshooting can begin.
Limited Documentation of Tribal Knowledge – Experienced team members often rely on heuristics to resolve issues. While this can produce good results, not all team members know the same tips and tricks. The lack of effective documentation of such organizational knowledge, and of efficient ways to tap into it, prevents its democratization.
Not Learning from the Past – Organizations fail to fully utilize the learnings from past incidents. The events leading up to an incident often follow a recurring pattern, but due to a lack of data-driven insights, organizations fail to identify such patterns in their application performance data and keep facing similar, preventable issues.
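Spotting such recurring patterns does not always require heavyweight analytics. As a minimal, illustrative sketch, the functions below mask volatile details (numbers, quoted values) in incident messages so that repeats of the same underlying failure collapse onto one signature; the messages are fabricated:

```python
import re
from collections import Counter

def signature(message: str) -> str:
    """Normalize an incident message into a coarse signature by masking
    volatile details: hex/decimal numbers become <N>, quoted values <V>."""
    sig = re.sub(r"0x[0-9a-fA-F]+|\d+", "<N>", message)
    sig = re.sub(r"'[^']*'", "'<V>'", sig)
    return sig.lower()

def recurring(incidents, min_count=2):
    """Return the signatures that appear at least `min_count` times."""
    counts = Counter(signature(m) for m in incidents)
    return {sig: n for sig, n in counts.items() if n >= min_count}
```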
For all these reasons, organizations struggle to lower their resolution times. Detecting and resolving incidents and vulnerabilities becomes even harder with enterprise applications developed over decades on a monolithic architecture that are now being refactored into microservices. In such cases, there are millions of lines of legacy code that no one wants to touch or disturb.
How to Detect and Address Application Vulnerabilities?
There are many web application scanners on the market that can help detect known vulnerabilities in code. Teams can use free, open-source scanners like OWASP Zed Attack Proxy (ZAP), which offers API access, allowing organizations to automate vulnerability scanning as part of their CI/CD pipeline. However, this solves only part of the problem. As discussed, in addition to detecting vulnerabilities, teams also need traceability across their development lifecycle to find and patch the vulnerable piece of code.
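For example, a pipeline step can pull the alert list from ZAP's API (the `core/view/alerts` endpoint) and fail the build when high-risk findings appear. The sketch below shows only the gating logic; the sample alerts and the High-risk threshold are illustrative choices, not ZAP defaults:

```python
# ZAP reports each alert with a "risk" field of Informational/Low/Medium/High.
RISK_ORDER = {"Informational": 0, "Low": 1, "Medium": 2, "High": 3}

def gate(alerts, fail_at="High"):
    """Return the alerts at or above the `fail_at` risk level.

    A CI step would fetch `alerts` from a running ZAP instance
    (GET /JSON/core/view/alerts/) and fail the build if this list is non-empty.
    """
    threshold = RISK_ORDER[fail_at]
    return [a for a in alerts if RISK_ORDER.get(a.get("risk"), -1) >= threshold]

# Fabricated alerts in ZAP's shape, for illustration only.
sample_alerts = [
    {"alert": "SQL Injection", "risk": "High", "url": "https://example.test/login"},
    {"alert": "X-Content-Type-Options Header Missing", "risk": "Low",
     "url": "https://example.test/"},
]
```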
One way to achieve this is by tracking and analyzing logs, metrics, events, and traces across the distributed ecosystem. Organizations can implement advanced APM (Application Performance Management) solutions to trace issues and vulnerabilities effectively. However, code instrumentation and log management come with definite overheads. As most APM tools use agents to collect logs, they can impact application performance. Further, without a unified analytics platform, it is not always easy to gain actionable insights from logs and metrics collected from different sources. Open-source alternatives, such as those based on the ELK stack, require significant configuration and management expertise.
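A lighter-weight complement to agent-based collection, sketched below, is to emit structured (JSON) logs that carry correlation fields such as a trace ID and the deployed commit, so any downstream analytics stack can index them. The field names here are assumptions for the example, not a standard:

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Render each record as one JSON object per line, carrying correlation
    fields (trace_id, commit) that downstream analytics can index."""
    def format(self, record):
        payload = {
            "level": record.levelname,
            "message": record.getMessage(),
            "trace_id": getattr(record, "trace_id", None),
            "commit": getattr(record, "commit", None),
        }
        return json.dumps(payload)

logger = logging.getLogger("app")
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Correlation fields are passed per call via `extra`.
logger.info("payment failed", extra={"trace_id": "abc123", "commit": "deadbeef"})
```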
How Does Klera Help Resolve Application Vulnerability Challenges?
Klera readily connects with your CI/CD pipeline using pre-built, bidirectional intelligent connectors to collect data from tools like ZAP, ServiceNow, Jira, Git, SonarQube, Jenkins, Kubernetes, and more. It uses this data to create an interactive dashboard, which helps in detecting vulnerabilities in the code and then connecting the dots to find out:
- The modified files, commit ID, and exact line of code responsible for the vulnerability
- Who made the commit or modified the file and how frequently
- Which Jira issues or change requirements led to the file modification
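The third linkage, from file modifications back to Jira issues, is commonly established by parsing issue keys out of commit messages. A minimal sketch follows; the key pattern is the standard Jira `PROJECT-123` convention, and the messages are invented:

```python
import re

# Standard Jira issue key shape: an uppercase project key, a dash, a number.
JIRA_KEY = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def linked_issues(commit_message: str):
    """Extract the Jira issue keys referenced in a commit message."""
    return JIRA_KEY.findall(commit_message)
```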
By automating end-to-end traceability, organizations can quickly detect issues and vulnerabilities in their applications and assign them to the right team members for quick resolution. See Klera in action and understand how it helps ensure traceability across the distributed development lifecycle here.
Further, Klera serves several ALM and DevOps use cases to help you make the most of your software development tools and processes and realize the true potential of your data. For instance, you can gather data from Git, Jenkins, Sonar, and Artifactory to gauge the health of your CI/CD build pipeline and set up cognitive, proactive alerts for a timely response. Its no-code platform and bidirectional intelligent connectors allow you to quickly develop intelligent applications that remove process bottlenecks, automate routine tasks, extract actionable insights, detect important trends and patterns, gather feedback, generate reports, and more. With Klera's pre-built and custom applications, you can accelerate deliveries and ensure improved quality, reliability, and resilience against cyber-attacks and vulnerabilities.