We don’t live in a world ruled by software; we are the software (more precisely, a simulation) — at least, that’s the consensus among a growing community of intellectuals, philosophers, and even business thought leaders like Elon Musk. And if we are indeed a part of a simulation, then it’s easy to deduce why so many things are broken in every sphere of our lives. Software systems are susceptible to bizarre breakdowns and errors.
More than 60% of the developers and testers in a survey admitted to having released untested or broken code, which led to bugs in production. Among the common blunders, many mentioned wiping out entire databases or shutting down a production server. Perhaps we will find ways to avoid such blunders, but this brings us to one of the bigger puzzles in software development:
How Much Testing is Enough?
Quality Engineering (QE) departments often struggle with the above question. The development and quality teams usually perform a wide range of tests before release, including unit tests, integration tests, functional tests, penetration tests, regression tests, and more. The degree of test automation varies, as does its quality, coverage, and effectiveness. As a result, businesses have to take risks and release their software with some known as well as unknown defects. Bugs are a way of life, and newer methods such as canary deployment are used to cut down risk by releasing software incrementally to the real world.
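As an illustration of the canary idea, here is a minimal sketch of sticky traffic splitting in Python. The function name, the 5% default, and the hashing scheme are all assumptions made for illustration, not a reference to any particular deployment tool.

```python
def route_request(request_id: str, canary_fraction: float = 0.05) -> str:
    """Send a small, configurable fraction of traffic to the canary build."""
    # Hashing the request/user ID keeps a given user on the same
    # version across requests within a process (sticky routing).
    bucket = hash(request_id) % 100  # always 0..99 in Python
    return "canary" if bucket < canary_fraction * 100 else "stable"
```

If the canary build misbehaves, only the small routed fraction of users is affected, and the rollout can be paused or reverted before a full release.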
From a pure business standpoint, it is simple to say that teams should test only as long as the associated costs do not exceed the returns. However, as discussed, the lack of visibility into the different processes can prevent businesses from assessing the true costs of their testing. Though the leadership in most enterprises realizes that there is a lot of room to cut down hidden or opportunity costs by optimizing processes, they lack proper tools to get end-to-end visibility and control over their distributed setups.
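The cost-versus-returns trade-off above can be sketched as a simple break-even rule. This is an illustrative model with assumed inputs (marginal testing cost, defect escape probability, cost of an escaped defect), not an established costing formula.

```python
def keep_testing(marginal_test_cost: float,
                 defect_escape_prob: float,
                 defect_cost: float) -> bool:
    """Illustrative break-even rule: continue testing while the expected
    cost of a defect the next round would catch exceeds that round's cost."""
    expected_saving = defect_escape_prob * defect_cost
    return expected_saving > marginal_test_cost
```

The catch, as noted above, is that without end-to-end visibility the inputs to such a rule are guesses: neither the true marginal cost of testing nor the real cost of an escaped defect is easy to measure.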
Has the Pandemic Complicated Quality Engineering?
Last September, research commissioned by CloudBees gathered input from 347 industry leaders on the impact of remote work and the pandemic on software development. A significant 59.49% of the respondents reported an increase in productivity among software development teams since the start of the pandemic. This is in line with the permanent work-from-home announcements (with some caveats, of course) by tech giants including Google, Facebook, Microsoft, Twitter, and more.
It may appear that the tech giants have responded well to the challenges posed by the pandemic. However, it is too early to gauge the long-term effects of the pandemic on software development and productivity. Past research, as cited in this article by ESSEC Business School, indicates that teamwork factors such as psychological safety and shared understanding (Edmondson 1999; Kude 2019), which are essential building blocks of functioning software development teams, suffer in work-from-home setups.
While there’s no specific report available on the pandemic’s impact on QE, it is safe to say that the pandemic is not what caused the major productivity drains. A report published just before the pandemic indicated that around 60% of knowledge workers wasted their time “doing work about their work”: searching through emails, discussing projects, and attending unnecessary meetings. It also reported that an average worker switched between 10 apps 25 times per day, which fragmented communication and reduced efficiency. Disparate tools and data silos have always been the biggest obstacles to productivity.
The QE department, which is responsible for process control and oversight through the implementation of industry standards and metrics, needs to rethink its working methodologies to adapt to the pandemic. An immediate challenge for testing and development teams is gathering real-world feedback, which may be missing in many industries. For example, a ticketing solution developed for the aviation industry might receive false or inadequate feedback due to reduced air traffic these days.
Similarly, the communication and reporting practices that were common earlier may not work well in the new paradigm. The tools and dashboards might lack sufficient context and detailed comments. Hence, teams that were used to face-to-face communication now often resort to individual chats to collaborate and resolve issues. This is a deviation from the standard practice of raising issues during team meetings, which kept everyone informed. In situations like these, code changes made without everyone’s awareness can lead to conflicts.
Further, with large armies of developers and testers working remotely, it is not easy to identify who is frequently finding bugs and running actual tests versus who is just approving or updating. QE teams also face difficulties in assessing their code coverage accurately: often, certain blocks of code are tested thoroughly while others remain untouched. Also, with a large number of test suites across different projects, it is difficult to find out which test cases were written but never executed, or which lack steps and expected outcomes.
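Flagging such gaps becomes straightforward once test-case data is pulled into one place. Below is a minimal sketch in Python; the dictionary shape is hypothetical (loosely modeled on a test-management export such as TestRail’s), and `audit_test_cases` is an invented helper for illustration, not a real API.

```python
def audit_test_cases(cases: list[dict]) -> dict:
    """Flag test cases that were never executed, or that lack
    steps or an expected outcome (hypothetical data shape)."""
    never_executed = [c["id"] for c in cases if not c.get("runs")]
    incomplete = [c["id"] for c in cases
                  if not c.get("steps") or not c.get("expected")]
    return {"never_executed": never_executed, "incomplete": incomplete}

suite = [
    {"id": "C1", "steps": ["open app"], "expected": "app loads", "runs": 12},
    {"id": "C2", "steps": [], "expected": "", "runs": 0},  # empty, never run
    {"id": "C3", "steps": ["log in"], "expected": "dashboard", "runs": 0},
]
report = audit_test_cases(suite)
# report flags C2 and C3 as never executed, and C2 as incomplete
```

A dashboard built on this kind of audit can surface dead or half-written test cases across projects instead of leaving them buried inside individual suites.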
Hence, from developers and testers to product owners and VPs, every stakeholder now needs increased visibility, cross-tool bi-directional connectivity, and traceability to stay on the same page, connect the dots, and get a sense of the work in their context. QE teams need to revamp their dashboards to monitor test suites with seamless data collection and near real-time insights on team progress, productivity, blockages, quality/defect trends, tech debt, test automation, risks, and more.
How Klera Helps QE Teams
Klera has been working with global technology teams, helping them focus on their real work through solutions that enable data-driven visibility and control across tools and processes. It provides a no-code framework to connect disparate tools, automate workflows, create interactive dashboards, and more.

One of Klera’s global customers, a leader in the cloud-based unified communications space, has developed several apps for QE to increase its preparedness for the pandemic and ensure high accountability and productivity with remote work. Its Senior Director of QE recently mentioned that Klera’s apps are now used daily, like the vital-signs monitor in an ICU. He said he has complete, purely data-driven command and control over the risks, productivity, post-mortem capabilities, predictability, and overall ROI of everyone’s work, even when they are working remotely. “I don’t need to talk to anyone at all and I still know exactly what is going on daily just by monitoring information across Jira and TestRail,” he added.

You can watch our webinar on Quality & Reliability Engineering to learn more about how Klera helps QE teams deliver reliable, high-quality software while ensuring speed and cost-effectiveness.