Perspective

Automated impact-based test selection and prioritization

Ajit Rajshekar,

Associate Director – Intelligent Automation

Effective testing is essential to the software development process. But test architects and SMEs often need help choosing the optimum tests for their regression needs.

 

Thanks to advances in technology, they can get that help from a new solution called Automated Impact-Based Test Selection and Prioritization. The approach is part of a broader concept known as "test lifecycle automation," which substantially reduces the potential for human error. This, in turn, helps enterprises deliver bug-free applications to their users.

I wrote about this concept in July 2021 in a blog post that introduced the concept of self-healing test automation, which uses algorithms to fix flaky tests without human intervention. An excerpt: 

 

Over the last few years, an idea has been gaining traction in the testing industry. While the tests themselves are automated, the ancillary activities which go into the entire test lifecycle, like script maintenance, impact analysis, defect triaging, etc., are still extremely manual. Relying on manual processes tends to slow the software development process. Hence, we now see a paradigm shift in the philosophy to encompass “test lifecycle automation” and not just “test automation.” The rise of AI/ML techniques paired with programming capabilities goes a long way to help this developing philosophy.

Thus, the goal is to automate throughout the testing lifecycle. This speeds up the SDLC because automation engineers can focus on creating new scripts and optimizing testing, and spend less time maintaining old ones.

Exploring the appeal of impact-based test selection and prioritization

Today, we’ll investigate impact-based test selection and prioritization, another component of test lifecycle automation that is attracting a lot of interest. This solution gives test SMEs better tools for limiting human error and improving application quality.

When we at Virtusa spoke with our customers, a few glaring issues emerged:

  • Enterprises adopting continuous integration/testing pipelines are still performing manual, impact-based test selection. 
  • Enterprises want to optimize their test efforts while maximizing the impact of tests in every sprint.
  • Enterprises need to discover the untested source code in every sprint. 
  • Stakeholders want a dashboard showing this information in an automated manner. 

Let’s dig deeper into this. 

Current challenges for the automation team


 

This diagram illustrates a typical sprint cycle: Developers create the features for the sprint’s requirements, and the SDETs (software development engineers in test) create the corresponding automated test scripts. Every build includes an impact analysis to determine the regression suite, which is then executed against the build. Defects are logged, and the backlog feeds into the next sprint.

Currently, enterprises typically execute a standard regression suite against every build, and the SMEs then manually choose extra tests to add to it. The flaw in this approach is that it depends on the experience of the SMEs, not on the change set for that build.

The approach also involves many manual steps that waste time and effort in every sprint, and, more importantly, it is prone to human error. Alternatively, enterprises might execute the entire test suite as a regression to be confident in the build, which again adds time and effort to the test process.

How impact-based test selection and prioritization works


 

Impact-based test selection and prioritization is an intelligent solution that learns and maintains a source-code-to-test mapping. It uses profilers and applies heuristics derived from historical data to generate a test suite based on the commit data that goes into every build.
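To make the mapping idea concrete, here is a toy Python sketch. A production agent would instrument the application's byte code with a profiler; this sketch approximates that with `sys.settrace`, recording which functions each test touches. All function and test names (`parse_order`, `price_order`, `test_parse`, `test_price`) are hypothetical stand-ins, not part of the actual solution.

```python
import sys
from collections import defaultdict

# Hypothetical "application" functions standing in for source files.
def parse_order(data):       # imagine this lives in parse.py
    return {"id": data["id"]}

def price_order(order):      # imagine this lives in pricing.py
    return 100

# Mapping learned at runtime: function name -> tests that exercised it.
source_to_tests = defaultdict(set)

def run_with_tracing(test_name, test_fn):
    """Run one test with a trace hook that records every function call."""
    def tracer(frame, event, arg):
        if event == "call":
            source_to_tests[frame.f_code.co_name].add(test_name)
        return None  # no line-level tracing needed
    sys.settrace(tracer)
    try:
        test_fn()
    finally:
        sys.settrace(None)

def test_parse():
    assert parse_order({"id": 1})["id"] == 1

def test_price():
    assert price_order(parse_order({"id": 2})) == 100

run_with_tracing("test_parse", test_parse)
run_with_tracing("test_price", test_price)

print(source_to_tests["parse_order"])  # both tests exercise parse_order
print(source_to_tests["price_order"])  # only test_price exercises it
```

In a real pipeline, this per-test coverage data would be keyed by source file rather than function name and persisted to a data lake for later suite generation.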

It consists of four main steps:

  1. Source-code-to-test-case mapping. The tool deploys a software agent during test case or test script execution. The agent maps the test cases to source code files, using profilers to instrument the application’s byte code, and stores the mapping in a data lake.
  2. Test prioritization heuristics. From historical test execution and defect records, test prioritization heuristics are applied to identify the higher-priority test cases. This draws on an emerging field of data science known as mining software repositories (MSR).
  3. Deriving commit changes. After the build is generated, the agent creates a list of all added, modified, and deleted files and pushes it to the test suite generator.
  4. Test suite generation. Using the generated change set, the tool determines the impact-based suite from code churn, along with a prioritized list of test cases derived from historical data.
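The last two steps can be sketched in a few lines of Python. The mapping and file names below are illustrative data, not the output of a real profiler: given the commit's change set, the generator selects only the tests mapped to changed files.

```python
# Toy suite generator: pick only the tests impacted by the commit's
# change set, using the source-to-test mapping learned in earlier runs.
# The mapping below is illustrative, hypothetical data.
source_to_tests = {
    "pricing.py":  {"test_discount", "test_invoice_total"},
    "parser.py":   {"test_parse_order"},
    "shipping.py": {"test_shipping_rate"},
}

def impacted_suite(change_set, mapping):
    """Union of all tests mapped to any added/modified/deleted file."""
    tests = set()
    for changed_file in change_set:
        tests |= mapping.get(changed_file, set())
    return sorted(tests)

# A build whose commits touched only pricing.py and parser.py:
suite = impacted_suite(["pricing.py", "parser.py"], source_to_tests)
print(suite)  # ['test_discount', 'test_invoice_total', 'test_parse_order']
```

Note that `test_shipping_rate` is skipped entirely, which is where the time savings over a full regression run come from.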

These test prioritization heuristics can be further augmented with reinforcement learning (RL) algorithms that refine the prioritized list of test cases. This is done by selecting and prioritizing test cases according to their duration, time since last execution, and failure history, under the guidance of a reward function built into the algorithm.
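A much-simplified sketch of such a heuristic is shown below. The weights and the field names (`runs`, `failures`, `builds_since_last_run`, `duration_s`) are fixed, invented values for illustration; a real RL approach would learn the weighting from a reward function over many builds.

```python
# Simplified prioritization heuristic: score each test by failure
# history, recency, and duration. Weights are fixed here for clarity;
# an RL agent would tune them against a reward signal.
def priority(test):
    fail_rate = test["failures"] / max(test["runs"], 1)
    recency   = 1.0 / (1 + test["builds_since_last_run"])
    speed     = 1.0 / (1 + test["duration_s"])
    # Frequently failing, long-unexecuted, fast tests rank first.
    return 3.0 * fail_rate + 1.0 * (1 - recency) + 0.5 * speed

# Illustrative historical execution records.
history = [
    {"name": "test_a", "runs": 10, "failures": 4,
     "builds_since_last_run": 1, "duration_s": 2},
    {"name": "test_b", "runs": 10, "failures": 0,
     "builds_since_last_run": 8, "duration_s": 30},
    {"name": "test_c", "runs": 10, "failures": 9,
     "builds_since_last_run": 0, "duration_s": 5},
]

ranked = sorted(history, key=priority, reverse=True)
print([t["name"] for t in ranked])  # ['test_c', 'test_a', 'test_b']
```

The frequently failing `test_c` is run first, so a broken build is detected as early as possible in the suite.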

Key benefits of impact-based test selection and prioritization

The benefits of this process are straightforward: It helps test teams select the optimum regression suite based on code changes, which in turn improves product quality and streamlines the customer experience. The data-driven approach to impact analysis makes it far less prone to human error and yields significant savings in the time needed to determine the regression suite. This is crucial for squads churning out frequent builds to test.

Test lifecycle automation is positioned to be one of the main disruptive innovations changing the way testing is performed in the coming decade. The use of AI/ML techniques will be a catalyst and primary growth driver in the software testing realm.

 

Ajit Rajshekar

Associate Director – Intelligent Automation

Ajit Rajshekar is a test architect with over 15 years of quality engineering experience, mainly in the product engineering domain (medical devices). His interests include exploring and using new methodologies/tools in test automation and cognitive intelligence (AI/ML) in testing.
