AI-Assisted Test Automation - Market Analysis

Would you like to reduce your test script maintenance and decrease the time your product takes to reach the market? As the systems we are testing become more complex, the methods we use for testing have to evolve as well.

When software testing first started as an independent discipline, we used tools like spreadsheets for managing and executing test cases. Later we moved towards more sophisticated tools for requirements and test management, and tools became available for automated testing.

We used to have a defined set of requirements; now our requirements change daily or even hourly as we become ever more agile. We can't easily handle this agility with automation: the cost of maintaining tests can easily skyrocket, and there just aren't enough people with sufficient automation skills to maintain them.

One proposed solution is a category of tools that have been launched into the market in the past few years and embrace AI technologies for test creation, execution, and maintenance. There are some unique ways these products aim to solve testing problems.

Write tests without knowing what happens under the hood

Record and replay tools are nothing new. They have been around for a long time and leave a bad taste in the mouth of many testers. However, tools like ReTest, TestComplete, RainforestQA, and Mabl don't require you to know code, unique object identifiers, or programming languages. They record multiple unique identifiers for each field during recording and create a 'golden master' for future executions.

When the unique identifiers and functionality inevitably change in the project, there is no need to update the test scripts; the tests detect changes and self-heal automatically. While this sounds great, in practice it can still be difficult to amend tests for major functional changes, and a user may have to record a new test from scratch and delete the old one. Also, what if a tool makes the wrong decision to self-heal and lets a critical bug slip through?
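The multi-identifier idea behind self-healing can be sketched in a few lines. This is a hypothetical illustration of the fallback logic only, not any vendor's actual API: the locator strings and page dictionaries below stand in for a real DOM and real recording data.

```python
# Recorded at record time: several identifiers for the same field,
# in priority order (hypothetical locator syntax).
SUBMIT_LOCATORS = ["id=submit-btn", "css=.btn-primary", "text=Submit"]

def find_element(page, locators):
    """Return the first element any recorded identifier still matches.

    If the preferred identifier no longer exists, 'heal' by falling
    back to the next recorded one instead of failing the test.
    """
    for locator in locators:
        if locator in page:
            return page[locator], locator
    raise LookupError(f"no recorded locator matched: {locators}")

# Original deployment: the preferred identifier works.
page_v1 = {"id=submit-btn": "<button>", "css=.btn-primary": "<button>"}

# After a redeploy the id changed, but the CSS class survived,
# so the test heals itself rather than breaking.
page_v2 = {"id=send-btn": "<button>", "css=.btn-primary": "<button>"}
```

The risk mentioned above is visible even in this toy: if `css=.btn-primary` now matches a completely different button, the test still passes and a real defect could slip through.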

Image and text recognition

Visual text and image recognition has become more popular in recent years. Screenshots are taken during test execution and then compared to previous screenshots with changes highlighted.

There are various tools that do this. For example, Applitools and Mabl compare full-page images, RainforestQA recognises page elements to be interacted with, and TestComplete provides OCR functionality for recognising text in images.

While these tools give great feedback on what has changed since the last deployment, they still require human input to review screenshots and categorise whether the changes are bugs or new features. 
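The golden-master comparison these tools perform can be reduced to a toy sketch, assuming screenshots are represented as 2-D grids of pixel values. Real tools compare rendered bitmaps with perceptual tolerances and highlight regions rather than single pixels; `diff_regions` is an illustrative name, not a real API.

```python
def diff_regions(baseline, current):
    """Return (x, y) coordinates where two equally sized 'screenshots'
    (2-D grids of pixel values) differ from the stored golden master."""
    return [
        (x, y)
        for y, (row_a, row_b) in enumerate(zip(baseline, current))
        for x, (a, b) in enumerate(zip(row_a, row_b))
        if a != b
    ]
```

Even in this toy form, the limitation from the text is clear: the diff tells you *where* something changed, but a human still has to decide whether each change is a bug or an intended new feature.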

Generate test scripts automatically

Before generating any test scripts, there must be an understanding of their objective. Test generation is quite common for unit tests; however, for functional tests most testers would correctly question the need and approach for automation. How can you confirm if tests cover all possible scenarios and outputs? How would you know if it is providing relevant testing? Would reviewing automatically generated tests take more time than writing them manually?

When products are very complex or involve many complex inputs, it is sometimes difficult to envisage and design test cases for every possible scenario. AI test generation could help here, but generating tests using property-based or model-based testing methods may be more beneficial. There aren't many tools on the market that easily generate functional tests using AI techniques. Some exist, such as UltimateSoftware/Agent, but they aren't commonly used.
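The property-based approach mentioned above is easy to sketch with the standard library alone: instead of enumerating cases, you state a property and check it against many randomly generated inputs. In practice a library such as Hypothesis adds input shrinking and smarter generators; `run_property` and `gen_int_list` below are illustrative names, not a real framework.

```python
import random

def run_property(prop, gen, runs=200, seed=0):
    """Check a property against many randomly generated inputs."""
    rng = random.Random(seed)  # fixed seed keeps failures reproducible
    for _ in range(runs):
        case = gen(rng)
        assert prop(case), f"property failed for input: {case!r}"
    return True

def gen_int_list(rng):
    """Generate a random list of integers, including the empty list."""
    return [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]

def sorted_is_ordered_and_same_length(xs):
    """Example property: sorting preserves length and orders the result."""
    ys = sorted(xs)
    return len(ys) == len(xs) and all(a <= b for a, b in zip(ys, ys[1:]))
```

This answers one of the questions above: rather than reviewing hundreds of generated cases, a tester reviews one property, which is usually much cheaper than writing or reading the equivalent example-based tests.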

Automation gives us an opportunity to improve testing for complex systems, but it's not an answer to every testing challenge. Before jumping on any tool, we need to define what it is that we are trying to solve. If we are clear on what we want to achieve, AI test tools can increase productivity and reduce time to market. Many of these AI tools are still very new, but they will become more sophisticated and more polished in future.


Written by Gita Malinovska, Senior Consultant at Piccadilly Group

Squashing bugs is good for your bottom line: the importance of defect management

Defect management is a simple yet key activity of application development. If an application has too many bugs to function properly, it simply won’t be used. While defects inevitably appear during and even after development, a reliable defect manager can minimise their impact on a project’s cost and delivery. Defect management clearly plays a crucial role within any programme.

Dev/Ops for Testers

Dev/Ops is a trend that isn't going away, with the number of people identifying themselves as part of a Dev/Ops team having doubled between 2014 and 2017. This trend is particularly notable in the US, where more of these teams now identify themselves as working within the tech sector rather than Financial Services.