Increasing test automation stability and visibility


Challenge

  • 2,000 unique tests had low stability (a 25% pass rate)
  • Complex and inconvenient test result reporting
  • The QA team didn't have capacity for test failure analysis
  • Decision makers ignored automation results: failure causes went unanalyzed, and there was no trust in the automated testing process
  • No visibility into failed tests and their causes
  • No clear reporting on QA engineers' workload and performance


Solution

Integration with the platform allowed the client to:

  • Collect history from previous test runs
  • Identify passing, failing, and unstable tests (see the sketch after this list)
  • Move stable tests into a separate run
  • Assign unstable tests for refactoring and keep them in a separate run while the rework is implemented
  • Configure charts to track refactoring progress
  • Accelerate test failure analysis through access to related logs, screenshots, and attachments in one place
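
The case study doesn't describe how tests were bucketed, but a stability split like the one above can be derived from run history alone. Below is a minimal Python sketch, assuming classification by historical pass rate; the thresholds and the TestHistory/classify names are illustrative assumptions, not the platform's actual API.

    # A minimal sketch, assuming tests are bucketed by historical pass rate.
    # All names and thresholds here are illustrative, not the platform's API.
    from dataclasses import dataclass

    STABLE_MIN_PASS_RATE = 0.95   # assumed cut-off for "stable"
    FAILING_MAX_PASS_RATE = 0.05  # at or below this, treat as a real failure

    @dataclass
    class TestHistory:
        name: str
        results: list  # one bool per historical run, True = pass

        @property
        def pass_rate(self) -> float:
            return sum(self.results) / len(self.results) if self.results else 0.0

    def classify(histories):
        """Split tests into stable, unstable (flaky), and failing buckets."""
        buckets = {"stable": [], "unstable": [], "failing": []}
        for h in histories:
            if h.pass_rate >= STABLE_MIN_PASS_RATE:
                buckets["stable"].append(h.name)    # keep in the main run
            elif h.pass_rate > FAILING_MAX_PASS_RATE:
                buckets["unstable"].append(h.name)  # quarantine for refactoring
            else:
                buckets["failing"].append(h.name)   # likely a genuine defect
        return buckets

    print(classify([
        TestHistory("test_login", [True] * 20),            # stable
        TestHistory("test_checkout", [True, False] * 10),  # flaky
        TestHistory("test_export", [False] * 20),          # failing
    ]))

Splitting by pass rate is what makes the stable run trustworthy: its failures are actionable signals, while the quarantined run can churn during refactoring without blocking release decisions.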


Results

  • Automation stability improved from 25% to 95%
  • QA engineers' failure-analysis effort decreased tenfold
  • Client stakeholders use the data to make release decisions
  • The platform became the main tool for tracking test automation progress and health