
Auto-Analysis of launches

The analysis feature of ReportPortal makes it possible for the application to take over part of the routine investigation duties by itself.

Auto-analysis performs automated defect triage: it defines the reason for a test item failure and sets:

  • a defect type;
  • a link to the BTS (if one exists);
  • a comment (if one exists).

The Auto-Analysis process applies Machine Learning to results previously investigated by users.

The auto-analyzer is a combination of several services: OpenSearch, the Analyzer service (two instances: Analyzer and Analyzer train), and the Metrics gatherer:

  • OpenSearch contains the analytical base, stores training data for retraining of models, and saves metrics for the Metrics gatherer.
  • The Analyzer instance performs all operations connected with the basic functionality (indexing/removing logs, searching logs, auto-analysis, ML suggestions).
  • The Analyzer train instance is responsible for training models for the Auto-analysis and ML suggestions functionality.
  • The Metrics gatherer calculates metrics about analyzer usage and requests deletion of custom models if the metrics degrade.

You have the option to disable the Analyzer by removing the Analyzer, Analyzer train, and Metrics gatherer services from the installation.

There are several ways to use an analyzer in our test automation reporting dashboard:

  • Use the ReportPortal Analyzer: manually (analysis is switched on only for a chosen launch) or automatically (analysis starts as soon as a launch finishes);

  • Implement and configure your own custom Analyzer and do not deploy the ReportPortal Analyzer service;

  • Do not use any Analyzer at all and do the analytical routine yourself.

important

The Auto Analyzer service is a part of the ReportPortal bundle.

ReportPortal Analyzer: how Auto-Analysis works

ReportPortal's Auto Analyzer reduces the time users spend investigating test executions by analyzing test failures automatically. The default analysis component runs alongside OpenSearch, which is used for indexing test logs. To use Auto-Analysis effectively, you should go through several stages.

Create an analytical base in the OpenSearch

First of all, you need to create an analytical base. To do that, start analyzing test results manually.

All test items with a defect type that have been analyzed manually or automatically by ReportPortal are sent to OpenSearch.

The following info is sent:

  • An item ID;
  • Logs (each log should have level Error or higher, i.e. log level >= 40 000);
  • Issue type;
  • The "Analyzed by" flag (shows whether the test item was analyzed by a user or by ReportPortal);
  • A launch name;
  • Launch ID;
  • Unique ID;
  • Test case ID.
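For illustration, the document sent to OpenSearch for one analyzed test item might look like the sketch below; the field names and values are assumptions for the example, not the service's exact schema.

```python
# Illustrative shape of the data indexed per analyzed test item.
# Field names and values are assumptions, not the service's exact schema.
indexed_item = {
    "item_id": 12345,
    "launch_id": 678,
    "launch_name": "Smoke Launch 3",
    "unique_id": "auto:3f2a-example",      # hypothetical value
    "test_case_id": "com.example.LoginTest.testInvalidPassword",
    "issue_type": "pb001",                 # e.g. Product Bug
    "is_auto_analyzed": False,             # the "Analyzed by" flag
    "logs": [
        {"log_level": 40000,
         "message": "java.lang.AssertionError: expected <200> but was <500>"},
    ],
}
```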

For better analysis, we merge small logs (those consisting of 1-2 log lines and no more than 100 words) together. If the test item has no other big logs (a big log consists of more than 2 log lines or contains a stacktrace), we store this merged message as a separate document. If there are big logs, we store the merged message in a separate "merged_small_logs" field on each of them.
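A minimal sketch of this merging rule, using the thresholds stated above (the helper names are hypothetical):

```python
def is_small(log: str) -> bool:
    """A 'small' log has 1-2 non-empty lines and at most 100 words."""
    lines = [line for line in log.splitlines() if line.strip()]
    return len(lines) <= 2 and len(log.split()) <= 100


def merge_small_logs(logs: list[str]) -> tuple[list[str], str]:
    """Separate big logs from small ones and merge the small ones.

    If no big logs exist, the merged message is stored as its own document;
    otherwise it goes into the "merged_small_logs" field of each big log.
    """
    small = [log for log in logs if is_small(log)]
    big = [log for log in logs if not is_small(log)]
    return big, "\n".join(small)
```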

The Analyzer preprocesses log messages from the analysis request: it extracts the error message, stacktrace, numbers, exceptions, URLs, paths, parameters, and other parts from the text, and searches the analytical base for the most similar items by these parts. These parts are saved in separate fields for each log entry.
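A minimal sketch of this kind of preprocessing (the real service's extraction rules are more elaborate; these regular expressions are illustrative only):

```python
import re

def preprocess(message: str) -> dict:
    """Extract searchable parts from a raw log message (illustrative rules)."""
    lines = message.splitlines()
    return {
        "error_message": lines[0] if lines else "",   # assume the first line holds the error
        "stacktrace": "\n".join(l for l in lines if l.lstrip().startswith("at ")),
        "found_exceptions": re.findall(r"\b[\w.]*(?:Exception|Error)\b", message),
        "urls": re.findall(r"https?://\S+", message),
        "paths": re.findall(r"(?:/[\w.-]+){2,}", message),
        "numbers": re.findall(r"\b\d+\b", message),
    }
```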

Each log entry, along with its defect type, is saved to OpenSearch as a separate document. All the created documents compose an Index. The more test results the Index holds, the more accurate the results generated by the analysis process will be.

tip

If you are not sure how many documents (logs) the Index contains at the moment, you can check it.
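If you have direct access to the OpenSearch instance, one way to check is the standard _count API, sketched below (the host, credentials, and index name are assumptions that vary per deployment):

```python
import requests

# Assumptions: OpenSearch is reachable at this URL without auth, and the
# index for your project has this name; adjust both for your setup.
OPENSEARCH_URL = "http://opensearch:9200"
INDEX = "my_project_index"  # hypothetical index name

resp = requests.get(f"{OPENSEARCH_URL}/{INDEX}/_count", timeout=10)
resp.raise_for_status()
print(resp.json()["count"])  # number of documents (logs) in the Index
```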

Test items of a launch in Debug mode are not sent to the Analyzer service. If a test item is deleted or moved to Debug mode, it is removed from the Index.

Auto-Analysis process

After your Index has been populated, you can start using the auto-analysis feature.

Analysis can be launched automatically (via Project Settings) or manually (via the menu on the All Launches view). After the process starts, all items with the defect type "To investigate" that have logs (log level >= 40 000) are picked from the analyzed launch and sent to the Analyzer service and OpenSearch for investigation.

How OpenSearch returns candidates for Analysis

Here is a simplified description of how Auto-analysis candidates are searched for via OpenSearch.

When a "To investigate" test item appears, we search for the most similar test items in the analytical base. We build a query that searches across several fields: message similarity is a compulsory condition, while other conditions boost better matches so they receive a higher score (boost conditions include similarity of unique ID, launch name, error message, found exceptions, numbers in the logs, etc.).
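In spirit, such a query looks like the sketch below: a more_like_this clause makes message similarity compulsory, while should clauses boost candidates that also match other fields. Field names and boost values here are illustrative, not the exact query the service builds:

```python
# Hypothetical values taken from the analyzed "To investigate" item:
analyzed_message = "java.lang.AssertionError: expected <200> but was <500>"
item_unique_id = "auto:3f2a-example"
launch_name = "Smoke Launch 3"

query = {
    "size": 10,  # OpenSearch will return the 10 highest-scoring candidates
    "query": {
        "bool": {
            # Compulsory condition: the log message must be similar.
            "must": [
                {"more_like_this": {
                    "fields": ["message"],
                    "like": analyzed_message,
                    "min_term_freq": 1,
                    "min_doc_freq": 1,
                }},
            ],
            # Optional conditions: matches here boost the score.
            "should": [
                {"term": {"unique_id": {"value": item_unique_id, "boost": 8.0}}},
                {"term": {"launch_name": {"value": launch_name, "boost": 4.0}}},
                {"more_like_this": {
                    "fields": ["error_message", "found_exceptions"],
                    "like": analyzed_message,
                    "boost": 2.0,
                }},
            ],
        }
    },
}
```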

OpenSearch then receives a log message, divides it into terms (words) with a tokenizer, and calculates the importance of each term. To do that, OpenSearch computes TF-IDF for each term in the analyzed log. If a term's importance is low, OpenSearch ignores it.

note

Term frequency (TF) – how many times a term (word) is used in the analyzed log;

Document frequency (DF) – in how many documents of the Index this term (word) is used;

TF-IDF (TF — term frequency, IDF — inverse document frequency) — a statistical measure used to assess the importance of a term (word) in the context of a log that is part of the Index. The weight of a term is proportional to how often it is used in the analyzed log and inversely proportional to how often it is used across the Index.

The most important terms are those used very frequently in the analyzed log but only moderately across the Index.

After all important terms are defined, OpenSearch calculates the degree of similarity between the analyzed log and each log in the Index: a score is calculated for each log from the Index.

note

How the score is calculated:

score(q,d) = coord(q,d) · SUM over t in q of ( tf(t in d) · idf(t)² · t.getBoost() )

Where:

  • score(q,d) is the relevance score of log “d” for query “q”.
  • coord(q,d) is the coordination factor: the percentage of words shared between the analyzed log and the particular log from OpenSearch.
  • The SUM adds up the weights of each word “t” in the query “q” for log “d”.
  • tf(t in d) is the frequency of the word in log “d”.
  • idf(t) is the inverse frequency of the word across all logs saved in the Index.
  • t.getBoost() is the boost that has been applied to the query. Higher priority is given to logs with:
    • The same Launch name;
    • The same UID;
    • Manual analysis;
    • A similar error message;
    • The same numbers in the log;
    • etc.
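To make the formula concrete, here is a toy computation of score(q,d) over a two-document Index; this is pure illustration of the formula above, not the engine's actual code:

```python
import math

# A toy two-document "Index" and a three-word query.
index = [
    "connection timeout while calling service",
    "assertion failed expected value mismatch",
]
query = "connection timeout error".split()
doc = index[0].split()  # the candidate log "d"

def idf(term: str) -> float:
    """Inverse document frequency of a term over the toy Index."""
    df = sum(term in d.split() for d in index)
    return 1.0 + math.log(len(index) / (df + 1.0))

def score(query: list[str], doc: list[str], boost: float = 1.0) -> float:
    matched = [t for t in query if t in doc]
    coord = len(matched) / len(query)                       # coord(q,d)
    weights = sum(doc.count(t) * idf(t) ** 2 * boost for t in matched)
    return coord * weights                                  # coord(q,d) * SUM(...)

print(round(score(query, doc), 3))  # ~1.333 for this toy data
```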

The results are sorted by score; when scores are equal, results are sorted by the "start_time" field, which boosts test items with dates closer to today. So the latest defect types appear higher in the results returned by OpenSearch.

OpenSearch returns to the Analyzer service the 10 logs with the highest score for each log. The Analyzer regroups all the results by defect type and chooses the best representative for each defect type group, based on their scores.

note

If the test item has several logs, the best representative for a defect type group is the log with the highest score among all logs.

How Auto-analysis makes decisions about candidates returned by OpenSearch

All the candidates returned by OpenSearch (the 10 highest-scoring logs for each query) are processed further by the ML model. The Analyzer regroups them by defect type and chooses the best representative for each defect type group, based on their scores.
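A sketch of that regrouping step (the defect type codes and scores are made up):

```python
from collections import defaultdict

# Hypothetical (defect_type, score) candidates returned by OpenSearch.
candidates = [("pb001", 12.4), ("ab001", 9.8), ("pb001", 15.1), ("si001", 7.2)]

# Keep the highest-scoring candidate per defect type group.
best_per_group: dict[str, float] = defaultdict(float)
for defect_type, score in candidates:
    best_per_group[defect_type] = max(best_per_group[defect_type], score)

print(dict(best_per_group))  # {'pb001': 15.1, 'ab001': 9.8, 'si001': 7.2}
```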

The ML model is an XGBoost model whose features (about 30 of them) represent different statistics about the test item, log message texts, launch info, etc., for example:

  • the percentage of selected test items with the given defect type
  • max/min/mean scores for the given defect type
  • cosine similarity between vectors representing the error message/stacktrace/the whole message/URLs/paths and other text fields
  • whether it has the same unique ID and comes from the same launch
  • the probability of a specific defect type given by a Random Forest classifier trained on TF-IDF vectors

The model gives a probability for each defect type group, and we choose the defect type group with the highest probability, provided that probability is >= 50%.
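The final decision step then amounts to something like the following sketch (the probabilities are hypothetical model outputs):

```python
# Hypothetical per-group probabilities from the XGBoost model, one per
# defect type group's best representative.
group_probabilities = {
    "Product Bug": 0.72,
    "Automation Bug": 0.21,
    "System Issue": 0.05,
}

best_type, best_prob = max(group_probabilities.items(), key=lambda kv: kv[1])
if best_prob >= 0.5:
    print(f"Assign defect type: {best_type} (p={best_prob:.2f})")
else:
    print("No group reaches 50%; the item keeps 'To investigate'")
```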

The defect comment and BTS link of the best representative from this group are applied to the analyzed item.

The Auto-analysis model is retrained per project; see the section "How models are retrained" below.

So this is how Auto-Analysis works and defines the most relevant defect type based on previous investigations. Users can also configure auto-analysis manually.

Auto-analysis Settings

All settings and configurations of the Analyzer and OpenSearch are located on a separate tab in Project Settings.

  1. Log in to the ReportPortal instance as an Administrator or a project member with the PROJECT MANAGER role on the project;

  2. Go to Project Settings and choose the Auto-Analysis section;

In this section a user can perform the following actions:

  1. Switch ON/OFF auto-analysis;

  2. Choose a base for analysis (All launches / Launches with the same name);

  3. Configure OpenSearch settings;

  4. Remove/Generate OpenSearch index.

Switch ON/OFF automatic analysis

To activate the "Auto-Analysis" functionality in a project, perform the following steps:

  1. Log in to the ReportPortal instance as an Administrator or a project member with the PROJECT MANAGER role on the project.

  2. Select ON in the "Auto-Analysis" selector on the Project settings / Auto-analysis section.

  3. Click the "Submit" button. Now "Auto-Analysis" will start as soon as any launch finishes.

Base for analysis

You can choose which results from previous runs should be considered in Auto-Analysis for defining the failure reason.

There are five options:

  • All previous launches
  • Current and all previous launches with the same name
  • All previous launches with the same name
  • Only previous launch with the same name
  • Only current launch

If you choose the “All previous launches” option, test results in the launch will be analyzed on the basis of all runs before the current launch, regardless of launch name.

If you choose the “Current and all previous launches with the same name” option, test results in the launch will be analyzed on the basis of the current launch and all previous launches that have the same launch name.

If you choose the “All previous launches with the same name” option, test results in the launch will be analyzed on the basis of all launches before the current launch that have the same launch name.

If you choose the “Only previous launch with the same name” option, test results in the launch will be analyzed on the basis of the last run before the current launch with the same name.

If you choose the “Only current launch” option, test results in the launch will be analyzed on the basis of the current launch only.

Imagine that the launches in the image below are part of your ReportPortal project, and currently, Smoke Launch 3 is being analyzed.

So, launches that will be analyzed if you choose the “All previous launches” option: Smoke Launch 1, Smoke Launch 2, Regression Launch 1, Regression Launch 2, Regression Launch 3.

Launches that will be analyzed if you choose the “Current and all previous launches with the same name” option: Smoke Launch 3, Smoke Launch 1, Smoke Launch 2.

Launches that will be analyzed if you choose the “All previous launches with the same name” option: Smoke Launch 1, Smoke Launch 2.

Launches that will be analyzed if you choose the “Only previous launch with the same name” option: Smoke Launch 2.

Launches that will be analyzed if you choose the “Only current launch” option: Smoke Launch 3.
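To pin down the difference between the scopes, the sketch below reproduces the selections from this example (the launch objects and their ordering are simplified assumptions):

```python
from dataclasses import dataclass

@dataclass
class Launch:
    name: str
    number: int

# The example project from above; Smoke Launch 3 is being analyzed.
launches = [
    Launch("Smoke Launch", 1), Launch("Regression Launch", 1),
    Launch("Smoke Launch", 2), Launch("Regression Launch", 2),
    Launch("Regression Launch", 3), Launch("Smoke Launch", 3),
]
current = launches[-1]
previous = launches[:-1]
same_name = [l for l in previous if l.name == current.name]

scopes = {
    "All previous launches": previous,
    "Current and all previous launches with the same name": same_name + [current],
    "All previous launches with the same name": same_name,
    "Only previous launch with the same name": same_name[-1:],
    "Only current launch": [current],
}

for option, selected in scopes.items():
    print(option, "->", [f"{l.name} {l.number}" for l in selected])
```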

You can choose these configurations via the Project configuration or from the list of actions on the All Launches view.

Remove/Generate OpenSearch index

There are two possible actions that can be performed on the Index in OpenSearch.

You can remove the Index from OpenSearch: all logs with their defect types will be deleted, the ML model will be reset, and all of your investigation data will be deleted from OpenSearch. To create a new Index, you can start investigating test results manually or generate the data from previous results on the project once again.

note

Your investigations in ReportPortal will not be changed. The operation concerns only the OpenSearch base.

Another option is to generate the Index in OpenSearch. In that case, all data will be removed from OpenSearch and a new Index will be generated from all previous investigations on the project, following the current analysis settings.

At the end of the process, you will receive an email confirming completion and stating how many items appeared in OpenSearch.

You can use index generation for several purposes. For example, consider two hypothetical situations where index generation can be used:

  • By accident you removed the index, but now you want to restore it.
note

The new base will be generated from the logs and settings that exist at the moment of the operation, so the index before removal and the index after generation may differ.

  • You have changed the Number of log lines parameter to 3, but your existing index contains logs indexed with the value All. You can generate a new index: the old index will be removed and a new one generated, and logs in the new index will contain 3 lines.

We strongly recommend not using auto-analysis until the new index has been generated.

Manual analysis

Analysis can be launched manually. To start the analysis manually, perform the following steps:

  1. Navigate to the "Launches" page.

  2. Select the "Analysis" option from the context menu next to the selected launch name.

  3. Choose the scope of previous results on the basis of which test items should be auto-analyzed. The default is the one chosen on the settings page, but you can change it manually.

Via this menu, unlike on Project Settings, you can choose from 3 options:

  • All launches;

  • Launches with the same name;

  • Only current launch;

The All launches and Launches with the same name options work the same as on Project Settings. If you choose Only current launch, the system analyzes the test items of the chosen launch only on the basis of the already investigated data of this launch.

  4. Choose which items from the launch should be analyzed:
  • Only To investigate;
  • Items analyzed automatically (by AA);
  • Items analyzed manually.

If the user chooses Only To investigate items, the system analyzes only items with the defect type "To investigate" in the chosen launch;

If the user chooses Items analyzed automatically (by AA), the system analyzes only items that have already been analyzed by auto-analysis. The results of the previous analysis run will be reset, and the items will be analyzed once again.

If the user chooses Items analyzed manually, the system analyzes only items that have already been analyzed manually by a user. The results of the previous analysis run will be reset, and the items will be analyzed once again.

If several options are combined, the system analyzes results according to the chosen options.

note

The Ignore flag is preserved: if an item has the Ignore in AA flag, it will not be re-analyzed.

tip

For the Only current launch option you cannot choose Items analyzed automatically (by AA) and Items analyzed manually simultaneously.

  5. Click the "Analysis" button. Now "Auto-Analysis" will start.

Any launch with an active analysis process is marked with the "Analysis" label.

Label AA

When a test item is analyzed by ReportPortal, an "AA" label is set on the test item at the Step level. You can filter results with the “Analysed by RP (AA)” parameter.

Ignore in Auto-Analysis

If you don't want some test items to be saved in OpenSearch, you can use "Ignore in Auto-Analysis". To do that, choose this action in the “Make decision” modal:

Or from the action list for several test items:

When you choose “Ignore in AA”, the logs of the chosen item are removed from OpenSearch.