Over the last few years, Artificial Intelligence (AI) has been changing the testing process in many ways. AI-driven tools handle repetitive tasks at high speed and with consistent accuracy, and they never get bored doing them.
Let’s see how ReportPortal uses the power of AI in its key features: Analyzer, Unique Errors, Machine Learning (ML) Suggestions, and Quality Gates.
AI for shift-left testing
The shift-left testing approach encourages you to run automated tests regularly so that you learn about problems as early as possible. For example, you can run a daily regression to check how the product behaves after new code is merged. With that many runs, however, analyzing the test results manually takes a lot of time. This is where AI comes to the rescue.
You can train the Analyzer, and it will take over part of your routine test failure analysis: it sets a defect type, a link to the Bug Tracking System (BTS), and a comment, if they exist. How does it work? Suppose we mark a bug as a System Issue in 10 Launches and then change it to a Product Bug in Launch 11. The next time the Analyzer runs, it will mark this issue as a Product Bug.
Note: the total number of tests should be the same every time for a specific Launch, so don’t skip tests.
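That manual triage, whether done in the UI or through the REST API, is exactly the signal the Analyzer learns from. Here is a minimal sketch of assigning a defect type via the API; the endpoint path, the payload shape, and the "pb001" locator for Product Bug are assumptions based on ReportPortal's public API, so check them against the Swagger documentation of your instance.

```python
# Minimal sketch: manually setting a defect type through ReportPortal's REST API.
# Endpoint path, payload shape, and the "pb001" issue type locator (Product Bug)
# are assumptions; verify them against your instance's Swagger UI.
import requests

RP_URL = "https://your-reportportal-instance"   # hypothetical instance URL
PROJECT = "my_project"                          # hypothetical project name
API_TOKEN = "your-api-token"                    # personal API token from your profile page

def mark_as_product_bug(test_item_id: int, comment: str = "") -> None:
    """Assign the Product Bug defect type to a failed test item."""
    response = requests.put(
        f"{RP_URL}/api/v1/{PROJECT}/item",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={
            "issues": [
                {
                    "testItemId": test_item_id,
                    "issue": {"issueType": "pb001", "comment": comment},
                }
            ]
        },
        timeout=30,
    )
    response.raise_for_status()

mark_as_product_bug(12345, comment="Known defect, tracked in the BTS")
```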
If Auto-Analysis hasn’t classified a failure, you can open the “Make decision” modal and see an ML Suggestion for the item. AI will tell you that the error log is very similar to another, already triaged log. You can compare the two logs and either apply the defect type from the ML Suggestion or set it manually.
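The suggestion is driven by log similarity. The toy sketch below illustrates the general idea with a simple word-overlap score; it is not ReportPortal's actual algorithm (the Analyzer service relies on a search-engine index), just a picture of ranking previously triaged logs by how closely their text matches a new failure.

```python
# Toy illustration of "this error log looks like one you already triaged".
# Not ReportPortal's implementation, only a conceptual sketch of text similarity scoring.
from collections import Counter
from math import sqrt

def cosine_similarity(log_a: str, log_b: str) -> float:
    """Cosine similarity between two logs over simple word counts."""
    a, b = Counter(log_a.split()), Counter(log_b.split())
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Previously triaged logs, keyed by the defect type that was applied to them.
triaged = {
    "pb001": "AssertionError: expected status 200 but got 500 from /checkout",
    "ab001": "TimeoutError: element #submit-button not found after 30s",
}
new_failure = "AssertionError: expected status 200 but got 502 from /checkout"

# Pick the triaged log most similar to the new failure and suggest its defect type.
best_type = max(triaged, key=lambda t: cosine_similarity(new_failure, triaged[t]))
print(best_type)  # -> "pb001"
```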
One more AI-based defect triage feature is Unique Errors. The system automatically groups tests that failed with the same error: when you expand an error log, you see the list of steps where it occurred. It is very convenient when the preliminary work has already been done for you. You can then select the items grouped by AI and apply a defect type to all of them in a single bulk operation.
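Conceptually, the grouping boils down to clustering failed steps by a normalized error signature. The sketch below shows the idea in a few lines; the normalization rules (stripping numbers and hex identifiers) are illustrative assumptions, not ReportPortal's exact logic.

```python
# Conceptual sketch of grouping failed steps by a normalized error signature.
# The normalization rules are illustrative assumptions, not ReportPortal's exact logic.
import re
from collections import defaultdict

def error_signature(log: str) -> str:
    """Strip volatile details (hex ids, numbers) so identical errors hash together."""
    sig = re.sub(r"0x[0-9a-fA-F]+", "<hex>", log)
    sig = re.sub(r"\d+", "<n>", sig)
    return sig.strip()

failed_steps = [
    ("test_login", "ConnectionError: host db-1 refused connection on port 5432"),
    ("test_search", "ConnectionError: host db-2 refused connection on port 5432"),
    ("test_checkout", "AssertionError: cart total 99 != 100"),
]

groups = defaultdict(list)
for step_name, log in failed_steps:
    groups[error_signature(log)].append(step_name)

for signature, steps in groups.items():
    print(f"{signature!r}: {steps}")
# The two ConnectionError steps fall into one group and can be triaged with one bulk action.
```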
AI for the CI/CD pipeline
ReportPortal speeds up the CI/CD pipeline with the Quality Gates feature, which includes the AI-driven “New errors” rule. When is it useful? Suppose you have already identified some errors, they are minor, you can release with them, and there is one more build to test. If you only care about new unique bugs, you can create a Quality Gate with the “New errors” rule, which works in conjunction with the Unique Errors functionality: the Quality Gate fails if a new, previously unseen error is detected.
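The logic behind such a rule can be pictured as a set difference between the current launch's unique error signatures and those already known from previous launches. The sketch below is a conceptual illustration under that assumption, not the rule's actual implementation.

```python
# Conceptual sketch of a "New errors" check: fail only when the current launch
# produces an error signature that no previous launch has produced.
# This illustrates the idea, not ReportPortal's actual Quality Gate implementation.

known_errors = {
    "ConnectionError: host db-<n> refused connection on port <n>",
    "AssertionError: cart total <n> != <n>",
}

current_launch_errors = {
    "ConnectionError: host db-<n> refused connection on port <n>",
    "NullPointerException in PaymentService.charge",   # new, never seen before
}

new_errors = current_launch_errors - known_errors
gate_status = "FAILED" if new_errors else "PASSED"
print(gate_status, sorted(new_errors))
```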
The “Amount of issues” Quality Gate rule is relevant to AI as well, because a Quality Gate with this rule runs after Auto-Analysis finishes. For example, if you have the rule “fail the Quality Gate if there is at least 1 Product Bug” and Auto-Analysis marks an issue as a Product Bug, the Quality Gate fails.
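In essence, this rule is a threshold check over the defect type counters that Auto-Analysis has just filled in. Here is a minimal sketch, with an assumed shape for the launch statistics:

```python
# Minimal sketch of an "Amount of issues" check. The statistics dictionary shape is
# an assumption for illustration; in ReportPortal these counters come from the launch
# statistics after Auto-Analysis has assigned defect types.

def evaluate_amount_of_issues(statistics: dict, thresholds: dict) -> bool:
    """Return True (gate passes) if every counted defect type stays below its threshold."""
    return all(statistics.get(defect, 0) < limit for defect, limit in thresholds.items())

launch_statistics = {"product_bug": 1, "automation_bug": 3, "to_investigate": 0}
rule = {"product_bug": 1}   # fail if there is at least 1 Product Bug

print("PASSED" if evaluate_amount_of_issues(launch_statistics, rule) else "FAILED")  # FAILED
```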
Overall, thanks to AI, you can get a test execution report and evaluate product health without a single click. Here is this magic process step by step:
Auto-Analysis is ON.
Quality Gates functionality is ON.
Integration with Jenkins is configured.
Launch is finished.
Auto-Analysis performs automated defect triaging and sets defect types.
ReportPortal assesses Launch quality using the created Quality Gates rules.
ReportPortal sends automatic feedback to the CI/CD tool with a Passed or Failed status.
Based on the ReportPortal feedback, the CI/CD tool fails the build or promotes it to the next stage (a minimal sketch of this final check follows the list).
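To make the last two steps concrete, here is a minimal sketch of a CI-side check that a Jenkins job could run: it reads the finished launch from the ReportPortal API and turns its status into the exit code that decides whether the build proceeds. The endpoint path and the top-level "status" field are assumptions based on the public launch API, and with the Quality Gates feature the verdict can also be pushed to the CI tool automatically, so treat this as a fallback illustration rather than the built-in mechanism.

```python
# Minimal sketch of a CI-side check: fail the build if the launch did not pass.
# Endpoint path and response fields are assumptions based on ReportPortal's public
# launch API; verify them against your instance's Swagger UI.
import sys
import requests

RP_URL = "https://your-reportportal-instance"   # hypothetical instance URL
PROJECT = "my_project"                          # hypothetical project name
API_TOKEN = "your-api-token"

def launch_passed(launch_id: str) -> bool:
    """Fetch the launch and report whether its status is PASSED."""
    response = requests.get(
        f"{RP_URL}/api/v1/{PROJECT}/launch/{launch_id}",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("status") == "PASSED"

if __name__ == "__main__":
    # Jenkins (or any CI tool) treats a non-zero exit code as a failed stage.
    sys.exit(0 if launch_passed(sys.argv[1]) else 1)
```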
AI enhances the value of ReportPortal by saving your time and resources on automated test results analysis and, consequently, reducing costs. It also highlights the areas that require more testing and attention, which helps stakeholders quickly understand product quality and make informed decisions.