📄️ Optimal Performance Hardware setup
Below you will find the recommended hardware configuration for setting up ReportPortal and getting good performance from our centralized test automation tool.
📄️ Basic monitoring configuration
A typical ReportPortal instance consists of two main parts, the application server and the database server, and both should be covered by basic system-level and application-level metrics. Basic system-level monitoring should track the application and database servers' VM and cluster resources, such as:
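As an illustration of how such system-level metrics could be collected, here is a minimal Prometheus scrape configuration targeting node_exporter on both servers; the tool choice, hostnames, and ports are assumptions for the sketch, not recommendations from the guide itself.

```yaml
# prometheus.yml -- minimal sketch; Prometheus and node_exporter are assumed, not prescribed
scrape_configs:
  - job_name: "reportportal-app-server"   # CPU, memory, disk, and network of the application server VM
    static_configs:
      - targets: ["rp-app.example.internal:9100"]   # hypothetical node_exporter endpoint
  - job_name: "reportportal-db-server"    # the same system-level metrics for the database server VM
    static_configs:
      - targets: ["rp-db.example.internal:9100"]    # hypothetical node_exporter endpoint
```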
📄️ Deploy with Docker on Linux/Mac
Make your test automation reporting more portable and reduce the risk of configuration issues by running your test reporting tool in Docker.
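As a quick sketch of the Docker-based flow, the commands below download the commonly referenced compose file from the reportportal/reportportal GitHub repository and start the stack; verify the exact URL and options against the guide, as they are assumptions here.

```bash
# Fetch the compose file (URL assumed from the reportportal/reportportal repository)
curl -LO https://raw.githubusercontent.com/reportportal/reportportal/master/docker-compose.yml

# Start all services under the "reportportal" project name
docker compose -p reportportal up -d --force-recreate
```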
📄️ Deploy with Docker on Windows
A portable way to manage your real-time test results. Using Docker makes it easy to share test execution reports and collaborate with other team members.
📄️ Deploy without Docker
*These instructions were designed for version 5.3.5 and might be outdated for the latest versions.
📄️ Deploy with Kubernetes
We use Helm package manager charts to bootstrap a ReportPortal deployment on a Kubernetes cluster.
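A minimal sketch of what such a Helm-based install could look like is shown below; the repository URL, release name, and chart name are assumptions, so take the exact values from the Deploy with Kubernetes guide.

```bash
# Add the chart repository (URL assumed -- check the guide for the current one)
helm repo add reportportal https://reportportal.io/kubernetes
helm repo update

# Install a release; "my-reportportal" and the chart name are placeholders
helm install my-reportportal reportportal/reportportal
```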
📄️ Maintain commands Cheat sheet
Export as env var:
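As a hedged illustration of the pattern this cheat sheet refers to, the snippet below exports a value as an environment variable and reuses it in a later maintenance command; the variable name, value, and label selector are placeholders, not values taken from the cheat sheet.

```bash
# Export the release name once, then reuse it in subsequent maintenance commands
export RELEASE_NAME=reportportal                 # placeholder value
kubectl get pods -l release="$RELEASE_NAME"      # hypothetical label selector
```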
📄️ Additional configuration parameters
| Configuration parameter | Default Value | Service | Description |
📄️ Setup TLS(SSL) in Traefik 2.0.x
This is a short guideline on how to set up a Let's Encrypt TLS/SSL certificate for your existing ReportPortal environment.
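For orientation, here is a sketch of a Traefik 2.x static configuration that enables a Let's Encrypt (ACME) certificate resolver; the email address, storage path, and entry point and resolver names are placeholders rather than values from the guide.

```yaml
# Traefik 2.x static configuration sketch (names and paths are placeholders)
entryPoints:
  websecure:
    address: ":443"                     # HTTPS entry point
certificatesResolvers:
  letsencrypt:
    acme:
      email: admin@example.com          # placeholder contact address for Let's Encrypt
      storage: /letsencrypt/acme.json   # placeholder path for issued certificates
      tlsChallenge: {}                  # use the TLS-ALPN-01 challenge
```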
📄️ Deploy on Ubuntu OS
*Provided by @Tset Noitamotuahe. The article might be outdated.
📄️ Deploy with AWS ECS Fargate
Provided by a contributor and not verified by the ReportPortal team; please use with caution.
📄️ ReportPortal 23.1 File storage options
ReportPortal 23.1 supports multiple ways to store log attachments, user pictures, and plugins.
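As an illustrative docker-compose fragment only: the snippet below shows the general shape of switching the datastore backend via environment variables, but the exact variable names and accepted values must be taken from the File storage options guide; the ones used here are assumptions.

```yaml
# Assumed variable names -- verify them against the File storage options guide
services:
  api:
    environment:
      DATASTORE_TYPE: s3                    # assumed options: filesystem, minio, s3
      DATASTORE_ACCESSKEY: "<access-key>"   # placeholder credentials
      DATASTORE_SECRETKEY: "<secret-key>"
```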
📄️ Scaling Up the ReportPortal Service API
Due to the implementation specifics of the asynchronous reporting scheme, horizontal auto-scaling of the ReportPortal service API is not feasible; however, manual scaling is achievable. This limitation stems from the way RabbitMQ, in conjunction with the API, manages the number of queues on the RabbitMQ side.
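As a sketch of the manual (not automatic) scaling mentioned above, either of the following commands could be used depending on the installation type; the service and deployment names are assumptions and should be adjusted to your setup.

```bash
# Docker Compose installation: run two API containers (service name assumed to be "api")
docker compose -p reportportal up -d --scale api=2

# Kubernetes installation: scale the API deployment (deployment name is a placeholder)
kubectl scale deployment my-reportportal-api --replicas=2
```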