Scaling Up the ReportPortal Service API
Due to the implementation specifics of the Asynchronous Reporting Scheme, horizontal auto-scaling of the ReportPortal API service is not feasible; manual scaling, however, is achievable. This limitation stems from the way RabbitMQ, in conjunction with the API, manages the number of queues on the RabbitMQ side.
Given that ReportPortal can receive a substantial volume of concurrent streams from different project spaces, a distribution mechanism has been implemented. This mechanism determines the target queue from the hash of the launch object and distributes launches across the available queues to increase the likelihood of timely processing.
In simpler terms, a launch that arrives later in time has a chance to be processed and recorded in the database even while another launch from a different project, with a large number of test cases, is already sitting in a queue. Instead of a single queue only reaching the later launch after processing the earlier, larger one, the API handles different queues concurrently, so the later launch is processed without undue delay.
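The routing described above can be sketched as follows. This is an illustrative model only, not ReportPortal's actual (Java) implementation: it shows how hashing a launch identifier deterministically maps each launch to one of a fixed number of queues, so launches from different projects land in different queues and do not block one another.

```python
# Illustrative sketch of hash-based queue selection (assumption: the real
# ReportPortal service uses its own Java-side hashing; names here are made up).
import hashlib

TOTAL_QUEUES = 10  # corresponds conceptually to queues.totalNumber

def queue_for_launch(launch_uuid: str, total_queues: int = TOTAL_QUEUES) -> int:
    """Pick a queue index deterministically from the launch identifier."""
    digest = hashlib.md5(launch_uuid.encode("utf-8")).hexdigest()
    return int(digest, 16) % total_queues

# Two different launches typically map to different queues, so a short
# launch is not stuck behind a long one in a single shared queue.
q1 = queue_for_launch("5b2d3a1e-aaaa-bbbb-cccc-000000000001")
q2 = queue_for_launch("5b2d3a1e-aaaa-bbbb-cccc-000000000002")
```

The key property is determinism: every event of the same launch hashes to the same queue, preserving per-launch ordering while spreading distinct launches across queues.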
Scaling Up Configuration for the ReportPortal API Service
To scale your ReportPortal services in Kubernetes, adjust `serviceapi.replicaCount` and `queues.totalNumber` in your Helm values file:
- Update Replica Count: Change `serviceapi.replicaCount` to `2` for additional replication.
- Edit Total Number of Queues: Modify `queues.totalNumber` to `20` to increase the total available queues.
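Taken together, the two settings above end up in the Helm values file. A minimal sketch (key paths assumed from the formula below; verify against your chart version):

```yaml
# values.yaml (sketch; verify key paths against your ReportPortal Helm chart)
serviceapi:
  replicaCount: 2        # number of API pods

queues:
  totalNumber: 20        # total RabbitMQ queues shared by all API pods
```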
Use the following formula for calculation:
perPodNumber = totalNumber / serviceapi.replicaCount
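For example, with the values used above the calculation works out as follows:

```python
# Queues assigned to each API pod, per the formula above.
total_number = 20        # queues.totalNumber
replica_count = 2        # serviceapi.replicaCount

per_pod_number = total_number // replica_count  # integer division
print(per_pod_number)  # → 10
```

Each of the two API pods would therefore consume from 10 of the 20 queues.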
To scale your ReportPortal services using Docker, update the environment variables and duplicate the API values block.
- Set Environment Variables: Add `RP_AMQP_QUEUESPERPOD` to your API environment variables.
Docker Compose v2
- Duplicate API Values Block: Create a copy of the API values block and rename it (for example, to `api_replica_1`) to facilitate scaling.
`docker-compose.yml` API values block:
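A minimal sketch of such a duplicated block (the image tag, queue count, and other values here are illustrative; copy your actual `api` service definition and keep its full configuration):

```yaml
# docker-compose.yml (Compose v2 style) — sketch, adapt to your real api block
api:
  image: reportportal/service-api:5.11.0   # use your actual version
  environment:
    RP_AMQP_QUEUESPERPOD: "10"             # queues handled by this instance

api_replica_1:                             # duplicated copy of the api block
  image: reportportal/service-api:5.11.0
  environment:
    RP_AMQP_QUEUESPERPOD: "10"
```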
Docker Compose v3.3+
- Add Replicas: Add `deploy.replicas: 2` to your API service definition:
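With Compose file format 3.3+, the duplication above is unnecessary; the `deploy` key handles replication. A sketch (image tag and queue count are illustrative):

```yaml
# docker-compose.yml (Compose file format 3.3+) — sketch
services:
  api:
    image: reportportal/service-api:5.11.0  # use your actual version
    environment:
      RP_AMQP_QUEUESPERPOD: "10"
    deploy:
      replicas: 2                           # run two API containers
```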