Deploying into Production

This is a set of best practices for deploying the Private AI container. Particular focus is given to health checks, which, when configured correctly, allow the system to recover from crashes and other failures.

When running the container with an orchestrator such as Kubernetes or Docker Swarm Mode, we recommend leveraging the orchestrator's health check mechanism rather than Docker's built-in restart capability.

Running a single container

If your use case requires running a single container for a limited period of time (e.g. a batch job), it is possible to start the container directly using the docker CLI.

When you do so, you can use Docker's restart option so that your task runs to completion even if the container fails. Use this command to start the container with restarts enabled:

docker run -d -v "full path to license.json":/app/license/license.json \
-p 8080:8080 --name privateai --restart unless-stopped crprivateaiprod.azurecr.io/deid:<version>

For more information about the restart flag, see the official Docker documentation.

Use this command to stop the container:

docker stop privateai

Your task should also probe the container for liveness using the /healthz route: call the route every 5 seconds until it responds with status code 200. At that point the container is ready to receive traffic on endpoints such as /process/text.

In most environments, the container is ready to receive traffic in less than a minute.
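The polling loop described above can be sketched as follows. This is a minimal illustration, not part of the product: the default URL, 5-second interval, and 300-second timeout are assumptions you should adjust for your environment.

```python
import time
import urllib.error
import urllib.request

def wait_until_healthy(url="http://localhost:8080/healthz",
                       interval=5, timeout=300, probe=None):
    """Poll the health route until it returns HTTP 200 or the timeout elapses.

    `probe` can be injected for testing; by default it performs a GET
    against `url` and reports whether the response status was 200.
    """
    if probe is None:
        def probe(u):
            try:
                with urllib.request.urlopen(u, timeout=interval or 1) as resp:
                    return resp.status == 200
            except (urllib.error.URLError, OSError):
                return False

    deadline = time.monotonic() + timeout
    while True:
        if probe(url):
            return True          # container is ready to receive traffic
        if time.monotonic() >= deadline:
            return False         # gave up waiting
        time.sleep(interval)
```

Once this returns True, your task can begin sending requests to /process/text.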

Concurrency

For optimum performance, please use the concurrency settings given in Prerequisites and System Requirements for your chosen hardware setup. Note that concurrency currently does not apply to batch requests, i.e. sending 10 batches of 100 examples is more performant than sending a single batch of 1000 examples. This behaviour will be improved in an upcoming release.
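One way to take advantage of this behaviour is to split a large batch into smaller chunks and send them in parallel. The sketch below assumes you supply a `send` callable that posts one batch to /process/text; the chunk size and worker count are illustrative, not recommendations from the product documentation.

```python
from concurrent.futures import ThreadPoolExecutor

def chunked(items, size):
    """Split a list into successive chunks of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def process_in_chunks(texts, send, chunk_size=100, workers=10):
    """Send `texts` as several smaller batches in parallel.

    `send` is a callable that posts one batch to /process/text and returns
    the results for that batch; its exact payload shape depends on your
    client code and API version. Results are returned in input order.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        batch_results = list(pool.map(send, chunked(texts, chunk_size)))
    # Flatten per-batch results back into a single list.
    return [item for batch in batch_results for item in batch]
```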

Metrics

We provide Prometheus metrics for the containers, available via the /metrics route. The metrics are accessible via this endpoint only; they are not pushed or published to any remote server. They are provided in plain text format, so you can view them directly, for example:

curl localhost:8080/metrics
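If you want to consume the metrics programmatically rather than view them, the Prometheus text format is straightforward to parse. A minimal sketch (the metric names in the test data are illustrative, not taken from the container):

```python
def parse_prometheus_text(body):
    """Parse simple Prometheus text-format output into {metric: value}.

    Comment lines (# HELP / # TYPE) and blank lines are skipped; labels,
    if present, remain part of the metric key. Lines whose last field is
    not a number are ignored.
    """
    metrics = {}
    for line in body.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.rpartition(" ")
        try:
            metrics[name] = float(value)
        except ValueError:
            continue
    return metrics
```

In practice you would feed this the response body of a GET to /metrics, or simply point a Prometheus server at the route and skip manual parsing entirely.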

Using the Private AI container as a base image

You can use the Private AI image as a base image in a Dockerfile if that fits better with your workflow. However, to make sure that the processes in the base image work correctly:

  1. The Private AI container uses various ports in the 8080-8090 range. Please use a port outside this range for your process.
  2. The Private AI container has an entrypoint script that initializes various systems, and overriding the ENTRYPOINT keyword might result in unexpected behaviour. We therefore advise against overriding the ENTRYPOINT of the base image. If you still need to override it, make sure to run the entrypoint scripts located under the /app/docker/entrypoint.d folder.
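A Dockerfile following these two guidelines might look like the sketch below. Only the image reference comes from the docker run example above; the copied file and the port 9000 are hypothetical placeholders for your own process.

```dockerfile
# Sketch only: my_task.py and port 9000 are placeholders.
FROM crprivateaiprod.azurecr.io/deid:<version>

# Add your own process; expose a port outside the 8080-8090 range,
# which the Private AI container uses internally.
COPY my_task.py /app/my_task.py
EXPOSE 9000

# Note: no ENTRYPOINT override, so the base image's initialization
# still runs. If you must override it, invoke the scripts under
# /app/docker/entrypoint.d before starting your own process.
```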
© Copyright 2024 Private AI.