Running the Container

The CPU container can be run with the following command:

docker run --rm -v "full path to license.json":/app/license/license.json \
-p 8080:8080 -it crprivateaiprod.azurecr.io/deid:<version>

The command to run the GPU container requires an additional --gpus flag to specify the GPU ID to use (typically 0 on a single-GPU machine). The full GPU container also requires 4 GB of shared memory, which is set via the --shm-size flag; for the text-only GPU container, the shared memory flag is not necessary:

docker run --gpus '"device=<GPU_ID>"' --shm-size=4g --rm -v "full path to license.json":/app/license/license.json \
-p 8080:8080 -it crprivateaiprod.azurecr.io/deid:<version>
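
For example, on a single-GPU machine the full GPU container would typically be started as follows; the license path is a placeholder, and <version> should be replaced with your image tag:

# Full GPU container on GPU 0 with 4 GB of shared memory (license path is a placeholder)
docker run --gpus '"device=0"' --shm-size=4g --rm -v "$HOME/license.json":/app/license/license.json \
-p 8080:8080 -it crprivateaiprod.azurecr.io/deid:<version>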

It is recommended to deploy the container on single-GPU machines. For multi-GPU machines, please launch one container instance per GPU and specify the GPU_ID accordingly. You can get the GPU_ID using the nvidia-smi command if you have access to the host machine. You can find more information regarding using GPUs with Docker in the Docker documentation.
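
As a sketch, on a two-GPU machine you could list the GPU IDs with nvidia-smi and then start one container per GPU, mapping each to its own host port; the host ports and license path below are placeholders:

# List the available GPUs and their IDs
nvidia-smi -L

# One container instance per GPU, each exposed on its own host port
# (run each in its own terminal, or replace -it with -d to run detached)
docker run --gpus '"device=0"' --shm-size=4g --rm -v "$HOME/license.json":/app/license/license.json \
-p 8080:8080 -it crprivateaiprod.azurecr.io/deid:<version>
docker run --gpus '"device=1"' --shm-size=4g --rm -v "$HOME/license.json":/app/license/license.json \
-p 8081:8080 -it crprivateaiprod.azurecr.io/deid:<version>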

For private or public cloud deployment, please see Deployment and the Kubernetes Setup Guide.

info

crprivateaiprod.azurecr.io is intended for image distribution only. For production use, it is strongly recommended to set up a container registry inside your own compute environment to host the image.
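
As a sketch, the image can be mirrored into your own registry with standard Docker commands; registry.example.com below is a placeholder for your registry host:

# Pull the image from Private AI's registry, retag it for your own registry, and push it
docker pull crprivateaiprod.azurecr.io/deid:<version>
docker tag crprivateaiprod.azurecr.io/deid:<version> registry.example.com/deid:<version>
docker push registry.example.com/deid:<version>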

Apple Silicon

Whilst not officially supported, it is possible to run the Private AI container on Apple Silicon-based Macs, such as the M1 MacBook Pro. To do this, please make sure you use Docker Desktop 4.25 or later and enable Rosetta 2 support. During our testing we didn't encounter issues with M1 Macs, but we did encounter some container startup issues on M2 Macs. If this occurs, please try disabling Rosetta 2 support.
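
If Docker does not select the x86_64 image automatically, emulation can also be requested explicitly with Docker's --platform flag; this is a sketch, and whether the flag is needed depends on your Docker Desktop configuration:

# Force the x86_64 image to run under emulation on Apple Silicon (license path is a placeholder)
docker run --platform linux/amd64 --rm -v "$HOME/license.json":/app/license/license.json \
-p 8080:8080 -it crprivateaiprod.azurecr.io/deid:<version>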

Due to the need to emulate x86 instructions, performance is significantly lower than on x86-based machines, let alone GPU-equipped machines. On an M1 MacBook Pro, our tests showed a throughput of approximately 250 words per second.

Authentication and External Communications

The container communicates externally with Private AI's servers for authentication and usage reporting. To this end, please make sure that https://apim-auth-prod.azure-api.net:443/license-verification/license_status and https://app.amberflo.io:443/ingest/ are reachable. These communications do not contain any customer data; if training data is required, it must be provided to Private AI separately. Please see the FAQ for more details on what is sent. An authentication call is made upon the first API call after the Docker image is started, and again at pre-defined intervals based on your subscription.
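
One way to verify reachability from the machine that will run the container is a quick curl check against each endpoint; any HTTP status code in the output (rather than a connection error) indicates the host can be reached:

# Print the HTTP status code returned by each endpoint (curl prints 000 on a connection failure)
curl -s -o /dev/null -w "%{http_code}\n" https://apim-auth-prod.azure-api.net:443/license-verification/license_status
curl -s -o /dev/null -w "%{http_code}\n" https://app.amberflo.io:443/ingest/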

An "airgapped" version of the container that doesn't require external communication can be delivered upon request for Scale Plan customers.

URI-Based File Support

Running the container with the above commands allows base64-encoded files to be processed with /process/files/base64. However, to use the /process/files/uri route, you must mount a volume containing the input files and set the PAI_OUTPUT_FILE_DIR environment variable. Note that PAI_OUTPUT_FILE_DIR must point to a location inside a mounted volume.

docker run --rm -v "full path to your license.json file":/app/license/license.json \
-e PAI_OUTPUT_FILE_DIR=<full path to output> \
-v <full path to files>:<full path to files> \
-v <full path to output>:<full path to output> \
-p 8080:8080 -it crprivateaiprod.azurecr.io/deid:<version>

For example, if your license file is in your home directory, the input directory you wish to mount is called inputfiles, and the output directory is called output:

docker run --rm -v /home/<username>/license.json:/app/license/license.json \
-e PAI_OUTPUT_FILE_DIR=/home/<username>/output \
-v /home/<username>/inputfiles:/home/<username>/inputfiles \
-v /home/<username>/output:/home/<username>/output \
-p 8080:8080 -it crprivateaiprod.azurecr.io/deid:3.0.0-cpu
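
Once the container is running, files inside the mounted input volume can be referenced by path via the /process/files/uri route. The following sketch is illustrative only: the single uri request field and the sample file name are assumptions, so please consult the API reference for the exact request schema.

# Illustrative request; the "uri" field and file name are assumptions (see the API reference)
curl -X POST http://localhost:8080/process/files/uri \
  -H "Content-Type: application/json" \
  -d '{"uri": "/home/<username>/inputfiles/document.pdf"}'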