Environment Variables

The Private AI container supports a number of environment variables, which can be set in the Docker run command as follows:

docker run --rm -e <ENVIRONMENT_VARIABLE1>=<VALUE> -e <ENVIRONMENT_VARIABLE2>=<VALUE> -p 8080:8080 -it deid:<version number>

Supported Environment Variables

PAI_ACCURACY_MODES: Controls which entity detection models are loaded at container start. By default, the container loads all models. Setting this environment variable allows for faster startup and reduced RAM and GPU memory usage. A request specifying an accuracy mode that wasn't loaded will return an error. Allowed values are the accuracy modes specified in the accuracy field in entity_detection, e.g. PAI_ACCURACY_MODES=high
PAI_ALLOW_LIST: Allows for the allow list to be set globally, instead of passing it into each POST request. An example could be PAI_ALLOW_LIST='["John","Grace"]'. Please see Processing Text
PAI_DISABLE_GPU_CHECK: When defined and set to any value, the startup GPU check is disabled. This variable is only applicable to the GPU container and allows the GPU container to run in fallback CPU mode
PAI_DISABLE_RAM_CHECK: When defined and set to any value, the sufficient-RAM check performed at container startup is disabled. Please note that Private AI cannot guarantee container stability if this check is switched off
PAI_ENABLED_CLASSES: Allows for the enabled classes to be set globally, instead of passing them into each POST request. An example could be PAI_ENABLED_CLASSES="NAME," or PAI_ENABLED_CLASSES="NAME,AGE,ORGANIZATION". Please see Processing Text
PAI_ENABLE_AUDIO: When defined and set to any value, the container loads the functionality to process audio files. This is off by default to save startup time and RAM
PAI_ENABLE_PII_COUNT_METERING: When defined and set to any value, aggregated entity detection counts are sent back to Private AI servers for reporting and visualization inside the dashboard. Note that this feature is off by default
PAI_LOG_LEVEL: Controls the verbosity of the container logging output. Allowed values are info, warning or error. Default is info
PAI_MARKER_FORMAT: Allows for the redaction marker format to be set globally, instead of passing it into each POST request. Please see Processing Text
PAI_OUTPUT_FILE_DIR: The directory that /v3/process/files/uri will write processed files to. Note that this does not need to be specified for /v3/process/files/base64
PAI_PROJECT_ID: Sets a default project_id that will be used if a request doesn't contain one. Please see Processing Text
PAI_SYNTHETIC_PII_ACCURACY_MODES: Same as PAI_ACCURACY_MODES, except for synthetic entity generation models. Unlike PAI_ACCURACY_MODES, this environment variable can be set empty via PAI_SYNTHETIC_PII_ACCURACY_MODES= to disable synthetic entity generation
PAI_WORKERS: Number of pre/post-processing workers used in the GPU container. Defaults to 16; increasing this number allows for higher throughput, at the cost of increased RAM usage
PAI_ENABLE_EMBEDDINGS: When defined and set to any value, enables the /v3/embed/text endpoint, which is used to get the embeddings of a text string. Make sure that the --shm-size flag is set in the docker run (or equivalent) command. The minimum shm-size requirement at the time of writing is 288m, e.g. docker run --shm-size=288m <rest of the options>
PAI_ENABLE_REPORTING: Enables reporting to a Logstash server
LOGSTASH_HOST: The Logstash server's host info
LOGSTASH_PORT: The port of the Logstash server
LOGSTASH_TTL: Sets the time to live value (in seconds) of the data queued for Logstash. Data will be lost if the queued data is not sent successfully before the TTL expires
PAI_REPORT_ENTITY_COUNTS: Enables entity counts (per piece of text deidentified) to be added to reporting
PAI_MAX_IMAGE_PIXELS: Configures the maximum number of pixels allowed in processed images. Default value is 178956970
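
For example, the following command starts the container with only the high accuracy models loaded, audio processing enabled and a quieter log level. This is an illustrative sketch; adjust the accuracy mode, log level and image tag to your deployment:

docker run --rm -e PAI_ACCURACY_MODES=high -e PAI_ENABLE_AUDIO=true -e PAI_LOG_LEVEL=warning -p 8080:8080 -it deid:<version number>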
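
Similarly, Logstash reporting can be enabled at startup. The host, port and TTL values below are placeholders for illustration, not defaults:

docker run --rm -e PAI_ENABLE_REPORTING=true -e LOGSTASH_HOST=logstash.example.com -e LOGSTASH_PORT=5044 -e LOGSTASH_TTL=60 -e PAI_REPORT_ENTITY_COUNTS=true -p 8080:8080 -it deid:<version number>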

To change the port used by the container, please set the host port as shown in the command below:

docker run --rm -v "full path to license.json":/app/license/license.json \
-p <host port>:8080 -it crprivateaiprod.azurecr.io/deid:<version>
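
For instance, using 9000 as the host port (an arbitrary choice) exposes the API at localhost:9000 while the container continues to listen on port 8080 internally:

docker run --rm -v "full path to license.json":/app/license/license.json \
-p 9000:8080 -it crprivateaiprod.azurecr.io/deid:<version>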