3.0.0 Beta Quickstart Guide

Getting the Container

The container is distributed via a container registry on Azure, which you can log in to with the following command:

docker login -u INSERT_UNIQUE_CLIENT_ID -p INSERT_UNIQUE_CLIENT_PW crprivateaiprod.azurecr.io

Please reach out to our customer support team at info@private-ai.com if you need login credentials.
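
Once logged in, you can optionally pull the image ahead of time (docker run will also pull it automatically if it is not present locally). For example, using the beta tag referenced later in this guide:

docker pull crprivateaiprod.azurecr.io/deid:3.0.0beta2-full_cpu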

Starting up the Container

In 3.0, we moved to license files for authentication. To start the container, please mount the license file as follows:

docker run --rm -v "full path to your license.json file":/app/license/license.json -p 8080:8080 -it crprivateaiprod.azurecr.io/deid:<tag>

For example:

docker run --rm -v "/home/johnsmith/paisandbox/my-license-file.json":/app/license/license.json -p 8080:8080 -it crprivateaiprod.azurecr.io/deid:3.0.0beta2-full_cpu

Sending requests

You can then make a request to the container like this:

curl --request POST --url http://localhost:8080/v3/process_text \
  --header 'Content-Type: application/json' \
  --header 'x-api-key: <key>' \
  --data '{"text": ["Hello John"]}'

Note that for the 3.0 beta, you must still supply your API key in the POST request. This requirement will be removed in the 3.0 final release.
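
Since the text field takes a list, you should be able to send several strings in a single request. An illustrative sketch, not an official example:

curl --request POST --url http://localhost:8080/v3/process_text \
  --header 'Content-Type: application/json' \
  --header 'x-api-key: <key>' \
  --data '{"text": ["Hello John", "My number is 012 345 6789"]}'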

File Support

Starting the container with file support

If you want to try out file support, you'll need to start up the container with a few extra arguments:

docker run --rm -v "full path to your license.json file":/app/license/license.json \
-e PAI_OUTPUT_FILE_DIR=<full path to files>/output_dir \
-v <full path to files>:<full path to files> \
-p 8080:8080 -it crprivateaiprod.azurecr.io/deid:3.0.0beta2-full_cpu

For example, if your license file is in your home directory, the input directory you wish to mount is called inputfiles, and the output directory is inputfiles/output:

docker run --rm -v /home/<username>/license.json:/app/license/license.json \
-e PAI_OUTPUT_FILE_DIR=/home/<username>/inputfiles/output \
-v /home/<username>/inputfiles:/home/<username>/inputfiles \
-p 8080:8080 -it crprivateaiprod.azurecr.io/deid:3.0.0beta2-full_cpu

Note that the output directory must reside within the input directory. Also ensure that the full paths specified exist and have read/write permissions.
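
For example, to create the directories from the example above before starting the container (a minimal sketch; adjust ownership and permissions to whatever user the container runs as):

mkdir -p /home/<username>/inputfiles/output
chmod -R a+rw /home/<username>/inputfiles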

Sending requests to the new process_file endpoint

You can make a request like this for any file located within the mounted input volume:

curl --request POST --url http://localhost:8080/v3/process_file \
  --header 'Content-Type: application/json' \
  --header 'x-api-key: <key>' \
  --data '{"uri": "/home/<username>/inputfiles/testing.pdf"}'

The API request is synchronous and returns once the redacted file has been written to the output directory. Also note that for the 3.0 beta, you must still supply your API key in the POST request. This requirement will be removed in the 3.0 final release.
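
Because the call is synchronous, a simple shell loop is enough to redact a batch of files one at a time. A sketch, assuming the paths from the example above:

for f in /home/<username>/inputfiles/*.pdf; do
  curl --request POST --url http://localhost:8080/v3/process_file \
    --header 'Content-Type: application/json' \
    --header 'x-api-key: <key>' \
    --data "{\"uri\": \"$f\"}"
done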

You can find the new API reference here: https://docs.private-ai.com/reference/3.0.0beta2/operation/process_file_v3_process_text_post/

The release notes are available here, together with some example requests and further detail on what’s changed in 3.0: https://docs.private-ai.com/release-notes/

© Copyright 2022, Private AI.