This repository contains a Dockerized client application for a Confidential AI/LLM Service example. The `confai-promter.py` app leverages the Attested-TLS protocol, implemented in the GENXT `confido` library. The `confido` library is a confidential-computing middleware that acts as a specialised TLS certificate handler: using a chain of cryptographically interlinked messages, it verifies the authenticity and integrity of the entire software stack of the remote VM running the inference engine. It also independently verifies the CPU-signed evidence of the VM's memory isolation. Within the Attested-TLS protocol, the CPU implementation is the only explicit trust anchor the user is expected to rely on.
This example works on any platform and only requires Docker and `make` to be installed.

On Ubuntu or any other Debian-based system, you can install `make` by running the following command in your terminal:

```
sudo apt-get install make
```

On CentOS, RHEL, or any other RedHat-based system, you can use the following command:

```
sudo yum install make
```

On macOS, if you have Homebrew installed, you can install `make` by running:

```
brew install make
```

If you don't have Homebrew, you can install it by following the instructions on the Homebrew website.
You can run the following commands using `make`:

- `make build`: Builds the Docker image with the tag `confidential-ai-example`. It creates a containerised environment with all the required dependencies, including the `confido` Attested-TLS library, the `httpx` HTTP client (which supports user-defined trusted TLS certificates), and the open-source OpenAI library for interacting with LLM inference engines.
- `make chat`: Runs the Docker container and executes the `confai-promter.py` Python script.
The `confido` library, configured by `confai-promter.py`, outputs debug messages while verifying the Attested-TLS Evidence. This evidence includes the Trusted Platform Module (TPM) cryptographic report, as well as GPU and CPU Trusted Execution Environment (TEE) remote attestation reports. Notably, all these reports are cryptographically interconnected, and this interconnection is independently verified by the `confido` client library against the CPU and GPU vendors' certificates and well-known software hash-sums.
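Checking a component against a well-known software hash-sum boils down to recomputing its digest and comparing it with the published reference value. A generic sketch (the blob and reference below are derived in place purely for illustration):

```python
import hashlib
import hmac

def measurement_matches(blob: bytes, known_good_hex: str) -> bool:
    """Compare a measured SHA-256 digest against a published reference.

    hmac.compare_digest performs a constant-time comparison, avoiding
    timing side channels when digests are checked.
    """
    measured = hashlib.sha256(blob).hexdigest()
    return hmac.compare_digest(measured, known_good_hex)

blob = b"inference-engine-build"              # stand-in for a software image
reference = hashlib.sha256(blob).hexdigest()  # stand-in for a published hash-sum

print(measurement_matches(blob, reference))         # True
print(measurement_matches(b"tampered", reference))  # False
```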
For an example of the output generated by running the `make chat` command, refer to the `example-output.md` file.