Azure Functions OpenAI extension - text completion


Overview • Run the sample • Key concepts • Troubleshooting • Next steps

Overview

This sample demonstrates how to use Azure OpenAI text completions with Azure Functions and the Azure OpenAI extension.

Application architecture

This application is made from multiple components:

  • An Azure Functions app that exposes the API endpoints and queries Azure OpenAI through the OpenAI extension's text completion input binding.
  • An Azure OpenAI service hosting the chat model used to generate the completions.
  • An Azure Blob Storage account used by the Azure Functions runtime.

Run the sample

Prerequisites

To run this sample you need Node.js with npm, the Azure Developer CLI (azd), and an Azure account with an active subscription.

Cost estimation

Pricing varies per region and usage, so it isn't possible to predict exact costs for your usage. However, you can use the Azure pricing calculator for the resources below to get an estimate.

  • Azure Functions: Flex Consumption plan, Free for the first 250K executions. Pricing per execution and memory used. Pricing
  • Azure OpenAI: Standard tier, chat model. Pricing per 1K tokens used, and at least 1K tokens are used per question. Pricing
  • Azure Blob Storage: Standard tier with LRS. Pricing per GB stored and data transfer. Pricing

⚠️ To avoid unnecessary costs, remember to take down your app if it's no longer in use, either by deleting the resource group in the Portal or running azd down --purge.

Setup development environment

You can run this project directly in your browser by using GitHub Codespaces, which will open a web-based VS Code.

  1. Fork the project to create your own copy of this repository.
  2. On your forked repository, select the Code button, then the Codespaces tab, and click on the Create codespace on main button.
  3. Wait for the Codespace to be created; this should take a few minutes.

If you prefer to run the project locally, follow these instructions.

Deploy Azure resources

Open a terminal in the project root and follow these steps to deploy the Azure resources needed:

# Open the sample directory
cd samples/openai-extension-textcompletion

# Install dependencies
npm install

# Deploy the sample to Azure
azd auth login
azd up

You will be prompted to select a base location for the resources. If you're unsure of which location to choose, select eastus2. The deployment process will take a few minutes.

Test the application

Once the resources are deployed, you can use the following command to run the application locally:

npm start

This command will start the Azure Functions application locally. You can test the application by sending a GET request to the /whois endpoint:

curl http://localhost:7071/api/whois/Albert%20Einstein

You should receive a response with information about Albert Einstein. You can also try sending a POST request to the /completions endpoint:

curl http://localhost:7071/api/completions -H "Content-Type: application/json" -d '{"prompt": "Capital of France?"}'

You should receive a response with the completion of the prompt. Alternatively, you can open the file api.http and click on Send Request to test the endpoints.
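If you'd rather script these checks than run curl by hand, a small test script can exercise both endpoints. This is a minimal sketch, assuming Node.js 18+ (for the built-in fetch) and the app running on the default local port:

// test-endpoints.ts: quick smoke test against the locally running sample
const base = 'http://localhost:7071/api';

async function main(): Promise<void> {
  // GET /whois/<name>: query the prompt-template endpoint
  const whois = await fetch(`${base}/whois/Albert%20Einstein`);
  console.log(await whois.text());

  // POST /completions: send a free-form prompt in the JSON body
  const completion = await fetch(`${base}/completions`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt: 'Capital of France?' }),
  });
  console.log(await completion.text());
}

main().catch(console.error);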

Clean up

To clean up all the Azure resources created by this sample:

  1. Run azd down --purge
  2. When asked if you are sure you want to continue, enter y

The resource group and all the resources will be deleted.

Key concepts

Querying an LLM (Large Language Model) allows you to perform a wide range of tasks, such as generating completions, answering questions, summarizing text, and more. Here we either pass a prompt to the LLM directly, or use a prompt template with parameters from the query to generate an answer.

Open the src/functions folder to see the code for the Azure Functions. Our API is composed of two endpoints:

  • POST /completions: This endpoint takes a JSON object with a prompt property and returns a completion generated by the LLM. It uses the OpenAI text completion input binding to generate the completion.

  • GET /whois/<name>: This endpoint takes a name as a route parameter and returns information about the person, using a prompt template to query the LLM through the OpenAI text completion input binding (see the sketch after this list).
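To give an idea of how the input binding is wired up, here is a minimal sketch of a whois-style function using the Node.js v4 programming model. The binding property names (type, prompt, maxTokens, model), the CHAT_MODEL_DEPLOYMENT_NAME app setting, and the whoisCompletion variable are assumptions for illustration and may differ between extension versions; the code in src/functions is the reference.

import { app, input, HttpRequest, HttpResponseInit, InvocationContext } from '@azure/functions';

// Sketch only: a text completion input binding using a prompt template.
// Property names are assumptions; src/functions holds the sample's real code.
const whoisCompletion = input.generic({
  type: 'textCompletion',
  prompt: 'Who is {name}?',
  maxTokens: '100',
  model: '%CHAT_MODEL_DEPLOYMENT_NAME%',
});

app.http('whois', {
  methods: ['GET'],
  route: 'whois/{name}',
  authLevel: 'anonymous',
  extraInputs: [whoisCompletion],
  handler: async (_request: HttpRequest, context: InvocationContext): Promise<HttpResponseInit> => {
    // The extension resolves {name} from the route, calls Azure OpenAI,
    // and exposes the result as an extra input on the invocation context.
    const completion: any = context.extraInputs.get(whoisCompletion);
    return { status: 200, body: completion.content };
  },
});

The /completions endpoint follows the same pattern, except the prompt is taken from the JSON body of the request instead of a route parameter.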

Troubleshooting

If you have any issues when running or deploying this sample, please check the troubleshooting guide. If you can't find a solution to your problem, please open an issue.

Next steps

Here are some resources to learn more about the technologies used in this sample: