Teneo Developers

OpenAI Connector

The Teneo GPT Connector allows Teneo developers to connect to the GPT models and include their functionality in Teneo solutions. You can choose to use the OpenAI API or the Azure OpenAI service on Microsoft Azure. Below, you will find a Teneo solution which contains all the code required to get you started.

Before making use of this connector, please ensure that the agreements in place with OpenAI cover your intended use cases and that the Data Protection Agreement complies with your use case.



You will need an OpenAI account or a Microsoft Azure account to generate your API key.

Option 1: Create an OpenAI API account

To use the GPT 3 models, you first need to create your OpenAI API account. If you are a new user, you might obtain some free credit (normally to be consumed within the first 3 months), which should be enough to set up a demo solution.

After creating your account, obtain your secret key by clicking on Personal at the top right and choosing View API keys from the drop-down list.


Now click on the + Create new secret key button. You will see the following pop-up window:


Please copy the API key and save it somewhere safe before you click OK to close this window. You will not be able to view the complete key again. If you lose this key, you will have to delete it and create a new one.

Choose your language model

Now you need to choose a language model. OpenAI has released the API for their GPT 3.5 model, which is used in their famous application ChatGPT, at a reasonable price of $0.002 per 1K tokens. You can find the API reference here.
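As a rough illustration of what a GPT 3.5 call looks like: the request is a JSON body with a model name and a list of chat messages, sent to the chat completions endpoint with your API key in an Authorization header. Here is a minimal Python sketch of building such a body (the HTTP request itself is omitted, and the user question is just an example):

```python
import json

# Minimal sketch of a GPT 3.5 (chat completions) request body, as it would be
# POSTed to https://api.openai.com/v1/chat/completions with an
# "Authorization: Bearer <your API key>" header.
def build_chat_request(user_input: str) -> dict:
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": user_input}],
    }

body = build_chat_request("How many calories are in a latte?")
print(json.dumps(body))
```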

Besides the GPT 3.5 model, OpenAI provides four GPT 3 models with different capabilities and prices, which are (in order from least to most capable): Ada, Babbage, Curie, and Davinci. More capable models are more expensive and have longer response times, and each model has different versions; for example, the current version of the Davinci model is text-davinci-003. You can find all available models here: Models - OpenAI API.


The GPT 3 models also provide fine-tuning options at extra cost, while the GPT 3.5 model is not fine-tunable. You can find the details of fine-tuning here.

Option 2: Create an Azure OpenAI service

To use GPT models via Microsoft Azure, you first need to apply for access here; otherwise you will not be allowed to create an OpenAI service on Azure.

After creating an OpenAI service, you first need to copy your key and the URL of the endpoint. You will find this information on the Keys and Endpoint page under Resource Management. You only need one of the keys to connect to the Azure OpenAI service from Teneo.


Now you need to deploy a model by clicking the Create button on the Model deployments page.


Choose the language model from the drop-down list and give it a deployment name. Your model's deployment name will be used in your Teneo solution. Please note that the available models may differ according to the region you selected when creating the Azure OpenAI service. The gpt-35-turbo model corresponds to the GPT 3.5 model, while the rest are GPT 3 models.
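To illustrate why the deployment name matters: when calling the Azure OpenAI service, the deployment name (not the underlying model name) appears in the request URL. A small sketch, with hypothetical resource and deployment names:

```python
# Sketch of the Azure OpenAI request URL. "my-resource" and
# "my-gpt35-deployment" are hypothetical placeholders; note that the URL path
# uses the deployment name you chose, not the model name.
def azure_completions_url(resource: str, deployment: str,
                          api_version: str = "2022-12-01") -> str:
    return (f"https://{resource}.openai.azure.com/openai/deployments/"
            f"{deployment}/completions?api-version={api_version}")

print(azure_completions_url("my-resource", "my-gpt35-deployment"))
```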


Teneo GPT Connector Solution

The solution contains code in the form of a class GptHelper with the following functionality:

  • Chat with GPT
  • Detect profanity

You can find the GptHelper.groovy file with the source code in the resource file manager of the solution. This allows you to easily modify or extend the class if needed.

You can edit the class straight from Teneo by selecting the GptHelper.groovy file and clicking 'Open'. See: Opening and editing files in an external editor for more details.

Set up the connector

Under Globals -> Variables, you need to configure the variable gptHelper with your generated API key and the model you would like to use.


The required arguments are:

  • endpoint: If you use OpenAI's API, the endpoint should be https://api.openai.com/v1/ at this moment. You can check the base URL from here. If you use the Azure OpenAI service, you can copy the endpoint URL from the Keys and Endpoint page.
  • apiKey: Your API key, which you should store securely yourself.
  • model: The model's name. If you use OpenAI's API, you should put gpt-3.5-turbo for the GPT 3.5 model, or check the model names from here if you are using GPT 3. If you use the Azure OpenAI service, you should put your model deployment name (not the model's name).
  • background: The background of this chatbot. You can add one or a few sentences to describe the main purpose of your solution.
  • platform: If you are using OpenAI’s API, put OpenAI here. If you are using Azure OpenAI service, put Azure.
  • apiVersion: This is optional and only applies to the Azure OpenAI service. It has the value 2022-12-01 by default and you do not need to change it at this moment. You can always check the current API version by clicking on View code from the GPT-3 playground in Azure OpenAI Studio.
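To make the roles of platform, endpoint, apiKey, and model more concrete, here is a hedged sketch of how these arguments could map to a request URL and authentication header. This is an illustration, not the actual GptHelper implementation; the key difference it shows is that OpenAI authenticates with a Bearer token while Azure uses an api-key header, and that on Azure the model argument (the deployment name) becomes part of the URL:

```python
# Illustrative mapping from connector arguments to request settings.
# Function and variable names are hypothetical, not taken from GptHelper.
def request_settings(platform, endpoint, api_key, model,
                     api_version="2022-12-01"):
    if platform == "OpenAI":
        # OpenAI: model goes in the JSON body; auth is a Bearer token.
        url = endpoint.rstrip("/") + "/chat/completions"
        headers = {"Authorization": f"Bearer {api_key}"}
    else:
        # Azure: the deployment name goes in the URL; auth is an api-key header.
        url = (endpoint.rstrip("/")
               + f"/openai/deployments/{model}/completions"
               + f"?api-version={api_version}")
        headers = {"api-key": api_key}
    return url, headers
```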


Chat with GPT

Under Resources -> Integration, you can find the Chat with GPT method which can be used across your Teneo solution. It will send the current input together with the dialogue history to the GPT model, return the answer text, and append the current question-answer pair to the dialogue history.
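The behaviour described above can be sketched as follows. The send step is stubbed out here; in the real connector it would POST the accumulated messages to the GPT API:

```python
# Sketch of the "Chat with GPT" flow: send the history plus the current input,
# return the answer, and append the new question-answer pair to the history.
# The send callable is a stub standing in for the real API call.
history = []  # alternating user/assistant messages for the current session

def chat(user_input, send=lambda msgs: "stubbed answer"):
    messages = history + [{"role": "user", "content": user_input}]
    answer = send(messages)  # real code would call the chat completions API
    history.append({"role": "user", "content": user_input})
    history.append({"role": "assistant", "content": answer})
    return answer
```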


Under Globals -> Variables, you can find two variables related to chatting with GPT. One is called gptUsed and holds a boolean value, which is set to true inside the Chat with GPT method in the Integration to indicate that the current answer is powered by GPT.


Finally, you can find an integer variable called gptMaxLength which controls the maximum length of the accumulated dialog history. Remember that when chatting with GPT, the full dialog history of the current dialog session is sent to the API. If the dialog is too long, you will probably suffer from a long response time. You can adjust this threshold as you need.


In Globals -> Scripts -> Post-processing, you will find the script node called Build dialog history, which has the following functionality:

  • Append the current question-answer pair to the dialogue history when GPT is not used.
  • Drop the first user input and chatbot output when the length of the dialogue history exceeds the threshold you set.
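The second step above can be sketched as a simple trim loop. The names are illustrative (the real script works with Teneo runtime variables such as gptMaxLength):

```python
# Sketch of the post-processing trim: once the history exceeds the threshold,
# drop the oldest user input and chatbot output (the first pair) until it fits.
def trim_history(history, gpt_max_length):
    while len(history) > gpt_max_length:
        del history[:2]  # drop the first user/bot pair
    return history
```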


The flow User asks about calories in a drink in folder Project shows you an example of building a flow powered by GPT to cover a certain field of your project.

The flow General Chat Powered by GPT in folder General Chat shows you an example of building a flow powered by GPT to cover small talks.


Detect profanity

In addition to the Completions endpoint used to complete a dialogue, OpenAI also provides the Moderations endpoint to detect whether the user input violates its content policy, which covers the following categories:

  • hate
  • threatening
  • self-harm
  • sexual
  • child sexual
  • violence
  • graphic violence

Under Globals -> Variables, you can find a boolean variable profanityDetected, which will be set to true if the current input is classified into one of the categories above by the Moderations API.
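The flag can be derived from the Moderations response, whose results carry a flagged field that is true when any category matched. A minimal sketch, using a hand-written sample response in place of a real API call:

```python
# Sketch of turning a Moderations response into the profanityDetected flag.
# The response shape follows OpenAI's documented format: results[0]["flagged"]
# is true when the input matched any moderation category.
def profanity_detected(moderation_response: dict) -> bool:
    results = moderation_response.get("results", [])
    return bool(results and results[0].get("flagged"))

# Hand-written sample response standing in for a real Moderations API reply.
sample = {"results": [{"flagged": True, "categories": {"hate": True}}]}
print(profanity_detected(sample))  # → True
```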


The classification is done by the code in the script node Profanity detection in Globals -> Scripts -> Pre-processing.