How To Use OpenAI GPT Chat Models With the API
In this post, we’ll build a simple but powerful script that uses an OpenAI GPT model through their API. In the following example, we’ll make a chatbot that can remember previous prompts and the conversation so far.
If you’re familiar with ChatGPT, that’s essentially what we’re building. The difference between ChatGPT and this example is that here you’ll be able to further process the AI’s answers in code.
Prerequisites
We’ll be working with the LangChain library, which lets us use large language models (LLMs) from various providers. For the purposes of this tutorial, we’ll stick with OpenAI’s GPT models. Since their API calls aren’t free, you’ll need to set up a paid plan on their website.
After you’re done with that, you just need to create an API key and paste it into the script. We’ll get to that part in a moment. First, let’s import all the libraries and tools we’ll need for this project.
import json
import os
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferWindowMemory
from langchain.chains import ConversationChain
I stored my API key in a JSON file and wrote a small function that reads the file and retrieves the key.
ROOT = os.path.dirname(__file__)

def get_token(token_name):
    # Read the API key from auth.json, located next to this script.
    with open(os.path.join(ROOT, 'auth.json')) as auth_file:
        auth_data = json.load(auth_file)
    return auth_data[token_name]

os.environ['OPENAI_API_KEY'] = get_token('openai-token')
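For reference, the auth.json file this function reads would look something like the snippet below. The key name 'openai-token' is simply what I chose for my file, and the value shown is a placeholder, not a real key.

```json
{
    "openai-token": "sk-..."
}
```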
Creating components for our chatbot
Now that we’ve authorized the connection with OpenAI’s API, we can start building our chatbot. The process is fairly simple, because LangChain already provides everything we need.
First, we need to define the model and its properties. Second, our chatbot needs memory to remember the conversation, so we define an object for that. Lastly, we put these two components together with a ConversationChain object.
chat = ChatOpenAI(temperature=0.9, model_name="gpt-3.5-turbo")
memory = ConversationBufferWindowMemory()
conversation = ConversationChain(
    llm=chat,
    memory=memory
)
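To make it clearer what ConversationBufferWindowMemory does, here is a minimal plain-Python sketch of the idea: it keeps only the last k exchanges and drops the oldest ones, so the prompt sent to the model doesn't grow forever. This is a conceptual illustration, not LangChain's actual implementation; the class and method names below are my own.

```python
from collections import deque

class WindowMemory:
    """Keeps only the last k (human, ai) exchanges, which is the
    core idea behind a buffer *window* memory."""

    def __init__(self, k=5):
        # deque with maxlen silently discards the oldest entry
        # once the window is full.
        self.exchanges = deque(maxlen=k)

    def save(self, human, ai):
        self.exchanges.append((human, ai))

    def as_prompt(self):
        # Flatten the remembered exchanges into the history text
        # that would be prepended to the next prompt.
        lines = []
        for human, ai in self.exchanges:
            lines.append(f"Human: {human}")
            lines.append(f"AI: {ai}")
        return "\n".join(lines)

memory = WindowMemory(k=2)
memory.save("Hi", "Hello!")
memory.save("What's 2+2?", "4")
memory.save("Thanks", "You're welcome")  # the oldest exchange is dropped
print(memory.as_prompt())
```

In the real library, the window size is controlled the same way: a small k keeps API costs down, at the price of the bot forgetting older parts of the conversation.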
Putting it into action
For the final part of this tutorial, we’ll put the chatbot to use by placing it inside an endless loop. The loop reads input from the console and uses it to prompt the LLM for an answer.
while True:
    prompt = input('You: ')
    response = conversation.predict(input=prompt)
    print('AI:', response)
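One drawback of the endless loop above is that the only way to stop it is to kill the process. A small variation (a sketch under my own assumptions, with a hypothetical "quit" command) wraps the loop in a function and injects the reply callable, which also makes it easy to try out without an API key:

```python
def chat_loop(get_reply, read_input=input, write=print):
    """Run a console chat loop until the user types 'quit'.

    get_reply is any callable mapping a prompt string to a response,
    e.g. lambda p: conversation.predict(input=p).
    read_input and write default to the builtins but can be swapped
    out for testing.
    """
    while True:
        prompt = read_input('You: ')
        if prompt.strip().lower() == 'quit':
            write('AI: Goodbye!')
            break
        write('AI:', get_reply(prompt))
```

With the chain from earlier you would call it as `chat_loop(lambda p: conversation.predict(input=p))`; injecting the dependencies keeps the console plumbing separate from the LLM call.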
Conclusion
To conclude, we made a simple chatbot that uses the OpenAI API to access a GPT model for chatting. I learned a lot while working on this project, and I hope it proves helpful to you as well.