# How To: Chat with GPT-3
## End Result

*(Demo of the finished chat app.)*
## Overview

We're building a React app that lets you talk to GPT-3. You can find the finished code on GitHub.
This should be easy to follow if you're familiar with React and making API calls. If that's not you, then I'd recommend first going through some React tutorials (and in particular getting familiar with functional components and `useState`).
Most of the code in the project is for rendering the chat application. Feel free to poke around, but to understand how we're using GPT-3 you can focus on `src/services/gpt.ts`.
We're hitting that service in `components/MessageList/index.tsx`.
We'll be tackling `gpt.ts` function by function.
## Entry point

```ts
const GPTService = {
  async getAIResponse(messages: Message[]): Promise<string> {
    const prompt = getPrompt(messages);
    const result = await getCompletion(prompt);
    return result;
  },
};
```
This is our entry point. We call this function every time the user types a message.
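For context, here's a rough sketch of how a component might call it. The handler name and state setter are hypothetical; the real wiring lives in `components/MessageList/index.tsx`.

```ts
// Hypothetical sketch — the actual call site is in components/MessageList/index.tsx.
async function handleSend(
  messages: Message[],
  setMessages: (msgs: Message[]) => void
): Promise<void> {
  const reply = await GPTService.getAIResponse(messages);
  // Append the AI's reply to the conversation.
  setMessages([
    ...messages,
    { author: "AI", message: reply.trim(), timestamp: Date.now() },
  ]);
}
```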
## Building our prompt

```ts
export interface Message {
  author: string;
  message: string;
  timestamp: number;
}

function getPrompt(messages: Message[]): string {
  const start = `The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly.
`;
  const additionalPrompt = "AI:";
  const lines = messages.map((m) => `${m.author}: ${m.message}\n`);
  const trimmed = trimLines(start.length + additionalPrompt.length, lines);
  const combinedLines = trimmed.join("");
  return start + combinedLines + additionalPrompt;
}
```
Because GPT-3 is so general-purpose (text -> text), it's important that it understands what we want.
We're telling it what we want (a chat between a human and an AI assistant) 3 different ways:

- Explicitly, in the `start` string.
- In the initial message passed in `components/MessageList/index.tsx` that says, "Hi! I'm a chatbot built with GPT-3. What would you like to talk about?"
- Implicitly, in that the "authors" are "Human" and "AI", so the chat transcript will look like this:
```
AI: Hi! I'm a chatbot built with GPT-3. What would you like to talk about?
Human: What's the best book to learn about human progress?
```
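For reference, `getPrompt` turns that transcript into the following prompt string (this follows directly from the code above; note the trailing `AI:` that cues GPT-3 to respond):

```
The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly.
AI: Hi! I'm a chatbot built with GPT-3. What would you like to talk about?
Human: What's the best book to learn about human progress?
AI:
```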
## Trimming our prompt

```ts
import _ from "lodash";

function trimLines(additional: number, lines: string[]): string[] {
  // As the chat continues, there's a tradeoff:
  // more lines = higher cost, better result.
  // 2048 (max allowable tokens for prompt and response) - 300 (our max
  // response length) is the upper bound for prompt tokens.
  // We will assume 1 token ~= 4 characters (as mentioned by OpenAI)
  // and keep a window of ~500 tokens (~40 one-sentence chat lines).
  const maxPromptLength = 500 * 4;
  return trimLinesHelper(additional, lines, maxPromptLength);
}

function trimLinesHelper(additional: number, lines: string[], hardMax: number): string[] {
  let characterCount = additional;
  // Take lines from the end (the most recent messages) until we exceed the budget.
  const trimmedLines = _.takeRightWhile(lines, (line) => {
    characterCount += line.length;
    return characterCount <= hardMax;
  });
  return trimmedLines;
}
```
We take as many of the most recent chat lines as we can until we hit 2000 characters.

Davinci (their best and most expensive language model) costs $0.06 per 1K tokens, so a full 500-token prompt is roughly 3 cents per completion (before counting response tokens).

Alternatively, you could use a cheaper model (Ada is $0.0008 per 1K tokens) or reduce that upper limit (if we assume 1 line ~= 10 words ~= 12 tokens, we could keep 20 lines for 20 * 12 = 240 tokens). Is 20 lines (of 10 words each) enough? Depends on your use case.
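To make the arithmetic concrete, here's a quick back-of-the-envelope calculation using the prices quoted above (check OpenAI's pricing page for current numbers):

```ts
// Back-of-the-envelope cost estimate at the prices quoted above.
const DAVINCI_PRICE_PER_1K_TOKENS = 0.06; // dollars
const PROMPT_TOKENS = 500; // our trimming window
const RESPONSE_TOKENS = 300; // RESPONSE_TOKEN_MAXIMUM, below

const costPerCompletion =
  ((PROMPT_TOKENS + RESPONSE_TOKENS) / 1000) * DAVINCI_PRICE_PER_1K_TOKENS;
console.log(costPerCompletion); // 0.048 — just under 5 cents at the full window
```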
## Getting a completion

```ts
import axios from "axios";

const RESPONSE_TOKEN_MAXIMUM = 300;

async function getCompletion(prompt: string): Promise<string> {
  const data = {
    prompt,
    max_tokens: RESPONSE_TOKEN_MAXIMUM,
    // Temperature ranges 0 -> 1.
    // .9 is great for a fun conversation.
    // 0 is better if you're trying to get answers to particular questions.
    // .3 might be better for a bot that has a personality but is primarily
    // used for answering questions.
    temperature: 0.9,
    // The response will stop either when max_tokens is used or a stop token
    // is hit, whichever comes first. For some reason, new lines ('\n') don't
    // seem to work reliably. We expect to receive a sentence or two and then
    // "\nHuman:", but "AI:" is in there for edge cases as well.
    stop: ["AI:", "Human:"],
    // The number of alternative responses to return. I've had very consistent
    // results, but you may occasionally get a response from GPT-3 that you
    // can't format into an appropriate chat response. If you set n to 2 or 3,
    // then you can check whether they look correct and pick one that does.
    n: 1,
  };
  const result = await axios({
    method: "post",
    url: "https://api.openai.com/v1/engines/davinci/completions",
    data,
    headers: {
      Authorization: "Bearer <your token>",
    },
  });
  // I'd recommend adding error handling here if using this in a serious application.
  return result.data.choices[0].text;
}
```
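Taking my own advice on error handling, here's one possible shape for it. This is a sketch, not code from the project: the wrapper name, retry count, and fallback message are all arbitrary choices.

```ts
// Sketch of basic error handling around getCompletion: retry once on failure
// or on an empty response, then fall back to a canned message.
async function getCompletionSafe(prompt: string): Promise<string> {
  for (let attempt = 0; attempt < 2; attempt++) {
    try {
      const text = await getCompletion(prompt);
      if (text.trim().length > 0) {
        return text.trim();
      }
    } catch (err) {
      console.error("OpenAI request failed", err);
    }
  }
  return "Sorry, I'm having trouble responding right now.";
}
```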
## Using this project

If you're planning on taking this as the boilerplate for one of your projects, then feel free, but be warned: I kept everything simple for demonstrative purposes.
- You'll need to build a backend service to protect your OpenAI API key (a minimal sketch follows this list).
- You'll need to build some robustness into the OpenAI API call (what if the call fails? what if GPT-3 returns complete nonsense?)
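For the first point, a minimal proxy might look like this. It's an illustrative sketch, not part of the project: Express is an assumed dependency, and the route name is arbitrary.

```ts
// Sketch of a backend proxy so the API key never ships to the browser.
// Express is an assumption here — the project itself is frontend-only.
import express from "express";
import axios from "axios";

const app = express();
app.use(express.json());

app.post("/api/completion", async (req, res) => {
  try {
    const result = await axios({
      method: "post",
      url: "https://api.openai.com/v1/engines/davinci/completions",
      // In a real app, validate and whitelist the forwarded fields
      // instead of passing the client's body through untouched.
      data: req.body,
      headers: { Authorization: `Bearer ${process.env.OPENAI_API_KEY}` },
    });
    res.json(result.data);
  } catch (err) {
    res.status(502).json({ error: "Upstream request failed" });
  }
});

app.listen(3001);
```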
If you need some help or want to chat, feel free to shoot me an email at zak@zakmiller.com.
## Extra credit

This is a simple project, but there's room to expand on it.
- Try changing the description from "helpful, creative, clever, and very friendly" to something else. Or let the user choose which personality they want to interact with (a sketch of this follows the list).
- Try changing the temperature and see how the AI responds.
- Give the user some example prompt choices to get them started.
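For that first idea, the change can be as small as parameterizing the description. Here's a hypothetical variant of `getPrompt` (not code from the project; the example personalities are just suggestions):

```ts
// Hypothetical variant of getPrompt that lets the user pick a personality.
function getPromptWithPersonality(
  messages: Message[],
  personality: string // e.g. "sarcastic and world-weary" or "formal and precise"
): string {
  const start = `The following is a conversation with an AI assistant. The assistant is ${personality}.
`;
  const additionalPrompt = "AI:";
  const lines = messages.map((m) => `${m.author}: ${m.message}\n`);
  const trimmed = trimLines(start.length + additionalPrompt.length, lines);
  return start + trimmed.join("") + additionalPrompt;
}
```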