Discover how to leverage Python and OpenAI's GPT-4 API to create interactive AI applications that respond to user queries. Learn the essentials from API integration to parsing responses effectively.
Key Insights
- Connect to the OpenAI GPT-4 API using Python by generating an API key, installing the OpenAI module via pip, and instantiating a client object to interact with the model.
- Send structured prompts to GPT-4 by creating messages defined as dictionaries specifying roles (system or user) and content (instructions or queries), enabling targeted and meaningful AI responses.
- Parse AI responses delivered in JSON (JavaScript Object Notation) format, allowing for convenient extraction and display of structured data instead of unstructured raw text blocks.
Note: These materials offer prospective students a preview of how our classes are structured. Students enrolled in this course will receive access to the full set of materials, including video lectures, project-based assignments, and instructor feedback.
This is a lesson preview only. For the full lesson, purchase the course here.
Now, cranking it up a notch, we are going to connect to the OpenAI API and start chatting with the AI. We need to generate an OpenAI API key, which we've already done; you saw that back before lesson 00. We are going to install the openai module in the terminal and then import it into our Python code.
We are going to write a prompt to instruct the AI model how to respond to a question, and we are going to send a request to the OpenAI model. The model we are using is called GPT-4o, which is very cutting edge.
As of this recording, anyway. The thing was only made available to the public in May of 2024, and it's still the latest, greatest model as of this recording. It handles image submissions in addition to text.
That ArtMink app I showed you, the one that's in the App Store for free, uses the GPT-4o model; without it, you can't submit images for analysis. Prior to that, back in 2023 with GPT-3, you couldn't submit images for analysis at all. We are going to receive and parse a response from the AI in the form of what is called JSON, which is a structured object of key-value pairs.
In other words, we could just get a blob of text back from the OpenAI API. The AI can answer us with one big string of text, which is fine. But better would be to break the response down into little nuggets that we can parse and output individually. So again, if you look at my app, how does all this data get put into these various little slots? It's because the data comes back from the API in what is called JSON (JavaScript Object Notation), a structured, ubiquitous standard format for sharing data around the web. It then gets parsed, as they say, or broken into its constituent pieces, and output to individual tags.
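To make the parsing idea concrete, here's a minimal sketch of unpacking a JSON string with Python's standard json module. The keys and values here are made up for illustration:

```python
import json

# A hypothetical JSON response, as a raw string of key-value pairs.
raw = '{"sport": "baseball", "meaning": "a home run with the bases loaded"}'

# json.loads() parses the string into a Python dictionary,
# breaking it into its constituent pieces.
data = json.loads(raw)

print(data["sport"])    # each value is now individually addressable
print(data["meaning"])
```

That's the whole trick: once the text is parsed, each nugget can go into its own slot on the page instead of arriving as one blob.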
Without JSON, you're basically going to get a blob of text back, which you don't want. To start, though, we're going to output the AI response to the browser without an index page. We're not going to get into render_template right away in this lesson, because the focus is shifting to the AI, the OpenAI API.
So once we get the request back, mission accomplished: we're just going to dump text into the browser. All right, we are going to make server02.py, install the openai module on the command line, and then import it into this server file. So from server01.py, do a Save As and call the new file server02.
Server01, File, Save As, server02. Quit the server and install the openai module in the terminal. We can't have the server running for that.
We need to be seeing the command prompt, so we'll do Ctrl+C, and now we're at our command prompt. Type pip install openai (all lowercase, one word) and hit ENTER.
Look at all that stuff. Bam, it looks like it worked. Successfully installed.
I see green. I love it. Okay, that's what you're looking for.
If it hadn't worked, it would have thrown a giant, multi-line red error. We're golden. In server02 now, the Python file, we're going to import the OpenAI module with an import openai line.
Import openai. Notice how here in the editor, when you import something you haven't used yet, it shows up about 10% grayed out. It's waiting for you to use it.
All right, so what we're going to do is log into your OpenAI account and generate an API key. This requires a premium paid account. However, if you're being provided an API key to use for this course, you can skip ahead.
We did this already, but you would go to this link. You would log into your account. If you went here without being logged in, it would tell you to log in.
And you would click create a new secret key. We can do this again. Create a new secret key.
Create, boom, and then copy what it gives you. I'm not going to do that part. We've already got our key.
The key will be just for this project, so call it chatbot. Nah, actually, we're not calling it chatbot. We're calling it AI for Python class, because it's not just for this project.
We can use it for multiple projects, for both of our big projects. We're doing two big projects, right? A chatbot and an image-based analyzer, kind of like my ArtMink app, that will be used for food analysis.
We'll just use the one key; we're not going to need two for this. Click Create new secret key, then Create secret key, and click the copy button to copy the key.
In the server, below the app instantiation line, we're going to declare a variable called API_KEY, and we're going to assign the key you copied as the variable's value. We've already got the key copied over to a text file.
And we're going to set it equal, as a string, to an all-uppercase name. The reason it's uppercase is that it's a constant, a flag to anyone looking at the code: hey, don't change this.
Now in a more mature, robust application, you would hide your key. But we're not going to. That's a little bit outside the scope of what we're doing.
We're not going to hide the key; we're just going to keep it right here. And that's a fake key you're looking at.
Even the real one wouldn't work by the time you're watching this, so please don't try to use this key in your projects, because it isn't going to work.
We're going to go to the API key, copy it, and then go to our server02.
We're going to declare the variable right under the app = Flask(__name__) line. We declare API_KEY, uppercase, as a constant, and in quotes, paste our key.
Save your file. Step three: we're going to instantiate a client object using the openai module's Client constructor.
Client, capitalized, is a function that's called on the openai module. It's one of the module's main, namesake functions.
Like, who are you? You're the client, and you are identifying yourself. That's your ID.
That's going to return an object that we save as client, and then we can call methods and properties on this client to send requests to the OpenAI API, to the AI, to ChatGPT, and get answers back. All right.
We write client = openai.Client(api_key=API_KEY). So you pass the API key in, but you set it as the value of the lowercase api_key parameter. Now we're going to try sending a chat message to the OpenAI model.
Now, try is in italics here because we are trying. Literally, the code we're using is called a try/except block. We're going to try something.
And if it doesn't work, the except block will run. It's kind of like an if/else, except it involves trying something and handling what happens if it fails. The except block runs if our try attempt fails.
It will print an error message. So inside the try block, we're going to send our request, our prompt, to the GPT-4o model. If it works, we will receive and handle a response, an answer.
But if it fails, the except block, kind of like the else part of an if statement, will run and just print an error message. So hopefully we won't ever have to see the except block run, because if we do, it means the thing we were trying to do didn't work.
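The pattern itself is plain Python, independent of OpenAI. Here's a minimal sketch using a made-up function that always fails:

```python
def send_request():
    # Stand-in for the real API call; here it always fails.
    raise ConnectionError("could not reach the API")

try:
    result = send_request()         # the thing we attempt
except Exception as e:              # runs only if the try block raised an error
    result = f"Exception: {e}"      # stringify the error, like our route will

print(result)  # Exception: could not reach the API
```

Swap the made-up send_request for a real API call and you have exactly the structure our route will use.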
We're going to go to our home route. We can leave the route the same, but we're going to rename the function to chat, just to focus on what we're making here, what we're doing.
It's also a reminder that the index name is not required. The home page of a website is typically called index, but in Flask you don't have to name your home route's function index. Call it what you like; we're going to call it chat.
Whatever it's called, it'll run when you hit the route. When you hit the route, the chat function will run, and what the function is going to do is send your request to the OpenAI model.
To the OpenAI API, that is, where the model is waiting to receive it. So inside the function, we're going to begin with a try block. Okay, so we have our client object, right? It got returned by instantiating the Client constructor of our openai module.
So on the client object, we're going to drill into the chat property, then the completions property, and call the create method on that. That's four words in a row that start with C: client, chat, completions, create.
And there is a description below here of what all those mean. We can look at those later. Let's just keep rolling for now.
So inside our chat function, we start the try block, hit ENTER, and say client.chat.completions.create. We're going to open up the parentheses on the create method and pass in... actually, no, excuse me.
We're not going to pass in an object directly. We're going to set various parameters. The first parameter is called model.
Its value will be set equal to the model with which we want to communicate, and that is gpt-4o, which is a string in quotes. The value of the model parameter is a string.
So that's model equals "gpt-4o", comma, next parameter. The next parameter that we're going to set is called messages, and messages takes a list as its value.
The list has two items in it, both of which are objects, or dictionaries in Python. So after model="gpt-4o", the name of the model, comma, messages equals a list of two dictionaries. There, like so.
And you can put a comma after the last parameter. We're not going to add anything after it, but it's pretty typical to put a comma at the end of your last parameter. That's called a trailing comma.
All right, before we dive in any further and start filling up these two dictionaries with content, let's pause a moment in case you're wondering. You know, I get students all the time, I don't want to just type the code. I want to know what it all means.
Fair enough. So client.chat.completions.create is a method chain used in OpenAI's Python API, the openai package. client is an instance of OpenAI's API client.
That's the thing that we instantiated here by passing it our API key. We said, hey, we're the client. client is the main entry point for interacting with OpenAI's API.
.chat accesses the chat endpoint, which is specifically for chat-based models like GPT-4 and GPT-4o, what we're using. .completions further narrows it down to chat completions, which involve sending a series of messages in a conversational format and receiving AI-generated responses. And .create is the actual function call that sends the request to the OpenAI API to generate a response based on the given parameters, returning a response object that can be saved.
So this entire command is going to return a response that we are going to save. Now, the GPT-4o model was released in May 2024 and featured lower token costs. That was one of the big breakthroughs.
I was nearing completion of my ArtMink app at that time, and this app allows users to upload up to eight images if they have a subscription, or just one otherwise. It consumes a lot of tokens to do this, right? So I was really psyched when they announced GPT-4o, not due to any improvements in intelligence. It wasn't better or smarter, really, that I noticed, and they didn't really tout it as such. What they did make a big deal of, and what I noticed immediately, was that it featured lower token costs.
It knocked the price of each request down. Every appraisal made via my app dropped to about half the price, from roughly two cents to one cent, depending on the number of pictures. That made it instantly far more economically feasible to develop and serve up AI-powered web applications.
So it's a big deal for AI-powered web applications. Now, the messages parameter is set equal to a list of two dictionaries. Each dictionary has two keys, role and content, the values of which are as follows.
The AI's role is system, and for content we put "You are a helpful assistant," which is a standard designation. It tells the AI that our chat could involve just about any subject. We're not saying you are an expert on something, you're a chef, or you're a baseball trivia buff. We're saying you're a helpful assistant.
In theory, we could ask it just about anything, really. It's that smart, actually. So let's go ahead and provide those two values, role and content, to the first dictionary.
The second dictionary also has role and content. The first dictionary is the AI, and the second dictionary is you, the user, because it's a two-way conversation between the AI (the system) and you (the user).
The AI's content is just its role defined, and the user's content is the specific prompt or query or question. I'm going to keep it simple to begin.
We're going to come in here and say role: system, content: You are a helpful assistant. Let's break this out a little bit better.
Okay. You can put all this stuff on one line; it's just a little more readable if you break it up like that.
That's pretty typical. Maybe put the closing parenthesis of the create method on the same line, or not.
If you really want to be precise about it, the opening and closing square brackets of the list should sit at the same indentation depth: messages starts here, and there closes the list. Whereas client.chat.completions.create, the method, starts way to the left,
so its closing parenthesis should line up there. In either case, you've got this nice color coding in VS Code. We see that the create method's parentheses are blue,
so you can look for the matching blue one. The messages list brackets are orange, and the interior dictionaries are purple.
That really helps a lot. So there's your role and your content. We keep this first dictionary up here; it's fine.
We can just copy and paste it for the second one.
Write it up nice, with a comma between the dictionaries. Role: user. That's you.
The first dictionary is the AI; the second is you. And yours is going to be a question.
We're going to say: What is a grand slam in sports, and what is the origin of the term? We'll get a little more specific with that second question, right? We don't want to just learn that it's winning all four major tennis championships (Wimbledon, the French Open, the US Open, and the Australian Open) in the same year. That's interesting.
Grand slam in baseball: a home run with the bases loaded. Yeah, but where did the term come from? Why do they call it that? That'd be interesting, so we ask a little more specifically.
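Filled in, the messages list is pure Python data, two role/content dictionaries:

```python
# The messages list: first the AI's role, then the user's question.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {
        "role": "user",
        "content": "What is a grand slam in sports? And what is the origin of the term?",
    },  # trailing comma after the last item is typical
]
```

This list is exactly what gets passed as the messages parameter of the create call.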
On the next line, still inside the try block, we're going to extract and return the response content, which is the AI's answer to our question. Accessing the actual answer text involves drilling into several objects, which probably isn't a big surprise. I mean, look at how much drilling, how much dot syntax, it took just to reach the create method.
So for the answer coming back, you've got to drill a little to unpack it. It's kind of like the answer comes back double or triple wrapped. So rather than return a rendering of a template, we're going to return the response content.
We're not returning render_template; we're going to return response.choices[0].message.content. Actually, first things first: this entire create call returns a response.
So we have to say response equals all of that. The response is going to equal the return value of the call, and that is what we're then going to receive and drill into.
The response has a choices list inside of it, where at index zero, the first item in that choices list, there's a message property, in which there's a content property, and the content property is the actual text answer sent back by the OpenAI API. To put it in simpler, real-world terms: when you order a book on Amazon, it's not delivered as just a book. It's in a box. You open it up, and maybe it's wrapped in some inner packaging with styrofoam popcorn or what have you. You've got to dig a little to get the thing out when it's delivered to you. That's what we're doing here.
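To see that drilling without making a real API call, we can fake the wrapper layers with stand-in objects. This only mimics the shape of the response; the answer text is made up:

```python
from types import SimpleNamespace

# Stand-ins for the nested wrapper objects the API sends back:
# response -> choices (a list) -> [0] -> message -> content (the text).
fake_message = SimpleNamespace(content="A grand slam is a term used across several sports...")
fake_choice = SimpleNamespace(message=fake_message)
response = SimpleNamespace(choices=[fake_choice])

# Unwrap it exactly the way our route will.
answer = response.choices[0].message.content
print(answer)
```

Same drilling, same dot syntax; the real response object just arrives from the API instead of being built by hand.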
Now, for a more technical explanation of response.choices[0].message.content: response is the object sent back by the OpenAI API. The entire query is set equal to it, right? There, it's in the book; I just hadn't typed it yet.
I had all that, and then I went back and typed response equals. Response is the name we gave the object returned by the OpenAI API. Choices is a property of the response object,
the value of which is a list. In other words, we didn't have to call it response; we could have called it answer or something. So choices[0] is the first item in the choices list, and it contains the AI's answer to our question. message.content contains the AI's actual answer as text.
At that point, it is the answer. It's not contained in anything anymore; it's text. Now, if the try part fails, the except part will run.
If the try part fails, it's kind of like an if statement being false: the else part runs. So if the try part fails, the except part will run and return an error message.
We're going to add the except part, the syntax of which is except Exception as e. This is standard stuff; I'll let you Google it if you want to know all about it. And it's going to return a string to put in the browser: an error message.
We'll take e, the exception object, the error object, stringify it, and publish it like so if you see an error. Hopefully, we won't see an error. So that needs to go.
Python is very finicky about indenting. You have to line things up. So notice the try line already has a squiggly, like an error.
That's because it's waiting for the except part. There, look, it's happy now. We write except Exception as e. You could name the variable anything you like, but e is conventional. Oh, sorry:
it's not return, it's except. except Exception as e. Then we return whatever we want, a little concatenated string.
You could use concatenation, but you could also use string interpolation: return f"Exception: {e}". That would also work. Maybe we do that instead.
But hopefully, we'll never have to see that thing. So hopefully, we won't see this except block run, but if we do, it will return this error message into the browser. And again, in the normal case we're returning content.
That's text. That's the AI's text answer. So we're not rendering a template.
We're just kind of dumping text directly into the browser, right? So when we run server02 and go to the home route, refresh the browser, it's going to send the request automatically, because hitting the route triggers it. As soon as it hits the route, it's going to try sending this request prompt to the API, and if it works, it's going to get this response, which we're going to unpack and return. That is, dump into the browser.
Let's actually put a comment here: output AI's text answer to the browser. Unpack and output, right? You've got to unpack it first.
And another comment: if the request to the OpenAI API fails, run this thing. Hopefully that's not happening. And that ought to do it.
This is big. I mean, here's your very first time communicating with open AI, ChatGPT via an app that you made. So, kudos if it works.
We're going to take a little break after this as well. Or I will; you don't have to.
You're on your own schedule. Pause any time you want. Okay.
We're going to quit the server. We need to switch servers, so we're going to type Ctrl+C in the terminal to turn off the currently running server.
And then we're going to start server02. We don't have a server running now anyway, so we type python server02.py. It looks like it's working.
And the proof is in the pudding. Error code: the model gpt-40 does not exist.
Did I write a zero? I meant little o. Oh, yeah, I wrote a zero. Okay.
That's good, actually. Let's look at the error. This is great.
It says error code, so we did get to see an error.
Except: error code 404. Error message: the model gpt-40 does not exist.
It does not exist because it's 4, little o. So we come in here, take out the zero,
put in little o, and save. We don't have to restart; it restarted the server automatically.
All we have to do now is refresh. See if we get the answer. It's spinning its wheels.
Looks like it's sending a request. There. Boom.
High five, everybody. The term Grand Slam is used across various sports to describe a significant achievement, typically involving winning, which is what we're doing right now. We are winning, yo.
Look at that. Yes. Winning.
Look at that. Baseball. Grand Slam.
Golf. You know, and this is just dumping text, right? We're not trying to parse it. We didn't instruct the model to give it to us as JSON or in some nice format or anything.
We're just asking for text and dumping text, which is beside the point, right? The whole goal was: can we establish a connection to the OpenAI API, send a request, and get a result? Which we did. Mission accomplished on this lesson. Absolutely.
Refresh the browser. If all goes as expected, our chat message will be sent to the OpenAI API, which will respond with an impressively detailed answer. Our route function returns the response.
But since we are not rendering any HTML page, the result is raw text dumped into the browser, which is just fine to start. Absolutely wonderful to start. Here's the final code for this server.
API_KEY. You can call it OPENAI_API_KEY if you like. That's fine.
In fact, I like that name. Why don't we change it? Because there are a lot of APIs out there, right? Bitcoin, Chuck Norris jokes, whatever. We're not using those.
We're using OpenAI. Then the chat route: response equals all that stuff, and we return the response.
Handle the error. We did get an error the first time, because I had 4-zero instead of 4, little o, and I'm really glad we had that error, that mistake. And then this standard line at the end.
Yeah. Pat on the back. Good job.
Awesome job, everybody.