Discover how to leverage Python and OpenAI's GPT-4 API to create interactive AI applications that respond to user queries. Learn the essentials from API integration to parsing responses effectively.
Key Insights
- Connect to the OpenAI GPT-4 API using Python by generating an API key, installing the OpenAI module via pip, and instantiating a client object to interact with the model.
- Send structured prompts to GPT-4 by creating messages defined as dictionaries specifying roles (system or user) and content (instructions or queries), enabling targeted and meaningful AI responses.
- Parse AI responses delivered in JSON (JavaScript Object Notation) format, allowing for convenient extraction and display of structured data instead of unstructured raw text blocks.
Note: These materials offer prospective students a preview of how our classes are structured. Students enrolled in this course will receive access to the full set of materials, including video lectures, project-based assignments, and instructor feedback.
This is a lesson preview only. For the full lesson, purchase the course here.
Now, cranking it up a notch, we are going to connect to the OpenAI API and commence chatting with the AI. We need to generate an OpenAI API key, which we've already done, which you saw back even before Lesson 00. We are going to install in the terminal the OpenAI module and then import it into our Python page—our code.
We are going to write a prompt to instruct the AI model how to respond to a question, and we are going to send a request to the OpenAI model. The model we are using is called GPT-4o, which is very cutting edge.
As of this recording, anyway—the thing only debuted to the public in May of 2024, and it's still the latest, greatest model as of this recording. It handles image submissions in addition to text.
That ArtMink app that I showed you—it's in the App Store for free—uses the GPT-4o model, without which you couldn't submit images for analysis. Prior to that—go back to 2023, to GPT-3—you couldn't submit images for analysis. We are going to receive and parse a response from the AI in the form of what is called JSON, which is a structured object of key-value pairs (name-value pairs).
In other words, we could just get a blob of text back from the OpenAI API—the AI can answer us with a long string of text, which is fine. But better would be to break the response down into little nuggets that we can parse and output individually. If you look at my app, how does all this data get put into these various little slots? Because the data comes back from the API in what is called JSON—JavaScript Object Notation—a structured format that's the ubiquitous standard for sharing data around the web. It comes back as JSON and then gets parsed, as they say—broken into its constituent little pieces—and output to individual tags.
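To make that concrete, here's a tiny taste of parsing JSON in Python with the standard library's `json` module. The sample payload below is made up for illustration:

```python
import json

# A made-up JSON string, shaped like the key-value responses described above.
raw = '{"term": "Grand Slam", "sport": "baseball", "definition": "a home run with the bases loaded"}'

data = json.loads(raw)       # parse the JSON string into a Python dictionary
print(data["term"])          # pull out one "nugget" by its key
print(data["definition"])    # ...and another
```

Each value lands in its own slot, ready to be dropped into an individual tag instead of arriving as one blob of text.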
So without JSON, you're basically just going to get a blob of text back, which you don't want. We're going to output the AI response to the browser to start—without an index page. We're not getting into `render_template` right away in this lesson, because the focus is shifting to the OpenAI API.
So once we get the request back—mission accomplished—we're just going to dump text into the browser. All right, we're going to make server02.py, install the OpenAI module on the command line, and then import it into this server file. So from server01.py, do a Save As and call the new file server02.py.
Server01: File > Save As > server02. Quit the server and install the OpenAI module in the terminal. We can't have the server running for that.
We need to be seeing the command prompt. We will do Control+C, and now we're at our command prompt. And we're going to type `pip install openai`, all lowercase, one word, hit ENTER.
Look at all that stuff. Bam, it looks like it worked. Successfully installed.
I see green. I love it. Okay, that's what you're looking for.
If it hadn't worked, it would throw a giant, multi-line red error. We're golden. In server02 now—the Python file—we're going to import the OpenAI module with this `import openai` line.
Import OpenAI. Notice how here in the editor, when you import something you haven't used yet, it's dimmed—like 10% grayed out. It's waiting for you to use it.
All right, so what we're going to do is log into your OpenAI account and generate an API key. This requires you to have a premium paid account. However, if you're being provided an API key for this course, you can skip this step.
We did this already, but you would go to this link. You would log into your account. If you went here without being logged in, it would tell you to log in.
And you would click Create a New Secret Key. We can do this again. Create a new secret key.
Create, boom, and then copy what it gives you. I'm not going to do that part. We've already got our key.
Next, name the key. Call it Chatbot? Nah—we're calling it AI for Python Class, because it's not just for this project. We can use it for both of our big projects. We're doing two big projects, right? A chatbot and an image-based analyzer—kind of like my ArtMink app, but used for food analysis.
One key will do—we're not going to need two keys for this. Click Create New Secret Key, then Create Secret Key, and then click the copy button to copy the key.
In the server, below the app instantiation line, we're going to declare a variable called API_KEY and assign the key you copied as its value. We've already got the key copied over to a text file.
And we're going to set it equal as a string to an all-uppercase name. And the reason it's uppercase is because it's a constant—a flag to anyone looking at the code: Hey, don't change this.
Now in a more mature, robust application, you would hide your key. But we're not going to. That's a little bit outside the scope of what we're doing.
We're not going to hide the key. We're just going to keep it right here. And that's a fake—this is a fake key you're looking at. Even the real one wouldn't work by the time you're watching this, so please don't try to use this key in your projects; it isn't going to work.
We're going to go to API Key. We're going to copy this key. And then we're going to go to our server02.py.
And we're going to declare the variable right under the Flask app name line. We're going to declare API_KEY (uppercase) as a constant. And in quotes, paste our key.
Save your file. Step three: we're going to instantiate a Client object using the OpenAI client constructor method.
That's `Client` with a capital C. It's a function that's called on the openai module—one of the module's main constructors is called `Client`.
Like, who are you? You're the client. And you are identifying yourself. That's your ID.
That's going to return an object that we're going to save as `client`. And then we can call methods and properties on this client to communicate—sending requests to the OpenAI API, to the AI, to ChatGPT—and get answers back. All right.
We go `client = openai.Client(api_key=API_KEY)`. So you pass the API key in, setting it as the value of the lowercase `api_key` parameter. Now we're going to try sending a chat message to the OpenAI model.
Now "try" is in italics here because we are trying—literally the code we're using is called a try-except block. We're going to try something.
And if it doesn't work, we're going to run the except block. It's kind of like an if-else, except it involves trying something. And what happens if it fails? The except block will run if our try attempt fails.
It will print an error message. So inside the try block, we're going to send our request—our prompt—to ChatGPT, to the GPT-4o model. If it works, we will receive and handle a response—an answer.
But if it fails, the except block—kind of like the else part of an if statement—will run, and it'll just print an error message. So hopefully, we won't ever have to see the except block working. Because if it does, that means the thing we're trying to do didn't work.
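The try/except flow just described can be sketched with plain Python—a toy example, not the lesson's actual request code:

```python
def attempt_division(a, b):
    # Try the risky operation first.
    try:
        return f"Result: {a / b}"
    # If anything in the try block raises an error, this runs instead --
    # like the else arm of an if statement.
    except Exception as e:
        return f"Error: {e}"

print(attempt_division(10, 2))   # the try block succeeds
print(attempt_division(10, 0))   # dividing by zero triggers the except block
```

Same pattern, same hope: if all goes well, the except branch never runs.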
We're going to go to our home route. We can leave the function body alone, but we're going to rename the function to `chat`, just to focus on what we're making here—what we're doing.
It's also a reminder that the `index` name is not required. The homepage of a website is typically called `index`, but in Flask you don't have to name your home route's function `index`. Call it what you like—we're going to call it `chat`.
Whatever it's called, it'll run when you hit the route. When you hit the route, the `chat` route will run, the `chat` function will run. And what the function is going to do is send your request to the OpenAI model.
To the OpenAI API, where the model is waiting to receive. So inside the function, we're going to begin with a `try` block. And we're going to call—okay, so we have our `client` object, right? It got returned when we called the `Client` constructor on our openai module.
So on the `client` object, we're going to drill into the `chat` property, then the `completions` property, and call the `create` method on that. That's four C-things in a row: Client, chat, completions, create.
And there is a description below here of what all those mean. We can look at those later. Let's just keep rolling for now.
So inside our `chat` function, we open a `try` block and hit ENTER. We're going to say `client.chat.completions.create`, open up the parentheses on the `create` method, and pass in—actually, no, excuse me.
We're not going to pass in an object directly. We're going to set various parameters. The first one is called `model`.
And that value will be set equal to the model with which we want to communicate, and that is `"gpt-4o"`—a string in quotes. The value of the `model` parameter is a string.
So that's an equal sign: `model="gpt-4o"`, comma. Next parameter.
The next parameter that we're going to set is called `messages`, and `messages` takes a list as its value.
And the list has two items in it, both of which are dictionaries in Python (objects, in JSON terms). So after `model="gpt-4o"`, comma, `messages=[{…}, {…}]`—there, like so.
And you can do this—comma—we're not going to add anything after that, but it's pretty typical to put a comma at the end of your last property. That's called a trailing comma.
All right, before we dive in any further and start filling up these two dictionaries with content, let's pause a moment in case you're wondering. You know, I get students all the time who say, "I don't want to just type the code—I want to know what it all means."
Fair enough. So `client.chat.completions.create` is a method chain used in OpenAI's Python API—the OpenAI package. So `client` is an instance of OpenAI’s API Client.
That's the thing we instantiated here by passing it our API key. We said, “Hey, we’re the client.” `client` is the main entry point to interact with OpenAI’s API.
`.chat` accesses the chat endpoint, which is specifically for chat-based models like GPT-4, GPT-4o, and so forth—what we're using. `.completions` further narrows it down to chat completions, which involves sending a series of messages in a conversational format and receiving AI-generated responses. And `.create` is the actual function call that sends the request to the OpenAI API to generate a response based on the given parameters, and it returns a response object that can be saved.
So the entire command is going to return a response that we are going to save. Now, the GPT-4o model was released in May 2024, featuring lower token costs. That was one of the big breakthroughs.
I was nearing completion of my ArtMink app at that time, and this app allows users to upload up to eight images if they have a subscription, or just one otherwise. It consumes a lot of tokens to do this, right? So I was really psyched when they announced GPT-4o—not due to any improvements in quality. It wasn't noticeably better or smarter, and they didn't really tout it as such—but what they did make a big deal of (and I noticed immediately) was the lower token cost.
So it knocked the price of the request down. Every appraisal made via my app dropped to about half the price—from roughly two cents to one cent, depending on the number of pictures—which made AI-powered web applications instantly far more economical to develop and serve. So it's a big deal.
Now, the `messages` parameter is set equal to a list of two dictionaries. Each dictionary has two keys—`role` and `content`—whose values are as follows.
The AI's role is `system`. And for `content`, we put `"You are a helpful Assistant."` That's a standard designation, which tells the AI that our chat could involve just about any subject, right? We're not saying "You are a chef" or "You are a baseball trivia buff." We're saying "You are a helpful Assistant."
We, in theory, could ask it just about anything, really. It's that smart, actually. So let's go ahead and provide those two values to the first dictionary: `role` and `content`.
And the second dictionary also has `role` and `content`. So the dictionaries are: the first dictionary is the AI, and the second dictionary is you—the user—right? Because it’s a two-way conversation between the AI (the system) and you (the user).
The AI’s content is just the role defined. And the user’s content is the specific prompt or query or question. I’m going to keep it simple to begin.
We’re going to come in here and say: `role="system", content="You are a helpful Assistant."`
Let’s break this out a little bit better.
Okay. You can put all this stuff on one line. It’s just a little better if you break it up like that.
That’s pretty typical. I mean, maybe put the close of the `create` method—right? There’s a closing parenthesis for that—you could put that on the same line, maybe.
Or not. If you really want to be precise about it, the opening and closing square brackets of the list should be at the same indentation level, right? So `messages` starts here and then closes the list. Whereas `client.chat.completions.create` starts way to the left.
So its closing parenthesis should be there. And in either case, you've got this nice color coding in VS Code. We see that the `create` method's parentheses are blue.
So you can look for the beginning of any blue thing. And the `messages` list is orange. And the interior dictionaries are purple.
So that really helps a lot. So there's your `role` and your `content`. We can just copy-paste the first dictionary, tidy it up, and put a comma between the dictionaries.
The first dictionary's role is the AI (`system`); the second role, `user`, is you. And your content is going to be a question.
We’re going to say: “What is a Grand Slam in sports, and what is the origin of the term?” Fine, we’ll get a little more specific. We don’t want to just know that it’s winning all four major tennis championships—Wimbledon, French Open, US Open, and Australian Open—in the same year. That’s interesting.
Grand Slam—baseball—home run with the bases loaded. Yeah, but where did it come from though? Like, why do they call it that? That’d be interesting. We ask a little more specifically.
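Filled in, the two role/content dictionaries look like this—a standalone sketch of just the `messages` value from this lesson:

```python
# The system dictionary sets the AI's job; the user dictionary carries our question.
messages = [
    {"role": "system", "content": "You are a helpful Assistant."},
    {"role": "user", "content": "What is a Grand Slam in sports, "
                                "and what is the origin of the term?"},
]

for message in messages:
    print(message["role"], "->", message["content"])
```

Two dictionaries, two keys each: one half of the conversation is the AI's job description, the other half is your question.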
On the next line, still inside the try block, we're going to extract and return the response content, which is the AI's answer to our question. Accessing the text response—the actual answer text—involves drilling into several objects, which probably doesn't come as a big surprise. I mean, look at how much dot syntax there is to get to the `create` method.
So the answer coming back—you've got to drill a little bit to unpack the answer. It's kind of like the answer comes back double or triple wrapped. So rather than return a rendering of a template, we're going to return a response.
So we're not returning the rendering of a template; we're going to return `response.choices[0].message.content`. But first—this entire `create` call returns a response.
So we have to say `response = client.chat.completions.create(…)`. The `response` variable is going to equal the return value of that call, and that is what we're going to drill into.
`response` has a `choices` list (array) inside of it, where at item zero (index 0), the first item in that list has a `message` property, and inside that, a `content` property, which contains the actual text answer returned by OpenAI.
To put it in more simple real-world terms, it's kind of like getting a package in the mail. You order something on Amazon, like a book. It's not just a book—it’s in a box, maybe wrapped in inner packaging with styrofoam or padding. You've got to open multiple layers to get to it. That’s what we're doing here.
Now for a more technical explanation: `response.choices[0].message.content`. `response` is the object returned by the OpenAI API. It equals the return value of your query. It’s what we saved with the name `response` earlier.
`choices` is a property of the `response` object. (We could have named our variable `answer` or anything else, but `response` is the conventional name.) `choices[0]` is the first item in the `choices` list, and it contains the AI's answer to our question. `message.content` contains the AI's actual answer as a string of text.
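You can rehearse that unwrapping with a stand-in object built from the standard library's `types.SimpleNamespace`. The real object comes back from the API, but it has the same shape:

```python
from types import SimpleNamespace

# A stand-in with the same nesting as the SDK's response object.
response = SimpleNamespace(
    choices=[
        SimpleNamespace(
            message=SimpleNamespace(content="A grand slam is a significant achievement...")
        )
    ]
)

# Drill through the layers: choices list -> first item -> .message -> .content.
answer = response.choices[0].message.content
print(answer)   # plain text -- no wrapping left
```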
At that point, it's the actual answer. It's not wrapped in anything anymore—it's just plain text. Now, if the `try` part fails, the `except` part will run.
It's like an `if` statement whose condition is false, causing the `else` block to run. If `try` fails, `except` runs and returns an error message.
We're going to add the `except` part. The syntax features an `Exception` object, called `e`. This is standard stuff. You can Google it if you want more detail. We’re going to return a simple string that includes the error message.
We'll take the `e` (the Exception object) and stringify it and return it like so. Hopefully, we won’t have to see it.
Python is very finicky about indenting. You have to line things up. Notice how the `try` line shows an error if there's no `except`—it's waiting. Once you add `except`, the error goes away. Python’s happy again.
`except Exception as e:` and then `return f"Error: {e}"` works just fine. You could use string interpolation (an f-string), or you could concatenate a string. Either way is fine.
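Both styles produce the same string—a quick side-by-side, using a made-up exception for illustration:

```python
e = ValueError("model not found")     # a made-up exception for illustration

interpolated = f"Error: {e}"          # f-string interpolation
concatenated = "Error: " + str(e)     # concatenation; stringify e explicitly

print(interpolated)                   # Error: model not found
print(interpolated == concatenated)   # True
```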
Hopefully, this exception will never get triggered. But if it does, it will display the error message directly in the browser. And again, what we’re returning in the success case is just text—the AI’s answer—not an HTML page.
We’re dumping text into the browser. When we run `server02.py` and go to the home route and refresh the browser, it’s going to send the request automatically—because hitting the route triggers the request. And if it works, it’ll return the AI’s response, which we unpack and output to the browser.
Actually, note that here: output the AI's text answer to the browser. Unpack and output, right? You’ve got to unpack it first.
If the request to the OpenAI API fails, we run the exception handler. Hopefully, that won’t happen. And that ought to do it.
This is big. This is your very first time communicating with OpenAI—ChatGPT—via an app that you made. So kudos if it works!
We're going to take a little break after this as well. Or I will. You don’t have to.
You can pause anytime you want. Okay.
We're going to quit the server. We need to switch servers. We’ll type `Control+C` in the terminal to shut down the currently running server.
Then we'll start `server02.py`. We don't have a server running anyway. We'll type `python server02.py`. Looks like it's working.
And the proof is in the pudding—
Error code: oh—the model `gpt-40` does not exist.
Did I write a zero? I meant lowercase "o". Oh yeah—I wrote a zero. Okay.
That’s good. Let’s look at the error. This is great, actually.
It says: Error code. We did get to see an error after all.
Exception: error code 404. Error message: The model `gpt-40` does not exist.
It does not exist. It's `gpt-4o`. We go in there, take out the zero, and put in a lowercase "o". Save.
No need to restart the server—it restarts automatically. All we have to do is refresh the browser. Let’s see if we get the answer. It’s spinning its wheels…
Looks like it's sending a request. There—boom.
High five, everybody. The term "Grand Slam" is used across various sports to describe a significant achievement—typically involving winning—which is what we're doing right now. We are winning, yo.
Look at that. Yes. Winning.
Look at that—baseball, Grand Slam.
Golf. And this is just dumping text, right? We’re not trying to parse it. We didn’t instruct ChatGPT to give it to us as JSON in some formatted way.
We just asked for text and dumped text, which is fine. The goal was to establish a connection to the OpenAI API and send a request—and get a result. Which we did. Mission accomplished on this lesson. Absolutely.
Refresh the browser. If all goes as expected, our chat message will be sent to the OpenAI API, which will respond with an impressively detailed answer. Our route function returns the response.
But since we are not rendering any HTML page, the result is raw text dumped into the browser. Which is just fine to start. Absolutely wonderful to start. Here’s the final code for this server.
`API_KEY`—you can call it `OPENAI_API_KEY` if you like. That’s fine.
In fact, I like that name. Let’s change the name—because there are a lot of APIs: Bitcoin, Chuck Norris jokes, whatever. But we’re using OpenAI.
`chat`, `response = …`, return the response.
Handle the error. We did get an error the first time because I had a zero instead of a lowercase "o" in the model name, which I'm really glad we saw. And then this standard line at the end—
Yeah. Pat on the back. Good job.
Awesome job, everybody.