Hey there, fellow developer! Ready to dive into the world of AI-powered applications? You're in the right place. OpenAI's API is a game-changer, and integrating it into your Python projects can open up a whole new realm of possibilities. Let's get you up and running with this powerful tool.
Before we jump in, make sure you've got:

- Python 3.7 or later installed
- An OpenAI account and an API key
- Basic familiarity with Python

Got those? Great! Let's move on.
First things first, let's get the OpenAI package installed. The examples in this guide use the 0.x interface of the library (`openai.Completion`, `openai.ChatCompletion`), which changed in the 1.0 release, so pin the version:

```shell
pip install "openai<1.0"
```
Easy peasy, right?
Now, let's set up your API key. For security reasons, we'll use an environment variable:
```python
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")
```
Pro tip: Never hardcode your API key. Your future self will thank you.
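On macOS or Linux, you can set the variable in your shell before running your script (the value below is a placeholder; use your own key):

```shell
# Set the key for the current shell session only (placeholder value shown).
export OPENAI_API_KEY="sk-your-key-here"
```

To make it permanent, add that line to your shell profile (e.g. `~/.bashrc` or `~/.zshrc`).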
Let's make our first API call:
```python
text = "Hello, how are you?"
response = openai.Completion.create(
    engine="text-davinci-002",
    prompt=f"Translate the following English text to French: '{text}'",
    max_tokens=60,
)
print(response.choices[0].text.strip())
```
Boom! You've just made your first OpenAI API call. How cool is that?
Ready to level up? Let's explore some advanced features:
```python
response = openai.Completion.create(
    engine="text-davinci-002",
    prompt="Write a haiku about programming",
    max_tokens=50,
    n=1,
    stop=None,
    temperature=0.7,
)
```
Play around with these parameters to see how they affect the output.
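To build intuition for `temperature`, here's a toy sketch (not the API's actual implementation) of the usual trick: dividing logits by the temperature before a softmax. Low values sharpen the distribution toward the top choice; high values flatten it:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Toy illustration: scale logits by 1/temperature, then softmax."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, 0.2)  # near-greedy
hot = softmax_with_temperature(logits, 2.0)   # closer to uniform
print(cold[0] > hot[0])  # prints True: the top token dominates more when cold
```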
OpenAI offers various models. Here's how you can use GPT-3.5-turbo:
```python
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the World Series in 2020?"},
    ],
)
print(response.choices[0].message.content)
```
For longer responses, streaming can improve user experience:
```python
for chunk in openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Tell me a long story"}],
    stream=True,
):
    if chunk.choices[0].delta.get("content"):
        print(chunk.choices[0].delta.content, end="")
```
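Under the hood, that streaming loop is just ordinary Python iteration over a generator of chunks. Here's a self-contained stand-in (plain dicts, not the real API objects) showing the same accumulate-as-you-go pattern:

```python
def fake_stream():
    """Stand-in for a streaming response: yields small text deltas."""
    for piece in ["Once ", "upon ", "a ", "time..."]:
        yield {"content": piece}

story = []
for chunk in fake_stream():
    if chunk.get("content"):  # skip empty deltas, as in the real loop
        story.append(chunk["content"])
print("".join(story))  # prints "Once upon a time..."
```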
Always be prepared for errors:
```python
try:
    response = openai.Completion.create(
        engine="text-davinci-002",
        prompt="Tell me a joke",
        max_tokens=60,
    )
except openai.error.RateLimitError as e:
    # Catch the more specific errors first so a broad handler doesn't shadow them.
    print(f"OpenAI API request exceeded rate limit: {e}")
except openai.error.APIConnectionError as e:
    print(f"Failed to connect to OpenAI API: {e}")
except openai.error.APIError as e:
    print(f"OpenAI API returned an API Error: {e}")
```
Respect the API's rate limits. Implement exponential backoff:
```python
import time
import random

def api_call_with_backoff(func, max_retries=5):
    for attempt in range(max_retries):
        try:
            return func()
        except openai.error.RateLimitError:
            if attempt == max_retries - 1:
                raise
            sleep_time = (2 ** attempt) + random.random()
            time.sleep(sleep_time)
```
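To see the wait schedule this produces, you can print the base delay for each attempt; the random jitter adds up to one extra second on top of each so that many clients don't retry in lockstep:

```python
import random

# Base delay doubles each attempt: 1s, 2s, 4s, 8s, 16s.
delays = [2 ** attempt for attempt in range(5)]
for attempt, base in enumerate(delays):
    jitter = random.random()  # uniform in [0, 1)
    print(f"attempt {attempt}: sleep ~{base + jitter:.2f}s (base {base}s)")
```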
Let's put it all together:
```python
import openai

def chat(prompt):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

while True:
    user_input = input("You: ")
    if user_input.lower() in ["quit", "exit", "bye"]:
        break
    response = chat(user_input)
    print(f"AI: {response}")
```
Always test your API calls:
```python
def test_api_call():
    response = openai.Completion.create(
        engine="text-davinci-002",
        prompt="Say this is a test",
        max_tokens=10,
    )
    # Model output is nondeterministic, so assert on shape, not exact text.
    assert response.choices[0].text.strip()
    print("Test passed!")

test_api_call()
```
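If you'd rather not spend tokens (or hit the network) every time your test suite runs, one common pattern is to pass the API call in as a parameter so a stub can stand in for it. This is an illustrative sketch, not part of the openai library itself:

```python
def translate(create_completion, text):
    """Wraps the completion call so tests can inject a stub."""
    response = create_completion(
        prompt=f"Translate the following English text to French: '{text}'",
        max_tokens=60,
    )
    return response["choices"][0]["text"].strip()

def fake_create(**kwargs):
    # Stub standing in for openai.Completion.create; no network needed.
    return {"choices": [{"text": " Bonjour "}]}

assert translate(fake_create, "Hello") == "Bonjour"
print("Stub test passed!")
```

In production you'd call `translate(openai.Completion.create, "Hello")`; in tests, the stub keeps things fast and deterministic.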
And there you have it! You're now equipped to harness the power of OpenAI's API in your Python projects. Remember, the key to mastery is practice and experimentation. So go forth and create something amazing!
For more in-depth information, check out the OpenAI API documentation. Happy coding!