# ChatGPT API Tutorial: Build Your First AI App in 2025
The OpenAI API lets you build AI-powered applications using the same models behind ChatGPT. This hands-on tutorial walks you through building your first AI application from scratch.
## What You'll Build
By the end of this tutorial, you'll have:
- A working API connection
- A simple chatbot
- A streaming chat interface
- A practical AI-powered application
## Prerequisites
- Basic programming knowledge (Python or JavaScript)
- A text editor or IDE
- An OpenAI account
- ~$5-10 for API credits (very affordable to start)
## Getting Started

### Step 1: Create an OpenAI Account and Get an API Key
1. Go to platform.openai.com
2. Sign up or log in
3. Open the API Keys section
4. Click "Create new secret key"
5. Copy and save the key securely (you won't see it again!)
**Important:** Never commit your API key to version control or share it publicly.
### Step 2: Add Credits

1. Go to the Billing section
2. Add a payment method
3. Add $5-10 in credits to start
4. Set usage limits to prevent surprises
### Step 3: Set Up Your Environment

**Python Setup:**
```bash
# Create a project folder
mkdir my-ai-app
cd my-ai-app

# Create a virtual environment
python -m venv venv

# Activate it
# Windows:
venv\Scripts\activate
# Mac/Linux:
source venv/bin/activate

# Install the OpenAI library
pip install openai python-dotenv
```
**JavaScript/Node.js Setup:**

```bash
mkdir my-ai-app
cd my-ai-app
npm init -y
npm install openai dotenv
```
### Step 4: Store Your API Key Safely

Create a `.env` file:

```
OPENAI_API_KEY=sk-your-api-key-here
```

Create a `.gitignore`:

```
.env
venv/
node_modules/
```
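It also pays to fail fast when the key is missing rather than erroring on the first API call. Here's a small stdlib-only startup check (the `sk-` prefix test is a convention, not a guarantee of key format):

```python
import os

def load_api_key(var: str = "OPENAI_API_KEY") -> str:
    """Read the API key from the environment and fail fast if it's missing."""
    key = os.environ.get(var, "").strip()
    if not key:
        raise RuntimeError(
            f"{var} is not set. Add it to your .env file or export it in your shell."
        )
    if not key.startswith("sk-"):
        # OpenAI keys conventionally start with 'sk-'; warn rather than fail.
        print(f"Warning: {var} does not look like an OpenAI key.")
    return key

# Demo with a placeholder value (never hardcode a real key like this)
os.environ.setdefault("OPENAI_API_KEY", "sk-placeholder")
print(load_api_key()[:3])
```

Call this once at startup so a misconfigured environment surfaces immediately, with a message that tells you what to fix.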
## Your First API Call

### Python Version

Create `first_call.py`:
```python
from openai import OpenAI
from dotenv import load_dotenv
import os

# Load the API key from .env
load_dotenv()

# Initialize the client
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# Make your first API call
response = client.chat.completions.create(
    model="gpt-4o-mini",  # Affordable and capable
    messages=[
        {"role": "user", "content": "Say hello and introduce yourself!"}
    ]
)

# Print the response
print(response.choices[0].message.content)
```

Run it:

```bash
python first_call.py
```
### JavaScript Version

Create `first_call.js` (add `"type": "module"` to your `package.json` so Node accepts the `import` syntax):
```javascript
import OpenAI from 'openai';
import dotenv from 'dotenv';

dotenv.config();

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

async function main() {
  const response = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [
      { role: 'user', content: 'Say hello and introduce yourself!' }
    ],
  });

  console.log(response.choices[0].message.content);
}

main();
```

Run it:

```bash
node first_call.js
```
## Understanding the API

### Models Available
| Model | Best For | Cost (per 1M tokens) |
|---|---|---|
| gpt-4o | Best quality, multimodal | $5 in / $15 out |
| gpt-4o-mini | Great value, fast | $0.15 in / $0.60 out |
| gpt-4-turbo | Complex reasoning | $10 in / $30 out |
| gpt-3.5-turbo | Budget option | $0.50 in / $1.50 out |
**Recommendation:** Start with gpt-4o-mini for development; it's cheap and capable.
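To make the table concrete, here's an illustrative helper that estimates the cost of a single call from its token counts, using the per-million-token prices above (pricing changes over time, so treat these numbers as a snapshot):

```python
# Per-million-token prices from the table above (USD); update if pricing changes.
PRICES = {
    "gpt-4o":        {"in": 5.00,  "out": 15.00},
    "gpt-4o-mini":   {"in": 0.15,  "out": 0.60},
    "gpt-4-turbo":   {"in": 10.00, "out": 30.00},
    "gpt-3.5-turbo": {"in": 0.50,  "out": 1.50},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one API call from its token counts."""
    p = PRICES[model]
    return (input_tokens * p["in"] + output_tokens * p["out"]) / 1_000_000

# A typical chatbot exchange: ~500 tokens in, ~300 tokens out
print(f"${estimate_cost('gpt-4o-mini', 500, 300):.6f}")  # $0.000255
```

As the output shows, a full gpt-4o-mini exchange costs a small fraction of a cent, which is why it's the recommended model for development.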
### Message Roles

```python
messages = [
    {"role": "system", "content": "You are a helpful assistant."},  # Sets behavior
    {"role": "user", "content": "Hello!"},                          # User input
    {"role": "assistant", "content": "Hi! How can I help?"},        # AI response
    {"role": "user", "content": "Tell me a joke."}                  # Next user message
]
```
- `system`: Sets the AI's behavior and context
- `user`: Messages from the user
- `assistant`: Previous AI responses (for conversation history)
### Key Parameters

```python
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=messages,

    # Control output
    max_tokens=500,          # Maximum response length
    temperature=0.7,         # Creativity (0 = focused, 2 = creative)
    top_p=1.0,               # Nucleus sampling

    # Advanced
    frequency_penalty=0.0,   # Reduce repetition
    presence_penalty=0.0,    # Encourage new topics
    stop=["\n"],             # Stop sequences
)
```
**Temperature guide:**
- 0.0-0.3: Factual, consistent (code, analysis)
- 0.5-0.7: Balanced (general use)
- 0.8-1.0: Creative (writing, brainstorming)
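The guide above can be folded into a tiny helper; the task labels here are just illustrative categories for this tutorial, not an API concept:

```python
def suggest_temperature(task: str) -> float:
    """Map a task type to a starting temperature, per the guide above."""
    presets = {
        "code": 0.2,            # Factual, consistent
        "analysis": 0.2,
        "general": 0.7,         # Balanced
        "writing": 0.9,         # Creative
        "brainstorming": 0.9,
    }
    return presets.get(task, 0.7)  # Default to balanced

print(suggest_temperature("code"))     # 0.2
print(suggest_temperature("writing"))  # 0.9
```

Treat these as starting points and adjust based on the outputs you actually see.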
## Building a Chatbot

### Simple Command-Line Chatbot
```python
from openai import OpenAI
from dotenv import load_dotenv
import os

load_dotenv()
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

def chat():
    # Store conversation history
    messages = [
        {"role": "system", "content": "You are a helpful assistant. Be concise but friendly."}
    ]

    print("Chatbot started! Type 'quit' to exit.\n")

    while True:
        # Get user input
        user_input = input("You: ").strip()

        if user_input.lower() == 'quit':
            print("Goodbye!")
            break

        if not user_input:
            continue

        # Add the user message to the history
        messages.append({"role": "user", "content": user_input})

        # Get the AI response
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages,
            max_tokens=500,
            temperature=0.7
        )

        # Extract the response
        assistant_message = response.choices[0].message.content

        # Add it to the history for context
        messages.append({"role": "assistant", "content": assistant_message})

        print(f"\nAssistant: {assistant_message}\n")

if __name__ == "__main__":
    chat()
```
### Specialized Assistant
Create a domain-specific assistant:
```python
from openai import OpenAI
from dotenv import load_dotenv
import os

load_dotenv()
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# Create a specialized assistant
SYSTEM_PROMPT = """You are an expert Python programming tutor.

Your role:
- Help users learn Python
- Explain concepts clearly
- Provide code examples
- Suggest best practices
- Be encouraging and patient

Guidelines:
- Keep explanations beginner-friendly unless asked otherwise
- Always include code examples
- Point out common mistakes
- Suggest next topics to learn

When providing code:
- Include comments
- Explain each part
- Show example usage"""

def python_tutor():
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]

    print("Python Tutor started! Ask any Python question.\n")

    while True:
        user_input = input("You: ").strip()

        if user_input.lower() in ['quit', 'exit']:
            break

        messages.append({"role": "user", "content": user_input})

        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages,
            max_tokens=1000,
            temperature=0.5  # Lower for more consistent technical answers
        )

        assistant_message = response.choices[0].message.content
        messages.append({"role": "assistant", "content": assistant_message})

        print(f"\nTutor: {assistant_message}\n")

if __name__ == "__main__":
    python_tutor()
```
## Streaming Responses

Streaming shows responses as they're generated (like ChatGPT's interface).

### Python Streaming
```python
from openai import OpenAI
from dotenv import load_dotenv
import os

load_dotenv()
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

def stream_chat():
    messages = [
        {"role": "system", "content": "You are a helpful assistant."}
    ]

    print("Streaming chatbot started!\n")

    while True:
        user_input = input("You: ").strip()

        if user_input.lower() == 'quit':
            break

        messages.append({"role": "user", "content": user_input})

        print("Assistant: ", end="", flush=True)

        # Create a streaming response
        stream = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages,
            stream=True  # Enable streaming
        )

        # Collect the full response while printing
        full_response = ""
        for chunk in stream:
            if chunk.choices[0].delta.content is not None:
                content = chunk.choices[0].delta.content
                print(content, end="", flush=True)
                full_response += content

        print("\n")  # New line after the response

        # Add the complete response to the history
        messages.append({"role": "assistant", "content": full_response})

if __name__ == "__main__":
    stream_chat()
```
### JavaScript Streaming
```javascript
import OpenAI from 'openai';
import dotenv from 'dotenv';
import * as readline from 'readline';

dotenv.config();
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

async function streamChat() {
  const messages = [
    { role: 'system', content: 'You are a helpful assistant.' }
  ];

  console.log('Streaming chatbot started!\n');

  const askQuestion = () => {
    rl.question('You: ', async (input) => {
      if (input.toLowerCase() === 'quit') {
        rl.close();
        return;
      }

      messages.push({ role: 'user', content: input });
      process.stdout.write('Assistant: ');

      const stream = await openai.chat.completions.create({
        model: 'gpt-4o-mini',
        messages: messages,
        stream: true,
      });

      let fullResponse = '';
      for await (const chunk of stream) {
        const content = chunk.choices[0]?.delta?.content || '';
        process.stdout.write(content);
        fullResponse += content;
      }

      console.log('\n');
      messages.push({ role: 'assistant', content: fullResponse });

      askQuestion();
    });
  };

  askQuestion();
}

streamChat();
```
## Practical Project: AI Content Analyzer

Let's build something useful: a content analyzer that examines text and provides insights.
```python
from openai import OpenAI
from dotenv import load_dotenv
import os
import json

load_dotenv()
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

def analyze_content(text: str) -> dict:
    """Analyze content and return structured insights."""
    system_prompt = """You are a content analysis expert.
Analyze the provided text and return a JSON object with:
{
    "summary": "2-3 sentence summary",
    "sentiment": "positive/negative/neutral",
    "key_topics": ["topic1", "topic2", "topic3"],
    "tone": "formal/informal/technical/conversational",
    "reading_level": "elementary/high_school/college/expert",
    "word_count": number,
    "suggestions": ["improvement suggestion 1", "suggestion 2"]
}
Return ONLY valid JSON, no other text."""

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": f"Analyze this text:\n\n{text}"}
        ],
        temperature=0.3,  # Low for consistent structured output
        response_format={"type": "json_object"}  # Ensure JSON output
    )

    return json.loads(response.choices[0].message.content)

def main():
    # Example text to analyze
    sample_text = """
    Machine learning is revolutionizing how we interact with technology.
    From voice assistants to recommendation systems, AI is becoming an
    integral part of our daily lives. While some fear job displacement,
    many experts believe AI will create new opportunities and augment
    human capabilities rather than replace them. The key is to adapt
    our skills and embrace lifelong learning.
    """

    print("Analyzing content...\n")
    analysis = analyze_content(sample_text)

    print("📊 Content Analysis Results")
    print("=" * 40)
    print(f"\n📝 Summary:\n{analysis['summary']}")
    print(f"\n💭 Sentiment: {analysis['sentiment']}")
    print(f"\n🎯 Key Topics: {', '.join(analysis['key_topics'])}")
    print(f"\n🎨 Tone: {analysis['tone']}")
    print(f"\n📚 Reading Level: {analysis['reading_level']}")
    print(f"\n📏 Word Count: {analysis['word_count']}")
    print(f"\n💡 Suggestions:")
    for i, suggestion in enumerate(analysis['suggestions'], 1):
        print(f"  {i}. {suggestion}")

if __name__ == "__main__":
    main()
```
## Advanced Techniques

### Function Calling

Let the AI trigger functions in your code:
```python
import json
from openai import OpenAI
from dotenv import load_dotenv
import os

load_dotenv()
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# Define available functions
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City name, e.g., 'London, UK'"
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"]
                    }
                },
                "required": ["location"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "search_products",
            "description": "Search for products in the database",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string"},
                    "category": {"type": "string"},
                    "max_price": {"type": "number"}
                },
                "required": ["query"]
            }
        }
    }
]

# Actual function implementations
def get_weather(location: str, unit: str = "celsius") -> str:
    # In a real app, call a weather API
    return f"The weather in {location} is 22°{unit[0].upper()}, sunny."

def search_products(query: str, category: str = None, max_price: float = None) -> str:
    # In a real app, query a database
    return f"Found 3 products matching '{query}'" + (f" in {category}" if category else "")

# Map function names to implementations
function_map = {
    "get_weather": get_weather,
    "search_products": search_products
}

def chat_with_functions(user_message: str):
    messages = [
        {"role": "system", "content": "You are a helpful assistant with access to weather and product search functions."},
        {"role": "user", "content": user_message}
    ]

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
        tools=tools,
        tool_choice="auto"
    )

    message = response.choices[0].message

    # Check if the AI wants to call a function
    if message.tool_calls:
        # Add the assistant's tool-call message to the history (once, not per call)
        messages.append(message)

        # Execute each function call
        for tool_call in message.tool_calls:
            function_name = tool_call.function.name
            function_args = json.loads(tool_call.function.arguments)

            # Call the actual function
            result = function_map[function_name](**function_args)

            # Add the function result to the messages
            messages.append({
                "role": "tool",
                "tool_call_id": tool_call.id,
                "content": result
            })

        # Get the final response, now informed by the function results
        final_response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages
        )
        return final_response.choices[0].message.content

    return message.content

# Test it
print(chat_with_functions("What's the weather like in Tokyo?"))
print(chat_with_functions("Find me headphones under $100"))
```
### Error Handling

```python
import os
import time
from openai import OpenAI, APIError, RateLimitError, AuthenticationError
from dotenv import load_dotenv

load_dotenv()
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

def robust_api_call(messages, max_retries=3):
    """Make an API call with error handling and retries."""
    for attempt in range(max_retries):
        try:
            response = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=messages,
                max_tokens=500
            )
            return response.choices[0].message.content

        except AuthenticationError:
            print("❌ Invalid API key. Check your credentials.")
            raise

        except RateLimitError:
            wait_time = 2 ** attempt  # Exponential backoff
            print(f"⏳ Rate limited. Waiting {wait_time}s...")
            time.sleep(wait_time)

        except APIError as e:
            print(f"⚠️ API error: {e}")
            if attempt < max_retries - 1:
                time.sleep(1)
            else:
                raise

    return None
```
### Managing Costs

```python
from openai import OpenAI
from dotenv import load_dotenv
import os

load_dotenv()
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

def estimate_tokens(text: str) -> int:
    """Rough token estimate (4 chars ≈ 1 token)."""
    return len(text) // 4

def chat_with_budget(messages, max_budget_tokens=1000):
    """Chat with token budget management."""
    # Estimate input tokens
    input_estimate = sum(estimate_tokens(m["content"]) for m in messages)

    if input_estimate > max_budget_tokens * 0.6:
        # Trim the conversation history if it's too long
        messages = [messages[0]] + messages[-4:]  # Keep system + last 4 messages

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
        max_tokens=int(max_budget_tokens * 0.4)  # Reserve the rest for output
    )

    # Log actual usage
    usage = response.usage
    print(f"Tokens used - Input: {usage.prompt_tokens}, Output: {usage.completion_tokens}")

    return response.choices[0].message.content
```
## Best Practices

**1. Secure Your API Key**

- Use environment variables
- Never hardcode keys
- Rotate keys periodically
- Set usage limits in the OpenAI dashboard

**2. Handle Errors Gracefully**

- Implement retries with backoff
- Catch specific exceptions
- Provide user-friendly error messages

**3. Manage Conversation Length**

- Trim old messages to control costs
- Summarize long conversations
- Set `max_tokens` appropriately
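The trimming idea can be sketched as a small pure-Python helper that keeps the system prompt plus the most recent turns (the cutoff of six messages here is arbitrary; tune it to your context budget):

```python
def trim_history(messages: list, keep_last: int = 6) -> list:
    """Keep the system prompt plus the most recent messages."""
    if len(messages) <= keep_last + 1:
        return messages
    system = [m for m in messages if m["role"] == "system"][:1]
    recent = [m for m in messages if m["role"] != "system"][-keep_last:]
    return system + recent

# 1 system message + 20 chat messages -> 1 system + last 6
history = [{"role": "system", "content": "Be helpful."}]
history += [
    {"role": "user" if i % 2 == 0 else "assistant", "content": f"msg {i}"}
    for i in range(20)
]
trimmed = trim_history(history)
print(len(trimmed))            # 7
print(trimmed[-1]["content"])  # msg 19
```

Run it before each API call so old turns stop inflating your input token count; for long-running assistants, you can go further and replace the dropped turns with a one-message summary.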
**4. Optimize Prompts**

- Be specific and clear
- Use system prompts effectively
- Test different temperatures
- Use JSON mode for structured output

**5. Monitor Usage**

- Track token usage
- Set budget alerts
- Log requests for debugging
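One lightweight way to log requests is an append-only JSON Lines file; the path and field names here are just one possible layout, and in practice you'd pass in real values from `response.usage`:

```python
import json
import os
import tempfile
import time

def log_request(path: str, model: str, messages: list, usage: dict) -> None:
    """Append one JSON line per request for later debugging and auditing."""
    entry = {"ts": time.time(), "model": model, "messages": messages, "usage": usage}
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example with hypothetical usage numbers
log_path = os.path.join(tempfile.gettempdir(), "ai_app_requests.jsonl")
log_request(
    log_path,
    "gpt-4o-mini",
    [{"role": "user", "content": "Hello"}],
    {"prompt_tokens": 9, "completion_tokens": 12},
)
print("logged to", log_path)
```

A JSONL log like this is easy to grep and to load back with `json.loads` line by line when you're tracking down a bad response or tallying token spend.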
## Next Steps

Now that you've built your first AI app:

- **Expand your chatbot** - Add function calling, file upload, memory
- **Build a web interface** - Use Flask/FastAPI (Python) or Express (Node)
- **Explore other models** - Try GPT-4o for advanced tasks, Claude for writing
- **Add RAG** - Combine with vector databases for knowledge-based apps
- **Deploy** - Host on Vercel, Railway, or AWS
## Frequently Asked Questions

**Q: How much does the API cost?**
A: GPT-4o-mini is very affordable: about $0.15 per million input tokens. A simple chatbot conversation costs fractions of a cent.

**Q: Is there a free tier?**
A: New accounts get free credits. After that, you pay for what you use.

**Q: How is this different from ChatGPT Plus?**
A: ChatGPT Plus ($20/mo) is for the chat interface. The API is pay-per-use for building your own applications.

**Q: Can I use the API for commercial applications?**
A: Yes, the API is designed for commercial use. Check OpenAI's terms for specifics.

**Q: How do I handle long conversations?**
A: Summarize or truncate older messages to manage context length and costs.
## Conclusion
You've learned to:
- Set up the OpenAI API
- Make basic API calls
- Build a chatbot with conversation history
- Implement streaming responses
- Create specialized AI assistants
- Use function calling
- Handle errors and manage costs
The OpenAI API is your gateway to building AI-powered applications. Start simple, experiment freely (costs are low), and gradually build more complex applications.
The best way to learn is by building. Pick a project idea and start coding!
## About the author
Elysiate publishes practical guides and privacy-first tools for data workflows, developer tooling, SEO, and product engineering.