Getting an OpenAI API Key

[Estimated reading time for this document: 20 minutes. Estimated time to watch the linked videos and run the Python code: 4.5 hours. Please share this guide with your friends and colleagues who might find it helpful.]

This comprehensive guide overviews Lesson 18, Building OpenAI API-Based Python Generative AI Applications, from Paul Deitel’s Python Fundamentals video course on O’Reilly Online Learning. The lesson focuses on building Python apps using OpenAI’s generative AI (genAI) APIs. This document guides you through Paul’s hands-on examples and provides Try It exercises so you can experiment with the APIs. You’ll leverage the OpenAI APIs to create intelligent, multimodal apps that understand, generate and manipulate text, code, images, audio and video content.

This guide links you to 31 videos totaling about 3.5 hours in Paul’s Python Fundamentals video course. These present fully coded Python genAI apps that use the OpenAI APIs to

  • summarize documents
  • determine text’s sentiment (positive, neutral or negative)
  • use vision capabilities to generate accessible image descriptions
  • translate text among spoken languages
  • generate and manipulate Python code
  • extract named entities from text, such as people, places, organizations, dates, times, events, products and more
  • transcribe speech to text
  • synthesize speech from text, using one of OpenAI’s 11 voices and prompts that control style and tone
  • create original images
  • transfer art styles to images via text prompts
  • transfer styles between images
  • generate video closed captions
  • filter inappropriate content
  • generate and remix videos (under development at the time of this writing—uses OpenAI’s recently released Sora 2 API)
  • build agentic AI apps (under development at the time of this writing—uses OpenAI’s recently released AgentKit)

The remaining videos overview concepts and present genAI prompt and coding exercises you can use to dig deeper into the presented topics.

Videos:

How We Formed This Guide

We created the initial draft of this guide using five genAIs: OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude, Microsoft’s Copilot and Perplexity. We provided each with

  • a detailed prompt,
  • a Chapter 18 draft from our forthcoming Python for Programmers, 2/e product suite and
  • a list of the video titles and links you’ll find in this guide.

We asked Claude to summarize the results, then tuned the summary to create this blog post.

Contacting Paul Deitel

The OpenAI APIs are evolving rapidly. If you run into problems while working through the examples or find that something has changed, check the Deitel blog or send an email to paul@deitel.com.

Downloading the Code

Go to the Python Fundamentals, 2/e GitHub Repository to get the source code that accompanies the videos referenced in this guide. The OpenAI API examples are located in the examples/18 folder. Book chapter numbers and corresponding video lesson numbers are subject to change while the second edition of our Python product suite is under development.

Suggested Learning Workflow

If you watch the videos, you’ll get a code-example-rich intro to programming with the OpenAI APIs. To learn how to work with various aspects of the OpenAI APIs, we suggest that you:

  • Watch the video for each example.
  • Run the provided Python code.
  • Complete the “Try It” coding challenges.
  • Experiment by creatively combining APIs (e.g., transcribe audio then translate, or generate images with accessibility descriptions).

Key Takeaways

This comprehensive guide and the corresponding videos present practical skills for harnessing the power of OpenAI’s genAI APIs. You’ll:

  • Master OpenAI API usage in Python and creative prompt engineering.
  • Build complete, functional, multimodal apps that create and manipulate text, code, images, audio and video.
  • Implement responsible accessibility and content moderation practices.

GenAIs make mistakes and even “hallucinate.” You should always verify their outputs.

Introduction

Paul discusses the required official openai Python module, OpenAI’s fee-based API model, and monitoring and managing API usage costs.

Video: Introduction (6m)

OpenAI APIs

Here, Paul discusses the OpenAI APIs and models he’ll demo in this lesson.

Video: OpenAI APIs (2m 29s)

OpenAI Documentation: API Reference

Try It: Browse the OpenAI API documentation and review the API subcategories.

Try It: Prompt genAIs for an overview of responsible AI practices.

OpenAI Developer Account and API Key

Here, you’ll learn how to create your OpenAI developer account, generate an API key and securely store it in an environment variable. This required setup step will enable your apps to authenticate with OpenAI so they can make API calls. You’ll understand best practices for securing your API key. The OpenAI API is a paid service. If you prefer not to pay for API access right now, you’ll still find value in reading this document, watching the videos and studying the code.

Video: OpenAI Developer Account and API Key (8m 45s)

OpenAI Documentation: Account Setup, API Keys

Try It: Create your OpenAI developer account, generate your first API key and store it securely using an environment variable.
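
Once the key is stored, a quick sanity check saves a failed first API call. The sketch below assumes the standard OPENAI_API_KEY variable name (which the openai package reads automatically) and reports status without ever printing the secret:

```python
import os

def api_key_status(env=os.environ):
    """Report whether OPENAI_API_KEY is set, without revealing the key."""
    key = env.get("OPENAI_API_KEY", "")
    if not key:
        return "OPENAI_API_KEY is not set -- export it before running the examples"
    # Show only a masked fingerprint so the secret never appears in logs
    return f"OPENAI_API_KEY is set (ends with ...{key[-4:]})"

print(api_key_status())
```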

Text Generation Via the Responses API

The text-generation examples introduce the Responses API, OpenAI’s primary text-generation interface. You’ll learn how to structure prompts, configure parameters, invoke the API and interpret responses. This API enables sophisticated conversational AI applications and is the foundation for many text-based genAI tasks.

Video: Text Generation Via the Responses API: Overview (4m 58s)

OpenAI Documentation: Text Generation Guide
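
As a taste of what the videos cover, here is a minimal Responses API call. This is a sketch, not the lesson’s code: it assumes the current openai package (1.x, which provides client.responses.create and the output_text convenience property), an OPENAI_API_KEY environment variable and a hypothetical model name (gpt-4o-mini); substitute whichever model the videos use:

```python
import os

def ask(prompt, model="gpt-4o-mini"):
    """Send one prompt to the Responses API and return the model's reply text.
    Requires: pip install openai, plus OPENAI_API_KEY in your environment."""
    from openai import OpenAI  # imported lazily so this file loads without the package
    client = OpenAI()  # picks up OPENAI_API_KEY automatically
    response = client.responses.create(model=model, input=prompt)
    return response.output_text  # joins the response's text output into one string

if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    print(ask("In one sentence, what is the OpenAI Responses API?"))
```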

Text Summarization

Here, you’ll use OpenAI’s natural language understanding capabilities to condense lengthy text into concise summaries. This example covers crafting summarization prompts and controlling summary length and style. Text summarization is invaluable for efficiently processing large documents, articles and reports.

Videos:

Try It: Create a summarization tool that takes a long article and generates brief, moderate and detailed summaries.
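
One possible shape for that tool (a sketch: the model name is an assumption, and the three length levels are just one reasonable mapping):

```python
def build_summary_prompt(text, level="brief"):
    """Compose a summarization prompt; level is 'brief', 'moderate' or 'detailed'."""
    lengths = {"brief": "in a single sentence",
               "moderate": "in one short paragraph",
               "detailed": "in three to five paragraphs"}
    if level not in lengths:
        raise ValueError(f"level must be one of {sorted(lengths)}")
    return f"Summarize the following article {lengths[level]}:\n\n{text}"

def summarize(text, level="brief", model="gpt-4o-mini"):
    """Send the summarization prompt to the Responses API (requires the
    openai package and an OPENAI_API_KEY environment variable)."""
    from openai import OpenAI
    client = OpenAI()
    response = client.responses.create(model=model,
                                       input=build_summary_prompt(text, level))
    return response.output_text
```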

Sentiment Analysis

Use OpenAI’s natural language understanding capabilities to analyze text’s emotional tone and sentiment. This example classifies text as positive, negative or neutral and asks the model to explain how it reached that conclusion.

Video: Sentiment Analysis (4m 18s)

Try It: Build a sentiment analyzer that classifies the sentiment of customer reviews and asks the genAI model to provide a confidence score from 0.0 to 1.0 for each, indicating the likelihood that the classification is correct. Confidence scores closer to 1.0 are more likely to be correct.
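
A sketch of that analyzer. Requesting JSON in the prompt and validating the reply is just one approach (the lesson may do this differently), and the model name is an assumption:

```python
import json

def parse_sentiment(reply):
    """Validate and parse the model's JSON reply."""
    result = json.loads(reply)
    if result["sentiment"] not in ("positive", "negative", "neutral"):
        raise ValueError("unexpected sentiment label")
    if not 0.0 <= result["confidence"] <= 1.0:
        raise ValueError("confidence out of range")
    return result

def classify_review(review, model="gpt-4o-mini"):
    """Ask for a sentiment label plus a 0.0-1.0 confidence score as JSON
    (requires the openai package and an OPENAI_API_KEY environment variable)."""
    from openai import OpenAI
    client = OpenAI()
    prompt = ('Classify the sentiment of the customer review below as positive, '
              'negative or neutral. Respond with only a JSON object of the form '
              '{"sentiment": "...", "confidence": 0.0}, where confidence (0.0-1.0) '
              'is the likelihood the classification is correct.\n\nReview: ' + review)
    response = client.responses.create(model=model, input=prompt)
    return parse_sentiment(response.output_text)
```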

Vision: Accessible Image Descriptions

Use OpenAI’s vision capabilities to analyze images and generate detailed, contextual descriptions, making them accessible to users with visual impairments. You’ll understand how to optimize prompts for description styles and detail levels.

Video: Vision: Accessible Image Descriptions (18m 42s)

OpenAI Documentation: Images and Vision Guide

Try It: Create an application that takes URLs for various images and generates both brief and comprehensive accessibility descriptions suitable for screen readers.
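
A sketch of the request shape. The input_text/input_image content-part names follow the current Responses API documentation, and the model name is an assumption:

```python
def build_vision_input(image_url, detail="brief"):
    """Build the Responses API input for an image-description request.
    detail is 'brief' (alt-text length) or 'comprehensive' (screen-reader depth)."""
    style = ("a one-sentence alt-text description" if detail == "brief"
             else "a comprehensive description suitable for a screen-reader user")
    return [{"role": "user",
             "content": [{"type": "input_text",
                          "text": f"Provide {style} of this image."},
                         {"type": "input_image", "image_url": image_url}]}]

def describe_image(image_url, detail="brief", model="gpt-4o-mini"):
    """Request the description (requires openai package and OPENAI_API_KEY)."""
    from openai import OpenAI
    client = OpenAI()
    response = client.responses.create(model=model,
                                       input=build_vision_input(image_url, detail))
    return response.output_text
```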

Language Detection and Translation

Use OpenAI’s multilingual capabilities to identify what language text is written in and translate text to other spoken languages. This example auto-detects source languages and translates text to a specified language.

Videos:

Try It: Build a translation tool that detects the input language and translates to a target language, preserving tone and context.
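
One way to sketch that tool, letting the model handle both detection and translation in a single prompt (model name is an assumption):

```python
def build_translation_prompt(text, target_language):
    """Prompt the model to auto-detect the source language, then translate."""
    return (f"Detect the language of the text below, then translate it to "
            f"{target_language}, preserving the original tone and context. Begin "
            f"your reply with 'Detected language: <name>' on its own line, then "
            f"give only the translation.\n\n{text}")

def translate(text, target_language, model="gpt-4o-mini"):
    """Call the Responses API (requires openai package and OPENAI_API_KEY)."""
    from openai import OpenAI
    client = OpenAI()
    response = client.responses.create(
        model=model, input=build_translation_prompt(text, target_language))
    return response.output_text
```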

Code Generation

Discover how AI can generate, explain, and debug code across multiple programming languages. This example covers code generation, understanding AI-generated code quality, and using AI as a coding assistant. In the second video, you’ll explore how genAIs can assist you with coding, including code generation, testing, debugging, documenting, refactoring, performance tuning, security and more.

Videos:

Try It: In a text prompt, describe the requirements for a function you need, then submit a request to the Responses API to generate that function along with test cases showing that it works correctly. If it does not, call the Responses API again with the generated code and a prompt asking the model to refine it.
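
That generate-then-refine loop might be sketched like this (hypothetical helper names; the model name and the number of refinement rounds are assumptions):

```python
def build_generation_prompt(requirements):
    """Pure helper: request a function plus assert-based test cases."""
    return ("Write a Python function that meets these requirements, followed by "
            "assert-based test cases demonstrating it works correctly:\n\n"
            + requirements)

def generate_function(requirements, model="gpt-4o-mini", refine_rounds=1):
    """Generate code, then feed it back for refinement, as the Try It suggests.
    Requires the openai package and an OPENAI_API_KEY environment variable."""
    from openai import OpenAI
    client = OpenAI()
    code = client.responses.create(
        model=model, input=build_generation_prompt(requirements)).output_text
    for _ in range(refine_rounds):
        code = client.responses.create(
            model=model,
            input="Review and refine this code for correctness, clarity and "
                  "edge cases. Return the improved code and tests:\n\n" + code
        ).output_text
    return code
```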

Named Entity Recognition (NER) and Structured Outputs

Use OpenAI’s natural language understanding capabilities and named entity recognition to extract structured information from unstructured text, identifying entities such as people, places, organizations, dates, times, events, products, and more. This example shows that OpenAI’s APIs can return outputs as formatted JSON (JavaScript Object Notation), which is both human- and computer-readable. NER is essential for building applications that process and organize information from documents and text sources.

Videos:

OpenAI Documentation: Structured Model Outputs Guide

Try It: Modify the NER example to perform parts-of-speech (POS) tagging—identifying each word’s part of speech (e.g., noun, verb or adjective) in a sentence. Use genAIs to research the commonly used tag sets for POS tagging, then prompt the model to return a structured JSON response with the parts of speech for the words in the supplied text and display each word with its part of speech. Each JSON object should contain key-value pairs for the keys “word” and “tag”.

Try It: Modify the NER example to translate text into multiple languages. Prompt the model to translate the text it receives to the specified languages and to return only JSON-structured data in the following format, then display the results:

{
  "original_text": original_text_string,
  "original_language": original_text_language_code,
  "translations": [
    {
      "language": translated_text_language_code,
      "translation": translated_text_string
    }
  ]
}

Try It: Create a tool that extracts key entities from news articles and outputs them in a structured JSON format.
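
A compact sketch of entity extraction with JSON output. For simplicity it asks for JSON in the prompt and parses the reply; the lesson may instead use the API’s structured-outputs (JSON schema) feature, which guarantees the format. The model name is an assumption:

```python
import json

ENTITY_TYPES = ("people", "places", "organizations", "dates",
                "times", "events", "products")

def parse_entities(reply):
    """Parse the model's JSON reply into {entity_type: [strings]},
    defaulting missing entity types to empty lists."""
    entities = json.loads(reply)
    return {kind: entities.get(kind, []) for kind in ENTITY_TYPES}

def extract_entities(text, model="gpt-4o-mini"):
    """Requires the openai package and an OPENAI_API_KEY environment variable."""
    from openai import OpenAI
    client = OpenAI()
    prompt = ("Extract the named entities from the text below. Respond with only "
              "a JSON object whose keys are " + ", ".join(ENTITY_TYPES) +
              " and whose values are arrays of strings (empty if none).\n\n" + text)
    response = client.responses.create(model=model, input=prompt)
    return parse_entities(response.output_text)
```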

Speech Recognition and Speech Synthesis

This video introduces speech-to-text transcription and text-to-speech conversion (speech synthesis) concepts that are the foundation for working with audio input and output in your AI applications. You’ll understand the models used in the transcription and synthesis examples, and explore the speech voices via OpenAI’s voice demo site—https://openai.fm.

Video: Speech Recognition and Speech Synthesis: Overview (5m 27s)

OpenAI Documentation:

Try It: Try all the voices at https://openai.fm. Which do you prefer? Why?

English Speech-to-Text (STT) for Audio Transcription

Here, you’ll convert spoken audio to text. Speech-to-text technology enables applications like automated transcription services, voice commands, and accessibility features.

Videos:

OpenAI Documentation: Speech to Text Guide

Try It: Build a transcription tool that converts .mp3 and .m4a audio files to text.
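
A sketch of that tool. The whisper-1 model name and the transcriptions call shape follow the current openai package; substitute the transcription model the videos use:

```python
from pathlib import Path

SUPPORTED = {".mp3", ".m4a"}  # the formats this Try It targets; the API accepts more

def check_audio_path(path):
    """Validate the audio file's extension before spending an API call."""
    suffix = Path(path).suffix.lower()
    if suffix not in SUPPORTED:
        raise ValueError(f"unsupported audio format: {suffix}")
    return suffix

def transcribe(path, model="whisper-1"):
    """Transcribe an audio file to text (requires the openai package and
    an OPENAI_API_KEY environment variable)."""
    check_audio_path(path)
    from openai import OpenAI
    client = OpenAI()
    with open(path, "rb") as audio_file:
        transcription = client.audio.transcriptions.create(model=model,
                                                           file=audio_file)
    return transcription.text
```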

Text-To-Speech (TTS)

Here, you’ll convert written text into natural-sounding speech with one of OpenAI’s 11 voice options. You’ll select voice options, specify speech style and tone, and generate audio files. Text-to-speech technology is crucial for creating voice assistants, audiobook generation, and accessibility applications.

Videos:

OpenAI Documentation: Text to Speech Guide

Try It: Create an app that converts documents to audio files with selectable voices.
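
A sketch of that app’s core. The voice list matches the 11 voices demoed at https://openai.fm, but the model name and the streaming-response call shape are assumptions based on the current openai package:

```python
VOICES = ("alloy", "ash", "ballad", "coral", "echo", "fable",
          "nova", "onyx", "sage", "shimmer", "verse")  # the 11 voices

def speak(text, out_path, voice="alloy", model="gpt-4o-mini-tts",
          instructions="Speak in a warm, clear narrator's tone."):
    """Synthesize text to an audio file with a selectable voice; the
    instructions parameter controls speech style and tone."""
    if voice not in VOICES:
        raise ValueError(f"voice must be one of {VOICES}")
    from openai import OpenAI
    client = OpenAI()
    with client.audio.speech.with_streaming_response.create(
            model=model, voice=voice, input=text,
            instructions=instructions) as response:
        response.stream_to_file(out_path)  # write audio as it arrives
    return out_path
```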

Image Generation

Here, you’ll create original images from text descriptions using OpenAI’s latest image-generation model. Image generation opens possibilities for creative content, design mockups, and visual storytelling.

Videos:

OpenAI Documentation: Images and Vision Guide

Try It: Build an image-generation tool that creates variations based on text prompts.
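
A sketch of generating prompt-driven variations. The gpt-image-1 model name and the base64 response field are assumptions; dall-e-3, for instance, returns a URL instead of b64_json by default:

```python
import base64

def build_variation_prompts(subject, styles):
    """Pure helper: one prompt per requested artistic-style variation."""
    return [f"{subject}, rendered as {style}" for style in styles]

def generate_image(prompt, out_path="image.png", model="gpt-image-1",
                   size="1024x1024"):
    """Create an image from a text prompt and save it as a PNG (requires the
    openai package and an OPENAI_API_KEY environment variable)."""
    from openai import OpenAI
    client = OpenAI()
    result = client.images.generate(model=model, prompt=prompt, size=size)
    with open(out_path, "wb") as f:
        f.write(base64.b64decode(result.data[0].b64_json))
    return out_path
```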

Image Style Transfer

In two examples, you’ll apply artistic styles to existing images using the Images API’s edit capability with style-transfer prompts and the Responses API’s image generation tool.

Videos:

OpenAI Documentation: Images and Vision Guide

Try It: Create a style transfer application that transforms user photos into different artistic styles, such as Vincent van Gogh, Leonardo da Vinci and others.

Generating Closed Captions from a Video’s Audio Track

Here, you’ll generate closed captions from a video file’s audio track using OpenAI’s audio transcription capabilities. Closed captions enhance video accessibility and improve content searchability. This example covers caption formatting standards, audio extraction techniques and using the OpenAI Whisper model, which supports generating captions with timestamps. You’ll then use the open-source VLC Media Player to overlay the closed captions on the corresponding video.

Video: Generating Closed Captions from a Video’s Audio Track (9m 7s)

OpenAI Documentation: Speech to Text Guide

Try It: Build a caption generator that programmatically extracts audio from videos and creates properly formatted subtitle files. Investigate the moviepy module for conveniently extracting a video’s audio track in Python.
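
A sketch of the pipeline. The moviepy import path (which changed between versions 1.x and 2.x) and the response_format="srt" option are assumptions based on the current libraries; srt_timestamp shows the SubRip timestamp format Whisper’s SRT output uses:

```python
def srt_timestamp(seconds):
    """Format seconds as an SRT timestamp, e.g. 83.5 -> '00:01:23,500'."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def video_to_captions(video_path, srt_path="captions.srt"):
    """Extract the audio track with moviepy, then ask Whisper for SRT captions
    (requires moviepy, the openai package and OPENAI_API_KEY)."""
    from moviepy import VideoFileClip  # moviepy 2.x; 1.x uses moviepy.editor
    from openai import OpenAI
    audio_path = "extracted_audio.mp3"
    VideoFileClip(video_path).audio.write_audiofile(audio_path)
    client = OpenAI()
    with open(audio_path, "rb") as f:
        srt = client.audio.transcriptions.create(
            model="whisper-1", file=f, response_format="srt")
    with open(srt_path, "w", encoding="utf-8") as out:
        out.write(srt)
    return srt_path
```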

Content Moderation

Here, you’ll use OpenAI’s Moderation APIs to detect and filter inappropriate or harmful text and images—essential techniques for platforms hosting user-generated content. Paul presents moderation categories and severity levels, demonstrates the Moderation API with text inputs and discusses image moderation.

Videos:

OpenAI Documentation: Moderation Guide

Try It: Create a content moderation system that screens user submissions and flags potentially problematic content.
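
A sketch of the screening step. The omni-moderation-latest model name and the pydantic model_dump call are assumptions based on the current openai package; check the Moderation Guide for specifics:

```python
def summarize_flags(flagged, categories):
    """Pure helper: turn Moderation API results into a human-readable verdict."""
    if not flagged:
        return "OK to publish"
    return "Flagged for: " + ", ".join(sorted(categories))

def moderate_text(text, model="omni-moderation-latest"):
    """Screen one submission; returns (flagged, [flagged category names]).
    Requires the openai package and an OPENAI_API_KEY environment variable."""
    from openai import OpenAI
    client = OpenAI()
    result = client.moderations.create(model=model, input=text).results[0]
    names = [name for name, value
             in result.categories.model_dump().items() if value]
    return result.flagged, names
```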

Sora 2 Video Generation

This video introduces OpenAI Sora’s video-generation capabilities. You’ll see prompt-to-video and image-to-video demos. Coming soon: Paul is developing API-based video-generation and video-remixing code examples using OpenAI’s recently released Sora 2 APIs and will add videos based on these code examples when he completes them.

Video: Sora Video Generation (10m 58s)

OpenAI Documentation: Video Generation with Sora Guide

Try It: Experiment with text-to-video prompts and explore the creative possibilities of AI video generation.

Closing Note

As we develop additional OpenAI API-based apps, Paul will add new videos to this Python Fundamentals lesson on Building API-Based Python GenAI Applications. Some new example possibilities include:

  • Generating and remixing videos with OpenAI’s Sora 2 API.
  • Using OpenAI’s Realtime Audio APIs for speech-to-speech apps.
  • Building AI agents with OpenAI’s AgentKit.
  • Single-tool AI agents.
  • Multi-tool AI agents.
  • Single-agent applications.
  • Multi-agent applications.
  • Managing AI conversations that maintain state between Responses API calls.

Try It: Review the course materials and start planning your own GenAI application project using the techniques learned. Enjoy!

Additional Resources

Paul Deitel Full Throttle, One-Day, Code-Intensive Live-Training Courses on O’Reilly Online Learning

Paul Deitel Video Courses on O’Reilly Online Learning

Live Online Training with Paul Deitel: April-June 2024

Looking for a one-day, fast-paced, code-intensive introduction to Python, Python Data Science/AI, Java or C++20? Join Paul Deitel for one of his popular Full Throttle webinars at O’Reilly Online Learning!

These webinars are for you because:

  • You’re a developer who sees exciting languages and technologies popping up everywhere and you want a one-day, code-based introduction to them.
  • You’re a developer looking to enhance your career opportunities by learning new languages and technologies and you want a one-day, code-based introduction to them.
  • You’re a software team manager contemplating projects using other languages and technologies and you want a one-day, code-based introduction to them.

Click the course title on our O’Reilly Online Learning landing page to see all available dates and register. Not a subscriber? Sign up for a free trial!

Upcoming Schedule

  • Python Full Throttle with Paul Deitel: A One-Day, Fast-Paced, Code-Intensive Python Presentation (updated with features through Python 3.12), 04/09/24
  • Modern C++ Full Throttle with Paul Deitel: Intro to C++20 & the Standard Library. Presentation-Only Intro to Fundamentals, Arrays, Vectors, Pointers, OOP, Ranges, Views, Functional Programming; Brief Intro to Concepts, Modules & Coroutines, 04/23/24
  • Python Full Throttle with Paul Deitel: A One-Day, Fast-Paced, Code-Intensive Python Presentation (updated with features through Python 3.12), 05/07/24
  • Python Data Science Full Throttle with Paul Deitel: Introductory Artificial Intelligence (AI), Big Data and Cloud Case Studies, 05/14/24
  • Python Full Throttle with Paul Deitel: A One-Day, Fast-Paced, Code-Intensive Python Presentation (updated with features through Python 3.12), 06/04/24
  • Java Full Throttle with Paul Deitel: A One-Day, Code-Intensive Java 10-21 Presentation, 06/11/24

Live Online Training with Paul Deitel: September Through December 2023

Looking for a one-day, fast-paced, code-intensive introduction to Python, Python Data Science/AI, Java or C++20? Join Paul Deitel for one of his popular Full Throttle webinars at O’Reilly Online Learning!

These webinars are for you because:

  • You’re a developer who sees exciting languages and technologies popping up everywhere and you want a one-day, code-based introduction to them.
  • You’re a developer looking to enhance your career opportunities by learning new languages and technologies and you want a one-day, code-based introduction to them.
  • You’re a software team manager contemplating projects using other languages and technologies and you want a one-day, code-based introduction to them.

Click the course title on our O’Reilly Online Learning landing page to see all available dates and register. 

Not a subscriber? Sign up for a free trial!

Upcoming Schedule

  • Python Data Science and AI Full Throttle: Introductory Artificial Intelligence (AI), Big Data and Cloud Case Studies, September 26, 2023
  • Python Full Throttle: A One-Day, Fast-Paced, Code-Intensive Python Presentation, October 3, 2023
  • C++20 Full Throttle (Part 1): A One-Day, Presentation-Only, Code-Intensive Intro to C++20 Core Language Fundamentals, Arrays, Strings, Vectors, Pointers, and Object-Oriented Programming, October 10, 2023
  • Python Full Throttle: A One-Day, Fast-Paced, Code-Intensive Python Presentation, November 7, 2023
  • Python Data Science and AI Full Throttle: Introductory Artificial Intelligence (AI), Big Data and Cloud Case Studies, November 14, 2023
  • Python Full Throttle: A One-Day, Fast-Paced, Code-Intensive Python Presentation, December 5, 2023
  • Python Data Science and AI Full Throttle: Introductory Artificial Intelligence (AI), Big Data and Cloud Case Studies, December 7, 2023
September Through December Live Training Schedule

Twitter v2 Update for Our Python Books and Videos

Intro to Python for Computer Science and Data Science: Learning to Program with AI, Big Data and the Cloud
Python for Programmers
Python Fundamentals
Updated September 7, 2023—We’re leaving this post up for anyone who might still have access to the Twitter APIs. The Twitter API’s free tier is now so limited that most of what we demonstrate in our Twitter chapter/lesson is no longer available.

Higher levels of paid access are too expensive for average users and students. The first paid tier ($100/month) provides basic capabilities and no streaming access (the free tier used to allow access to 1% of the daily live stream). The second paid tier gives more access and some streaming capability, but costs $5000/month and caps the total number of tweets at 1,000,000. Significant access to the live stream of tweets costs tens of thousands of dollars per month. There has been some discussion of an academic/research tier, but as of now, we have not seen any indication of when or if this will be available.
Attention users of the following Python products:
  • Intro to Python for Computer Science and Data Science: Learning to Program with AI, Big Data and the Cloud
  • Python for Programmers
  • Python Fundamentals LiveLessons

On August 18, 2022, we discovered that new Twitter developer accounts cannot access the Twitter v1.1 APIs on which we based Intro to Python‘s Chapter 13, Data Mining Twitter, and two case studies in Chapter 17, Big Data: Hadoop, Spark, NoSQL and IoT. Chapters 13 and 17 correspond to Chapters/Lessons 12 and 16 in our Python for Programmers book and Python Fundamentals LiveLessons videos.

Twitter users who already had Twitter developer accounts can still access the Twitter v1.1 APIs, but most of our Python content users will not fall into this category.

We’ve updated all our Twitter examples to the Twitter v2 APIs now. In addition, for the Intro to Python textbook, we need to update the instructor’s manual solutions and test-item file.

Updated chapters from our books are now available:

Updated instructor slides for Chapter 13 of the textbook should be available now in the Pearson Instructor Resource Center (IRC). Other updated instructor supplements will be updated there as we complete them.

Updated source-code files are available in the books’ IntroToPython and PythonForProgrammers GitHub repositories at https://github.com/pdeitel.

I’ll be re-recording the Python Fundamentals LiveLessons videos’ Lesson 12 soon.

If you have any questions, please email paul@deitel.com.

C How to Program, 9/e Errata

C How to Program, 9/e Cover

 This post contains the C How to Program, 9/e errata list. We’ll keep this up-to-date as we become aware of additional errata items. Please Contact Us with any you find.

Note: After publication, we discovered a bug in our authoring software that deleted some items in single quotes, like ‘A’, from our code tables. The source-code files were not affected, but occasionally a single-quoted item is missing from a code table in the text.

Last updated January 15, 2023

Chapter 2 — Intro to C Programming

  • Page 76, in Section 2.5: “+, / and %” should be “*, / and %”.

Chapter 4 — Program Control

  • Page 149, “Notes on Integral Types”:

    –32767 should be –32768
    –2147483647 should be –2147483648
    –127 should be –128

Chapter 5 — Functions

  • Page 214, Fig. 5.9: The example should produce factorial values through 20, not 21. The value displayed for factorial(21) in the program output is incorrect because unsigned long long is not capable of representing that value.

Chapter 7 — Pointers

  • Page 320, line 19 of Fig. 7.6 should be:
    while (*sPtr != '\0') {
  • Page 321, line 22 of Fig. 7.7, should be
    for (; *sPtr != '\0'; ++sPtr) {

Chapter 10 — Structures, Unions, Bit Manipulation and Enumerations

  • Page 496, Fig. 10.4, line 24 should be:
    putchar(value & displayMask ? '1' : '0');
  • Page 496, Fig. 10.4, line 28 should be:
    putchar(' ');
  • Page 496, Fig. 10.4, line 32 should be:
    putchar('\n');
  • Page 497, seventh text line on the page should be:
    putchar(value & displayMask ? '1' : '0');
  • Page 499, Fig. 10.5, line 53 should be:
    putchar(value & displayMask ? '1' : '0');
  • Page 499, Fig. 10.5, line 57 should be:
    putchar(' ');
  • Page 499, Fig. 10.5, line 61 should be:
    putchar('\n');
  • Page 502, Fig. 10.6, line 32 should be:
    putchar(value & displayMask ? '1' : '0')
  • Page 502, Fig. 10.6, line 36 should be:
    putchar(' ');
  • Page 502, Fig. 10.6 line 40 should be:
    putchar('\n');

Questions? Contact us!

C++20 for Programmers Now Available to O’Reilly Online Learning Subscribers

C++20 for Programmers Final Cover Image

C++20 for Programmers is now available to O’Reilly Online Learning Subscribers at:

https://learning.oreilly.com/library/view/c-20-for-programmers/9780136905776/

The print version should be in-stock mid-April. Preorder it at Amazon.com or other online book retailers.

Written for programmers with a background in another high-level language, in C++20 for Programmers, you’ll learn Modern C++ development hands-on using C++20 and its “Big Four” features:

  • Ranges
  • Concepts
  • Modules
  • Coroutines

In the context of 200+, hands-on, real-world code examples, you’ll quickly master Modern C++ coding idioms using popular compilers—Visual C++®, GNU® g++, Apple® Xcode® and LLVM®/Clang.

After the C++ fundamentals quick start, you’ll move on to C++ standard library containers array and vector; functional-style programming with C++20 Ranges and Views; strings, files and regular expressions; object-oriented programming with classes, inheritance, runtime polymorphism and static polymorphism; operator overloading, copy/move semantics, RAII and smart pointers; exceptions and a look forward to C++23 Contracts; standard library containers, iterators and algorithms; templates, C++20 Concepts and metaprogramming; C++20 Modules and large-scale development; and concurrency, parallelism, the C++17 and C++20 parallel standard library algorithms and C++20 Coroutines.

Features include:

  • Rich coverage of C++20’s “Big Four”: Ranges, Concepts, Modules and Coroutines
  • Objects-Natural Approach: Use standard libraries and open-source libraries to build significant applications with minimal code
  • Hundreds of real-world, live-code examples
  • Modern C++: C++20, 17, 14, 11 and a look to C++23
  • Compilers: Visual C++®, GNU® g++, Apple Xcode® Clang, LLVM®/Clang
  • Docker: GNU® GCC, LLVM®/Clang
  • Fundamentals: Control statements, functions, strings, references, pointers, files, exceptions
  • Object-oriented programming: Classes, objects, inheritance, runtime and static polymorphism, operator overloading, copy/move semantics, RAII, smart pointers
  • Functional-style programming: C++20 Ranges and Views, lambda expressions
  • Generic programming: Templates, C++20 Concepts and metaprogramming
  • C++20 Modules: Large-Scale Development
  • Concurrent programming: Concurrency, multithreading, parallel algorithms, C++20 Coroutines, coroutines support libraries, C++23 executors
  • Future: A look forward to Contracts, range-based parallel algorithms, standard library coroutine support and more

For more details, see the Preface, the Table of Contents diagram and reviewer testimonials.

Questions? Contact us!
