Building OpenAI API-Based Python GenAI Applications—A Guide to the Deitel Videos on the O’Reilly Online Learning Subscription Site

[Estimated reading time for this document: 20 minutes. Estimated time to watch the linked videos and run the Python code: 4.5 hours. Please share this guide with your friends and colleagues who might find it helpful.]

This comprehensive guide overviews Lesson 18, Building OpenAI API-Based Python Generative AI Applications, from my Python Fundamentals video course on O’Reilly Online Learning. The lesson focuses on building Python apps using OpenAI’s generative AI (genAI) APIs. This document guides you through my hands-on code examples and provides Try It exercises so you can experiment with the APIs. You’ll leverage the OpenAI APIs to create intelligent, multimodal apps that understand, generate and manipulate text, code, images, audio and video content.

This guide links you to 31 videos totaling about 3.5 hours in my Python Fundamentals video course in which I present fully coded Python genAI apps that use the OpenAI APIs to

  • summarize documents
  • determine text’s sentiment (positive, neutral or negative)
  • use vision capabilities to generate accessible image descriptions
  • translate text among spoken languages
  • generate and manipulate Python code
  • extract named entities from text, such as people, places, organizations, dates, times, events, products and more
  • transcribe speech to text
  • synthesize speech from text, using one of OpenAI’s 11 voices and prompts that control style and tone
  • create original images
  • transfer art styles to images via text prompts
  • transfer styles between images
  • generate video closed captions
  • filter inappropriate content
  • generate and remix videos (under development at the time of this writing—uses OpenAI’s recently released Sora 2 API)
  • build agentic AI apps (under development at the time of this writing—uses OpenAI’s recently released AgentKit)

The remaining videos overview concepts and present genAI prompt and coding exercises you can use to dig deeper into the covered topics.

Videos:

How I Formed This Guide

I created the initial draft of this guide using five genAIs: OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude, Microsoft’s Copilot and Perplexity. I provided each with

  • a detailed prompt,
  • a Chapter 18 draft from our forthcoming Python for Programmers, 2/e product suite and
  • a list of the video titles and links you’ll find in this guide.

I then asked Claude to summarize the results, and tuned the summary to create this blog post.

Contacting Me with Questions

The OpenAI APIs are evolving rapidly. If you run into problems while working through the examples or find that something has changed, check the Deitel blog or send an email to paul@deitel.com.

Downloading the Code

Go to the Python Fundamentals, 2/e GitHub Repository to get the source code that accompanies the videos referenced in this guide. The OpenAI API examples are located in the examples/18 folder. Book chapter numbers and corresponding video lesson numbers are subject to change while the second edition of our Python product suite is under development.

Suggested Learning Workflow

If you watch the videos, you’ll get a code-example-rich intro to programming with the OpenAI APIs. To learn how to work with various aspects of the OpenAI APIs, I suggest that you:

  • Watch the video for each example.
  • Run the provided Python code.
  • Complete the “Try It” coding challenges.
  • Experiment by creatively combining APIs (e.g., transcribe audio then translate, or generate images with accessibility descriptions).

Key Takeaways

This comprehensive guide and the corresponding videos present practical skills for harnessing the power of OpenAI’s genAI APIs. You’ll:

  • Master OpenAI APIs in Python and perform creative prompt engineering.
  • Build complete, functional, multimodal apps that create and manipulate text, code, images, audio and video.
  • Implement responsible accessibility and content moderation practices.

Caution: GenAIs make mistakes and even “hallucinate.” You should always verify their outputs.

Introduction

In this video, I discuss the required official openai Python module, OpenAI’s fee-based API model, and monitoring and managing API usage costs.

Video: Introduction (6m)

OpenAI APIs

Here, I overview the OpenAI APIs and models I’ll demo in this lesson.

Video: OpenAI APIs (2m 29s)

OpenAI Documentation: API Reference

Try It: Browse the OpenAI API documentation and review the API subcategories.

Try It: Prompt genAIs for an overview of responsible AI practices.

OpenAI Developer Account and API Key

Here, you’ll learn how to create your OpenAI developer account, generate an API key and securely store it in an environment variable. This required setup step will enable your apps to authenticate with OpenAI so they can make API calls. You’ll understand best practices for securing your API key. The OpenAI API is a paid service. If, for the moment, you do not want to code with paid APIs, reading this document, watching the videos and reading the code is still valuable.

Video: OpenAI Developer Account and API Key (8m 45s)

OpenAI Documentation: Account Setup, API Keys

Try It: Create your OpenAI developer account, generate your first API key and store it securely using an environment variable.
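
Once your key is stored, here’s a minimal sketch (assuming the official openai package, version 1.x) for confirming that your environment variable and account are set up correctly:

import os
from openai import OpenAI

# The OpenAI client reads the OPENAI_API_KEY environment variable by default;
# checking explicitly first gives a friendlier error message if it's missing
if "OPENAI_API_KEY" not in os.environ:
    raise SystemExit("Please set the OPENAI_API_KEY environment variable")

client = OpenAI()  # authenticates via OPENAI_API_KEY

# Listing the available models is a simple, low-cost way to verify authentication
models = client.models.list()
print(f"Success: your key can access {len(models.data)} models")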

Text Generation Via the Responses API

My text-generation examples introduce the Responses API, OpenAI’s primary text-generation interface. I show how to structure prompts, configure parameters, invoke the API and interpret responses. This API enables sophisticated conversational AI applications and is the foundation for many text-based genAI tasks.

Video: Text Generation Via the Responses API: Overview (4m 58s)

OpenAI Documentation: Text Generation Guide
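
To give you a feel for what the videos cover, here’s a minimal Responses API sketch (the model name is an assumption; substitute any current text-generation model):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Send a prompt and display the generated text
response = client.responses.create(
    model="gpt-4o",  # assumed model name
    input="Explain in one sentence what the OpenAI Responses API does."
)
print(response.output_text)  # convenience property that gathers the output text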

Text Summarization

Here, I use OpenAI’s natural language understanding capabilities to condense lengthy text into concise summaries. This example covers crafting summarization prompts and controlling summary length and style. Text summarization is invaluable for efficiently processing large documents, articles and reports.

Videos:

Try It: Create a summarization tool that takes a long article and generates brief, moderate and detailed summaries.
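
Here’s a minimal sketch of one way to approach this Try It (the model name is an assumption, and article.txt is a hypothetical input file):

from openai import OpenAI

client = OpenAI()

def summarize(text, level):
    """Request a summary at the given level: 'brief', 'moderate' or 'detailed'."""
    response = client.responses.create(
        model="gpt-4o",  # assumed model name
        input=f"Provide a {level} summary of the following text:\n\n{text}"
    )
    return response.output_text

with open("article.txt", encoding="utf-8") as file:  # hypothetical input file
    article = file.read()

for level in ("brief", "moderate", "detailed"):
    print(f"--- {level} summary ---\n{summarize(article, level)}\n")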

Sentiment Analysis

This example uses OpenAI’s natural language understanding capabilities to analyze text’s emotional tone and sentiment. It classifies text as positive, negative or neutral, and asks the model to explain how it reached that conclusion.

Video: Sentiment Analysis (4m 18s)

Try It: Build a sentiment analyzer that classifies the sentiment of customer reviews and asks the genAI model to provide a confidence score from 0.0 to 1.0 for each, indicating the likelihood that the classification is correct (scores closer to 1.0 indicate greater confidence).
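
One simple way to sketch this is to prompt for JSON directly and parse it (the model name is an assumption, and this is not necessarily the lesson’s exact approach):

import json
from openai import OpenAI

client = OpenAI()

def classify_sentiment(review):
    """Classify a review; return a dict with 'sentiment' and 'confidence' keys."""
    prompt = (
        "Classify the sentiment of the following review as positive, negative "
        "or neutral. Respond with only a JSON object containing the keys "
        '"sentiment" and "confidence" (a number from 0.0 to 1.0):\n\n' + review
    )
    response = client.responses.create(model="gpt-4o", input=prompt)
    return json.loads(response.output_text)  # may fail if extra text surrounds the JSON

result = classify_sentiment("The battery died after two days. Very disappointed.")
print(result["sentiment"], result["confidence"])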

Vision: Accessible Image Descriptions

In this example, I show OpenAI’s vision capabilities for analyzing images and use them to generate detailed, contextual descriptions that make images accessible to users with visual impairments. You’ll understand how to optimize prompts for description styles and detail levels.

Video: Vision: Accessible Image Descriptions (18m 42s)

OpenAI Documentation: Images and Vision Guide

Try It: Create an application that takes URLs for various images and generates both brief and comprehensive accessibility descriptions suitable for screen readers.
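
Here’s a minimal sketch of passing an image URL to a vision-capable model via the Responses API’s multimodal content parts (the model name and image URL are placeholders):

from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-4o",  # assumed vision-capable model
    input=[{
        "role": "user",
        "content": [
            {"type": "input_text",
             "text": "Describe this image in detail for a screen-reader user."},
            {"type": "input_image",
             "image_url": "https://example.com/photo.jpg"}  # placeholder URL
        ]
    }]
)
print(response.output_text)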

Language Detection and Translation

In this example, I use OpenAI’s multilingual capabilities to auto-detect the language in which text is written and translate it to other spoken languages.

Videos:

Try It: Build a translation tool that detects the input language and translates to a target language, preserving tone and context.
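
A minimal sketch of such a tool might look like this (the model name is an assumption):

from openai import OpenAI

client = OpenAI()

def translate(text, target_language):
    """Detect the input text's language, then translate it to target_language."""
    prompt = (
        f"Detect the language of the text below, then translate it to "
        f"{target_language}, preserving tone and context. Begin your response "
        f"with the detected language on its own line.\n\n{text}"
    )
    response = client.responses.create(model="gpt-4o", input=prompt)
    return response.output_text

print(translate("¿Dónde está la biblioteca?", "English"))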

Code Generation

Discover how AI can generate, explain and debug code across multiple programming languages. The first video covers code generation, understanding AI-generated code quality and using AI as a coding assistant. In the second video, I discuss how genAIs can assist you with coding, including code generation, testing, debugging, documenting, refactoring, performance tuning, security and more.

Videos:

Try It: In a text prompt, describe the requirements for a function you need and submit a request to the Responses API to generate that function and provide test cases to show it works correctly. If not, call the Responses API again with the generated code and a prompt to refine the code.
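
Here’s a minimal sketch of the generate-then-refine loop this Try It describes (the model name is an assumption; in practice you’d run the generated tests before deciding whether to refine):

from openai import OpenAI

client = OpenAI()

requirements = (
    "Write a Python function is_palindrome(s) that ignores case and "
    "non-alphanumeric characters, plus three test cases using assert."
)

# First pass: generate the function and its tests
response = client.responses.create(model="gpt-4o", input=requirements)
generated_code = response.output_text
print(generated_code)

# If running the tests reveals problems, feed the code back for refinement
refinement = client.responses.create(
    model="gpt-4o",
    input=f"Refine this code so all its tests pass:\n\n{generated_code}"
)
print(refinement.output_text)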

Named Entity Recognition (NER) and Structured Outputs

In this example, I use OpenAI’s natural language understanding capabilities and named entity recognition to extract structured information from unstructured text, identifying entities such as people, places, organizations, dates, times, events, products and more. The example shows that OpenAI’s APIs can return outputs as formatted, human- and computer-readable JSON (JavaScript Object Notation). NER is essential for building applications that process and organize information from documents and text sources.

Videos:

OpenAI Documentation: Structured Model Outputs Guide

Try It: Modify the NER example to perform parts-of-speech (POS) tagging—identifying each word’s part of speech (e.g., noun, verb, adjective) in a sentence. Use genAIs to research the commonly used tag sets for POS tagging, then prompt the model to return a structured JSON response with the parts of speech for the words in the supplied text and display each word with its part of speech. Each JSON object should contain key-value pairs for the keys “word” and “tag”.

Try It: Modify the NER example to translate text into multiple languages. Prompt the model to translate the text it receives to the specified languages and to return only JSON-structured data in the following format, then display the results:

{
   "original_text": original_text_string,
   "original_language": original_text_language_code,
   "translations": [
      {
         "language": translated_text_language_code,
         "translation": translated_text_string
      }
   ]
}

Try It: Create a tool that extracts key entities from news articles and outputs them in a structured JSON format.
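
The lesson’s examples use OpenAI’s Structured Outputs capabilities; as a simpler sketch, you can prompt for JSON directly and parse the response (the model name is an assumption):

import json
from openai import OpenAI

client = OpenAI()

text = ("Apple opened a new store in Boston on March 3, 2025, "
        "with CEO Tim Cook in attendance.")

prompt = (
    "Extract the named entities from the text below. Return only a JSON array "
    'of objects, each with the keys "entity" and "type" (person, place, '
    "organization, date and so on):\n\n" + text
)

response = client.responses.create(model="gpt-4o", input=prompt)
for item in json.loads(response.output_text):  # may fail if extra text surrounds the JSON
    print(f'{item["entity"]}: {item["type"]}')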

Speech Recognition and Speech Synthesis

In this video, I introduce speech-to-text transcription and text-to-speech conversion (speech synthesis) concepts that are the foundation for working with audio input and output in your AI applications. You’ll understand the models used in the transcription and synthesis examples, and explore the speech voices via OpenAI’s voice demo site—https://openai.fm.

Video: Speech Recognition and Speech Synthesis: Overview (5m 27s)

OpenAI Documentation:

Try It: Try all the voices at https://openai.fm. Which do you prefer? Why?

English Speech-to-Text (STT) for Audio Transcription

Here, I convert spoken audio to text. Speech-to-text technology enables applications like automated transcription services, voice commands, and accessibility features.

Videos:

OpenAI Documentation: Speech to Text Guide

Try It: Build a transcription tool that converts .mp3 and .m4a audio files to text.
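
A minimal transcription sketch (the file name is a placeholder; the model name is an assumption):

from openai import OpenAI

client = OpenAI()

# Transcribe an audio file to text
with open("speech.mp3", "rb") as audio_file:
    transcription = client.audio.transcriptions.create(
        model="whisper-1",  # assumed model; newer transcription models also work
        file=audio_file
    )
print(transcription.text)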

Text-To-Speech (TTS)

Here, I convert written text into natural-sounding speech with one of OpenAI’s 11 voice options. I discuss selecting voice options, specifying speech style and tone, and generating audio files. Text-to-speech technology is crucial for creating voice assistants, audiobook generation, and accessibility applications.

Videos:

OpenAI Documentation: Text to Speech Guide

Try It: Create an app that converts documents to audio files with selectable voices.
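
A minimal synthesis sketch (the model name, voice choice and output file name are assumptions):

from openai import OpenAI

client = OpenAI()

# Synthesize speech and save the audio bytes as an MP3 file
response = client.audio.speech.create(
    model="gpt-4o-mini-tts",  # assumed model that supports style instructions
    voice="alloy",            # one of OpenAI's built-in voices
    instructions="Speak in a warm, upbeat tone.",  # style/tone control
    input="Welcome to the Deitel guide to the OpenAI APIs!"
)
with open("welcome.mp3", "wb") as audio_file:
    audio_file.write(response.content)  # raw audio bytes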

Image Generation

Here, I create original images from text descriptions using OpenAI’s latest image-generation model. Image generation opens possibilities for creative content, design mockups, and visual storytelling.

Videos:

OpenAI Documentation: Images and Vision Guide

Try It: Build an image-generation tool that creates variations based on text prompts.
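
A minimal image-generation sketch (the model name is an assumption; some models return a URL, others base64-encoded data):

from openai import OpenAI

client = OpenAI()

# Generate an image from a text prompt; dall-e-3 returns a hosted URL by default
response = client.images.generate(
    model="dall-e-3",  # assumed model name
    prompt="A watercolor painting of a lighthouse at sunrise",
    size="1024x1024"
)
print(response.data[0].url)  # temporary URL of the generated image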

Image Style Transfer

In two examples, I apply artistic styles to existing images: first using the Images API’s edit capability with style-transfer prompts, then using the Responses API’s image-generation tool to transfer the style of one image to another.

Videos:

OpenAI Documentation: Images and Vision Guide

Try It: Create a style transfer application that transforms user photos into different artistic styles, such as Vincent van Gogh, Leonardo da Vinci and others.
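
Here’s a minimal sketch of the first approach, a style-transfer prompt via the Images API’s edit endpoint (the file names and model are assumptions):

import base64
from openai import OpenAI

client = OpenAI()

# Apply an artistic style to an existing image
with open("photo.png", "rb") as image_file:
    response = client.images.edit(
        model="gpt-image-1",  # assumed image-editing-capable model
        image=image_file,
        prompt="Repaint this photo in the style of Vincent van Gogh's Starry Night."
    )

# gpt-image-1 returns the edited image as base64-encoded data
with open("styled.png", "wb") as output_file:
    output_file.write(base64.b64decode(response.data[0].b64_json))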

Generating Closed Captions from a Video’s Audio Track

In this example, I generate closed captions from a video file’s audio track using OpenAI’s audio transcription capabilities. Closed captions enhance video accessibility and improve content searchability. This example covers caption formatting standards, audio extraction techniques and using the OpenAI Whisper model, which supports generating captions with timestamps. I then use the open-source VLC Media Player to overlay the closed captions on the corresponding video.

Video: Generating Closed Captions from a Video’s Audio Track (9m 7s)

OpenAI Documentation: Speech to Text Guide

Try It: Build a caption generator that programmatically extracts audio from videos and creates properly formatted subtitle files. Investigate the moviepy module for conveniently extracting a video’s audio track in Python.
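
A minimal captioning sketch; Whisper can return timestamped SubRip (.srt) captions directly (the audio file name is a placeholder):

from openai import OpenAI

client = OpenAI()

# Request the transcription in SubRip subtitle format, complete with timestamps
with open("video_audio.mp3", "rb") as audio_file:
    captions = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
        response_format="srt"  # or "vtt" for WebVTT captions
    )

with open("captions.srt", "w", encoding="utf-8") as caption_file:
    caption_file.write(captions)  # the API returns the .srt text as a string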

Content Moderation

Here, I use OpenAI’s Moderation APIs to detect and filter inappropriate or harmful text and images—essential techniques for platforms hosting user-generated content. I present moderation categories and severity levels, demonstrate the Moderation API with text inputs and discuss image moderation.

Videos:

OpenAI Documentation: Moderation Guide

Try It: Create a content moderation system that screens user submissions and flags potentially problematic content.
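
A minimal moderation sketch (the model name is an assumption):

from openai import OpenAI

client = OpenAI()

response = client.moderations.create(
    model="omni-moderation-latest",  # assumed current moderation model
    input="Sample user-submitted text to screen."
)

result = response.results[0]
print("Flagged:", result.flagged)

# Display only the categories the model flagged (categories is a pydantic model)
for category, flagged in result.categories.model_dump().items():
    if flagged:
        print("  ", category)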

Sora 2 Video Generation

This video introduces OpenAI Sora’s video-generation capabilities. I present prompt-to-video and image-to-video demos. Coming soon: I am developing API-based video-generation and video-remixing code examples using OpenAI’s recently released Sora 2 APIs and will add videos based on these code examples when I complete them.

Video: Sora Video Generation (10m 58s)

OpenAI Documentation: Video Generation with Sora Guide

Try It: Experiment with text-to-video prompts and explore the creative possibilities of AI video generation.

Closing Note

As I develop additional OpenAI API-based apps, I will add new videos to this Python Fundamentals lesson on Building API-Based Python GenAI Applications. Some new example possibilities include:

  • Generating and remixing videos with OpenAI’s Sora 2 API.
  • Using OpenAI’s Realtime Audio APIs for speech-to-speech apps.
  • Building AI agents with OpenAI’s AgentKit.
  • Single-tool AI agents.
  • Multi-tool AI agents.
  • Single-agent applications.
  • Multi-agent applications.
  • Managing AI conversations that maintain state between Responses API calls.

Try It: Review the course materials and start planning your own GenAI applications using the techniques learned. Enjoy!

Additional Resources

My Full Throttle, One-Day, Code-Intensive Live-Training Courses on O’Reilly Online Learning

My Video Courses on O’Reilly Online Learning

  • Python Fundamentals, 2/e, includes Data Science and AI fundamentals (55 hours)
  • Java Fundamentals, 3/e (21.5 hours on fundamentals, including new treatment of object-oriented programming; will be 50+ hours when I complete the high-end recordings in Q1 2026)
  • C++20 Fundamentals (54 hours; with an Intro to C++23)
  • C Fundamentals (under development)

C How to Program, 9/e Errata

This post contains the C How to Program, 9/e errata list. We’ll keep this up-to-date as we become aware of additional errata items. Please Contact Us with any you find.

Note: After publication, we discovered a bug in our authoring software that deleted some items in single quotes, like ‘A’, from our code tables. The source-code files were not affected, but occasionally a single-quoted item is missing from a code table in the text.

Last updated January 15, 2023

Chapter 2 — Intro to C Programming

  • Page 76, in Section 2.5: “+, / and %” should be “*, / and %”.

Chapter 4 — Program Control

  • Page 149, “Notes on Integral Types”:

    –32767 should be –32768
    –2147483647 should be –2147483648
    –127 should be –128

Chapter 5 — Functions

  • Page 214, Fig. 5.9: The example should produce factorial values through 20, not 21. The value displayed for factorial(21) in the program output is incorrect because unsigned long long is not capable of representing that value.

Chapter 7 — Pointers

  • Page 320, line 19 of Fig. 7.6 should be:
    while (*sPtr != '\0') {
  • Page 321, line 22 of Fig. 7.7, should be
    for (; *sPtr != '\0'; ++sPtr) {

Chapter 10 — Structures, Unions, Bit Manipulation and Enumerations

  • Page 496, Fig. 10.4, line 24 should be:
    putchar(value & displayMask ? '1' : '0');
  • Page 496, Fig. 10.4, line 28 should be:
    putchar(' ');
  • Page 496, Fig. 10.4, line 32 should be:
    putchar('\n');
  • Page 497, seventh text line on the page should be:
    putchar(value & displayMask ? '1' : '0');
  • Page 499, Fig. 10.5, line 53 should be:
    putchar(value & displayMask ? '1' : '0');
  • Page 499, Fig. 10.5, line 57 should be:
    putchar(' ');
  • Page 499, Fig. 10.5, line 61 should be:
    putchar('\n');
  • Page 502, Fig. 10.6, line 32 should be:
    putchar(value & displayMask ? '1' : '0');
  • Page 502, Fig. 10.6, line 36 should be:
    putchar(' ');
  • Page 502, Fig. 10.6 line 40 should be:
    putchar('\n');

Questions? Contact us!

C++20 for Programmers Now Available to O’Reilly Online Learning Subscribers

C++20 for Programmers is now available to O’Reilly Online Learning Subscribers at:

https://learning.oreilly.com/library/view/c-20-for-programmers/9780136905776/

The print version should be in stock mid-April. Preorder it at Amazon.com or other online book retailers.

Written for programmers with a background in another high-level language, C++20 for Programmers teaches you Modern C++ development hands-on using C++20 and its “Big Four” features:

  • Ranges
  • Concepts
  • Modules
  • Coroutines

In the context of 200+ hands-on, real-world code examples, you’ll quickly master Modern C++ coding idioms using popular compilers—Visual C++®, GNU® g++, Apple® Xcode® and LLVM®/Clang.

After the C++ fundamentals quick start, you’ll move on to C++ standard library containers array and vector; functional-style programming with C++20 Ranges and Views; strings, files and regular expressions; object-oriented programming with classes, inheritance, runtime polymorphism and static polymorphism; operator overloading, copy/move semantics, RAII and smart pointers; exceptions and a look forward to C++23 Contracts; standard library containers, iterators and algorithms; templates, C++20 Concepts and metaprogramming; C++20 Modules and large-scale development; and concurrency, parallelism, the C++17 and C++20 parallel standard library algorithms and C++20 Coroutines.

Features include:

  • Rich coverage of C++20’s “Big Four”: Ranges, Concepts, Modules and Coroutines
  • Objects-Natural Approach: Use standard libraries and open-source libraries to build significant applications with minimal code
  • Hundreds of real-world, live-code examples
  • Modern C++: C++20, 17, 14, 11 and a look to C++23
  • Compilers: Visual C++®, GNU® g++, Apple Xcode® Clang, LLVM®/Clang
  • Docker: GNU® GCC, LLVM®/Clang
  • Fundamentals: Control statements, functions, strings, references, pointers, files, exceptions
  • Object-oriented programming: Classes, objects, inheritance, runtime and static polymorphism, operator overloading, copy/move semantics, RAII, smart pointers
  • Functional-style programming: C++20 Ranges and Views, lambda expressions
  • Generic programming: Templates, C++20 Concepts and metaprogramming
  • C++20 Modules: Large-Scale Development
  • Concurrent programming: Concurrency, multithreading, parallel algorithms, C++20 Coroutines, coroutines support libraries, C++23 executors
  • Future: A look forward to Contracts, range-based parallel algorithms, standard library coroutine support and more

For more details, see the Preface, the Table of Contents diagram and reviewer testimonials.

Questions? Contact us!

Are You Just Getting Started in Java Programming?

Are you just getting started with Java How to Program, 11/e, Early Objects version, Java 9 for Programmers or Java How to Program, 11/e, Late Objects version? You will need to install the Java Development Kit (JDK).

Getting the JDK

Updated January 11, 2021

As of this writing, Java 15 is the current version, and new versions are being released every six months—Java 16 is coming in March. For organizations interested in stable versions of Java with long-term support (LTS), these will be released every three years. The current LTS version is Java 11 (September 2018). The next LTS version will be Java 17 in September 2021.

Oracle, Inc.—Java’s gatekeeper—offers the JDK for download from oracle.com, but Oracle recently changed their licensing terms. Their JDK is meant primarily for corporate users.

For learning purposes, we recommend that you get your JDK from AdoptOpenJDK.net. Always read the software licenses for any software you install.

Once you’ve downloaded the installer for your operating system platform and the version of Java you intend to use, be sure to carefully follow the installation instructions for your platform (found further down the page).

Java FX for Graphical User Interfaces

Since Java 11, the graphical user interface (GUI) library we use in our Java books—Java FX—is no longer distributed as part of the Java Development Kit.

To run the first example in Chapter 1 and the examples in our later Java FX chapters, you’ll first need to install the Java FX Software Development Kit (SDK).

The Java FX SDK installation instructions are at https://openjfx.io/openjfx-docs/. You can download the JavaFX SDK from https://gluonhq.com/products/javafx/.

Be sure to download the version that matches your JDK version number and your platform and closely follow the installation instructions.

If you’re unsure what to download, please send us an email. You’ll need to set your PATH_TO_FX environment variable. Its value depends on where you place the SDK’s folder on your system and which version of the SDK you have. The samples below assume the Java FX SDK’s folder is in your user account’s Downloads folder. In the paths I show below, you need to replace

     “/Users/pauldeitel/Downloads/javafx-sdk-15.0.1”

or

     “c:\Users\pauldeitel\Downloads\javafx-sdk-15.0.1”

with the correct full path on your system and the JavaFX SDK version number for the specific version you downloaded.

Mac/Linux:

     export PATH_TO_FX=/Users/pauldeitel/Downloads/javafx-sdk-15.0.1/lib

Windows:

     set PATH_TO_FX="c:\Users\pauldeitel\Downloads\javafx-sdk-15.0.1\lib"

Compiling and Running the Painter App in Chapter 1

To compile the Painter app in Chapter 1, use the following command in your Command Prompt (Windows), Terminal (macOS) or shell (Linux)—Windows users should replace $PATH_TO_FX with %PATH_TO_FX%

     javac --module-path $PATH_TO_FX --add-modules=javafx.controls,javafx.graphics,javafx.fxml *.java

To run the Painter app, use the following command—Windows users should replace $PATH_TO_FX with %PATH_TO_FX%

     java --module-path $PATH_TO_FX --add-modules=javafx.controls,javafx.graphics,javafx.fxml Painter

If you’re having any trouble at all, please send us an email. We’re happy to help you get up and running!
