creators · concept · sha:0f2a139c9456689e · manual

podcast-show-notes

Use when turning a recorded podcast episode into publishable show notes — Whisper transcript with timestamps, chapter summaries, pull-quote tweets, and an SEO-formatted episode page in one pass.

One-line install
curl --create-dirs -fsSL https://skillmake.xyz/i/podcast-show-notes -o ~/.claude/skills/podcast-show-notes/SKILL.md

The hash above pins this exact content. The file we serve at /api/marketplace/podcast-show-notes-0f2a139c/raw always matches sha:0f2a139c9456689e.

---
name: podcast-show-notes
description: Use when turning a recorded podcast episode into publishable show notes — Whisper transcript with timestamps, chapter summaries, pull-quote tweets, and an SEO-formatted episode page in one pass.
source: https://github.com/openai/whisper
generated: 2026-05-07T21:42:40.674Z
category: concept
audience: creators
---

## Tutorials

- https://skillmake.xyz/v/podcast-show-notes.mp4

## When to use

- Producing show notes the moment an episode is recorded, not three days later
- Generating chapter markers that podcast apps and YouTube understand
- Extracting tweetable pull-quotes for promo without re-listening
- Writing the episode landing page (title, description, transcript, links) automatically

## Key concepts

### chapter markers

Apple Podcasts, Spotify, and YouTube each read chapter markers in slightly different formats. The lowest common denominator is a list of {start (HH:MM:SS), title} entries that every platform's metadata field accepts. Generate it from Whisper segment timestamps plus LLM topic detection.
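A minimal sketch of that lowest common denominator (function names and the `startSeconds` field are illustrative, not part of the skill):

```javascript
// Convert Whisper segment seconds into the {start, title} list that
// Apple Podcasts, Spotify, and YouTube metadata fields all accept.
function toHMS(seconds) {
  const h = Math.floor(seconds / 3600);
  const m = Math.floor((seconds % 3600) / 60);
  const s = Math.floor(seconds % 60);
  return [h, m, s].map((n) => String(n).padStart(2, '0')).join(':');
}

// chapters: [{ startSeconds, title }] from LLM topic detection
function formatChapters(chapters) {
  return chapters.map((c) => ({ start: toHMS(c.startSeconds), title: c.title }));
}

// YouTube reads plain "HH:MM:SS Title" lines from the description;
// the first chapter must start at zero for YouTube to pick them up.
function youtubeDescription(chapters) {
  return formatChapters(chapters)
    .map((c) => `${c.start} ${c.title}`)
    .join('\n');
}
```

For example, `youtubeDescription([{ startSeconds: 0, title: 'Intro' }, { startSeconds: 754, title: 'Why RSS still matters' }])` yields one `00:00:00` line and one `00:12:34` line.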

### pull-quote extraction

Run an LLM over the transcript with a prompt like: 'Find 5 quotable lines: short (≤200 chars), self-contained, opinionated or surprising.' The output is a tweet-ready list with timestamps so each quote can be linked back to its audio moment.
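A small post-filter over the model's candidates can enforce the length limit and attach the link; the `{ text, start }` shape and the `#t=` deep-link scheme are assumptions, not specified by the skill:

```javascript
// Validate LLM-proposed pull-quotes and attach a link back to the
// audio moment. quotes: [{ text, start }] with start as "HH:MM:SS".
function formatPullQuotes(quotes, episodeUrl) {
  return quotes
    .filter((q) => q.text.length <= 200) // keep them tweet-sized
    .map((q) => `"${q.text}" (${q.start}) ${episodeUrl}#t=${q.start}`);
}
```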

### show-notes structure

What works: 1-line episode hook → guest bio (if any) → chapter markers with timestamps → 'links mentioned' list → pull-quotes → full transcript at the bottom (collapsible). The structure is identical across episodes; only the content varies.
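Since the structure never changes, the whole page can be one template function; this is a sketch under assumed field names, with the transcript collapsed via a `<details>` element:

```javascript
// Assemble the fixed show-notes skeleton. Every field is a plain
// string or array, so the same function works for any episode.
function renderShowNotes({ hook, guestBio, chapters, links, quotes, transcript }) {
  const parts = [
    hook,
    guestBio ? `## Guest\n${guestBio}` : null, // omitted when no guest
    `## Chapters\n${chapters.map((c) => `- ${c.start} ${c.title}`).join('\n')}`,
    `## Links mentioned\n${links.map((l) => `- ${l}`).join('\n')}`,
    `## Pull-quotes\n${quotes.map((q) => `> ${q}`).join('\n')}`,
    `<details><summary>Full transcript</summary>\n\n${transcript}\n</details>`,
  ];
  return parts.filter(Boolean).join('\n\n');
}
```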

## API reference

### Whisper verbose JSON for segment timestamps

Run with response_format=verbose_json so each segment has start/end seconds — required for chapter detection and pull-quote linking.

```shell
# Local open-source Whisper CLI
whisper episode.mp3 --model large-v3 --output_format json --word_timestamps True
```

Or via the OpenAI hosted API:

```javascript
import fs from 'node:fs';
import OpenAI from 'openai';

const openai = new OpenAI();
const t = await openai.audio.transcriptions.create({
  file: fs.createReadStream('episode.mp3'),
  model: 'whisper-1',
  response_format: 'verbose_json',
  timestamp_granularities: ['segment', 'word'],
});
```

### Chapter detection prompt

LLM pass over segments to find natural topic shifts. Returns JSON list of chapters with start time + title.

```
From this timestamped transcript, identify 5–10 natural chapter breaks. Each chapter should be 3–10 minutes. Title each in 4–8 words.

Return JSON: [{"start": "HH:MM:SS", "title": "..."}]

TRANSCRIPT:
<segments>
```
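Two glue functions are needed around that prompt: one to render `verbose_json` segments into the `<segments>` placeholder, and one to parse the model's reply. A sketch, assuming segments shaped like Whisper's `{ start, text }` and a reply that may wrap the JSON array in prose:

```javascript
// Render verbose_json segments as "[HH:MM:SS] text" lines for the
// chapter-detection prompt. Valid for episodes under 24 hours.
function segmentsToTranscript(segments) {
  return segments
    .map((s) => {
      const hms = new Date(s.start * 1000).toISOString().substring(11, 19);
      return `[${hms}] ${s.text.trim()}`;
    })
    .join('\n');
}

// The prompt asks for a bare JSON array; tolerate surrounding prose
// by extracting the outermost [...] span before parsing.
function parseChapters(llmReply) {
  const match = llmReply.match(/\[[\s\S]*\]/);
  if (!match) throw new Error('no JSON array in reply');
  return JSON.parse(match[0]);
}
```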

## Gotchas

- Whisper timestamps drift on long files (>1 hour) — re-anchor every 30 minutes by chunking the audio and offsetting timestamps after merge.
- Two-host shows need diarization to attribute pull-quotes correctly — Whisper alone doesn't do this; pair with pyannote or use Deepgram/AssemblyAI.
- Don't auto-publish chapter titles — read them once. LLM-generated titles can be technically correct but tone-deaf to the actual conversation.
- Pull-quotes work best as 1–2 sentences ≤200 chars; longer ones don't fit on Twitter and feel like blog snippets.
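The re-anchoring gotcha above amounts to tracking each chunk's start offset and adding it back after transcription; a minimal sketch, assuming per-chunk results shaped like `{ offset, segments }` with times in seconds:

```javascript
// Merge per-chunk Whisper results back into one timeline by adding
// each chunk's start offset (seconds into the episode) to its
// segment timestamps, so chapter detection sees absolute times.
function mergeChunks(chunkResults) {
  return chunkResults.flatMap(({ offset, segments }) =>
    segments.map((s) => ({ ...s, start: s.start + offset, end: s.end + offset }))
  );
}
```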

---
Generated by SkillMake from https://github.com/openai/whisper on 2026-05-07T21:42:40.674Z.
Verify against source before relying on details.

File: ~/.claude/skills/podcast-show-notes/SKILL.md