API Reference

TubeTranscribe API Docs

Extract single-video or channel-scale YouTube transcripts with timeline data, credit charging, and 100-concurrency orchestration.

Base URL: https://api.tubetranscribe.com
Output formats: json | txt | srt | vtt
Auth: Bearer <SUPABASE_JWT> or backend email identity fallback

Overview

The core workflow is: resolve source videos, pre-charge credits, run transcript extraction, then render or export with timestamps.

1. Resolve source

Use /api/youtube/source to parse channel or playlist URLs and get the first page of videos.

2. Load all pages

Use /api/youtube/videos with nextPageToken until all videos are loaded.

3. Extract transcript

Call /api/extract per selected video with worker-pool concurrency (up to 100 in flight).

4. Export result

Download one transcript directly, or pack all successful transcripts into a browser-side ZIP.

Authentication

Frontend uses the logged-in Supabase session. Backend validates identity and handles credit charging safely server-side.

Authorization: Bearer <SUPABASE_JWT>
Content-Type: application/json

If the JWT is temporarily unavailable, your backend can resolve credits via an email identity fallback. Keep this fallback logic server-side only.
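The authenticated call can be sketched as below. This is illustrative, not the official client: `buildAuthHeaders` and `extractVideo` are hypothetical helper names, and the JWT is assumed to come from the logged-in Supabase session.

```javascript
// Build the request headers shown above. The JWT is assumed to come
// from the client's Supabase session (helper names are illustrative).
function buildAuthHeaders(jwt) {
  return {
    Authorization: `Bearer ${jwt}`,
    'Content-Type': 'application/json',
  };
}

// Minimal sketch of calling /api/extract from the frontend.
async function extractVideo(jwt, videoUrl) {
  const res = await fetch('https://api.tubetranscribe.com/api/extract', {
    method: 'POST',
    headers: buildAuthHeaders(jwt),
    body: JSON.stringify({ videoUrl, format: 'json' }),
  });
  return res.json();
}
```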

Single Video Extract

Extract a single YouTube video transcript in one request.

curl -X POST https://api.tubetranscribe.com/api/extract \
  -H "Authorization: Bearer <SUPABASE_JWT>" \
  -H "Content-Type: application/json" \
  -d '{
    "videoUrl": "https://youtu.be/dQw4w9WgXcQ",
    "format": "json",
    "tpSource": "web-homepage"
  }'
Example response:

{
  "success": true,
  "credits_charged": 1,
  "data": {
    "videoId": "dQw4w9WgXcQ",
    "title": "Example title",
    "language": "en",
    "lines": [
      { "text": "Hello everyone", "start": 0.5, "dur": 2.1 },
      { "text": "Welcome back", "start": 2.8, "dur": 1.9 }
    ]
  }
}

Timeline Fields

Timeline is returned as lines[]. Each line has text and second-based timing values.

{
  "text": "Hello everyone",
  "start": 0.5,
  "dur": 2.1
}
  • text: caption text content.
  • start: line start time, in seconds.
  • dur: line duration, in seconds; end time is start + dur.
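Since end time is derived as start + dur, converting lines[] to SRT cues is mechanical. A minimal sketch (helper names are illustrative, not part of the API):

```javascript
// Format a second-based timestamp as an SRT time code (hh:mm:ss,mmm).
function secondsToSrt(t) {
  const ms = Math.round(t * 1000);
  const pad = (n, w = 2) => String(n).padStart(w, '0');
  const h = Math.floor(ms / 3600000);
  const m = Math.floor((ms % 3600000) / 60000);
  const s = Math.floor((ms % 60000) / 1000);
  return `${pad(h)}:${pad(m)}:${pad(s)},${pad(ms % 1000, 3)}`;
}

// Convert the API's lines[] array into SRT cue blocks.
// End time is computed as start + dur, per the field definitions above.
function linesToSrt(lines) {
  return lines
    .map((l, i) =>
      `${i + 1}\n${secondsToSrt(l.start)} --> ${secondsToSrt(l.start + l.dur)}\n${l.text}\n`)
    .join('\n');
}
```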

Channel / Playlist Batch Flow

Use server proxy endpoints so the YouTube Data API key is never exposed in browser code.

POST /api/youtube/source
{
  "url": "https://www.youtube.com/@channel_handle"
}
Response:

{
  "success": true,
  "data": {
    "source": {
      "sourceType": "channel",
      "sourceTitle": "Channel name",
      "sourceVideoListId": "UUxxxx"
    },
    "videos": [ ...first 50 videos... ],
    "nextPageToken": "CAUQAA"
  }
}
POST /api/youtube/videos
{
  "playlistId": "UUxxxx",
  "pageToken": "CAUQAA"
}
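The pagination step can be sketched as a loop that follows nextPageToken until it is absent. `fetchPage` is a hypothetical injected helper; in a real client it would wrap the POST /api/youtube/videos request shown above.

```javascript
// Drain every page of a channel/playlist video list.
// `fetchPage(playlistId, pageToken)` is assumed to resolve to
// { videos: [...], nextPageToken?: string }, mirroring the API response.
async function loadAllVideos(fetchPage, playlistId) {
  const videos = [];
  let pageToken; // undefined on the first request
  do {
    const page = await fetchPage(playlistId, pageToken);
    videos.push(...page.videos);
    pageToken = page.nextPageToken; // absent on the last page
  } while (pageToken);
  return videos;
}
```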

Then run a client-side worker pool with concurrency 100, retry up to 3 times per video, and track per-video status: completed, no captions, restricted, or failed.
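A worker-pool sketch under stated assumptions: `runPool` and `extract` are hypothetical names, and mapping API error codes to the "no captions" / "restricted" statuses is omitted here, so this sketch only distinguishes completed from failed.

```javascript
// N workers pull video IDs from a shared queue; each extraction is
// retried up to `retries` times and its final status recorded.
async function runPool(videoIds, extract, { concurrency = 100, retries = 3 } = {}) {
  const queue = [...videoIds];
  const status = {};
  async function worker() {
    while (queue.length) {
      const id = queue.shift(); // synchronous, so workers never collide
      for (let attempt = 1; attempt <= retries; attempt++) {
        try {
          await extract(id);
          status[id] = 'completed';
          break;
        } catch (e) {
          status[id] = attempt === retries ? 'failed' : 'retrying';
        }
      }
    }
  }
  // Never spawn more workers than there are videos.
  await Promise.all(
    Array.from({ length: Math.min(concurrency, videoIds.length) }, worker)
  );
  return status;
}
```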

Download All as ZIP

After batch extraction completes, build ZIP in browser (no server-side zip cost). Keep only successful transcripts as files.

// `entries` and `createZip` are the app's own ZIP helper (e.g. a
// store-only ZIP writer returning a Uint8Array of the archive bytes).
const zipBytes = createZip(entries);
const blob = new Blob([zipBytes], { type: 'application/zip' });

// Trigger a browser download via a temporary object URL.
const url = URL.createObjectURL(blob);
const a = document.createElement('a');
a.href = url;
a.download = 'Channel_Subtitles.zip';
a.click();
URL.revokeObjectURL(url);

Errors and Status

Standardize client-visible failure text so users understand what happened per video.

  • no captions: the video has no available transcript track.
  • restricted: the video is blocked, private, members-only, or region-restricted.
  • failed: the request failed after all retry attempts.

Limits and Best Practices

  • Concurrency: keep up to 100 requests in flight using worker-pool scheduling.
  • Credits: charge per video request, not per batch request.
  • Retries: use exponential backoff for retriable network/server errors.
  • Key safety: store the YouTube API key only in a backend env var: YOUTUBE_DATA_API_KEY.
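One common way to implement the retry guidance is exponential backoff with full jitter: the delay ceiling doubles per attempt, and a random fraction of it is used so concurrent clients do not retry in lockstep. The function name and base/cap values below are illustrative choices, not documented defaults.

```javascript
// Exponential backoff with full jitter.
// attempt 1 -> up to baseMs, attempt 2 -> up to 2*baseMs, ... capped at capMs.
function backoffMs(attempt, baseMs = 500, capMs = 8000) {
  const ceiling = Math.min(capMs, baseMs * 2 ** (attempt - 1));
  return Math.floor(Math.random() * ceiling); // full jitter in [0, ceiling)
}
```

Callers would `await` a sleep of `backoffMs(attempt)` between retries of a retriable error.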

Knowledge Base

AI Knowledge Base Playbook

Use TubeTranscribe transcript exports to build searchable knowledge bases in Grok, ChatGPT, Gemini, and Claude — no infrastructure required.

How It Works

TubeTranscribe's job ends after it downloads and formats transcripts. Everything else — uploading transcripts and chatting with them — happens inside each provider's official chat product.

1. Export from TubeTranscribe

Download transcripts in SRT, TXT, or JSON format.

2. Upload to AI Provider

Upload files to a Project or chat in Grok, ChatGPT, Gemini, or Claude.

3. Ask Questions

Query the transcripts with natural language and request timestamp-based citations.

4. Build Knowledge

Analyze patterns, extract hooks, build topic maps, and create structured datasets.

File Preparation

Recommended Formats

For chat-based analysis, use one of these:

  • SRT (recommended) — best for timestamp citations and finding exact segments.
  • TXT (recommended) — simplest and most compatible across providers.
  • JSON (optional) — good for structured metadata, but some chat UIs treat it as code.

If you can export only one format, pick SRT. If you can export two, upload SRT + a clean TXT.

Naming Conventions

Provider chat UIs rely on filenames to keep documents organized. Use this pattern (one video per file):

channelHandle__videoId__YYYY-MM-DD__lang-xx__title-short.srt

Example:
aliabdal__dQw4w9WgXcQ__2025-01-12__lang-en__weekly-planning.srt

The filename contains video_id and published date, which most chats can reference even without metadata field support.
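The naming pattern above can be generated with a small helper. A sketch under stated assumptions: `transcriptFilename` is a hypothetical name, and the title-slugging rules (lowercase, hyphens, 40-character cap) are illustrative choices rather than part of the convention.

```javascript
// Build channelHandle__videoId__YYYY-MM-DD__lang-xx__title-short.ext
// from per-video metadata. Only the field order is prescribed by the
// convention; the slugging details here are an illustrative choice.
function transcriptFilename({ channel, videoId, publishedAt, lang, title }, ext = 'srt') {
  const slug = title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-') // non-alphanumeric runs become hyphens
    .replace(/^-|-$/g, '')       // trim leading/trailing hyphens
    .slice(0, 40);               // keep the filename short
  return `${channel}__${videoId}__${publishedAt}__lang-${lang}__${slug}.${ext}`;
}
```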

Metadata Header (Best Practice)

For TXT exports, add a short header at the top:

TubeTranscribe Transcript
video_id: dQw4w9WgXcQ
channel: aliabdal
published_at: 2025-01-12
language: en
source_url: https://www.youtube.com/watch?v=dQw4w9WgXcQ

--- TRANSCRIPT ---
[00:00] ...
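The header above is easy to prepend programmatically. A minimal sketch; `withMetadataHeader` and the meta field names are illustrative, not an official export option.

```javascript
// Prepend the TXT metadata header to a transcript body, matching the
// layout shown above (helper and field names are illustrative).
function withMetadataHeader(meta, transcriptText) {
  return [
    'TubeTranscribe Transcript',
    `video_id: ${meta.videoId}`,
    `channel: ${meta.channel}`,
    `published_at: ${meta.publishedAt}`,
    `language: ${meta.language}`,
    `source_url: ${meta.sourceUrl}`,
    '',
    '--- TRANSCRIPT ---',
    transcriptText,
  ].join('\n');
}
```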

Metadata Schema

Use these fields consistently in filenames and/or headers:

Field         Type     Example                          Why it matters
channel       string   aliabdal                         Filtering, grouping
video_id      string   dQw4w9WgXcQ                      Stable identifier
published_at  date     2025-01-12                       Time-based analysis
language      string   en                               Multilingual workflows
source_url    url      https://youtube.com/watch?v=...  Tracebacks
timestamps    boolean  true                             Enables citations
title         string   Weekly planning...               Better retrieval

Grok on xAI

Sign In

  1. Visit grok.com (official consumer site).
  2. Sign in using your preferred method. For account management, xAI provides steps at accounts.x.ai.

Upload Transcript Files

Grok supports uploading documents for analysis. The web chat typically has an attachment control (paperclip / "Add file").

  1. Open a new chat in Grok (or create a dedicated chat for one channel).
  2. Click the attachment control (paperclip / "Add file").
  3. Upload your transcript file(s) — prefer SRT or TXT.
  4. Send a first prompt asking Grok to read the uploaded files.

Tip: Keep one chat per channel and upload new transcripts into the same conversation thread for a "Projects-like" experience.

Example Prompts

Start with a setup prompt:

I uploaded TubeTranscribe transcripts (filenames include video_id and dates).
Please:
1) Confirm which files you can read.
2) Explain how you will cite timestamps.
3) Answer questions using only the transcript content.

Then ask targeted questions:

Find the creator's recurring hook patterns in the first 30 seconds.
Return 5 patterns with 2 examples each, citing timestamps as (video_id @ mm:ss).

Search the transcripts for any step-by-step framework.
Extract it verbatim where possible, and cite timestamps.

Known Limits

  • 48 MB: max file size per upload (documented Files API limit).
  • Auto-search: file attachments trigger an automatic document search workflow.

Web UI limits may differ by plan. If upload fails, split transcripts into smaller files.

Optional: xAI Files API Upload

curl https://api.x.ai/v1/files \
  -H "Authorization: Bearer $XAI_API_KEY" \
  -F "file=@./transcript.srt" \
  -F "purpose=assistants"

ChatGPT Projects on OpenAI

Sign In

  1. Go to chatgpt.com and sign in.
  2. Open Projects from the sidebar and create a new Project.

Upload Transcripts to a Project

  1. In the sidebar, click New project.
  2. Name the project (recommend: Channel – Knowledge Base).
  3. Use Add files to upload transcript files to the project sources.
  4. Wait until the files appear in the Project's context list.
  5. Start a chat inside the project — ChatGPT will use the files as context.

Project Instructions (Recommended)

Projects allow project-specific instructions. Set them like:

You are a transcript analyst. Use ONLY the uploaded transcripts.
Always cite timestamps like (video_id @ mm:ss).
If the transcripts don't contain the answer, say "Not found in provided transcripts."

Example Prompts

List the top 10 recurring topics across these transcripts.
For each topic, cite 3 timestamped examples (video_id @ mm:ss).

Create a "Creator Style Profile":
- Hook formulas
- Storytelling structure
- CTA patterns
Cite evidence with timestamps.

Known Limits

  • 512 MB: hard limit per file.
  • 2M tokens: text/document files are capped at 2 million tokens per file.
  • 20–40 files per project (Plus: 20; Pro/Team/Business: 40).

Optional: OpenAI Files API Upload

curl https://api.openai.com/v1/files \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -F "file=@./transcript.srt" \
  -F "purpose=assistants"

Gemini Apps on Google

Sign In

  1. Go to gemini.google.com and sign in with your Google account.
  2. You must be signed in to use file upload.

Upload Transcript Files

  1. In the Gemini web app, type your question in the message box.
  2. Click Add files.
  3. Choose Files (device upload) and select your transcript file(s).
  4. Click Submit to send the prompt with attached files.

Example Prompts

Because Gemini uploads are per-chat-prompt, start with an indexing prompt:

I uploaded YouTube transcript files exported from TubeTranscribe.
Confirm the filenames you received. Then build a topic map and let me query by topic.
Always cite timestamps from the transcript text.

Find 10 high-performing hook lines (first 30–60 sec) across these transcripts.
For each, cite (filename/video_id @ mm:ss) and explain the hook technique.

Known Limits

  • 10 files: maximum files per prompt.
  • 100 MB: max size per non-video file.
  • Rolling limits on "chats with files"; re-upload if analysis fails.

Optional: Gemini Files API

Gemini's Files API supports 20 GB per project, 2 GB per file, with 48-hour retention (auto-deleted).

curl "https://generativelanguage.googleapis.com/v1beta/files" \
  -H "x-goog-api-key: $GEMINI_API_KEY"

Claude Projects on Anthropic

Sign In

  1. Go to claude.ai and sign in.
  2. Open Projects and create a new project.

Upload Transcripts to a Project Knowledge Base

Claude supports uploading files to a project's persistent knowledge area or directly to a chat.

  1. Go to Projects and click "+ New Project".
  2. Open the project. On the right side, find the project knowledge base / files area.
  3. Click the "+" button to add content, then upload transcript files.
  4. Wait for Claude to process the files (they appear in the project knowledge area).
  5. Start a chat inside the project.

When projects approach context limits, Claude can automatically enable a RAG mode to expand capacity — no setup required.

Example Prompts

You are my YouTube transcript analyst.
Use only the project files. Cite timestamps like (video_id @ mm:ss).
First: list the project files you can access and what each one is about.

Extract the creator's "content formula":
- hook type
- structure
- CTA placement
Provide 5 examples with timestamps.

Known Limits

  • 30 MB: max file size per upload.
  • 20 files: maximum files per chat.
  • Context: project files must fit within Claude's overall context window.

Optional: Claude Files API Upload

curl https://api.anthropic.com/v1/files \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "anthropic-beta: files-api-2025-04-14" \
  -F "file=@./transcript.srt"

Troubleshooting

If Upload Fails

  • Split large transcripts into smaller per-video files.
  • Prefer TXT if SRT parsing seems inconsistent.
  • Avoid ZIP unless the provider explicitly supports it.
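Splitting a large transcript can be sketched as a chunker that breaks only at line boundaries, so no caption line is cut in half. `splitTranscript` is a hypothetical helper name; byte size is measured as UTF-8.

```javascript
// Split transcript text into chunks no larger than `maxBytes` (UTF-8),
// breaking only at newline boundaries.
function splitTranscript(text, maxBytes) {
  const encoder = new TextEncoder();
  const chunks = [];
  let current = '';
  for (const line of text.split('\n')) {
    const candidate = current ? current + '\n' + line : line;
    if (encoder.encode(candidate).length > maxBytes && current) {
      chunks.push(current); // flush the full chunk, start a new one
      current = line;
    } else {
      current = candidate;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```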

If the Model "Ignores" Files

  • Ask it to list filenames it can see first (sanity check).
  • Put key metadata (video_id, date, language) in filename and header.
  • In ChatGPT/Claude Projects, confirm you're chatting inside the project, not in a general chat.

If You Hit Limits

  • ChatGPT: project file count varies by plan; 512 MB hard limit per file; token caps apply.
  • Gemini: rolling limits for "chats with files"; 10 files per prompt; 100 MB per file.
  • Claude: 30 MB per file; up to 20 files per chat; projects limited by context capacity.
  • Grok: Files API max 48 MB per file; web UI limits may vary; split files if needed.

Provider Comparison

Provider            Upload Location                  Projects         Key Limits                                    Best Use-Case
Grok (xAI)          Grok chat on grok.com            Not documented   48 MB/file (API)                              Quick Q&A; real-time web search + transcript grounding
ChatGPT (OpenAI)    ChatGPT Projects on chatgpt.com  Yes              512 MB/file; 2M tokens; 20–40 files/project   Persistent channel knowledge bases
Gemini (Google)     gemini.google.com "Add files"    No (chat-based)  10 files/prompt; 100 MB/file                  Fast Q&A on small sets; multimodal workflows
Claude (Anthropic)  Claude Projects on claude.ai     Yes              30 MB/file; 20 files/chat                     Deep qualitative analysis; long-running research

Official Links