For learners who listen

Your podcasts become a knowledge base.

Colistener transcribes your podcast in real time and lets you ask questions by voice or text while you listen. The AI answers with full context of what's being said and automatically stores every insight, searchable across every episode you've ever listened to.

Free during beta · No credit card required

Lex Fridman Podcast #421: Consciousness & AI
Live transcript
Lex Fridman 23:14
So when we talk about RLHF, how does that fundamentally change the way the model behaves?
Guest 23:31
It's a paradigm shift. Instead of predicting the next token, you're optimizing for human preference. The reward model becomes a proxy for what humans actually want.
Lex Fridman 24:02
And the reward model, that's where things get interesting, because...
Ask by voice or text
What did he mean by "proxy"?
He's saying the reward model is a stand-in for actual human judgment. Instead of asking a human to rate every response, they train a separate model to predict what humans would prefer. So it's a proxy, an approximation of human values. 📍 from transcript @ 23:31
Ask anything, voice or text...

Real-time transcription → contextual AI

Not a chatbot bolted onto a podcast app. Colistener hears what you hear, knows where you are in the episode, and answers with full context.

01

Live Transcription

Audio is transcribed in real time as you listen. The AI always knows exactly what's being said and when.

Speech → Text, timestamped
02

Temporal Context

Ask "what did he mean by that?" The AI knows what "that" refers to because it has the live transcript with timestamps. No rewinding. No explaining.

Context-aware Q&A
03

Auto-stored Memory

Every Q&A automatically generates a takeaway + tags and gets stored. Your listening history becomes a searchable knowledge base. No manual bookmarking.

Takeaway + Tags → Knowledge graph
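The three-step flow above can be sketched as a simple data model: timestamped transcript segments provide the context window for "what did he mean by that?", and each answer is stored as a tagged takeaway. This is a minimal, hypothetical illustration; none of these names come from Colistener's actual code.

```python
# Hypothetical sketch of the pipeline: timestamped segments -> recent
# context for Q&A -> stored, tagged takeaway. Names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Segment:
    start: float   # seconds into the episode
    speaker: str
    text: str

@dataclass
class Takeaway:
    episode: str
    timestamp: float
    question: str
    answer: str
    tags: list[str] = field(default_factory=list)

def recent_context(transcript: list[Segment], now: float,
                   window: float = 120.0) -> list[Segment]:
    """Segments from the last `window` seconds -- what "that" can refer to."""
    return [s for s in transcript if now - window <= s.start <= now]

# Usage: at 24:02 (1442 s), "that" resolves against the last two minutes.
transcript = [
    Segment(1394, "Lex", "So when we talk about RLHF..."),
    Segment(1411, "Guest", "The reward model becomes a proxy for what humans want."),
]
context = recent_context(transcript, now=1442)
memory = [Takeaway("Lex Fridman #421", 1411,
                   'What did he mean by "proxy"?',
                   "The reward model approximates human judgment.",
                   tags=["RLHF", "reward model"])]
```

Keeping every takeaway timestamped and tagged is what lets later episodes link back to where a concept was first learned.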
🎙 Ask by voice, hands-free while you listen ⌨ Or type your question, your choice

Four things no other tool does.

โฑ

Temporal Awareness

"What did he mean by that?" just works. The AI has the full, timestamped transcript. It knows exactly where you are in the conversation and what "that" refers to.

User: "Wait, what's the reward model he mentioned?"
→ AI references transcript @ 23:31, not a generic definition.
🧠

Automatic Knowledge Base

Every question you ask generates a takeaway and gets tagged automatically. Across 50 episodes, you build a searchable personal knowledge graph. Without lifting a finger.

"What do I know about AI alignment?"
→ Synthesizes insights from 14 episodes, identifies gaps.
🌐

Web Search Fallback

When the podcast doesn't have the answer, the AI searches the web. A guest mentions a paper? You get the abstract. An unfamiliar name? You get the context. Not limited to what the host said.

"Who published that paper he's referencing?"
→ Searches web, returns author, year, key findings.
🔗

Cross-Episode Memory

A new episode mentions RLHF? Colistener knows you learned that in Episode #421 and offers what's new instead of repeating itself. Concepts compound across your entire listening history.

"You explored RLHF in Ep. #421. This episode critiques it. Want a refresher?"

NotebookLM processes after.
Colistener helps during.

Most tools make you wait until the episode is over. Colistener is there in real time, right when curiosity strikes.

When
NotebookLM / Others: After you finish listening
Colistener: While you're listening, in real time

Context
NotebookLM / Others: Full transcript, no temporal awareness
Colistener: Knows exactly where you are in the episode

Memory
NotebookLM / Others: Per-document, manual organization
Colistener: Auto-stored across every episode, searchable

Beyond the audio
NotebookLM / Others: Limited to uploaded content
Colistener: Web search when the podcast doesn't have the answer

Over time
NotebookLM / Others: Resets per notebook
Colistener: Compounds, your knowledge grows with every listen

The longer you listen, the smarter it gets.

DAY 1

Instant Understanding

You hear "RLHF" for the first time. Ask Colistener. Get an explanation grounded in exactly what the podcast just said, not a generic Wikipedia answer.

You: "What's RLHF?" Colistener: "Based on what they're discussing at 23:31, Reinforcement Learning from Human Feedback is a technique where human preferences shape the AI's behavior. The guest is arguing it's a paradigm shift from raw next-token prediction."
MONTH 2

Connected Learning

A different podcast mentions RLHF. Colistener recognizes you've learned this before and surfaces what's new instead of repeating the basics.

Colistener: "You learned about RLHF in Lex Fridman #421. This episode discusses a critique of it – the guest argues reward hacking undermines the approach. Want a refresher first?"
MONTH 6

Knowledge Synthesis

Ask what you've learned about any topic. Colistener synthesizes insights across your entire listening history into a coherent picture – and identifies gaps.

You: "What do I know about AI alignment?" Colistener: "Across 14 episodes, you've explored: the alignment problem, RLHF, constitutional AI, and interpretability. Your understanding is strongest in RLHF. Gap: you haven't encountered governance perspectives yet."