In this tutorial you’ll build a serverless Open-NotebookLM that turns any research paper or article into a lively, two-host audio podcast using Inferless.
At a high level, a request needs only a `pdf_url`; `extract_pdf_content` then fetches the file and supplies the full raw text (often 10k+ tokens) to the LLM, calling `extract_text()` to extract every page of the user-supplied PDF. `SUMMARIZATION_PROMPT` directs Qwen3 to produce a five-part breakdown: core ideas, context, challenging concepts, standout facts, and unanswered questions. `PODCAST_CONVERSION_PROMPT` transforms that summary into a conversation, labeled turn-by-turn as `Alex:` and `Romen:`. The finished audio is returned as `generated_podcast_base64`, ready for playback.
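To make that flow concrete, here is a minimal sketch of the extraction and the two-pass prompting. It assumes `pypdf` and `requests` as the parsing and download backends (the tutorial itself only names `extract_text()`), and a generic `llm_generate` callable stands in for the Qwen3 inference call; the prompt strings below are simplified placeholders, not the tutorial's full `SUMMARIZATION_PROMPT` and `PODCAST_CONVERSION_PROMPT`.

```python
import io

import requests
from pypdf import PdfReader  # assumed parsing backend; the tutorial only names extract_text()

# Simplified placeholders; the real templates are longer and more specific.
SUMMARIZATION_PROMPT = (
    "Break the following text into five parts: core ideas, context, "
    "challenging concepts, standout facts, and unanswered questions.\n\n{text}"
)
PODCAST_CONVERSION_PROMPT = (
    "Turn this summary into a podcast conversation, labeling each turn "
    "'Alex:' or 'Romen:'.\n\n{summary}"
)


def extract_pdf_content(pdf_url: str) -> str:
    """Download the PDF at pdf_url and return the raw text of every page."""
    response = requests.get(pdf_url, timeout=60)
    response.raise_for_status()
    reader = PdfReader(io.BytesIO(response.content))
    # extract_text() is called page by page; the results are joined into one string.
    return "\n".join(page.extract_text() or "" for page in reader.pages)


def pdf_to_script(pdf_url: str, llm_generate) -> str:
    """Two-pass prompting: summarize first, then rewrite as an Alex/Romen dialogue.

    llm_generate is a hypothetical (prompt -> completion) callable standing in
    for the Qwen3 call made by the deployed handler.
    """
    raw_text = extract_pdf_content(pdf_url)
    summary = llm_generate(SUMMARIZATION_PROMPT.format(text=raw_text))
    return llm_generate(PODCAST_CONVERSION_PROMPT.format(summary=summary))
```

In the deployed handler, the resulting script is then passed to a text-to-speech stage and the audio comes back base64-encoded as `generated_podcast_base64`.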
Two CLI flags matter for the deployment (the full command is sketched after this list):

- `--gpu A100`: Specifies the GPU type for deployment. Available options include `A10`, `A100`, and `T4`.
- `--runtime inferless-runtime-config.yaml`: Defines the runtime configuration file. If not specified, the default Inferless runtime is used.
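Putting the two flags together, the deploy step looks roughly like this, assuming the standard `inferless deploy` entry point of the Inferless CLI:

```bash
# Run from the project directory; flags taken from the descriptions above.
inferless deploy --gpu A100 --runtime inferless-runtime-config.yaml
```

Once the endpoint is live, the comparison below shows what the same workload costs on an always-on GPU versus Inferless's pay-per-use billing.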
| Scenario | On-Demand Cost (per day) | Serverless Cost (per day) |
| --- | --- | --- |
| 50 requests/day | $28.8 (24 hours billed at $1.22/hour) | $6.43 (5.27 hours billed at $1.22/hour) |
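The serverless figure is simply billed hours times the hourly rate: 5.27 hours × $1.22/hour ≈ $6.43. A rough estimator for other request volumes, assuming the per-request processing time implied by the 50-requests/day row and ignoring cold starts and billing granularity:

```python
A100_HOURLY_RATE = 1.22                 # USD per hour, from the table above
SECONDS_PER_REQUEST = 5.27 * 3600 / 50  # ~379 s per podcast, implied by the table row


def serverless_daily_cost(requests_per_day: int,
                          seconds_per_request: float = SECONDS_PER_REQUEST,
                          hourly_rate: float = A100_HOURLY_RATE) -> float:
    """Estimate the daily serverless bill: pay only for the GPU-seconds actually used."""
    billed_hours = requests_per_day * seconds_per_request / 3600
    return round(billed_hours * hourly_rate, 2)


print(serverless_daily_cost(50))  # 6.43, matching the table
```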