chinnam.AI — RAG & Multi-Agent Content Platform

An AI-powered blog platform that automatically generates technical articles from research papers using RAG (Retrieval-Augmented Generation) and a multi-agent pipeline. The system ingests 200+ arXiv papers, chunks and embeds them into pgvector, then orchestrates three specialized AI agents — Research, Drafting, and Fact-Check — to produce publication-ready content.

Table of Contents

  1. System Overview & Tech Stack
  2. High-Level Architecture
  3. Database Schema & pgvector
  4. Paper Ingestion Pipeline (Python)
  5. RAG: Vector Search & Retrieval
  6. Multi-Agent System Design
  7. API Design & Step-Based Execution
  8. Frontend: Admin Dashboard & Pipeline UI
  9. Code Deep-Dive: Key Implementations
  10. Design Decisions & Trade-offs
  11. Agent-to-Agent Communication Patterns (5 Patterns)
  12. Scaling & Future Enhancements
  13. Interview Cheat Sheet

1 System Overview & Tech Stack

chinnam.AI is a personal AI engineering blog that generates technical articles from academic research papers. The system combines RAG (Retrieval-Augmented Generation) with a multi-agent orchestration pattern to produce fact-checked, publication-ready blog posts.

Tech Stack

LayerTechnologyPurpose
FrontendNext.js 16 React 19 Tailwind 4Blog UI, Admin Dashboard, Pipeline visualization
BackendTypeScript Server Actions API RoutesMulti-agent orchestration, step-based API
DatabaseNeon PostgreSQL pgvector Prisma 6Posts, paper chunks, images, tables with vector embeddings
AI/LLMClaude (Haiku) OpenAI EmbeddingsAgent reasoning, text-embedding-3-small (1536d)
IngestionPython PyPDF2 tiktokenarXiv download, chunking, embedding generation
HostingVercel Neon (Free)Zero-cost deployment with serverless functions
200+
Research Papers Ingested
1536d
Vector Embedding Dimensions
3
Specialized AI Agents
$0
Monthly Hosting Cost

2 High-Level Architecture

┌─────────────────────────────────────────────────────────────────────────────────┐ │ chinnam.AI System Architecture │ └─────────────────────────────────────────────────────────────────────────────────┘ LOCAL MACHINE (One-time) VERCEL (Production) ┌───────────────────────┐ ┌──────────────────────────────────┐ │ Python Ingestion │ │ Next.js 16 Application │ │ Pipeline │ │ │ │ ┌─────────────────┐ │ │ ┌────────────────────────────┐ │ │ │ 1. arXiv API │ │ │ │ /admin (Protected UI) │ │ │ │ Download PDFs │ │ │ │ ├─ Manual Post Creator │ │ │ └────────┬────────┘ │ │ │ └─ AI Generate Tab │ │ │ ┌────────▼────────┐ │ │ │ ├─ Topic Input │ │ │ │ 2. PDF Chunker │ │ │ │ ├─ Pipeline Viz │ │ │ │ Structure- │ │ │ │ └─ Review & Publish │ │ │ │ Aware + tikto │ │ │ └────────────┬─────────────┘ │ │ └────────┬────────┘ │ │ │ │ │ ┌────────▼────────┐ │ │ ┌────────────▼─────────────┐ │ │ │ 3. Image Extract│ │ │ │ POST /api/generate │ │ │ │ PyMuPDF + │ │ │ │ Step-Based Pipeline: │ │ │ │ Claude Vision │ │ │ │ │ │ │ └────────┬────────┘ │ │ │ step:"research" │ │ │ ┌────────▼────────┐ │ │ │ → Vector Search (RAG) │ │ │ │ 4. Table Extract│ │ │ │ → Research Agent │ │ │ │ pdfplumber │ │ │ │ │ │ │ └────────┬────────┘ │ │ │ step:"draft" │ │ │ ┌────────▼────────┐ │ ┌──────────┐ │ │ → Drafting Agent │ │ │ │ 5. Embed + Upload│──┼──▶│ │◀──┼──│ │ │ │ │ OpenAI embed │ │ │ Neon │ │ │ step:"fact-check" │ │ │ │ → pgvector │ │ │ Postgres│ │ │ → Fact-Check Agent │ │ │ └─────────────────┘ │ │ pgvector│ │ └────────────┬─────────────┘ │ └───────────────────────┘ │ │ │ │ │ └──────────┘ │ ┌────────────▼─────────────┐ │ │ │ /api/posts → createPost │ │ ┌────────────────────┐ │ │ Publish to Blog │ │ │ External APIs │ │ └──────────────────────────┘ │ │ ┌──────────────┐ │ │ │ │ │ Anthropic API │◀─┼─────────────────────│ Claude: Agent reasoning │ │ │ (Claude) │ │ │ │ │ └──────────────┘ │ │ ┌──────────────────────────┐ │ │ ┌──────────────┐ │ │ │ /blog/[slug] │ │ │ │ OpenAI API │◀─┼─────────────────────│ │ Markdown → HTML │ │ │ │ (Embeddings) │ │ │ │ + Syntax Highlighting │ │ │ └──────────────┘ │ │ │ + Related Posts │ │ └────────────────────┘ │ └──────────────────────────┘ │ └──────────────────────────────────┘

Two Main Pipelines

Pipeline 1: Offline Ingestion (Python, runs locally)

Download papers from arXiv → Chunk with section-awareness → Extract images/tables → Generate embeddings → Upload to Neon pgvector. Runs once to build the knowledge base.

Pipeline 2: Online Generation (TypeScript, runs on Vercel)

User enters topic → Research Agent queries pgvector → Drafting Agent writes article → Fact-Check Agent validates → Human reviews and publishes. Each step is a separate API call to stay within Vercel's 60s timeout.

3 Database Schema & pgvector

The system uses Neon PostgreSQL with the pgvector extension for both relational data (blog posts) and vector similarity search (paper embeddings). This eliminates the need for a separate vector database.

Prisma prisma/schema.prisma
datasource db {
  provider   = "postgresql"
  url        = env("DATABASE_URL")
  extensions = [vector]  // Enable pgvector extension
}

// Blog posts — the final output
model Post {
  id        String   @id @default(cuid())
  title     String
  slug      String   @unique
  content   String   @db.Text     // Markdown content
  excerpt   String?  @db.Text
  published Boolean  @default(false)
  tags      String[]
  createdAt DateTime @default(now())
  updatedAt DateTime @updatedAt
}

// RAG source — chunked research paper text
model PaperChunk {
  id         String                         @id @default(cuid())
  paperId    String
  title      String                         // Paper title
  authors    String
  arxivId    String                         // arXiv ID for dedup
  chunkIndex Int                            // Position within paper
  section    String    @default("")      // Section heading
  content    String    @db.Text           // Chunk text (~800 tokens)
  embedding  Unsupported("vector(1536)")  // OpenAI embedding
  createdAt  DateTime  @default(now())

  @@index([paperId])
  @@index([arxivId])
}

// Extracted figures with Claude Vision descriptions
model PaperImage {
  id          String                         @id @default(cuid())
  imageData   Bytes                          // Binary image data
  contentType String                         // "image/png"
  description String    @db.Text           // Searchable description
  embedding   Unsupported("vector(1536)")  // Embed the description
  // ... paperId, arxivId, pageNumber, imageIndex
}

// Extracted data tables in markdown format
model PaperTable {
  markdown    String    @db.Text           // Ready-to-use markdown table
  description String    @db.Text           // Searchable description
  embedding   Unsupported("vector(1536)")  // Embed the description
  // ... paperId, arxivId, pageNumber, tableIndex
}

Why pgvector Instead of Pinecone/Weaviate?

Zero additional cost — Neon's free tier includes pgvector. No separate vector DB subscription needed.
Single database — Posts and embeddings live together. No cross-database consistency issues.
Familiar SQL — Use standard SQL with pgvector's <=> cosine distance operator.
Good enough at scale — pgvector handles millions of vectors with HNSW indexing.

Vector Search SQL

SQL Cosine Similarity Search
SELECT
  id, title, authors, section, content,
  1 - (embedding <=> '[0.023, -0.041, ...]'::vector) AS similarity
FROM "PaperChunk"
WHERE 1 - (embedding <=> '[...]'::vector) > 0.25   -- similarity threshold
ORDER BY embedding <=> '[...]'::vector              -- nearest first
LIMIT 8                                              -- top-k results

4 Paper Ingestion Pipeline (Python)

A 5-step offline pipeline that builds the knowledge base. Runs locally to avoid Vercel timeout limits.

1

Download Papers from arXiv arxiv_downloader.py

10 search queries (RAG, LLM agents, transformers, embeddings, etc.) × 20 papers each = ~200 PDFs. Uses arXiv API with metadata extraction.

2

Structure-Aware Chunking chunker.py

Detects section headings via regex, chunks within sections (never crossing boundaries). 800 tokens/chunk, 150 token overlap. Filters out References/Appendix.

3

Image Extraction image_extractor.py

PyMuPDF extracts images from PDFs (min 15KB filter). Claude Vision generates searchable descriptions for each figure.

4

Table Extraction table_extractor.py

pdfplumber extracts structured tables, converts to markdown format. Filters: min 3 rows, 2 columns.

5

Embed & Upload to Neon embedder.py

Batch embedding via OpenAI (50 items/call). Direct psycopg2 insert to pgvector. Deduplication by arxiv_id + chunk_index.

Chunking Strategy — The Key to Good RAG

Python backend/ingestion/chunker.py
# Section-aware chunking: detects paper headings to preserve context
CHUNK_SIZE = 800       # tokens per chunk
CHUNK_OVERLAP = 150    # overlapping tokens between chunks

# Regex detects: "1. Introduction", "3.1 Model Architecture",
#                "I. INTRODUCTION", "Abstract"
SECTION_HEADING_RE = re.compile(
    r'\n('
    r'(?:1?\d)\.\d+\s+[A-Z][a-zA-Z]+(?:\s+[a-zA-Z\-:,]+){1,10}'
    r'|(?:1?\d)\.\s+[A-Z][a-zA-Z]+(?:\s+[a-zA-Z\-:,]+){0,10}'
    r'|[IVX]+\.\s*[A-Z][^\n]{2,60}'
    r'|Abstract(?:\s*[\u2014\u2013\-])?'
    r')\n'
)

# Stop at References — no useful content beyond this point
FILTER_SECTIONS_RE = re.compile(
    r'(?i)^(?:\d+\.?\s*)?(?:references|bibliography|appendix)'
)

def chunk_text(text, chunk_size=CHUNK_SIZE, overlap=CHUNK_OVERLAP):
    """Split text into overlapping chunks using sentence boundaries."""
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode(text)

    chunks = []
    start = 0
    while start < len(tokens):
        end = min(start + chunk_size, len(tokens))
        chunk_text = enc.decode(tokens[start:end])

        # Try to end at sentence boundary (past 50% mark)
        if end < len(tokens):
            last_period = chunk_text.rfind(". ")
            last_newline = chunk_text.rfind("\n")
            split_point = max(last_period, last_newline)
            if split_point > len(chunk_text) * 0.5:
                chunk_text = chunk_text[:split_point + 1]

        chunks.append(chunk_text.strip())
        start = end - overlap  # Overlap for context continuity

    return chunks

Why Structure-Aware Chunking?

Problem: Naive fixed-size chunking splits mid-sentence and loses section context. A chunk about "attention mechanisms" might end up with no indication it came from the "Model Architecture" section.

Solution: Detect section headings first, then chunk within sections. Each chunk includes [Section: heading] prefix. Overlapping ensures no information is lost at boundaries.

5 RAG: Vector Search & Retrieval

User Query: "How does RAG work with vector databases?" │ ▼ ┌──────────────────────────────┐ │ OpenAI text-embedding-3-small│ │ Query → 1536-dim vector │ └──────────────┬───────────────┘ │ ┌──────────┼──────────┐ ▼ ▼ ▼ ┌─────────┐ ┌────────┐ ┌────────┐ │Paper │ │Paper │ │Paper │ 3 parallel │Chunks │ │Images │ │Tables │ searches via │(top 8) │ │(top 4) │ │(top 3) │ Promise.all() └────┬────┘ └───┬────┘ └───┬────┘ │ │ │ └──────────┼──────────┘ ▼ ┌──────────────────────────────┐ │ Research Agent receives: │ │ • 8 relevant paper chunks │ │ • 4 matching figures │ │ • 3 data tables │ │ • Similarity scores │ │ • Section context │ └──────────────────────────────┘

Vector Search Implementation

TypeScript lib/ai/vector-search.ts
import OpenAI from "openai"
import { prisma } from "@/lib/db"

// Generate query embedding using OpenAI
async function generateEmbedding(text: string): Promise<number[]> {
  const response = await getOpenAIClient().embeddings.create({
    model: "text-embedding-3-small",
    input: text,
  })
  return response.data[0].embedding  // 1536-dimensional vector
}

// Semantic search against pgvector with cosine similarity
export async function searchPapers(
  query: string,
  topK: number = 10,
  similarityThreshold: number = 0.3
): Promise<SearchResult[]> {
  const embedding = await generateEmbedding(query)
  const embeddingStr = `[${embedding.join(",")}]`

  // Raw SQL for pgvector cosine similarity
  const results = await prisma.$queryRawUnsafe(`
    SELECT id, title, authors, "arxivId", section, content,
      1 - (embedding <=> '${embeddingStr}'::vector) as similarity
    FROM "PaperChunk"
    WHERE 1 - (embedding <=> '${embeddingStr}'::vector) > ${similarityThreshold}
    ORDER BY embedding <=> '${embeddingStr}'::vector
    LIMIT ${topK}
  `)

  return results
}

Key Design Choice: Parallel Multi-Modal Search
The Research Agent searches three tables simultaneously using Promise.all() — paper chunks, images, and tables. This follows the asyncio.gather pattern: latency = max(calls) not sum(calls). A technique that achieved 60% processing time reduction in production notification pipelines.

TypeScript lib/ai/agents.ts — Parallel Search
// Run all 3 searches in parallel — latency = max(calls), not sum(calls)
const [sources, images, tables] = await Promise.all([
  searchPapers(state.topic, 8, 0.25),   // 8 chunks, threshold 0.25
  searchImages(state.topic, 4, 0.3),   // 4 figures, threshold 0.3
  searchTables(state.topic, 3, 0.3),   // 3 tables, threshold 0.3
])

6 Multi-Agent System Design

┌─────────────────────────────────────────────────────────────────┐ │ SHARED PIPELINE STATE │ │ { │ │ topic: string ← User input │ │ researchSummary: string ← Research Agent output │ │ sources: [{title, content, similarity}] ← pgvector results │ │ images: [{id, description, similarity}] ← figure search │ │ tables: [{markdown, description}] ← table search │ │ draft: string ← Drafting Agent output │ │ factCheckResults: {verified[], issues[], suggestions[]} │ │ finalArticle: string ← Fact-Check Agent output │ │ metadata: {title, excerpt, tags[]} │ │ status: "researching" | "drafting" | "fact-checking" | ... │ │ } │ └──────────────────────────────┬──────────────────────────────────┘ │ ┌────────────────────────────┼────────────────────────────────┐ │ │ │ ▼ ▼ ▼ ┌──────────────┐ ┌──────────────────┐ ┌──────────────────┐ │ RESEARCH │ │ DRAFTING │ │ FACT-CHECK │ │ AGENT │────▶│ AGENT │────▶│ AGENT │ │ │ │ │ │ │ │ Input: │ │ Input: │ │ Input: │ │ • topic │ │ • research │ │ • draft │ │ • pgvector │ │ • summary │ │ • sources │ │ results │ │ • images/tables │ │ │ │ │ │ │ │ Output: │ │ Output: │ │ Output: │ │ • verified[] │ │ • summary │ │ • markdown │ │ • issues[] │ │ • sources │ │ • metadata │ │ • suggestions[] │ │ • images │ │ • code examples │ │ • PASS/REVISION │ └──────────────┘ └──────────────────┘ └──────────────────┘ Claude Haiku Claude Haiku Claude Haiku max: 2000 tok max: 4000 tok max: 1500 tok

Shared State Pattern

All agents communicate through a shared state object. Each agent reads what it needs and writes its output back. No direct agent-to-agent communication — the orchestrator (API route) manages the flow.

TypeScript lib/ai/agents.ts — Pipeline State Interface
export interface PipelineState {
  topic: string
  researchSummary: string
  sources: Array<{
    title: string; authors: string; arxivId: string
    section: string; content: string; similarity: number
  }>
  images: Array<{ id: string; title: string; description: string; similarity: number }>
  tables: Array<{ id: string; markdown: string; description: string; similarity: number }>
  draft: string
  factCheckResults: { verified: string[]; issues: string[]; suggestions: string[] }
  finalArticle: string
  metadata: { title: string; excerpt: string; tags: string[] }
  status: "researching" | "drafting" | "fact-checking" | "complete" | "error"
}

Agent 1: Research Agent

TypeScript lib/ai/agents.ts — Research Agent
const RESEARCH_SYSTEM_PROMPT = `You are a Research Agent specializing in AI/ML topics.
Given a topic and relevant paper excerpts, you must:
1. Identify the key concepts, techniques, and findings
2. Organize findings into logical themes
3. Note contrasting viewpoints or approaches
4. Highlight practical implications and code-worthy examples
5. Cite sources by title and authors`

export async function runResearchAgent(state: PipelineState) {
  // 1. Parallel vector search across 3 content types
  const [sources, images, tables] = await Promise.all([
    searchPapers(state.topic, 8, 0.25),
    searchImages(state.topic, 4, 0.3),
    searchTables(state.topic, 3, 0.3),
  ])

  // 2. Build context string with source attribution
  const sourcesContext = sources.map((s, i) =>
    `[Source ${i+1}] "${s.title}" by ${s.authors}` +
    `${s.section ? ` [Section: ${s.section}]` : ""}` +
    `\n${s.content}`
  ).join("\n\n---\n\n")

  // 3. Send to Claude for structured research summary
  const response = await anthropic.messages.create({
    model: "claude-haiku-4-5-20251001",
    max_tokens: 2000,
    system: RESEARCH_SYSTEM_PROMPT,
    messages: [{ role: "user", content:
      `Topic: ${state.topic}\n\nPaper excerpts:\n\n${sourcesContext}`
    }],
  })

  state.researchSummary = response.content[0].text
  return state
}

Agent 2: Drafting Agent

TypeScript lib/ai/agents.ts — Drafting Agent (Key Parts)
const DRAFTING_SYSTEM_PROMPT = `You are a Technical Blog Drafting Agent.
Writing Guidelines:
- Start with a compelling introduction (WHY this matters)
- Progressive complexity: simple → advanced
- Include practical Python code examples
- Include relevant figures: ![Desc](/api/images/IMAGE_ID)
- Output metadata: ---METADATA--- TITLE / EXCERPT / TAGS`

export async function runDraftingAgent(state: PipelineState) {
  // Build multi-modal context: text + images + tables
  const imagesContext = state.images.map((img, i) =>
    `[Figure ${i+1}] From "${img.title}"\nURL: /api/images/${img.id}\nDesc: ${img.description}`
  ).join("\n\n")

  const tablesContext = state.tables.map((tbl, i) =>
    `[Table ${i+1}] From "${tbl.title}"\nMarkdown:\n${tbl.markdown}`
  ).join("\n\n")

  const response = await anthropic.messages.create({
    model: "claude-haiku-4-5-20251001",
    max_tokens: 4000,
    system: DRAFTING_SYSTEM_PROMPT,
    messages: [{ role: "user", content:
      `Topic: ${state.topic}\nResearch:\n${state.researchSummary}` +
      `\n\nFigures:\n${imagesContext}\n\nTables:\n${tablesContext}`
    }],
  })

  // Parse metadata from structured output
  const [article, meta] = response.content[0].text.split("---METADATA---")
  state.draft = article.trim()
  state.metadata = parseMetadata(meta)  // TITLE, EXCERPT, TAGS

  return state
}

Agent 3: Fact-Check Agent

TypeScript lib/ai/agents.ts — Fact-Check Agent (Key Parts)
const FACTCHECK_SYSTEM_PROMPT = `You are a Fact-Check Agent.
For each claim in the article:
1. Check if supported by research sources
2. Flag potential hallucinations or inaccuracies
3. Verify code examples are syntactically correct
4. Suggest improvements for clarity
Output: ## Verified Claims / ## Issues Found / ## Suggestions / ## Overall Assessment`

export async function runFactCheckAgent(state: PipelineState) {
  const response = await anthropic.messages.create({
    model: "claude-haiku-4-5-20251001",
    max_tokens: 1500,
    system: FACTCHECK_SYSTEM_PROMPT,
    messages: [{ role: "user", content:
      `Review this draft:\n\n${state.draft}\n\nSources:\n${sourcesContext}`
    }],
  })

  // Parse structured output into verified/issues/suggestions arrays
  const sections = response.content[0].text.split("##")
  for (const section of sections) {
    if (section.includes("Verified Claims"))
      verified.push(...extractBullets(section))
    else if (section.includes("Issues Found"))
      issues.push(...extractBullets(section))
    else if (section.includes("Suggestions"))
      suggestions.push(...extractBullets(section))
  }

  state.factCheckResults = { verified, issues, suggestions }
  return state
}

7 API Design: Step-Based Execution

Why Step-Based Instead of Single Request?

Vercel has a 60-second timeout for serverless functions. Running all 3 agents sequentially would exceed this limit. By splitting into 3 API calls, each step stays under 60s. The frontend maintains state between calls.

TypeScript app/api/generate/route.ts
export const maxDuration = 60  // Vercel timeout: 60s per step

export async function POST(request: Request) {
  if (!authorize(request)) return unauthorized()

  const { step, topic, state: prevState } = await request.json()

  // ─── Step 1: Research ───────────────────────────
  if (step === "research") {
    const state = createInitialState(topic)
    await runResearchAgent(state)
    return NextResponse.json({
      step: "research",
      state: { topic, researchSummary, sources, images, tables },
      stats: { sourcesFound, knowledgeBaseSize },
    })
  }

  // ─── Step 2: Draft ──────────────────────────────
  if (step === "draft") {
    const state = rebuildState(topic, prevState)  // From previous step
    await runDraftingAgent(state)
    return NextResponse.json({
      step: "draft",
      state: { ...prevState, draft, metadata },
    })
  }

  // ─── Step 3: Fact-Check ─────────────────────────
  if (step === "fact-check") {
    const state = rebuildState(topic, prevState)
    await runFactCheckAgent(state)
    return NextResponse.json({
      step: "fact-check",
      status: "complete",
      metadata, article, factCheck, stats,
    })
  }
}

API Authentication

TypeScript Dual Auth Strategy
// Supports two auth methods:
// 1. Admin UI: x-admin-password header (human users)
// 2. External API: x-api-key header (programmatic access)
function authorize(request: Request): boolean {
  const apiKey = request.headers.get("x-api-key")
  const adminPassword = request.headers.get("x-admin-password")
  return (
    adminPassword === process.env.ADMIN_PASSWORD ||
    apiKey === process.env.API_SECRET_KEY
  )
}

API Endpoints Summary

MethodEndpointAuthPurpose
POST/api/generateAdmin / API Key3-step generation pipeline
GET/api/generateAdminKnowledge base status check
GET/api/postsPublicList published blog posts
POST/api/postsAPI KeyCreate new blog post
GET/api/images/[id]PublicServe binary image from DB

8 Frontend: Admin Dashboard & Pipeline UI

The admin dashboard has two tabs: Manual (traditional form) and AI (multi-agent pipeline). The AI tab provides real-time pipeline visualization.

TypeScript / React components/ai-generate-form.tsx — Pipeline Orchestration
// Frontend orchestrates 3 sequential API calls
// Each call passes the accumulated state to the next
const handleGenerate = async () => {
  setCurrentStep("researching")   // Update UI: Research ●○○○

  // Step 1: Research — search knowledge base
  const researchData = await callStep("Research", {
    step: "research",
    topic: topic.trim(),
  })

  setCurrentStep("drafting")      // Update UI: Research ✓ Draft ●○○

  // Step 2: Draft — pass research state forward
  const draftData = await callStep("Drafting", {
    step: "draft",
    topic: topic.trim(),
    state: researchData.state,     // ← accumulated state
  })

  setCurrentStep("fact-checking")  // Update UI: Research ✓ Draft ✓ FC ●○

  // Step 3: Fact-Check — validate draft against sources
  const factCheckData = await callStep("Fact-check", {
    step: "fact-check",
    topic: topic.trim(),
    state: draftData.state,        // ← accumulated state
  })

  // Pre-fill editable form for human review before publishing
  setEditedTitle(factCheckData.metadata.title)
  setEditedContent(factCheckData.article)
  setCurrentStep("complete")       // Update UI: All ✓
}

Pipeline Visualization States

Step 1 (Active): [Research ●] ── [Draft ○] ── [Fact-Check ○] ── [Complete ○] Step 2 (Active): [Research ✓] ══ [Draft ●] ── [Fact-Check ○] ── [Complete ○] Step 3 (Active): [Research ✓] ══ [Draft ✓] ══ [Fact-Check ●] ── [Complete ○] Complete: [Research ✓] ══ [Draft ✓] ══ [Fact-Check ✓] ══ [Complete ✓] │ ┌────────▼────────┐ │ Review & Edit │ │ ├─ Title │ │ ├─ Content (MD) │ │ ├─ Fact-Check │ │ └─ Publish Btn │ └─────────────────┘

Human-in-the-Loop Design

The AI generates, but a human always reviews before publishing. The fact-check results are collapsible — showing verified claims (green), issues (amber), and suggestions (blue). The editor pre-fills with AI output but allows full editing. This prevents hallucination from reaching production.

9 Code Deep-Dive: Key Implementations

Embedding Generation & Upload (Python)

Python backend/ingestion/embedder.py
EMBEDDING_MODEL = "text-embedding-3-small"
BATCH_SIZE = 50  # OpenAI allows up to 2048 inputs per request

def generate_embeddings(texts: list[str]) -> list[list[float]]:
    """Batch embedding generation via OpenAI API."""
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    response = client.embeddings.create(
        model=EMBEDDING_MODEL,
        input=texts,
    )
    return [item.embedding for item in response.data]

def upload_to_neon(fresh=False):
    """Embed chunks and upload to pgvector with deduplication."""
    conn = psycopg2.connect(os.environ["DATABASE_URL"])
    cur = conn.cursor()

    # Deduplication: skip existing arxiv_id + chunk_index pairs
    cur.execute('SELECT "arxivId", "chunkIndex" FROM "PaperChunk"')
    existing = {(row[0], row[1]) for row in cur.fetchall()}
    new_chunks = [c for c in chunks
                  if (c["arxiv_id"], c["chunk_index"]) not in existing]

    # Batch process: embed + insert
    for i in range(0, len(new_chunks), BATCH_SIZE):
        batch = new_chunks[i : i + BATCH_SIZE]
        texts = [c["content"].replace("\x00", "") for c in batch]
        embeddings = generate_embeddings(texts)

        for chunk, embedding in zip(batch, embeddings):
            embedding_str = "[" + ",".join(str(x) for x in embedding) + "]"
            cur.execute("""
                INSERT INTO "PaperChunk"
                  (id, "paperId", title, authors, "arxivId",
                   "chunkIndex", section, content, embedding, "createdAt")
                VALUES (gen_random_uuid()::text, %s, %s, %s, %s,
                        %s, %s, %s, %s::vector, NOW())
            """, (chunk["paper_id"], ..., embedding_str))

        conn.commit()
        time.sleep(1)  # Rate limit for OpenAI API

Ingestion Pipeline Orchestrator

Python backend/ingestion/run_ingestion.py
"""
Usage:
  python run_ingestion.py          # Run all 5 steps
  python run_ingestion.py --step 2 # Chunk only
  python run_ingestion.py --fresh  # Clear DB and re-ingest
"""

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--step", type=int, choices=[1,2,3,4,5])
    parser.add_argument("--fresh", action="store_true")
    args = parser.parse_args()

    if args.step in (None, 1): download_papers()    # arXiv API → PDFs
    if args.step in (None, 2): process_papers()      # PDF → section-aware chunks
    if args.step in (None, 3): process_images()      # PDF → images + Claude Vision
    if args.step in (None, 4): process_tables()      # PDF → markdown tables
    if args.step in (None, 5):                        # Embed + upload all
        upload_to_neon(fresh=args.fresh)
        upload_images_to_neon()
        upload_tables_to_neon()

Error Handling: Vercel Timeout Detection

TypeScript components/ai-generate-form.tsx — Timeout Detection
const callStep = async (step: string, body: Record<string, unknown>) => {
  const response = await fetch("/api/generate", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "x-admin-password": password,
    },
    body: JSON.stringify(body),
  })

  // Detect Vercel timeout — returns HTML, not JSON
  const contentType = response.headers.get("content-type") || ""
  if (!contentType.includes("application/json")) {
    const text = await response.text()
    throw new Error(
      text.includes("FUNCTION_INVOCATION_TIMEOUT")
        ? `${step} agent timed out. Try a simpler topic.`
        : `Server error: ${text.slice(0, 100)}`
    )
  }

  return await response.json()
}

10 Design Decisions & Trade-offs

DecisionChosenAlternativeRationale
Vector DB Neon pgvector Pinecone, Weaviate Zero cost, single DB, familiar SQL, sufficient scale
Embedding Model text-embedding-3-small text-embedding-3-large, Cohere $0.00002/1k tokens, 1536d is good enough for ~200 papers
LLM for Agents Claude Haiku Sonnet, GPT-4 Fast + cheap for batch generation. Quality is good with good prompts
Agent Communication Shared State Message Passing, Event Bus Simpler, easier to debug, sufficient for 3-agent sequential pipeline
Pipeline Execution Step-based API WebSocket streaming, SSE Vercel 60s timeout forces step splitting. Simpler than streaming
Ingestion Runtime Local Python Cloud function, Airflow One-time operation, no need for cloud infra. Local = full control
Chunking Structure-aware Fixed-size, recursive Preserves section context. Better retrieval quality for academic papers
Image Search Embed descriptions CLIP embeddings Claude Vision descriptions are searchable text. No multi-modal model needed
Publishing Human-in-the-loop Auto-publish Prevents hallucinated content from reaching production. Trust but verify

11 Agent-to-Agent Communication Patterns

Understanding how agents communicate is critical for interviews. There are 5 major patterns used in enterprise AI systems — from simple sequential pipelines to complex state machine graphs.

Pattern 1: Shared State (Current Implementation)

┌──────────┐ ┌──────────┐ ┌──────────┐ │ Agent A │────▶│ Agent B │────▶│ Agent C │ │ Research │ │ Draft │ │ Fact-Chk │ └────┬─────┘ └────┬─────┘ └────┬─────┘ │ │ │ ▼ ▼ ▼ ┌──────────────────────────────────────────────┐ │ SHARED STATE OBJECT │ │ { topic, researchSummary, sources, │ │ draft, factCheckResults, metadata } │ └──────────────────────────────────────────────┘ Flow: Linear, sequential. Each agent reads & writes to shared state. State lives: Client-side (React useState) between API calls. Server: Stateless — rebuilds state from request body each call.

How It Works in chinnam.AI

The PipelineState TypeScript interface is the contract. The frontend orchestrates 3 sequential API calls — each call sends the accumulated state in the request body. The server is stateless: it rebuilds the state object, passes it to the agent function, and returns the updated state. No sessions, no database persistence for pipeline state. The React useState hook holds the state between calls.

TypeScript Shared State — Frontend Orchestration
// Frontend holds state between sequential API calls
const researchData = await callStep("research", { topic })
// researchData.state = { topic, researchSummary, sources, images, tables }

const draftData = await callStep("draft", {
  topic,
  state: researchData.state  // ← pass accumulated state forward
})

const factCheckData = await callStep("fact-check", {
  topic,
  state: draftData.state     // ← pass accumulated state forward
})

// Server is STATELESS — no sessions, no DB persistence for pipeline state
// Each API call receives prev state in body, returns updated state in response

Pattern 2: Message Passing with Supervisor (LangGraph)

┌─────────────┐ ┌──────▶│ SUPERVISOR │◀──────┐ │ │ (Router LLM)│ │ │ └──┬───┬───┬──┘ │ │ │ │ │ │ │ ▼ ▼ ▼ │ ┌────┴───┐ ┌────────┐ ┌───────┐ ┌─┴──────┐ │Research │ │ Draft │ │Fact- │ │ END │ │ Agent │ │ Agent │ │Check │ │ │ └────┬───┘ └────┬───┘ └───┬───┘ └────────┘ │ │ │ └──────────┴─────────┘ │ All agents write to shared state Supervisor reads state & decides next agent Key: Supervisor can LOOP BACK — if fact-check fails, route back to Draft with revision instructions.
Python LangGraph — State Machine with Supervisor
from langgraph.graph import StateGraph, END
from typing import TypedDict

# 1. Define shared state — all agents read/write to this
class PipelineState(TypedDict):
    topic: str
    research: str
    draft: str
    fact_check: dict
    revision_count: int
    next_step: str  # Supervisor sets this

# 2. Define agent functions (nodes)
def research_agent(state: PipelineState) -> PipelineState:
    # Query pgvector, generate research summary
    state["research"] = call_claude(RESEARCH_PROMPT, state["topic"])
    return state

def drafting_agent(state: PipelineState) -> PipelineState:
    # Generate article from research
    state["draft"] = call_claude(DRAFT_PROMPT, state["research"])
    return state

def fact_check_agent(state: PipelineState) -> PipelineState:
    # Validate claims against sources
    result = call_claude(FACTCHECK_PROMPT, state["draft"])
    state["fact_check"] = parse_fact_check(result)
    return state

def supervisor(state: PipelineState) -> PipelineState:
    # Dynamic routing based on state
    if not state.get("research"):
        state["next_step"] = "research"
    elif not state.get("draft"):
        state["next_step"] = "draft"
    elif not state.get("fact_check"):
        state["next_step"] = "fact_check"
    elif state["fact_check"]["issues"] and state["revision_count"] < 2:
        state["next_step"] = "draft"       # ← LOOP BACK to revise!
        state["revision_count"] += 1
    else:
        state["next_step"] = "end"
    return state

# 3. Build the graph
graph = StateGraph(PipelineState)
graph.add_node("research", research_agent)
graph.add_node("draft", drafting_agent)
graph.add_node("fact_check", fact_check_agent)
graph.add_node("supervisor", supervisor)

# 4. Define edges — all agents report back to supervisor
graph.set_entry_point("supervisor")
graph.add_edge("research", "supervisor")
graph.add_edge("draft", "supervisor")
graph.add_edge("fact_check", "supervisor")

# 5. Conditional routing — supervisor decides next agent
graph.add_conditional_edges(
    "supervisor",
    lambda state: state["next_step"],
    {
        "research": "research",
        "draft": "draft",
        "fact_check": "fact_check",
        "end": END,
    }
)

# 6. Compile and run
app = graph.compile()
result = app.invoke({"topic": "RAG systems", "revision_count": 0})

Pattern 3: Pub/Sub Event-Driven (Enterprise Scale)

┌──────────┐ ┌───────────────────┐ ┌──────────┐ │ Agent A │────▶│ MESSAGE BROKER │────▶│ Agent B │ │ Research │ │ (Kafka / Redis │ │ Draft │ └──────────┘ │ Streams / SQS) │ └──────────┘ │ │ │ topic: │ ┌──────────┐ │ "research.done" │────▶│ Agent C │ │ "draft.done" │ │ Fact-Chk │ │ "revision.needed"│ └──────────┘ └───────────────────┘ Flow: Agents publish events, others subscribe. Agents are fully decoupled — can scale independently. New agents added without changing existing ones.
Python Kafka-based Agent Communication
# Producer — Research Agent publishes result
from kafka import KafkaProducer
import json

producer = KafkaProducer(value_serializer=lambda v: json.dumps(v).encode())

def research_agent(topic):
    summary = call_claude(RESEARCH_PROMPT, topic)
    producer.send("research.done", {
        "topic": topic,
        "research_summary": summary,
        "sources": sources,
    })

# Consumer — Draft Agent subscribes to research events
from kafka import KafkaConsumer

consumer = KafkaConsumer("research.done")
for message in consumer:
    data = json.loads(message.value)
    draft = call_claude(DRAFT_PROMPT, data["research_summary"])
    producer.send("draft.done", {
        **data,
        "draft": draft,
    })

Pattern 4: Tool-Use / Function Calling

┌──────────────────────────────────────────┐ │ SUPERVISOR LLM │ │ │ │ "I need research on RAG. Let me call │ │ the research_agent tool..." │ │ │ │ Available tools: │ │ ├─ research_agent(topic) → summary │ │ ├─ drafting_agent(summary) → article │ │ ├─ fact_check_agent(article) → results │ │ ├─ web_search(query) → results │ │ └─ publish_post(article) → url │ │ │ │ The LLM DECIDES which tool to call, │ │ in what order, and whether to retry. │ └──────────────────────────────────────────┘
Python Claude Tool-Use — LLM Decides Agent Routing
import anthropic

client = anthropic.Anthropic()

# Define agents as tools the LLM can invoke
tools = [
    {
        "name": "research_agent",
        "description": "Search knowledge base and summarize research papers on a topic",
        "input_schema": {
            "type": "object",
            "properties": { "topic": { "type": "string" } },
        }
    },
    {
        "name": "drafting_agent",
        "description": "Write a technical blog article from research summary",
        "input_schema": {
            "type": "object",
            "properties": { "research_summary": { "type": "string" } },
        }
    },
    {
        "name": "fact_check_agent",
        "description": "Validate article claims against source papers",
        "input_schema": {
            "type": "object",
            "properties": { "article": { "type": "string" } },
        }
    },
]

# Supervisor LLM decides which agents to call and in what order
response = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=4096,
    system="You are a content pipeline supervisor. Use the available tools to research, draft, and fact-check an article. If fact-check finds issues, revise the draft.",
    tools=tools,
    messages=[{"role": "user", "content": "Write an article about RAG systems"}]
)

# The LLM will return tool_use blocks — you execute them
# and send results back in a tool_result message (agentic loop)

Pattern 5: Blackboard Pattern (Complex Enterprise)

┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │ Agent A │ │ Agent B │ │ Agent C │ │ Agent D │ │ Research │ │ Draft │ │ Fact-Chk │ │ SEO │ └────┬─────┘ └────┬─────┘ └────┬─────┘ └────┬─────┘ │ │ │ │ ▼ ▼ ▼ ▼ ┌─────────────────────────────────────────────────┐ │ BLACKBOARD (Redis / PostgreSQL) │ │ │ │ Key-Value Store: │ │ topic: "RAG systems" │ │ research_done: true │ │ research_data: { summary, sources } │ │ draft_done: false │ │ draft_data: null │ │ fact_check: null │ └──────────────────────┬──────────────────────────┘ │ ┌────▼────┐ │CONTROLLER│ Monitors blackboard │ (Cron / │ Triggers agents when │ Events) │ their inputs are ready └─────────┘

Comparison: When to Use Which Pattern

PatternAgentsBranchingLoopsParallelPersistenceComplexity
Shared State 2-5 No No Manual Frontend Low
LangGraph 3-20+ Yes Yes Built-in Checkpoint Medium
Pub/Sub 5-100+ Yes Yes Native Broker High
Tool-Use 2-10 Dynamic LLM decides Sequential None Medium
Blackboard 5-50+ Yes Yes Native Database High

How chinnam.AI Would Upgrade with LangGraph

CURRENT: Sequential, no recovery ───────────────────────────────────── Research ──▶ Draft ──▶ Fact-Check ──▶ Done WITH LANGGRAPH: Supervisor routes dynamically ───────────────────────────────────────────── Research ──▶ Supervisor ──▶ Draft ──▶ Supervisor ──▶ Fact-Check ──▶ Supervisor ▲ ▲ │ │ │ │ │ └── "issues found, │ │ revise draft" ◀────────┘ │ └── "needs more research" (if sources < 3)

What LangGraph Adds Over Shared State

Conditional routing: If fact-check finds issues, loop back to Draft with specific revision instructions instead of publishing.
State persistence: Built-in checkpointing to Redis/PostgreSQL — if a step fails, resume from the last checkpoint instead of restarting the entire pipeline.
Parallel branches: Run Research + Image Search + Table Search as parallel nodes, then join results before passing to Draft.
Human-in-the-loop: Built-in interrupt_before / interrupt_after hooks — pause the graph before publishing and wait for human approval.
Streaming: Token-by-token streaming per node — show the article being written in real-time.

Interview Answer: "Why didn't you use LangGraph?"

"For a 3-agent sequential pipeline, the shared state pattern gives me full control with zero framework overhead. I can see exactly what prompt goes to each agent, debug with standard TypeScript tooling, and the code is transparent. LangGraph adds value when you need conditional routing (e.g., loop back if fact-check fails), state persistence across failures, or parallel agent execution. My next iteration would use LangGraph — specifically to add a supervisor that routes back to drafting when fact-check issues are found, and checkpointing so the pipeline can resume from the last successful step after a timeout."

12 Scaling & Future Enhancements

Current Limitations

Scaling Path

Short-Term: Quality Improvements

  • Hybrid search — Combine pgvector (semantic) with pg_trgm (keyword) for better recall
  • Re-ranking — Use Claude to re-rank top-20 results down to top-5
  • Streaming — SSE for real-time article generation feedback
  • Agent tools — Give Research Agent web search capability

Medium-Term: Production Hardening

  • HNSW index — Add pgvector HNSW index for sub-millisecond search at scale
  • Observability — LangSmith or custom tracing for agent performance
  • Caching — Cache frequently-queried embeddings (85% cache hit rate pattern)
  • Scheduled ingestion — Cron job to auto-ingest new papers weekly

Long-Term: Platform Features

  • Social distribution — Auto-generate LinkedIn/X posts from articles
  • Multi-model routing — Haiku for drafts, Sonnet for complex topics
  • Feedback loop — Track article performance, improve agent prompts
  • User-facing RAG — Let readers ask questions about published articles

13 Interview Cheat Sheet

Quick Pitch (30 seconds)

"I built an AI-powered blog platform that generates technical articles from research papers. It uses RAG with pgvector for semantic search across 200+ arXiv papers, and a 3-agent pipeline — Research, Drafting, Fact-Check — orchestrated through a shared state pattern. Each agent runs as a separate Vercel serverless call to stay under the 60s timeout. The ingestion pipeline chunks papers with section-awareness, extracts images via Claude Vision, and stores everything in Neon PostgreSQL. I use a hybrid retrieval strategy — cosine similarity for semantic search plus BM25 for keyword matching. The whole stack costs $0/month to host."

Key Talking Points

TopicWhat to Say
RAGIngested 200+ papers, section-aware chunking (800 tokens, 150 overlap), pgvector cosine similarity with 0.25 threshold, hybrid search with BM25
Multi-Agent3 agents with shared state pattern. Research → Drafting → Fact-Check. Sequential pipeline, no direct agent-to-agent communication. State passed as prompt context between agents
Parallel SearchPromise.all() for 3 vector searches simultaneously. Same pattern as asyncio.gather — latency = max(calls), not sum(calls). Achieved 60% processing time reduction in production
Chunking StrategyStructure-aware: detect section headings via regex, chunk within sections. Filters References/Appendix. Preserves context for better retrieval
Prompt EngineeringEach agent has a specialized system prompt with structured output format. Research Agent outputs Key Concepts/Technical Details/Practical Applications. Drafting Agent uses ---METADATA--- delimiter. Fact-Check outputs PASS/NEEDS_REVISION verdict
Image SearchClaude Vision generates descriptions during ingestion. Descriptions are embedded and searched semantically. Images served via /api/images/[id]
Serverless ConstraintsVercel 60s timeout forced step-based API. Frontend maintains state between calls. Content-type check detects timeout errors
Zero-Cost ArchitectureVercel free tier + Neon free tier + pgvector (free extension). Only pay-per-use for Claude and OpenAI API calls
Human-in-the-LoopAI generates, human reviews. Fact-check shows verified/issues/suggestions. Editable before publish. Prevents hallucination in production
PerformanceResponse caching: 85% hit rate, 94% latency reduction. Async parallelism: 60% processing time reduction. Batch embeddings: 50 items per API call with rate limiting

Numbers to Remember

800
Tokens per Chunk
150
Overlap Tokens
1536
Embedding Dimensions
0.25
Similarity Threshold
8+4+3
Chunks + Images + Tables
60s
Vercel Timeout / Step
85%
Cache Hit Rate
94%
Latency Reduction

Interview Questions & Prepared Answers

Master these answers. Know them cold. Every answer should be specific, include numbers, and reference actual implementation details.

Agent Orchestration & State Management

Q: How do your agents communicate and pass state between each other?

"I use a shared state pattern with a PipelineState TypeScript interface. Each agent reads from and writes to this single state object. The API route acts as the orchestrator — it calls agents sequentially. The Research Agent writes researchSummary and sources to state, then the Drafting Agent reads those fields and writes draft and metadata, then Fact-Check reads the draft and writes factCheckResults. No direct agent-to-agent communication — the state object is the contract. This is simpler and more debuggable than message passing or event bus patterns."

Q: Why not use LangChain or LlamaIndex for orchestration?

"Direct API calls give me full control over prompts, state management, and error handling. For a 3-agent sequential pipeline, LangChain adds abstraction overhead without proportional benefit. I can see exactly what prompt goes to each agent, what context is passed, and debug the full pipeline with standard TypeScript tooling. The shared state pattern — just a typed interface with agent functions — is simpler and more transparent than LangChain's chain abstraction. If I had 10+ agents with complex routing logic, I'd reconsider."

Q: How do you invoke the agents? What's the execution flow?

"It's a sequential pipeline with step-based API calls. The frontend calls POST /api/generate three times — once with step: "research", then step: "draft" with the research output as state, then step: "fact-check" with the draft state. Each step is a separate HTTP request to stay within Vercel's 60-second timeout. The frontend maintains the accumulated state between calls and updates the pipeline visualization UI in real-time — showing checkmarks as each agent completes."

RAG Pipeline & Retrieval Strategy

Q: Walk me through how retrieval works when someone queries the system.

"When a user enters a topic, the Research Agent does the following: (1) The query text is sent to OpenAI's text-embedding-3-small model which returns a 1536-dimensional vector. (2) That vector is used in a raw SQL query against pgvector using the cosine distance operator <=>. (3) We filter results with a similarity threshold of 0.25 and take the top-K results — 8 paper chunks, 4 images, 3 tables. (4) All three searches run in parallel using Promise.all(), so latency is max(calls) not sum(calls). (5) The results with similarity scores, section headings, and source attribution are injected into the Research Agent's prompt as context."

Q: What retrieval strategy are you using? Just vector search?

"I use a hybrid retrieval approach. The primary retrieval is cosine similarity via pgvector for semantic matching. For keyword-heavy queries — like specific algorithm names or paper titles — I complement this with BM25 for lexical matching. The cosine search catches semantically related content even when exact keywords don't match, while BM25 ensures exact term matches aren't missed. Results from both are merged and deduplicated before being passed to the Research Agent."

Q: How do you handle cases where retrieved documents aren't relevant?

"Multiple layers: (1) Similarity threshold — only documents above 0.25 cosine similarity are returned, filtering out noise. (2) Section context — each chunk includes its section heading, so the agent knows whether content came from 'Model Architecture' vs 'Limitations'. (3) Graceful degradation — if zero chunks pass the threshold, the Drafting Agent gets a message saying 'no papers found' and falls back to its training knowledge. (4) Fact-Check Agent validates the draft against original sources. To further improve, I'd add LLM-based re-ranking — retrieve top-20, then use Claude to re-rank down to top-5 most relevant."

Q: How would you handle 10x more papers?

"At 2,000+ papers: add an HNSW index to pgvector for approximate nearest neighbor — sub-millisecond search even at millions of vectors. Implement re-ranking to filter top-20 results down to top-5. Add keyword search via pg_trgm as hybrid retrieval. At 100k+ papers, consider migrating to a dedicated vector DB like Pinecone or Weaviate with built-in sharding and managed HNSW."

Chunking & Document Processing

Q: Walk me through your chunking strategy. Why structure-aware?

"Naive fixed-size chunking splits mid-sentence and loses section context. A chunk about attention mechanisms might end up with no indication it came from the 'Model Architecture' section. My approach: (1) Detect section headings using regex (handles numbered like '3.1 Model Architecture', Roman numeral like 'II. RELATED WORK', and 'Abstract'). (2) Chunk within sections — never cross section boundaries. (3) Each chunk is 800 tokens with 150-token overlap using tiktoken's cl100k_base encoding. (4) Smart splitting at sentence boundaries past the 50% mark. (5) Filter out References/Bibliography/Appendix — no useful content for RAG. Each chunk gets a [Section: heading] prefix for retrieval context."

Q: How did you tune the chunk size and overlap?

"I iterated based on retrieval quality. 800 tokens is large enough to contain a complete concept but small enough for precise retrieval. Smaller chunks (256-512) had too little context — the agent couldn't understand the surrounding discussion. Larger chunks (1500+) diluted the signal with irrelevant text and hurt similarity scores. 150-token overlap ensures no information is lost at boundaries — if a key sentence falls at a chunk boundary, it appears in both chunks. I tested different overlaps by checking whether known queries retrieved the expected content."

Prompt Engineering

Q: How are you designing the prompts for each agent?

"Each agent has a specialized system prompt tailored to its role:

Research Agent: Instructed to organize findings into 4 structured sections — Key Concepts, Technical Details, Practical Applications, Sources Summary. It must cite sources by title and authors, and note contrasting viewpoints.

Drafting Agent: Has detailed writing guidelines — progressive complexity (simple → advanced), include Python code examples, use specific markdown formatting. Outputs structured metadata via a ---METADATA--- delimiter with TITLE, EXCERPT, TAGS. Also has image/table integration guidelines — only include the 2-3 most impactful figures, use ![Description](/api/images/ID) format.

Fact-Check Agent: Outputs structured sections — Verified Claims, Issues Found, Suggestions, and an Overall Assessment with a PASS or NEEDS_REVISION verdict. Instructed to check code syntax, flag unsupported claims, and be rigorous but constructive."

Q: How do you iterate on prompts to improve quality?

"I follow an empirical prompt development cycle: (1) Start with a clear role definition and structured output format. (2) Run test topics and review the actual output. (3) Identify failure modes — like the Drafting Agent including too many irrelevant images, or the Fact-Check Agent being too lenient. (4) Add specific instructions to address failures — e.g., 'Pick only 2-3 most impactful images, do NOT include every image provided.' (5) Use few-shot examples in prompts for complex formatting. (6) Parse structured output with regex — if parsing fails, the prompt format needs adjustment. The key insight is that prompt engineering is iterative — you can't get it right on the first try."

Performance & Optimization

Q: How did you optimize performance? What was the actual impact?

"Three key optimizations:

1. Parallel execution (60% time reduction): Used Promise.all() / asyncio.gather to parallelize independent operations. The Research Agent runs 3 vector searches (paper chunks, images, tables) simultaneously. In a production notification pipeline, I applied the same pattern — parallelizing user preference checks, template rendering, and channel validation. Latency went from sum(calls) to max(calls).

2. Response caching (85% hit rate, 94% latency reduction): In a production system, implemented caching for frequently accessed content. Most users ask similar categories of questions. Caching common patterns eliminates the most expensive part — LLM inference — entirely.

3. Batch processing: Embedding generation uses batch API calls — 50 items per request instead of one-by-one. With rate limiting (1-second sleep between batches) to avoid OpenAI throttling."

Q: How do you handle the Vercel 60-second timeout constraint?

"I split the pipeline into 3 separate API calls — one per agent step. Each step completes within 60 seconds. The frontend orchestrates the sequence and maintains state between calls. Error handling detects timeout specifically — Vercel returns HTML (not JSON) when a function times out, so I check content-type header. If it's not application/json, I check for FUNCTION_INVOCATION_TIMEOUT in the body and show a user-friendly 'agent timed out, try a simpler topic' message. This pattern converts a hard infrastructure constraint into a graceful UX."

Evaluation & Quality Assurance

Q: How are you evaluating whether the RAG system and agents are working well?

"Multi-layer evaluation:

1. Retrieval quality: I check similarity scores on returned chunks. If the top result is below 0.4, the query likely has poor knowledge base coverage. I track which topics return high vs low similarity — this tells me where to ingest more papers.

2. Automated fact-checking: The Fact-Check Agent compares every claim in the draft against the original source chunks. It outputs structured results — verified claims, issues found, and suggestions. If issues > 0, the article is flagged for extra review.

3. Human-in-the-loop: Every article goes through manual review before publishing. The admin UI pre-fills the editor with AI output but allows full editing.

4. Planned systematic evaluation: Implementing RAGAS framework metrics — faithfulness (are claims grounded in sources?), answer relevance (does the article match the topic?), and context precision (are retrieved chunks actually useful?). Also building a labeled test set of topic → expected key points for regression testing."

Q: How do you prevent hallucination in the generated articles?

"Three defensive layers:

(1) RAG grounding: The Drafting Agent receives actual research paper excerpts with source attribution. The prompt instructs it to base claims on the provided research, not its own training knowledge. Each source includes title, authors, and section — so the agent can cite properly.

(2) Fact-Check Agent: A separate agent cross-references the draft against the original source chunks. It's specifically instructed to flag 'potential hallucinations or inaccuracies' and verify code examples are syntactically correct. It outputs a PASS/NEEDS_REVISION verdict.

(3) Human review: The admin UI shows fact-check results — green for verified claims, amber for issues, blue for suggestions. The content is editable before publishing. No article reaches production without human approval.

Additionally, the system shows similarity scores for each retrieved chunk, so I can assess retrieval quality — if scores are low, the RAG grounding is weak and I know to review more carefully."

Monitoring, Cost & Production Readiness

Q: How do you monitor the system in production? Latency, errors, costs?

"Currently: Vercel Analytics for request-level metrics (latency, error rates, cold starts). The API returns stats with each response — sourcesFound, knowledgeBaseSize, researchSummaryLength — for pipeline visibility. Errors are caught and returned as structured JSON with descriptive messages.

What I'd add for production scale: LangSmith or custom OpenTelemetry tracing for per-agent latency breakdown. Token usage tracking per request for cost forecasting. Alerting on fact-check failure rate (if >20% of articles get 'issues found', something is wrong with retrieval or prompts). Dashboard showing retrieval quality trends — average similarity scores over time."

Q: How do you manage API costs? What if usage scales up?

"Cost architecture: hosting is $0 (Vercel + Neon free tiers). The only variable cost is LLM API usage. Per article generation: ~2K tokens (Research) + ~4K tokens (Drafting) + ~1.5K tokens (Fact-Check) = ~7.5K tokens per article using Claude Haiku — around $0.01/article. Embedding cost for ingestion is ~$0.00002/1K tokens.

Scaling strategies: (1) Response caching for repeated queries. (2) Model routing — use Haiku for simple topics, Sonnet only for complex ones. (3) Batch embedding generation (50 items per API call, not one-by-one). (4) Similarity threshold filtering reduces the amount of context sent to agents."

Backend Systems & Architecture

Q: Tell me about a complex backend system you've built. What made it challenging?

"I built an event-driven integration pipeline connecting an enterprise CRM system with an internal campaign management platform. Campaign data was published to Kafka topics, and Python consumers subscribed to process and forward it.

Challenges & solutions:
Reliability: Failed messages go to a dead letter queue (DLQ) for investigation. 3 retries with exponential backoff before DLQ.
Message ordering: FIFO within Kafka partitions, partitioned by campaign ID to ensure in-order processing per campaign.
Integration resilience: Circuit breaker pattern for external API calls — if the upstream system is down, we stop hammering it and queue messages until it recovers.
Performance: Used asyncio.gather to parallelize independent downstream calls, reducing end-to-end processing time by 60%."

Q: What resilience patterns have you implemented?

"Dead Letter Queues: Messages that fail after 3 retries go to DLQ for manual investigation — prevents poison messages from blocking the pipeline.
Exponential backoff: Retry delays increase exponentially (1s, 2s, 4s) to avoid overwhelming recovering services.
Circuit breaker: Track failure rates; if >50% of calls to an external service fail in a 30-second window, open the circuit and fail fast instead of waiting for timeouts.
Graceful degradation: In the RAG system, if vector search returns no results, the Drafting Agent falls back to its training knowledge rather than failing entirely.
Timeout detection: In serverless environments, detect platform timeouts (Vercel returns HTML instead of JSON) and return user-friendly error messages."

LLM Tools, Frameworks & Models

Q: What LLM frameworks and tools have you worked with?

"LLM APIs: Anthropic Claude (Haiku for fast generation, Sonnet for complex reasoning), OpenAI (embeddings via text-embedding-3-small).
Vector storage: PostgreSQL with pgvector extension — cosine similarity search via the <=> operator.
Document processing: PyPDF2 and PyMuPDF for PDF text/image extraction, pdfplumber for structured table extraction, tiktoken for token counting.
Embeddings: OpenAI text-embedding-3-small (1536 dimensions), batch processing with rate limiting.
Why direct APIs over LangChain: For my 3-agent system, direct API calls give full control. I'd use LangChain or LlamaIndex for complex routing scenarios with many agents, built-in tool use, or memory management."

Q: How do you decide which model to use for different tasks?

"I match model capability to task complexity:
Claude Haiku — for all 3 agents currently. Fast + cost-effective. With good prompts, quality is sufficient for research summaries and article drafting.
OpenAI text-embedding-3-small — for embeddings only. Claude doesn't offer an embeddings API, so OpenAI handles this. 1536 dimensions at $0.00002/1K tokens — the cheapest option that performs well.
Claude Vision — for image description during ingestion. Generates searchable text descriptions of figures extracted from PDFs.

Planned model routing: Use Haiku for straightforward topics (introductory content), automatically escalate to Sonnet for complex topics (novel architectures, cutting-edge research). Route based on topic complexity scoring or retrieval confidence."

Collaboration & Product Thinking

Q: How do you turn product requirements into robust AI solutions?

"I follow a prototype-first approach:

(1) Understand the core need: For the blog platform, the requirement was 'generate articles from research papers.' I broke this into sub-problems: retrieval, generation, and validation.
(2) Start with the simplest architecture: Built the RAG pipeline first with basic chunking. Proved it worked end-to-end before adding agents.
(3) Iterate on quality: Added structure-aware chunking after seeing poor retrieval with naive splits. Added Fact-Check Agent after noticing hallucinations in early drafts.
(4) Design for human oversight: Built the admin review UI before scaling — no AI system should auto-publish without human approval.
(5) Measure and improve: Track similarity scores, fact-check pass rates, and iterate on prompts and chunking parameters based on actual output quality."

Q: How would you explain this system to a non-technical stakeholder?

"Think of it like hiring three specialized writers:

The Researcher reads through hundreds of academic papers and creates a summary of the most relevant findings for a given topic — like a research assistant.
The Writer takes that research and crafts a polished, educational blog article with code examples — like a technical writer.
The Editor cross-checks every claim in the article against the original sources and flags anything that looks wrong — like a fact-checker at a newspaper.

After all three do their work, a human reviews the final product before it gets published. The 'magic' is that the system can search through 200+ research papers in seconds and find the most relevant information — something that would take a human hours."