◢ Chapter 05A · Case Study

Anatomy of a working LinkedIn Writer agent.

This is the architecture behind the demo in Chapter 6. Every box is a real call. Every arrow is real bytes over a real wire.

◢ Top-level system

Topic → Guideline → Research → Writer.

Topic
  ↓
Guideline Agent · LLM, structured JSON
  ↓
Research Agent · ReAct loop, tools
  ↓
Writer Pipeline · evaluator-optimizer
  ↓
Final Post

Each stage's output is persisted and editable. The user can tweak the guideline before research starts, or edit the research before the writer runs. Determinism flows from human control points, not from praying to the model.
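A minimal sketch of that control flow, assuming a generic stage/checkpoint shape (function and type names here are illustrative, not from the codebase):

```ts
// Each stage transforms text; a human checkpoint sits between stages.
type Stage = (input: string) => Promise<string>;

async function runPipeline(
  topic: string,
  stages: { name: string; run: Stage }[],
  // Human control point: gets a stage's output back, possibly edited.
  review: (name: string, output: string) => Promise<string>
): Promise<string> {
  let current = topic;
  for (const stage of stages) {
    const raw = await stage.run(current);
    current = await review(stage.name, raw); // persisted + editable
  }
  return current;
}
```

The point of the shape: determinism comes from the `review` hook, because the human can pin down each intermediate artifact before the next model call consumes it.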

◢ Inputs

The four files that shape every run.

guideline.md

The dynamic, per-task contract. Generated by the Guideline Agent, edited by the human, consumed by the Writer.

research.md

Streaming output from the Research Agent: sourced facts, quotes, and links, with citations. Editable before the Writer runs.

profiles/*.md (e.g. profiles/paul.md)

Static writing profiles (structure, terminology, character). The Writer picks one; the Evaluator scores against all three.

few-shot/*.md (e.g. few-shot/01.md)

5-10 archetypal good posts. Loaded into the Writer's prompt as user/assistant pairs. The cheapest quality multiplier.
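Loading those examples as message pairs can be sketched like this (the prompt wording and example shape are assumptions, not the actual implementation):

```ts
// Turn N few-shot examples into alternating user/assistant messages
// that are prepended to the Writer's conversation.
type Message = { role: "user" | "assistant"; content: string };

function buildFewShotMessages(
  examples: { topic: string; post: string }[]
): Message[] {
  return examples.flatMap((ex) => [
    { role: "user", content: `Write a LinkedIn post about: ${ex.topic}` },
    { role: "assistant", content: ex.post },
  ]);
}
```

Because the examples arrive as prior turns rather than instructions, the model imitates them instead of reasoning about them, which is why this is such a cheap quality multiplier.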

◢ Streaming architecture

Client ↔ SSE ↔ orchestrator.

Browser (React) · EventSource ↔ Audit Log
  ↓ POST /api/agent-stream
  ↑ SSE · text/event-stream
Orchestrator · invokes runResearchAgent() and runWriterPipeline()
```ts
// Server emits typed events; client renders them live.
event: status       data: { stage: "research", message: "Searching..." }
event: chunk        data: { stage: "research", text: "..." }
event: tool_call    data: { name: "web_search", args: {...} }
event: tool_result  data: { name: "web_search", result: "..." }
event: review       data: { iteration: 1, issues: [...] }
event: done         data: { post: "...", iterations: 2 }
```
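
On the client side, each SSE frame has to be split back into its event name and JSON payload. A minimal parser for one frame, assuming the standard `event:`/`data:` field layout shown above:

```ts
// Parse one SSE frame ("event: x\ndata: {...}") into a typed object.
type AgentEvent = { event: string; data: unknown };

function parseSseFrame(frame: string): AgentEvent {
  let event = "message"; // SSE default when no event field is present
  let data = "";
  for (const line of frame.split("\n")) {
    if (line.startsWith("event:")) event = line.slice(6).trim();
    else if (line.startsWith("data:")) data += line.slice(5).trim();
  }
  return { event, data: data ? JSON.parse(data) : null };
}
```

A dispatcher can then switch on `event` to route `chunk` frames into the streaming transcript and `tool_call`/`tool_result` frames into the audit log.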
◢ Research agent · ReAct

Tools the research agent uses.

web_search()

Query the web for fresh sources. Returns title, URL, snippet.

fetch_page()

Scrape a URL. Returns cleaned markdown content.

fetch_transcript()

Pull a YouTube transcript by video ID.

synthesize()

Internal LLM call. Compresses N raw results into the running research.md.

cite()

Mark a claim with a numbered source. Builds the references block.

done()

Signal that research is sufficient. Hands off to Writer.
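
The loop wiring these tools together can be sketched as follows; the `decide` step stands in for the LLM call that picks the next tool, and all names are illustrative:

```ts
// Hypothetical ReAct loop: the model picks a tool each turn; done() ends it.
type ToolCall = { name: string; args: Record<string, unknown> };
type Tools = Record<string, (args: Record<string, unknown>) => Promise<string>>;

async function reactLoop(
  decide: (scratchpad: string) => Promise<ToolCall>, // LLM step (assumed)
  tools: Tools,
  maxTurns = 10
): Promise<string> {
  let scratchpad = "";
  for (let turn = 0; turn < maxTurns; turn++) {
    const call = await decide(scratchpad);
    if (call.name === "done") break;            // research judged sufficient
    const result = await tools[call.name](call.args);
    scratchpad += `\n[${call.name}] ${result}`; // observation feeds next turn
  }
  return scratchpad;
}
```

The `maxTurns` cap matters: without it, a model that never calls `done()` burns tokens forever.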

◢ Demo simplification

The live demo in Chapter 6 uses a model-synthesized research stream (clearly labelled) instead of wiring real Firecrawl/Perplexity/YouTube APIs. The full production version of these integrations lives in the sister project "Agentic AI Creator".

◢ Writer pipeline

The evaluator-optimizer in detail.

guideline + research + profile + few-shot
  ↓
Generator → v1
  ↓ v1
Evaluator (tool call) → issues[]
  ↓ v1 + issues
Editor → v2
  ⟲ until issues = [] or maxRounds reached → Final
```ts
type ReviewIssue = {
  profile: "structure" | "terminology" | "character";
  location: string;     // e.g. "paragraph 2, sentence 3"
  comment: string;
  severity: "low" | "medium" | "high";
};

type Review = { iteration: number; issues: ReviewIssue[] };
```
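
Putting the loop and the types together, a sketch of the evaluator-optimizer (types repeated so the block stands alone; the generate/evaluate/edit signatures are assumptions):

```ts
type ReviewIssue = {
  profile: "structure" | "terminology" | "character";
  location: string;
  comment: string;
  severity: "low" | "medium" | "high";
};
type Review = { iteration: number; issues: ReviewIssue[] };

async function writerPipeline(
  generate: () => Promise<string>,                              // Generator → v1
  evaluate: (draft: string, iteration: number) => Promise<Review>, // Evaluator (tool call)
  edit: (draft: string, review: Review) => Promise<string>,     // Editor → v2
  maxRounds = 3
): Promise<{ post: string; iterations: number }> {
  let draft = await generate();
  for (let i = 1; i <= maxRounds; i++) {
    const review = await evaluate(draft, i);
    if (review.issues.length === 0) return { post: draft, iterations: i };
    draft = await edit(draft, review); // fix only the flagged issues
  }
  return { post: draft, iterations: maxRounds }; // ship best effort at the cap
}
```

Returning `iterations` alongside the post is what lets the `done` SSE event above report `{ post: "...", iterations: 2 }`.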