The Interview Method: Using LLMs to Extract Human Expertise

Large language models (LLMs) have become powerful tools for generating text, but their effectiveness often hinges on the quality and completeness of the context we provide. For complex tasks—like designing a new software feature or drafting a technical specification—the required context can span several pages. Traditionally, a human expert must painstakingly write that context. But an emerging alternative turns the process upside down: instead of having a human write for the LLM, the LLM interviews the human. This approach, sometimes called the interrogatory LLM, promises to make knowledge capture more efficient, accurate, and accessible.

How the Interrogatory LLM Works

At its core, the interrogatory LLM method involves prompting the model to act as an interviewer. The LLM asks the human a series of questions, learning about the domain, the requirements, and the intended outcomes. Once the interview is complete, the LLM synthesizes that information into a well-structured context document—ready to be fed into another LLM session (or the same one) to execute the actual task.
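The interview-then-synthesize flow described above can be sketched as a small loop. This is a minimal illustration, not a canonical implementation: `ask_model` is a stubbed stand-in for a real chat-completion API call, and the prompt wording and DONE convention are assumptions chosen for the example.

```python
# Minimal sketch of an interrogatory-LLM loop. `ask_model` stands in for a
# real chat-completion call; here it is stubbed so the flow runs offline.
INTERVIEWER_PROMPT = (
    "You are an interviewer gathering context for a software task. "
    "Ask the human exactly ONE question per turn. When you have enough "
    "information, reply with DONE followed by a structured context document."
)

def ask_model(messages):
    # Stub: a real implementation would send `messages` to an LLM API.
    # For demonstration, the 'model' asks two questions, then finishes.
    questions = [
        "What problem should the feature solve?",
        "Who are the intended users?",
    ]
    asked = sum(1 for m in messages if m["role"] == "assistant")
    if asked < len(questions):
        return questions[asked]
    return "DONE\n# Context Document\n..."

def run_interview(get_human_answer):
    """Drive the interview; returns the synthesized context document."""
    messages = [{"role": "system", "content": INTERVIEWER_PROMPT}]
    while True:
        reply = ask_model(messages)
        messages.append({"role": "assistant", "content": reply})
        if reply.startswith("DONE"):
            return reply.removeprefix("DONE").strip()
        # Each turn carries exactly one question back to the human.
        messages.append({"role": "user", "content": get_human_answer(reply)})
```

In real use, `get_human_answer` would prompt a person at a terminal (e.g. `input`), and the returned document would seed a fresh session for the actual task.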

Source: martinfowler.com

A key insight comes from Harper Reed, who first popularized this technique. He insists that the LLM should ask only one question at a time. This prevents overwhelming the human and ensures each answer is focused. In practice, many users find they need to repeatedly remind the LLM of this rule, as models tend to default to multi-question prompts.

Applications in Software Development

Creating Context for Complex Tasks

When building a new feature, the LLM requires a wealth of information: user-facing descriptions, implementation guidelines, data sources, and integration points. Instead of a human drafting all that in markdown, the LLM can conduct an interview. The human provides high-level goals and answers clarifying questions, while the LLM fills in gaps and structures the output. This collaborative process can save hours and often yields more thorough context than a rushed manual write-up.

Reviewing Documents Through Dialogue

The same technique can be applied to document validation. Give the LLM an existing specification, then ask it to interview a subject-matter expert to verify accuracy. People often find reading and critiquing a dense document tedious, but a conversation with an LLM feels more natural. The model can ask pointed questions and cross-reference answers with the document, flagging inconsistencies or omissions. This is especially valuable when the original document is poorly written—the interview process can compensate for documentation weaknesses.
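For the review use case, the setup differs only in the opening prompt: the existing document is embedded up front and the model is directed to cross-check answers against it. The template below is an illustrative sketch; the exact wording is an assumption, not a prompt from the original essay.

```python
def build_review_prompt(spec_text):
    # Hypothetical prompt template for a document-review interview.
    # The interviewer rules mirror the main technique: one question per
    # turn, cross-referencing each answer against the embedded document.
    return (
        "Below is an existing specification. Interview the subject-matter "
        "expert to verify its accuracy. Ask exactly ONE question per turn, "
        "cross-reference each answer against the document, and flag any "
        "inconsistency or omission you find.\n\n"
        "--- SPECIFICATION ---\n" + spec_text
    )
```

The resulting string would be used as the system prompt of a review session, with the expert's answers fed in turn by turn as in a normal chat.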

It's even possible to chain multiple interrogatory sessions: one LLM builds a document, then additional sessions interview different experts to review and refine it.

Beyond LLM Context: Capturing Any Knowledge

While the approach is valuable for feeding LLMs, its utility extends further. Many people—even brilliant domain experts—struggle with writing. Putting thoughts into coherent prose is a skill not everyone possesses. The interrogatory LLM offers a way out: instead of forcing someone to write, let the LLM interview them. The result may carry the stylistic fingerprints of AI-generated text, which some find off-putting. However, as the original essay notes, "that's better than not having the information itself, either due to rushed writing or no writing at all." For organizations needing to capture institutional knowledge quickly, the trade-off is often worth it.

Best Practices for Effective Interviews

  • Enforce single questions: Clearly instruct the LLM to ask one question per turn, and repeat the instruction if needed.
  • Define the scope: Tell the LLM what kind of context it needs to build (e.g., a requirements document, a user story, a review checklist).
  • Incorporate external sources: If the LLM can't access certain databases or APIs, instruct it to ask the human for those details or to suggest where to find them.
  • Use separate sessions: Consider using one LLM for the interview and a different model (or a fresh session) for the subsequent task to avoid contamination.
  • Iterate: After the initial context is generated, you can run a second interview to validate or expand it.
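The practices above can be folded into a single reusable system prompt. The helper below is a sketch under assumed wording; the function name, parameters, and phrasing are illustrative, not a standard API.

```python
def interviewer_system_prompt(scope, known_sources=()):
    # Assemble a system prompt encoding the best practices listed above:
    # single questions, explicit scope, and a fallback for data the model
    # cannot access itself. The wording is illustrative, not canonical.
    lines = [
        f"You are interviewing a human expert to build: {scope}.",
        "Ask exactly one question per turn; never bundle questions.",
        "If you need data you cannot access, ask the human for it "
        "or suggest where to find it.",
        "When the interview is complete, output the finished document.",
    ]
    if known_sources:
        lines.append(
            "Sources you may ask the human about: " + ", ".join(known_sources)
        )
    return "\n".join(lines)
```

Running the interview in one session and the downstream task in another (the "separate sessions" practice) is then just a matter of starting the second session with only the generated document, not the interview transcript.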

A Human-AI Partnership for Knowledge Work

The interrogatory LLM transforms the human's role from supplicant feeding context to the model into a partner in a collaborative dialogue. It leverages the model's ability to ask probing questions while relying on the human's unique knowledge and judgment. As LLMs become more conversational, this method could become a standard part of the workflow for software design, documentation, and any domain where expertise needs to be extracted and structured. By turning writing into a conversation, we lower the barrier to contribution and produce richer, more accurate context—whether for a machine or for other humans.
