The AI Communication Paradox
How Generation and Verification Asymmetries Shape LegalTech Adoption
This week, I watched my four-year-old daughter direct an AI image generator to visualize her dream “big girl bed fort.” As she described what she wanted—more pillows here, lights and “coloring things” there, add some stuffed animals—the AI transformed her words into images that captured her imagination with surprising accuracy.
After each iteration, she’d point at the screen: “Yes, that!” or “No, make it more pink!” She was communicating her vision more effectively than she ever could with words alone. (We used OpenAI’s new 4o Image Generation, or “Computer,” as my daughter called it.)
This wasn’t just a cute parent-child moment. It was a perfect illustration of one of AI’s most profound capabilities: bridging the gap between imagination and expression.
The Generator-Discriminator Gap: A Practical Framework
At the heart of this capability lies the generator-discriminator gap—a concept that helps explain why AI has become such a powerful communication tool. The principle is straightforward: it’s far easier for humans to recognize what they like or don’t like (discriminate) than it is to create something from scratch (generate).
This principle has been influential in modern AI assistant development. OpenAI’s seminal InstructGPT paper (2022) described an initial approach in which human labelers manually wrote examples of ideal assistant responses to teach models how to respond, a technique known as supervised fine-tuning.
Training methods have since evolved to leverage the generator-discriminator gap. In a process known as Reinforcement Learning from Human Feedback (RLHF), models create outputs (generate), humans judge them (discriminate), and the human preferences are then used to train a specialized reward model. This reward model then guides an iterative optimization process that helps the system learn which outputs humans prefer. At its core, though, the generator-discriminator gap is fueling the process—machine generates, human discriminates, machine learns.[1]
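For readers who want to see the mechanics, here is a minimal sketch of the preference-learning step at the heart of RLHF. It assumes PyTorch and uses toy random vectors standing in for model-output embeddings; it illustrates the idea, not InstructGPT’s actual implementation:

```python
# A minimal sketch of RLHF's reward-model step, assuming PyTorch and
# toy embedding vectors in place of real model outputs.
import torch

torch.manual_seed(0)

# Toy "reward model": scores an output embedding with a linear layer.
reward_model = torch.nn.Linear(8, 1)
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-2)

# Hypothetical data: pairs of output embeddings where a human labeler
# preferred the first (chosen) over the second (rejected).
chosen = torch.randn(32, 8)
rejected = torch.randn(32, 8)

for step in range(100):
    r_chosen = reward_model(chosen)
    r_rejected = reward_model(rejected)
    # Bradley-Terry style loss: push the chosen output's reward above
    # the rejected one's. This is the human "discrimination" signal.
    loss = -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The trained reward model can then score new generations, guiding the
# iterative policy-optimization step that teaches the system which
# outputs humans prefer.
```

Note the division of labor: the machine does all the expensive generating, and the human contribution is reduced to cheap pairwise judgments.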
The generator-discriminator gap explains why my daughter could so easily direct the AI toward her vision. She couldn’t draw her dream bedroom herself, but she immediately knew when the AI’s depiction matched what she had in mind.[2]
Brandolini’s Law: The Flip Side of the Coin
There’s another principle worth considering alongside the generator-discriminator gap: Brandolini’s Law (also known as the bullshit asymmetry principle). First articulated by Italian programmer Alberto Brandolini, it states that “the amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.” In other words, it’s much easier to make a claim than to verify it.
These two principles create an interesting tension when applied to AI. The generator-discriminator gap explains why AI excels at creative tasks: generating content has always cost humans a great deal of energy, while discriminating between options costs little. AI removes the expensive half of that equation, making generation effortless while human discrimination remains the valuable, low-effort input.
Conversely, Brandolini’s Law exposes AI’s challenge with factual reliability. For language models, generating plausible-sounding true or false statements requires roughly equal energy—both are simply statistical predictions based on training patterns. Yet the energy required for humans to verify these claims remains extremely high, creating a verification burden that falls entirely on the user—particularly concerning for legal professionals who rely on factual precision.
This energy asymmetry explains why AI can simultaneously feel like a communication superpower and a potential liability. The same system that bridges the gap between imagination and expression can generate convincing inaccuracies requiring substantial human effort to detect.
For legal professionals, these dual principles create a strategic imperative: deploy AI with a precise awareness of both asymmetries. We must recognize where the generator-discriminator gap creates genuine value and where Brandolini’s Law warns of hidden costs—a practical framework for determining exactly where and how AI belongs in our professional toolkit.
Where AI Creates Practical Value: Communication and Expression
AI excels at bridging the gap between concept and expression when:
Clarity matters more than originality. For professionals focused on conveying ideas rather than showcasing personal expression, AI provides a frictionless path from concept to communication.
You have vision but face execution barriers. Like my daughter’s bedroom visualization example, AI thrives when you know what you want but lack either time or specific skills to create it yourself.
Blank-page hurdles consume disproportionate energy. Even skilled professionals face an activation-energy barrier on first drafts, one that AI can overcome.
Variations help clarify thinking. Sometimes we only recognize what we want after seeing options to compare.
Value-added AI uses can feel like the Pareto Principle on steroids—where traditionally 20% of effort might yield 80% of results, AI now allows an even smaller investment of human energy (simple discrimination between options) to yield nearly complete communication products. The professional’s value shifts from generation to curation, from creation to discrimination.
From an ethical perspective, when the goal is simply effective communication of ideas—as in most legal work—AI assistance raises few concerns. The ethical success measure is straightforward: did the audience understand the intended message? In these cases, whether AI helped craft the explanation is secondary to whether the explanation achieved its purpose. The focus remains on outcome rather than process.
This ethical calculation shifts significantly when human expression itself is the primary value. In creative or artistic contexts, the connection between creator and creation often matters fundamentally—the voice, perspective, and unique human origin are inseparable from the work’s meaning and value. Legal professionals generally operate outside this domain; our communications primarily serve as vehicles for ideas and information rather than as expressions valuable for their human origin alone.
Where Caution Is Warranted: The Factual Frontier
The picture changes dramatically when we need AI to provide factually accurate information—particularly in legal contexts where precision is paramount.
Here, Brandolini’s Law creates a serious challenge: while AI can generate plausible-sounding legal analysis with remarkable fluency, verifying its accuracy often requires more time and effort than conducting the research yourself.
This verification burden stems from LLMs’ fundamental architecture. These systems retain only vague recollections of their training data—not perfect retrieval like databases. They function as “token-level internet document simulators” (per Andrej Karpathy) whose parameters compress internet knowledge. They don’t “know” facts with certainty but produce text based on observed statistical patterns.
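A toy illustration makes the point. The probability table below is hypothetical (no real model assigns these exact numbers), but it captures why generating a falsehood costs the model nothing extra:

```python
# A toy illustration of next-token prediction. The probabilities are
# invented for illustration; the point is that sampling has no notion
# of truth, only of statistical likelihood.
import random

# Hypothetical distribution over the token following
# "Brandolini's Law was coined by ..."
next_token_probs = {
    "Alberto": 0.60,   # statistically likely, and happens to be true
    "Antonio": 0.25,   # statistically plausible, but false
    "Umberto": 0.15,   # also plausible, also false
}

tokens, weights = zip(*next_token_probs.items())
sample = random.choices(tokens, weights=weights, k=1)[0]
print(sample)  # nothing in this step distinguishes fact from error
```

Roughly forty percent of the time, this toy model confidently names the wrong person, and the output reads just as fluently either way.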
While drafting this piece, Claude 3.7 subtly misquoted Brandolini’s Law itself—which I immediately fact-checked. This highlights how LLMs produce “mostly right” approximations rather than precise recall. For legal work, where exact wording determines outcomes, this creates an energy expenditure asymmetry that particularly impacts our profession.
This challenge appears consistently across legal applications:
When asking AI to analyze case law, every citation must be verified.
When using AI for contract review, every provision must be double-checked.
When generating legal arguments, every premise must be validated.
Finding the Sweet Spot: Practical Applications
Understanding these principles helps identify where AI creates maximum value with minimum downside. The key insight is recognizing two fundamentally different use cases that are often conflated:
AI as a communication facilitator for ideas already in your head
AI as a source of factual information
In the first case, verification is straightforward because you’re evaluating whether the output accurately reflects your own ideas—the verification burden is minimal and leverages the generator-discriminator gap effectively. In the second case (traditionally the domain of legal research tools), the verification burden often outweighs any generation benefits due to Brandolini’s Law. Many frustrations with AI stem from failing to distinguish between these distinct applications.
With this framework in mind, we can identify specific applications where AI creates genuine value:
High-Value, Low-Risk Applications:
Drafting initial communications that you’ll review and refine
Generating creative options for presentations or educational materials
Brainstorming approaches to complex problems
Explaining complex concepts to clients in multiple ways
In these scenarios, you leverage the generator-discriminator gap while minimizing exposure to factual inaccuracies.
Approach with Caution:
Legal research requiring precise citation
Document review where subtle distinctions matter
Factual analysis of complex information
Any output where verification cost exceeds direct creation cost
The Sociotechnical Dimension: How AI Shapes Us While We Shape It
Beyond these practical distinctions, we must also consider a deeper dynamic: these aren’t just tools we use—they’re systems that subtly reshape how we think.
The relationship between AI and humans is bidirectional. While our inputs and feedback shape AI outputs, those outputs simultaneously shape our thinking patterns and creative processes. This “sociotechnical” dimension creates both opportunities and risks we must navigate thoughtfully.
Consider my daughter’s bedroom design experience again. The AI didn’t just execute her vision—it suggested possibilities she hadn’t imagined. This expanded her creative horizons, but also created an anchoring effect that potentially constrained further exploration. Once you’ve seen a particular visualization, it’s difficult to unsee it—similar to how a movie adaptation permanently fixes how you imagine a character from a book. Daniel Radcliffe is Harry Potter.
For legal professionals, this effect manifests in subtler ways. When we use AI to generate explanations, arguments, or communication approaches, we unconsciously absorb its patterns and tendencies. These systems learn from collective human feedback, developing preferences for certain explanatory frameworks, analogies, or communication styles. Left unchecked, this could lead to a homogenization of professional communication—where the most commonly rewarded patterns become increasingly dominant and alternative approaches gradually fade.
The risks extend beyond stylistic concerns. When we repeatedly delegate our initial thinking to AI, we may inadvertently outsource the formative stages of problem-solving—the messy, exploratory thinking that often leads to novel insights. The technology becomes not just a tool for expression but a substitute for certain forms of cognition. Going on creative autopilot weakens precisely the uniquely human capabilities that distinguish us from our tools.
This isn’t an argument against using AI, but rather for using it with awareness of its influence on our thinking. The goal isn’t merely efficient production but enhanced human capability—using technology to bridge gaps in our natural communication processes without surrendering our conceptual diversity or cognitive independence.
The Path Forward: Evolving with AI
For professionals navigating this landscape, the practical approach follows directly from our framework:
Leverage AI where the generator-discriminator gap works in your favor - When you need creative options for communication that you’ll refine with your judgment
Be wary when factual precision matters - Recognize when verification costs outweigh generation benefits
Develop efficient verification workflows - Create systems to validate AI-generated content when accuracy matters (a minimal sketch follows this list)
Respect context-dependence - AI thrives with clear constraints but struggles with nuanced contexts
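As one example of a verification workflow, here is a minimal sketch that surfaces citation-like strings from an AI draft for manual checking. It assumes drafts arrive as plain text and uses a deliberately rough reporter-style pattern; real citation formats vary widely, so treat this as a checklist generator, not a validator:

```python
# A minimal sketch of a verification workflow: extract citation-like
# strings from an AI-generated draft so a human can check each one.
# The regex is a rough, hypothetical pattern, not a complete Bluebook
# parser.
import re

# Matches reporter-style citations like "347 U.S. 483" or "5 F.3d 112".
CITATION_PATTERN = re.compile(r"\b\d{1,4}\s+[A-Z][A-Za-z0-9.]*\s+\d{1,4}\b")

def extract_citations(draft: str) -> list[str]:
    """Pull citation-like strings out of an AI-generated draft."""
    return CITATION_PATTERN.findall(draft)

draft = "As held in Brown v. Board of Education, 347 U.S. 483 (1954), ..."
for cite in extract_citations(draft):
    # Each hit goes on the human verification checklist; the lawyer,
    # not the model, confirms it against a trusted source.
    print("VERIFY:", cite)
```

The design choice matters: the tool only flags what needs human eyes, keeping the discrimination step with the professional rather than pretending to automate it away.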
Looking ahead, technology will continue reshaping these dynamics. Experimental verification systems that can fact-check AI outputs in real time may eventually reduce the Brandolini’s Law burden. Meanwhile, the “generator” side of the equation improves dramatically with each model release—gradually shrinking the iterations needed to reach optimal results.
For legal professionals specifically, domain-specific models that understand legal precedent without hallucinating citations will become increasingly valuable. The practitioners who understand exactly when to leverage AI and when to rely on traditional methods will maintain the strategic advantage.
Full Circle
When my daughter directed the AI to visualize her dream bedroom, she wasn’t just playing with a new toy—she was intuitively navigating the generator-discriminator gap. She knew exactly what she wanted when she saw it, even if she couldn’t create it herself. Her delight in the process revealed AI’s most profound capability: not replacing human creativity but amplifying our ability to express it.
This remains the North Star for professional AI use. The most powerful implementation treats AI as a communication interface rather than a factual oracle—a bridge between what we can recognize and what we can articulate. With this understanding, we can embrace these tools’ remarkable capabilities while preserving the critical judgment that defines true expertise.
[1] Andrej Karpathy explains this concept in his “Deep Dive into LLMs like ChatGPT,” likely the clearest explanation of large language model development that currently exists. Karpathy is a renowned AI researcher and educator known for his ability to explain complex AI concepts in accessible terms (much like Richard Feynman did for physics). I highly recommend his YouTube channel.
[2] You can skip ahead in the video for his explanation of the generator-discriminator gap—where I first heard the phrase used. OCD lawyer note: he refers to the “discriminator-generator gap” in the video, but calls it the “generator-discriminator gap” in the X post I cited in the body of the text. I don’t think it matters. :)
A note on terminology: Throughout this piece, I use “AI” broadly for readability, but I’m primarily discussing large language models (LLMs) and other generative AI technologies. These specific systems—trained on vast text corpora and designed to predict and generate content—present unique advantages and challenges distinct from other AI technologies like classical machine learning algorithms or expert systems.