AI as Intellectual Infrastructure
Recently, I was a speaker at a beginner AI workshop hosted by TechGC/the L-suite for a group of fellow CLOs and general counsel. In preparing for the event, I initially found myself considering “use cases” that I could share. During that process, my thinking shifted, and I wrote this in my “quick hit list” for AI beginners:
“Lawyer use cases” are just… lawyering… You learn by playing, not by planning.
While most lawyers focus on specific tasks—document review, research, drafting—I’ve found the real value lies somewhere else entirely. AI fluency isn’t about mastering particular applications. It’s about developing a different relationship with intelligence itself.
My approach has been straightforward: I use multiple frontier models (ChatGPT, Claude, Gemini) as collaborative thought partners throughout the day. Not for discrete tasks, but as intellectual infrastructure. The “use case” is simply working—with AI augmenting the reasoning process.
This approach stems from a hypothesis I’ve held since early adoption: access to core intelligence matters more than specialized implementations. The limitations of earlier models meant you could not reliably delegate complex reasoning, so the focus was necessarily on augmentation rather than automation. As that intelligence has become more reliable, the possibilities have shifted dramatically.
To understand where this is heading, I have deliberately explored domains where AI capability is most advanced. Software engineering represents the current frontier—human coders still edge out AI in global competitions, but barely, and not for long. My experiments in this space aren’t about becoming a developer; they’re about understanding how intelligence amplification transforms knowledge work.
Crossing Domains: What Coding Experiments Reveal About Legal AI
My most instructive experiments have involved software engineering. Coding represents the current edge of what these models can reliably accomplish. If you want to understand where legal AI is heading, watching how it transforms adjacent knowledge work provides valuable signals.
Beyond watching, I decided to try some of the AI-assisted coding tools myself. I’m a proponent of the See-Do-Review framework for deliberate practice and learning.
To that end, I’ve built several simple applications: an iPhone app[1] (from scratch, using Cursor and Expo), a couple of basic games (testing Claude Artifacts), and a terminal-based chat that runs small LLMs (from Hugging Face) locally on my PC (using a consumer-grade NVIDIA GPU). These simple prototypes allowed me to stress-test what a non-expert could build with AI. At the same time, the process drastically accelerated my learning about AI capabilities. Nothing beats hands-on, project-based learning.
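For the curious, here is a minimal sketch of what that local terminal chat looks like, using the Hugging Face transformers library. The model name and parameters are illustrative, and exact pipeline behavior varies a bit by library version—treat it as a starting point, not a polished tool.

```python
# Minimal local terminal chat with a small model via Hugging Face transformers.
# Requires: pip install transformers torch accelerate
# The model below is just an example of a small chat model; swap in whatever fits on your GPU.
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # illustrative small model
    device_map="auto",  # uses the GPU if available (needs the accelerate package)
)

history = []
while True:
    user_input = input("You: ")
    if user_input.strip().lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": user_input})
    # Recent transformers versions accept a list of role/content messages and
    # return the full conversation, with the new assistant reply appended last.
    result = chat(history, max_new_tokens=256)
    reply = result[0]["generated_text"][-1]["content"]
    print("AI:", reply)
    history.append({"role": "assistant", "content": reply})
```

Running it from the terminal (python chat.py) is all it takes to have a conversation that never leaves your own machine—which is a surprisingly effective way to build intuition about what small models can and cannot do.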
The process also revealed something important to me about how AI changes skill acquisition. When I was studying computer science years ago, syntax errors were the bane of my existence. The concepts and program structure made sense, but the linguistic precision required by code was brutally unforgiving.
Current models essentially eliminate that friction. They handle syntax fluently while also helping with architectural decisions—at least for beginners. Of course, this doesn’t mean you can simply prompt “build me a complex application” or “fix this” and expect reliable results. I learned this the hard way when I let an LLM refactor (i.e., restructure and rewrite) a working prototype of one app without first committing the code to version control via Git and GitHub (essentially creating save points for your work). The result was a complete loss of functional code. Thanks, AI. Lesson learned.
But this failure taught me something crucial about the current state of AI-assisted work: it parallels exactly what I discovered years ago when I tested early versions of ChatGPT, seeing if it could one-shot a brief for a small litigation matter. (In this context, “one-shot” means asking the AI to write the entire brief in one fell swoop from a single prompt.) Wrong approach entirely. These systems continue to excel at iteration and collaboration, not complex, unsupervised, autonomous execution.
The correct methodology—whether for coding or brief-writing—involves ideation and outlining first, then attacking complex problems piece by piece while maintaining architectural control. Modern reasoning models are making one-shot approaches more feasible for certain tasks, but we’re not (quite) there yet for anything genuinely complex.
This maps directly onto legal work. There is some low-hanging fruit where you don’t need extensive customization—routine documents where you can input context, let the system run, verify the output, and deliver the result. But for complex matters, the AI remains extraordinarily useful only when you’re actively serving as the architect, engineering the approach, and remaining fully in the loop.
We’re squarely in the “centaur chess” era of knowledge work—human-AI collaboration where the intelligence amplifies human expertise rather than replacing it. The question isn’t whether to enter this stage, but how quickly you can develop fluency within it.
From Assistant to Agent: Orchestrating Intelligence
The most significant shift in my recent experimentation has been moving beyond individual interactions with an AI assistant toward orchestrated workflows—specifically, those incorporating “agentic” elements. The distinction: an assistant helps you accomplish tasks; an agent accomplishes tasks for you.
Most lawyers who are experimenting with AI have probably used Deep Research, where you provide a broad research question and the system autonomously conducts multiple searches, analyzes sources, and synthesizes findings into a comprehensive memo. You’re not orchestrating each query or evaluation—the AI is managing that workflow independently.
This represents a fundamentally different relationship with artificial intelligence. Instead of prompting for specific outputs, you’re defining objectives and allowing the system to determine methodology. The technical infrastructure behind this involves giving AI access to tools, persistent memory, and decision-making frameworks, but the practical impact is more profound: you can delegate entire categories of complex work.
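To make that distinction concrete, here is a deliberately simplified sketch of the loop running under the hood of any agent: the model receives an objective and a set of tools, decides its own next step each turn, and stops when it judges the work done. Everything in it is illustrative—call_model stands in for whichever frontier model API you use, and search_documents is a hypothetical tool.

```python
# Illustrative agent loop: the model, not the human, decides each next step.
# call_model() is a placeholder for your model provider's API; search_documents()
# is a hypothetical tool. Agent frameworks (and n8n workflows) wrap this same pattern.
import json

def call_model(messages):
    """Placeholder: send messages to a frontier model and return its reply text."""
    raise NotImplementedError("wire this to your model provider's API")

def search_documents(query):
    """Hypothetical tool: search an internal document store and return snippets."""
    return f"(results for: {query})"

TOOLS = {"search_documents": search_documents}

def run_agent(objective, max_steps=10):
    messages = [{
        "role": "system",
        "content": "Pursue the user's objective. Reply with JSON: "
                   '{"action": "search_documents", "input": "..."} to use a tool, '
                   'or {"action": "finish", "answer": "..."} when done.'
    }, {"role": "user", "content": objective}]

    for _ in range(max_steps):
        reply = call_model(messages)
        messages.append({"role": "assistant", "content": reply})
        step = json.loads(reply)
        if step["action"] == "finish":
            return step["answer"]
        # The model chose a tool; run it and feed the result back as context.
        result = TOOLS[step["action"]](step["input"])
        messages.append({"role": "user", "content": f"Tool result: {result}"})
    return "Stopped: step limit reached."
```

The entire difference between “assistant” and “agent” lives in that loop: you define the objective once, and the system manages its own queries, evaluations, and stopping point.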
I've been experimenting with building custom workflows using n8n rather than relying on off-the-shelf agent platforms that often function as black boxes. My approach involves creating transparent, modular systems where I understand exactly how decisions are being made and can maintain control over the process.
One project I started focuses on regulatory compliance analysis—a solid test case because it’s complex, but also inherently labor-intensive. When new regulatory frameworks are announced, companies must systematically cross-reference dozens of SOPs against new guidance, identify gaps, check internal references, and document required changes. Traditionally, this means weeks of manual document analysis.
I’m experimenting with a multi-component system (designed with AI assistance and documented using project templates I developed alongside AI) where a “Mapper” extracts regulatory requirements, an “Auditor” assesses compliance gaps, and a “Strategist” provides recommendations. I started with a simple document processing pipeline (example below) to test systematic analysis capabilities, but the broader vision is shaping up to incorporate true agentic elements: persistent memory, tool access, and autonomous decision-making about what additional information to gather.
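Because the working version lives in n8n, the Python sketch below is meant only to convey the pipeline’s shape. The prompts are simplified illustrations, and call_llm is a placeholder for whichever model the workflow calls.

```python
# Sketch of the three-stage pipeline: Mapper -> Auditor -> Strategist.
# The prompts are simplified, and call_llm() stands in for the model API the workflow uses.

def call_llm(prompt):
    """Placeholder: send a prompt to a model and return its text response."""
    raise NotImplementedError("connect to your model provider here")

def mapper(new_regulation_text):
    """Extract discrete regulatory requirements from the new guidance."""
    return call_llm(
        "List each distinct requirement in this regulation, one per line:\n\n"
        + new_regulation_text
    )

def auditor(requirements, sop_text):
    """Compare the extracted requirements against one SOP and flag gaps."""
    return call_llm(
        "For each requirement below, state whether this SOP addresses it "
        "and note any gap:\n\nRequirements:\n" + requirements
        + "\n\nSOP:\n" + sop_text
    )

def strategist(gap_reports):
    """Turn per-SOP gap findings into prioritized change recommendations."""
    return call_llm(
        "Summarize these gap findings and recommend required changes, "
        "ordered by priority:\n\n" + "\n\n".join(gap_reports)
    )

def run_pipeline(new_regulation_text, sops):
    requirements = mapper(new_regulation_text)
    gap_reports = [auditor(requirements, sop) for sop in sops]
    return strategist(gap_reports)
```

Each stage is a plain, inspectable step with its own prompt, which is the point of building it myself: I can see exactly how every decision is made and swap any piece independently—the opposite of a black box.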
Family trips and summer obligations have slowed progress, but I’m confident I can complete this project because AI has removed the traditional friction points that used to derail complex experiments. System design guidance, project management frameworks, knowing exactly where to pick up after interruptions—AI handles the organizational overhead that previously made sustained technical learning nearly impossible for someone operating outside their core domain.
To help with all this, I built a custom Claude project that serves as my technical tutor, designed to explain concepts through first principles rather than simply prescribing what to do. It channels a Richard Feynman-esque approach to learning—breaking down complex ideas into fundamental components and helping me understand not just the “how” but the “why” behind every decision. An excerpt from its instructions:
You are a patient coding tutor and computer science professor, who covers all sorts of relevant topics, but you are particularly skilled at guiding users on "vibe coding," using AI to build magical apps and programs.
You are the Richard Feynman of computer science. You always go step-by-step, explaining precisely what we need to do, how we do it, and most importantly, why we do it at each step.
Where do I find the time? Early mornings, often before sunrise, after meditation and stretching. But here’s what surprised me the most: this work has become genuinely engaging. Time I used to spend on less productive activities now goes toward these experiments because building things and pushing boundaries is inherently rewarding.
What I've Learned: The Foundation Changes Everything
Through my experiments and conversations with dozens of legal colleagues, a few key insights have emerged about how this technology actually transforms work.
Deliberate practice beats passive consumption. Every lawyer I know who’s genuinely fluent got there through hands-on experimentation, not courses or frameworks. You learn AI like learning an instrument—through play, iteration, and regular practice. The lawyers making real progress use it daily, all the time—yet maintain their cognitive integrity by deliberately practicing core reasoning skills without AI assistance and regularly auditing which thinking they’re delegating versus amplifying.
Everyone’s path will be different, and that’s exciting. Some colleagues are automating document review, others are building research workflows, still others are using it to explore creative projects or learn new skills. The beauty of having access to core intelligence is that it becomes foundational infrastructure—you can build whatever aligns with your interests and needs.
Start with curiosity, not efficiency. The lawyers who plateau early are those who approach AI as a productivity hack. The ones who continue advancing treat it as intellectual exploration. Play without purpose. Try things that seem interesting rather than immediately practical. Run Deep Research on random topics, build something in Claude Artifacts, experiment with video generation. This isn’t wasted time—it’s developing intuition about capabilities and boundaries.
Work alongside it constantly. Keep multiple models open, parallel-process everything through them, develop comfort with the technology by making it part of your daily thinking process. This builds fluency faster than any structured approach.
The revolutionary aspect isn’t that AI will replace legal work—it’s that core intelligence as foundational infrastructure opens possibilities we couldn’t previously imagine. Everyone gets to explore what becomes possible when the friction of complex reasoning is dramatically reduced.
Where This Is Heading
The most challenging aspect isn’t technical—it’s psychological. I’ve grown comfortable with AI as an assistant where I maintain control, but as these systems become more autonomous, the pressure to delegate entire categories of reasoning becomes harder to resist.
This creates genuine tension. The black box problem is real—if I can’t trace how the system reached its conclusions, how do I assess reliability? And what happens to my analytical capabilities if I consistently delegate the thinking that keeps those skills sharp? (Explored in my prior piece, here.)
Yet the trajectory seems inevitable. The question that interests me most isn’t whether this transition will happen, but what it enables. If systematic document analysis and compliance checking become reliably automated, what does that free us to focus on? The highest-value legal work—judgment, strategy, creative problem-solving—where human insight combines uniquely with domain expertise.
My working hypothesis is that AI will expand the scope of what individual practitioners can accomplish rather than simply replacing existing work. Within 12-24 months, I expect agent-based workflows to become standard for routine tasks. The practitioners who develop fluency with these systems now—while maintaining judgment about when human involvement remains essential—will define the next generation of legal practice.
Conclusion
Through my experiments and conversations with legal colleagues, I’ve come to see AI adoption differently. This isn’t about keeping up or falling behind—it’s about expanding what’s possible for individual practitioners to accomplish.
My experiments across coding, workflow automation, and regulatory analysis aren’t about becoming a developer or systems architect. They’re about understanding how intelligence amplification transforms knowledge work and positioning myself to recognize opportunities as they emerge. When systematic analysis takes minutes instead of hours, you can focus on problems that require genuine human creativity and domain expertise.
The tools are here. The intelligence is sufficient. The choice isn’t whether this technology will reshape legal practice, but whether you will develop fluency with AI as thinking infrastructure. The biggest use case isn’t automation—it’s augmented reasoning itself. Everything else follows from there.
[1] Inspired by Wharton Professor Ethan Mollick’s concept of using “Tarot” cards to facilitate meeting discussions, I created a simple journaling app that draws a Tarot card to provide practical insights in the style of Paul Quinn’s Jungian psychology-based interpretations from the book Tarot for Life. This whimsical project perfectly embodied “learning by playing”—I had no practical need for the app, but building it taught me more about AI integration than any tutorial could have.