AI, Analogy, and the Future of Law and Marketing

Persuasion Machines
Both law and marketing are persuasion-driven professions. Lawyers persuade judges, juries, and regulators. Marketers persuade customers, investors, and the public. At the core of persuasion in both domains lies one of humanity’s oldest tools for thinking: analogy.
“An atom is like a solar system.”
“This case is like that precedent.”
“Your product is like an iPhone for [industry].”
Analogies are shortcuts to understanding. They let us transfer knowledge, structure arguments, and make unfamiliar ideas relatable.
The question now is: what happens when machines start making analogies?
Large Language Models (LLMs) such as GPT, Claude, and Gemini are entering spaces once thought exclusively human. They can generate legal briefs, draft marketing campaigns, and spin up metaphors on command. But do they reason by analogy the way humans do—or simply imitate? And what does that mean for the future of professions built on persuasion?
The Power of Analogy in Persuasion
Before we talk AI, let’s ground ourselves in why analogy matters so much in law and marketing.
- In law: Analogical reasoning is foundational. Courts look to precedents: “This case is like Smith v. Jones, therefore the ruling should be similar.” Lawyers don’t just cite; they map relationships between old and new cases. That’s persuasion by analogy.
- In marketing: Analogies fuel storytelling. A fitness app isn’t just a program—it’s “a personal trainer in your pocket.” Good copy positions products through analogies that connect with human experience.
Both fields thrive on transfer: moving meaning from something familiar to something new. And this is exactly the kind of pattern-making AI is built to exploit.
How Humans Build Analogies
Cognitive science shows that humans don’t just look for surface similarities. We align structures and relationships.
- Structure-Mapping Theory (Gentner, 1983) describes analogy as a mapping of relations (not just features) between two domains.
- Example: In the solar system, planets orbit a sun. In the atom, electrons orbit a nucleus. The relation “small bodies orbit a central mass” transfers, as the sketch after this list shows.
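To make that idea concrete, here is a minimal Python sketch of structure mapping. It is not Gentner’s actual SME algorithm, just a toy: relations are tuples, and the mapping aligns entities wherever the two domains share a predicate in the same argument positions.

```python
# Toy illustration of structure mapping: analogy as alignment of
# *relations* between domains, not surface features. A sketch of the
# idea only, not Gentner's SME algorithm.

SOLAR_SYSTEM = [
    ("orbits", "planet", "sun"),
    ("more_massive_than", "sun", "planet"),
    ("attracts", "sun", "planet"),
]

ATOM = [
    ("orbits", "electron", "nucleus"),
    ("more_massive_than", "nucleus", "electron"),
    ("attracts", "nucleus", "electron"),
]

def map_structures(base, target):
    """Propose entity correspondences wherever both domains share a
    relation (same predicate, same argument positions)."""
    correspondences = {}
    for pred_b, *args_b in base:
        for pred_t, *args_t in target:
            if pred_b == pred_t and len(args_b) == len(args_t):
                for b, t in zip(args_b, args_t):
                    correspondences.setdefault(b, t)
    return correspondences

print(map_structures(SOLAR_SYSTEM, ATOM))
# {'planet': 'electron', 'sun': 'nucleus'}
```

Notice that the surface features of suns and nuclei never enter the computation; the correspondence is driven entirely by shared relational structure.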
The brain networks involved—particularly the rostrolateral prefrontal cortex and hippocampus—enable us to retrieve memories, detect patterns, and map structures in ways tuned for persuasion and problem-solving.
How AI Does It Differently
Here’s the key distinction:
- Humans explicitly align relational structures (case law → precedent, feature → benefit).
- LLMs approximate analogy using embeddings—mathematical vectors that cluster similar words and concepts based on context in massive datasets.
When an LLM fills in “Hand is to glove as foot is to ___” → “sock,” it isn’t reasoning like a lawyer. It’s detecting that the vector difference between “hand” and “glove” is similar to that between “foot” and “sock.”
That’s powerful, but it isn’t true relational mapping—it’s statistical geometry masquerading as reasoning. This explains both the strength and weakness of LLMs in persuasion tasks.
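To see what that geometry looks like, here is a minimal Python sketch using hand-built toy vectors. The four named dimensions are an illustrative assumption; real embeddings are learned from text and run to hundreds of dimensions. It solves the analogy with the classic vector-offset method: compute glove - hand + foot and take the nearest neighbour by cosine similarity.

```python
import numpy as np

# Toy 4-D "embeddings" (dimensions: hand-ness, foot-ness, head-ness,
# covering-ness). Hand-built for illustration; real embeddings are
# learned from massive text corpora.
vocab = {
    "hand":  np.array([1.0, 0.0, 0.0, 0.0]),
    "glove": np.array([1.0, 0.0, 0.0, 1.0]),
    "foot":  np.array([0.0, 1.0, 0.0, 0.0]),
    "sock":  np.array([0.0, 1.0, 0.0, 1.0]),
    "head":  np.array([0.0, 0.0, 1.0, 0.0]),
    "hat":   np.array([0.0, 0.0, 1.0, 1.0]),
}

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def analogy(a, b, c):
    """Solve a : b :: c : ? by the vector-offset method."""
    target = vocab[b] - vocab[a] + vocab[c]
    candidates = {w: v for w, v in vocab.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

print(analogy("hand", "glove", "foot"))  # sock
```

Nothing in this computation represents the relation “covers.” The right answer falls out only because the difference vectors happen to be parallel; that is the statistical geometry at work.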
AI in Marketing - Fast, Fluent, and... Flat
Where LLMs Excel
- Speed: Instant generation of brand analogies, headlines, and campaign copy.
- Variation: Dozens of stylistic takes in seconds (“personal trainer in your pocket,” “coach on demand,” “fitness guide in your hand”).
- Pattern mining: Can imitate analogies seen across thousands of campaigns.
Where They Fail
- Clichés: Without human judgment, outputs often land in well-trodden territory.
- Context misses: A campaign analogy that works in Toronto may flop in Tokyo; LLMs don’t inherently know cultural nuance.
- Empathy gap: Humans map analogies not just on structure, but on audience emotion. That layer of resonance remains stubbornly human.
AI in Law - Useful but Risky
Where LLMs Excel
- Drafting efficiency: First-pass briefs, memos, or case summaries.
- Pattern recognition: Identifying similar precedents or case law at scale.
- Educational support: Explaining complex concepts in simpler, analogy-rich terms.
Where They Fail
- Hallucinated precedents: AI may confidently cite cases that don’t exist.
- Jurisdictional blindness: Analogies may pull from U.S. law when Canadian context is needed.
- Lack of stakes-awareness: Lawyers reason strategically with lives, liberty, or millions at stake. AI cannot weigh those consequences.
For law societies across Canada, this is the red line: LLMs can assist in persuasion, but they cannot own it. Responsibility—and liability—remains human.
Persuasion Machines - The Middle Ground
So, are LLMs “persuasion machines”? In one sense, yes: they generate analogies at scale, which is the raw material of persuasion. But their analogies are shallow approximations.
The winning formula isn’t AI versus human. It’s AI + human:
- In marketing: LLM drafts campaign analogies, humans filter for cultural resonance and originality.
- In law: LLMs draft case comparisons, lawyers vet for accuracy, ethical compliance, and persuasive force.
In both, humans bring goal-directed judgment—deciding which analogy matters, and why.
What the Next 5–10 Years Hold
Short Term (1–3 years)
- LLMs become ubiquitous assistants for persuasion-heavy writing.
- Marketers rely on them for campaign drafts.
- Lawyers use them for internal memos, but courts remain wary of AI-generated submissions.
Medium Term (3–5 years)
- Neuro-symbolic AI emerges—hybrids that combine LLM fluency with explicit relational reasoning.
- Marketing tools offer analogy engines that adjust for audience psychology.
- Legal AI tools link directly to verified case databases to prevent hallucinations; a toy version of that verification gate is sketched below.
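Reduced to a toy, such a guardrail might look like the sketch below: a gate that flags any citation it cannot find in a trusted database. Everything here is hypothetical; the case names, the citation pattern, and the in-memory VERIFIED_CASES set stand in for what a real tool would do by querying an authoritative source such as CanLII.

```python
import re

# Hypothetical verification gate: every citation in an AI draft must
# match a verified database before the draft moves on. The entries
# below are fictional stand-ins for a real citation database.
VERIFIED_CASES = {
    "Smith v. Jones, 2015 ONCA 101",
    "Lee v. Nguyen, 2019 ABQB 412",
}

# Simplified pattern for neutral citations ("X v. Y, 2015 ONCA 101");
# real citation formats are far more varied.
CITATION_PATTERN = re.compile(
    r"[A-Z][\w.'-]* v\. [A-Z][\w.'-]*, \d{4} [A-Z]+ \d+"
)

def audit_citations(draft: str) -> list[str]:
    """Return citations in the draft that are not in the verified set."""
    found = CITATION_PATTERN.findall(draft)
    return [c for c in found if c not in VERIFIED_CASES]

draft = ("As held in Smith v. Jones, 2015 ONCA 101, the analogy holds; "
         "see also Doe v. Acme, 2021 BCCA 999.")
print(audit_citations(draft))  # ['Doe v. Acme, 2021 BCCA 999']
```

The design point is the gate itself: the model drafts freely, but nothing it cites reaches a reader until an external, verified source confirms it exists.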
Long Term (5–10 years)
- AI capable of true relational reasoning appears, closing the gap between embeddings and human-like analogy.
- Routine persuasion tasks (draft contracts, boilerplate marketing campaigns) become semi-automated.
- Human persuasion shifts upward: high-stakes law, breakthrough branding, and ethical strategy remain human domains.
So… Who Owns Persuasion?
Law and marketing share a truth: persuasion isn’t just about words. It’s about judgment, stakes, and context.
LLMs are powerful persuasion machines in the sense that they can mass-generate analogies. But they don’t yet reason with analogy the way humans do. They imitate the surface; humans grasp the structure.
For now—and likely well into the future—the most persuasive work will come from humans who know how to harness AI, not be replaced by it.
In courtrooms and campaigns alike, analogy remains the common currency. And while AI may mint more of it, humans still decide its value.