AI, Analogy, and the Future of Law and Marketing

Persuasion Machines

By Dave Taillefer, Business Director / ICONA

Law and marketing share a common foundation: both rely on the ability to persuade. Lawyers frame arguments that move judges and regulators. Marketers build narratives that shape customer decisions and public perception. And at the centre of both disciplines sits one of humanity’s oldest cognitive tools — analogy.

“An atom is like a solar system.”
“This case resembles that precedent.”
“Your product is the iPhone for this industry.”

Analogies compress complexity. They turn unfamiliar concepts into something we already understand, bridging knowledge gaps and accelerating judgment. They remain one of the most efficient tools for reasoning.

Now a deeper question is emerging: what happens when machines begin generating analogies of their own?

Large Language Models (LLMs) such as GPT, Claude, and Gemini have moved into work that once seemed exclusively human. They can draft briefs, build campaigns, and produce metaphors with ease. But the real issue isn’t output — it’s method. Are these systems reasoning through analogy, or simulating it? And what does that distinction mean for professions built on influence and interpretation?

The Power of Analogy in Persuasion

Analogy sits near the heart of both legal argument and marketing strategy.

  • In law: Persuasion often hinges on comparing one case to another. Courts rely on precedent; lawyers draw structural parallels to argue why an outcome should align with past rulings.
  • In marketing: Analogies anchor messages in familiar experiences. A fitness app becomes “a personal trainer in your pocket.” A complex service becomes something instantly recognizable.

Both domains rely on transfer — carrying meaning from the known to the new. And that’s precisely the type of pattern-matching AI has begun to automate, though not always in the way humans do it.

How Humans Build Analogies

Cognitive science shows that humans don’t build strong analogies by matching surface features. We map relationships and structures.

  • Structure-Mapping Theory (Gentner, 1983) defines analogy as the alignment of relational patterns, not just shared traits.
  • Example: Planets orbit a sun; electrons orbit a nucleus. The specifics differ, but the underlying relational structure carries over (see the sketch below).
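
To make the idea concrete, here is a minimal Python sketch of relational matching. It is not Gentner’s actual Structure-Mapping Engine; the domains, relation names, and objects are invented simplifications used only to show how a correspondence can fall out of shared relations rather than shared features:

    # Toy illustration of structure mapping. Domains and relation
    # names are hypothetical simplifications, not a real SME model.
    solar_system = [
        ("orbits", "planet", "sun"),
        ("attracts", "sun", "planet"),
    ]
    atom = [
        ("orbits", "electron", "nucleus"),
        ("attracts", "nucleus", "electron"),
    ]

    def relational_matches(base, target):
        # Pair facts that share a relation name, ignoring what the
        # objects themselves look like (no surface-feature matching).
        return [(b, t) for b in base for t in target if b[0] == t[0]]

    for base_fact, target_fact in relational_matches(solar_system, atom):
        print(base_fact, "<-->", target_fact)
    # ('orbits', 'planet', 'sun') <--> ('orbits', 'electron', 'nucleus')
    # ('attracts', 'sun', 'planet') <--> ('attracts', 'nucleus', 'electron')
    # The correspondence sun <-> nucleus, planet <-> electron emerges
    # from shared structure, even though the objects share no features.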

This mapping ability draws on regions of the brain such as the rostrolateral prefrontal cortex and hippocampus, which help us retrieve memories, detect patterns, and build arguments. It’s a distinctly human form of reasoning.

How AI Does It Differently

Here’s where the systems diverge.

  • Humans reason through relationships, consequences, and intent.
  • LLMs rely on embeddings, numerical vectors that cluster words and ideas by their statistical proximity across massive datasets.

When an LLM answers “Hand is to glove as foot is to ___” with “sock,” it isn’t tracing structural relations. It’s detecting that the vector difference between “hand” and “glove” mirrors the difference between “foot” and “sock.”
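
A rough Python sketch of that mechanism, using tiny made-up three-dimensional vectors rather than real learned embeddings (which have hundreds or thousands of dimensions), looks like this:

    import numpy as np

    # Tiny vectors invented purely for illustration.
    vectors = {
        "hand": np.array([0.9, 0.1, 0.2]),
        "glove": np.array([0.8, 0.6, 0.2]),
        "foot": np.array([0.2, 0.1, 0.9]),
        "sock": np.array([0.1, 0.6, 0.9]),
        "hat": np.array([0.5, 0.7, 0.1]),
    }

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # "Hand is to glove as foot is to ___": apply the hand -> glove
    # offset to foot, then pick the nearest remaining word.
    target = vectors["foot"] + (vectors["glove"] - vectors["hand"])
    scores = {w: cosine(target, v) for w, v in vectors.items()
              if w not in ("hand", "glove", "foot")}
    print(max(scores, key=scores.get))  # -> sock

Nothing in that calculation knows what hands or feet are for; the answer falls out of the geometry alone.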

It’s sophisticated pattern geometry, not true analogy. This gap explains why AI can sound insightful one moment and superficial the next.

AI in Marketing — Fast, Fluent, and Often Flat

Where LLMs Excel

  • Speed: They produce analogies, headlines, and variations instantly.
  • Range: They generate numerous conceptual angles at once.
  • Pattern recall: They draw on a vast base of advertising language.

Where They Fall Short

  • Clichés: Without direction, they default to familiar patterns.
  • Cultural nuance: An analogy that resonates in one market may fall flat in another.
  • Emotional judgment: Humans weigh tone and audience sentiment; AI doesn’t.

AI in Law — Helpful, but Not a Substitute

Where LLMs Excel

  • Drafting: Rapid first-pass briefs, summaries, or memos.
  • Pattern spotting: Surfacing similar cases across large datasets.
  • Explanations: Breaking down complex doctrines through analogy.

Where They Fail

  • Hallucinated cases: Confidently presenting decisions that never occurred.
  • Jurisdictional drift: Pulling U.S. concepts into Canadian-law contexts.
  • Lack of stakes-awareness: Humans adjust reasoning when consequences escalate; AI does not.

For Canadian law societies, that distinction is decisive. AI may support persuasion, but it cannot replace the professional judgment tied to ethical and legal responsibility.

Persuasion Machines — A Middle Ground

So are LLMs really “persuasion machines”? In one sense, yes — they can generate analogies at unprecedented scale. But the analogies often lack the structural depth that persuasion demands.

The real advantage isn’t AI versus human. It’s AI with human oversight:

  • In marketing: AI proposes possibilities; humans choose what resonates.
  • In law: AI drafts comparisons; lawyers determine what holds up under scrutiny.

Humans provide the judgment — selecting the analogy that strengthens an argument and discarding the ones that don’t.

What the Next 5–10 Years Are Likely to Bring

Short Term (1–3 years)

  • LLMs become routine assistants for persuasion-heavy writing.
  • Marketers use them for rapid draft cycles.
  • Lawyers rely on them internally while courts continue to scrutinize AI-generated material.

Medium Term (3–5 years)

  • Neuro-symbolic AI begins merging statistical models with explicit reasoning systems.
  • Marketing tools adapt analogies to audience psychology.
  • Legal AI tools integrate with verified caselaw repositories.

Long Term (5–10 years)

  • AI with deeper relational reasoning begins to emerge.
  • Routine persuasion tasks — boilerplate campaigns or standard agreements — become semi-automated.
  • Human work shifts toward strategy, narrative framing, and high-stakes advocacy.

So… Who Owns Persuasion?

Persuasion has always involved more than language. It relies on judgment, context, and the stakes of the moment. LLMs can generate analogies at scale, but they don’t yet grasp the underlying structure in the way humans do. They imitate the pattern; humans understand the purpose.

For now — and likely for many years — the strongest arguments and most effective campaigns will come from people who know how to use AI without surrendering judgment to it.

In the courtroom and the marketplace alike, analogy remains the shared vocabulary of influence. AI may flood the field with possibilities, but humans still decide which ones carry weight.

About the Author

Dave Taillefer is Business Director at ICONA Inc., overseeing SEO, content strategy and digital transformation for Canadian law firms. With deep experience in legal marketing, technical SEO and generative search strategy, Dave helps firms strengthen their digital visibility in a rapidly changing landscape.