The Only AI Vocabulary Marketers Actually Need


TL;DR

About 20 terms cover 95% of AI marketing meetings. Half the failed AI conversations happen because two people use the same word to mean different things. Learn the glossary once and you (a) won’t be intimidated by jargon and (b) can catch vendors when they’re wrong. Three distinctions matter most: generative vs. predictive, narrow vs. general AI, and training data vs. live data.

What This Guide Covers

This is the minimum shared vocabulary you need to navigate vendor pitches, internal strategy meetings, and team standups about AI. It’s organized as a quick-reference glossary plus three high-leverage distinctions that filter most product decisions. Print it, share it with your team, refer back to it. You don’t need to memorize anything beyond what’s here.

Key Takeaways

  • 20 terms cover 95% of AI marketing conversations — learn them once.
  • Generative vs. predictive is the only categorical split you need to filter vendor pitches.
  • RAG (Retrieval-Augmented Generation) beats fine-tuning for most enterprise marketing use cases.
  • AGI is not shipping in 2026 — if a vendor sells it, you’re being sold marketing, not capability.
  • Live data trumps training data for anything current.

The 20 Terms That Cover 95% of Meetings

  • AI: Software performing tasks associated with human intelligence — recognition, prediction, generation, optimization.
  • Machine learning: Systems that learn patterns from data instead of being explicitly programmed.
  • LLM (Large Language Model): The engine behind ChatGPT, Claude, Gemini — trained on huge text datasets to predict the next word.
  • Prompt: The instruction you give an AI model to produce a result.
  • Token: A chunk of text the model processes (roughly 0.75 words in English). Pricing is usually per token.
  • Context window: How much text a model can consider at once. Bigger windows let you pass full briefs and reference material.
  • Hallucination: A confidently stated false answer. Always verify factual claims before publishing.
  • RAG (Retrieval-Augmented Generation): The model pulls from your live documents to ground answers.
  • Fine-tuning: Further training a base model on your own data to specialize it for a task.
  • Embeddings: Numeric representations of text used for similarity search and semantic matching.
  • Vector database: Storage optimized for embeddings (Pinecone, Weaviate, pgvector). Powers RAG and semantic search.
  • System prompt: The hidden instruction that sets the model’s role, constraints, and behavior for a session.
  • Temperature: How random or creative the model’s output is — low for factual tasks, higher for creative work.
  • Multimodal: Works across text, image, audio, and video in one workflow.
  • Agent: AI that autonomously takes multi-step actions on tools toward a goal.
  • MCP (Model Context Protocol): A standard way to connect AI to tools and data — emerging as the universal connector.
  • Inference: Running the model to get an output (vs. training, which builds the model).
  • Guardrails: Rules that prevent the model from going off-script (no PII, brand-safe topics, factual scope).
  • Generative AI: AI that creates new content from a prompt — text, image, audio, video, code.
  • Predictive AI: AI that forecasts future values from past data — churn, LTV, conversion likelihood.
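As a back-of-the-envelope check, the 0.75 words-per-token rule above turns a word count into rough token and cost estimates. A minimal Python sketch; the per-token price is a made-up placeholder, not any vendor's actual rate:

```python
# Rough token and cost estimate for a prompt, using the
# ~0.75 words-per-token rule of thumb from the glossary.
WORDS_PER_TOKEN = 0.75
PRICE_PER_1K_TOKENS = 0.003  # hypothetical USD rate, check your provider's pricing

def estimate_tokens(text: str) -> int:
    """Approximate token count from a simple word count."""
    word_count = len(text.split())
    return round(word_count / WORDS_PER_TOKEN)

def estimate_cost(text: str) -> float:
    """Approximate cost in USD for processing the text once."""
    return estimate_tokens(text) / 1000 * PRICE_PER_1K_TOKENS

brief = "Write three subject lines for our spring launch email " * 50
print(estimate_tokens(brief), "tokens, about $", estimate_cost(brief))
```

Actual tokenization varies by model and language, so treat this as budgeting arithmetic, not billing truth.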

Three Distinctions Worth Internalizing

1. Generative vs. Predictive

Generative AI creates new content. Predictive AI forecasts future values. They are completely different toolsets with different vendors, different price models, and different success metrics. Buying a “generative AI solution” to forecast customer churn is a category error that wastes budget and time. When a vendor pitches you, ask which category their product is in — if they hedge, that’s the answer.

2. Narrow vs. General AI

Every AI tool in 2026 is narrow — good at a specific task or task family. General AI (often called AGI) doesn’t exist yet despite vendor claims. This matters in practice because narrow AI requires you to specify the task clearly. There’s no “just handle it” button. The marketers who get value from AI write specific prompts and define specific outcomes. The ones who don’t blame the model.

3. Training Data vs. Live Data

Training data is what the model learned from, with a knowledge cutoff date (usually months before today). Live data is what you feed it in the moment via RAG, web search, or document uploads. Live data trumps training data for anything current — pricing, news, competitive moves, your own customer records. Models without live data access will confidently give you yesterday’s answer to today’s question.

Common Mistakes to Avoid

  • Letting jargon intimidate you out of asking basic questions. Nine times out of ten, the person using the jargon heard it in a demo last week and can’t define it either.
  • Confusing AI with AGI. AGI doesn’t exist yet. Anyone selling it is exaggerating.
  • Skipping vocabulary work entirely. A team that can’t define the terms can’t write good prompts, evaluate vendors, or escalate problems.
  • Asking vendors for “AI” without specifying generative or predictive. You’ll get pitches for tools you don’t need.

Actions to Take This Week

  1. Pick three terms from the table above you’ve heard but never fully understood.
  2. Use each one correctly in one sentence today, out loud or in Slack.
  3. Make this glossary table available to your team in Notion or a shared doc.
  4. Schedule a 30-minute lunch-and-learn next month to walk through the 20 terms.

Frequently Asked Questions

What’s the difference between an LLM and a chatbot?

The LLM is the underlying engine (e.g., GPT-5, Claude). The chatbot is the user interface that talks to people. ChatGPT is a chatbot powered by OpenAI’s LLMs. A chatbot on your website might be powered by Claude, GPT, Gemini, or a smaller model — the choice affects quality and cost.

RAG or fine-tuning — which should I use?

RAG for most marketing use cases. It’s cheaper, faster to update, and grounds answers in current documents (your help center, brand guide, product specs). Fine-tuning is for narrow, repetitive tasks where you’ve already proven RAG isn’t enough — and for tone-matching at very high volume.
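To make the retrieval step concrete, here is a toy sketch of what RAG does before the model ever answers: embed the documents, embed the question, and paste the closest match into the prompt. Real systems use model-generated embeddings and a vector database; plain word counts stand in for embeddings here purely for illustration:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': lowercase word counts (real systems use model embeddings)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(question: str, documents: list[str]) -> str:
    """Return the document most similar to the question."""
    q = embed(question)
    return max(documents, key=lambda d: cosine(q, embed(d)))

docs = [
    "Refund policy: customers can request a refund within 30 days.",
    "Brand voice: friendly, concise, never more than two exclamation points.",
    "Shipping: orders ship within 2 business days from our warehouse.",
]
best = retrieve("What is our refund policy?", docs)
prompt = f"Answer using only this context:\n{best}\n\nQuestion: What is our refund policy?"
```

The point of the sketch: the model never searches anything itself. Your pipeline finds the right document and hands it over, which is why updating a RAG system is as easy as updating the documents.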

What’s a context window and why does it matter?

It’s the amount of text a model can consider in one conversation. A larger context window (e.g., 200K tokens, roughly 150,000 words) lets you upload full brand guides, long meeting transcripts, or extensive product documentation without losing earlier context. Smaller windows force more summarization and lose nuance.
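That arithmetic can be turned into a quick fit check. A sketch assuming a 200K-token window and the 0.75 words-per-token rule of thumb (actual tokenization varies by model and language):

```python
def fits_in_window(word_count: int, window_tokens: int = 200_000,
                   words_per_token: float = 0.75) -> bool:
    """Rough check: does a document of `word_count` words fit in the context window?"""
    estimated_tokens = word_count / words_per_token
    return estimated_tokens <= window_tokens

print(fits_in_window(150_000))  # True: ~200K tokens, just fits
print(fits_in_window(200_000))  # False: ~267K tokens, too big
```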

Should I worry about hallucinations?

Yes, for any output with stakes — factual claims, statistics, named people, quoted text. Always verify. RAG with citations and conservative temperature settings dramatically reduce hallucinations but don’t eliminate them. Build a quick verification step into every workflow.
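One way to build that verification step is a simple pre-publish flagger that marks sentences containing the risky claim types (statistics, quoted text) for human review. A hypothetical sketch; the patterns are illustrative, not exhaustive:

```python
import re

# Flag sentences that contain claim types a human should verify
# before publishing. Patterns are deliberately simple examples.
RISKY_PATTERNS = {
    "number or statistic": re.compile(r"\d"),
    "quoted text": re.compile(r"[\"\u201c\u201d]"),
}

def flag_for_review(draft: str) -> list[tuple[str, str]]:
    """Return (sentence, reason) pairs that need human verification."""
    flags = []
    for sentence in re.split(r"(?<=[.!?])\s+", draft):
        for reason, pattern in RISKY_PATTERNS.items():
            if pattern.search(sentence):
                flags.append((sentence, reason))
                break  # one flag per sentence is enough to queue a review
    return flags

draft = ('Our churn dropped 37% last quarter. Customers love the new flow. '
         '"Best tool ever," said one user.')
for sentence, reason in flag_for_review(draft):
    print(f"CHECK ({reason}): {sentence}")
```

Running a check like this on every AI draft costs seconds and catches the categories where hallucinations do the most damage.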

Is AGI shipping in 2026?

No. Useful narrow AI keeps shipping. AGI remains a research goal with no agreed-upon timeline. If a vendor markets “AGI” or “human-level AI” as a current capability, treat it as marketing, not capability — and keep verifying their other claims.

Sources and Further Reading

  • Riman, T. (2026). Introduction au marketing et à l'IA, 2nd edition.
  • Anthropic and OpenAI documentation on RAG, embeddings, and context windows.
  • Stanford AI Index 2025.

About the Riman agency: We translate AI vocabulary into marketing decisions and run team training. Book a team training session.
