Etna AI Glossary (2026)

Plain-English definitions for leadership AI fluency. Each term includes why it matters at Etna and a usage note.

Automation

Foundations
Definition: Rule-based execution of tasks without human intervention.
Why it matters: Many “AI wins” are actually automation wins; confusing the two increases cost and risk.
Use when: Deciding if a workflow needs intelligence or just consistency.

Generative AI

Foundations
Definition: Models that create new content (text, images, audio, code).
Why it matters: Multiplies ideation and first-draft speed.
Use when: You need options/variations to edit rather than start from scratch.

Large Language Model (LLM)

Foundations
Definition: A model trained on vast text to predict the next tokens; powers chat/assistants.
Why it matters: Core engine behind drafting, summarizing, and Q&A.
Use when: You need language tasks done quickly (briefs, emails, reports).

Inference

Foundations
Definition: Running a trained model to generate predictions/outputs.
Why it matters: Where latency and per-output cost are felt.
Use when: Estimating throughput and SLAs for workflows.

Deterministic vs Probabilistic Systems

Foundations
Definition: Deterministic systems produce the same output given the same input; probabilistic systems may vary.
Why it matters: Sets expectations for consistency and review needs.
Use when: Deciding where AI is appropriate vs where strict repeatability is required.

Context Window

Foundations
Definition: The maximum text the model can consider at once.
Why it matters: Too little context → omissions; too much → unnecessary cost.
Use when: Deciding how much source material to include.

Token / Tokenization

Foundations
Definition: The sub-word units models read; usage is billed per token.
Why it matters: Drives cost/latency and limits prompt size.
Use when: Estimating run cost and fitting long inputs.
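
The idea can be sketched in a few lines. This uses the common rule of thumb that English text averages roughly 4 characters per token; the per-1K-token price below is illustrative, not a real rate.

```python
# Rough token and cost estimate for a prompt.
# Heuristic: English text averages ~4 characters per token (varies by model).
# The price per 1K tokens is illustrative, not a real vendor rate.

def estimate_tokens(text: str) -> int:
    """Approximate token count using the ~4 chars/token rule of thumb."""
    return max(1, len(text) // 4)

def estimate_cost(text: str, price_per_1k_tokens: float = 0.01) -> float:
    """Illustrative input cost estimate for sending `text` to a model."""
    return estimate_tokens(text) / 1000 * price_per_1k_tokens

brief = "Summarize this quarterly report for the leadership team. " * 50
print(estimate_tokens(brief), round(estimate_cost(brief), 4))
```

Real billing uses the model's actual tokenizer, but this back-of-envelope version is enough for sizing a workflow.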

Transformer

Foundations
Definition: The neural architecture most modern LLMs use.
Why it matters: Explains why context handling matters and why costs scale with input length.
Use when: Framing limits and behavior of LLMs to non-technical peers.

Prompt

Prompting
Definition: The instructions and context you give the model.
Why it matters: Clear prompts produce better, more consistent results.
Use when: Writing any request; specify role, task, context, constraints, and format.

Prompt Framework (RTF/TAG/BAB/CARE/RISE)

Prompting
Definition: Simple structures for writing prompts (e.g., Role-Task-Format).
Why it matters: Reduces trial-and-error; improves reproducibility.
Use when: You want teammates to get similar results for the same task.

System Prompt

Prompting
Definition: A persistent instruction that sets the model’s role/behavior.
Why it matters: Standardizes outputs across people and time.
Use when: Building repeatable assistants or SOP-driven workflows.
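
In practice, most chat-style APIs accept a list of messages in which a "system" message persistently sets the assistant's role and rules; the exact field names vary by vendor, and the prompt text here is illustrative.

```python
# A system message travels with every request, so different teammates get
# consistent behavior without re-explaining the role each time.
# Message structure follows the common chat-API convention; details vary by vendor.

system_prompt = (
    "You are Etna's content reviewer. Always check drafts against brand tone, "
    "flag unverifiable claims, and return feedback as a bulleted list."
)

messages = [
    {"role": "system", "content": system_prompt},           # persistent behavior
    {"role": "user", "content": "Review this draft: ..."},  # per-request task
]

print(messages[0]["role"])
```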

Temperature

Prompting
Definition: A setting that controls randomness (lower = more deterministic).
Why it matters: Balances creativity vs. consistency.
Use when: Lower for compliance/SOPs; higher for brainstorming.
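
A minimal sketch of the math behind the setting: temperature rescales the model's raw scores before they are turned into sampling probabilities. The three-word vocabulary and scores below are toy values.

```python
import math

def apply_temperature(logits, temperature):
    """Convert raw model scores (logits) into sampling probabilities.
    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more varied outputs)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                              # subtract max for numeric stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                 # illustrative scores for three candidate words
low = apply_temperature(logits, 0.2)     # top choice becomes near-certain
high = apply_temperature(logits, 2.0)    # choices move much closer together
print(round(low[0], 3), round(high[0], 3))
```

This is why "temperature 0" conversations feel repeatable and high-temperature ones feel exploratory.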

Chain-of-Thought (CoT)

Prompting
Definition: Prompting the model to reason step-by-step.
Why it matters: Improves problem decomposition and transparency.
Use when: You want structured reasoning or math-like steps.

RTF (Role–Task–Format)

Prompt Framework
Definition: A prompt structure that specifies who the AI is, what it must do, and what the output should look like.
Why it matters: Most weak prompts fail because they don’t define the role clearly, the task precisely, or what “done” looks like.
Use when: You want reliable first drafts, summaries, rewrites, or deliverables with a specific output structure.
Template: Role: You are a [role]. Task: Do [task]. Format: Return as [format].
Example: Role: “You’re a B2B SaaS product marketer.” Task: “Write a launch announcement for an AI-powered CRM feature.” Format: “LinkedIn post with a clear CTA + 3 bullet benefits.”
Watch-outs: Role too vague (“be helpful”), or no format specified (output rambles).

TAG (Task–Action–Goal)

Prompt Framework
Definition: A framework that clarifies what the work is, what action to take, and what outcome success should drive.
Why it matters: Leaders often ask for work without specifying the intended outcome. TAG forces the “so what” into the prompt.
Use when: Redesigning something, improving performance, or writing for conversion and outcomes.
Template: Task: [what you’re working on]. Action: [what to do]. Goal: [how success is measured].
Example: Task: “Onboarding email sequence.” Action: “Revise our current 5-email flow.” Goal: “Increase 7-day activation by 20%.”
Watch-outs: Goal is fuzzy (“make it better”) or missing constraints (audience, tone, legal, brand).

BAB (Before–After–Bridge)

Prompt Framework
Definition: A change-focused framework that describes the current state, the desired state, and the bridge to get there.
Why it matters: Forces leaders to state the transformation, not just the task.
Use when: You need improvement plans, recommendations, or transformation strategies.
Template: Before: [current state]. After: [desired state]. Bridge: [recommended approach].
Example: Before: “Low daily app engagement.” After: “Users return at least 3x/week.” Bridge: “Suggest feature, UX, and notification changes.”
Watch-outs: “After” is unrealistic or unmeasurable, or the bridge jumps to tactics without diagnosing constraints.

CARE (Context–Action–Result–Example)

Prompt Framework
Definition: A framework that provides context, specifies what to do, defines success, and includes an example.
Why it matters: AI performs better when it can pattern-match against a reference example.
Use when: You want outputs to match a known style, structure, or proven approach.
Template: Context: [background]. Action: [task]. Result: [success]. Example: [reference].
Example: Context: “Virtual summit for e-commerce founders.” Action: “Draft landing page sections.” Result: “1,000+ registrations in 4 weeks.” Example: “Use a structure similar to Shopify’s summit page.”
Watch-outs: No example provided (output becomes generic), or results defined without constraints (timeline, audience, tone).

RISE (Role–Input–Steps–Outcome)

Prompt Framework
Definition: A framework that clarifies role, inputs, steps, and desired outcome.
Why it matters: Prevents “magic wand prompting” by forcing clarity on inputs and process.
Use when: Work depends on real inputs and you want transparent reasoning.
Template: Role: [who]. Input: [materials]. Steps: [process]. Outcome: [success].
Example: Role: “Senior UX designer.” Input: “User interviews + checkout heatmaps.” Steps: “Identify drop-off points → propose fixes → prioritize by impact.” Outcome: “Increase completion from 45% to 60%.”
Watch-outs: Missing inputs increases hallucination risk; steps that are too loose produce shallow outputs.

Explain It Back Test

Technique
Definition: A technique where the AI (or a person) restates a plan, workflow, or decision in simpler terms to surface gaps, misunderstandings, or hidden assumptions.
Why it matters: Work often seems clear until it is restated plainly. This quickly reveals ambiguity, missing steps, or invisible constraints that were assumed but never articulated.
Use when: Reviewing AI-generated plans/summaries, checking whether a workflow is actually understood, or diagnosing why something “sounded right” but feels off.

What’s Still Human?

Technique
Definition: A reflective prompt that forces leaders to explicitly identify which parts of a workflow or decision must remain human-owned, even when AI is involved.
Why it matters: AI often relocates judgment rather than removing it. If leaders don’t deliberately assign human responsibility, accountability quietly degrades.
Use when: Evaluating AI-assisted workflows, designing review/approval steps, or clarifying ownership of decisions with real impact or risk.

Two-Pass Discipline

Technique
Definition: A structured approach that separates thinking from execution by using AI in two deliberate stages: first for structure and assumptions, then for execution or detail.
Why it matters: Skipping directly to execution increases rework and outcome illusion. Separating the passes preserves leadership judgment and improves clarity before scale or speed.
Use when: Working on complex or high-impact tasks, when outputs feel fast but misaligned, or when you want to reduce downstream correction.

Determinism Check

Technique
Definition: A diagnostic question to classify work: “If I gave the same input 100 times, should the output be identical?”
Why it matters: Clarifies whether work should be automated (deterministic) or supported by AI with human oversight (probabilistic), reducing misuse and risk.
Use when: Deciding between automation vs. AI assistance, evaluating workflow redesign opportunities, or preventing AI from being applied to rule-bound tasks.

What Would Break? Test

Technique
Definition: A risk-oriented question that asks where damage would occur if an AI-assisted output were wrong, incomplete, or misunderstood.
Why it matters: AI outputs can fail quietly. This forces leaders to consider downstream impact, risk concentration, and the true cost of error.
Use when: Reviewing recommendations, applying IFR thinking, or determining where human review and guardrails are required.

Secret Weapon: Ask AI to Improve the Prompt

Technique
Definition: A technique where you ask the AI to help craft or refine the prompt itself before attempting the task, improving clarity, constraints, and reliability on the first real run.
Why it matters: Leaders often know the goal but not the best way to express it. This technique reduces trial-and-error by using the AI to surface missing context, define what “good” looks like, and clarify constraints before producing outputs.
Use when: You’re unsure how to frame the request, the task has important constraints, outputs feel “mostly right” but off, or you want to standardize a prompt for repeatable team use.
Example: “I’m trying to get you to help me with [goal]. I’m not sure how to phrase my request to get the best results. Can you ask me the key questions first, then draft an effective prompt using RTF+G?”

Ask It to Think First

Technique
Definition: A technique where you explicitly ask the AI to pause and consider the problem, constraints, and approaches before recommending a solution.
Why it matters: Many weak outputs come from jumping straight to an answer. This prompt encourages more structured reasoning, better tradeoff handling, and fewer “first idea wins” responses—especially for complex or high-stakes work.
Use when: You need higher-quality reasoning, the problem has multiple constraints, the first answer tends to be shallow, or you’re making a decision with real downstream impact.
Example: “Before answering, please think through this carefully. Consider the different factors involved, potential constraints, and various approaches before recommending the best solution.”

Outcome Illusion

Reliability & Risk
Definition: The tendency to overestimate improvement because outputs appear faster or more impressive, while ignoring downstream costs like review time, correction, rework, and human judgment.
Why it matters: AI outputs often look polished and persuasive, creating a false perception of progress even when net outcomes have not meaningfully improved.
Use when: Evaluating whether an AI-assisted workflow actually improves outcomes, not just activity or speed.

Hallucination

Reliability & Risk
Definition: Plausible-sounding but false output.
Why it matters: Creates brand, SEO, and trust risk.
Use when: Designing checks such as grounding, citations, and human review.

Grounding / RAG (Retrieval-Augmented Generation)

Reliability & Risk
Definition: Supplying trusted documents/data to the model at answer time.
Why it matters: Reduces hallucination; enables citations.
Use when: Outputs contain facts, numbers, or client specifics.
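
The retrieval step can be sketched as follows. Real RAG systems use embeddings and a vector index rather than word overlap, and the two documents here are invented placeholders.

```python
# Minimal sketch of the retrieval step in RAG: pick the most relevant trusted
# document, then attach it to the prompt so the answer is grounded in it.
# Document names and contents are illustrative.

documents = {
    "pricing": "Etna's standard retainer is billed monthly with a 90-day term.",
    "reporting": "Our SEO reports include rankings, traffic, and conversions.",
}

def retrieve(question: str) -> str:
    """Pick the document sharing the most words with the question (toy scorer)."""
    q_words = set(question.lower().split())
    return max(documents.values(),
               key=lambda doc: len(q_words & set(doc.lower().split())))

def grounded_prompt(question: str) -> str:
    """Build a prompt that restricts the model to the retrieved source."""
    return (f"Answer using ONLY this source:\n{retrieve(question)}\n\n"
            f"Question: {question}")

print(grounded_prompt("How is the retainer billed?"))
```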

Guardrails

Reliability & Risk
Definition: Automated checks/policies that block or flag risky outputs.
Why it matters: Enforces standards without manual policing.
Use when: Validating brand/SEO rules or blocking restricted terms.
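
At its simplest, a guardrail is an automated check that flags outputs containing restricted terms before they go out. The policy list below is illustrative.

```python
# A guardrail in its simplest form: scan a draft for restricted terms and
# flag violations for review instead of relying on manual policing.
# The blocked-terms list is an illustrative policy, not a real one.

BLOCKED_TERMS = {"guarantee", "risk-free", "#1 agency"}

def check_output(text: str) -> list[str]:
    """Return the restricted terms found in a draft (empty list = passes)."""
    lowered = text.lower()
    return sorted(term for term in BLOCKED_TERMS if term in lowered)

draft = "We guarantee first-page rankings."
violations = check_output(draft)
if violations:
    print("Flagged for review:", violations)
```

Production guardrails add classifiers and policy engines, but the shape is the same: check, then block or flag.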

Human-in-the-Loop (HITL)

Reliability & Risk
Definition: A design where humans review, approve, or intervene in AI outputs.
Why it matters: Maintains accountability while enabling speed and scale.
Use when: Outputs affect clients, brand, compliance, or decisions.

Alignment

Reliability & Risk
Definition: Steering model behavior toward human values and policies.
Why it matters: Prevents unsafe or off-brand outputs.
Use when: Setting constraints, safety filters, and tone rules.

Explainability (XAI)

Reliability & Risk
Definition: Methods to understand why a model produced an output.
Why it matters: Aids trust, debugging, and client communications.
Use when: Reviewing higher-impact decisions or sensitive content.

Model Drift

Reliability & Risk
Definition: A decline in model performance over time due to changes in data, environment, or how the model is used.
Why it matters: An AI approach that works today can silently degrade later, producing more errors unless performance is monitored.
Use when: Talking about monitoring, QA, or why a once-reliable AI workflow is suddenly producing worse outcomes.

Evaluation Harness

Reliability & Risk
Definition: A structured set of tests and benchmarks used to measure AI performance consistently over time.
Why it matters: Without consistent evaluation, teams rely on anecdotes and “it feels better,” which leads to outcome illusion and drift going unnoticed.
Use when: Comparing model versions, validating improvements, or defining what “better” means in a pilot.
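
The structure can be sketched in a few lines: a fixed set of test cases scored the same way every run, so "better" is measured rather than felt. The model function here is a stand-in; in practice it would call the real system.

```python
# Sketch of an evaluation harness: fixed cases, fixed scoring, a single
# pass-rate number you can track across model or prompt versions.

test_cases = [
    {"input": "2+2", "expected": "4"},
    {"input": "capital of France", "expected": "Paris"},
]

def fake_model(prompt: str) -> str:
    """Stand-in for the system under test."""
    return {"2+2": "4", "capital of France": "Paris"}.get(prompt, "unknown")

def run_harness(model, cases) -> float:
    """Return the pass rate; compare this number run over run."""
    passed = sum(model(c["input"]) == c["expected"] for c in cases)
    return passed / len(cases)

print(f"pass rate: {run_harness(fake_model, test_cases):.0%}")
```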

Embedding

Knowledge & Retrieval
Definition: Numeric representation of text used to measure similarity.
Why it matters: Powers search and retrieval over our own content.
Use when: Building knowledge search or RAG.
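
The mechanism can be sketched with toy numbers: texts become vectors, and cosine similarity scores how related they are. Real embeddings have hundreds or thousands of dimensions; the three-number vectors below are invented for illustration.

```python
import math

# Sketch of how embeddings enable similarity search. A query vector is
# compared against document vectors; the closest one wins retrieval.

def cosine_similarity(a, b):
    """Score from -1 to 1; higher means more semantically similar."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

query      = [0.9, 0.1, 0.0]   # e.g. "monthly SEO report" (toy values)
doc_seo    = [0.8, 0.2, 0.1]   # e.g. an SEO reporting guide
doc_hiring = [0.0, 0.1, 0.9]   # e.g. a hiring policy page

print(cosine_similarity(query, doc_seo) > cosine_similarity(query, doc_hiring))
```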

Vector Database / Index

Knowledge & Retrieval
Definition: Storage optimized for embeddings and similarity search.
Why it matters: Makes retrieval fast and relevant.
Use when: You want repeatable grounding over many documents.

Workflow Decomposition

Workflow & Experimentation
Definition: Breaking work into discrete steps to identify automation or AI assist points.
Why it matters: Makes AI opportunities visible and scoping realistic.
Use when: Mapping a workflow to find assistable steps.

Impact–Feasibility–Risk (IFR) Lens

Workflow & Experimentation
Definition: A framework for evaluating opportunities across value, practicality, and downside.
Why it matters: Prevents novelty-chasing; supports responsible prioritization.
Use when: Comparing pilot ideas and picking “smallest safe tests.”

AI Pilot / Experiment

Workflow & Experimentation
Definition: A time-boxed, scoped test designed to learn—not to scale.
Why it matters: Prevents premature rollout and reputational risk.
Use when: Testing assumptions before committing resources.

AI Intervention

Workflow & Experimentation
Definition: A targeted change to a single step in a workflow where an AI system assists with analysis, generation, classification, or summarization.
Why it matters: AI adoption often fails when teams try to redesign entire workflows at once. Small, isolated interventions allow leaders to test impact without creating operational instability.
Use when: Designing or testing an AI pilot within an existing workflow.
Example: Add an AI step that summarizes meeting transcripts before a strategist reviews them.

One-Step Discipline

Workflow & Experimentation
Definition: An experimentation constraint that limits an AI pilot to modifying only one step in a workflow.
Why it matters: Changing multiple steps at once makes it impossible to know what caused improvement (or degradation), and increases risk.
Use when: Scoping AI pilots, reviewing pilot charters, or pushing back on scope creep.
Watch-outs: Bundling “quick wins” into one pilot (two prompts + a new template + a new review step) turns learning into noise.

Safe-to-Fail Experiment

Workflow & Experimentation
Definition: A small, controlled experiment designed so that failure does not cause meaningful operational, client, or reputational harm.
Why it matters: AI behaves probabilistically. Safe-to-fail experiments allow learning without betting client trust or stability.
Use when: Testing AI-assisted workflow changes before expanding scope or exposure.
Example: Run AI-assisted drafts internally for two weeks before any client-facing use.

Kill Criteria

Workflow & Experimentation
Definition: Predefined conditions that trigger stopping an experiment or pilot.
Why it matters: Without stopping rules, teams keep weak pilots alive due to sunk cost or enthusiasm, increasing risk and wasted time.
Use when: Writing a pilot charter, designing a test plan, or deciding whether to continue an experiment.
Example: Stop the pilot if human review time increases by more than 30% or if quality drops below an agreed threshold.
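
Kill criteria can even be written as code, so the stop decision is mechanical rather than a debate. The thresholds below mirror the example above and are illustrative; the quality score is assumed to come from whatever review rubric the pilot charter defines.

```python
# Sketch of kill criteria as a check run after each review cycle.
# Thresholds are illustrative and should come from the pilot charter.

def should_stop(review_time_change_pct: float, quality_score: float,
                max_review_increase: float = 30.0,
                min_quality: float = 7.0) -> bool:
    """True if any predefined stopping condition is met."""
    return (review_time_change_pct > max_review_increase
            or quality_score < min_quality)

# Review time blew past the +30% threshold, so the pilot stops.
print(should_stop(review_time_change_pct=35.0, quality_score=8.2))
```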

Quality vs Efficiency Optimization

Quality & Decision Thinking
Definition: Distinguishing between improving speed of work (efficiency) and improving the quality or impact of outcomes.
Why it matters: AI often improves speed while quietly reducing rigor, insight, or decision quality if leaders optimize only for efficiency.
Use when: Defining pilot success metrics, assessing “AI wins,” or reviewing whether faster output actually improved outcomes.
Example: A report produced 20 minutes faster is not a win if it creates more rework or weaker recommendations.

System Effects

Quality & Decision Thinking
Definition: Unintended consequences that appear elsewhere in a workflow when a single step is modified.
Why it matters: Workflows are interconnected. Improving one step can create hidden costs or failures downstream.
Use when: Evaluating pilots, discussing the productivity paradox, or diagnosing why a “faster” process feels worse.
Example: AI-generated drafts speed up writing but increase editing time and approval cycles.

Evaluation Under Uncertainty

Quality & Decision Thinking
Definition: Assessing AI outputs when correctness cannot always be determined immediately or objectively.
Why it matters: Many AI outputs are plausible but incomplete or subtly wrong. Leaders need explicit quality definitions and review practices.
Use when: Reviewing summaries, recommendations, analyses, or any output where “looks right” is not proof.

Human Owner

Governance
Definition: The person responsible for reviewing and validating AI outputs before they influence decisions, deliverables, or client communications.
Why it matters: AI does not remove accountability. It relocates judgment. Without a named owner, errors become “everyone’s fault,” which means no one’s.
Use when: Designing AI-assisted workflows, setting review steps, or writing a pilot charter.
Example: A strategist validates AI-flagged anomalies before they appear in client reporting.

Delegated Judgment

Governance
Definition: A failure mode where AI outputs are implicitly treated as decisions rather than inputs to human reasoning.
Why it matters: Delegated judgment erodes accountability and increases the chance that subtle errors go unnoticed until they hit clients or performance.
Use when: Diagnosing workflow failures involving AI, or designing guardrails and review expectations.
Watch-outs: This usually happens gradually when teams stop checking outputs because “it’s been right lately.”

Accountability Anchor

Governance
Definition: The specific point in a workflow where a human explicitly confirms responsibility for a decision or output influenced by AI.
Why it matters: AI can blur responsibility. Accountability anchors prevent “the model decided” from becoming an excuse.
Use when: Designing approval steps for client-facing outputs or high-impact decisions.

Pilot Inflation

Anti-Pattern
Definition: A small experiment that gradually expands into a larger operational change without deliberate review or approval.
Why it matters: Pilot inflation introduces risk before value is proven, and makes failures harder to unwind.
Use when: Monitoring experiments that begin adding steps, stakeholders, or broader rollout before clear results.
Watch-outs: “We’re still piloting” becomes cover for using the tool as if it’s production.

Tool-First Thinking

Anti-Pattern
Definition: Starting with an AI tool and searching for problems it might solve instead of starting with a workflow problem.
Why it matters: Tool-first thinking drives novelty projects with little operational value and poor adoption.
Use when: Evaluating proposals that lead with vendor/tool features rather than measurable workflow outcomes.

Agent / Agentic System

Architecture
Definition: An agent plans/acts in steps; an agentic system orchestrates multiple tools/agents toward a goal.
Why it matters: Agentic systems have more power, but also greater complexity and risk.
Use when: Scoping automation; start with simple agents and justify multi-agent designs with a clear need.

AI Orchestration

Architecture
Definition: Coordinating multiple AI models, tools, and data sources inside a single workflow so they work together toward an outcome.
Why it matters: Most production AI systems are not a single prompt—they combine retrieval, prompts, tools, and checks. Orchestration is where complexity (and risk) increases.
Use when: Discussing multi-step AI workflows (e.g., retrieve docs → draft → validate → route for approval).

Fine-tuning

Architecture
Definition: Training a base model further on task-specific examples.
Why it matters: Boosts consistency for narrow, high-volume work.
Use when: You have many labeled examples of the same task.