
Prompt Engineering Glossary: Essential Terms Explained

Master prompt engineering terminology. Clear definitions for the terms you'll encounter most often as an AI engineer.

Definition

A comprehensive reference of terms and concepts used in prompt engineering and LLM application development.

Core Concepts

- **Prompt**: The input text sent to an LLM.
- **System prompt**: Instructions that set the model's behavior for the whole exchange.
- **User prompt**: The actual query or request from the end user.
- **Completion**: The model's generated response.
- **Context window**: The maximum number of tokens the model can process in a single request.
- **Temperature**: A sampling parameter that controls randomness in the model's outputs.
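To see how these terms fit together, here is a minimal sketch of a chat-completion request using the OpenAI Python SDK as one example of an LLM API; the model name is a placeholder, and any chat-style client has an equivalent shape.

```python
# Sketch of how the core terms map onto a typical chat-completion call.
# Assumes the OpenAI Python SDK (`pip install openai`) and an API key in
# the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",   # placeholder model name
    temperature=0.2,       # low temperature -> more deterministic output
    messages=[
        # System prompt: sets the model's behavior.
        {"role": "system", "content": "You are a concise technical assistant."},
        # User prompt: the actual query or request.
        {"role": "user", "content": "Explain what a context window is in one sentence."},
    ],
)

# Completion: the model's generated response.
print(response.choices[0].message.content)
```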

Techniques

- **Zero-shot**: Prompting without any examples.
- **Few-shot**: Providing a handful of worked examples in the prompt.
- **Chain-of-thought**: Asking the model to reason step by step before answering.
- **Role prompting**: Assigning a persona or role to the model.
- **Prompt chaining**: Feeding the output of one prompt into the next in a multi-step workflow.
- **RAG (Retrieval-Augmented Generation)**: Retrieving relevant documents and adding them to the prompt so the model can ground its answer in them.
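The sketch below contrasts zero-shot, few-shot, and chain-of-thought prompting as message payloads. The sentiment task and the labeled examples are made up for illustration, and the messages follow the common chat role/content schema.

```python
# Illustrative message payloads for three prompting techniques.

# Zero-shot: the task with no examples.
zero_shot = [
    {"role": "user", "content": "Classify the sentiment of: 'The update broke my workflow.'"},
]

# Few-shot: a handful of labeled examples precede the real input.
few_shot = [
    {"role": "user", "content": "Classify sentiment: 'Love the new dashboard.'"},
    {"role": "assistant", "content": "positive"},
    {"role": "user", "content": "Classify sentiment: 'Support never replied.'"},
    {"role": "assistant", "content": "negative"},
    {"role": "user", "content": "Classify sentiment: 'The update broke my workflow.'"},
]

# Chain-of-thought: explicitly request step-by-step reasoning before the answer.
chain_of_thought = [
    {"role": "user", "content": (
        "Classify the sentiment of: 'The update broke my workflow.'\n"
        "Think through the relevant cues step by step, then give a one-word label."
    )},
]
```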

Evaluation Terms

- **Ground truth**: The known-correct output an answer is compared against.
- **Baseline**: A reference level of performance to compare new prompt versions against.
- **Regression**: A drop in quality after a change.
- **Hallucination**: Output that is fabricated or factually incorrect but presented confidently.
- **Eval set**: A dataset of test cases used to measure prompt quality.
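A toy harness like the one below shows how an eval set, ground truth, a baseline, and regression detection relate. The data and the `run_prompt` stub are hypothetical stand-ins for a real prompt under test.

```python
# Toy eval harness: score a prompt against ground truth and compare to a baseline.

EVAL_SET = [
    {"input": "2 + 2", "ground_truth": "4"},
    {"input": "capital of France", "ground_truth": "Paris"},
]

BASELINE_ACCURACY = 0.90  # reference score from the previous prompt version


def run_prompt(text: str) -> str:
    """Stand-in for a real LLM call; replace with your client of choice."""
    return {"2 + 2": "4", "capital of France": "Paris"}.get(text, "")


def evaluate() -> float:
    correct = sum(
        run_prompt(case["input"]).strip() == case["ground_truth"]
        for case in EVAL_SET
    )
    return correct / len(EVAL_SET)


accuracy = evaluate()
print(f"accuracy={accuracy:.2f}")
if accuracy < BASELINE_ACCURACY:
    print("Regression: quality dropped below the baseline.")
```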

Advanced Concepts

- **Fine-tuning**: Further training a model on custom data to adapt its behavior.
- **Embeddings**: Vector representations of text, used to measure semantic similarity.
- **Tool use**: Letting the LLM call external functions or APIs while producing a response.
- **Agents**: Autonomous systems that use an LLM to plan and execute multi-step tasks.
- **Guardrails**: Safety checks and constraints applied to model outputs.
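As a concrete illustration of tool use, the sketch below defines a function schema the model could be offered and shows how an application might dispatch a tool call. The `get_weather` function, the schema, and the tool-call payload are invented for the example; real SDKs differ in the exact wire format.

```python
# Sketch of tool use: the model is shown a function schema it may "call";
# the application executes the function and returns the result to the model.
import json

# Hypothetical tool definition in the widely used JSON-schema style.
get_weather_tool = {
    "name": "get_weather",
    "description": "Look up the current temperature for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. 'Berlin'"},
        },
        "required": ["city"],
    },
}


def get_weather(city: str) -> str:
    """Stand-in implementation; a real app would call a weather API."""
    return json.dumps({"city": city, "temp_c": 21})


# Hypothetical tool call emitted by the model, dispatched by the application.
tool_call = {"name": "get_weather", "arguments": {"city": "Berlin"}}
if tool_call["name"] == "get_weather":
    result = get_weather(**tool_call["arguments"])
    # The result is sent back to the model so it can compose its final answer.
    print(result)
```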

Put This Knowledge Into Practice

Use PromptLens to implement professional prompt testing in your workflow.
