Claude Academy
Beginner · 12 min

What Is Claude and Who Built It

Learning Objectives

  • Understand what Claude is and who created it
  • Learn about Anthropic's Constitutional AI approach
  • Know what sets Claude apart from other AI models

Meet Claude

Claude is a family of large language models (LLMs) built by Anthropic, an AI safety company founded in 2021 by former members of OpenAI. Claude is designed to be helpful, harmless, and honest -- three properties that guide every aspect of how the model is trained and how it behaves.

Unlike many AI assistants you might have used, Claude is built from the ground up with safety as a core design principle, not an afterthought. This doesn't make Claude less capable -- it makes Claude more reliable and more trustworthy as a tool you can depend on for real work.

Who Is Anthropic?

Anthropic was founded by Dario Amodei (CEO) and Daniela Amodei (President), along with several other researchers who were deeply involved in the development of earlier large language models. The company's mission is the responsible development and maintenance of advanced AI systems.

What sets Anthropic apart from other AI labs is its focus on AI safety research as a core part of the company, not just a side project. Anthropic publishes research on topics like:

  • How to make AI systems more interpretable
  • How to align AI behavior with human values
  • How to detect and prevent harmful outputs before they happen

Constitutional AI: Claude's Secret Sauce

The key innovation behind Claude is a training approach called Constitutional AI (CAI). Here's the core idea:

Instead of relying solely on human feedback to teach the model right from wrong, Anthropic gives Claude a set of principles -- a "constitution" -- and trains the model to critique and revise its own outputs based on those principles.

This means Claude can:

  • Self-correct when it notices it might be producing harmful content
  • Explain its reasoning about why certain requests might be problematic
  • Stay helpful even when declining to do something it shouldn't
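The critique-and-revise loop described above can be sketched as a toy program. To keep the control flow visible, simple keyword rules stand in for the model's judgment here; in the real training process, a language model performs both the critique and the revision, and the principles are far richer than these hypothetical examples.

```python
# Toy illustration of the Constitutional AI critique-and-revise loop.
# Each entry pairs a principle with a check (does the draft violate it?)
# and a revision (how to fix it). These rules are hypothetical stand-ins.
CONSTITUTION = [
    ("Avoid insults",
     lambda r: "idiot" in r.lower(),
     lambda r: r.replace("idiot", "person")),
    ("Acknowledge uncertainty",
     lambda r: "definitely" in r.lower(),
     lambda r: r.replace("definitely", "probably")),
]

def critique_and_revise(response: str) -> str:
    """Run the draft past every principle, revising whenever a critique fires."""
    for principle, violates, revise in CONSTITUTION:
        if violates(response):
            response = revise(response)
    return response

draft = "That is definitely wrong, you idiot."
print(critique_and_revise(draft))  # -> "That is probably wrong, you person."
```

The key idea the sketch captures is that feedback comes from applying written principles to the model's own output, rather than solely from human labelers rating each response.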

What Makes Claude Different?

There are several large language models available today. Here's what makes Claude stand out:

  • Long context windows: Claude can process very large amounts of text at once (up to 200K tokens), making it excellent for analyzing long documents, codebases, and conversations.
  • Strong at following instructions: Claude is particularly good at following detailed, structured instructions -- which is exactly what makes it powerful for Claude Code.
  • Honest about uncertainty: When Claude doesn't know something, it says so rather than making something up. This is crucial when you're using it for coding and technical work.
  • Safety-first design: Constitutional AI means Claude is less likely to produce harmful or misleading outputs compared to models trained with less rigorous safety approaches.

Claude's Model Family

Anthropic offers several versions of Claude, optimized for different use cases:

  • Claude Opus: The most capable model, best for complex reasoning and analysis
  • Claude Sonnet: The balanced option, great for most everyday tasks
  • Claude Haiku: The fastest and most efficient, ideal for quick tasks and high-volume work
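The trade-off in the list above can be summarized as a small decision helper. This is a hypothetical sketch, not an official API: it returns only the family name, since exact model identifier strings vary by release.

```python
# Hypothetical helper: pick a Claude model tier based on task profile.
# Tier names come from the model family list; choose capability first,
# then speed/cost, then fall back to the balanced default.
def pick_tier(complex_reasoning: bool, high_volume: bool) -> str:
    if complex_reasoning:
        return "opus"    # most capable: complex reasoning and analysis
    if high_volume:
        return "haiku"   # fastest and most efficient: quick, high-volume work
    return "sonnet"      # balanced default for most everyday tasks

print(pick_tier(complex_reasoning=False, high_volume=True))  # -> "haiku"
```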

Throughout this course, you'll primarily work with Claude through Claude Code, Anthropic's official command-line interface. Claude Code gives you direct access to Claude's capabilities right from your terminal, with powerful tools for reading files, writing code, running commands, and much more.

Key Takeaways

  • Claude is built by Anthropic, an AI safety company
  • Constitutional AI is the core training approach that makes Claude safe and capable
  • Claude excels at following detailed instructions, processing long inputs, and being honest about its limitations
  • You'll use Claude through Claude Code for the rest of this course