For years, "AI in my editor" meant autocomplete. Copilot would suggest the next few tokens, I'd hit tab, and that was the interaction. Useful, but shallow. The AI stayed in one file, ignored my tests, and had no way to know when its suggestion broke the build.

Claude Code works at a different level. It runs in my terminal, reads my files, runs my commands, and talks back in plain language. When I ask it to fix a failing test, it runs the test, reads the failure, finds the bug, edits the file, and runs the test again to confirm. The result feels less like autocomplete and more like delegating a small task to a teammate.

## What Claude Code actually is

Claude Code is Anthropic's agentic coding tool. "Agentic" is the important word. It means the AI can take actions on my behalf beyond generating text. It reads files, writes files, runs shell commands, searches the codebase, fetches web pages, and edits code, all by deciding which tool to use at each step.

It ships in a few forms:

* A command-line interface (CLI) that runs in any terminal.
* A desktop app for Mac and Windows.
* A web app at [claude.ai/code](https://claude.ai/code).
* Extensions for the VS Code and JetBrains integrated development environments (IDEs).

The CLI is the original and still the most powerful form. When I type `claude` in my terminal, I get an interactive session with full access to my current directory. I can ask it to do anything from "explain this file" to "rewrite this module using the new API and update all the tests."

Under the hood, [Claude][claude-models], Anthropic's large language model (LLM), does the thinking, while Claude Code serves as the harness that gives the LLM hands.

[claude-models]: https://claude.com/product/overview

## The mental model: an AI with tools

The concept that makes Claude Code click is the tool-use loop. Instead of generating a block of text and stopping, the AI generates a plan, picks a tool, runs it, reads the result, and decides what to do next.
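That loop can be sketched in a few lines of Python. This is a toy, not Claude Code's actual implementation: the tools are stubbed against an in-memory "filesystem" and the model's reasoning is a hard-coded string replacement, but the observe, act, check cycle is the same shape.

```python
# Conceptual sketch of an agentic tool-use loop. NOT Claude Code's real
# implementation; the tool names and the fix logic are illustrative stubs.

def read_file(path, files):
    """Tool: return a file's contents from a toy in-memory 'filesystem'."""
    return files.get(path, "<not found>")

def edit_file(path, new_text, files):
    """Tool: overwrite a file in the toy filesystem."""
    files[path] = new_text
    return "ok"

def run_tests(files):
    """Tool: a stand-in test runner that fails while the bug is present."""
    return "FAIL" if "bug" in files["app.py"] else "PASS"

def agent_loop(files, max_steps=10):
    """Observe the world, pick a tool, act, and repeat until done."""
    transcript = []
    for _ in range(max_steps):
        result = run_tests(files)            # observe: run the tests
        transcript.append(f"tests: {result}")
        if result == "PASS":                 # decide: is the task complete?
            transcript.append("done")
            break
        source = read_file("app.py", files)  # act: read the failing code
        fixed = source.replace("bug", "fix") # the 'model' proposes an edit
        edit_file("app.py", fixed, files)    # apply it, then loop again
    return transcript

files = {"app.py": "def handler(): return 'bug'"}
print(agent_loop(files))
# → ['tests: FAIL', 'tests: PASS', 'done']
```

The real system replaces `source.replace(...)` with a language model deciding which tool to call next, but the control flow, run, read the result, decide, act again, is the part worth internalizing.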
```mermaid
graph TB
    A[User request] --> B[Claude reasons about the task]
    B --> C[Claude picks a tool]
    C --> D[Tool executes: read, edit, bash, search]
    D --> E[Claude reads the result]
    E --> F{Task complete?}
    F -->|No| B
    F -->|Yes| G[Claude reports to user]
    style A fill:#e1f5fe
    style B fill:#f3e5f5
    style C fill:#e8f5e8
    style D fill:#fff3e0
    style E fill:#f3e5f5
    style F fill:#fce4ec
    style G fill:#e1f5fe
```

This loop is the whole trick. On each pass through the loop, Claude looks at what it knows so far and picks the next useful action. If I ask it to "find the function that handles auth and add rate limiting," it will grep for auth-related terms, read the matching files, identify the right function, propose the edit, and make the change. I watch each step happen and can stop it at any point.

The tools themselves are simple: read a file, edit a file, run a bash command, search the codebase, browse the web. What makes the system feel smart is the model's judgment about which tool to use, when to use it, and what to do with the result.

## Why Claude Code exists

Autocomplete has a ceiling. It helps inside a single file, one suggestion at a time. Real software work spans many files, requires reading documentation, running commands, debugging failures, and making judgment calls about design. Autocomplete stays at the keystroke level.

Chat-based coding assistants partially solved this. I could paste code into a chat window and get back a refactor. But the context switch was expensive. I'd copy code out, paste the response back, run it, find a bug, copy the error, paste it in, and repeat. The friction added up.

Claude Code collapses that loop. The AI lives where my code lives, reads the same files I read, runs the same commands I run, and sees the same errors I see. Copy-paste disappears from the workflow. Claude Code fills the gap between "AI as a chat companion" and "AI as a teammate sitting at my terminal."
## How Claude Code differs from its cousins

The AI coding field is crowded. Knowing what's what helps me pick the right tool for the job.

**GitHub Copilot**
In-editor autocomplete. Suggests the next few lines as you type. Fast but shallow. No multi-file reasoning, no command execution.

--card--

**Cursor**
A fork of VS Code with AI baked into the editor. Strong on in-editor chat and edits. The AI lives inside the editor UI.

--card--

**Aider**
An open-source CLI coding assistant, conceptually similar to Claude Code. Works with any LLM you configure. Less polished, more hackable.

--card--

**Claude Code**
Anthropic's agentic CLI. Runs in your terminal, uses Claude models, has first-party tools for files, bash, search, and web.

The common thread is tool use. Every serious coding AI now has some version of the "AI with tools" loop. What differs is the surface: some live in the editor, some in the terminal, some in both.

Claude Code bets on the terminal. That choice matters. The terminal is where I already run my builds, my tests, my git commands. Putting the AI there means it speaks the language of my existing workflow instead of pulling me into a new one.

It also pairs well with tmux. Running Claude Code inside persistent tmux sessions makes it easy to keep long-running work alive, split multiple tasks into panes, and switch contexts without losing momentum. If tmux is new to you, start with [What Is Tmux?](https://jeffbailey.us/blog/2025/11/23/what-is-tmux/).

## The extensibility model

The core loop is just the starting point. Claude Code has several mechanisms for extending what it can do.

**Slash commands** are shortcuts I can invoke in a session. Type `/init` and it sets up a CLAUDE.md file documenting the codebase. Type `/review` and it reviews the current pull request. I can define my own slash commands for project-specific workflows.

**Skills** are reusable capabilities with their own files and instructions.
A skill might teach Claude how to write in my voice, how to review articles against a specific rubric, or how to generate architecture diagrams in my preferred style. Skills stay dormant until a relevant request activates them.

**Hooks** are shell commands that run automatically at specific points, like before a tool call or after a session ends. Hooks let me enforce policies (block certain commands, log all file edits) or automate follow-up tasks (run the formatter after every edit).

**Model Context Protocol (MCP) servers** extend Claude Code to external systems. MCP is an open protocol that connects large language models to tools and data sources. With an MCP server, Claude Code can talk to my database, my ticketing system, my monitoring dashboard, or anything else I have a server for.

**Subagents** are specialized assistants I can spawn to handle specific kinds of work. A subagent runs with its own context and returns a summary. Useful for delegating research or isolated tasks so my main session stays focused.

Each of these is optional. You can skip all of them and still get most of Claude Code's value. Layering them in turns the tool from a chatbot into a platform.

## Trade-offs and limitations

Every tool has trade-offs. Claude Code has its share.

* **It costs money.** Each request to the model costs API tokens. A single task can run hundreds of tool calls, and tokens add up. Heavy users pay real money every month. That is the honest price of having a capable AI in your loop.
* **Context has limits.** Even with Opus 4.7's 1M token context, large codebases overflow. Claude Code works around this by reading files selectively, but it holds only part of your monorepo in its head at once. You still need to guide it toward the relevant code.
* **It can be wrong, confidently.** The model sometimes proposes changes that look right but are subtly broken. This is a property of large language models. Review every diff before you accept it.
* **It can take destructive actions.** The tool can run `rm -rf`, force-push, or drop database tables. Permission modes and confirmation prompts mitigate this, but the risk is real. Treat the AI like a fast, well-meaning junior who needs guardrails.
* **It needs a human in the loop.** Even in auto mode, Claude Code needs someone to set the goal and review the outcome. It works as an amplifier for a developer who already knows what they want, rather than a replacement for judgment.

## Common misconceptions

**"It writes code for me, so I can skip understanding it."** False. The code it writes is my code. If I ship a bug, the bug is mine. The AI accelerates the work; understanding remains my job. I read every diff.

**"It replaces developers."** The tool makes individual developers more productive. So far, the evidence is that good developers get better faster than weak developers. The skills that matter (system design, judgment, communication) stay human.

**"It's just a fancy autocomplete."** Autocomplete suggests tokens. Claude Code runs my tests, reads my docs, and edits my files in a loop until the task is done. The mental model is closer to "delegate a task" than "type faster."

**"The CLI is just for terminal nerds."** The CLI is where the power lives, but the desktop and web apps give the same experience with a friendlier user interface. Pick the surface that fits your workflow. The engine underneath is the same.

## The core idea

Claude Code is what happens when you give a capable language model real hands: text generation, file access, command execution, and the judgment to loop through actions until a task is done. Everything else (the CLI, the IDE plugins, the hooks, the MCP servers) is surface detail on top of that core loop.

The mental model that carried me from "curious about AI coding" to "relying on it daily" was this: Claude Code is a teammate who sits at my terminal. I describe the goal, it does the work, and I review and correct.
Over a session, it learns the shape of my project through the files I point it at. The interaction is conversational, the output is concrete, and the loop closes in minutes instead of days.

## Next steps

* Install Claude Code from the [official docs][claude-code-docs] and run `claude` in a real project.
* Try asking it to fix a failing test or add a small feature. Watch each tool call happen.
* Create a CLAUDE.md file with `/init` so Claude has project-specific context on every session.
* Explore [MCP servers][mcp] to connect Claude Code to external systems like databases and ticketing tools.
* If you've set up Claude Code, read my post on [how I use tmux][tmux] to run multiple Claude sessions in parallel panes.

[claude-code-docs]: https://code.claude.com/docs
[mcp]: https://modelcontextprotocol.io/docs/getting-started/intro
[tmux]: https://jeffbailey.us/how-do-i-use-tmux/

## References

* [Claude Code documentation][claude-code-docs], the official guide from Anthropic covering installation, commands, and configuration.
* [Claude models overview][claude-models], for background on the Opus, Sonnet, and Haiku model family that powers Claude Code.
* [Model Context Protocol][mcp], the open protocol for connecting AI tools to external data sources and systems.