How to Use Claude Code Effectively
After six weeks of intense vibe coding with Claude Code, I've discovered patterns that separate frustrating sessions from magical ones. Here's everything I wish I'd known when starting.
The most shocking aspect of vibe coding isn't how intelligent the models are—it's the unprecedented iteration speed they enable. Claude Code itself exemplifies this: since I started using it in mid-June, we've witnessed game-changing features emerge at breakneck pace. Custom commands eliminate repetitive prompts, Hooks automate workflows, and Subagents break through context limitations. This update frequency would have been fantasy in traditional software development.
01. The Vibe Coding Velocity Revolution
The entire AI-assisted development field moves at dizzying speed. Building a complete product in hours, not months, has become reality. But this acceleration creates an interesting paradox: AI frees developers from boilerplate, yet when everyone drives a Ferrari, the race only gets more intense.
Your competitor might iterate through three versions while you're still polishing your first feature. The craftsman's approach of careful refinement gets left behind. Sometimes I miss the slow, thoughtful development days. But technology's wheel rolls forward—you either ride it or get crushed beneath it.
If you remember only one thing from this article: In the vibe coding era, don't let tools drive you to exhaustion. Efficiency has skyrocketed, but we're still human. We need not just faster development, but time to think and space to live.
02. Transitioning from Traditional Editor AI
Before diving into Claude Code, I was a power user of every major AI editor—Cursor, Windsurf, GitHub Copilot, VS Code plugins like Cline. But none delivered the paradigm shift that vibe coding with Claude Code brings.
The fundamental limitation of editor AI tools? They lack global vision. Picture the typical workflow: open a file, select some lines, ask AI to modify them. This interaction pattern naturally constrains thinking to the current file or even just selected lines. While comfortable for developers transitioning from traditional coding, this control-retention mindset actually limits vibe coding potential.
The killer issue is synchronization: AI thinks the file is in state A, but you've already modified it to state B. When AI continues based on its outdated understanding, chaos ensues. Sometimes fixing these sync issues takes longer than writing the code yourself.
Command-line tools operate on different principles. No fancy interface, no real-time hints, making it harder to micromanage. But this simplicity enables deeper project understanding. Starting from the root directory, it builds comprehensive codebase knowledge. Without the editor layer, direct code modification becomes difficult, "forcing" greater AI reliance—which paradoxically unleashes greater efficiency.
The difference ultimately stems from usage patterns and model quality. Claude Code, backed by Anthropic, delivers unmatched model quality. More crucially, its liberal token usage (despite recent weekly limits) creates a quantity-drives-quality transformation that puts results leagues ahead.
For true vibe coding experience today, Claude Code might be your only choice.
03. Understanding Claude Code's Boundaries and Strengths
Like all tools, Claude Code excels in some areas while struggling in others. Recognizing these boundaries makes your vibe coding journey smoother.
Ask Claude Code to analyze complex logic, understand module relationships, or create architecture diagrams? It performs brilliantly. These comprehension and synthesis tasks showcase LLM strengths. Need quick algorithm implementation, project scaffolding, or test writing? Claude Code delivers satisfying results.
But don't expect universal excellence. For global variable renaming or precise refactoring requiring exact matches, traditional IDE refactoring tools prove more reliable. LLMs are fundamentally probability generators—tasks demanding 100% accuracy aren't their forte. For such tasks, having AI write scripts to execute modifications often works better than direct file manipulation.
Training data bias presents another reality: Claude Code handles frontend and TypeScript like a fish in water, wielding frameworks effortlessly, crafting dazzling CSS, and knowing cutting-edge APIs intimately. Switch to iOS/Swift development? Expect outdated API usage, hallucinations of non-existent methods, and worse performance overall in niche languages. Training set richness directly determines domain performance.
Other command-line code agents exist—Crush, Gemini CLI, and more. But testing reveals significant gaps versus Claude Code. As an integrated hardware-software solution, Anthropic's dual role as model provider and tool developer enables deep optimization. Like Apple's ecosystem—controlling both hardware and software enables far more than disconnected components.
04. Plan First or Code First?
Claude Code's Plan Mode raises an interesting question: should we think everything through before coding, or dive in and iterate?
I've seen two extremes. "Planning perfectionists" spend hours in Plan Mode, discussing every detail with AI, burning through context windows planning architecture, implementation, error handling, and optimization. When coding finally begins, AI simply executes the plan step-by-step. "Code cowboys" jump straight in with "implement feature X," watching AI code furiously, fixing issues as they arise in endless cycles.
Which approach wins? It depends.
Experienced developers with clear project architecture benefit from thorough planning. For existing projects following specific patterns, Plan Mode ensures AI-generated code aligns with project standards. I often discuss in Plan Mode: "Our project uses MVVM architecture, how should we split this feature across layers?" This helps AI understand overall structure and generate higher-quality code.
But for unfamiliar tech stacks or exploratory projects, "just start coding" might work better. You often don't know what you don't know. Rather than imagining problems, let AI build a prototype, run it, identify issues, then iterate. This suits quick validation or proof-of-concept work.
My preference? I lean toward Plan Mode discussion before implementation. Since I mostly maintain existing codebases, I need stable, reliable iteration. Planning helps me maintain control. Even with new tech stacks, I prefer structured discussion over blind coding—development principles remain universal across stacks.
Plan Mode offers a hidden benefit: it clarifies thinking. What seems clear in your head often reveals gaps when articulated. The AI dialogue process becomes self-organization—a vibe coding era variant of "rubber duck debugging" that remains valuable.
05. Small Steps or Big Leaps?
Manual coding meant celebrating a few hundred lines per day. Vibe coding changes the game completely—generate thousands of lines in minutes, even complete entire projects. This "productivity explosion" raises a new question: how should we wield this power?
I've seen two camps: "incremental iterators" who complete small features step-by-step, and "all-in architects" who dump entire requirements for AI to implement completely. The extreme version enables --dangerously-skip-permissions mode, letting AI execute any operation without confirmation.
Having tried both extensively, my conclusion: when possible, small steps always win.
Here's an example. I once wanted to refactor a module touching seven or eight files. Thinking "AI is so capable, let's do it all at once!" I detailed the requirements and watched Claude Code output frantically. Minutes later, thousands of lines modified, compilation passed. Success!
Then reality hit during testing. First a small bug—with thousands of changes, I couldn't review them all, so I described the issue for AI to fix. The fix introduced new problems. More fixes, more problems. After several rounds, the codebase was unrecognizable. Lost control meant inability to distinguish necessary changes from AI's bug-fix band-aids. The only solution: git reset and start over.
This taught me that while AI generates code powerfully, its architectural grasp and long-term maintenance consideration remain limited. Generating too much code at once is like sprinting in darkness—you might run fast, but you might also hit a wall. When problems arise, debugging complexity grows exponentially.
Small-step iteration offers clear advantages:
- High control: Small changes make problems easy to locate and rollback
- Understanding: You follow AI's thinking and comprehend each step
- Quality assurance: Test after each step ensures code quality
- Learning opportunity: Observing AI's implementation teaches new techniques
I'm not saying "big leaps" never work. For well-planned new features, Claude Code can complete most work with minimal supervision. If you must attempt large-scale generation, consider these suggestions:
- Comprehensive testing: Use TDD—write tests first (AI-written, of course), then implement features
- Version control: Create new branches before starting, ready to rollback anytime
- Modular approach: Even for multiple features, organize by module, don't mix everything
- Cross-review: Feed generated code to another AI for improvement suggestions
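The version-control suggestion above is worth making concrete. A minimal sketch of the sandbox-and-rollback routine; the branch name, commit messages, and throwaway repository are all illustrative stand-ins for a real project:

```shell
#!/bin/sh
# Sketch: sandbox a large AI-generated change on a throwaway branch,
# so a failed "big leap" is one command away from a clean rollback.
set -e
cd "$(mktemp -d)"                 # stand-in for a real project repo
git init -q -b main .
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "baseline"

git checkout -q -b ai/big-refactor      # isolate the experiment
echo "thousands of generated lines" > feature.txt
git add feature.txt
git commit -q -m "AI: large-scale generation attempt"

# If testing leaves the codebase unrecognizable, discard it wholesale
# instead of layering fix upon fix:
git checkout -q main
git branch -q -D ai/big-refactor
git log --oneline                 # only the baseline commit remains
```

Compared with a git reset after the fact, creating the branch up front means you decide the rollback point before the AI starts typing, not after you've lost track of what changed.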
06. Task Scope and Context Constraints
Humans and AI share striking similarity: small tasks flow smoothly, large projects create chaos. For Claude Code, this problem intensifies due to the hard limit—200k context window. In an era of models offering 1M windows, this constraint proves genuinely painful.
Typically, after 15-20 minutes of normal use, context usage soars above 90%. Claude Code becomes an overstuffed suitcase, struggling to fit anything more. Worse, automatic compression during task execution can confuse the agent, causing it to forget its purpose or loop endlessly.
Managing complex tasks within limited context windows becomes essential for vibe coding mastery.
Task Decomposition Is Key
Rather than requesting "build me system X," first decompose large tasks into specific subtasks. Best done in Plan Mode with AI assistance:
Me: I want to implement user authentication, help me break down requirements
AI: Let's decompose the tasks:
1. Design database schema (users, sessions tables)
2. Implement registration (validation, encryption, storage)
3. Implement login (validation, token generation)
4. Implement middleware (token validation, refresh)
5. Add test cases
...
For tasks exceeding single session capacity, have AI document discussions (e.g., dev-notes/auth-implementation-plan.md). Even with new sessions, AI can read these documents to quickly restore context.
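Such a plan document can be as simple as a checklist. A minimal sketch, using the file path from the example above; the headings and items are one possible layout, not a required format:

```shell
#!/bin/sh
# Sketch: persist a session's plan so a fresh session can restore
# context by reading one file. Path matches the example in the text;
# the document structure below is illustrative.
set -e
cd "$(mktemp -d)"                 # stand-in for a real project repo
mkdir -p dev-notes
cat > dev-notes/auth-implementation-plan.md <<'EOF'
# Auth implementation plan

## Decisions made
- Sessions table over JWT-only: we need server-side revocation.
- bcrypt for password hashing.

## Done
- [x] Database schema (users, sessions)

## Next session
- [ ] Registration endpoint (validation, encryption, storage)
- [ ] Login endpoint (validation, token generation)
- [ ] Auth middleware (token validation, refresh)
EOF
```

At the start of a new session, a one-line prompt like "read dev-notes/auth-implementation-plan.md and continue from Next session" restores most of the lost context at a fraction of the token cost.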
Leverage Subagents
Claude Code's recent Subagent feature partially alleviates context limitations. Even before it, Task tool invocations ran in fresh contexts, effectively extending the main session window; now, with dedicated subagent configuration, stability improves dramatically. Create specialized agents for different tasks:
- Code analysis agent: Understanding existing structure
- Code review agent: Checking quality and issues
- Test agent: Writing and running tests
- Git agent: Handling commits and PRs
Chaining these agents properly enables large tasks to complete systematically within single sessions. Each agent works in independent context without interference or main session depletion.
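Subagents are defined as Markdown files with YAML frontmatter under .claude/agents/. A sketch of the code review agent from the list above; the frontmatter fields follow Claude Code's documented format, while the description and prompt body are my own illustrative wording:

```shell
#!/bin/sh
# Sketch: define a project-level code-review subagent.
# Claude Code picks up agents from .claude/agents/*.md; the
# name/description/tools frontmatter follows the documented format,
# the prompt body is illustrative.
set -e
cd "$(mktemp -d)"                 # stand-in for a real project repo
mkdir -p .claude/agents
cat > .claude/agents/code-reviewer.md <<'EOF'
---
name: code-reviewer
description: Reviews recent changes for bugs, style issues, and missing tests. Use after completing a feature.
tools: Read, Grep, Glob, Bash
---
You are a careful code reviewer. Inspect the most recent changes,
look for bugs, unclear naming, and missing tests, and report a
prioritized list of concrete improvements. Do not modify files.
EOF
```

Because each agent runs in its own context, a review like this consumes none of the main session's window, which is exactly what makes the chaining described above sustainable.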
Manual Compaction Timing
While Claude Code auto-compresses context, proactive management works better. When context usage approaches limits, manually execute /compact at natural breakpoints—after completing feature modules or test runs. This timing preserves important information better than mid-task automatic compression.
For independent tasks, starting fresh sessions often makes more sense than struggling in near-capacity sessions. With documented plans, new sessions quickly become productive.
In AI-assisted programming, context windows remain scarce resources requiring memory-like management. Proper planning, timely cleanup, and knowing when to "change rooms" keeps vibe coding smooth.
07. Mastering Commands and Ecosystem Tools
Commands and Hooks
My bold claim: any prompt repeated twice deserves a command!
Typing similar prompts repeatedly wastes time: "run tests and fix failures," "use conventional commit messages"... When you catch yourself repeating requests, stop immediately and spend one minute configuring a command.
Commands have one huge advantage over subagents: they run with the full current session context. For tasks highly related to ongoing work, commands prove more efficient. My frequently used commands:
- /test-and-fix: Run tests, auto-fix failures
- /review: Review current changes, suggest improvements
- /commit-smart: Analyze changes, generate appropriate commit messages
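Creating one of these takes about a minute: a custom slash command is just a Markdown prompt file under .claude/commands/, and the file name becomes the command name. A sketch of /test-and-fix; the prompt wording is mine, not a canonical version:

```shell
#!/bin/sh
# Sketch: a custom slash command is a plain Markdown prompt file.
# .claude/commands/test-and-fix.md becomes /test-and-fix in the
# session. The prompt text below is illustrative.
set -e
cd "$(mktemp -d)"                 # stand-in for a real project repo
mkdir -p .claude/commands
cat > .claude/commands/test-and-fix.md <<'EOF'
Run the project's test suite. For every failing test, read the
failure output, fix the underlying code (never weaken the test to
make it pass), and re-run until the suite is green. Finish with a
short summary of what you changed and why.
EOF
```

That one minute of setup replaces the same three-sentence prompt typed dozens of times a week.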
Regarding Hooks, I personally use them sparingly. While theoretically executing commands automatically on specific events (like testing before commits), I prefer maintaining control over background automation. Pure personal preference—if your workflow is fixed, Hooks save significant effort.
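If your workflow is fixed enough to benefit, a hook is a small entry in settings.json. A sketch that runs a formatter after every file edit; the event/matcher shape follows Claude Code's documented hooks schema as I understand it, but verify it against your version, and the formatter command is an illustrative placeholder:

```shell
#!/bin/sh
# Sketch: a PostToolUse hook that formats files after every edit.
# Hooks live in .claude/settings.json; the schema below should be
# checked against your Claude Code version, and "npx prettier" is
# a placeholder for your project's formatter.
set -e
cd "$(mktemp -d)"                 # stand-in for a real project repo
mkdir -p .claude
cat > .claude/settings.json <<'EOF'
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npx prettier --write ." }
        ]
      }
    ]
  }
}
EOF
python3 -m json.tool .claude/settings.json > /dev/null  # sanity check
```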
MCP Integration
MCP supplements model knowledge gaps. My most common use cases:
Latest Apple Documentation: Apple's JavaScript-heavy docs defeat Claude Code's WebFetch, but apple-docs-mcp provides latest, accurate API documentation—a lifesaver for iOS development.
Project Management Integration: Connect JIRA through mcp-atlassian, enabling Claude Code to read/update task status directly, maintaining smooth communication.
LSP Support: While Claude Code lacks native LSP support, mcp-language-server provides accurate completion and type information—invaluable for unfamiliar languages.
MCP configuration requires some time but proves worthwhile, transforming Claude Code from generic tool to personalized assistant.
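For project-scoped servers, the configuration can live in a .mcp.json at the repository root so the whole team shares it. A sketch wiring up the Atlassian server mentioned above; the mcpServers shape follows Claude Code's documented format, while the launch command, package name spelling, and environment variable are illustrative assumptions to check against the server's own README:

```shell
#!/bin/sh
# Sketch: declare a project-scoped MCP server in .mcp.json.
# The "mcpServers" key is Claude Code's documented shape; the npx
# invocation and JIRA_URL variable are illustrative placeholders.
set -e
cd "$(mktemp -d)"                 # stand-in for a real project repo
cat > .mcp.json <<'EOF'
{
  "mcpServers": {
    "atlassian": {
      "command": "npx",
      "args": ["-y", "mcp-atlassian"],
      "env": { "JIRA_URL": "https://example.atlassian.net" }
    }
  }
}
EOF
python3 -m json.tool .mcp.json > /dev/null  # sanity check
```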
Compile, Analyze, and Test
Remember always: untested AI-generated code is garbage.
My workflow typically follows:
- List compile, test, and linter commands in CLAUDE.md
- Compile immediately after completing small features
- Run relevant tests after successful compilation
- Run linter and formatter after passing tests
Sounds tedious? Proper configuration makes these simple commands or subagent tasks. The key: make these steps habitual, not afterthoughts.
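A CLAUDE.md that supports this loop only needs the exact commands spelled out. A minimal sketch for a hypothetical TypeScript project; the npm scripts are placeholders for whatever your project actually uses:

```shell
#!/bin/sh
# Sketch: a CLAUDE.md listing build/test/lint commands so the agent
# can run the verification loop itself. The npm commands are
# placeholders for a hypothetical TypeScript project.
set -e
cd "$(mktemp -d)"                 # stand-in for a real project repo
cat > CLAUDE.md <<'EOF'
# Project commands

- Build:  npm run build
- Test:   npm test
- Lint:   npm run lint
- Format: npm run format

After any code change: build, then run the affected tests, then lint.
Never consider a task done while any of these fail.
EOF
```

With this in place, "compile, test, lint" stops being something you nag the agent about and becomes part of its default loop.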
Beyond Code Generation
Don't limit Claude Code to just writing code—its capabilities extend far beyond:
- Commits and PRs: Analyze changes, generate commit messages, push code, create PRs with clearer descriptions than I write myself
- Technical documentation: Generate API docs, update READMEs, write examples—more complete and error-free
- Project management: Update tickets, add comments, create subtasks without clicking through web interfaces
- Data processing: Batch file processing, format conversion, data cleaning—no more maintaining disposable scripts
More interesting: Claude Code enables work from anywhere. Using VibeTunnel or mobile SSH clients with Tailscale, I connect to home machines from anywhere, directing Claude Code via phone. While unsuitable for complex planning, it handles simple tasks perfectly.
Finally, invest in a good microphone. In the vibe coding era, voice input for requirements feels more natural than typing. Modern speech recognition handles mixed languages well. My old streaming microphone finally found its true calling.
08. Performance Variations and Constraints
Some observations come from personal experience, others from community complaints. Many things can't be proven, so take them as anecdotal.
Opus Vastly Outperforms Sonnet
This fact is undeniable: Opus performs much better than Sonnet. The 5x price difference shows: on the $100 Max subscription, a 5-hour Opus window depletes quickly even on small tasks, and even the $200 tier barely suffices.
For $100 tier users, develop manual model-switching habits. Use Sonnet for simple tasks, save Opus for complex architecture or tricky bugs.
Timing Mysteries
This sounds absurd but feels real: US nighttime (Beijing daytime) performs better than US daytime. Since software development concentrates in US and China, and Anthropic lacks official China access, perhaps fewer US night users mean less server pressure and maintained model performance? If morning problems prove unsolvable in Beijing, afternoon attempts might surprise you.
Intelligence Degradation Concerns
Most worrying: last month's experience noticeably exceeded recent weeks'. I initially dismissed it as an illusion, but community complaints have crescendoed. Reasonable speculation: resource strain from the developer influx. Like a 100-person buffet suddenly serving 1000—quality degradation seems inevitable. Combined with Anthropic's funding news and the new weekly limits, profitability at current pricing seems impossible.
Coping Strategies
Facing these limitations, we must adopt conservation techniques:
- Tiered usage: Sonnet for simple tasks, Opus for complex ones
- Off-peak timing: Avoid US working hours, choose low-load periods
- Prompt quality: Clear first-time communication reduces token-consuming back-and-forth
- Strategic subagents: Assign high-consumption tasks to subagents
- Multiple options: While Claude Code currently leads, keep watching alternatives
Master Your Vibe Coding Journey
The vibe coding revolution has arrived. Those who master these patterns and techniques will thrive in this new era. Remember: it's not about using every feature or maximizing every moment—it's about finding the rhythm that amplifies your capabilities while preserving your humanity.
Start small, experiment constantly, and let vibe coding transform how you build. The future of development isn't just faster—it's fundamentally different. Welcome to the revolution.
Ready for more? Next article: "Advanced Vibe Coding Patterns for Production Systems"