Claude Code: 13 Tips and Tricks

Boris Cherny is a founding engineer of Claude Code. He recently shared his 13 core methods for using Claude Code on X. These aren't theories—they're workflows he actually uses every day.

Let's break them down one by one and see how a true expert uses AI tools.

Method 1: Run 5 Claude Instances in Terminal Simultaneously

How to do it:

Boris opens 5 Claude windows in his terminal at the same time. He numbers the tabs 1 through 5, with each window handling different tasks. For example:

  • Window 1: Writing code for a new feature
  • Window 2: Running tests to find bugs
  • Window 3: Checking API documentation
  • Window 4: Code refactoring
  • Window 5: Handling user feedback

Key technique:

He enables system notifications. When a Claude instance needs his input, the system pops up an alert. This way he doesn't have to stare at one window waiting—he can flexibly switch between different tasks.
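One way to reproduce this setup is a tmux session with five numbered windows. This is a minimal sketch, assuming tmux is installed and the claude CLI is on your PATH; the session name is just illustrative:

# start a detached session whose first window is named "1" and runs claude
tmux new-session -d -s claude -n 1 claude

# add four more numbered windows, each with its own Claude instance
for i in 2 3 4 5; do
  tmux new-window -t claude -n "$i" claude
done

# attach; jump between instances with prefix+1 through prefix+5
tmux attach -t claude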

Why do this:

A Claude Max subscription costs $200 per month. If you open only one window and let the AI finish each task before starting the next, it's like paying for 5 assistants but making them work one at a time in a queue. Opening 5 windows simultaneously means all 5 assistants work in parallel, getting up to 5x as much done.

Many people ask: How can I manage switching between so many windows? Boris's answer: Most of the time you don't need to switch. You only need to handle things when a Claude gets stuck or needs confirmation—just respond to the notification. The rest of the time, they work automatically in the background.

Method 2: Open 5 to 10 More Claude Tasks in the Web Version

How to do it:

Boris isn't satisfied with just 5 windows in the terminal. He also opens claude.ai/code in his browser and starts 5 to 10 more Claude sessions.

He switches between the terminal and the web version in a few ways:

  • Use the & symbol to hand off terminal tasks to the web version
  • Manually start new sessions in Chrome
  • Use the --teleport command to switch back and forth

Even more impressive, he starts additional tasks from the Claude iOS app on his phone, in the morning and throughout the day, then checks the results later.

Why do this:

Including the 5 in terminal, Boris often runs about 15 Claude instances simultaneously. This sounds crazy, but think about it—if you have 15 things to do, why not have 15 AIs do them at the same time?

The key is distinguishing which tasks need your real-time attention and which can run on their own. Things like generating test data, organizing documentation, or doing preliminary code reviews can be handed to the web or mobile version to run in the background—you just need to check the results at the end.

Method 3: Always Use Opus 4.5 + Thinking Mode

How to do it:

Boris uses the Opus 4.5 model for all tasks and enables the thinking feature.
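If you want to pin that choice at launch rather than switch models mid-session, the CLI takes a model flag. A one-line sketch; treat the opus alias as an assumption and check claude --help for the exact values your version accepts:

# launch with Opus for everything in this session
claude --model opus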

Many people might ask: Opus is more expensive and slower than Sonnet—why not use the cheaper one?

Boris's reasoning:

Although Opus takes longer per response, it's smarter, so you need fewer back-and-forth exchanges.

Specifically:

  • With Sonnet: You might need 3 prompts to get a satisfactory result, waiting 30 seconds each time, totaling 90 seconds
  • With Opus: It gets it right the first time. You wait 60 seconds, but the total is just 60 seconds

Plus, Opus is more accurate when using tools (like reading/writing files, running commands, calling APIs) and makes fewer mistakes, meaning you don't need to frequently correct its errors.

When this choice matters most:

Opus's advantage is most obvious when your task is complex and requires multiple steps. For example, if you want the AI to refactor a module—reading multiple files, understanding code logic, making changes, running tests. In such cases, Sonnet might get stuck or make mistakes at some step, and you'd have to start over. Opus can often complete the entire workflow in one go.

Method 4: Team-Shared CLAUDE.md File

How to do it:

Boris's team maintains a CLAUDE.md file that's committed to the Git repository, visible and editable by everyone.

This file specifically records two types of information:

  • Things Claude did wrong, and the correct approach
  • Team-specific conventions and preferences

For example, the file might contain:

Do not use console.log in production code. Use our logger utility instead.
Commit messages must follow Conventional Commits format.
All API errors must return a unified error format {code, message, details}.

Workflow:

Whenever a team member discovers Claude did something wrong, they add a rule to CLAUDE.md. The entire team contributes to this file multiple times per week.

Over time, this file becomes the team's "AI training manual." Every new Claude session reads this file and automatically learns how this team works.

Why do this:

Traditional team knowledge management is hard: you write documentation, but few people read it; you hold training sessions, but new hires forget in a couple of days.

But CLAUDE.md is different because Claude reads it at the start of every session. Rules you write in it get applied consistently, session after session. It's like building team standards directly into the AI assistant.

And because this file is continuously updated, it grows with the team. Six months later, it might contain hundreds of lessons learned. When new people join, Claude can work according to veteran team standards.

Method 5: Use @claude During Code Review to Update Standards

How to do it:

Boris's team uses Claude Code GitHub action. During code reviews, if someone finds an issue worth documenting, they comment @claude on the PR to automatically add that rule to CLAUDE.md.

For example, during review you notice:

This API call has no timeout. If the server hangs, our app will wait forever.

You can comment:

@claude Please add to CLAUDE.md: All external API calls must have a reasonable timeout.

Claude will automatically write this rule into CLAUDE.md and submit it as part of the PR.
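The wiring for this is a GitHub Actions workflow that hands @claude mentions to the Claude Code action. A rough sketch; the action's inputs and version tag here are assumptions, so check the anthropics/claude-code-action README for the exact interface:

name: claude
on:
  issue_comment:
    types: [created]

jobs:
  claude:
    # only react to comments that mention @claude
    if: contains(github.event.comment.body, '@claude')
    runs-on: ubuntu-latest
    steps:
      - uses: anthropics/claude-code-action@v1   # version tag is an assumption
        with:
          anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }}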

Why do this:

This mechanism solves an old problem: valuable insights from code reviews are often scattered across PR comments and become unfindable after a few months.

Now these insights are automatically captured in CLAUDE.md as executable rules. Next time Claude writes similar code, it will automatically remember to add a timeout.

It's like turning code review into a continuous "AI training process." Each review makes the AI better understand your team's requirements.

Method 6: Start Tasks in Plan Mode

How to do it:

Boris says most of his sessions start in plan mode.

To enter plan mode: Press Shift+Tab twice.

Once in plan mode, your workflow becomes:

  1. You tell Claude what you want to do
  2. Claude provides a plan, listing how it intends to proceed
  3. You review the plan, give feedback, request adjustments
  4. Go back and forth until you're satisfied with the plan
  5. Switch to auto-accept mode and let Claude execute

Why do this:

Many people use AI like this:

Please help me implement user login functionality.

Then Claude starts writing code. After a while, you look at it and find the technical approach isn't what you had in mind, or it misunderstood the requirements. Then you either have it rewrite everything or fix it yourself—both waste time.

Boris's approach:

I want to implement user login functionality, supporting both email and phone number.

Claude responds:

Okay, I plan to do this:
1. Add /api/login endpoint on backend
2. Support email and phone fields
3. Validate format then query database
4. Return JWT token
5. Store token in localStorage on frontend

Does this approach work for you?

You review and say:

Phone number login needs to verify a code first, not query the database directly. Also, I suggest using httpOnly cookie instead of localStorage for the token—it's more secure.

Claude adjusts the plan:

Understood, new plan:
1. Add /api/send-code endpoint to send verification code
2. For phone login, verify the code first
3. Use httpOnly cookie to store token
...

Does this work?

After you confirm, Claude starts writing code. At this point it knows all the details and can often get it right the first time.

Boris emphasizes: A good plan really matters. Spending 3 minutes confirming the plan can save 30 minutes of rework.

Method 7: Create Slash Commands for High-Frequency Workflows

How to do it:

Boris creates slash commands for every repetitive workflow. These commands are saved in the .claude/commands/ folder.

His example is the /commit-push-pr command. This command:

  1. Checks current git status
  2. Generates an appropriate commit message
  3. Commits the code
  4. Pushes to remote repository
  5. Creates a Pull Request

He uses this command dozens of times per day.

How commands are written:

Slash commands are Markdown files whose body is a prompt template; the filename becomes the command name. A command can embed bash snippets whose output is inlined into the prompt before Claude sees it, which is how it pre-computes things like git status. A /commit-push-pr command file might look roughly like this (the frontmatter fields and the !`...` embedding follow the Claude Code docs; the exact allowed-tools patterns may vary by version):

---
description: Commit staged changes, push, and open a PR
allowed-tools: Bash(git status:*), Bash(git diff:*), Bash(git commit:*), Bash(git push:*), Bash(gh pr create:*)
---

## Context

- Current git status: !`git status --short`
- Staged changes: !`git diff --cached`

## Task

Based on the changes above, generate a commit message, commit the code, push to the remote repository, and create a Pull Request.

Requirements:
1. The commit message follows Conventional Commits
2. The PR title is concise and clear
3. The PR description includes the reason for the changes and how they were tested

Why do this:

Imagine doing the same operation 20 times a day, re-entering the prompt each time—it's easy to miss details and wastes time.

Slash commands solidify this workflow. You just type /commit-push-pr and the rest is automatic. And because commands are committed to Git, the whole team can use them, and new members can get started immediately.

Boris also mentions he uses bash in commands to pre-compute information (like git status), so Claude doesn't need to repeatedly call tools to query—execution is faster.

Method 8: Use Sub-Agents for Specialized Tasks

How to do it:

Boris has created multiple sub-agents, each responsible for specific types of tasks.

For example:

  • code-simplifier: After Claude completes code, specializes in simplifying and optimizing it
  • verify-app: Contains detailed testing instructions, responsible for end-to-end testing the application
  • Other sub-agents for documentation, performance analysis, etc.
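Each sub-agent lives as a Markdown file with YAML frontmatter under .claude/agents/ (project-level) or ~/.claude/agents/ (personal). A sketch of what code-simplifier might look like; the prompt body here is a guess at the spirit of it, not Boris's actual agent:

---
name: code-simplifier
description: Simplifies and cleans up code after a feature is complete. Use proactively after large edits.
tools: Read, Edit, Grep
---

You are a code-simplification specialist. Given recently changed files,
look for duplicated logic, dead code, and overly clever constructs, and
make minimal simplifying edits. Never change behavior; rely on the
existing tests to confirm each change.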

Difference between sub-agents and slash commands:

Slash commands are suitable for fixed-step operations like committing code or running tests.

Sub-agents are suitable for tasks requiring judgment and decision-making, like:

  • Looking at code and determining what can be simplified
  • Testing an application and finding which features have issues
  • Reviewing documentation and finding unclear parts

Why sub-agents are needed:

When tasks are complex, cramming all instructions into one session makes Claude prone to distraction or missing details.

Using sub-agents is like splitting a big task into specialized smaller tasks. Each sub-agent only needs to focus on doing one thing well, and instructions can be very detailed and precise.

For example, the verify-app sub-agent's instructions might include:

Please test the application following these steps:
1. Open browser and visit localhost:3000
2. Test user registration flow, confirm verification email is received
3. Test login functionality, including wrong password cases
4. Test each button of main features
5. Check console, confirm no errors
6. Summarize all issues found, sorted by severity

Such detailed instructions would look messy in the main session, but as a sub-agent it's very clean.

Method 9: Use Hooks to Auto-Format Code

How to do it:

Boris's team uses a PostToolUse hook. Whenever Claude uses a tool (like editing a file), this hook automatically runs code formatting tools.

For example:

  • Claude writes JavaScript code → Hook automatically runs Prettier to format
  • Claude writes Python code → Hook automatically runs Black to format
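In settings, the hook is configured under a PostToolUse key with a matcher for the file-editing tools. A sketch, assuming the hooks schema from the Claude Code docs; hooks receive the tool call as JSON on stdin, so jq extracts the edited file's path here:

{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "jq -r '.tool_input.file_path // empty' | xargs -r prettier --write"
          }
        ]
      }
    ]
  }
}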

Why do this:

Code generated by Claude usually has correct logic, but formatting might not fully comply with team standards (indentation, spacing, line breaks, etc.).

If not handled, these formatting issues will cause errors in CI (Continuous Integration), and you'll have to come back to fix them.

Using hooks for automatic formatting is like giving Claude an "assistant" while it works, specifically handling cleanup. Claude focuses on writing logic, and the hook handles formatting.

This approach can be extended:

The hook mechanism can automate many "cleanup tasks":

  • Automatically run linter checks after code is written
  • Automatically update related documentation after files are modified
  • Automatically add to test suite after test files are created

Any "fixed operation that needs to be done every time" can be automated with hooks.

Method 10: Use Permission Presets to Avoid Repeated Confirmations

How to do it:

Claude Code by default asks for your permission when executing sensitive operations. For example:

  • Deleting files
  • Running shell commands
  • Accessing the network

Boris doesn't use --dangerously-skip-permissions (skip all permission checks) because that's too dangerous.

His approach is to use the /permissions command to pre-allow certain known-safe common commands.

For example, he pre-allows:

git status
git diff
npm test
docker ps
ls
cat

These commands are written into the .claude/settings.json file, committed to Git, and shared across the team.
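In settings form, that allowlist might look roughly like this; the Bash(...) rule syntax follows the permissions docs, and the :* suffix allows any arguments after the prefix:

{
  "permissions": {
    "allow": [
      "Bash(git status:*)",
      "Bash(git diff:*)",
      "Bash(npm test:*)",
      "Bash(docker ps:*)",
      "Bash(ls:*)",
      "Bash(cat:*)"
    ]
  }
}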

Why do this:

If every git status requires your confirmation, it's annoying and interrupts workflow.

But if you completely skip permission checks, Claude might accidentally run dangerous commands (like rm -rf).

Permission presets are a balance: common safe commands auto-pass, while dangerous or uncommon commands still require confirmation.

How to determine which commands can be preset:

Boris's principles:

  • Read-only operations (checking status, reading files) → Preset allowed
  • Reversible operations (git commit can be reverted) → Preset allowed
  • Irreversible operations (delete, overwrite) → Confirm each time
  • Operations involving external systems (sending emails, calling APIs) → Confirm each time

Method 11: Let Claude Use All Tools

How to do it:

Boris has configured many MCP servers for Claude (Model Context Protocol—a protocol that lets AI call external services).

Claude can now:

  • Search and post messages to Slack
  • Run BigQuery queries for data analysis
  • Get error logs from Sentry
  • Call various command-line tools

These configurations are written in the .mcp.json file and shared across the team.
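A .mcp.json entry follows the standard mcpServers shape: a command to launch each server plus any environment it needs. A sketch with one illustrative server; the package name is an assumption, and real tokens belong in environment variables rather than in the committed file:

{
  "mcpServers": {
    "sentry": {
      "command": "npx",
      "args": ["-y", "@sentry/mcp-server"],
      "env": {
        "SENTRY_AUTH_TOKEN": "${SENTRY_AUTH_TOKEN}"
      }
    }
  }
}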

Real usage scenario:

For example, Boris might say:

Our users reported that a feature was broken last night. Help me investigate.

Claude will automatically:

  1. Check Sentry for last night's error logs
  2. Analyze which API endpoint reported errors
  3. Look at related code to find possible causes
  4. Search Slack to see if others encountered the same issue
  5. Provide diagnosis and fix recommendations

Throughout this process, you don't need to manually switch tools—Claude handles it all.

Why do this:

A programmer's job isn't just writing code—it also involves checking logs, looking at data, and communicating with the team. If all of this has to be done manually, it wastes a lot of time.

Letting Claude call all tools is like giving it "hands" and "eyes"—it can gather information itself instead of waiting for you to feed it.

Key to configuration:

Boris emphasizes putting common MCP configurations in .mcp.json and committing to Git, so the whole team can use them and new members don't need to configure themselves.

This is another example of "team knowledge accumulation": the first person sets it up, everyone after benefits.

Method 12: Use Background Agents or Plugins for Long Tasks

How to do it:

For tasks that take a long time to run, Boris has three strategies:

Strategy A: Have Claude use a background agent to verify

When a task is almost complete, prompt Claude:

After completing the task, please start a background agent to verify your work, ensuring all tests pass and functionality is normal.

Strategy B: Use agent stop hook for automatic verification

Set up a hook that automatically triggers a verification process when Claude's task ends. This is more reliable than Strategy A because you don't need to remember to remind Claude.
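A minimal Stop-hook sketch, reusing the hooks schema from Method 9. The convention in the hooks docs is that a blocking exit code feeds the command's output back to Claude so it keeps working until the checks pass; verify the exact exit-code semantics for your version:

{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "npm test --silent || { echo 'tests failing, keep going' >&2; exit 2; }"
          }
        ]
      }
    ]
  }
}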

Strategy C: Use the ralph-wiggum plugin

This is a specialized plugin that lets Claude automatically run a series of checks after completing a task and report results to you.

Combined with no-permission-prompt mode:

In sandbox environments, Boris uses --permission-mode=dontAsk or --dangerously-skip-permissions, so Claude can complete the entire long task unimpeded, without getting stuck on permission confirmations.

Why this is needed:

Imagine asking Claude to do a task that takes 1 hour, like refactoring an entire module. You can't stare at the screen for 1 hour, ready to click "Allow" at any moment.

Using background agents or plugins means you can go do other things, and when you come back, Claude has finished and self-verified that everything is fine.

Security considerations:

Boris emphasizes only skipping permission checks in sandbox environments (isolated environments that don't affect production code). Be cautious in production projects.

Method 13: Give Claude Ways to Verify Its Work

How to do it:

Boris considers this the most important point: ensure Claude has a way to verify its own work.

His example: every time Claude makes a change to the claude.ai/code web app, it:

  1. Uses Claude Chrome extension to open the browser
  2. Tests various UI features
  3. Checks if user experience is smooth
  4. If issues are found, modifies code and tests again
  5. Only considers the task complete when all features work normally

Verification methods for different domains:

Boris says verification methods vary by task:

  • Backend code: Run test suites
  • Frontend pages: Test in browser
  • Mobile apps: Test in simulator
  • Data analysis: Check if numbers are reasonable
  • Documentation: Check if links are valid and formatting is correct

Why this is most important:

An AI without a feedback loop is like someone working with their eyes closed—it doesn't know if it's doing things right; it can only wait for you to tell it.

With a feedback loop, AI can self-check, self-correct, and self-iterate, improving final result quality by 2 to 3 times.

Investment you need to make:

Boris emphasizes: Make the verification process "rock solid."

This means:

  • Testing environment must be stable, not flaky
  • Test scripts must be reliable, no false positives
  • Verification standards must be clear, not vague

These are one-time investments. Once done, Claude can use them for self-verification every time.
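One way to make that investment concrete is a single entry point Claude can always run. A sketch, assuming a Node project with test, lint, and build scripts already defined:

#!/usr/bin/env bash
# verify.sh -- one command Claude can run to check its own work.
set -euo pipefail

npm test          # the full test suite must pass
npm run lint      # no style or lint errors allowed
npm run build     # the app must still compile

echo "all checks passed"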

Summary: The Common Logic Behind These 13 Methods

After reading these 13 methods, you'll find they revolve around several core ideas:

  1. Work in parallel, maximize tool value (Methods 1, 2)
  2. Use the best model, aim to get it right the first time (Method 3)
  3. Accumulate team wisdom, make AI smarter over time (Methods 4, 5, 11)
  4. Automate all repetitive operations (Methods 7, 8, 9, 10)
  5. Give AI complete tools and permissions (Methods 10, 11, 12)
  6. Build feedback loops, let AI self-verify (Method 13)
  7. Plan before executing, direction matters more than speed (Method 6)

These methods weren't made up randomly—they were summarized through continuous trial and error by Boris and his team in actual work.

You don't need to use all of them at once. Start with the ones most suitable for you and gradually build your own AI workflow.

The key insight is this: AI tools don't perform at their best out of the box. You need to invest time configuring, training, and optimizing them, but once you establish a workflow that fits you, the returns are huge.

As Boris says: There's no single correct way to use Claude Code. Everyone on the team uses it differently. What matters is finding the way that works for you, then continuously iterating and improving.