When Boris Cherny, the creator of Claude Code—one of the most advanced AI coding agents—shares his approach, the tech world pays close attention. Over the past week, Cherny’s detailed explanation of his personal AI-driven development workflow has taken Silicon Valley and the global engineering community by storm, sparking intense discussion about the future of software development.
Cherny, who leads Claude Code at Anthropic, shared his method in a viral thread on X, where his approach was hailed by industry experts as a potential turning point for the startup and the broader AI coding landscape. Jeff Tang, a respected developer voice, emphasized, “If you’re not reading the Claude Code best practices straight from its creator, you’re behind as a programmer.” Meanwhile, Kyle McNease suggested that Anthropic might be experiencing its own “ChatGPT moment” due to Cherny’s innovative updates.
Parallel AI Agents: Coding as a Real-Time Strategy Game
At the core of Cherny’s workflow is a departure from traditional, linear coding practices. Instead of writing and testing code sequentially, Cherny likens his process to commanding a fleet in a real-time strategy game.
He operates five instances of Claude simultaneously within his terminal, relying on system notifications to alert him whenever an agent needs input. This multitasking lets one agent run test suites while another refactors legacy code and others handle documentation or miscellaneous tasks. Cherny also manages multiple Claude agents on the web, seamlessly transferring sessions between local and browser environments through a “teleport” command.
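Claude Code’s actual orchestration is internal to the tool, but the pattern the article describes resembles dispatching independent tasks to parallel workers and raising a notification whenever one finishes or blocks on input. A minimal conceptual sketch, with every name and task invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Hypothetical task list: each entry stands in for one Claude session.
TASKS = {
    "tests": "run the test suite",
    "refactor": "refactor a legacy module",
    "docs": "update documentation",
}

def run_agent(name: str, instruction: str) -> str:
    # Placeholder for a real agent session; here we just echo a result.
    return f"{name}: finished '{instruction}'"

def notify(message: str) -> None:
    # Stand-in for a system notification (terminal bell, OS alert, etc.).
    print(f"[notify] {message}")

def run_fleet(tasks: dict) -> list:
    """Run every task concurrently, notifying as each one completes."""
    results = []
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        futures = {pool.submit(run_agent, n, t): n for n, t in tasks.items()}
        for fut in as_completed(futures):
            notify(f"agent '{futures[fut]}' needs attention")
            results.append(fut.result())
    return results
```

The point of the sketch is the shape of the loop, not the implementation: the human operator only intervenes when a notification fires, rather than babysitting each session sequentially.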
This methodology underscores Anthropic’s “do more with less” philosophy, championed recently by Anthropic’s President Daniela Amodei. Unlike competitors investing heavily in massive infrastructure, Anthropic’s approach demonstrates that intelligent orchestration of existing AI models can achieve significant productivity gains.
Choosing Intelligence Over Speed: The Case for Opus 4.5
In an industry often focused on reducing latency, Cherny’s choice to rely exclusively on Anthropic’s largest and slowest model, Opus 4.5, is counterintuitive yet insightful.
He explains that despite its size and slower response times, Opus 4.5 requires less human intervention thanks to its superior reasoning and tool-integration capabilities. That reduces the time spent correcting AI errors and ultimately speeds up the development cycle.
For enterprise leaders, this highlights a crucial shift: prioritizing smarter AI that minimizes human correction time over faster but less accurate models can enhance overall efficiency.
Transforming AI Mistakes into Collective Learning
Addressing the common challenge of AI “amnesia,” Cherny’s team maintains a centralized file, CLAUDE.md, stored in their git repository. Whenever Claude makes an error, the team documents the issue in this file, enabling the AI to “learn” and avoid repeating the same mistakes in future sessions.
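The article does not reproduce the file itself, but a CLAUDE.md of this kind might look like the following (all entries invented for illustration):

```markdown
# CLAUDE.md (project guidance for Claude)

## Rules learned from past mistakes
- Always run the linter before committing; a previous session pushed
  unformatted code.
- Do not edit generated files under build/; change the generator instead.
- Use the project's logging helper rather than print statements.
```

Because the file lives in the git repository, every rule is versioned and reviewed like code, and every Claude session that starts in the repo picks it up automatically.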
This practice effectively turns the codebase into a self-improving system. Human developers not only fix bugs but also update AI guidance, ensuring continual agent improvement. Product leader Aakash Gupta remarked, “Every mistake becomes a rule,” emphasizing the evolving intelligence of the AI collaborator.
Automation Through Slash Commands and Specialized Subagents
Cherny’s workflow also relies heavily on automation to reduce repetitive tasks. He uses custom slash commands stored in the project’s repository to execute complex operations with a single command. For example, the /commit-push-pr command automates the entire version-control sequence (committing, pushing, and opening a pull request) without manual input.
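In Claude Code, custom slash commands are defined as prompt files in the repository’s .claude/commands/ directory, with the filename becoming the command name. A hypothetical version of such a command (contents invented for illustration) might look like this:

```markdown
<!-- .claude/commands/commit-push-pr.md -->
Commit all staged changes with a descriptive commit message,
push the current branch to origin, and open a pull request
against the main branch.
```

Keeping these files in the repository means the whole team shares the same automations, the same way they share build scripts.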
Moreover, Cherny deploys specialized AI subagents dedicated to particular development phases, such as code simplification post-development and comprehensive end-to-end application testing before deployment.
Verification Loops: Elevating AI-Generated Code Quality
One of the most significant factors behind Claude Code’s rapid growth—reportedly achieving over $1 billion in annual recurring revenue—is its robust verification loop.
Rather than solely generating code, Claude actively tests each change through browser automation, running UI tests and iterating until the code meets quality and usability standards. Cherny explains that enabling AI to verify its own work can improve output quality by two to three times, ensuring that the generated code is both functional and user-friendly.
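Claude Code’s internal verification loop is not public, but the generate-verify-iterate pattern described above can be sketched roughly as follows, with every function a hypothetical stand-in (the real system would call the model and drive browser-based UI tests):

```python
def generate_change(attempt: int) -> str:
    # Stand-in for the model producing a candidate code change.
    return f"change-v{attempt}"

def verify(change: str, attempt: int) -> bool:
    # Stand-in for running UI / end-to-end tests against the change;
    # here the third attempt "passes" to illustrate iteration.
    return attempt >= 3

def verified_change(max_attempts: int = 5):
    """Regenerate until a change passes its own verification."""
    for attempt in range(1, max_attempts + 1):
        change = generate_change(attempt)
        if verify(change, attempt):
            return change  # only verified code is returned
    return None  # give up after max_attempts
```

The design choice the sketch illustrates is that the loop returns only code that passed verification, which is why self-checking can raise output quality without extra human review.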
The Future of Software Development: AI as a Workforce
Cherny’s revelations signal a fundamental shift in software engineering. Where AI has traditionally been viewed as a coding assistant, mostly offering autocomplete suggestions, his approach treats AI as a fully integrated workforce that multiplies human productivity.
Jeff Tang encapsulated the sentiment: “Read this if you’re already an engineer… and want more power.” The tools to expand human output fivefold are available today, but adopting them requires a mindset change—from seeing AI as help to embracing it as a collaborative team member.
Developers who adapt quickly will not only become more efficient but will redefine programming paradigms, leaving others behind in conventional typing-based workflows.
Source: see the original article
