
Amazon Engineers Push Back on Internal AI Tool Rules

Seattle, WA, USA | Wednesday, February 11, 2026

Amazon’s own developers are feeling the squeeze from new rules that limit their use of Anthropic’s Claude Code in everyday work. Even though the company has invested heavily in Anthropic and sells access to its AI models through AWS Bedrock, employees cannot deploy Claude in live products without special permission.

New Guidance Pushes Kiro

Late last year, Amazon rolled out guidance that nudges teams toward its own coding assistant, Kiro. The memo urges developers to use Kiro for production code and lists Claude as a non‑approved third‑party option.

Heated Debate on Amazon Forums

  • About 1,500 staffers signed an internal thread asking for formal adoption of Claude Code.
  • They argue the tool outperforms Kiro.

Impact on Bedrock Sales

The policy weighs heavily on engineers who sell AWS Bedrock, Amazon’s platform that gives customers access to Claude and other AI models. One engineer wrote:

“It is hard to convince clients to adopt a tool you can’t use yourself in internal projects.”

Layered Relationship with Anthropic

  • Amazon has invested over $60 billion in Anthropic.
  • Anthropic relies on Amazon’s cloud and chips.

The company says it wants to boost internal efficiency by standardizing on Kiro, but it will not fully support third-party AI tools for production use.

Irony Highlighted by Sales Engineers

A sales engineer noted the irony of selling Claude through Bedrock while the tool is not officially allowed inside Amazon:

“To support customers, I need to demo and build with Claude myself.”

Concerns About Kiro’s Performance

Other engineers worry that Kiro’s slower performance could hurt development speed, describing the push toward the in-house tool as a “survival mechanism” rather than genuine innovation.

Usage Statistics

  • About 70% of Amazon’s software engineers used Kiro at least once in January.
  • Even so, many still favor Claude.
  • A manager overseeing internal tools acknowledged the frustration and cited the roughly 1,500 endorsements for Claude’s official adoption.

Conflicting Internal Guidelines

Some employees pointed out that earlier internal guidelines had cleared Claude for production use, only for that language to be removed later. With no clear rationale offered, many were left questioning why the tool was denied official status.

Bottom Line

The overall picture is a company juggling its own AI ambitions against its partnership obligations, while employees push for the tools they believe will deliver better results.
