Merchants of Safety

When a corporation elects to drape itself in the vestments of moral authority, to raise billions not on the promise of profit alone but on the claim that it, uniquely among its competitors, can be trusted with the most consequential technology of the century, it does not merely invite scrutiny. It demands prosecution.

Anthropic's founders did not slip quietly out of OpenAI over stock options or office politics. They departed, loudly and publicly, on the declared grounds that the race toward artificial general intelligence required a steadier, more principled hand at the wheel. Their hand, as it happened. What follows is a specific, evidence-based examination of the distance between that founding promise and what came after: the governance structures announced as bulwarks but functioning as furniture, the safety commitments drafted with great fanfare and quietly fed into the shredder when the contracts arrived.

The question is not whether Anthropic is worse than its rivals. It may not be. The question is whether a company that sold the world its conscience ever actually had the inventory.

Read Merchants of Safety

2021–2026 Analysis

Anthropic may be a real company with real revenue and real technical accomplishments. It may also be a terrible investment. These two facts coexist more comfortably than the prospectus would suggest. The public story is cleaner, nobler, and more tightly managed than the business underneath it, and the distance between the two is where ordinary investors lose their money.

Our five-year analysis tracks what actually happened from 2021 to 2026. The cautious disclosures gave way to narrative management. The safety commitments aged like milk. The release behavior drifted, quietly and then not quietly at all, away from every prior public commitment. We mapped the divergence so you would not have to take anyone's word for it, including ours.

Read the full analysis