Paul Tudor Jones says the US is dangerously late to regulate AI
Photo: Images George Rex / Flickr / CC BY-SA 2.0
Why it matters
  • Jones is not a tech executive or an AI researcher — he is one of the most respected macro investors in the world. His intervention frames AI governance as a financial stability concern, not just a civil liberties one.
  • The US has no federal AI regulation in force as of May 2026 — the administration has rolled back Biden-era executive orders and has not replaced them with legislation.
  • The EU AI Act’s enforcement begins in August 2026, creating a situation where the world’s largest AI companies operate under strict European rules and essentially none in their home jurisdiction.

Paul Tudor Jones, founder of Tudor Investment Corporation and one of the most closely watched macro investors of the past four decades, told CNBC on May 7 that the United States is “late” to regulating artificial intelligence. “We should have already done it,” he said, in remarks that drew unusual attention for their source: Jones occupies a position in financial markets somewhere between oracle and institution, and his public statements on policy tend to carry weight across asset classes.

His specific concern, as expressed in the CNBC interview, was that AI deployment is accelerating rapidly across financial services, healthcare, defence, and infrastructure while governance frameworks lag years behind. He described the technology as “the most consequential since the printing press” and suggested the absence of federal oversight creates systemic risks that markets cannot price because regulators cannot name them. He did not endorse any specific legislation, but called for a “serious” bipartisan effort to address the gap before “something happens that forces our hand.”

The regulatory vacuum in the US

The United States has no comprehensive federal AI law in force as of May 2026. The Biden-era executive orders on AI safety and transparency — signed in October 2023 and intended as a placeholder while Congress debated legislation — were revoked by the Trump administration early in its second term. The administration has indicated a preference for industry self-governance and has resisted both congressional AI bills and international coordination mechanisms that would bind US companies to multilateral standards.

The result is a structural asymmetry with the European Union, which will begin enforcing its AI Act against general-purpose model providers in August 2026. US companies including OpenAI, Google, and Microsoft face legal obligations in Brussels — transparency requirements, risk assessments, incident reporting — that they do not face at home. Whether that asymmetry produces regulatory arbitrage, moves AI development offshore, or simply creates compliance costs without safety benefits is contested among analysts. Jones’s position is that the absence of domestic oversight is itself the risk, regardless of what Europe does.

The market angle

For investors, Jones's warning maps onto a concern already embedded in portfolio construction: AI stocks are priced for a continuation of the current development trajectory, but that trajectory depends on regulatory conditions that could change rapidly. A major AI-related incident — a significant model failure in a safety-critical system, or a financial-markets disruption attributable to algorithmic decisions — could trigger emergency legislative action that reshapes the operational and liability environment overnight.

The AI sector’s current valuations do not obviously price that tail risk. Jones’s intervention is a reminder that macro investors who survived the 2008 financial crisis built their frameworks around the premise that governance lags can have sudden, nonlinear consequences. Whether the governance gap in AI resolves gradually through regulatory catch-up or abruptly through a forcing event is, as Jones would frame it, a question of timing rather than direction.
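The tail-risk point can be made concrete with a toy probability-weighted valuation. Every number below is hypothetical and chosen purely for exposition — it illustrates the mechanism Jones is pointing at, not any actual asset, probability, or price:

```python
# Toy illustration of pricing a regulatory tail risk into an asset.
# All figures are hypothetical, for exposition only.

def scenario_value(scenarios):
    """Probability-weighted value across named scenarios.

    scenarios: list of (label, probability, value) tuples.
    Probabilities must sum to 1.
    """
    total_p = sum(p for _, p, _ in scenarios)
    assert abs(total_p - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * v for _, p, v in scenarios)

# A price of 100 that assumes the status quo simply continues...
status_quo_only = scenario_value([("status quo", 1.0, 100.0)])

# ...versus the same asset with a 10% chance of an abrupt regulatory
# shock that cuts its value to 40.
with_tail_risk = scenario_value([
    ("status quo", 0.90, 100.0),
    ("regulatory shock", 0.10, 40.0),
])

print(status_quo_only)   # 100.0
print(with_tail_risk)    # 94.0
```

The gap between the two numbers is the premium a market pays when it treats the shock scenario as having zero probability — which is one way to read the claim that current valuations "do not obviously price that tail risk."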