Ted Cruz focusing on privacy group's role in AI policy formation


As the race to regulate artificial intelligence (AI) heats up, U.S. Senator Ted Cruz (R-TX) is sharpening his focus on the groups shaping those policies behind the scenes. Cruz, who chairs the Senate Committee on Commerce, Science, and Transportation, has sent a pointed letter to the Future of Privacy Forum (FPF), a nonprofit known for advocating for responsible tech development, requesting clarity on its involvement in crafting AI regulations.

The letter, addressed to FPF CEO Jules Polonetsky, asks for details on the organization’s potential role in helping shape federal AI policy—particularly in relation to a now-rescinded executive order issued by President Biden in October 2023. That order aimed to establish new federal standards for AI safety, promote content authentication practices, and encourage Congress to pass bipartisan data privacy legislation.

In Cruz’s view, the Future of Privacy Forum may have overstepped the neutral role it claims to play.

“FPF bills itself as a mediator that brings together thought leaders to address challenges posed by technology,” Cruz wrote. “That doesn’t mean it lacks a point of view.”

Cruz is also asking whether FPF received any government grants or outside funding that may have influenced its work on AI policy. FPF has denied any such financial connections related to its role in AI legislation, including state-level efforts.

The inquiry adds another layer to the broader national debate on who should influence AI governance, especially as states and the federal government grapple with the rapid acceleration of AI capabilities and the risks they bring. While the Biden administration's executive order was praised by many in the tech and privacy communities for its broad scope and call for government-wide action, critics like Cruz have pushed back hard.

In a co-authored op-ed published in the Wall Street Journal, Cruz slammed the executive order, arguing it had “little to do with AI and everything to do with special-interest rent-seeking,” comparing the administration’s approach to a “mafia shakedown.”

FPF has pushed back on such characterizations. The group praised Biden’s executive order at the time, calling it “incredibly comprehensive” and commending its influence on both public and private sector AI safety efforts. However, the nonprofit insists it does not advocate for importing “European-style regulation” to U.S. states and says it received no grant funding for its multistate work or other AI-related legislative efforts.

One flashpoint in the debate is FPF's involvement in a national AI policy coalition, the Multistate AI Policymaker Working Group (MAP-WG), a bipartisan group of more than 200 lawmakers from more than 45 states. According to FPF, the working group was designed to help legislators share insights and coordinate on emerging AI issues, but it has now drawn scrutiny from Cruz, particularly in light of Texas legislation.

Cruz’s letter highlights FPF’s association with Texas state Rep. Giovanni Capriglione’s AI bill, the Texas Responsible AI Governance Act (TRAIGA). Filed during the current 89th Legislative Session as House Bill 1709, TRAIGA seeks to regulate “high-risk” AI systems by requiring developers to adopt risk management policies, disclose AI use, and prevent algorithmic discrimination.

Capriglione, a Republican from Southlake, has said his mission is to strike a balance between innovation and protection. Speaking at a Texas Public Policy Foundation event last November, he emphasized the legislation’s goal: “to protect the constituents of Texas,” not cater to industry pressure.

“My goal, our goal, and all of our goal of the Legislature should be to go and protect our constituents and protect their liberties and protect their safety,” Capriglione said.

FPF maintains it “did not play any role” in drafting the Texas legislation and reiterated that MAP-WG activities were not government-funded.

Still, Cruz’s inquiry signals an intensifying focus on the web of influence around AI policy, particularly the ties between nonprofit organizations and government actors. With AI technology advancing faster than legislation can keep pace, the debate over who gets a seat at the table is just beginning.

As federal and state lawmakers alike work to shape the future of AI in America, questions over transparency, influence, and accountability are likely to remain at the forefront.