Trump charts a course on AI policy
If the Administration restricts efforts to correct for algorithmic bias in the name of ideological neutrality, those AI systems will recapitulate many existing human biases. That might be the point.
On July 23, the Trump Administration released a three-pronged action plan on federal policy toward artificial intelligence. In the works since an Executive Order in January, the plan seeks to accelerate the development and adoption of AI, aiming to secure U.S. dominance of the technology's economic and geopolitical impact through a mix of government investments and low regulatory burdens on the private sector. The plan's reflection of core Administration political values and objectives, however, injects uncertainty into how it will achieve these goals.
Part of the Trump action plan focuses on expunging governmental and corporate policies that may have restricted AI adoption in the public and private sectors. At the federal level, it recommends a White House-led review of existing rules, regulations, and administrative documents created by federal agencies, with an eye toward revising or repealing those that may hinder AI development and adoption. Such regulations are likely to be holdovers of government action in specific economic sectors, like health care and finance, that may inadvertently limit federal AI adoption. Firms developing AI will also get a direct say in government regulation of their creations: the action plan directs the White House Office of Science and Technology Policy to launch a Request for Information asking the private sector which federal regulations it finds burdensome.
At the same time, the Trump Administration erected a different type of regulatory scheme, one tied to its political ideology. One of the EOs released to accompany the action plan targets "woke AI" usage by the federal government. Declaring "diversity, equity, and inclusion" to be "destructive" and an "existential threat to reliable AI," the order instructs the Office of Management and Budget to allow the acquisition only of AI systems that "do not manipulate responses in favor of ideological dogmas such as DEI," claiming that models trained on concepts like "transgenderism," unconscious bias, and racial and sexual discrimination actually suppress factual information.
But AI models that are trained indiscriminately on everything are known to repeat the biases and discrimination found online and in the world. For example, researchers have identified systemic skews in data collection caused by the implicit racial biases of medical professionals and home appraisers. AI models trained on such data, or on other sources like historic photographic archives, will generate outputs that mirror the biases in the source material, such as racially discriminatory mortgage underwriting. If the Administration restricts developers' efforts to build training datasets that correct for algorithmic bias in the name of ideological neutrality, those AI systems will recapitulate many existing human biases. That might be the point.
The action plan reflects the Administration's contradictory approach to scientific research. It takes specific interest in how federal agencies can support scientific datasets and data repositories at a scale useful to learning algorithms, noting that "the AI era will require more scientific and engineering research to transform theories into industrial-scale enterprises." At the same time, the Administration just froze all grants by the National Science Foundation until a review determines whether projects align with its priorities. The action plan also instructs the National Institute of Standards and Technology to revise its AI risk management standards to eliminate "references to misinformation, Diversity, Equity, and Inclusion, and climate change." Critics warn that these restrictions will hinder the accuracy of AI models used to predict phenomena like natural disasters and agricultural productivity, would affect national defense assessments, and may make the technology itself much less reliable.
The action plan also draws battle lines with state governments as they explore AI regulation. States are out in front of federal action: 38 states have established AI-related regulations and all 50 state legislatures are considering legislation on the topic. The action plan directs the Office of Management and Budget to consider “a state’s AI regulatory climate when making funding decisions and limit funding if the state’s AI regulatory regimes may hinder the effectiveness of that funding or award.” This direction is similar to an effort by Sen. Ted Cruz to insert a requirement in the fiscal year 2026 appropriations package that states seeking grants from a $500 million AI infrastructure fund must adopt a 10-year moratorium on state regulation of AI. Many of Cruz’s Republican colleagues opposed the amendment and the Senate overwhelmingly voted to strip it from the bill.
The Administration’s AI plan also recommends streamlining a number of environmental regulations it claims hinder building the data centers and semiconductor manufacturing plants needed to support AI systems. It calls for new categorical exclusions within the National Environmental Policy Act (NEPA) for data centers, without mentioning the existing process federal agencies use to determine such exclusions. It also calls for water permitting exemptions consistent with those submitted in the public comment period for the plan by a trade association that represents Google and Amazon Web Services. An EO released along with the plan provides much more detail on the rollback of environmental regulations for data center construction.
Other nations, of course, have the technological and industrial capacity to develop their own AI systems. Concern that open-source and proprietary systems developed in other countries may not reflect domestic political values has led governments to see AI as a matter of national sovereignty. The European Union and countries like India, China, and Japan are discussing policies of "AI sovereignty," or complete control over AI inputs.
The Trump Administration’s AI policy strongly endorses American AI sovereignty as part of its foreign policy vision of competition between great powers like the U.S. and the People’s Republic of China. Vice President J.D. Vance previewed this vision for American AI sovereignty in March. Another EO accompanying the action plan, however, goes further in calling for something closer to AI hegemony. “The United States must not only lead in developing general-purpose and frontier AI capabilities, but also ensure that American AI technologies, standards, and governance models are adopted worldwide to strengthen relationships with our allies and secure our continued technological dominance,” it declares. To do so, it calls for federal action to support the export of American AI systems as full-stack packages, or sets of hardware, data, and machine learning algorithms together as a product. The action plan describes full-stack export as a form of “alliance,” noting that the “distribution and diffusion of American technology will stop our strategic rivals from making our allies dependent on foreign adversary technology.”
In sum, the Trump plan is a total repudiation of Biden Administration AI policy, which sought to balance caution with support for adoption. In October 2023, the Biden Administration issued EO 14110, vowing to protect the intellectual property rights of creators and the personal data of private citizens used in training AI systems. It promised protections against discrimination arising from inherent biases in training data, labeling systems for AI-generated content, and input from organized labor on AI use in the workplace. The order also positioned the Administration to counter domination of the AI marketplace by large tech firms. President Trump rescinded EO 14110 in January.