From EFF to NSF: Prioritizing People in the AI Action Plan

In January 2025, the new US administration issued an executive order on artificial intelligence (AI), revoking the rules established during the Biden era. The order launches an AI Action Plan aimed at “unleashing” the AI industry, promoting innovation, and eliminating “engineered social agendas” from the sector. In this context, the National Science Foundation (NSF) has invited public comments to inform the initiative, and the Electronic Frontier Foundation (EFF) has responded with several key concerns.

The EFF argues that government acquisition of automated decision-making technologies must come with a high degree of transparency and public accountability. The organization warns against relying on secret, unvetted algorithms for decisions with significant consequences for people’s lives, such as employment or asylum determinations. Its second recommendation is that policies on generative AI be specific and proportionate, weighing the need to protect other public interests. Finally, it urges regulators to prevent large companies from entrenching their dominance of the sector through licensing schemes that restrict competition.

The rapid deployment of AI in the United States has raised alarms over the lack of transparency. Left unchecked, this approach could not only strengthen the power of the largest corporations but also jeopardize the civil liberties of people affected by automated decisions. Experimental AI tools have already been put to work in sensitive areas such as policing and national security, and concerns have been raised about their use in evaluating federal government employees, with potential consequences for those workers’ livelihoods.

Automating critical decisions about people is, in the EFF’s view, not just imprudent but dangerous. New AI systems are often ineffective, and correcting their errors demands considerable effort. In the worst cases, they can produce discriminatory and incorrect results hidden behind the inherent opacity of these technologies. The EFF therefore stresses the need for a robust process of public notice and review before such tools are deployed, in line with the Administrative Procedure Act. This would help limit spending on ineffective technologies and make it possible to determine when their use is harmful.

Amid the anxiety over generative AI, lawmakers have begun proposing broad regulations that fail to account for the many public interests at stake. Bills such as the NO FAKES Act and the No AI FRAUD Act would expand intellectual-property protections in ways that benefit large corporations while disregarding the needs of other creators. Some technical proposals, such as “watermarking,” face serious practical limitations, posing a further challenge to effective AI regulation.

Among the more questionable approaches, the growing trend toward AI licensing schemes erects barriers for smaller artists and creators, favoring those who can pay more for access to these technologies. The strategy is akin to trying to stop a bully by handing the bully more resources instead of protecting the victim.

Quick fixes such as expanding copyright ultimately benefit almost no one, least of all the artists and small businesses that cannot compete with large corporations. AI does pose a threat to the fair treatment of creative work, but clamping down on secondary uses will not correct the power imbalance between creative workers and dominant oligopolies. Citizens must retain the right to participate in culture and express themselves without being at the mercy of private corporations. Policymakers should therefore focus on clear policies that protect these rights and limit regulation to proven solutions that genuinely address the problems at hand.

via: MiMub in Spanish