
Majority Leader Schumer Announces First-Of-Its-Kind Funding To Establish A U.S. Artificial Intelligence Safety Institute; Funding Is A Down Payment On Balancing Safety With AI Innovation And Will Aid Development Standards, Tools, And Tests To Ensure AI Systems Operate Safely

Washington, D.C. – Senate Majority Leader Chuck Schumer (D-NY) today announced that the National Institute of Standards and Technology will receive up to $10 million to establish the U.S. Artificial Intelligence Safety Institute, after heeding Leader Schumer's call to balance continued U.S. leadership in AI innovation with safety, transparency, and accountability in the use of the technology:

The recently released Commerce, Justice, and Science Fiscal Year 2024 appropriations bill includes up to $10 million for the establishment of the U.S. Artificial Intelligence Safety Institute (USAISI) at the U.S. Department of Commerce's National Institute of Standards and Technology (NIST). Schumer heralded this funding as a strong down payment that will help support the implementation of President Biden's Executive Order on artificial intelligence, released last year.

“As Majority Leader, I have kept up the drumbeat that our government must implement smart guardrails to make sure that we balance the need for the U.S. to continue to lead in innovation while also addressing any potential risks posed by artificial intelligence,” said Leader Schumer. “I am pleased to announce the allocation of an initial up to $10 million for the establishment of the U.S. Artificial Intelligence Safety Institute at the Department of Commerce. I fought for this funding to make sure that the development of AI prioritizes innovation as well as safety, accountability, and transparency, while supporting American industry and allowing for progress.”

The USAISI will facilitate the development of standards for the safety, security, and testing of AI models; develop standards for authenticating AI-generated content; and provide testing environments for researchers to evaluate emerging AI risks. The AI Safety Institute is working in coordination with a consortium of 200 companies and organizations focused on research and development as well as testing and evaluation, among other activities, to improve the safety and accountability of AI systems.