The use of generative artificial intelligence (genAI) by government employees is expected to face new, stronger restrictions announced by the Biden Administration, after earlier attempts at regulation were criticized as too vague and ineffectual.
US Government Steps Up Efforts to Make Better Use of AI Technology
The executive order, anticipated to be unveiled on Monday, will ease immigration rules to permit a larger intake of technology workers, a move intended to accelerate the US's AI development efforts.
Last May, President Biden issued guidance addressing the rapid advancements in generative AI that have been causing concern among industry experts. Vice President Kamala Harris also met with CEOs from Google, Microsoft, and OpenAI, the creator of the popular ChatGPT chatbot, to discuss potential issues with generative AI, including security, privacy, and control problems.
US Government Planned Ahead of ChatGPT's Launch
Even before ChatGPT launched in November 2022, the administration had already made public its plans for an AI Risk Management Framework and an "AI Bill of Rights," which also included a plan for creating a National AI Research Resource.
The upcoming executive order is also set to strengthen national cybersecurity defenses by requiring US government agencies, including the Department of Defense, the Department of Energy, and the intelligence agencies, to assess large language models (LLMs) before they can be deployed.
The US is not alone in trying to rein in AI technology. European nations have been working to ensure that AI-generated content does not violate their regulations, including prohibitions on child pornography and Holocaust denial. Italy, for example, temporarily banned ChatGPT over privacy concerns after a data breach exposed user conversations and payment details for the natural language processing software.