
Why Lasting Collaboration Is Key for Effective AI Regulation
With the advent of generative AI and mounting concerns over data privacy, bias, and related harms, the need for effective AI regulation is becoming increasingly urgent.
The Federal Government and the private sector already have an interest in working together to create strong regulations. The private sector’s knowledge and flexibility provide the Federal Government with innovative solutions, while collaboration reduces the risk that rigid Federal regulations will discourage tech companies from investing in products that could be regulated out of existence.
Therefore, the biggest question about regulating generative AI in the United States is not ‘should it be regulated?’ or ‘who should regulate it?’ but ‘how do we regulate it?’
The European Union (EU), which focuses on regulating individual systems, has tackled this question with mixed results. According to a recent Stanford study that analyzed the ten most widely used generative AI models, none complied with EU regulations. China’s plan to regulate AI comprehensively and aggressively may produce compliant businesses, but at the cost of innovation and collective knowledge.
Engaging tech companies in the regulatory process will not solve these problems overnight. Developing effective regulations, however, requires sustained collaboration that draws on the strengths of both the public and private sectors. Some agencies have already taken tentative steps in this direction. The U.S. Food and Drug Administration (FDA) ran a pilot program that certified AI development processes rather than the systems themselves, making the approach inherently adaptable and open to innovation. Its “Pre-Cert Pilot” evaluated both the “culture of quality and organizational excellence” of the developer and the developer’s ability to build and properly monitor safe and effective devices. The FDA reinforced this approach through continuous monitoring of key performance indicators (KPIs) tied to organizational effectiveness and product performance.
Regulation of generative AI that is both strong and flexible matters for public-facing and internal Federal functions alike. According to OpenAI, in 2024 alone more than 90,000 Federal, state, and local government employees submitted over 18 million prompts to ChatGPT to do everything from writing, drafting, and summarizing documents to generating code. This presents a clear opportunity for the private sector, but to be effective, companies will need to fully understand the needs and requirements of Government systems, some of which face vastly different challenges depending on their security levels and stage of technology modernization. Conversely, Government regulations must preserve the technical flexibility and room for innovation that the Government needs within its own systems.
In the absence of firm Government regulations, OpenAI’s recently launched ChatGPT Gov is a promising start. The Government-specific platform can be deployed in each agency’s own Microsoft Azure commercial cloud or Azure Government community cloud, allowing agencies to manage their own security, privacy, and compliance requirements. Considering systems like these as regulation takes shape could meaningfully improve Government efficiency.
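To make the deployment model concrete, here is a minimal sketch of what agency-side access to such a platform might look like, assuming an OpenAI-compatible service hosted in the agency’s own Azure tenant. The endpoint URL, deployment name, and environment variable below are hypothetical placeholders, not documented ChatGPT Gov values:

```python
import os

from openai import AzureOpenAI  # requires the `openai` Python package (v1+)

# Hypothetical agency-managed deployment: because the endpoint lives in the
# agency's own Azure Government tenant, credentials, logging, and compliance
# controls remain under the agency's control rather than the vendor's.
client = AzureOpenAI(
    azure_endpoint="https://example-agency.openai.azure.us",  # hypothetical endpoint
    api_key=os.environ["AZURE_OPENAI_API_KEY"],  # kept in the agency's secret store
    api_version="2024-06-01",
)

# A routine task from the usage statistics above: document summarization.
response = client.chat.completions.create(
    model="agency-gpt-deployment",  # hypothetical deployment name set by the agency
    messages=[
        {"role": "user", "content": "Summarize this policy memo in three bullet points."},
    ],
)
print(response.choices[0].message.content)
```

The design choice worth noting is that the model endpoint, the API key, and the deployment name are all provisioned and governed by the agency itself, which is what lets each agency map the same platform onto its own security, privacy, and compliance requirements.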
Whatever form regulation takes in the United States, the Government should continue to involve tech companies and draw on their expertise to create lasting, responsible, and adaptable regulation.