The Evolving Use of AI in Government Agencies
Artificial Intelligence (AI) and machine learning technologies have grown steadily in number and capability over the past decade. They are now far more accessible to Government leaders, staff, and policymakers thanks to popular large language models (LLMs) such as ChatGPT, Bing, and Bard.
AI technologies are here to stay, serving as powerful tools to help agencies meet their goals and better serve the public. Adopting any new technology in the Government workforce requires planning and management to ensure ethical and effective use of AI, and, as with any new technology, there are risks that leaders should monitor and mitigate.
In many ways, generative AI is like a new search engine interface: you can find helpful answers to basic inquiries, but you need to check them for accuracy, tone, and completeness. As the technology rapidly advances, the use cases become more varied, complex, nuanced, and ethically challenging.
At its simplest level, AI technology can learn from data and perform tasks that normally require human engagement, such as understanding language, recognizing images, making decisions, and solving problems. Examples of AI applications include:
Chatbots that can answer questions and provide information to citizens (a minimal sketch follows this list)
Natural language processing tools that can analyze text and speech data
Image generation tools that can create realistic graphics and artworks
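To make the chatbot example concrete, here is a minimal sketch of a citizen-facing FAQ assistant. It assumes a hypothetical agency-hosted model endpoint (AGENCY_LLM_URL) running inside the agency's security boundary; the URL, request shape, and "answer" field are illustrative assumptions, not a real API.

```python
import json
import urllib.request

# Hypothetical endpoint for an agency-hosted model inside the security boundary.
AGENCY_LLM_URL = "https://llm.internal.example-agency.gov/v1/chat"

def ask(question: str) -> str:
    """Send a citizen question to the internal model and return its answer."""
    payload = json.dumps({
        "system": "Answer citizen questions using only approved agency FAQs.",
        "user": question,
    }).encode("utf-8")
    request = urllib.request.Request(
        AGENCY_LLM_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["answer"]  # assumed response field

if __name__ == "__main__":
    print(ask("How do I renew my passport?"))
```

Keeping the endpoint inside the agency's own boundary matters for the data-leakage risk discussed below.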
Government agencies can use AI to improve their efficiency, accuracy, innovation, and customer satisfaction. For instance, agencies can use AI to:
Automate repetitive and tedious tasks such as data entry, document processing, and reporting, an approach known as Robotic Process Automation (RPA); see the sketch after this list
Enhance their decision making and problem solving by providing insights, predictions, and recommendations
Engage with their customers and stakeholders by providing personalized, interactive, and accessible communication channels
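As a concrete illustration of the RPA-style automation mentioned above, the sketch below pulls case numbers and dates out of free-text intake files and writes them to a CSV report. The file layout and the CASE-#### identifier pattern are invented for illustration, not a real agency format.

```python
import csv
import re
from pathlib import Path

# Illustrative pattern: a case ID like "CASE-0042" followed by an ISO date.
CASE_RE = re.compile(r"(CASE-\d{4}).*?(\d{4}-\d{2}-\d{2})", re.S)

def extract_records(folder: str):
    """Yield one record per case ID/date pair found in each .txt file."""
    for path in Path(folder).glob("*.txt"):
        text = path.read_text(encoding="utf-8")
        for case_id, date in CASE_RE.findall(text):
            yield {"file": path.name, "case_id": case_id, "date": date}

def write_report(folder: str, out_csv: str = "report.csv") -> None:
    """Write all extracted records to a CSV report."""
    with open(out_csv, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["file", "case_id", "date"])
        writer.writeheader()
        writer.writerows(extract_records(folder))
```

Even a small script like this can replace manual re-keying of routine documents.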
The three biggest current risks are security, inaccuracy, and challenges with the underlying language models:
Leakage of data, information, and concepts into public large language models is the most pressing risk; agencies should only allow access to tools that operate within the agency's security boundary
Inaccuracy of results, when AI tools confidently return answers that are incorrect or made up (sometimes called “hallucinations”); a simple guardrail sketch follows this list
Inconsistencies in the underlying language models, or unclear data sources, which can skew results and produce undesired outcomes if not corrected or adjusted
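One common mitigation for hallucinated answers is to ground responses against approved source material before releasing them. The sketch below is a deliberately simple version of that idea: it accepts a model answer only if most of its substantive words appear in the approved source text. The word-overlap heuristic and the 0.6 threshold are simplifying assumptions, not a production-grade factuality check.

```python
def grounded(answer: str, source: str, threshold: float = 0.6) -> bool:
    """Return True if most substantive words in the answer appear in the source.

    A crude word-overlap heuristic; real systems use stronger checks
    such as citation verification or retrieval-based grounding.
    """
    terms = {w.lower().strip(".,;:!?") for w in answer.split() if len(w) > 4}
    if not terms:
        return True  # nothing substantive to verify
    source_text = source.lower()
    hits = sum(1 for term in terms if term in source_text)
    return hits / len(terms) >= threshold

# Example: only release answers that pass the grounding check.
faq = "Passport renewals are processed within six to eight weeks."
answer = "Renewals are processed within six to eight weeks."
assert grounded(answer, faq)
```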
In a government contracting environment, contracting officers should receive training and policy guidance on these issues and communicate their guidelines clearly and explicitly. For example, in a procurement setting, the Government should reinforce a principle that remains true: each bidder is 100% responsible for the content of its proposals. Agencies may also provide guidance stating that program offices cannot use AI to author statements of work. In a contract delivery context, the government should be clear about how and when it wants AI tools to be used. Mission owners may want AI tools to increase efficiency and automate mundane jobs but may not want them used to author important agency documents.
Every agency should initiate a cross-agency working group to define how AI can enhance its missions and make it more efficient, effective, and customer-experience-driven. These groups would identify common use cases and provide recommendations on policy, guidance, and potentially agency-wide training. They can also conduct pilot projects to prove the value of AI and help mitigate risks.
AI is here to stay; it is a transformative tool for Government agencies, shaping a more efficient and ethical path toward mission success. Careful planning, clear guidance, and risk management will help agencies use these technologies effectively and ethically and realize the full potential they bring to bear.