Watershed Moment for Responsible AI or Just Another Conversation Starter? 

The Biden Administration’s recent moves to promote “responsible innovation” in artificial intelligence may not fully satiate the appetites of AI enthusiasts or defuse the fears of AI skeptics. But the moves do at least begin to form a long-awaited framework for the ongoing development of one of the more controversial technologies affecting people’s daily lives.

The May 4 announcement included three pieces of news.

  • $140 million in funding to launch seven new AI research institutes
  • A commitment from Google and other leading AI developers to participate in a public evaluation of AI systems
  • A commitment to draft policy guidance on the use of AI systems in government

Beyond that, the statement didn’t include a lot of detail. It left plenty of room for interpretation and speculation about how far the White House is willing to go to promote AI or rein it in. A year from now, we could look back on this as a watershed moment. Or we could remember the announcement as little more than a conversation starter with no tangible impacts.

What can we take from the early May actions? Why does this matter to us? Let’s dig in a little.

First, the funding news can only be considered welcome. Adding seven new institutes to the 18 already performing AI R&D should accelerate innovation in technologies that are transforming every industry. The White House said the new institutes will focus on responsible, ethical advances that both “serve the public good” and solve problems in critical areas like climate, agriculture, energy and cybersecurity. More grants bringing together government, industry, academia and nonprofit interests can generate real benefits, economic and otherwise.

The second pledge is more interesting, and much more open-ended. It calls for a public assessment of existing generative AI systems. A group of AI developers will study how ChatGPT and other AI models align with the principles and practices outlined in the White House’s Blueprint for an AI Bill of Rights.

What will come of this independent exercise? It depends on the criteria the group uses to evaluate the impacts of these generative systems. ChatGPT, the AI chatbot capable of writing credible copy and answering a wide range of questions, has faced a firestorm of controversy since it debuted last fall. Italy briefly banned the tool over privacy concerns. Will the AI developers studying these models weigh innovation and economic prospects more heavily than the potential elimination of whole categories of jobs? Can society restore privacy once AI tools have taken it away?

The third statement could be the most impactful of all. The Office of Management and Budget is developing policy guidance on the use of AI systems by the U.S. government. The guidance likely will establish rules for federal use and recommendations for state and local governments and private businesses.

Could this be the start of something bigger? Could it lead to GDPR-style regulation that puts genuinely stringent limits on what kind of AI can be developed and whom it affects? Cryptocurrency regulation is still trying to catch up with the technology, and it has taken a number of large failures for regulators to take notice, leaving many consumers exposed in the meantime.

Will it take several AI missteps for regulation to catch up with AI development? Before cryptocurrency, cryptographic algorithms, at one time classified as munitions, were themselves export controlled to keep secure encryption schemes out of the hands of unauthorized individuals and foreign powers. Given the potential for AI to be used in military applications, could we see similar export controls established for new AI developments? Does either of these sectors provide a model, or a warning sign, for responsible AI?

Right now, the prospects for future legislation are unclear. Could the U.S. government follow Italy’s lead and ban generative AI models, at least temporarily? Could it halt development of controversial AI models inside the country, or at least slow them down?

Our best guess is that the government won’t put any significant curbs on AI. If the U.S. slows development down, it risks seeing its research initiatives surpassed by other countries. Use of the technology is accelerating. The cat is already out of the bag.

Ethical AI has become a hot research topic in recent years, and as a result, there has been a lot of publicity about facial recognition systems misidentifying people of color because the underlying models were trained on datasets dominated by lighter-skinned faces.
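
To make the bias concern concrete, here is a minimal sketch, in plain Python with invented data, of the kind of audit researchers run on such systems: measure accuracy separately for each demographic group and flag the gap. The group names, labels, and numbers are hypothetical, purely for illustration.

```python
from collections import defaultdict

def per_group_accuracy(records):
    """Return {group: accuracy} for (group, predicted, actual) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Invented, illustrative records -- not data from any real system.
records = [
    ("group_a", "match", "match"),
    ("group_a", "match", "match"),
    ("group_a", "no_match", "no_match"),
    ("group_b", "no_match", "match"),  # a misidentification
    ("group_b", "match", "match"),
]

acc = per_group_accuracy(records)
gap = max(acc.values()) - min(acc.values())
print(acc)                          # {'group_a': 1.0, 'group_b': 0.5}
print(f"accuracy gap: {gap:.0%}")   # accuracy gap: 50%
```

A large gap like this is exactly the signal that a model’s training data under-represents some groups, and it is the sort of measurable criterion a public evaluation could adopt.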

Beyond biased facial recognition, the government could also aggressively target other applications that could violate people’s rights. Should AI-influenced evidence be admissible in court? If AI is going to be used to build criminal justice models, those models shouldn’t inherit bias from the data they’re trained on. Ultimately, responsible AI means ensuring the technology doesn’t disadvantage certain segments of the population, and there are concrete steps the government can take toward that goal. It will be interesting to see how many it tries to institute.

While AI has been a compelling topic for years, it’s currently trending in a big way. The debate is on over whether public policy will truly help people and AI coexist. The White House has elevated the level of discussion with its fact sheet on responsible AI. It will be fascinating to see where it goes.
