U.S. and EU Announce Collaborative AI Regulation at TTC Meeting

Rosslyn Elliott / Updated Apr 17, 2024 | Pub. Apr 16, 2024

Council Statement Cites Safety Risks and Workforce Needs

On Friday, April 5, the U.S. and the European Union issued a joint statement pledging cooperation on AI technology development to increase oversight and safety.

The agreement emerged during the meeting of the EU-U.S. Trade and Technology Council (TTC) in Belgium.

U.S. Secretary of State Antony Blinken co-chaired the council along with European Commission Executive Vice President Margrethe Vestager and other officials.

Need to Protect Human Rights and Worker Opportunity

The council announced its intent to collaborate on “rules of the road for emerging technologies such as artificial intelligence (AI).” The statement emphasized the need to protect human rights while promoting innovation, and it also called for efforts “to equip our workforce with the skills necessary to meet the needs created by rapidly changing technology, including AI.”

Tech Experts Support AI Regulation

Many leaders in the tech industry have stated their support for regulation of AI.

Elon Musk, CEO of SpaceX, said in a November 2023 interview that he had been pushing for regulatory oversight of AI for some time. “I think we want to be very careful how we develop AI… I think it would be wise for us to have at least an objective third party who can be a referee that can go in and understand what the various leading players are doing with AI.”

Months earlier, in March 2023, Musk and more than a thousand other technology experts had signed an open letter calling for a six-month pause in the development of the most powerful AI systems. The experts cited “profound risks to society and humanity” from the pell-mell race to build ever more capable AI.

The purpose of the suggested pause was to give independent experts time to develop protocols to make AI tools “safe beyond a reasonable doubt.” But the pause never occurred; with no government policy or oversight in place, nothing required the leading developers to stop.

 

[Image: a glowing brain on a computer screen symbolizes AI. Caption: AI is moving too fast]

Significant AI Risks to the Economy and Social Stability

Governments have been very slow to understand and regulate rapidly changing technology. MIT Technology Review recently offered a cautionary comparison for AI, likening its risks to the harm societies suffered from the rapid, unregulated growth of social media. “The biggest mistake we made with social media was leaving it as an unregulated space,” wrote authors Nathan Sanders and Bruce Schneier. Leaders must now move more quickly to create appropriate regulation and prevent extensive damage to society from AI.

The threat that AI will replace human workers has long been recognized as one of the technology’s major social risks. Now, roughly 18 months after the meteoric rise of generative AI in late 2022, disruptive effects on human employment are beginning to spread.

The New York Times reported on April 10 that, according to an Accenture estimate, AI could replace nearly three-quarters of working hours for most employees in the banking industry. Titans such as Goldman Sachs and Morgan Stanley are reportedly cutting back significantly on their hiring of junior analysts.

Incentives for automating labor are too strong to be healthy for the economy, as analysts from the Brookings Institution noted in a 2023 commentary. “We must prepare ourselves for a potential future in which the demand for human labor undergoes dramatic shifts,” wrote authors Katya Klinova and Anton Korinek. “Massively distorted incentives and a weakened union movement make AI more likely to be put into uses that emphasize labor cost cutting over job quality improvements.”

AI’s potential for major disruption to the labor market (and therefore to the economy) is one urgent reason the U.S. and the EU plan to collaborate on oversight and workforce training. As yet, though, there is no clear vision for the “new jobs” that AI’s proponents claim the technology will create.

Most articles on the subject of new AI jobs discuss an AI-driven need for high-level tech operators and data scientists. But they do not address the fact that a majority of the world’s workers will never be able to perform those jobs.

So, the TTC’s call for training workers in new skills is only a gesture for now, because no one has yet proposed any concrete educational plans for a radically shifting future. There has been no high-profile political discussion of what happens to the millions of workers without the years of education or abstract thinking ability necessary to train for a high-tech or data science job.

 

[Image: a robot touches a glowing button displaying an illustration of a human. Caption: Robots vs. humanity?]

AI’s Risk to Human Rights and Privacy

The TTC’s statement on AI regulation has been a long time in coming, considering that experts have been warning of these major AI risks for well over a year.

Another well-known category of AI risk addressed by the TTC encompasses bias, privacy loss, and other threats to human rights.

Over eighty human rights and consumer advocacy groups signed an open letter to the U.S. Congress in October 2023 warning of the need for legislation to protect society from AI’s dangers.

U.S. and EU to Collaborate, But on What Action?

The public warnings from tech leaders, journalists, and engaged citizens have finally gathered enough momentum to reach the level of government action.

For now, the TTC’s recent statement is only a start. It will take even more time to bring together experts or develop plans of action. The TTC’s working groups are likely to play a role in coordinating across national boundaries.

There are some signs of concrete progress. The UK and the U.S. signed a memorandum of understanding on April 1 to develop frameworks for testing the safety of large language models (LLMs).

 

[Image: a glowing cyborg finger and a human finger touching. Caption: Limiting AI’s risks]

What You Need to Know About AI Safety and Risks

For the moment, most of AI’s serious risks are beyond the control of everyday citizens.

However, being aware of those AI risks can help you lift your voice and vote to support wise policy.

Following the reporting on AI-related developments will also help you make better sense of what you see in the news.

AI has great potential to advance certain aspects of human life, especially in medicine and education.

But knowing AI’s risks to the economy, to your privacy, and to internet culture will help you take steps to avoid some of those problems.

The more you are aware of deepfakes and types of AI scams such as impersonation, the less likely you are to fall for them.

Check out our resource library to learn more about AI and many other helpful internet topics!