Why Our Laws and Regulations Aren’t Ready for AI


This post is part of Lifehacker’s “Living With AI” series: We investigate the current state of AI, walk through how it can be useful (and how it can’t), and evaluate where this revolutionary tech is heading next. Read more here.

Generative AI tools like ChatGPT seem to be on the verge of taking over the world, and the world has been scrambling to figure out how to respond. While there are some laws and regulations around the globe that seek to rein in and control this impressive technology, they are far from universal. Instead, we need to look ahead to see how governments will handle AI going forward.

AI is essentially running wild right now

The situation at present is, for lack of a better phrase, not great. The movement to regulate artificial intelligence isn’t keeping pace with the technology itself, which is putting us in a precarious place. When it launched, ChatGPT was fascinating and fun to test out. Today, it and other large language models are already being used by companies to replace labor traditionally done by people.

Consider the example of G/O Media, Lifehacker’s former parent company. Without informing editorial staff, the company recently published AI-generated content on several of its digital media sites, including tech site Gizmodo. That content was riddled with mistakes that knowledgeable writers would not have made and that editors would have easily caught—but since neither group was consulted, the articles went up with misinformation and stayed up.

AI as we understand it in mid-2023 is a particularly novel case. It’s tough to think of the last time a technology captured the world’s attention in quite this way—maybe the iPhone? Even buzzy technologies like NFTs and the metaverse didn’t take off nearly so quickly. It’s no surprise, then, that AI has caught lawmakers with their pants down; like a plot twist out of The Matrix, the robots have staged a sneak attack. Yet legitimate warning bells have been sounding about AI for years, if not decades. Even if the tech arrived faster than expected, that doesn’t excuse the lack of forethought in our laws and regulations in the interim.

But lamenting our lack of foresight isn’t exactly a productive way to deal with the situation we’ve found ourselves in. Instead, let’s take an objective look at where we stand right now with laws and regulations to control this technology, and how the situation could change in the future.

Laws and regulations governing AI in the U.S.

Land of the free, home of the robots. As it stands, the U.S. has very few laws on the books that regulate, limit, or control AI. If that weren’t the case, we might not have the advancements we’ve seen from companies like OpenAI and Google over the past year.

What we have instead are research and reports on the subject. In October of 2016, the Obama administration published a report titled “Preparing for the Future of Artificial Intelligence” and a companion piece, “The National Artificial Intelligence Research and Development Strategic Plan,” which highlight the potential benefits of AI to society at large, as well as the potential risks that must be mitigated. Important analysis, no doubt, but clearly not convincing enough for lawmakers to take any decisive action in the following six years.

The John S. McCain National Defense Authorization Act for Fiscal Year 2019 established the National Security Commission on Artificial Intelligence, which, you guessed it, produced additional reports on the potential good and bad aspects of AI, along with its advice for what to do about it. It dropped its final, 756-page report in 2021.

At this point, the official policy aims to help the development of AI technology rather than hinder it. A 2019 report from the White House’s Office of Science and Technology Policy reiterates that, “the policy of the United States Government [is] to sustain and enhance the scientific, technological, and economic leadership position of the United States in AI,” and that, “Federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth.” It also lays out 10 pillars to keep in mind when considering AI regulation, such as public trust in AI, public participation in AI, and safety and security.

Perhaps the closest thing we have to administrative action is the “Blueprint for an AI Bill of Rights,” released by the Biden administration in 2022. The non-binding document lays out five pillars:

  • “You should be protected from unsafe or ineffective systems.”
  • “You should not face discrimination by algorithms and systems should be used and designed in an equitable way.”
  • “You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.”
  • “You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.”
  • “You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.”

In addition, the blueprint lays out how these pillars are meant to help the public use and understand AI technologies without being taken advantage of or abused. This gives us a glimpse into what AI regulation might look like, especially if the Congress in power proves sympathetic to the White House’s views.

All these reports trend in a positive direction, but at this point, they’re also mostly talk. None of them compels lawmakers to act; they just gently suggest that someone do something. You know, eventually.

We have seen some action, though, in the form of hearings. (Congress loves to hold hearings.)

Back in May, OpenAI CEO Sam Altman and two AI experts went before Congress to answer questions on potential AI regulation. During the hearing, lawmakers seemed interested in ideas like creating a new agency (potentially an international one) to oversee the development of AI, as well as introducing a licensing requirement for those looking to use AI technologies. They also asked who should own the data these systems are trained on, and how AI chatbots like ChatGPT could influence elections, including the upcoming 2024 presidential race.

It hasn’t been that long since those hearings, but we haven’t made much progress in the meantime.

Some states are introducing their own AI regulations

While the federal government doesn’t have much regulation in place at the moment, some states are taking it upon themselves to act, albeit with a light touch—mostly in the form of privacy laws in states like California, Connecticut, Colorado, and Virginia that seek to regulate “automated decision-making” using their citizens’ data.

Laws do exist for one area of AI technology: self-driving cars. According to the National Conference of State Legislatures, 42 states have enacted laws surrounding autonomous vehicles. Teslas with self-driving features are already on the road, and we’re closer than ever to being able to hail an autonomous vehicle, rather than a human driver, to deliver us to a destination. But that’s no replacement for laws and regulations controlling AI in general, and on that front, neither any state nor the federal government has substantial rules in place.

International views on AI regulation

AI regulation is somewhat further along in other parts of the world than it is in the U.S., but that’s not saying a great deal. For the most part, governments around the world, including those of Brazil and Canada, have done similar work to investigate AI’s potential benefits and drawbacks, and, within that context, how to regulate it in the best way possible.

China is the major player on the world stage that’s furthest along in actually getting AI rules on the books. On Aug. 15, rules drawn up by the Cyberspace Administration of China (CAC) will go into effect that apply to generative AI services available to the public. These services will need a license, must stop generating any “illegal” content once it’s discovered and report it accordingly, must undergo security audits, and must align with the “core values of socialism.”

Meanwhile, there’s the EU’s proposed Artificial Intelligence Act, which the European Parliament claims will be the “first rules on AI.” The law would base regulation of AI on the technology’s risk level: Unacceptable risks, such as manipulation of people or social scoring, would be banned. High-risk uses, such as AI in products that fall under the EU’s product safety legislation or AI systems used for biometric identification, education, and law enforcement, would be scrutinized by regulators before being put on the market. Generative AI tools like ChatGPT would need to follow various transparency requirements.

The EU Parliament kicked off talks last month, and hopes to reach an agreement by the end of the year. We’ll see what ends up happening.

As for ChatGPT itself, the technology has been banned in a handful of countries, including Russia, China, North Korea, Cuba, Iran, and Syria. Italy banned the generative AI tool as well, but quickly reversed course.

For now, it seems, the world’s governments are mostly playing wait-and-see with our coming AI overlords.


from Lifehacker https://ift.tt/tYvk2iO
