I found it very interesting to learn that, likely following Italy’s example, Europe as a whole is starting to organize around some kind of specific legislation to regulate AI.

In my understanding, the main reason authorities have to start thinking about AI regulation doesn’t lie in AI being dangerous or out of control, but in the fact that we, as humans, have a very prolific imagination. And with AI models now available to millions of people, requiring no technical knowledge whatsoever to use, you can imagine what happens. Take, as an example, the guy who used Midjourney to create a fake image of the Pope wearing a white puffy jacket.

Consider also Claudia, a beautiful and horny 19-year-old woman, also 100% created via AI, the brainchild of two computer science students who decided to create her as a joke after learning about a guy who made $500 catfishing users with photos of real women. And they succeeded.

Or take Alexander Hanff’s case as a final example. ChatGPT wrongly stated to Alexander, a renowned computer scientist who happens to be very much alive, that he had passed away in 2019.

These examples, at least to me, all reflect the need to institute some kind of regulation. I just don’t think writing such regulation is going to be that easy. But I’ll come back to that in a moment.

According to a report published this week by the Financial Times, members of the European Parliament intend to create Europe’s Artificial Intelligence Act, a sweeping set of regulations on the use of AI. Among them is a requirement for developers of products such as ChatGPT to declare whether copyrighted material is being used to train their AI models, so content creators can demand payment when applicable.

They also want responsibility for misuse of AI programmes to lie with their developers rather than with the small businesses (and individuals) using them, which I don’t think is a bad idea… but in my opinion, the most interesting obligation the European AI Act may bring into reality is for LLM chatbots to explicitly tell users that they are not human.

You might think that a language model explicitly telling you that it is not human is unnecessary. But consider all the coverage the media has been giving AI recently: it feeds people’s imagination, making them believe LLMs might have abilities and consciousness they don’t actually have. I spent a couple of hours discussing this exact point with my parents a couple of weeks ago, and I can say that it can become very difficult, sometimes, to separate fact from fiction in people’s minds.

The difficulty of separating reality from imagination is fueled by the fact that, when people identify that a machine, rather than another human, is interacting with them, the machine heuristic comes into play, making them believe that machines are accurate, objective, unbiased and infallible. This clouds people’s judgment and causes an overconfidence in machines’ judgment and decision making.

Also not helping to convince people that AI is not infallible is our own behavior: humans tend to unconsciously assume competence even when a given technology doesn’t really warrant it, and to lower their guard while machines perform their tasks.

People also tend to treat computers as social beings even when the machines show only the slightest hint of humanness, as is the case with language models. We tend to apply the same human-to-human interaction rules to our interactions with machines, being more polite, for instance (or have you never typed a Google query in the exact form of a question you’d ask another person, even though the algorithm doesn’t need you to phrase it that way?). Thus, when computers seem sentient, people tend to trust them, blindly.

So take the great imagination our human brains have and combine it with these behavioral biases I’ve talked about, and you have not one, but at least two good reasons why regulation is needed for AI. Like I said before, though, I don’t think this will be easy, even if Europe and its legislators are trying to take the lead.

I believe AI is different from conventional engineering products. Take an airplane, for instance: the minds behind an airplane, and each one of its components, can establish precisely how the airplane will behave in each of the conditions it was designed for. So, think of a parameter, like fuel consumption or maximum speed, and the engineers will always be able to answer your questions based on those projected, planned parameters, given a set of pre-established conditions.

But AI? You probably know more than one person — or a couple of them — who have at least once been amazed at what ChatGPT has created, whether from their own prompts or because they watched a video or read an article describing what happened after someone prompted it to do something. Think AI and you’ll always be surprised. There’s even hallucinating AI!

Ask these language models the same question twice and you’ll rarely receive the same answer: at the settings these chatbots typically run with, the model samples each word from a probability distribution instead of always picking the single most likely one, so the response comes out a little differently every time. This means that, contrary to my airplane example above, none of the engineers who develop these AI models can tell you precisely what the resulting output will be. So… how do you legislate about something unpredictable?
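To make this concrete, here is a minimal sketch of that sampling step, using a toy, invented score table rather than any real model or vendor API. It only illustrates the mechanism: run the same “question” several times and the continuation varies, because the next word is drawn at random according to its probability.

```python
# Toy illustration of temperature sampling (invented numbers, no real model).
import math
import random

# Hypothetical scores a model might assign to candidate next words after the
# prompt "The capital of France is" (made up for illustration).
next_word_scores = {"Paris": 4.0, "located": 2.5, "a": 1.5, "famously": 1.0}

def sample_next_word(scores: dict[str, float], temperature: float = 0.9) -> str:
    """Pick one word by sampling from a softmax over the scores."""
    # Higher temperature flattens the distribution (more variety);
    # a temperature near zero makes the top word almost certain.
    scaled = {w: s / temperature for w, s in scores.items()}
    max_s = max(scaled.values())
    weights = {w: math.exp(s - max_s) for w, s in scaled.items()}  # stable softmax
    words = list(weights)
    return random.choices(words, weights=[weights[w] for w in words], k=1)[0]

# "Ask" the same question five times: the sampled continuation varies.
for attempt in range(5):
    print(attempt, sample_next_word(next_word_scores))
```

Turn the temperature down to nearly zero and the output becomes close to deterministic; at the settings chatbots usually ship with, variation is the norm.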

The unpredictability of artificial intelligence models will also require developers to envision the ways in which the computer might behave, trying to stay one step ahead of potential violations of social standards and responsibilities. So, periodic audits of AI’s outcomes will probably need to be mandated by whatever regulations show up.

I believe good regulations reduce risks. But again… AI is, at least for the moment, unpredictable. And laws work best when the matters they govern are well defined. Is it even possible to define AI well while the field is still evolving? Most of the material I read on AI regulation is in English, and much of it says that many technology-related laws have failed in the past, even for better-defined subjects like e-mail, because they were too slow to adapt to rapid changes in technology. Most of the time, such laws become obsolete the moment they are introduced.

Another aspect yet to consider is the actual need for AI-specific regulation. Some people, like John Villasenor, Professor of Electrical Engineering, Law, Public Policy, and Management at UCLA, believe that many of the potentially problematic outcomes of AI can already be addressed by existing legislation.

John argues, for instance, that bank algorithms that end up being discriminatory in loan application decisions are already subject to the Fair Housing Act in the US, and that AI in a driverless car that gets involved in an accident is subject to products liability law.

I believe that even if John is partially right, AI, the way it is being promoted and used, doesn’t reach only Americans. It has spread worldwide, and there are countless countries, like mine, where such legislation, so vital for dealing with AI misfires, hasn’t even been discussed yet. And there’s still the fact that when a country decides to impose limits on a technology, developers can always move to a less regulated country to continue working.

Now, I have worked for quite some time with regulating processes and creating procedures. Processes are subject to change as much as technology is, because of continuous improvement and continuous evolution. When a process evolves, we revisit its associated procedures and update them to reflect the necessary changes. When you regulate a technology and it evolves from its initial state, the same should happen: update the related legislation. But as with processes, updating legislation is easier said than done: people, time and will are, more often than not, lacking.