
By Loren Spindola, Leader of the Artificial Intelligence Working Group at ABES

Since the beginning of time, any tool can be used for good or ill. Even a broom can be used to sweep the floor or to hit someone over the head. The more powerful the tool, the greater the benefit and the harm it can cause. The same goes for Artificial Intelligence, which can become a very powerful weapon. And, unlike the broom, whose use and features we know well, we are still nowhere near knowing the full potential of AI. That prompts an important reflection: the more the technology we develop changes the world, the more responsible we become for that transformation. In this context, it is necessary not only to accept this responsibility, but also to develop good global practices that help define principles in the various international forums, alongside investment in artificial intelligence centers and in research. The point, however, is that the technology sector cannot address these challenges alone. And here we arrive at the central point of the discussion: the need to combine self-regulation with government action.

We cannot ignore the fact that AI, like all technology, is designed to be global. It needs to work the same way everywhere. But how can it do so when laws and regulations differ from country to country?

Considering the object of the law, it is important to highlight that there is no universally accepted definition of what Artificial Intelligence is. What does exist is a consensus to avoid definitions that are too broad or too obscure (and also a consensus on stipulating what is not AI; in that sense, I consider that Bill 21/20 was very skillful in excluding automation, or we would be regulating the use of formulas in spreadsheets, for example). Thus, Artificial Intelligence is, basically, a computational system that can learn from experience, discerning patterns in the input data in order to make decisions and predictions. It learns from its mistakes to generate new, more accurate results. The fact that AI learns from experience, that is, through cognitive learning, as we humans do, means the system does not have to be explicitly programmed for each task and can still deliver quick, accurate and deductive answers.
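To make that distinction concrete, the sketch below contrasts a fixed spreadsheet-style formula with a system that derives its parameters from example data and then makes predictions on new inputs. It is a minimal illustration only; the data, names and the choice of ordinary least squares are hypothetical and are not part of PL 21/20 or of the OECD text.

```python
# Minimal sketch: a rule written by hand vs. a predictor learned from data.
# All data and names here are illustrative.

def fixed_formula(hours_studied: float) -> float:
    """Plain automation: a spreadsheet-style formula fixed by its author."""
    return 50 + 5 * hours_studied

def learn_from_examples(examples):
    """Fit a line y = a + b*x to observed (x, y) pairs by ordinary least
    squares: the parameters come from the data, not from a typed-in rule."""
    n = len(examples)
    mean_x = sum(x for x, _ in examples) / n
    mean_y = sum(y for _, y in examples) / n
    b = sum((x - mean_x) * (y - mean_y) for x, y in examples) / \
        sum((x - mean_x) ** 2 for x, _ in examples)
    a = mean_y - b * mean_x
    return lambda x: a + b * x  # a predictor inferred from experience

# Hypothetical observations: (hours studied, exam score)
data = [(1, 52), (2, 61), (3, 64), (4, 73), (5, 80)]
predict = learn_from_examples(data)

print(fixed_formula(6))       # always 80.0, regardless of the data
print(round(predict(6), 1))   # 86.4, a prediction shaped by the observed pattern
```

The first function behaves the same no matter what happens in the world; the second changes its behavior as the examples change, which is the kind of system the definition above is trying to capture.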

It is worth mentioning that PL 21/20 adopted a concept based on the definition of AI stipulated by the Organisation for Economic Co-operation and Development (OECD), and that reassures us. Under this concept, an AI system is a computational system that, given a set of objectives defined by humans, can, through data and information processing, learn to perceive and interpret the external environment and to interact with it, making predictions, recommendations, rankings or decisions.

Today, artificial intelligence systems are used by all sectors of the economy. There are many types of AI systems, offering different benefits, opportunities, risks and regulatory challenges. When proposing a Legal Framework for Artificial Intelligence in Brazil, it is necessary to bear in mind that AI systems differ from one another, and that any attempt to group them together without considering their use is harmful to the development and application of the technology in the country.

Interestingly, AI forces the world to confront both the similarities and the differences between philosophical traditions. And we know that ethics does vary across cultures. Despite being developed globally, to be used anywhere in the same way, AI is transformed when applied in a particular place. This leads to a new reflection: the sociotechnical aspect of AI.

It is worth bringing this aspect into our discussion because, although it seems obvious, it is still little discussed. Here in Brazil, to give an idea, there is little literature available on the subject. In his research, Prof. Dr. Henrique Cukierman, from UFRJ, brings a sociotechnical perspective to software development. Among several analogies, it becomes clear that this perspective pushes us to go beyond mathematical and algorithmic models and to include the context of the real world. And to bring the technical and the social together, an interdisciplinary perspective is needed, which is, in turn, essential for mitigating biases. Given that AI systems have this sociotechnical aspect, no homogeneous group will be able to scrutinize all the technicalities of the subject and, consequently, design a law that encompasses all the specificities of the technology. Philosophers, engineers, developers, researchers, academics, jurists, entrepreneurs and civil society all need to be brought into the debate. Everyone needs to contribute.

As important as defining the object is thinking about the purpose of the law. After all, what is the purpose of establishing a Legal Framework for AI in Brazil? Where does Brazil want to go with this technology? I imagine our goal is to create an environment conducive to innovation, so that companies invest in research and development, so that people can confidently enjoy the benefits of the technology, and to bring legal certainty. In other words, to benefit society, the private sector, academia and government. And how do we find the right balance between all these needs? The fact is that AI is a constantly evolving technology, and we have no way of predicting what it will look like a year from now. We don't have a crystal ball.

It seems counterproductive (or even utopian) to expect a law to anticipate future realities, at the risk of harming the development of the technology itself. Or, worse, to have it regulate situations and hypotheses that cannot currently be foreseen, which would render the norm innocuous and obsolete. It is worth noting that, even without a specific AI law in Brazil, companies are already developing remarkable solutions with the technology, in a serious and responsible way.

That is why we advocate a principles-based text with a risk-based approach and clear guidelines on where Brazil wants to go and how to get there. If and when necessary, it is the use of artificial intelligence that should be regulated, not the technology itself. In this way, we will use technology to reduce injustice and inequality in society. With technology on our side, we will expand our capacity to think and act, always with an eye on transparency, respect and humanity.

