
*By Loren Spindola

“I only know that I know nothing,” Socrates is said to have declared, centuries before Christ. Whether or not the words are truly his, the phrase captures his intellectual humility and his constant pursuit of knowledge, and it can be read as an admission that, despite our best efforts at full understanding, there is still much we do not know.

The Socratic phrase could not be more timely. As technologies are developed and scientific discoveries are made, new possibilities and challenges arise and demand that society as a whole remain open to new approaches, and keep learning. Despite the countless positive changes brought about by technology, there will always be something more to explore and discover.

Drawing on the intellectual humility of Socrates, who believed it is necessary to question certainties, opinions and preconceptions, we come to the conclusion that innovation requires diversity. Different perspectives bring new insights. Diversity of race, gender, sexual orientation, political views and social status, combined with diversity in academic background and professional experience, is today fundamental for discovering new alternatives to old problems. After all, it is impossible for any one human being to know everything about everything.

And that is where Artificial Intelligence comes in. Because it is an interdisciplinary field par excellence, involving several areas of knowledge (computer science, statistics, mathematics, psychology, philosophy, ethics and others), each discipline can contribute to the development of fairer and more impartial algorithms. Beyond innovation, then, multidisciplinarity plays a key role in reducing bias in AI: more diversity is needed to reach more perspectives and, ultimately, broader knowledge.

For example, computer science can provide the expertise needed to create efficient algorithms. Statistics can help identify and mitigate statistical biases in datasets, such as lack of diversity or an uneven distribution of classes. Psychology can contribute to understanding how humans perceive and make decisions, which can help identify cognitive biases that may be present in AI algorithms. Philosophy can help question and define the ethical and moral concepts involved in automated decision-making, while ethics can help identify and evaluate the consequences of automated decisions for different groups in society.
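To make the statistical point concrete, here is a minimal sketch, in Python, of how one might flag an uneven class distribution in a labeled dataset. The function names and the dominance threshold are illustrative assumptions, not part of any tool the article mentions:

```python
from collections import Counter

def class_balance(labels):
    """Return each class's share of the dataset, most frequent first."""
    counts = Counter(labels)
    total = len(labels)
    return {cls: n / total for cls, n in counts.most_common()}

def is_imbalanced(labels, threshold=0.75):
    """Flag the dataset when any single class exceeds `threshold` of it."""
    return max(class_balance(labels).values()) > threshold

# A toy loan-decision dataset where one class dominates:
labels = ["approved"] * 80 + ["denied"] * 20
print(class_balance(labels))   # {'approved': 0.8, 'denied': 0.2}
print(is_imbalanced(labels))   # True
```

A check like this is only a first step; a real audit would also ask whether each demographic group is adequately represented within each class.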

By bringing together different specialists and, therefore, different perspectives, multidisciplinarity can help create fairer and more inclusive AI systems, avoiding the perpetuation of discrimination. Furthermore, multidisciplinary collaboration can help promote transparency in automated decision-making, allowing developers and users to understand how decisions are made and to assess the social consequences of those decisions.

The main international organizations recognize the importance of multidisciplinarity for the development of responsible AI and issue recommendations to that effect: the UN, the OECD, UNESCO and the European Union itself all recommend including multidisciplinary perspectives in discussions about the future of AI.

Here in Brazil, despite the collegiate and diverse effort underway within the Brazilian AI Strategy and the AI study group at ABNT, we seem to be going against the grain: we have a proposal for AI regulation prepared by a group formed only of jurists, who (like me) have a “limited” view of the technology, precisely because they lack technical knowledge of how it works and how it is developed.

The fact that the topic was widely debated in public hearings and international seminars, attended by national and international specialists with the laudable intention of bringing a sociotechnical view of the technology, unfortunately did not guarantee that the representativeness of those contributions appeared in any of the 45 articles of the proposed regulation.

Since the days of our monarchy we have kept the expression “para inglês ver” (“for the English to see”), used when someone does something merely to comply with formalities, without actually doing it in earnest. The debate generated a 900-page report that collects all the participants' views in its annexes, but it seems that due care was not taken with the principle of neutrality: the drafting of the proposal neglected all the sides a regulation would affect, and the result turned out dangerously rigid and biased.

The proposal assumes from the outset that AI will harm someone, bringing an extensive list of rights and prerogatives for the “affected people” while forgetting the developers, entrepreneurs, scholars, researchers and beneficiaries of the technology, who equally need a safe and conducive environment in which to carry out their activities. It also forgets the consumers and citizens who can be positively affected by responsible, ethical and safe AI.

Beyond the negative bias of considering only AI's risks, the proposal brings a restrictive model, unprecedented in the world, with obligations that are technically challenging to fulfill and responsibilities outside the scope of action of the agents involved. These are points that a team of computer engineers and data scientists could easily explain.

It is also necessary to be aware of the consequences of regulatory overlap. We already have the Consumer Defense Code, the Civil Code, the Penal Code, the Civil Rights Framework for the Internet, the General Data Protection Law and, very soon, the Brazilian Law on Freedom, Responsibility and Transparency on the Internet, which together establish rules and obligations governing consumer relations and the responsibility and conduct of companies.

We face a positive, perhaps unique, opportunity to build together a modern, flexible regulation, adapted to the new times, that both protects fundamental rights and brings security to citizens, and creates a favorable environment for companies to develop and invest in technology in Brazil. But it is necessary to involve all actors in constructing a new, balanced proposal.

Bias is inherent to human beings, and therefore (going back to Socrates) it is essential to be aware of our limitations in order to overcome them through different perspectives and opinions. Interestingly, we are already developing ways to mitigate bias in AI applications, in pursuit of fair and impartial systems.
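One simple way such mitigation work often begins is by measuring disparities in outcomes across groups. The sketch below, in Python with hypothetical function names, computes a demographic-parity gap: the largest difference in favorable-decision rates between any two groups, where zero means parity. It is an illustrative metric, not the method of any specific proposal:

```python
def selection_rate(decisions, groups, target_group):
    """Fraction of favorable (1) decisions received by one group."""
    member_decisions = [d for d, g in zip(decisions, groups) if g == target_group]
    return sum(member_decisions) / len(member_decisions)

def demographic_parity_gap(decisions, groups):
    """Largest gap in favorable-decision rates across groups (0 = parity)."""
    rates = {g: selection_rate(decisions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

# Toy automated decisions (1 = favorable) for two demographic groups:
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups))  # 0.5 (A: 0.75, B: 0.25)
```

Metrics like this only surface a disparity; deciding whether and how to correct it is exactly where the multidisciplinary judgment the article defends comes in.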

But there is no point in giving voice to all parties if you do not really listen to them. The objective of the proposal presented was to bring legal certainty through governance and the protection of fundamental rights, and nobody is against that. It is the “how” that needs to be carefully planned so that it does not backfire.

*Loren Spindola, Leader of the ABES Artificial Intelligence Working Group
