*By Filippo Di Cesare

The initiative to create the Legal Framework for the use of Artificial Intelligence in Brazil, approved at the end of September by the Chamber of Deputies and now under review by the Federal Senate, risks being ineffective given its principle-based nature. The absence of specific regulations offers little support for concrete cases, which tend to multiply if we consider that AI is the future.

We see that this technology is going through a period of exponential evolution, and creating detailed norms would risk the Law becoming outdated even before it is approved. From this perspective, it is clear that setting regulation with the necessary level of detail while the technology continues to evolve is not feasible. Therefore, at this point, it makes sense for the discussion to remain at the level of principles so as not to limit local innovation.

In any case, we must treat the potential of this technology comprehensively, as a lever to be stimulated to increase the country's competitiveness. In other words, the more accessible it is, the more projects will be created and, consequently, the more the entire ecosystem around this technology will develop. In this sense, the government's attention to this front is an opportunity to discuss a true national AI plan, one aimed at legal certainty, the promotion of investment, and cooperation between universities and companies.

Along these lines, it is necessary to consider investments in research and development, as well as in training the specialized workforce needed to meet the demand for professionals who will deal with AI - remembering that machines take people away from activities in order to do them better, but at the same time create opportunities.

Therefore, it is important that the AI Legal Framework not be limited to treating the technology as a risk to be managed. If we go into detail on how this technology works and how the Legal Framework intends to regulate its activities, we may run into conflict when relating AI to corporate transparency, for example, with regard to discrimination based on race, gender and sexual orientation. The criteria the algorithm uses for decision making are, yes, provided by the AI operator, but there are always parameters based on patterns that are learned autonomously, that is, without moral judgment.

In this sense, even with transparency, and even though the AI agent can periodically control and normalize the "intentional" algorithms, there are decisions that cannot be interpreted, as it is technically difficult to say which parameters the algorithm has learned in order to reach a given decision. If the AI Legal Framework's interpretability criteria are too strict, Artificial Intelligence will fail to exploit its full potential.

The expansion of AI has sparked discussions around human rights, privacy and data protection, and the labor market. The topic is complex, and a permanent regulation of the technology requires a more extensive level of detail than what is initially being presented in the proposals currently before Congress. The focus of the AI Legal Framework, therefore, cannot be linked only to ethical and regulatory issues. It is necessary to go further: it is time to promote a national AI plan involving companies, educational institutions and the government.

*Filippo Di Cesare is CEO for Latam (Brazil and Argentina) at Engineering

Notice: The opinion presented in this article is the responsibility of its author and not of ABES - Brazilian Association of Software Companies

 
