
How Brazil seeks balance between innovation and rights protection in the global race for regulatory frameworks for AI

* By Luiz Felipe Vieira de Siqueira

 

Artificial Intelligence (AI), and in particular Generative AI, is rapidly reshaping technological, economic, and social landscapes around the world. As the power and ubiquity of these tools grows, the need for effective governance frameworks becomes increasingly urgent to ensure that the development and deployment of AI serve the common good and protect fundamental rights. The "Artificial Intelligence Governance Report" of the World Economic Forum, in collaboration with Accenture, offers a comprehensive overview of the global landscape, highlighting regulatory challenges and approaches (WEF, 2024). By analyzing this report, we can draw important parallels with AI regulatory approaches in Brazil. 

Artificial Intelligence Governance Report – World Economic Forum  

The WEF report describes a global AI governance landscape that is complex, fragmented, and rapidly evolving. It identifies diverse regulatory approaches adopted by different jurisdictions, such as risk-based (exemplified by the European Union), rules-based (China), principles-based (Japan), and outcomes-based. Generative AI, with its scale, power, and design, amplifies existing challenges and introduces new debates, such as prioritizing long-term risks, the governance of open versus closed models, the impact on employment, intellectual property, and disinformation.  

The global document emphasizes the critical need for international cooperation and jurisdictional interoperability to avoid fragmentation and ensure trust in Generative AI. Inclusive governance, involving the Global South, is considered fundamental to innovation and to mitigating harm, addressing structural inequalities in infrastructure, data, talent, and institutional capacity. 

The Australian model 

Australia has adopted a pragmatic, risk-based approach to the regulation of Artificial Intelligence, focusing on the application of model contractual clauses for the acquisition of AI systems and services by the government. Instead of comprehensive legislation that classifies AI risks a priori, the country allows government buyers to select and adapt clauses according to the specific requirements of each contract. This strategy aims to ensure transparency, human oversight, and compliance with ethical principles, without imposing rigid restrictions that could limit innovation. Furthermore, there is an effort to mitigate risks associated with privacy, impartiality, and security by requiring suppliers to maintain detailed records and adopt measures to protect against bias and operational failures. 

The Digital Transformation Agency (DTA) is an Australian government agency responsible for driving digital transformation and ensuring efficient and accessible public services. In the context of AI, the DTA has developed model clauses for government contracts, addressing topics such as transparency, human oversight, data protection, and impartiality. These clauses include requirements for prior approval of AI use, detailed record-keeping, incident notifications, and mechanisms for shutting down automated systems. Additionally, there are guidelines to ensure that AI systems do not discriminate and operate ethically, in compliance with privacy and anti-discrimination laws. This approach allows the Australian government to manage AI risks flexibly, without imposing rigid regulations on all applications. 

The counterpoint of more flexible models: China, Russia, USA, Japan  

While the European Union adopts a rigorous, risk-based regulatory approach, countries like China, Russia, and the United States follow more flexible models, prioritizing innovation and technological development with less direct government intervention. Each of these countries has particularities in its AI governance, reflecting distinct strategic and economic interests. 

In China, AI regulation is closely tied to state control over data and national security. The government imposes specific restrictions to prevent political and social risks, but at the same time allows extensive experimentation and technological development by local companies. This model favors the sector's rapid growth, especially in areas such as facial recognition and automation, but raises concerns about privacy and individual freedom. 

Russia, on the other hand, takes an even more flexible approach, prioritizing the development of AI for strategic and military purposes. Regulation regarding social and ethical impacts is minimal, allowing technological advancements to occur without major regulatory barriers. However, this lack of oversight can create challenges related to transparency and responsible use of AI, especially in security and defense applications.

In the United States, regulation is decentralized, based on principles and results. Instead of a single, comprehensive regulatory framework, different agencies and states establish general guidelines, encouraging self-regulation by companies. This model stimulates innovation and competitiveness in the sector, making the country a global leader in AI. However, the lack of uniform rules can create gaps in the protection of fundamental rights, especially on issues such as privacy, algorithmic bias, and the ethical use of technology. 

In June 2025, Japan passed its Artificial Intelligence Law, which represents a different approach from Europe's, prioritizing the promotion of AI research and use. Rather than imposing sanctions, Japanese regulation is based on general guidelines that seek to balance innovation and ethics, relying on voluntary collaboration, primarily from the private sector. 

AI is positioned as a fundamental technology for economic development and national security, with an approach focused on innovation, infrastructure, and governance. Unlike the European model, Japan does not adopt risk classifications or establish specific penalties or rights for automated decisions, focusing on transparency and shared responsibility. 

Bill No. 2338/2023 – Regulation of AI in Brazil 

Bill No. 2338 of 2023 aims to establish general national standards for the development, implementation, and responsible use of artificial intelligence (AI) systems in Brazil. The bill seeks to protect fundamental rights and guarantee safe and reliable systems for the benefit of the human person, the democratic regime, and scientific and technological development. Its foundations include the centrality of the human person, respect for human rights and democratic values, privacy, data protection, and non-discrimination. The bill establishes principles such as inclusive growth, sustainable development, well-being, self-determination, human participation and oversight, justice, equity, inclusion, transparency, reliability, accountability, and risk prevention.

People affected by AI systems are granted rights such as prior information, explanation of decisions, the ability to challenge relevant decisions, human participation in decisions, non-discrimination and correction of biases, and privacy and protection of personal data. Clear and adequate information must be provided before the system is used, detailing its automated nature, a general description, its consequences, the identification of operators, the roles of AI and humans, the categories of data used, and security and non-discrimination measures. 

The bill adopts risk-based regulation, requiring a preliminary assessment of AI systems for risk classification. Excessive-risk systems, such as those that use harmful subliminal techniques or exploit vulnerabilities, are prohibited. High-risk systems are listed by their intended purposes, including applications in critical infrastructure, education, recruitment, essential services, credit, healthcare, public safety, and migration management. For high-risk systems, additional governance measures are required, such as detailed documentation, automatic operation logging, reliability testing, data management to mitigate discriminatory biases, and measures to enable explainability. An algorithmic impact assessment is mandatory for high-risk systems, to be conducted by a government agency (Brazil's National Data Protection Authority, the ANPD, has already applied for this role), considering risks, benefits, the probability and severity of consequences, operational logic, and testing. 

AI Regulation in Brazil: Between Rigor and Flexibility in the Global Scenario 

Global approaches to AI regulation reflect different priorities and challenges faced by each country. While the European Union adopts a rigorous, risk-based model, with detailed rules and clear sanctions for high-risk systems, Japan is betting on a less restrictive model, focused on encouraging innovation and voluntary collaboration from the private sector. Australia, for its part, implements contractual clauses to adapt specific AI requirements in government contracts, allowing flexibility in risk management. Countries like China, Russia, and the United States follow more open approaches, prioritizing technological development with less direct state intervention. 

In Brazil, Bill No. 2338/2023 seeks to reconcile innovation and the protection of fundamental rights, adopting an intermediate model between strict regulation and flexibility. Partially inspired by the European approach, the country proposes risk classification for different AI systems, requiring specific measures for high-impact systems. Furthermore, it establishes principles such as transparency, human oversight, and the mitigation of algorithmic biases, reinforcing the need for impact assessments and robust governance. Brazilian regulation, therefore, aims to balance safety and technological growth, building a regulatory framework that encourages AI advancement without creating excessive barriers to the productive sector. This positioning could allow Brazil to play a relevant role in global AI governance, seeking alignment between innovation and social responsibility. 

*Luiz Felipe Vieira de Siqueira is a lawyer and researcher at Think Tank ABES, PhD student in Innovation & Technology – PPGIT UFMG and partner at Privacy Point

 

Notice: The opinion presented in this article is the responsibility of its author and not of ABES - Brazilian Association of Software Companies.

Article originally published on the IT Forum website: https://itforum.com.br/colunas/rigor-brasil-regulacao-ia/
