Why we need to go beyond technology and treat AI as a strategic, ethical, organizational, and societal issue from the outset
*By Natália Marroni Borges
When discussing artificial intelligence, the most common approach is to start with the technology: data, models, algorithms, frameworks, infrastructure, and vendors dominate almost any conversation at the corporate and government levels. On the one hand, this makes sense; there is no AI agenda without AI being developed.
But this technical focus, which often guides the adoption of new tools in companies, has already proven limited in previous technological transitions, such as the introduction of ERP systems, the digital transformation driven by cloud computing, or, more recently, the hype around big data. In all these cases, an initial focus on infrastructure and technical tools, without broader reflection on strategy, people, and processes, resulted in flawed implementations, rework, projects whose costs and timelines far exceeded expectations, cultural resistance, or, worst of all, underutilized solutions. Anyone who hasn't heard a story like this hasn't worked in IT in the past 30 years.
With AI, repeating this path is even riskier, especially considering the transformative (and disruptive) potential of this technology. We're facing something we haven't yet fully grasped, something that affects everything from day-to-day operations to the way we make decisions, interact with customers, and rethink business models.
Upon closer inspection, it's easy to see that artificial intelligence encompasses multiple dimensions that demand distinct knowledge, skills, and responsibilities. Strategy, organizational culture, governance, ethics, team training, and regulation are central—and often more urgent than the technical modeling itself. European forums, for example, have brought to light other equally relevant layers, such as algorithmic transparency, digital sovereignty, fundamental rights, environmental impacts, and citizen participation in the AI debate.
Ignoring these layers, or placing them under the responsibility of a single department (or worse, a single person), compromises not only the results of AI projects but also the legitimacy and long-term sustainability of these initiatives. This repeats a mistake we've made in the recent past: subjects that draw on knowledge beyond the technological perspective need to be led by people who truly master them.
For this reason, distinguishing these dimensions is the first strategic step. It means recognizing that AI is not a "ready-made product" to be plugged into a process, but a process in itself, one that requires coordination between departments, active listening to different stakeholders, constant adaptation, and organizational maturity.
Insisting on centralized, technical, and hasty approaches means repeating mistakes we should have overcome by now. More than ever, we need to treat artificial intelligence as an organizational and societal issue, not just as a frontier of technological innovation.
To address this challenge responsibly, companies and governments could, and should, invest in initiatives that articulate these multiple dimensions from the outset. This includes creating interdisciplinary AI committees with participation from areas such as strategy, legal, HR, technology, and governance; developing internal frameworks for ethical and responsible use, inspired by international best practices; and establishing ongoing training programs aimed not only at technical professionals but also, and especially, at leaders and decision-makers. Furthermore, fostering spaces for dialogue with civil society, supporting independent research, and including social and environmental criteria in the evaluation of AI projects are fundamental steps to ensure that technology serves the common good, not just operational or short-term gains.
While we recognize that many of these actions are already being adopted by governments and organizations around the world, we must admit that, in practice, AI solutions are still often treated with a sense of urgency, in a fragmented manner, and without clear governance structures. On the one hand, we discuss risks and responsibilities in specialized forums; on the other, we see a flood of use cases disconnected from these reflections, often led by teams unprepared to deal with the ethical, social, and political implications involved. The good news is that the path is already being laid—but we will only advance in a structured manner if we treat artificial intelligence no longer as a race for innovation, but as a collective and conscious construction of the future.
*Natália Marroni Borges is a researcher at the ABES Think Tank, a member of the IEA Future Lab group (linked to the Federal University of Rio Grande do Sul – UFRGS), holds a post-doctorate in Artificial Intelligence and Foresight, and is a professor at UFRGS.
Notice: The opinion presented in this article is the responsibility of its author and not of ABES – Brazilian Association of Software Companies.
Article originally published on the IT Forum website: https://itforum.com.br/colunas/ia-alem-perspectiva-tecnologica/