For Gilson Magalhães, vice president and general manager of Red Hat Latin America, the convergence between human knowledge and machines will take technology to the next level; open source will be the bridge for this synergy

Artificial intelligence, the big technology star of the decade, will continue to rise in the coming years. Projections from Gartner indicate that by 2026 more than 80% of companies will have used generative artificial intelligence (GenAI) models or deployed GenAI-enabled applications in production environments. However, this growth also highlights the challenges organizations face in dealing with AI: more than 30% of these projects are expected to be abandoned by the end of 2025 due to poor data quality, inadequate risk controls, low perceived value and high costs.

For market experts, the high number of projects frustrated or abandoned midway stems from companies’ belief that new technologies such as AI should act as a kind of magic solution to every problem. “We recently experienced a panacea in Latin America with the cloud, and now we see history repeating itself with artificial intelligence. There is an excitement that also brings tension and pressure on CIOs to deliver something with this technology, without much analysis or strategy. And this is definitely not the best path,” says Gilson Magalhães, VP and General Manager of Red Hat for Latin America.

To address the challenges left by past choices, especially regarding data security and quality, the executive points out that organizations must advance toward so-called agents: simpler software components that connect to existing applications to modernize them. The market should also return to predictive AI applications, with more than half of artificial intelligence use cases focused on customer personalization and supply chain optimization. “Organizations will rely more on AIOps platforms to reduce technical debt and improve business results, and they will find in open source solutions the answer they need to move forward safely and efficiently,” he says.

A recipe for every business

The synergy between open source and artificial intelligence to drive the advancement of enterprise IT was the central theme of the main open source event in Latin America. Bringing together an audience of more than 500 people, including decision-makers, developers and experts, Red Hat Summit: Connect São Paulo addressed future trends that should once again transform the digital journey of organizations. “Amid all the challenges and predictions for the coming years, we are now talking not only about AI, but also about how you harness the power of human intelligence, aided by the work that your AI can do better or faster than humans possibly can,” Magalhães emphasizes.

In other words, the answer to sustainable AI advancement lies in the convergence of this intelligence with human skills and knowledge. “Artificial intelligence is already a reality and the next step is to make it interact satisfactorily with human intelligence. This means having AI tools that are better aligned to solve day-to-day processes, automating repetitive work and enhancing human capabilities for more strategic tasks,” he points out.

Open source tools are great allies on this journey. The open hybrid cloud strategy advocated by Red Hat and presented during the event is the gateway to open and effective artificial intelligence, supported by major partners such as Google, AWS and Microsoft. At Red Hat Summit: Connect, the technology giants shared success stories with clients such as Solar Coca-Cola, Vibra Energia and SulAmérica. Banco da Amazônia also took part, highlighting BASA Digital, an internationally awarded initiative that aims to democratize banking access through technology.

As a global leader in enterprise open source solutions, Red Hat is part of all these stories by offering ready-made tools to meet the needs of each organization, especially when it comes to artificial intelligence. One of the company’s most famous platforms, Red Hat OpenShift, recently gained a version specifically for AI. “Red Hat OpenShift AI is a scalable and flexible MLOps tool that allows you to create and launch AI applications at scale in hybrid cloud environments,” says Magalhães.
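As an illustration of what such an MLOps workflow can look like in practice, below is a minimal sketch using the open source Kubeflow Pipelines (kfp) SDK, the project that data science pipelines in this space build on; the component names and logic are hypothetical placeholders, not Red Hat code.

```python
# Minimal sketch of a two-step training pipeline with the Kubeflow
# Pipelines (kfp) v2 SDK; component logic is an illustrative placeholder.
from kfp import dsl, compiler

@dsl.component(base_image="python:3.11")
def prepare_data(rows: dsl.Output[dsl.Dataset]):
    # Placeholder data preparation: write a tiny CSV artifact.
    with open(rows.path, "w") as f:
        f.write("feature,label\n1.0,0\n2.0,1\n")

@dsl.component(base_image="python:3.11")
def train_model(rows: dsl.Input[dsl.Dataset]) -> float:
    # Placeholder "training": count the rows and return a mock score.
    with open(rows.path) as f:
        n = sum(1 for _ in f) - 1  # subtract the header line
    return 1.0 - 1.0 / max(n, 1)

@dsl.pipeline(name="demo-training-pipeline")
def training_pipeline():
    data = prepare_data()
    train_model(rows=data.outputs["rows"])

if __name__ == "__main__":
    # Emits a pipeline spec that a pipelines-enabled cluster can execute.
    compiler.Compiler().compile(training_pipeline, "pipeline.yaml")
```

Because the compiled spec is portable, the same pipeline definition can run on an on-premises cluster or in a public cloud, which is the hybrid cloud argument in miniature.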

Built on open source technologies, Red Hat OpenShift AI provides reliable and consistent capabilities for testing, developing, and training AI models, enabling data preparation and acquisition, as well as model training, fine-tuning, and deployment. The solution integrates with another recent company release, Red Hat® Enterprise Linux® AI, the foundation model platform used to develop, test, and run large language models (LLMs) from the Granite family for enterprise applications.

“Red Hat Enterprise Linux AI enables portability across hybrid cloud environments, scalability of your AI workflows with Red Hat OpenShift® AI, and advancement to IBM watsonx.ai with additional capabilities for enterprise AI development, data management, and model governance. This is especially important at a time when the need for increasingly smaller and more specialized AI models is becoming clear,” says the executive.
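To make the developer side of this concrete, here is a minimal sketch of how an application could query a served Granite-family model through an OpenAI-compatible inference endpoint, the kind exposed by open source serving stacks such as vLLM; the URL, model name and key below are hypothetical placeholders.

```python
# Minimal sketch: query a served LLM over an OpenAI-compatible API,
# as exposed by open source serving stacks such as vLLM.
# The endpoint URL, model name and API key are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",   # hypothetical local endpoint
    api_key="unused-for-local-serving",    # many local servers ignore this
)

response = client.chat.completions.create(
    model="granite-demo",  # placeholder name of the served model
    messages=[
        {"role": "system", "content": "You are a concise enterprise assistant."},
        {"role": "user", "content": "Why can smaller, specialized models cut serving costs?"},
    ],
    max_tokens=120,
    temperature=0.2,
)
print(response.choices[0].message.content)
```

Because the interface is a de facto standard, swapping the underlying model or moving the endpoint between clouds leaves the application code untouched.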

SLMs: the new era of AI

According to predictions from Forrester, while nearly 90% of global technology decision-makers say their companies will increase investment in data infrastructure, management and governance, technology leaders will remain pragmatic about investing in AI to maximize the business value derived from the technology. By 2025, the most significant shift in investment will be toward small AI models, the so-called SLMs (small language models). With lower cost, greater accuracy and less chance of hallucination, these models are also more specialized and can be trained and run far more easily.

In practice, understanding the alphabet soup of LLMs and SLMs is straightforward. Both acronyms refer to types of artificial intelligence (AI) systems trained to interpret human language, including programming languages. LLMs are generally intended to emulate human intelligence at a very broad level and are therefore trained on a wide range of large data sets. As a result, they can produce so-called “hallucinations,” potentially incorrect answers, since they lack the fine-tuning and domain training to accurately answer each industry-specific or niche query.

SLMs, on the other hand, are typically trained on smaller datasets tailored to specific industry domains (i.e., areas of expertise). For example, a healthcare provider might use an SLM-powered chatbot trained on medical datasets to inject domain-specific knowledge into a user’s non-specialized health query, improving the quality of the question and answer. “Which model is best? It all depends on each organization’s plans, resources, expertise, timeline, and other factors. Apart from the choice, open source will certainly be the key that makes all the difference,” says the expert.
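As a rough sketch of the pattern described above, the snippet below routes a patient’s informal question through a small, domain-tuned model using the Hugging Face transformers library; the model identifier is a hypothetical placeholder for whatever vetted medical SLM an organization would actually deploy.

```python
# Minimal sketch of the SLM chatbot pattern: a small, domain-tuned
# model restates and answers a non-specialist health question.
# The model id is a hypothetical placeholder, not a real checkpoint.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="example-org/medical-slm-3b-instruct",  # placeholder id
)

prompt = (
    "You are a clinical triage assistant. Restate the patient's question "
    "in precise medical terms, then answer it briefly and cautiously.\n"
    "Patient: my chest feels tight when I climb stairs, should I worry?"
)

result = generator(prompt, max_new_tokens=120, do_sample=False)
print(result[0]["generated_text"])
```

A model of a few billion parameters is often enough for a single domain, which is what keeps its training and inference costs below those of a general-purpose LLM.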

Other trends

While artificial intelligence continues to dominate headlines and discussions, other topics are also expected to gain traction in enterprise IT, according to the Red Hat executive. By 2025, automation and edge computing will be on the agenda of public and private organizations as the Internet of Things (IoT) advances ever faster. Furthermore, quantum computing could take a major leap forward: the United Nations (UN) has proclaimed next year the International Year of Quantum Science and Technology, in the hope that research in the field will move beyond the laboratory and gain more practical applications in everyday life.

"THE virtualization will also have a growing space. The same virtual machines that were fundamental to the evolution of cloud computing now contribute to training AI/ML models, with isolation and reproducibility, scalability, hardware acceleration, flexibility in software configuration and data management and security”, explains Magalhães. Survey of Gartner points out that 70% of the workloads that power personal computers and data center servers will continue to use hypervisor-based virtualization through 2027.

On the telecommunications side, the focus is on the evolution of faster and more reliable 5G and 6G networks, which will boost smart cities, autonomous vehicles and immersive technologies. Concern for more sustainable solutions will also be on the rise, with companies seeking to reduce their carbon footprints through innovations in solar technology, battery storage and zero-carbon data centers. Investments in cybersecurity round out the list, with robust and decentralized solutions for organizations to protect themselves against increasing attacks, data breaches and ransomware.
