
*By Chris Wright

More than three decades ago, Red Hat saw the potential of open source development and licensing to create better software and drive IT innovation. Thirty million lines of code later, Linux has not only become the most successful open source software, it continues to evolve today. The company’s commitment to open source principles remains part of its business model and culture. Red Hat believes these concepts can have the same impact on artificial intelligence (AI) if done right, but the technology world is divided on what constitutes the “right way.”

AI, especially the large language models (LLMs) behind generative AI (gen AI), cannot be viewed in quite the same way as open source software. Unlike software, AI models consist primarily of numerical model parameters that determine how a model processes inputs and the connections it draws between various data points. Trained model parameters are the result of a long process involving vast amounts of training data that is carefully prepared, mixed, and processed.

While model parameters are not software, in some ways they serve a function similar to code. It is tempting to say that data is, or is close to being, the source code of the model. In open source, source code is commonly defined as the “preferred form” for making modifications to software. Training data alone does not fit this role, given its sheer size and the complicated pre-training process, which leaves only a tenuous and indirect connection between any given piece of training data and the trained parameters and resulting behavior of the model.

Most of the improvements and enhancements to AI models happening in the community right now do not involve accessing or manipulating the original training data. Instead, they come from modifications to the model parameters themselves or from processes such as fine-tuning that adjust the model’s performance. The freedom to make these improvements requires that the parameters be released with all the permissions that users receive under open source licenses.

Red Hat’s vision for open source AI

Red Hat believes that the foundation of open source AI lies in open source licensed model parameters combined with open source software components. This is a starting point for open source AI, but not the ultimate destination for the philosophy. Red Hat encourages the open source community, regulators, and industry to continue striving for greater transparency and alignment with open source development principles when training and tuning AI models.

This is Red Hat’s point of view as a company that spans the open source software ecosystem and engages with open source AI in practice. It is not an attempt at a formal definition, such as the one the Open Source Initiative (OSI) is developing with its Open Source AI Definition (OSAID). It is the company’s vision of what makes open source AI feasible and accessible to the widest range of communities, organizations, and vendors.

This point of view is put into practice through work with open source communities, highlighted by the Red Hat-led InstructLab project and the effort with IBM Research on the Granite family of open source licensed models. InstructLab significantly lowers the barrier for non-data scientists to contribute to AI models. With InstructLab, domain experts from any industry can contribute their skills and knowledge, both for internal use and to help build a shared, widely accessible open source AI model for upstream communities.

The Granite 3.0 family of models addresses a wide range of AI use cases, from code generation to natural language processing to extracting insights from large datasets, all under a permissive open source license. We helped IBM Research bring the Granite family of code models to the open source world, and we continue to support the model family, both from an open source perspective and as part of our Red Hat AI offering.

The repercussions of the recent DeepSeek announcements show how open source innovation can impact AI, both at the model level and beyond. There are clear concerns about the Chinese platform’s approach, especially since the model’s license does not explain how it was produced, which reinforces the need for transparency. That said, the disruption it caused reinforces Red Hat’s vision for the future of AI: an open future focused on smaller, optimized, open models that can be customized for specific enterprise data use cases across any and all locations in the hybrid cloud.

Expanding AI models beyond open source

Red Hat’s work in the open source AI space extends far beyond InstructLab and the Granite family of models, to the tools and platforms needed to actually consume and productively use AI. The company has become very active in fostering technology projects and communities, such as (but not limited to):

●      RamaLama, an open source project that aims to facilitate the local management and provision of AI models;

●      TrustyAI, an open source toolkit for building more responsible AI workflows;

●      Climatik, a project focused on helping to make AI more sustainable when it comes to energy consumption;

●      Podman AI Lab, a developer toolkit focused on facilitating experimentation with open source LLMs.

The recent announcement about Neural Magic extends Red Hat’s enterprise AI vision by making it possible for organizations to align smaller, optimized AI models, including open source licensed models, with their data, wherever it lives in the hybrid cloud. IT organizations can then use the vLLM inference server to drive decisions and run these models in production, helping to build an AI stack based on transparent and supported technologies.

For the enterprise, open source AI lives and breathes in the hybrid cloud. The hybrid cloud provides the flexibility to choose the best environment for each AI workload, optimizing for performance, cost, scale, and security requirements. Red Hat’s platforms, goals, and organization support these efforts, along with industry partners, customers, and the open source community, as open source in AI continues to grow.

There is immense potential for this open collaboration to expand in the AI space. Red Hat sees a future that embraces transparent work on models and on their training. Whether it arrives next week or next month (or even sooner, given how rapidly AI is evolving), the company and the wider open community will continue to support and embrace efforts to democratize and open up the world of AI.

*Chris Wright is senior vice president and chief technology officer (CTO) at Red Hat

Notice: The opinion presented in this article is the responsibility of its author and not of ABES - Brazilian Association of Software Companies
