
*By Raghu Raghuram

A few months ago, our industry experienced a worldwide “big bang” moment: the public launch of ChatGPT, built on GPT-3.5. Since then, we have been swept up in a frenzy of interest, innovation and investment in Artificial Intelligence (AI), specifically generative AI.

I'm old enough to remember previous cycles of AI hype, but this time there's a difference: generative AI lets us interact with advanced technology tools in a conversational way, pairing natural language with “human-like” creativity to generate new content, including text, code, video, audio and more.

Now, with large language models (LLMs), a natural language like English or Mandarin becomes a de facto programming language. The word prompts we give these models are essentially the code they use to compute an answer. It's the closest we've ever come to a true democratization of programming.
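To make the “prompts as code” idea concrete, here is a minimal illustrative sketch in Python. It is my own example, not tied to any specific product: a prompt template behaves like a function signature, and the filled-in natural-language prompt is the “program” handed to a model. The `query_llm` stub is hypothetical and stands in for whatever chat-completion API you use.

```python
# A prompt template behaves like a function: the natural-language text is
# the "program", and the template parameters are its inputs.
SUMMARIZE_PROMPT = (
    "Summarize the following {document_type} in {num_sentences} sentences, "
    "using plain language a non-specialist can follow:\n\n{text}"
)

def build_prompt(document_type: str, num_sentences: int, text: str) -> str:
    """Fill in the template; the resulting string is what an LLM 'executes'."""
    return SUMMARIZE_PROMPT.format(
        document_type=document_type,
        num_sentences=num_sentences,
        text=text,
    )

def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real chat-completion API call."""
    raise NotImplementedError("Send `prompt` to the model endpoint of your choice.")

# The same "program" is reused with different inputs, just like a function call.
prompt = build_prompt("earnings report", 2, "Revenue grew 12% year over year.")
```

Changing the wording of the template changes the behavior of the “program” without touching any conventional code, which is exactly why prompting lowers the barrier to entry.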

In short, we are in the midst of a generational breakthrough, and opportunities are emerging to transform core business functions such as software development, customer support, sales and marketing. As this next wave of AI innovation accelerates, it will have a profound impact on the entire global economy. With generative AI, we can reinvent education by addressing variability in learning, help doctors make clinical diagnoses, help consumers make investment decisions, and much more. These are just a few examples, but consider this projection: a recent McKinsey report suggests that generative AI could generate up to US$7.9 trillion in global economic value annually.

The three great challenges we must overcome

As is often the case in the early stages of a large-scale innovation, we are facing major obstacles to broader adoption. To harness the full potential of generative AI in enterprises, there are three core challenges that we must collectively overcome.

  • From high cost to affordability

Building and operating today's generative AI models is complex and expensive. They require highly specialized computing power and high-speed networking with large amounts of memory. There is a 1:1 relationship between AI model performance and computing infrastructure, a dynamic that is neither scalable nor sustainable. Andreessen Horowitz recently described training a model like ChatGPT as “one of the most computationally intensive tasks humanity has undertaken to date.” A single training run can cost anywhere from US$500,000 to US$4.6 million, and training will remain an ongoing expense as models are updated.

Faced with these staggering costs, many have concluded that our world will be limited to a very small number of “mega LLMs” like ChatGPT.

There is another alternative, however. I see a future where everyday businesses are empowered to build and run their own custom AI models at an affordable price. It's all about flexibility and choice: most CIOs I talk to plan to use mega LLMs for a variety of use cases, but they also want to create smaller AI models optimized for specific tasks. These models are often based on open-source software (OSS). In fact, the sheer volume of innovation in open-source AI right now is staggering. It is not a stretch to predict that many companies will adopt these models for many use cases, relying less on the massive proprietary LLMs prevalent today.

These open, purpose-built models will leverage an organization's domain-specific data, which is its exclusive intellectual property. We have the opportunity to cost-effectively run these more compact AI systems on dedicated infrastructure, such as lower-cost GPUs (graphics processing units) and, perhaps one day, modified low-cost CPUs, delivering the performance and throughput that AI workloads require. By reducing costs and creating solutions that offer flexibility and choice, we can make AI innovation accessible to far more companies.

  • From the “magic” of specialized AI to the experience of democratized AI

Currently, the professionals needed to build, tune, and run AI models are specialized and scarce. This comes up in almost every conversation I have with CEOs and CIOs, who consistently rank it among their biggest challenges. They are aware that the open-source AI software space is moving very quickly, so they want to be able to embrace innovations quickly and easily as they emerge, without being locked into a single platform or vendor. That level of adaptability is difficult to achieve when only a relatively small percentage of technology professionals fully understand the “magic” behind current AI models.

To address this skills gap, we need to radically simplify the process and tools we use to create and train AI models. This is where reference architectures come into play, providing a useful blueprint and path for most organizations that don't have the in-house expertise to build AI solutions from scratch.

  • From risk to trust

Finally, and perhaps most importantly, we need to move from risk to trust. Current AI models create significant risks, including privacy concerns, legal and regulatory threats, and intellectual property leakage. These obstacles have the potential to ruin a company's reputation, harm customers and employees, and negatively impact revenue. Many organizations have established policies restricting employees from using generative AI tools after accidental leaks of sensitive internal data on platforms like ChatGPT. Furthermore, current generative AI systems suffer from a fundamental lack of trust because they often “hallucinate,” creating new content that is meaningless, irrelevant or inaccurate.

As an industry, we need to develop a strong set of ethical principles to ensure and reinforce impartiality, privacy, accountability, respect for third-party intellectual property, and transparency about training data. A large and growing ecosystem of organizations aims to address the key issues of AI explainability, data integrity and privacy. The open-source community is innovating at the heart of this movement, working to help companies train and deploy their AI models in a safe and controlled way.

The next wave of technological innovation

Just as the mobile app revolution has transformed business and our relationship with technology over the past 15 years, a new wave of AI-enabled solutions is poised to dramatically increase worker productivity and accelerate economic development around the world. We are in the early stages of a new supercycle of innovation. Our collective challenge is to make this powerful new technology more affordable, more accessible and more reliable.

In my conversations with AI decision makers around the world, there is a general consensus that we need to strike a strategic balance: we must tread carefully when there are unknowns, especially around confidentiality, privacy, and misuse of proprietary information. It is also critical that we equip companies to quickly adopt new AI models so that they can participate in the next wave of innovation in a responsible and ethical manner.

*Raghu Raghuram, CEO of VMware

Notice: The opinions expressed in this article are the responsibility of the author and not of ABES - Brazilian Association of Software Companies
