*By Bern Elliot

OpenAI, an Artificial Intelligence (AI) research and deployment company, recently announced the official launch of ChatGPT, a new conversational Artificial Intelligence model. According to OpenAI, the platform's dialogue format allows ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect assumptions and reject inappropriate requests.

Since its launch, social networks have been full of discussions about the possibilities — and dangers — of this innovation, from its ability to debug code to its potential for writing essays for university students. To help shed light on the topic, Gartner, the world's leading research and advisory firm, shares its view on ChatGPT:

Differentiation from other Artificial Intelligence innovations

ChatGPT is the union of two current hot topics in Artificial Intelligence: chatbots and GPT-3. Together, they offer an intriguing way of interacting and producing content that sounds surprisingly human. These technologies are the result of significant improvements honed over the last five years.

Chatbots allow interaction in an apparently 'intelligent' conversational manner, while GPT-3 produces results that appear to have 'understood' the question, the content and the context. Together, they create a striking effect (akin to the 'uncanny valley') and prompt reflection on whether one is interacting with a human or a computer, or even a 'human computer', as the interaction can be humorous, profound and even insightful.

Because the content is not generated by human intelligence, it can unfortunately sometimes be incorrect. The problem may lie in the terms 'understand' and 'intelligent', which are loaded with implicitly human meaning; applied to an algorithm, they can cause serious misunderstandings. The most useful perspective, therefore, is to see chatbots and Large Language Models (LLMs), such as GPT, as potentially useful tools for accomplishing specific tasks, not as gimmicks. Success depends on identifying applications of these technologies that offer significant benefits to businesses.

Potential Use Cases for ChatGPT

At a high level, chatbots, or conversational assistants, provide a curated interaction with an information source. They have many use cases, from customer service to technical assistance in diagnosing problems.

At a high level, ChatGPT is a special case: a chatbot that interacts or converses with an information source and has been trained for a specific activity by OpenAI. The training data used in the model determines how questions will be answered. However, GPT's tendency to deliver faulty results unpredictably means it can only be used in situations where errors can be tolerated or corrected. There are numerous use cases for foundation models such as GPT in domains like computer vision, software engineering, and scientific research and development. For example, they can create images from text; generate, review and audit code from natural language, including smart contracts; and, in healthcare, help create medicines and decipher DNA sequences to classify diseases.

Ethical concerns

Artificial Intelligence foundation models such as GPT represent a major shift in this field. They offer unique benefits such as massive reductions in the cost and time required to create a domain-specific model. However, they also present ethical risks and concerns, including issues associated with:

– Complexity – Large models involve billions, or even trillions, of parameters. In some cases, their size makes training impractical for most organizations because of the computational resources consumed, making them expensive and potentially harmful to the environment;

– Concentration of power – These models have been built primarily by the largest technology organizations, with large investments in Research and Development (R&D) and significant Artificial Intelligence talent. This has resulted in a large concentration of power in a few entities, which could create a significant imbalance in the future;

– Potential misuse – Foundation models reduce content-creation costs, making it easier to create deepfakes (fabricated images and audio) that closely resemble the original. This ranges from voice and video impersonation to fake artwork and targeted attacks. The serious ethical concerns involved can damage reputations or even cause political conflicts;

– Black-box nature – These models still require careful training and may produce unacceptable results because of their black-box nature. The datasets underlying the responses are often unclear, which can propagate bias downstream. The homogenization of such models can also lead to a single point of failure;

– Intellectual property – These models are trained on a corpus of existing works. It is not yet clear what legal precedent applies to reusing this content, nor whether it derives from the intellectual property of others.

Model integration

It is recommended to use natural language processing (NLP) capabilities such as classification, summarization and text generation in non-customer-facing scenarios, and to choose models that have already been tested and prepared for the tasks at hand, avoiding costly customization and training. Use cases with human review are preferred. It is also advisable to create a strategy document describing the benefits, risks, opportunities and implementation roadmap for Artificial Intelligence foundation models such as GPT. This will help companies determine whether the benefits outweigh the risks for specific use cases.
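The human-reviewed pattern recommended above can be sketched as a thin wrapper that never marks a generated draft as approved until a reviewer accepts it. This is a minimal illustration, not a specific product's API: the `stub_model` and `stub_reviewer` functions are hypothetical stand-ins for a real LLM call and a real human reviewer.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Draft:
    task: str        # e.g. "summarization", "classification"
    prompt: str
    output: str
    approved: bool = False

def generate_with_review(
    model: Callable[[str], str],
    task: str,
    prompt: str,
    reviewer: Callable[[Draft], bool],
) -> Draft:
    """Generate a draft with the model, then gate it behind review.

    Nothing is marked approved until the reviewer callback accepts it,
    so faulty model output never flows downstream unchecked.
    """
    draft = Draft(task=task, prompt=prompt, output=model(prompt))
    draft.approved = reviewer(draft)
    return draft

# Hypothetical stand-in for a real LLM call (any provider client fits here).
def stub_model(prompt: str) -> str:
    return "SUMMARY: " + prompt[:40]

# Hypothetical stand-in for a human reviewer: reject very short outputs.
def stub_reviewer(draft: Draft) -> bool:
    return len(draft.output.strip()) > 10

draft = generate_with_review(
    stub_model, "summarization",
    "Quarterly results improved across all regions.", stub_reviewer,
)
```

In practice the reviewer step would be a queue in front of a person rather than a callback, but the gating logic is the same.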

Use cloud-based APIs (communication mechanisms between software components) and choose the model that provides the accuracy and performance needed, in order to reduce operational complexity, lower energy consumption and optimize Total Cost of Ownership. Prioritize vendors that promote responsible deployment of models by publishing usage guidelines and documenting known vulnerabilities. Proactively track and disclose any harmful behavior and misuse scenarios to drive continuous improvement.
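One way to operationalize the tracking and disclosure of harmful behavior is a small client wrapper around whichever cloud API is chosen: it retries transient failures, screens outputs against a blocklist, and records flagged responses in an incident log for later review. This is a vendor-agnostic sketch; the `call` parameter is a placeholder for any provider's actual HTTP client, not a real SDK function.

```python
import time
from typing import Callable, List

class LLMClient:
    """Minimal wrapper around a cloud LLM API.

    Adds simple retries for transient network errors and an incident
    log for flagged outputs, supporting the guidance to proactively
    track and disclose harmful behavior.
    """

    def __init__(self, call: Callable[[str], str],
                 blocklist: List[str], retries: int = 3):
        self.call = call                          # vendor API placeholder
        self.blocklist = [w.lower() for w in blocklist]
        self.retries = retries
        self.incidents: List[str] = []            # flagged outputs for review

    def complete(self, prompt: str) -> str:
        last_err = None
        for _ in range(self.retries):
            try:
                text = self.call(prompt)
            except ConnectionError as err:        # transient network failure
                last_err = err
                time.sleep(0)                     # backoff elided in this sketch
                continue
            if any(term in text.lower() for term in self.blocklist):
                self.incidents.append(text)       # record for disclosure
                return "[output withheld pending review]"
            return text
        raise RuntimeError("API unavailable") from last_err
```

The incident log would feed whatever disclosure process the vendor guidelines require; the blocklist here is deliberately crude, standing in for a real content-safety check.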

*Bern Elliot, Vice President and Analyst at Gartner

Notice: The opinion presented in this article is the responsibility of its author and not of ABES - Brazilian Association of Software Companies
