By Kriti Sharma, Global Vice President of Bots and Artificial Intelligence at Sage
Developing chatbots and artificial intelligence that are useful to our customers is the easy part. The harder questions arise from the advent of artificial intelligence itself. For this reason, we have grounded our work in this area in a set of values. These essential principles are what, in our view, help ensure that our products are safe and ethical.
The five principles that guide Sage's AI strategy worldwide also guide the company's operations in Brazil, where Sage created Sage Labs, a team charged with finding ways to apply new technologies to the software customers use every day.
The Code of Ethics: 5 ethical principles for developing Artificial Intelligence in the business world
1. AI must reflect the diversity of its users
We need to create artificial intelligence that is diverse in origin. As a technology industry, we must develop effective mechanisms to filter out the prejudice and negative sentiment that AI can absorb, so that it does not perpetuate stereotypes. Unless we build with diverse teams, diverse datasets and diverse design, we risk repeating the inequality that marked past industrial revolutions.
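One practical way to act on "diverse datasets" is to measure representation in the training data before any model is built. The sketch below is a minimal illustration, not Sage's tooling: the `records` list, the `group` field and the 10% threshold are all hypothetical placeholders standing in for whatever labelled data and policy a team actually uses.

```python
from collections import Counter

def representation_report(records, group_key="group", min_share=0.10):
    """Report each group's share of a training set and flag any group
    that falls below a minimum share (both inputs are placeholders)."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {
            "count": n,
            "share": round(share, 3),
            "under_represented": share < min_share,
        }
    return report

# Toy data: group "C" is clearly under-represented and gets flagged.
records = [{"group": "A"}] * 80 + [{"group": "B"}] * 15 + [{"group": "C"}] * 5
for group, stats in representation_report(records).items():
    print(group, stats)
```

A check like this does not remove bias by itself, but it turns a value statement into something a team can review on every dataset it ships.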
2. AI must be accountable, just like its users
We have learned that users build a relationship with AI and begin to trust it after only a few interactions. With trust comes responsibility. AI needs to be held accountable for its actions and decisions, as if it were a human being. Technology cannot be allowed to become so intelligent that it is no longer accountable. We do not accept that kind of behavior from other specialized professions, so why should technology be the exception?
3. Reward AI for "showing how it works"
Any AI system that learns from bad examples can end up socially inappropriate - we have to remember that most AI today has no awareness of what it is saying. Only extensive listening and learning from diverse datasets will solve this.
One approach is to build a reward mechanism into AI training. Reinforcement learning measures should be designed not only around what the AI or robot does to achieve a result, but also around how it aligns with human values in achieving that result.
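A minimal sketch of what such a blended reward could look like is shown below. The `task_reward` and `value_alignment_score` functions, the behaviour trace and the 50/50 weighting are all illustrative assumptions, not a published method; the point is only that the scalar the agent optimises combines the outcome with how the outcome was reached.

```python
def combined_reward(outcome, behaviour_trace,
                    task_reward, value_alignment_score,
                    alignment_weight=0.5):
    """Blend a conventional task reward with a human-values term.

    task_reward(outcome) scores *what* was achieved;
    value_alignment_score(behaviour_trace) scores *how* it was achieved,
    e.g. penalising deceptive or unsafe intermediate actions.
    Both functions and the trace format are hypothetical placeholders.
    """
    r_task = task_reward(outcome)
    r_values = value_alignment_score(behaviour_trace)  # expected in [0, 1]
    return (1 - alignment_weight) * r_task + alignment_weight * r_values

# Toy usage: the same outcome scores differently depending on behaviour.
task = lambda outcome: 1.0 if outcome == "goal_reached" else 0.0
values = lambda trace: 0.0 if "misled_user" in trace else 1.0

print(combined_reward("goal_reached", ["asked_politely"], task, values))  # 1.0
print(combined_reward("goal_reached", ["misled_user"], task, values))     # 0.5
```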
4. AI must level the playing field for everyone
Voice technology and social robots open up new forms of access, especially for users disadvantaged by low vision, dyslexia or limited mobility. Our technology business community must accelerate the development of these resources in order to offer equal conditions and broaden the talent available, in both the accounting and technology professions.
5. AI replaces. But it must also create
The best use case for AI is automation: customer service, workflow and rules-based processes are the perfect scenarios for AI to prove itself.
AI learns faster than humans and is very good at repetitive, everyday tasks, and in the long run it is cheaper than people. Automating tasks will create new opportunities, and we have to train people for that future, allowing them to focus on what they are good at: building relationships and taking care of customers. We must never forget the need for human empathy in core roles such as law enforcement, care, protection and complex decision making.