
*By Anjelica Dortch and Dr. Stacy Hobson

There is no doubt that human biases can influence artificial intelligence (AI) algorithms, producing biased outputs. It is far harder, however, to determine how deeply these prejudices have infiltrated the technologies we develop and use in our daily activities. And although mitigating bias in AI remains a challenge for some decision-making models and systems, it is imperative in order to reduce the likelihood of undesirable outcomes.

Our society continues to evolve with the rapid innovation of emerging technologies, especially AI. Industry, academia, governments and consumers share responsibility for ensuring that AI systems are properly tested and evaluated for potential bias. In addition, existing anti-discrimination legislation must apply equally to any action or practice carried out by these systems. To support bias mitigation strategies, organizations should create, implement and operationalize ethical AI principles, and ensure adequate governance for ongoing review and oversight.

IBM believes that to fully harness the transformative power of artificial intelligence, we must continually develop and evaluate it with a commitment to avoiding discriminatory outcomes that could harm individuals and their families. A critical aspect of developing responsible AI is precisely this focus on identifying and mitigating bias. In recent years, IBM has shared research findings, released tools, and given businesses and their consumers a better understanding of AI systems. These efforts include the AI Fairness 360 Toolkit, AI Factsheets and IBM Watson OpenScale, as well as new IBM Watson features designed to help companies build trustworthy AI.
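As a rough illustration of the kind of group-fairness metric such toolkits report, the sketch below computes disparate impact, the ratio of favorable-outcome rates between an unprivileged and a privileged group. This is a minimal plain-Python sketch of the concept, not the AI Fairness 360 API, and the loan-approval data is hypothetical:

```python
# Minimal sketch of a group-fairness metric of the kind reported by
# toolkits such as AI Fairness 360 (illustrative only, not its API).

def favorable_rate(outcomes):
    """Fraction of favorable (1) outcomes within a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(unprivileged, privileged):
    """Ratio of favorable rates; 1.0 means parity, values below 1.0
    indicate the unprivileged group receives fewer favorable outcomes."""
    return favorable_rate(unprivileged) / favorable_rate(privileged)

# Hypothetical loan-approval outcomes (1 = approved, 0 = denied).
unpriv = [1, 0, 0, 1, 0, 0, 0, 1]   # 3/8 approved
priv   = [1, 1, 0, 1, 1, 0, 1, 1]   # 6/8 approved

print(disparate_impact(unpriv, priv))  # 0.5
```

A ratio this far below 1.0 is exactly the kind of signal that would prompt the testing and mitigation steps discussed below.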

Last year, the IBM Policy Lab called for "precision regulation" to strengthen trust in AI: a framework built on principles such as accountability, transparency, fairness and safety, paired with a call for immediate action. In light of how the public dialogue about bias in AI has evolved, this perspective is more important than ever. In response to renewed attention to inequalities, and to how technology can be misused to exacerbate injustice in areas such as criminal justice, financial services, healthcare and human resources, IBM suggests that policymakers take additional steps toward shaping a legislative environment that addresses legitimate societal concerns.

IBM is committed to upholding diversity, equality and inclusion in our society, economy and the technology we build. As such, we ask governments to implement five priorities to strengthen the adoption of testing, assessment and bias mitigation strategies in AI systems:

  1. Strengthen AI knowledge and literacy across society. A greater understanding of what AI is, its potential benefits, and how to interact with these systems can accelerate the technology's growth and public confidence in it. Furthermore, developing and implementing a national AI agenda can promote a more inclusive and diverse ecosystem and help dispel misconceptions. To this end, increased investment in education to include AI in curricula, together with increased research funding, can ensure that a more diverse range of stakeholders guides the planning, development and deployment of AI systems in the future. Science and technology ministries and agencies should also prioritize building partnerships that promote equity in AI.
  2. Require assessments and testing for high-risk AI systems, focusing on protecting consumers while still enabling innovation. This means requiring testing and bias mitigation, conducted in a robust and transparent manner, for AI systems such as those used in courts; these systems also need to be continuously monitored and retested. In addition: focus pre-deployment assessment requirements on the high-risk AI systems with the greatest potential for harm; document assessment processes, make them auditable, and retain them for a minimum period of time; convene national and international forums to accelerate consensus on trustworthy AI; provide resources and expertise to help organizations of all sizes ensure responsible AI; increase investment in research and development for testing and bias mitigation; and support accelerated training of developers in bias recognition.
  3. Require transparency in AI through disclosure. Developers and owners must inform users when they are interacting with AI technologies that have little or no human involvement, as well as when a high-risk AI system is in use. Furthermore, for automated decision-making systems, the user must, at a minimum, be told why and how a particular decision was reached using AI.
  4. Require mechanisms for consumer review and feedback. Operators of high-risk applications should provide communication channels (e.g., email, phone number, or postal address) for answering users' questions, concerns or complaints. Owners must act responsibly, conducting ongoing reviews of consumer concerns and, where necessary, remediating systemic issues.
  5. Establish universal limitations on the use of AI and adopt responsible licensing practices. To prevent systems from being exploited for illegal, irresponsible and harmful purposes, IBM urges the establishment of universal limitations on the use of high-risk AI applications, prohibiting their use in mass surveillance, racial discrimination, and violations of human rights and basic freedoms. It also urges expanding the development, education and adoption of responsible license terms for open-source software and AI-based applications.
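One way the pre-deployment assessment requirement in priority 2 could be operationalized is as an automated gate that blocks release when group outcome rates diverge too far. The sketch below is hypothetical: the 0.8 lower bound echoes the "four-fifths rule" from US employment-selection guidelines, and the band is illustrative rather than any mandated AI standard:

```python
# Hypothetical pre-deployment bias gate: block release when the ratio of
# favorable-outcome rates between two groups leaves an acceptable band.
# The 0.8 lower bound echoes the "four-fifths rule" from US employment
# guidelines; the band here is illustrative, not a legal standard for AI.

def rate(outcomes):
    """Fraction of favorable (1) outcomes within a group."""
    return sum(outcomes) / len(outcomes)

def passes_bias_gate(group_a, group_b, low=0.8, high=1.25):
    """Return True only if the favorable-rate ratio stays in [low, high]."""
    ratio = rate(group_a) / rate(group_b)
    return low <= ratio <= high

# 45% vs 50% approval rates -> ratio 0.9, inside the band: gate passes.
print(passes_bias_gate([1] * 45 + [0] * 55, [1] * 50 + [0] * 50))  # True
# 30% vs 60% approval rates -> ratio 0.5, outside the band: gate fails.
print(passes_bias_gate([1] * 30 + [0] * 70, [1] * 60 + [0] * 40))  # False
```

A check like this could run in a release pipeline alongside the documentation and audit-retention requirements listed above, so that a failing ratio blocks deployment until mitigation is applied.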

New laws, regulatory frameworks and guidelines for mitigating bias in AI systems are on the way. Built on the priorities above, these measures can give industry and organizations clear requirements for testing, assessment, mitigation and education, increasing consumer confidence and trust in artificial intelligence.

*Anjelica Dortch, Technology Policy Executive, IBM Government & Regulatory Affairs

*Dr. Stacy Hobson, Director of Responsible and Inclusive Technologies, IBM Research

Notice: The opinions presented in this article are the responsibility of its authors and not of ABES - Brazilian Association of Software Companies
