
Overcoming Bias in Data: The Great Challenge for AI


Artificial Intelligence (AI) is a powerful tool, and its growing capabilities can be used to tackle an immense number of problems across all industries, from cybersecurity to manufacturing to healthcare. But while AI is profoundly changing the way businesses operate, concern is mounting around the inherent biases that the technology can amplify. And for good reason: new issues with biased AI continue to emerge, which suggests that too many organizations are still experimenting with AI without an adequate focus on ethics and potential biases.

All organizations exploring the technology should invest in an ethical framework to develop AI policies, regulations and business best practices that are environmentally sustainable and socially preferable. Together, these parameters will help ensure that AI's effects are beneficial, fair, and felt by the largest possible number of people.

Diverse Teams

As organizations increasingly experiment with and rely on artificial intelligence, teams need to be trained not only on the technology but also on the potential ethical repercussions of AI deployments. Development teams need to learn to identify sources of bias in data and to check AI models for fairness. Where possible, developers should work with simple, interpretable models, which also address issues around explainability and sustainability. Where the use case demands complex black-box models, special care must be taken to avoid model cheating (learning artefacts instead of solving the problem), and extensive testing is required to detect any issues around bias and fairness.
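
As an illustration of what a basic fairness check can look like, the sketch below computes the statistical parity difference (the gap in positive-outcome rates between groups) for a set of model decisions. The column names and toy data are assumptions made for this example, not a prescribed method.

    import pandas as pd

    def statistical_parity_difference(df, outcome_col, group_col, privileged_value):
        # Difference in positive-outcome rates between the unprivileged and
        # privileged groups. A value near zero indicates parity on this metric
        # only; it says nothing about other fairness definitions.
        privileged = df.loc[df[group_col] == privileged_value, outcome_col].mean()
        unprivileged = df.loc[df[group_col] != privileged_value, outcome_col].mean()
        return unprivileged - privileged

    # Hypothetical model decisions scored against a sensitive attribute.
    decisions = pd.DataFrame({
        "approved": [1, 0, 1, 1, 0, 1, 0, 0],
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    })
    spd = statistical_parity_difference(decisions, "approved", "group", privileged_value="A")
    print(f"Statistical parity difference: {spd:+.2f}")  # -0.50 for this toy data

A strongly negative value here would indicate that the unprivileged group receives favourable outcomes less often, which should trigger further investigation of the training data and the model.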

Business managers need to learn to address ethical issues in their business cases and to build in pre-deployment checks and post-deployment mitigation to address the unavoidable errors AI deployments will make. AI deployments carry special post-deployment responsibilities for continuous monitoring, because even if all pre-deployment checks have been done, changes in data or in people's behavior can result in new, unforeseen issues.

Having teams with diverse views, experiences and backgrounds working on your AI projects is essential to increasing your chances of producing fair AI. The unconscious biases we all have are amplified in the echo chambers of non-diverse teams. If it is a challenge to stand up sufficiently diverse teams, specially trained team members who take on the role of an AI ethicist can help. These employees are trained in AI technology and in the ethical, legal and regulatory ecosystem. They should be brought in at the project design stage, be involved in regular peer review throughout the project, and have a key role in the final sign-off before deployment. They will work with disciplines across the business throughout the product life cycle, ensuring that potential biases are tackled from diverse perspectives and at the earliest possible stage. AI ethicists can also inform leadership at all levels about unintended non-technical aspects of AI and the impact these might have on corporate risk tolerance.

We are already seeing this trend gaining traction, with leading tech companies, traditional enterprises, consulting firms, and even the U.S. military rushing to add governance structures that can navigate the potential ethical pitfalls of AI. The more that organizations rely on artificial intelligence, the more vital the role a Responsible AI framework plays in ensuring that the use of such technologies aligns with the organization's mission, core values, and regulatory requirements.

Establish Governance

Developers and IT teams cannot be solely responsible for the success and impartiality of algorithms and the data sets they draw from; the responsibility must start with senior leadership. Senior leaders need to understand the trade-offs in deploying automated decision making in their operations and demand that the organization has insight into the inner workings of its AI models. Explainability methods are still far from mature and are of questionable value, so organizations require documented and accessible knowledge about how a deployed AI system works, how it was built, and what its limitations are. A recent report from the UK Centre for Data Ethics and Innovation reminds senior leaders that they need to be clear that they retain accountability for decisions made by their organization, even when an AI makes them and regardless of whether that AI was developed in-house or procured externally.
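
One lightweight way to keep that knowledge documented and accessible is to maintain a structured record alongside every deployed model. The sketch below only illustrates the kind of fields such a record might hold; the model name, owner and limitations are hypothetical, and real documentation standards such as model cards go much further.

    from dataclasses import dataclass, field

    @dataclass
    class ModelRecord:
        # Minimal, illustrative record of what an organization should be able
        # to state about any deployed model.
        name: str
        purpose: str
        training_data: str
        known_limitations: list[str] = field(default_factory=list)
        accountable_owner: str = ""

    record = ModelRecord(
        name="loan-triage-v2",  # hypothetical model
        purpose="Prioritise loan applications for manual review",
        training_data="2019-2023 application history, retail customers only",
        known_limitations=[
            "Not validated for business accounts",
            "Performance degrades for applicants with thin credit files",
        ],
        accountable_owner="Head of Retail Credit Risk",  # hypothetical role
    )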

Since any AI model created from data can only ever be statistically correct, it is unavoidable that AI makes mistakes. It is essential that mitigations, including efficient complaint processes where humans are impacted, are part of the deployment. Organizations can establish shared monitoring and reporting systems, as well as processes to ensure that the appropriate level of management is aware of the performance of the deployed AI and of any other issues relating to it. Several organizations are already working on frameworks and tools that can help establish governance and best-practice workflows for machine learning. IBM, for example, has published an open-source AI Fairness toolkit. However, tackling bias cannot simply be delegated to software. Research shows that there are many different possible definitions of fairness and that it is impossible to satisfy them all at the same time. This means AI developers and managers need to be aware of the resulting impacts and trade-offs when addressing fairness in their products.
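
To make that trade-off concrete, the sketch below evaluates the same hypothetical predictions against two common fairness definitions, statistical parity (equal positive-prediction rates) and equal opportunity (equal true-positive rates). The toy numbers are invented purely to show that satisfying one definition does not imply satisfying the other.

    import numpy as np

    # Hypothetical predictions for two groups; group A is treated as privileged.
    y_true_a = np.array([1, 1, 1, 0]); y_pred_a = np.array([1, 1, 0, 0])
    y_true_b = np.array([1, 0, 0, 0]); y_pred_b = np.array([1, 1, 0, 0])

    # Definition 1: statistical parity (equal positive-prediction rates).
    parity_diff = y_pred_b.mean() - y_pred_a.mean()

    # Definition 2: equal opportunity (equal true-positive rates).
    tpr_a = y_pred_a[y_true_a == 1].mean()
    tpr_b = y_pred_b[y_true_b == 1].mean()
    equal_opportunity_diff = tpr_b - tpr_a

    print(f"Statistical parity difference: {parity_diff:+.2f}")       # +0.00: parity holds
    print(f"Equal opportunity difference: {equal_opportunity_diff:+.2f}")  # +0.33: a gap remains

Which definition matters more depends on the use case, which is exactly why these choices belong with accountable managers and not only with the development team.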

The key for all organizations is that defining AI governance needs to be done carefully and with significant thought. By creating a robust framework of governance, AI can be implemented consistently and safely, allowing organizations to successfully deliver AI programs responsibly, every time, and at scale.

Educate Employees

Employees in AI-centric roles must understand the technology's limitations, be able to identify where weaknesses or biases in the data may be hiding, and know how to correct them so that the decisions being made by algorithms are consistent with the values of the organization. This requires employee training on the organization's ethical AI framework, which could take the form of online training modules, virtual seminars or workshops.

Mandatory training sessions should bring together different departments and functions to share AI-related issues from various perspectives. For example, an HR compliance officer will bring just as important a perspective as a data engineer or a regulatory affairs officer. Sessions should train employees on how to uphold the organization's AI ethical commitments, as well as how to ask the critical questions needed to spot potential ethical issues, such as whether an AI application might exclude groups of people or cause social or environmental harm.

Time must be invested to develop and disseminate responsible AI principles, but through training, organizations can ensure that their employees are operationalizing the ethical AI framework in their day-to-day work.

Consistently Review

Effective integration relies heavily on regular assessment of the risks and biases associated with each use case. An ethical AI framework must include internal and external checks to ensure equitable application across all participants. For example, best-practice ML Ops pipelines will monitor AI models in production to detect data drift, unwanted bias or a decrease in performance. Combining ML Ops with strong data governance and data quality management can help to detect discriminatory bias and inconsistencies in the data before it is used in machine learning processes. Embedding continuous and rigorous testing in ML engineering workflows guards against the risk of releasing flawed AI models.
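
As a simple illustration of drift monitoring, the sketch below compares the production distribution of a single input feature against its training-time reference with a two-sample Kolmogorov-Smirnov test. The feature values, significance threshold and alerting logic are assumptions for this example rather than a recommendation of any particular ML Ops product.

    import numpy as np
    from scipy.stats import ks_2samp

    def drift_alert(reference, live, p_threshold=0.01):
        # Flag a feature whose live distribution differs significantly from the
        # training-time reference, using a two-sample Kolmogorov-Smirnov test.
        statistic, p_value = ks_2samp(reference, live)
        return p_value < p_threshold, statistic, p_value

    rng = np.random.default_rng(0)
    reference = rng.normal(loc=0.0, scale=1.0, size=5000)  # feature at training time
    live = rng.normal(loc=0.4, scale=1.0, size=5000)       # same feature in production
    drifted, stat, p = drift_alert(reference, live)
    print(f"drift detected: {drifted}, KS statistic: {stat:.3f}, p-value: {p:.1e}")

In a real pipeline, an alert like this would feed the shared monitoring and reporting processes described above, so that the appropriate level of management is informed before a drifting model causes harm.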

The development of AI holds the potential for both positive and negative impact on society, and charting the most responsible course will depend on the use of a framework that delivers an ethical AI strategy. Businesses can then clearly explain how they use data, monitor AI activities, arrive at decisions, and control unwanted scenarios. Ultimately, the right level of structure and control will protect customers and stakeholders and build trust in the business’s use of technology.


Dr. Detlef Nauck, Global Head of AI & Research, BT
