This blog explores UN Sustainable Development Goal 10, reduce inequality within and among countries, with a specific focus on how AI can support or hinder this goal. Two recommendations are outlined to help Board Directors and CEOs improve equality through stronger AI methods.
1.) Look closely at the talent mix in your software engineering teams to ensure diversity in terms of ethnicity and gender. Gaps in social diversity impact how programmers design software, so if companies want to build a more equitable world, they need to look deeply at the demographic ratios in their engineering teams and make adjustments.
Facial recognition software often has inherent bias because AI/ML models are trained on insufficiently diverse data sets. Fortunately, IBM has developed the Diversity in Faces dataset, with over one million annotated human faces, which offers a more representative sample of society across ethnicity and gender.
Developing AI practices where equity is a foundational value is key. Training software engineers and data scientists on AI equity that protects against bias based on ethnicity, gender and sexual orientation is important. There is in fact a whole range of criteria used in machine learning to judge an algorithm's fairness, but none has consensus and several are mutually incompatible.
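To make that incompatibility concrete, here is a minimal sketch with hypothetical data showing two widely used fairness criteria disagreeing about the same classifier: the model satisfies demographic parity (both groups receive positive predictions at the same rate) while violating equality of opportunity (qualified members of one group are selected far less often).

```python
def positive_rate(y_pred):
    """Fraction of individuals the model predicts positive."""
    return sum(y_pred) / len(y_pred)

def true_positive_rate(y_true, y_pred):
    """Fraction of truly qualified individuals the model predicts positive."""
    positives = [p for t, p in zip(y_true, y_pred) if t == 1]
    return sum(positives) / len(positives)

# Hypothetical ground truth and predictions for two demographic groups.
group_a_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
group_a_pred = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
group_b_true = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]
group_b_pred = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]

# Demographic parity: both groups get positive predictions 50% of the time.
parity_gap = abs(positive_rate(group_a_pred) - positive_rate(group_b_pred))

# Equality of opportunity: group A's qualified members are all selected
# (TPR 1.0), but only 5 of group B's 8 qualified members are (TPR 0.625).
opportunity_gap = abs(true_positive_rate(group_a_true, group_a_pred)
                      - true_positive_rate(group_b_true, group_b_pred))

print(parity_gap)       # 0.0  -- fair by demographic parity
print(opportunity_gap)  # 0.375 -- unfair by equality of opportunity
```

Which criterion matters depends on context, which is exactly why engineering teams need explicit training and policy rather than a single "fairness" checkbox.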
Open-source software toolkits can help close AI bias gaps. IBM's AI Fairness 360, which identifies over 100 types of data bias, is an excellent source. Other AI data bias and audit tools can be found at the Center for Data Science and Public Policy at the University of Chicago, which has developed Aequitas, an open-source bias audit toolkit that helps machine learning developers, analysts, and policymakers audit models for discrimination and bias, and make informed and equitable decisions when developing and deploying predictive risk-assessment tools.
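As a flavor of what such audit toolkits automate, the sketch below (not the actual AI Fairness 360 or Aequitas API, and using hypothetical numbers) computes the disparate impact ratio and applies the "four-fifths rule" often used to flag discriminatory selection rates:

```python
def disparate_impact(selected_unpriv, total_unpriv, selected_priv, total_priv):
    """Ratio of the unprivileged group's selection rate to the privileged group's."""
    rate_unpriv = selected_unpriv / total_unpriv
    rate_priv = selected_priv / total_priv
    return rate_unpriv / rate_priv

# Hypothetical hiring-model outcomes: 30 of 100 unprivileged applicants
# selected, versus 60 of 100 privileged applicants.
ratio = disparate_impact(30, 100, 60, 100)

print(ratio)         # 0.5
print(ratio >= 0.8)  # False -- fails the four-fifths rule; flag for review
```

Real toolkits compute dozens of such metrics across every protected attribute in a dataset, which is why sourcing them is far cheaper than building audits from scratch.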
2.) Help ensure your AI practices are transparent and do not rely on black-box approaches. In other words, understand how your AI learning models work and the data used to produce a prediction. Board directors and CEOs also have opportunities to modernize their procurement practices by requiring that vendors use transparent and ethical approaches to AI. Ensuring engineering teams include ethicists and statistical experts who review and validate the data sets, and investing in operating processes designed to detect data bias, are all important operating practices.
Other ways AI can help include smarter text editors that detect data bias, and joining community groups like France's Data for Good, which brings together data scientists, developers, and designers to support social impact projects.
Another recent helpful source is Salesforce Research's AI Economist, an open-source AI framework to help economists, governments, and others design tax policies that optimize social outcomes in the real world.
On the darker side, AI can also track people's productivity and whereabouts, a practice often referred to as surveillance capitalism. If you are a food delivery worker in China working for Ele.me, an AI algorithm tracks your every move, measuring how long it takes you to fulfil an order and reducing your hourly rate if you miss the tracking deadlines. Amazon and other large-scale distribution centers likewise track workers' productivity pace, and it is not a leisurely walk.
“Critics say those fulfillment center workers face strenuous conditions: workers are pressed to ‘make rate,’ with some packing hundreds of boxes per hour, and losing their job if they don’t move fast enough. ‘You’ve always got somebody right behind you who’s ready to take your job,’ says Stacy Mitchell, co-director of the Institute for Local Self-Reliance and a prominent Amazon critic” (quoted from The Verge).
AI has tremendous power to modernize processes and ensure equitable results, but humans must be held accountable for cleansing the data inputs, and strong data bias detection tools are needed. Few organizations have invested in data ethicists and third-party auditors on large-scale AI projects to ensure they receive a clean bill of “data equity” health.
Striving to advance UN Goal #10, “reduce inequality within and among countries,” can simply start with 1.) ensuring diverse talent is on AI project teams and that investments are made in data bias detection toolkits, and 2.) adopting more robust, explainable, and transparent AI solution development methodologies.
Furthermore, boards have an important role in ensuring their CEO and leadership team are effectively managing both the potential of AI and, increasingly, its organizational risks, including ethical and reputational risks.
It is part of a board’s fiduciary responsibility to oversee AI and its potential impact on the business, but also its ethical impact on society, striving to make our world a better place.
The United Nations developed the Sustainable Development Goals in 2015 as a universal call to action to end poverty, protect the planet, and improve the lives and prospects of everyone, everywhere. The 17 Goals were adopted by all UN Member States as part of the 2030 Agenda for Sustainable Development, which set out a 15-year plan to achieve them.
To see the full AI Brain Trust Framework introduced in the first blog, reference here.