
Salon Dinner #2: Bias in AI

Olga Yanovskaya and Sirisha Chada

Salon #2 Dinner Guests

Entrepreneurs, academics, and HerCentury community members gather over delicious Malaysian cuisine and melodious piano music for our Salon dinners once a month. These dinners have an addictive quality as we indulge and debate thought-provoking topics at the intersection of technology and philosophy. HerCentury salon dinners are inspired by the eighteenth-century Parisian salons and the English teahouses; our objective is to revive this style of public discourse and enable more women to participate in these conversations.

The conversation at our April dinner centered on “Bias in AI”. Bias is a mental shortcut that helps us make decisions quickly: when we decide, we prioritize what we pay attention to, which can amount to a systematic mental error. Unconscious bias, a less-researched term, describes the assumptions people make about sub-groups within society. Gender bias and racial bias are highly prevalent, and both realms are now receiving some attention.

Our dinner focused on bias in machine learning (ML). A simple experiment with round-trip Google translation, from English into a gender-neutral language like Turkish or Finnish and back, reveals such accumulated bias: “she” will change to “he” and vice versa depending on the activity performed, due to inherent bias in the tool. We still live in a world where “she” cooks delicious dinners and “he” recites presidential speeches. Gender-biased translation results are simply a more visible manifestation of the inherent bias in the output of ML algorithms.
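The experiment is easy to reproduce in a few lines of code. Below is a minimal sketch using the unofficial googletrans package; the package version pin and the example sentences are our assumptions, and results depend on the live Google Translate service, so they may change over time:

```python
# Round-trip translation sketch: English -> Turkish -> English.
# Requires the unofficial client: pip install googletrans==4.0.0rc1
from googletrans import Translator

translator = Translator()

sentences = [
    "She is a doctor.",
    "He is a nurse.",
    "She cooks dinner.",
    "He gives a speech.",
]

for sentence in sentences:
    # English -> Turkish: the Turkish third-person pronoun "o" is
    # gender-neutral, so the gender carried by "she"/"he" is lost here.
    turkish = translator.translate(sentence, src="en", dest="tr").text
    # Turkish -> English: the model must now guess a pronoun, and it tends
    # to pick whichever gender its training data associates with the activity.
    back = translator.translate(turkish, src="tr", dest="en").text
    print(f"{sentence!r} -> {turkish!r} -> {back!r}")
```

Watching which pronoun survives the round trip makes the statistical nature of the bias tangible: the tool is not opining, it is echoing its training data.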

ML algorithms now digest giant data sets and power many decisions that affect consumers: admissions, insurance policy scoring, credit and loan applications, credit and social scoring, the justice system, face recognition, drug discovery, clinical studies, and more. Many of the key algorithms that affect our public life are also considered proprietary trade secrets. This veil of secrecy leaves the public, who must live with the outcomes of these decisions, in the dark.

Although ML is still a relatively young technology, there is already a trend toward democratizing access to powerful machine learning tools. Amazon, Microsoft, and Google now offer solutions accessible to people without an engineering background, sometimes requiring little or no coding. As more and more people gain access to ML tools, this should open the field up and accelerate the development of new intelligent applications.

As machines “practice” on giant data sets, there is a real chance those data sets contain accumulated bias, and ML algorithms are vulnerable to the characteristics of their “training” data. Feeding such data into learning systems compounds the familiar “garbage in, garbage out” problem, in which the quality of the output is determined by the quality of the input. ML algorithms need a healthy “data diet” and ethically acceptable conditions governing their output.
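To see how a biased “data diet” flows straight through to biased decisions, here is a toy sketch; all the data is synthetic and the hiring scenario is our hypothetical illustration, not any real system:

```python
# Toy "garbage in, garbage out" demo: a model trained on historically
# biased hiring decisions reproduces that bias for new candidates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Candidate features: a skill score, drawn identically for everyone,
# and gender (0 or 1), which should be irrelevant to the decision.
skill = rng.normal(size=n)
gender = rng.integers(0, 2, size=n)

# Biased historical labels: past decision-makers favored gender == 1,
# so the data the model "practices" on already contains the bias.
logits = 1.5 * skill + 1.0 * gender - 0.5
hired = rng.random(n) < 1.0 / (1.0 + np.exp(-logits))

model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)

# Two candidates with identical skill who differ only in gender:
# the model has faithfully learned the bias baked into its inputs,
# so the gender == 1 candidate scores noticeably higher.
candidates = np.array([[0.5, 0.0], [0.5, 1.0]])
print(model.predict_proba(candidates)[:, 1])
```

Nothing in the algorithm is malicious; it simply optimizes against the examples it was fed, which is exactly why the quality and balance of training data matter.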

Our dinner guests agreed that diversity in teams and data, transparency, traceability, and education are key to preventing biased models and limiting the negative potential of ML.

Dr. Lorna Doucet, an emotional intelligence expert, argues that the burden of self-regulation should not be placed on developers alone, as they don’t necessarily hold the core principles that end users truly value, especially regarding potential adverse impact. End users must take responsibility for articulating their core values and for ensuring that the AI they use is aligned with them.


Ideas from around the table included adding a level of scrutiny similar to that applied in pharma and other heavily regulated industries.

Young entrepreneurs Li Ziwei and Zhu Xiaohu from University AI highlighted a risky trend: engineers working in ever greater isolation from business process owners. Understanding the business processes, and their effect on individuals and societies at large, will always at least partially mitigate development risks. Both agreed that, in addition to technical courses, teaching developers about ethics and societal consequences should become part of the standard curriculum. Zhu Xiaohu, an engineer himself, is also passionate about building a tool that can explain ML algorithms to end users.

Dr. Miguel A. Cerna, a behavioral scientist, considers bias to be inherent in societies, varying from one society to another according to their fundamental values. Dr. Cerna questions how those cultural and social differences, embedded in data sets, will eventually be reflected in the outcomes of ML algorithms.

Marc Pedri, founder of Evo Creations, focuses on enhancing human capabilities through AI with his multi-disciplinary team of AI trainers; diverse teams deliver better solutions. Marc also sees blockchain technology as an opportunity to trace ML “training” data sets.

Sun Wei, leader of Humanity+ China, thinks that emotional intelligence is one of the important aspects of AGI (Artificial General Intelligence) that needs to mature: it is critical to cultivate AI’s EQ (Emotional Quotient) with human values and cultures in order to establish a friendly AGI.

Tianyi Pan, an e-commerce professional fluent in Finnish and striving to apply AI and ML to real problems, reminded us of the role language plays in perpetuating bias in society. Language forms the concepts, shorthand, and knowledge through which we understand the world around us. He thinks that if a language has gender-neutral third-person pronouns, it will curb gender bias at least to a degree, even though we can never be rid of it entirely.

Although we all look forward to the exciting opportunities ML solutions will bring into our lives, we are concerned about transparency and the ability to explain certain outcomes. It is unlikely that any of us would voluntarily accept a lower-paid job because of our gender, overpay an insurance premium because of our address, or accept a rejection because people like us weren’t represented in an ML training data set. Yet as individuals, ordinary people are in no position to combat this bias without the help of governing bodies representing everyone’s interests.

We look forward to next month’s discussion!

Engineers, data scientists, and all other creators of machine-learning models need to understand how bias might affect the outcomes of their solutions, and how development teams and companies can mitigate these risks.

A list of recommended reading and videos is provided below.

Salon Dinner #2: Articles and Videos used as reference for discussion