"Today, I simply wanted to renew my passport online. After numerous attempts and changing my clothes several times, this example illustrates why I regularly present on Artificial Intelligence/Machine Learning bias, equality, diversity and inclusion. #passport pic.twitter.com/sEsmdTcR1L"

— Cat Hallam (@CatHallam1) April 6, 2019
Yesterday this tweet showed up in my Twitter feed. Before then, I hadn't really thought about the biases present in machine learning, which is a bias of my own. I'm interested to learn more about this issue. Let's start today with an overview of the problem:
Here’s what we’ll be discussing this week:
What did you picture when you pictured a shoe? How do you think that choice is impacted by your own experiences?
What are some of the things you use on a daily basis that utilize machine learning?
“Just because something is based on data doesn’t automatically make it neutral.” Can you think of other instances where this would be true?
Have you seen this bias in machine learning?
What solutions can you suggest for combatting this complex problem?
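The quote above, "just because something is based on data doesn't automatically make it neutral," can be illustrated with a deliberately tiny, hypothetical sketch: a "model" that learns nothing but the most frequent label in its training data. The dataset and labels here are invented for illustration; real systems fail in subtler ways, but the underlying mechanism is the same.

```python
from collections import Counter

def train(labels):
    # "Learn" by memorizing the most common label in the training data.
    return Counter(labels).most_common(1)[0][0]

def predict(model, example):
    # Predict the same memorized label for every input.
    return model

# A skewed dataset: 95 sneakers, only 5 heels.
training_labels = ["sneaker"] * 95 + ["heel"] * 5
model = train(training_labels)

# The model is entirely "based on data," yet it calls every shoe a sneaker:
# perfect accuracy on the over-represented group, zero on the rest.
print(predict(model, "photo_of_a_heel"))  # prints "sneaker"
```

The point of the toy example is that the bias lives in the data's composition, not in any malicious line of code, which is why it is so easy to ship without noticing.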