Welcome to Monday Hot Takes Episode 3! This week’s hot take is about machine learning and facial recognition (again!). Let me know what you think in the comments! For more info on digital literacy, please check out our Digital Literacy channel.
Hot Take: Machine Learning is not neutral!
Lots of outcry over this Harrisburg University blog post touting that a data science research team had created a machine learning algorithm that could detect criminals solely by examining a face. (That post was later replaced with this one.) The research was to be included in an academic series published by Springer Publishing called Springer Nature — Research Book Series: Transactions on Computational Science and Computational Intelligence. (It has since been excluded from publication.)
A group called the Coalition for Critical Technology posted a letter they wrote to the publishing company, warning about the dangers of publishing as fact a paper based on race science. This matters because racist theories like eugenics were once all the rage, leading to things like the Holocaust.
RackN’s weekly Distance DevOps Lunch and Learn is a pretty intimate lunch meeting that discusses some interesting topics.
Disclaimer: I do some work for RackN, but I wasn’t asked (or paid) to include them here.
We must be vigilant about not repeating the past, especially now that the tech has matured to the point that real AI is a real possibility. The machine learning algorithms behind it are never neutral by default; we must work to get them there. We must recognize that by encapsulating bias in this way, we will automate inequality.
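To make that point concrete, here is a minimal, entirely hypothetical sketch in plain Python. The data, group names, and rates are all invented for illustration: two groups have the identical true rate of some flagged behavior, but the historical record over-reports one group. Any model that simply learns from those labels absorbs the bias.

```python
import random

random.seed(0)

# Hypothetical synthetic data: both groups have the SAME true rate (10%),
# but group "B" is over-reported in the historical labels.
def make_biased_data(n=10000):
    data = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        truly_flagged = random.random() < 0.10
        # Labeling bias: extra false positives recorded for group B only.
        over_report = (group == "B") and random.random() < 0.15
        data.append((group, truly_flagged or over_report))
    return data

data = make_biased_data()

# A trivial "model" that learns the per-group label rate from the data.
def learned_rate(data, group):
    labels = [label for g, label in data if g == group]
    return sum(labels) / len(labels)

rate_a = learned_rate(data, "A")
rate_b = learned_rate(data, "B")
print(f"learned rate for A: {rate_a:.2f}")
print(f"learned rate for B: {rate_b:.2f}")
```

Even though the underlying behavior is identical across groups, the "model" confidently reports a much higher rate for group B, because the bias lives in the labels it was trained on, not in the world.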
What are your thoughts on this topic?