The founders of Predictim want to be clear with me: Their product—an algorithm that scans the online footprint of a prospective babysitter to determine their “risk” levels for parents—is not racist. It is not biased.
Another example of an app claiming "We don't have bias in our AI" when it seems patently clear that they do. I wonder how one would prove, mathematically, that a model didn't have bias?
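For what it's worth, about the best anyone can do is operationalize "bias" as a specific, measurable criterion (demographic parity, equalized odds, etc.) and show the model satisfies that criterion within some tolerance on a given dataset. Known impossibility results (Chouldechova; Kleinberg et al.) show several common fairness criteria can't all hold simultaneously in general, so there's no proving "no bias" in an absolute sense. Here's a minimal sketch of checking two such metrics; the data and predictions are hypothetical, and this is emphatically not Predictim's method:

```python
# A sketch of auditing model outputs against two common fairness metrics.
# Passing a metric on one dataset is the most "proof" one can offer; it
# never establishes absolute absence of bias.
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Largest gap in positive-prediction rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalized_odds_gaps(y_true, y_pred, group):
    """Largest gaps in true-positive and false-positive rates across groups."""
    tprs, fprs = [], []
    for g in np.unique(group):
        yt, yp = y_true[group == g], y_pred[group == g]
        tprs.append(yp[yt == 1].mean())  # TPR within group g
        fprs.append(yp[yt == 0].mean())  # FPR within group g
    return max(tprs) - min(tprs), max(fprs) - min(fprs)

# Hypothetical data: y_pred = 1 means "flagged as risky",
# group is a protected attribute (e.g., encoded race or gender).
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
group = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)

print("Demographic parity gap:", demographic_parity_diff(y_pred, group))
print("Equalized odds gaps (TPR, FPR):",
      equalized_odds_gaps(y_true, y_pred, group))
```

Near-zero gaps on one metric for one dataset is a far weaker claim than "our AI has no bias," which is part of why such blanket statements ring hollow.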