What drives these kinds of predictive AI systems are inputs like histories of arrests in a given zip code. So if you live in a zip code that has historically been overpoliced, the data will show overarresting, and the predictions will reflect it.
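To make that feedback loop concrete, here is a minimal, hypothetical sketch (the zip codes, offense rates, and patrol shares are invented for illustration and do not describe any real system): two zip codes have the same underlying offense rate, but one starts out more heavily patrolled, and a predictor that allocates patrols according to past arrests never corrects the gap, because the biased data keeps validating the biased allocation.

```python
# Hypothetical illustration, not any real predictive-policing product.
# Both zip codes have the same true offense rate; only historical patrol
# intensity differs, yet the arrest-driven allocation never corrects itself.

TRUE_OFFENSE_RATE = 100                       # identical in both zip codes
patrols = {"zip_A": 0.8, "zip_B": 0.2}        # historical policing intensity
arrest_history = {"zip_A": 0.0, "zip_B": 0.0}

for year in range(1, 6):
    # Observed arrests track patrol presence, not underlying offending.
    arrests = {z: TRUE_OFFENSE_RATE * share for z, share in patrols.items()}
    for z, count in arrests.items():
        arrest_history[z] += count

    # "Predictive" reallocation: next year's patrols follow the arrest record.
    total = sum(arrest_history.values())
    patrols = {z: arrest_history[z] / total for z in arrest_history}

    print(f"Year {year}: arrests A={arrests['zip_A']:.0f}, B={arrests['zip_B']:.0f}; "
          f"next patrol share A={patrols['zip_A']:.2f}, B={patrols['zip_B']:.2f}")
```

Running the sketch shows the same 80/20 split every year: the model "confirms" the historical overpolicing even though the two neighborhoods offend at identical rates.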
There is ample evidence of the discriminatory harm that AI tools can cause to already marginalized groups… Bias is often baked into the outcomes the AI is asked to predict. Likewise, bias is present in the data used to train the AI: data that is often discriminatory toward, or unrepresentative of, people of color, women, and other marginalized groups.
Recognizing and addressing bias in generative AI models is crucial to building inclusive technology that factors in diverse perspectives and produces outcomes that are both fair and equitable.