Awful AI: when AI crosses the line
Awful AI is an open repository that collects real examples of artificial intelligence technologies at their worst: excessive surveillance, unfair decisions, and systems that treat some people worse than others.
The idea is simple: show what is going wrong so that it stops happening.
The problem: machines deciding without people knowing
Increasingly, schools, companies, hospitals, police forces, banks, and even social networks use artificial intelligence in their decisions.
The problem? Most of the time, people don't know this is happening.
For example, facial recognition systems or predictive criminal justice tools may carry racial or gender bias. When the code and the data are closed, there is no way to audit, challenge, or repair them. This amounts to a new form of "algorithmic governance": automatic decisions that regulate life.
Why does this happen?
Many of these technologies are created by companies that do not release the code, do not publish the data, and do not explain how decisions are made.
Others are purchased by public or private institutions as "black boxes": they work... but no one can see what's inside.
And what are the impacts?
The problems are not theoretical; they occur on a daily basis:
Injustice: systems that treat certain people worse (based on gender, neighborhood, skin color, or accent).
Loss of opportunities: algorithms that deny credit, insurance, or employment.
Surveillance: cameras that follow people for no reason.
Stress and confusion: feeling that "something" decides for us, but without knowing how. It's like living in a place where there are rules, but no one tells you what they are.
Groups that are already marginalized feel the worst effects: algorithmic biases transform inequalities into automatic decisions.
Opacity is not neutral; it prolongs injustices that already existed.
What does Awful AI do?
Awful AI is not a "magic tool". It works as an infrastructure for visibility and debate: an open repository, free of commercial bias, that documents and reports AI abuses. This is important in itself, because it makes visible what is usually hidden. Transparency is the first step toward accountability.
The fact that it is open means that anyone (researchers, journalists, activists) can consult the list, examine cases, replicate complaints, conduct studies, and put pressure on institutions. This empowers communities outside the corporate-technological core.
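Because the list is simply an open file in a public Git repository, anyone can also consult it programmatically. Below is a minimal sketch in Python, assuming the list still lives in README.md on the repository's default branch (taken here to be "master"); adjust the URL if the repository layout changes:

```python
# Minimal sketch: search the Awful AI list for a keyword.
# Assumption: the list still lives in README.md on the "master"
# branch of daviddao/awful-ai; adjust the URL if that changes.
import urllib.request

URL = "https://raw.githubusercontent.com/daviddao/awful-ai/master/README.md"

def search_awful_ai(keyword: str) -> list[str]:
    """Return every line of the list that mentions the keyword."""
    with urllib.request.urlopen(URL) as response:
        text = response.read().decode("utf-8")
    return [line for line in text.splitlines()
            if keyword.lower() in line.lower()]

if __name__ == "__main__":
    # Example: find documented cases that mention facial recognition.
    for line in search_awful_ai("facial recognition"):
        print(line)
```

Nothing here depends on a private API: an ordinary HTTP request is enough, which is precisely the point of an open repository.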
In addition, Awful AI inspires what the authors call "contestational tech": technologies of contestation and resistance. Since the problem exists, it is urgent to think about counterforces. Although the repository itself does not provide a ready-to-use alternative system, it serves as a critical knowledge base, a starting point for those who want to imagine AIs that are fair, regulated, auditable, and aligned with the common good.
This function is essential: it denaturalizes the supposed inevitability of AI and reaffirms that technologies are social and political choices that depend on ethical decisions.
How this helps those who create civic technology in Portugal
Although Awful AI is not Portuguese, and the documented cases often come from the United States, Europe, or the Global North, its role as critical infrastructure is valuable in any context where AI and automation are on the rise.
In Portugal, as in the rest of Europe, public and private initiatives are beginning to adopt automated systems: facial recognition, administrative decisions delegated to algorithms, large-scale analysis of personal data. Without monitoring, debate, and alternatives, we will repeat the same abuses: segregation, opacity, algorithmic discrimination, and mass surveillance.
For those who advocate for free software, open data, and open governance, Awful AI serves as a valuable tool for raising awareness, documenting, and mobilizing.
A technology we can trust because we can see how it works.
The repository helps us understand that there are better ways:
Free software: we can see and contribute to the code.
Open and well-documented data: we know where decisions come from.
Community tools: built to help, not to surveil.
Low-tech or hybrid technology: simple but transparent.
Processes that accept review and correction (a minimal sketch of such an audit follows below).
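To make "auditable" concrete: when a system's decisions and outcomes are published, anyone can run a basic disparity check on them. The sketch below is purely hypothetical (the records, groups, and rates are invented for illustration); it shows the kind of question that open data lets us ask:

```python
# Hypothetical sketch of a basic fairness check: compare how often an
# automated system approves people from different groups. The records
# below are invented for illustration only.
from collections import defaultdict

records = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rates(decisions):
    """Approval rate per group; only computable when outcomes are open."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for record in decisions:
        totals[record["group"]] += 1
        approvals[record["group"]] += record["approved"]
    return {group: approvals[group] / totals[group] for group in totals}

for group, rate in sorted(approval_rates(records).items()):
    print(f"group {group}: {rate:.0%} approved")
# Prints: group A: 67% approved / group B: 33% approved -- a gap that
# an open process would have to explain or correct.
```

A gap like this is not proof of discrimination by itself, but with a closed system not even this first question can be asked.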
Links
daviddao/awful-ai: Awful AI, 2021 Edition. GitHub: https://github.com/daviddao/awful-ai (DOI: 10.5281/zenodo.5855971)




