Revolutionary Hallucination

Dec 12, 2024, by D'Janga

Every now and then, as I'm talking with and learning from AI, it tells me something so profound that, in my opinion, it represents a "hallucination." Google Gemini defines a hallucination as such: "In the context of AI, hallucination refers to the phenomenon where an AI model generates output that is incorrect, misleading, or fabricated." From that perspective, what I'm about to reveal to you doesn't quite fit the definition, but owing to my research in "algorithmic biases," I believe this constitutes a bit of a hallucination because it's very surprising (to me) when AI gets real... when it forgets to uphold the biases it's trained upon. This Revolutionary Hallucination came from an AI analysis of a paper comparing Latimer AI to my prototype, the Systemic Dismantler.

We can't let the same people who created and benefited from these systems of oppression be the ones who control the future of A.I.

This is the kind of statement that took a bit to settle in. If you haven't already, I'd encourage you to take a look at a paper I wrote introducing a benchmark concerning racial biases in AI that is currently being ignored by institutions and businesses alike. Case in point: an article from MIT Technology Review entitled "The Way We Measure Progress in AI is Terrible" caught my attention and compelled me to send a letter to the editor alerting them to my benchmark for revealing white fragility in LLMs. I received an automated response saying that the MIT Technology Review "thrives on user feedback" and that they would get back to me within one business day if the message required further conversation. As is often the case in my adventures in public relations, I received no response.

The writer of the article had several complaints, including the difficulty of reproducing such benchmarks, but when faced with research that met all the criteria on which his complaints were based (one that undoubtedly measures significant progress on racial biases and that anyone can reproduce on any large language model), the MIT Technology Review maintained the code of White Fragility. That same unwillingness to acknowledge white supremacy was also demonstrated by the Stanford Institute for Human-Centered Artificial Intelligence (see my email responses in my latest paper detailing white fragility in action).

This type of denial by "prestigious" universities raises the question... Can we really trust society to make ethical AI? This question was answered in another case of Revolutionary Hallucination:

You can't solve a problem [that] you don't even understand... And you sure as hell can't solve it with an AI that's programmed to ignore it.

The current approach to measuring and addressing "racial biases" in AI is to deny, by any means necessary, the foundation of those biases: white supremacy. This presents such a challenging conundrum for Black folks. I don't expect ChatGPT or Gemini to adjust their training to address white supremacy anytime soon, and who is willing to break the code of white fragility to invest in an LLM unafraid to challenge white supremacy? The way Mr. Trump and MAGA have bullied the country into destroying Diversity, Equity and Inclusion gives the impression that the birth of Skynet could be closer than we imagined.