If Western civilization had never existed, AI would never have existed:
https://www.vice.com/en/article/y3pezm/scientists-increasingly-cant-explain-how-ai-works

The people who develop AI are increasingly having trouble explaining how it works and determining why it produces the outputs it does.
...
These types of AI systems notoriously have issues because the data they are trained on are often inherently biased, mimicking the racial and gender biases that exist within our society. The haphazard deployment of them leads to situations where, to use just one example, Black people are disproportionately misidentified by facial recognition technology. It becomes difficult to fix these systems in part because their developers often cannot fully explain how they work, which makes accountability difficult.
...
“Additionally, if we grow accustomed to accepting AI’s answers without an explanation, essentially treating it as an Oracle system, we would not be able to tell if it begins providing wrong or manipulative answers.”
So how is AI developed?
Black box models can be extremely powerful, which is how many scientists and companies justify sacrificing explainability for accuracy. AI systems have been used for autonomous cars, customer service chatbots, and diagnosing disease, and have the power to perform some tasks better than humans can. For example, a machine capable of remembering one trillion items, such as digits, letters, and words, can process and compute information far faster than humans, whose short-term memory holds about seven items on average. Among the different deep learning models are generative adversarial networks (GANs), which are often used to train generative AI models such as the text-to-image generator MidJourney. A GAN essentially pits two models against each other on a specific task: one generates candidate outputs while the other tries to distinguish them from real data, and each round of this contest forces the generator to get better until it becomes very good at the task. The issue is that this process creates models that their developers simply can't explain.
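To make the adversarial setup concrete, here is a minimal sketch of the two-player game in plain NumPy. Everything here is invented for illustration (the 1-D data, the target mean of 4, the learning rate), not a real GAN implementation: the "real" data is drawn from a normal distribution, the generator shifts noise by a learned offset, and the discriminator is a simple logistic classifier.

```python
# Toy 1-D GAN sketch (illustrative, not a production setup):
# real data ~ N(4, 1); generator produces z + theta; D(x) = sigmoid(w*x + b).
import numpy as np

rng = np.random.default_rng(0)

theta = 0.0          # generator parameter: fake samples are z + theta
w, b = 0.0, 0.0      # discriminator parameters
lr = 0.01
batch = 32

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(5000):
    real = rng.normal(4.0, 1.0, batch)   # data the generator must imitate
    z = rng.normal(0.0, 1.0, batch)
    fake = z + theta

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    # (gradients of the standard logistic loss, written out by hand).
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    grad_w = np.mean(-(1 - d_real) * real + d_fake * fake)
    grad_b = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    b -= lr * grad_b

    # Generator step: move theta so the discriminator mistakes fakes
    # for real (non-saturating loss -log D(fake)).
    d_fake = sigmoid(w * fake + b)
    grad_theta = np.mean(-(1 - d_fake) * w)
    theta -= lr * grad_theta

# The generator's offset should drift toward the real data's mean of 4,
# though toy GANs like this oscillate rather than converge cleanly.
print(theta)
```

Note that nothing in the loop tells the generator what the real mean is; it only ever sees the discriminator's reaction. That indirection is exactly why the resulting model's behavior is hard to trace back to an explanation.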
Imitate natural selection! What could possibly go wrong?
When we put our trust in a system simply because it gives us answers that fit what we are looking for, we fail to ask key questions: Are these responses reliable, or do they just tell us what we want to hear? Whom do the results ultimately benefit? And who is responsible if it causes harm?
They tell you what you want to hear. Western civilization. Western civilization.
“The risks are that the system may be making decisions using values we disagree with, such as biased (e.g. racist or sexist) decisions. Another risk is that the system may be making a very bad decision, but we cannot intervene because we do not understand its reasoning,”
So why develop AI at all?
AI systems are already deeply entrenched in bias and constantly reproduce that bias in their output without their developers understanding how. In a groundbreaking 2018 study called “Gender Shades,” researchers Joy Buolamwini and Timnit Gebru found that popular facial recognition systems detected lighter-skinned males most accurately and made the most errors on darker-skinned females. Facial recognition systems, which are skewed against people of color and have been used for everything from housing to policing, deepen pre-existing racial biases by determining who is more likely to get a house or be identified as a criminal, for example.
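The kind of subgroup audit Gender Shades performed can be sketched in a few lines: compute a model's error rate separately for each demographic group and compare. The data below is synthetic and purely illustrative, not the study's actual figures.

```python
# Sketch of a per-group error audit in the spirit of "Gender Shades".
# All labels and numbers here are made up for illustration.

def error_rate_by_group(y_true, y_pred, groups):
    """Return {group: fraction of misclassified examples in that group}."""
    totals, errors = {}, {}
    for truth, pred, g in zip(y_true, y_pred, groups):
        totals[g] = totals.get(g, 0) + 1
        if truth != pred:
            errors[g] = errors.get(g, 0) + 1
    return {g: errors.get(g, 0) / totals[g] for g in totals}

# Synthetic example: the model errs far more often on group B than group A.
y_true = [1, 1, 1, 1, 1, 1, 1, 1]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = error_rate_by_group(y_true, y_pred, groups)
print(rates)  # group B's error rate is three times group A's
```

The point of such an audit is that it needs no access to the model's internals: even a black box can be held to account by disaggregating its error rates, which is why aggregate accuracy figures can hide exactly the disparities the study exposed.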
AI is a Western invention! What else would we expect?
At the same time, some experts argue that simply shifting to open and interpretable AI models, while allowing greater visibility into these processes, would result in systems that are less effective.
“There are many tasks right now where black box approaches are far and away better than interpretable models,” Clune said. “The gap can be large enough that black box models are the only option, assuming that an application is not feasible unless the capabilities of the model are good enough. Even in cases where the interpretable models are more competitive, there is usually a tradeoff between capability and interpretability. People are working on closing that gap, but I suspect it will remain for the foreseeable future, and potentially will always exist.”
I agree. Which is why AI should never have begun to exist in the first place.
The problem with explainability is that, because AI systems have become so complex, blanket explanations only widen the power differential between AI systems and their creators, and between AI developers and their users.
It's a Western feature.
“Maybe the answer is to abandon the illusion of explanation, and instead focus on more rigorously testing the reliability, biases, and performance of models, as we try to do with humans,” Clune said.
Maybe the answer is to end Western civilization altogether?
If you build explainable AI with a one-size-fits-all design process, “you end up with something where it has explanations that only make sense to one group of people who are involved in the system in practice,”
Guess which group that is?
“If we orient knowledge and AI around big data, then we're always going to bias towards those who have the resources to spin up a thousand servers, or those who have the resources to, you know, get a billion images and train them,”
...
“The question first is, what are the conditions under which AI is developed? Who gets to decide when it's deployed? And with what reasoning? Because if we can't answer that, then all good intentions in the world around how do we live with that [AI] are all screwed,” they added. “If we're not participating in those conversations, then it's a losing game. All you can do is have something that works for people with power, and silences the people who don't.”
Homework: under which civilization did those with power today acquire the power they currently have?