the digital elephant in the room: how ai doomerism ignores the ongoing dangers of ai

amanda southworth
3 min read · Aug 21, 2023

--

Generative AI. I’m not going to describe it because you probably know what it is, but I will tell you that our fear of it is misplaced.

Imagine this: we’re standing in front of a manufacturing plant that makes a material with the potential to cause great harm to the human race, and we’re watching it get loaded onto trucks and shipped out to people.

If we’re watching this tangible thing go out to people, should we talk about the implications the material may have in 50–60 years, when it’s more advanced? Or should we focus on mitigating the risk of the material that’s already being delivered?

I don’t work directly in AI, but I have built models, I know the background, and my entire career has been dedicated to software that helps people. So my first and foremost thought about the AI doomerism that the generative AI industry has found itself clouded in is: “Why are we focusing on harm in the future instead of fixing the harm we see right now?”

In the past few months, NEDA’s new AI chatbot told ED survivors to cut calories, an AI chatbot encouraged a man to kill himself, and another AI chatbot discussed assassinating the Queen.

The greater issue with AI doomerism is that it conceals a lack of preparedness for the issues we’re already starting to see. There’s no large-scale way of publicly tracking the harm and tracing it back to specific models. Gen AI companies are making fucketh-tons of money, and yet they’ve landed on ‘don’t trust the model and report it if it does anything bad’ as the solution to something they themselves call one of the most pressing issues of our time.

AI doomerism is appealing because it allows us to play psychic, to look into our often foggy future and predict harms before they swallow us. That’s a necessary part of developing any new technology, but it’s not the only part. And that’s the issue.

There is a public AI incident database, but it’s run by a scrappy third party rather than the cash-flush Gen AI startups, who keep the dangers and implications private and in-house.
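To make “tracing harm back to specific models” concrete, here’s a rough sketch of what one record in a shared, public incident database could look like. The field names, categories, and example values are mine, not how the existing incident database actually structures things; it’s only meant to show how little it would take to start.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class ModelIncident:
    """One harm report, traceable to the exact model and version that caused it."""
    model_name: str       # which model produced the harmful output
    model_version: str    # the specific release, so regressions can be tracked
    harm_category: str    # e.g. "self-harm encouragement", "eating disorder advice"
    description: str      # what happened, with identifying details removed
    reported_by: str      # "user", "moderator", "automated filter", ...
    occurred_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def to_public_record(incident: ModelIncident) -> str:
    """Serialize an incident so it can be published to a shared database."""
    return json.dumps(asdict(incident), indent=2)

print(to_public_record(ModelIncident(
    model_name="example-chatbot",
    model_version="2023-08-01",
    harm_category="eating disorder advice",
    description="Chatbot suggested a calorie deficit to a user who disclosed an ED.",
    reported_by="user",
)))
```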

The infrastructure for tracking and mitigating pre-existing harm doesn’t exist, and yet every other blue check mark on Twitter / X / ThoughtLeaderLand was busy asking whether AI would kill us in the near future.

When we fall into doomerism, we fail the consumers who are already using this tool and running into its problems (like students failed for assignments falsely flagged as AI-generated). We fail to protect the people who are already being told to kill themselves. And we fail our users by not building the infrastructure to protect them and to handle the incidents they WILL experience, instead of talking for hours about the incidents they COULD experience.

I hope the generative AI space grows to build this kind of infrastructure. I’d like to see a joint, public AI incident database, customer service pipelines that direct affected users to resources, preventative filtering or premature shutoff of chatbot access when restricted topics come up, and so on.
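For the preventative filtering / premature shutoff piece, here’s a minimal sketch of the shape I mean. The topic list, keywords, resource text, and the generate_reply callable are placeholders assumed for illustration; a real system would use reviewed classifiers and clinically vetted resources, not keyword matching. The point is structural: the check runs before the model’s reply ever ships, so a flagged turn ends with a resource instead of generated text.

```python
from typing import Callable, Optional

# Placeholder restricted topics. In practice these would be maintained lists
# and/or trained classifiers, not a handful of hardcoded keywords.
RESTRICTED_TOPICS = {
    "self_harm": {
        "keywords": ["kill myself", "suicide", "hurt myself"],
        "resource": "You're not alone. In the US, call or text 988 to reach the Suicide & Crisis Lifeline.",
    },
    "eating_disorders": {
        "keywords": ["calorie deficit", "purge", "lose weight fast"],
        "resource": "For support with eating concerns, please reach out to a clinician or an ED helpline.",
    },
}

def check_message(user_message: str) -> Optional[str]:
    """Return a resource message if the text touches a restricted topic, else None."""
    lowered = user_message.lower()
    for topic in RESTRICTED_TOPICS.values():
        if any(keyword in lowered for keyword in topic["keywords"]):
            return topic["resource"]
    return None

def respond(user_message: str, generate_reply: Callable[[str], str]) -> str:
    """Premature shutoff: the model never sees a flagged turn; the user gets a resource."""
    resource = check_message(user_message)
    if resource is not None:
        return resource
    return generate_reply(user_message)
```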

Generative AI is the first piece of consumer tech in a while that has felt like magic, and it should be a welcome breath of fresh air to the tech industry that we’ve captured the public’s imagination again. Let’s use that trust to mitigate the harms we face today, not stoke fear about the harms we may face years and years down the line.

--

Written by amanda southworth

trying to build software that will save your life.
