automated software is meeting the limits of the human brain.

amanda southworth
12 min read · Sep 26, 2024


My bedroom ft. my rilakkuma and space shuttle stuffies :-)

We didn’t always know texting and driving was bad.

By the time I was 6 or 7, smartphones were already proliferating through our world. So, learning that texting and driving wasn’t always a no-no took me aback.

I learned about the first legal case linking texting while driving to homicide in the book “A Deadly Wandering: A Mystery, a Landmark Investigation, and the Astonishing Science of Attention in the Digital Age” by Matt Richtel. Until I read that, I had just inherently assumed we always knew it wasn’t a good thing.

But, it was only after multiple people died, and US courts realized there were no laws prohibiting it, that it became a policy and research topic.

Texting while driving was just seen as something that could be done safely with enough focus and effort. Now modern psychology and attention research tell us otherwise: it’s not a lack of effort, it’s our brain’s inability to process multiple streams of information and make correct decisions at once.

It tells us something I think we forget: we as humans fail to recognize the limits of our brains against technology. We do not actively know when a technology crosses the threshold from harmless to harmful. Or, put otherwise: what we can or cannot handle.

These unknowns in technology go beyond our phones. When I lived in SF / Oakland, self-driving cars were a fact of the road. San Francisco is unique in that fully self-driving cars are very transparently automated. There’s often no one sitting in the driver’s seat, and usually there’s a prominent logo on the car to tell you who it belongs to.

There are also unmarked cars that are automated, and they are all across the US. These are everyday drivers in some of the millions of cars with computer-assisted driving technology.

We should be scared about what automation means for our cars, because there’s already some precedent in the skies, and it doesn’t look good.

Asiana Airlines Flight 214 crashed at SFO and killed 3 people, partly because a pilot still in training misjudged the plane’s automated throttle settings outside of autopilot, leading the crew to slam the tail into a seawall on landing.

The airplane without its tail, the day it crashed.

Pilots are at the forefront of something consumers en masse will start to experience: we’re building technology that pushes us to the limits of our cognitive processing.

Humans are smart, but we are not computers. The average pilot has far more training in an aircraft, and a far better understanding of how to respond to an emergency, than the average human driver, and pilots still struggle to use automation without suffering skill degradation.

Soon, that issue will make its way to anyone with a phone and an internet connection.

In the heat of the moment, we fail to properly process information, forget critical things, or have a delayed reaction time that computers just don’t have. And the results can be deadly: plane crashes, car crashes, and more. All symptoms of an inability to make sense of the data, or of being forced to be an actor in a system that previously automated your job away.

(Allegedly) when Tesla Autopilot senses an imminent crash, it will disengage.

Is it reasonable to expect a human brain to understand and react in time to a crash when an automated system is in charge? I don’t think the answer is yes. If something is imminent, the accident will probably happen within seconds, and a person who hasn’t been driving may not have the reaction time to correct the problem.

Would it also be smart to let the computer navigate in a crash? That doesn’t feel right either. The first pedestrian has already been killed by a self-driving car, in a crash where Uber’s autonomous driving system failed to recognize her and the human safety driver failed to stop.

The limits of automation are becoming clearer as more assisted driving tech hits the road. And I learned those limits firsthand on the road, with my friend Ashley.

Ashley and me on the road trip, surviving driving a RAV4 for the first time.

Ashley, if you are reading this: I love you! And, maybe stop reading.

I have driven with her a bit and noticed that she is not great at staying inside the lines on the road. How do I know this? Because her Subaru Outback chirps every time she departs from the lane.

Does this make her fix the behavior? No.

It desensitizes her to the noise, leaving no spectrum between small lane departures and big ones. No matter how big the fuck up, the computer chirps the same. Proportionally, it’s a lot of small errors and a few big ones. Yet the big errors go uncorrected, because the constant small errors from overly sensitive thresholds have taught her to tune the chirp out.

We both road tripped from Oregon to Vermont for my move this summer. I rented a brand new RAV4 for us, after selling my beloved 2010 Subaru Forester that had no tech in it.

I had never driven a car that had any automated driving systems, and I didn’t realize the cruise control on the RAV4 was adaptive: that means if another car gets in front of you, the RAV brakes and slows down.

We had many experiences where using the adaptive cruise control meant someone would cut us off, and the car would rapidly decelerate and freak us the fuck out. Because it overcorrected and overestimated how hard it needed to brake, we eventually turned adaptive cruise control off entirely.
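If you want a sense of why a cut-off triggers such a hard brake, here’s a toy sketch of a gap-keeping controller. It is not Toyota’s actual algorithm; the gains and limits are numbers I made up purely for illustration.

```python
# Toy adaptive cruise control sketch. NOT any manufacturer's algorithm;
# the gains and limits are made-up numbers purely to illustrate the behavior.

def acc_command(gap_m, ego_speed_mps, lead_speed_mps,
                desired_time_gap_s=2.0, kp_gap=0.3, kp_speed=0.8,
                max_brake=-4.0, max_accel=1.5):
    """Return a commanded acceleration (m/s^2) for a simple gap-keeping controller."""
    desired_gap = ego_speed_mps * desired_time_gap_s   # try to stay ~2 seconds back
    gap_error = gap_m - desired_gap                     # negative when too close
    closing_rate = lead_speed_mps - ego_speed_mps       # negative when closing in
    accel = kp_gap * gap_error + kp_speed * closing_rate
    return max(max_brake, min(max_accel, accel))        # clamp to comfort/physical limits

# Cruising at 30 m/s (~67 mph) with a comfortable 80 m gap: gentle command.
print(acc_command(gap_m=80, ego_speed_mps=30, lead_speed_mps=30))   # capped at +1.5

# Someone cuts in 25 m ahead, going slightly slower: the same math slams the brakes.
print(acc_command(gap_m=25, ego_speed_mps=30, lead_speed_mps=27))   # capped at -4.0
```

The controller is only trying to restore a fixed time gap, so a suddenly-tiny gap demands maximum braking, even when a human driver would just ease off the gas.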

The RAV also had lane-keeping software that corrects your steering to follow the painted lines. Given that we were driving through the middle of Nowherefuck, America, we went through a lot of construction, and therefore drove over a lot of old painted lines to follow the flow of traffic.

This made the RAV swerve us along the painted line, even though it was not the line we should have been following. We almost hit a number of blockades and barriers, and felt the feature was making us less safe, not more.

There was no forewarning or training from Enterprise about those features being on the car, and we only learned they were active while driving.

Sure, these companies may be legally covered by putting it in the user manual. But, there’s a clear lack of education and clarity that puts humans at risk, legal obligations aside.

Outside of RAVs and Subarus, we can think about this trading away of direct human input in favor of large-scale computer systems in a different way.

What I’m gonna tell ChatGPT to do to the world.

There is a tradeoff we make between mass transit and single-use vehicles, and the same tradeoff shows up in most modes of transportation. It also gives us a glimpse of the future we’re about to face with more centralized, automated technology in our lives.

A: You can use mass transit, which is statistically safer and better for the environment, but has higher-profile, more visible accidents. For example, think of modern plane and bus crashes and train derailments.

There are very few disasters, but the ones that happen are widely publicized. The control is centralized outside of your reach, so getting into an accident is unlikely, but it’s also not within your hands. Even though statistically fewer people die across those fewer crashes, the number of people harmed in any single accident is incredibly high. Hundreds of people can die in one mass transit accident.

B: You can also choose single-use transportation, which is less safe and has more common accidents that affect a smaller number of occupants. For example, single-use transit can be passenger cars and general aviation planes. A single-use crash might kill 1–5 people on average, consistently, compared to a mass transit accident that could kill 100–300 people, rarely.
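To make the tradeoff concrete, here’s a back-of-the-envelope comparison using the hypothetical per-crash numbers above. These are not real accident statistics, just invented rates to show the shape of the math.

```python
# Back-of-the-envelope comparison using hypothetical numbers, not real
# accident statistics: frequent small crashes vs. rare catastrophic ones.

single_use_crashes_per_year = 10_000      # assumed: lots of small accidents
deaths_per_single_use_crash = 3           # roughly the 1-5 range above

mass_transit_crashes_per_year = 100       # assumed: far fewer accidents
deaths_per_mass_transit_crash = 200       # roughly the 100-300 range above

print(single_use_crashes_per_year * deaths_per_single_use_crash)      # 30,000 deaths
print(mass_transit_crashes_per_year * deaths_per_mass_transit_crash)  # 20,000 deaths
```

With those made-up numbers, mass transit kills fewer people in total, but any single accident is far deadlier, which is exactly why the rare failures are the ones that dominate the headlines.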

This mirrors a crossroads we’re coming to in consumer software, and software products in general.

I call this dynamic in tech AS (automatically scaling) vs. MA (manual autonomy) features. Automatically scaling features are features or models that do something for all users, and that take an option out of their hands. Manual autonomy features let users do the hard work, and do not automate something that could otherwise be done digitally.

To me, an automatically scaling feature is any computerized system that removes human input in favor of a computer model or program that’s designed to work across all use cases.

Some examples that come to mind, matching the automatically scaling / manual autonomy categories, are:

  • Watching what the YT algorithm recommends and getting into an echo chamber vs. doing hand-picked research on your own creators through other platforms
  • Using ChatGPT to generate an article vs. doing the research yourself and handwriting / fact checking it
  • Getting into a self-driving car that benefits from millions of drive hours vs. manually driving your car
  • Taking AI-tuned images with your camera vs. not using AI features and editing them yourself
  • Using Siri to dictate a message to someone vs. pulling over and typing it yourself

Most consumer software features can be divided into these two categories and more than ever, companies are using automatically scaling AI features as a selling point. Just take a look at how Google and Apple are marketing their new phones: it’s all focused on Apple Intelligence and Google Gemini.

Apple’s AI landing page.
Google Gemini’s landing page.

The majority of new features I’ve seen coming from consumer products use generative AI. Automatically scaling features that reduce and automate human effort make up most of what’s being shipped in software, even though there’s no clear consensus on whether consumers actually use these features enough to justify the computing power required.

OpenAI, the golden child, made the fastest-adopted consumer product in history, has massive partnerships with computing providers, and is still unprofitable. So, I don’t think this bodes well for anyone else outside of enterprise use cases.

More importantly: there is only so much oversight of automated tools that human brains can handle while staying attentive. Automation is scary because 99.9% of the time, we can trust it without fail. But when that 0.1% failure happens, it will catch us incredibly off guard, if we catch it at all.
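To put rough numbers on that (an assumed reliability figure, not a measurement of any real system): a tool that’s right 99.9% of the time feels flawless in any single interaction, but lean on it enough times and a failure becomes close to guaranteed.

```python
# Rough illustration with an assumed 99.9% per-use reliability, not data
# from any real system: per-use trust compounds into near-certain failure
# over many uses.

per_use_reliability = 0.999   # assumed: right 999 times out of 1,000

for uses in (10, 100, 1_000, 10_000):
    p_at_least_one_failure = 1 - per_use_reliability ** uses
    print(f"{uses:>6} uses -> {p_at_least_one_failure:.1%} chance of at least one failure")

# Output (approximately):
#     10 uses -> 1.0% chance of at least one failure
#    100 uses -> 9.5% chance of at least one failure
#   1000 uses -> 63.2% chance of at least one failure
#  10000 uses -> 100.0% chance of at least one failure
```

The failure is nearly certain to arrive eventually; the question is whether we’re still paying enough attention to notice it when it does.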

The more likely outcome is that we will ignore the side effects of automatically scaling technologies in consumer tech. We already do, and we can see the limits of tech scale and consumer behavior in phone calls.

Humans in the modern online world are on the receiving end of multiple mass communication systems: email, text, phone calls, and more. If you’re gen-z, you probably don’t answer the phone, because you know the majority of calls are spam.

Even if there are real phone calls in there, we ignore them because we lump them in with the spam. Our brains cannot reasonably distinguish between a cold call from someone whose number you don’t know and a scam artist who wants to talk about your car’s warranty. Therefore, we ignore the communications entirely.

Admittedly, my email is starting to look like this, too. I’ve worked hard to only subscribe to things I actually care about, and I still clear out useless spam emails I never engage with on a daily basis. Same with social media advertising: there’s just too much content and too much communication happening for me to parse through.

That is the cost of automatically scaling features: there is often too much information to reasonably parse, so we blanket-reject or blanket-accept most things. Automatically scaling software that digitally dials people, sends spam messages, and blasts out mass amounts of email is behind the content overload that is pushing us to the limits of the human brain.

Software is making its way into our education system through AI tutors, into our medical system through AI doctors, into our cars with computer-assisted systems, and probably further and further into our devices. We increasingly say that the digitization of certain things will be “good” in the name of efficiency, but we don’t know the cost of digitized “good”.

I’ve been pretty vague because this is an abstract topic, but it’s worth looking at an actual use case people have suggested.

Let’s imagine an AI doctor trained on historical medical data and content that provides guidance to doctors. In this scenario, a doctor will no longer see a patient. They will read symptoms, and read the diagnosis the model spits out to see if they agree.

If I went through that system, I would probably be diagnosed with bipolar disorder, borderline personality disorder, or social anxiety disorder.

But what I have is autism. It just wasn’t recognized because the symptoms look different for women. The model would incorrectly diagnose me, and the doctor would probably approve those incorrect diagnoses because the model has been right so often before.

Training AI systems on historical data keeps us ingrained in those historical biases, and that’s why it’s dangerous. It’s correct often enough to be trusted, but incorrect just rarely enough that it is incredibly dangerous to run these programs without strong human oversight and intervention.
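Here’s a minimal sketch of why that happens, using fully synthetic data and a toy “model” that just memorizes the most common historical label. It is not how any real diagnostic system works, but the failure mode is the same: if the historical labels were systematically wrong for one group, the model reproduces that wrongness while still looking accurate overall.

```python
# Synthetic illustration of training on biased historical labels. Pure toy:
# the "model" memorizes the most common historical diagnosis per group.

from collections import Counter, defaultdict

# Assumed historical records: the same underlying condition (autism) was
# labeled correctly for boys but usually mislabeled for girls.
historical_records = (
    [("social difficulties", "male", "autism")] * 90
    + [("social difficulties", "male", "social anxiety")] * 10
    + [("social difficulties", "female", "social anxiety")] * 70
    + [("social difficulties", "female", "borderline")] * 20
    + [("social difficulties", "female", "autism")] * 10
)

# "Training": pick the most frequent historical diagnosis for each group.
counts = defaultdict(Counter)
for symptoms, sex, diagnosis in historical_records:
    counts[(symptoms, sex)][diagnosis] += 1
model = {group: c.most_common(1)[0][0] for group, c in counts.items()}

print(model[("social difficulties", "male")])    # autism
print(model[("social difficulties", "female")])  # social anxiety: the bias, reproduced
```

The model agrees with most of the historical data, which is exactly what makes the systematic miss so easy for a tired doctor to rubber-stamp.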

We see automation and technology solving complexity in some areas, and conclude that it would therefore be good to apply it to ALL areas.

Just like with texting and driving, and with pilots facing skill degradation because of autopilot, we will not know the cost of these systems until they distract us or fail and we do not have the capability to step in.

I’m not saying this as an AI doom-and-gloomer. In 2017, I created an AI edutech startup with some friends that would take someone’s notes and turn them into flashcards, a practice test, and digital study materials. I got very familiar with OCR, neural nets, the limits of training data, and the summarization systems available to us at the time.

I also didn’t want to build a product on a blind hunch, and I read “The Efficiency Paradox: What Big Data Can’t Do” by Edward Tenner. In it, he posits something that changed my entire development career.

The default assumption that technology should exist to optimize is not always the best one. I would recommend you read the book; I think it’s more relevant today than it has ever been.

I like AI, and I know how to use it. I also don’t believe that AI should be in everything. When you have a hammer that makes you money when you use it, you convince everyone that their problems look like a nail.

When we look to solve problems with technology, we’re answering an underlying question about our assumptions: what does technology exist to do? The answer we’ve been given so far is to replace humans and make our lives easier by doing stuff for us. To enable endless creativity and options without needing to develop the skills to build to the quality we want. But is that really an assumption we want to build the digital infrastructure of our world on? Does taking humans out of the equation actually make our lives better, or does it take away our ability to keep and maintain our skills?

We need to define the lens through which we determine the values of the tech that comes to us, and NOT let the tech establish its own values.

We also need to understand more about the human limits of consuming and interacting with technology. We already know there’s a correlation between social media usage, negative mental health outcomes, and societal polarization. Technology changes our brains and our connections to each other.

It would be naive to think we should continue integrating technology at the rate we have without coming to terms with the societal overstimulation that consumer technology’s flood of content creates.

Many companies right now are working to make their living off of these automatically scaling solutions. But whether or not we choose to adopt these technologies that cut humans out should be a decision we all make for our communities, on a case-by-case basis. It is not a decision that should be made by the marketing arms of OpenAI, Tesla, or Amazon.

The decision about what should and should not be taken out of our hands is ours as consumers alone. Computers are an incredible resource, BUT our human brains don’t always reap their benefits correctly.

Automation doesn’t always solve our problems, but instead converts them into new dangers. Dangers that we know human brains are terrible at understanding and handling.

(P.S: Toyota if you give me a free RAV4, I’ll delete this article)
