building technology will never be free of harm. that doesn’t mean we shouldn’t try.
Running a tech startup fucking sucks. It's emotionally all-encompassing. You are so wired in, and you have no choice but to be. It is your life, your paycheck, your friends, your hours, your reason for getting up and putting your head to the circular saw that is the modern tech industry over and over again.
When you are working in such an extreme emotional environment, you don’t often make great choices relating to the world outside of you. You as a startup have to make selfish decisions, because it keeps you alive.
You make the choices that benefit you. You make the choices that make the VC cash machine continually print, and that keeps your entire team from getting laid off. You do not know if you have a future, so there is rarely a point in thinking that far ahead.
Every startup for the most part thinks this: there is not a future unless we forge it with our hands, so we should move fast and make a lot of decisions even if they are wrong. Startups are biased towards action, impact, emotional irrationality, being flashy, drawing attention, etc.
There is a certain brutality in creating a company. Business is not about economic transactions as much as it is a framework for making decisions. That framework is rather simple: get money, buy stuff that makes you money, sell the things, cut back on what you buy if you’re not making enough.
It’s a framework that standalone makes sense, but it’s also a framework that when viewed in isolation leads to dehumanization.
These are the perfect conditions that cause harm.
I’m not talking malicious harm, rather the more insidious kind: the knock-on effect a piece of software has.
The true pain caused by the tech industry is not the robot lurking in the distance. It’s the startups building their business around exploitation and overleveraging of pre-existing loopholes that fossilize into corporate monoliths, locking the product into further systemic mechanisms that create harm.
Let's take Airbnb.
The stage is set with low interest rates on financing, and lack of ‘innovation’ within the hotel space. ->
This spawns a cheap alternative to hotels within people's homes, which accidentally creates a knock-on effect of people taking advantage of low interest rates to stack up residential real estate and reduce the housing supply. ->
The vacancy rate is lower, so moving is harder and the value of housing goes up. The counties are not prepared to build more housing, and so illegal AirBnBs are made in unsafe conditions. ->
The people in a city struggle to find an affordable place to live, while an Airbnb now goes for the rate of a hotel room, and booking / hosting on Airbnb has more downsides than previously thought.
I myself lived in an illegal Airbnb with Valkyrie in Oakland, built into the basement of a house. It did not meet any fire codes, and there was a bar built into the doorway that would hit 6ft+ people in the head. 3k a month and walkable to BART, though.
It's a rot, accelerated by tech. Pre-existing market conditions and coincidences spiral into an industry, then crash into society and exacerbate its existing ills.
Startups are as urgent and tumultuous as they are because they are inherently a bet. There is no promise that your product or tech will make it off the ground, find a place with people, or make enough money to be worth it.
That’s why venture capital, where the majority of tech startups get their money, is a numbers game.
Many will fight, most will fall, and the ones who mature and succeed gain unprecedented scale and resources in a digital world. A digital world where people from different backgrounds and circumstances may interact with a product in ways not foreseen by the product development team that created it.
A piece of software does not exist in contextual isolation outside of pre-existing social dynamics. For too long, we have thought about what we should build without thinking about what we want our product to perpetuate.
You never know who will come into contact with what you build, and how it may affect them. You can only guess, and put your products into the world to get a sense of what can happen.
That's why we cannot stop all harm related to technological innovation. There are just too many unpredictable variables at scale. Most physical inventions have a 'curve' of safety, where the first versions are the most untenable, and they eventually progress into a steady baseline.
The correct steps involve working directly within the cultural context you're developing for: creating smaller pilot projects, using feature flags to create 'ramping up' periods, and developing clear measurements for ideal vs. non-ideal outcomes.
For a tech startup, there is always product development underway. New features, new directions and strategic markets, and more creation.
This leads to an interesting problem: not only do you have to ensure the core of the business and product is safe and not fundamentally fiscally or ethically fumbling, you have to consistently evaluate features as they are created and shared.
Just like the product is evolving and never finished, so is the impact of it. Therefore, you have to establish your core “product thesis” and work that to a baseline of acceptable safety (e.g: you are not enabling terrible behavior at a large scale), and then continue further.
When you don't work to a baseline of safety, or your core product thesis is harmful, that's when we see tangible harm from startups that grow into bigger companies and drag their problems along in their wake.
Our assessment software at Faura has been used by 5,000+ homeowners. I can reasonably look at that number, but I cannot tangibly hold 5,000 different lives in my head. 5,000 different people with families, trajectories, dreams, wants, and agency.
It would be easy for anyone to see a number and flatten what’s behind it. That’s the majority of what a startup does: building systems to flatten interactions and services into measurable, tangible numbers to determine strategy.
That's what happens when they mature into corporations with real power: the harm they cause is the price they pay for cultural relevance. The harm is 'flattened' from something real people are hurt by into a PR scandal, or a bad headline.
Everything is a datapoint, and by extension of interacting with the ‘machine’, you and the harm you experience becomes a data point as well.
It took me a while to realize that the harms matured startups do are not out of malice, but out of something worse: apathy. They have reached peak scale, and there are hundreds of fires to put out and thousands of to-do list items waiting.
The warning signs about the impact aren't seen because no one is looking for them, or because impact is just one thing to think about out of many. It's only when those problems escalate into scandals or lawsuits that a company takes them seriously.
It is more than understandable that products will continue to shift and grow as they interact with the market. I do not wish for every startup to be bogged down in safety bureaucracy and regulation.
But it's not acceptable for a startup reaching maturation to know about and perpetuate harm. Matured consumer companies at a large scale most certainly have the ability to understand and quantify their impact on the world and change it for the better.
Yet, they do not. The products that bring sledgehammers through our lives in the forms of surveillance companies, rent-rate coordination software, etc., become fossilized fragments of our economy. Sometimes, the brutality and harm of the software is why it's alive.
The price of power and influence on others is knowing what you do to them.
It is a fact as old as time that people learn what they are doing is bad and continue to do it anyway. But unique to tech startups is the scale of harm they can do without even realizing it.
These are harms that can be measured, and tackled. Harms that reach our family, loved ones, and friends with unprecedented scale and almost global reach.
These are also harms that are left to fester, because they can be built into the core business model of a company. That harm is a small price for business, it’s practically a line item. Companies are here to make money. Morality is expensive.
Until we fix the price of corporate harm so that it’s not feasible to fuck over a bunch of people and still profit, we will never see companies care about it.
In a dream world, people who create startups understand systemic issues. They are not just some guy from Stanford, but people from varying backgrounds.
There is an alternate path where a startup, when creating its first documents as it scales up, thinks about the greater impact of its products. It identifies key factors and things to keep an eye on, and works with people who actually use its products (and crucially those who don't) to create a clear understanding of incentives, uses, and outcomes.
They track those outcomes against predicted harms, and work with the communities they enter into long term to keep eyes on the ground.
Most of the harms that tech companies create are not ‘unforeseen’ or out of nowhere. They just don’t put the energy into thinking about it. Why would they? They have to fight for their future and survival.
Usually at this point, I would interject to talk about the importance of ethics. Fuck that. Companies are systems to make money. Systems do not have ethics. The people around them need to.
Within a startup lens, this is an ecosystem issue: the people at the top of Silicon Valley providing funding are often regulation-averse libertarians who pour their beliefs down through their companies and resources onto the next generation of founders.
Marc Andreessen, a prominent VC who created the first widely used web browser, wrote an essay amidst the gen AI boom arguing AI should be given free rein without regulation or fear because it might fix the planet, and that basically everyone who cares about tech safety is a pessimistic hater standing in the way of progress. He also sits on the board of Neom, Saudi Arabia's line city.
No one will lecture you more on safety than a billionaire who appreciates human rights.
Safety requires effort, power, and resources. These are all things that companies can choose to invest in.
The ideal solution is a tech-forward government writing effective and loose policy that can rein in these companies past a grace period, yet provide reasonable guardrails without secondary impacts.
Our beautiful, virile, spring chicken US government is currently playing political puppet theatre with two old men who will die before the effects of their legislative choices reach their prime. So, yeah. We're doomed.
There are so many reasons to say "why not?" when it comes to building software that etches itself into our world. We are so scared of what we might not do, and rarely scared of what we will do.
Companies are not people. They can experience hardship, but corporations of all shapes and sizes are put on the Chapter 11 chopping block or sent to AcquisitionWorld. Companies are flexible, monolithic groups that are used to adjusting to changes in the world and the environment. They are important, but they are made up of people.
It’s cowardly to not take responsibility when you want power. Responsibility over how your company impacts others, over the choices you make, and over the things that you may cause. Ignoring harm is ignoring the responsibility that comes with power. Do we want companies who think it’s not in their interest to care building the digital infrastructure of our everyday lives?
The people who will experience and live with the after-effects of harm are real. They are not numbers. They do not get to go on the corporate chopping block and have their assets made anew.
Companies are temporary sources of revenue over a fixed period of time, but the people they impact live on and on. That's enough of a reason for me.