Hostile tech: the good, the bad and the iffy
It is our ethical responsibility to do what we can to prevent bad tech experiences and build an equal, accessible digital world for all.
FROM multimillion-dollar ransomware payouts to data breaches that expose hundreds of millions of users' private information, the headlines surrounding hostile tech are certainly eye-catching. But they do not tell the whole story.
The problem is, there is far more to hostile tech than deliberate, targeted attempts to disrupt your systems and steal your data. Hacking, ransomware attacks, data breaches, and DDoS (distributed denial-of-service) attacks dominate the hostile tech narrative. Undoubtedly, they can cause severe reputational damage, but in practice, they are just one piece of a much bigger picture.
It is important not to get too alarmed by the term "hostile tech". We are not just talking about the kind of direct attacks that our technology assets suffer from every day. We need to think about hostility a lot more broadly.
Hostile tech does not just encompass things that are illegal. We are not even talking about things that are necessarily malicious. For example, there are people who are perfectly happy with being surveilled online, if it means that they get more personalised ads. Others will go to incredible lengths to prevent any kind of digital surveillance tracking them, because they see surveillance as unethical.
Hostile tech is sometimes completely unintentional. For instance, people did not set out to build image recognition software that delivers inconsistent results when identifying the faces of Black women. They did not maliciously set out to make a biased product - they just had a poor data set.
These are not malicious projects - and they are certainly not direct attempts to undermine or damage the brands deploying them. In many cases, they are not even a failure of design or planning. Overwhelmingly, hostile tech emerges because teams have not fully considered how a tech decision could have different impacts across all the potential stakeholder groups.
And that is a critical point. When we begin our software or technology projects, we typically have a specific stakeholder group in mind whose needs we are trying to serve directly. But what we do not often think about is the impact of that product on other stakeholders.
Maybe there is an environmental impact of your product that you have not considered. Training a single natural language processing model can carry the same CO2 emissions footprint as 125 round-trip flights between New York and Beijing - an impact few account for.
Maybe there is an equity issue you had not accounted for. For example, one of the things we saw with home schooling as we went into the pandemic was that households without good Internet service simply could not support 2 or 3 children attending online lessons while their parents also worked online. Many saw a wonderful revolution in digital education, but they did not see those left behind by digital inequality.
THE STAKES ARE VERY REAL
The digital inequalities exposed by the pandemic and ongoing climate crisis are just 2 reasons why now is the time for organisations to acknowledge and address the hostile impacts of their technology decisions.
Technology is now deeply rooted in virtually every aspect of our lives. Medical decisions, credit decisions, probation or sentencing decisions - all of these things that have huge impacts on human lives are now themselves massively shaped by our technology choices.
The stakes are very real, and the impacts on the stakeholders we fail to account for when making these decisions are immense. That is why it is so important for organisations of all kinds to embrace a responsible tech mindset.
Responsible tech - sometimes referred to as ethical tech or equitable tech - is an umbrella term that encompasses multiple notions, all centred around doing the right thing in and with technology. That could mean anything from taking steps to make an application more accessible, to implementing policies to help consistently deliver equitable tech experiences.
On paper, it is a relatively simple concept, yet it remains clouded by misconceptions. I recently read something that said responsibility is easy to define in fields like civil engineering, where your responsibilities are to ensure that buildings are stable, do not collapse, and do not otherwise negatively impact the lives of citizens - with the implication being that it is somehow harder to define in software or technology.
Yes, we are not bound by any kind of Hippocratic Oath like the medical profession. But we are often guilty of giving ourselves a little too much leeway when it comes to making ethical and responsible decisions with our technology. Take the Volkswagen emissions testing scandal for example. As the company leadership acknowledged, the decision to implement software designed to alter vehicle emissions under test conditions was deeply flawed from an ethical perspective.
The biggest thing that businesses today need to do to reduce unintentional tech hostility and make responsible decisions is to explicitly think about the "invisible" stakeholders that could be impacted by any given technology decision.
That means considering:
- The groups that products and services are tested with - are they truly reflective of the end user groups you anticipate will use the product? And are all stakeholder groups represented and given a voice in that process?
- The quality and accuracy of the data sets used to power data-driven services - are they free from bias, and are they capable of enabling truly reflective and inclusive experiences for all?
- Whether you are designing with equality and ease of use in mind - and whether complex features or capabilities are coming at the cost of overall usability and accessibility.
- Whether the decision creates any kind of non-human hostility. For example, is it aligned with your sustainability goals, and is it likely to have a negative environmental impact?
ASK THE RIGHT QUESTIONS
Another thing I like to encourage organisations to do is make an explicit statement about what they care about, and what they want their technology to help achieve. As Cathy O'Neil, author of Weapons of Math Destruction, says, there are times when you have to trade off fairness against profit.
It is up to you where you want to sit along that spectrum, but what is important is to make your goals and intentions clear. I worked with an organisation that developed a framework that expressed their values and principles around the use of customer data, clearly laying out how they intended to operationalise that data, and why those decisions were made. It took months to develop, but it made their intentions and ethical position completely clear, and easy to stay aligned with.
Hostile tech can take many forms, and can easily creep into any technology decision. As technology decision-makers, it is up to us to ask the right questions and consider how the tech we deploy will be used by everyone - and how it is likely to impact their lives and experiences - to reduce this hostility over time.
Shifting to this responsible approach and mindset is relatively simple in theory, but will take real dedication from organisations and professionals across our industry before meaningful results are seen.
Just as it is our mandated responsibility to safeguard customer data from malicious threats, it is our ethical responsibility to do what we can to prevent hostile tech experiences and build an equal, accessible digital world for all.
- The writer is global chief technology officer of Thoughtworks.