The Algorithm Does Not Care

Last month I tweeted this observation, and wrote a similar sentiment in Swedish on LinkedIn:

A dangerous fallacy is that killer robots will take the appearance of monstrous, mechanical machines. Instead, killer robots have (among many other disguises) taken the appearance of web forms automatically determining if people are eligible for financial assistance.

Let me explain what I mean by this, because I am not against using algorithms to search through large troves of data for patterns and answers to specific questions posed by humans. Computers can be amazing tools for finding data points and for alleviating tedious information-retrieval tasks.

But.

Algorithms, by their nature, create a new type of distance between the human performing the action and the human subjected to the action. This creates a world of problems related to transparency, accountability and moral efficacy.

The main argument in defense of algorithms is that they eliminate the obstacle of human inefficiency. Often this sentiment fails to take into account the many benefits of human inefficiency, such as taking time to reflect, taking time to question and taking time to listen.

In the very fabric of human interconnectedness is also the moral awareness brought about by sharing physical space, eye contact and microcommunication. If you can tell someone is upset even as they are smiling and their words are saying the opposite, you understand microcommunication.

Factors contributing to algorithmic risk

  • Distance from subject: The further the decision-maker is from the human subject dealing with the impact of that decision, the easier it is for them to distance themselves from the humanity of that subject. Think: drone warfare.
  • Time from subject: Distance can also manifest in time. There are algorithms built today that will affect people 10, 20, 30 years in the future. How do we care for someone we cannot yet know, and how can we break free from today’s prejudice when it is embedded in code? Think: eugenics.
  • Actor knowledge of subject: When algorithms are implemented without a full understanding of the problem space, its context and its people, mistakes will be made, as they are in all development. Think: the Convention on the Rights of the Child (and why it was written).
  • Subject awareness of action: Many people are unaware of how many decisions affecting them are made by algorithms every day. Some make life easier, some harm health and some raise costs (many examples in the links below). Within this unawareness it becomes more and more difficult for people to exercise their rights. To object to the invisible treatment. To judge its fairness. Think: your phone.
  • Operator awareness of actions: More worrying still, the people responsible for live algorithms have ever less insight into their workings as time passes, more subcontractors become involved and staff switch jobs. How do we control what we ourselves do not understand but are employed to give the appearance of understanding? Who is willing to take the fall for an automated decision that harms? Think: Tay, the AI chatbot.
  • Regulatory awareness of actions: As subjects and operators themselves lose sight of automated decision-making, so, of course, will regulatory institutions and their staff. As decisions become faster and more invisible, and individual humans escape accountability, oversight becomes ever more difficult. Think: robots or their makers in prison?
  • Data contamination: Even when algorithms are designed not to collect information about residential area or gender, research shows time and time again that this information still plays a part, because the sheer volume of data that many algorithms rely on still encodes and reveals the prejudices they are actively trying to avoid (see the sketch after this list). Algorithms are never neutral, and yet that keeps being the biggest smoke-screen argument for their deployment. Think: a thermometer in the hand of a Black person is a gun, according to Google.
  • Capacity for listening: Remember microcommunication. Who is actively listening for indications of harm, misunderstanding and the broader perspective? Did this human require medical care? The scheduling algorithm certainly does not care about anything that does not concern scheduling. Think: “talk to the hand”.
  • Actions per hour / Number of subjects: The sheer volume of people impacted and the frequency of the decision-making will of course also play a part in determining how much potential for danger an algorithm contains, and thus how much effort should be placed in working to mitigate and minimise those risks. Think: notifications from all your various inboxes.
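
To make the data-contamination risk concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: the groups, the postal codes and the rates are assumptions, not data from any real system. The point is the mechanism: the model never sees the protected attribute, yet a correlated proxy smuggles it back in.

```python
# A minimal, hypothetical sketch of "data contamination". The groups,
# postal codes and rates below are invented for illustration only.
import random

random.seed(1)

def make_record():
    group = random.choice(["A", "B"])
    # Residential segregation: group strongly predicts postal code.
    weights = [9, 1] if group == "A" else [1, 9]
    postcode = random.choices(["111 11", "222 22"], weights=weights)[0]
    # Historically biased outcomes: group B was approved far less often.
    approved = random.random() < (0.8 if group == "A" else 0.3)
    return group, postcode, approved

history = [make_record() for _ in range(10_000)]

# "Training": the system only ever sees postcode and the past label.
approval_rate = {}
for pc in ("111 11", "222 22"):
    outcomes = [ok for _, p, ok in history if p == pc]
    approval_rate[pc] = sum(outcomes) / len(outcomes)

def decide(postcode):
    # Approve anyone from a postcode with a high historical approval rate.
    return approval_rate[postcode] >= 0.5

# Audit by the attribute the model never saw:
for g in ("A", "B"):
    codes = [p for gg, p, _ in history if gg == g]
    share = sum(decide(p) for p in codes) / len(codes)
    print(f"group {g}: {share:.0%} approved")
# Prints roughly: group A: 90% approved, group B: 10% approved
```

Removing the protected attribute from the input changed nothing here: the proxy carries the signal. This is why “we do not collect that data” is not, by itself, a safeguard.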

All of these risks can be addressed and managed and talked about. Sometimes with the outcome of making things less efficient. Making something less efficient can still make a lot of sense. When the intent is to protect.

But to protect we first need to acknowledge the risks of algorithms on an industry-wide scale. And makers need to assume responsibility. Makers need to see the broader potential for harm and care for all the people (and nature) they are impacting. And regulators need to more clearly determine the direction and the constraints for a sustainable way forward.

We are certainly not there yet.

Thank you for caring,
/Per

P.S. When talking about algorithms I tend to include narrow AI: algorithms designed to change over time in such a way that re-calibration happens according to a pre-defined machine-learning process. They sound intelligent, but they still only focus on completing the specific task they are programmed to do, under a narrow set of constraints and limitations. Also referred to as weak AI, it is still the only type of AI humans have realised.

Remember: A human still designed the way the algorithm “learns” and changes itself, but the phenomenon is often used to double down on projecting responsibility onto “the other”.
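
As a minimal sketch of what that human-designed “learning” can look like: the model, the loss and the learning rate below are all illustrative assumptions, not any particular product’s internals. The point is that a person wrote the update rule in advance, and the system can only ever change itself according to it.

```python
# A minimal sketch of what "learning" means in narrow AI: a human wrote
# the update rule below, and the system only adjusts a number according
# to that rule. Model, loss and learning rate are illustrative choices.

weight = 0.0          # the single parameter the system may change
LEARNING_RATE = 0.1   # chosen by a human, in advance

def predict(x):
    return weight * x

def recalibrate(x, target):
    """The pre-defined rule: nudge the weight to shrink squared error."""
    global weight
    error = predict(x) - target
    weight -= LEARNING_RATE * error * x  # a gradient step, fixed by design

# Feed it examples of the relationship y = 2x and let it re-calibrate:
for x, y in [(1, 2), (2, 4), (3, 6)] * 20:
    recalibrate(x, y)

print(round(weight, 3))  # ~2.0 -- it "learned" what the rule permits
```

The rule never questions whether the task itself is the right one. That responsibility remains with the humans who designed it.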


Algorithmic bias - The Wikipedia definition
Algorithmic bias describes systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others. Bias can emerge from many factors, including but not limited to the design of the algorithm or the unintended or unanticipated use or decisions relating to the way data is coded, collected, selected or used to train the algorithm. For example, algorithmic bias has been observed in search engine results and social media platforms. This bias can have impacts ranging from inadvertent privacy violations to reinforcing social biases of race, gender, sexuality, and ethnicity. The study of algorithmic bias is most concerned with algorithms that reflect “systematic and unfair” discrimination. This bias has only recently been addressed in legal frameworks, such as the European Union’s General Data Protection Regulation (2018) and Artificial Intelligence Act (2021).
en.wikipedia.org

Examples

Algorithms: How they can reduce competition and harm consumers - GOV.UK
www.gov.uk

The Hidden Dangers in Algorithmic Decision Making | by Nicole Kwan | Towards Data Science
The quiet revolution of artificial intelligence looks nothing like the way movies predicted; AI seeps into our lives not by overtaking our lives as sentient robots, but instead, steadily creeping…
towardsdatascience.com

What happened when a ‘wildly irrational’ algorithm made crucial healthcare decisions
Advocates say having computer programs decide how much help vulnerable people can get is often arbitrary – and in some cases downright cruel.
www.theguardian.com

450,000 Women Missed Breast Cancer Screenings Due to “Algorithm Failure” - IEEE Spectrum
A disclosure in the United Kingdom has sparked a heated debate about the health impacts of an errant algorithm
spectrum.ieee.org

Robodebt scheme - Wikipedia
The Robodebt scheme, formally Online Compliance Intervention (OCI), was an unlawful method of automated debt assessment and recovery employed by Services Australia as part of its Centrelink payment compliance program.
en.wikipedia.org

A Drug Addiction Risk Algorithm and Its Grim Toll on Chronic Pain Sufferers
A sweeping AI has become central to how the US handles the opioid crisis. It may only be making the crisis worse.
www.wired.com

Fired by Bot: Amazon Turns to Machine Managers And Workers Are Losing Out - Bloomberg
Contract drivers say algorithms terminate them by email—even when they have done nothing wrong.
www.bloomberg.com

Algorithms are controlling your life - Vox
Author Hannah Fry on the risks and benefits of living in a world shaped by algorithms.
www.vox.com

Algorithms have already taken over human decision making
From the law to the media we’re becoming artificial humans, mere tools of the machines.
theconversation.com

INFOGRAPHIC: Historical bias in AI systems
The example shown below is fictional but based on the types of scenarios that are known to occur in real life.

AI systems are trained using data. AI systems learn patterns in the data and then make assumptions based on that data that can have real-world consequences.

For example, if the training data shows a higher prevalence of suitable individuals in one group versus another, an AI system trained on that data will prefer candidates from that group when selecting people.
humanrights.gov.au
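
As a minimal sketch of the fictional scenario above, with invented groups and numbers: a naive system “trained” on historically skewed hiring data simply learns the old base rates and reproduces the preference.

```python
# A fictional sketch of the infographic's scenario. Groups and numbers
# are invented. A naive system "trained" on historically skewed hiring
# data learns the old base rates and reproduces the preference.

history = (
    [("group_x", True)] * 80 + [("group_x", False)] * 20
    + [("group_y", True)] * 40 + [("group_y", False)] * 60
)

# "Training": learn P(hired | group) from past decisions.
score = {}
for group in ("group_x", "group_y"):
    outcomes = [hired for g, hired in history if g == group]
    score[group] = sum(outcomes) / len(outcomes)

print(score)  # {'group_x': 0.8, 'group_y': 0.4}

# "Selection": two equally qualified candidates, ranked by learned score.
candidates = [("Candidate 1", "group_x"), ("Candidate 2", "group_y")]
ranked = sorted(candidates, key=lambda c: score[c[1]], reverse=True)
print(ranked[0][0])  # Candidate 1 -- past bias became a present preference
```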

Acknowledging harm and addressing it

Contractual terms for algorithms - Innovatie
The City of Amsterdam has developed contractual terms for the algorithms that we purchase from suppliers and we are happy to share our knowledge.
www.amsterdam.nl

New Zealand has a new Framework for Algorithms. — NEWZEALAND.AI
This Charter sets a strong foundation for guiding NZ GOVT agencies on how to implement algorithms in a manner that warrants trust…
newzealand.ai

When algorithms decide what you pay

Episode 2 of Breaking the Black Box: When Algorithms Decide What You Pay
You may not realize it, but every website you visit is created, literally, the moment you arrive. Each element of the page — the pictures, the ads, the text, the comments — live on computers in different places and are sent to your device when you request them. That means that it’s easy for compani…

Who made that decision: You or an Algorithm?

Who Made That Decision: You or an Algorithm?
When we buy something on Amazon or watch something on Netflix, we think it’s our own choice. Well, it turns out that algorithms influence one-third of our decisions on Amazon and more than 80% on Netflix. What’s more, algorithms have their own biases. They can even go rogue. In his recent book title…

The tweets

axbom’s Twitter Archive—№ 35,822
A read-only indieweb self-hosted archive of all 37047 of axbom’s tweets.

Svenska / In Swedish

DN Debatt: ”Full transparency must prevail in public-sector algorithms” - DN.SE
Akademikerförbundet SSR: In our consultation response we say no to the government inquiry on how automated decision-making should be designed.
www.dn.se

Have you been discriminated against by an algorithm? Researchers now demand stricter controls - Computer Sweden
Automated decisions based on various algorithms increasingly govern our lives. But what can we do when those decisions go wrong? A group of researchers is now calling for stronger regulation in the data protection law and for independent oversight.
computersweden.idg.se

Transparent algorithms in the insurance industry (PDF)
A popular-science discussion of some of the difficulties that can arise with automated decision-making and black-box algorithms, and of the possibilities of addressing them through appropriate transparency.
www.diva-portal.org

Responsible technology development | KOMET
Technological development has the potential to bring solutions to a number of global societal challenges, such as climate and environment, health and digital transformation. Komet works, among other things, to raise questions and stimulate dialogue about technological development and its impact on society.
www.kometinfo.se

About BankID

I am currently running a survey about BankID. It consists of five yes/no questions and an optional free-text field. Please answer. Please share. 🙏

Respond: Five questions about the use of another person’s BankID.
An anonymous survey about how people use, and have access to, BankID credentials that are not their own.
q.axbom.se

The uncertainties of BankID
People often talk about how convenient BankID is. And about how secure it is. The big difference today, compared with a physical ID document, is of course that no human looks at the ID and verifies that you are you. And in everyday use, grey areas and gaps in the law constantly arise.
axbom.se

