Tags: algorithms

Tuesday, January 23rd, 2024

Deplatforming Myself: A Tech Manifesto – Haste Makes Waste

The modern web is constantly, endlessly hoovering up massive amounts of data about you, only some of which is correct, and then feeding you its best guess of what will glue your eyeballs to the screen just a little bit longer, no matter what that is, whether it’s actually good for you or not.

Monday, December 11th, 2023

How Certain Algorithms to Improve the Human Condition Have Failed – The Markup

A terrific piece from Aaron Sankin that goes from Waldsterben to software development via firefighting and the RAND Corporation.

Bureaucracies use measurements to optimize and rearrange the world around them. For those measurements to be effective, they have to be conducted in units as relevant as possible to the conditions on the ground.

Saturday, May 15th, 2021

The cage

I subscribe to Peter Gasston’s newsletter, The Tech Landscape. It’s good. Peter’s a smart guy with his finger on the pulse of many technologies that are beyond my ken. I recommend subscribing.

But I was very taken aback by what he wrote in issue 202. It was to do with algorithmic recommendation engines.

This week I want to take a little dump on a tweet I read. I’m not going to link to it (I’m not that person), but it basically said something like: “I’m afraid to Google something because I don’t want the algorithm to think I like it, and I’m afraid to click a link because I don’t want the algorithm to show me more like it… what a cage.”

I saw the same tweet. It resonated with me. I had responded with a link to a post I wrote a while back called Get safe. That post made two points:

  1. GET requests shouldn’t have side effects. Adding to a dossier on someone’s browsing habits definitely counts as a side effect (see the sketch after this list).
  2. It is literally a fundamental principle of the web platform that it should be safe to visit a web page.
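
To make that distinction concrete, here’s a minimal sketch of my own—the endpoint names are hypothetical, not taken from the Get safe post:

    <!-- Safe: a GET request only reads. Visiting or submitting this
         should change nothing on the server. -->
    <form method="get" action="/search">
      <input type="search" name="q">
      <button>Search</button>
    </form>

    <!-- A side effect: a POST request asks the server to record something.
         Here the user is deliberately telling the server to update state. -->
    <form method="post" action="/bookmark">
      <input type="hidden" name="url" value="https://example.com/article">
      <button>Bookmark</button>
    </form>

Quietly logging every GET to build a recommendation profile turns the first kind of request into the second—an update the user never asked for.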

But Peter describes ubiquitous surveillance as a feature, not a bug:

It’s observing what someone likes or does, then trying to make recommendations for more things like it—whether that’s books, TV shows, clothes, advertising, or whatever. It works on probability, so it’s going to make better guesses the more it knows you; if you like ten things of type A, then liking one thing of type B shouldn’t be enough to completely change its recommendations. The problem is, we don’t like “the algorithm” if it doesn’t work, and we don’t like it if it works too well (“creepy!”). But it’s not sinister, and it’s not a cage.

He would be correct if the balance of power were tipped towards the person actively looking for recommendations. As I said in my earlier post:

Don’t get me wrong: building a profile of someone based on their actions isn’t inherently wrong. If a user taps on “like” or “favourite” or “bookmark”, they are actively telling the server to perform an update (and so those actions should be POST requests). But do you see the difference in where the power lies?

When Peter says “it’s not sinister, and it’s not a cage” that may be true for him, but that is not a shared feeling, as the original tweet demonstrates. I don’t think it’s fair to dismiss someone else’s psychological pain because you don’t think they “get it”. I’m pretty sure everyone “gets” how recommendation engines are supposed to work. That’s not the issue. Trying to provide relevant content isn’t the problem. It’s the unbelievably heavy-handed methods that make it feel like a cage.

Peter uses the metaphor of a record shop:

“The algorithm” is the best way to navigate a world of infinite choice; imagine you went to a record shop (remember them?) which had every recording ever released; how would you find new music? You’d either buy music by bands you know you already liked, or you’d take a pure gamble on something—which most of the time would be a miss. So you’d ask a store worker, and they’d recommend the music they liked—but that’s no guarantee you’d like it. A good worker would ask what type of music you like, and recommend music based on that—you might not like all the recommendations, but there’s more of a chance you’d like some. That’s just what “the algorithm” does.

But that’s not true. You don’t ask “the algorithm” for recommendations—it foists them on you whether you want them or not. A more apt metaphor would be that you walked by a record shop once and the store worker came out and followed you down the street, into your home, and watched your every move for the rest of your life.

What Peter describes sounds great—a helpful knowledgeable software agent that you ask for recommendations. But that’s not what “the algorithm” is. And that’s why it feels like a cage. That’s why it is a cage.

The original tweet was an open, honest, and vulnerable insight into what online recommendation engines feel like. That’s a valuable insight that should be taken on board, not dismissed.

And what a lack of imagination to look at an existing broken system—one that doesn’t even provide good recommendations while making people afraid to click on links—and shrug and say that this is the best we can do. If this really is “the best way to navigate a world of infinite choice” then it’s no wonder that people feel like they need to go on a digital detox and get away from their devices in order to feel normal. It’s like saying that decapitation is the best way of solving headaches.

Imagine living in a surveillance state like East Germany, and saying “Well, how else is the government supposed to make informed decisions without constantly monitoring its citizens?” I think it’s more likely that you’d feel like you’re in a cage.

Apples to oranges? Kind of. But whether it’s surveillance communism or surveillance capitalism, there’s a shared methodology at work. They’re both systems that disempower people for the supposedly greater good of amassing data. Both are built on the false premise that problems can be solved by getting more and more data. If that results in collateral damage to people’s privacy and mental health, well …it’s all for the greater good, right?

It’s fucking bullshit. I don’t want to live in that cage and I don’t want anyone else to have to live in it either. I’m going to do everything I can to tear it down.

Friday, November 22nd, 2019

Sacha Baron Cohen’s Keynote Address at ADL’s 2019 Never Is Now Summit on Anti-Semitism and Hate | Anti-Defamation League

On the internet, everything can appear equally legitimate. Breitbart resembles the BBC. The fictitious Protocols of the Elders of Zion look as valid as an ADL report. And the rantings of a lunatic seem as credible as the findings of a Nobel Prize winner. We have lost, it seems, a shared sense of the basic facts upon which democracy depends.

Wednesday, July 3rd, 2019

inessential: No Algorithms

My hypothesis: these algorithms — driven by the all-consuming need for engagement in order to sell ads — are part of what’s destroying western liberal democracy, and my app will not contribute to that.

Sunday, June 16th, 2019

Relearn CSS layout: Every Layout

A new site from Heydon and Andy that provides CSS algorithms for common layout patterns.

If you find yourself wrestling with CSS layout, it’s likely you’re making decisions for browsers they should be making themselves. Through a series of simple, composable layouts, Every Layout will teach you how to better harness the built-in algorithms that power browsers and CSS.
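
To give a flavour of those composable layouts, here’s the kind of primitive Every Layout teaches—a version of its “Stack” pattern, reproduced from memory, so treat it as a sketch rather than the site’s exact code:

    /* The Stack: put space between adjacent siblings and nowhere else,
       letting the browser's own layout algorithm handle the rhythm. */
    .stack > * + * {
      margin-block-start: 1.5rem;
    }

One rule, and the spacing decisions are handed back to the browser instead of being micromanaged element by element.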

Sunday, June 24th, 2018

Derek Powazek - AI is Not a Community Management Strategy

A really excellent piece from Derek on the history of community management online.

You have to decide what your platform is for and what it’s not for. And, yeah, that means deciding who it’s for and who it’s not for (hint: it’s not bots, nor nazis). That’s not a job you can outsource. The tech won’t do it for you. Not just because it’s your job, but because outsourcing it won’t work. It never does.

Saturday, June 16th, 2018

Artificial Intelligence for more human interfaces | Christian Heilmann

An even-handed assessment of the benefits and dangers of machine learning.

Tuesday, May 29th, 2018

We are all trapped in the “Feed” – Om on Tech

No matter where I go on the Internet, I feel like I am trapped in the “feed,” held down by algorithms that are like axes trying to make bespoke shirts out of silk. And no one illustrates it better than Facebook and Twitter, two more services that should know better, but they don’t. Fake news, unintelligent information and radically dumb statements are getting more attention than what matters. The likes, retweets, re-posts are nothing more than steroids for noise. Even when you are sarcastic in your retweets or re-shares, the system has the understanding of a one-year-old monkey baby: it is a vote on popularity.

Superfan! — Sacha Judd

The transcript of a talk that is fantastic in every sense.

Fans are organised, motivated, creative, technical, and frankly flat-out awe-inspiring.

Saturday, April 7th, 2018

Future Ethics

Cennydd is writing (and self-publishing) a book on ethics and digital design. It will be released in September.

Technology is never neutral: it has inevitable social, political, and moral impact. The coming era of connected smart technologies, such as AI, autonomous vehicles, and the Internet of Things, demands trust: trust the tech industry has yet to fully earn.

Thursday, March 8th, 2018

New Dark Age: Technology, Knowledge and the End of the Future by James Bridle

James is writing a book. It sounds like a barrel of laughs.

In his brilliant new work, leading artist and writer James Bridle offers us a warning against the future in which the contemporary promise of a new technologically assisted Enlightenment may just deliver its opposite: an age of complex uncertainty, predictive algorithms, surveillance, and the hollowing out of empathy.

Thursday, March 1st, 2018

Fair Is Not the Default - Library - Google Design

Why building inclusive tech takes more than good intentions.

When we run focus groups, we joke that it’s only a matter of seconds before someone mentions Skynet or The Terminator in the context of artificial intelligence. As if we’ll go to sleep one day and wake up the next with robots marching to take over. Few things could be further from the truth. Instead, it’ll be human decisions that we made yesterday, or make today and tomorrow that will shape the future. So let’s make them together, with other people in mind.

Wednesday, December 20th, 2017

Future Historians Probably Won’t Understand Our Internet - The Atlantic

You can’t log into the same Facebook twice.

The world as we experience it seems to be growing more opaque. More of life now takes place on digital platforms that are different for everyone, closed to inspection, and massively technically complex. What we don’t know now about our current experience will resound through time in historians of the future knowing less, too. Maybe this era will be a new dark age, as resistant to analysis then as it has become now.

Thursday, December 14th, 2017

Curation - Snook.ca

In the name of holy engagement, the native experience of products like Twitter, Facebook, and Instagram is moving away from giving people the ability to curate. They do this by taking control away from you, the user. By showing what other people liked, or by showing recommendations, without any way to turn it off, they prevent people from creating a better experience for themselves.

Tuesday, October 3rd, 2017

A good science fiction story… - daverupert.com

Dave applies two quotes from sci-fi authors to the state of today’s web.

A good science fiction story should be able to predict not the automobile but the traffic jam.

—Frederik Pohl

The function of science fiction is not only to predict the future, but to prevent it.

—Ray Bradbury

Friday, September 22nd, 2017

Idle Words: Anatomy of a Moral Panic

The real story in this mess is not the threat that algorithms pose to Amazon shoppers, but the threat that algorithms pose to journalism. By forcing reporters to optimize every story for clicks, not giving them time to check or contextualize their reporting, and requiring them to race to publish follow-on articles on every topic, the clickbait economics of online media encourage carelessness and drama.

Monday, June 12th, 2017

Here are 3 legal cases from the future

  1. People v. Dronimos
  2. Writers v. A.I. Rowling
  3. The Algorithm Defense

Wednesday, March 15th, 2017

Systems Smart Enough To Know When They’re Not Smart Enough | Big Medium

I can forgive our answer machines if they sometimes get it wrong. It’s less easy to forgive the confidence with which the bad answer is presented, giving the impression that the answer is definitive. That’s a design problem.