Journal tags: algorithm

Feed reading

I described using my feed reader like this:

I would hate if catching up on RSS feeds felt like catching up on email.

Instead it’s like this:

When I open my RSS reader to catch up on the feeds I’m subscribed to, it doesn’t feel like opening my email client. It feels more like opening a book.

It also feels different to social media. Like Lucy Bellwood says:

I have a richer picture of the group of people in my feed reader than I did of the people I regularly interacted with on social media platforms like Instagram.

There’s also the blessed lack of any algorithm:

Because blogs are much quieter than social media, there’s also the ability to switch off that awareness that Someone Is Always Watching.

Cory Doctorow has been praising the merits of RSS:

This conduit is anti-lock-in, it works for nearly the whole internet. It is surveillance-resistant, far more accessible than the web or any mobile app interface.

Like Lucy, he emphasises the lack of algorithm:

By default, you’ll get everything as it appears, in reverse-chronological order.

Does that remind you of anything? Right: this is how social media used to work, before it was enshittified. You can single-handedly disenshittify your experience of virtually the entire web, just by switching to RSS, traveling back in time to the days when Facebook and Twitter were more interested in showing you the things you asked to see, rather than the ads and boosted content someone else would pay to cram into your eyeballs.

The only algorithm at work in my feed reader—or on Mastodon—is good old-fashioned serendipity, when posts just happen to rhyme or resonate. Like this morning, when I read this from Alice:

There is no better feeling than walking along, lost in my own thoughts, and feeling a small hand slip into mine. There you are. Here I am. I love you, you silly goose.

And then I read this from Denise:

I pass a mother and daughter, holding hands. The little girl is wearing a sequin-covered jacket. She looks up at her mother who says “…And the sun’s going to come out and you’re just going to shine and shine and shine.”

The cage

I subscribe to Peter Gasston’s newsletter, The Tech Landscape. It’s good. Peter’s a smart guy with his finger on the pulse of many technologies that are beyond my ken. I recommend subscribing.

But I was very taken aback by what he wrote in issue 202. It was to do with algorithmic recommendation engines.

This week I want to take a little dump on a tweet I read. I’m not going to link to it (I’m not that person), but it basically said something like: “I’m afraid to Google something because I don’t want the algorithm to think I like it, and I’m afraid to click a link because I don’t want the algorithm to show me more like it… what a cage.”

I saw the same tweet. It resonated with me. I had responded with a link to a post I wrote a while back called Get safe. That post made two points:

  1. GET requests shouldn’t have side effects. Adding to a dossier on someone’s browsing habits definitely counts as a side effect.
  2. It is literally a fundamental principle of the web platform that it should be safe to visit a web page.

But Peter describes ubiquitous surveillance as a feature, not a bug:

It’s observing what someone likes or does, then trying to make recommendations for more things like it—whether that’s books, TV shows, clothes, advertising, or whatever. It works on probability, so it’s going to make better guesses the more it knows you; if you like ten things of type A, then liking one thing of type B shouldn’t be enough to completely change its recommendations. The problem is, we don’t like “the algorithm” if it doesn’t work, and we don’t like it if works too well (“creepy!”). But it’s not sinister, and it’s not a cage.

He would be correct if the balance of power were tipped towards the person actively looking for recommendations. As I said in my earlier post:

Don’t get me wrong: building a profile of someone based on their actions isn’t inherently wrong. If a user taps on “like” or “favourite” or “bookmark”, they are actively telling the server to perform an update (and so those actions should be POST requests). But do you see the difference in where the power lies?

When Peter says “it’s not sinister, and it’s not a cage” that may be true for him, but that is not a shared feeling, as the original tweet demonstrates. I don’t think it’s fair to dismiss someone else’s psychological pain because you don’t think they “get it”. I’m pretty sure everyone “gets” how recommendation engines are supposed to work. That’s not the issue. Trying to provide relevant content isn’t the problem. It’s the unbelievably heavy-handed methods that make it feel like a cage.

Peter uses the metaphor of a record shop:

“The algorithm” is the best way to navigate a world of infinite choice; imagine you went to a record shop (remember them?) which had every recording ever released; how would you find new music? You’d either buy music by bands you know you already liked, or you’d take a pure gamble on something—which most of the time would be a miss. So you’d ask a store worker, and they’d recommend the music they liked—but that’s no guarantee you’d like it. A good worker would ask what type of music you like, and recommend music based on that—you might not like all the recommendations, but there’s more of a chance you’d like some. That’s just what “the algorithm” does.

But that’s not true. You don’t ask “the algorithm” for a recommendation—it foists them on you whether you want them or not. A more apt metaphor would be that you walked by a record shop once and the store worker came out and followed you down the street, into your home, and watched your every move for the rest of your life.

What Peter describes sounds great—a helpful, knowledgeable software agent that you ask for recommendations. But that’s not what “the algorithm” is. And that’s why it feels like a cage. That’s why it is a cage.

The original tweet was an open, honest, and vulnerable insight into what online recommendation engines feel like. That’s a valuable insight that should be taken on board, not dismissed.

And what a lack of imagination to look at an existing broken system—that doesn’t even provide good recommendations while making people afraid to click on links—and shrug and say that this is the best we can do. If this really is “the best way to navigate a world of infinite choice” then it’s no wonder that people feel like they need to go on a digital detox and get away from their devices in order to feel normal. It’s like saying that decapitation is the best way of solving headaches.

Imagine living in a surveillance state like East Germany, and saying “Well, how else is the government supposed to make informed decisions without constantly monitoring its citizens?” I think it’s more likely that you’d feel like you’re in a cage.

Apples to oranges? Kind of. But whether it’s surveillance communism or surveillance capitalism, there’s a shared methodology at work. They’re both systems that disempower people for the supposedly greater good of amassing data. Both are built on the false premise that problems can be solved by getting more and more data. If that results in collateral damage to people’s privacy and mental health, well …it’s all for the greater good, right?

It’s fucking bullshit. I don’t want to live in that cage and I don’t want anyone else to have to live in it either. I’m going to do everything I can to tear it down.

Utopia

Trys and James recently unveiled their Utopia project. They’ve been tinkering away at it behind the scenes for quite a while now.

You can check out the website and read the blog to get the details of how it accomplishes its goal:

Elegantly scale type and space without breakpoints.

I may well be biased, but I really like this project. I’ve been asking myself why I find it so appealing. Here are a few of the attributes of Utopia that strike a chord with me…

It’s collaborative

Collaboration is at the heart of Clearleft’s work. I know everyone says that, but we’ve definitely seen a direct correlation: projects with high levels of collaboration are invariably more successful than projects where people are siloed.

The genesis for Utopia came about after Trys and James worked together on a few different projects. It’s all too easy to let design and development splinter off into their own caves, but on these projects, Trys and James were working (literally) side by side. This meant that they could easily articulate frustrations to one another, and more important, they could easily share their excitement.

The end result of their collaboration is some very clever code. There’s an irony here. This code could be used to discourage collaboration! After all, why would designers and developers sit down together if they can just pass these numbers back and forth?

But I don’t think that Utopia will appeal to designers and developers who work in that way. Born in the spirit of collaboration, I suspect that it will mostly benefit people who value collaboration.

It’s intrinsic

If you’re a control freak, you may not like Utopia. The idea is that you specify the boundaries of what you’re trying to accomplish—minimum/maximum font sizes, minimum/maximum screen sizes, and some modular scales. Then you let the code—and the browser—do all the work.

On the one hand, this feels like surrendering control. But on the other hand, because the underlying system is so robust, it’s a way of guaranteeing quality, even in situations you haven’t accounted for.

If someone asks you, “What size will the body copy be when the viewport is 850 pixels wide?”, your answer would have to be “I don’t know …but I do know that it will be appropriate.”

This feels like a very declarative way of designing. It reminds me of the ethos behind Andy and Heydon’s site, Every Layout. They call it algorithmic layout design:

Employing algorithmic layout design means doing away with @media breakpoints, “magic numbers”, and other hacks, to create context-independent layout components. Your future design systems will be more consistent, terser in code, and more malleable in the hands of your users and their devices.

See how breakpoints are mentioned as being a very top-down approach to layout? Remember the tagline for Utopia, which aims for fluid responsive design?

Elegantly scale type and space without breakpoints.

Unsurprisingly, Andy really likes Utopia:

As the co-author of Every Layout, my head nearly fell off from all of the nodding when reading this because this is the exact sort of approach that we preach: setting some rules and letting the browser do the rest.

Heydon describes this mindset as automating intent. I really like that. I think that’s what Utopia does too.

As Heydon said at Patterns Day:

Be your browser’s mentor, not its micromanager.

The idea is that you give it rules, you give it axioms or principles to work on, and you let it do the calculation. You work with the in-built algorithms of the browser and of CSS itself.

This is all possible thanks to improvements to CSS like calc, flexbox and grid. Jen calls this approach intrinsic web design. Last year, I liveblogged her excellent talk at An Event Apart called Designing Intrinsic Layouts.
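
To make that mindset concrete, here’s a minimal sketch of my own (not code from Every Layout, Utopia, or Jen’s talk). You state the rule once and the browser works out the column count and widths at every viewport size:

```css
/* Intrinsic layout in one declaration: columns at least 15rem
   wide, as many as will fit, sharing any leftover space equally.
   No media queries, no magic numbers. */
.gallery {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(15rem, 1fr));
  gap: 1rem;
}
```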

Utopia feels like it has the same mindset as algorithmic layout design and intrinsic web design. Trys and James are building on the great work already out there, which brings me to the final property of Utopia that appeals to me…

It’s iterative

There isn’t actually much that’s new in Utopia. It’s a combination of existing techniques. I like that. As I said recently:

I’m a great believer in the HTML design principle, Evolution Not Revolution:

It is better to evolve an existing design rather than throwing it away.

First of all, Utopia uses the idea of modular scales in typography. Tim Brown has been championing this idea for years.

Then there’s the idea of typography being fluid and responsive—something Jason Pamental has been speaking and writing about.

On the code side, Utopia wouldn’t be possible without the work of Mike Riethmuller and his breakthroughs on responsive and fluid typography, which led to Tim’s work on CSS locks.

Utopia takes these building blocks and combines them. So if you’re wondering if it would be a good tool for one of your projects, you can take an equally iterative approach by asking some questions…

Are you using fluid type?

Do your font-sizes increase in proportion to the width of the viewport? I don’t mean in sudden jumps with @media breakpoints—I mean some kind of relationship between font size and the vw (viewport width) unit. If so, you’re probably using some kind of mechanism to cap the minimum and maximum font sizes—CSS locks.

I’m using that technique on Resilient Web Design. But I’m not changing the relative difference between different sized elements—body copy, headings, etc.—as the screen size changes.
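
For reference, a single CSS lock looks something like this. This is a hand-rolled sketch with illustrative numbers, not the code from Resilient Web Design or Utopia’s output:

```css
/* A CSS lock: 16px at a 320px viewport, growing linearly
   to 24px at 1200px, and capped at both ends. */
body {
  font-size: 16px;
}
@media (min-width: 320px) {
  body {
    /* linear interpolation between the two endpoints */
    font-size: calc(16px + (24 - 16) * ((100vw - 320px) / (1200 - 320)));
  }
}
@media (min-width: 1200px) {
  body {
    font-size: 24px;
  }
}
```

The drawback is plain to see: every font size you lock this way needs its own pair of media queries, which is exactly the repetition Utopia is designed to remove.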

Are you using modular scales?

Does your type system have some kind of ratio that describes the increase in type sizes? You probably have more than one ratio (unlike Resilient Web Design). The ratio for small screens should probably be smaller than the ratio for big screens. But rather than jump from one ratio to another at an arbitrary breakpoint, Utopia allows the ratio to be fluid.

So it’s not just that font sizes are increasing as the screen gets larger; the comparative difference is also subtly changing. That means there’s never a sudden jump in font size at any time.
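
Here’s a worked sketch of what a fluid ratio means (my own illustrative numbers, not Utopia’s output). Each step’s endpoints are computed with two different ratios, 1.2 at the small end and 1.333 at the large end, and each step is then locked between its own endpoints, so the effective ratio glides between the two:

```css
:root {
  /* step 0: body copy, 16px at a 320px viewport -> 18px at 1200px */
  --step-0: calc(16px + (18 - 16) * ((100vw - 320px) / (1200 - 320)));
  /* step 1: 16 x 1.2 = 19.2px -> 18 x 1.333 = 24px */
  --step-1: calc(19.2px + (24 - 19.2) * ((100vw - 320px) / (1200 - 320)));
  /* step 2: 16 x 1.2^2 = 23px -> 18 x 1.333^2 = 32px */
  --step-2: calc(23px + (32 - 23) * ((100vw - 320px) / (1200 - 320)));
  /* capping at the extremes is omitted here; see the single
     media query trick in the next section */
}
body { font-size: var(--step-0); }
h2   { font-size: var(--step-1); }
h1   { font-size: var(--step-2); }
```

Because every step is interpolated, the ratio between the h1 and the body copy drifts smoothly from 1.44 at the small end up to roughly 1.78 at the large end.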

Are you using custom properties?

A technical detail this, but the magic of Utopia relies on two powerful CSS features: calc() and custom properties. These two workhorses are used by Utopia to generate some CSS that you can stick at the start of your stylesheet. If you ever need to make changes, all the parameters are defined at the top of the code block. Tweak those numbers and watch everything cascade.

You’ll see that there’s one—and only one—media query in there. This is quite clever. Usually with CSS locks, you’d need to have a media query for every different font size in order to cap its growth at the maximum screen size. With Utopia, the maximum screen size—100vw—is abstracted into a variable (a custom property). The media query then changes its value to be the upper end of your CSS lock. So it doesn’t matter how many different font sizes you’re setting: because they all use that custom property, one single media query takes care of capping the growth of every font size declaration.
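
Here’s a sketch of that pattern, modelled on the description above rather than copied from Utopia’s actual generated code, using the same illustrative numbers as before:

```css
:root {
  --fluid-min-width: 320;
  --fluid-max-width: 1200;
  /* tracks the viewport width... */
  --fluid-screen: 100vw;
  /* shared interpolation factor: 0px at a 320px viewport,
     1px at 1200px (assuming a 16px root font size) */
  --fluid-bp: calc(
    (var(--fluid-screen) - var(--fluid-min-width) / 16 * 1rem) /
    (var(--fluid-max-width) - var(--fluid-min-width))
  );
}

/* ...until this one media query freezes it, capping the growth
   of every font size that uses --fluid-bp in a single stroke */
@media screen and (min-width: 1200px) {
  :root {
    --fluid-screen: calc(var(--fluid-max-width) * 1px);
  }
}

:root {
  /* the parameters to tweak: 16px at the small end, 24px at the large */
  --f-0-min: 16;
  --f-0-max: 24;
  --step-0: calc(
    ((var(--f-0-min) / 16) * 1rem) +
    (var(--f-0-max) - var(--f-0-min)) * var(--fluid-bp)
  );
}

body {
  font-size: var(--step-0);
}
```

Adding another font size is just another pair of min/max parameters; the one media query already takes care of capping it.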

If you’re already using CSS locks, modular scales, and custom properties, Utopia is almost certainly going to be a good fit for you.

If you’re not yet using those techniques, but you’d like to, I highly recommend using Utopia on your next project.

Voice of the bot-hive

Creating telephone answering systems can be fun as I discovered at History Hack Day when I put together the Huffduffer hotline using the Tropo API. There’s something thrilling about using the human voice as an interface on your loosely joined small pieces. Navigating by literally talking to a machine feels simultaneously retro and sci-fi.

I think there’s a lot of potential for some fun services in this area. What a shame then that the technology has mostly been used for dreary customer service narratives:

Horrific glimpse of a broken future. I sniffed while a voice activated phone menu was being read out and it started from the beginning again.

There’s been a lot of talk lately about injecting personality into web design, often through the tone of voice in the copy. When personality is conveyed in the spoken as well as the written word, the effect is even more striking.

Have a listen for yourself by calling the number for Customer Service Romance:

What happens when Customer Service bots start getting too smart? What if they start needing help too? How would they use the tools at their disposal to reach out to those they care about? What if they start caring about us a little too much?

It’s using the Voxeo service, which looks similar to Tropo.

The end result is amusing …but also slightly disconcerting. You may find yourself chuckling, but your laughter will be tinged with nervousness.

Customer Service Romance on Huffduffer

On the face of it, it’s an amusing little art project. But it might also be a glimpse of an impending bot-driven algorithmpocalypse.

Botonomy

In his talk at the Lift conference last year, Kevin Slavin talks about the emergent patterns in algorithmic trading, the bots that buy and sell with one another, occasionally resulting in flash crashes. It’s a great, slightly dark talk and I highly recommend you watch the video.

This is the same territory that Daniel Suarez explored in his book Daemon. The book is (science) fiction but as Suarez explains in his Long Now seminar, the reality is that much of our day-to-day lives is already governed by algorithms. In fact, the more important the question—e.g. “Will my mortgage be approved?”—the more likely it is that the decision will not be made by a human being.

Daniel Suarez: Daemon: Bot-mediated Reality on Huffduffer

Kevin Slavin mentions that financial algorithms are operating at such a high rate that the speed of light can make a difference to a company’s fortunes, hence the increase in real-estate prices close to network hubs. Now a new paper entitled Relativistic Statistical Arbitrage by Alexander Wissner-Gross and Cameron Freer has gone one further in mapping out “optimal intermediate locations between trading centers,” based on the Earth’s geometry and the speed of light.

In his novel Accelerando, Charles Stross charts the evolution of both humans and algorithms before, during and after a technological singularity.

The 2020s:

A marginally intelligent voicemail virus masquerading as an IRS auditor has caused havoc throughout America, garnishing an estimated eighty billion dollars in confiscatory tax withholdings into a numbered Swiss bank account. A different virus is busy hijacking people’s bank accounts, sending ten percent of their assets to the previous victim, then mailing itself to everyone in the current mark’s address book: a self-propelled pyramid scheme in action. Oddly, nobody is complaining much.

The 2040s:

High in orbit around Amalthea, complex financial instruments breed and conjugate. Developed for the express purpose of facilitating trade with the alien intelligences believed to have been detected eight years earlier by SETI, they function equally well as fiscal gatekeepers for space colonies.

The 2060s:

The damnfool human species has finally succeeded in making itself obsolete. The proximate cause of its displacement from the pinnacle of creation (or the pinnacle of teleological self-congratulation, depending on your stance on evolutionary biology) is an attack of self-aware corporations. The phrase “smart money” has taken on a whole new meaning, for the collision between international business law and neurocomputing technology has given rise to a whole new family of species—fast-moving corporate carnivores in the Net.