Journal tags: robot

Crawlers

A few months back, I wrote about how Google is breaking its social contract with the web, harvesting our content not in order to send search traffic to relevant results, but to feed a large language model that will spew auto-completed sentences instead.

I still think Chris put it best:

I just think it’s fuckin’ rude.

When it comes to the crawlers that are ingesting our words to feed large language models, Neil Clarke describes the situation:

It should be strictly opt-in. No one should be required to provide their work for free to any person or organization. The online community is under no responsibility to help them create their products. Some will declare that I am “Anti-AI” for saying such things, but that would be a misrepresentation. I am not declaring that these systems should be torn down, simply that their developers aren’t entitled to our work. They can still build those systems with purchased or donated data.

Alas, the current situation is opt-out. The onus is on us to update our robots.txt file.

Neil handily provides the current list to add to your file. Pass it on:

User-agent: CCBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: Omgilibot
Disallow: /

User-agent: FacebookBot
Disallow: /

In theory you should be able to group those user agents together, but citation needed on whether that’s honoured everywhere:

User-agent: CCBot
User-agent: ChatGPT-User
User-agent: GPTBot
User-agent: Google-Extended
User-agent: Omgilibot
User-agent: FacebookBot
Disallow: /
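
Whether a parser accepts that grouped form is easy enough to sanity-check. Here’s a rough sketch using Python’s built-in robots.txt parser—it only tells you how that one parser reads the record, not whether any given crawler will actually respect it (example.com is just a placeholder):

from urllib.robotparser import RobotFileParser

grouped = """
User-agent: CCBot
User-agent: ChatGPT-User
User-agent: GPTBot
User-agent: Google-Extended
User-agent: Omgilibot
User-agent: FacebookBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(grouped.splitlines())

bots = ["CCBot", "ChatGPT-User", "GPTBot", "Google-Extended", "Omgilibot", "FacebookBot"]
for bot in bots:
    # Expect False (i.e. disallowed) for every one of them if grouping is honoured.
    print(bot, parser.can_fetch(bot, "https://example.com/"))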

There’s a bigger issue with robots.txt though. It too is a social contract. And as we’ve seen, when it comes to large language models, social contracts are being ripped up by the companies looking to feed their beasts.

As Jim says:

I realized why I hadn’t yet added any rules to my robots.txt: I have zero faith in it.

That realisation was prompted in part by Manuel Moreale’s experiment with blocking crawlers:

So, what’s the takeaway here? I guess that the vast majority of crawlers don’t give a shit about your robots.txt.

Time to up the ante. Neil’s post offers an option if you’re running Apache. Either in .htaccess or in a .conf file, you can block user agents using mod_rewrite:

RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (CCBot|ChatGPT|GPTBot|Omgilibot|FacebookBot) [NC]
RewriteRule ^ - [F]

You’ll see that Google-Extended isn’t in that list. It isn’t a crawler. Rather it’s the permissions model that Google have implemented for using your site’s content to train large language models: unless you opt out via robots.txt, it’s assumed that you’re totally fine with your content being used to feed their stochastic parrots.
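
If you’re not running Apache, the same blocking can happen at the application level. Here’s a minimal sketch as a Python WSGI middleware—the class name, the exact pattern, and the 403 response are my own choices, not from Neil’s post—that refuses requests from the same user agents as the Apache rule above:

import re

# Same user agents as the mod_rewrite rule; adjust to taste.
BLOCKED_BOTS = re.compile(r"CCBot|ChatGPT|GPTBot|Omgilibot|FacebookBot", re.IGNORECASE)

class BlockCrawlers:
    """WSGI middleware that returns 403 Forbidden to blocked crawlers."""
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        user_agent = environ.get("HTTP_USER_AGENT", "")
        if BLOCKED_BOTS.search(user_agent):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Forbidden"]
        return self.app(environ, start_response)

# Usage: wrap your existing WSGI application, e.g.
# application = BlockCrawlers(application)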

Permission

Back when the web was young, it wasn’t yet clear what the rules were. Like, could you really just link to something without asking permission?

Then came some legal rulings to establish that, yes, on the web you can just link to anything without checking if it’s okay first.

What about search engines and directories? Technically they’re rifling through all the stuff we publish and reposting snippets of it. Is that okay?

Again, through some legal precedents—but mostly common agreement—everyone decided that on balance it was fine. After all, those snippets they publish are helping your site get traffic.

In short order, search came to rule the web. And Google came to rule search.

The mutually beneficial arrangement persisted uneasily. Despite Google’s search results pages getting worse and worse in recent years, the company’s huge market share of search means you generally want to be in their good books.

Google’s business model relies on us publishing web pages so that they can put ads around the search results linking to that content, and we rely on Google to send people to our websites by responding smartly to search queries.

That has now changed. Instead of responding to search queries by linking to the web pages we’ve made, Google is instead generating dodgy summaries rife with hallucina… lies (a psychic hotline, basically).

Google still benefits from us publishing web pages. We no longer benefit from Google slurping up those web pages.

With AI, tech has broken the web’s social contract:

Google has steadily been manoeuvring their search engine results to more and more replace the pages in the results.

As Chris puts it:

Me, I just think it’s fuckin’ rude.

Google is a portal to the web. Google is an amazing tool for finding relevant websites to go to. That was useful when it was made, and it’s nothing but grown in usefulness. Google should be encouraging and fighting for the open web. But now they’re like, actually we’re just going to suck up your website, put it in a blender with all other websites, and spit out word smoothies for people instead of sending them to your website. Instead.

Ben proposes an update to robots.txt that would allow us to specify licensing information:

Robots.txt needs an update for the 2020s. Instead of just saying what content can be indexed, it should also grant rights.

Like crawl my site only to provide search results not train your LLM.

It’s a solid proposal. But Google has absolutely no incentive to implement it. They hold all the power.

Or do they?

There is still the nuclear option in robots.txt:

User-agent: Googlebot
Disallow: /

That’s what Vasilis is doing:

I have been looking for ways to not allow companies to use my stuff without asking, and so far I couldn’t find any. But since this policy change I realised that there is a simple one: block google’s bots from visiting your website.

The general consensus is that this is nuts. “If you don’t appear in Google’s results, you might as well not be on the web!” is the common cry.

I’m not so sure. At least when it comes to personal websites, search isn’t how people get to your site. They get to your site from RSS, newsletters, links shared on social media or on Slack.

And isn’t it an uncomfortable feeling to think that there’s a third party service that you absolutely must appease? It’s the same kind of justification used by people who are still on Twitter even though it’s now a right-wing transphobic cesspit. “If I’m not on Twitter, I might as well not be on the web!”

The situation with Google reminds me of what Robin said about Twitter:

The speed with which Twitter recedes in your mind will shock you. Like a demon from a folktale, the kind that only gains power when you invite it into your home, the platform melts like mist when that invitation is rescinded.

We can rescind our invitation to Google.

Mars distracts

A few years ago, I wrote about how much I enjoyed the book Aurora by Kim Stanley Robinson.

Not everyone liked that book. A lot of people were put off by its structure, in which the dream of interstellar colonisation meets the harsh truth of reality and the book follows where that leads. It pours cold water over the very idea of humanity becoming interplanetary.

But our own solar system is doable, right? I mean, Kim Stanley Robinson is the guy who wrote the Mars trilogy and 2312, both of which depict solar system colonisation in just a few centuries.

I wonder if the author might regret the way that some have taken his Mars trilogy as a sort of manual, Torment Nexus style. Kim Stanley Robinson is very much concerned with this planet in this time period, but others use his work to do the opposite.

But the backlash to Mars has begun.

Maciej wrote Why Not Mars:

The goal of this essay is to persuade you that we shouldn’t send human beings to Mars, at least not anytime soon. Landing on Mars with existing technology would be a destructive, wasteful stunt whose only legacy would be to ruin the greatest natural history experiment in the Solar System. It would no more open a new era of spaceflight than a Phoenician sailor crossing the Atlantic in 500 B.C. would have opened up the New World. And it wouldn’t even be that much fun.

Manu Saadia is writing a book about humanity in space, and he has a corresponding newsletter called Against Mars: Space Colonization and its Discontents:

What if space colonization was merely science-fiction, a narrative, or rather a meta-narrative, a myth, an ideology like any other? And therefore, how and why did it catch on? What is so special and so urgent about space colonization that countless scientists, engineers, government officials, billionaire oligarchs and indeed, entire nations, have committed work, ingenuity and treasure to make it a reality.

What if, and hear me out, space colonization was all bullshit?

I mean that quite literally. No hyperbole. Once you peer under the hood, or the nose, of the rocket ship, you encounter a seemingly inexhaustible supply of ghoulish garbage.

Two years ago, Shannon Stirone went into the details of why Mars Is a Hellhole:

The central thing about Mars is that it is not Earth, not even close. In fact, the only things our planet and Mars really have in common is that both are rocky planets with some water ice and both have robots (and Mars doesn’t even have that many).

Perhaps the most damning indictment of the case for Mars colonisation is that its most ardent advocate turns out to be an idiotic small-minded eugenicist who can’t even run a social media company, much less a crewed expedition to another planet.

But let’s be clear: we’re talking here about the proposition of sending humans to Mars—ugly bags of mostly water that probably wouldn’t survive. Robots and other uncrewed missions in our solar system …more of that, please!

dConstruct 2015 podcast: Brian David Johnson

The newest dConstruct podcast episode features the indefatigable and effervescent Brian David Johnson. Together we pick apart the futures we are collectively making, probe the algorithmic structures of science fiction narratives, and pay homage to Asimovian robotic legal codes.

Brian’s enthusiasm is infectious. I have a strong hunch that his dConstruct talk will be both thought-provoking and inspiring.

dConstruct 2015 is getting close now. Our future approaches. Interviewing the speakers ahead of time has only increased my excitement and anticipation. I think this is going to be a truly unmissable event. So, uh, don’t miss it.

Grab your ticket today and use the code ‘ansible’ to take advantage of the 10% discount for podcast listeners.

dConstruct 2015 podcast: Carla Diana

The dConstruct podcast episodes are coming thick and fast. The latest episode is a thoroughly enjoyable natter I had with the brilliant Carla Diana.

We talk about robots, smart objects, prototyping, 3D printing, and the world of teaching design.

Remember, you can subscribe to the podcast feed in any podcast software you like, or if iTunes is your thing, you can also subscribe directly in iTunes.

And don’t forget to use the discount code ‘ansible’ when you’re buying your dConstruct ticket …because you are coming to dConstruct, right?

Space by Botwest

I had a whole day of good talks yesterday at South by Southwest …and none of them were in the Austin Convention Center. In a very real sense, the good stuff at this event is getting pushed to the periphery.

The day started off in the Driskill Hotel with the New Aesthetic panel that James assembled. It was great, like a mini-conference packed into one hour with wonderfully dense knowledge bombs lobbed from all concerned. Joanne McNeil gave us the literary background, Ben searched for meaning (and humour) in advertising trends, Russell looked at how machines are changing what we read and write, and Aaron …um, talked about the helium-balloon predator drone in the corner of the room.

With our brains primed for the intersections where humans and machines meet, it wasn’t hard to keep pattern-matching for it. In fact, the panel right afterwards on technology and fashion was filled with wonderful wearable expressions of the New Aesthetic.

Alas, I wasn’t able to attend that panel because I had to get to the green room to prepare for my own appearance on Get Excited and Make Things With Science with Ariel and Matt. It was a lot of fun and it was a real pleasure to be on a panel with such smart people.

I basically used the panel as an opportunity to geek out about some of my favourite science-related hacks and websites:

After that I stayed in the Driskill for a panel on robots and AI. One of the panelists was Bina48.

I had heard about Bina48 from a Radiolab episode.

Radiolab - Talking to Machines on Huffduffer

Jon Ronson described the strange experience of interviewing her—how the questions always tended to the profound and meaningful rather than trivial and chatty. Sure enough, once Bina was (literally) unveiled on the panel—a move that was wisely left till halfway through because, as the panelists said, “after that, you’re not going to pay attention to a word we say”—people started asking questions like “Do you dream?” and “What is the meaning of life?”

I asked her “Where were you before you were here?” She calmly answered that she was made in Texas. The New Aesthetic panelists would’ve loved her.

I was surprised by how much discussion of digital preservation there was on the robots/AI panel. Then again, the panel was hosted by a researcher from The Digital Beyond.

Bina48’s personality is based on the mind file of a real person containing exactly the kind of data that we are publishing every day to third-party sites. The question of what happens to that data was the subject of the final panel I attended, Saying Goodbye to Your Digital Self, featuring representatives from The Internet Archive, Archive Team, and Google’s Data Liberation Front.

Digital preservation is an incredibly important topic—one close to my heart—but the panel (in the Omni hotel) was, alas, sparsely attended.

Like I said, at this year’s South by Southwest, a lot of the good stuff was at the edges.

I, Interface

Isaac Asimov’s Three Laws of Robotics, though currently fictional, are an excellent set of design principles:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

One could easily imagine a similar set of laws being applied to the field of user experience and interface design:

  1. An interface may not injure a user or, through inaction, allow a user to come to harm.
  2. An interface must obey any orders given to it by users, except where such orders would conflict with the First Law.
  3. An interface must protect its own existence as long as such protection does not conflict with the First or Second Law.

Okay, that last one’s a bit of a stretch but you get the idea.

In his later works Asimov added the zeroth law that supersedes the initial three laws:

  0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

I think that this can also apply to user experience and interface design.

Take the password anti-pattern (please!). On the level of an individual site, it could be considered a benefit to the current user, allowing them to quickly and easily hand over lots of information about their contacts. But taken on the wider level, it teaches people that it’s okay to hand over their email password to third-party sites. The net result of reinforcing that behaviour is definitely not good for the web as a whole.

I’m proposing a zeroth law of user experience that goes beyond the existing paradigm of user-centred design:

  0. An interface may not harm the web, or, by inaction, allow the web to come to harm.

One hundred and seven

The word “awesome” is over-used. I’m about to over-use it some more.

The internet is mostly awesome. Some human beings are also awesome. When you combine the two, you get awesome things. Here are just two such awesome things:

  1. Anton Peck is a brilliant illustrator. He’s currently executing a project called 100 Little Robots. Anton will craft postcards for 100 people, each postcard displaying a unique hand-drawn robot. If you want to be one of those 100 people, order your robot card now. I got mine and it is, well …awesome.
  2. James Bridle is a brilliant writer. He just finished a series of articles called Seven posts about the future. Read them all. Seriously. They are all wonderful and the final story reads like a Nerdpunk copulation of Salman Rushdie and William Gibson.

Awesome.

A drawing of a robot

Alpha

I’ve always thought that Brighton has a lot of steampunk appeal. Quite apart from the potential for criminal mastermind lairs, there are a whole slew of wonderful inventions from the mind of Magnus Volk.

The Volk’s Electric Railway is still in use today. The Daddy Long Legs, alas, is not. And while the Clock Tower still stands in the centre of town, its moving parts have been disabled (due to noise complaints and damage to the structural integrity):

The hydraulically operated copper sphere moved up and down a 16-foot (4.9 m) metal mast every hour, based on electrical signals transmitted from the Royal Observatory, Greenwich.

But even with all this steampunk history, I was still surprised to read the story of Alpha the robot on Paleo-Future:

During the autumn of 1932 a group of curious onlookers assembled in Brighton, England to see inventor Harry May’s latest invention, Alpha the robot. The mechanical man was controlled by verbal commands and sat in a chair silently while May carefully placed a gun in Alpha’s hand.

It all goes horribly awry according to contemporary reports, doubtless exaggerated. I, for one, welcome our new metal overlords.

Cyberneticzoo.com has more details on Alpha, including Time magazine’s account of its 1934 tour of America:

When commanded, the robot lowered its arm, raised the other, lowered it, turned its head from side to side, opened and closed its prognathous jaw, sat down. Then Impresario May asked Alpha a question:

“How old are you?”

From the robot’s interior a cavernous Cockney voice responded:

“Fourteen years.”

May: What do you weigh?

Alpha: One ton.

A dozen other questions and answers followed, some elaborately facetious. When May inquired what the automaton liked to eat, it responded with a minute-long discourse on the virtues of toast made with Macy’s automatic electric toaster.

David Buckley has more details including a spread from Practical Mechanics explaining Alpha’s inner workings.

Automata

The Flash on the Beach conference is currently underway here in Brighton. I spoke at the conference two years ago, so thanks to organiser John Davey’s commitment to giving past speakers guest passes to future events, I’ve been popping in and out of the Dome over the past couple of days to sit in on some talks.

Yesterday I saw Branden Hall talk about Brilliant Ideas that I’ve Blatantly Stolen. Although his specific examples dealt with ActionScript, his overall message was applicable to any developer: look around at other languages and frameworks and scavenge anything you like the look of.

I wanted to make it to Aral’s talk this morning but as he was on first thing and I’m a lazy bugger, that didn’t really work out. I did, however, make it over in time to hear Carla Diana.

Carla made her name in the Flash world a few years ago with her wonderful site Repercussion where you can play around with sounds through a lovely isometric interface. Lately she’s been working with robots. Or rather, one robot in particular: Leo.

Here’s Leo in action:

Leo

Carla’s job was to come up with a skin for Leo that didn’t send children running screaming. Yes, it’s the problem that plagues Japanese robots and Robert Zemeckis CGI movies in equal measure: the uncanny valley.

Want to see something uncanny?

Boston Dynamics Big Dog

I was at Carla’s talk with Sophie and we were talking about robots afterwards (as you would). She said that watching robots in motion often makes her feel sad. Looking at that video, particularly the bit where the quadruped is kicked to demonstrate its balance, I understand what she means.

Funnily enough, my favourite robot is also a quadruped. All I want for Christmas is a Tachikoma.

Ghost in the Shell: S.A.C. 2nd GIG

Or maybe I should just build my own. The latest project that Carla Diana is working on is something to make the Arduino enthusiast drool. It’s called littleBits:

littleBits is an opensource library of discrete electronic components pre-assembled in tiny circuit boards. Just as Legos allow you to create complex structures with very little engineering knowledge, littleBits are simple, intuitive, space-sensitive blocks that make prototyping with sophisticated electronics a matter of snapping small magnets together.

littleBits intro

That sound

Despite being a huge Pixar fan, I still haven’t seen Wall•E. That’s mostly due to my belief that a typical cinema is not necessarily the best viewing environment for any movie, but particularly for one that you want to get really engrossed in …unless the cinema is empty of humans.

I’m not sure if I can hold out much longer though, especially after reading this wonderful story about how the people at Pixar responded to one blogger’s reaction to seeing the first trailer for the movie last year. Eda Cherry describes herself as having a strong fondness for robots so Wall•E is already pushing all the right buttons. The moment when he says his own name is the moment that pushes her over the edge — it makes her cry every time. Partly it’s the robot’s droopy eyes as he looks up into space but also:

It’s the voice modulation.

That would be Ben Burtt. I remember as a child receiving the quarterly Star Wars fan club newsletter, Bantha Tracks, and reading about the amazing amount of found sounds that went into the soundscape of that galaxy far, far away: animal noises, broken TV sets, tuning forks tapped against high-tension wires. And of course R2D2, voiced by Ben Burtt himself.

Now, with Wall•E, he’s voicing another lovable robot, one capable of moving humans to tears. His involvement is no coincidence. In the initial brainstorming for the project, John Lasseter repeatedly described it as R2D2: The Movie.

The journey involved in turning that initial idea into a finished film is a long one. For a closer look at the process at Pixar, be sure to read Peter Merholz’s chat with Michael B. Johnson. Their storyboarding process sounds a lot like wireframing:

We’d much rather fail with a bunch of sketches that we did (relatively) quickly and cheaply, than once we’ve modeled, rigged, shaded, animated, and lit the film. Fail fast, that’s the mantra.