Neatnik Notes · Gotta block ’em all
While we’re playing whack-a-mole, let’s poison these rodents.
Blocking the bots is step one.
See, this is exactly why we need to poison these bots.
A handy resource for keeping your blocklist up to date in your robots.txt file.
Though the name of the website is unfortunate with its racism-via-laziness nomenclature.
A few months back, I wrote about how Google is breaking its social contract with the web, harvesting our content not in order to send search traffic to relevant results, but to feed a large language model that will spew auto-completed sentences instead.
I still think Chris put it best:
I just think it’s fuckin’ rude.
When it comes to the crawlers that are ingesting our words to feed large language models, Neil Clarke describes the situation:
It should be strictly opt-in. No one should be required to provide their work for free to any person or organization. The online community is under no responsibility to help them create their products. Some will declare that I am “Anti-AI” for saying such things, but that would be a misrepresentation. I am not declaring that these systems should be torn down, simply that their developers aren’t entitled to our work. They can still build those systems with purchased or donated data.
Alas, the current situation is opt-out. The onus is on us to update our robots.txt file.
Neil handily provides the current list to add to your file. Pass it on:
User-agent: CCBot
Disallow: /
User-agent: ChatGPT-User
Disallow: /
User-agent: GPTBot
Disallow: /
User-agent: Google-Extended
Disallow: /
User-agent: Omgilibot
Disallow: /
User-agent: FacebookBot
Disallow: /
In theory you should be able to group those user agents together, but citation needed on whether that’s honoured everywhere:
User-agent: CCBot
User-agent: ChatGPT-User
User-agent: GPTBot
User-agent: Google-Extended
User-agent: Omgilibot
User-agent: FacebookBot
Disallow: /
There’s a bigger issue with robots.txt though. It too is a social contract. And as we’ve seen, when it comes to large language models, social contracts are being ripped up by the companies looking to feed their beasts.
As Jim says:
I realized why I hadn’t yet added any rules to my robots.txt: I have zero faith in it.
That realisation was prompted in part by Manuel Moreale’s experiment with blocking crawlers:
So, what’s the takeaway here? I guess that the vast majority of crawlers don’t give a shit about your robots.txt.
Time to up the ante. Neil’s post offers an option if you’re running Apache. Either in .htaccess or in a .conf file, you can block user agents using mod_rewrite:
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (CCBot|ChatGPT|GPTBot|Omgilibot|FacebookBot) [NC]
RewriteRule ^ - [F]
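If you want to check that the rule is actually working, one quick sanity check is to spoof the user agent with curl and see whether the server responds with a 403. This assumes your site is at example.com, so substitute your own domain:

curl -I -A "GPTBot" https://example.com/

A request with an ordinary browser user agent should still come back with a 200.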
You’ll see that Google-Extended isn’t in that list. It isn’t a crawler. Rather it’s the permissions model that Google have implemented for using your site’s content to train large language models: unless you opt out via robots.txt, it’s assumed that you’re totally fine with your content being used to feed their stochastic parrots.
Now that the horse has bolted—and ransacked the web—you can shut the barn door:
To disallow GPTBot to access your site you can add the GPTBot to your site’s robots.txt:
User-agent: GPTBot
Disallow: /
I’m not down with Google swallowing everything posted on the internet to train their generative AI models.
This would mean a lot more if it had happened before the wholesale harvesting of everyone’s work.
But I’m sure Google will put a mighty fine lock on that stable door that the horse bolted from.
Back when the web was young, it wasn’t yet clear what the rules were. Like, could you really just link to something without asking permission?
Then came some legal rulings to establish that, yes, on the web you can just link to anything without checking if it’s okay first.
What about search engines and directories? Technically they’re rifling through all the stuff we publish and reposting snippets of it. Is that okay?
Again, through some legal precedents—but mostly common agreement—everyone decided that on balance it was fine. After all, those snippets they publish are helping your site get traffic.
In short order, search came to rule the web. And Google came to rule search.
The mutually beneficial arrangement persisted uneasily. Despite Google’s search results pages getting worse and worse in recent years, the company’s huge market share of search means you generally want to be in their good books.
Google’s business model relies on us publishing web pages so that they can put ads around the search results linking to that content, and we rely on Google to send people to our websites by responding smartly to search queries.
That has now changed. Instead of responding to search queries by linking to the web pages we’ve made, Google is instead generating dodgy summaries rife with hallucina… lies (a psychic hotline, basically).
Google still benefits from us publishing web pages. We no longer benefit from Google slurping up those web pages.
With AI, tech has broken the web’s social contract:
Google has steadily been manoeuvring their search engine results to more and more replace the pages in the results.
Me, I just think it’s fuckin’ rude.
Google is a portal to the web. Google is an amazing tool for finding relevant websites to go to. That was useful when it was made, and it’s done nothing but grow in usefulness. Google should be encouraging and fighting for the open web. But now they’re like, actually we’re just going to suck up your website, put it in a blender with all other websites, and spit out word smoothies for people instead of sending them to your website.
Ben proposes an update to robots.txt
that would allow us to specify licensing information:
Robots.txt needs an update for the 2020s. Instead of just saying what content can be indexed, it should also grant rights.
Like crawl my site only to provide search results not train your LLM.
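Just to make the idea concrete, here’s a purely hypothetical sketch of what that kind of rights-granting syntax might look like. No crawler recognises a directive like this today; it’s an illustration of the proposal, not something you can drop into your robots.txt file and expect to work:

User-agent: *
Allow: /
# Hypothetical directive, not part of any robots.txt standard
Usage-policy: search-indexing
Usage-policy: no-ai-training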
It’s a solid proposal. But Google has absolutely no incentive to implement it. They hold all the power.
Or do they?
There is still the nuclear option in robots.txt
:
User-agent: Googlebot
Disallow: /
That’s what Vasilis is doing:
I have been looking for ways to not allow companies to use my stuff without asking, and so far I couldn’t find any. But since this policy change I realised that there is a simple one: block google’s bots from visiting your website.
The general consensus is that this is nuts. “If you don’t appear in Google’s results, you might as well not be on the web!” is the common cry.
I’m not so sure. At least when it comes to personal websites, search isn’t how people get to your site. They get to your site from RSS, newsletters, links shared on social media or on Slack.
And isn’t it an uncomfortable feeling to think that there’s a third party service that you absolutely must appease? It’s the same kind of justification used by people who are still on Twitter even though it’s now a right-wing transphobic cesspit. “If I’m not on Twitter, I might as well not be on the web!”
The situation with Google reminds me of what Robin said about Twitter:
The speed with which Twitter recedes in your mind will shock you. Like a demon from a folktale, the kind that only gains power when you invite it into your home, the platform melts like mist when that invitation is rescinded.
We can rescind our invitation to Google.
A few years ago, I wrote about how much I enjoyed the book Aurora by Kim Stanley Robinson.
Not everyone liked that book. A lot of people were put off by its structure, in which the dream of interstellar colonisation meets the harsh truth of reality and the book follows where that leads. It pours cold water over the very idea of humanity becoming interplanetary.
But our own solar system is doable, right? I mean, Kim Stanley Robinson is the guy who wrote the Mars trilogy and 2312, both of which depict solar system colonisation in just a few centuries.
I wonder if the author might regret the way that some have taken his Mars trilogy as a sort of manual, Torment Nexus style. Kim Stanley Robinson is very much concerned with this planet in this time period, but others use his work to do the opposite.
But the backlash to Mars has begun.
Maciej wrote Why Not Mars:
The goal of this essay is to persuade you that we shouldn’t send human beings to Mars, at least not anytime soon. Landing on Mars with existing technology would be a destructive, wasteful stunt whose only legacy would be to ruin the greatest natural history experiment in the Solar System. It would no more open a new era of spaceflight than a Phoenician sailor crossing the Atlantic in 500 B.C. would have opened up the New World. And it wouldn’t even be that much fun.
Manu Saadia is writing a book about humanity in space, and he has a corresponding newsletter called Against Mars: Space Colonization and its Discontents:
What if space colonization was merely science-fiction, a narrative, or rather a meta-narrative, a myth, an ideology like any other? And therefore, how and why did it catch on? What is so special and so urgent about space colonization that countless scientists, engineers, government officials, billionaire oligarchs and indeed, entire nations, have committed work, ingenuity and treasure to make it a reality?
What if, and hear me out, space colonization was all bullshit?
I mean that quite literally. No hyperbole. Once you peer under the hood, or the nose, of the rocket ship, you encounter a seemingly inexhaustible supply of ghoulish garbage.
Two years ago, Shannon Stirone went into the details of why Mars Is a Hellhole:
The central thing about Mars is that it is not Earth, not even close. In fact, the only things our planet and Mars really have in common is that both are rocky planets with some water ice and both have robots (and Mars doesn’t even have that many).
Perhaps the most damning indictment of the case for Mars colonisation is that its most ardent advocate turns out to be an idiotic small-minded eugenicist who can’t even run a social media company, much less a crewed expedition to another planet.
But let’s be clear: we’re talking here about the proposition of sending humans to Mars—ugly bags of mostly water that probably wouldn’t survive. Robots and other uncrewed missions in our solar system …more of that, please!
I’ve come to believe the best way to look at our Mars program is as a faith-based initiative. There is a small cohort of people who really believe in going to Mars, the way some people believe in ghosts or cryptocurrency, and this group has an outsize effect on our space program.
Maciej lays out the case against a crewed mission to Mars.
Like George Lucas preparing to release another awful prequel, NASA is hoping that cool spaceships and nostalgia will be enough to keep everyone from noticing that their story makes no sense. But you can’t lie your way to Mars, no matter how sincerely you believe in what you’re doing.
And don’t skip the footnotes:
Fourth graders writing to Santa make a stronger case for an X-Box than NASA has been able to put together for a Mars landing.
This version of Roboto from Font Bureau is a very variable font indeed.
Thorough (and grim) research from Chris.
An online documentary series featuring interviews with smart people about the changing role of design.
As technology becomes more complex and opaque, how will we as designers understand its potential, do hands-on work, translate it into forms people can understand and use, and lead meaningful conversations with manufacturers and policymakers about its downstream implications? We are entering a new technology landscape shaped by artificial intelligence, advanced robotics and synthetic biology.
So far there’s Kevin Slavin, Molly Wright Steenson, and Alexandra Daisy Ginsberg, with more to come from the likes of Matt Jones, Anab Jain, Dan Hill, and many, many more.
Well, this is an interesting format experiment—the latest Black Mirror just dropped, and it’s a PDF.
A deep dive into Pixar’s sci-fi masterpiece, featuring entertaining detours to communist propaganda and Disney theme parks.
Prompted by his time at Clearleft’s AI gathering in Juvet, Chris has been delving deep into the stories we tell about artificial intelligence …and what stories are missing.
And here we are at the eponymous answer to the question that I first asked at Juvet around 7 months ago: What stories aren’t we telling ourselves about AI?
A thoroughly entertaining talk by Andy looking at the past, present, and future of robots, AI, and automation.
That’s a harsh headline but it is unfortunately deserved. We should indeed hold Mozilla to a higher standard.