Update 2024-12-06
Some parts of this post have been updated to handle user agents with spaces in them.
In April, Ethan wrote this post about blocking AI bots with Apache and `.htaccess`. I've already done this for my `robots.txt` file but, quite frankly, I don't trust any of the AI companies to respect that. Jason Santa Maria (via Ethan):

> `robots.txt` is a bit like asking bots to not visit my site; with `.htaccess`, you're not asking
It's been on my list since then to work out how to do this with nginx, so yesterday I decided to be lazy and ask on Mastodon if anyone knew how. Luke pointed me to this post about using `.htaccess` in nginx (which you can't do, because it's an Apache feature), but it did include a link to an `.htaccess`-to-nginx converter which got me going in the right direction[1]. After some digging around, there are a few ways to do this. I could block each individual bot in its own block:
```
if ($http_user_agent = "BadBotOne") {
    return 403;
}
```
Or, preferably, I can include them all in one block:
```
# case sensitive
if ($http_user_agent ~ (BadBotOne|BadBotTwo)) {
    return 403;
}

# case insensitive
if ($http_user_agent ~* (BadBotOne|BadBotTwo)) {
    return 403;
}
```
Unlike `.htaccess`, I can't just make an `nginx.conf` file in my Eleventy site and be done with it: nginx config files don't live at the root of the site they serve. Turns out, you can `include` other conf files inside your main `nginx.conf`, which is handy. I did a quick test of a redirect in my main conf file to confirm it works as expected:
```
# nginx.conf
server {
    include /home/forge/rknight.me/nginx.conf;
    # ...
    # the rest of my nginx config
}
```

```
# nginx.conf file generated by 11ty
rewrite ^/thisisatestandnotarealpage /now permanent;
```
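Whenever the generated file changes, it's worth validating the combined config before reloading; on a typical systemd setup (assumed here) that looks like:

```
sudo nginx -t                  # test the full config, including any included files
sudo systemctl reload nginx    # apply the change without dropping connections
```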
One other thing I wanted to do was not expose this file on my site, but how can I set the location of the file to outside the `public` folder in Eleventy? In another turns-out moment, if you do `permalink: ../nginx.conf` (note the `..`) the file will be created one level up from the output directory. So if we take a look at my site in full, the brand new `nginx.conf` file has been built exactly where I want it:
```
cli
config
public <-- the directory my site builds to
src
+ nginx.conf
package-lock.json
package.json
```
I didn't want to commit this file to version control, so I added it to my `.gitignore`:

```
public
node_modules
+ nginx.conf
```
I was already pulling the bot data from this repository to generate my `robots.txt` file, so I just needed to update my data file to have a second version of the data in the correct format for the nginx config. I'm also filtering out Applebot because my understanding is that it's a search crawler and not related to AI gobbling.
```javascript
const res = await fetch("https://raw.githubusercontent.com/ai-robots-txt/ai.robots.txt/main/robots.txt")
let txt = await res.text()

// strip Applebot out of the robots.txt version
txt = txt.split("\n")
    .filter(line => line !== "User-agent: Applebot")
    .join("\n")

// pull out just the bot names, escaping spaces for the nginx regex
const bots = txt.split("\n")
    .filter(line => {
        return line.startsWith("User-agent:") && line !== "User-agent: Applebot"
    })
    .map(line => line.split(": ")[1].trim().replaceAll(' ', '\\ '))

const data = {
    txt: txt,
    nginx: bots.join('|'),
}
```
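The transformation itself can be sketched as a small pure function (the bot names below are made up), which also shows what the space-escaping from the update does:

```javascript
// Turn a robots.txt blocklist into an nginx-ready alternation pattern.
// Mirrors the data-file logic above: drop Applebot, escape spaces.
function robotsToNginxPattern(txt) {
    return txt
        .split("\n")
        .filter(line => line.startsWith("User-agent:") && line !== "User-agent: Applebot")
        .map(line => line.split(": ")[1].trim().replaceAll(" ", "\\ "))
        .join("|")
}

// Example with made-up user agents, including one with a space
const sample = [
    "User-agent: GPTBot",
    "User-agent: Applebot",
    "User-agent: Some Bot",
    "Disallow: /",
].join("\n")

console.log(robotsToNginxPattern(sample))
// → GPTBot|Some\ Bot
```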
I added a new file called `nginx.conf.njk` which looks like this:

```
---
permalink: ../nginx.conf
eleventyExcludeFromCollections: true
---
# Block AI bots
if ($http_user_agent ~* "({{ robots.nginx }})"){
    return 403;
}
```
Which outputs like so:

```
if ($http_user_agent ~* "(AdsBot-Google|Amazonbot|anthropic-ai|Applebot|Applebot-Extended|AwarioRssBot|AwarioSmartBot|Bytespider|CCBot|ChatGPT-User|ClaudeBot|Claude-Web|cohere-ai|DataForSeoBot|Diffbot|FacebookBot|FriendlyCrawler|Google-Extended|GoogleOther|GPTBot|img2dataset|ImagesiftBot|magpie-crawler|Meltwater|omgili|omgilibot|peer39_crawler|peer39_crawler/1.0|PerplexityBot|PiplBot|scoop.it|Seekr|YouBot)"){
    return 403;
}
```
As a bonus for doing this, I was able to add all my redirects from when I moved my blog posts under the `/blog` directory, plus the config for showing pretty RSS feeds. This way, if I ever rebuild the server I won't lose these, and they stay in version control, which is way better.
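Those redirects are just more lines in the generated file; a sketch with a made-up slug:

```
# old top-level post URL -> new /blog location
rewrite ^/my-old-post/?$ /blog/my-old-post/ permanent;
```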
To check this was working as expected, I set a custom user agent in Chrome: hit the three dots in the inspector > More tools > Network conditions > User agent. Then I set the user agent to `ClaudeBot`, refreshed my site, and saw a lovely 403 Forbidden page.
Anyway, Fuck AI crawlers.
Update

It would be a real shame if I took Melanie's suggestion of redirecting to a 10GB file instead:

```
return 307 https://ash-speed.hetzner.com/10GB.bin;
```
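Dropped into the generated block, that would replace the 403 (bot list abbreviated here; the real regex is still the one built from the data file):

```
if ($http_user_agent ~* "(GPTBot|ClaudeBot)"){
    return 307 https://ash-speed.hetzner.com/10GB.bin;
}
```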