Qubyte Codes - IndieWebCamp Brighton 2024
Mark’s write-up of the excellent Indie Web Camp Brighton that he co-organised with Paul.
The past weekend’s Indie Web Camp Brighton was wonderful! Many thanks to Mark and Paul for all their work putting it together.
There was a great turn-out. It felt like the perfect time for an Indie Web Camp. There’s a real appetite for getting away from ever more extractive silos and staking claim to our own corners of the web. Most of the attendees were at their first ever Indie Web Camp.
Paul asked me to oversee the schedule planning on day one, which I was happy to do. We made sure that first-timers got first dibs on proposing sessions. In the end, every single session was proposed by new attendees.
Day two was all about putting ideas into practice: coding, designing, and writing on our own websites. I’m always blown away by how much gets done in just one short day. Best of all is when there’s someone who starts the weekend without their own website but finishes with a live site. That happened again this time.
I spent the second day tinkering with something I started at Indie Web Camp Nuremberg in October. Back then, I got related posts working here on my journal: a list of suggested follow-up posts to read based on the tags of the current post.
I wanted to do the same for my links: show links related to the one I’m currently linking to. It didn’t take too long to get that up and running.
But then I thought about it some more and realised it would be good to also show blog posts related to the link. So I did that. Then I realised it would be really good to show related links under blog posts too.
So now, if everything’s working correctly, then at the end of this post you will not only see related blog posts I’ve previously written, but also links related to the content of this post.
It was a very inspiring weekend. There’s something about being in a room with other people working on their websites that makes me super productive.
While we were hacking away on day two, somebody mentioned that they still find it hard to explain the indie web to people.
“It’s having your own website”, I said.
But surely there’s more to it than that, they wondered.
Nope. If someone has their own website, then they’re part of the indie web. It doesn’t matter if that website is made with a complicated home-rolled tech stack or if it’s a Squarespace site.
What you do with your own website is entirely up to you. The technologies are just plumbing, whether it’s webmentions, RSS, or anything else. None of it is a requirement. Heck, even HTML is optional. If you want to put plain text files on your website, go for it. It’s your website.
After two days at border:none in Nuremberg, it was time for two days at Indie Web Camp, also in Nuremberg.
I hadn’t been to an Indie Web Camp since before The Situation. It felt very good to be back. I had almost forgotten how inspiring and productive they can be.
This one had a good turnout of around twenty people. We had ourselves an excellent first day of thought-provoking sessions. Then on day two it was time to put some of those ideas into action.
A little trick I like to do on the practical day is to have two tasks to attempt: one of them quite simple, and the other more ambitious. That way, as long as I get the simpler task done, I’ll always have at least something to demo at the end of the day.
This time I attempted three bits of home improvement on my website.
The first problem I set myself was ostensibly the simple one. But it involved regular expressions, so then I had two problems.
I wanted to automatically link up Mastodon usernames if I mentioned one in my notes. For example, during border:none I mentioned Brian’s mastodon username in a note: @briansuda@loðfíll.is.
That turned out to be an excellent test case. Those Icelandic characters made sure I wasn’t making unwarranted assumptions about character sets.
Here’s the regular expression I came up with. It’s not foolproof by any means. Basically it looks for @[email protected].
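Something along these lines captures the idea. This is a rough JavaScript sketch rather than my actual regular expression, and it’s deliberately loose:

// Rough sketch, not the actual regular expression used on the site.
const mastodonPattern = /@([\w.-]+)@([^\s@]+\.[^\s@]+)/gu;

// Wrap every @user@host mention in a link to that profile.
const note = 'During border:none I chatted with @briansuda@loðfíll.is';
const linked = note.replace(mastodonPattern, (match, user, host) =>
  `<a href="https://${host}/@${user}">${match}</a>`
);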
Good enough. Ship it.
My next task was a bit more ambitious. It involved SQL queries, something I’m slightly better at than regular expressions but that’s a very low bar.
I wanted to show related posts when you get to the end of one of my blog posts.
I’ve been tagging all my blog posts for years so that’s the mechanism I used for finding similar posts. There’s probably a clever SQL statement that could do this, but I ended up brute-forcing it a bit.
I don’t feel too bad about the hacky clunky nature of my solution, because I cache blog post pages. That means only the first person to view the blog post (usually me) will suffer any performance impacts from my clunky database queries. After that everything’s available straight from a cached file.
Let’s say you’re reading a blog post of mine that I’ve tagged with ten different keywords. I make a separate SQL query for each keyword to get all the other posts that use that tag. Then it’s a matter of sorting through all the results.
I loop through the results of each tag and apply a score to the tagged post. If the post shares one tag with the post you’re looking at, it has a score of one. If it shares two tags, it has a score of two, and so on.
I decided that for a post to be considered related, it had to share at least three tags. I also decided to limit the list of related posts to a maximum of five.
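Sketched out in JavaScript, the brute-force scoring looks something like this. The query() helper and the table names are made up for illustration; this isn’t the actual site code:

// A made-up query() helper stands in for whatever runs the SQL;
// posts and taggings are stand-in table names too.
async function relatedPosts(currentPost) {
  const scores = new Map();
  for (const tag of currentPost.tags) {
    // One query per tag: every other post that shares this tag.
    const rows = await query(
      `SELECT posts.id, posts.title, posts.url
         FROM posts
         JOIN taggings ON taggings.post_id = posts.id
        WHERE taggings.tag = ? AND posts.id != ?`,
      [tag, currentPost.id]
    );
    for (const row of rows) {
      // One point for every tag a post shares with the current post.
      const entry = scores.get(row.id) || { post: row, score: 0 };
      entry.score += 1;
      scores.set(row.id, entry);
    }
  }
  return [...scores.values()]
    .filter((entry) => entry.score >= 3) // must share at least three tags
    .sort((a, b) => b.score - a.score)
    .slice(0, 5) // never more than five related posts
    .map((entry) => entry.post);
}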
It worked out pretty well. If you scroll down on my recent post about JavaScript, you’ll see links to related posts about JavaScript. If you read through a post on accessibility testing, you’ll find other posts about accessibility testing. If you make it to the end of this post about Mars colonisation you’ll see links to more posts about exploring our solar system.
Right now I’m just doing this for my blog but I’d like to do it for my links too. A job for a future Indie Web Camp.
I was very inspired by Remy’s recent post on how he’s tackling link rot on his site. I wanted to do the same for mine.
On the first day at Indie Web Camp I led a session on link rot to gather ideas and alternative approaches. We had a really good discussion, though it’s always worth bearing in mind that there’ll never be a perfect solution. There’ll always be some false positives and some false negatives.
The other Jeremy at Indie Web Camp Nuremberg blogged about the session. Sebastian Greger was attending remotely and the session inspired him to spend the second day also tackling link rot.
In the end I decided to stick with Remy’s two-pronged approach: a little bit of JavaScript on the client side to intercept clicks on outbound links, and a server-side redirector that checks whether the destination is still alive.
Here’s the JavaScript I wrote for the first part. It’s very similar to Remy’s but with one little addition. I check to see if the clicked link is inside an h-entry and, if it is, I pass on the date from the post’s dt-published value.
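The gist of it looks something like this rough sketch; the /links/redirect endpoint name and the selectors here are stand-ins rather than the real code:

// Sketch only. Intercept clicks on outbound links; if the link sits inside
// an h-entry, pass the post's dt-published date along to the redirector.
// The /links/redirect endpoint name is a stand-in.
document.addEventListener('click', (event) => {
  const link = event.target.closest('a[href^="http"]');
  if (!link) {
    return;
  }
  const entry = link.closest('.h-entry');
  const published = entry ? entry.querySelector('.dt-published') : null;
  const date = published ? published.getAttribute('datetime') : '';
  event.preventDefault();
  let destination = '/links/redirect?url=' + encodeURIComponent(link.href);
  if (date) {
    destination += '&date=' + encodeURIComponent(date);
  }
  window.location = destination;
});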
Here’s the PHP I wrote for the server-side redirector. The comments tell the story of what the code is doing. It makes a curl request to get the response headers from the URL, with the time limit set to 1 second.
Not perfect by any means, but it works for the most common cases of link rot.
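The real redirector is PHP, but the same logic can be sketched in JavaScript. The resolveLink() name is a stand-in; only the one-second time limit comes from the description above:

// Sketch only: the real redirector is PHP; resolveLink() is a stand-in name.
// Check whether the URL still responds; if not, send the visitor to the
// Wayback Machine as close as possible to the date of the original post.
async function resolveLink(url, date) {
  try {
    // Only the response headers are needed; give up after one second.
    const response = await fetch(url, {
      method: 'HEAD',
      signal: AbortSignal.timeout(1000),
    });
    if (response.ok) {
      return url; // the link still works
    }
  } catch (error) {
    // Timeouts and network errors fall through to the archive.
  }
  const stamp = date ? date.replaceAll('-', '') + '/' : '';
  return `https://web.archive.org/web/${stamp}${url}`;
}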
For the demo at the end of the day I went back into my archive of over 10,000 links and plucked out some old posts, like this one from December 2005. It takes a little while to do the rerouting but eventually you get to see the archived version from the same time period as when I linked to it.
Here’s another link from 2005. Here’s another. Those links are broken now, but with a little patience, you’ll still get to read them on the Internet Archive.
The Internet Archive’s Wayback Machine really is a gift. I can’t imagine how it would be even remotely possible to try to address link rot on my site without archive.org.
I will continue to donate money to the Internet Archive and I encourage you to do the same.
If you employ a hack, don’t be so ashamed. Don’t be too proud, either. Above all, don’t be lazy—be certain and deliberate about why you’re using a hack.
I agree that hacks for prototyping are a-okay:
When it comes to prototypes, A/B tests, and confirming hypotheses about your product the best way to effectively deliver is actually by writing the fastest, shittiest code you can.
I’m not so sure about production code though.
The ZX Spectrum in a time of revolution:
Gaming the Iron Curtain offers the first book-length social history of gaming and game design in 1980s Czechoslovakia, or anywhere in the Soviet bloc. It describes how Czechoslovak hobbyists imported their computers, built DIY peripherals, and discovered games as a medium, using them not only for entertainment but also as a means of self-expression.
I can very much relate to what Craig is saying here.
This sounds like seamful design:
How to enable not users but adaptors? How can people move from using a product, to understanding how it hangs together and making their own changes? How do you design products with, metaphorically, screws not nails?
I can’t decide if this is industrial sabotage or political protest. Either way, I like it.
99 second hand smartphones are transported in a handcart to generate virtual traffic jam in Google Maps. Through this activity, it is possible to turn a green street red which has an impact in the physical world by navigating cars on another route to avoid being stuck in traffic.
Here’s the talk that Remy and I gave at Fronteers in Amsterdam, all about our hack week at CERN. We’re both really pleased with how this turned out and we’d love to give it again!
The New York Times is publishing science-fictional op-eds. The first one is from Ted Chiang on the Gene Equality Project forty years in our future:
White supremacist groups have claimed that its failure shows that certain races are incapable of being improved, given that many — although by no means all — of the beneficiaries of the project were people of color. Conspiracy theorists have accused the participating geneticists of malfeasance, claiming that they pursued a secret agenda to withhold genetic enhancements from the lower classes. But these explanations are unnecessary when one realizes the fundamental mistake underlying the Gene Equality Project: Cognitive enhancements are useful only when you live in a society that rewards ability, and the United States isn’t one.
Last week was a bit of an event whirlwind. In the space of seven days I was at Indie Web Camp, Beyond Tellerrand, and Accessibility Club in Düsseldorf, followed by a train ride to Utrecht for Frontend United. Phew!
Indie Web Camp Düsseldorf was—as always—excellent. Once again, Sipgate generously gave us the use of their lovely, lovely space for the weekend. We had one day of really thought-provoking discussions, followed by a day of heads-down hacking and making.
I decided it was time for me to finally own my replies. For a while now, I’ve been posting notes on my own site and syndicating to Twitter. But whenever I replied to someone else’s tweet, I did so from Twitter. I wanted to change that.
From a coding point of view, it wasn’t all that tricky. The real challenges were to do with the interface. I needed to add another field for the URL I’m replying to …but I didn’t want my nice and minimal posting interface to get too cluttered. I ended up putting the new form field inside a details element with a summary of “Reply to” so that the form field would be hidden by default, and toggled open by hitting that “Reply to” text:
<details>
<summary>
<label for="replyto">Reply to</label>
</summary>
<input type="url" id="replyto" name="replyto">
</details>
I sent my first test reply to a post on Aaron’s website. Aaron was sitting next to me at the time.
Once that was all working, I sent my first reply to a tweet. It was a response to a tweet from Tantek. Tantek was also sitting next to me at the time.
I spent most of the day getting that Twitter syndication to work. I had something to demo, but I foolishly decided to risk it all by attempting to create a bookmarklet so that I could post directly from a tweet page (instead of hopping back to my own site in a different tab). By cannibalising the existing bookmarklet I use for posting links, I just about managed to get it working in time for the end of day demos.
So I’m owning my replies now. At the moment, they show up in my home page feed just like any other notes I post. I’m not sure if I’ll keep it that way. They don’t make much sense out of context.
Then again, I kind of like how wonderfully random and out-of-context they look. You can browse through all my replies so far.
I’m glad I got this set up. Now when Andy posts stuff on Twitter, I’m custodian of my responses:
@AndyBudd: Who are your current “Design Heroes”?
adactio.com: I would say Falcor from Neverending Story, the big flying dog.
Martin gives a personal history of his time at the two CERN hack projects …and also provides a short history of the universe.
This is the lovely little film about our WorldWideWeb hack project. It was shown yesterday at CERN during the Web@30 celebrations. That was quite a special moment.
Remy’s keeping a list of hyperlinks to stories covering our recent hack week at CERN.
Here’s the source code for the WorldWideWeb project we did at CERN.
This is a lovely write-up of the WorldWideWeb hack week at CERN:
The Web is a success story in open standards, natural and by-design progressive enhancement, and the future-proof archivability of human-readable code.
Recreating the original WorldWideWeb browser was an exercise in digital archeology. With a working NeXT machine in the room, Kimberly was able to examine the source code for the first ever browser and discover a treasure trove within. Like this gem in HTUtils.h:
#define TCP_PORT 80 /* Allocated to http by Jon Postel/ISI 24-Jan-92 */
Sure enough, by June of 1992 port 80 was documented as being officially assigned to the World Wide Web (Gopher got port 70). Jean-François Groff—who worked on the World Wide Web project with Tim Berners-Lee—told us that this was a moment they were very pleased about. It felt like this project of theirs was going places.
Jean-François also told us that the WorldWideWeb browser/editor was kind of like an advanced prototype. The idea was to get something up and running as quickly as possible. Well, the NeXT operating system had a very robust Text Object, so the path of least resistance for Tim Berners-Lee was to take the existing word-processing software and build a hypertext component on top of it. Likewise, instead of creating a brand new format, he used the existing SGML format and added one new piece: linking with A tags.
So the WorldWideWeb application was kind of like a word processor and document viewer mashed up with hypertext. Ted Nelson complains to this day that the original sin of the web was that it borrowed this page-based metaphor. But Nelson’s Project Xanadu, originally proposed in 1974, wouldn’t become a working reality until 2014—a gap of forty years. Whereas Tim Berners-Lee proposed his system in March 1989 and had working code within a year. There’s something to be said for being pragmatic and working with what you’ve got.
The web was also a mashup of ideas. Hypertext existed long before the web—Ted Nelson coined the term in 1963. There were conferences and academic discussions devoted to hypertext and hypermedia. But almost all the existing hypertext systems—including Tim Berners-Lee’s own ENQUIRE system from the early 80s—were confined to a local machine. Meanwhile networked computers were changing everything. First there was the ARPANET, then the internet. Tim Berners-Lee’s ambitious plan was to mash up hypertext with networks.
Going into our recreation of WorldWideWeb at CERN, I knew I wanted to convey this historical context somehow.
The World Wide Web officially celebrates its 30th birthday in March of this year. It’s kind of an arbitrary date: it’s the anniversary of the publication of Information Management: A Proposal. Perhaps a more accurate date would be the day the first website—and first web server—went online. But still. Let’s roll with this date of March 12, 1989. I thought it would be interesting not only to look at what’s happened between 1989 and 2019, but also to look at what happened between 1959 and 1989.
So now I’ve got two time cones that converge in the middle: 1959 – 1989 and 1989 – 2019. For the first time period, I made categories of influences: formats, hypertext, networks, and computing. For the second time period, I catalogued notable results: browsers, servers, and the evolution of HTML.
I did a little bit of sketching and quickly realised that these converging timelines could be represented somewhat like particle collisions. Once I had that idea in my head, I knew how I would be spending my time during the hack week.
Rather than jumping straight into the collider visualisation, I took some time to make a solid foundation to build on. I wanted to be sure that the timeline itself would be understandable even if it were, say, viewed in the first ever web browser.
I marked up each timeline as an ordered list of h-events:
<li class="h-event y1968">
<a href="https://en.wikipedia.org/wiki/NLS_%28computer_system%29" class="u-url">
<time class="dt-start" datetime="1968-12-09">1968</time>
<abbr class="p-name" title="oN-Line System">NLS</abbr>
</a>
</li>
With the markup in place, I could concentrate on making it look halfway decent. For small screens, the layout is very basic—just a series of lists. When the screen gets wide enough, I lay those lists out horizontally one on top of the other. In this view, you can more easily see when events coincide. For example, ENQUIRE, Usenet, and Smalltalk all happen in 1980. But the real beauty comes when the screen is wide enough to display everything at once. You can see an explosion of activity in the early 90s. In 1994 alone, we get the release of Netscape Navigator, the creation of HTTPS, and the launch of Amazon.com.
The whole thing is powered by CSS transforms and positioning. Each year on a timeline has its own class that gets moved to the correct chronological point using calc(). I wanted to use translateX() but I couldn’t get the maths to work for that, so I had to use plain ol’ left and right:
.y1968 {
left: calc((1968 - 1959) * (100%/30) - 5em);
}
For events before 1989, it’s the distance of the event from 1959. For events after 1989, it’s the distance of the event from 2019:
.y2014 {
right: calc((2019 - 2014) * (100%/30) - 5em);
}
(Each h-event has a width of 5em so that’s where the extra bit at the end comes from.)
I had to do some tweaking for legibility: bunches of events happening around the same time period needed to be separated out so that they didn’t overlap too much.
As a finishing touch, I added a few little transitions when the page loaded so that the timeline fans out from its centre point.
I fiddled with the content a bit after peppering Robert Cailliau with questions over lunch. And I got some very valuable feedback from Jean-François. Some examples he provided:
1971: Unix man pages, one of the first instances of writing documents with a markup language that is interpreted live by a parser before being presented to the user.
1980: Usenet News, because it was THE everyday discussion medium by the time we created the web technology, and the Web first embraced news as a built-in information resource, then various platforms built on the web rendered it obsolete.
1982: Literary Machines, Ted Nelson’s book which was on our desk at all times
I really, really enjoyed building this “collider” timeline. It was a chance for me to smash together my excitement for web history with my enjoyment of using the raw materials of the web: HTML and CSS in this case.
The timeline pales in comparison to the achievement of the rest of the team in recreating the WorldWideWeb application but I was just glad to be able to contribute a little something to the project.
The US Mission to the UN in Geneva came by to visit us during our hackweek at CERN.
“Our hope is that over the next few days we are going to recreate the experience of what it would be like using that browser, but doing it in a way that anyone using a modern web browser can experience,” explains team member Jeremy Keith. The aim is to “give people the feeling of what it would have been like, in terms of how it looked, how it felt, the fonts, the rendering, the windows, how you navigated from link to link.”
Here’s the CERN write-up of our week of hacking to produce the recreation of the WorldWideWeb browser.