The past weekend’s Indie Web Camp Brighton was wonderful! Many thanks to Mark and Paul for all their work putting it together.
There was a great turn-out. It felt like the perfect time for an Indie Web Camp. There’s a real appetite for getting away from ever more extractive silos and staking claim to our own corners of the web. Most of the attendees were at their first ever Indie Web Camp.
Paul asked me to oversee the schedule planning on day one, which I was happy to do. We made sure that first-timers got first dibs on proposing sessions. In the end, every single session was proposed by new attendees.
Day two was all about putting ideas into practice: coding, designing, and writing on our own websites. I’m always blown away by how much gets done in just one short day. Best of all is when there’s someone who starts the weekend without their own website but finishes with a live site. That happened again this time.
I spent the second day tinkering with something I started at Indie Web Camp Nuremberg in October. Back then, I got related posts working here on my journal: a list of suggested follow-up posts to read based on the tags of the current post.
I wanted to do the same for my links: show links related to the one I’m currently linking to. It didn’t take too long to get that up and running.
But then I thought about it some more and realised it would be good to also show blog posts related to the link. So I did that. Then I realised it would be really good to show related links under blog posts too.
So now, if everything’s working correctly, then at the end of this post you will not only see related blog posts I’ve previously written, but also links related to the content of this post.
It was a very inspiring weekend. There’s something about being in a room with other people working on their websites that makes me super productive.
While we were hacking away on day two, somebody mentioned that they still find it hard to explain the indie web to people.
“It’s having your own website”, I said.
But surely there’s more to it than that, they wondered.
Nope. If someone has their own website, then they’re part of the indie web. It doesn’t matter if that website is made with a complicated home-rolled tech stack or if it’s a Squarespace site.
What you do with your own website is entirely up to you. The technologies are just plumbing, whether it’s webmentions, RSS, or anything else. None of it is a requirement. Heck, even HTML is optional. If you want to put plain text files on your website, go for it. It’s your website.
After two days at border:none in Nuremberg, it was time for two days at Indie Web Camp, also in Nuremberg.
I hadn’t been to an Indie Web Camp since before The Situation. It felt very good to be back. I had almost forgotten how inspiring and productive they can be.
This one had a good turnout of around twenty people. We had ourselves an excellent first day of thought-provoking sessions. Then on day two it was time to put some of those ideas into action.
A little trick I like to do on the practical day is to have two tasks to attempt: one of them quite simple, and the other more ambitious. That way, as long as I get the simpler task done, I’ll always have at least something to demo at the end of the day.
This time I attempted three bits of home improvement on my website.
Autolinking Mastodon usernames
The first problem I set myself was ostensibly the simple one. But it involved regular expressions, so then I had two problems.
I wanted to automatically link up Mastodon usernames if I mentioned one in my notes. For example, during border:none I mentioned Brian’s mastodon username in a note: @briansuda@loðfíll.is.
That turned out to be an excellent test case. Those Icelandic characters made sure I wasn’t making unwarranted assumptions about character sets.
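A minimal sketch of the kind of pattern involved, in PHP (an illustrative regular expression, not the exact one my site uses):

```php
<?php
// Hypothetical sketch: turn @user@instance mentions into links to the
// corresponding Mastodon profile. The \p{L} and \p{N} classes are the
// important bit: they match letters and digits beyond ASCII, so instance
// names with characters like ð and í still work.
function autolink_mastodon(string $text): string {
    return preg_replace_callback(
        '/@([\p{L}\p{N}_]+)@([\p{L}\p{N}.-]+\.\p{L}{2,})/u',
        function ($matches) {
            [, $username, $instance] = $matches;
            return '<a href="https://' . $instance . '/@' . $username . '">' .
                '@' . $username . '@' . $instance . '</a>';
        },
        $text
    );
}

echo autolink_mastodon('I mentioned @briansuda@loðfíll.is in a note.');
```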
My next task was a bit more ambitious. It involved SQL queries, something I’m slightly better at than regular expressions but that’s a very low bar.
I wanted to show related posts when you get to the end of one of my blog posts.
I’ve been tagging all my blog posts for years so that’s the mechanism I used for finding similar posts. There’s probably a clever SQL statement that could do this, but I ended up brute-forcing it a bit.
I don’t feel too bad about the hacky clunky nature of my solution, because I cache blog post pages. That means only the first person to view the blog post (usually me) will suffer any performance impacts from my clunky database queries. After that everything’s available straight from a cached file.
Let’s say you’re reading a blog post of mine that I’ve tagged with ten different keywords. I make a separate SQL query for each keyword to get all the other posts that use that tag. Then it’s a matter of sorting through all the results.
I loop through the results of each tag and apply a score to the tagged post. If the post shares one tag with the post you’re looking at, it has a score of one. If it shares two tags, it has a score of two, and so on.
I decided that for a post to be considered related, it had to share at least three tags. I also decided to limit the list of related posts to a maximum of five.
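Roughly speaking, the brute-force approach looks something like this (a sketch with made-up table and column names, not my actual code):

```php
<?php
// Hypothetical sketch of the scoring. Assume $db is a PDO connection,
// $post_id is the post being viewed, and $tags is its array of tags.
$scores = [];
foreach ($tags as $tag) {
    // One query per tag: find every other post that shares this tag.
    $statement = $db->prepare(
        'SELECT post_id FROM post_tags WHERE tag = ? AND post_id != ?'
    );
    $statement->execute([$tag, $post_id]);
    foreach ($statement->fetchAll(PDO::FETCH_COLUMN) as $other_id) {
        // Each shared tag adds one to that post's score.
        $scores[$other_id] = ($scores[$other_id] ?? 0) + 1;
    }
}
// At least three shared tags to count as related; show the five best.
$related = array_filter($scores, fn ($score) => $score >= 3);
arsort($related);
$related = array_slice($related, 0, 5, true);
```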
Right now I’m just doing this for my blog but I’d like to do it for my links too. A job for a future Indie Web Camp.
Link rot
I was very inspired by Remy’s recent post on how he’s tackling link rot on his site. I wanted to do the same for mine.
On the first day at Indie Web Camp I led a session on link rot to gather ideas and alternative approaches. We had a really good discussion, though it’s always worth bearing in mind that there’ll never be a perfect solution. There’ll always be some false positives and some false negatives.
It’s very similar to Remy’s but with one little addition. I check to see if the clicked link is inside an h-entry and if it is, I pass on the date from the post’s dt-published value.
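The client-side part is small. A sketch of the idea (the redirector path and the domain check here are my reconstruction, not the exact code):

```javascript
// When an outbound link is clicked, send it via the server-side redirector.
// If the link lives inside an h-entry, pass along the post's dt-published
// date so the redirector can ask for a snapshot from that time period.
document.addEventListener('click', (event) => {
  const link = event.target.closest('a[href^="http"]');
  if (!link || link.href.startsWith('https://adactio.com')) return;
  const entry = link.closest('.h-entry');
  const published = entry ? entry.querySelector('.dt-published') : null;
  const date = published ? published.getAttribute('datetime') : '';
  event.preventDefault();
  window.location = '/links/redirect?url=' + encodeURIComponent(link.href) +
    (date ? '&date=' + encodeURIComponent(date) : '');
});
```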
Here’s the PHP I wrote for the server-side redirector. The comments tell the story of what the code is doing:
Check that the request is coming from my site.
There also has to be a URL provided in the query string.
Make a very quick curl request to get the response headers from the URL. The time limit is set to 1 second.
If there was any error (like a time out), give up and go to the URL.
Pick the response headers apart to get the HTTP status code.
If the response is OK, go to the URL.
If the response is a redirect, go around again but this time use the redirect URL.
Construct the archive.org search endpoint.
If we have a date, provide it. Otherwise ask for the latest snapshot.
Ping that archive.org URL. This time there’s no time limit; this might take a while.
If there’s an archived copy, redirect to that.
There’s no archived copy. Give up and go to the URL anyway.
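Putting those steps together, here’s a sketch of what that redirector might look like. The variable names, the cURL options, and the use of the Wayback Availability API are my reconstruction rather than the exact code:

```php
<?php
// A sketch of the server-side redirector, following the steps above.
// Check that the request is coming from my site, and that a URL was provided.
$referrer = $_SERVER['HTTP_REFERER'] ?? '';
$url = $_GET['url'] ?? '';
if (strpos($referrer, 'adactio.com') === false || !$url) {
    exit();
}
$date = $_GET['date'] ?? '';

// Make a very quick request for the response headers only, following any
// redirects along the way. The time limit is one second.
$curl = curl_init($url);
curl_setopt($curl, CURLOPT_NOBODY, true);
curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl, CURLOPT_TIMEOUT, 1);
curl_exec($curl);
$status = curl_getinfo($curl, CURLINFO_HTTP_CODE);
$failed = curl_errno($curl);
curl_close($curl);

// If there was any error (like a timeout), or if the response is OK,
// go to the URL.
if ($failed || ($status >= 200 && $status < 400)) {
    header('Location: ' . $url);
    exit();
}

// Construct the archive.org endpoint: ask for a snapshot near the given
// date if we have one, or the latest snapshot if we don't.
// No time limit this time; this might take a while.
$endpoint = 'https://archive.org/wayback/available?url=' . urlencode($url);
if ($date) {
    $endpoint .= '&timestamp=' . date('Ymd', strtotime($date));
}
$response = json_decode(file_get_contents($endpoint), true);
$snapshot = $response['archived_snapshots']['closest']['url'] ?? '';

// If there's an archived copy, redirect to that.
// Otherwise give up and go to the URL anyway.
header('Location: ' . ($snapshot ?: $url));
exit();
```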
Not perfect by any means, but it works for the most common cases of link rot.
For the demo at the end of the day I went back into my archive of over 10,000 links and plucked out some old posts, like this one from December 2005. It takes a little while to do the rerouting but eventually you get to see the archived version from the same time period as when I linked to it.
Here’s another link from 2005. Here’s another. Those links are broken now, but with a little patience, you’ll still get to read them on the Internet Archive.
The Internet Archive’s Wayback Machine really is a gift. I can’t imagine how it would be even remotely possible to try to address link rot on my site without archive.org.
Indie Web Camp Düsseldorf was—as always—excellent. Once again, Sipgate generously gave us the use of their lovely, lovely space for the weekend. We had one day of really thought-provoking discussions, followed by a day of heads-down hacking and making.
I decided it was time for me to finally own my replies. For a while now, I’ve been posting notes on my own site and syndicating to Twitter. But whenever I replied to someone else’s tweet, I did it from Twitter. I wanted to change that.
From a coding point of view, it wasn’t all that tricky. The real challenges were to do with the interface. I needed to add another field for the URL I’m replying to …but I didn’t want my nice and minimal posting interface to get too cluttered. I ended up putting the new form field inside a details element with a summary of “Reply to” so that the form field would be hidden by default, and toggled open by hitting that “Reply to” text:
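```html
<!-- A simplified sketch of the idea, not my exact posting interface -->
<details>
  <summary>Reply to</summary>
  <label for="in-reply-to">URL of the tweet I’m replying to</label>
  <input type="url" id="in-reply-to" name="in-reply-to">
</details>
```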
I spent most of the day getting that Twitter syndication to work. I had something to demo, but I foolishly decided to risk it all by attempting to create a bookmarklet so that I could post directly from a tweet page (instead of hopping back to my own site in a different tab). By cannibalising the existing bookmarklet I use for posting links, I just about managed to get it working in time for the end of day demos.
So I’m owning my replies now. At the moment, they show up in my home page feed just like any other notes I post. I’m not sure if I’ll keep it that way. They don’t make much sense out of context.
Then again, I kind of like how wonderfully random and out-of-context they look. You can browse through all my replies so far.
I’m glad I got this set up. Now when Andy posts stuff on Twitter, I’m custodian of my responses:
Jean-François also told us that the WorldWideWeb browser/editor was kind of like an advanced prototype. The idea was to get something up and running as quickly as possible. Well, the NeXT operating system had a very robust Text Object, so the path of least resistance for Tim Berners-Lee was to take the existing word-processing software and build a hypertext component on top of it. Likewise, instead of creating a brand new format, he used the existing SGML format and added one new piece: linking with A tags.
So the WorldWideWeb application was kind of like a word processor and document viewer mashed up with hypertext. Ted Nelson complains to this day that the original sin of the web was that it borrowed this page-based metaphor. But Nelson’s Project Xanadu, originally proposed in 1974, wouldn’t become a working reality until 2014—a gap of forty years. Whereas Tim Berners-Lee proposed his system in March 1989 and had working code within a year. There’s something to be said for being pragmatic and working with what you’ve got.
The web was also a mashup of ideas. Hypertext existed long before the web—Ted Nelson coined the term in 1963. There were conferences and academic discussions devoted to hypertext and hypermedia. But almost all the existing hypertext systems—including Tim Berners-Lee’s own ENQUIRE system from the early 80s—were confined to a local machine. Meanwhile networked computers were changing everything. First there was the ARPANET, then the internet. Tim Berners-Lee’s ambitious plan was to mash up hypertext with networks.
The World Wide Web officially celebrates its 30th birthday in March of this year. It’s kind of an arbitrary date: it’s the anniversary of the publication of Information Management: A Proposal. Perhaps a more accurate date would be the day the first website—and first web server—went online. But still. Let’s roll with this date of March 12, 1989. I thought it would be interesting not only to look at what’s happened between 1989 and 2019, but also to look at what happened between 1959 and 1989.
So now I’ve got two time cones that converge in the middle: 1959 – 1989 and 1989 – 2019. For the first time period, I made categories of influences: formats, hypertext, networks, and computing. For the second time period, I catalogued notable results: browsers, servers, and the evolution of HTML.
I did a little bit of sketching and quickly realised that these converging timelines could be represented somewhat like particle collisions. Once I had that idea in my head, I knew how I would be spending my time during the hack week.
Rather than jumping straight into the collider visualisation, I took some time to make a solid foundation to build on. I wanted to be sure that the timeline itself would be understandable even if it were, say, viewed in the first ever web browser.
I marked up each timeline as an ordered list of h-events:
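```html
<!-- A simplified sketch; the real markup has more classes and detail -->
<ol>
  <li class="h-event">
    <time class="dt-start" datetime="1989">1989</time>
    <span class="p-name">Information Management: A Proposal</span>
  </li>
  <li class="h-event">
    <time class="dt-start" datetime="1990">1990</time>
    <span class="p-name">WorldWideWeb, the first browser/editor</span>
  </li>
</ol>
```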
With the markup in place, I could concentrate on making it look halfway decent. For small screens, the layout is very basic—just a series of lists. When the screen gets wide enough, I lay those lists out horizontally, one on top of the other. In this view, you can more easily see when events coincide. For example, ENQUIRE, Usenet, and Smalltalk all happen in 1980. But the real beauty comes when the screen is wide enough to display everything at once. You can see the explosion of activity in the early 90s. In 1994 alone, we get the release of Netscape Navigator, the creation of HTTPS, and the launch of Amazon.com.
The whole thing is powered by CSS transforms and positioning. Each year on a timeline has its own class that gets moved to the correct chronological point using calc(). I wanted to use translateX() but I couldn’t get the maths to work for that, so I had to use plain ol’ left and right.
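The positioning rules look something like this (illustrative years and class names; the real stylesheet has a rule for every year on each timeline):

```css
/* Move each year to its proportional position along a thirty-year span,
   leaving room for the width of the h-event itself. */
.year-1980 {
  left: calc((1980 - 1959) / (1989 - 1959) * (100% - 5em));
}
.year-1994 {
  left: calc((1994 - 1989) / (2019 - 1989) * (100% - 5em));
}
```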
(Each h-event has a width of 5em so that’s where the extra bit at the end comes from.)
I had to do some tweaking for legibility: bunches of events happening around the same time period needed to be separated out so that they didn’t overlap too much.
As a finishing touch, I added a few little transitions when the page loaded so that the timeline fans out from its centre point.
1971: Unix man pages, one of the first instances of writing documents with a markup language that is interpreted live by a parser before being presented to the user.
1980: Usenet News, because it was THE everyday discussion medium by the time we created the web technology, and the Web first embraced news as a built-in information resource, then various platforms built on the web rendered it obsolete.
1982: Literary Machines, Ted Nelson’s book which was on our desk at all times.
I really, really enjoyed building this “collider” timeline. It was a chance for me to smash together my excitement for web history with my enjoyment of using the raw materials of the web: HTML and CSS in this case.
The timeline pales in comparison to the achievement of the rest of the team in recreating the WorldWideWeb application but I was just glad to be able to contribute a little something to the project.
Nine people came together at CERN for five days and made something amazing. I still can’t quite believe it.
Coming into this, I thought it was hugely ambitious to try to not only recreate the experience of using the first ever web browser (called WorldWideWeb, later Nexus), but to also try to document the historical context of the time. Now that it’s all done, I’m somewhat astounded that we managed to achieve both.
Behold! A simulation of using the first ever web browser, recreated inside your web browser.
Now you could try clicking around on the links on the opening document—remembering that you need to double-click on links to activate them—but you’ll quickly find that most of them don’t work. They’re long gone. So it’s probably going to be more fun to open a new page to use as your starting point. Here’s how you do that:
Select Document from the menu options on the left.
A new menu will pop open. Select Open from full document reference.
Type a URL, like, say https://adactio.com
Press that lovely chunky Open button.
You are now surfing the web through a decades-old interface. Double click on a link to open it. You’ll notice that it opens in a new window. You’ll also notice that there’s no way of seeing the current URL. Back then, the idea was that you would navigate primarily by clicking on links, creating your own “associative trails”, as first envisioned by Vannevar Bush.
But the WorldWideWeb application wasn’t just a browser. It was a Hypermedia Browser/Editor.
From that Document menu you opened, select New file…
Type the name of your file; something like test.html
Start editing the heading and the text.
In the main WorldWideWeb menu, select Links.
Now focus the window with the document you opened earlier (adactio.com).
With that window’s title bar in focus, choose Mark all from the Links menu.
Go back to your test.html document, and highlight a piece of text.
With that text highlighted, click on Link to marked from the Links menu.
If you want, you can even save the hypertext document you created. Under the Document menu there’s an option to Save a copy offline (this is the one place where the wording of the menu item isn’t exactly what was in the original WorldWideWeb application). Save the file so you can open it up in a text editor and see what the markup would’ve looked like.
I don’t know about you, but I find this utterly immersive and fascinating. Imagine what it must’ve been like to browse, create, and edit like this. Hypertext existed before the web, but it was confined to your local hard drive. Here, for the first time, you could create links across networks!
After five days time-travelling back thirty years, I have a new-found appreciation for what Tim Berners-Lee created. But equally, I’m in awe of what my friends created thirty years later.
Remy did all the JavaScript for the recreated browser …in just five days!
Kimberly was absolutely amazing, diving deep into the original source code of the application on the NeXT machine we borrowed. She uncovered some real gems.
Of course Mark wanted to make sure the font was as accurate as possible. He and Brian went down quite a rabbit hole, and with remote help from David Jonathan Ross, they ended up recreating entire families of fonts.
John exhaustively documented UI patterns that Angela turned into marvelous HTML and CSS.
Just to be clear, the line-mode browser wasn’t the world’s first web browser. That honour goes to Tim Berners-Lee’s WorldWideWeb programme. But whereas WorldWideWeb only ran on NeXT machines, the line-mode browser worked cross-platform and was, therefore, instrumental in demonstrating the power of the web as a universally-accessible medium.
In the run-up to the 30th anniversary of the original (vague but exciting) proposal for what would become the World Wide Web, we’ve been invited back to try to recreate the experience of using that first web browser, the one that only ever ran on NeXT machines.
I missed the first day due to travel madness—flying back from Interaction 19 in Seattle during snowmageddon to Heathrow and then to Geneva—but by the time I arrived, my hackmates had already made a great start in identifying the objectives:
Give people an understanding of the user experience of the WorldWideWeb browser.
Demonstrate that a read/write philosophy was there from the beginning.
Give context—what was going on at the time?
That second point is crucial. WorldWideWeb wasn’t just a web browser; it was a browser/editor. That’s by far the biggest change in terms of the original vision of the web and what we ended up getting from Mosaic onwards.
Remy is working hard on the first point. He documented the first day and now on the second day, he’s made enormous progress already.
I’m focusing on point number three. I want to show the historical context for the World Wide Web. Here’s my plan…
Seeing as we’re coming up on the thirtieth anniversary, I thought it would be interesting to take the year of the proposal (1989) and look back in a time cone of thirty years previous to that at the influences on Tim Berners-Lee. I also want to look at what has happened with the web in the thirty years since the proposal. So the date of the proposal will be a centre point, with the timespan of 1959-1989 converging on it from the past, and the timespan of 1989-2019 diverging from it into the future. I hope it could make for a nice visualisation. Maybe I could try to get it to look like data from a particle collision.
We’re here till the weekend and everyone else has already made tremendous progress. Kimberly has been hacking the Gibson …well, that’s what it looked like when she was deep in the code of the NeXT machine we’ve borrowed from Musée Bolo (merci beaucoup!).
It was great—travelling through the land of Steinbeck and Guthrie at the speed of Kerouac and Springsteen. We stopped for the night at Pismo Beach and then continued on, rolling into Santa Monica at sunset.
The weekend was spent in the usual Indie Web Camp fashion: a day of BarCamp-style discussions, followed by a day of hacking on our personal websites.
After some hacking, I got it working. So now you can see combined archives by year, month, and day (I managed to add a sparkline to the month view as well):
I did face a bit of a conundrum. Both my home page stream and my tag pages show posts in reverse chronological order, with the newest posts at the top. I’ve decided to replicate that for the archive view, but I’m not sure if that’s the right decision. Maybe the list of years should begin with 2001 and end with 2016, instead of the other way around. And maybe when you’re looking at a month of posts, you should see the first posts in that month at the top.
Anyway, I’ll live with it in reverse chronological order for a while and see how it feels. I’m just glad I managed to get it done—I’ve been meaning to do it for quite a while. Once again, I’m amazed by how much gets accomplished when you’re in the same physical space as other helpful, motivated people all working on improving their indie web presence, little by little.
Indie Web Camp Brighton 2016 is done and dusted. It’s hard to believe that it’s already in its fifth(!) year. As with previous years, it was a lot of fun.
There was a design session looking at alternatives to simply presenting everything in a stream. Some great ideas came out of that. And there was a session all about bookmarking and linking. That one really got my brain whirring with ideas for the second day—the making/coding day.
I’ve learned from previous Indie Web Camps that a good strategy for the second day is to have two tasks to tackle: one that’s really easy (so you’ve at least got that to demo at the end), and one that’s more ambitious. This time, I put together a list of potential goals, and then ordered them by difficulty. By the end of the day, I managed to get a few of them done.
The code to do that was pretty straightforward. I needed to hit this endpoint:
http://web.archive.org/save/{url}
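In PHP, that can be as simple as firing off a request to that endpoint whenever a link gets posted. A sketch, assuming the URL of the page being bookmarked is in $url (the variable name is just for illustration):

```php
<?php
// Ask the Internet Archive to take a snapshot of whatever I've just linked to.
$url = 'https://example.com/some-interesting-page';
$curl = curl_init('http://web.archive.org/save/' . $url);
curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
curl_setopt($curl, CURLOPT_FOLLOWLOCATION, true);
curl_exec($curl);
curl_close($curl);
```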
I also updated my bookmarklet for posting links so that, if I’ve highlighted any text on the page I’m linking to, that text is automatically pasted in to the description.
I tweaked my webmentions a bit so that if I receive a webmention that has a type of bookmark-of, that is displayed differently to a comment, or a like, or a share. Here’s an example of Aaron bookmarking one of my articles.
The more ambitious plan was to create an over-arching /tags area for my site. I already have tag-based navigation for my journal and my links:
I didn’t get around to adding pagination. That’s something I should definitely add, because some of those pages get veeeeery long. But I did spend some time adding sparklines. They can be quite revealing, especially on topics that were hot ten years ago, but have faded over time, or topics that have become more and more popular with each year.
If you haven’t been to an Indie Web Camp before, it’s a very straightforward proposition. The idea is that you should have your own website. That’s it. Everything else is predicated on that. So while there’ll be plenty of discussions, demos, and designs, they’re all in service to that fundamental premise.
The first day of an Indie Web Camp is like a BarCamp. We make a schedule grid at the start of the day and people organise topics by room and time slot. It sounds chaotic. It is chaotic. But it works surprisingly well. The discussions can be about technologies, or interfaces, or ideas, or just about anything really.
The second day is for making. After the discussions from the previous day, most people will have a clear idea at this point for something they might want to do. It might involve adding some new technology to their website, or making some design changes, or helping build a tool. For people starting from scratch, this is the perfect time for them to build and launch a basic website.
At the end of the second day, everyone demos what they’ve done. I’m always amazed by how much people can accomplish in just one weekend. There’s something about having other people around to help you that makes it super productive.
You might be thinking “but I’m not a coder!” Don’t worry—there’ll be plenty of coders there so you can get their help on whatever you might decide to do. If you’re a designer, your skills will be in high demand by those coders. It’s that mish-mash of people that makes it such a fun gathering.
We’ve been doing a lot of soul-searching at Clearleft recently; examining our values; trying to make implicit unspoken assumptions explicit and spoken. That process has unearthed some activities that have been at the heart of our company from the very start—sharing, teaching, and nurturing. After all, Clearleft would never have been formed if it weren’t for the generosity of people out there on the web sharing with myself, Andy, and Richard.
One of the values/mottos/watchwords that’s emerging is “Share what you learn.” I like that a lot. It echoes the original slogan of the World Wide Web project, “Share what you know.” It’s been a driving force behind our writing, speaking, and events.
In the same spirit, we’ve been running internship programmes for many years now. John is the latest of a long line of alumni that includes Anna, Emil, and James.
By the way—and this should go without saying, but apparently it still needs to be said—the internships are always, always paid. I know that there are other industries where unpaid internships are the norm. I’ve even heard otherwise-intelligent people defend those unpaid internships for the experience they offer. But what kind of message does it send to someone about the worth of their work when you withhold payment for it? Our industry is young. Let’s not fall foul of the pernicious traps set by older industries that have habitualised exploitation.
So this year we decided to try a different approach by scouring the end of year degree shows for hot new talent. We found them not in the interaction courses as we’d expected, but from the worlds of Product Design, Digital Design and Robotics. We assembled a team of three interns, with a range of complementary skills, gave them a space on the mezzanine floor of our new building, and set them a high-level brief.
The first such programme resulted in Chüne. The latest Clearleft internship project has just come to an end. The result is Notice.
This time ‘round, the three young graduates were Chloe, Chris and Monika. They each have differing but complementary skill sets: Chloe is a user interface designer; Chris is a product designer; Monika is an artist who knows her way around hardware hacking and coding.
Once again, they were set a fairly loose brief. They should come up with something “to enrich the lives of local residents” and it should have a physical and digital component to it.
They got stuck in to researching and brainstorming ideas. At the end of each week, we’d all gather together to get a playback of what they were coming up with. It was at these playbacks that the interns were introduced to a concept that they will no doubt encounter again in their professional lives: seagulling AKA the swoop and poop. For once, it was the Clearlefties who were in the position of being swoop-and-poopers, rather than swoop-and-poopies.
As the midway point of the internship approached, there were some interesting ideas, but no clear “winner” to pursue. Something else was happening around this time too: dConstruct 2015.
The interns pitched in with helping out at the event, and in return, we kidnapped some of the speakers—namely John Willshire and Chris Noessel—to offer them some guidance.
There was also plenty of inspiration to be had from the dConstruct talks themselves. One talk in particular struck a chord: Dan Hill’s The City Of Things …especially the bit where he railed against the terrible state of planning application notices:
Most of the time, it ends up down the bottom of the lamppost—soiled and soggy and forgotten. This should be an amazing thing!
Hmm… sounds like something that could enrich the lives of local residents.
Not long after that, Matt Webb came to visit. He encouraged the interns to focus in on just the two ideas that really excited them rather than the 5 or 6 that they were considering. So at the next playback, they presented two potential projects—one about biking and the other about city planning. They put it to a vote and the second project won by a landslide.
That was the genesis of Notice. After that, they pulled out all the stops.
Not content with designing one device, they came up with a range of three devices to match the differing scope of planning applications. They set about making a working prototype of the device intended for the most common applications.
Last week marked the end of the project and the grand unveiling.
They’ve done a great job. All the details are on the website, including this little note I wrote about the project:
This internship programme was an experiment for Clearleft. We wanted to see what would happen if you put three talented young people in a room together for three months to work on a fairly loose brief. Crucially, we wanted to see work that wasn’t directly related to our day-to-day dealings with web design.
We offered feedback and advice, but we received so much more in return. Monika, Chloe, and Chris brought an energy and enthusiasm to the Clearleft office that was invigorating. And the quality of the work they produced together exceeded our wildest expectations.
We hereby declare this experiment a success!
Personally, I think the work they’ve produced is very strong indeed. It would be a shame for it to end now. Perhaps there’s a way that it could be funded for further development. Here’s hoping.
As impressed as I am with the work, I’m even more impressed with the people. They’re not just talented and hard-working—they’re a jolly nice bunch to have around.
It was fascinating at Indie Web Camp Germany to see how much could be accomplished by taking some pre-existing small things and loosely joining them.
For example, there are already webmention and micropub plug-ins for quite a few CMSs. If you’re using WordPress or Jekyll, you can get pretty far pretty quickly by making use of what people have already provided. And after that Indie Web Camp, you can add Drupal and Kirby to the list of CMSs with readily-available components.
I was somewhat surprised—and very pleased—that people made use of some little PHP snippets that I had posted as gists. I deliberately posted them as gists to show how minimal and barebones the code could be—no need for a whole project, or installers, or dockering the node to yeoman the gulp, or whatever it is the cool kids do these days.
This modular approach also worked well for interface elements. Glenn and Aaron worked on separate projects to create small JavaScript enhancements for posting interfaces. Assemble enough of these enhancements together and before you know it, you’ve got something approaching Medium.
By the end of the second day, I was amazed to see how much progress people had made. Like Johannes says:
I was pretty impressed by how much people got done. At the final demo session, everyone had something he or she had done to update their website – although I’m pretty sure that the end of this event will not be the end of their efforts to try and own their stuff online.
It was quite inspiring. In fact, I think I’ve been inspired to have an Indie Web Camp in Brighton. I’m thinking we could have it at the same time as Indie Web Camp Portland, which is on July 11th and 12th.
I made a little improvement to the links section of my site. Now every time I link to something, I check to see if it accepts webmentions and if it does, I ping it to let it know that I’ve linked to it.
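In outline, that means fetching the page I’m linking to, looking for an advertised webmention endpoint, and then POSTing the source and target URLs to it. A rough sketch of the idea (the real Webmention spec also allows the endpoint to be advertised in an HTTP Link header, which this naive version skips):

```php
<?php
// Hypothetical sketch: send a webmention for a link I've just posted.
// $source is my link post; $target is the page I'm linking to.
function send_webmention(string $source, string $target): void {
    // Look for a webmention endpoint in the target's markup.
    // (Naive: assumes the rel attribute comes before the href.)
    $html = file_get_contents($target);
    $pattern = '/<(?:link|a)\s[^>]*rel=["\'][^"\']*webmention[^"\']*["\']'
             . '[^>]*href=["\']([^"\']+)["\']/i';
    if (!preg_match($pattern, $html, $matches)) {
        return; // no endpoint advertised, nothing to ping
    }
    // Ping the endpoint with the source and target, form-encoded.
    $curl = curl_init($matches[1]);
    curl_setopt($curl, CURLOPT_POST, true);
    curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($curl, CURLOPT_POSTFIELDS, http_build_query([
        'source' => $source,
        'target' => $target,
    ]));
    curl_exec($curl);
    curl_close($curl);
}
```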
Every year at Clearleft, there’s a week where we step away from client work, go off the grid, and disappear into the countryside to work on something fun. We call it Hack Farm.
Hack Farm usually takes place around November, but due to various complexities, Hack Farm 2014 wound up getting pushed back to the start of 2015. Last week we formed a convoy, stocked up on the bare essentials (food, post-it notes, and booze), and drove west for four hours until we were in Herefordshire at a place called The Colloquy—a return to the site of the first ever Hack Farm.
I kept notes on each day.
Day Zero
We arrive in the late afternoon, settle into our respective rooms, and eat some wonderful home-cooked food. After dinner, even though everyone’s pretty knackered, we agree that it’s best to figure out what everyone will be working on for the next few days.
Everyone gets a chance to pitch their ideas, and then we all do some dot-voting to whittle down the options. In short order, we arrive at four different projects for four different teams.
One of my ideas is chosen. This is something I’ve been pitching every single year at Hack Farm, and every single year it ends up narrowly missing out. This year, it’s finally going to happen!
We choose a room to use as our home base and begin.
We start by agreeing on a hypothesis—more of an assumption, really—that we’ll be basing everything upon:
People are more likely to give blood if they are not alone.
We start writing down questions that people might ask related to giving blood. Some of these questions might well turn out to be out of scope for this project, or can already be better answered by an existing service like blood.co.uk e.g.:
Can I give blood?
How often can I give blood?
Will it hurt?
How long will it take?
Other questions are potentially open to us providing answers:
Where can I give blood?
When can I give blood?
Who else is giving blood?
That last one is a question that doesn’t seem to be answered anywhere else.
We brain-dump potential data sources that could answer the “who”, “when”, and “where” questions. The data from blood.co.uk could potentially answer the “when” and “where” questions, e.g. when and where is the next donation? Data from Twitter, Facebook, or your address book could answer the “who” questions, e.g. who are you, and who are your friends?
We brainstorm potential outputs of the project. The obvious choices are a website or a native app, but there could also potentially be email, SMS, or even posters and postcards.
We think about potential incentives for the users of this service: peer pressure, gamification, bragging rights, reassurance, etc.
So there’s a lot of divergent thinking going on: at this stage, there are no bad ideas (no, really!).
We also establish the goals of the project—what we would like to see happen as a result of this service existing. The very minimum success criteria is:
Someone gives blood who hasn’t given blood before.
There’s a follow-on criteria for measuring longer-term success:
A group gives blood regularly.
We split into two groups to work on a propositional statement, then come together to merge what we came up with. Here it is:
For people who want to give blood, who need encouragement and motivation, Blood Buddies brings together people you know to make it a shared experience. That way, you’re more likely to give blood.
Unlike blood.co.uk, it frames giving blood as a shared, rewarding activity.
Blood Buddies is a codename for now. The final service might have a different name, like Bluddies maybe.
After lunch, we start to work on user stories and personas. After a while, we think we’ve got a pretty clear idea for the minimal viable user journey.
Now we take a little break and stretch our legs.
When we regroup, we start researching technical possibilities (like Twitter authentication, GMail address book, Facebook contacts, etc.), while also throwing ideas around to do with branding, tone of voice, etc. James Box comes in and helps us out with a handy branding exercise.
In an effort to name the thing, we create a page filled with relevant words that might be combined into a name. Eventually we reach the “just fucking end it!” moment. The service is called “Blood Buddies” after all. The tagline is …drumroll… “Get plastered together!”
Meanwhile, having investigated the technical possibilities, it looks like Twitter’s API will be the easiest (relatively) to start with.
We write out our epics and create a little kanban board. We have our tasks figured out:
implement sign-in with Twitter,
create a style guide,
mock up the homepage,
mock up a sign-up form,
and more.
Tomorrow everyone can assign a task to themselves and get cracking (some people have started already).
Day Two
After a late Superbowl night, we arise and begin tackling the day’s tasks.
I managed to get a very rudimentary Twitter sign-in working (eventually!) so now my task is to do something with the data that Twitter is returning …namely, storing it in a database. And because this relies on signing in with Twitter to get any results, this needs to get on to an actual web server as soon as possible.
Cue a day of wrangling with PHP, MySQL, OAuth, Git, Apache, SSH keys, and DNS settings …with an intermittent internet connection that drops out at the most inconvenient of times.
Andy is storyboarding the promo video that will help sell the story of Blood Buddies.
Meanwhile James and Tessa are hammering out a visual language for Blood Buddies. So the work is being approached from two different ends: the server side (how it works behind the scenes) and the interface (how it looks to the end user). In the middle is the user flow, and that’s what Richard is working on, also looking ahead past the minimal viable product to include features that can be added later.
By late afternoon the most basic server-side functionality is done, and the site is live at bloodbuddies.co.uk. Of course, there’s very, very, very little to see there, but at least our team can start adding themselves to the database.
So now the task is to join up the back-end functionality with the visual design and copy. As these strands come together, it feels like we’re getting back to a more collaborative phase: whereas yesterday involved lots of group activity, today was more splintered. But that’s going to change now that we’re going to join up the individual pieces into a unified interface.
Today felt quite productive considering that three out of the five people on our team are on cooking duty.
Day Three
Today is a day of rest. It’s a beautiful day. We go for a drive through the countryside, pop into a pub for some grub, and go walking on the hills.
Day Four
We’re down to just three team members today. Tessa is working on a different project and Andy is spending the day sleeping, puking, and generally recovering from a heavy night. N00b.
We get cracking on with integrating the visual design with the back-end functionality. That means bashing out some CSS. After an hour or two, we’ve got something basic in place.
While James works on refining the visuals—including a kick-ass logo—Richard is writing lots and lots of copy, and figuring out user flows.
Meanwhile I’m trying to get server-side stuff in place, fiddling with DNS and email; not my favourite activity.
Once the DNS is pointed to the Digital Ocean server, and with the Twitter sign-in working okay, we realise that we’ve actually launched! Admittedly it’s very basic and it needs plenty of refinement, but it’s a start.
We head out for the evening meal together. Just one more day to go.
Day Five
James starts the day by finishing up his kick-ass Blood Buddies logo.
Richard is writing and editing lots of witty copy.
Andy is storyboarding a promotional video.
I’m trying to get emails working, so that when someone you know signs up to Blood Buddies, we can email you to let you know. By lunchtime, we’ve got it all working.
Lots of the details are in place now: the logo, web fonts, an error page, a favicon …it feels good to be iterating on a live site.
After lunch, James, Richard, and I work on expanding out the home page. Once everything is in pretty good shape, we all come together (with Andy and Tessa) to talk about what the next steps could be after this minimum viable product.
There’s consensus that the most important step would be adding more ways of signing into the site, instead of just Twitter. Also, there’s a lot of functionality we could add if we can scrape the data from blood.co.uk.
But that’s for another day. Right now we’ve got a barebones site, but it’s working.
Most pundits call it “the Internet of Things” but there’s another phrase from Andy Huntington that I first heard from Russell Davies: “the Geocities of Things.” I like that.
I’ve never had much exposure to this world of hacking electronics. I remember getting excited about the possibilities at a Brighton BarCamp back in 2008:
I now have my own little arduino kit, a bread board and a lucky bag of LEDs. Alas, I know next to nothing about basic electronics so I’m really going to have to brush up on this stuff.
I never did do any brushing up. But that all changed last week.
Seb is doing a new two-day workshop. He doesn’t call it Internet Of Things. He doesn’t call it Geocities Of Things. He calls it Stuff That Talks To The Interwebs, or STTTTI, or ST4I. He needed some guinea pigs to test his workshop material on, so Clearleft volunteered as tribute.
In short, it was great! And this time, I didn’t stop hacking when I got home.
First off, every workshop attendee gets a hand-picked box of goodies to play with and keep: an arduino mega, a wifi shield, sensors, screens, motors, lights, you name it. That’s the hardware side of things. There are also code samples and libraries that Seb has prepared in advance.
Now, remember, I lack even the most basic knowledge of electronics, but after two days of fiddling with this stuff, it started to click.
On the first workshop day, we all did the same exercises: connecting things up, getting them to talk to the internet, that kind of thing. For the second workshop day, Seb encouraged us to think about what we might each like to build.
I was quite taken with the ability of the piezo buzzer to play rudimentary music. I started to wonder if there was a way to hook it up to The Session and have it play the latest jigs, reels, and hornpipes that have been submitted to the site in ABC notation. A little bit of googling revealed that someone had already taken a stab at writing an ABC parser for arduino. I didn’t end up using that code, but it convinced me that what I was trying to do wasn’t crazy.
So I built a machine that plays Irish traditional music from the internet.
The hardware has a piezo buzzer, an “on” button, an “off” button, a knob for controlling the speed of the tune, and an obligatory LED.
The software has a countdown timer that polls a URL every minute or so. The URL is http://tune.adactio.com/. That in turn uses The Session’s read-only API to grab the latest tune activity and then get the ABC notation for whichever tune is at the top of that list. Then it does some cleaning up—removing some of the more advanced ABC stuff—and outputs a single line of notes to be played. I’m fudging things a bit: the device has the range of a tin whistle, and expects tunes to be in the key of D or G, but seeing as that’s at least 90% of Irish traditional music, it’s good enough.
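The cleaning-up step is mostly a matter of throwing away anything the buzzer can’t play. Something along these lines (illustrative patterns, not my exact code):

```php
<?php
// A sketch: strip the ABC header fields and the fancier notation
// (grace notes, chord symbols, ornaments) so that a simple string of
// notes is left for the buzzer to play.
function simplify_abc(string $abc): string {
    $lines = [];
    foreach (explode("\n", $abc) as $line) {
        // Skip header fields like X:, T:, M:, K: and so on.
        if (preg_match('/^[A-Za-z]:/', trim($line))) {
            continue;
        }
        $lines[] = trim($line);
    }
    $tune = implode(' ', $lines);
    $tune = preg_replace('/\{[^}]*\}/', '', $tune); // grace notes
    $tune = preg_replace('/"[^"]*"/', '', $tune);   // chord symbols
    $tune = str_replace('~', '', $tune);            // ornaments
    return trim($tune);
}
```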
Whenever there’s a new tune, it plays it. Or you can hit the satisfying “on” button to manually play back the latest tune (and yes, you can hit the equally satisfying “off” button to stop it). Being able to adjust the playback speed with a twiddly knob turns out to be particularly handy if you decide to learn the tune.
I added one more lo-fi modification. I rolled up a piece of paper and placed it over the piezo buzzer to amplify the sound. It works surprisingly well. It’s loud!
I’ll keep tinkering with it. It’s fun. I realise I’m coming to this whole hardware-hacking thing very late, but I get it now: it really does feel similar to that feeling you would get when you first figured out how to make a web page back in the days of Geocities. I’ve built something that’s completely pointless for most people, but has special meaning for me. It’s ugly, and it’s inefficient, but it works. And that’s a great feeling.
(P.S. Seb will be running his workshop again on the 3rd and 4th of February, and there will be a limited number of early-bird tickets available for one hour, between 11am and midday this Thursday. I highly recommend you grab one.)
This was the fifth Science Hack Day in San Francisco and the 40th worldwide. That’s truly incredible. I mean, I literally can’t believe it. When I organised the very first Science Hack Day back in 2010, I had no idea how far it would go. But Ariel has been indefatigable in making it a truly global event. She is amazing. And at this year’s San Francisco event, she outdid herself in putting together a fantastic cross-section of scientists, designers, and developers: paleontology, marine biology, geology, astronomy, particle physics, and many, many more disciplines were represented in the truly diverse attendees.
After an inspiring set of lightning talks on the first day, ideas started getting bounced around and the hacking began to take shape. I had a vague idea for—yet another—space-related hack. What clinched it was picking the brains of NASA’s Keri Bean. She’d help me get hold of the dataset I needed for my silly little hack.
I wanted to make that idea approachable, so I thought about the kinds of people we might want to have living with us on the interior shell of a rotating hollowed-out asteroid. How about the people you follow on Twitter?
The only question that remains then is: which asteroid is the right one for you and your Twitter friends? Keri tracked down the motherlode of asteroid data and I started hacking the simplest of mashups—Twitter meets space rocks.
Give it your Twitter username and it will tell you exactly which one of the asteroids in the main belt is right for you (I considered adding an enterprise option that would tell you where you could store your social network in the cloud …the Oort cloud, that is).
By default, your asteroid will have the population density of Earth, which is quite generous. But if you want a more sparsely-populated habitat—say, the population density of Australia—or a more densely-populated world—with something like the population density of Japan—then you will be assigned a larger or smaller asteroid accordingly.
You’ll also be told by how much you should increase or decrease the rotation of the asteroid to get one gee of centrifugal force on the interior. Figuring out the equations for calculating centrifugal force almost broke me, but luckily I had help from a rocket scientist and a particle physicist …I’m not even kidding. And I should point out that the calculations take some liberties—I’m assuming a spherical body, which is quite a stretch, given the lumpy nature of most asteroids.
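For what it’s worth, the underlying maths is small; this is my own summary of the physics rather than the code from the hack:

```php
<?php
// For one gee of apparent gravity on the inside of a spinning asteroid,
// the centripetal acceleration at the surface must equal g: ω²r = g.
// So the required angular velocity is ω = sqrt(g / r), and the rotation
// period is 2π / ω. (Assuming a spherical body, as noted above.)
function rotation_period_for_one_gee(float $radius_in_metres): float {
    $g = 9.81; // metres per second squared
    $omega = sqrt($g / $radius_in_metres);
    return 2 * M_PI / $omega; // seconds per rotation
}

// An asteroid with a 10km radius needs to rotate roughly every 200 seconds.
echo rotation_period_for_one_gee(10000);
```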
At 13:37 on the second day, the demos began. Keri and I were first up.
Give Habitasteroids a whirl for yourself. It’s a silly little thing, but I quite like how it turned out.
Speaking of silly things …at some point in the proceedings, Keri put the call out for asteroid data to her fellow space enthusiasts on Twitter. They responded with asteroid-related puns.
Indie Web Camp UK took place here in Brighton right after this year’s dConstruct. I was organising dConstruct. I was also organising Indie Web Camp. This was a problem.
It was a problem because I’m no good at multi-tasking, and I focused all my energy on dConstruct (it more or less dominated my time for the past few months). That meant that something had to give and that something was the organising of Indie Web Camp.
The event itself went perfectly smoothly. All the basics were there: a great venue, a solid internet connection, and a plan of action. But because I was so focused on dConstruct, I didn’t put any time into trying to get the word out about Indie Web Camp. Worse, I didn’t put any time into making sure that a diverse range of people knew about the event.
So in the end, Indie Web Camp UK 2014 was quite a homogenous gathering. That’s a real shame, and it’s my fault. My excuse is that I was busy with all things dConstruct, but that’s just that; an excuse. On the plus side, the effort I put into making dConstruct a diverse event paid off, but I’ll know better in future than to try to organise two back-to-back events. I need to learn to delegate and ask for help.
But I don’t want to cast Indie Web Camp in a totally negative light (I just want to acknowledge how it could have been better). It was actually pretty great. As with previous events, it was remarkably productive. The format of one day of talks, followed by one day of hacking is spot on.
I hadn’t planned to originally, but I spent the second day getting adactio.com switched over to https. Just a couple of weeks ago I wrote:
I’m looking forward to switching my website over to https:// but I’m not going to do it until the potential pain level drops.
Well, I’m afraid that potential pain level has not dropped. In fact, I can confirm that getting TLS working is a massive pain in the behind. But on the first day of Indie Web Camp, Tim Retout led a session on security and offered up his expertise for day two. I took full advantage of his generous offer.
With Tim’s help, I was able to get adactio.com all set. If I hadn’t had his help, it probably would’ve taken me days …or I simply would’ve given up. I took plenty of notes so I could document the process. I’ll write it up soon, but alas, it will only be useful to people with the same kind of hosting set up as I have.
By the end of Indie Web Camp, thanks to Tim’s patient assistance, quite a few people had switched on TLS for their sites. The https page on the Indie Web Camp wiki is turning into quite a handy resource.
Towards the end of each year, we Clearlefties head off to a remote location in the countryside for a week of hacking on non-client work. It’s all good unclean fun.
It started two years ago when we made Map Tales. Then last year we worked on the Politmus project. A few months back, it was the turn of Hackfarm 2013.
This time it was bigger than ever. Rather than having everyone working on one big project all week, it made more sense to split into smaller teams and work on a few different smaller projects. Ant has written a detailed description of what went down.
By the middle of the week, I found myself on a team with James, other James, Graham, and an Andy. We started working on something that Boxman has wanted for a while now: a simple little app for adding steps to a list of things to do.
Here’s what differentiates it from the many other to-do list apps out there: you start by telling it what time you want to be finished by. Then, after you’ve added all your steps, it tells you what time you need to get started. An example use case would be preparing a Sunday roast. You know all the steps involved, and you know what time you want to sit down to eat, so what time do you need to start your preparation?
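The core calculation is tiny: add up the durations of all the steps and subtract that from the finish time. A sketch of the idea, not the actual Tiny Planner code:

```javascript
// Given a finish time and a list of steps with durations in minutes,
// work out when you need to start.
function startTime(finishBy, steps) {
  const totalMinutes = steps.reduce((sum, step) => sum + step.minutes, 0);
  return new Date(finishBy.getTime() - totalMinutes * 60 * 1000);
}

const roastDinner = [
  { name: 'Prepare the vegetables', minutes: 20 },
  { name: 'Roast the chicken', minutes: 90 },
  { name: 'Make the gravy', minutes: 10 },
];
// Sit down to eat at 2pm? Then start cooking at noon.
console.log(startTime(new Date('2014-12-07T14:00:00'), roastDinner));
```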
We call it Tiny Planner. It’s not “done” in any meaningful sense of the word, and let’s face it, it probably never will be. What happens at hackdays, stays at hackdays …unfinished. Still, the code is public if anyone fancies doing something with it.
What made this project interesting from my perspective, was that it was one of them new-fangled single-page-app thingies. You know the kind: the ones that are made without progressive enhancement, and cease to exist in the absence of JavaScript. Exactly the kind of thing I would normally never work on, in other words.
It was …interesting. I thought it would be a good opportunity to evaluate all the various JS-or-it-doesn’t-happen frameworks like Angular, Ember, and Backbone. So I started reading the documentation. I guess I hadn’t realised quite how stupid I am, because I couldn’t make any headway. It was quite dispiriting. So I left Graham to do all the hard JavaScript work and concentrated on the CSS instead. So much for investigating new technologies.
Partly because the internet connection at Hackfarm was so bad, we decided to reduce the server dependencies as much as possible. In the end, we didn’t need any server at all. All the data is stored in the browser in local storage. A handy side-effect of that is that we could offline everything—this may be one of the few legitimate uses of appcache. Mind you, I never did get ‘round to actually adding the appcache component because, well, you know what it’s like with cache-invalidation and all that. (And like I said, the code’s public now so if it ever does get put into a presentable state, someone can add the offline stuff then.)
From a development perspective, it was an interesting experiment all ‘round; dabbling in client-side routing, client-side templating, client-side storage, client-side everything really. But it did feel …weird. There’s something uncanny about building something that doesn’t have proper URLs. It uses web technologies but it doesn’t really feel like it’s part of the web.
Anyway, feel free to play around with Tiny Planner, bearing in mind that it’s not a finished thing.
I should really put together a plan for finishing it. If only there were an app for that.
I wanted to build a visualisation based on Matt’s brilliant light cone idea, but I found it far too daunting to try to find data in a usable format and come up with a way of drawing a customisable geocentric starmap of our corner of the galaxy. So I put that idea on the back burner…
At this year’s San Francisco Science Hack Day, I came back to that idea. I wanted some kind of mashup that demonstrated the connection between the time that light has travelled from distant stars, and the events that would have been happening on this planet at that moment. So, for example, a star would be labelled with “the battle of Hastings” or “the sack of Rome” or “Columbus’s voyage to America”. To do that, I’d need two datasets; the distance of stars, and the dates of historical events (leaving aside any Gregorian/Julian fuzziness).
For want of a better hack, Chloe agreed to help me out. We set to work finding a good dataset of stellar objects. It turned out that a lot of the best datasets from NASA were either about our local solar neighbourhood, or else really distant galaxies and stars that are emitting prehistoric light.
The best dataset we could find was the Near Star Catalogue from Uranometria but the most distant star in that collection was only 70 or 80 light years away. That meant that we could only mash it up with historical events from the twentieth century. We figured we could maybe choose important scientific dates from the past 70 or 80 years, but to be honest, we really weren’t feeling it.
We had reached this impasse when it was time for the Science Hack Day planetarium show. It was terrific: we were treated to a panoramic tour of space, beginning with low Earth orbit and expanding all the way out to the cosmic microwave background radiation. At one point, the presenter outlined the reach of Earth’s radiosphere. That’s the distance that ionosphere-penetrating radio and television signals from Earth, travelling at the speed of light, have reached. “It extends about 70 light years out”, said the presenter.
This was perfect! That was exactly the dataset of stars that we had. It was a time for a pivot. Instead of the lofty goal of mapping historical events to the night sky, what if we tried to do something more trivial and fun? We could demonstrate how far classic television shows have travelled. Has Star Trek reached Altair? Is Sirius receiving I Love Lucy yet?
No, not TV shows …music! Now we were onto something. We would show how far the songs of planet Earth had travelled through space and which stars were currently receiving which hits.
Chloe remembered there being an API from Billboard, who have collected data on chart-topping songs since the 1940s. But that API appears to be gone, and the Echonest API doesn’t have chart dates. So instead, Chloe set to work screen-scraping Wikipedia for number one hits of the 40s, 50s, 60s, 70s …you get the picture. It was a lot of finding and replacing, but in the end we had a JSON file with every number one for the past 70 years.
Meanwhile, I was putting together the logic. Our list of stars had the distances in parsecs. So I needed to convert the date of a number one hit song into the number of parsecs that song had travelled, and then find the last star that it has passed.
We were tempted—for developer convenience—to just write all the logic in JavaScript, especially as our data was in JSON. But even though it was just a hack, I couldn’t bring myself to write something that relied on JavaScript to render the content. So I wrote some really crappy PHP instead.
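The conversion itself is small: light travels one light year per year, and a parsec is about 3.26 light years. A sketch of the idea rather than the hack’s actual code:

```php
<?php
// How far (in parsecs) has a song travelled since it topped the chart?
function parsecs_travelled(int $chart_year, int $current_year = 2014): float {
    $light_years = $current_year - $chart_year; // light covers 1 ly per year
    return $light_years / 3.26156;              // 1 parsec ≈ 3.26 light years
}

// A number one from 1964 has travelled about 15 parsecs by 2014.
echo parsecs_travelled(1964);
```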
By the end of the first day, the functionality was in place: you could enter a date, and find out what was number one on that date, and which star is just now receiving that song.
After the sleepover (more like a wakeover) in the aquarium, we started to style the interface. I say “we” …Chloe wrote the CSS while I made unhelpful remarks.
For the icing on the cake, Chloe used her previous experience with the Rdio API to add playback of short snippets of each song (when it’s available).
When I organised the first ever Science Hack Day in London in 2010, I made sure to write about how I organised the event. That’s because I wanted to encourage other people to organise their own Science Hack Days:
If I can do it, anyone can. And anyone should.
Later that year, Ariel organised a Science Hack Day in Palo Alto at the Institute For The Future. It was magnificent. Since then, Ariel has become a tireless champion and global instigator of Science Hack Day, spreading the idea, encouraging new events all over the world, and where possible, travelling to them. I just got the ball rolling—she has really run with it.
She organised another Science Hack Day in San Francisco for last weekend and I was lucky enough to attend—it coincided nicely with my travel plans to the States for An Event Apart in Austin. Once again, it was absolutely brilliant. There were tons of ingenious hacks, and the attendees were a wonderfully diverse bunch: some developers and designers, but also plenty of scientists and students, many (perhaps most) from out of town.
But best of all was the venue: The California Academy of Sciences. It’s a fantastic museum, and after 5pm—when the public left—we had the place to ourselves. Penguins, crocodiles, a rainforest, an aquarium …it’s got it all. I didn’t get a chance to do all of the activities that were provided—I was too busy hacking or helping out—like stargazing on the roof, or getting a tour of the archives. But I did make it to the private planetarium show, which was wonderful.
The Science Hackers spent the night, unrolling their sleeping bags in all the nooks and crannies of the aquarium and the African hall. It was like being a big kid. Mind you, the fun of sleeping over in such a great venue was somewhat tempered by the fact that trying to sleep in a sleeping bag on just a yoga mat on a hard floor is pretty uncomfortable. I was quite exhausted by day two of the event, but I powered through on the wave of infectious enthusiasm exhibited by all the attendees.
Then when it came time to demo all the hacks …well, I was blown away. So much cool stuff.
Ariel and her team really outdid themselves. I’m so happy I was able to make it to the event. If you get the chance to attend a Science Hack Day, take it. And if there isn’t one happening near you, why not organise one? Ariel has put together a handy checklist to get you started so you can get excited and make things with science.
I’m still quite amazed that this was the 24th Science Hack Day! When I organised the first one three years ago, I had no idea that it could spread so far, but thanks to Ariel, it has become a truly special phenomenon.
What’s that NEXTID element? Turns out it’s something specific to the NeXT operating system.
Why does the first iteration of HTML already contain H1 through to H6? It’s because they were lifted wholesale from a flavour of SGML—Standard Generalized Markup Language—that was already in use at CERN.
Oh, and Brian asked Robert Cailliau why they went with the term World Wide Web. “Well,” he said, “we had to call it something. And we thought we could always change it later.”
Then there was the story of the line-mode browser. It was created by Nicola Pellow, who was a student at CERN in 1990. She later worked on the Mac browser but her involvement with kickstarting the world wide web ended around 1993. She never showed up to any of the reunions.
We poked around in the (surprisingly short) source code of the line-mode browser. We found the lines that described how elements should be styled—the term “style sheet” appeared in a comment!
If you’ve fired up the line-mode browser simulator and run some websites through it, you’ll probably see occasions where a whole bunch of JavaScript—nestled between script tags in the head of the document—gets rendered to the screen.
We could’ve hidden that JavaScript, but we made a deliberate decision to display it. That’s what the line-mode browser would have done. The script element didn’t exist back then. Heck, JavaScript didn’t exist back then. So browsers would have handled the unknown element in the standard HTML way: ignore the opening and closing tags and just render what’s in-between them. That’s still the error-handling model for unrecognised elements in HTML.
This is why we used to write our JavaScript wrapped in HTML comments, something along these lines:
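```html
<script language="javascript" type="text/javascript">
<!--
// your JavaScript goes here
// -->
</script>
```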
The HTML comments stopped the JavaScript from being rendered to the screen in older browsers (like the line-mode browser). Using the opening HTML comment <!-- is functionally equivalent to // single-line comments in JavaScript …although you still need to prefix the closing --> comment with a //.
I remember doing this when I first started making websites in the 90s. You can see it if you view source on the first version of this website.
Later on, we all switched to XHTML so we updated the syntax to make it valid XML.
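The updated pattern looked something like this:

```html
<script type="text/javascript">
//<![CDATA[
// your JavaScript goes here
//]]>
</script>
```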
The <![CDATA[ part stops an XML parser from trying to parse the JavaScript. But HTML parsers would choke on that because it starts with an angle bracket. Hence the JavaScript-style // comment.
Anyway, we don’t bother with HTML or XHTML comments at the start of our script blocks anymore. And that’s why the line-mode browser simulator renders the JavaScript to the screen.
Note that the JavaScript isn’t executed. That’s thanks to a clever little hack by Remy: the line-mode browser simulator changes the type attribute of every script element to text/plain, effectively defusing them. Smart!
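One way to get that kind of defusing, though not necessarily how the simulator actually does it, is to rewrite the fetched markup before rendering it so that every opening script tag gets a text/plain type (an HTML parser keeps the first of any duplicated attribute, so this wins over an existing type):

```javascript
// A sketch of the idea, not Remy's actual implementation.
function defuseScripts(html) {
  return html.replace(/<script\b/gi, '<script type="text/plain"');
}

defuseScripts('<p>Hi</p><script>alert("boom");</script>');
// => '<p>Hi</p><script type="text/plain">alert("boom");</script>'
```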