Here’s Paul’s take on this year’s CSS Day. He’s not an easy man to please, but the event managed to impress even him.
As CSS Day celebrates its milestone anniversary, I was reminded how lucky we are to have events that bring together two constituent parties of the web: implementors and authors (with Sara Soueidan’s talk about the relationship between CSS and accessibility reminding us of the users we ultimately build for). My only complaint is that there are not more events like this: single track, tight subject focus (and amazing catering).
My stint as one of the hosts of CSS Day went very well indeed. I enjoyed myself and people seemed to like the cut of my jib.
During the event there was a real buzz on Mastodon, which was heartening to see. I was beginning to worry that hashtagging events was going to be collateral damage from Elongate, but there was plenty of conference-induced FOMO to be experienced on the fediverse.
The event itself was, as always, excellent. Both in terms of content and organisation.
Some themes emerged during CSS Day, which I always love to see. These emergent properties are partly down to curation and partly down to serendipity.
The last few years of CSS Day have felt like getting a firehose of astonishing new features being added to the language. There was still plenty of cutting-edge stuff this year—masonry! anchor positioning!—but there was also a feeling of consolidation, asking how to get all this amazing new stuff into our workflows.
Matthias’s opening talk on day one and Stephen’s closing talk on the same day complemented one another perfectly. Both managed to inspire while looking into the nitty-gritty practicalities of the web design process.
It was, astoundingly, Matthias’s first ever conference talk. I have no doubt it won’t be the last—it was great!
I gave Stephen a good-natured roast in my introduction, partly because it was his birthday, partly because we’re old friends, but mostly because it was enjoyable for me to watch him squirm. Of course his talk was, as always, superb. Don’t tell him, but he might be one of my favourite speakers.
The topic of graphic design tools came up more than once. It’s interesting to see how the issues with them have changed. It used to be that design tools—Photoshop, Sketch, Figma—were frustrating because they were writing cheques that CSS couldn’t cash. Now the frustration is the exact opposite. Our graphic design tools aren’t capable of the kind of fluid declarative design we can now accomplish in web browsers.
But the biggest rift remains not with tools or technologies, but with people and mindsets. Our tools can reinforce mindsets but the real divide happens in how different people approach CSS.
Both Josh and Kevin get to the heart of this in their tremendous tutorials, and that was reflected in their talks. They showed the difference between having the bare minimum understanding of CSS in order to get something done as quickly as possible, and truly understanding how CSS works in order to open up a world of possibilities.
For people in the first category, Sarah Dayan was there to sing the praises of utility-first CSS AKA atomic CSS. I commend her bravery!
During the Q&A, I restrained myself from being too Paxmanish. But I did have l’esprit d’escalier afterwards when I realised that the entire talk—and all the answers afterwards—depended on two mutually incompatible claims:
The great thing about atomic CSS is that it’s a constrained vocabulary so your team has to conform, and
The other great thing about it is that it’s utility-first, not utility-only, so you can break out of it and use regular CSS if you want.
Insert .gif of character from The Office looking to camera.
Most of the questions coming in during the Q&A reflected my own take: how about we use utility classes for some things, but not all things. Seems sensible.
Anyway, regardless of what I or anyone else thinks about the substance of what Sarah was saying, there was no denying that it was a great presentation. They were all great presentations. That’s unusual, and I say that as a conference organiser as well as an attendee. Everyone brings their A-game to CSS Day.
Mind you, it is exhausting. I say it every year, but it always feels like one talk too many. Not that any individual talk wasn’t good, but the sheer onslaught of deep dives into the innards of CSS has my brain exploding before the day is done.
A highlight for me was getting to introduce Fantasai’s talk on the design principles of CSS, which was right up my alley. I don’t think most people realise just how much we owe her for her years of work on standards. The web would be in a worse place without the Herculean work she’s done behind the scenes.
Another highlight was getting to see some of the students I met back in March. They were showing some of their excellent work during the breaks. I find what they’re doing just as inspiring as the speakers on stage.
In fact, when I was filling in the post-conference feedback form, there was a question: “Who would you like to see speak at CSS Day next year?” I was racking my brains because everyone I could immediately think of has already spoken at some point. So I wrote, “It would be great to see some of those students speaking about their work.”
I think it would be genuinely fascinating to get their perspective on what we consider modern CSS, which to them is just CSS.
Either way, I’ll be back next year for sure.
It’s funny, but usually when a conference is described as “inspiring” it’s because it’s tackling big galaxy-brain questions. But CSS Day is as nitty-gritty as it gets and I found it truly inspiring. Like, I couldn’t wait to open up my laptop and start writing some CSS. That kind of inspiring.
There’s a specific module his students take that’s right up my alley. They’re given a PDF inheritance-tax form and told to convert it for the web.
Yes, all the excitement of taxes combined with the thrilling world of web forms.
Seriously though, I genuinely get excited by the potential for progressive enhancement here. Sure, there’s the obvious approach of building in layers; HTML first, then CSS, then a sprinkling of JavaScript. But there’s also so much potential for enhancement within each layer.
Got your form fields marked up with the right input types? Great! Now what about autocomplete, inputmode, or pattern attributes?
Got your styles all looking good on the screen? Great! Now what about print styles?
Got form validation working? Great! Now how might you use local storage to save data locally?
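To make that last question concrete, here’s a minimal sketch of saving and restoring form data with local storage. The form id (tax-form) and the storage key are my own placeholders for illustration, not anything from the students’ actual brief.

// A minimal sketch of the local storage enhancement described above.
// The form id and storage key are placeholders for illustration.
const form = document.querySelector('#tax-form');
const storageKey = 'tax-form-draft';

// On page load, restore any values that were saved earlier.
const saved = JSON.parse(localStorage.getItem(storageKey) || '{}');
Object.entries(saved).forEach(([name, value]) => {
  const field = form.elements.namedItem(name);
  if (field) {
    field.value = value;
  }
});

// Whenever a field changes, save the current state of the form.
form.addEventListener('input', () => {
  const data = Object.fromEntries(new FormData(form));
  localStorage.setItem(storageKey, JSON.stringify(data));
});

Because this is layered on top of a working HTML form, anyone without JavaScript still gets the form itself; they just lose the convenience of the saved draft.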
As well as taking this practical module, most of the students were also taking a different module looking at creative uses of CSS, like making digital fireworks, or creating works of art with a single div. It was fascinating to see how the different students responded to the different tasks. Some people loved the creative coding and dreaded the progressive enhancement. For others it was exactly the opposite.
Alternating between production projects and prototyping projects can be quite fun, if a little disorienting. It’s almost like I have to flip a switch in my brain to change tracks.
Here’s something I noticed: the students love using :has() in CSS. That’s so great to see! Whereas I might think about how to do something for a few minutes before I think of reaching for :has(), they’ve got it front of mind. I’m jealous!
In general, their challenges weren’t with the vocabulary or syntax of HTML, CSS, and JavaScript. The more universal problem was project management. Where to start? What order to do things in? How long to spend on different tasks?
If you can get good at dealing with those questions and not getting overwhelmed, then the specifics of the actual coding will be easier to handle.
This was particularly apparent when it came to JavaScript, the layer of the web stack that was scariest for many of the students.
I encouraged them to break their JavaScript enhancements into two tasks: what you want to do, and how you then execute that.
Start by writing out the logic of your script not in JavaScript, but in whatever language you’re most comfortable with: English, Dutch, whatever. In the course of writing this down, you’ll discover and solve some logical issues. You can also run your plain-language plan past a peer to sense-check it.
It’s only then that you move on to translating your logic into JavaScript. Under each line of English or Dutch, write the corresponding JavaScript. You might as well put // in front of the plain-language sentence while you’re at it to make it a comment—now you’ve got documentation baked in.
You’ll still run into problems at this point, but they’ll be the manageable problems of syntax and typos.
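As a small illustration of what that two-step approach might produce, here’s a sketch where the plain-language plan survives as comments and each sentence gets its JavaScript underneath. The confirm-before-clearing enhancement and the #clear button are invented for the example, not taken from the students’ projects.

// Find the button that clears the form.
const clearButton = document.querySelector('#clear');
// When someone presses it...
clearButton.addEventListener('click', (event) => {
  // ...don't let the browser do its default thing...
  event.preventDefault();
  // ...ask the person whether they really meant it...
  const sure = window.confirm('Clear everything you have entered?');
  // ...and only empty the form if they say yes.
  if (sure) {
    document.querySelector('form').reset();
  }
});

The documentation is baked in from the start, exactly as described above.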
So in the end, it wasn’t my knowledge of specific HTML, CSS, or JavaScript APIs that proved most useful to pass on to the students. It was advice like that around how to approach HTML, CSS, or JavaScript.
I also learned a lot during my time at the school. I had some very inspiring conversations with the web developers of tomorrow. And I was really impressed by how much the students got done just in the three days I was hanging around.
Every week of June sees me at a web event, but in a different capacity each time.
At the end of the first full week in June, I went to CSS Day in Amsterdam as an attendee. It was thought-provoking, as always. And it was great to catch up with my front-of-the-front-end friends.
Last week I went to Pixel Pioneers in Bristol as a speaker. Fortunately I was on first so I was able to get the speaking done with and enjoy the rest of the talks. It was a lovely little event and there was yet more catching up with old friends and making new ones.
This week is the big one. UX London is happening this week. This time I’m not there as an attendee or a speaker. I’m there as the curator and host.
On the one hand, I’m a bag of nerves. I’ve been preparing for this all year and now it’s finally happening. I keep thinking of all the things that could possibly go wrong.
On the other hand, I’m ridiculously excited. I know I should probably express some modesty, but looking at the line-up I’ve assembled, I feel an enormous sense of pride. I’m genuinely thrilled at the prospect of all those great talks and workshops.
Nervous and excited. Those are the two wolves inside me right now.
If you’re going to be at UX London, I hope that you’re equally excited (and not nervous). There are actually still some last-minute tickets available if you haven’t managed to get one yet.
I was in Amsterdam two weeks ago for CSS Day. It was glorious!
I mean, even without the conference it was just so nice to travel somewhere—by direct train, no less!—and spend some time in a beautiful European city enjoying the good weather.
And of course the conference was great too. I’ve been to CSS Day many times. I love it although technically it should be CSS days now—the conference runs for two days.
It’s an event that really treats CSS for what it is—a powerful language worthy of respect. Also, it has bitterballen.
This time I wasn’t just there as an attendee. I also had the pleasure of opening up the show. I gave a talk called In And Out Of Style, a look at the history—and alternative histories—of CSS.
I really, really enjoyed giving this talk. It was so nice to be speaking to a room—or in this case, a church—with real people. I’m done giving talks to my screen. It’s just not the same. Giving this talk made me realise how much I need that feedback from the crowd—the laughs, the nods, maybe even the occasional lightbulb appearing over someone’s head.
As usual, my talk was broad and philosophical in nature. Big-picture pretentious talks are kind of my thing. In this case, I knew that I could safely brush over the details of all the exciting new CSS stuff I mentioned because other talks would be diving deep. And boy, did they ever dive deep!
It’s a cliché to use the adjective “inspiring” to describe a conference, but given all that’s happening in the world of CSS right now, it was almost inevitable that CSS Day would be very inspiring indeed this year. Cascade layers, scoped styles, container queries, custom properties, colour spaces, animation and much much more.
If anything, it was almost too much. If I had one minor quibble with the event it would be that seven talks in a day felt like one talk too many to my poor brain (I think that Marc gets the format just right with Beyond Tellerrand—two days of six talks each). But what a great complaint to have—that there was a glut of great talks!
They’ve already announced the dates for next year’s CSS Day(s): June 8th and 9th, 2023. I strongly suspect that I’ll be there.
Thank you very much to ppk, Krijn, Martijn, and everyone involved in making this year’s CSS Day so good!
I’m very excited about speaking at CSS Day this year. My talk is called In And Out Of Style:
It’s an exciting time for CSS! It feels like new features are being added every day. And yet, through it all, CSS has managed to remain an accessible language for anyone making websites. Is this an inevitable part of the design of CSS? Or has CSS been formed by chance? Let’s take a look at the history—and some alternative histories—of the World Wide Web to better understand where we are today. And then, let’s cast our gaze to the future!
Technically, CSS Day won’t be the first outing for this talk but it will be the in-person debut. I had the chance to give the talk online last week at An Event Apart. Giving a talk online isn’t quite the same as speaking on stage, but I got enough feedback from the attendees that I’m feeling confident about giving the talk in Amsterdam. It went down well with the audience at An Event Apart.
If the description has you intrigued, come along to CSS Day to hear the talk in person. And if you like the subject matter, I’ve put together these links to go with the talk…
I don’t think I’m alone in this. I think that lots of people are yearning for some in-person contact after two years of online events. The good news is that there are some excellent in-person web conferences on the horizon.
Beyond Tellerrand is back in Düsseldorf on May 2nd and 3rd. Marc ran some of the best online events during lockdown with his Stay Curious cafés, but nothing beats the atmosphere of Beyond Tellerrand on its home turf.
If you can’t make it to Düsseldorf—I probably can’t because I’m getting my passport renewed right now—there’s All Day Hey in Leeds on May 5th. Harry has put a terrific line-up together for this one-day, very affordable event.
June is shaping up to be a good month for events too. First of all, there’s CSS Day in Amsterdam on June 9th and 10th. I really, really like this event. I’m not just saying that because I’m speaking at this year’s CSS Day. I just love the way that the conference treats CSS with respect. If you self-identify as a CSS person, then this is the opportunity to be with your people.
But again, if you can’t make it to Amsterdam, never fear. The Pixel Pioneers conference returns to Bristol on June 10th. Another one-day event in the UK with a great line-up.
Finally, there’s the big one at the end of June. UX London runs from June 28th to June 30th:
Bringing the UX community back together
Yes, I’m biased because I’m curating the line-up but this is shaping up to be unmissable! It’s going to be so good to gather with our peers and get our brains filled by the finest of design minds.
This was a fun talk to put together. The first challenge was figuring out the right format for a two-person talk. It quickly became clear that Remy’s focus would be on the events of the five days we spent at CERN, whereas my focus would be on the history of computing, hypertext, and networks leading up to the creation of the web.
Now, we could’ve just done everything chronologically, but that would mean I’d do the first half of the talk and Remy would do the second half. That didn’t appeal. And it sounded kind of boring. So then we came up with the idea of interweaving the two timelines.
That worked remarkably well. The talk starts with me describing the creation of CERN in the 1950s. Then Remy talks about the first day of the hack week. I then talk about events in the 1960s. Remy talks about the second day at CERN. This continues until we join up about halfway through the talk: I’ve arrived at the moment that Tim Berners-Lee first published the proposal for the World Wide Web, and Remy has arrived at the point of having running code.
At this point, the presentation switches gears and turns into a demo. I do not have the fortitude to do a live demo, so this was all down to Remy. He did it flawlessly. I have so much respect for people brave enough to do live demos, and do them well.
But the talk doesn’t finish there. There’s a coda about our return to CERN a month after the initial hack week. This was an opportunity for both of us to close out the talk with our hopes and dreams for the World Wide Web.
I know I’m biased, but I thought the structure of the presentation worked really well: two interweaving timelines culminating in a demo and finishing with the big picture.
There was a forcing function on preparing this presentation: Remy was moving house, and I was already going to be away speaking at some other events. That limited the amount of time we could be in the same place to practice the talk. In the end, I think that might have helped us make the most of that time.
We were both feeling the pressure to tell this story well—it means so much to us. Personally, I found that presenting with Remy made me up my game. Like I said:
It’s been a real treat working with Remy on this. Don’t tell him I said this, but he’s kind of a web hero of mine, so this was a real honour and a privilege for me.
This talk could have easily turned into a boring slideshow of “what we did on our holidays”, but I think we managed to successfully avoid that trap. We’re both proud of this talk and we’d love to give it again some time. If you’d like it at your event, get in touch.
This talk about recreating the first ever web browser was a joint presentation with Remy Sharp, delivered at the Fronteers conference in Amsterdam in October 2019.
This sets a chain of events in motion that gives us elementary particles, then more complex particles like atoms, which form stars and planets, including our own, on which life evolves, which brings us to the recent past when this whole process results in the universe generating a way of looking at itself: physicists.
A physicist is the atom’s way of knowing about atoms.
—George Wald
By the end of World War Two, physicists in Europe were in short supply. If they hadn’t already fled during Hitler’s rise to power, they were now being actively wooed away to the United States.
64 years ago
To counteract this brain drain, a coalition of countries forms the European Organization for Nuclear Research, or to use its French acronym, CERN.
They get some land in a suburb of Geneva on the border between Switzerland and France, where they set about smashing particles together and recreating the conditions that existed at the birth of the universe.
Every year, CERN is host to thousands of scientists who come to run their experiments.
Remy
Fast forward to February 2019, a group of 9 of us were invited to CERN as an elite group of hackers to recreate a different experiment.
We are there to recreate a piece of software first published 30 years ago. Given this goal, we need to answer some important questions first:
How does this software look and feel?
How does it work?
How do you interact with it?
How does it behave?
The software is so old that it doesn’t run on any modern machines, so we have a NeXT machine specially shipped from the nearby museum. This is no ordinary machine. It was one of only two NeXT machines that existed at CERN in the late 80s.
Now we have the machine to run this special software.
By some fluke, the good people of the web have captured several different versions of this software and published them on GitHub.
So we select the oldest version we can find. We download it from GitHub to our computers. Now we have to transfer it to the NeXT machine.
Except there’s no USB drive. It didn’t exist. CD-ROM? Floppy drive? The NeXT computer had a “floptical drive”—bespoke to NeXT computers—all very well, but in 2019 we don’t have those drives.
To transfer the software from our machines to the NeXT machine, we needed to use the network.
Jeremy
62 years ago
In 1957, J.C.R. Licklider was the first person to publicly demonstrate the idea of time sharing: linking one computer to another.
56 years ago
Six years later, he expanded on the idea in a memo that described an Intergalactic Computer Network.
By this time, he was working at ARPA: the Department of Defense’s Advanced Research Projects Agency. They were very interested in the idea of linking computers together, for very practical reasons.
America’s military communications had a top-down command-and-control structure. That was a single point of failure. One pre-emptive strike and it’s game over.
The solution was to create a decentralised network of computers that used Paul Baran’s brilliant idea of packet switching to move information around the network without any central authority.
This idea led to the creation of the ARPANET. Initially it connected a few universities. The ARPANET grew until it wasn’t just computers at each endpoint; it was entire networks. It was turning into a network of networks …an internetwork, or internet, for short. In order for these networks to play nicely with one another, they needed to agree on using the same set of protocols for packet switching.
Bob Kahn and Vint Cerf crafted the simplest possible set of low-level protocols: the Transmission Control Protocol and the Internet Protocol. TCP/IP.
TCP/IP is deliberately dumb. It doesn’t care about the contents of the packets of data being passed around the internet. People were then free to create more task-specific protocols to sit on top of TCP/IP.
There are protocols specifically for email, for example. Gopher is another example of a bespoke protocol. And there’s the File Transfer Protocol, or FTP.
Remy
Back in our war room in 2019, we finally work out that we can use FTP to get the software across. FTP is an arcane protocol, but we can agree that it will work across the two eras.
Although we have to manually install FTP servers onto our machines. FTP doesn’t ship with new machines because it’s generally considered insecure.
Now we finally have the software installed on the NeXT computer and we’re able to run the application.
We double-click the shaded-looking, partly hand-drawn icon with a lightning bolt on it, and we wait…
Once the software’s finally running, we’re able to see that it looks a bit like an ancient word processor. We can read, edit and open documents. There are some basic styles and lots of heavy margins. There’s some super weird menu navigation in place.
But there’s something different about this software. Something that makes this more than just a word processor.
These documents, they have links…
Jeremy
Ted Nelson is fond of coining neologisms. You can thank him for words like “intertwingled” and “teledildonics”.
56 years ago
He also coined the word “hypertext” in 1963. It is defined by what it is not.
Hypertext is text which is not constrained to be linear.
Ever played a “choose your own adventure” book? That’s hypertext.
You can jump from one point in the book to a different point that has its own unique identifier.
The idea of hypertext predates the word. In 1945, Vannevar Bush published a visionary article in The Atlantic Monthly called As We May Think.
He imagines a mechanical device built into a desk that can summon reams of information stored on microfilm, allowing the user to create “associative trails” as they make connections between different concepts. He calls it the Memex.
Also in 1945, a young American named Douglas Engelbart has been drafted into the navy and is shipping out to the Pacific to fight against Japan. Literally as the ship is leaving the harbour, word comes through that the war is over. He still gets shipped out to the Philippines, but now he’s spending his time lounging in a hut reading magazines. That’s how he comes to read Vannevar Bush’s Memex article, which lodges in his brain.
51 years ago
Douglas Engelbart decides to dedicate his life to building the computer equivalent of the Memex.
On December 9th, 1968, he unveils his oNLine System—NLS—in a public demonstration. Not only does he have a working implementation of hypertext, he also shows collaborative real-time editing, windows, graphics, and oh yeah—for this demo, he invents the mouse.
It truly is The Mother of All Demos.
39 years ago
There were a number of other attempts at creating hypertext systems. In 1980, a young computer scientist named Tim Berners-Lee found himself working at CERN, where scientists were having a heck of a time just keeping track of information.
He created a system somewhat like Apple’s HyperCard, but with clickable links. He named it ENQUIRE, after a Victorian book of manners called Enquire Within Upon Everything.
ENQUIRE didn’t work out, but Tim Berners-Lee didn’t give up on the problem of managing information at CERN. He thinks about all the work done before: Vannevar Bush’s Memex; Ted Nelson’s Xanadu project; Douglas Engelbart’s oNLine System.
A lot of hypertext ideas really are similar to a choose-your-own-adventure: jumping around from point to point within a book. But what if, instead of imagining a hypertext book, we could have a hypertext library? Then you could jump from one point in a book to a different point in a different book in a completely different part of the library.
In other words, what if you took the world of hypertext and the world of networks, and you smashed them together?
30 years ago
On the 12th of March, 1989, Tim Berners-Lee circulates the first draft of a document titled Information Management: A Proposal.
The diagrams are incomprehensible. But his supervisor at CERN, Mike Sendall, sees the potential. He reads the proposal and scrawls these words across the top: “vague, but exciting.”
Tim Berners-Lee gets the go-ahead to spend some time on this project. And he gets the budget for a nice shiny NeXT machine. With the support of his colleague Robert Cailliau, Berners-Lee sets about making his theoretical project a reality. They kick around a few ideas for the name.
They thought of calling it The Mesh. They thought of calling it The Information Mine, but Tim rejected that, knowing that whatever they called it, the words would be abbreviated to letters, and The Information Mine would’ve seemed quite egotistical.
So, even though it’s only going to exist on one single computer to begin with, and even though the letters of the abbreviation take longer to say than the words being abbreviated, they call it …the World Wide Web.
As Robert Cailliau told us, they were thinking “Well, we can always change it later.”
Tim Berners-Lee brainstorms a new protocol for hypertext called the HyperText Transfer Protocol—HTTP.
He thinks about a format for hypertext called the Hypertext Markup Language—HTML.
He comes up with an addressing scheme that uses Unique Document Identifiers—UDIs, later renamed to URIs, and later renamed again to URLs.
But he needs to put it all together into running code. And so Tim Berners-Lee sets about writing a piece of software…
Remy
At this stage, 30 years ago, Tim Berners-Lee’s document is just a proposal. It’s just theory. So he needs to build a prototype to actually demonstrate how the World Wide Web would work.
The NeXT computer is the perfect ground for rapid software development because the NeXT operating system ships with a program called NSBuilder.
NSBuilder is software to build software. In fact, the “NS” (meaning NeXTSTEP) can be found in existing software today: you’ll find references to NSText in Safari and Mac developer documentation.
Tim Berners-Lee, using NSBuilder, was able to create a working prototype of this software in just six weeks.
He called it: WorldWideWeb.
We finally have the software working the way it ran 30 years ago.
But our project is to replicate this browser so that you can try it out, and see how web pages look through the lens of 1990.
But HTTPS doesn’t work. There was no HTTPS. There’s no HTTP/2. HTTP/1.0 hadn’t even been invented.
So I make a proxy. Effectively a monster-in-the-middle attack on all web requests, stripping the SSL layer and then returning the HTML over the HTTP/0.9 protocol.
And finally, we see…
We see junk.
We can see the text content of the website, but there are a lot of junk HTML tags being spat out onto the screen, particularly at the start of the document.
Jeremy
<h1> <h2> <h3> <h4> <h5> <h6>
<ol> <ul> <li> <p>
These tags are probably very familiar to you. You recognise this language, right?
That’s right. It’s SGML.
SGML is the successor to GML, which supposedly stands for Generalised Markup Language. But that may well be a backronym. The format was created by Goldfarb, Mosher, and Lorie: G, M, L.
SGML is supposed to be short for Standard Generalised Markup Language.
A flavour of SGML was already being used at CERN when Tim Berners-Lee was working on his World Wide Web project. Rather than create a whole new format from scratch, he repurposed what people were already familiar with. This was his HyperText Markup Language, HTML.
One thing he did add was a tag called A for anchor.
Its href attribute is short for “hypertext reference”. Plop a URL in there and you’ve got a link.
The hypertext community thought this was a terrible way to make links.
They believed that two-way linking was vital. With two-way linking, the linked resource connects back to where the link originates. So if the linked resource moved, the link would stay intact.
That’s not the case with the World Wide Web. If the linked resource moves, the link is broken.
Perhaps you’ve experienced broken links?
When Tim Berners-Lee wrote the code for his WorldWideWeb browser, there was a grand total of 26 tags in HTML. I know that we’d refer to them as elements today, but that term wasn’t being used back then.
Now there are well over 100 elements in HTML. The reason why the language has been able to expand so much is down to the way web browsers today treat unknown elements: ignore any opening and closing tags you don’t recognise and only render the text in between them.
Remy
The parsing algorithm was brittle (when compared to modern parsers). There’s no DOM tree being built up. Indeed, the DOM didn’t exist.
Remember that WorldWideWeb was a browser that effectively smooshed together a word processor and network requests; the styling method was based (mostly) around adding margins as the tags were parsed.
Kimberly Blessing was digging through the original 7344 lines of code for the WorldWideWeb source. She found the code that could explain why we were seeing junk.
<link rel="..."
In this case, when the parser encountered <link rel="…" it would see the <.
<
“Yes, a tag; let’s slurp it up”.
<li
Then it reads li and the parser is thinking, “This looks like a list item, good stuff.”
<lin
Then it encounters the n (of link) and, excusing the parsing algorithm because it was the first, aborts the style it was about to apply and promptly spits out the rest of the content on screen, having already swallowed up the first four characters: <lin.
k rel="stylesheet" href="...">
With that, we made the executive design decision to strip out any elements that were unknown to the original WorldWideWeb browser — link, script, video and img — since, of course, there was no image support in the world’s first browser.
This is the first little cheat we applied, so that the page would be more pleasing to you, the visitor of our emulator. Otherwise you’d be presented with a lot of scary looking junk.
So now we have all the reference material we need to be able to replicate this browser:
The machine running the original operating system, which gives us colours, fonts, menus and so on.
The browser itself, how windows behave, what’s in the menus, what makes the experience unique to that period of time.
And finally how it looks when we visit URLs.
So off we go.
Jeremy
While Remy set about recreating the functionality of the WorldWideWeb browser, Angela was recreating the user interface using CSS.
Inputs. Buttons. Icons. Menus. All with the exact borders, highlights and shadows used in the UI of the NeXT operating system, including having the scrollbar on the left side of windows.
Meanwhile the rest of us were putting together an explanatory website to give some backstory to what we were doing. I spent most of my time working on a timeline showing thirty years before and thirty years after the original proposal for the web.
The WorldWideWeb browser inherited fonts from the NeXTSTEP operating system. It mostly used Helvetica and a font called Ohlfs (created by Keith Ohlfs). Helvetica is ubiquitous but Ohlfs was never seen outside of a NeXT machine.
Our teammates Mark and Brian were obsessed with accurately recreating the typography. We couldn’t use modern fonts, which are vector-based. We needed pixeliness.
So Mark and Brian took a screenshot of the NeXT machine’s alphabet. With help from afar from font designer David Jonathan Ross, they traced each square pixel in a vector program and then exported that as a web font. Now we’ve got a web font that deliberately isn’t anti-aliased. It’s a vector format that recreates the look of a bitmap.
Put the pixelly font together with the CSS interface elements and you’ve got something that really looks like the old WorldWideWeb programme.
Remy
This is the final product of our work at CERN that week. A fully working WorldWideWeb emulator giving a reasonably close experience of what it was like to surf the web as if it were 30 years ago.
This is entirely in the browser and was written using:
React,
React Draggable for the windows and menus,
React Hotkeys for keyboard combo shortcuts (we replicated the original OS as much as we could),
idb-keyval for some local storage,
Parcel for bundling.
These tools weren’t chosen because they were necessarily the best tools for the job, but rather because they were the tools I knew well enough to help speed up my development process.
We worked hard to replicate the look and feel as much as we could. We even replicated typos found throughout the WorldWideWeb app:
An excercise in global information availability
Why don’t we see how it looks…
Jeremy
There’s a kind of irony in this, in that it relies heavily on JavaScript. In fact, there’s nothing there other than JavaScript. But of course the WorldWideWeb browser couldn’t deal with JavaScript—JavaScript hadn’t been invented yet. So the one URL that definitely wouldn’t work in this emulator is …the emulator itself.
We’ll go ahead and open the Fronteers website. I go to “Document” and then I go to “Open from full document reference” (because the word URL didn’t exist). I’m going to pop the Fronteers URL in here. And there it is. We’ve got the Fronteers website. Looks pretty good. (One of my favourite UI bits is this scrollbar on the left hand side instead of the right.)
We can follow the links. Actually one of my favourite features that was in this original browser that we replicated was this “Navigate” menu. I’ve just opened the first link in the document, but I can click on “Next”, and “Next” a bunch of times and it will cycle through each one of the links on the page that I launched from and let me read all the pages that the Fronteers site links to (which I really like). I can go backwards and forwards, and so on.
One thing you might have already noticed is that there are no URLs here. In fact, view source was considered a kind of diagnostic option and was very, very tucked away. The reason for this is that URLs—and the source HTML or SGML—were considered ugly and potentially a bad user experience.
But there’s one thing about navigating here that’s different. To open this link, I had to double-click.
Jeremy
The WorldWideWeb browser was more of a prototype than anything else. It demonstrated the potential of the World Wide Web project, but it only worked on NeXT machines.
To show how the World Wide Web could work on any computer, the second ever web browser was the Line Mode Browser, coded by Nicola Pellow. It had a very basic text interface—no clicking on links—but it could be installed anywhere.
Lots of other geeks and nerds were working on their own web browsers, but it was Marc Andreessen’s Mosaic browser that really blew the doors open for the web. It had a nice usable interface, and it (unilaterally) introduced the innovation of images on the web.
Andreessen went on to found Netscape. The World Wide Web took off at an unprecedented rate. Microsoft brought out their Internet Explorer browser and started trying to catch up with Netscape. We had the browser wars. Later we got even more browsers, like Safari and Chrome, while Netscape morphed into Firefox and Internet Explorer morphed into Edge. And the rest is history.
But all of these browsers were missing something that was in the original WorldWideWeb browser.
Remy
The reason I have to double-click on these links is that, when I do a single click, it actually places the cursor. The cursor is blinking there on “Fronteers.” And the reason I can place the cursor is because I can edit the document.
I see Fronteers here is missing a heading. We want to welcome you all:
Welkom
We want to make that a heading. Let’s style that. It’s a heading.
So the browser was meant to edit documents. Let’s put a bit of text here:
Great talks from Remy and Jeremy
(forget about everyone else). Now if I want to create a link, I’ll go ahead and navigate to Jeremy’s site, https://adactio.com. I’m going to do “Link”, then “Mark all”, which is a way of copying the URL to that window. Then I go back to the Fronteers website, select “Jeremy”, and then do “Link to marked.” I can double-click on Jeremy’s name and it will open up his website.
I can save this document as well. I’m going to call it fronteers.html.
Let’s do a hard reboot—a browser refresh. I come back to my machine a couple of days later: “Ah, the Fronteers page!” I’m going to open that again, and it linked to that really handsome guy in the stripy shirt. And yes, the links still work.
In fact, this documentation that you see when the WorldWideWeb browser launches was written, styled, and linked using the WorldWideWeb browser. The WorldWideWeb browser was for a web that you could read and write.
But this didn’t survive. It was a hurdle that was too tricky to propose or implement across the different types of servers that existed and for the upcoming browsers that were on the horizon.
And so it wasn’t standardised and doesn’t exist today.
But this is an important lesson from the time: reducing complexity increases the chances of mass adoption.
In the end, simplicity wins.
Jeremy
I think that’s a pattern we see over and over again, not just in the history of the web, but before the web. Simplicity wins.
Ted Nelson famously to this day thinks that the World Wide Web is weak sauce. It didn’t try to solve complex problems right out of the gate, like handling micro-payments.
As we saw, the hypertext community thought that one-way linking was ridiculous. But simplicity does win out.
Unfortunately that’s why browsers ended up just being browsers. We got some of the functionality back with wikis, content management systems, and social media to a certain extent. But I think it’s still a bit of a shame that when I want to browse a web page, I’m using one piece of software—the browser—but when I want to make a web page, I’m using another piece of software (or multiple pieces of software) to get something on to the web.
I feel like we lost something.
Remy
We head home after a week of hacking.
We were all invited back in March earlier this year for the Web@30 event that was taking place to celebrate the web and also Sir Tim Berners-Lee.
A few of us, Jeremy, Martin, and myself, went back to CERN for the first leg of the event. There was even a video showing off our work as part of the main conference. Jeremy and I even chased Tim Berners-Lee back to London, to the Science Museum, like obsessive web fanboys. It was a lot of fun!
The night before I got a message from Jean-François Groff, pictured here on the right. JF Groff joined Tim Berners-Lee 30 years ago and created libwww (a precursor to libcurl).
The message read:
Sitting with Tim right now. He loves your browser!
Crushed it.
It’s amazing that we were able to pull this off in a week just with text editors and information that’s freely available. It’s mind-boggling how much we can do today and how far it can reach. And it all started on that NeXTSTEP machine 30 years ago.
What I really loved about this project was working with this brilliantly old technology, digging around at the birth of browsers and the web.
I wouldn’t be stood here today, if it weren’t for the web.
I wouldn’t even know Jeremy, if it weren’t for the web.
I wouldn’t have a career, if it weren’t for the web.
I loved seeing how such old technology, the original WorldWideWeb browser, was still able to render my blog. Because I put content first and deliver markup from the server, the page rendered. HTML really is backward compatible.
HTML and HTTP are just text. Nothing terribly fancy. Dare I say, beautifully simple, and as we said before, simplicity wins the day.
This same simplicity is what allows us all to have the chance for an equal voice. The web allows us to freely publish our thoughts and experiences. We have to fight to protect that kind of web.
And we’ve got to work at keeping it simple.
Jeremy
When we returned to CERN for the 30th anniversary celebrations, one of the other people there was the journalist Zeynep Tüfekçi.
She was on a panel along with Tim Berners-Lee, Robert Cailliau, Jean-François Groff, and Lou Montulli. At the end of the panel discussion, she was asked:
What would you tell the next generation about how to use this wonderful tool?
She replied:
If you have something wonderful, if you do not defend it, you will lose it.
If you do not defend the magic and the things that make it wonderful, it’s just not going to stay magical by itself.
Defend the simplicity and resilience that’s so central to the web.
I don’t know about you, but I often feel that just trying to make a web page has become far too complicated. But this is complexity that we have chosen with our tools, processes, and assumptions. We’ve buried the magic. The magic of linking web pages together. The magic of a working global hypertext system, where nobody needs to ask for permission to publish.
Tim Berners-Lee prototyped the first web browser, but the subsequent world wide web wasn’t created by any one person. It was created by everyone. That. Is. Magical.
I don’t want the web to become a place where only an elite priesthood get to experience the magic of creation. I’m going to fight to defend the openness of the world wide web. This is for everyone. Not just for everyone to use; it’s for everyone to create.
The Edinburgh-Madrid-London whirlwind wasn’t ideal. I gave the opening talk at Finch Conf, then immediately jumped in a taxi to get to the airport to fly to Madrid, so I missed all the excellent talks. I had FOMO for a conference I actually spoke at.
I did get to spend some time at Code Motion in Madrid, but that was a waste of time. It was one of those multi-track events where the trade show floor is prioritised over the talks (and the speakers don’t get paid). I gave my talk to a mostly empty room—the classic multi-track experience. On the plus side, I had a wonderful time with Jessica exploring Madrid’s many tapas delights. The food and drink made up for the sub-par conference.
I flew back from Madrid to the UK, and immediately went straight to London to deliver the closing talk of Generate CSS. So once again, I didn’t get to see any of the other talks. That’s a real shame—it sounds like they were all excellent.
The day after Generate though, I took the Eurostar to Amsterdam. That’s where I’ve been ever since. There were just as many events as in the previous week, but because they were all in Amsterdam, I could savour them properly, instead of spending half my time travelling.
Indie Web Camp Amsterdam was excellent, although I missed out on the afternoon discussions on the first day because I popped over to the Mozilla Tech Speakers event happening at the same time. I was there to offer feedback on lightning talks. I really, really enjoyed it.
I’d really like to do more of this kind of thing. There aren’t many activities I feel qualified to give advice on, but public speaking is an exception. I’ve got plenty of experience that I’m eager to share with up-and-coming speakers. Also, I got to see some really great lightning talks!
Then it was time for View Source. There was a mix of talks, panels, and breakout conversation corners. I saw some fantastic talks by people I hadn’t seen speak before: Melanie Richards, Ali Spittel, Sharell Bryant, and Tejas Kumar. I gave the closing keynote, which was warmly received—that’s always very gratifying.
Neither of us is under any illusions about the nature of a joint talk. It’s not half as much work; it’s more like twice the work. We’ve both seen enough uneven joint presentations to know what we want to avoid.
I’m happy to say that it went off without a hitch. Remy definitely had the tougher task—he did a live demo. Needless to say, he did it flawlessly. It’s been a real treat working with Remy on this. Don’t tell him I said this, but he’s kind of a web hero of mine, so this was a real honour and a privilege for me.
I’ve got some more speaking engagements ahead of me. Most of them are in Europe so I’m going to do my utmost to travel to them by train. Flying is usually more convenient but it’s terrible for my carbon footprint. I’m feeling pretty guilty about that Madrid trip; I need to make amends.
I’ll be travelling to France next week for Paris Web. Taking the Eurostar is a no-brainer for that one. Straight after that Jessica and I will be going to Frankfurt for the book fair. Taking the train from Paris to Frankfurt will be nice and straightforward.
I’ll be back in Brighton for Indie Web Camp on the weekend of October 19th and 20th—you should come!—and then I’ll be heading off to Antwerp for Full Stack Fest. Anywhere in Belgium is easily reachable by train so that’ll be another Eurostar journey.
After that, it gets a little trickier. I’ll be going to Berlin for Beyond Tellerrand but I’m not sure I can make it work by train. Same goes for Web Clerks in Vienna. Cities that far east are tough to get to by train in a reasonable amount of time (although I realise that, compared to many others, I have the luxury of spending time travelling by train).
And then there’s Ireland. I make trips back there to see my mother, but there’s no alternative to flying or taking a ferry—neither are ideal for the environment. At least I can offset the carbon from my flights; the travel equivalent to putting coins in the swear jar.
Don’t get me wrong—I’m not moaning about the amount of travel involved in going to conferences and workshops. It’s fantastic that I get to go to new and interesting places. That’s something I hope I never take for granted. But I can’t ignore the environmental damage I’m doing. I’ll be making more of an effort to travel by train to Europe’s many excellent web events. While I’m at it, I can ask Paul for his trainspotter expertise.
Back in the late 2000s, I used to go to Copenhagen every year for an event called Reboot. It was a fun, eclectic mix of talks and discussions, but alas, the last one was over a decade ago.
A few months ago, I got an email from Thomas about the new event he’s running in Copenhagen called Techfestival. He was wondering if there was some way of making the WorldWideWeb project part of the event. We ended up settling on having a stand—a modern computer running a modern web browser running a recreation of the first ever web browser from almost three decades ago.
So I showed up at Techfestival and found that the computer had been set up in a Shoreditchian shipping container. I wasn’t exactly sure what I was supposed to do, so I just hung around nearby until someone wandering by would pause and start tentatively approaching the stand.
“Would you like to try the time machine?” I asked. Nobody refused the offer. I explained that they were looking at a recreation of the world’s first web browser, and then showed them how they could enter a URL to see how the oldest web browser would render a modern website.
Lots of people entered facebook.com or google.com, but some people had their own websites, either personal or for their business. They enjoyed seeing how well (or not) their pages held up. They’d take photos of the screen.
People asked lots of questions, which I really enjoyed answering. After a while, I was able to spot the themes that came up frequently. Some people were confusing the origin story of the internet with the origin story of the web, so I was more than happy to go into detail on either or both.
The experience helped me clarify in my own mind what was exciting and interesting about the birth of the web—how much has changed, and how much has stayed the same.
The World Wide Web turned 30 years old this year. To mark the occasion, a motley group of web nerds gathered at CERN, the birthplace of the web, to build a time machine. The first ever web browser was, confusingly, called WorldWideWeb. What if we could recreate the experience of using it …but within a modern browser! Join (Je)Remy on a journey through time and space and code as they excavate the foundations of Tim Berners-Lee’s gloriously ambitious and hacky hypertext system that went on to conquer the world.
Neither of us is under any illusions about the nature of a joint talk. It’s not half as much work; it’s more like twice the work. We’ve both seen enough uneven joint presentations to know what we want to avoid.
We’ve been honing the material and doing some run-throughs at the Clearleft HQ at 68 Middle Street this week. The talk has a somewhat unusual structure with two converging timelines. I think it’s going to work really well, but I won’t know until we actually deliver the talk in Amsterdam. I’m excited—and a bit nervous—about it.
Whether it’s in a shipping container in Copenhagen or on a stage in Amsterdam, I’m starting to realise just how much I enjoy talking about web history.
I was in Amsterdam again at the start of last week for the Progressive Web App Dev Summit, organised by Google. Most of the talks were given by Google employees, but not all—this wasn’t just a European version of Google I/O. Representatives from Opera, Mozilla, Samsung, and Microsoft were also there, and there were quite a few case studies from independent companies. That was very gratifying to see.
Almost all the talks were related to progressive web apps. I say, “almost all” because there were occasional outliers. There was a talk on web components, which don’t have anything directly to do with progressive web apps (and I hope there won’t be any attempts to suggest otherwise), and another on rendering performance that had good advice for anyone building any kind of website. Most of the talks were about the building blocks of progressive web apps: HTTPS, Service Workers, push notifications, and all that jazz.
I was very pleased to see that there was a move away from suggesting that single-page apps with the app-shell architecture were the only way of building progressive web apps.
Need to do more to change the perception that a progressive web app must be a single-page client-rendered app #pwadevsummit
There were lots of great examples of progressively enhancing existing sites into progressive web apps. Jeff Posnick’s talk was a step-by-step walkthrough of doing exactly that. Reading through the agenda, I was really happy to see this message repeated again and again:
In this session we’ll take an online-only site and turn it into a fully network-resilient, offline-first installable progressive web app. We’ll also break out of the app shell and look at approaches that better-suit traditional server-driven sites.
Progressive Web Apps should work everywhere for every user. But what happens when the technology and APIs are not available in your users’ browsers? In this talk we will show you how you can think about and build sites that work everywhere.
Progressive Web Apps should load fast, work great offline, and progressively enhance to a better experience in modern browsers.
How do you put the “progressive” into your current web app?
You can (and should!) build for the latest and greatest browsers, but through a collection of fallbacks and progressive enhancements you can bring a lot of tomorrow’s web to yesterday’s browsers.
I think this is a really smart move. It’s a lot easier to sell people on incremental changes than it is to convince them to rip everything out and start from scratch (another reason why I’m dubious about any association between web components and progressive web apps—but I’ll save that for another post).
The other angle that I really liked was the emphasis on emerging markets, not just wealthy westerners. Tal Oppenheimer’s talk Building for Billions was superb, and Alex kicked the whole thing off with some great facts and figures on mobile usage.
In my mind, these two threads are very much related. Progressive enhancement allows us to have our progressive web app cake and eat it too: we can make websites that can be accessed on devices with limited storage and slow networks, while at the same time ensuring those same sites take advantage of all the newest features in the latest and greatest browsers. I talked to a lot of Google devs about ways to measure the quality of a progressive web app, and I’m coming to the conclusion that a truly high-quality site is one that can still be accessed by a proxy browser like Opera Mini, while providing a turbo-charged experience in the latest version of Chrome. If you think that sounds naive or unrealistic, then I think you might want to dive deeper into all the technologies that make progressive web apps so powerful—responsive design, Service Workers, a manifest file, HTTPS, push notifications; all of those features can and should be used in a layered fashion.
That ambient badging that Alex was talking about? Opera is doing it. The importance of being able to access URLs that I’ve been ranting about? Opera is doing it.
Then we had the idea to somehow connect it to the “pull-to-refresh” spinner, as a secondary gesture to the left or right.
Nice! I’m looking forward to seeing what other browsers come up with it. It’s genuinely exciting to see all these different browser makers in complete agreement on which standards they want to support, while at the same time differentiating their products by competing on user experience. Microsoft recently announced that progressive web apps will be indexed in their app store just like native apps—a really interesting move.
The Progressive Web App Dev Summit wrapped up with a closing panel that I had the honour of hosting. I thought it was very brave of Paul to ask me to host this, considering my strident criticism of Google’s missteps.
Initially there were going to be six people on the panel. Then it became eight. Then I blinked and it suddenly became twelve. Less of a panel, more of a jury. Half the panelists were from Google and the other half were from Opera, Microsoft, Mozilla, and Samsung. Some of those representatives were a bit too media-trained for my liking: Ali from Microsoft tried to just give a spiel, and Alex Komoroske from Google wouldn’t give me a straight answer about whether he wants Android Instant apps to succeed—Jake was a bit more honest. I should have channelled my inner Paxman a bit more.
Needless to say, nobody from Apple was at the event. No surprise there. They’ve already promised to come to the next event. There won’t be an Apple representative on stage, obviously—that would be asking too much, wouldn’t it? But at least it looks like they’re finally making an effort to engage with the wider developer community.
All in all, the Progressive Web App Dev Summit was good fun. I found the event quite inspiring, although the sausage festiness of the attendees was depressing. It would be good if the marketing for these events reached a wider audience—I met a lot of developers who only found out about it a week or two before the event.
I really hope that people will come away with the message that they can get started with progressive web apps right now without having to re-architect their whole site. Right now the barrier to entry is having your site running on HTTPS. Once you’ve got that up and running, it’s pretty much a no-brainer to add a manifest file and a basic Service Worker—to boost performance if nothing else. From there, you’re in a great position to incrementally add more and more features—an offline-first approach with your Service Worker, perhaps? Or maybe start dabbling in push notifications. The great thing about all of these technologies (with the glaring exception of web components in their current state) is that you don’t need to bet the farm on any of them. Try them out. Use them as enhancements. You’ve literally got nothing to lose …and your users have everything to gain.
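For what it’s worth, here’s roughly what that low barrier to entry looks like in code: registering a worker from the page, plus a deliberately basic cache-first worker script. The file names, cache name, and pre-cached paths are placeholders of mine, and a production worker would want smarter cache invalidation than this sketch.

// In the page: register the Service Worker if the browser supports it.
// Browsers that don't simply carry on getting the regular website.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js');
}

// In /sw.js: pre-cache a handful of core assets on install...
const cacheName = 'site-v1';

self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(cacheName).then((cache) =>
      cache.addAll(['/', '/styles.css', '/offline.html'])
    )
  );
});

// ...then answer requests from the cache when possible, fall back to
// the network, and show the offline page if both of those fail.
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) =>
      cached || fetch(event.request).catch(() => caches.match('/offline.html'))
    )
  );
});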
I’m about to have a crazy few days that will see me bouncing between Brighton and Amsterdam.
It starts tomorrow. I’m flying to Amsterdam in the morning and speaking at this Icons event in the afternoon about digital preservation and long-term thinking.
Then, the next morning, I’ll be opening up the inaugural HTML Special, which is a new addition to the CSS Day conference. Each talk on Thursday will cover one HTML element. I am honoured to be speaking about the A element. Here’s the talk description:
The world exploded into a whirling network of kinships, where everything pointed to everything else, everything explained everything else…
Enquire within upon everything.
I’ve been working all out to get this talk done and I finally wrapped it up today. Right now, I feel pretty happy with it, but I bet I’ll change that opinion in the next 48 hours. I’m pretty sure that this will be one of those talks that people will either love or hate, kind of like my 2008 dConstruct talk, The System Of The World.
After CSS Day, I’ll be heading back to Brighton on Saturday, June 18th to play a Salter Cane gig in The Greys pub. If you’re around, you should definitely come along—not only is it free, but there will be some excellent support courtesy of Jon London, and Lucas and King.
Then, the next morning, I’ll be speaking at DrupalCamp Brighton, opening up day two of the event. I won’t be able to stick around long afterwards though, because I need to skedaddle to the airport to go back to Amsterdam!
Google are having their Progressive Web App Dev Summit there on Monday and Tuesday. I’ll be moderating a panel on the second day, so I’ll need to pay close attention to all the talks. I’ll be grilling representatives from Google, Samsung, Opera, Microsoft, and Mozilla. Considering my recent rants about some very bad decisions on the part of Google’s Chrome team, it’s very brave of them to ask me to be there, much less moderate a panel in public.
You can still register for the event, by the way. Like the Salter Cane gig, it’s free. But if you can’t make it along, I’d still like to know what you think I should be asking the panelists about.
Got a burning question for browser/device makers? Write it down, post it somewhere on the web with a link back to this post, and then send me a web mention (there’s a form for you to paste in the URL at the bottom of this post).
Google have asked me to moderate a panel on the second day of this event in Amsterdam dedicated to progressive web apps. Very brave of them, considering some of my recent posts.
I’ll be speaking to students in Vasilis’s class for Communication and Multimedia Design in Amsterdam right before CSS Day. There are (free) tickets available if you’re around. I’ll be talking about digital preservation and long-term thinking on the web.
This meetup is, like all other Icons Meetups, free to attend for everyone. For students and lecturers of CMD Amsterdam, of course. But also for all professional (digital) designers who want to be inspired.
The weather is glorious right now here in Brighton. As much as I get wanderlust, I’m more than happy to have been here for most of June and for this lovely July thus far.
Prior to the J months, I made a few European sojourns.
Mid-May was Mobilism time in Amsterdam, although it may turn out that this was the final year. That would be a real shame: it’s a great conference, and this year’s was no exception.
Two weeks after Mobilism, I was back on the continent for Beyond Tellerrand in Düsseldorf. I opened up the show with a new talk. It was quite ranty, but I was pleased with how it turned out, and the audience were very receptive. I’ll see about getting the video transcribed so I can publish the full text here.
Alas, I had to miss the second day of the conference so I could head down to Porto for this year’s ESAD web talks, where I reprised the talk I had just debuted in Germany. It was my first time in Portugal and I really liked Porto: there’s a lot to explore and discover there.
Two weeks after that, I gave that same talk one last spin at FFWD.pro in Zagreb. I had never been to Croatia before and Jessica and I wanted to make the most of it, so we tagged on a trip to Dubrovnik. That was quite wonderful. It’s filled with tourists these days, but with good reason: it’s a beautiful medieval place.
With that, my little European getaways came to an end (for now). The only other conference I attended was Brighton’s own Ampersand, which was particularly fun this year. The Clearleft conferences just keep getting better and better.
In fact, this year’s Ampersand might have been the best yet. And this year’s UX London was definitely the best yet. I’d love to say that this year’s dConstruct will be the best yet, but given that last year’s was without doubt the best conference I’ve ever been to, that’s going to be quite a tall order.
Still, with this line-up, I reckon it’s going to be pretty darn great …and it will certainly be good fun. So if you haven’t yet done so, grab a ticket now and I’ll see you here in Brighton in September.
I’ve really, really enjoyed the previous two Mobilisms, and I always get a kick out of moderating panels so I’m pretty chuffed about getting the chance to host a panel for the third year running.
The first year, the panel was made up of mobile browser vendors (excluding Apple, of course). Last year, it was more of a mixed bag of vendors and developers. This year …well, we’ll see. I’ll assemble the panel over the course of the conference’s two days. I plan to choose the sassiest and most outspoken of speakers—the last thing you want on a panel is a collection of meek, media-trained company shills.
Mind you, Dan has managed to buy his way onto the panel through some kind of sponsorship deal, but I’m hoping he’ll be able to contribute something useful about Firefox OS.
Apart from that one preordained panelist, everything else is up in the air. To help me decide who to invite onto the panel, it would be really nice to have an idea of what kind of topics people want to have us discuss. Basically, what’s hot and what’s not.
So …got a burning question about mobile, the web, or the “mobile web” (whatever that means)? I want to hear it.
If you could leave a comment with your question, ‘twould be much appreciated.