Journal tags: print


Button types

I’ve been banging the drum for a button type="share" for a while now.

I’ve also written about other potential button types. The pattern I noticed was that, if a JavaScript API first requires a user interaction—like the Web Share API—then that’s a good hint that a declarative option would be useful:

The Fullscreen API has the same restriction. You can’t make the browser go fullscreen unless you’re responding to a user gesture, like a click. So why not have button type="fullscreen" in HTML to encapsulate that? And again, the fallback in non-supporting browsers is predictable—it behaves like a regular button—so this is trivial to polyfill.
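Here’s a rough sketch of what such a polyfill could look like. The type value is the proposed (and entirely hypothetical) one; in browsers that don’t recognise it, the button just behaves like a regular button, which is exactly what makes the approach safe:

<button type="fullscreen">View fullscreen</button>

<script>
// Minimal polyfill sketch for the hypothetical button type="fullscreen".
// Unrecognised type values fall back to default button behaviour,
// so we intercept the click and call the Fullscreen API ourselves.
document.addEventListener('click', function (event) {
  var button = event.target.closest('button[type="fullscreen"]');
  if (button) {
    event.preventDefault();
    document.documentElement.requestFullscreen();
  }
});
</script>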

There’s another “smell” that points to some potential button types: what functionality do browsers provide in their interfaces?

Some browsers provide a print button. So how about button type="print"? The functionality is currently doable with button onclick="window.print()" so this would be a nicer, more declarative way of doing something that’s already possible.
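To illustrate, here’s the scripted version that works today alongside the hypothetical declarative version (the type value is, of course, speculative):

<!-- possible today -->
<button onclick="window.print()">Print this page</button>

<!-- the speculative declarative equivalent -->
<button type="print">Print this page</button>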

It’s the same with back buttons, forward buttons, and refresh buttons. The functionality is available through a browser interface, and it’s also scriptable, so why not have a declarative equivalent?
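For instance, these are the scripted equivalents that browsers already support—exactly the kind of thing the speculative declarative button types would wrap up:

<button onclick="history.back()">Back</button>
<button onclick="history.forward()">Forward</button>
<button onclick="location.reload()">Refresh</button>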

How about bookmarking?

And remember, the browser interface isn’t always visible: progressive web apps that launch with minimal browser UI need to provide this functionality.

Šime Vidas was wondering about button type="copy" for copying to clipboard. Again, it’s something that’s currently scriptable and requires a user gesture. It’s a little more complex than the other actions because there needs to be some way of providing the text to be copied, but it’s definitely a valid use case.
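Here’s a sketch of what that might look like: the scripted approach that works today (the asynchronous Clipboard API needs a secure context and a user gesture), and one possible declarative shape. The value attribute for supplying the text is pure speculation on my part, and example.com is just a placeholder:

<!-- scriptable today -->
<button onclick="navigator.clipboard.writeText('https://example.com/')">Copy link</button>

<!-- one speculative declarative form -->
<button type="copy" value="https://example.com/">Copy link</button>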

  • button type="share"
  • button type="fullscreen"
  • button type="print"
  • button type="bookmark"
  • button type="back"
  • button type="forward"
  • button type="refresh"
  • button type="copy"

Any more?

Get the FLoC out

I’ve always liked the way that web browsers are called “user agents” in the world of web standards. It’s such a succinct summation of what browsers are for, or more accurately who browsers are for. Users.

The term makes sense when you consider that the internet is for end users. That’s not to be taken for granted. This assertion is now enshrined in the Internet Engineering Task Force’s RFC 8890—like Magna Carta for the network age. It’s also a great example of prioritisation in a design principle:

When there is a conflict between the interests of end users of the Internet and other parties, IETF decisions should favor end users.

So when a web browser—ostensibly an agent for the user—prioritises user-hostile third parties, we get upset.

Google Chrome—ostensibly an agent for the user—is running an origin trial for Federated Learning of Cohorts (FLoC). This is not a technology that serves the end user. It is a technology that serves third parties who want to target end users. The most common use case is behavioural advertising, but targeting could be applied for more nefarious purposes.

The Electronic Frontier Foundation wrote an explainer last month: Google Is Testing Its Controversial New Ad Targeting Tech in Millions of Browsers. Here’s What We Know.

Let’s back up a minute and look at why this is happening. End users are routinely targeted today (for behavioural advertising and other use cases) through third-party cookies. Some user agents like Apple’s Safari and Mozilla’s Firefox are stamping down on this, disabling third party cookies by default.

Seeing which way the wind is blowing, Google’s Chrome browser will also disable third-party cookies at some time in the future (they’re waiting to shut that barn door until the fire is good’n’raging). But Google isn’t just in the browser business. Google is also in the ad tech business. So they still want advertisers to be able to target end users.

Yes, this is quite the cognitive dissonance: one part of the business is building a user agent while a different part of the company is working on ways of tracking end users. It’s almost as if one company shouldn’t simultaneously be the market leader in three separate industries: search, advertising, and web browsing. (Seriously though, I honestly think Google’s search engine would get better if it were split off from the parent company, and I think that Google’s web browser would also get better if it were a separate enterprise.)

Anyway, one possible way of tracking users without technically tracking individual users is to assign them to buckets, or cohorts of interest based on their browsing habits. Does that make you feel safer? Me neither.

That’s what Google is testing with the origin trial of FLoC.

If you, as an end user, don’t wish to be experimented on like this, there are a few things you can do:

  • Don’t use Chrome. No other web browser is participating in this experiment. I recommend Firefox.
  • If you want to continue to use Chrome, install the Duck Duck Go Chrome extension.
  • Alternatively, if you manually disable third-party cookies, your Chrome browser won’t be included in the experiment.
  • Or you could move to Europe. The origin trial won’t be enabled for users in the European Union, which is coincidentally where GDPR applies.

That last decision is interesting. On the one hand, the origin trial is supposed to be on a small scale, hence the lack of European countries. On the other hand, the origin trial is “opt out” instead of “opt in” so that they can gather a big enough data set. Weird.

The plan is that if and when FLoC launches, websites would have to opt in to it. And when I say “plan”, I mean “best guess.”

I, for one, am filled with confidence that Google would never pull a bait-and-switch with their technologies.

In the meantime, if you’re a website owner, you have to opt your website out of the origin trial. You can do this by sending a server header. A meta element won’t do the trick, I’m afraid.

I’ve done it for my sites, which are served using Apache. I’ve got this in my .conf file:

<IfModule mod_headers.c>
Header always set Permissions-Policy "interest-cohort=()"
</IfModule>
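If your site is served by Nginx rather than Apache, I believe the equivalent is a single add_header directive in your server block—consider this a sketch rather than gospel:

add_header Permissions-Policy "interest-cohort=()" always;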

If you don’t have access to your server, tough luck. But if your site runs on Wordpress, there’s a proposal to opt out of FLoC by default.

Interestingly, none of the Chrome devs that I follow are saying anything about FLoC. They’re usually quite chatty about proposals for potential standards, but I suspect that this one might be embarrassing for them. It was a similar situation with AMP. In that case, Google abused its monopoly position in search to blackmail publishers into using Google’s format. Now Google’s monopoly in advertising is compromising the integrity of its browser. In both cases, it makes it hard for Chrome devs to claim that they have the web’s best interests at heart.

But one of the advantages of having a huge share of the browser market is that Chrome can just plough ahead and unilaterally implement whatever it wants even if there’s no consensus from other browser makers. So that’s what Google is doing with FLoC. But their justification for doing this doesn’t really work unless other browsers play along.

Here’s Google’s logic:

  1. Third-party cookies are on their way out so advertisers will no longer be able to use that technology to target users.
  2. If we don’t provide an alternative, advertisers and other third parties will use fingerprinting, which we all agree is very bad.
  3. So let’s implement Federated Learning of Cohorts so that advertisers won’t use fingerprinting.

The problem is with step three. The theory is that if FLoC gives third parties what they need, then they won’t reach for fingerprinting. Even if there were any validity to that hypothesis, the only chance it has of working is if every browser joins in with FLoC. Otherwise ad tech companies are leaving money on the table. Can you seriously imagine third parties deciding that they just won’t target iPhone or iPad users any more? Remember that Safari is the only real browser on iOS so unless FLoC is implemented by Apple, third parties can’t reach those people …unless those third parties use fingerprinting instead.

Google have set up a situation where it looks like FLoC is going head-to-head with fingerprinting. But if FLoC becomes a reality, it won’t be instead of fingerprinting, it will be in addition to fingerprinting.

Google is quite right to point out that fingerprinting is A Very Bad Thing. But their concerns about fingerprinting sound very hollow when you see that Chrome is pushing ahead and implementing a raft of browser APIs that other browser makers quite rightly point out enable more fingerprinting: Battery Status, Proximity Sensor, Ambient Light Sensor and so on.

When it comes to those APIs, the message from Google is that fingerprinting is a solvable problem.

But when it comes to third party tracking, the message from Google is that fingerprinting is inevitable and so we must provide an alternative.

Which one is it?

Google’s flimsy logic for why FLoC is supposedly good for end users just doesn’t hold up. If they were honest and said that it’s to maintain the status quo of the ad tech industry, it would make much more sense.

The flaw in Google’s reasoning is the fundamental idea that tracking is necessary for advertising. That’s simply not true. Sacrificing user privacy is fundamental to behavioural advertising …but behavioural advertising is not the only kind of advertising. It isn’t even a very good kind of advertising.

Marko Saric sums it up:

FLoC seems to be Google’s way of saving a dying business. They are trying to keep targeted ads going by making them more “privacy-friendly” and “anonymous”. But behavioral profiling and targeted advertisement is not compatible with a privacy-respecting web.

What’s striking is that the very monopolies that make Google and Facebook the leaders in behavioural advertising would also make them the leaders in contextual advertising. Almost everyone uses Google’s search engine. Almost everyone uses Facebook’s social network. An advertising model based on what you’re currently looking at would keep Google and Facebook in their dominant positions.

Google made their first many billions exclusively on contextual advertising. Google now prefers to push the message that behavioral advertising based on personal data collection is superior but there is simply no trustworthy evidence to that.

I sincerely hope that Chrome will align with Safari, Firefox, Vivaldi, Brave, Edge and every other web browser. Everyone already agrees that fingerprinting is the real enemy. Imagine the combined brainpower that could be brought to bear on that problem if all browsers made user privacy a priority.

Until that day, I’m not sure that Google Chrome can be considered a user agent.

Design sprints on the Clearleft podcast

The sixth episode of the Clearleft podcast is now live: design sprints!

It comes in at just under 24 minutes, which feels just about right to me. Once again, it’s a dive into one topic that asks “What is this?”, “What does this mean?”, and “Where did this come from?”

I could’ve invited just about any of the practitioners at Clearleft to join me on this one, but I settled on Chris, who’s always erudite and sharp.

I also asked ex-Clearleftie Jerlyn to have a chat. You’ll notice that’s been a bit of a theme on the Clearleft podcast; asking people who used to work at Clearleft to share their thoughts. I’d quite like to do at least an episode—maybe even a whole season—featuring ex-Clearlefties exclusively. So many great people have worked at the agency over the years, Jerlyn being a prime example.

I’d also like to do an episode some time with the regular contractors we’ve worked with at Clearleft. On this episode, I asked the super-smart Tom Prior to join me.

I recorded those three chats over the past couple of weeks. And it was kind of funny how there was, of course, a looming presence over the topic of design sprints: Jake Knapp. I had sent him an email too but I got an auto-responder saying that he was super busy and would take a while to respond. So I kind of mentally wrote it off.

I spent last week assembling and editing the podcast with the excellent contributions from Jerlyn, Chris, and Tom. But it did feel a bit like Waiting For Godot the way that Jake’s book was being constantly referenced.

Then, on the weekend, Godot showed up.

Jake said he’d have time for a chat on Wednesday. Aargh! That’s the release date for the podcast! I don’t suppose Monday would work?

Very graciously, Jake agreed to a Monday chat (at an ungodly early hour in his time zone). I got an excellent half hour of material straight from the horse’s mouth—a very excitable and fast-talking horse, too.

That left me with just a day to work the material into the episode! I felt like a journalist banging on the keyboard at midnight, ready to run into the printing room shouting “Stop the press!” …although I’m sure the truth is that nobody but me would notice if an episode were released a little late.

Anyway, it all got done in the end and I think it turned out pretty great!

Have a listen for yourself and see what you make of it.

This was the final episode of the first season. I’ll now take a little break from podcasting as I plot and plan for the next season. Watch this space! …and, y’know, subscribe to the podcast.

Mental models

I’ve found that the older I get, the less I care about looking stupid. This is remarkably freeing. I no longer have any hesitancy about raising my hand in a meeting to ask “What’s that acronym you just mentioned?” This sometimes has the added benefit of clarifying something for others in the room who might have been too shy to ask.

I remember a few years back being really confused about npm. Fortunately, someone who was working at npm at the time came to Brighton for FFConf, so I asked them to explain it to me.

As I understood it, npm was intended to be used for managing packages of code for Node. Wasn’t it actually called “Node Package Manager” at one point, or did I imagine that?

Anyway, the mental model I had of npm was: npm is to Node as PEAR is to PHP. A central repository of open source code projects that you could easily add to your codebase …for your server-side code.

But then I saw people talking about using npm to manage client-side JavaScript. That really confused me. That’s why I was asking for clarification.

It turns out that my confusion was somewhat warranted. The npm project had indeed started life as a repo for server-side code but had since expanded to encompass client-side code too.

I understand how it happened, but it confirmed a worrying trend I had noticed. Developers were writing front-end code as though it were back-end code.

On the one hand, that makes total sense when you consider that the code is literally in the same programming language: JavaScript.

On the other hand, it makes no sense at all! If your code’s run-time is on the server, then the size of the codebase doesn’t matter that much. Whether it’s hundreds or thousands of lines of code, the execution happens more or less independently of the network. But that’s not how front-end development works. Every byte matters. The more code you write that needs to be executed on the user’s device, the worse the experience is for that user. You need to limit how much you’re using the network. That means leaning on what the browser gives you by default (that’s your run-time environment) and keeping your code as lean as possible.

Dave echoes my concerns in his end-of-the-year piece called The Kind of Development I Like:

I now think about npm and wonder if it’s somewhat responsible for some of the pain points of modern web development today. Fact is, npm is a server-side technology that we’ve co-opted on the client and I think we’re feeling those repercussions in the browser.

Writing back-end and writing front-end code require very different approaches, in my opinion. But those differences have been erased in “modern” JavaScript.

The Unix Philosophy encourages us to write small micro libraries that do one thing and do it well. The Node.js Ecosystem did this in spades. This works great on the server where importing a small file has a very small cost. On the client, however, this has enormous costs.

In a funny way, this situation reminds me of something I saw happening over twenty years ago. Print designers were starting to do web design. They had a wealth of experience and knowledge around colour theory, typography, hierarchy and contrast. That was all very valuable to bring to the world of the web. But the web also has fundamental differences to print design. In print, you can use as many typefaces as you want, whereas on the web, to this day, you need to be judicious in the range of fonts you use. But in print, you might have to limit your colour palette for cost reasons (depending on the printing process), whereas on the web, colours are basically free. And then there’s the biggest difference of all: working within known dimensions of a fixed page in print compared to working within the unknowable dimensions of flexible viewports on the web.

Fast forward to today and we’ve got a lot of Computer Science graduates moving into front-end development. They’re bringing with them a treasure trove of experience in writing robust scalable code. But web browsers aren’t like web servers. If your back-end code is getting so big that it’s starting to run noticeably slowly, you can throw more computing power at it by scaling up your server. That’s not an option on the front-end where you don’t really have one run-time environment—your end users have their own run-time environment with its own constraints around computing power and network connectivity.

That’s a very, very challenging world to get your head around. The safer option is to stick to the mental model you’re familiar with, whether you’re a print designer or a Computer Science graduate. But that does a disservice to end users who are relying on you to deliver a good experience on the World Wide Web.

Design sprint?

Our hack week at CERN to reproduce the WorldWideWeb browser was five days long. That’s also the length of a design sprint. So …was what we did a design sprint?

I’m going to say no.

On the surface, our project has all the hallmarks of a design sprint. A group of people who don’t normally work together were thrown into an intense week of problem-solving and building, culminating in a tangible testable output. But when you look closer, the journey itself was quite different. A design sprint is typically broken into five phases, each one mapped on to a day of work:

  1. Understand and Map
  2. Demos and Sketch
  3. Decide and Storyboard
  4. Prototype
  5. Test

Gathered at CERN, hunched over laptops.

There was certainly plenty of understanding, sketching, and prototyping involved in our hack week at CERN, but we knew going in what the output would be at the end of the week. That’s not the case with most design sprints: figuring out what you’re going to make is half the work. In our case, we knew what needed to be produced; we just had to figure out how. Our process looked more like this:

  1. Understand and Map
  2. Research and Sketch
  3. Build
  4. Build
  5. Build

Now you could say that it’s a kind of design sprint, but I think there’s value in reserving the term “design sprint” for the specific five-day process. As it is, there’s enough confusion between the term “sprint” in its agile sense and “design sprint”.

Code print

You know what I like? Print stylesheets!

I mean, I’m not a huge fan of trying to get the damn things to work consistently—thanks, browsers—but I love the fact that they exist (although I’ve come across a worrying number of web developers who weren’t aware of their existence). Print stylesheets are one more example of the assumption-puncturing nature of the web: don’t assume that everyone will be reading your content on a screen. News articles, blog posts, recipes, lyrics …there are many situations where a well-considered print stylesheet can make all the difference to the overall experience.

You know what I don’t like? QR codes!

It’s not because they’re ugly, or because they’ve been over-used by the advertising industry in completely inappropriate ways. No, I don’t like QR codes because they aren’t an open standard. Still, I must grudgingly admit that they’re a convenient way of providing a shortcut to a URL (albeit a completely opaque one—you never know if it’s actually going to take you to the URL it promises or to a Rick Astley video). And now that the parsing of QR codes is built into iOS without the need for any additional application, the barrier to usage is lower than ever.

So much as I might grit my teeth, QR codes and print stylesheets make for good bedfellows.

I picked up a handy tip from a Smashing Magazine article about print stylesheets a few years back. You can use the combination of @media print and generated content to provide a QR code for the URL of the page being printed out. Google’s Chart API provides a really handy shortcut for generating QR codes:

https://chart.googleapis.com/chart?cht=qr&chs=150x150&chl=http://example.com
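The technique looks something like this: generated content that only shows up in print. You’d need to output the page’s own URL into the chl parameter (server-side, in my case), so example.com here is just a stand-in, and the selector will depend on your markup:

@media print {
  /* appended to the end of the article, but only when printing */
  article::after {
    content: url("https://chart.googleapis.com/chart?cht=qr&chs=150x150&chl=http://example.com");
  }
}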

Except that there’s no telling how long that will continue to work. Google being Google, they’ve deprecated the simple image chart API in favour of the over-engineered JavaScript alternative. So just as I recently had to migrate all my maps over to Leaflet when Google changed their Maps API from under the feet of developers, the clock is ticking on when I’ll have to find an alternative to the Image Charts API.

For now, I’ve got the QR code generation happening on The Session for individual discussions, events, recordings, sessions, and tunes. For the tunes, there’s also a separate URL for each setting of a tune, specifically for printing out. I’ve added a QR code there too.

Experimenting with print stylesheets and QR codes.

I’ve been thinking about another potential use for QR codes. I’m preparing a new talk for An Event Apart Seattle. The talk is going to be quite practical—for a change—and I’m going to be encouraging people to visit some URLs. It might be fun to include the biggest possible QR code on a slide.

I’d better generate the images before Google shuts down that API.

Print styles

I really wanted to make sure that the print styles for Resilient Web Design were pretty good—or at least as good as they could be given the everlasting lack of support for many print properties in browsers.

Here’s the first thing I added:

@media print {
  @page {
    margin: 1in 0.5in 0.5in;
    orphans: 4;
    widows: 3;
  }
}

That sets the margins of printed pages in inches (I could’ve used centimetres but the numbers were nice and round in inches). The orphans: 4 declaration says that if there’s less than 4 lines on a page, shunt the text onto the next page. And widows: 3 declares that there shouldn’t be less than 3 lines left alone on a page (instead more lines will be carried over from the previous page).

I always get widows and orphans confused so I remind myself that orphans are left alone at the start; widows are left alone at the end.

I try to make sure that some elements don’t get their content split up over page breaks:

@media print {
  p, li, pre, figure, blockquote {
    page-break-inside: avoid;
  }
}

I don’t want headings appearing at the end of a page with no content after them:

@media print {
  h1,h2,h3,h4,h5 {
    page-break-after: avoid;
  }
}

But sections should always start with a fresh page:

@media print {
  section {
    page-break-before: always;
  }
}

There are a few other little tweaks to hide some content from printing, but that’s pretty much it. Using print preview in browsers showed some pretty decent formatting. In fact, I used the “Save as PDF” option to create the PDF versions of the book. The portrait version comes from Chrome’s preview. The landscape version comes from Firefox, which offers more options under “Layout”.

For some more print style suggestions, have a look at the article I totally forgot about print style sheets. There are also tips and tricks for print style sheets on Smashing Magazine. That includes a clever little trick for generating QR codes that only appear when a document is printed. I’ve used that technique for some page types over on The Session.

Design sprinting

James and I went to Ipswich last week for work. But this wasn’t part of an ongoing project—this was a short intense one-week feasibility study.

Leon from Suffolk Libraries got in touch with us about a project they’re planning to carry out soon: replacing their self-service machines with something more up-to-date. But rather than dive into commissioning the project straight away, he wisely decided to start with a one-week sprint to figure out exactly what the project would need to go ahead.

So that’s what James and I did. It was somewhat similar to the design sprint popularised by GV. We ensconced ourselves in the Ipswich library and packed a whole lot of work into five days. There was lots of collaboration, lots of sketching, lots of iterative design, and some rough’n’ready code. It was challenging, but a lot of fun. Also: we stayed in a pretty sweet AirBnB.

Our home for the week. This is a nice AirBnB.

You can read all about it in our case study. You can also read all about it from Leon’s point of view on his blog:

I can’t recommend this kind of research sprint enough. We got a report, detailed technical validation of an idea, mock ups and a plan for how to proceed, while getting staff and stakeholders involved in the project – all in the space of 5 days.

I think this approach makes a lot of sense. By the end of the week, James and I felt pretty confident about estimating times and costs for the full project. Normally trying to estimate that kind of thing can be a real guessing game. But with the small investment of one week’s worth of effort, you get a whole lot more certainty and confidence.

Have a look for yourself.

dConstruct 2015 podcast: Carla Diana

The dConstruct podcast episodes are coming thick and fast. The latest episode is a thoroughly enjoyable natter I had with the brilliant Carla Diana.

We talk about robots, smart objects, prototyping, 3D printing, and the world of teaching design.

Remember, you can subscribe to the podcast feed in any podcast software you like, or if iTunes is your thing, you can also subscribe directly in iTunes.

And don’t forget to use the discount code ‘ansible’ when you’re buying your dConstruct ticket …because you are coming to dConstruct, right?

Making

There’s definitely something stirring in the geek zeitgeist: something three-dimensional.

Tim Maly just published an article in Technology Review called Why 3-D Printing Isn’t Like Virtual Reality:

Something interesting happens when the cost of tooling-up falls. There comes a point where your production runs are small enough that the economies of scale that justify container ships from China stop working.

Meanwhile The Atlantic interviewed Brendan for an article called Why Apple Should Start Making a 3D Printer Right Now:

3D Printing is unlikely to prove as satisfying to manual labor evangelists as an afternoon spent with a monkey wrench. But by bringing more and more people into the innovation process, 3D printers could usher in a new generation of builders and designers and tinkerers, just as Legos and erector sets turned previous generations into amateur engineers and architects.

Last month Anil Dash published his wishlist for the direction this technology could take: 3D Printing, Teleporters and Wishes:

Every 3D printer should seamlessly integrate a 3D scanner, even if it makes the device cost much more. The reason is simple: If you set the expectation that every device can both input and output 3D objects, you provide the necessary fundamentals for network effects to take off amongst creators. But no, these devices are not “3D fax machines”. What you’ve actually made, when you have an internet-connected device that can both send and receive 3D-printed objects, is a teleporter.

Anil’s frustrations and hopes echo a white paper from 2010 by Michael Weinberg called It Will Be Awesome if They Don’t Screw it Up: 3D Printing, Intellectual Property, and the Fight Over the Next Great Disruptive Technology:

The ability to reproduce physical objects in small workshops and at home is potentially just as revolutionary as the ability to summon information from any source onto a computer screen.

Michael Weinberg also appears as one of the guests on an episode of ABC Radio’s Future Tense along with Tom Standage, one of my favourite non-fiction authors.

The 3-D Printer - Future Tense - ABC Radio National (Australian Broadcasting Corporation) on Huffduffer

But my favourite piece of speculation on where this technology could take us comes from Russell Davies. He gave an excellent talk as part of the BBC’s Four Thought series in which he talks not so much about The Internet Of Things, but The Geocities Of Things. I like that.

BBC - Podcasts - Four Thought: Russell M. Davies 21 Sept 2011 on Huffduffer

It’s a short talk. Take the time to listen to it, then go grab a copy of Cory Doctorow’s book Makers and have a poke around Thingiverse.

Spacelift

My sojourn up the western seaboard of the United States has come to an end. It began in San Diego with the final An Event Apart of the year, which was superb as always. From there, I travelled up to San Francisco for Cindy and Matt’s wedding celebration, followed by a few days in Seattle. The whole trip was rounded out back in California at the wonderfully titled Institute For The Future in Palo Alto. For that was the location of Science Hack Day San Francisco.

It was an amazing event. Ariel did a fantastic job—she put so much effort into making sure that everything was just right. I suspected it was going to be a lot of fun, but once again, I was blown away by the levels of ingenuity, enthusiasm and sheer brilliance on display.

In just 24 hours, the ingenious science hackers had created a particle wind chime which converts particle collisions into music that Brian Eno would be proud of, grassroots aerial mapping with balloons which produced astonishing results (including an iPad app), as well as robots and LEDs a-plenty. The list of hacks is on the wiki.

Science Hack Day San Francisco

My own hack was modest in scope. Initially, I wanted to build a visualisation based on Matt’s brilliant idea, but I found it far too daunting to try to find data in a usable format and come up with a way of drawing a customisable geocentric starmap of our corner of the galaxy. So I put that idea on the back burner and decided to build something around my favourite piece of not-yet-existing technology: the space elevator.

The idea

Spacelift compares the cost efficiency of getting payloads into geosynchronous orbit using a space elevator compared to traditional rocketry. Basically, it’s a table. But I’ve tried to make it a pretty table with a bit of data visualisation to show at a glance how much more efficient a space elevator would be.

The payloads I’ve chosen are spacecraft. Beginning with the modest Voyager 1, it gets more and more ambitious with craft like an X-Wing or the Millennium Falcon before getting crazy with the USS Enterprise and the Battlestar Galactica.

So, for example, while you could get a TIE-fighter into the Clarke belt using a single , two , or three , it would cost considerably more than using a space elevator, where you’re basically just paying for the electricity.

If you click on the dollar amount for each transport mechanism, you’ll see the price calculated as a tower of pennies. Using a , for example, will cost you a tower of pennies 22 times larger than a space elevator, assuming a space elevator is at least 38,000 kilometres tall/long. Using a space elevator, on the other hand, requires spending a tower of pennies about the same height as itself. I don’t even bother trying to visualise the relative height differences for getting anything bigger than the Tantive IV into orbit as it would require close to infinite scrolling.

I’m fairly confident of the cost-per-kilogram values for the rockets while the cost for the space elevator is necessarily fuzzy, given that the mechanism doesn’t exist yet. But by far the trickiest info to track down was the mass of fictional spacecraft. There’s plenty of information on dimensions, speed and armaments, but very little on weight. Mathias saved the day with some diligent research that uncovered the motherlode.

Having such smart, helpful people around made the whole exercise a joy. It was quite a pleasure to walk over to a group of hackers, ask “Is anyone here a rocket scientist?” and have at least one person raise their hand. The constant presence of Cosmos playing on a loop just added to the atmosphere of exploration and fun.

Implementation

I’ve put the code on GitHub, ‘cause that’s what a real hacker would do. It’s my first GitHub repository. Be gentle with me.

There’s CSS3 and HTML5 a-plenty. I deliberately don’t use the IE shim to enable styling of HTML5 structural elements in lesser versions of Internet Explorer; there wouldn’t be much point delivering RGBa, opacity and text-shadow styles to a browser that can’t handle ‘em.

The background colour changes depending on the payload. I’m using a variation of the MD5 colour encoding popularised by Dopplr and documented by Brian in his excellent new book, Designing With Data.
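If you’re curious, the gist of that encoding—as I understand it—is to hash a string and lift a colour straight out of the hex digest. Here’s a rough sketch using Node’s crypto module (the exact recipe Dopplr used may differ in its details):

// Hash the payload name and use the first six hex characters as an RGB colour.
const crypto = require('crypto');

function colourFor(name) {
  const hash = crypto.createHash('md5').update(name).digest('hex');
  return '#' + hash.slice(0, 6);
}

console.log(colourFor('Millennium Falcon')); // something like "#a1b2c3"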

I’m using League Gothic by The League Of Movable Type for the type—ya gotta have a condensed font for data visualisations, right?

There’s also a google-o-meter image from the Google chart API.

Needless to say, the layout is responsive and adapts to different viewing environments …including print.

Spacelift, printed

Using CSS transforms, each page of the table of price comparisons becomes a handy page to print out and stick on the office wall to remind yourself why the human race needs a space elevator.

Microprinter has a posse

One of the coolest things I saw when I was at PaperCamp was Tom’s microprinter:

…an experiment in physical activity streams and notification, using a repurposed receipt printer connected to the web.

Now there’s a wiki where people—like Roo Reynolds—can come together and share their experiments in microprinting:

Hackers across the country are buying up old receipt printers and imaginatively repurposing them into something new.

It’s such a great little step on the way to a Web of Things. Here’s another such step, from Fluid Interfaces, built for less than $350 using a webcam, a 3M projector, a mirror and a mobile phone:

Students at the MIT Media Lab have developed a wearable computing system that turns any surface into an interactive display screen. The wearer can summon virtual gadgets and internet data at will, then dispel them like smoke when they’re done.

Sounds like a way of levelling up in the game of being Matt Jones:

He sees mobile as something of a super power device and described something he calls “bionic noticing” - obsessively recording curious things he sees around him, driven by this multi-capable device in his pocket.

Typorn

My geek social calendar has been quite full over the past few days. On Saturday, I—along with half of the web developers in the land—went to Maidenhead for Drew and Rachel’s wedding.

Just as with Norm!’s wedding a few weeks ago, ‘twas a lovely, heartwarming affair. The pièce de résistance was the wedding “cake”: a tower of the finest British cheeses. Needless to say, I took many pictures and dutifully tagged them with the official wedding tag.

The weekend’s shenanigans extended into the start of the week. Rather than spending Monday at work, the Clearleft team made an outing to Ditchling Museum.

Despite its small size, the village of Ditchling looms large in the world of typography. Edward Johnston and Eric Gill both lived and worked there. As a result, the museum’s collection is a veritable treasure trove of typey goodness.

But we didn’t just spend the day ooh-ing and ah-ing over the wonderful pieces on display. We rolled up our sleeves and started using the printing press for ourselves, under the tutelage of Phil Baines. You may remember him from such websites as Public Lettering and such books as Penguin by Design.

It was a lot of fun. I can only echo what Stan said of his experience with the tactile inkiness of movable type:

I adore the way I can touch the past through the old metal type and really appreciate typography on a new level. I really can’t recommend classes like this enough. If you are a lover of type, you really owe it to yourself to spend some time with letterpress printing.

I was practically giggling with glee as I set 60pt Baskerville with Richard—my font of choice for Huffduffer. Handling the metal, smelling the ink, operating the printing press …it was simultaneously rough and sensual.

If you share my fetishism for the printed word, feel free to browse through my stash on Flickr. More delights are on display from Relly, Cennydd and James.

Wireframework

There’s been a lot of buzz lately around a new CSS framework called Blueprint. It’s basically a collection of resources pulled together from other sources: Khoi’s grids, Richard’s vertical rhythm, Eric’s reset and more.

Some people—including contributors to the CSS—have expressed their reservations about the non-semantic class names used in the framework. That’s a valid concern but, as Simon pointed out in the comments to Mark’s post, you don’t have to restrict yourself to those class names: you can always add your own semantics to the markup.

I don’t see myself using Blueprint. It just seems too restrictive for use in a real-world project. Maybe if I’m building a grid-based layout that’s precisely 960 pixels wide it could save me some time, but I’m mostly reminded of the quote apocryphally attributed to Henry Ford about the Model T:

The customer can have any color he wants so long as it’s black.

Unless I’m creating cookie-cutter sites, I don’t think a CSS framework can help me. That said, I think a framework like Blueprint has its place.

At Clearleft, a lot of our work involves wireframing. Every Information Architect has their own preference for tools and formats for creating wireframes and prototypes: some use Visio, others Omnigraffle. James and Richard usually start with paper and then move on to HTML, CSS and even a dab of JavaScript.

This results in quick wireframes that illustrate hierarchy, are addressable and allow for a good level of interaction. Creating HTML wireframes requires a different mindset to creating documents intended for the Web. You don’t have to worry about cross-browser CSS, bulletproof markup or unobtrusive JavaScript. With those concerns out of the equation, the benefits of using cookie-cutter code really come to the fore.

So while I might have reservations about using a JavaScript library on a production site, I’d have no such qualms when it comes to generating a quick prototype. The same goes for Blueprint. I think it could be ideally suited to HTML wireframes.

I may be a bit of a control freak, but I’d no sooner use a CSS framework for a live site than I’d use clip art for images. I firmly believe that creating good markup is a craft that, like good design, takes time. It may seem unrealistic to some, but I don’t want to compromise that quality without a very good reason.

That’s my hard-nosed attitude when it comes to creating documents for the World Wide Web. If the documents are intended purely as wireframes for internal use, then my attitude softens considerably. Then I think a framework like Blueprint could really shine.

Print stylesheets

CSS Naked Day was fun. It felt almost voyeuristic to peek under the CSS skirts of so many sites. It also made me realise that the browser default styles are what people are going to see if they decide to print out a page from many CSS-based sites.

I’ve had a rudimentary print stylesheet in place for Adactio for a while now, although I should probably revisit it and tweak it some time. But a lot of other sites that I’ve designed have been woefully lacking in print stylesheets.

For instance, the situation with the DOM Scripting site was brought home to me when I received this message from Adam Messinger:

Would it be possible to get a print stylesheet for the errata page that does a better job of preserving the table layout? As it is now, it’s sometimes hard to tell which page numbers match up to what errors. Just some borders on the table would help a lot.

That one made me slap my forehead. Of course! If ever there was a web page that was likely to be printed out, the errata for a printed book would be it.

As well as adding borders to the errata table, I set to work on creating a stylesheet for the whole site. It was fairly quick and painless. I hid the navigation, let the content flow into one column, set the font sizes in points and used a minimum amount of colour.
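In case it’s useful, the bones of it looked something like this—the selectors are simplified for illustration rather than copied verbatim from the site:

@media print {
  /* hide the navigation */
  #navigation {
    display: none;
  }
  /* let the content flow into a single column */
  #content {
    float: none;
    width: auto;
  }
  /* set font sizes in points and keep colour to a minimum */
  body {
    font-size: 12pt;
    color: #000;
    background: #fff;
  }
}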

DOM Scripting blog on screen / DOM Scripting blog in print

I did much the same for Jessica’s site, WordRidden.

WordRidden on screen / WordRidden in print

Principia Gastronomica was crying out for a print stylesheet. Half of the entries on that blog are recipes. Most people don’t have computer screens in their kitchens so it’s very likely that the recipes will be printed out.

A lot of the entries on Principia Gastronomica make heavy use of images. Everyone likes pictures of food, after all. I was faced with the question of whether or not to include these images in the printed versions.

In the end, I decided not to include the images. Firstly, it’s a real pain trying to ensure that the images don’t get split over two pages (page-break-before would be a draconian and wasteful solution). Secondly, I bowed to Jessica’s wisdom and experience in this matter. She often prints out recipes from sites like Epicurious and, when she does, she wants just the facts. Also, these are pages that are likely to printed out in the home, probably on a basic inkjet printer, rather than in an office equipped with a nice laser printer.

Principia Gastronomica on screen / Principia Gastronomica in print

If you’re implementing a print stylesheet for your site, I highly recommend reading Richard’s guide, The Elements of Typographic Style Applied to the Web. All the advice is good for screen styles, but is especially applicable to print.

These articles on A List Apart are also required reading:

In an interview over on Vitamin, Eric makes the point that context is everything when deciding which stylesheets to serve up. Clearly, articles on A List Apart are likely to be printed out to be read, but the front page is more likely to be printed out to get a hard copy of the design.

For a detailed anatomy of a print stylesheet, be sure to read the latest in Mark’s Five Simple Steps to Typesetting on the web series: Printing the web.

Design, old and new

One of the very first panels on the very first day of South by Southwest was Traditional design and new technology. The subject matter and the people couldn’t be faulted but there were some technical difficulties with the sound. I was at the back of the room and the dodgy mics made it hard going at times.

Khoi and Mark had some really good insights into the role of traditional design disciplines in the brave new media world. I enjoyed the fact that the panelists weren’t always in agreement: I like it when things get stirred up a bit.

Towards the end of the discussion, a question came up that turned the subject on its head: how has new technology affected old media. I didn’t get the chance to mention it at the time, but I immediately thought of last year’s Guardian redesign.

There are a lot of very webby touches to the new-look Guardian: blue underlined “links”, sidebars with the acronym FAQ, etc. Perhaps it’s a result of this webbiness, but I really, really like the paper’s new look and feel.

I’m not the only one. When Shaun came to visit, he was quite taken with the Guardian. The custom made typeface — Egyptian — sealed the deal.

I didn’t post my initial reaction to the paper’s new look because I wanted to allow some time to live with it for a while. My feelings haven’t changed though. I still like it a lot.

I do wonder, though, whether my emotional response to the design stems from the fact that I’m a web-based kind of guy with a web-based aesthetic. It would be interesting to compare my reaction (or Shaun’s) with that of someone who doesn’t spend a lot of time browsing websites.