Journal tags: test


Bookmarklets for testing your website

I’m at day two of Indie Web Camp Brighton.

Day one was excellent. It was really hard to choose which sessions to go to because they all sounded interesting. That’s a good problem to have.

I ended up participating in:

  • a session on POSSE,
  • a session on NFC tags,
  • a session on writing, and
  • a session on testing your website that was hosted by Ros.

In that testing session I shared some of the bookmarklets I use regularly.

Bookmarklets? They’re bookmarks that sit in the toolbar of your desktop browser. Just like any other bookmark, they’re links. The difference is that these links begin with javascript: rather than http. That means you can put programmatic instructions inside the link. Click the bookmark and the JavaScript gets executed.

In my mind, there are two different approaches to making a bookmarklet. One kind of bookmarklet contains lots of clever JavaScript—that’s where the smart stuff happens. The other kind is deliberately dumb: all it does is take the URL of the current page and pass it to another service—that’s where the smart stuff happens.

I like that second kind of bookmarklet.
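
To give you an idea of what’s inside one of these, here’s a rough sketch of the kind of javascript: URL involved, using the HTML validation example: it just hands the current page’s address to the W3C’s Nu HTML checker (written out over multiple lines for readability; a real bookmarklet is all on one line):

```js
javascript:(function () {
  // Take the URL of the current page and pass it to the W3C's Nu HTML checker
  window.open('https://validator.w3.org/nu/?doc=' + encodeURIComponent(location.href));
})();
```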

Here are some bookmarklets I’ve made. You can drag any of them up to the toolbar of your browser. Or you could create a folder called, say, “bookmarklets”, and drag these links up there.

Validation: This bookmarklet will validate the HTML of whatever page you’re on.

Validate HTML

Carbon: This bookmarklet will run the domain through the website carbon calculator.

Calculate carbon

Accessibility: This bookmarklet will run the current page through the WAVE Web Accessibility Evaluation Tool.

WAVE

Performance: This bookmarklet will run the current page through PageSpeed Insights, which includes a Lighthouse test.

PageSpeed

HTTPS: This bookmarklet will run your site through the SSL checker from SSL Labs.

SSL Report

Headers: This bookmarklet will test the security headers on your website.

Security Headers

Drag any of those links to your browser’s toolbar to “install” them. If you don’t like one, you can delete it the same way you can delete any other bookmark.

PageSpeed Insights bookmarklet

I’m a little obsessed with web performance. I like being able to check a page’s core web vitals quickly and easily.

Four years ago, I made a Lighthouse bookmarklet. Whatever web page you were on, when you clicked on the bookmarklet you’d get the Lighthouse results for that page. Handy!

It doesn’t work anymore. This is probably because Google are in the loop. Four years is a pretty good innings for anything involving that company.

I kid (mostly). Lighthouse itself is still going strong, despite being a Google product. But the bookmarklet needs updating.

Rather than just get Lighthouse results, I figured that the full PageSpeed Insights results would be even better. If your website is in the Chrome UX report, you get to see those CrUX details too.

So here’s the updated bookmarklet:

PageSpeed Insights

Drag that up to your desktop browser’s bookmarks toolbar. Press it whenever you want to test the page you’re on.
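
If you’d rather roll your own, the bookmarklet boils down to something like this sketch. The ?url= query parameter is an assumption on my part here, so check it against the real bookmarklet above:

```js
javascript:(function () {
  // Pass the current page's address to PageSpeed Insights
  // (the ?url= parameter is an assumption; verify against the actual bookmarklet)
  window.open('https://pagespeed.web.dev/analysis?url=' + encodeURIComponent(location.href));
})();
```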

Coding prototypes

We do quite a bit of prototyping at Clearleft. There’s no better way to reduce risk than to get something in front of users as quickly as possible to test whether you’re on the right track or not.

As Benjamin said in the podcast episode on prototyping:

It’s something to look at, something to prod. And ideally you’re trying to work out what works and what doesn’t.

Sometimes the prototype is mocked up in Figma. Preferably it’s built in code—HTML, CSS, and JavaScript. Having a prototype built in the materials of the medium helps establish a plausible suspension of disbelief during testing.

Also, as Trys said in that same podcast episode:

Prototypical code isn’t production code. It’s quick and it’s often a little bit dirty and it’s not really fit for purpose in that final deliverable. But it’s also there to be inspiring and to gather a team and show that something is possible.

I can’t reiterate that enough: prototype code isn’t production code.

I’ve written about the two different mindsets before:

So these two kinds of work require very different attitudes. For production work, quality is key. For prototyping, making something quickly is what matters.

Addy recently wrote an excellent blog post on the topic of prototyping. The value of a prototype is in the insight it imparts, not the code.

It’s crucial to remember that in a prototype, the code serves merely as a medium—a way to facilitate understanding. It’s a means to an end, not the end itself. The code of a prototype is disposable and mutable. In contrast, the lessons learned from a prototype, the insights gained from user interaction and feedback, are far more durable and impactful.

This!

It can be tempting to re-use code from a prototype. I get it. It seems like a waste to throw away code and build something from scratch. But trust me—and I speak from experience here—it will take more time to wrangle prototype code into something that’s production-ready.

The problem is that quality is often invisible. Think about semantics, performance, security, privacy, and accessibility. Those matter—for production code—but they’re under the surface. For someone who doesn’t understand the importance of those hidden qualities, a prototype that looks like it works seems ready to ship. It’s understandable that they’d balk at the idea of just throwing that code away and writing new code. Sometimes the suspension of disbelief that a prototype is aiming for works too well.

As is so often the case, this isn’t a technical problem. It’s a communication issue.

Accessibility audits for all

It’s often said that it’s easier to make a fast website than it is to keep a website fast. Things slip through. If you’re not vigilant, performance can erode without you noticing.

It’s a similar story for other invisible but important facets of your website: privacy, security, accessibility. Because they’re hidden from view, you won’t be able to see if there’s a regression.

That’s why it’s a good idea to have regular audits for performance, privacy, security, and accessibility.

I wrote about accessibility testing a while back, and how there’s quite a bit that you can do for yourself before calling in an expert to look at the really gnarly stuff:

When you commission an accessibility audit, you want to make sure you’re getting the most out of it. Don’t squander it on issues that you can catch and fix yourself. Make sure that the bulk of the audit is being spent on the specific issues that are unique to your site.

I recently did an internal audit of the Clearleft website. After writing up the report, I also did a lunch’n’learn to share my methodology. I wanted to show that there’s some low-hanging fruit that pretty much anyone can catch.

To start with, there’s keyboard navigation. Put your mouse and trackpad to one side and use the tab key to navigate around.

Caveat: depending on what browser you’re using, you might need to update some preferences for keyboard navigation to work on links. If you’re using Safari, go to “Preferences”, then “Advanced”, and tick “Press Tab to highlight each item on a web page.”

Tab around and find out. You should see some nice chunky :focus-visible styles on links and form fields.
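
If the focus styles on a site aren’t obvious, that’s usually a cheap fix. Something along these lines (illustrative values, not the actual styles of any particular site) gives keyboard users a clear indication of where they are:

```css
/* Chunky, clearly visible focus indicator for keyboard users (illustrative values) */
a:focus-visible,
button:focus-visible,
input:focus-visible,
select:focus-visible,
textarea:focus-visible {
  outline: 0.25em solid currentColor;
  outline-offset: 0.2em;
}
```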

Here’s something else that anyone can do: zoom in. Increase the magnification to 200%. Everything should scale proportionally. How about 500%? You’ll probably see a mobile-friendly layout. That’s fine. As long as nothing is broken or overlapping, you’re good.

At this point, I reach for some tools. I’ve got some bookmarklets that do similar things: tota11y and ANDI. They both examine the source HTML and CSS to generate reports on structure, headings, images, forms, and so on.

These tools are really useful, but you need to be able to interpret the results. For example, a tool can tell you if an image has no alt text. But it can’t tell you if an image has good or bad alt text.
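
The mechanical half of that check is easy to script yourself. A console snippet along these lines lists any images with no alt attribute at all; deciding whether existing alt text is any good still needs a human:

```js
// Log any images that have no alt attribute at all
// (an empty alt="" is valid for decorative images, so it isn't flagged here)
document.querySelectorAll('img:not([alt])').forEach((img) => {
  console.log('Missing alt attribute:', img.currentSrc || img.src);
});
```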

Likewise, these tools are great for catching colour-contrast issues. But there’s a big difference between a colour-contrast issue on the body copy compared to a colour-contrast issue on one unimportant page element.

I think that demonstrates the most important aspect of any audit: prioritisation.

Finding out that you have accessibility issues isn’t that useful if they’re all presented as an undifferentiated list. What you really need to know is which issues are the most important to fix.

By the way, I really like the way that the Gov.uk team prioritises accessibility concerns:

The team puts accessibility concerns in 2 categories:

  1. Theoretical: A question or statement regarding the accessibility of an implementation within the Design System without evidence of real-world impact.
  2. Evidenced: Sharing new research, data or evidence showing that an implementation within the Design System could cause barriers for disabled people.

The team will usually prioritise evidenced issues and queries over theoretical ones.

When I wrote up my audit for the Clearleft website, I structured it in order of priority. The most important things to fix are at the start of the audit. I also used a simple scale for classifying the severity of issues: low, medium, and high priority.

Thankfully there were no high-priority issues. There were a couple of medium-priority issues. There were plenty of low-priority issues. That’s okay. That’s a pretty good distribution.

If you’re interested, here’s the report I delivered…

Accessibility audit on clearleft.com

Colour contrast

There are a few issues with the pink colour. When it’s used on a grey background, or when it’s used as a background colour for white text, the colour contrast isn’t high enough.

The SVG arrow icon could be improved too.

Recommendations

Medium priority
  • Change the pink colour universally to be darker. The custom property --red is currently rgb(234, 33, 90). Change it to rgb(210, 20, 73) (thanks, James!)
Low priority
  • The SVG arrow icon currently uses currentColor. Consider hardcoding solid black (or a very, very dark grey) instead.
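
That medium-priority change amounts to a one-line edit to the custom property, sketched here with the values from the audit and assuming the property is declared on :root:

```css
:root {
  /* was: --red: rgb(234, 33, 90); not enough contrast on grey or behind white text */
  --red: rgb(210, 20, 73);
}
```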

Images

Alt text is improving on the site. There’s reasonable alt text on the top-level pages and the first screen’s worth of case studies and blog posts. I made a sweep through these pages a while back to improve the alt text, but I haven’t done older blog posts and case studies.

Recommendations

Medium priority
  • Make a sweep of older blog posts and case studies and fix alt text.
Low priority
  • Images on the contact page have alt text that starts with “A photo of…” — this is redundant and can be removed.

Headings

The site is using headings sensibly. Sometimes the nesting of headings isn’t perfect, but this is a low priority issue. For example, on the contact page there’s an h1 followed by two h3s. In theory this isn’t correct. In practice (for screen reader users) it’s not an issue.

Recommendations

Low priority
  • On the home page, “UX London 2023” should probably be h3 instead of h1.
  • On the case studies index page we’re currently using h3 headings for the industry sector (“Charities”, “Education” etc.) but these should probably not be headings at all. On the blog index page we use a class “Tags” for a similar purpose. Consider reusing that pattern on the case studies index page.
  • On the about index page, “We’re driven to be” is an h3 and the subsequent three headings are h2s. Ideally this would be reversed: a single h2 followed by three h3s.

Link text

Sometimes the same text is used for different links.

Recommendations

Low priority
  • On the home page the text “Read the case study” is re-used for multiple links. It would be better if each link were different e.g. “Read about The Natural History Museum.”

Forms

The only form on the site is the newsletter sign-up form. It’s marked up pretty well: the input has an associated label, although a visible (clickable) label would be better.

Tabbing order

The site doesn’t use JavaScript to mess with tabbing order for keyboard users. The source order of elements in the markup generally makes sense so all is good.

The focus styles are nice and clear too!

Structure

The site is using HTML landmark elements sensibly (header, nav, main, footer, etc.).

Accessibility testing

I was doing some accessibility work with a client a little while back. It was mostly giving their site the once-over, highlighting any issues that we could then discuss. It was an audit of sorts.

While I was doing this I started to realise that not all accessibility issues are created equal. I don’t just mean in their severity. I mean that some issues can—and should—be caught early on, while other issues can only be found later.

Take colour contrast. This is something that should be checked before a line of code is written. When designs are being sketched out and then refined in a graphical editor like Figma, that’s the time to check the ratio between background and foreground colours to make sure there’s enough contrast between them. You can catch this kind of thing later on, but by then it’s likely to come with a higher cost—you might have to literally go back to the drawing board. It’s better to find the issue when you’re at the drawing board the first time.
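
The arithmetic behind that check is well defined: WCAG’s contrast ratio formula. Here’s a sketch of the calculation in JavaScript, the same sums that design tool plug-ins and online contrast checkers do for you:

```js
// WCAG 2.x contrast ratio between two sRGB colours given as [r, g, b] arrays (0-255).
// A ratio of 4.5:1 or better passes level AA for normal-sized text.
function relativeLuminance([r, g, b]) {
  const [R, G, B] = [r, g, b].map((channel) => {
    const c = channel / 255;
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * R + 0.7152 * G + 0.0722 * B;
}

function contrastRatio(foreground, background) {
  const l1 = relativeLuminance(foreground);
  const l2 = relativeLuminance(background);
  const [lighter, darker] = l1 > l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// White text on rgb(234, 33, 90) comes out at roughly 4.3:1, just short of the 4.5:1 AA threshold
console.log(contrastRatio([255, 255, 255], [234, 33, 90]).toFixed(2));
```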

Then there’s the HTML. Most accessibility issues here can be caught before the site goes live. Usually they’re issues of omission: form fields that don’t have an explicitly associated label element (using the for and id attributes); images that don’t have alt text; pages that don’t have sensible heading levels or landmark regions like main and nav. None of these are particularly onerous to fix and they come with the biggest bang for your buck. If you’ve got sensible forms, sensible headings, alt text on images, and a solid document structure, you’ve already covered the vast majority of accessibility issues with very little overhead. Some of these checks can also be automated: alt text for images; labels for inputs.
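
To make those omissions concrete, this is the kind of markup that avoids them; a small illustrative sketch rather than code from any particular site:

```html
<!-- A form field with an explicitly associated label, via matching for/id attributes -->
<label for="email">Email address</label>
<input type="email" id="email" name="email">

<!-- An image with alt text describing its content -->
<img src="/images/workshop.jpg" alt="Three people sketching interface ideas on a whiteboard">
```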

Then there’s interactive stuff. If you only use native HTML elements you’re probably in the clear, but chances are you’ve got some bespoke interactivity on your site: a carousel; a mega dropdown for navigation; a tabbed interface. HTML doesn’t give you any of those out of the box so you’d need to make your own using a combination of HTML, CSS, JavaScript and ARIA. There’s plenty of testing you can do before launching—I always ask myself “What would Heydon do?”—but these components really benefit from being tested by real screen reader users.
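
Take a tabbed interface as an example: the roles, states, and keyboard handling all have to be wired up by hand. Here’s a rough sketch of the markup side alone (the JavaScript for switching panels and arrow-key support isn’t shown), and it’s exactly this kind of bespoke widget that benefits most from testing with real screen reader users:

```html
<!-- A bespoke tabbed interface: HTML gives you none of this for free -->
<div role="tablist" aria-label="Case studies">
  <button role="tab" id="tab-research" aria-selected="true" aria-controls="panel-research">Research</button>
  <button role="tab" id="tab-design" aria-selected="false" aria-controls="panel-design" tabindex="-1">Design</button>
</div>
<div role="tabpanel" id="panel-research" aria-labelledby="tab-research">…</div>
<div role="tabpanel" id="panel-design" aria-labelledby="tab-design" hidden>…</div>
```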

So if you commission an accessibility audit, you should hope to get feedback that’s mostly in that third category—interactive widgets.

If you get feedback on document structure and other semantic issues with the HTML, you should fix those issues, sure, but you should also see what you can do to stop those issues going live again in the future. Perhaps you can add some steps in the build process. Or maybe it’s more about making sure the devs are aware of these low-hanging fruit. Or perhaps there’s a framework or content management system that’s stopping you from improving your HTML. Then you need to execute a plan for ditching that software.

If you get feedback about colour contrast issues, just fixing the immediate problem isn’t going to address the underlying issue. There’s a process problem, or perhaps a communication issue. In that case, don’t look for a technical solution. A design system, for example, will not magically fix a workflow issue or route around the problem of designers and developers not talking to each other.

When you commission an accessibility audit, you want to make sure you’re getting the most out of it. Don’t squander it on issues that you can catch and fix yourself. Make sure that the bulk of the audit is being spent on the specific issues that are unique to your site.

Weighing up UX

You can listen to an audio version of Weighing up UX.

This is the month of UX Fest 2021—this year’s online version of UX London. The festival continues with masterclasses every Tuesday in June and a festival day of talks every Thursday (tickets for both are still available). But it all kicked off with the conference part last week: three back-to-back days of talks.

I have the great pleasure of hosting the event so not only do I get to see a whole lot of great talks, I also get to quiz the speakers afterwards.

Right from day one, a theme emerged that continued throughout the conference and I suspect will continue for the rest of the festival too. That topic was metrics. Kind of.

See, metrics come up when we’re talking about A/B testing, growth design, and all of the practices that help designers get their seat at the table (to use the well-worn cliché). But while metrics are very useful for measuring design’s benefit to the business, they’re not really cut out for measuring user experience.

People have tried to quantify user experience benefits using measurements like Net Promoter Score, which is about as useful as reading tea leaves or chicken entrails.

So we tend to equate user experience gains with business gains. That makes sense. Happy users should be good for business. That’s a reasonable hypothesis. But it gets tricky when you need to make the case for improving the user experience if you can’t tie it directly to some business metric. That’s when we run into the McNamara fallacy:

Making a decision based solely on quantitative observations (or metrics) and ignoring all others.

The way out of this quantitative blind spot is to use qualitative research. But another theme of UX Fest was just how woefully under-represented researchers are in most organisations. And even when you’ve gone and talked to users and you’ve got their stories, you still need to play that back in a way that makes sense to the business folks. These are stories. They don’t lend themselves to being converted into charts’n’graphs.

And so we tend to fall back on more traditional metrics, based on that assumption that what’s good for user experience is good for business. But it’s a short step from making that equivalency to flipping the equation: what’s good for the business must, by definition, be good user experience. That’s where things get dicey.

Broadly speaking, the talks at UX Fest could be put into two categories. You’ve got talks covering practical subjects like product design, content design, research, growth design, and so on. Then you’ve got the higher-level, almost philosophical talks looking at the big picture and questioning the industry’s direction of travel.

The tension between these two categories was the highlight of the conference for me. It worked particularly well when there were back-to-back talks (and joint Q&A) featuring a hands-on case study that successfully pushed the needle on business metrics followed by a more cautionary talk asking whether our priorities are out of whack.

For example, there was a case study on growth design, which emphasised the importance of A/B testing for validation, immediately followed by a talk on deceptive dark patterns. Now, I suspect that if you were to A/B test a deceptive dark pattern, the test would validate its use (at least in the short term). It’s no coincidence that a company like Booking.com, which lives by the A/B sword, is also one of the companies sued for using distressing design patterns.

Using A/B tests alone is like using a loaded weapon without supervision. They only tell you what people do. And again, the solution is to make sure you’re also doing qualitative research—that’s how you find out why people are doing what they do.

But as I’ve pondered the lessons from last week’s conference, I’ve come to realise that there’s also a danger of focusing purely on the user experience. Hear me out…

At one point, the question came up as to whether deceptive dark patterns were ever justified. What if it’s for a good cause? What if the deceptive dark pattern is being used by an organisation actively campaigning to do good in the world?

In my mind, there was no question. A deceptive dark pattern is wrong, no matter who’s doing it.

(There’s also the problem of organisations that think they’re doing good in the world: I’m sure that every talented engineer that worked on Google AMP honestly believed they were acting in the best interests of the open web even as they worked to destroy it.)

Where it gets interesting is when you flip the question around.

Suppose you’re a designer working at an organisation that is decidedly not a force for good in the world. Say you’re working at Facebook, a company that prioritises data-gathering and engagement so much that they’ll tolerate insurrectionists and even genocidal movements. Now let’s say there’s talk in your department of implementing a deceptive dark pattern that will drive user engagement. But you, being a good designer who fights for the user, take a stand against this and you successfully find a way to ensure that Facebook doesn’t deploy that deceptive dark pattern.

Yay?

Does that count as being a good user experience designer? Yes, you’ve done good work at the coalface. But the overall business goal is like a deceptive dark pattern that’s so big you can’t take it in. Is it even possible to do “good” design when you’re inside the belly of that beast?

Facebook is a relatively straightforward case. Anyone who’s still working at Facebook can’t claim ignorance. They know full well where that company’s priorities lie. No doubt they sleep at night by convincing themselves they can accomplish more from the inside than without. But what about companies that exist in the grey area of being imperfect? Frankly, what about any company that relies on surveillance capitalism for its success? Is it still possible to do “good” design there?

There are no easy answers and that’s why it so often comes down to individual choice. I know many designers who wouldn’t work at certain companies …but they also wouldn’t judge anyone else who chooses to work at those companies.

At Clearleft, every staff member has two levels of veto on client work. You can say “I’m not comfortable working on this”, in which case, the work may still happen but we’ll make sure the resourcing works out so you don’t have anything to do with that project. Or you can say “I’m not comfortable with Clearleft working on this”, in which case the work won’t go ahead (this usually happens before we even get to the pitching stage although there have been one or two examples over the years where we’ve pulled out of the running for certain projects).

Going back to the question of whether it’s ever okay to use a deceptive dark pattern, here’s what I think…

It makes no difference whether it’s implemented by ProPublica or Breitbart; using a deceptive dark pattern is wrong.

But there is a world of difference in being a designer who works at ProPublica and being a designer who works at Breitbart.

That’s what I’m getting at when I say there’s a danger to focusing purely on user experience. That focus can be used as a way of avoiding responsibility for the larger business goals. Then designers are like the soldiers on the eve of battle in Henry V:

For we know enough, if we know we are the king’s subjects: if his cause be wrong, our obedience to the king wipes the crime of it out of us.

Web Share API test

Remember a while back I wrote about some odd behaviour with the Web Share API in Safari on iOS?

When the share() method is triggered, iOS provides multiple ways of sharing: Messages, Airdrop, email, and so on. But the simplest option is the one labelled “copy”, which copies to the clipboard.

Here’s the thing: if you’ve provided a text parameter to the share() method then that’s what’s going to get copied to the clipboard—not the URL.

That’s a shame. Personally, I think the url field should take precedence.
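
For reference, here’s roughly the shape of a share button handler that passes all three fields (url, title, and text) to the Web Share API; the button selector is illustrative:

```js
// Hypothetical share button; navigator.share() has to be called from a user gesture
const button = document.querySelector('.share-button');
button.addEventListener('click', async () => {
  if (!navigator.share) return; // feature detection: not every browser supports the Web Share API
  try {
    await navigator.share({
      url: location.href,
      title: document.title,
      text: 'A short description of the page'
    });
  } catch (error) {
    // The user dismissed the share sheet, or sharing failed
    console.error(error);
  }
});
// In Safari on iOS, choosing "copy" from the share sheet puts the text field
// on the clipboard rather than the url
```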

Tess filed a bug soon after, which was very gratifying to see.

Now Phil has put together a test case:

  1. Share URL, title, and text
  2. Share URL and title
  3. Share URL and text

Very handy! The results (using the “copy” to clipboard action) are somewhat like rock, paper, scissors:

  • URL beats title,
  • text beats URL,
  • nothing beats text.

So it’s more like rock, paper, high explosives.

Lighthouse bookmarklet

I use Firefox. You should too. It’s fast, secure, and more privacy-focused than the leading browser from the big G.

When it comes to web development, the CSS developer tooling in Firefox is second-to-none. But when it comes to JavaScript and network-related debugging (like service workers), Chrome’s tools are better than Firefox’s (for now). For example, Chrome has a tab in its developer tools that lets you run Lighthouse on the currently open tab.

Yesterday, I got the Calibre newsletter, which always has handy performance-related links from Karolina. She pointed to a Lighthouse extension for Firefox. “Excellent!”, I thought, and I immediately installed it. But I had some qualms about installing a plug-in from Google into a browser from Mozilla, particularly as the plug-in page says:

This is not a Recommended Extension. Make sure you trust it before installing

Well, I gave it a go. It turns out that all it actually does is redirect to the online version of Lighthouse. “Hang on”, I thought. “This could just be a bookmarklet!”

So I immediately uninstalled the browser extension and made this bookmarklet:

Lighthouse

Drag that up to your desktop browser’s bookmarks toolbar. Press it whenever you’re on a site that you want to test.

Telling the story of performance

At Clearleft, we’ve worked with quite a few clients on site redesigns. It’s always a fascinating process, particularly in the discovery phase. There’s that excitement of figuring out what’s currently working, what’s not working, and what’s missing completely.

The bulk of this early research phase is spent diving into the current offering. But it’s also the perfect time to do some competitor analysis—especially if we want some answers to the “what’s missing?” question.

It’s not all about missing features though. Execution is equally important. Our clients want to know how their users’ experience shapes up compared to the competition. And when it comes to user experience, performance is a huge factor. As Andy says, performance is a UX problem.

There’s no shortage of great tools out there for measuring (and monitoring) performance metrics, but they’re mostly aimed at developers. Quite rightly. Developers are the ones who can solve most performance issues. But that does make the tools somewhat impenetrable if you don’t speak the language of “time to first byte” and “first contentful paint”.

When we’re trying to show our clients the performance of their site—or their competitors—we need to tell a story.

Web Page Test is a terrific tool for measuring performance. It can also be used as a story-telling tool.

You can go to webpagetest.org/easy if you don’t need to tweak settings much beyond the typical site visit (slow 3G on mobile). Pop in your client’s URL and, when the test is done, you get a valuable but impenetrable waterfall chart. It’s not exactly the kind of thing I’d want to present to a client.

Fortunately there’s an attention-grabbing output from each test: video. Download the video of your client’s site loading. Then repeat the test with the URL of a competitor. Download that video too. Repeat for as many competitor URLs as you think appropriate.

Now take those videos and play them side by side. Presentation software like Keynote is perfect for showing multiple videos like this.

This is so much more effective than showing a table of numbers! Clients get to really feel the performance difference between their site and their competitors.

Running all those tests can take time though. But there are some other tools out there that can give a quick dose of performance information.

SpeedCurve recently unveiled Page Speed Benchmarks. You can compare the performance of sites within a particular sector like travel, retail, or finance. By default, you’ll get a filmstrip view of all the sites loading side by side. Click through on each one and you can get the video too. It might take a little while to gather all those videos, but it’s quicker than using Web Page Test directly. And it might be that the filmstrip view is impactful enough for telling your performance story.

If, during your discovery phase, you find that performance is being badly affected by third-party scripts, you’ll need some way to communicate that. Request Map Generator is fantastic for telling that story in a striking visual way. Pop the URL in there and then take a screenshot of the resulting visualisation.

The beginning of a redesign project is also the time to take stock of current performance metrics so that you can compare the numbers after your redesign launches. Crux.run is really great for tracking performance over time. You won’t get any videos but you will get some very appealing charts and graphs.

Web Page Test, Page Speed Benchmarks, and Request Map Generator are great for telling the story of what’s happening with performance right now. Crux.run balances that with the story of performance over time.

Measuring performance is important. Communicating the story of performance is equally important.

Summer of Apollo

It’s July, 2019. You know what that means? The 50th anniversary of the Apollo 11 mission is this month.

I’ve already got serious moon fever, and if you’d like to join me, I have some recommendations…

Watch the Apollo 11 documentary in a cinema. The 70mm footage is stunning, the sound design is immersive, the music is superb, and there’s some neat data visualisation too. Watching a preview screening in the Duke of York’s last week was pure joy from start to finish.

Listen to 13 Minutes To The Moon, the terrific ongoing BBC podcast by Kevin Fong. It’s got all my favourite titans of NASA: Michael Collins, Margaret Hamilton, and Charlie Duke, amongst others. And it’s got music by Hans Zimmer.

Experience the website Apollo 11 In Real Time on the biggest monitor you can. It’s absolutely wonderful! From July 16th, you can experience the mission timeshifted by exactly 50 years, but if you don’t want to wait, you can dive in right now. It genuinely feels like being in Mission Control!

From Ideation to Iteration: Design Thinking for Work and for Life by Una Kravets

The unstoppable engine of An Event Apart in Seattle rolls onward. The second talk of the second day is from the indomitable Una Kravets. Her talk is called From Ideation to Iteration: Design Thinking for Work and for Life. Here’s the description:

Have you ever had a looming deadline and no idea where to start? Do you have a big task to face but are having trouble figuring out how to get there? Have you ever wanted to learn a technology, or build a side project but didn’t know what to build? In this talk, Una will go over an actionable approach and several techniques for applying design thinking to our work and every aspect of our lives. This includes ideating product features, blog post ideas, or even what general direction we want to move toward in our businesses. We’ll go over traditional approaches and breakout techniques that will leave you feeling more in control and ready to reach your goals.

Let’s see if I can keep up with this…

Una’s going to talk about design thinking. Una does a variety of different work outside her day job—a podcast, dev doodles, etc. Sometimes at work she’s given big, big tasks like “build a design system.” Her reaction is “whut?” How do you even start with a task like that?

Also, we make big goals sometimes. Who makes new year’s resolutions? But what does “get more fit” or “earn more income” even mean?

In this talk, Una will break things down and show how design thinking can be applied to anything.

Design thinking

A strategic, solution-based approach to solving problems.

It’s a process. It’s iterative. It’s used by IBM, Apple, GE, and it’s taught to students at a lot of different universities.

Tim Brown of Ideo points out that there’s a Venn diagram of feasibility, desirability, and viability. In the middle is the point of innovation.

The steps are:

  1. Empathise — develop a deep understanding of the challenge
  2. Define — clearly articulate the problem.
  3. Ideate — brainstorm potential solutions.
  4. Prototype — design a prototype to test potential solutions.
  5. Test — and iterate.

Una feels that the feedback part is potentially missing there. IBM uses a loop diagram to include feedback. Ideo uses these steps:

  1. Frame a question
  2. Gather inspiration
  3. Generate ideas
  4. Make ideas tangible
  5. Test to learn
  6. Share the story

Another way to think about this is how the teams interact. There’s divergence and convergence throughout. Then there’s the double diamond: design, deliver, discover.

Ideo wrote a book called Design Thinking for Libraries. It has some useful tools and diagrams. Una found this fascinating because it wasn’t specifically about products. In healthcare, GE Health used design thinking for their Adventure Series MRI scanners—it resulted in 80% less need to use sedatives. The solution might seem obvious to us in hindsight, but it wouldn’t have been obvious to medical professionals in their everyday busy lives.

Design thinking is bullshit, says Natasha Jen. She describes how it’s become an over-used term that has lost its value. Una can relate—she gets annoyed when there’s too much talking and not enough doing. Design thinking is not diagrams and sticky notes. It’s a process. It’s very much about doing something to shift perspective. It’s another tool in our toolkit, even if it has become an overused term like “synergy.”

Back in 2014, when Una was working at IBM, she thought design thinking was stupid. It seemed to be all talk, talk, talk. It felt tedious. It was 75% talking and 25% development. The balance wasn’t right.

But it’s also true that solutioning too early leads to cruft. If you end up going back to the drawing board, maybe the time could’ve been better spent doing some design thinking up front. Focus on the problem, not the solution.

Now some developers might be thinking that this is outside their area. But it can really help you in your career. It can help you choose technologies. Also, everyone on the team, regardless of role, is responsible for the product.

1. Empathise

Understand your users and the challenge. This could be a task that a user is trying to accomplish, or it could be you trying to get a raise.

Sometimes we forget who our user is. The techniques in this first step help us solve their needs, not our needs.

You might have many users that you’re trying to help, but try focusing in on a few. You can create personas. When Una was working at Digital Ocean, the users were developers. The personas reflected this. Do the research to get to know your users.

Next, you can create an empathy map for your users. What are their goals? What are their hopes? What will they gain from your product?

Connect the empathy map to a specific context—a goal and/or a scenario that the user is going through.

Bear in mind that there are many layers to your user. There are conscious rational thoughts, but also subconscious emotional thoughts. Empathy mapping helps you understand how to best communicate with your user.

Una shares a real-life scenario of hers: create a new shoppable product that increases time on site. That’s a pretty big goal. She creates a persona for a college-educated woman working in the medical field who commutes on the subway, keeps up a skin-care routine, and is getting into cake-making. Next, Una creates an empathy map for this persona. What she says, thinks, feels, and does. All of this is within the context of browsing your fashion media website casually at work.

2: Define

The problem statement should be:

  • Human-centred,
  • Specific, but not too technical (don’t solutionise too soon),
  • Narrow in scope.

How can we best create a product-highlighting web experience that Rosalyn will enjoy to increase her time on site?

You can use a tool with two columns: As-Is and To-Be. The first column is what users currently do. The second column is what you want to achieve.

3. Ideate

This is the fun part. Good old-fashioned brainstorming works well here. Go for quantity and get loads of ideas out.

There’s also a “worst possible ideas” game you can play at this stage. It can be a good ice-breaker.

Have a second round of brainstorming where you play the “yes, and…” game to build on the first round.

When Una was working on The Zoe Report, she found that moodboards were really useful. The iteration cycle was very fast. A moodboard allowed them to skip a lot of the back and forth between design and development. They built the website without any visual design mock-ups. They prototyped quickly, tested quickly, and shipped quickly.

Journey-mapping is the next tool you can use in this ideation phase. Map out the steps between the start and end of a user journey. Keep it simple. This is a great time to refine your product and reduce complexity.

Next, start sketching out ideas. Again, this is a great time to uncover issues and solve problems before things get too defined. But remember, when you’re showcasing your ideas in sketches, too many ideas can lead to analysis paralysis.

Oh dear. Jam. Jam. Jam. Jam. Jam. Yes, Una is using the paradox-of-choice jam example …the study whose findings could not be reproduced.

4. Prototype

Go forth and build. A prototype can exist on a number of different axes:

  • Representation—the form it takes.
  • Precision—the detail it contains.
  • Interactivity—the extent a user can interact with it.
  • Evolution—the life stage it is at.

There are a lot of prototyping tools out there. Prototypr.io helps you find the right tool for you. It breaks things down by fidelity and life cycle.

But not all prototyping has to be digital. Paper prototyping only needs pen, paper, and scissors. Some tips:

  • Use a transparency sheet for forms.
  • Use clearly visible, mid-tip pens.
  • Draw up your prototype in black and white—people can get caught up in colour.

But on the web, Una recommends getting to digital as quickly as possible because interaction is such a big part of the experience. That’s why Una likes to prototype in code. But this is still a rapid prototyping phase so don’t get too caught up in the details.

5. Test

Testing with internal teams is fine during the ideation phase, but to understand how users will relate to your product, you need to test with representative people. We are not our users.

As well as the user, have a facilitator, a computer, and a scribe. As a facilitator, it’s a good idea to reduce the amount of input you give a user. Don’t hand-hold too much or you will give away your pre-existing knowledge. Encourage your user to be verbal.

Sessioncam is a tool for creating a heatmap of where people are interacting. There are also tools for tracking clicks or mouse hovers. These all feel so utterly icky to me.

The metrics you might be looking to gather could be click-through rate, time-on-site, etc. But, Una cautions, be very wary of adding all these third-party scripts to your site and slowing it down. Who’s testing the A/B tests?

On Bustle, Una wanted to measure interactions on mobile. They tested different UI elements for interactions. They ended up updating the product with a horizontal swiping component. They were able to improve the product and ship a more refined experience.

6. Review and iterate

Una feels that this step is the most important. Analyse your successes and failures, and plan to improve.

Technology changes over time so what’s feasible and viable also changes.

Design thinking on the daily

You can use design thinking in your everyday life. Maybe you want to learn JavaScript, or write blog posts, or get more fit. Una used design thinking brainstorming to break down her goals, categorise and organise them.

“Get better at JavaScript” is a goal that Una has every year.

  • Empathise. In this situation, the user is you. You can still create a persona of yourself.
  • Define. Why do you want to get better at JavaScript? Is it about making better use of your time?
  • Ideate. There are so many different ways to learn. There are books and video courses. Or you could have a project to focus on. Break. It. Down! Create actionable steps and define how you will measure progress. Match the list of things you want to learn with the list of possible side projects.
  • Prototype. Don’t take it literally. Just build something.
  • Test. You’re testing on yourself in this case.
  • Review. Una does an annual review. It’s a nice therapeutic exercise and helps her move forward into the next year with actionable goals.

Another goal might be “Write a blog post.”

  • Empathise. Your users are your potential readers. Who do you have in mind? Make personas.
  • Define. What’s the topic?
  • Ideate. If you don’t know what to write about, brainstorm. What are you working on at work that you’re learning from? Select one to try.
  • Prototype. Write.
  • Test. Maybe show it to co-workers.
  • Review. How did it go? Good? Bad? Refine your process for the next blog post.

Here’s a goal: “Buy a gift for someone.”

  • Empathise. What does this person like? What have they enjoyed receiving in the past?
  • Define. Is the gift something they’ll enjoy for a long time? Something they can share?
  • Ideate. Bounce ideas off friends and relatives.
  • Prototype. In this case, this means getting the gift.
  • Review. Did they like it?

“Get Fit.”

In this case, the review part is probably the most important part.

Marie Kondo asks “Does this spark joy?” Ask the same question of your goals.

Remember, design thinking is not just about talking and sticky notes. It’s about getting in the right headspace for your users.

Design thinking matters—because everything we do, we do for people. Having the tools to see through the lens of those people will help you be a more well-rounded person.

Hooked and booked

At Booking.com, they do a lot of A/B testing.

At Booking.com, they’ve got a lot of dark patterns.

I think there might be a connection.

A/B testing is a great way of finding out what happens when you introduce a change. But it can’t tell you why.

The problem is that, in a data-driven environment, decisions ultimately come down to whether something works or not. But just because something works, doesn’t mean it’s a good thing.

If I were trying to convince you to buy a product, or use a service, one way I could accomplish that would be to literally put a gun to your head. It would work. Except it’s not exactly a good solution, is it? But if we were to judge by the numbers (100% of people threatened with a gun did what we wanted), it would appear to be the right solution.

When speaking about A/B testing at Booking.com, Stuart Frisby emphasised why it’s so central to their way of working:

One of the core principles of our organisation is that we want to be very customer-focused. And A/B testing is really a way for us to institutionalise that customer focus.

I’m not so sure. I think A/B testing is a way to institutionalise a focus on business goals—increasing sales, growth, conversion, and all of that. Now, ideally, those goals would align completely with the customer’s goals; happy customers should mean more sales …but more sales doesn’t necessarily mean happy customers. Using business metrics (sales, growth, conversion) as a proxy for customer satisfaction might not always work …and is clearly not the case with many of these kinds of sites. Whatever the company values might say, a company’s true focus is on whatever they’re measuring as success criteria. If that’s customer satisfaction, then the company is indeed customer-focused. But if the measurements are entirely about what works for sales and conversions, then that’s the real focus of the company.

I’m not saying A/B testing is bad—far from it! (although it can sometimes be taken to the extreme). I feel it’s best wielded in combination with usability testing with real users—seeing their faces, feeling their frustration, sharing their joy.

In short, I think that A/B testing needs to be counterbalanced. There should be some kind of mechanism for getting the answer to “why?” whenever A/B testing provides the answer to “what?” In-person testing could be one way of providing that balance. Or it could be somebody’s job to always ask “why?” and determine if a solution is qualitatively—and not just quantitatively—good. (And if you look around at your company and don’t see anyone doing that, maybe that’s a role for you.)

If there really is a connection between having a data-driven culture of A/B testing, and a product that’s filled with dark patterns, then the disturbing conclusion is that dark patterns work …at least in the short term.

Installing Progressive Web Apps

When I was testing the dConstruct Audio Archive—which is now a Progressive Web App—I noticed some interesting changes in how Chrome on Android offers the “add to home screen” prompt.

It used to literally say “add to home screen.”

(Screenshots: getting the “add to home screen” prompt for https://huffduffer.com/ and for https://html5forwebdesigners.com/ in Android Chrome. HTTPS + manifest.json + Service Worker = “Add to Home Screen” prompt.)

Now it simply says “add.”

(Screenshot: the dConstruct Audio Archive is now a Progressive Web App.)

I vaguely remember there being some talk of changing the labelling, but I could’ve sworn it was going to change to “install”. I’ve got to be honest, just having the word “add” doesn’t seem to provide much context. Based on the quick’n’dirty usability testing I did with some co-workers, it just made things confusing. “Add what?” “What am I adding?”

Additionally, the prompt appeared immediately on the first visit to the site. I thought there was supposed to be an added “engagement” metric in order for the prompt to appear; that the user needs to visit the site more than once.

You’d think I’d be happy that users will be presented with the home-screen prompt immediately, but based on the behaviour I saw, I’m not sure it’s a good thing. Here’s what I observed:

  1. The user types the URL archive.dconstruct.org into the address bar.
  2. The site loads.
  3. The home-screen prompt slides up from the bottom of the screen.
  4. The user immediately moves to dismiss the prompt (cue me interjecting “Don’t close that!”).

This behaviour is entirely unsurprising for three reasons:

  1. We web designers and web developers have trained users to dismiss overlays and pop-ups if they actually want to get to the content. Nobody’s going to bother to actually read the prompt if there’s a 99% chance it’s going to say “Sign up to our newsletter!” or “Take our survey!”.
  2. The prompt appears below the “line of death” so there’s no way to tell it’s a browser or OS-level dialogue rather than a JavaScript-driven pop-up from the site.
  3. Because the prompt now appears on the first visit, no trust has been established between the user and the site. If the prompt only appeared on later visits (or later navigations during the first visit) perhaps it would stand a greater chance of survival.

It’s still possible to add a Progressive Web App to the home screen, but the option to do that is hidden behind the mysterious three-dots-vertically-stacked icon (I propose we call this the shish kebab icon to distinguish it from the equally impenetrable hamburger icon).

I was chatting with Andreas from Mozilla at the View Source conference last week, and he was filling me in on how Firefox on Android does the add-to-homescreen flow. Instead of a one-time prompt, they’ve added a persistent icon above the “line of death” (the icon is a combination of a house and a plus symbol).

When a Firefox 58 user arrives on a website that is served over HTTPS and has a valid manifest, a subtle badge will appear in the address bar: when tapped, an “Add to Home screen” confirmation dialog will slide in, through which the web app can be added to the Android home screen.
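
As an aside, the “valid manifest” in that formula is just a small JSON file linked from the page’s head. A minimal sketch, with illustrative names and icon paths:

```json
{
  "name": "dConstruct Audio Archive",
  "short_name": "dConstruct",
  "start_url": "/",
  "display": "standalone",
  "background_color": "#ffffff",
  "theme_color": "#111111",
  "icons": [
    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icons/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```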

This kind of badging also has issues (without the explicit text “add to home screen”, the user doesn’t know what the icon does), but I think a more persistently visible option like this works better than a one-time prompt.

Firefox is following the lead of the badging approach pioneered by the Samsung Internet browser. It provides a plus symbol that, when pressed, reveals the options to add to home screen or simply bookmark.

What does it mean to be an App?

I don’t think Chrome for Android has any plans for this kind of badging, but they are working on letting the site authors provide their own prompts. I’m not sure this is such a good idea, given our history of abusing pop-ups and overlays.

Sadly, I feel that any solution that relies on an unrequested overlay is doomed. That’s on us. The way we’ve turned browsing the web—especially on mobile—into a frustrating chore of dismissing unwanted overlays is a classic tragedy of the commons. We blew it. Users don’t trust unrequested overlays, and I can’t blame them.

For what it’s worth, my opinion is that ambient badging is a better user experience than one-time prompts. That opinion is informed by a meagre amount of testing though. I’d love to hear from anyone who’s been doing more detailed usability testing of both approaches. I assume that Google, Mozilla, and Samsung are doing this kind of testing, and it would be really great to see the data from that (hint, hint).

But it might well be that ambient badging is just too subtle to even be noticed by the user.

On one end of the scale you’ve got the intrusiveness of an add-to-home-screen prompt, but on the other end of the scale you’ve got the discoverability problem of a subtle badge icon. I wonder if there might be a compromise solution—maybe a badge icon that pulses or glows on the first or second visit?

Of course that would also need to be thoroughly tested.

Testing

It’s tempting to think of testing with screen-readers as being like testing with browsers. With browser testing, you’re checking to see how a particular piece of software deals with the code you’re throwing at it. A screen reader is a piece of software too, so it makes sense to approach it the same way, right?

I don’t think so. I think it’s really important that if someone is going to test your site with a screen reader, it should be someone who uses a screen reader every day.

Think of it this way: you wouldn’t want a designer or developer to do usability testing by testing the design or code on themselves. That wouldn’t give you any useful data. They’re already familiar with what problems the design is supposed to be solving, and how the interface works. That’s why you need to do usability testing with someone from outside, someone who wasn’t involved in the design or development process.

It’s no different when it comes to users of assistive technology. You’re not trying to test their technology; you’re trying to test how well the thing you’re building works for the person using the technology.

In short:

Don’t think of screen-reader testing as a form of browser testing; think of it as a form of usability testing.

Brighton device lab

People of Brighton (and environs), I have a reminder for you. Did you know that there is an open device lab in the Clearleft office?

That’s right! You can simply pop in at any time and test your websites on Android, iOS, Windows Phone, Blackberry, Kindles, and more.

The address is 68 Middle Street. Ring the “Clearleft” buzzer and say you’re there to use the device lab. There’ll always be somebody in the office. They’ll buzz you in and you can take the lift to the first floor. No need to make a prior appointment—feel free to swing by whenever you like.

There is no catch. You show up, test your sites on whatever devices you want, and maybe even stick around for a cup of tea.

Tell your friends.

I was doing a little testing this morning, helping Charlotte with a pesky bug that was cropping up on an iPad running iOS 8. To get to the bottom of the issue, I needed to be able to inspect the DOM on the iPad. That turns out to be fairly straightforward (as of iOS 6):

  1. Plug the device into a USB port on your laptop using a lightning cable.
  2. Open Safari on the device and navigate to the page you want to test.
  3. Open Safari on your laptop.
  4. From the “Develop” menu in your laptop’s Safari, select the device.
  5. Use the web inspector on your laptop’s Safari to inspect elements to your heart’s content.

It’s a similar flow for Android devices:

  1. Plug the device into a USB port on your laptop.
  2. Open Chrome on the device and navigate to the page you want to test.
  3. Open Chrome on your laptop.
  4. Type chrome://inspect into the URL bar of Chrome on your laptop.
  5. Select the device.
  6. On the device, grant permission (a dialogue will have appeared by now).
  7. Use developer tools on your laptop’s Chrome to inspect elements to your heart’s content.

(Screenshots: using the web inspector in Safari to inspect elements on a web page open on an iOS device, and using developer tools in Chrome to inspect elements on a web page open on an Android device.)

Browser testing

On just about every client project that I work on, the subject of browser support comes up. Rightly so. It’s an important issue on which to get mutual understanding and agreement. But all too often, this important question is framed in a binary, true/false, go/no-go way: “Which browsers do we/don’t we support?”

Really, the first thing to get agreement on is not a list of browsers, but what we mean by the word “support”. In my mind, that word implies that a user of a particular browser should be able to accomplish the primary tasks on the website, whether that’s reading an article, booking a ticket, or buying a product. That doesn’t mean that the task must be experienced in pixel-perfect fidelity to an ideal visual design.

But to others, that’s exactly what “support” means. Personally, I’d call that optimisation. As Brad puts it:

There is a difference between support and optimization.

So to put it in glib terms, I support every browser …but I optimise for none.

Alright, fine. But I still need to get to some mutual understanding with a client about which browsers will get the optimised experience and which browsers will simply be supported.

Personally, I like the Filament Group’s approach of discussing this in terms of features rather than browsers. It makes sense to me to say the browsers that support geolocation will get the geolocation features, or the browsers that support offline caching will get the offline caching features. There’s no need to produce a list of what those browsers are for each feature, and in any case, the list would be constantly changing and updating with each new browser release.
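
In code, that feature-based framing comes down to feature detection. A small sketch (showNearbyResults is a placeholder for whatever the geolocation feature actually does, not code from any real project):

```js
// Only wire up the geolocation feature where the browser supports it;
// everyone else still gets the core experience
if ('geolocation' in navigator) {
  navigator.geolocation.getCurrentPosition(
    (position) => {
      showNearbyResults(position.coords); // placeholder for the feature itself
    },
    (error) => {
      // The user declined, or a position couldn't be determined; the page still works without it
      console.warn(error.message);
    }
  );
}
```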

But—and this is a big but—nine times out of ten, when the issue of browser support comes up, it isn’t about functionality; it’s about branding. What clients generally want to know is which browsers will get the ideal visual design. Obviously the newer versions of Chrome and Firefox are going to get all the lovely layouts, rounded corners, gradients, transparencies, and animations …but what about older versions of Internet Explorer? Even if users of IE8 and IE7 can accomplish their tasks, will the “degraded” visual presentation hurt their experience?

My hypothesis is that it won’t. Users of older versions of Internet Explorer aren’t doing a side-by-side comparison of the same website opened up in the latest Chrome nightly. Considering what their daily usage must be like—unable to use Facebook, unable to use Google services—I suspect that they are happy just to be able to complete their task, regardless of the site’s visual fidelity.

There’s another viewpoint—one that I’ve heard expressed by clients—that even users of older browsers should still get the ideal, pixel-perfect visual design. The hypothesis here is that, by allowing someone to experience anything less than the perfect presentation, the client’s brand will be damaged in the mind of that person.

Like I said, this is something that comes up on most client projects, and this is the point at which we’d have to come to an agreement about which hypothesis we’re going to go with. Of course I’m going to argue in favour of the first hypothesis, but I’ve come to realise that arguing in favour of either hypothesis is the wrong approach. We shouldn’t be debating this …we should be testing it.

We have two competing hypotheses about a group of users. Instead of trying to read their minds, why not test with that group of users to find out which hypothesis is correct? No matter what the results of the test, they will be valuable either way.

Think about the amount of work that’s going to go into optimising for older browser versions—it’s going to take quite a bit of time and money. It makes sense to ensure that this time and money isn’t being spent on little more than a hunch that pixel-perfection is important to those users. On the other hand, if the test reveals that those users really will have a lesser opinion of a brand unless they get pixel-perfect parity with newer browsers, then you’ll know that the time and money spent making that happen isn’t wasted.

Josh wrote recently that 1 hour of research saves 10 hours of development time:

Or, in longer terms if more people appreciated how one day of user research can save weeks of coding I think they would do it more. It is remarkable what you decide to not build after talking to a few people closely.

When it comes to decisions around browser support/optimisation, I think that even a little bit of up-front research and testing could potentially save a lot of time, money, and heartache. I’m not sure exactly what form the testing should take, but I’m interested in figuring it out.

Anniversary

A funny thing happened when I was in Berlin two weekends ago. I was walking down the street that my AirBnB apartment was on when I heard someone say “Jeremy Keith?” It turned out it was Andre Jay Meissner, one of the founders of the excellent Open Device Lab website. We had emailed but never met before. Small world!

The Twitter account for the open device lab in Nuremberg pointed out that it’s been one year since I wrote a blog post about the open device lab I set up:

Much as I’d love to take credit for the idea of an open device lab, it simply isn’t true. Jason and Lyza had been working on setting up the open device lab in Portland for quite a while when I flung open the doors of the Clearleft test lab. But I will take credit for the “Ah, fuck it!” attitude that I introduced to the idea of sharing test devices with the community. Partly because I had seen how long it was taking the Portland device lab to get off the ground while they did everything by the book, I decided to deal with any problems if and when they happened rather than planning for them:

There are potential pitfalls to opening up a testing suite like this. What about the insurance? What about theft? What about breakage? But the thing about potential pitfalls is that they’re just that: potential. I’m treating all of them as YAGNI issues. I’ll address any problems if and when they occur rather than planning for worst-case scenarios.

It proved to be a great policy. So far, nothing has gone wrong. And it also served as an example to other people thinking about opening up device labs at their companies: “don’t sweat it; I didn’t!”

But as far as anniversaries go, the one-year birthday of the Clearleft device lab is not the most significant event of April 30th. Today is the twentieth anniversary of the publication of one of the most important documents in technological history: the document that officially put the World Wide Web into the public domain.

Open device labs are a small, small part of working on the web but I like to think that they represent the same kind of spirit of openness and sharing that Tim Berners-Lee and his colleagues demonstrated at CERN:

I really, really like the way that communal device labs have taken off. It’s like a physical manifestation of the sharing and openness that has imbued the practice of web design and development right from the start. View source, mailing lists, blog posts, Stack Overflow, and Github are made of bits; device labs are made of atoms. But they are all open for you to use and contribute to.

At UX London I had dinner with a Swiss entrepreneur who was showing off his proprietary native app on his phone and proudly declaring that he had been granted a patent. He seemed like a nice chap, but his attitude kind of made my skin crawl. It seemed so antithetical to the spirit of sharing and openness that I’m used to from the web.

James Gleick once described the web as the patent that never was:

Tim Berners-Lee invented the Web and the Web browser — that is, the world as we now know it — pretty much single-handedly, starting in about 1989, when he was working as a software engineer at CERN, the particle-physics laboratory in Geneva. He didn’t patent it, or any part of it. On the contrary, he has labored tirelessly to keep cyberspace open and nonproprietary.

We are all reaping the benefits of Sir Tim’s kindness and generosity.

Promo

The lovely and talented Paul and Kelly from Maxine Denver were in the Clearleft office to do some video work last week. After finishing a piece I was in, I suggested they keep “rolling” (to use an anachronistic term) so I could do a little tongue-in-cheek piece about the Clearleft device lab, à la Winnebago Man.

Here’s the result.

It reminded me of something, but I couldn’t figure out what. Then I remembered.

Open device labs

It’s been just nine months since I threw open the doors to the device lab in the Clearleft office. The response in just the first week was fantastic — people started donating their devices to the communal pool, doubling, then tripling the number of phones and tablets.

The idea of having a communal device lab wasn’t new; Jason had been talking about setting up a lab in Portland but the paperwork involved was bogging it down. So when I set up the Brighton lab, I deliberately took an “ah, fuck it!” attitude …and that was new:

There are potential pitfalls to opening up a testing suite like this. What about the insurance? What about theft? What about breakage? But the thing about potential pitfalls is that they’re just that: potential. I’m treating all of them as YAGNI issues. I’ll address any problems if and when they occur rather than planning for worst-case scenarios.

So far, so good.

Since then I’ve been vocally encouraging others to set up communal device labs wherever they may be—linking and tweeting whenever anybody so much as mentioned the possibility of getting a device lab up and running. Then the Lab Up! site was established to help people do just that.

Now there’s a brand new site that’s not just for people setting up device labs, but also for people looking for a device lab to use: OpenDeviceLab.com. The aim is to:

  • Help people to locate the right Open Device Lab for the job,
  • explain and promote the Open Device Lab movement,
  • attract Contributors and Sponsors to help and donate to ODLs.

It’s an excellent resource. Head on over there and find out where your nearest device lab is located. And if you can’t find one, think about setting one up.

I really, really like the way that communal device labs have taken off. It’s like a physical manifestation of the sharing and openness that has imbued the practice of web design and development right from the start. View source, mailing lists, blog posts, Stack Overflow, and Github are made of bits; device labs are made of atoms. But they are all open for you to use and contribute to.

The test

There was once a time when the first thing you would do when you went to visit a newly-launched website was to run its markup through a validator.

Later on that was replaced by the action of bumping up the font size by a few notches—what Dan called the Dig Dug test.

Thanks to Ethan, we all started to make our browser windows smaller and bigger as soon as we visited a newly-launched site.

Now when I go to a brand new site I find myself opening up the “Network” tab in my browser’s developer tools to count the HTTP requests and measure the page weight.

Just like old times.
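
You could even do a rough version of that check without opening the developer tools at all. Here’s a sketch, nothing more: it relies on the Resource Timing API, so it won’t count the HTML document itself, and transferSize can come back as zero for cached or cross-origin resources:

(function () {
  // Every subresource the page has requested so far.
  var resources = performance.getEntriesByType('resource');
  // Add up the transfer sizes (reported in bytes).
  var bytes = resources.reduce(function (total, entry) {
    return total + (entry.transferSize || 0);
  }, 0);
  alert(resources.length + ' requests, roughly ' + Math.round(bytes / 1024) + 'KB transferred');
}());

Squash that onto one line, put javascript: in front of it, and it could sit in the bookmarks toolbar like any other bookmarklet.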