Tags: sync

Tuesday, January 14th, 2025

A long-awaited talk

Back in 2019 I had the amazing experience of going to CERN and being part of a team building an emulator of the first ever browser.

Remy was on the team too. He did the heavy lifting of actually making the thing work—quite an achievement in just five days!

Coming into this, I thought it was hugely ambitious to try to not only recreate the experience of using the first ever web browser (called WorldWideWeb, later Nexus), but to also try to document the historical context of the time. Now that it’s all done, I’m somewhat astounded that we managed to achieve both.

Remy and I were both keen to talk about the work, which is why we did a joint talk at Fronteers in Amsterdam that year. We’re both quite sceptical of talks given by duos; people think it means it’ll be half the work, when actually it’s twice the work. In the end we came up with a structure for the talk that we both liked:

Now, we could’ve just done everything chronologically, but that would mean I’d do the first half of the talk and Remy would do the second half. That didn’t appeal. And it sounded kind of boring. So then we came up with the idea of interweaving the two timelines.

That worked remarkably well.

You can watch the video of that talk in Amsterdam. You can also read the transcript.

After putting so much work into the talk, we were keen to give it again somewhere. We had the chance to do that in Nottingham in early March 2020. (cue ominous foreboding)

The folks from local Brighton meetup Async had also asked if we wanted to give the talk. We were booked in for May 2020. (ominous foreboding intensifies)

We all know what happened next. The Situation. Lockdown. No conferences. No meetups.

But technically the talk wasn’t cancelled. It was just postponed. And postponed. And postponed. Before you know it, five years have passed.

Part of the problem was that Async is usually on the first Thursday of the month and that’s when I host an Irish music session in Hove. I can’t miss that!

But the stars finally aligned, and last week Remy and I did the Async talk. You can watch a video of it.

I really enjoyed giving the talk and the discussion that followed. There was a good buzz.

It also made me appreciate the work that we put into structuring the talk. We’ve only given it a few times but with a five-year gap between presentations, I can confidently say that it’s a timeless topic.

Friday, April 17th, 2020

Future Sync 2020

I was supposed to be in Plymouth yesterday, giving the opening talk at this year’s Future Sync conference. Obviously, that train journey never happened, but the conference did.

The organisers gave us speakers the option of pre-recording our talks, which I jumped on. It meant that I wouldn’t be reliant on a good internet connection at the crucial moment. It also meant that I was available to provide additional context—mostly in the form of a deluge of hyperlinks—in the chat window that accompanied the livestream.

The whole thing went very smoothly indeed. Here’s the video of my talk. It was The Layers Of The Web, which I’ve only given once before, at Beyond Tellerrand Berlin last November (in the Before Times).

As well as answering questions in the chat room, people were also asking questions in Sli.do. But rather than answering those questions there, I was supposed to respond in a social medium of my choosing. I chose my own website, with copies syndicated to Twitter.

Here are those questions and answers…

The first few questions were about last year’s CERN project, which opens the talk:

Based on what you now know from the CERN 2019 WorldWideWeb Rebuild project—what would you have done differently if you had been part of the original 1989 Team?

I responded:

Actually, I think the original WWW project got things mostly right. If anything, I’d correct what came later: cookies and JavaScript—those two technologies (which didn’t exist on the web originally) are the source of tracking & surveillance.

The one thing I wish had been done differently is I wish that JavaScript were a same-origin technology from day one:

https://adactio.com/journal/16099

Next question:

How excited were you when you initially got the call for such an amazing project?

My predictable response:

It was an unbelievable privilege! I was so excited the whole time—I still can hardly believe it really happened!

https://adactio.com/journal/14803

https://adactio.com/journal/14821

Later in the presentation, I talked about service workers and progressive web apps. I got a technical question about that:

Is there a limit to the amount of local storage a PWA can use?

I answered:

Great question! Yes, there are limits, but we’re generally talking megabytes here. It varies from browser to browser and depends on the available space on the device.

But files stored using the Cache API are less likely to be deleted than files stored in the browser cache.

More worrying is the announcement from Apple to only store files for a week of browser use:

https://adactio.com/journal/16619
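
If you’re curious about the numbers on a particular device, the Storage API’s estimate() method will report them. Here’s a minimal sketch:

// Ask the browser how much storage this site is using,
// and roughly how much it’s allowed to use.
if (navigator.storage && navigator.storage.estimate) {
  navigator.storage.estimate()
  .then( estimate => {
    console.log(`Using ${ estimate.usage } of ${ estimate.quota } bytes.`);
  });
}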

Finally, there was a question about the over-arching theme of the talk…

Great talk, Jeremy. Do you encounter push-back when using the term “Progressive Enhancement”?

My response:

Yes! …And that’s why I never once used the phrase “progressive enhancement” in my talk. 🙂

There’s a lot of misunderstanding of the term. Rather than correct it, I now avoid it:

https://adactio.com/journal/9195

Instead of using the phrase “progressive enhancement”, I now talk about the benefits and effects of the technique: resilience, universality, etc.

Future Sync Distributed 2020

Monday, April 6th, 2020

Local-first software: You own your data, in spite of the cloud

The cloud gives us collaboration, but old-fashioned apps give us ownership. Can’t we have the best of both worlds?

We would like both the convenient cross-device access and real-time collaboration provided by cloud apps, and also the personal ownership of your own data embodied by “old-fashioned” software.

This is a very in-depth look at the mindset and the challenges involved in building truly local-first software—something that Tantek has also been thinking about.

Tuesday, March 17th, 2020

Scripts: async, defer

I’m constantly forgetting the difference between the async attribute and the defer attribute on script elements—this is a handy explanation.
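
For the record, both attributes let a script download without blocking the parser; the difference is in when the script executes. A quick sketch (the file names here are just for illustration):

<!-- async: execute as soon as the file arrives,
     in whatever order the downloads finish -->
<script src="/js/analytics.js" async></script>

<!-- defer: execute after the document has been parsed,
     in the order the script elements appear -->
<script src="/js/app.js" defer></script>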

Tuesday, October 29th, 2019

Periodic background sync

Yesterday I wrote about how much I’d like to see silent push for the web:

I’d really like silent push for the web—the ability to update a cache with fresh content as soon as it’s published; that would be nifty! At the same time, I understand the concerns. It feels more powerful than other permission-based APIs like notifications.

Today, John Holt Ripley responded on Twitter:

hi there, just read your blog post about silent push for the web, and wondering if Periodic Background Sync would cover a few of those use cases?

Periodic background sync looks very interesting indeed!

It’s not the same as silent push. As the name suggests, this is about your service worker waking up periodically and potentially fetching (and caching) fresh content from the network. So the service worker is polling rather than receiving a push. But I’ll take it! It’s definitely close enough for the kind of use-cases I’ve been thinking about.
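
Going by the explainer, the code might look something like this. It’s just a sketch, with a made-up tag name, and the API details could still change:

// In the page: ask the browser for a periodic wake-up call,
// at most once a day.
navigator.serviceWorker.ready
.then( registration => {
  return registration.periodicSync.register('fresh-content', {
    minInterval: 24 * 60 * 60 * 1000
  });
});

// In the service worker: fetch and cache fresh content
// whenever the wake-up call arrives.
addEventListener('periodicsync', syncEvent => {
  if (syncEvent.tag === 'fresh-content') {
    syncEvent.waitUntil(
      caches.open('pages')
      .then( cache => cache.add('/') )
    );
  }
});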

Interestingly, periodic background sync also ties into the other part of what I was writing about: permissions. I mentioned that adding a site to the home screen could be interpreted as a signal to potentially allow more permissions (or at least allow prompts for more permissions).

Well, Chromium has a document outlining metrics for attempting to gauge site engagement. There’s some good thinking in there.

Saturday, September 21st, 2019

Going offline with microformats

For the offline page on my website, I’ve been using a mixture of the Cache API and the localStorage API. My service worker script uses the Cache API to store copies of pages for offline retrieval. But I used the localStorage API to store metadata about the page—title, description, and so on. Then, my offline page would rifle through the pages stored in a cache, and retrieve the corresponding metadata from localStorage.

It all worked fine, but as soon as I read Remy’s post about the forehead-slappingly brilliant technique he’s using, I knew I’d be switching my code over. Instead of using localStorage—or any other browser API—to store and retrieve metadata, he uses the pages themselves! Using the Cache API, you can examine the contents of the pages you’ve stored, and get at whatever information you need:

I realised I didn’t need to store anything. HTML is the API.

Refactoring the code for my offline page felt good for a couple of reasons. First of all, I was able to remove a dependency—localStorage—and simplify the JavaScript. That always feels good. But the other reason for the warm fuzzies is that I was able to use data instead of metadata.

Many years ago, Cory Doctorow wrote a piece called Metacrap. In it, he enumerates the many issues with metadata—data about data. The source of many problems is when the metadata is stored separately from the data it describes. The data may get updated, without a corresponding update happening to the metadata. Metadata tends to rot because it’s invisible—out of sight and out of mind.

In fact, that’s always been at the heart of one of the core principles behind microformats. Instead of duplicating information—once as data and again as metadata—repurpose the visible data; mark it up so its meta-information is directly attached to the information itself.

So if you have a person’s contact details on a web page, rather than repeating that information somewhere else—in the head of the document, say—you could instead attach some kind of marker to indicate which bits of the visible information are contact details. In the case of microformats, that’s done with class attributes. You can mark up a page that already has your contact information with classes from the h-card microformat.
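
A hypothetical example of that markup might look like this:

<div class="h-card">
  <span class="p-name">Jeremy Keith</span>
  <a class="u-url" href="https://adactio.com">adactio.com</a>
</div>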

Here on my website, I’ve marked up my blog posts, articles, and links using the h-entry microformat. These classes explicitly mark up the content to say “this is the title”, “this is the content”, and so on. This makes it easier for other people to repurpose my content. If, for example, I reply to a post on someone else’s website, and ping them with a webmention, they can retrieve my post and know which bit is the title, which bit is the content, and so on.
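
Stripped right down, the markup for one of my blog posts looks something like this:

<article class="h-entry">
  <h1 class="p-name">Title of the post</h1>
  <time class="dt-published" datetime="2019-09-21T12:00:00Z">September 21st, 2019</time>
  <p>The content of the post.</p>
</article>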

When I read Remy’s post about using the Cache API to retrieve information directly from cached pages, I knew I wouldn’t have to do much work. Because all of my posts are already marked up with h-entry classes, I could use those hooks to create a nice offline page.

The markup for my offline page looks like this:

<h1>Offline</h1>
<p>Sorry. It looks like the network connection isn’t working right now.</p>
<div id="history">
</div>

I’ll populate that “history” div with information from a cache called “pages” that I’ve created using the Cache API in my service worker.

I’m going to use async/await to do this because there are lots of steps that rely on the completion of the step before. “Open this cache, then get the keys of that cache, then loop through the pages, then…” All of those thens would lead to some serious indentation without async/await.

Async functions don’t strictly have to have a name, but a named function makes debugging easier. I’m calling this one listPages, just like Remy is doing. I’m making the listPages function execute immediately:

(async function listPages() {
...
})();

Now for the code to go inside that immediately-invoked function.

I create an array called browsingHistory that I’ll populate with the data I’ll use for that “history” div.

const browsingHistory = [];

I’m going to be parsing web pages later on, so I’m going to need a DOM parser. I give it the imaginative name of …parser.

const parser = new DOMParser();

Time to open up my “pages” cache. This is the first await statement. When the cache is opened, this promise will resolve and I’ll have access to this cache using the variable …cache (again with the imaginative naming).

const cache = await caches.open('pages');

Now I get the keys of the cache—that’s a list of all the page requests in there. This is the second await. Once the keys have been retrieved, I’ll have a variable that’s got a list of all those pages. You’ll never guess what I’m calling the variable that stores the keys of the cache. That’s right …keys!

const keys = await cache.keys();

Time to get looping. I’m getting each request in the list of keys using a for/of loop:

for (const request of keys) {
...
}

Inside the loop, I pull the page out of the cache using the match() method of the Cache API. I’ll store what I get back in a variable called response. As with everything involving the Cache API, this is asynchronous so I need to use the await keyword here.

const response = await cache.match(request);

I’m not interested in the headers of the response. I’m specifically looking for the HTML itself. I can get at that using the text() method. Again, it’s asynchronous and I want this promise to resolve before doing anything else, so I use the await keyword. When the promise resolves, I’ll have a variable called html that contains the body of the response.

const html = await response.text();

Now I can use that DOM parser I created earlier. I’ve got a string of text in the html variable. I can generate a Document Object Model from that string using the parseFromString() method. This isn’t asynchronous so there’s no need for the await keyword.

const dom = parser.parseFromString(html, 'text/html');

Now I’ve got a DOM, which I have creatively stored in a variable called …dom.

I can poke at it using DOM methods like querySelector. I can test to see if this particular page has an h-entry on it by looking for an element with a class attribute containing the value “h-entry”:

if (dom.querySelector('.h-entry h1.p-name')) {
...
}

In this particular case, I’m also checking to see if the h1 element of the page is the title of the h-entry. That’s so that index pages (like my home page) won’t get past this if statement.

Inside the if statement, I’m going to store the data I retrieve from the DOM. I’ll save the data into an object called …data!

const data = new Object;

Well, the first piece of data isn’t actually in the markup: it’s the URL of the page. I can get that from the request variable in my for loop.

data.url = request.url;

I’m going to store the timestamp for this h-entry. I can get that from the datetime attribute of the time element marked up with a class of dt-published.

data.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));

While I’m at it, I’m going to grab the human-readable date from the innerText property of that same time.dt-published element.

data.published = dom.querySelector('.h-entry .dt-published').innerText;

The title of the h-entry is in the innerText of the element with a class of p-name.

data.title = dom.querySelector('.h-entry .p-name').innerText;

At this point, I am actually going to use some metacrap instead of the visible h-entry content. I don’t output a description of the post anywhere in the body of the page, but I do put it in the head in a meta element. I’ll grab that now.

data.description = dom.querySelector('meta[name="description"]').getAttribute('content');

Alright. I’ve got a URL, a timestamp, a publication date, a title, and a description, all retrieved from the HTML. I’ll stick all of that data into my browsingHistory array.

browsingHistory.push(data);

My if statement and my for/of loop are finished at this point. Here’s how the whole loop looks:

for (const request of keys) {
  const response = await cache.match(request);
  const html = await response.text();
  const dom = parser.parseFromString(html, 'text/html');
  if (dom.querySelector('.h-entry h1.p-name')) {
    const data = new Object;
    data.url = request.url;
    data.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));
    data.published = dom.querySelector('.h-entry .dt-published').innerText;
    data.title = dom.querySelector('.h-entry .p-name').innerText;
    data.description = dom.querySelector('meta[name="description"]').getAttribute('content');
    browsingHistory.push(data);
  }
}

That’s the data collection part of the code. Now I’m going to take all that yummy information and output it onto the page.

First of all, I want to make sure that the browsingHistory array isn’t empty. There’s no point going any further if it is.

if (browsingHistory.length) {
...
}

Within this if statement, I can do what I want with the data I’ve put into the browsingHistory array.

I’m going to arrange the data by date published. I’m not sure if this is the right thing to do. Maybe it makes more sense to show the pages in the order in which you last visited them. I may end up removing this at some point, but for now, here’s how I sort the browsingHistory array according to the timestamp property of each item within it:

browsingHistory.sort( (a,b) => {
  return b.timestamp - a.timestamp;
});

Now I’m going to concatenate some strings. This is the string of HTML text that will eventually be put into the “history” div. I’m storing the markup in a string called …markup (my imagination knows no bounds).

let markup = '<p>But you still have something to read:</p>';

I’m going to add a chunk of markup for each item of data.

browsingHistory.forEach( data => {
  markup += `
<h2><a href="${ data.url }">${ data.title }</a></h2>
<p>${ data.description }</p>
<p class="meta">${ data.published }</p>
`;
});

With my markup assembled, I can now insert it into the “history” part of my offline page. I’m using the handy insertAdjacentHTML() method to do this.

document.getElementById('history').insertAdjacentHTML('beforeend', markup);

Here’s what my finished JavaScript looks like:

<script>
(async function listPages() {
  const browsingHistory = [];
  const parser = new DOMParser();
  const cache = await caches.open('pages');
  const keys = await cache.keys();
  for (const request of keys) {
    const response = await cache.match(request);
    const html = await response.text();
    const dom = parser.parseFromString(html, 'text/html');
    if (dom.querySelector('.h-entry h1.p-name')) {
      const data = new Object;
      data.url = request.url;
      data.timestamp = new Date(dom.querySelector('.h-entry .dt-published').getAttribute('datetime'));
      data.published = dom.querySelector('.h-entry .dt-published').innerText;
      data.title = dom.querySelector('.h-entry .p-name').innerText;
      data.description = dom.querySelector('meta[name="description"]').getAttribute('content');
      browsingHistory.push(data);
    }
  }
  if (browsingHistory.length) {
    browsingHistory.sort( (a,b) => {
      return b.timestamp - a.timestamp;
    });
    let markup = '<p>But you still have something to read:</p>';
    browsingHistory.forEach( data => {
      markup += `
<h2><a href="${ data.url }">${ data.title }</a></h2>
<p>${ data.description }</p>
<p class="meta">${ data.published }</p>
`;
    });
    document.getElementById('history').insertAdjacentHTML('beforeend', markup);
  }
})();
</script>

I’m pretty happy with that. It’s not too long but it’s still quite readable (I hope). It shows that the Cache API and the h-entry microformat are a match made in heaven.

If you’ve got an offline strategy for your website, and you’re using h-entry to mark up your content, feel free to use that code.

If you don’t have an offline strategy for your website, there’s a book for that.

Sunday, July 21st, 2019

The Simplest Way to Load CSS Asynchronously | Filament Group, Inc.

Scott re-examines the browser support for loading everything-but-the-critical-CSS asynchronously and finds that it might now be as straightforward as this one declaration:

<link rel="stylesheet" href="/path/to/my.css" media="print" onload="this.media='all'">

I love the fact that the Filament Group are actively looking at how to deprecate their loadCSS polyfill—exactly the right attitude for polyfills in general.

Wednesday, May 8th, 2019

CSS-only chat

A truly monstrous async web chat using no JS whatsoever on the frontend.

This is …I mean …yes, but …it …I …

Tuesday, November 20th, 2018

Web workers vs Service workers vs Worklets

A great primer by Ire:

Web workers, service workers, and worklets are all scripts that run on a separate thread. So what are the differences between these three types of workers?
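
The way each one is created hints at the differences. A rough sketch (the file names are made up):

// Web worker: a general-purpose background thread,
// tied to the page that created it.
const worker = new Worker('/js/worker.js');

// Service worker: a network proxy shared by every page
// on the site, living on after the page is closed.
navigator.serviceWorker.register('/serviceworker.js');

// Worklet (a paint worklet, in this case): a lightweight
// hook into the browser’s rendering pipeline.
CSS.paintWorklet.addModule('/js/paintworklet.js');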

Sunday, November 11th, 2018

Push without notifications

On the first day of Indie Web Camp Berlin, I led a session on going offline with service workers. This covered all the usual use-cases: pre-caching; custom offline pages; saving pages for offline reading.

But on the second day, Sebastiaan spent a fair bit of time investigating a more complex use of service workers with the Push API.

The Push API is what makes push notifications possible on the web. There are a lot of moving parts—browser, server, service worker—and, frankly, it’s way over my head. But I’m familiar with the general gist of how it works. Here’s a typical flow:

  1. A website prompts the user for permission to send push notifications.
  2. The user grants permission.
  3. A whole lot of complicated stuff happens behind the scenes.
  4. Next time the website publishes something relevant, it fires a push message containing the details of the new URL.
  5. The user’s service worker receives the push message (even if the site isn’t open).
  6. The service worker creates a notification linking to the URL, interrupting the user, and generally adding to the weight of information overload.

Here’s what Sebastiaan wanted to investigate: what if that last step weren’t so intrusive? Here’s the alternate flow he wanted to test:

  1. A website prompts the user for permission to send push notifications.
  2. The user grants permission.
  3. A whole lot of complicated stuff happens behind the scenes.
  4. Next time the website publishes something relevant, it fires a push message containing the details of the new URL.
  5. The user’s service worker receives the push message (even if the site isn’t open).
  6. The service worker fetches the contents of the URL provided in the push message and caches the page. Silently.

It worked.
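
In service worker code, the difference is all in the push handler: instead of showing a notification, you cache the URL. Something like this sketch, assuming the push message carries a JSON payload with a url property:

addEventListener('push', pushEvent => {
  const data = pushEvent.data.json();
  pushEvent.waitUntil(
    // Fetch the new URL and cache it. Silently.
    caches.open('pages')
    .then( cache => cache.add(data.url) )
  );
});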

I think this could be a real game-changer. I don’t know about you, but I’m very, very wary of granting websites the ability to send me push notifications. In fact, I don’t think I’ve ever given a website permission to interrupt me with push notifications.

You’ve seen the annoying permission dialogues, right?

In Firefox, it looks like this:

Will you allow name-of-website to send notifications?

[Not Now] [Allow Notifications]

In Chrome, it’s:

name-of-website wants to

Show notifications

[Block] [Allow]

But in actual fact, these dialogues are asking for permission to do two things:

  1. Receive messages pushed from the server.
  2. Display notifications based on those messages.

There’s no way to ask for permission just to do the first part. That’s a shame. While I’m very unwilling to grant permission to be interrupted by intrusive notifications, I’d be more than willing to grant permission to allow a website to silently cache timely content in the background. It would be a more calm technology.

Think of the use cases:

  • I grant push permission to a magazine. When the magazine publishes a new article, it’s cached on my device.
  • I grant push permission to a podcast. Whenever a new episode is published, it’s cached on my device.
  • I grant push permission to a blog. When there’s a new blog post, it’s cached on my device.

Then when I’m on a plane, or in the subway, or in any other situation without a network connection, I could still visit these websites and get content that’s fresh to me. It’s kind of like background sync in reverse.

There’s plenty of opportunity for abuse—the cache could get filled with content the user never asked for. But websites can already do that, and they don’t need to be granted any permissions to do so; just by visiting a website, it can add multiple files to a cache.

So it seems that the reason for the permissions dialogue is all about displaying notifications …not so much about receiving push messages from the server.

I wish there were a way to implement this background-caching pattern without requiring the user to grant permission to a dialogue that contains the word “notification.”

I wonder if the act of adding a site to the home screen could implicitly grant permission to allow use of the Push API without notifications?

In the meantime, the proposal for periodic synchronisation (using background sync) could achieve similar results, but in a less elegant way; periodically polling for new content instead of receiving a push message when new content is published. Also, it requires permission. But at least in this case, the permission dialogue should be more specific, and wouldn’t include the word “notification” anywhere.

Sunday, September 23rd, 2018

Service workers in Samsung Internet browser

I was getting reports of some odd behaviour with the service worker on thesession.org, the Irish music website I run. Someone emailed me to say that they kept getting the offline page, even when their internet connection was perfectly fine and the site was up and running.

They didn’t mind answering my pestering follow-on questions to isolate the problem. They told me that they were using the Samsung Internet browser on Android. After a little searching, I found this message on a Github thread about using waitUntil. It’s from someone who works on the Samsung Internet team:

Sadly, the asynchronous waitUntil() is not implemented yet in our browser. Yes, we will implement it but our release cycle is so far. So, for a long time, we might not resolve the issue.

A-ha! That explains the problem. See, here’s the pattern I was using:

  1. When someone requests a file,
  2. fetch that file from the network,
  3. create a copy of the file and cache it,
  4. return the contents.

Step 1 is the event listener:

// 1. When someone requests a file
addEventListener('fetch', fetchEvent => {
  let request = fetchEvent.request;
  fetchEvent.respondWith(

Steps 2, 3, and 4 are inside that respondWith:

// 2. fetch that file from the network
fetch(request)
.then( responseFromFetch => {
  // 3. create a copy of the file and cache it
  let copy = responseFromFetch.clone();
  caches.open(cacheName)
  .then( cache => {
    cache.put(request, copy);
  })
  // 4. return the contents.
  return responseFromFetch;
})

Step 4 might well complete while step 3 is still running (remember, everything in a service worker script is asynchronous so even though I’ve written out the steps sequentially, you never know what order the steps will finish in). That’s why I’m wrapping that third step inside fetchEvent.waitUntil:

// 2. fetch that file from the network
fetch(request)
.then( responseFromFetch => {
  // 3. create a copy of the file and cache it
  let copy = responseFromFetch.clone();
  fetchEvent.waitUntil(
    caches.open(cacheName)
    .then( cache => {
      cache.put(request, copy);
    })
  );
  // 4. return the contents.
  return responseFromFetch;
})

If a browser (like Samsung Internet) doesn’t understand the bit where I say fetchEvent.waitUntil, then it will throw an error and execute the catch clause. That’s where I have my fifth and final step: “try looking in the cache instead, but if that fails, show the offline page”:

.catch( fetchError => {
  console.log(fetchError);
  return caches.match(request)
  .then( responseFromCache => {
    return responseFromCache || caches.match('/offline');
  });
})

Normally in this kind of situation, I’d use feature detection to check whether a browser understands a particular API method. But it’s a bit tricky to test for support for asynchronous waitUntil. That’s okay. I can use a try/catch statement instead. Here’s what my revised code looks like:

fetch(request)
.then( responseFromFetch => {
  let copy = responseFromFetch.clone();
  try {
    fetchEvent.waitUntil(
      caches.open(cacheName)
      .then( cache => {
        cache.put(request, copy);
      })
    );
  } catch (error) {
    console.log(error);
  }
  return responseFromFetch;
})

Now I’ve managed to localise the error. If a browser doesn’t understand the bit where I say fetchEvent.waitUntil, it will execute the code in the catch clause, and then carry on as usual. (I realise it’s a bit confusing that there are two different kinds of catch clauses going on here: on the outside there’s a .then()/.catch() combination; inside is a try{}/catch{} combination.)

At some point, when support for async waitUntil statements is universal, this precautionary measure won’t be needed, but for now wrapping them inside try doesn’t do any harm.

There are a few places in chapter five of Going Offline—the chapter about service worker strategies—where I show examples using async waitUntil. There’s nothing wrong with the code in those examples, but if you want to play it safe (especially while Samsung Internet doesn’t support async waitUntil), feel free to wrap those examples in try/catch statements. But I’m not going to make those changes part of the errata for the book. In this case, the issue isn’t with the code itself, but with browser support.

Monday, April 30th, 2018

Going Offline: Designing An Ideal Offline Experience With Service Workers By Jeremy Keith

Here’s a great even-handed in-depth review of Going Offline:

If you’re interested in the “offline first” movement or want to learn more about Service Workers, Going Offline by Jeremy Keith is a really gentle and highly accessible introduction to the topic. At times, it even felt “too gentle”, with Keith taking a moment here and there to explain what a “variable” is and what “JSON” (JavaScript Object Notation) is. But, this just goes to show you the unassuming and welcoming mindset behind writing a book like this one.

Monday, January 8th, 2018

Async + Await

Slides from a conference talk with a really clear explanation of how async + await works with promises.
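
The nub of it: async/await is syntactic sugar over promises. These two functions do exactly the same thing:

// With promises:
function getJSONWithThen(url) {
  return fetch(url)
  .then( response => response.json() );
}

// With async/await:
async function getJSONWithAwait(url) {
  const response = await fetch(url);
  return response.json();
}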

Monday, October 9th, 2017

“async” attribute on img, and corresponding “ready” event · Issue #1920 · whatwg/html

It looks like the async attribute is going to ship in Chrome for img elements:

This attribute would have two states:

  • “on”: This indicates that the developer prefers responsiveness and performance over atomic presentation of content.
  • “off”: This indicates that the developer prefers atomic presentation of content over responsiveness.
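
Purely speculatively, based on the names in the proposal, usage might look like this:

<!-- Opt in to asynchronous decoding of the image -->
<img src="/images/huge-photo.jpg" async alt="A huge photograph">
<script>
  // The proposal pairs the attribute with a "ready" event
  document.querySelector('img').addEventListener('ready', () => {
    console.log('Image is decoded and ready to display');
  });
</script>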

Friday, September 1st, 2017

Progressively Worse Apps

This article makes a good point about client-rendered pages:

Asynchronously loaded page elements shift click targets, resulting in a usability nightmare.

…but this has nothing, absolutely nothing to do with progressive web apps.

More fuel for the fire of evidence that far too many people think that progressive web apps and single page apps are one and the same.

Sunday, August 20th, 2017

Offline POSTs with Progressive Web Apps – Web Dev @ Microsoft – Medium

This is a smart way to queue up POST submissions for later if the user is offline. It’s not as powerful as background sync (because it requires the user to revisit your site) but it’s a good fallback for browsers that support service workers but don’t yet support background sync.

Tuesday, February 21st, 2017

Send messages when you’re back online with Service Workers and Background Sync – Twilio Cloud Communications Blog

This example of using background sync looks like it’s specific to Twilio, but the breakdown of steps is broad enough to apply to many situations:

On the page we need to:

  1. Register a Service Worker
  2. Intercept the “submit” event for our message form
  3. Place the message details into IndexedDB, an in-browser database
  4. Register the Service Worker to receive a “sync” event

Then, in the Service Worker we need to:

  1. Listen for sync events
  2. When a sync event is received, retrieve the messages from IndexedDB
  3. For each message, send a request to our server to send the message
  4. If the message is sent successfully, then remove the message from IndexedDB

And that’s it.
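
The service worker half of that boils down to a sync event listener. Here’s a sketch, where the tag name and the sendQueuedMessages() function are hypothetical:

// Listen for sync events (step 1).
addEventListener('sync', syncEvent => {
  if (syncEvent.tag === 'send-messages') {
    // Steps 2 to 4: read the messages from IndexedDB, send
    // each one to the server, and delete the ones that were
    // sent successfully.
    syncEvent.waitUntil(sendQueuedMessages());
  }
});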

Sunday, January 29th, 2017

Callback Hell

At first when I was reading this JavaScript coding guide, I thought “Isn’t this exactly what promises address?” but that is then addressed further down:

Before looking at more advanced solutions, remember that callbacks are a fundamental part of JavaScript (since they are just functions) and you should learn how to read and write them before moving on to more advanced language features, since they all depend on an understanding of callbacks.

Fair enough. In any case, what you’ll find here is mainly good advice for writing modular code.

Saturday, January 28th, 2017

The Promise of a Burger Party - Mariko Kosaka

Mariko has a real knack for explaining technical concepts in a very accessible way. This time it’s JavaScript promises.

Wednesday, January 11th, 2017

Making Resilient Web Design work offline

I’ve written before about taking an online book offline, documenting the process behind the web version of HTML5 For Web Designers. A book is quite a static thing so it’s safe to take a fairly aggressive offline-first approach. In fact, a static unchanging book is one of the few situations where AppCache works well. Of course a service worker is better, but until AppCache is removed from browsers (and until service workers are supported across the board), I’m using both. I wouldn’t recommend that approach for most sites though: use a service worker to enhance them, and avoid AppCache like the plague.

For Resilient Web Design, I took a similar approach to HTML5 For Web Designers but I knew that there was a good chance that some of the content would be getting tweaked at least for a while. So while the approach is still cache-first, I decided to keep the cache fairly fresh.

Here’s my service worker. It starts with the usual stuff: when the service worker is installed, there’s a list of static assets to cache. In this case, that list is literally everything; all the HTML, CSS, JavaScript, and images for the whole site. Again, this is a pattern that works well for a book, but wouldn’t be right for other kinds of websites.

The real heavy lifting happens with the fetch event. This is where the logic sits for what the service worker should do every time there’s a request for a resource. I’ve documented the logic with comments:

// Look in the cache first, fall back to the network
  // CACHE
  // Did we find the file in the cache?
      // If so, fetch a fresh copy from the network in the background
      // NETWORK
          // Stash the fresh copy in the cache
  // NETWORK
  // If the file wasn't in the cache, make a network request
      // Stash a fresh copy in the cache in the background
  // OFFLINE
  // If the request is for an image, show an offline placeholder
  // If the request is for a page, show an offline message

So my order of preference is:

  1. Try the cache first,
  2. Try the network second,
  3. Fall back to a placeholder as a last resort.

Leaving aside that third part, regardless of whether the response is served straight from the cache or from the network, the cache gets a top-up. If the response is being served from the cache, there’s an additional network request made to get a fresh copy of the resource that was just served. This means that the user might be seeing a slightly stale version of a file, but they’ll get the fresher version next time round.

Again, I think this is acceptable for a book where the tweaks and changes should be fairly minor, but I definitely wouldn’t want to do it on a more dynamic site where freshness matters more.

Here’s what it usually looks like when a file is served up from the cache:

caches.match(request)
  .then( responseFromCache => {
  // Did we find the file in the cache?
  if (responseFromCache) {
      return responseFromCache;
  }

I’ve introduced an extra step where the fresher version is fetched from the network. This is where the code can look a bit confusing: the network request is happening in the background after the cached file has already been returned, but the code appears before the return statement:

caches.match(request)
  .then( responseFromCache => {
  // Did we find the file in the cache?
  if (responseFromCache) {
      // If so, fetch a fresh copy from the network in the background
      event.waitUntil(
          // NETWORK
          fetch(request)
          .then( responseFromFetch => {
              // Stash the fresh copy in the cache
              caches.open(staticCacheName)
              .then( cache => {
                  cache.put(request, responseFromFetch);
              });
          })
      );
      return responseFromCache;
  }

It’s asynchronous, see? So even though all that network code appears before the return statement, it’s pretty much guaranteed to complete after the cache response has been returned. You can verify this by putting in some console.log statements:

caches.match(request)
.then( responseFromCache => {
  if (responseFromCache) {
      event.waitUntil(
          fetch(request)
          .then( responseFromFetch => {
              console.log('Got a response from the network.');
              caches.open(staticCacheName)
              .then( cache => {
                  cache.put(request, responseFromFetch);
              });
          })
      );
      console.log('Got a response from the cache.');
      return responseFromCache;
  }

Those log statements will appear in this order:

Got a response from the cache.
Got a response from the network.

That’s the opposite order in which they appear in the code. Everything inside the event.waitUntil part is asynchronous.

Here’s the catch: this kind of asynchronous waitUntil hasn’t landed in all the browsers yet. The code I’ve written will fail.

But never fear! Jake has written a polyfill. All I need to do is include that at the start of my serviceworker.js file and I’m good to go:

// Import Jake's polyfill for async waitUntil
importScripts('/js/async-waituntil.js');

I’m also using it when a file isn’t found in the cache, and is returned from the network instead. Here’s what the usual network code looks like:

fetch(request)
  .then( responseFromFetch => {
    return responseFromFetch;
  })

I want to also store that response in the cache, but I want to do it asynchronously—I don’t care how long it takes to put the file in the cache as long as the user gets the response straight away.

Technically, I’m not putting the response in the cache; I’m putting a copy of the response in the cache (it’s a stream, so I need to clone it if I want to do more than one thing with it).

fetch(request)
  .then( responseFromFetch => {
    // Stash a fresh copy in the cache in the background
    let responseCopy = responseFromFetch.clone();
    event.waitUntil(
      caches.open(staticCacheName)
      .then( cache => {
          cache.put(request, responseCopy);
      })
    );
    return responseFromFetch;
  })

That all seems to be working well in browsers that support service workers. For legacy browsers, like Mobile Safari, there’s the much blunter caveman logic of an AppCache manifest.

Here’s the JavaScript that decides whether a browser gets the service worker or the AppCache:

if ('serviceWorker' in navigator) {
  // If service workers are supported
  navigator.serviceWorker.register('/serviceworker.js');
} else if ('applicationCache' in window) {
  // Otherwise inject an iframe to use appcache
  var iframe = document.createElement('iframe');
  iframe.setAttribute('src', '/appcache.html');
  iframe.setAttribute('style', 'width: 0; height: 0; border: 0');
  document.querySelector('footer').appendChild(iframe);
}

Either way, people are making full use of the offline nature of the book and that makes me very happy indeed.