Journal tags: php


Syndicating to Bluesky

Last year I described how I syndicate my posts to different social networks.

Back then my approach to syndicating to Bluesky was to piggy-back off my micro.blog account (which is really just the RSS feed of my notes):

Micro.blog can also cross-post to other services. One of those services is Bluesky. I gave permission to micro.blog to syndicate to Bluesky so now my notes show up there too.

It worked well enough, but it wasn’t real-time and I didn’t have much control over the formatting. As Bluesky is having quite a moment right now, I decided to upgrade my syndication strategy and use the Bluesky API.

Here’s how it works…

First you need to generate an app password. You’ll need this so that you can generate a token. You need the token so you can generate …just kidding; the chain of generated gobbledegook stops there.

Here’s the PHP I’m using to generate a token. You’ll need your Bluesky handle and the app password you generated.
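Something along these lines (a simplified sketch of the com.atproto.server.createSession request, minus any error handling):

<?php
// Simplified sketch: exchange a Bluesky handle and app password for a
// session token via com.atproto.server.createSession.
$handle = 'example.bsky.social';        // your Bluesky handle
$appPassword = 'xxxx-xxxx-xxxx-xxxx';   // the app password you generated

$ch = curl_init('https://bsky.social/xrpc/com.atproto.server.createSession');
curl_setopt_array($ch, [
    CURLOPT_POST => true,
    CURLOPT_HTTPHEADER => ['Content-Type: application/json'],
    CURLOPT_POSTFIELDS => json_encode([
        'identifier' => $handle,
        'password' => $appPassword
    ]),
    CURLOPT_RETURNTRANSFER => true
]);
$session = json_decode(curl_exec($ch), true);
curl_close($ch);

$token = $session['accessJwt'];  // the token for authenticating API calls
$did = $session['did'];          // your account’s decentralised identifier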

Now that I’ve got a token, I can send a post. Here’s the PHP I’m using.
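Again, a simplified sketch of the kind of request involved (com.atproto.repo.createRecord), reusing the $token and $did from the previous step:

<?php
// Simplified sketch: create a post record in your repo. The post text here
// is just an example.
$record = [
    'repo' => $did,
    'collection' => 'app.bsky.feed.post',
    'record' => [
        '$type' => 'app.bsky.feed.post',
        'text' => 'Hello from my website!',
        'createdAt' => gmdate('Y-m-d\TH:i:s\Z')
    ]
];
$ch = curl_init('https://bsky.social/xrpc/com.atproto.repo.createRecord');
curl_setopt_array($ch, [
    CURLOPT_POST => true,
    CURLOPT_HTTPHEADER => [
        'Content-Type: application/json',
        'Authorization: Bearer '.$token
    ],
    CURLOPT_POSTFIELDS => json_encode($record),
    CURLOPT_RETURNTRANSFER => true
]);
$response = json_decode(curl_exec($ch), true);
curl_close($ch);

// The response includes an at:// URI for the new post; its last segment is
// the ID that can be used to link to the post on Bluesky.
$postID = basename($response['uri']);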

There’s some extra code in my version to spot URLs and turn them into links. Bluesky has a very weird way of doing this.

It didn’t take too long to get posting working. After some more tinkering I got images working too. Now I can post straight from my website to my Bluesky profile. The Bluesky API returns an ID for the post that I’ve created there so I can link to it from the canonical post here on my website.

I’ve updated my posting interface to add a toggle for Bluesky right alongside the toggle for Mastodon. There used to be a toggle for Twitter. That’s long gone.

Now when I post a note to my website, I can choose if I want to send a copy to Mastodon or Bluesky or both.

One day Bluesky will go away. It won’t matter much to me. My website will still be here.

Federation syndication

I’m quite sure this is of no interest to anyone but me, but I finally managed to fix a longstanding weird issue with my website.

I realise that me telling you about a bug specific to my website is like me telling you about a dream I had last night—fascinating for me; incredibly dull for you.

For some reason, my site was being brought to its knees anytime I syndicated a note to Mastodon. I rolled up my sleeves to try to figure out what the problem could be. I was fairly certain the problem was with my code—I’m not much of a back-end programmer.

My tech stack is classic LAMP: Linux, Apache, MySQL and PHP. When I post a note, it gets saved to my database. Then I make a curl request to the Mastodon API to syndicate the post over there. That’s when my CPU starts climbing and my server gets all “bad gateway!” on me.

After spending far too long pulling apart my PHP and curl code, I had to come to the conclusion that I was doing nothing wrong there.

I started watching which processes were making the server fall over. It was MySQL. That seemed odd, because I’m not doing anything too crazy with my database reads.

Then I realised that the problem wasn’t any particular query. The problem was volume. But it only happened when I posted a note to Mastodon.

That’s when I had a lightbulb moment about how the fediverse works.

When I post a note to Mastodon, it includes a link back to the original note to my site. At this point Mastodon does its federation magic and starts spreading the post to all the instances subscribed to my account. And every single one of them follows the link back to the note on my site …all at the same time.

This isn’t a problem when I syndicate my blog posts, because I’ve got a caching mechanism in place for those. I didn’t think I’d need any caching for little ol’ notes. I was wrong.

A simple solution would be not to include the link back to the original note. But I like the reminder that what you see on Mastodon is just a copy. So now I’ve got the same caching mechanism for my notes as I do for my journal (and I did my links while I was at it). Everything is hunky-dory. I can syndicate to Mastodon with impunity.
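The cache itself doesn’t need to be anything fancy. Something along these lines would do it (the cache path and variable name are made up for illustration):

<?php
// Sketch of a simple file cache: serve the stored copy if it exists,
// otherwise build the page, store it, and serve it.
$cacheFile = 'cache/notes/'.$noteID.'.html';  // hypothetical path and variable
if (file_exists($cacheFile)) {
    echo file_get_contents($cacheFile);
    exit;
}
ob_start();
// ...assemble the note from the database as usual...
$output = ob_get_clean();
file_put_contents($cacheFile, $output);
echo $output;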

See? I told you it would only be of interest to me. Although I guess there’s a lesson here. Something something caching.

Oh, embed!

I wrote yesterday about how messing about on your own website can be a welcome distraction. I did some tinkering with adactio.com on the weekend that you might be interested in.

Let me set the scene…

I’ve started recording and publishing a tune a day. I grab my mandolin, open up Quicktime and make a movie of me playing a jig, a reel, or some other type of Irish tune. I include a link to that tune on The Session and a screenshot of the sheet music for anyone who wants to play along. And I embed the short movie clip that I’ve uploaded to YouTube.

Now it’s not the first time I’ve embedded YouTube videos into my site. But with the increased frequency of posting a tune a day, the front page of adactio.com ended up with multiple embeds. That is not good for performance—my Lighthouse score took quite a hit. Worst of all, if a visitor doesn’t end up playing an embedded video, all of the markup, CSS, and JavaScript in the embedded iframe has been delivered for nothing.

Meanwhile over on The Session, I’ve got a strategy for embedding YouTube videos that’s better for performance. Whenever somebody posts a link to a video on YouTube, the thumbnail of the video is embedded. Only when you click the thumbnail does that image get swapped out for the iframe with the video.

That’s what I needed to do here on adactio.com.

First off, I should explain how I’m embedding things generally ‘round here. Whenever I post a link or a note that has a URL in it, I run that URL through a little PHP script called getEmbedCode.php.

That code checks to see if the URL is from a service that provides an oEmbed endpoint. A what-Embed? oEmbed!

oEmbed is like a minimum viable read-only API. It was specced out by Leah and friends years back. You ping a URL like this:

http://example.com/oembed?url=https://example.com/thing

In this case http://example.com/oembed is the endpoint and url is the value of a URL from that provider. Here’s a real life example from YouTube:

https://www.youtube.com/oembed?url=https://www.youtube.com/watch?v=-eiqhVmSPcs

So https://www.youtube.com/oembed is the endpoint and url is the address of any video on YouTube.

You get back some JSON with a pre-defined list of values like title and html. That html payload is the markup for your embed code.
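In PHP, grabbing those values can be as simple as this rough sketch (the real getEmbedCode.php checks which service the URL comes from first):

<?php
// Rough sketch: fetch the oEmbed JSON for a YouTube URL and decode it.
// Assumes allow_url_fopen is enabled; a curl request would work just as well.
$url = 'https://www.youtube.com/watch?v=-eiqhVmSPcs';
$endpoint = 'https://www.youtube.com/oembed?format=json&url='.urlencode($url);
$response = json_decode(file_get_contents($endpoint), true);
$title = $response['title'];              // the video title
$embedcode = $response['html'];           // the default iframe markup
$thumbnail = $response['thumbnail_url'];  // the thumbnail image URL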

By default, YouTube sends back markup like this:

<iframe
width="480"
height="270"
src="https://www.youtube.com/embed/-eiqhVmSPcs?feature=oembed"
frameborder="0
allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture"
allowfullscreen>
</iframe>

But now I want to use an img instead of an iframe. One of the other values returned is thumbnail_url. That’s the URL of a thumbnail image that looks something like this:

https://i.ytimg.com/vi/-eiqhVmSPcs/hqdefault.jpg

In fact, once you know the ID of a YouTube video (the ?v= bit in a YouTube URL), you can figure out the path to multiple images of different sizes:

https://i.ytimg.com/vi/-eiqhVmSPcs/default.jpg
https://i.ytimg.com/vi/-eiqhVmSPcs/mqdefault.jpg
https://i.ytimg.com/vi/-eiqhVmSPcs/hqdefault.jpg
https://i.ytimg.com/vi/-eiqhVmSPcs/maxresdefault.jpg

(Although that last one—maxresdefault.jpg—might not work for older videos.)

Okay, so I need to extract the ID from the YouTube URL. Here’s the PHP I use to do that:

parse_str(parse_url($url, PHP_URL_QUERY), $arguments);
$id = $arguments['v'];

Then I can put together some HTML like this:

<div>
<a class="videoimglink" href="'.$url.'">
<img width="100%" loading="lazy"
src="https://i.ytimg.com/vi/'.$id.'/default.jpg"
alt="'.$response['title'].'"
srcset="
https://i.ytimg.com/vi/'.$id.'/mqdefault.jpg 320w,
https://i.ytimg.com/vi/'.$id.'/hqdefault.jpg 480w,
https://i.ytimg.com/vi/'.$id.'/maxresdefault.jpg 1280w
">
</a>
</div>

Now I’ve got a clickable responsive image that links through to the video on YouTube. Time to enhance. I’m going to add a smidgen of JavaScript to listen for a click on that link.

Over on The Session, I’m using addEventListener but here on adactio.com I’m going to be dirty and listen for the event directly in the markup using the onclick attribute.

When the link is clicked, I nuke the link and the image using innerHTML. This injects an iframe where the link used to be (by updating the innerHTML value of the link’s parentNode).

onclick="event.preventDefault();
this.parentNode.innerHTML='<iframe src=https://www.youtube-nocookie.com/embed/'.$id.'?autoplay=1></iframe>'"

But notice that I’m not using the default YouTube URL for the iframe. That would be:

https://www.youtube.com/embed/-eiqhVmSPcs

Instead I’m swapping out the domain youtube.com for youtube-nocookie.com:

https://www.youtube-nocookie.com/embed/-eiqhVmSPcs

I can’t remember where I first came across this undocumented parallel version of YouTube that has, yes, you guessed it, no cookies. It turns out that, not only is the default YouTube embed code bad for performance, it is—unsurprisingly—bad for privacy too. So the youtube-nocookie.com domain can protect your site’s visitors from intrusive tracking. Pass it on.

Anyway, I’ve got the markup I want now:

<div>
<a class="videoimglink" href="https://www.youtube.com/watch?v=-eiqhVmSPcs"
onclick="event.preventDefault();
this.parentNode.innerHTML='<iframe src=https://www.youtube-nocookie.com/embed/-eiqhVmSPcs?autoplay=1></iframe>'">
<img width="100%" loading="lazy"
src="https://i.ytimg.com/vi/-eiqhVmSPcs/default.jpg"
alt="The Banks Of Lough Gowna (jig) on mandolin"
srcset="
https://i.ytimg.com/vi/-eiqhVmSPcs/mqdefault.jpg 320w,
https://i.ytimg.com/vi/-eiqhVmSPcs/hqdefault.jpg 480w,
https://i.ytimg.com/vi/-eiqhVmSPcs/maxresdefault.jpg 1280w
">
</a>
</div>

The functionality is all there. But I want to style the embedded images to look more like playable videos. Time to break out some CSS (this is why I added the videoimglink class to the YouTube link).

.videoimglink {
    display: block;
    position: relative;
}

I’m going to use generated content to create a play button icon. Because I can’t use generated content on an img element, I’m applying these styles to the containing .videoimglink a element.

.videoimglink::before {
    content: '▶';
}

I was going to make an SVG but then I realised I could just be lazy and use the unicode character instead.

Right. Time to draw the rest of the fucking owl:

.videoimglink::before {
    content: '▶';
    display: inline-block;
    position: absolute;
    background-color: var(--background-color);
    color: var(--link-color);
    border-radius: 50%;
    width: 10vmax;
    height: 10vmax;
    top: calc(50% - 5vmax);
    left: calc(50% - 5vmax);
    font-size: 6vmax;
    text-align: center;
    text-indent: 1vmax;
    opacity: 0.5;
}

That’s a bunch of instructions for sizing and positioning. I’d explain it, but that would require me to understand it and frankly, I’m not entirely sure I do. But it works. I think.

With a translucent play icon positioned over the thumbnail, all that’s left is to add a :hover style to adjust the opacity:

.videoimglink:hover::before,
.videoimglink:focus::before {
    opacity: 0.75;
}

Wheresoever thou useth :hover, thou shalt also useth :focus.

Okay. It’s good enough. Ship it!

The Banks Of Lough Gowna (jig) on mandolin

If you embed YouTube videos on your site, and you’d like to make them more performant, check out this custom element that Paul made: Lite YouTube Embed. And here’s a clever technique that uses the srcdoc attribute to get a similar result (but don’t forget to use the youtube-nocookie.com domain).

Indy web

It was Indie Web Camp Brighton on the weekend. After a day of thought-provoking discussions, I thoroughly enjoyed spending the second day tinkering on my website.

For a while now, I’ve wanted to add maps to my monthly archive pages (to accompany the calendar heatmaps I added at a previous Indie Web Camp). Whenever I post anything to my site—a blog post, a note, a link—it’s timestamped and geotagged. I thought it would be fun to expose that in a glanceable way. A map seems like the right medium for that, but I wanted to avoid the obvious route of dropping a load of pins on a map. Instead I was looking for something more like the maps in Indiana Jones films—a line drawn from place to place to show the movement over time.

I talked to Aaron about this and his advice was that a client-side JavaScript embedded map would be the easiest option. But that seemed like overkill to me. This map didn’t need to be pannable or zoomable; just glanceable. So I decided to see how far I could get with a static map. I timeboxed two hours for it.

After two hours, I admitted defeat.

I was able to find the kind of static maps I wanted from Mapbox—I’m already using them for my check-ins. I could even add a polyline, which is exactly what I wanted. But instead of passing latitude and longitude co-ordinates for the points on the polyline, the docs explain that I needed to provide …cue ominous thunder and lightning… The Encoded Polyline Algorithm Format.

Go to that link. I’ll wait.

Did you read through the eleven steps of instructions? Did you also think it was a piss take?

  1. Take the initial signed value.
  2. Multiply it by 1e5.
  3. Convert that decimal value to binary.
  4. Left-shift the binary value one bit.
  5. If the original decimal value is negative, invert this encoding.
  6. Break the binary value out into 5-bit chunks.
  7. Place the 5-bit chunks into reverse order.
  8. OR each value with 0x20 if another bit chunk follows.
  9. Convert each value to decimal.
  10. Add 63 to each value.
  11. Convert each value to its ASCII equivalent.

This was way beyond my brain’s pay grade. But surely someone else had written the code I needed? I did some Duck Duck Going and found a piece of PHP code to do the encoding. It didn’t work. I Ducked Ducked and Went some more. I found a different piece of PHP code. That didn’t work either.

At this point, my allotted time was up. If I wanted to have something to demo by the end of the day, I needed to switch gears. So I did.

I used Leaflet.js to create the maps I wanted using client-side JavaScript. Here’s the JavaScript code I wrote.

It waits until the page has finished loading, then it searches for any instances of the h-geo microformat (a way of encoding latitude and longitude coordinates in HTML). If there are three or more, it generates a script element to pull in the Leaflet library, and a corresponding style element. Then it draws the map with the polyline on it. I ended up using Stamen’s beautiful watercolour map tiles.

Had some fun at Indie Web Camp Brighton on the weekend messing around with @Stamen’s lovely watercolour map tiles. (I was trying to create Indiana Jones style travel maps for my site …a different kind of Indy web.)

That’s what I demoed at the end of the day.

But I wasn’t happy with it.

Sure, it looked good, but displaying the map required requests for a script, a style sheet, and multiple map tiles. I made sure that it didn’t hold up the loading of the rest of the page, but it still felt wasteful.

So after Indie Web Camp, I went back to investigate static maps again. This time I did finally manage to find some PHP code for encoding lat/lon coordinates into a polyline that worked. Finally I was able to construct URLs for a static map image that displays a line connecting multiple points.
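For the record, the encoding boils down to something like this (a sketch written from the algorithm description above, not the exact code I ended up finding):

<?php
// Sketch of the encoded polyline algorithm: each latitude and longitude is
// encoded as the difference from the previous point.
function encodeSignedNumber($num) {
    $num = $num << 1;           // left-shift the value one bit
    if ($num < 0) {
        $num = ~$num;           // invert the encoding for negative values
    }
    $encoded = '';
    while ($num >= 0x20) {      // break into 5-bit chunks, lowest bits first
        $encoded .= chr((0x20 | ($num & 0x1f)) + 63);
        $num >>= 5;
    }
    return $encoded.chr($num + 63);
}

function encodePolyline($points) {
    $lastLat = 0;
    $lastLon = 0;
    $output = '';
    foreach ($points as $point) {
        $lat = (int) round($point[0] * 1e5);  // multiply by 1e5 and round
        $lon = (int) round($point[1] * 1e5);
        $output .= encodeSignedNumber($lat - $lastLat);
        $output .= encodeSignedNumber($lon - $lastLon);
        $lastLat = $lat;
        $lastLon = $lon;
    }
    return $output;
}

// Example points (Brighton and Dublin); the result gets urlencode()d before
// it goes into the static map URL.
$path = urlencode(encodePolyline([[50.8225, -0.1372], [53.3498, -6.2603]]));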

I’ve put these maps on any of the archive pages that also have calendar heatmaps. Some examples:

If you go back much further than that, the maps start to trail off. That’s because I wasn’t geotagging everything from the start.

I’m pretty happy with the final results. It’s certainly far more responsible from a performance point of view. Oh, and I’ve also got the maps inside a picture element so that I can swap out the tiles if you switch to dark mode.

It’s a shame that I can’t use the lovely Stamen watercolour tiles for these static maps though.

Am I cached or not?

When I was writing about the lie-fi strategy I’ve added to adactio.com, I finished with this thought:

What I’d really like is some way to know—on the client side—whether or not the currently-loaded page came from a cache or from a network. Then I could add some kind of interface element that says, “Hey, this page might be stale—click here if you want to check for a fresher version.”

Trys heard my plea, and came up with a very clever technique to alter the HTML of a page when it’s put into a cache.

It’s a function that reads the response body stream in, returning a new stream. Whilst reading the stream, it searches for the character codes that make up: <html. If it finds them, it tacks on a data-cached attribute.

Nice!

But then I was discussing this issue with Tantek and Aaron late one night after Indie Web Camp Düsseldorf. I realised that I might have another potential solution that doesn’t involve the service worker at all.

Caveat: this will only work for pages that have some kind of server-side generation. This won’t work for static sites.

In my case, pages are generated by PHP. I’m not doing a database lookup every time you request a page—I’ve got a server-side cache of posts, for example—but there is a little bit of assembly done for every request: get the header from here; get the main content from over there; get the footer; put them all together into a single page and serve that up.
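In outline, it’s something like this (the file names are made up for illustration):

<?php
// Rough outline of the per-request assembly step.
include 'header.php';
include 'main.php';    // the post itself, pulled from a server-side cache
include 'footer.php';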

This means I can add a timestamp to the page (using PHP). I can mark the moment that it was served up. Then I can use JavaScript on the client side to compare that timestamp to the current time.

I’ve published the code as a gist.

In a script element on each page, I have this bit of coducken:

var serverTimestamp = <?php echo time(); ?>;

Now the JavaScript variable serverTimestamp holds the timestamp that the page was generated. When the page is put in the cache, this won’t change. This number should be the number of seconds since January 1st, 1970 in the UTC timezone (that’s what my server’s timezone is set to).

Starting with JavaScript’s Date object, I use a caravan of methods like toUTCString() and getTime() to end up with a variable called clientTimestamp. This will give the current number of seconds since January 1st, 1970, regardless of whether the page is coming from the server or from the cache.

var localDate = new Date();
var localUTCString = localDate.toUTCString();
var UTCDate = new Date(localUTCString);
var clientTimestamp = UTCDate.getTime() / 1000;

Then I compare the two and see if there’s a discrepancy greater than five minutes:

if (clientTimestamp - serverTimestamp > (60 * 5))

If there is, then I inject some markup into the page, telling the reader that this page might be stale:

document.querySelector('main').insertAdjacentHTML('afterbegin',`
  <p class="feedback">
    <button onclick="this.parentNode.remove()">dismiss</button>
    This page might be out of date. You can try <a href="javascript:window.location=window.location.href">refreshing</a>.
  </p>
`);

The reader has the option to refresh the page or dismiss the message.

This page might be out of date. You can try refreshing.

It’s not foolproof by any means. If the visitor’s computer has their clock set weirdly, then the comparison might return a false positive every time. Still, I thought that using UTC might be a safer bet.

All in all, I think this is a pretty good method for detecting if a page is being served from a cache. Remember, the goal here is not to determine if the user is offline—for that, there’s navigator.onLine.

The upshot is this: if you visit my site with a crappy internet connection (lie-fi), then after three seconds you may be served with a cached version of the page you’re requesting (if you visited that page previously). If that happens, you’ll now also be presented with a little message telling you that the page isn’t fresh. Then it’s up to you whether you want to have another go.

I like the way that this puts control back into the hands of the user.

Updating email addresses with Mailchimp’s API

I’ve been using Mailchimp for years now to send out a weekly newsletter from The Session. But I never visit the Mailchimp website. Instead, I use the API to create a campaign each week, and then send it out. I also use the API whenever a member of The Session updates their email preferences (or changes their details).

I got an email from Mailchimp that their old API was being deprecated and I’d need to update to their more recent one. The code I was using had been happily running for about seven years, but now I’d have to change it.

Luckily, Drew has written a really handy Mailchimp API wrapper for PHP, the language that The Session’s codebase is in. Thanks, Drew! I downloaded that wrapper and updated my code accordingly.

Everything went pretty smoothly. I was able to create campaigns, send campaigns, add new subscribers, and delete subscribers. But I ran into an issue when I wanted to update someone’s email address (on The Session, you can edit your details at any time, including your email address).

Here’s the set up:

use \DrewM\MailChimp\MailChimp;
$MailChimp = new MailChimp('abc123abc123abc123abc123abc123-us1');
$list_id = 'b1234346';
$subscriber_hash = $MailChimp -> subscriberHash('current.address@example.com');
$endpoint = 'lists/'.$list_id.'/members/'.$subscriber_hash;

Now to update details, according to the API, I can use the patch method on that endpoint:

$MailChimp -> patch($endpoint, [
    'email_address' => 'new.address@example.com'
]);

But that doesn’t work. Mailchimp effectively treats email addresses as unique IDs for subscribers. So the only way to change someone’s email address appears to be to delete them, and then subscribe them fresh with the new email address:

$MailChimp -> delete($endpoint);
$newendpoint = 'lists/'.$list_id.'/members';
$MailChimp -> post($newendpoint, [
    'email_address' => 'new.address@example.com',
    'status' => 'subscribed'
]);

That’s somewhat annoying, as the previous version of the API allowed email addresses to be updated, but this workaround isn’t too arduous.

Anyway, I figured I’d share this just in case it’s useful for anyone else migrating to the newer API.

Update: Belay that. Turns out that you can update email addresses, but you have to be sure to include the status value:

$MailChimp -> patch($endpoint, [
    'email_address' => 'new.address@example.com',
    'status' => 'subscribed'
]);

Okay, that’s a lot more straightforward. Ignore everything I said.

100 words 098

When I’m grilling outside, I cook on a gas barbecue. There are quite a few people who would take issue with this. Charcoal is clearly better, they claim. And they’re right. But the thing is, I can fire up my gas barbecue quickly and just get down to cooking.

When I’m programming on the server, I code in PHP. There are quite a few people who would take issue with this. Any other language is clearly better, they claim. And they’re right. But the thing is, I can fire up my text editor quickly and just get down to coding.

Pattern primer

I’m on a workshopping roll. Fresh from running my Responsive Enhancement workshop in Belfast, I’m now heading to Düsseldorf for Beyond Tellerrand where I’ll be running the workshop on Sunday (and if you can’t make it, don’t forget that you can book the workshop for your own workplace too).

As part of the process of building a responsive site from the content out rather than the canvas in, I talk about beginning with the individual components divorced from any layout context. Or, as Mark puts it, “start with the bits.”

That’s the way I’ve been starting most of my projects lately: beginning with the atomic units of content and styling them first before even thinking about layout. This ensures that those styles are extremely robust—because they don’t depend on any particular context, they can be safely dropped into any part of a page.

I’ve been calling this initial collection of markup snippets a pattern primer. To help create the pattern primer, I’ve written a little bit of PHP to automatically generate a page of patterns from a folder of HTML snippets.
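The gist of the idea, as a bare-bones sketch (not the code in the actual repo):

<?php
// Bare-bones sketch: loop over a folder of HTML snippets and output each
// one rendered, followed by its escaped source. The folder name here is
// just for illustration.
foreach (glob('patterns/*.html') as $file) {
    $pattern = file_get_contents($file);
    echo '<h2>'.basename($file, '.html').'</h2>';
    echo '<div class="pattern">'.$pattern.'</div>';
    echo '<pre><code>'.htmlspecialchars($pattern).'</code></pre>';
}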

In my workshop I keep promising to put that script on Github. I finally got around to doing that and you can find it at github.com/adactio/Pattern-Primer.

Take a look at an example pattern primer to get an idea of what a handy deliverable this can be if you’re handing off to other developers. It also acts like a page of unit tests for CSS—whenever you’ve been messing around in the stylesheet you can refresh the page to quickly check to see if anything looks screwed up.

Grab the code; improve upon it; share your changes.

Announcing Huffduffer

Back in April, I wrote:

I’ve been thinking about maybe putting together a podcast — just an RSS feed — that points to interesting inspirational talks, sort of like Jon’s Found Sounds podcasts but for spoken word instead of music.

Well, as soon as I started trying to do that I discovered that, contrary to what Tim Bray says, creating an RSS feed by hand is a pain in the ass. So I decided that I would automate the task of creating an RSS feed complete with enclosures. Then I realised that if this was going to be useful to me, it might well be useful to other people looking to create podcasts of found sounds. So I made a website:

Huffduffer

The term derives from the abbreviation HF/DF. It refers to a technique, widely employed during World War II, to triangulate the position of radio transmissions. I thought that was a suitable term to revive for the practice of finding interesting MP3 files on the web.

Using the service is pretty straightforward. First of all, you have to sign up. No, I haven’t implemented OpenID support. Sorry. I hope to get around to it at some stage.

Secondly, you find MP3 files out there on the web. Using either a bookmarklet, or a form on the site itself, you “huffduff” the file: give it a title, description, and tags.

That’s pretty much it. People can subscribe to your podcast and you can subscribe to other people’s podcasts. You can also subscribe to a podcast of files with a certain tag or a combination of files from a particular person with a particular tag. Basically, if there’s a page for it on the site, there’s probably a corresponding podcast you can subscribe to.
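Under the hood, what turns an RSS feed into a podcast is the enclosure element on each item. As a rough sketch (all the variable names here are made up):

<?php
// Sketch of a single podcast feed item: the crucial bit is the enclosure
// element pointing at the MP3 file (its URL, length in bytes, and MIME type).
echo '<item>
    <title>'.htmlspecialchars($title).'</title>
    <link>'.$url.'</link>
    <description>'.htmlspecialchars($description).'</description>
    <enclosure url="'.$mp3url.'" length="'.$filesize.'" type="audio/mpeg"/>
    <guid>'.$url.'</guid>
</item>';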

So if you’ve ever fancied curating your own podcast, head on over to huffduffer.com and sign up for an account. If you’re interested in the kind of audio I find interesting, you can subscribe to my podcast.

By its nature, this will never be a popular, mass-market site. But, as is the case with most things built to scratch a particular itch, I hope it will turn out to be useful for some other people. If other people do end up using the site, that will open some opportunities for bubbling up some interesting stuff: popular MP3s, popular tags, recommendations of files from users who share similar interests with you.

I had quite a lot of fun building Huffduffer. It’s been a while since I’ve done any back-end programming so I used this as an opportunity to get intimate with the whole MVC idea. I thought about building the site in Django or Ruby on Rails, but in the end I decided to stick with PHP. I investigated some of the PHP frameworks out there and, while they all had parts that I liked, I decided to roll my own code …my own framework, really.

On the front end, the site is built in HTML5. I did this partly for the heck of it, and partly to show that HTML5 is not some future technology but something that you can use right now. The validator by Henri Sivonen proved invaluable.

The visual design of the site is very minimal, as most of my sites tend to be. On the plus side, this means the site is lean and fast-loading. On the minus side, it’s monochrome to the point of boredom. But I spent quite a while crafting the typography just the way I want it in the belief that, if you’re going to concentrate on one aspect of visual design, the typography is probably the best place to start.

I’ll be iterating on Huffduffer over time. It’ll be interesting to see how the site gets used (if at all) and react accordingly.

Check it for yourself and see if it’s something you might like to use. If you have any questions, comments, or suggestions about the site, feel free to chime in on Get Satisfaction.

Hacky holidays on OS X

Christmas is a time for giving, a time for over-indulgence, a time for lounging around and for me, a time for doing those somewhat time-consuming tasks that I’d otherwise never get around to doing… like upgrading my operating system.

I used the downtime here in Arizona to install Leopard on my Macbook. I knew from reading other people’s reports that it might take some time to get my local web server back up and running. Sure enough, I had to jump through some hoops.

I threw caution to the wind and chose the “upgrade” option. Normally I’d choose “Archive and Install” but it sounds like this caused some problems for Roger.

The upgrade went smoothly. Before too long, I had a brand spanking new OS that was similar to the old OS but ever so slightly uglier and slower.

My first big disappointment was discovering that my copy of Photoshop 7 didn’t work at all. Yes, I know that’s a really old version but I don’t do too much image editing on my laptop so it’s always been good enough. I guess I should have done some reading up on compatibility before installing Leopard. Fortunately, I was able to upgrade from Photoshop 7 to Photoshop CS3—I was worried that I might have had to buy a new copy.

But, as I said, the bulk of my time was spent getting my local LAMP constellation back up and running. I did most of my editing in BBEdit—if you install the BBEdit command line tools, you can use the word bbedit in Terminal to edit documents. If you use Textmate, mate is the command you want.

Leopard ships with Apache 2 which manages virtual hosts differently to the previous version. Instead of keeping all the virtual host information in /etc/httpd/httpd.conf (or /etc/httpd/users/jeremy.conf), the new version of Apache stores it in /private/etc/apache2/extra/httpd-vhosts.conf. I fired up Terminal and typed:

bbedit /private/etc/apache2/extra/httpd-vhosts.conf

That file shows a VirtualHost example. After unlocking the file, I commented out the example and added my own info:

<VirtualHost *:80>
   ServerName adactio.dev
   DocumentRoot "/Users/jeremy/Sites/adactio/public_html"
</VirtualHost>

The default permissions are somewhat draconian so to avoid getting 403:Forbidden messages when trying to look at any local sites, I also added these lines to the httpd-vhosts.conf file:

<Directory /Users/*/Sites/>
    Options Indexes Includes FollowSymLinks SymLinksifOwnerMatch ExecCGI MultiViews
    AllowOverride All
    Order allow,deny
    Allow from all
</Directory>

I then saved the file, which required an admin password.

The good news is that Leopard doesn’t mess with the hosts file (located at /private/etc/hosts). That’s where I had listed the same host names I had chosen in the previous file:

127.0.0.1 localhost
127.0.0.1 adactio.dev

But for any of that to get applied, I needed to edit the httpd.conf file:

bbedit /private/etc/apache2/httpd.conf

I uncommented this line:

# Include /private/etc/apache2/extra/httpd-vhosts.conf

While I was in there, I also removed the octothorp from the start of this line:

# LoadModule php5_module libexec/apache2/libphp5.so

That gets PHP up and running. Leopard ships with PHP5 which is A Good Thing.

Going into System Preferences, then Sharing and then ticking the Web Sharing checkbox, I started up my web server and was able to successfully navigate to http://adactio.dev/. There I was greeted with an error message informing me that my local site wasn’t able to connect to MySQL.

Do not fear: MySQL is still there. But I needed to do two things:

  1. Tell PHP where to look for the connection socket and
  2. Get MySQL to start automatically on login.

For the first step, I needed a php.ini file to edit. I created this by copying the supplied php.ini.default file:

cd /private/etc
cp php.ini.default php.ini
bbedit php.ini

I found this line:

mysql.default_socket =

…and changed it to:

mysql.default_socket = /private/tmp/mysql.sock

I had previously installed MySQL by following these instructions but now the handy little preference pane for starting and stopping MySQL was no longer working. It was going to be a real PITA if I had to manually start up MySQL every time I restarted my computer so I looked for a way of getting it to start up automatically.

I found what I wanted on the TomatoCheese Blog. Here’s the important bit:

Remove the MySQL startup item (we’ll use the preferred launchd instead):

 sudo rm -R /Library/StartupItems/MYSQLCOM

Also, right-click and remove the MySQL preference pane in System Preferences because we’ll be using the preferred launchd instead.

Copy this MySQL launchd configuration file to /Library/LaunchDaemons, and change its owner to root:

sudo chown root /Library/LaunchDaemons/com.mysql.mysqld.plist

That did the trick for me. When I restarted my machine, MySQL started up automatically.

So after some command line cabalism and Google sleuthing, I had my local webdev environment back up and running on Leopard.

The Future of Web Apps, day two

I’m feeling quite refreshed and ready for another day of geekery. There weren’t too many drinking shenanigans going on last night.

The official watering hole for the FOWA drinkipoos turned out to be a yuppie nightmare. The entrance hallway was filled with gaudy images that were probably intended to recall 1950s pin-ups but actually just looked like page 3 pages torn from a tatty copy of The Sun. The drinks were ludicrously overpriced and getting out of the toilets required a significant toll charge. All of this would have been mitigated if there were some ancillary benefits such as watching young nubile bodies gyrating on a dancefloor but a sign at the entrance made it very clear that dancing was forbidden. This being England, the sign added, “we apologise for the inconvenience.”

Before long, a rebellion was organised and a gaggle of geeks made a mass exodus to a lovely cosy pub across the street. Happiness and chattiness emerged. After that, there was time for one civilised nightcap in the hotel bar with the dynamic duo of Tara and Chris, Google’s Jonathan Rochelle (a scholar and a gentleman) and Natalie—free from Simon’s clutches while he worked frantically on his slides.

It’s day two of FOWA now and there’s still no sign of free WiFi. Khoi has kindly given me a BT Openzone scratch’n’sniff WiFi card he got yesterday so I’ll use that to dip in and out of the river of connectivity and expand on this running commentary throughout the day.

Mark Anders

Adobe kicked off the day with a Flex demo. Having attended Flash on the Beach, there wasn’t anything new for me here but it was interesting to watch other people’s reactions to the speed of Actionscript 3 and the ease of downloading an Apollo app.

Chris Wilson

Microsoft’s Chris Wilson is on stage giving a state of the Web address. He talked about the origins of Ajax, gave a nice shout out to microformats and he mentioned the power of tagging (Hi, Chris!). There’s plenty of talk about security which isn’t that enthralling to me personally but it’s probably the most important aspect of IE7 for most people on the planet. Alpha transparency in PNGs; now that’s more like it.

Khoi Vinh

Khoi is talking about The Future (capitalisation intentional) which will, as he says, be awesome. But first, let’s hear about some of the design challenges at The New York Times. He’s showing some nice examples of what art direction is. You’ll see art direction in the print version of the paper all the time, but the online counterparts are just templated. There are exceptions like the fifth anniversary of the September 11th attacks and the infographics for the November elections, but of course these are events that are predictable and can be planned for. For breaking news, real-time design just isn’t possible… yet.

Khoi makes an interesting point about the schizophrenia in new technology. At the same time that we’re getting into hi-def television and DVDs, we’re also flocking to YouTube even though the video quality is really lo-fi. And while SLR cameras are getting more and more powerful, we’re using crappy little camera phones more and more. This schizophrenia throws up some design challenges for a media outlet like The New York Times.

There’s no such thing as a free feature, says Khoi. And remember, the more expressive a designer gets, the more the user has to pay for it (download times and such). So for any new feature, there must be a really valid reason for it to exist. Oh, and options are obstructions. Too many prefs are a sign of unresolved design issues that couldn’t be squeezed into the main interface.

Thank you, Khoi. And now it’s Simon’s turn. Hmmmm… I wonder what he’ll be talking about: OpenID, perhaps?

Simon Willison

Oh man, Simon’s on a roll. Talking a mile a minute, getting jibes in at Microsoft, cracking jokes about Ben and Mena Trott… he’s got the audience in the palm of his twirling, whizzing hand.

Long story, short: OpenID rocks. If you’re creating any kind of membership-based site, you need to check this out. If you’re member of a lot of different sites, you need to check this out. Oh, and in case you missed it, both AOL and Digg announced support for OpenID over the past few days. The momentum looks unstoppable at this stage.

I love the fact that the evangelism for OpenID is coming from passionate developers like Simon, not from some corporate representative. Like the microformats movement, it’s bottom-up rather than top-down. Other companies are buying slots at this conference to pitch their products but Simon gets to talk about OpenID because it’s so freakin’ cool and can’t simply be ignored.

Ah, OpenID and microformats: now there’s a cool combo. Simon has won my heart and the hearts of everyone else in the audience, I suspect. He’s talking about portable social networks and everything. Bravo, Mr. Willison!

Jonathan Rochelle

After a pleasant lunch with some of the Last.fm posse, I’m back in the auditorium to hear what Jonathan from Google has to say about Google Docs and Spreadsheets (killer name, indeed). These aren’t the kind of Web apps I’m likely to use myself but I’m interested in the technology behind them. I’m assuming that, given the complexity of the applications, the Ajax used will be of the non-Hijax variety.

Open Mic

Time to break out into something a little unusual. This, as Ryan puts it, is the user-generated part of the conference. Over the past few weeks, delegates have been able to log on to the FOWA site and vote for some short presentations they’d like to see at this point. The three highest-scoring subjects will now present.

  1. The virtual office. Okay, that works.

  2. A documentation technique called Jedi — Just Enough Documentation for Interactions. Great backronym!

  3. The topic with the most votes is… which apps will succeed and which will fail in 2007? Who knows?

Daniel Appelquist

And now it’s time for a talk on mobile. Let’s hear from Daniel Appelquist from Vodafone. I’m not entirely sure that a provider is necessarily going to be the most objective voice on this but we’ll see.

Actually, there’s some interesting stuff here, especially around the intersection of mobile and Ajax. There’s plenty of talk about standards, so that’s all good. I’ll have to corner him later for a chat.

Rasmus Lerdorf

Now let’s hear from the creator of PHP, Rasmus Lerdorf. He’s taking us on a trip down memory lane, looking at Mosaic and early versions of HTML and PHP. Rasmus basically wrote PHP to scratch his own itch—it’s the typical open source story.

Here’s a reassuring confession from someone who has written a programming language:

I hate programming. It’s tedious. It’s no fun. It’s like flying: sitting in a smelly metal tube with other people. But I love problem-solving.

Looking at PHP today, it’s a lot more verbose. The Computer Science geeks like it now but it sure has moved far away from being a quick and dirty tool for getting something done. Ironically, there are students today that only have a background in object-oriented programming and have to be taught what procedural programming is.

Here’s an interesting idea on why people join an open-source community: oxytocin, a neuropeptide otherwise known as nature’s trust hormone. That’s in addition to the usual incentives like self-interest and self-expression. It’s the same motivation that drives people to play World of Warcraft in a big way. Open source projects like PHP are like Web 2.0 community sites: Flickr, Digg and Wikipedia would be nothing without the user-contributed content. The same goes for any open-source project.

In addressing the issue of performance, Rasmus has lost me but that’s due to my own mental deficiency rather than any fault with his presentation style.

Security is even tougher. As he says, “basically, you can never click on a link.” He has two browsers: one for browsing and one for sites that have personal data. It’s kind of paranoid, it’s kind of sad but, when you understand the consequences of cross-site scripting, it’s entirely justified.

PHP5 makes it trivially easy to take XML from Web services and do stuff with it. I can vouch for that.

Time for a quick announcement.

Tariq Krim

Tariq is from Netvibes. I haven’t played with it myself but Mike Stenhouse was raving about it yesterday.

There’s a big announcement coming right now. Here it is… a Universal Widget API or UWA if you prefer a TLA.

If you care, you heard it here first folks.

Wait, here’s another announcement: support for OpenID. Yay! All the cool kids are doing it.

Right. Make way for the guys from Moo.

Richard Moross and Stefan Magdalinski

Print is dead? Bollocks says Richard. And of course he’s right. Derek Powazek would agree, I’m sure.

Moo cards are cool. I’ve got some: little cards with my Flickr food pictures and the URL of Principia Gastronomica. A significant proportion of this audience also have Moo cards. Best of all, anybody here can get free Moo cards if they give these guys a business card in return.

Business cards don’t have to be boring. They can tell a story.

With Moo cards, the difference makes all the difference. Y’know, Qoop launched much the same product—business cards made with the Flickr API—a week before Moo cards launched. But Moo could compete on the differences: unusual size and high-quality recycled card. Everybody talked about Moo cards; nobody talked about Qoop’s cards.

Partnership is everything for Moo. Without Flickr, they’d be nothing.

Marketing is a four letter word: free. Giving away free cards is great marketing. I concur: the free cards I got from Moo clinched the decision to buy cards from them.

The attention to detail in Moo’s physical package really seals the deal. There are little Easter eggs in there and the luggage-tag card that comes with every pack gets everyone talking. There’s an incredible amount that has to be done by hand but that’s what guarantees the right level of quality.

Now Stefan is giving a peek behind the curtain at the technical side of Moo. If you want to know what he’s saying, well, you should have come to the conference then, shouldn’t you? You can’t expect me to do everything now, can you?