About 15 years ago, I did a little project to make a ‘rosary’ for what is often referred to as a ‘secular prayer’.
Shipping Forecast Rosary – South East Iceland
Each of the forecast’s regions is represented in laser-cut marine plywood, and strung – in the order of the broadcast – to be thumbed through as you listen.
Shipping Forecast Rosary – German Bight
It was a very quick idea but I’ve always loved it – and it seemed to resonate with a few folks.
This attempted to create alternate coastlines from the shipping forecast areas.
I’m less happy with the execution here, but it’s still a fun idea. Might be more satisfying as something playable, generative – perhaps it has a future as a code experiment with an LLM’s assistance…
“Look at the pattern this seashell makes. The dappled whorl, curving inward to infinity. That’s the shape of the universe itself. There’s a constant pressure, pushing toward pattern. A tendency in matter to evolve into ever more complex forms. It’s a kind of pattern gravity, a holy greening power we call viriditas, and it is the driving force in the cosmos. Life, you see. Like these sand fleas and limpets and krill—although these krill in particular are dead, and helping the fleas. Like all of us,” waving a hand like a dancer. “And because we are alive, the universe must be said to be alive. We are its consciousness as well as our own. We rise out of the cosmos and we see its mesh of patterns, and it strikes us as beautiful. And that feeling is the most important thing in all the universe—its culmination, like the colour of the flower at first bloom on a wet morning. It’s a holy feeling, and our task in this world is to do everything we can to foster it.”
“Deuterium–tritium fusion, the kind of fusion that most star builders are doing, releases ten million times the amount of energy per kilogram as coal. Ten million. If you had a fusion reactor in your house, you’d have to go to the deuterium-tritium shed once for every ten million times you went to the coal shed. What this means is that the mass of a single cup of water contains the equivalent energy of 290 times what the average person in the US uses each year. The mass of an Olympic swimming pool contains an amount of energy in excess of total world annual energy use.”
“People aren’t the apex species they think they are. Other creatures—bigger, smaller, slower, faster, older, younger, more powerful—call the shots, make the air, and eat sunlight. Without them, nothing.”
“‘It’s going to be all right,’ he said, looking at as many of them as he could. ‘Every moment in history contains a mix of archaic elements, things from all over the past, right back into prehistory itself. The present is always a melange of these variously archaic elements. There are still knights coming through on horseback and taking the crops of peasants. There are still guilds, and tribes. Now we see so many people leaving their jobs to work in the flood relief efforts. That’s a new thing, but it’s also a pilgrimage. They want to be pilgrims, they want to have a spiritual purpose, they want to do real work – meaningful work. There is no reason to keep being stolen from. Those of you here who represent the aristocracy look worried. Perhaps you will have to work for yourselves, and live off that. Live at the same level as anyone else. And it’s true – that will happen. But it’s going to be all right, even for you. Enough is as good as a feast. And it’s when everyone is equal that your kids are safest.”
“There is a possible future for humanity where we have stabilized our climate, where everyone has the energy and resources that they need to survive and thrive, where we get to connect with each other in myriad ways. I get to use “we” in the best possible way, meaning all of humanity. Running the numbers and realizing exactly how abundant renewable energy is, and then realizing how close we are to being able to harness it—it’s an absolute game changer. We’re accustomed to thinking about making the transition away from fossil fuels to renewable sources as one that we are doing under duress, making a sacrifice to stave off disaster. But that’s not what we’re doing. What we’re doing is leveling up. We—you, me, anyone who is alive today—we have the opportunity to not just live through but contribute to a species-wide transition from struggle to security, from scarcity to abundance. We can be the best possible ancestors to future generations, putting them on a permanent, sustainable path of abundance and thriving. And we can do it for all of our descendants—all of humanity—not just a narrow line. But we can only do it together, and we still need to figure out how to get there.”
“It’s a miracle the weeds push up. Where is their sustenance, what are they feeding on? They see them only on the roads, by the mast towers, and on the airport runway where they landed. It is as if they thrive on provocation, rising up only when they have something to tear down. They are impish and morbid and embittered and they sort of love them. On the black rubble beaches, on the lower hillsides, they linger; they sit back, wait for the hubris of industry.”
“Another of his favourites, even more puzzling to young men and women conditioned to seek answers, was, ‘Uncertainty is all we have. It’s our advantage. It’s the virtue of the day.’”
Was able to get some time this week to catch up with Bryan Boyer.
We talked about some of the work he was doing with his students, particularly challenging them to think about design interventions and prototyping those across the ‘pace layers’ as famously depicted by Stewart Brand in his book “How Buildings Learn”.
The image is totemic for design practitioners and theorists of a certain vintage (although I’m not sure how fully it resonates with today’s digital ‘product’ design / UX/UI generation) and certainly has been something I’ve wielded over the last two decades or so.
I think my first encounter with it would have been around 2002/2003 or so, in my time at Nokia.
I distinctly remember a conference where (perhaps unsurprisingly!) Dan Hill quoted it – I think it was DIS in Cambridge, Massachusetts, where I also memorably got driven around one night in a home-brew dune buggy built and piloted (for want of a better term) by Saul Griffith.
For those not familiar with it – here it is.
The ‘point’ is to show the different cadences of change and progress in different idealised strata of civilisation (perhaps a somewhat narrow, WEIRD-ly defined civilisation) – and moreover, much like the slips, schisms and landslides of different geological layers, to make the reader aware of the shearing forces and tensions between those layers.
It is a constant presence in the discourse, which leads both to its dismissal and to its uncritical acceptance as a cliché.
But this familiarity, aside from breeding contempt, means it is also something quite fun to play with in semi-critical ways.
While talking with Bryan, I discussed the biases perhaps embedded in showing ‘fashion’ as a wiggly ‘irrational’ line compared to the other layers.
What thoughts may come from depicting all the layers as wiggly?
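It’s also the kind of thought experiment that is now trivially sketchable in code. Here’s a throwaway sketch – nothing to do with Brand’s actual diagram, just six invented waveforms whose wiggle slows down as the layers get deeper:

```python
# A throwaway thought-experiment, not Brand's diagram: render all six pace layers
# as wiggly lines, with the wiggle slowing down as the layers get deeper.
import math

LAYERS = ["fashion", "commerce", "infrastructure", "governance", "culture", "nature"]

def wiggle(depth: int, steps: int = 60) -> str:
    """Crude ASCII waveform for one layer; deeper layers oscillate more slowly."""
    freq = 1.0 / (depth + 1)        # fashion twitches, nature barely moves
    marks = []
    for t in range(steps):
        y = math.sin(t * freq)
        marks.append("/" if y > 0.3 else "\\" if y < -0.3 else "-")
    return "".join(marks)

for depth, name in enumerate(LAYERS):
    print(f"{name:>15}  {wiggle(depth)}")
```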
Another thought from our chat was to extend the geological metaphor to the layers.
Geologists and earth scientists often find the most interesting things at the interstices of the layers. Deposits or thin layers that tell a rich tale of the past. Tell-tale indicators of calamity such as the K–Pg/K–T boundary. Annals of a former world.
The laminar boundary between infrastructure and institutions is perhaps the layer that gets the least examination in our current obsession with “product”…
I’ve often discussed with folks the many situations where infrastructure (capex) is mistaken for something that can replace institutions/labour (opex) – and where service design interventions or strategic design prototypes can help mitigate that.
In the pace layers, perhaps we can call that the “Dan Hill Interstitial Latencies Layer” – pleasingly recurrent in its acronymic form (D-HILL) and make it irregular and gnarly to indicate the difficulties there…
The Representational Planar OP-Ex layer (R-POPE) might be another good name, paying homage to the other person I associate with this territory, Richard Pope. I’ve just started reading Richard’s book “Platformland” which I’m sure will have a lot to say about it.
“We might interact with them as individuals but they’re inherently collective, social, and spatial. Because they bring resources to where they’re used, they create enduring relationships not just between the people who share the network but also between those people and place, where they are in the world and the landscape the network traverses. These systems make manifest our ability to cooperate to meet universal needs and care for each other.”
So, perhaps… rather than superficial snark about a design talk cliche, the work of unpacking and making connective tissue across the pace layers might seem more vital in that context.
John De La Parra, a food scientist from the Rockefeller Foundation, spoke on the first day of The Conference, after a pretty esoteric (to me) presentation that asked us to participate in a guided meditation, listen to plants and submit our dreams to an experimental app.
“See, there are basically two kinds of philosophy – one’s called prickly, the other one is called goo. Prickly people are precise, rigorous, logical – they like everything chopped up and clear. Goo people like it vague, big picture, random, imprecise, incomplete and irrational. Prickly people believe in particles, goo people believe in waves. They always argue with each other but what they don’t realize is neither one of them can take their position without their opposition being there. You wouldn’t know you are advocating prickles unless someone else was advocating goo. You wouldn’t even know what prickles was and what goo was. Life is not prickles or goo, it’s gooey-prickles or prickly-goo.”
That stuck with me – and I searched for the quote on my return to the UK – to find that it had been animated, in a production from… Matt Stone and Trey Parker of South Park fame.
Nice one, universe.
* Incidentally, note the anachronism in that scene from “Her”, where Artificial Superintelligence coincides with gas hobs and stove-top kettles! Paging Rewiring America to demand an electrified director’s cut!
Usually I’d write up the talk here, but it’s over at my new site Kardashevstreet.com, where I’ll be posting stuff related to work on solar and the energy transition from now on.
Since leaving Lunar Energy at the end of July, I’ve been trying to figure out how to keep going in the loose domain of ‘design as it relates to the energy transition’ – and have a few things in the works which will manifest there over the next few months hopefully …
I left my role as Head of Design at Lunar Energy at the end of July this year, after roughly 2.5 years. I originally posted this to LinkedIn at the time, but thought I would repost it here where I own the words (a bit) more.
After 2.5 action-packed years, today is my last day as head of design at Lunar Energy
I’m grateful to my colleagues for all I’ve learned during that time, and to Chris Wright, Simon Daniel and Kunal Girotra for hiring me in the first place, after I left Google back in 2021.
It was a fantastic challenge to work hands-on across every aspect of design at a startup again – from the brand identity, to the industrial design, app UX all the way to compliance labels, packaging and installer collateral. Oh and all the fun internal schwag like the ‘mission patch’ stickers you can see here on my laptop as I hand it back.
I’m really proud of the work the team have done so far – elevating great design and experience in the service of their mission to move our homes to be powered by the endless energy of our Sun.
Lunar will continue to deliver on the design and quality of end-user experience – as it ramps up installation of the Lunar System this year. I’ll be cheering them on, but as the 0-to-1 challenges have slowed, it’s time for me to move on.
Well, a bit of a break through August and hanging out with the family – I think it’s the first time in a couple of decades I don’t have something new immediately lined up, so I’m going to enjoy that feeling for now!
I have some speaking and teaching lined up which I’ll be able to share more about soon – but very up for a chat about the near-future if you think that there’s something matt-shaped there I should know about.
Aside from clean energy hardware, software and services – I’m keen to get back into the fray of AI, especially personal AI experiences across hw & sw such as I’d been working on prior to leaving Google.
But for now, deleting Slack (phew) from my phone… and onwards!
Well – August is over, and now I’m actively looking around for those Jones-shaped jobs.
Get in touch if you would like to chat to me about teaching, consultancy, project work or even full-time opportunities in the realms of AI across HW&SW, design for the energy transition or anything else that you think may be up my (Kardashev) street.
I left my job at Lunar Energy last month and August has been about recharging – some holidays with family and also wandering London a bit catching up with folks, seeing some art/design, and generally regenerative flaneur-y.
Yesterday, for instance, was off to lunch with my talented friends at the industrial design firm Approach Studio in Hackney.
This entailed getting the Overground, and in doing so I found something wonderful at Brockley Station.
Placed along the platform were “InfoTotems” (at least that’s what they were called on the back of them). Sturdy, about 1.5m high and with – crucially in the bright SE London sunlight of August – easily-readable low-power E-Ink screens.
E-ink “InfoTotem” at Brockley Station, SE London
They seemed to function as very simple but effective dynamic way-finding, nudging me down the platform to where it predicted I’d find a less-busy carriage.
Location sensitive, dynamic signage: E-ink “InfoTotem” at Brockley Station, SE London
Wonderfully, when I did so, I got this message on the next InfoTotem.
“You’re in the right place”: contextual reassurance from E-ink “InfoTotem” at Brockley Station, SE London
Nothing more than that – no extraneous information, just something very simple, reassuring and useful.
It felt really appropriate and thoughtful.
Not overreaching, over-promising, or overloading with *everything* else this thing could possibly do as a software-controlled surface.
Very nice, TfL folks.
E-ink “InfoTotem” at Brockley Station, SE London: Back view, UID…
I’m going to try to do a bit more poking on the provenance of this work, and where it might be heading, as I find it really delightful.
It made me recall one of my favourite BERG projects I worked on, “The Journey” which was for Dentsu London – looking at ways to augment the realities of a train journey with light touch digital interventions on media surfaces along the timeline of the experience.
Place-based reassurance: Sketch for “The Journey” work with BERG for Dentsu London
Place-based reassurance: E-Ink magnetic-backed dynamic signage. Still from “The Journey” work with BERG for Dentsu London
I think what I like about the InfoTotems is that instead of a singular product doing a thing on the platform, it’s treated as a spatial experience between the HW surfaces – and as a result it feels like a service inhabiting the place, rather than just a product.
Without that overloading I was referring to, what else could they do?
Obviously this example of nudging me down the platform to a less-busy carriage is based on telemetry it’s received from the arriving train.
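I don’t know how TfL actually does this, but as a purely illustrative guess, the mapping could be as simple as: each totem knows where it stands along the platform, receives per-carriage load figures from the approaching train, and picks its message accordingly – something like this sketch (all names and numbers invented):

```python
# Purely an illustrative guess at the kind of mapping involved (TfL's real system is
# unknown to me): each totem knows its position along the platform, receives per-carriage
# load factors from the approaching train, and chooses what to say.

def totem_message(totem_position_m: float, carriage_loads: list[float],
                  carriage_length_m: float = 20.0) -> str:
    """Pick a message for one totem from load factors (0 = empty, 1 = full) per carriage."""
    quietest = min(range(len(carriage_loads)), key=lambda i: carriage_loads[i])
    quietest_centre_m = (quietest + 0.5) * carriage_length_m
    offset_m = quietest_centre_m - totem_position_m
    if abs(offset_m) < carriage_length_m / 2:
        return "You're in the right place"
    direction = "ahead of you" if offset_m > 0 else "behind you"
    return f"Quieter carriages about {abs(offset_m):.0f}m {direction}"

# e.g. an eight-car train, emptier at the rear, seen from a totem 30m along the platform:
print(totem_message(30.0, [0.9, 0.8, 0.8, 0.7, 0.6, 0.5, 0.3, 0.2]))
```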
Could there be more that conveys the spirit of the place – observations or useful nuggets – that are connected to where you are temporarily, but where the totems sit more permanently?
In “The Journey” there’s a lovely short bit where Jack is travelling through the UK countryside and looks at a ticket that has been printed for him, a kind of low-res augmented reality.
It’s a prompt for him to look out the window to notice something, knowing where he’s sitting and what time he’s going to go past a landmark.
Could low-powered edge AI start to do something akin to this? To build out context or connections between observations made about the surroundings?
Cyclist counter sign in Manchester UK, Image via road.cc
We’ve all seen signs that count – for example ‘water bottles filled’ or ‘bike riders using this route today’ – but an edge AI could perhaps do something more lyrical, or again use the multiple positioned screens along the platform to tell a more serialised, unique story.
Maybe it has a memory of place, a journal. It would need some delicate, sensitive, playful, non-creepy design – as well as technological underpinnings, i.e. privacy-preserving sensing and edge AI.
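To make that last thought slightly more concrete – a minimal, entirely hypothetical sketch of what a ‘journal of place’ might look like under those constraints: each totem stores only coarse, anonymous counts, and a once-a-day note gets serialised across the row of screens (every class and message here is invented):

```python
# An entirely hypothetical sketch of a "journal of place": each totem keeps only
# coarse, anonymous counts (no images, no identifiers), and once a day a short note
# is serialised across the row of screens. Every class and message here is invented.

from dataclasses import dataclass
from datetime import date

@dataclass
class Totem:
    position_m: int                 # where it stands along the platform
    footfall_today: int = 0         # an aggregate count is all it remembers
    display: str = ""

    def sense_passerby(self) -> None:
        """A privacy-preserving sensor tick: increment a counter, store nothing else."""
        self.footfall_today += 1

@dataclass
class PlatformJournal:
    totems: list

    def entry(self, today: date) -> list:
        """Compose a short note from aggregate data, one line per totem down the platform."""
        total = sum(t.footfall_today for t in self.totems)
        busiest = max(self.totems, key=lambda t: t.footfall_today)
        story = [
            f"{today:%A}. {total} of you passed this way today.",
            f"Most of you waited near the {busiest.position_m}m mark - the sunny end.",
            "The 08:12 was quiet at the rear, as usual.",
        ]
        for totem, line in zip(self.totems, story):
            totem.display = line
        return story

if __name__ == "__main__":
    platform = PlatformJournal([Totem(position_m=p) for p in (10, 60, 110)])
    for _ in range(42):
        platform.totems[2].sense_passerby()
    print("\n".join(platform.entry(date.today())))
```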
I recall Matt Webb also working with Hoxton Analytics who were pursuing stuff in this space to create non-invasive sensing of things like traffic and footfall in commercial space.
In terms of edge AI that starts to relate to the spatial world, I’m tracking the work of friends who have started Archetype.ai to look at just that. I need to delve into it and understand it more.
“A visual language to demystify the tech in cities.” – Patrick Keenan et al, Sidewalk Labs, c2019
Of course the danger is that once we start covering them in these icons of disclosure, and doing more and more mysterious things with our totems, we lose the calm ‘just enough internet’ approach that I love so much about this current iteration.
Maybe they’re just right as they are – and I should listen to them…
A couple of weeks ago, at the end of July, I booked a slot to try out the Apple Vision Pro.
It has been available for months in the USA, and might already be in the ‘trough of disillusionment’ there – but I wanted to give it a try nonetheless.
I sat on a custom wood and leather bench in the Apple Store Covent Garden that probably cost more than a small family car, as a custom machine scanned my glasses to select the custom lenses that would be fitted to the headset.
I chatted to the personable, partially-scripted Apple employee who would be my guide for the demo.
Eventually the device showed up on a custom tray perfectly 10mm smaller than the custom sliding shelf mounted in the custom wood and leather bench.
The beautifully presented Apple Vision Pro at the Apple Store Covent Garden
And… I got the demo?
It was impressive technically, but the experience – which seemed to be framed as one of ‘experiencing content’ – left me nonplussed.
I’m probably an atypical punter, but the bits I enjoyed the most were the playful calibration processes, where I had to look at coloured dots and pinch my fingers, accompanied by satisfying playful little touches of motion graphics and haptics.
That is, the stuff where the spatial embodiment was the experience was the most fun, for me…
Apple certainly have gone to great pains to try and distinguish the Vision Pro from AR and VR – making sure it’s referenced throughout as ‘spatial computing’ – but there’s very little experience of space, in a kinaesthetic sense.
It’s definitely conceived of as ‘spatial-so-long-as-you-stay-put-on-the-sofa computing’ rather than something kinetic, embodied.
The technical achievements of the fine-grained recognition of gesture are incredible – but this too serves to reduce the embodied experience.
At the end of the demo, the Apple employee seemed to be noticeably crestfallen that I hadn’t gasped or flinched at the usual moments through the immersive videos of sport, pop music performance and wildlife.
He asked me what I would imagine using the Vision Pro for – and I said, in the nicest possible way, that I probably couldn’t imagine using it – but I could imagine interesting uses teamed with something like Shapr3d and the Apple Pencil on my iPad.
He looked a little sheepish and said that probably wasn’t going to happen, but that soon, with SW updates, I could use the Vision Pro as an extended display. OK – that’s… great?
But I came away imagining more.
I happened to run into an old friend and colleague from BERG in the street near the Apple Store and we started to chat about the experience I’d just had.
I unloaded a little bit on them, and started to talk about the disappointing lack of embodied experiences.
We talked about the constraint of staying put on the sofa – rather than wandering around with the attendant dangers.
But we’ve been thinking about ‘stationary’ embodiment since Dourish, the Sony EyeToy and the Wii, over 20 years ago.
It doesn’t seem like that much of a leap to apply some of those thoughts to this new level of resolution and responsiveness that the Vision Pro presents.
With all that as a preamble – here are some crappy sketches and first (half-formed) thoughts I wanted to put down here.
Imagining the combination of a Vision Pro, iPad and Apple Pencil
Vision Pro STL Printer Sim
The first thing that came to mind in talking to my old colleague in the street was to take some of the beautiful realistically-embedded-in-space-with-gorgeous-shadows windows that just act like standard 2D pixel containers in the Vision Pro interface and turn them into ‘shelves’ or platens that you could have 3D virtual objects atop.
One idea was to extend my wish for some kind of Shapr3d experience into being able to “previsualise” the things I’m making in the real world. The app already does a great job of this with its AR features, but how about having a bit of fun with it, and rendering the object on the Vision Pro via a super-fast, impossibly capable (simulated) 3D printer – which, of course, because it’s simulated, can print in any material…
Sketch of Vision Pro 3d sim-printer
(Roughly) Animated sketch of Vision Pro 3d sim-printer
Once my designed object had been “printed” in the material of my choosing, super-fast (and without any of the annoying things that can happen when you actually try to 3D print something…) I could of course change my scale in relation to it to examine details, place it in beautiful inaccessible immersive surroundings, apply impossible physics to it etc etc. Fun!
Vision Pro Pottery
Extending the idea of the virtual platen – could I use my iPad in combination with the Vision Pro as a cross-over real/virtual creative surface in my field of view? Rather than have a robot 3D printer do the work for me, could I use my hands and sculpt something on it?
Could I move the iPad up and down or side to side to extrude or lathe sculpted shapes in space in front of me?
Could it spin and become a potter’s wheel, with the high-resolution hand detection of the Vision Pro picking up the slightest changes to give fine control over what I’m shaping?
Is Patrick Swayze over my shoulder?
Vision Pro + iPad sculpting in space.
Maybe it’s something much more throw-away and playful – like using the iPad as an extremely expensive version of a deformed wire coat-hanger to create streams of beautiful, iridescent bubbles as you drag it through the air – but perhaps capturing rare butterflies or fairies in them as you while away the hours atop Machu Picchu or somewhere similar where it would be frowned upon to spill washing-up liquid so frivolously…
Making impossible bubbles with an iPad in Vision Pro world
Of course this interaction owes more than a little debt to a previous iPad project I saw get made first hand, namely BERG’s iPad Light-painting
Although my only real involvement in that project was as a photographic model…
Your correspondent behind an iPad-lightpainted cityscape (Image by Timo, of course)
Pencils, Pads, Platforms, Pots, Platens, Plinths
Perhaps there is a slightly more general, sober, useful pattern in these sketches – of horizontal virtual/real crossover ‘plates’ for making, examining and swapping between embodied creation with pencil/iPad and spatial examination and play with the Vision Pro.
I could imagine pinching something from the vertical display windows in Vision Pro to place onto my iPad (or even my watch?) in order to keep it, edit it, change something about it – before casting it back into the simulated spatial reality of the Vision Pro.
Perhaps it allows for a relationship between two realms that feels more embodied and ‘real’ without having to leave the sofa.
Perhaps it also allows for less ‘real’ but more fun stuff to happen in the world of the Vision Pro (which in the demo seems doggedly anchored on ‘real’ experience verisimilitude – sport, travel, family, pop concerts).
Perhaps my Apple watch can be more of a Ben 10 supercontroller – changing into a dynamic UI to the environment I’m entering, much like it changes automatically when I go swimming with it and dive under…
Anyway – it was very much worth doing the demo, and I’d recommend it, if only for some quick stretching (and sketching) of the mindlegs.
My sketches in a cafe a few days after the demo
All in all I wish the Vision Pro was just *weirder*.
Back when it came out in the US in February I did some more sketches in reaction to that thought… I can’t wait to see something like a bonkers Gondry video created just for the Vision Pro…
As a fan of Alan Kay and the original vision of the Dynabook, this made me very happy.
But moreover – as someone who has never been that excited by the chatbot/voice obsessions of BigTech, it was wonderful to see.
Of course the proof of this pudding will be in the using, but the notion of a real-time magic notebook where the medium is an intelligent canvas responding as an ‘intelligence amplifier’ is much more exciting to me than most of the currently hyped visions of generative AI.
I was particularly intrigued to see the more diagrammatic example below, which seemed to belong in the conceptual space between Bret Victor’s Dynamicland and Papert’s Mathland.
I recall when I read Papert’s “Mindstorms” (back in 2012, it seems?) I got retroactively angry about how I had been taught mathematics.
The ideas he advances for learning maths through play, embodiment and experimentation made me sad that I had not had the chance to experience the subject through those lenses, but instead through rote learning leading to my rejection of it until much later in life.
As he says “The kind of mathematics foisted on children in schools is not meaningful, fun, or even very useful.”
Perhaps most famously he writes:
“…the idea of ‘talking mathematics’ to a computer can be generalized to a view of learning mathematics in “Mathland”; that is to say, in a context which is to learning mathematics what living in France is to learning French.”
Play, embodiment, experimentation – supported by AI – not *done* for you by AI.
I’ve long thought the assistant model should be considered harmful. Perhaps the Apple approach announced at WWDC means it might not be the only game in town for much longer.
My first email to him had the subject line of this blog post: “Magic notebooks, not magic girlfriends” – which I think must have intrigued him enough to respond.
This, in turn, led to the fantastic experience of meeting up with him a few times while he was based in Edinburgh and having him write a series of brilliant pieces (for internal consumption only, sadly) on what truly personal AI might mean through his lens of cognitive science and philosophy.
As a tease here’s an appropriate snippet from one of Professor Clark’s essays:
“The idea here (the practical core of many somewhat exotic debates over the ‘extended mind’) is that considered as thinking systems, we humans already are, and will increasingly become, swirling nested ecologies whose boundaries are somewhat fuzzy and shifting. That’s arguably the human condition as it has been for much of our recent history—at least since the emergence of speech and the collaborative construction of complex external symbolic environments involving text and graphics. But emerging technologies—especially personal AI’s—open up new, potentially ever- more-intimate, ways of being cognitively extended.”
I think that’s what I object to, or at least recoil from in the ‘assistant’ model – we’re abandoning exploring loads of really rich, playful ways in which we already think with technology.
Drawing, model making, acting things out in embodied ways.
Back to Papert’s Mindstorms:
“My interest is in the process of invention of “objects-to-think-with,” objects in which there is an intersection of cultural presence, embedded knowledge, and the possibility for personal identification.”
“…I am interested in stimulating a major change in how things can be. The bottom line for such changes is political. What is happening now is an empirical question. What can happen is a technical question. But what will happen is a political question, depending on social choices.”
The somewhat lost futures of Kay, Victor and Papert are now technically realisable.
“what will happen is a political question, depending on social choices.”
That is, Apple are toolmakers at heart – and personal device sellers at the bottom line. They don’t need to maximise attention or capture you as a rent (mostly). That makes personal AI as a ‘thing’ that can be sold much more of a viable choice for them, of course.
Apple are far freer, well-placed (and of course well-resourced) to make “objects-to-think-with, objects in which there is an intersection of cultural presence, embedded knowledge, and the possibility for personal identification.”
The wider strategy of “Apple Intelligence” appears to be just that.
But – my hope is the ‘magic notebook’ stance in the new iPad calculator represents the start of exploration in a wider, richer set of choices in how we interact with AI systems.
Including the assertion that most of the folk who see it as a goal to be emulated in our technologies haven’t watched the end.
The end (which I did watch), if memory serves, is where the AIs ‘leave’ to go hang out with the emulated ghost of Alan Watts in the Oort Cloud.
And it’s ok, cos everyone then realises how alienated they’ve been by technofeudalism, and go for a picnic.
Or something.
I was trying to find a talk that Kevin Slavin gave, 16 or so years ago at the Architectural Association – at the launch of the BLDGBLOG book.
I can’t.
But again, if memory serves, its epic coda was the machines full of HFT algos ascending, like the end of Her, to a realm of pure lightspeed hyperfinance, uncoupled from the physical world they had been chained to.
Maybe, on a good day, I think the machines, and the people who think like machines will delaminate themselves, and we’ll be left behind – but it’ll be ok, because we’ll have people like Louis Cole.