Google’s former and founding motto “Don’t be evil” sounds benevolent, but is it?
“Don’t be evil” paired with the company’s premise: let’s build products in a way that allows for immeasurable corporate access into people’s private lives for fun and profit, unlocking all the positive potential this data can provide. We’ll only use it for good; the Evil™️ things will just be off limits.
Of course, the motto changed to “Do the right thing” in 2015, when it was very obvious it was impossible for Google not to indulge in the Evil with the potential for it sitting right there. By loosening the motto to allow for at least some Evil, so long as it’s the “right thing”, Google can at least be honest with the public that “Evil” is definitely on the table.
Maybe it’s time to demand “Can’t be evil”? Build in a way where the evil just isn’t possible. Reject the possibility of evil in the software you choose to use.
Richard Hamming’s “The Art of Doing Science and Engineering” is a book capturing the lessons he taught in a course he gave at the U.S. Naval Postgraduate School in Monterey, CA. He characterizes what he was trying to teach as a “style” of thinking in science and engineering.
Having a physics degree myself, periodically ruminating on the agony of professional software development, and hoping to find some overlap between my professional field and Hamming’s life experience, I gave it a read.
The book is filled with nuggets of wisdom and illustrates a career of move-the-needle science and engineering at Bell Labs. I didn’t personally find much value in many of the algebraic walk-throughs of various topics like information theory, but learning about how Hamming discovered error correcting codes was definitely interesting and worth a read.
The highlight of the book comes in the second half, where he includes interesting stories, analogies and observations on nearly every page. Below are the highlights I pulled while reading.
That brings up another point, which is now well recognized in software for computers but which applies to hardware too. Things change so fast that part of the system design problem is that the system will be constantly upgraded in ways you do not now know in any detail! Flexibility must be part of the modern design of things and processes. Flexibility built into the design means not only will you be better able to handle the changes which will come after installation, but it also contributes to your own work as the small changes which inevitably arise both in the later stages of design and in the field installation of the system…
Thus rule two:
Part of systems engineering design is to prepare for changes so they can be gracefully made and still not degrade the other parts.
– p.367
This quote is my favorite out of the entire book. It feels like a constant fight in software engineering between the impulse to lock down runtime versions, specific dependency versions, and other environmental factors, versus developing software in a way that accommodates wide variance in all of these component factors. Both approaches claim reliability and flexibility; however, which approach actually tests for it?
In my experience, the tighter the runtime and dependency specifications, the faster fragility spreads, and it’s satisfying to hear Hamming’s experience echo this observation. Sadly, his expectation that those writing software would universally understand this simply hasn’t held up.
Good design protects you from the need for too many highly accurate components in the system. But such design principles are still, to this date, ill understood and need to be researched extensively. Not that good designers do not understand this intuitively, merely it is not easily incorporated into the design methods you were taught in school.
Good minds are still needed in spite of all the computing tools we have developed. The best mind will be the one who gets the principle into the design methods taught so it will be automatically available for lesser minds!
– p.268
Here Hamming is describing H.S. Black’s feedback circuit’s tolerance for low accuracy components as what constitutes good design. I agree! Technology that works at any scale, made out of commodity parts with minimal runtime requirements tends to be what is most useful across the longest amount of time.
Committee decisions, which tend to diffuse responsibility, are seldom the best in practice—most of the time they represent a compromise which has none of the virtues of any path and tends to end in mediocrity.
– p.274
I appreciated his observations on committees, and their tendency to launder responsibility. They serve a purpose, but it’s important to understand their nature.
The Hawthorne effect strongly suggests the proper teaching method will always be in a state of experimental change, and it hardly matters just what is done; all that matters is both the professor and the students believe in the change.
– p.288
It has been my experience, as well as the experience of many others who have looked, that data is generally much less accurate than it is advertised to be. This is not a trivial point—we depend on initial data for many decisions, as well as for the input data for simulations which result in decisions.
– p.345
Averages are meaningful for homogeneous groups (homogeneous with respect to the actions that may later be taken), but for diverse groups averages are often meaningless. As earlier remarked, the average adult has one breast and one testicle, but that does not represent the average person in our society.
– p.356
You may think the title means that if you measure accurately you will get an accurate measurement, and if not then not, but it refers to a much more subtle thing—the way you choose to measure things controls to a large extent what happens. I repeat the story Eddington told about the fishermen who went fishing with a net. They examined the size of the fish they caught and concluded there was a minimum size to the fish in the sea. The instrument you use clearly affects what you see.
– p.373
Intuitively, I think many people who attempt to measure anything understand that their approach is reflected in the results to some degree. I hadn’t heard of the Hawthorne effect before, but intuitively it makes sense.
People with an idea on how to improve something implement their idea and it works, because they want it to work and they let its effects take hold. Then someone else is prescribed the idea, or is brought into the fold where the idea is implemented, and the benefits evaporate.
I’ve long suspected that in the context of professional software development, where largely unscrutinized benchmarks and soft data are the norm, people start with an opinion or theory and work backward to data that supports it. Could it just be that people need to believe that working in a certain way is necessary for them to work optimally? Could it be “data” is often just a work function used to outmaneuver competing ideas?
Anyway, it’s just another thing to factor in when data is plopped in your lap.
Moral: there need not be a unique form of a theory to account for a body of observations; instead, two rather different-looking theories can agree on all the predicted details. You cannot go from a body of data to a unique theory! I noted this in the last chapter.
–p.314
Heisenberg derived the uncertainty principle that conjugate variables, meaning Fourier transforms, obeyed a condition in which the product of the uncertainties of the two had to exceed a fixed number, involving Planck’s constant. I earlier commented, Chapter 17, this is a theorem in Fourier transforms-any linear theory must have a corresponding uncertainty principle, but among physicists it is still widely regarded as a physical effect from nature rather than a mathematical effect of the model.
–p.316
I appreciate Hamming suggesting that some of our understanding of physical reality could be a byproduct of the model being used to describe it. It’s not examined closely in undergraduate or graduate quantum mechanics, and I find it interesting that Hamming, who is clearly highly intuitive with modeling, also raises this question.
Let me now turn to predictions of the immediate future. It is fairly clear that in time “drop lines” from the street to the house (they may actually be buried, but will probably still be called “drop lines”) will be fiber optics. Once a fiber-optic wire is installed, then potentially you have available almost all the information you could possibly want, including TV and radio, and possibly newspaper articles selected according to your interest profile (you pay the printing bill which occurs in your own house). There would be no need for separate information channels most of the time. At your end of the fiber there are one or more digital filters. Which channel you want, the phone, radio, or TV, can be selected by you much as you do now, and the channel is determined by the numbers put into the digital filter-thus the same filter can be multipurpose, if you wish. You will need one filter for each channel you wish to use at the same time (though it is possible a single time-sharing filter would be available) and each filter would be of the same standard design. Alternately, the filters may come with the particular equipment you buy.
– p.284-285
Here Hamming is predicting the internet. He got very close, and it’s interesting to think that these signals would all just be piped to your house in a bundle and you pay for a filter to unlock access to the ones you want. Hey, Cable TV worked that way for a long time!
But a lot of evidence on what enabled people to make big contributions points to the conclusion that a famous prof was a terrible lecturer and the students had to work hard to learn it for themselves! I again suggest a rule:
What you learn from others you can use to follow;
What you learn for yourself you can use to lead.
– p.292
Learn by doing, not by following.
What you did to become successful is likely to be counterproductive when applied at a later date.
– p.342
It’s easy to blame changing trends in software development for the disgustingly short half-life of knowledge regarding development patterns and tools, but I think it’s probably just the nature of knowledge-based work. Operating by yourself may be effective and work well, but it’s not a recipe for success at any given moment in time.
A man was examining the construction of a cathedral. He asked a stonemason what he was doing chipping the stones, and the mason replied, “I am making stones.” He asked a stone carver what he was doing; “I am carving a gargoyle.” And so it went; each person said in detail what they were doing. Finally he came to an old woman who was sweeping the ground. She said, “I am helping build a cathedral.” If, on the average campus, you asked a sample of professors what they were going to do in the next class hour, you would hear they were going to “teach partial fractions,” “show how to find the moments of a normal distribution,” “explain Young’s modulus and how to measure it,” etc. I doubt you would often hear a professor say, “I am going to educate the students and prepare them for their future careers.” This myopic view is the chief characteristic of a bureaucrat. To rise to the top you should have the larger view—at least when you get there.
– p.360
Software bureaucrats aplenty. Really easy to fall into this role.
I must come to the topic of “selling” new ideas. You must master three things to do this (Chapter 5):
- Giving formal presentations,
- Producing written reports, and
- Mastering the art of informal presentations as they happen to occur.
All three are essential—you must learn to sell your ideas, not by propaganda, but by force of clear presentation. I am sorry to have to point this out; many scientists and others think good ideas will win out automatically and need not be carefully presented. They are wrong;
– p.396
One thing I regret over the last 10 years of my career is not writing down more of the insights I have learned through experience. Ideas simply don’t transmit if they aren’t written down or put into some consumable format like video or audio. Nearly every annoying tool or developer trend you are forced to use is in play because its idea was communicated through blogs, videos and conference talks, and those who watched echoed the message.
An expert is one who knows everything about nothing; a generalist knows nothing about everything.
In an argument between a specialist and a generalist, the expert usually wins by simply (1) using unintelligible jargon, and (2) citing their specialist results, which are often completely irrelevant to the discussion. The expert is, therefore, a potent factor to be reckoned with in our society. Since experts both are necessary and also at times do great harm in blocking significant progress, they need to be examined closely. All too often the expert misunderstands the problem at hand, but the generalist cannot carry through their side to completion. The person who thinks they understand the problem and does not is usually more of a curse (blockage) than the person who knows they do not understand the problem.
– p.333
Understand when you are a generalist and when you are a specialist.
Experts, in looking at something new, always bring their expertise with them, as well as their particular way of looking at things. Whatever does not fit into their frame of reference is dismissed, not seen, or forced to fit into their beliefs. Thus really new ideas seldom arise from the experts in the field. You cannot blame them too much, since it is more economical to try the old, successful ways before trying to find new ways of looking and thinking.
If an expert says something can be done he is probably correct, but if he says it is impossible then consider getting another opinion.
– p.336
Anyone wading into a technical field will encounter experts at every turn. They have valuable information, but they are also going to give you dated, myopic advice (gatekeeping?). I like Hamming’s framing here and it reflects my experience when weighing expert opinion.
In some respects the expert is the curse of our society, with their assurance they know everything, and without the decent humility to consider they might be wrong. Where the question looms so important, I suggested to you long ago to use in an argument, “What would you accept as evidence you are wrong?” Ask yourself regularly, “Why do I believe whatever I do?” Especially in the areas where you are so sure you know, the area of the paradigms of your field.
– p.340
I love this exercise. It will also drive you crazy. Tread carefully.
Systems engineering is indeed a fascinating profession, but one which is hard to practice. There is a great need for real systems engineers, as well as perhaps a greater need to get rid of those who merely talk a good story but cannot play the game effectively.
– p.372
Controversial, harsh, but true.
The last thing I want to recognize is the beautiful cloth resin binding and quality printing of the book. Bravo Stripe Press for still producing beautiful artifacts at affordable pricing in the age of print on demand.
async-neocities v3.0.0 is now available and introduces a CLI.
Usage: async-neocities [options]
Example: async-neocities --src public
--help, -h print help text
--src, -s The directory to deploy to neocities (default: "public")
--cleanup, -c Destructively clean up orphaned files on neocities
--protect, -p String to minimatch files which will never be cleaned up
--status Print auth status of current working directory
--print-key Print api-key status of current working directory
--clear-key Remove the currently associated API key
--force-auth Force re-authorization of current working directory
async-neocities (v3.0.0)
When you run it, you will see something similar to this:
> async-neocities --src public
Found siteName in config: bret
API Key found for bret
Starting inspecting stage...
Finished inspecting stage.
Starting diffing stage...
Finished diffing stage.
Skipping applying stage.
Deployed to Neocities in 743ms:
Uploaded 0 files
Orphaned 0 files
Skipped 244 files
0 protected files
async-neocities was previously available as a GitHub Action called deploy-to-neocities. This Action API remains available, however the CLI offers a local-first workflow that was not previously offered.
Now that async-neocities is available as a CLI, you can easily configure it as an npm script and run it locally when you want to push changes to neocities without relying on GitHub Actions. It also works great in Actions, with the side benefit of deploys working exactly the same way in both local and remote environments.
Here is a quick example of that:
1. Add async-neocities@^3.0.0 to your project’s package.json.
2. Add a package.json deploy script:
"scripts": {
"build": "npm run clean && run-p build:*",
"build:tb": "top-bun",
"clean": "rm -rf public && mkdir -p public",
"deploy": "run-s build deploy:*",
"deploy:async-neocities": "async-neocities --src public --cleanup"
},
3. Create a deploy-to-neocities.json config file. Example config contents: {"siteName":"bret"}
4. Run npm run deploy.
5. Optionally, set up a GitHub Actions workflow that runs npm run deploy and configure the token secret:
name: Deploy to neocities
on:
push:
branches:
- master
env:
node-version: 21
FORCE_COLOR: 2
concurrency: # prevent concurrent deploys doing strange things
group: deploy-to-neocities
cancel-in-progress: true
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Create LFS file list
run: git lfs ls-files -l | cut -d' ' -f1 | sort > .lfs-assets-id
- name: Restore LFS cache
uses: actions/cache@v3
id: lfs-cache
with:
path: .git/lfs
key: ${{ runner.os }}-lfs-${{ hashFiles('.lfs-assets-id') }}-v1
- name: Git LFS Pull
run: git lfs pull
- name: Use Node.js
uses: actions/setup-node@v4
with:
node-version: ${{env.node-version}}
- run: npm i
- run: npm run deploy
env:
NEOCITIES_API_TOKEN: ${{ secrets.NEOCITIES_API_TOKEN }}
The async-neocities CLI re-uses the same ENV name as the deploy-to-neocities action, so migrating to the CLI requires no additional changes to the Actions environment secrets.
This prompts some questions regarding when CLIs and when Actions are most appropriate.
In addition to the CLI, async-neocities migrates to full Node.js esm and internally enables ts-in-js, though the types were far too dynamic to export full type support in the time I had available.
With respect to an implementation plan going forward regarding CLIs vs actions, I’ve summarized my thoughts below:
Implement core functionality as a re-usable library. Exposing a CLI makes that library an interactive tool that provides a local-first workflow and is equally useful in CI. Exposing the library in an action further opens it up to a wider language ecosystem that would otherwise ignore it due to foreign-ecosystem ergonomic overhead. The action is simpler to implement than a CLI, but the CLI offers a superior experience within the implemented language ecosystem.
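As a rough sketch of that plan (the deployDirectory name and its options here are hypothetical stand-ins, not the real async-neocities API), the library owns the logic and the CLI is just a thin flag-parsing shell over it:

```js
// lib.js: the re-usable core; everything interesting lives here.
// deployDirectory and its options are hypothetical, not the real async-neocities API.
export async function deployDirectory ({ src = 'public', cleanup = false, apiKey }) {
  // inspect local files, diff them against the remote site, apply the changes...
  return { uploaded: 0, orphaned: 0, skipped: 0 }
}
```

```js
// cli.js: a thin wrapper so the same code path runs locally and in CI.
import { parseArgs } from 'node:util'
import { deployDirectory } from './lib.js'

const { values } = parseArgs({
  options: {
    src: { type: 'string', short: 's', default: 'public' },
    cleanup: { type: 'boolean', short: 'c', default: false }
  }
})

const stats = await deployDirectory({ ...values, apiKey: process.env.NEOCITIES_API_TOKEN })
console.log(stats)
```

An Action wrapper can then shell out to the same CLI, keeping local and remote deploys identical.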
Behold, a mildly redesigned and reorganized landing page:
It’s still not great, but it should make it easier to keep it up to date going forward.
It has 3 sections:
I removed a bunch of older inactive projects and links and stashed them in a project.
Additionally, the edit button in the page footer now takes you to the correct page in GitHub for editing, so if you ever see a typo, feel free to send in a fix!
Finally, the about page includes a live dump of the dependencies that were used to build the website.
After some unexpected weekends of downtime looking after sick toddlers, I’m happy to re-introduce top-bun v7.
Re-introduce? Well, you may remember @siteup/cli, a spiritual offshoot of sitedown, the static site generator that turned a directory of markdown into a website.
What’s in top-bun v7? Let’s dive into the new features, changes and additions in top-bun 7.
@siteup/cli is now top-bun.
As noted above, @siteup/cli was a name hack because I didn’t snag the bare npm name when it was available, and someone else had the genius idea of taking the same name. Hey, it happens.
I described the project to Chat-GPT and it recommended the following gems:
- quick-brick
- web-erect
OK Chat-GPT, pretty good, I laughed, but I’m not naming this web-erect.
The kids have a recent obsession with Wallace & Gromit and we watched a lot of episodes while she was sick. Also I’ve really been enjoying 🥖 bread themes so I decided to name it after Wallace & Gromit’s bakery “Top Bun” in their hit movie “A Matter of Loaf and Death”.
top-bun now builds its own repo into a docs website. It’s slightly better than the GitHub README.md view, so go check it out! It even has a real domain name so you know it’s for real.
css bundling is now handled by esbuild
esbuild is an amazing tool. postcss is a useful tool, but it’s slow and hard to keep up with. In top-bun, css bundling is now handled by esbuild.
css bundling is now faster and less fragile, and still supports many of the same transforms that siteup had before. CSS nesting is now supported in every modern browser, so we don’t even need a transform for that. Some basic transforms and prefixes are auto-applied by setting a relatively modern browser target.
esbuild doesn’t support import chunking on css yet though, so each css entrypoint becomes its own bundle. If esbuild ever gets this optimization, so will top-bun. In the meantime, global.css, style.css and now layout.style.css give you ample room to optimize your scoped css loading by hand. It’s simpler and has fewer moving parts!
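For reference, the esbuild JavaScript API makes this kind of css bundling a small config. This is only an illustrative sketch of bundling a couple of css entrypoints against a modern browser target, not top-bun’s actual internal invocation (the paths and target values are made up):

```js
import { build } from 'esbuild'

// Bundle each css entrypoint into its own hashed output file.
await build({
  entryPoints: ['src/globals/global.css', 'src/style.css'], // illustrative paths
  bundle: true, // inline @import chains into each entrypoint's bundle
  outdir: 'public',
  entryNames: '[dir]/[name]-[hash]',
  target: ['chrome109', 'firefox109', 'safari16'], // a relatively modern browser target
  minify: true
})
```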
You can now have more than one layout!
In prior releases, you could only have a single root layout that you customized on a per-page basis with variables.
Now you can have as many layouts as you need.
They can even nest.
Check out this example of a nested layout from this website. It’s named article.layout.js and imports the root.layout.js. It wraps the children and then passes the results to root.layout.js.
// article.layout.js
import { html } from 'uhtml-isomorphic'
import { sep } from 'node:path'
import { breadcrumb } from '../components/breadcrumb/index.js'
import defaultRootLayout from './root.layout.js'
export default function articleLayout (args) {
const { children, ...rest } = args
const vars = args.vars
const pathSegments = args.page.path.split(sep)
const wrappedChildren = html`
${breadcrumb({ pathSegments })}
<article class="article-layout h-entry" itemscope itemtype="http://schema.org/NewsArticle">
<header class="article-header">
<h1 class="p-name article-title" itemprop="headline">${vars.title}</h1>
<div class="metadata">
<address class="author-info" itemprop="author" itemscope itemtype="http://schema.org/Person">
${vars.authorImgUrl
? html`<img height="40" width="40" src="${vars.authorImgUrl}" alt="${vars.authorImgAlt}" class="u-photo" itemprop="image">`
: null
}
${vars.authorName && vars.authorUrl
? html`
<a href="${vars.authorUrl}" class="p-author h-card" itemprop="url">
<span itemprop="name">${vars.authorName}</span>
</a>`
: null
}
</address>
${vars.publishDate
? html`
<time class="published-date dt-published" itemprop="datePublished" datetime="${vars.publishDate}">
<a href="#" class="u-url">
${(new Date(vars.publishDate)).toLocaleString()}
</a>
</time>`
: null
}
${vars.updatedDate
? html`<time class="dt-updated" itemprop="dateModified" datetime="${vars.updatedDate}">Updated ${(new Date(vars.updatedDate)).toLocaleString()}</time>`
: null
}
</div>
</header>
<section class="e-content" itemprop="articleBody">
${typeof children === 'string'
? html([children])
: children /* Support both uhtml and string children. Optional. */
}
</section>
<!--
<footer>
<p>Footer notes or related info here...</p>
</footer>
-->
</article>
${breadcrumb({ pathSegments })}
`
return defaultRootLayout({ children: wrappedChildren, ...rest })
}
With multi-layout support, it made sense to introduce two more style and js bundle types:
Prior to this, the global.css and global.client.js bundles served this need.
Layouts, global.client.js, etc. used to have to live at the root of the project src directory. This made it simple to find them when building, and eliminated duplicate singleton errors, but the root of websites is already crowded. It was easy enough to find these things anywhere, so now you can organize these special files in any way you like. I’ve been using:
- layouts: A folder full of layouts
- globals: A folder full of the globally scoped files

Given the top-bun variable cascade system, and the fact that not all website files are html, it made sense to include a templating system for generating any kind of file from the global.vars.js variable set. This lets you generate random website “sidefiles” from your site variables.
It works great for generating RSS feeds for websites built with top-bun. Here is the template file that generates the RSS feed for this website:
import pMap from 'p-map'
import jsonfeedToAtom from 'jsonfeed-to-atom'
/**
* @template T
* @typedef {import('@siteup/cli').TemplateAsyncIterator<T>} TemplateAsyncIterator
*/
/** @type {TemplateAsyncIterator<{
* siteName: string,
* siteDescription: string,
* siteUrl: string,
* authorName: string,
* authorUrl: string,
* authorImgUrl: string
* layout: string,
* publishDate: string
* title: string
* }>} */
export default async function * feedsTemplate ({
vars: {
siteName,
siteDescription,
siteUrl,
authorName,
authorUrl,
authorImgUrl
},
pages
}) {
const blogPosts = pages
.filter(page => ['article', 'book-review'].includes(page.vars.layout) && page.vars.published !== false)
.sort((a, b) => new Date(b.vars.publishDate) - new Date(a.vars.publishDate))
.slice(0, 10)
const jsonFeed = {
version: 'https://jsonfeed.org/version/1',
title: siteName,
home_page_url: siteUrl,
feed_url: `${siteUrl}/feed.json`,
description: siteDescription,
author: {
name: authorName,
url: authorUrl,
avatar: authorImgUrl
},
items: await pMap(blogPosts, async (page) => {
return {
date_published: page.vars.publishDate,
title: page.vars.title,
url: `${siteUrl}/${page.pageInfo.path}/`,
id: `${siteUrl}/${page.pageInfo.path}/#${page.vars.publishDate}`,
content_html: await page.renderInnerPage({ pages })
}
}, { concurrency: 4 })
}
yield {
content: JSON.stringify(jsonFeed, null, ' '),
outputName: 'feed.json'
}
const atom = jsonfeedToAtom(jsonFeed)
yield {
content: atom,
outputName: 'feed.xml'
}
yield {
content: atom,
outputName: 'atom.xml'
}
}
Pages, Layouts and Templates can now introspect every other page in the top-bun build.
This makes it easy to implement things like auto-generated index pages and feeds.
Pages, Layouts and Templates receive a pages array that includes PageData instances for every page in the build. Variables are already pre-resolved, so you can easily filter, sort and target various pages in the build.
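For example, a page function can turn that pages array into an index. A minimal sketch, assuming the same PageData shape the feed template above uses (vars, pageInfo.path):

```js
// blog/page.js: sketch of a page that lists every published article in the build
import { html } from 'uhtml-isomorphic'

export default async function blogIndex ({ pages }) {
  const posts = pages
    .filter(page => page.vars.layout === 'article' && page.vars.published !== false)
    .sort((a, b) => new Date(b.vars.publishDate) - new Date(a.vars.publishDate))

  return html`<ul>
    ${posts.map(post => html`<li><a href="/${post.pageInfo.path}/">${post.vars.title}</a></li>`)}
  </ul>`
}
```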
top-bun now has full type support. It’s achieved with types-in-js and it took a ton of time and effort.
The results are nice, but I’m not sure the juice was worth the squeeze. top-bun was working really well before types. Adding types required solidifying a lot of trivial details to make the type-checker happy. I don’t even think a single runtime bug was solved. It did help clarify some of the more complex types that had developed over the first 2 years of development though.
The biggest improvement provided here is that the following types are now exported from top-bun:
- LayoutFunction<T>
- PostVarsFunction<T>
- PageFunction<T>
- TemplateFunction<T>
- TemplateAsyncIterator<T>
You can use these to get some helpful auto-complete in LSP supported editors.
types-in-js, not Typescript
This was the first major dive I did into a project with types-in-js support.
My overall conclusions are:
- types-in-js provides a superior development experience to developing in .ts by eliminating the development-loop build step.
- types-in-js and JSDoc are in conflict with each other. types-in-js should win; it’s better than JSDoc in almost every way (but you still use both).
- With types-in-js, find something that consumes the generated types instead of consuming the JSDoc blocks.
- Use @ts-ignore liberally. Take a pass or two to remove some later.

md and html
Previously, only js pages had access to the variable cascade inside of the page itself. Now html and md pages can access these variables with handlebars placeholders.
## My markdown page
Hey this is a markdown page for {{ vars.siteName }} that uses handlebars templates.
Previously, if you opted for the default layout, it would import mine.css from unpkg. This worked, but went against the design goal of making top-bun sites as reliable as possible (shipping all final assets to the dest folder).
Now when you build with the default layout, the default stylesheet (and theme picker js code) is built out into your dest folder.
browser-sync
@siteup/cli previously didn’t ship with a development server, meaning you had to run one in parallel when developing. This step is eliminated now that top-bun ships browser-sync. browser-sync is one of the best Node.js development servers out there and offers a bunch of really helpful dev tools built right in, including scroll position sync, so testing across devices is actually enjoyable.
If you aren’t familiar with browser-sync, here are some screenshots of some of its fun features:
top-bun eject
top-bun now includes an --eject flag that will write out the default layout, style, client and dependencies into your src folder and update package.json. This lets you easily get started with customizing default layouts and styles when you decide you need more control.
√ default-layout % npx top-bun --eject
top-bun eject actions:
- Write src/layouts/root.layout.mjs
- Write src/globals/global.css
- Write src/globals/global.client.mjs
- Add mine.css@^9.0.1 to package.json
- Add uhtml-isomorphic@^2.0.0 to package.json
- Add highlight.js@^11.9.0 to package.json
Continue? (Y/n) y
Done ejecting files!
The default layout is always supported, and it’s of course safe to rely on that.
The logging has been improved quite a bit. Here is an example log output from building this blog:
> top-bun --watch
Initial JS, CSS and Page Build Complete
bret.io/src => bret.io/public
├─┬ projects
│ ├─┬ websockets
│ │ └── README.md: projects/websockets/index.html
│ ├─┬ tron-legacy-2021
│ │ └── README.md: projects/tron-legacy-2021/index.html
│ ├─┬ package-automation
│ │ └── README.md: projects/package-automation/index.html
│ └── page.js: projects/index.html
├─┬ jobs
│ ├─┬ netlify
│ │ └── README.md: jobs/netlify/index.html
│ ├─┬ littlstar
│ │ └── README.md: jobs/littlstar/index.html
│ ├── page.js: jobs/index.html
│ ├── zhealth.md: jobs/zhealth.html
│ ├── psu.md: jobs/psu.html
│ └── landrover.md: jobs/landrover.html
├─┬ cv
│ ├── README.md: cv/index.html
│ └── style.css: cv/style-IDZIRKYR.css
├─┬ blog
│ ├─┬ 2023
│ │ ├─┬ reintroducing-top-bun
│ │ │ ├── README.md: blog/2023/reintroducing-top-bun/index.html
│ │ │ └── style.css: blog/2023/reintroducing-top-bun/style-E2RTO5OB.css
│ │ ├─┬ hello-world-again
│ │ │ └── README.md: blog/2023/hello-world-again/index.html
│ │ └── page.js: blog/2023/index.html
│ ├── page.js: blog/index.html
│ └── style.css: blog/style-NDOJ4YGB.css
├─┬ layouts
│ ├── root.layout.js: root
│ ├── blog-index.layout.js: blog-index
│ ├── blog-index.layout.css: layouts/blog-index.layout-PSZNH2YW.css
│ ├── blog-auto-index.layout.js: blog-auto-index
│ ├── blog-auto-index.layout.css: layouts/blog-auto-index.layout-2BVSCYSS.css
│ ├── article.layout.js: article
│ └── article.layout.css: layouts/article.layout-MI62V7ZK.css
├── globalStyle: globals/global-OO6KZ4MS.css
├── globalClient: globals/global.client-HTTIO47Y.js
├── globalVars: global.vars.js
├── README.md: index.html
├── style.css: style-E5WP7SNI.css
├── booklist.md: booklist.html
├── about.md: about.html
├── manifest.json.template.js: manifest.json
├── feeds.template.js: feed.json
├── feeds.template.js-1: feed.xml
└── feeds.template.js-2: atom.xml
[Browsersync] Access URLs:
--------------------------------------
Local: http://localhost:3000
External: http://192.168.0.187:3000
--------------------------------------
UI: http://localhost:3001
UI External: http://localhost:3001
--------------------------------------
[Browsersync] Serving files from: /Users/bret/Developer/bret.io/public
Copy watcher ready
mjs and cjs file extensions
You can now name your page, template, vars, and layout files with the mjs or cjs file extensions. Sometimes this is a necessary evil. In general, set your type in your package.json correctly and stick with .js.
The current plan is to keep sitting on top-bun’s current feature set for a while. But I have some ideas:
- uhtml?
- top-bun is already one of the best environments for implementing sites that use web-components. Page bundles are a perfect place to register components!

If you try out top-bun, I would love to hear about your experience. Do you like it? Do you hate it? Open a discussion item or reach out privately.
OK, now time for the story behind top-bun, aka @siteup/cli.
I ran some experiments with orthogonal tool composition a few years ago. I realized I could build sophisticated module based websites by composing various tools together by simply running them in parallel.
What does this idea look like? See this snippet of a package.json:
{ "scripts": {
"build": "npm run clean && run-p build:*",
"build:css": "postcss src/index.css -o public/bundle.css",
"build:md": "sitedown src -b public -l src/layout.html",
"build:feed": "generate-feed src/log --dest public && cp public/feed.xml public/atom.xml",
"build:static": "cpx 'src/**/*.{png,svg,jpg,jpeg,pdf,mp4,mp3,js,json,gif}' public",
"build:icon": "gravatar-favicons --config favicon-config.js",
"watch": "npm run clean && run-p watch:* build:static",
"watch:css": "run-s 'build:css -- --watch'",
"watch:serve": "browser-sync start --server 'public' --files 'public'",
"watch:md": "npm run build:md -- -w",
"watch:feed": "run-s build:feed",
"watch:static": "npm run build:static -- --watch",
"watch:icon": "run-s build:icon",
"clean": "rimraf public && mkdirp public",
"start": "npm run watch"
}
}
postcss
building css bundles, enabling an @import
based workflow for css, as well as providing various transforms I found useful.sitedown
.generate-feed
gravatar-favicons
.js
bundling could easily be added in here with esbuild
or rollup.build
and watch
prefixes, and managed with npm-run-all2 which provides shortcuts to running these tasks in parallel.I successfully implemented this pattern across 4-5 different websites I manged. It work beautifully. Some of the things I liked about it:
But it had a few drawbacks:
So I decided to roll up all of the common patterns into a single tool that included the discoveries of this process.
@siteup/cli
extended sitedown
Because it was clear sitedown
provided the core structure of this pattern (making the website part), I extended the idea in the project @siteup/cli
. Here are some of the initial features that project shipped with:
<head>
tag easily from the frontmatter section of markdown pages.js
layout: This let you write a simple JS program that receives the variable cascade and child content of the page as a function argument, and return the contents of the page after applying any kind of template tool available in JS.html
pages: CommonMark supports html
in markdown, but it also has some funny rules that makes it more picky about how the html
is used. I wanted a way to access the full power of html
without the overhead of markdown, and this page type unlocked that.js
pages: Since I enjoy writing JavaScript, I also wanted a way to support generating pages using any templating system and data source imaginable so I also added support to generate pages from the return value of a JS function.siteup
opted to make it super easy to add a global css
bundle, and page scoped css
bundles, which both supported a postcss
@import
workflow and provided a few common transforms to make working with css
tolerable (nesting, prefixing etc).sitedown
also added this feature subsequently.src
!After sitting on the idea of siteup
for over a year, by the time I published it to npm
, the name was taken, so I used the npm org name hack to get a similar name @siteup/cli
. SILLY NPM!
I enjoyed how @siteup/cli
came out, and have been using it for 2 year now. Thank you of course to ungoldman for laying the foundation of most of these tools and patterns. Onward and upward to top-bun
!
This blog has been super quiet lately, sorry about that. I had some grand plans for finishing up my website tools to more seamlessly support blogging.
The other day around 3AM, I woke up and realized that the tools aren’t stopping me from writing, I am. Also, my silently implemented policy of not writing about future plans and ideas before they are ready was applied far too strictly. It is in fact a good thing to write about in-progress ideas and projects slightly out into the future. This is realistic, interesting, and avoids the juvenile trap of spilling ideas in front of the world only to never realize them. So here I am, writing a blog post again.
Anyway, no promises, but it is my goal to write various ahem opinions, thoughts and ideas more often because nothing ever happens unless you write it down.
Here are some updates from the last couple years that didn’t make it onto this site before.
I updated the Sublime Text Tron Color Scheme today after a few weeks of reworking it for the recent release of Sublime Text 4.
The 2.0.0 release converts the older .tmTheme format into the Sublime-specific theme format.
Overall, the new Sublime theme format (.sublime-color-scheme) is a big improvement, largely due to its simple JSON structure and its variables support.
JSON is, despite the common arguments against it, super readable and easily written by humans. The variable support makes the process of making a theme a whole lot more automatic, since you no longer have to find and replace colors all over the place.
The biggest problem I ran into was poor in-line color highlighting when working with colors, so I ended up using a VSCode plugin called Color Highlight in a separate window.
Sublime has a great plugin, also called Color Highlight, that usually works well, but not in this case.
The Sublime Color Highlight variant actually makes temporary modifications to color schemes, which seriously gets in the way when working on color scheme files.
The rewrite is based off of the new Mariana theme that ships with ST4, so the theme should have support for most of the latest features in ST4, though there are surely features that even Mariana missed. Let me know if you know of any.
Here are a few other points of consideration made during the rewrite:
- The color scheme is now called Tron Legacy 4 (Dark).
Here are a few more relevant links, and please let me know what you think if you try it out.
Sublime Tron Legacy color scheme fully updated for @sublimehq Text 4. Full syntax support, lots of other small improvements. Also it supports 'glow' text✌️ pic.twitter.com/vShbGThgDF
— 🌌🌵🛸Bret🏜👨👩👧🚙 (@bcomnes) August 5, 2021
After a short sabbatical in Denmark at Hyperdivision, I joined Littlstar. Here is a quick overview of some of the more interesting projects I worked on. I joined Littlstar during a transitional period of the company and helped them transition from a VR video platform to a video on demand and live streaming platform, and now an NFT sales and auction platform.
My first project was picking up on an agency-style project developing a novel VR training platform for NYCPF, powered by a custom hypercore p2p file sharing network, delivering in-house developed Unity VR scenarios. These scenarios could then be brought to various locations like schools or events where NYCPF could guide participants through various law enforcement scenarios with different outcomes based on participant choices within the simulation.
By utilizing a p2p and offline-first design approach, we were able to deliver an incredibly flexible and robust delivery platform with all sorts of features that are difficult to build on traditional file distribution platforms, such as:
While the project was built on the backs of giants like the Hypercore protocol, as well as the amazing work of my colleague, I contributed in a number of areas to move the project forward during my time contracting on it.
Some of the discrete software packages that resulted from this project are described below.
secure-rpc-protocol
Secure rpc-protocol over any duplex socket using noise-protocol. This was a refactor of an existing RPC-over-websocket solution the project was using. It improved upon the previous secure RPC by switching to the Noise protocol, which implements well-understood handshake patterns that can be shared and audited between projects, rather than relying on a novel implementation at the RPC layer. It also decoupled the RPC protocol from the underlying socket being used, so that the RPC system could be used over any other channels we might want in the future.
async-folder-walker
An async generator that walks files. This project was a refactor of an existing project called folder-walker implementing a high performance folder walk algorithm using a more modern async generator API.
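The async generator approach looks roughly like this; a generic sketch built on node:fs/promises rather than the package’s exact API:

```js
import { opendir } from 'node:fs/promises'
import { join } from 'node:path'

// Recursively yield file paths as they are discovered,
// instead of buffering the whole tree into an array first.
async function * walk (dir) {
  for await (const entry of await opendir(dir)) {
    const entryPath = join(dir, entry.name)
    if (entry.isDirectory()) {
      yield * walk(entryPath)
    } else {
      yield entryPath
    }
  }
}

for await (const file of walk('.')) {
  console.log(file)
}
```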
unpacker-with-progress
unpacker-with-progress
is a specialized package that unpacks archives, and provides a progress api in order to provide UI feedback.
One of the deficiencies with the NYCPF project when I started was lack of UI feedback during the extraction process.
VR files are very large, and are delivered compressed to the clients. After the download is complete, the next step to processing the data is unpacking the content. This step did not provide any sort of progress feedback to the user because the underlying unpacking libraries did not expose this information, or only exposed some of the information needed to display a progress bar.
This library implemented support for unpacking the various archive formats the project required, and also added an API providing uniform unpacking progress info that could be used in the UI during unpacking tasks.
hchacha20
One of the more interesting side projects I worked on was porting over some of the libsodium primitives to sodium-javascript. I utilized a technique I learned about at Hyperdivision where one can write WebAssembly by hand in the WAT format, which provides a much wider set of data types and the type guarantees needed to write effective crypto.
While the WAT was written for HChaCha20, the effort was quite laborious and it kicked off a debate as to whether it would be better to just wrap libsodium-js (the official libsodium js port) in a wrapper that provided the sodium-universal API. This was achieved by another group in geut/sodium-javascript-plus which successfully ran hypercores in the browser using that wrapper.
Ultimately, this effort was scrapped, determining that noise peer connections in the browser are redundant to webRTC encryption and https sockets. It was a fun and interesting project nonetheless.
We were having some state transition bugs between the webapp and the local server process, where the app could get into strange indeterminate states. Both had independent reconnect logic wrapped up with application code, and it added a lot of chaos to understanding how each process was behaving when things went wrong (especially around sleep cycles on machines).
I implemented a generic reconnecting state machine that could accept any type of socket, and we were able to reduce the number of state transition bugs we had.
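A minimal sketch of that idea, a wrapper that owns the retry and backoff state so application code only reacts to connect and disconnect events (hypothetical, not the actual implementation):

```js
import { EventEmitter } from 'node:events'

// Hypothetical reconnecting wrapper: accepts any factory that returns a socket-like
// object emitting 'open' and 'close', and centralizes the retry/backoff logic.
class Reconnector extends EventEmitter {
  constructor (createSocket, { baseDelay = 500, maxDelay = 30000 } = {}) {
    super()
    this.createSocket = createSocket
    this.baseDelay = baseDelay
    this.maxDelay = maxDelay
    this.attempts = 0
  }

  connect () {
    const socket = this.createSocket()
    socket.on('open', () => {
      this.attempts = 0
      this.emit('connected', socket)
    })
    socket.on('close', () => {
      this.emit('disconnected')
      const delay = Math.min(this.baseDelay * 2 ** this.attempts++, this.maxDelay)
      setTimeout(() => this.connect(), delay)
    })
  }
}
```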
After working on various agency projects at Littlstar, we formed a separate organization to start a fresh rewrite of the technology stack.
My high level contributions:
This was an amazing founders-style opportunity to help rethink and re-implement years of work that had developed at Littlstar prior to joining. Effectively starting from 0, we rethought the entire technology pipeline, from operations, to infrastructure, to deployment, resulting in something really nice, modern, minimal, low maintenance and malleable.
As a culmination of ingesting the “Terraform: Up & Running” and “AWS Certified Solutions Architect” books, as well as building off existing organizational experience with AWS, I helped research and design an operations plan using Terraform and GitHub Actions.
This arrangement has proven powerful and flexible. While it isn’t perfect, it has been effective and reliable and cheap, despite its imperfections in relation to some of the more esoteric edge cases of Terraform.
A quick overview of how it’s arranged:
- The ops global repo runs terraform in a bootstrapped GitHub actions environment.
- Service repos are referenced from the ops terraform repo, and in turn contain their own Terraform files specific to that service.
It was nice to learn about the various types of custom GitHub actions one can write, as well as expand that knowledge to the rest of the org, but it also ate up a number of days focusing on DevOps problems specific to our CI environment.
Here are some of the problems I helped solve in the actions environment.
I helped lay the framework for the initial version of sdk-js, the Little Core Labs unified library used to talk to the various back-end services at Little Core Labs.
One of the underlying design goals was to solve for the newly introduced native ESM features in Node.js, in such a way that the package could be consumed directly in the browser, natively as ESM in node, but also work in dual CJS/ESM environments like Next.js. While this did add some extra overhead to the project, it serves as a design pattern we can pull from in the future, as well as provide a highly compatible but modern API client.
I also extracted out this dual package pattern into a reusable template.
I was the principal engineer on the new Rad.live website. I established and implemented the tech stack, aiming for a relatively conservative take on a code base that would maximize readability and clarity for a dev team that would eventually grow in size.
At a high level, the application is simply an app written with:
From a development perspective, it was important to have testing and automation from the beginning. For this we used:
Overall, the codebase has had two other seasoned developers (one familiar and one new to React) jump in and find it productive, based on individual testimony. Gathering feedback from those that I work with on technical decisions that I make is an important feedback loop I always try to incorporate when green-fielding projects. Additionally, it has been a relatively malleable code base that is easy to add MVP features to and is in a great position to grow.
vision.css
I implemented a custom design system in tandem with our design team. This project has worked out well, and has so far avoided ‘css lockout’, where only one developer can effectively make dramatic changes to an app layout due to an undefined and overly general orthogonal ‘global style sheet’.
The way this was achieved was by focusing on a simple global CSS style sheet that implements the base HTML elements in accordance with the design system created by the design team. While this does result in a few element variants that are based on a global style class name, they remain in the theme of only styling ‘built in’ html elements, so there is little question what might be found in the global style sheet, and what needs to be a scoped css style.
Some of the features we used for vision.css
eslint-config-12core
Linting is the “spell check” of code, but it’s hard to agree on what rules to follow. Like most things, having a standard set of rules that is good enough is always better than no rules, and usually better than an unpredictable and unmaintainable collection of project-unique rules.
I put together a shared ESLint config based on the ‘StandardJS’ ruleset that has been flexible enough for most projects so far at Little Core Labs, while remaining easy to adapt to unique org requirements. Additionally, I’ve implemented it across many of the projects in the Github Org.
gqlr
gqlr is a simplified fork of graphql-request.
This relatively simple wrapper around the JS fetch API has a gaggle of upstream maintainers with various needs that don’t really match our needs.
The fork simplified and reduced code redundancy, improved maintainability through automation, fixed bugs and weird edge cases, and dramatically improved errors and error handling at the correct level of the tech stack. These changes would likely not have been accepted upstream, so by forking we are able to get the value out of open source resources while still being able to finely tune them for our needs, as well as offer those changes back to the world.
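The core of such a client is small. Here is an illustrative sketch of a fetch-based GraphQL request helper with error surfacing; it is not gqlr’s actual API:

```js
// Hypothetical minimal GraphQL client: POST a query, throw a useful error on failure.
export async function gqlRequest (url, query, variables = {}, { headers = {} } = {}) {
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'content-type': 'application/json', ...headers },
    body: JSON.stringify({ query, variables })
  })

  const body = await res.json()
  if (!res.ok || body.errors) {
    // Surface both the HTTP status and any GraphQL errors in one place.
    throw new Error(`GraphQL request failed (${res.status}): ${JSON.stringify(body.errors ?? body)}`)
  }
  return body.data
}
```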
local-storage-proxy
A configuration solution that allows for persistent overrides stored in local storage, including cache-busting capabilities. It is implemented with a recursive JS proxy to simulate native object interactions over a window.localStorage interface.
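The proxy trick reads roughly like this. A simplified, hypothetical sketch of the idea (persist on set, re-read on get); the real package also recursively proxies nested objects and handles cache busting:

```js
// Simplified sketch: wrap a defaults object in a Proxy that persists writes to
// window.localStorage and serves stored overrides on reads.
function persistentConfig (key, defaults) {
  const load = () => ({ ...defaults, ...JSON.parse(window.localStorage.getItem(key) ?? '{}') })
  return new Proxy({}, {
    get (_target, prop) {
      return load()[prop]
    },
    set (_target, prop, value) {
      window.localStorage.setItem(key, JSON.stringify({ ...load(), [prop]: value }))
      return true
    }
  })
}

// Usage: reads fall back to defaults, writes stick across page reloads.
const config = persistentConfig('app-config', { apiUrl: 'https://api.example.com' })
config.apiUrl = 'http://localhost:8080'
```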
We ended up taking on maintenance of a few other packages, providing fixes and improvements where the original authors seem to have left off.
Here are some snapshots of the video platform we launched.
Here are some screenshots of the NFT auction platform I helped build. The UI was fully responsive and updated on the fly to new results, thanks to the powers of SWR.
I did a few marketing pages as well.
While this isn’t everything I did at Littlstar, it captures many of the projects I enjoyed working on, and can hopefully provide some insights into my skills, interests and experiences from the past year.
tldr; The full package maintenance life cycle should be automated and can be broken down into the following levels of automation sophistication:
- Level 0: git + Github
- Level 1: tests + CI
- Level 2: dependency bots
- Level 3: automated changelogs and releases
These solutions focus on Node.js + npm packages, automated on Github Actions, but the underlying principles are general to any language or automation platform.
Maintaining lots of software packages is burdensome.
Scaling open source package maintenance beyond a single contributor who understands the release life cycle is challenging.
Long CONTRIBUTING.md files are often the go-to solution, but are easily overlooked.
In the end, automating the package life cycle so that it can maintain itself is the only way to realistically scale a large set of packages in a maintainable way.
For a long time I didn’t seek out automation solutions for package maintenance beyond a few simple solutions like testing and CI. Instead I had a lengthy ritual that looked approximately like this:
# 🔮
git checkout -b my-cool-branch
# do some work
# update tests
# update docs
npm run test
git commit -am 'Describe the changes'
git push -u
hub browse
# do the PR process
# merge the PR
git checkout master
git pull
git branch --delete my-cool-branch
# hand edit changelog
git add CHANGELOG.md
git commit -m 'CHANGELOG'
npm version {major,minor,patch}
git push && git push --follow-tags
npx gh-release
npm publish
# 😅
It was a ritual, a muscle memory.
Over the years, I’ve managed to automate away a large amount of raw labor to various bots, tools and platforms that tend to build on one another and are often usable in isolation or adopted one at a time. I’ve broken various tools and opportunities for automation into levels, with each level building on the complexity contained in the level below.
Level 0: git and Github
You are already automating your packages to a large extent by your use of git. git automates the process of working on code across multiple computers and collaborating on it with other people, and Github is the central platform to coordinate and distribute that code.
If you are new to programming or learning git, it’s helpful to understand you are learning a tool used to automate the process by which you can cooperatively work on code with other people and bots.
This isn’t an article about git though, so I won’t dive more into that.
There is no debate. Software isn’t “done” until it has tests. The orthodox position is that you shouldn’t be allowed to write code until the tests are written (TDD). No matter the methodology, you are automating a verification process of the package that you would normally have to perform by hand.
These are my preferred test runners for Node.js:
foo.test.js
).Unit tests that you run with a test runner are not the only type of test though. There are lots of other easy tests you can throw at your package testing step that provide a ton of value:
standard
, eslint
etc)
dependency-check
package.json
matches usage in the actual code.package.json
that are no longer used in code or if it finds pacakges in use in the code that are not listed in package.json
which would cause a runtime error.npm run build
part of your test life cycle.Running multiple tests under npm test
can result in a long, difficult to maintain test
script. Install npm-run-all2
as a devDependency
to break each package.json
test
command into its own sub-script, and then use the globbing feature to run them all in parallel (run-p
) or in series (run-s
):
{
"scripts": {
"test": "run-s test:*",
"test:deps": "dependency-check . --no-dev --no-peer",
"test:standard": "standard",
"test:tap": "tap"
}
}
When testing locally, and an individual test is failing, you can bypass the other tests and run just the failing test:
# run just the dep tests
npm run test:deps
npm-run-all2
is a fantastic tool to keep your npm run
scripts manageable.
This builds on the fantastic and 2014 classic Keith Cirkel blog post How to Use npm as a Build Tool.
While it’s obvious that writing automated tests is a form of automation, it’s still very common to see projects not take the actual step of automating the run step of the tests by hooking them up to a CI system that runs the tests on every interaction with the code on Github. Services like TravisCI have been available for FREE for years, and there is literally no valid excuse not to have this set up.
Although TravisCI has served many projects well over the years, Github Actions is a newer and platform native solution that many projects are now using. Despite the confusing name, Github Actions is primarily a CI service.
Create the following action file in your package repo and push it up to turn on CI.
# .github/workflows/tests.yml
name: tests
on: [push]
jobs:
test:
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-latest]
node: [14]
steps:
- uses: actions/checkout@v2
- name: Use Node.js ${{ matrix.node }}
uses: actions/setup-node@v2
with:
node-version: ${{ matrix.node }}
- run: npm i
- run: npm test
For more information on Action syntax and directives, see:
Once you have a test suite set up, running in CI, any pull request to your package features the results of the “Checks API”. Various tests and integrations will post their results on every change to the pull request in the form of “running”, “pass” or “fail”.
The benefit to the checks status in the pull request UI is, depending on the quality and robustness of your test suite, you can have some amount of confidence that you can safely merge the proposed changes, while still having things work the way you expect, including the newly proposed changes. No matter the reliability of the test suite, it is still important to read and review the code.
Your package has dependencies, be it your test runner or other packages imported or required into your package. They help provide valuable functionality with little upfront cost.
Dependencies form the foundation that your package is built upon. But that foundation is made of shifting sands⏳. Dependencies have their own dependencies, which all have to slowly morph and change with the underlying platform and dependency changes. Like a garden, if you don’t tend to the weeds and periodically water it with dependency updates, the plants will die.
With npm, you normally update dependencies by grabbing the latest copy of the code and checking for outdated packages:
git checkout master
git pull
rm -rf node_modules
npm i
npm outdated
npm outdated will give you a list of your dependencies that have fallen behind their semver range and updates that are available outside of their semver range.
Checking for updates any time you work on the package is not a bad strategy, but it becomes tiresome, and can present a large amount of maintenance work, unrelated to the prompting task at hand, if left to go a long time. A good package doesn’t need to change much, so it may rarely ever be revisited and rot indefinitely.
Enter dependency bots.
A dependency bot monitors your package repositories for dependency updates. When a dependency update is found, it automatically creates a PR with the new version. If you have Level 1 automation set up, this PR will run your tests with the updated version of the dependency. The results will (mostly) inform you if it’s safe to apply the update, and also give you a button to press to apply the change. No typing or console required! 🚽🤳
Level 1 automation isn’t required to use a dependency bot, but you won’t have any way to automatically validate the change, so they are much less useful in that case.
GitHub now has a dependency bot built in called dependabot. To turn it on, create the following file in your package’s GitHub repo:
# .github/dependabot.yml
version: 2
updates:
  # Enable version updates for npm
  - package-ecosystem: "npm"
    # Look for `package.json` and `lock` files in the `root` directory
    directory: "/"
    # Check the npm registry for updates every day (weekdays)
    schedule:
      interval: "daily"
  # Enable updates to github actions
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "daily"
This enables updates for npm and GitHub Actions. Dependabot supports other ecosystems as well; see the dependabot docs for more info.
Before dependabot, there was a now-shut-down service called Greenkeeper.io which provided a very similar service. It offered one particularly interesting feature which I’m still not sure dependabot has yet.
It would run tests every time a dependency in your package was updated, in and out of semver range.
For in range updates that passed, nothing would happen. For in range updates that failed, it would open a PR alerting you that one of your dependencies inadvertently released a breaking change as a non-breaking change. This was a fantastic feature, and really demonstrated the heights that automated tests, CI, an ecosystem that fully utilized semver and dependency bots could achieve together.
Sadly, I haven’t seen other services or languages quite reach these heights of automation sophistication (many ecosystems even lack any kind of major version gating), but perhaps as awareness of these possibilities increases, more people will demand it.
There is a lot more room for innovation in this space. It would be great to get periodic reports regarding the health of downstream dependency chains (e.g. if you are depending on a project that is slowly rotting and not maintaining its deps). As of now, dependabot seems to be recovering from a post-acquisition engineering fallout, but I hope that they can get these kinds of features back into reality sooner rather than later.
There are bots out there that can send in automated code changes sourced from lint tests and other code analysis tools. While this is ‘cool’, these tasks are better served at the testing level. Automated code changes for failing lint tests should really just be part of the human development cycle, handled by whatever IDE or editor the developer uses. Still, the bot layer is open to experimentation, so go forth and experiment all you want, though note that external service integrations usually carry a heavy integration cost. 🤖
Quick recap: we now have automated tests, CI running them on every change, and a dependency bot keeping our dependencies fresh.
That means our package is going to morph and change with time (hopefully not too much though). We need a way to communicate that clearly to downstream dependents, be that us, someone else on the team or a large base of dark-matter developers.
The way to do this is with a CHANGELOG.md file. Or release notes on a GitHub release page. Or ideally both. keepachangelog.com offers a good overview of the format a CHANGELOG.md should follow.
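For a rough illustration, a keepachangelog-style CHANGELOG.md looks something like this (the versions and entries below are made up):
## [2.1.0] - 2020-09-05
### Added
- A new `--verbose` flag.
### Fixed
- Crash when the config file is empty.

## [2.0.0] - 2020-08-01
### Changed
- **Breaking:** dropped support for Node 10.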
This is a tedious process. If you work with other people, they might not be as motivated as you to handcraft an artisan CHANGELOG file. In my experience, the handcrafted, artisan CHANGELOG is too much work and easy to forget about. Also, I haven’t found a good linter tool to enforce its maintenance.
auto-changelog
auto-changelog is a tool that takes your git history and generates a CHANGELOG that is almost-just-as-good as the artisan handcrafted one. Hooking this tool into your package’s version life cycle ensures that it runs when a new version is generated with npm version {major,minor,patch}.
While keepachangelog.com advocates for the handcrafted version, and discourages ‘git commit dumps’, as long as you are halfway conscious of your git commit logs (as you should be), the auto-changelog output is generally still useful.
You can even follow conventionalcommits.org if you want an even more structured git log.
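For example, conventional-commit style messages look something like this (the messages themselves are made up; the BREAKING CHANGE: footer is what the --breaking-pattern flag below keys off of):
git commit -m "fix: handle empty config files"
git commit -m "feat: add a --verbose flag"
git commit -m "feat: drop Node 10 support

BREAKING CHANGE: Node 10 is no longer supported"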
Automating auto-changelog to run during npm version [1] is easy. Install it as a devDependency and set up the following script in package.json:
{
  "scripts": {
    "version": "auto-changelog -p --template keepachangelog --breaking-pattern 'BREAKING CHANGE:' && git add CHANGELOG.md"
  }
}
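If it isn’t installed yet, adding auto-changelog as a devDependency is a single command:
npm install --save-dev auto-changelog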
The version script is an npm lifecycle script that runs after the package.json version is bumped, but before the git commit with the changes is created. Kind of a mouthful, but with nice results.
auto-changelog generates satisfactory changelogs. The consistency it provides exceeds the value a hand-written changelog can provide, given the latter’s inconsistent nature.
Ok, so we have local changes merged in, and we’ve created a new version of the module with an automated changelog generated as part of the npm version commit. Time to get this all pushed out and published! By hand we could do:
git push --follow-tags
# copy contents of changelog
# create a new Github release on the new tag with the changelog contents
npm publish
But that is tedious.
And what happens when your colleague forgets to push the git commit/tag to GitHub and just publishes to npm? Or, more likely, they just forget to create the GitHub release, creating inconsistency in the release process.
The solution is to automate all of this!
Use the prepublishOnly hook to run all of these tasks automatically before publishing to npm via npm publish. Incorporate a tool like gh-release to create a GitHub release page for the new tag with the contents of your freshly minted auto-changelog.
{
  "scripts": {
    "prepublishOnly": "git push --follow-tags && gh-release -y"
  }
}
gh-release makes it easy to create GitHub releases from a CHANGELOG.md.
The result of this is that our release process is reduced to the lowest common denominator of process dictated by npm:
npm version {major,minor,patch}
npm publish
But we still get all of these results, completely automated:
- a CHANGELOG generated by auto-changelog
- a git commit + tag
- a GitHub release created by gh-release, with the new contents of the CHANGELOG
- the package published to npm
Those two run scripts together:
{
  "scripts": {
    "version": "auto-changelog -p --template keepachangelog --breaking-pattern 'BREAKING CHANGE:' && git add CHANGELOG.md",
    "prepublishOnly": "git push --follow-tags && gh-release -y"
  }
}
Some packages have build steps. No problem, these are easily incorporated into the above flow:
{
  "scripts": {
    "build": "do some build command here",
    "prepare": "npm run build",
    "version": "run-s prepare version:*",
    "version:changelog": "auto-changelog -p --template keepachangelog --breaking-pattern 'BREAKING CHANGE:'",
    "version:git": "git add CHANGELOG.md dist",
    "prepublishOnly": "git push --follow-tags && gh-release -y"
  }
}
Since version becomes a bit more complex, we can break it down into pieces with npm-run-all2, as we did in the testing step. We ensure fresh builds run on development install (prepare), and also when we version. We capture any updated build outputs in git during the version step by staging the dist folder (or whatever else you want to capture in your git version commit).
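With those scripts in place, a release is still just two commands; the comments below sketch the npm lifecycle ordering rather than the output of a real run:
npm version minor   # bumps package.json, runs the "version" scripts (build, changelog, git add), then commits and tags
npm publish         # runs "prepublishOnly" (git push --follow-tags && gh-release -y), then publishes to npm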
This pattern was documented well by @swyx: Semi-Automatic npm and GitHub Releases with gh-release and auto-changelog.
We now have a process for fully managing the maintenance and release cycle of our package, but we are still left to pull down any changes from our Github repo and run these release commands, as simple as they are now.
You can’t really do this on your phone (easily), and someone else on the project still might manage to not run npm version and just hand-bump the version number for some reason, bypassing all our wonderful automation.
What would be cool is if we could kick off a special CI job that would run npm version && npm publish for us, at the push of a button.
It turns out GitHub Actions now has a feature called workflow_dispatch, which lets you press a button on the repo’s actions page on GitHub and trigger a CI flow with some input.
workflow_dispatch lets you trigger an action from your browser, with simple textual inputs. Use it as a simple shared deployment environment.
Implementing workflow_dispatch is easy: create a new action workflow file with the following contents:
# .github/workflows/release.yml
name: npm version && npm publish
on:
  workflow_dispatch:
    inputs:
      newversion:
        description: 'npm version {major,minor,patch}'
        required: true
env:
  node_version: 14
jobs:
  version_and_release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          # fetch full history so things like auto-changelog work properly
          fetch-depth: 0
      - name: Use Node.js ${{ env.node_version }}
        uses: actions/setup-node@v2
        with:
          node-version: ${{ env.node_version }}
          # setting a registry enables the NODE_AUTH_TOKEN env variable where we can set an npm token. REQUIRED
          registry-url: 'https://registry.npmjs.org'
      - run: npm i
      - run: npm test
      - run: git config --global user.email "[email protected]"
      - run: git config --global user.name "${{ github.actor }}"
      - run: npm version ${{ github.event.inputs.newversion }}
      - run: npm publish
        env:
          GH_RELEASE_GITHUB_API_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
Next, generate an npm token with publishing rights.
Then set that token as a repo secret called NPM_TOKEN.
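If you prefer the terminal over the web UI, something like the following works (this assumes the GitHub CLI gh is installed; depending on your npm 2FA settings, you may need to create an automation token on the npmjs.com website instead):
# create a publish-capable npm token (prints the token)
npm token create
# store it as a repo secret for GitHub Actions to use (prompts for the value)
gh secret set NPM_TOKEN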
GitHub secrets allow you to securely store tokens for use in your GitHub Actions runs. It’s not bulletproof, but it’s pretty good.
Now you can visit the actions tab on the repo, select the npm version && npm publish action, and press run, passing in either major, minor, or patch as the input. A GitHub action will kick off, running our Level 3 version and release automations along with publishing a release to npm and GitHub.
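You can also trigger the same workflow_dispatch from the command line with the GitHub CLI, if that’s more your speed (the filename matches the workflow file above):
gh workflow run release.yml -f newversion=patch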
Note: It’s recommended that you .gitignore package-lock.json files, otherwise they end up in the library source, where they provide little benefit and lots of drawbacks.
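In other words, something along these lines in your .gitignore:
# .gitignore
node_modules
package-lock.json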
I created a small action called npm-bump which can clean up some of the above action boilerplate:
name: Version and Release
on:
  workflow_dispatch:
    inputs:
      newversion:
        description: 'npm version {major,minor,patch}'
        required: true
env:
  node_version: 14
jobs:
  version_and_release:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          # fetch full history so things like auto-changelog work properly
          fetch-depth: 0
      - name: Use Node.js ${{ env.node_version }}
        uses: actions/setup-node@v2
        with:
          node-version: ${{ env.node_version }}
          # setting a registry enables the NODE_AUTH_TOKEN env variable where we can set an npm token. REQUIRED
          registry-url: 'https://registry.npmjs.org'
      - run: npm i
      - run: npm test
      - name: npm version && npm publish
        uses: bcomnes/npm-bump@v2
        with:
          git_email: [email protected]
          git_username: ${{ github.actor }}
          newversion: ${{ github.event.inputs.newversion }}
          github_token: ${{ secrets.GITHUB_TOKEN }} # built in actions token. Passed to gh-release if in use.
          npm_token: ${{ secrets.NPM_TOKEN }} # user set secret token generated at npm
npm-bump helps cut down on some of the npm version and release GitHub action boilerplate YAML. Is it better? Not sure!
So this is great! You can maintain packages by merging automatically generated pull requests, run your tests on them to ensure package validity, and, when you are ready, fully release the package, with a CHANGELOG entry, all from the push of a button on your cell phone. Fully Automated Luxury Package Space Maintenance. 🛰🚽🤳
What is the best way to manage all of these independent pieces? A template! Or, a template repo.
You mean things like yeoman? Maybe, though that tool is largely used to ‘scaffold’ massive amounts of web framework boilerplate and is a complex ecosystem.
Something simpler will be more constrained and easier to maintain over time. GitHub repo templates and create-project are good choices.
GitHub offers a very simple solution called template repos. You take any repo on GitHub, go into its settings page and designate it as a template repo. You can then create a new repo from the template repo’s page with the click of a button, or select it from a drop-down in the GitHub create-repo wizard.
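With a recent version of the GitHub CLI you can also spawn a repo from a template without leaving the terminal (the repo names here are placeholders):
gh repo create my-new-package --template your-username/your-template-repo --public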
The only issue is that you then have to go through and modify all the repo-specific parameters by hand. Better than nothing! But we can do better.
create-project repos
create-project is a simple CLI tool (by @mafintosh) that works similarly to GitHub template repos, except it has a template-variable system that lets you insert values when spawning off a project repo. You can designate your create-project template repos to also be GitHub template repos, and create new projects whichever way you feel like.
create-project lets you spawn a new project from a git repo and inject values into special template blocks.
Here are some of my personal template repos:
create-project doesn’t need to only manage boilerplate for Node.js projects. Maybe a Go tool would be better suited for this, but it does show the flexibility of these tools.
There are various solutions for generating docs from comments that live close to the code. I haven’t found a solution that satisfies my needs well enough to use one generally. It’s hard to exceed the quality of docs written by hand, for and by humans. If you have a good solution, please let me know!
That was a lot to cover.
If you want to see a complete level 0 through level 5 example, check out my create-template template repo, snapshotted to the latest commit at the time of publishing.
This collection is written in the context of the Node.js programming system; however, the class of tools discussed applies to every other language ecosystem, and these automation levels could serve as a framework for assessing the maturity of automation capabilities in other programming language systems. Hopefully they can provide some insight into the capabilities and common practices around modern JavaScript development for those unfamiliar with this ecosystem.
Additionally, this documents my personal suite of tools and processes that I have developed to automate package maintenance, and is by no means normative. Modification and experimentation are always encouraged.
There are many subtle layers to the Node.js programming system, and this just covers the maintenance automation layer that can exist around a package. Much more could be said about versioned development tooling, standardized scripting hooks, diamond dependency problem solutions, localized dependencies, upstream package hacking/debugging conveniences and local package linking. An even deeper dive could be made into the overlap these patterns do (and don’t) have in other JS runtimes like Deno, which standardizes a lot around Level 1, or even other languages like Go or Rust.
If you enjoyed this article, have suggestions or feedback, or think I’m full of it, follow me on twitter (@bcomnes) and feel free to hop into the accompanying thread. I would love to hear your thoughts, ideas and examples! Also subscribe to my RSS/JSON Feed in your favorite RSS reader.
"Fully Automated Luxury Space Age Package Maintenance"
— 🌌🌵🛸Bret🏜👨👩👧🚙 (@bcomnes) September 29, 2020
I wrote up how tedious package maintenance tasks can be fully automated.
Hope someone enjoys!https://t.co/fvYIu2Wq0r pic.twitter.com/q220LTax8X
I was lucky to be able to contribute to many features and experiences that affected Netlify’s massive user base. Here are some examples of things that I worked on. If this kind of work looks interesting to you, Netlify is a fantastic team to join: netlify.com/careers. Questions and comments can be sent via email or twitter.
After working on the Product team for slightly over a year, I switched to working on Netlify’s platform team. The team has a strong DevOps focus and maintains a redundant, highly available multi-cloud infrastructure on which Netlify and all of its services run. My focus on the team is to maintain, develop, scale and improve various critical services and libraries. Below are some examples of larger projects I worked on.
One of my primary responsibilities upon joining the team has been maintaining the Buildbot that all customer site builds run on. It is partially open source, so customers can explore the container in a more freeform manner locally.
One of the first feature additions I launched for the Buildbot was selectable build images. This project required adding the concept of additional build images to the API and UI, and developing an upgrade path that allowed users to migrate their websites to the new build image while also allowing them to roll back to the old image if they needed more time to accommodate the update.
Additionally, I performed intake on a number of user-contributed build-image additions and merged various other potentially breaking changes within a development window before releasing. I also helped develop the changes to the Ruby on Rails API and the additions to the React UI, as well as write the user documentation. It was a widely cross-cutting project.
I help maintain and further develop Netlify’s OpenAPI (aka Swagger) API definition, website and surrounding client ecosystem. While OpenAPI can be cumbersome, it has been a decent way to synchronize projects written in different language ecosystems in a unified way.
I worked on Netlify’s Product team for a bit over a year and completed many successful user-facing projects. Here are just a few examples:
I was the primary author of Netlify’s current CLI codebase.
I gave a talk on some ideas and concepts I came across working with Netlify’s platform and general JAMStack architecture at a local Portland JAMStack meetup.
I led the Netlify Domains project, which allowed users to add an existing live domain during site setup, or buy the domain if it was available. This feature enabled users to deploy a website from a git repo to a real live domain name with automatic HTTPS in a matter of minutes, and it has resulted in a nice stream of ever-increasing ARR for the company.
I helped lead and implement build status favicons, so you can put a build log into a tab, and monitor status from the tab bar.
I implemented the application UI for Netlify’s Lambda functions and logging infrastructure, and have continued to help design and improve the developer ergonomics of the Lambda functions feature set.
I helped architect and implement Netlify’s Identity widget.
I helped implement the UI for our site dashboard redesign.
I led the project to specify and implement the Audit Log for teams and identity instances.
I implemented the UI for Netlify’s split testing feature.
]]>