{ "version": "https://jsonfeed.org/version/1", "title": "bret.io", "home_page_url": "https://bret.io", "feed_url": "https://bret.io/feed.json", "description": "A running log of announcements, projects and accomplishments.", "author": { "name": "Bret Comnes", "url": "https://bret.io", "avatar": "/favicons/apple-touch-icon-1024x1024.png" }, "items": [ { "date_published": "2024-10-03T18:25:30.405Z", "title": "\"Don't Be Evil\" Is An Excuse For Evil", "url": "https://bret.io/blog/2024/dont-be-evil-is-an-excuse-for-evil/", "id": "https://bret.io/blog/2024/dont-be-evil-is-an-excuse-for-evil/#2024-10-03T18:25:30.405Z", "content_html": "

Google’s former and founding motto “Don’t be evil” sounds benevolent, but is it?

\n
\n \n \"Book\n \n
The original Google motto.
\n
\n

“Don’t be evil” paired with the company’s premise: let’s build products in a way that allows for immeasurable corporate access into people’s private lives for fun and profit, unlocking all the positive potential this data can provide.\nWe’ll only use it for good; the Evil™️ things will just be off limits.

\n

Of course the motto changes to “Do the right thing” in 2015, when it’s very obvious it’s impossible for Google not to indulge in the Evil with the potential for it sitting right there.\nBy loosening up the motto to allow for at least some Evil, so long as it’s the “right thing”, Google can at least be honest with the public that “Evil” is definitely on the table.

\n

Maybe it’s time to demand “Can’t be evil”? Build in a way where the evil just isn’t possible.\nReject the possibility of evil in the software you choose to use.

\n" }, { "date_published": "2024-02-13T18:24:49.707Z", "title": "The Art of Doing Science and Engineering", "url": "https://bret.io/blog/2024/the-art-of-doing-science-and-engineering/", "id": "https://bret.io/blog/2024/the-art-of-doing-science-and-engineering/#2024-02-13T18:24:49.707Z", "content_html": "
\n \n \"Book\n \n
The art of doing science and engineering : Learning to Learn by Richard W. Hamming
\n
\n

Richard Hamming’s “The Art of Doing Science and Engineering” is a book capturing the lessons he taught in a course he gave at the U.S. Navy Postgraduate School in Monterey, CA.\nHe characterizes what he was trying to teach as a “style” of thinking in science and engineering.

\n

Having a physics degree myself, and also finding myself periodically ruminating on the agony of professional software development and hoping to find some overlap between my professional field and Hamming’s life experience, I gave it a read.

\n

The book is filled with nuggets of wisdom and illustrates a career of move-the-needle science and engineering at Bell Labs.\nI didn’t personally find much value in many of the algebraic walk-throughs of various topics like information theory, but learning how Hamming discovered error-correcting codes was definitely interesting and worth a read.

\n

The highlight of the book comes in the second half, where he includes interesting stories, analogies and observations on nearly every page. Below are the highlights I pulled while reading.

\n

On What Makes Good Design

\n
\n

That brings up another point, which is now well recognized in software for computers but which applies to hardware too. Things change so fast that part of the system design problem is that the system will be constantly upgraded in ways you do not now know in any detail! Flexibility must be part of the modern design of things and processes. Flexibility built into the design means not only will you be better able to handle the changes which will come after installation, but it also contributes to your own work as the small changes which inevitably arise both in the later stages of design and in the field installation of the system…

\n

Thus rule two:

\n

Part of systems engineering design is to prepare for changes so they can be gracefully made and still not degrade the other parts.

\n

– p.367

\n
\n

This quote is my favorite out of the entire book.\nSoftware engineering feels like a constant fight between the impulse to lock down runtime versions, specific dependency versions, and other environmental factors versus developing software in a way that accommodates wide variance in all of these components.\nBoth approaches claim reliability and flexibility; however, which approach actually tests for it?

\n

In my experience, the tighter the runtime dependency specifications, the faster fragility spreads, and it’s satisfying to hear Hamming’s experience echo this observation. Sadly though, his observation that those writing software will universally understand this simply hasn’t held up.

\n
\n

Good design protects you from the need for too many highly accurate components in the system. But such design principles are still, to this date, ill understood and need to be researched extensively. Not that good designers do not understand this intuitively, merely it is not easily incorporated into the design methods you were taught in school.

\n

Good minds are still needed in spite of all the computing tools we have developed. The best mind will be the one who gets the principle into the design methods taught so it will be automatically available for lesser minds!

\n

– p.268

\n
\n

Here Hamming is describing H.S. Black’s feedback circuit’s tolerance for low accuracy components as what constitutes good design. I agree! Technology that works at any scale, made out of commodity parts with minimal runtime requirements tends to be what is most useful across the longest amount of time.

\n

On Committees

\n
\n

Committee decisions, which tend to diffuse responsibility, are seldom the best in practice—most of the time they represent a compromise which has none of the virtues of any path and tends to end in mediocrity.

\n

– p.274

\n
\n

I appreciated his observations on committees and their tendency to launder responsibility.\nThey serve a purpose, but it’s important to understand their nature.

\n

On Data and Observation

\n
\n

The Hawthorne effect strongly suggests the proper teaching method will always be in a state of experimental change, and it hardly matters just what is done; all that matters is both the professor and the students believe in the change.

\n

– p.288

\n
\n
\n

It has been my experience, as well as the experience of many others who have looked, that data is generally much less accurate than it is advertised to be. This is not a trivial point—we depend on initial data for many decisions, as well as for the input data for simulations which result in decisions.

\n

– p.345

\n
\n
\n

Averages are meaningful for homogeneous groups (homogeneous with respect to the actions that may later be taken), but for diverse groups averages are often meaningless. As earlier remarked, the average adult has one breast and one testicle, but that does not represent the average person in our society.

\n

– p.356

\n
\n
\n

You may think the title means that if you measure accurately you will get an accurate measurement, and if not then not, but it refers to a much more subtle thing—the way you choose to measure things controls to a large extent what happens. I repeat the story Eddington told about the fishermen who went fishing with a net. They examined the size of the fish they caught and concluded there was a minimum size to the fish in the sea. The instrument you use clearly affects what you see.

\n

– p.373

\n
\n

I think many people who attempt to measure anything intuitively understand that their approach is reflected in the results to some degree.\nI hadn’t heard of the Hawthorne effect before, but it makes sense.

\n

People with an idea for improving something implement it and it works, because they want it to work and allow its effects to take hold.\nThen someone else is prescribed the idea, or is brought into the fold where it is implemented, and the benefits evaporate.

\n

I’ve long suspected that in the context of professional software development, where largely unscrutinized benchmarks and soft data are the norm, people start with an opinion or theory and work backward to data that supports it.\nCould it just be that people need to believe that working in a certain way is necessary for them to work optimally? Could it be that “data” is often just a work function used to outmaneuver competing ideas?

\n

Anyway, just another thing to factor in when data is plopped in your lap.

\n

On Theory

\n
\n

Moral: there need not be a unique form of a theory to account for a body of observations; instead, two rather different-looking theories can agree on all the predicted details. You cannot go from a body of data to a unique theory! I noted this in the last chapter.

\n

–p.314

\n
\n
\n

Heisenberg derived the uncertainty principle that conjugate variables, meaning Fourier transforms, obeyed a condition in which the product of the uncertainties of the two had to exceed a fixed number, involving Planck’s constant. I earlier commented, Chapter 17, this is a theorem in Fourier transforms-any linear theory must have a corresponding uncertainty principle, but among physicists it is still widely regarded as a physical effect from nature rather than a mathematical effect of the model.

\n

–p.316

\n
\n

I appreciate Hamming suggesting that some of our understanding of physical reality could be a byproduct of the model being used to describe it.\nThis isn’t examined closely in undergraduate or graduate quantum mechanics, and I find it interesting that Hamming, who is clearly highly intuitive with modeling, also raises this question.

\n

Predictions

\n
\n

Let me now turn to predictions of the immediate future. It is fairly clear that in time “drop lines” from the street to the house (they may actually be buried, but will probably still be called “drop lines”) will be fiber optics. Once a fiber-optic wire is installed, then potentially you have available almost all the information you could possibly want, including TV and radio, and possibly newspaper articles selected according to your interest profile (you pay the printing bill which occurs in your own house). There would be no need for separate information channels most of the time. At your end of the fiber there are one or more digital filters. Which channel you want, the phone, radio, or TV, can be selected by you much as you do now, and the channel is determined by the numbers put into the digital filter-thus the same filter can be multipurpose, if you wish. You will need one filter for each channel you wish to use at the same time (though it is possible a single time-sharing filter would be available) and each filter would be of the same standard design. Alternately, the filters may come with the particular equipment you buy.

\n

– p.284-285

\n
\n

Here Hamming is predicting the internet. He got very close, and it’s interesting to think that these signals would all just be piped to your house in a bundle, and you would pay for a filter to unlock access to the ones you want. Hey, cable TV worked that way for a long time!

\n

On Leadership

\n
\n

But a lot of evidence on what enabled people to make big contributions points to the conclusion that a famous prof was a terrible lecturer and the students had to work hard to learn it for themselves! I again suggest a rule:

\n

What you learn from others you can use to follow;

\n

What you learn for yourself you can use to lead.

\n

– p.292

\n
\n

Learn by doing, not by following.

\n
\n

What you did to become successful is likely to be counterproductive when applied at a later date.

\n

– p.342

\n
\n

It’s easy to blame changing trends in software development for the disgustingly short half-life of knowledge regarding development patterns and tools, but I think it’s probably just the nature of knowledge-based work.\nOperating by yourself may be effective and work well, but it’s not a recipe for success at any given moment in time.

\n
\n

A man was examining the construction of a cathedral. He asked a stonemason what he was doing chipping the stones, and the mason replied, “I am making stones.” He asked a stone carver what he was doing; “I am carving a gargoyle.” And so it went; each person said in detail what they were doing. Finally he came to an old woman who was sweeping the ground. She said, “I am helping build a cathedral.”\nIf, on the average campus, you asked a sample of professors what they were going to do in the next class hour, you would hear they were going to “teach partial fractions,” “show how to find the moments of a normal distribution,” “explain Young’s modulus and how to measure it,” etc. I doubt you would often hear a professor say, “I am going to educate the students and prepare them for their future careers.”\nThis myopic view is the chief characteristic of a bureaucrat. To rise to the top you should have the larger view—at least when you get there.

\n

– p.360

\n
\n

Software bureaucrats aplenty. Really easy to fall into this role.

\n
\n

I must come to the topic of “selling” new ideas. You must master three things to do this (Chapter 5):

\n
    \n
  1. Giving formal presentations,
  2. Producing written reports, and
  3. Mastering the art of informal presentations as they happen to occur.
\n

All three are essential—you must learn to sell your ideas, not by propaganda, but by force of clear presentation. I am sorry to have to point this out; many scientists and others think good ideas will win out automatically and need not be carefully presented. They are wrong;

\n

– p.396

\n
\n

One thing I regret over the last 10 years of my career is not writing down more of the insights I have learned through experience.\nIdeas simply don’t transmit if they aren’t written down or put into some consumable format like video or audio.\nNearly every annoying tool or developer trend you are forced to use is in play because its idea was communicated through blogs, videos and conference talks.\nAnd those who watched echoed those messages.

\n

On Experts

\n
\n

An expert is one who knows everything about nothing; a generalist knows nothing about everything.

\n

In an argument between a specialist and a generalist, the expert usually wins by simply (1) using unintelligible jargon, and (2) citing their specialist results, which are often completely irrelevant to the discussion. The expert is, therefore, a potent factor to be reckoned with in our society. Since experts both are necessary and also at times do great harm in blocking significant progress, they need to be examined closely. All too often the expert misunderstands the problem at hand, but the generalist cannot carry through their side to completion. The person who thinks they understand the problem and does not is usually more of a curse (blockage) than the person who knows they do not understand the problem.

\n

– p.333

\n
\n

Understand when you are a generalist and when you are a specialist.

\n
\n

Experts, in looking at something new, always bring their expertise with them, as well as their particular way of looking at things. Whatever does not fit into their frame of reference is dismissed, not seen, or forced to fit into their beliefs. Thus really new ideas seldom arise from the experts in the field. You cannot blame them too much, since it is more economical to try the old, successful ways before trying to find new ways of looking and thinking.

\n

If an expert says something can be done he is probably correct, but if he says it is impossible then consider getting another opinion.

\n

– p.336

\n
\n

Anyone wading into a technical field will encounter experts at every turn.\nThey have valuable information, but they are also going to give you dated, myopic advice (gatekeeping?).\nI like Hamming’s framing here and it reflects my experience when weighing expert opinion.

\n
\n

In some respects the expert is the curse of our society, with their assurance they know everything, and without the decent humility to consider they might be wrong. Where the question looms so important, I suggested to you long ago to use in an argument, “What would you accept as evidence you are wrong?” Ask yourself regularly, “Why do I believe whatever I do?” Especially in the areas where you are so sure you know, the area of the paradigms of your field.

\n

– p.340

\n
\n

I love this exercise. It will also drive you crazy. Tread carefully.

\n
\n

Systems engineering is indeed a fascinating profession, but one which is hard to practice. There is a great need for real systems engineers, as well as perhaps a greater need to get rid of those who merely talk a good story but cannot play the game effectively.

\n

– p.372

\n
\n

Controversial, harsh, but true.

\n

The Binding

\n

The last thing I want to recognize is the beautiful cloth resin binding and quality printing of the book. Bravo Stripe Press for still producing beautiful artifacts at affordable pricing in the age of print on demand.

\n" }, { "date_published": "2024-01-15T23:55:33.582Z", "title": "async-neocities has a bin", "url": "https://bret.io/blog/2024/async-neocities-bin/", "id": "https://bret.io/blog/2024/async-neocities-bin/#2024-01-15T23:55:33.582Z", "content_html": "

async-neocities v3.0.0 is now available and introduces a CLI.

\n
Usage: async-neocities [options]\n\n    Example: async-neocities --src public\n\n    --help, -h            print help text\n    --src, -s             The directory to deploy to neocities (default: "public")\n    --cleanup, -c         Destructively clean up orphaned files on neocities\n    --protect, -p         String to minimatch files which will never be cleaned up\n    --status              Print auth status of current working directory\n    --print-key           Print api-key status of current working directory\n    --clear-key           Remove the currently associated API key\n    --force-auth          Force re-authorization of current working directory\n\nasync-neocities (v3.0.0)\n
\n

When you run it, you will see something similar to this:

\n
> async-neocities --src public\n\nFound siteName in config: bret\nAPI Key found for bret\nStarting inspecting stage...\nFinished inspecting stage.\nStarting diffing stage...\nFinished diffing stage.\nSkipping applying stage.\nDeployed to Neocities in 743ms:\n    Uploaded 0 files\n    Orphaned 0 files\n    Skipped 244 files\n    0 protected files\n
\n

async-neocities was previously available as a GitHub Action called deploy-to-neocities. That Action API remains available; however, the CLI offers a local-first workflow that was not previously possible.

\n

Local First Deploys

\n

Now that async-neocities is available as a CLI, you can easily configure it as an npm script and run it locally when you want to push changes to neocities without relying on GitHub Actions.\nIt also works great in Actions, with the side benefit that deploys work exactly the same way in both local and remote environments.

\n

Here is a quick example of that:

\n\n
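
A minimal sketch of that kind of setup, assuming a site that builds into a public directory (the deploy script names here are just illustrations):

\n
{\n  "scripts": {\n    "deploy": "async-neocities --src public",\n    "deploy:clean": "async-neocities --src public --cleanup"\n  },\n  "devDependencies": {\n    "async-neocities": "^3.0.0"\n  }\n}\n
\n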

The async-neocities CLI re-uses the same ENV name as the deploy-to-neocities action, so migrating to the CLI requires no additional changes to the Actions environment secrets.

\n

CLIs vs Actions

\n

This prompts some questions regarding when CLIs and when Actions are most appropriate. Let’s compare the two:

\n

CLIs

\n\n

Actions

\n\n

Conclusion

\n

In addition to the CLI, async-neocities migrates to full Node.js ESM and internally enables ts-in-js, though the types were far too dynamic to export full type support in the time I had available.

\n

With respect to an implementation plan going forward regarding CLIs vs actions, I’ve summarized my thoughts below:

\n

Implement core functionality as a re-usable library.\nExposing a CLI makes that library an interactive tool that provides a local-first workflow and is equally useful in CI.\nExposing the library in an action further opens it up to a wider language ecosystem that would otherwise ignore it due to the ergonomic overhead of a foreign ecosystem.\nThe action is simpler to implement than a CLI, but the CLI offers a superior experience within the implemented language ecosystem.

\n" }, { "date_published": "2023-12-02T20:49:41.713Z", "title": "Reorganized", "url": "https://bret.io/blog/2023/reorganized/", "id": "https://bret.io/blog/2023/reorganized/#2023-12-02T20:49:41.713Z", "content_html": "

Behold, a mildly redesigned and reorganized landing page:

\n

\"screenshot

\n

It’s still not great, but it should make it easier to keep it up to date going forward.

\n

It has 3 sections:

\n\n

I removed a bunch of older inactive projects and links and stashed them in a project.

\n

Additionally, the edit button in the page footer now takes you to the correct page in GitHub for editing, so if you ever see a typo, feel free to send in a fix!

\n

Finally, the about page includes a live dump of the dependencies that were used to build the website.

\n" }, { "date_published": "2023-11-23T15:14:54.910Z", "title": "Reintroducing top-bun 7 🥐", "url": "https://bret.io/blog/2023/reintroducing-top-bun/", "id": "https://bret.io/blog/2023/reintroducing-top-bun/#2023-11-23T15:14:54.910Z", "content_html": "

After some unexpected weekends of downtime looking after sick toddlers, I’m happy to re-introduce top-bun v7.

\n

Re-introduce? Well, you may remember @siteup/cli, a spiritual offshoot of sitedown, the static site generator that turned a directory of markdown into a website.

\n

What’s new with top-bun v7?

\n

Let’s dive into the new features, changes and additions in top-bun 7.

\n

Rename to top-bun

\n

@siteup/cli is now top-bun.\nAs noted above, @siteup/cli was a name hack because I didn’t snag the bare npm name when it was available, and someone else had the genius idea of taking the same name. Hey it happens.

\n

I described the project to Chat-GPT and it recommended the following gems:

\n\n

OK Chat-GPT, pretty good, I laughed, but I’m not naming this web-erect.

\n

The kids have a recent obsession with Wallace & Gromit and we watched a lot of episodes while she was sick. Also I’ve really been enjoying 🥖 bread themes so I decided to name it after Wallace & Gromit’s bakery “Top Bun” in their hit movie “A Matter of Loaf and Death”.

\n
\n

A Docs Website

\n

top-bun now builds its own repo into a docs website. It’s slightly better than the GitHub README.md view, so go check it out! It even has a real domain name so you know it’s for real.

\n\n
\n \n \n \n \"Screenshot\n \n \n
top-bun builds itself into its own docs website in an act dubbed \"dogfooding\".
\n
\n

css bundling is now handled by esbuild

\n

esbuild is an amazing tool. postcss is a useful tool, but it’s slow and hard to keep up with. In top-bun, css bundling is now handled by esbuild.\ncss bundling is now faster and less fragile, and still supports many of the same transforms that siteup had before. CSS nesting is now supported in every modern browser, so we don’t even need a transform for that. Some basic transforms and prefixes are auto-applied by setting a relatively modern browser target.

\n

esbuild doesn’t support import chunking on css yet though, so each css entrypoint becomes its own bundle. If esbuild ever gets this optimization, so will top-bun. In the meantime, global.css, style.css and now layout.style.css give you ample room to generally optimize your scoped css loading by hand. It’s simpler and has fewer moving parts!

\n
\n \n \n \n \"Screenshot\n \n \n
The esbuild css docs are worth a cruise through.
\n
\n

Multi-layout support

\n

You can now have more than one layout!\nIn prior releases, you could only have a single root layout that you customized on a per-page basis with variables.\nNow you can have as many layouts as you need.\nThey can even nest.\nCheck out this example of a nested layout from this website. It’s named article.layout.js and imports the root.layout.js. It wraps the children and then passes the results to root.layout.js.

\n
// article.layout.js\n\nimport { html } from 'uhtml-isomorphic'\nimport { sep } from 'node:path'\nimport { breadcrumb } from '../components/breadcrumb/index.js'\n\nimport defaultRootLayout from './root.layout.js'\n\nexport default function articleLayout (args) {\n  const { children, ...rest } = args\n  const vars = args.vars\n  const pathSegments = args.page.path.split(sep)\n  const wrappedChildren = html`\n    ${breadcrumb({ pathSegments })}\n    <article class="article-layout h-entry" itemscope itemtype="http://schema.org/NewsArticle">\n      <header class="article-header">\n        <h1 class="p-name article-title" itemprop="headline">${vars.title}</h1>\n        <div class="metadata">\n          <address class="author-info" itemprop="author" itemscope itemtype="http://schema.org/Person">\n            ${vars.authorImgUrl\n              ? html`<img height="40" width="40"  src="${vars.authorImgUrl}" alt="${vars.authorImgAlt}" class="u-photo" itemprop="image">`\n              : null\n            }\n            ${vars.authorName && vars.authorUrl\n              ? html`\n                  <a href="${vars.authorUrl}" class="p-author h-card" itemprop="url">\n                    <span itemprop="name">${vars.authorName}</span>\n                  </a>`\n              : null\n            }\n          </address>\n          ${vars.publishDate\n            ? html`\n              <time class="published-date dt-published" itemprop="datePublished" datetime="${vars.publishDate}">\n                <a href="#" class="u-url">\n                  ${(new Date(vars.publishDate)).toLocaleString()}\n                </a>\n              </time>`\n            : null\n          }\n          ${vars.updatedDate\n            ? html`<time class="dt-updated" itemprop="dateModified" datetime="${vars.updatedDate}">Updated ${(new Date(vars.updatedDate)).toLocaleString()}</time>`\n            : null\n          }\n        </div>\n      </header>\n\n      <section class="e-content" itemprop="articleBody">\n        ${typeof children === 'string'\n          ? html([children])\n          : children /* Support both uhtml and string children. Optional. */\n        }\n      </section>\n\n      <!--\n        <footer>\n            <p>Footer notes or related info here...</p>\n        </footer>\n      -->\n    </article>\n    ${breadcrumb({ pathSegments })}\n  `\n\n  return defaultRootLayout({ children: wrappedChildren, ...rest })\n}\n
\n

Layout styles and js bundles

\n

With multi-layout support, it made sense to introduce two more style and js bundle types:

\n\n

Previously, the global.css and global.client.js bundles served this need.

\n
\n \n \n \n \"Screenshot\n \n \n
Layouts introduce a new asset scope in the form of layout clients and style.
\n
\n

Layouts and Global Assets live anywhere

\n

Layouts, global.client.js, etc. used to have to live at the root of the project src directory. This made it simple to find them when building, and eliminated duplicate singleton errors, but the root of a website is already crowded. It was easy enough to find these things anywhere, so now you can organize these special files in any way you like. I’ve been using:

\n\n
\n \n \n \n \"Screenshot\n \n \n
You are free to organize globals and layouts wherever you want now. The globals and layouts folders are a great choice!
\n
\n

Template files

\n

Given the top-bun variable cascade system, and the fact that not all website files are html, it made sense to include a templating system for generating any kind of file from the global.vars.js variable set. This lets you generate arbitrary website “sidefiles” from your site variables.

\n

It works great for generating RSS feeds for websites built with top-bun. Here is the template file that generates the RSS feed for this website:

\n
import pMap from 'p-map'\nimport jsonfeedToAtom from 'jsonfeed-to-atom'\n\n/**\n * @template T\n * @typedef {import('@siteup/cli').TemplateAsyncIterator<T>} TemplateAsyncIterator\n */\n\n/** @type {TemplateAsyncIterator<{\n *  siteName: string,\n *  siteDescription: string,\n *  siteUrl: string,\n *  authorName: string,\n *  authorUrl: string,\n *  authorImgUrl: string\n *  layout: string,\n *  publishDate: string\n *  title: string\n * }>} */\nexport default async function * feedsTemplate ({\n  vars: {\n    siteName,\n    siteDescription,\n    siteUrl,\n    authorName,\n    authorUrl,\n    authorImgUrl\n  },\n  pages\n}) {\n  const blogPosts = pages\n    .filter(page => ['article', 'book-review'].includes(page.vars.layout) && page.vars.published !== false)\n    .sort((a, b) => new Date(b.vars.publishDate) - new Date(a.vars.publishDate))\n    .slice(0, 10)\n\n  const jsonFeed = {\n    version: 'https://jsonfeed.org/version/1',\n    title: siteName,\n    home_page_url: siteUrl,\n    feed_url: `${siteUrl}/feed.json`,\n    description: siteDescription,\n    author: {\n      name: authorName,\n      url: authorUrl,\n      avatar: authorImgUrl\n    },\n    items: await pMap(blogPosts, async (page) => {\n      return {\n        date_published: page.vars.publishDate,\n        title: page.vars.title,\n        url: `${siteUrl}/${page.pageInfo.path}/`,\n        id: `${siteUrl}/${page.pageInfo.path}/#${page.vars.publishDate}`,\n        content_html: await page.renderInnerPage({ pages })\n      }\n    }, { concurrency: 4 })\n  }\n\n  yield {\n    content: JSON.stringify(jsonFeed, null, '  '),\n    outputName: 'feed.json'\n  }\n\n  const atom = jsonfeedToAtom(jsonFeed)\n\n  yield {\n    content: atom,\n    outputName: 'feed.xml'\n  }\n\n  yield {\n    content: atom,\n    outputName: 'atom.xml'\n  }\n}\n
\n

Page Introspection

\n

Pages, Layouts and Templates can now introspect every other page in the top-bun build.

\n

You can now easily implement any of the following:

\n\n

Pages, Layouts and Templates receive a pages array that includes PageData instances for every page in the build. Variables are already pre-resolved, so you can easily filter, sort and target various pages in the build.

\n
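
As a rough sketch, a hypothetical page.js could use this to list recent posts; the property names mirror the feed template shown above, but the exact page function signature is an assumption:

\n
// page.js: a hedged sketch of reading the pages array from a page function\nimport { html } from 'uhtml-isomorphic'\n\nexport default async function recentPostsPage ({ pages }) {\n  // Published articles, newest first (vars are already pre-resolved on each PageData)\n  const posts = pages\n    .filter(page => page.vars.layout === 'article' && page.vars.published !== false)\n    .sort((a, b) => new Date(b.vars.publishDate) - new Date(a.vars.publishDate))\n    .slice(0, 5)\n\n  return html`\n    <ul>\n      ${posts.map(post => html`<li><a href="/${post.pageInfo.path}/">${post.vars.title}</a></li>`)}\n    </ul>\n  `\n}\n
\n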

Full Type Support

\n

top-bun now has full type support. It’s achieved with types-in-js and it took a ton of time and effort.

\n

The results are nice, but I’m not sure the juice was worth the squeeze. top-bun was working really well before types. Adding types required solidifying a lot of trivial details to make the type-checker happy. I don’t even think a single runtime bug was solved. It did help clarify some of the more complex types that had developed over the first 2 years of development though.

\n

The biggest improvement provided here is that the following types are now exported from top-bun:

\n
LayoutFunction<T>\nPostVarsFunction<T>\nPageFunction<T>\nTemplateFunction<T>\nTemplateAsyncIterator<T>\n
\n

You can use these to get some helpful auto-complete in LSP supported editors.

\n
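
For example, a nested layout like the one shown earlier can pick up the LayoutFunction type through a JSDoc annotation; this is a hedged sketch, and the import specifier and variable shape are assumptions:

\n
// layouts/typed-article.layout.js: a sketch of using the exported types via JSDoc\nimport { html } from 'uhtml-isomorphic'\n\nimport defaultRootLayout from './root.layout.js'\n\n/** @type {import('top-bun').LayoutFunction<{ title: string }>} */\nexport default function typedArticleLayout (args) {\n  const { children, vars, ...rest } = args\n  const wrappedChildren = html`\n    <article>\n      <h1>${vars.title}</h1>\n      ${typeof children === 'string' ? html([children]) : children}\n    </article>\n  `\n  return defaultRootLayout({ children: wrappedChildren, vars, ...rest })\n}\n
\n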

types-in-js not Typescript

\n

This was the first major dive I did into a project with types-in-js support.\nMy overall conclusions are:

\n\n

Handlebars support in md and html

\n

Previously, only js pages had access to the variable cascade inside of the page itself. Now html and md pages can access these variables with handlebars placeholders.

\n
## My markdown page\n\nHey this is a markdown page for {{ vars.siteName }} that uses handlebars templates.\n
\n

Locally shipped default styles

\n

Previously, if you opted for the default layout, it would import mine.css from unpkg. This worked, but went against the design goal of making top-bun sites as reliable as possible (shipping all final assets to the dest folder).

\n

Now when you build with the default layout, the default stylesheet (and theme picker js code) is built out into your dest folder.

\n

Built-in browser-sync

\n

@siteup/cli previously didn’t ship with a development server, meaning you had to run one in parallel when developing. That step is eliminated now that top-bun ships with browser-sync. browser-sync is one of the best Node.js development servers out there and offers a bunch of really helpful dev tools built right in, including scroll position sync, so testing across devices is actually enjoyable.

\n

If you aren’t familiar with browser-sync, here are some screenshots of its fun features:

\n
\n \n \n \n \"Screenshot\n \n \n
BrowserSync remote debugging
\n
\n
\n \n \n \n \"Screenshot\n \n \n
BrowserSync Grid overlay
\n
\n
\n \n \n \n \"Screenshot\n \n \n
BrowserSync CSS outline
\n
\n
\n \n \n \n \"Screenshot\n \n \n
BrowserSync CSS depth
\n
\n
\n \n \n \n \"Screenshot\n \n \n
BrowserSync CSS depth
\n
\n
\n \n \n \n \"Screenshot\n \n \n
BrowserSync Network throttle
\n
\n

top-bun eject

\n

top-bun now includes an --eject flag that will write out the default layout, style, client and dependencies into your src folder and update package.json. This lets you easily get started with customizing the default layouts and styles when you decide you need more control.

\n
√ default-layout % npx top-bun --eject\n\ntop-bun eject actions:\n  - Write src/layouts/root.layout.mjs\n  - Write src/globals/global.css\n  - Write src/globals/global.client.mjs\n  - Add mine.css@^9.0.1 to package.json\n  - Add uhtml-isomorphic@^2.0.0 to package.json\n  - Add highlight.js@^11.9.0 to package.json\n\nContinue? (Y/n) y\nDone ejecting files!\n
\n

The default layout is always supported, and it’s of course safe to rely on that.

\n

Improved log output 🪵

\n

The logging has been improved quite a bit. Here is an example log output from building this blog:

\n
> top-bun --watch\n\nInitial JS, CSS and Page Build Complete\nbret.io/src => bret.io/public\n├─┬ projects\n│ ├─┬ websockets\n│ │ └── README.md: projects/websockets/index.html\n│ ├─┬ tron-legacy-2021\n│ │ └── README.md: projects/tron-legacy-2021/index.html\n│ ├─┬ package-automation\n│ │ └── README.md: projects/package-automation/index.html\n│ └── page.js: projects/index.html\n├─┬ jobs\n│ ├─┬ netlify\n│ │ └── README.md: jobs/netlify/index.html\n│ ├─┬ littlstar\n│ │ └── README.md: jobs/littlstar/index.html\n│ ├── page.js:      jobs/index.html\n│ ├── zhealth.md:   jobs/zhealth.html\n│ ├── psu.md:       jobs/psu.html\n│ └── landrover.md: jobs/landrover.html\n├─┬ cv\n│ ├── README.md: cv/index.html\n│ └── style.css: cv/style-IDZIRKYR.css\n├─┬ blog\n│ ├─┬ 2023\n│ │ ├─┬ reintroducing-top-bun\n│ │ │ ├── README.md: blog/2023/reintroducing-top-bun/index.html\n│ │ │ └── style.css: blog/2023/reintroducing-top-bun/style-E2RTO5OB.css\n│ │ ├─┬ hello-world-again\n│ │ │ └── README.md: blog/2023/hello-world-again/index.html\n│ │ └── page.js: blog/2023/index.html\n│ ├── page.js:   blog/index.html\n│ └── style.css: blog/style-NDOJ4YGB.css\n├─┬ layouts\n│ ├── root.layout.js:             root\n│ ├── blog-index.layout.js:       blog-index\n│ ├── blog-index.layout.css:      layouts/blog-index.layout-PSZNH2YW.css\n│ ├── blog-auto-index.layout.js:  blog-auto-index\n│ ├── blog-auto-index.layout.css: layouts/blog-auto-index.layout-2BVSCYSS.css\n│ ├── article.layout.js:          article\n│ └── article.layout.css:         layouts/article.layout-MI62V7ZK.css\n├── globalStyle:               globals/global-OO6KZ4MS.css\n├── globalClient:              globals/global.client-HTTIO47Y.js\n├── globalVars:                global.vars.js\n├── README.md:                 index.html\n├── style.css:                 style-E5WP7SNI.css\n├── booklist.md:               booklist.html\n├── about.md:                  about.html\n├── manifest.json.template.js: manifest.json\n├── feeds.template.js:         feed.json\n├── feeds.template.js-1:       feed.xml\n└── feeds.template.js-2:       atom.xml\n\n[Browsersync] Access URLs:\n --------------------------------------\n       Local: http://localhost:3000\n    External: http://192.168.0.187:3000\n --------------------------------------\n          UI: http://localhost:3001\n UI External: http://localhost:3001\n --------------------------------------\n[Browsersync] Serving files from: /Users/bret/Developer/bret.io/public\nCopy watcher ready\n
\n

Support for mjs and cjs file extensions

\n

You can now name your page, template, vars, and layout files with the mjs or cjs file extensions. Sometimes this is a necessary evil. In general, set your type in your package.json correctly and stick with .js.

\n
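
In other words, declare the module type once in package.json and plain .js resolves as ESM everywhere; a minimal sketch:

\n
{\n  "type": "module"\n}\n
\n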

What’s next for top-bun

\n

The current plan is to keep sitting on this feature set for a while. But I have some ideas:

\n\n

If you try out top-bun, I would love to hear about your experience. Do you like it? Do you hate it? Open a discussion item, or reach out privately.

\n

History of top-bun

\n

OK, now it’s time for the story behind top-bun, aka @siteup/cli.

\n

I ran some experiments with orthogonal tool composition a few years ago. I realized I could build sophisticated module-based websites by composing various tools together, simply by running them in parallel.

\n

What does this idea look like? See this snippet of a package.json:

\n
 { "scripts": {\n    "build": "npm run clean && run-p build:*",\n    "build:css": "postcss src/index.css -o public/bundle.css",\n    "build:md": "sitedown src -b public -l src/layout.html",\n    "build:feed": "generate-feed src/log --dest public && cp public/feed.xml public/atom.xml",\n    "build:static": "cpx 'src/**/*.{png,svg,jpg,jpeg,pdf,mp4,mp3,js,json,gif}' public",\n    "build:icon": "gravatar-favicons --config favicon-config.js",\n    "watch": "npm run clean && run-p watch:* build:static",\n    "watch:css": "run-s 'build:css -- --watch'",\n    "watch:serve": "browser-sync start --server 'public' --files 'public'",\n    "watch:md": "npm run build:md -- -w",\n    "watch:feed": "run-s build:feed",\n    "watch:static": "npm run build:static -- --watch",\n    "watch:icon": "run-s build:icon",\n    "clean": "rimraf public && mkdirp public",\n    "start": "npm run watch"\n  }\n}\n
\n\n

I successfully implemented this pattern across 4-5 different websites I managed. It worked beautifully. Some of the things I liked about it:

\n\n

But it had a few drawbacks:

\n\n

So I decided to roll up all of the common patterns into a single tool that included the discoveries of this process.

\n

@siteup/cli extended sitedown

\n

Because it was clear sitedown provided the core structure of this pattern (making the website part), I extended the idea in the project @siteup/cli. Here are some of the initial features that project shipped with:

\n\n

After sitting on the idea of siteup for over a year, by the time I published it to npm the name was taken, so I used the npm org name hack to get a similar name, @siteup/cli. SILLY NPM!

\n

I enjoyed how @siteup/cli came out, and have been using it for 2 years now. Thank you of course to ungoldman for laying the foundation of most of these tools and patterns. Onward and upward to top-bun!

\n

\"contribution

\n" }, { "date_published": "2023-08-30T18:06:24.000Z", "title": "Hello world (again) 🌎", "url": "https://bret.io/blog/2023/hello-world-again/", "id": "https://bret.io/blog/2023/hello-world-again/#2023-08-30T18:06:24.000Z", "content_html": "

\"sunset\"

\n

This blog has been super quiet lately, sorry about that.\nI had some grand plans for finishing up my website tools to more seamlessly\nsupport blogging.

\n

The other day around 3AM, I woke up and realized that the tools aren’t stopping me\nfrom writing, I am.\nAlso, my silently adopted policy of not writing about ‘future plans and ideas before they are ready’ was implemented far too strictly.\nIt is in fact a good thing to write about in-progress ideas and projects slightly out into the future.\nThis is realistic, interesting, and avoids the juvenile trap of spilling ideas in front of the world only to never realize them.\nSo here I am writing a blog post again.

\n

Anyway, no promises, but it is my goal to write various ahem opinions, thoughts and ideas more often because nothing ever happens unless you write it down.

\n

Some basic updates

\n

Here are some updates from the last couple years that didn’t make it onto this site before.

\n\n

\"pic

\n" }, { "date_published": "2021-08-05T23:09:46.781Z", "title": "Tron Legacy 2021", "url": "https://bret.io/projects/tron-legacy-2021/", "id": "https://bret.io/projects/tron-legacy-2021/#2021-08-05T23:09:46.781Z", "content_html": "

I updated the Sublime Text Tron Color Scheme today after a few weeks of reworking it for the recent release of Sublime Text 4.

\n

The 2.0.0 release converts the older .tmTheme format into the Sublime specific theme format.\nOverall the new Sublime theme format (.sublime-color-scheme) is a big improvement, largely due to its simple JSON structure and its variables support.

\n

JSON is, despite the common arguments against it, super readable and easily written by humans.\nThe variable support makes the process of making a theme a whole lot more automatic, since you no longer have to find and replace colors all over the place.

\n
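
To illustrate the variables support, here is a minimal hedged sketch of the .sublime-color-scheme shape (the names and colors are placeholders, not the actual Tron Legacy values):

\n
{\n  "name": "Example Scheme",\n  "variables": {\n    "glow": "#6ee2ff"\n  },\n  "globals": {\n    "background": "#0b1622",\n    "foreground": "var(glow)"\n  },\n  "rules": [\n    {\n      "scope": "string",\n      "foreground": "var(glow)"\n    }\n  ]\n}\n
\n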

The biggest problem I ran into was poor in-line color highlighting when working with colors, so I ended up using a VSCode plugin called Color Highlight in a separate window.\nSublime has a great plugin also called Color Highlight that usually works well, but not in this case.\nThe Sublime Color Highlight variant actually does temporary modifications to color schemes, which seriously gets in the way when working on color scheme files.

\n

The rewrite is based off of the new Mariana theme that ships with ST4, so the theme should support most of the latest features in ST4, though there are surely features that even Mariana missed.\nLet me know if you know of any.

\n

Here are a few other points of consideration made during the rewrite:

\n\n
\n \"Tron\n
JS Syntax example
\n
\n
\n \"Tron\n
Markdown Syntax example
\n
\n
\n \"Tron\n
Python Syntax example
\n
\n
\n \"Tron\n
C Syntax example
\n
\n
\n \"Tron\n
diff Syntax example
\n
\n

Here are a few more relevant links; please let me know what you think if you try it out.

\n\n

Syndication

\n\n

Sublime Tron Legacy color scheme fully updated for @sublimehq Text 4. Full syntax support, lots of other small improvements. Also it supports 'glow' text✌️ pic.twitter.com/vShbGThgDF

— 🌌🌵🛸Bret🏜👨‍👩‍👧🚙 (@bcomnes) August 5, 2021
\n

2021-08-05T23:09:46.781Z

\n" }, { "date_published": "2021-07-16T19:20:44.161Z", "title": "Littlstar Portfolio", "url": "https://bret.io/jobs/littlstar/", "id": "https://bret.io/jobs/littlstar/#2021-07-16T19:20:44.161Z", "content_html": "\n

After a short sabbatical in Denmark at Hyperdivision, I joined Littlstar.\nHere is a quick overview of some of the more interesting projects I worked on.\nI joined Littlstar during a transitional period and helped the company transition from a VR video platform to a video-on-demand and live-streaming platform, and now an NFT sales and auction platform.

\n

NYCPF VR Platform

\n
\n \"NYCPF-XR\n
NYCPF used a local process talking over websocket RPC to a webapp running in a local browser.
\n
\n

My first project was picking up on an agency-style project developing a novel VR training platform for NYCPF, powered by a custom hypercore p2p file sharing network, delivering in-house developed Unity VR scenarios.\nThese scenarios could then be brought to various locations like schools or events, where NYCPF could guide participants through various law enforcement scenarios with different outcomes based on participant choices within the simulation.

\n

By utilizing a p2p and offline-first design approach, we were able to deliver an incredibly flexible and robust delivery platform with all sorts of features that would be difficult to build on traditional file distribution platforms, such as:

\n\n

While the project was built on the backs of giants like the Hypercore protocol, as well as the amazing work of my colleague, I contributed in a number of areas to move the project forward during my time contracting on it.

\n\n

\"Tiny

\n

Some of the discrete software packages that resulted from this project are described below.

\n

secure-rpc-protocol

\n

Secure rpc-protocol over any duplex socket using noise-protocol.\nThis was a refactor of an existing RPC-over-websocket solution the project was using.\nIt improved upon the previous secure RPC by switching to the noise protocol, which implements well-understood handshake patterns that can be shared and audited between projects, rather than relying on a novel implementation at the RPC layer.\nIt also decoupled the RPC protocol from the underlying socket being used, so that the RPC system could be used over any other channels we might want in the future.

\n
\n \"Secure\n
Secure RPC protocol uses the noise protocol for encryption, and works over any duplex socket.
\n
\n

async-folder-walker

\n

An async generator that walks files.\nThis project was a refactor of an existing project called folder-walker implementing a high performance folder walk algorithm using a more modern async generator API.

\n
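
The core idea, sketched generically in Node.js rather than as async-folder-walker’s actual API, looks roughly like this:

\n
// walk.js: a generic sketch of a recursive async-generator folder walk\nimport { opendir } from 'node:fs/promises'\nimport { join } from 'node:path'\n\nexport async function * walk (dir) {\n  for await (const entry of await opendir(dir)) {\n    const fullPath = join(dir, entry.name)\n    if (entry.isDirectory()) {\n      yield * walk(fullPath) // recurse into subdirectories\n    } else {\n      yield fullPath // yield file paths as they are discovered\n    }\n  }\n}\n\n// for await (const file of walk('.')) console.log(file)\n
\n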
\n \"Async\n
Async folder walker provides a modern api for folder and file walking of a directory.
\n
\n

unpacker-with-progress

\n

unpacker-with-progress is a specialized package that unpacks archives and exposes a progress api for UI feedback.\nOne of the deficiencies with the NYCPF project when I started was the lack of UI feedback during the extraction process.

\n

VR files are very large, and are delivered compressed to the clients.\nAfter the download is complete, the next step to processing the data is unpacking the content.\nThis step did not provide any sort of progress feedback to the user because the underlying unpacking libraries did not expose this information, or only exposed some of the information needed to display a progress bar.

\n

This library implemented support for unpacking the various archive formats the project required, and also added an API providing uniform unpacking progress info that could be used in the UI during unpacking tasks.

\n
\n \"Unpacker\n
unpacker-with-progress is a consistency layer on top of a few unpacking libraries, with an emphasis on progress reporting for use in UI applications.
\n
\n

hchacha20

\n

One of the more interesting side projects I worked on was porting over some of the libsodium primitives to sodium-javascript.\nI utilized a technique I learned about at Hyperdivision where one can write WebAssembly by hand in the WAT format, which provides a much wider set of data types and the type guarantees needed to write effective crypto.

\n

While the WAT was written for HChaCha20, the effort was quite laborious and it kicked off a debate as to whether it would be better to just wrap libsodium-js (the official libsodium js port) in a wrapper that provided the sodium-universal API. This was achieved by another group in geut/sodium-javascript-plus which successfully ran hypercores in the browser using that wrapper.

\n

Ultimately, this effort was scrapped after determining that noise peer connections in the browser are redundant with WebRTC encryption and https sockets.\nIt was a fun and interesting project nonetheless.

\n
\n \"Some\n
A peek into the world of WASM via WAT.
\n
\n

Reconnecting sockets

\n

We were having some state transition bugs between the webapp and the local server process, where the app could get into strange indeterminate states.\nBoth had independent reconnect logic wrapped up with application code, and it added a lot of chaos to understanding how each process was behaving when things went wrong (especially around sleep cycles on machines).

\n

I implemented a generic reconnecting state machine that could accept any type of socket, and we were able to reduce the number of state transition bugs we had.

\n\n
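
The shape of that state machine, sketched generically (the backoff values and event names are assumptions, not the actual package API):

\n
// reconnecting.js: a generic sketch of a socket-agnostic reconnect wrapper\nexport function reconnecting (createSocket, { baseDelay = 1000, maxDelay = 30000 } = {}) {\n  let attempts = 0\n  let stopped = false\n  let socket = null\n\n  function open () {\n    if (stopped) return\n    socket = createSocket()\n    socket.on('open', () => { attempts = 0 }) // reset backoff after a successful connection\n    socket.on('close', () => {\n      if (stopped) return\n      const delay = Math.min(baseDelay * 2 ** attempts++, maxDelay)\n      setTimeout(open, delay) // exponential backoff before reconnecting\n    })\n  }\n\n  open()\n\n  return {\n    get socket () { return socket },\n    stop () {\n      stopped = true\n      if (socket) socket.close()\n    }\n  }\n}\n
\n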
\n \"Reconnecting\n
\n

Little Core Labs

\n\n

After working on various agency projects at Littlstar, we formed a separate organization to start a fresh rewrite of the technology stack.

\n

My high level contributions:

\n\n

This was an amazing founders-style opportunity to help rethink and re-implement years of work that had developed at Littlstar prior to joining.\nEffectively starting from 0, we rethought the entire technology pipeline, from operations, to infrastructure, to deployment, resulting in something really nice, modern, minimal, low maintenance and malleable.

\n\n
\n \"A\n
\n A screencap of our cloudformation diagram.\n
\n
\n

Terraform ops

\n

After ingesting the “Terraform: Up & Running” and “AWS Certified Solutions Architect” books, and building off existing organizational experience with AWS, I helped research and design an operations plan using Terraform and GitHub Actions.

\n
\n
\n \n
\n
\n \n
\n
\n

This arrangement has proven powerful and flexible.\nWhile it isn’t perfect, it has been effective, reliable and cheap, despite some of the more esoteric edge cases of Terraform.

\n

A quick overview of how it’s arranged:

\n\n

Github actions

\nActions logo\n

One of the drawbacks of rolling our own Terraform CI infrastructure was that we had to tackle many small edge cases inside the GitHub actions environment.

\n

It was nice to learn about the various types of custom GitHub actions one can write, as well as spread that knowledge to the rest of the org, but it also ate up a number of days focusing on DevOps problems specific to our CI environment.

\n

Here are some of the problems I helped solve in the actions environment.

\n\n

\"netrc-creds\"

\n

sdk-js

\n

I helped lay the framework for the initial version of sdk-js, the Little Core Labs unified library used to talk to the various back-end services at Little Core Labs.

\n

One of the underlying design goals was to solve for the newly introduced native ESM features in Node.js, in such a way that the package could be consumed directly in the browser and natively as ESM in Node.js, but also work in dual CJS/ESM environments like Next.js.\nWhile this did add some extra overhead to the project, it serves as a design pattern we can pull from in the future, as well as providing a highly compatible but modern API client.

\n
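
The heart of the dual package pattern is a conditional exports map in package.json; a rough sketch, with placeholder file names:

\n
{\n  "name": "example-sdk",\n  "type": "module",\n  "main": "./dist/index.cjs",\n  "exports": {\n    ".": {\n      "import": "./index.js",\n      "require": "./dist/index.cjs"\n    }\n  }\n}\n
\n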

I also extracted out this dual package pattern into a reusable template.

\n\n

\"sdk-js\"

\n

rad.live

\n

\"Rad.live\"

\n

I was the principal engineer on the new Rad.live website.\nI established and implemented the tech stack, aiming for a relatively conservative take on a code base that would maximize readability and clarity for a dev team that would eventually grow in size.

\n

At a high level, the application is simply an app written with:

\n\n

From a development perspective, it was important to have testing and automation from the beginning.\nFor this we used:

\n\n

Overall, two other seasoned developers (one familiar with React and one new to it) have jumped into the codebase and, by their own testimony, found it productive.\nGathering feedback from those that I work with on the technical decisions I make is an important feedback loop I always try to incorporate when greenfielding projects.\nAdditionally, it has been a relatively malleable code base that is easy to add MVP features to and is in a great position to grow.

\n

vision.css

\n

\"vision.css\"

\n

I implemented a custom design system in tandem with our design team.\nThis project has worked out well, and has so far avoided ‘css lockout’, where only one developer can effectively make dramatic changes to an app layout due to an undefined and overly general orthogonal ‘global style sheet’.

\n

The way this was achieved was by focusing on a simple global CSS style sheet that implements the base HTML elements in accordance with the design system created by the design team.\nWhile this does result in a few element variants that are based on a global style class name, they remain in the theme of only styling ‘built in’ html elements, so there is little question what might be found in the global style sheet, and what needs to be a scoped css style.

\n

Some of the features we used for vision.css:

\n\n

eslint-config-12core

\n

\"eslint-config-12core\"

\n

Linting is the “spell check” of code, but it’s hard to agree on what rules to follow.\nLike most things, having a standard set of rules that is good enough is always better than no rules, and usually better than an unpredictable and unmaintainable collection of project-unique rules.

\n

I put together a shared ESLint config, based on the ‘StandardJS’ ruleset, that has worked for most projects so far at Little Core Labs, but remains flexible enough to accommodate unique org requirements.\nAdditionally, I’ve implemented it across many of the projects in the Github org.

\n\n

gqlr

\n

\"gqlr\"

\n

gqlr is a simplified fork of graphql-request.

\n

This relatively simple wrapper around the JS fetch API has a gaggle of upstream maintainers with various needs that don’t really match our needs.

\n

The fork simplified and reduced code redundancy, improved maintainability through automation, fixed bugs and weird edge cases, and dramatically improved errors and error handling at the correct level of the tech stack.\nThese changes would likely not have been accepted upstream, so by forking we are able to get the value out of open source resources while still being able to finely tune them for our needs, as well as offer those changes back to the world.

\n\n

local-storage-proxy

\n

\"\"

\n

A configuration solution that allows for persistent overrides stored in local storage, including cache-busting capabilities. Implemented with a recursive JS Proxy to simulate native object interactions over a window.localStorage interface.

\n\n
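
A stripped-down sketch of the idea (not the library’s actual implementation): wrap a defaults object in a recursive Proxy whose setters write the whole tree back to window.localStorage.

\n
// persisted-config.js: a minimal sketch of a localStorage-backed recursive Proxy\nexport function persistedConfig (key, defaults = {}) {\n  const stored = JSON.parse(window.localStorage.getItem(key) || 'null')\n  const state = { ...defaults, ...stored }\n  const save = () => window.localStorage.setItem(key, JSON.stringify(state))\n\n  const wrap = (target) => new Proxy(target, {\n    get (obj, prop) {\n      const value = obj[prop]\n      // Wrap nested objects so deep writes also persist\n      return value && typeof value === 'object' ? wrap(value) : value\n    },\n    set (obj, prop, value) {\n      obj[prop] = value\n      save() // persist the whole tree on every write\n      return true\n    }\n  })\n\n  return wrap(state)\n}\n\n// const config = persistedConfig('app-config', { api: { url: 'https://example.com' } })\n// config.api.url = 'http://localhost:3000' // override persists across reloads\n
\n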

Community maintenance

\n

We ended up taking on maintenance of a few other packages, providing fixes and improvements where the original authors seemed to have left off.

\n\n

Video platform

\n

Here are some snapshots of the video platform we launched.

\n

\"\"

\n

\"\"

\n

NFT Auction Platform

\n

Here are some screenshots of the NFT auction platform I helped build.\nThe UI was fully responsive and updated on the fly to new results, thanks to the powers of SWR.

\n

\"\"

\n

Marketing pages

\n

I did a few marketing pages as well.

\n

\"\"

\n

\"\"

\n

\"\"

\n

\"\"

\n

Conclusion

\n

While this isn’t everything I did at Littlstar, it captures many of the projects I enjoyed working on, and can hopefully provide some insights into my skills, interests and experiences from the past year.

\n" }, { "date_published": "2020-09-29T17:50:58.562Z", "title": "Fully Automated Luxury Space Age Package Maintenance", "url": "https://bret.io/projects/package-automation/", "id": "https://bret.io/projects/package-automation/#2020-09-29T17:50:58.562Z", "content_html": "

tldr; The full package maintenance life cycle should be automated and can be broken down into the following levels of automation sophistication:

\n\n

These solutions focus on Node.js + npm packages, automated on Github Actions, but the underlying principles are general to any language or automation platform.

\n

Background

\n

Maintaining lots of software packages is burdensome.\nScaling open source package maintenance beyond a single contributor who understands the release life cycle is challenging.\nLong CONTRIBUTING.md files are often the go-to solution, but are easily overlooked.

\n
\n \"Shrek\n
A pre-automation package is a lot like the Shrek swamp. Fixing things requires a slog through the mud, and it’s constantly at risk of overgrowth. (Img Source)
\n
\n

In the end, automating the package life cycle so that it can maintain itself is the only way to realistically scale a large set of packages in a maintainable way.

\n
\n \"Forerunne\n
A fully automated luxury space age package maintains itself, re-activating after years of abandonment to operate the same as it did the day of its creation. This is about as much value as I can get out of this silly analogy. (Img Source)
\n
\n

For a long time I didn’t seek out automation solutions for package maintenance beyond a few simple solutions like testing and CI.\nInstead I had a lengthy ritual that looked approximately like this:

\n
# 🔮\ngit checkout -b my-cool-branch\n# do some work\n# update tests\n# update docs\nnpm run test\ngit commit -am 'Describe the changes'\ngit push -u\nhub browse\n# do the PR process\n# merge the PR\ngit checkout master\ngit pull\ngit branch --delete my-cool-branch\n# hand edit changelog\ngit add CHANGELOG.md\ngit commit -m 'CHANGELOG'\nnpm version {major,minor,patch}\ngit push && git push --follow-tags\nnpx gh-release\nnpm publish\n# 😅\n
\n

It was a ritual, a muscle memory.

\n

Over the years, I’ve managed to automate away a large amount of raw labor to various bots, tools and platforms that tend to build on one another and are often usable in isolation or adopted one at a time.\nI’ve broken the various tools and opportunities for automation into levels, with each level building on the complexity contained in the level below.

\n

Level 0: git and Github

\n

You are already automating your packages to a large extent by your use of git.\ngit automates the process of working on code across multiple computers and collaborating on it with other people, and Github is the central platform to coordinate and distribute that code.

\n
\n \"The\n
The old git mascot knew what was up.
\n
\n

If you are new to programming or learning git, it’s helpful to understand you are learning a tool used to automate the process by which you can cooperatively work on code with other people and bots.

\n
\n \n \n \"Screenshot\n \n
Use the tree view Luke!
\n
\n

This isn’t an article about git though, so I won’t dive more into that.

\n

Level 1: Automated Tests + CI

\n

There is no debate.\nSoftware isn’t “done” until it has tests.\nThe orthodox position is that you shouldn’t be allowed to write code until the tests are written (TDD).\nNo matter the methodology, you are automating a verification process of the package that you would normally have to perform by hand.

\n

Test runners

\n

These are my preferred test runners for Node.js:

\n\n
\n \"Screenshot\n
tap offers some nice test runner features and better async support.
\n
\n
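
For reference, a minimal tap test file looks something like this (the add function is just a stand-in for whatever your package exports):

\n

// test.js — a minimal node-tap test (the function under test is a stand-in)\nconst tap = require('tap')\n\nconst add = (a, b) => a + b\n\ntap.test('add sums two numbers', async (t) => {\n  t.equal(add(1, 2), 3, '1 + 2 should equal 3')\n})\n
\n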

Additional Tests

\n

Unit tests that you run with a test runner are not the only type of test though.\nThere are lots of other easy tests you can throw at your package testing step that provide a ton of value:

\n\n
\n \n \n \"Screenshot\n \n
Platforms like coveralls provide nice coverage tracking, but the cost of setting up integrations makes them optional.
\n
\n

Managing complex test scripts

\n

Running multiple tests under npm test can result in a long, difficult to maintain test script. Install npm-run-all2 as a devDependency to break each package.json test command into its own sub-script, and then use the globbing feature to run them all in parallel (run-p) or in series (run-s):

\n
{\n  "scripts": {\n    "test": "run-s test:*",\n    "test:deps": "dependency-check . --no-dev --no-peer",\n    "test:standard": "standard",\n    "test:tap": "tap"\n  }\n}\n
\n

When an individual test is failing locally, you can bypass the other tests and run just the failing one:

\n
# run just the dep tests\nnpm run test:deps\n
\n

npm-run-all2 is a fantastic tool to keep your npm run scripts manageable.

\n

This builds on Keith Cirkel’s fantastic 2014 classic blog post How to Use npm as a Build Tool.

\n

Actually automating the tests

\n

While it’s obvious that writing automated tests is a form of automation, it’s still very common to see projects skip the step of actually automating the test run by hooking the tests up to a CI system that runs them on every interaction with the code on Github.\nServices like TravisCI have been available for FREE for years, and there is literally no valid excuse not to have this set up.

\n

Although TravisCI has served many projects well over the years, Github Actions is a newer, platform-native solution that many projects are now using. Despite the confusing name, Github Actions is primarily a CI service.

\n
\n \n \n \"Screenshot\n \n
Github Actions has its own oddities, but overall it’s proven to be an extremely flexible and useful CI service, built right into the platform you are already using if you are doing any kind of open source.
\n
\n

Create the following action file in your package repo and push it up to turn on CI.

\n
# .github/workflows/tests.yml\nname: tests\n\non: [push]\n\njobs:\n  test:\n    runs-on: ${{ matrix.os }}\n\n    strategy:\n      matrix:\n        os: [ubuntu-latest]\n        node: [14]\n\n    steps:\n    - uses: actions/checkout@v2\n    - name: Use Node.js ${{ matrix.node }}\n      uses: actions/setup-node@v2\n      with:\n        node-version: ${{ matrix.node }}\n    - run: npm i\n    - run: npm test\n
\n

For more information on Action syntax and directives, see:

\n\n

Automated Checks

\n

Once you have a test suite set up and running in CI, any pull request to your package will feature results from the “Checks API”.\nVarious tests and integrations will post their results on every change to the pull request in the form of “running”, “pass” or “fail”.

\n

The benefit of the checks status in the pull request UI is that, depending on the quality and robustness of your test suite, you can have some confidence that you can safely merge the proposed changes and still have things work the way you expect.\nNo matter the reliability of the test suite, it is still important to read and review the code.

\n
\n \n \n \"Screenshot\n \n
\n

Level 2: Dependency Bots

\n

Your package has dependencies, be it your test runner or other packages imported or required into your package. They provide valuable functionality with little upfront cost.

\n

Dependencies form the foundation that your package is built upon. But that foundation is made of shifting sands⏳.\nDependencies have their own dependencies, which all have to slowly morph and change with the underlying platform and dependency changes.\nLike a garden, if you don’t tend to the weeds and periodically water it with dependency updates, the plants will die.

\n

With npm, you normally update dependencies by grabbing the latest copy of the code and checking for outdated packages:

\n
git checkout master\ngit pull\nrm -rf node_modules\nnpm i\nnpm outdated\n
\n
\n \n \n \"Screenshot\n \n
npm outdated will give you a list of the dependencies that have fallen behind, both within their semver range and where updates are available outside of their semver range.
\n
\n

Checking for updates any time you work on the package is not a bad strategy, but it becomes tiresome, and if left to go a long time it can present a large amount of maintenance work unrelated to the task that prompted the visit. A good package doesn’t need to change much, so it may rarely ever be revisited and can rot indefinitely.

\n

Enter dependency bots.

\n

A dependency bot monitors your package repositories for dependency updates.\nWhen a dependency update is found, it automatically creates a PR with the new version.\nIf you have Level 1 automation set up, this PR will run your tests with the updated version of the dependency.\nThe results will (mostly) inform you if it’s safe to apply the update, and also give you a button to press to apply the change.\nNo typing or console required! 🚽🤳

\n

Level 1 automation isn’t required to use a dependency bot, but you won’t have any way to automatically validate the change, so they are much less useful in that case.

\n

Dependabot

\n\n

Github now has a dependency bot built in called dependabot.\nTo turn it on, create the following file in your package’s Github repo:

\n
# .github/dependabot.yml\n\nversion: 2\nupdates:\n  # Enable version updates for npm\n  - package-ecosystem: "npm"\n    # Look for `package.json` and `lock` files in the `root` directory\n    directory: "/"\n    # Check the npm registry for updates every day (weekdays)\n    schedule:\n      interval: "daily"\n  # Enable updates to github actions\n  - package-ecosystem: "github-actions"\n    directory: "/"\n    schedule:\n      interval: "daily"\n
\n

This enables updates for npm and github actions. It offers other ecosystems as well. See the dependabot docs for more info.

\n

In-range breaking change 🚨

\n

Before dependabot, there was a now-shut-down service called Greenkeeper.io which did something very similar. It offered a very interesting feature which I’m still not sure dependabot has yet.

\n
\n \n \n \"Screenshot\n \n
Greenkeeper would run continuous checks for every new dependency version in and out of your package’s semver range, proactively identifying breaking changes that could sneak in.
\n
\n

It would run tests every time a dependency in your package was updated, in and out of semver range.

\n

For in-range updates that passed, nothing would happen.\nFor in-range updates that failed, it would open a PR alerting you that one of your dependencies inadvertently released a breaking change as a non-breaking change.\nThis was a fantastic feature, and really demonstrated the heights that automated tests, CI, an ecosystem that fully utilized semver, and dependency bots could achieve together.

\n

Sadly, I haven’t seen other services or languages quite reach these heights of automation sophistication (many ecosystems even lack any kind of major version gate), but perhaps as awareness of these possibilities increases more people will demand it.

\n

There is a lot more room for innovation in this space. It would be great to get periodic reports regarding the health of your dependency chains (e.g. if you are depending on a project that is slowly rotting and not maintaining its deps).\nAs of now, dependabot seems to be recovering from a post-acquisition engineering fallout, but I hope that they can get these kinds of features back into reality sooner rather than later.

\n

Linter bots?

\n

There are bots out there that can send in automated code changes sourced from lint tests and other code analysis tools. While this is ‘cool’, these tasks are better served at the testing level.\nAutomated code changes for failing lint tests should really just be a part of the human development cycle, handled by whatever IDE or editor you use.\nStill, the bot layer is open to experimentation, so go forth and experiment all you want, though note that external service integrations usually carry a heavy integration cost. 🤖

\n

Level 3: Automated Changelog and Release Consistency Scripts

\n

Quick recap:

\n\n

That means our package is going to morph and change with time (hopefully not too much though). We need a way to communicate that clearly to downstream dependents, be that us, someone else on the team, or a large base of dark-matter developers.

\n

The way to do this is with a CHANGELOG.md file. Or release notes on a Github release page. Or ideally both. keepachangelog.com offers a good overview of the format a CHANGELOG.md should follow.

\n
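
For reference, a minimal CHANGELOG.md in the keepachangelog.com style looks something like this (the versions and entries are made up):

\n

# Changelog\n\n## [Unreleased]\n\n## [1.1.0] - 2020-09-01\n### Added\n- A new --verbose flag for extra logging.\n\n### Fixed\n- A crash when the config file is missing.\n\n## [1.0.0] - 2020-08-01\n### Added\n- Initial release.\n
\n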
\n \n \n \"Screenshot\n \n
keepachangelog.com, Conventional Commits and Semantic Release are all conventions and systems for documenting changes to a project.
\n
\n

This is a tedious process. If you work with other people, they might not be as motivated as you to handcraft an artisan CHANGELOG file. In my experience, the handcrafted, artisan CHANGELOG is too much work and easy to forget about. Also, I haven’t found a good linter tool to enforce its maintenance.

\n

auto-changelog

\n

auto-changelog is a tool that takes your git history and generates a CHANGELOG that is almost-just-as-good as the artisan handcrafted one. Hooking this tool into your package’s version life cycle ensures that it runs whenever a new version is generated with npm version {major,minor,patch}.\nWhile keepachangelog.com advocates for the handcrafted version, and discourages ‘git commit dumps’, as long as you are halfway conscious of your git commit logs (as you should be), the auto-changelog output is generally still useful.\nYou can even follow conventionalcommits.org if you want an even more structured git log.

\n

Automating auto-changelog to run during npm version[1] is easy.\nInstall it as a devDependency and set up the following script in package.json:

\n
{\n  "scripts": {\n    "version": "auto-changelog -p --template keepachangelog auto-changelog --breaking-pattern 'BREAKING CHANGE:' && git add CHANGELOG.md"\n  }\n}\n
\n

The version script is an npm run lifecycle script that runs after the package.json version is bumped, but before the git commit with the change is created. Kind of a mouthful, but with nice results.

\n
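
If the ordering is hard to keep straight, a throwaway package.json makes it visible: preversion runs before the bump, version runs after the bump but before the commit, and postversion runs after the commit and tag exist (the echo messages are just placeholders):

\n

{\n  "scripts": {\n    "preversion": "echo 'runs before the version bump (a good spot for tests)'",\n    "version": "echo 'runs after the bump, before the commit (changelog generation goes here)'",\n    "postversion": "echo 'runs after the version commit and tag are created'"\n  }\n}\n
\n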
\n \n \n \"Screenshot\n \n
auto-changelog generates satisfactory changelogs. The consistency it provides exceeds the value of a hand-written changelog, which tends to be maintained inconsistently.
\n
\n

Publishing consistency scripts

\n

Ok, so we have local changes merged in, and we’ve created a new version of the module with an automated changelog generated as part of the npm version commit. Time to get this all pushed out and published! By hand we could do:

\n
git push --follow-tags\n# copy contents of changelog\n# create a new Github release on the new tag with the changelog contents\nnpm publish\n
\n

But that is tedious.\nAnd what happens when your colleague forgets to push the git commit/tag to Github and just publishes to npm?\nOr more likely, they just forget to create the Github release, creating inconsistency in the release process.

\n

The solution is to automate all of this!\nUse the prepublishOnly hook to run all of these tasks automatically before publishing to npm via npm publish.\nIncorporate a tool like gh-release to create a Github release page for the new tag with the contents of your freshly minted auto-changelog.

\n
{\n  "scripts": {\n    "prepublishOnly": "git push --follow-tags && gh-release -y"\n  }\n}\n
\n
\n \n
gh-release makes it easy to create Github releases from a CHANGELOG.md.
\n
\n

The result is that our release process is reduced to the lowest common denominator of process dictated by npm:

\n
npm version {major,minor,patch}\nnpm publish\n
\n

But we still get all of these results, completely automated:

\n\n

All together

\n

Those two run scripts together:

\n
{\n  "scripts": {\n    "version": "auto-changelog -p --template keepachangelog auto-changelog --breaking-pattern 'BREAKING CHANGE:' && git add CHANGELOG.md",\n    "prepublishOnly": "git push --follow-tags && gh-release -y"\n  }\n}\n
\n

Extending with a build step

\n

Some packages have build steps. No problem, these are easily incorporated into the above flow:

\n
{\n  "scripts": {\n    "build": "do some build command here",\n    "prepare": "npm run build",\n    "version": "run-s prepare version:*",\n    "version:changelog": "auto-changelog -p --template keepachangelog auto-changelog --breaking-pattern 'BREAKING CHANGE:'",\n    "version:git": "git add CHANGELOG.md dist",\n    "prepublishOnly": "git push --follow-tags && gh-release -y"\n  }\n}\n
\n

Since version becomes a bit more complex, we can break it down into pieces with npm-run-all2 as we did in the testing step. We ensure we run fresh builds on development install (prepare), and also when we version. We capture any updated build outputs in git during the version step by staging the dist folder (or whatever else you want to capture in your git version commit).

\n

This pattern was documented well by @swyx: Semi-Automatic npm and GitHub Releases with gh-release and auto-changelog.

\n

Level 4: Publishing Bots 🤖

\n

We now have a process for fully managing the maintenance and release cycle of our package, but we are still left to pull down any changes from our Github repo and run these release commands, as simple as they are now.\nYou can’t really do this on your phone (easily) and someone else on the project still might manage to not run npm version and just hand-bump the version number for some reason, bypassing all our wonderful automation.

\n

What would be cool is if we could kick off a special CI program that would run npm version && npm publish for us, at the push of a button.

\n

It turns out Github Actions now has a feature called workflow_dispatch, which lets you press a button on the repo’s actions page on GitHub and trigger a CI flow with some input.

\n
\n \n \n \"Screenshot\n \n
workflow_dispatch lets you trigger an action from your browser, with simple textual inputs. Use it as a simple shared deployment environment.
\n
\n

Implementing workflow_dispatch is easy: create a new action workflow file with the following contents:

\n
# .github/workflows/release.yml\n\nname: npm version && npm publish\n\non:\n  workflow_dispatch:\n    inputs:\n      newversion:\n        description: 'npm version {major,minor,patch}'\n        required: true\n\nenv:\n  node_version: 14\n\njobs:\n  version_and_release:\n    runs-on: ubuntu-latest\n    steps:\n    - uses: actions/checkout@v2\n      with:\n        # fetch full history so things like auto-changelog work properly\n        fetch-depth: 0\n    - name: Use Node.js ${{ env.node_version }}\n      uses: actions/setup-node@v2\n      with:\n        node-version: ${{ env.node_version }}\n        # setting a registry enables the NODE_AUTH_TOKEN env variable where we can set an npm token.  REQUIRED\n        registry-url: 'https://registry.npmjs.org'\n    - run: npm i\n    - run: npm test\n    - run: git config --global user.email "[email protected]"\n    - run: git config --global user.name "${{ github.actor }}"\n    - run: npm version ${{ github.event.inputs.newversion }}\n    - run: npm publish\n      env:\n        GH_RELEASE_GITHUB_API_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n        NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}\n
\n

Next, generate an npm token with publishing rights.

\n

Then set that token as a repo secret called NPM_TOKEN.

\n
\n \n \n \"Screenshot\n \n
GitHub secrets allow you to securely store tokens for use in your GitHub Actions runs. It's not bulletproof, but it's pretty good.
\n
\n
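
Roughly, those two steps look like this from the command line (the gh step requires a recent GitHub CLI and is optional; you can just as easily paste the token into the repo’s Settings → Secrets page):

\n

# on your own machine: create an npm token that is allowed to publish\nnpm token create\n\n# store it in the repo as a secret named NPM_TOKEN (or use the Settings → Secrets UI)\ngh secret set NPM_TOKEN\n
\n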

Now you can visit the actions tab on the repo, select the npm version && npm publish action, and press run, passing in either major, minor, or patch as the input, and a GitHub action will kick off running our Level 3 version and release automations along with publishing a release to npm and GitHub.

\n
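
If you prefer a terminal over the browser button, the same dispatch can be triggered with a recent GitHub CLI, assuming the workflow file is named release.yml like the example above:

\n

# kick off the workflow_dispatch release flow from the command line\ngh workflow run release.yml -f newversion=patch\n
\n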

Note: It’s recommended that you .gitignore package-lock.json files in libraries, otherwise they end up in the library source, where they provide little benefit and come with lots of drawbacks.

\n
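
If you follow that advice, the setup is tiny (this is the generic approach, not anything specific to the tools above):

\n

# keep the lockfile out of the library repo\necho 'package-lock.json' >> .gitignore\n\n# optionally tell npm not to generate one for this project at all\necho 'package-lock=false' >> .npmrc\n
\n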

I created a small action called npm-bump which can clean up some of the above action boilerplate:

\n
name: Version and Release\n\non:\n  workflow_dispatch:\n    inputs:\n      newversion:\n        description: 'npm version {major,minor,patch}'\n        required: true\n\nenv:\n  node_version: 14\n\njobs:\n  version_and_release:\n    runs-on: ubuntu-latest\n    steps:\n    - uses: actions/checkout@v2\n      with:\n        # fetch full history so things like auto-changelog work properly\n        fetch-depth: 0\n    - name: Use Node.js ${{ env.node_version }}\n      uses: actions/setup-node@v2\n      with:\n        node-version: ${{ env.node_version }}\n        # setting a registry enables the NODE_AUTH_TOKEN env variable where we can set an npm token.  REQUIRED\n        registry-url: 'https://registry.npmjs.org'\n    - run: npm i\n    - run: npm test\n    - name: npm version && npm publish\n      uses: bcomnes/npm-bump@v2\n      with:\n        git_email: [email protected]\n        git_username: ${{ github.actor }}\n        newversion: ${{ github.event.inputs.newversion }}\n        github_token: ${{ secrets.GITHUB_TOKEN }} # built in actions token.  Passed to gh-release if in use.\n        npm_token: ${{ secrets.NPM_TOKEN }} # user set secret token generated at npm\n
\n
\n \n \n \"Screenshot\n \n
npm-bump helps cut down on some of the npm version and release GitHub action boilerplate YAML. Is it better? Not sure!
\n
\n

So this is great! You can maintain packages by merging automatically generated pull requests, run your tests on them to ensure package validity, and when you are ready, fully release the package, with a CHANGELOG entry, all from the push of a button on your cell phone. Fully Automated Luxury Space Age Package Maintenance. 🛰🚽🤳

\n

Level 5: Project generation

\n

What is the best way to manage all of these independent pieces? A template! Or, a template repo.

\n

You mean things like yeoman? Maybe, though that tool is largely used to ‘scaffold’ massive amounts of web framework boilerplate and is a complex ecosystem.

\n

Something simpler will be more constrained and easier to maintain over time. Github repo templates and create-project are good choices.

\n

Github Template Repos

\n

Github offers a very simple solution called template repos. You take any repo on GitHub, go into its settings page and designate it as a template repo. You can then create a new repo from the template repo’s page with the click of a button, or select it from a drop-down in the Github create repo wizard.

\n
\n \n \n \"Screenshot\n \n
Repos that are designated as templates show up in the new repo UI.
\n
\n
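
As an aside, newer versions of the GitHub CLI can also spawn a repo from a template without touching the browser (the repo names here are just examples):

\n

# create and clone a new repo based on a template repo\ngh repo create my-new-package --template bcomnes/create-template --public --clone\n
\n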

The only issue is that you then have to go through and modify all the repo-specific parameters by hand.\nBetter than nothing!\nBut we can do better.

\n

create-project repos

\n

create-project is a simple CLI tool (by @mafintosh) that works similarly to Github template repos, except it has a `` system that lets you insert values when spawning off a project repo. You can designate your create-project template repos to also be Github template repos, and create new projects whichever way you feel like.

\n
\n \n
create-project lets you spawn a new project from a git repo and inject values into special blocks.
\n
\n

Here are some of my personal template repos:

\n\n

What about docs?

\n

There are various solutions for generating docs from comments that live close to the code.

\n\n
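
Most of these tools follow the same basic shape: structured comments that sit directly above the code they describe, for example in the JSDoc style (a generic example, not tied to any particular generator):

\n

/**\n * Add two numbers together.\n * @param {number} a - the first operand\n * @param {number} b - the second operand\n * @returns {number} the sum of a and b\n */\nfunction add (a, b) {\n  return a + b\n}\n
\n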

I haven’t found a solution that satisfies my needs well enough to use one generally.\nIt’s hard to exceed the quality of docs written by hand for and by humans.\nIf you have a good solution please let me know!

\n
\n \n \n \"Screenshot\n \n
deno doc is beautiful and keeps code and docs tightly coupled. But is the human consumed result better? Or does it just lead to cutting corners in exchange for static type checking? Time will tell!
\n
\n

All together now

\n

That was a lot to cover.\nIf you want to see a complete level 0 through level 5 example, check out my create-template template repo, snapshotted to the latest commit as of the time of publishing.

\n
\n \n
Here we can see the various parts of level 0-4 automation in action together. Changes are made to the git history, the module is versioned and published by a bot. It includes a changelog and was automatically validated by tests.
\n
\n

Final thoughts

\n

This collection is written in the context of the Node.js programming system; however, the class of tools discussed applies to every other language ecosystem, and these automation levels could serve as a framework for assessing the maturity of the automation capabilities of other programming language systems.\nHopefully they can provide some insights into the capabilities and common practices around modern JavaScript development for those unfamiliar with this ecosystem.

\n

Additionally, this documents my personal suite of tools and processes that I have developed to automate package maintenance, and is by no means normative. Modification and experimentation are always encouraged.

\n

There are many subtle layers to the Node.js programming system, and this just covers the maintenance automation layer that can exist around a package.\nMuch more could be said about versioned development tooling, standardized scripting hooks, diamond dependency problem solutions, localized dependencies, upstream package hacking/debugging conveniences and local package linking. An even deeper dive could be made into the overlap these patterns do (and don’t) have in other JS runtimes like Deno, which standardizes a lot around Level 1, or even other languages like Go or Rust.

\n

If you enjoyed this article, have suggestions or feedback, or think I’m full of it, follow me on twitter (@bcomnes) and feel free to hop in the accompanying thread. I would love to hear your thoughts, ideas and examples! Also subscribe to my RSS/JSON Feed in your favorite RSS reader.

\n

"Fully Automated Luxury Space Age Package Maintenance"

I wrote up how tedious package maintenance tasks can be fully automated.

Hope someone enjoys!https://t.co/fvYIu2Wq0r pic.twitter.com/q220LTax8X

— 🌌🌵🛸Bret🏜👨‍👩‍👧🚙 (@bcomnes) September 29, 2020
\n

Syndications

\n\n

See also

\n\n
\n
\n
    \n
  1. Just as a refresher, npm version is a command that bumps the version in package.json, creates a commit with that change titled with the new version (e.g. 1.0.0), then tags it (e.g. v1.0.0). See npm docs for more info. ↩︎

    \n
\n
\n" }, { "date_published": "2020-01-16T22:19:35.001Z", "title": "Netlify Portfolio", "url": "https://bret.io/jobs/netlify/", "id": "https://bret.io/jobs/netlify/#2020-01-16T22:19:35.001Z", "content_html": "

I was lucky to be able to contribute to many features and experiences that affected Netlify’s massive user base. Here are some examples of things that I worked on. If this kind of work looks interesting to you, Netlify is a fantastic team to join: netlify.com/careers. Questions and comments can be sent via email or twitter.

\n

Platform

\n

After working on the Product team for slightly over a year, I switched to working on Netlify’s platform team. The team has a strong DevOps focus and maintains a redundant, highly available multi-cloud infrastructure on which Netlify and all of its services run. My focus on the team is to maintain, develop, scale and improve various critical services and libraries. Below are some examples of larger projects I worked on.

\n

Buildbot

\n

One of my primary responsibilities upon joining the team was to maintain the Buildbot that all customer site builds run on. It is partially open-source so customers can explore the container in a more freeform manner locally.

\n\n

\"screenshot

\n

Selectable build images

\n

One of the first feature additions I launched for the Buildbot was selectable build images. This project required adding the concept of additional build images to the API and UI, and developing an upgrade path allowing users to migrate their websites to the new build image while also allowing them to roll back to the old image if they needed more time to accommodate the update.

\n

Additionally, I performed intake on a number of user-contributed build-image additions and merged various other potentially breaking changes within a development window before releasing. I also helped develop the changes to the Ruby on Rails API and additions to the React UI, as well as write the user documentation. It was a widely cross-cutting project.

\n

\"screenshot

\n

\"screenshot

\n

Open API

\n

I help maintain and further develop Netlify’s Open-API (aka Swagger) API definition, website and surrounding client ecosystem. While open-api can be cumbersome, it has been a decent way to synchronize projects written in different language ecosystems in a unified way.

\n\n

\"screenshot

\n

Other platform projects I work on

\n\n

Product

\n

I worked on Netlify’s Product team for a bit over a year and completed many successful user-facing projects. Here are just a few examples:

\n

CLI

\n

I was the primary author of Netlify’s current CLI codebase.

\n\n
\n \n
Build image selection UI.
\n
\n

\"screenshot

\n

JAMStack slides

\n

I gave a talk on some ideas and concepts I came across working with Netlify’s platform and general JAMStack architecture at a local Portland JAMStack meetup.

\n\n

\"jamstack-slides\"

\n

Domains

\n

I led the Netlify Domains project, which allowed users to add an existing live domain during site setup, or buy the domain if it is available. This feature enabled users to deploy a website from a git repo to a real live domain name with automatic https in a matter of minutes, and has resulted in a nice stream of ever-increasing ARR for the company.

\n

\"domains

\n

Build Status Favicons

\n

I helped lead and implement build status favicons, so you can put a build log into a tab, and monitor status from the tab bar.

\n

\"build

\n

Lambda Functions

\n

I implemented the application UI for Netlify’s Lambda functions and logging infrastructure, and have continued to help design and improve the developer ergonomics of the Lambda functions feature set.

\n

\"functions\"

\n

Identity Widget

\n

I helped architect and implement Netlify’s Identity widget.

\n

\"identity

\n

Dashboard

\n

I helped implement the UI for our site dashboard redesign.

\n

\"dashboard\"

\n

Security Audit Log

\n

I led the project to specify and implement the Audit Log for teams and identity instances.

\n

\"audit

\n

Split Testing

\n

I implemented the UI for Netlify’s split testing feature.

\n\n" } ] }