{ "version": "https://jsonfeed.org/version/1", "title": "bret.io", "home_page_url": "https://bret.io", "feed_url": "https://bret.io/feed.json", "description": "A running log of announcements, projects and accomplishments.", "author": { "name": "Bret Comnes", "url": "https://bret.io", "avatar": "/favicons/apple-touch-icon-1024x1024.png" }, "items": [ { "date_published": "2024-10-03T18:25:30.405Z", "title": "\"Don't Be Evil\" Is An Excuse For Evil", "url": "https://bret.io/blog/2024/dont-be-evil-is-an-excuse-for-evil/", "id": "https://bret.io/blog/2024/dont-be-evil-is-an-excuse-for-evil/#2024-10-03T18:25:30.405Z", "content_html": "
Google's former and founding motto "Don't be evil" sounds benevolent, but is it?
\n\n"Don't be evil" paired with the company's premise: let's build products in a way that allows for immeasurable corporate access into people's private lives for fun and profit, unlocking all the positive potential this data can provide.\nWe'll only use it for good; the Evil™️ things will just be off limits.
\nOf course the motto changes to "Do the right thing" in 2015, when it's very obvious it's impossible for Google not to indulge in the Evil with the potential for it sitting right there.\nBy loosening up the motto to allow for at least some Evil, so long as it's the "right thing", Google can at least be honest with the public that "Evil" is definitely on the table.
\nMaybe it's time to demand "Can't be evil"? Build in a way where the evil just isn't possible.\nReject the possibility of evil in the software you choose to use.
\n" }, { "date_published": "2024-02-13T18:24:49.707Z", "title": "The Art of Doing Science and Engineering", "url": "https://bret.io/blog/2024/the-art-of-doing-science-and-engineering/", "id": "https://bret.io/blog/2024/the-art-of-doing-science-and-engineering/#2024-02-13T18:24:49.707Z", "content_html": "\nRichard Hammingâs âThe Art of Doing Science and Engineeringâ is a book capturing the lessons he taught in a course he gave at the U.S. Navy Postgraduate School in Monterey, CA.\nHe characterizes what he was trying to teach was âstyleâ of thinking in science and engineering.
\nHaving a physics degree myself, and periodically ruminating on the agony of professional software development, I hoped to find some overlap between my professional field and Hamming's life experience, so I gave it a read.
\nThe book is filled with nuggets of wisdom and illustrates a career of move-the-needle science and engineering at Bell Labs.\nI didn't personally find much value in many of the algebraic walk-throughs of various topics like information theory, but learning about how Hamming discovered error correcting codes was definitely interesting and worth a read.
\nThe highlight of the book comes in the second half, where he includes interesting stories, analogies and observations on nearly every page. Below are the highlights I pulled while reading.
\n\n\nThat brings up another point, which is now well recognized in software for computers but which applies to hardware too. Things change so fast that part of the system design problem is that the system will be constantly upgraded in ways you do not now know in any detail! Flexibility must be part of the modern design of things and processes. Flexibility built into the design means not only will you be better able to handle the changes which will come after installation, but it also contributes to your own work as the small changes which inevitably arise both in the later stages of design and in the field installation of the system…
\nThus rule two:
\nPart of systems engineering design is to prepare for changes so they can be gracefully made and still not degrade the other parts.
\n— p.367
\n
This quote is my favorite in the entire book.\nIt feels like there is a constant fight in software engineering between the impulse to lock down runtime versions, specific dependency versions, and other environmental factors, versus developing software in a way that accommodates wide variance in all of these component factors.\nBoth approaches claim to deliver reliability and flexibility, but which one actually tests for it?
\nIn my experience, the tighter the runtime dependency specifications, the faster fragility spreads, and it's satisfying to hear Hamming's experience echo this observation. Sadly though, his observation that those writing software will universally understand this simply hasn't held up.
\n\n\nGood design protects you from the need for too many highly accurate components in the system. But such design principles are still, to this date, ill understood and need to be researched extensively. Not that good designers do not understand this intuitively, merely it is not easily incorporated into the design methods you were taught in school.
\nGood minds are still needed in spite of all the computing tools we have developed. The best mind will be the one who gets the principle into the design methods taught so it will be automatically available for lesser minds!
\n— p.268
\n
Here Hamming is describing H.S. Black's feedback circuit's tolerance for low accuracy components as what constitutes good design. I agree! Technology that works at any scale, made out of commodity parts with minimal runtime requirements tends to be what is most useful across the longest amount of time.
\n\n\nCommittee decisions, which tend to diffuse responsibility, are seldom the best in practice—most of the time they represent a compromise which has none of the virtues of any path and tends to end in mediocrity.
\n— p.274
\n
I appreciated his observations on committees, and their tendency to launder responsibility.\nThey serve a purpose, but it's important to understand their nature.
\n\n\nThe Hawthorne effect strongly suggests the proper teaching method will always be in a state of experimental change, and it hardly matters just what is done; all that matters is both the professor and the students believe in the change.
\n— p.288
\n
\n\nIt has been my experience, as well as the experience of many others who have looked, that data is generally much less accurate than it is advertised to be. This is not a trivial point—we depend on initial data for many decisions, as well as for the input data for simulations which result in decisions.
\n— p.345
\n
\n\nAverages are meaningful for homogeneous groups (homogeneous with respect to the actions that may later be taken), but for diverse groups averages are often meaningless. As earlier remarked, the average adult has one breast and one testicle, but that does not represent the average person in our society.
\n— p.356
\n
\n\nYou may think the title means that if you measure accurately you will get an accurate measurement, and if not then not, but it refers to a much more subtle thing—the way you choose to measure things controls to a large extent what happens. I repeat the story Eddington told about the fishermen who went fishing with a net. They examined the size of the fish they caught and concluded there was a minimum size to the fish in the sea. The instrument you use clearly affects what you see.
\n— p.373
\n
Intuitively, I think many people who attempt to measure anything understand that their approach is reflected in the results to some degree.\nI hadn't heard of the Hawthorne effect before, but it makes intuitive sense.
\nPeople with an idea about how to improve something implement it and it works, because they want it to work and give the change every chance to be effective.\nThen someone else is prescribed the idea, or is brought into the fold where it is already implemented, and the benefits evaporate.
\nI've long suspected that in the context of professional software development, where highly unscrutinized benchmarks and soft data are the norm, people start with an opinion or theory and work back to data that supports it.\nCould it just be that people need to believe that working in a certain way is necessary for them to work optimally? Could it be "data" is often just a work function used to outmaneuver competing ideas?
\nAnyway, it's just another thing to factor in when data is plopped in your lap.
\n\n\nMoral: there need not be a unique form of a theory to account for a body of observations; instead, two rather different-looking theories can agree on all the predicted details. You cannot go from a body of data to a unique theory! I noted this in the last chapter.
\n— p.314
\n
\n\nHeisenberg derived the uncertainty principle that conjugate variables, meaning Fourier transforms, obeyed a condition in which the product of the uncertainties of the two had to exceed a fixed number, involving Planck's constant. I earlier commented, Chapter 17, this is a theorem in Fourier transforms—any linear theory must have a corresponding uncertainty principle, but among physicists it is still widely regarded as a physical effect from nature rather than a mathematical effect of the model.
\n— p.316
\n
I appreciate Hamming suggesting that some of our understanding of physical reality could be a byproduct of the model being used to describe it.\nIt's not examined closely in undergraduate or graduate quantum mechanics, and I find it interesting that Hamming, who's clearly highly intuitive with modeling, also raises this question.
\n\n\nLet me now turn to predictions of the immediate future. It is fairly clear that in time "drop lines" from the street to the house (they may actually be buried, but will probably still be called "drop lines") will be fiber optics. Once a fiber-optic wire is installed, then potentially you have available almost all the information you could possibly want, including TV and radio, and possibly newspaper articles selected according to your interest profile (you pay the printing bill which occurs in your own house). There would be no need for separate information channels most of the time. At your end of the fiber there are one or more digital filters. Which channel you want, the phone, radio, or TV, can be selected by you much as you do now, and the channel is determined by the numbers put into the digital filter—thus the same filter can be multipurpose, if you wish. You will need one filter for each channel you wish to use at the same time (though it is possible a single time-sharing filter would be available) and each filter would be of the same standard design. Alternately, the filters may come with the particular equipment you buy.
\n— p.284-285
\n
Here Hamming is predicting the internet. He got very close, and it's interesting to think that these signals would all just be piped to your house in a bundle, and you'd pay for a filter to unlock access to the ones you want. Hey, cable TV worked that way for a long time!
\n\n\nBut a lot of evidence on what enabled people to make big contributions points to the conclusion that a famous prof was a terrible lecturer and the students had to work hard to learn it for themselves! I again suggest a rule:
\nWhat you learn from others you can use to follow;
\nWhat you learn for yourself you can use to lead.
\n— p.292
\n
Learn by doing, not by following.
\n\n\nWhat you did to become successful is likely to be counterproductive when applied at a later date.
\n— p.342
\n
Itâs easy to blame changing trends in software development for the disgustingly short half-life of knowledge regarding development patterns and tools, but I think itâs probably just the nature of knowledge based work.\nOperating by yourself may be effective and work well, but its not a recipe for success at any given moment in time.
\n\n\nA man was examining the construction of a cathedral. He asked a stonemason what he was doing chipping the stones, and the mason replied, "I am making stones." He asked a stone carver what he was doing; "I am carving a gargoyle." And so it went; each person said in detail what they were doing. Finally he came to an old woman who was sweeping the ground. She said, "I am helping build a cathedral."\nIf, on the average campus, you asked a sample of professors what they were going to do in the next class hour, you would hear they were going to "teach partial fractions," "show how to find the moments of a normal distribution," "explain Young's modulus and how to measure it," etc. I doubt you would often hear a professor say, "I am going to educate the students and prepare them for their future careers."\nThis myopic view is the chief characteristic of a bureaucrat. To rise to the top you should have the larger view—at least when you get there.
\n— p.360
\n
Software bureaucrats aplenty. Really easy to fall into this role.
\n\n\nI must come to the topic of "selling" new ideas. You must master three things to do this (Chapter 5):
\n\n
\n- Giving formal presentations,
\n- Producing written reports, and
\n- Mastering the art of informal presentations as they happen to occur.
\nAll three are essential—you must learn to sell your ideas, not by propaganda, but by force of clear presentation. I am sorry to have to point this out; many scientists and others think good ideas will win out automatically and need not be carefully presented. They are wrong;
\n— p.396
\n
One thing I regret over the last 10 years of my career is not writing down more of the insights I have learned through experience.\nIdeas simply don't transmit if they aren't written down or put into some consumable format like video or audio.\nNearly every annoying tool or developer trend you are forced to use is in play because someone communicated the idea through blogs, videos and conference talks.\nAnd those who watched echoed these messages.
\n\n\nAn expert is one who knows everything about nothing; a generalist knows nothing about everything.
\nIn an argument between a specialist and a generalist, the expert usually wins by simply (1) using unintelligible jargon, and (2) citing their specialist results, which are often completely irrelevant to the discussion. The expert is, therefore, a potent factor to be reckoned with in our society. Since experts both are necessary and also at times do great harm in blocking significant progress, they need to be examined closely. All too often the expert misunderstands the problem at hand, but the generalist cannot carry through their side to completion. The person who thinks they understand the problem and does not is usually more of a curse (blockage) than the person who knows they do not understand the problem.
\n— p.333
\n
Understand when you are the generalist and when you are the specialist.
\n\n\nExperts, in looking at something new, always bring their expertise with them, as well as their particular way of looking at things. Whatever does not fit into their frame of reference is dismissed, not seen, or forced to fit into their beliefs. Thus really new ideas seldom arise from the experts in the field. You cannot blame them too much, since it is more economical to try the old, successful ways before trying to find new ways of looking and thinking.
\nIf an expert says something can be done he is probably correct, but if he says it is impossible then consider getting another opinion.
\n— p.336
\n
Anyone wading into a technical field will encounter experts at every turn.\nThey have valuable information, but they are also going to give you dated, myopic advice (gatekeeping?).\nI like Hamming's framing here and it reflects my experience when weighing expert opinion.
\n\n\nIn some respects the expert is the curse of our society, with their assurance they know everything, and without the decent humility to consider they might be wrong. Where the question looms so important, I suggested to you long ago to use in an argument, "What would you accept as evidence you are wrong?" Ask yourself regularly, "Why do I believe whatever I do?" Especially in the areas where you are so sure you know, the area of the paradigms of your field.
\n— p.340
\n
I love this exercise. It will also drive you crazy. Tread carefully.
\n\n\nSystems engineering is indeed a fascinating profession, but one which is hard to practice. There is a great need for real systems engineers, as well as perhaps a greater need to get rid of those who merely talk a good story but cannot play the game effectively.
\n— p.372
\n
Controversial, harsh, but true.
\nThe last thing I want to recognize is the beautiful cloth resin binding and quality printing of the book. Bravo Stripe Press for still producing beautiful artifacts at affordable pricing in the age of print on demand.
\n" }, { "date_published": "2024-01-15T23:55:33.582Z", "title": "async-neocities has a bin", "url": "https://bret.io/blog/2024/async-neocities-bin/", "id": "https://bret.io/blog/2024/async-neocities-bin/#2024-01-15T23:55:33.582Z", "content_html": "async-neocities
v3.0.0 is now available and introduces a CLI.
Usage: async-neocities [options]\n\n Example: async-neocities --src public\n\n --help, -h print help text\n --src, -s The directory to deploy to neocities (default: "public")\n --cleanup, -c Destructively clean up orphaned files on neocities\n --protect, -p String to minimatch files which will never be cleaned up\n --status Print auth status of current working directory\n --print-key Print api-key status of current working directory\n --clear-key Remove the currently associated API key\n --force-auth Force re-authorization of current working directory\n\nasync-neocities (v3.0.0)\n
\nWhen you run it, you will see something similar to this:
\nasync-neocities --src public\n\nFound siteName in config: bret\nAPI Key found for bret\nStarting inspecting stage...\nFinished inspecting stage.\nStarting diffing stage...\nFinished diffing stage.\nSkipping applying stage.\nDeployed to Neocities in 743ms:\n Uploaded 0 files\n Orphaned 0 files\n Skipped 244 files\n 0 protected files\n
\nasync-neocities
was previously available as a GitHub Action called deploy-to-neocities. That Action remains available; however, the CLI offers a local-first workflow that was not previously possible.
Now that async-neocities
is available as a CLI, you can easily configure it as an npm
script and run it locally when you want to push changes to neocities without relying on GitHub Actions.\nIt also works great in Actions, with the side benefit that deploys work exactly the same way in both local and remote environments.
Here is a quick example of that:
\n1. Add async-neocities@^3.0.0 to your project's package.json.
2. Add a package.json deploy script: "scripts": {\n    "build": "npm run clean && run-p build:*",\n    "build:tb": "top-bun",\n    "clean": "rm -rf public && mkdir -p public",\n    "deploy": "run-s build deploy:*",\n    "deploy:async-neocities": "async-neocities --src public --cleanup"\n  },\n
3. Create a deploy-to-neocities.json config file. Example config contents: {"siteName":"bret"}\n
4. Run npm run deploy.
5. Optionally, add a GitHub Action that runs npm run deploy and configure the token secret: name: Deploy to neocities\n\non:\n  push:\n    branches:\n      - master\n\nenv:\n  node-version: 21\n  FORCE_COLOR: 2\n\nconcurrency: # prevent concurrent deploys doing strange things\n  group: deploy-to-neocities\n  cancel-in-progress: true\n\njobs:\n  deploy:\n    runs-on: ubuntu-latest\n\n    steps:\n      - uses: actions/checkout@v4\n      - name: Create LFS file list\n        run: git lfs ls-files -l | cut -d' ' -f1 | sort > .lfs-assets-id\n      - name: Restore LFS cache\n        uses: actions/cache@v3\n        id: lfs-cache\n        with:\n          path: .git/lfs\n          key: ${{ runner.os }}-lfs-${{ hashFiles('.lfs-assets-id') }}-v1\n      - name: Git LFS Pull\n        run: git lfs pull\n\n      - name: Use Node.js\n        uses: actions/setup-node@v4\n        with:\n          node-version: ${{env.node-version}}\n      - run: npm i\n      - run: npm run deploy\n        env:\n          NEOCITIES_API_TOKEN: ${{ secrets.NEOCITIES_API_TOKEN }}\n
\nThe async-neocities
CLI re-uses the same ENV name as deploy-to-neocities
action, so migrating to the CLI requires no additional changes to the Actions environment secrets.
This prompts some questions about when CLIs and when Actions are most appropriate. Let's compare the two:
\npackage.json
or node_modules
In addition to the CLI, async-neocities
migrates to full Node.js esm
and internally enables ts-in-js
though the types were far too dynamic to export full type support in the time I had available.
With respect to an implementation plan going forward regarding CLIs vs actions, I've summarized my thoughts below:
\nImplement core functionality as a re-usable library.\nExposing a CLI makes that library an interactive tool that provides a local-first workflow and is equally useful in CI.\nExposing the library in an action further opens up the library to a wider language ecosystem which would otherwise ignore the library due to foreign ecosystem ergonomic overhead.\nThe action is simpler to implement than a CLI, but the CLI offers a superior experience within the implemented language ecosystem.
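As a rough illustration of that layering (file names and the deploy function here are hypothetical, not the actual async-neocities internals), the core logic can live in a plain library function and the CLI can stay a thin argument-parsing wrapper around it:

// deploy.js - hypothetical reusable core: a plain async function with no CLI concerns
import { readdir } from 'node:fs/promises'

export async function deploy ({ src = 'public', cleanup = false } = {}) {
  const files = await readdir(src, { recursive: true })
  // ...diff the local file list against the remote site and upload changes here...
  return { scanned: files.length, cleanup }
}

// cli.js - thin wrapper so the same core runs locally, in npm scripts, or in CI
import { parseArgs } from 'node:util'
import { deploy } from './deploy.js'

const { values } = parseArgs({
  options: {
    src: { type: 'string', short: 's', default: 'public' },
    cleanup: { type: 'boolean', short: 'c', default: false }
  }
})

const results = await deploy(values)
console.log(`Scanned ${results.scanned} files`)

An action can then be a third, even thinner layer that just invokes the CLI, which keeps all three entry points behaviorally identical.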
\n" }, { "date_published": "2023-12-02T20:49:41.713Z", "title": "Reorganized", "url": "https://bret.io/blog/2023/reorganized/", "id": "https://bret.io/blog/2023/reorganized/#2023-12-02T20:49:41.713Z", "content_html": "Behold, a mildly redesigned and reorganized landing page:
\n\nIt's still not great, but it should make it easier to keep it up to date going forward.
\nIt has 3 sections:
\nI removed a bunch of older inactive projects and links and stashed them in a project.
\nAdditionally, the edit button in the page footer now takes you to the correct page in GitHub for editing, so if you ever see a typo, feel free to send in a fix!
\nFinally, the about page includes a live dump of the dependencies that were used to build the website.
\n" }, { "date_published": "2023-11-23T15:14:54.910Z", "title": "Reintroducing top-bun 7 ð¥", "url": "https://bret.io/blog/2023/reintroducing-top-bun/", "id": "https://bret.io/blog/2023/reintroducing-top-bun/#2023-11-23T15:14:54.910Z", "content_html": "After some unexpected weekends of downtime looking after sick toddlers, Iâm happy to re-introduce top-bun
v7.
Re-introduce? Well, you may remember @siteup/cli
, a spiritual offshoot of sitedown
, the static site generator that turned a directory of markdown into a website.
top-bun
v7? Let's dive into the new features, changes and additions in top-bun
7.
top-bun
@siteup/cli
is now top-bun
.\nAs noted above, @siteup/cli
was a name hack because I didnât snag the bare npm
name when it was available, and someone else had the genius idea of taking the same name. Hey it happens.
I described the project to Chat-GPT and it recommended the following gems:
\nquick-brick
web-erect
OK Chat-GPT, pretty good, I laughed, but I'm not naming this web-erect.
The kids have a recent obsession with Wallace & Gromit and we watched a lot of episodes while she was sick. Also, I've really been enjoying bread themes, so I decided to name it after Wallace & Gromit's bakery "Top Bun" in their hit movie "A Matter of Loaf and Death".
\n\ntop-bun
now builds its own repo into a docs website. It's slightly better than the GitHub README.md view, so go check it out! It even has a real domain name, so you know it's for real.
css
bundling is now handled by esbuild
esbuild
is an amazing tool. postcss
is a useful tool, but it's slow and hard to keep up with. In top-bun
, css
bundling is now handled by esbuild
.\ncss
bundling is now faster and less fragile, and still supports many of the same transforms that siteup
had before. CSS nesting is now supported in every modern browser so we don't even need a transform for that. Some basic transforms and prefixes are auto-applied by setting a relatively modern browser target.
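A minimal sketch of what esbuild-driven css bundling with a browser target looks like (an illustrative standalone script, not top-bun's exact configuration; the entrypoint path and target list are assumptions):

// build-css.js - bundle a css entrypoint with esbuild, targeting modern browsers
import * as esbuild from 'esbuild'

await esbuild.build({
  entryPoints: ['src/style.css'],
  bundle: true,               // follow @import and inline the imported files
  minify: true,
  target: ['chrome110', 'firefox110', 'safari16'], // baseline that decides which transforms/prefixes apply
  outdir: 'public'
})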
esbuild
doesn't support import chunking on css yet though, so each css
entrypoint becomes its own bundle. If esbuild
ever gets this optimization, so will top-bun
. In the meantime, global.css
, style.css
and now layout.style.css
give you ample room to optimize your scoped css loading by hand. It's simpler and has fewer moving parts!
You can now have more than one layout!\nIn prior releases, you could only have a single root
layout that you customized on a per-page basis with variables.\nNow you can have as many layouts as you need.\nThey can even nest.\nCheck out this example of a nested layout from this website. Itâs named article.layout.js
and imports the root.layout.js
. It wraps the children
and then passes the results to root.layout.js
.
// article.layout.js\n\nimport { html } from 'uhtml-isomorphic'\nimport { sep } from 'node:path'\nimport { breadcrumb } from '../components/breadcrumb/index.js'\n\nimport defaultRootLayout from './root.layout.js'\n\nexport default function articleLayout (args) {\n const { children, ...rest } = args\n const vars = args.vars\n const pathSegments = args.page.path.split(sep)\n const wrappedChildren = html`\n ${breadcrumb({ pathSegments })}\n <article class="article-layout h-entry" itemscope itemtype="http://schema.org/NewsArticle">\n <header class="article-header">\n <h1 class="p-name article-title" itemprop="headline">${vars.title}</h1>\n <div class="metadata">\n <address class="author-info" itemprop="author" itemscope itemtype="http://schema.org/Person">\n ${vars.authorImgUrl\n ? html`<img height="40" width="40" src="${vars.authorImgUrl}" alt="${vars.authorImgAlt}" class="u-photo" itemprop="image">`\n : null\n }\n ${vars.authorName && vars.authorUrl\n ? html`\n <a href="${vars.authorUrl}" class="p-author h-card" itemprop="url">\n <span itemprop="name">${vars.authorName}</span>\n </a>`\n : null\n }\n </address>\n ${vars.publishDate\n ? html`\n <time class="published-date dt-published" itemprop="datePublished" datetime="${vars.publishDate}">\n <a href="#" class="u-url">\n ${(new Date(vars.publishDate)).toLocaleString()}\n </a>\n </time>`\n : null\n }\n ${vars.updatedDate\n ? html`<time class="dt-updated" itemprop="dateModified" datetime="${vars.updatedDate}">Updated ${(new Date(vars.updatedDate)).toLocaleString()}</time>`\n : null\n }\n </div>\n </header>\n\n <section class="e-content" itemprop="articleBody">\n ${typeof children === 'string'\n ? html([children])\n : children /* Support both uhtml and string children. Optional. */\n }\n </section>\n\n <!--\n <footer>\n <p>Footer notes or related info here...</p>\n </footer>\n -->\n </article>\n ${breadcrumb({ pathSegments })}\n `\n\n return defaultRootLayout({ children: wrappedChildren, ...rest })\n}\n
\nWith multi-layout support, it made sense to introduce two more style and js bundle types:
\nPrior to this, the global.css
and global.client.js
bundles served this need.
Layouts, and global.client.js
, etc used to have to live at the root of the project src
directory. This made it simple to find them when building, and eliminated duplicate singleton errors, but the root of websites is already crowded. It was easy enough to find these things anywhere, so now you can organize these special files in any way you like. I've been using:
layouts
: A folder full of layoutsglobals
: A folder full of the globally scoped filesGiven the top-bun
variable cascade system, and not all website files are html, it made sense to include a templating system for generating any kind of file from the global.vars.js
variable set. This lets you generate random website âsidefilesâ from your site variables.
It works great for generating RSS feeds for websites built with top-bun
. Here is the template file that generates the RSS feed for this website:
import pMap from 'p-map'\nimport jsonfeedToAtom from 'jsonfeed-to-atom'\n\n/**\n * @template T\n * @typedef {import('@siteup/cli').TemplateAsyncIterator<T>} TemplateAsyncIterator\n */\n\n/** @type {TemplateAsyncIterator<{\n * siteName: string,\n * siteDescription: string,\n * siteUrl: string,\n * authorName: string,\n * authorUrl: string,\n * authorImgUrl: string\n * layout: string,\n * publishDate: string\n * title: string\n * }>} */\nexport default async function * feedsTemplate ({\n vars: {\n siteName,\n siteDescription,\n siteUrl,\n authorName,\n authorUrl,\n authorImgUrl\n },\n pages\n}) {\n const blogPosts = pages\n .filter(page => ['article', 'book-review'].includes(page.vars.layout) && page.vars.published !== false)\n .sort((a, b) => new Date(b.vars.publishDate) - new Date(a.vars.publishDate))\n .slice(0, 10)\n\n const jsonFeed = {\n version: 'https://jsonfeed.org/version/1',\n title: siteName,\n home_page_url: siteUrl,\n feed_url: `${siteUrl}/feed.json`,\n description: siteDescription,\n author: {\n name: authorName,\n url: authorUrl,\n avatar: authorImgUrl\n },\n items: await pMap(blogPosts, async (page) => {\n return {\n date_published: page.vars.publishDate,\n title: page.vars.title,\n url: `${siteUrl}/${page.pageInfo.path}/`,\n id: `${siteUrl}/${page.pageInfo.path}/#${page.vars.publishDate}`,\n content_html: await page.renderInnerPage({ pages })\n }\n }, { concurrency: 4 })\n }\n\n yield {\n content: JSON.stringify(jsonFeed, null, ' '),\n outputName: 'feed.json'\n }\n\n const atom = jsonfeedToAtom(jsonFeed)\n\n yield {\n content: atom,\n outputName: 'feed.xml'\n }\n\n yield {\n content: atom,\n outputName: 'atom.xml'\n }\n}\n
\nPages, Layouts and Templates can now introspect every other page in the top-bun
build.
You can now easily implement any of the following:
\nPages, Layouts and Templates receive a pages
array that includes PageData instances for every page in the build. Variables are already pre-resolved, so you can easily filter, sort and target various pages in the build.
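For example, a hypothetical blog index page function might filter and sort that pages array (the property names follow the feed template above; the exact page-function signature here is illustrative):

// blog/page.js - a hypothetical js page that lists recent posts from the build
export default async function blogIndexPage ({ pages }) {
  const posts = pages
    .filter(page => page.vars.layout === 'article' && page.vars.published !== false)
    .sort((a, b) => new Date(b.vars.publishDate) - new Date(a.vars.publishDate))

  // js pages can return the page contents as a string
  return `<ul>
    ${posts.map(page => `<li><a href="/${page.pageInfo.path}/">${page.vars.title}</a></li>`).join('\n')}
  </ul>`
}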
top-bun
now has full type support. Itâs achieved with types-in-js
and it took a ton of time and effort.
The results are nice, but I'm not sure the juice was worth the squeeze. top-bun
was working really well before types. Adding types required solidifying a lot of trivial details to make the type-checker happy. I don't even think a single runtime bug was solved. It did help clarify some of the more complex types that had developed over the first 2 years of development though.
The biggest improvement provided here is that the following types are now exported from top-bun
:
LayoutFunction<T>\nPostVarsFunction<T>\nPageFunction<T>\nTemplateFunction<T>\nTemplateAsyncIterator<T>\n
\nYou can use these to get some helpful auto-complete in LSP supported editors.
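Usage is a one-line JSDoc cast, assuming the types resolve from the top-bun package as described above (the vars shape and layout body here are just for illustration):

// layouts/root.layout.js - annotate a layout so the LSP can check its arguments
/**
 * @type {import('top-bun').LayoutFunction<{ title: string, siteName: string }>}
 */
export default async function rootLayout ({ vars, children }) {
  return `<!doctype html>
<html>
  <head><title>${vars.title} | ${vars.siteName}</title></head>
  <body>${children}</body>
</html>`
}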
\ntypes-in-js
not TypescriptThis was the first major dive I did into a project with types-in-js
support.\nMy overall conclusions are:
types-in-js
provides a superior development experience to developing in .ts
by eliminating the development loop build step.types-in-js
are in conflict with each other. types-in-js
should win, its better than JSDoc in almost every way (but you still use both).types-in-js
. Find something that consumes the generated types instead of consuming the JSDoc blocs.@ts-ignore
liberally. Take a pass or two to remove some later.md
and html
Previously, only js
pages had access to the variable cascade inside of the page itself. Now html
and md
pages can access these variables with handlebars placeholders.
## My markdown page\n\nHey this is a markdown page for {{ vars.siteName }} that uses handlebars templates.\n
\nPreviously, if you opted for the default layout, it would import mine.css from unpkg. This worked, but went against the design goal of making top-bun
sites as reliable as possible (shipping all final assets to the dest folder).
Now when you build with the default layout, the default stylesheet (and theme picker js code) is built out into your dest
folder.
browser-sync
@siteup/cli
previously didnât ship with a development server, meaning you had to run one in parallel when developing. This step is now eliminated now that top-bun
ships browser-sync
. browser-sync
is one of the best Node.js development servers out there and offers a bunch of really helpful dev tools built right in, including scroll position sync so testing across devices is actually enjoyable.
If you arenât familiar with browser-sync, here are some screenshots of fun feature:
\n\n\n\n\n\n\ntop-bun
ejecttop-bun
now includes an --eject
flag, that will write out the default layout, style, client and dependencies into your src
folder and update package.json
. This lets you easily get started with customizing default layouts and styles when you decide you need more control.
â default-layout % npx top-bun --eject\n\ntop-bun eject actions:\n - Write src/layouts/root.layout.mjs\n - Write src/globals/global.css\n - Write src/globals/global.client.mjs\n - Add mine.css@^9.0.1 to package.json\n - Add uhtml-isomorphic@^2.0.0 to package.json\n - Add highlight.js@^11.9.0 to package.json\n\nContinue? (Y/n) y\nDone ejecting files!\n
\nThe default layout is always supported, and its of course safe to rely on that.
\nThe logging has been improved quite a bit. Here is an example log output from building this blog:
\ntop-bun --watch\n\nInitial JS, CSS and Page Build Complete\nbret.io/src => bret.io/public\nââ⬠projects\nâ ââ⬠websockets\nâ â âââ README.md: projects/websockets/index.html\nâ ââ⬠tron-legacy-2021\nâ â âââ README.md: projects/tron-legacy-2021/index.html\nâ ââ⬠package-automation\nâ â âââ README.md: projects/package-automation/index.html\nâ âââ page.js: projects/index.html\nââ⬠jobs\nâ ââ⬠netlify\nâ â âââ README.md: jobs/netlify/index.html\nâ ââ⬠littlstar\nâ â âââ README.md: jobs/littlstar/index.html\nâ âââ page.js: jobs/index.html\nâ âââ zhealth.md: jobs/zhealth.html\nâ âââ psu.md: jobs/psu.html\nâ âââ landrover.md: jobs/landrover.html\nââ⬠cv\nâ âââ README.md: cv/index.html\nâ âââ style.css: cv/style-IDZIRKYR.css\nââ⬠blog\nâ ââ⬠2023\nâ â ââ⬠reintroducing-top-bun\nâ â â âââ README.md: blog/2023/reintroducing-top-bun/index.html\nâ â â âââ style.css: blog/2023/reintroducing-top-bun/style-E2RTO5OB.css\nâ â ââ⬠hello-world-again\nâ â â âââ README.md: blog/2023/hello-world-again/index.html\nâ â âââ page.js: blog/2023/index.html\nâ âââ page.js: blog/index.html\nâ âââ style.css: blog/style-NDOJ4YGB.css\nââ⬠layouts\nâ âââ root.layout.js: root\nâ âââ blog-index.layout.js: blog-index\nâ âââ blog-index.layout.css: layouts/blog-index.layout-PSZNH2YW.css\nâ âââ blog-auto-index.layout.js: blog-auto-index\nâ âââ blog-auto-index.layout.css: layouts/blog-auto-index.layout-2BVSCYSS.css\nâ âââ article.layout.js: article\nâ âââ article.layout.css: layouts/article.layout-MI62V7ZK.css\nâââ globalStyle: globals/global-OO6KZ4MS.css\nâââ globalClient: globals/global.client-HTTIO47Y.js\nâââ globalVars: global.vars.js\nâââ README.md: index.html\nâââ style.css: style-E5WP7SNI.css\nâââ booklist.md: booklist.html\nâââ about.md: about.html\nâââ manifest.json.template.js: manifest.json\nâââ feeds.template.js: feed.json\nâââ feeds.template.js-1: feed.xml\nâââ feeds.template.js-2: atom.xml\n\n[Browsersync] Access URLs:\n --------------------------------------\n Local: http://localhost:3000\n External: http://192.168.0.187:3000\n --------------------------------------\n UI: http://localhost:3001\n UI External: http://localhost:3001\n --------------------------------------\n[Browsersync] Serving files from: /Users/bret/Developer/bret.io/public\nCopy watcher ready\n
\nmjs
and cjs
file extensionsYou can now name your page, template, vars, and layout files with the mjs
or cjs
file extensions. Sometimes this is a necessary evil. In general, set your type
in your package.json
correctly and stick with .js
.
top-bun
The current plan is to keep sitting on this feature set for a while. But I have some ideas:
\nuhtml
?top-bun
is already one of the best environments for implementing sites that use web-components. Page bundles are a perfect place to register components!If you try out top-bun
, I would love to hear about your experience. Do you like it? Do you hate it? Open an discussion item. or reach out privately.
top-bun
OK, now time for the story behind top-bun
aka @siteup/cli
.
I ran some experiments with orthogonal tool composition a few years ago. I realized I could build sophisticated module based websites by composing various tools together by simply running them in parallel.
\nWhat does this idea look like? See this snippet of a package.json
:
{ "scripts": {\n "build": "npm run clean && run-p build:*",\n "build:css": "postcss src/index.css -o public/bundle.css",\n "build:md": "sitedown src -b public -l src/layout.html",\n "build:feed": "generate-feed src/log --dest public && cp public/feed.xml public/atom.xml",\n "build:static": "cpx 'src/**/*.{png,svg,jpg,jpeg,pdf,mp4,mp3,js,json,gif}' public",\n "build:icon": "gravatar-favicons --config favicon-config.js",\n "watch": "npm run clean && run-p watch:* build:static",\n "watch:css": "run-s 'build:css -- --watch'",\n "watch:serve": "browser-sync start --server 'public' --files 'public'",\n "watch:md": "npm run build:md -- -w",\n "watch:feed": "run-s build:feed",\n "watch:static": "npm run build:static -- --watch",\n "watch:icon": "run-s build:icon",\n "clean": "rimraf public && mkdirp public",\n "start": "npm run watch"\n }\n}\n
\npostcss
building css bundles, enabling an @import
based workflow for css, as well as providing various transforms I found useful.sitedown
.generate-feed
gravatar-favicons
.js
bundling could easily be added in here with esbuild
or rollup.build
and watch
prefixes, and managed with npm-run-all2 which provides shortcuts to running these tasks in parallel.I successfully implemented this pattern across 4-5 different websites I manged. It work beautifully. Some of the things I liked about it:
\nBut it had a few drawbacks:
\nSo I decided to roll up all of the common patterns into a single tool that included the discoveries of this process.
\n@siteup/cli
extended sitedown
Because it was clear sitedown
provided the core structure of this pattern (making the website part), I extended the idea in the project @siteup/cli
. Here are some of the initial features that project shipped with:
<head>
tag easily from the frontmatter section of markdown pages.js
layout: This let you write a simple JS program that receives the variable cascade and child content of the page as a function argument, and return the contents of the page after applying any kind of template tool available in JS.html
pages: CommonMark supports html
in markdown, but it also has some funny rules that makes it more picky about how the html
is used. I wanted a way to access the full power of html
without the overhead of markdown, and this page type unlocked that.js
pages: Since I enjoy writing JavaScript, I also wanted a way to support generating pages using any templating system and data source imaginable so I also added support to generate pages from the return value of a JS function.siteup
opted to make it super easy to add a global css
bundle, and page scoped css
bundles, which both supported a postcss
@import
workflow and provided a few common transforms to make working with css
tolerable (nesting, prefixing etc).sitedown
also added this feature subsequently.src
!After sitting on the idea of siteup
for over a year, by the time I published it to npm
, the name was taken, so I used the npm org name hack to get a similar name @siteup/cli
. SILLY NPM!
I enjoyed how @siteup/cli
came out, and have been using it for 2 years now. Thank you of course to ungoldman for laying the foundation of most of these tools and patterns. Onward and upward to top-bun
!
This blog has been super quiet lately, sorry about that.\nI had some grand plans for finishing up my website tools to more seamlessly\nsupport blogging.
\nThe other day around 3AM, I woke up and realized that the tools aren't stopping me\nfrom writing, I am.\nAlso, my silently implemented policy of not writing about "future plans and ideas before they are ready" was enforced far too strictly.\nIt is in fact a good thing to write about in-progress ideas and projects slightly out into the future.\nThis is realistic, interesting, and avoids the juvenile trap of spilling ideas in front of the world only to never realize them.
\nAnyway, no promises, but it is my goal to write various ahem opinions, thoughts and ideas more often because nothing ever happens unless you write it down.
\nHere are some updates from the last couple years that didnât make it onto this site before.
\nI updated the Sublime Text Tron Color Scheme
today after a few weeks of reworking it for the recent release of Sublime Text 4.
The 2.0.0
release converts the older .tmTheme
format into the Sublime specific theme format.\nOverall the new Sublime theme format (.sublime-color-scheme
) is a big improvement, largely due to its simple JSON structure and its variables support.
JSON is, despite the common arguments against it, super readable and easily read and written by humans.\nThe variable support makes the process of making a theme a whole lot more automatic, since you no longer have to find and replace colors all over the place.
\nThe biggest problem I ran into was poor in-line color highlighting when working with colors, so I ended up using a VSCode plugin called Color Highlight
in a separate window.\nSublime has a great plugin also called Color Highlight
that usually works well, but not in this case.\nThe Sublime Color Highlight
variant actually does temporary modifications to color schemes, which seriously gets in the way when working on color scheme files.
The rewrite is based off of the new Mariana theme that ships with ST4, so the theme should support most of the latest features in ST4, though there are surely features that even Mariana missed.\nLet me know if you know of any.
\nHere a few other points of consideration made during the rewrite:
\nTron Legacy 4 (Dark)
. Here are a few more relevant links; please let me know what you think if you try it out.
\n\n\nSublime Tron Legacy color scheme fully updated for @sublimehq Text 4. Full syntax support, lots of other small improvements. Also it supports 'glow' textâï¸ pic.twitter.com/vShbGThgDF
— Bret (@bcomnes) August 5, 2021
After a short sabbatical in Denmark at Hyperdivision, I joined Littlstar.\nHere is a quick overview of some of the more interesting projects I worked on.\nI joined during a transitional period and helped the company move from a VR video platform to a video-on-demand and live-streaming platform, and now an NFT sales and auction platform.
\nMy first project was picking up on an agency style project developing a novel VR reality training platform for NYCPF, powered by a custom hypercore p2p file sharing network, delivering in-house developed unity VR scenarios.\nThese scenarios could then be brought to various locations like schools or events where NYCPF could guide participants through various law enforcement scenarios with different outcomes based on participant choices within the simulation.
\nBy utilizing a p2p and offline first design approach, we were able to deliver an incredibly flexible and robust delivery platform that had all sorts of difficult features to develop for traditional file distribution platforms such as:
\nWhile the project was built on the backs of giants like the Hypercore protocol, as well as the amazing work of my colleague, I contributed in a number of areas to move the project forward during my time contracting on it.
\nSome of the discrete software packages that resulted from this project are described below.
\nsecure-rpc-protocol
Secure rpc-protocol over any duplex socket using noise-protocol.\nThis was a refactor of an existing RPC over websocket solution the project was using.\nIt improved upon the previous secure RPC already used by switching to using the noise protocol which implements well understood handshake patterns that can be shared and audited between projects, rather than relying on a novel implementation at the RPC layer.\nIt also decoupled the RPC protocol from the underlying socket being used, so that the RPC system could be used over any other channels we might want in the future.
\n\nasync-folder-walker
An async generator that walks files.\nThis project was a refactor of an existing project called folder-walker implementing a high performance folder walk algorithm using a more modern async generator API.
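The core of that async-generator approach looks roughly like this (a simplified sketch, not the actual async-folder-walker source):

// walk.js - recursively yield file paths as they are discovered
import { opendir } from 'node:fs/promises'
import { join } from 'node:path'

export async function * walk (dir) {
  for await (const entry of await opendir(dir)) {
    const path = join(dir, entry.name)
    if (entry.isDirectory()) {
      yield * walk(path)      // recurse into subdirectories lazily
    } else {
      yield path              // hand each file to the consumer as soon as it is found
    }
  }
}

// usage: consumers can start processing before the whole tree has been scanned
for await (const file of walk('.')) {
  console.log(file)
}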
\n\nunpacker-with-progress
unpacker-with-progress
is a specialized package that unpacks archives, and provides a progress api in order to provide UI feedback.\nOne of the deficiencies with the NYCPF project when I started was lack of UI feedback during the extraction process.
VR files are very large, and are delivered compressed to the clients.\nAfter the download is complete, the next step to processing the data is unpacking the content.\nThis step did not provide any sort of progress feedback to the user because the underlying unpacking libraries did not expose this information, or only exposed some of the information needed to display a progress bar.
\nThis library implemented support for unpacking the various archive formats the project required, and also added an API providing uniform unpacking progress info that could be used in the UI during unpacking tasks.
\n\nhchacha20
One of the more interesting side projects I worked on was porting over some of the libsodium primitives to sodium-javascript.\nI utilized a technique I learned about at Hyperdivision where one can write WebAssembly by hand in the WAT format, which provides a much wider set of data types and the type guarantees needed to write effective crypto.
\nWhile the WAT was written for HChaCha20, the effort was quite laborious and it kicked off a debate as to whether it would be better to just wrap libsodium-js (the official libsodium js port) in a wrapper that provided the sodium-universal API. This was achieved by another group in geut/sodium-javascript-plus which successfully ran hypercores in the browser using that wrapper.
\nUltimately, this effort was scrapped after determining that Noise peer connections in the browser are redundant to WebRTC encryption and HTTPS sockets.\nIt was a fun and interesting project nonetheless.
\n\nWe were having some state transition bugs between the webapp and the local server process, where the app could get into strange indeterminate states.\nBoth had independent reconnect logic wrapped up with application code, and it added a lot of chaos to understanding how each process was behaving when things went wrong (especially around sleep cycles on machines).
\nI implemented a generic reconnecting state machine that could accept any type of socket, and we were able to reduce the number of state transition bugs we had.
\n\n\nAfter working on various agency projects at Littlstar, we formed a separate organization to start a fresh rewrite of the technology stack.
\nMy high level contributions:
\nThis was an amazing founders-style opportunity to help rethink and re-implement years of work that had developed at Littlstar prior to joining.\nEffectively starting from 0, we rethought the entire technology pipeline, from operations, to infrastructure, to deployment, resulting in something really nice, modern, minimal, low maintenance and malleable.
\n\n\nDrawing on the "Terraform: Up & Running" and "AWS Certified Solutions Architect" books, as well as existing organizational experience with AWS, I helped research and design an operations plan using Terraform and GitHub Actions.
\nThis arrangement has proven powerful and flexible.\nWhile it isnât perfect, it has been effective and reliable and cheap, despite its imperfections in relation to some of the more esoteric edge cases of Terraform.
\nA quick overview of how it's arranged:
\nops
global repo runs terraform in a bootstrapped GitHub actions environment.ops
terraform repo, that in turn contain their own Terraform files specific to that service.One of the drawbacks of rolling our own Terraform CI infrastructure was that we had to tackle many small edge cases inside the GitHub actions environment.
\nIt was nice to learn about the various types of custom GitHub actions one can write, as well as expand that knowlege to the rest of the org, but it also ate up a number of days focusing on DevOps problems specific to our CI environment.
\nHere are some of the problems I helped solve in the actions environment.
\nI helped lay the framework for the initial version of sdk-js
, the Little Core Labs unified library used to talk to the various back-end services at Little Core Labs.
One of underlying design goals was to solve for the newly introduced native ESM features in node, in such a way that the package could be consumed directly in the browser, natively as ESM in node, but also work in dual CJS/ESM environments like Next.js.\nWhile this did add some extra overhead to the project, it serves as a design pattern we can pull from in the future, as well as a provide a highly compatible but modern API client.
\nI also extracted out this dual package pattern into a reusable template.
\n\n\nI was the principal engineer on the new Rad.live website.\nI established and implemented the tech stack, aiming to take a relatively conservative take on a code base that would maximize the readability and clarity for a Dev team that would eventually grow in size.
\nA high level, the application is simply an app written with:
\nFrom a development perspective, it was important to have testing and automation from the beginning.\nFor this we used:
\nOverall, the codebase has had two other seasoned developers (one familiar and one new at React) jump in and find it productive based on individual testimony.\nGathering feedback from those thatI work with on technical decisions that I make is an important feedback look I always try to incorporate when green fielding projects.\nAdditionally, it has been a relatively malleable code base that is easy to add MVP features to and is in a great position to grow.
\nvision.css
I implemented a custom design system in tandem with our design team.\nThis project has worked out well, and has so far avoided âcss lockoutâ, where only one developer can effectively make dramatic changes to an app layout due to an undefined and overly general orthogonal âglobal style sheetâ.
\nThe way this was achieved was by focusing on a simple global CSS style sheet that implements the base HTML elements in accordance with the design system created by the design team.\nWhile this does result in a few element variants that are based on a global style class name, they remain in the theme of only styling âbuilt inâ html elements, so there is little question what might be found in the global style sheet, and what needs to be a scoped css style.
\nSome of the features we used for vision.css
\neslint-config-12core
Linting is the âspell checkâ of code, but its hard to agree on what rules to follow.\nLike most things, having a standard set of rules that is good-enough is always better than no rules, and usually better than an unpredictable and unmaintainable collection of project-unique rules.
\nI put together a shared ESLint config that was flexible enough for most projects so far at Little Core Labs based on the âStandardJSâ ruleset, but remained flexible enough to modify unique org requirements.\nAdditionally, Iâve implemented it across many of the projects in the Github Org.
\n\ngqlr
gqlr
is a simplified fork of graphql-request.
This relatively simple wrapper around the JS fetch API has a gaggle of upstream maintainers with various needs that donât really match our needs.
\nThe fork simplified and reduced code redundancy, improved maintainability through automation, fixed bugs and weird edge cases and dramatically improved errors and error handling at the correct level of the tech stack.\nThese changes would have unlikely been accepted upstream, so by forking we are able to gain the value out of open source resources, while still being able to finely tune them for our needs, as well as offer those changes back to the world.
\ngqlr
local-storage-proxy
A configuration solution that allows for persistent overrides stored in local storage, including cache busting capabilities. Implemented with a recursive JS proxy to simulate native object interactions over a window.localstorage interface.
\n\nWe ended up taking on maintainence of a few other packages, providing fixes and improvements where the original authors seem to have left off.
\nHere are some snapshots of the video platform we launched.
\n\n\nHere are some screenshots of the NFT auction platform I helped build.\nThe UI was fully responsive and updated on the fly to new results, thanks to the powers of SWR.
\n\nI did a few marketing pages as well.
\n\n\n\n\nWhile this isnât everything I did at Littlstar, it captures many of the projects I enjoyed working on, and can hopefully provide some insights into my skills, interests and experiences from the past year.
\n" }, { "date_published": "2020-09-29T17:50:58.562Z", "title": "Fully Automated Luxury Space Age Package Maintenance", "url": "https://bret.io/projects/package-automation/", "id": "https://bret.io/projects/package-automation/#2020-09-29T17:50:58.562Z", "content_html": "tldr; The full package maintenance life cycle should be automated and can be broken down into the following levels of automation sophistication:
\ngit
+ GithubThese solutions focus on Node.js + npm packages, automated on Github Actions, but the underlying principals are general to any language or automation platform.
\nMaintaining lots of software packages is burdensome.\nScaling open source package maintenance beyond a single contributor who understands the release life cycle is challenging.\nLong CONTRIBUTING.md
files are often the goto solution, but are easily overlooked.
In the end, automating the package life cycle so that it can maintain itself, is the only way to realistically scale a large set of packages in a maintainable way.
\n\nFor a long time I didnât seek out automation solutions for package maintenance beyond a few simple solutions like testing and CI.\nInstead I had a lengthly ritual that looked approximately like this:
\nð®\ngit checkout -b my-cool-branch\ndo some work\nupdate tests\nupdate docs\nnpm run test\ngit commit -am 'Describe the changes'\ngit push -u\nhub browse\ndo the PR process\nmerge the PR\ngit checkout master\ngit pull\ngit branch --delete my-cool-branch\nhand edit changelog\ngit add CHANGELOG.md\ngit commit -m 'CHANGELOG'\nnpm version {major,minor,patch}\ngit push && git push --follow-tags\nnpx gh-release\nnpm publish\nð
\n
\nIt was ritual, a muscle memory.
\nOver the years, Iâve managed to automate away large amount of raw labor to various bots, tools and platforms that tend to build on one another and are often usable in isolation or adopted one at a time.\nIâve broken various tools and opportunities for automation into levels, with each level building on the complexity contained in the level below.
\ngit
and GithubYou are already automating your packages to a large extent by your use of git
.\ngit
automates the process of working on code across multiple computers and collaborating on it with other people, and Github is the central platform to coordinate and distribute that code.
If you are new to programming or learning git
, its helpful to understand you are learning a tool used to automate the process by which you can cooperatively work on code with other people and bots.
This isnât an article about git
though, so I wonât dive more into that.
There is no debate.\nSoftware isnât âdoneâ until it has tests.\nThe orthodox position is that you shouldnât be allowed to write code until the tests are written (TDD).\nNo matter the methodology, you are automating a verification process of the package that you would normally have to perform by hand.
\nThese are my preferred test runners for Node.js:
\nfoo.test.js
).Unit tests that you run with a test runner are not the only type of test though.\nThere are lots of other easy tests you can throw at your package testing step that provide a ton of value:
\nstandard
, eslint
etc)\ndependency-check
\npackage.json
matches usage in the actual code.package.json
that are no longer used in code or if it finds pacakges in use in the code that are not listed in package.json
which would cause a runtime error.npm run build
part of your test life cycle.Running multiple tests under npm test
can result in a long, difficult to maintain test
script. Install npm-run-all2
as a devDependency
to break each package.json
test
command into its own sub-script, and then use the globbing feature to run them all in parallel (run-p
) or in series (run-s
):
{\n "scripts": {\n "test": "run-s test:*",\n "test:deps": "dependency-check . --no-dev --no-peer",\n "test:standard": "standard",\n "test:tap": "tap"\n }\n}\n
\nWhen testing locally, and an individual test is failing, you can bypass the other tests and run just the failing test:
\nrun just the dep tests\nnpm run test:deps\n
\nnpm-run-all2
is a fantastic tool to keep your npm run
scripts manageable.
This builds on the fantastic and 2014 classic Keith Cirkel blog post How to Use npm as a Build Tool.
\nWhile its obvious that writing automated tests is a form of automation, its still very common to see projects not take the actual step of automating the run step of the tests by hooking them up to a CI system that runs the tests on every interaction with the code on Github.\nServices like TravisCI have been available for FREE for years, and there is literally no valid excuse not to have this set up.
\nAlthough TravisCI has served many projects well over the years, Github Actions is a newer, platform-native solution that many projects are now using. Despite the confusing name, Github Actions is primarily a CI service.
\n\nCreate the following action file in your package repo and push it up to turn on CI.
\n# .github/workflows/tests.yml\nname: tests\n\non: [push]\n\njobs:\n  test:\n    runs-on: ${{ matrix.os }}\n\n    strategy:\n      matrix:\n        os: [ubuntu-latest]\n        node: [14]\n\n    steps:\n      - uses: actions/checkout@v2\n      - name: Use Node.js ${{ matrix.node }}\n        uses: actions/setup-node@v1\n        with:\n          node-version: ${{ matrix.node }}\n      - run: npm i\n      - run: npm test\n
\nFor more information on Action syntax and directives, see the Github Actions documentation.\n\nOnce you have a test suite set up and running in CI, any pull request to your package features the results of the "Checks API".\nVarious tests and integrations will post their results on every change to the pull request in the form of "running", "pass" or "fail".
\nThe benefit of the checks status in the pull request UI is that, depending on the quality and robustness of your test suite, you can have some confidence that the proposed changes can be merged safely and everything will keep working the way you expect.\nNo matter how reliable the test suite is, it is still important to read and review the code.
\n\nYour package has dependencies, be it your test runner or other packages imported or required into your code. They provide valuable functionality with little upfront cost.
\nDependencies form the foundation your package is built upon, but that foundation is made of shifting sands ⏳.\nDependencies have their own dependencies, and all of them slowly morph and change along with the underlying platform.\nLike a garden, if you don't pull the weeds and periodically water it with dependency updates, the plants will die.
\nWith npm, you normally update dependencies by grabbing the latest copy of the code and checking for outdated packages:
git checkout master\ngit pull\nrm -rf node_modules\nnpm i\nnpm outdated\n
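For context, npm outdated prints a table roughly like the following; the package name and versions here are made up:
Package    Current  Wanted  Latest  Location\nsome-dep   1.2.0    1.2.3   2.0.0   node_modules/some-dep\n
Wanted is the newest version that still satisfies the semver range in package.json, while Latest is the newest version published to the registry.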
\n\nChecking for updates any time you work on the package is not a bad strategy, but it becomes tiresome, and if left for a long time it can present a large amount of maintenance work unrelated to the task that prompted the visit. A good package doesn't need to change much, so it may rarely be revisited and can rot indefinitely.
\nEnter dependency bots.
\nA dependency bot monitors your package repositories for dependency updates.\nWhen a dependency update is found, it automatically creates a PR with the new version.\nIf you have Level 1 automation set up, this PR will run your tests with the updated version of the dependency.\nThe results will (mostly) inform you whether it's safe to apply the update, and also give you a button to press to apply the change.\nNo typing or console required!
\nLevel 1 automation isn't required to use a dependency bot, but without it you won't have any way to automatically validate the change, so dependency bots are much less useful in that case.
\nGithub now has a dependency bot built in called dependabot.\nTo turn it on, create the following file in your package's Github repo:
\n# .github/dependabot.yml\n\nversion: 2\nupdates:\n  # Enable version updates for npm\n  - package-ecosystem: "npm"\n    # Look for `package.json` and `lock` files in the `root` directory\n    directory: "/"\n    # Check the npm registry for updates every day (weekdays)\n    schedule:\n      interval: "daily"\n  # Enable updates to github actions\n  - package-ecosystem: "github-actions"\n    directory: "/"\n    schedule:\n      interval: "daily"\n
\nThis enables updates for npm and github actions. It offers other ecosystems as well. See the dependabot docs for more info.
\nBefore dependabot, there was a now-shut-down service called Greenkeeper.io, which worked in a very similar way. It offered a very interesting feature which I'm still not sure dependabot has yet.
\n\nIt would run your tests every time a dependency in your package was updated, both in and out of semver range.
\nFor in-range updates that passed, nothing would happen.\nFor in-range updates that failed, it would open a PR alerting you that one of your dependencies had inadvertently released a breaking change as a non-breaking change.\nThis was a fantastic feature, and it really demonstrated the heights that automated tests, CI, dependency bots and an ecosystem that fully utilizes semver could achieve together.
\nSadly, I haven't seen other services or languages quite reach these heights of automation sophistication (many ecosystems even lack any kind of major version gating), but perhaps as awareness of these possibilities increases, more people will demand it.
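For readers outside the npm ecosystem, "in range" refers to the semver ranges declared in package.json. A quick sketch, with a hypothetical dependency name:
{\n  "dependencies": {\n    "some-dep": "^1.2.3"\n  }\n}\n
Here ^1.2.3 means any 1.x release at or above 1.2.3 is in range, while 2.0.0 is out of range, so a well-behaved dependency only ships breaking changes as out-of-range major versions.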
There is a lot more room for innovation in this space. It would be great to get periodic reports regarding the health of downstream dependency chains (e.g. if you are depending on a project that is slowly rotting and not maintaining its deps).\nAs of now, dependabot seems to be recovering from a post-acquisition engineering fallout, but I hope that they can get these kinds of features back into reality sooner rather than later.
\nThere are bots out there that can send in automated code changes based on lint tests and other code analysis tools. While this is "cool", these tasks are better served at the testing level.\nAutomated code changes for failing lint tests should really just be a part of the human development cycle, handled by whatever IDE or editor the developer uses.\nStill, the bot layer is open to experimentation, so go forth and experiment all you want, though note that external service integrations usually carry a heavy integration cost.
\nQuick recap: we now have automated tests, CI running them on every change, and dependency bots sending in update PRs.
\nThat means our package is going to morph and change with time (hopefully not too much though). We need a way way to communicate that clearly to downstream dependents, be that us, someone else on the team or a large base of dark-matter developers.
\nThe way to do this is with a CHANGELOG.md file, or release notes on a Github release page, or ideally both. keepachangelog.com offers a good overview of the format a CHANGELOG.md should follow.
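For reference, a keepachangelog-style entry looks roughly like this; the version, date and bullet points are placeholders:
## [2.1.0] - 2020-09-29\n\n### Added\n\n- New option for doing the cool thing\n\n### Fixed\n\n- Crash when the input was empty\n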
\n\nThis is a tedious process. If you work with other people, they might not be as motivated as you to handcraft an artisan CHANGELOG file. In my experience, the handcrafted, artisan CHANGELOG is too much work and easy to forget about. Also, I haven't found a good linter tool to enforce its maintenance.
\nauto-changelog\nauto-changelog is a tool that takes your git history and generates a CHANGELOG that is almost as good as the artisan handcrafted one. Hooking this tool into your package's version life cycle ensures that it is run whenever a new version is generated with npm version {major,minor,patch}.\nWhile keepachangelog.com advocates for the handcrafted version, and discourages "git commit dumps", as long as you are halfway conscious of your git commit logs (as you should be), the auto-changelog output is generally still useful.\nYou can even follow conventionalcommits.org if you want an even more structured git log.
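For illustration, a conventionalcommits.org style commit message looks something like this (the type, scope and description here are made up), and the BREAKING CHANGE: footer is exactly the kind of marker the --breaking-pattern flag below keys off of:
feat(parser): add support for parsing arrays\n\nBREAKING CHANGE: parse() now returns an array instead of an object\n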
Automating auto-changelog to run during npm version [1] is easy.\nInstall it as a devDependency and set up the following script in package.json:
{\n  "scripts": {\n    "version": "auto-changelog -p --template keepachangelog --breaking-pattern 'BREAKING CHANGE:' && git add CHANGELOG.md"\n  }\n}\n
\nThe version script is an npm run lifecycle script that runs after the package.json version is bumped, but before the git commit and tag with the changes are created. Kind of a mouthful, but with nice results.
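Roughly, the order of events when you run npm version looks like this sketch:
npm version patch\n# 1. run the preversion script (if present)\n# 2. bump the version number in package.json\n# 3. run the version script (our auto-changelog + git add step)\n# 4. commit the changes and create a git tag\n# 5. run the postversion script (if present)\n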
Ok, so we have local changes merged in, and we've created a new version of the module with an automated changelog generated as part of the npm version commit. Time to get this all pushed out and published! By hand we could do:
git push --follow-tags\ncopy contents of changelog\ncreate a new Github release on the new tag with the changelog contents\nnpm publish\n
\nBut that is tedious.\nAnd what happens when your colleague forgets to push the git commit/tag to Github and just publishes to npm?\nOr more likely, they just forget to create the Github release, creating inconsistency in the release process.
The solution is to automate all of this!\nUse the prepublishOnly hook to run all of these tasks automatically before publishing to npm via npm publish.\nIncorporate a tool like gh-release to create a Github release page for the new tag with the contents of your freshly minted auto-changelog.
{\n "scripts": {\n "prepublishOnly": "git push --follow-tags && gh-release -y"\n }\n}\n
\n\nThe result of this is that our release process is reduced to the lowest common denominator of process dictated by npm:
\nnpm version {major,minor,patch}\nnpm publish\n
\nBut we still get all of these results, completely automated:
\nA CHANGELOG generated from the git history by auto-changelog\nA git commit+tag created by npm version\nA Github release created by gh-release, with the new contents of CHANGELOG\nA package published to npm.\nThose two run scripts together:
\n{\n  "scripts": {\n    "version": "auto-changelog -p --template keepachangelog --breaking-pattern 'BREAKING CHANGE:' && git add CHANGELOG.md",\n    "prepublishOnly": "git push --follow-tags && gh-release -y"\n  }\n}\n
\nSome packages have build steps. No problem, these are easily incorporated into the above flow:
\n{\n "scripts": {\n "build": "do some build command here",\n "prepare": "npm run build",\n "version": "run-s prepare version:*",\n "version:changelog": "auto-changelog -p --template keepachangelog auto-changelog --breaking-pattern 'BREAKING CHANGE:'",\n "version:git": "git add CHANGELOG.md dist",\n "prepublishOnly": "git push --follow-tags && gh-release -y"\n }\n}\n
\nSince version becomes a bit more complex, we can break it down into pieces with npm-run-all2
as we did in the testing step. We ensure we run fresh builds on development install (prepare
), and also when we version
. We capture any updated build outputs in git
during the version step by staging the dist
folder (or whatever else you want to capture in your git
version commit).
This pattern was documented well by @swyx: Semi-Automatic npm and GitHub Releases with gh-release and auto-changelog.
We now have a process for fully managing the maintenance and release cycle of our package, but we are still left to pull down any changes from our Github repo and run these release commands, as simple as they are now.\nYou can't really do this from your phone (easily), and someone else on the project might still manage to not run npm version and just hand-bump the version number for some reason, bypassing all our wonderful automation.
What would be cool is if we could kick off a special CI program that would run npm version && npm publish for us, at the push of a button.
It turns out Github Actions now has a feature called workflow_dispatch, which lets you press a button on the repo's actions page on GitHub and trigger a CI workflow with some input.
Implementing workflow_dispatch is easy: create a new action workflow file with the following contents:
# .github/workflows/release.yml\n\nname: npm version && npm publish\n\non:\n  workflow_dispatch:\n    inputs:\n      newversion:\n        description: 'npm version {major,minor,patch}'\n        required: true\n\nenv:\n  node_version: 14\n\njobs:\n  version_and_release:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v2\n        with:\n          # fetch full history so things like auto-changelog work properly\n          fetch-depth: 0\n      - name: Use Node.js ${{ env.node_version }}\n        uses: actions/setup-node@v1\n        with:\n          node-version: ${{ env.node_version }}\n          # setting a registry enables the NODE_AUTH_TOKEN env variable where we can set an npm token. REQUIRED\n          registry-url: 'https://registry.npmjs.org'\n      - run: npm i\n      - run: npm test\n      - run: git config --global user.email "[email protected]"\n      - run: git config --global user.name "${{ github.actor }}"\n      - run: npm version ${{ github.event.inputs.newversion }}\n      - run: npm publish\n        env:\n          GH_RELEASE_GITHUB_API_TOKEN: ${{ secrets.GITHUB_TOKEN }}\n          NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}\n
\nNext, generate an npm token with publishing rights.
\nThen set that token as a repo secret called NPM_TOKEN
.
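If you prefer the terminal to the web UI, something like this works, assuming you have the npm CLI and the GitHub CLI (gh) installed and authenticated:
# create a token that is allowed to publish\nnpm token create\n# store it as a repo secret named NPM_TOKEN (paste the token when prompted)\ngh secret set NPM_TOKEN\n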
Now you can visit the actions tab on the repo, select the npm version && npm publish action, and press run, passing in either major, minor, or patch as the input. A GitHub action will kick off, running our Level 3 version and release automations along with publishing a release to npm and GitHub.
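You can also trigger the same workflow_dispatch from the command line with the GitHub CLI, assuming the workflow file is named release.yml as above:
gh workflow run release.yml -f newversion=patch\n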
Note: It's recommended that you .gitignore package-lock.json files in library packages, otherwise they end up in the library source, where they provide little benefit and bring lots of drawbacks.
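One minimal way to do that in a library repo is to ignore the lockfile in git and tell npm not to generate one:
# .gitignore\npackage-lock.json\n\n# .npmrc\npackage-lock=false\n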
I created a small action called npm-bump which can clean up some of the above action boilerplate:
name: Version and Release\n\non:\n  workflow_dispatch:\n    inputs:\n      newversion:\n        description: 'npm version {major,minor,patch}'\n        required: true\n\nenv:\n  node_version: 14\n\njobs:\n  version_and_release:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v2\n        with:\n          # fetch full history so things like auto-changelog work properly\n          fetch-depth: 0\n      - name: Use Node.js ${{ env.node_version }}\n        uses: actions/setup-node@v1\n        with:\n          node-version: ${{ env.node_version }}\n          # setting a registry enables the NODE_AUTH_TOKEN env variable where we can set an npm token. REQUIRED\n          registry-url: 'https://registry.npmjs.org'\n      - run: npm i\n      - run: npm test\n      - name: npm version && npm publish\n        uses: bcomnes/npm-bump@v2\n        with:\n          git_email: [email protected]\n          git_username: ${{ github.actor }}\n          newversion: ${{ github.event.inputs.newversion }}\n          github_token: ${{ secrets.GITHUB_TOKEN }} # built in actions token. Passed to gh-release if in use.\n          npm_token: ${{ secrets.NPM_TOKEN }} # user set secret token generated at npm\n
\n\nSo this is great! You can maintain packages by merging automatically generated pull requests, run your tests on them to ensure package validity, and when you are ready, fully release the package, with a CHANGELOG entry, all from the push of a button on your cell phone. Fully Automated Luxury Space Age Package Maintenance.
\nWhat is the best way to manage all of these independent pieces? A template! Or, a template repo.
\nYou mean things like yeoman? Maybe, though that tool is largely used to "scaffold" massive amounts of web framework boilerplate and is a complex ecosystem.
\nSomething simpler will be more constrained and easier to maintain over time. Github template repos and create-project are good choices.
Github offers a very simple solution called template repos. You take any repo on GitHub, go into its settings page and designate it as a template repo. You can then create a new repo from the template repo's page with the click of a button, or select it from a drop-down in the Github create repo wizard.
\n\nThe only issue is that you then have to go through and modify all the repo-specific parameters by hand.\nBetter than nothing!\nBut we can do better.
\ncreate-project
repos\ncreate-project
is a simple CLI tool (by @mafintosh) that works similar to Github template repos, except it has a `` system that lets you insert values when spawning off a project repo. You can designate your create-project
template repos to also be a Github template repo, and create new projects either way you feel like it.
Here are some of my personal template repos:
\ncreate-project
doesn't need to only manage boilerplate for Node.js projects. Maybe a Go tool would be better for this, but it does show the flexibility of these tools.\nThere are various solutions for generating docs from comments that live close to the code.
\nI haven't found a solution that satisfies my needs well enough to use one generally.\nIt's hard to exceed the quality of docs written by hand, for and by humans.\nIf you have a good solution please let me know!
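For context, the kind of comment-driven docs I mean are JSDoc-style blocks that a generator can turn into API docs; a quick sketch with a made-up function:
/**\n * Add two numbers together.\n * @param {number} a the first number\n * @param {number} b the second number\n * @returns {number} the sum\n */\nfunction add (a, b) {\n  return a + b\n}\n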
\n\nThat was a lot to cover.\nIf you want to see a complete level 0 through level 5 example, check out my create-template template repo, snapshotted to the latest commit at the time of publishing.
This collection is written in the context of the Node.js programming system, however the classes of tools discussed apply to every other language ecosystem, and these automation levels could serve as a framework for assessing the maturity of automation capabilities of other programming language systems.\nHopefully they can provide some insights into the capabilities and common practices around modern JavaScript development for those unfamiliar with this ecosystem.
\nAdditionally, this documents my personal suite of tools and processes that I have developed to automate package maintenance, and it is by no means normative. Modification and experimentation are always encouraged.
\nThere are many subtle layers to the Node.js programming system, and this just covers the maintenance automation layer that can exist around a package.\nMuch more could be said about versioned development tooling, standardized scripting hooks, diamond dependency problem solutions, localized dependencies, upstream package hacking/debugging conveniences and local package linking. An even deeper dive could be made into the overlap these patterns have (and don't have) in other JS runtimes like Deno, which standardizes a lot around Level 1, or even other languages like Go or Rust.
\nIf you enjoyed this article, have suggestions or feedback, or think I'm full of it, follow me on twitter (@bcomnes) and feel free to hop into the accompanying thread. I would love to hear your thoughts, ideas and examples! Also subscribe to my RSS/JSON Feed in your favorite RSS reader.
\n\n"Fully Automated Luxury Space Age Package Maintenance"
— Bret (@bcomnes) September 29, 2020
I wrote up how tedious package maintenance tasks can be fully automated.
Hope someone enjoys! https://t.co/fvYIu2Wq0r pic.twitter.com/q220LTax8X
I was lucky to be able to contribute to many features and experiences that affected Netlify's massive user base. Here are some examples of things that I worked on. If this kind of work looks interesting to you, Netlify is a fantastic team to join: netlify.com/careers. Questions and comments can be sent via email or twitter.
\nAfter working on the Product team for slightly over a year, I switched to working on Netlify's platform team. The team has a strong DevOps focus and maintains a redundant, highly available multi-cloud infrastructure on which Netlify and all of its services run. My focus on the team is to maintain, develop, scale and improve various critical services and libraries. Below are some examples of larger projects I worked on.
\nOne of my primary responsibilities upon joining the team was to maintain the Buildbot that all customer site builds run on. It is partially open source so customers can explore the container in a more freeform manner locally.
\nOne of the first feature additions I launched for the Buildbot was selectable build images. This project required adding the concept of additional build images to the API and UI, and developing an upgrade path allowing users to migrate their websites to the new build image while also allowing them to roll back to the old image if they needed more time to accommodate the update.
\nAdditionally, I performed intake on a number of user-contributed build-image additions and merged various other potentially breaking changes within a development window before releasing. I also helped develop the changes to the Ruby on Rails API and additions to the React UI, as well as writing the user documentation. It was a widely cross-cutting project.
\n\n\nI help maintain and further develop Netlify's Open-API (aka Swagger) API definition, website and surrounding client ecosystem. While Open-API can be cumbersome, it has been a decent way to synchronize projects written in different language ecosystems in a unified way.
\nI worked on Netlify's Product team for a bit over a year and completed many successful user-facing projects. Here are just a few examples:
\nI was the primary author of Netlify's current CLI codebase.
\nI gave a talk on some ideas and concepts I came across while working with Netlify's platform and general JAMStack architecture at a local Portland JAMStack meetup.
\n\n\nI led the Netlify Domains project, which allowed users to add an existing live domain during site setup, or buy the domain if it is available. This feature enabled users to deploy a website from a git repo to a real live domain name with automatic https in a matter of minutes, and it has resulted in a nice stream of ever-increasing ARR for the company.
\n\nI helped lead and implement build status favicons, so you can put a build log into a tab, and monitor status from the tab bar.
\n\nI implemented the application UI for Netlify's Lambda functions and logging infrastructure, and have continued to help design and improve the developer ergonomics of the Lambda functions feature set.
\n\nI helped architect and implement Netlify's Identity widget.
\n\nI helped implement the UI for our site dashboard redesign.
\n\nI led the project to specify and implement the Audit Log for teams and identity instances.
\n\nI implemented the UI for Netlify's split testing feature.
\n\n" } ] }