<h1>Washing your code: don’t make me think</h1>
<p><a href="https://sapegin.me/blog/dont-make-me-think/">https://sapegin.me/blog/dont-make-me-think/</a> · Thu, 28 Nov 2024</p>
<p><!-- description: All the different ways programmers like to write clever code, and why we should avoid clever code as much as possible --></p>
<p>Clever code is something we may see in job interview questions or language quizzes, when they expect us to know how a language feature, which we probably have never seen before, works. My answer to all these questions is: “it won’t pass code review”.</p>
<p>Some folks confuse <em>brevity</em> with <em>clarity</em>. Short code (brevity) isn’t always the clearest code (clarity), often it’s the opposite. Striving to make your code shorter is a noble goal, but it should never come at the expense of readability.</p>
<p>There are many ways to express the same idea in code, and some are easier to understand than others. We should always aim to reduce the cognitive load of the next developer who reads our code. Every time we stumble on something that isn’t immediately obvious, we waste our brain’s resources.</p>
<p><strong>Info:</strong> I “stole” the name of this chapter from <a href="https://www.amazon.com/Dont-Make-Think-Revisited-Usability/dp/0321965515/">Steve <!-- cspell:disable -->Krug’s<!-- cspell:enable --> book on web usability</a> with the same name.</p>
<h2>Dark patterns of JavaScript</h2>
<p>Let’s look at some examples. Try to cover the answers and guess what these code snippets do. Then, count how many you guessed correctly.</p>
<p><strong>Example 1:</strong></p>
<p><!-- eslint-skip --></p>
<pre><code>const percent = 5;
const percentString = percent.toString().concat('%');
</code></pre>
<p><!-- expect(percentString).toBe('5%') --></p>
<p>This code only adds the <code>%</code> sign to a number and should be rewritten as:</p>
<pre><code>const percent = 5;
const percentString = `${percent}%`;
// → '5%'
</code></pre>
<p><!-- expect(percentString).toBe('5%') --></p>
<p><strong>Example 2:</strong></p>
<p><!-- eslint-skip --></p>
<pre><code>const url = 'index.html?id=5';
if (~url.indexOf('id')) {
// Something fishy here…
}
</code></pre>
<p><!-- expect($1).toBe(true) --></p>
<p>The <code>~</code> symbol is called the <em>bitwise NOT</em> operator. Its useful effect here is that it returns a falsy value only when <code>indexOf()</code> returns <code>-1</code>. This code should be rewritten as:</p>
<pre><code>const url = 'index.html?id=5';
if (url.includes('id')) {
// Something fishy here…
}
</code></pre>
<p><!-- expect($1).toBe(true) --></p>
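<p>Why does the trick work at all? For integers, <code>~n</code> equals <code>-(n + 1)</code>, so <code>-1</code> is the only result <code>indexOf()</code> can return that <code>~</code> turns into the falsy <code>0</code>. A quick sketch of the identity:</p>

```javascript
// ~n flips all bits of a 32-bit integer, which equals -(n + 1)
const url = 'index.html?id=5';

console.log(~-1); // → 0, the only falsy case: “not found”
console.log(~0); // → -1, truthy
console.log(~url.indexOf('id')); // → -12, truthy: 'id' is found at index 11
```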
<p><strong>Example 3:</strong></p>
<p><!-- eslint-skip --></p>
<pre><code>const value = ~~3.14;
</code></pre>
<p><!-- expect(value).toBe(3) --></p>
<p>Another obscure use of the bitwise NOT operator is to discard the fractional portion of a number. Use <code>Math.trunc()</code> instead:</p>
<pre><code>const value = Math.trunc(3.14);
// → 3
</code></pre>
<p><!--
expect(value).toBe(3)
--></p>
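<p>The two aren’t fully interchangeable, by the way: bitwise operators coerce their operands to 32-bit integers, so <code>~~</code> silently produces garbage for numbers outside that range, while <code>Math.trunc()</code> keeps working. A small comparison:</p>

```javascript
const big = 2 ** 31 + 0.5; // 2147483648.5, just outside the int32 range

console.log(Math.trunc(-3.14)); // → -3
console.log(~~-3.14); // → -3, same result for small numbers
console.log(Math.trunc(big)); // → 2147483648
console.log(~~big); // → -2147483648, int32 overflow
```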
<p><strong>Example 4:</strong></p>
<p><!-- let dogs = [1], cats = [] -->
<!-- eslint-skip --></p>
<pre><code>if (dogs.length + cats.length > 0) {
// Something fishy here…
}
</code></pre>
<p><!-- expect($1).toBe(true) --></p>
<p>This one is understandable after a moment: it checks if either of the two arrays has any elements. However, it’s better to make it clearer:</p>
<p><!-- let dogs = [1], cats = [] --></p>
<pre><code>if (dogs.length > 0 || cats.length > 0) {
// Something fishy here…
}
</code></pre>
<p><!-- expect($1).toBe(true) --></p>
<p><strong>Example 5:</strong></p>
<p><!-- eslint-skip --></p>
<pre><code>const header = 'filename="pizza.rar"';
const filename = header.split('filename=')[1].slice(1, -1);
</code></pre>
<p><!-- expect(filename).toBe('pizza.rar') --></p>
<p>This one took me a while to understand. Imagine we have a portion of a URL, such as <code>filename="pizza.rar"</code>. First, we split the string by <code>filename=</code> and take the second part, <code>"pizza.rar"</code>. Then, we slice off the first and the last characters (the quotes) to get <code>pizza.rar</code>.</p>
<p>I’d likely use a regular expression here:</p>
<pre><code>const header = 'filename="pizza.rar"';
const filename = header.match(/filename="(.*?)"/)[1];
// → 'pizza.rar'
</code></pre>
<p><!-- expect(filename).toBe('pizza.rar') --></p>
<p>Or, even better, the <code>URLSearchParams</code> API:</p>
<p><!-- let URLSearchParams = window.URLSearchParams --></p>
<pre><code>const header = 'filename="pizza.rar"';
const filename = new URLSearchParams(header)
.get('filename')
.replaceAll(/^"|"$/g, '');
// → 'pizza.rar'
</code></pre>
<p><!-- expect(filename).toBe('pizza.rar') --></p>
<p><em>These quotes are weird, though. Normally we don’t need quotes around URL parameters, so talking to the backend developer could be a good idea.</em></p>
<p><strong>Example 6:</strong></p>
<p><!-- eslint-skip --></p>
<pre><code>const condition = true;
const object = {
...(condition && { value: 42 })
};
</code></pre>
<p><!-- expect(object).toEqual({ value: 42 }) --></p>
<p>In the code above, we add a property to an object when the condition is true, otherwise we do nothing. The intention is more obvious when we explicitly provide an object to spread in each branch rather than relying on spreading a falsy value:</p>
<pre><code>const condition = true;
const object = {
...(condition ? { value: 42 } : {})
};
// → { value: 42 }
</code></pre>
<p><!-- expect(object).toEqual({ value: 42 }) --></p>
<p>I usually prefer when objects don’t change shape, so I’d move the condition inside the <code>value</code> field:</p>
<pre><code>const condition = true;
const object = {
value: condition ? 42 : undefined
};
// → { value: 42 }
</code></pre>
<p><!-- expect(object).toEqual({ value: 42 }) --></p>
<p><strong>Example 7:</strong></p>
<p><!-- eslint-skip --></p>
<pre><code>const array = [...Array(10).keys()];
</code></pre>
<p><!-- expect(array).toEqual([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) --></p>
<p>This <a href="https://stackoverflow.com/questions/3746725/how-to-create-an-array-containing-1-n/33352604#33352604">wonderful one-liner</a> creates an array filled with numbers from 0 to 9. <code>Array(10)</code> creates an array with 10 <em>empty</em> elements, then the <code>keys()</code> method returns the keys (numbers from 0 to 9) as an iterator, which we then convert into a plain array using the spread syntax. Exploding head emoji…</p>
<p>We can rewrite it using a <code>for</code> loop:</p>
<pre><code>const array = [];
for (let i = 0; i < 10; i++) {
array.push(i);
}
// → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
</code></pre>
<p><!-- expect(array).toEqual([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) --></p>
<p>As much as I like to avoid loops in my code, the loop version is more readable for me.</p>
<p>Somewhere in the middle would be using the <code>Array.from()</code> method:</p>
<pre><code>const array = Array.from({ length: 10 }).map((_, i) => i);
// → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
</code></pre>
<p><!-- expect(array).toEqual([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) --></p>
<p><code>Array.from({length: 10})</code> creates an array with 10 <em>undefined</em> elements; then, using the <code>map()</code> method, we fill the array with numbers from 0 to 9.</p>
<p>We can write it shorter by using <code>Array.from()</code>’s map callback:</p>
<pre><code>const array = Array.from({ length: 10 }, (_, i) => i);
// → [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
</code></pre>
<p><!-- expect(array).toEqual([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) --></p>
<p>Explicit <code>map()</code> is slightly more readable, and we don’t need to remember what the second argument of <code>Array.from()</code> does. Additionally, <code>Array.from({length: 10})</code> is slightly more readable than <code>Array(10)</code>. Though only slightly.</p>
<p>So, what’s your score? I think mine would be around 3/7.</p>
<h2>Gray areas</h2>
<p>Some patterns tread the line between cleverness and readability.</p>
<p>For example, using <code>Boolean</code> to filter out falsy array elements (<code>null</code> and <code>0</code> in this example):</p>
<pre><code>const words = ['Not', null, 'enough', 0, 'cheese'].filter(Boolean);
// → ['Not', 'enough', 'cheese']
</code></pre>
<p><!-- expect(words).toEqual( ['Not', 'enough', 'cheese']) --></p>
<p>I find this pattern acceptable; though it requires learning, it’s better than the alternative:</p>
<p><!-- eslint-skip --></p>
<pre><code>const words = ['Not', null, 'enough', 0, 'cheese'].filter(
item => !!item
);
// → ['Not', 'enough', 'cheese']
</code></pre>
<p><!-- expect(words).toEqual( ['Not', 'enough', 'cheese']) --></p>
<p>However, keep in mind that both variations filter out <em>falsy</em> values, so if zeros or empty strings are important, we need to explicitly filter for <code>undefined</code> or <code>null</code>:</p>
<pre><code>const words = ['Not', null, 'enough', 0, 'cheese'].filter(
item => item != null
);
// → ['Not', 'enough', 0, 'cheese']
</code></pre>
<p><!-- expect(words).toEqual(['Not', 'enough', 0, 'cheese']) --></p>
<h2>Make differences in code obvious</h2>
<p>When I see two lines of tricky code that appear identical, I assume they differ in some way, but I don’t see the difference yet. Otherwise, a programmer would likely have created a variable or a function for the repeated code instead of copy-pasting it.</p>
<p>For example, we have code that generates test IDs for two different tools we use on a project, Enzyme and Codeception:</p>
<p><!-- const type = 'type', columnName = 'col', rowIndex = 2, toTitleCase = x => _.startCase(_.toLower(x)) --></p>
<pre><code>const props = {
'data-enzyme-id': columnName
? `${type}-${toTitleCase(columnName)}-${rowIndex}`
: null,
'data-codeception-id': columnName
? `${type}-${toTitleCase(columnName)}-${rowIndex}`
: null
};
// → {
// 'data-enzyme-id': 'type-Col-2',
// 'data-codeception-id': 'type-Col-2'
// }
</code></pre>
<p><!--
expect(props).toHaveProperty('data-enzyme-id', 'type-Col-2')
expect(props).toHaveProperty('data-codeception-id', 'type-Col-2')
--></p>
<p>It’s difficult to immediately spot any differences between these two lines of code. Remember those pairs of pictures where you had to find ten differences? That’s what this code does to the reader.</p>
<p>While I’m generally skeptical about extreme code DRYing, this is a good case for it.</p>
<p><strong>Info:</strong> We talk more about the Don’t repeat yourself principle in the <a href="/blog/divide/">Divide and conquer, or merge and relax</a> chapter.</p>
<p><!-- const type = 'type', columnName = 'col', rowIndex = 2, toTitleCase = x => _.startCase(_.toLower(x)) --></p>
<pre><code>const testId = columnName
? `${type}-${toTitleCase(columnName)}-${rowIndex}`
: null;
const props = {
'data-enzyme-id': testId,
'data-codeception-id': testId
};
// → {
// 'data-enzyme-id': 'type-Col-2',
// 'data-codeception-id': 'type-Col-2'
// }
</code></pre>
<p><!--
expect(props).toHaveProperty('data-enzyme-id', 'type-Col-2')
expect(props).toHaveProperty('data-codeception-id', 'type-Col-2')
--></p>
<p>Now, there’s no doubt that the code for both test IDs is exactly the same.</p>
<p>Let’s look at a trickier example. Suppose we use different naming conventions for each testing tool:</p>
<p><!-- const type = 'type', columnName = 'col', rowIndex = 2, toTitleCase = x => _.startCase(_.toLower(x)) --></p>
<pre><code>const props = {
'data-enzyme-id': columnName
? `${type}-${toTitleCase(columnName)}-${rowIndex}`
: null,
'data-codeception-id': columnName
? `${type}_${toTitleCase(columnName)}_${rowIndex}`
: null
};
// → {
// 'data-enzyme-id': 'type-Col-2',
// 'data-codeception-id': 'type_Col_2'
// }
</code></pre>
<p><!--
expect(props).toHaveProperty('data-enzyme-id', 'type-Col-2')
expect(props).toHaveProperty('data-codeception-id', 'type_Col_2')
--></p>
<p>The difference between these two lines of code is hard to notice, and we can never be sure that the name separator (<code>-</code> or <code>_</code>) is the only difference here.</p>
<p>In a project with such a requirement, this pattern will likely appear in many places. One way to improve it is to create functions that generate test IDs for each tool:</p>
<p><!-- const type = 'type', columnName = 'col', rowIndex = 2, toTitleCase = x => _.startCase(_.toLower(x)) --></p>
<pre><code>const joinEnzymeId = (...parts) => parts.join('-');
const joinCodeceptionId = (...parts) => parts.join('_');
const props = {
'data-enzyme-id': columnName
? joinEnzymeId(type, toTitleCase(columnName), rowIndex)
: null,
'data-codeception-id': columnName
? joinCodeceptionId(type, toTitleCase(columnName), rowIndex)
: null
};
</code></pre>
<p><!--
expect(props).toHaveProperty('data-enzyme-id', 'type-Col-2')
expect(props).toHaveProperty('data-codeception-id', 'type_Col_2')
--></p>
<p>This is already much better, but it’s not perfect yet — the repeated code is still too large. Let’s fix this too:</p>
<p><!-- const type = 'type', columnName = 'col', rowIndex = 2, toTitleCase = x => _.startCase(_.toLower(x)) --></p>
<pre><code>const joinEnzymeId = (...parts) => parts.join('-');
const joinCodeceptionId = (...parts) => parts.join('_');
const getTestIdProps = (...parts) => ({
'data-enzyme-id': joinEnzymeId(...parts),
'data-codeception-id': joinCodeceptionId(...parts)
});
const props = columnName
? getTestIdProps(type, toTitleCase(columnName), rowIndex)
: {};
</code></pre>
<p><!--
expect(props).toHaveProperty('data-enzyme-id', 'type-Col-2')
expect(props).toHaveProperty('data-codeception-id', 'type_Col_2')
--></p>
<p>This is an extreme case of using small functions, and I generally try to avoid splitting code this much. However, in this case, it works well, especially if there are already many places in the project where we can use the new <code>getTestIdProps()</code> function.</p>
<p>Sometimes, code that looks nearly identical has subtle differences:</p>
<p><!-- let result, dispatch = x => result = x, changeIsWordDocumentExportSuccessful = x => x --></p>
<pre><code>function handleSomething(documentId) {
if (documentId) {
dispatch(changeIsWordDocumentExportSuccessful(true));
return;
}
dispatch(changeIsWordDocumentExportSuccessful(false));
}
</code></pre>
<p><!--
handleSomething('pizza')
expect(result).toEqual(true);
handleSomething()
expect(result).toEqual(false);
--></p>
<p>The only difference here is the parameter we pass to the function with a very long name. We can move the condition inside the function call:</p>
<p><!-- let result, dispatch = x => result = x, changeIsWordDocumentExportSuccessful = x => x --></p>
<pre><code>function handleSomething(documentId) {
dispatch(
changeIsWordDocumentExportSuccessful(documentId !== undefined)
);
}
</code></pre>
<p><!--
handleSomething('pizza')
expect(result).toEqual(true);
handleSomething()
expect(result).toEqual(false);
--></p>
<p>This eliminates the similar code, making the entire snippet shorter and easier to understand.</p>
<p>Whenever we encounter a condition that makes code slightly different, we should ask ourselves: is this condition truly necessary? If the answer is “yes”, we should ask ourselves again. Often, there’s no <em>real</em> need for a particular condition. For example, why do we even need to add test IDs for different tools separately? Can’t we configure one of the tools to use test IDs of the other? If we dig deep enough, we might find that no one knows the answer, or that the original reason is no longer relevant.</p>
<p>Consider this example:</p>
<p><!-- eslint-skip --></p>
<pre><code>function getAssetDirs(config) {
return config.assetsDir
? Array.isArray(config.assetsDir)
? config.assetsDir.map(dir => ({ from: dir }))
: [{ from: config.assetsDir }]
: [];
}
</code></pre>
<p><!--
expect(getAssetDirs({})).toEqual([])
expect(getAssetDirs({assetsDir: 'pizza'})).toEqual([{from: 'pizza'}])
expect(getAssetDirs({assetsDir: ['pizza', 'tacos']})).toEqual([{from: 'pizza'}, {from: 'tacos'}])
--></p>
<p>This code handles two edge cases: when <code>assetsDir</code> doesn’t exist, and when <code>assetsDir</code> isn’t an array. Also, the object generation code is duplicated. <em>(And let’s not talk about nested ternaries…)</em> We can get rid of the duplication and at least one condition:</p>
<p><!-- eslint-disable unicorn/prevent-abbreviations --></p>
<pre><code>function getAssetDirs(config) {
const assetDirectories = config.assetsDir
? _.castArray(config.assetsDir)
: [];
return assetDirectories.map(from => ({
from
}));
}
</code></pre>
<p><!--
expect(getAssetDirs({})).toEqual([])
expect(getAssetDirs({assetsDir: 'pizza'})).toEqual([{from: 'pizza'}])
expect(getAssetDirs({assetsDir: ['pizza', 'tacos']})).toEqual([{from: 'pizza'}, {from: 'tacos'}])
--></p>
<p>I don’t like that Lodash’s <a href="https://lodash.com/docs#castArray"><code>castArray()</code> method</a> wraps <code>undefined</code> in an array, which isn’t what I’d expect, but still, the result is simpler.</p>
<h2>Avoid shortcuts</h2>
<p>CSS has <a href="https://developer.mozilla.org/en-US/docs/Web/CSS/Shorthand_properties">shorthand properties</a>, and developers often overuse them. The idea is that a single property can define multiple properties at the same time. Here’s a good example:</p>
<pre><code>.block {
margin: 1rem;
}
</code></pre>
<p>Which is the same as:</p>
<pre><code>.block {
margin-top: 1rem;
margin-right: 1rem;
margin-bottom: 1rem;
margin-left: 1rem;
}
</code></pre>
<p>One line of code instead of four, and it’s still clear what’s happening: we set the same margin on all four sides of an element.</p>
<p>Now, look at this example:</p>
<pre><code>.block-1 {
margin: 1rem 2rem 3rem 4rem;
}
.block-3 {
margin: 1rem 2rem 3rem;
}
.block-2 {
margin: 1rem 2rem;
}
</code></pre>
<p>To understand what they do, we need to know that:</p>
<ul>
<li>when the <code>margin</code> property has four values, the order is top, right, bottom, left;</li>
<li>when it has three values, the order is top, left/right, bottom;</li>
<li>and when it has two values, the order is top/bottom, left/right.</li>
</ul>
<p>This creates an unnecessary cognitive load, and makes code harder to read, edit, and review. I avoid such shorthands.</p>
<p>Another issue with shorthand properties is that they can set values for properties we didn’t intend to change. Consider this example:</p>
<pre><code>.block {
font: italic bold 2rem Helvetica;
}
</code></pre>
<p>This declaration sets the Helvetica font family, the font size of 2rem, and makes the text italic and bold. What we don’t see here is that it also changes the line height to the default value of <code>normal</code>.</p>
<p>My rule of thumb is to use shorthand properties only when setting a single value; otherwise, I prefer longhand properties.</p>
<p>Here are some good examples:</p>
<pre><code>.block {
/* Set margin on all four sides */
margin: 1rem;
/* Set top/bottom and left/right margins */
margin-block: 1rem;
margin-inline: 2rem;
/* Set border radius to all four corners */
border-radius: 0.5rem;
/* Set border-width, border-style and border-color
* This is a bit of an outlier but it’s very common and
* it’s hard to misinterpret it because all values have
* different types */
border: 1px solid #c0ffee;
/* Set top, right, bottom, and left position */
inset: 0;
}
</code></pre>
<p>And here are some examples to avoid:</p>
<pre><code>.block {
/* Set top/bottom and left/right margin */
margin: 1rem 2rem;
/* Set border radius to top-left/bottom-right,
* and top-right/bottom-left corners */
border-radius: 1em 2em;
/* Set border radius to top-left, top-right/bottom-left,
* and bottom-right corners */
border-radius: 1em 2em 3em;
/* Set border radius to top-left, top-right, bottom-right,
* and bottom-left corners */
border-radius: 1em 2em 3em 4em;
/* Set background-color, background-image, background-repeat,
* and background-position */
background: #bada55 url(images/tacocat.gif) no-repeat left top;
/* Set top, right, bottom, and left */
inset: 0 20px 0 20px;
}
</code></pre>
<p>While shorthand properties indeed make the code shorter, they often make it significantly harder to read, so use them with caution.</p>
<h2>Write parallel code</h2>
<p>Eliminating conditions isn’t always possible. However, there are ways to make differences in code branches easier to spot. One of my favorite approaches is what I call <em>parallel coding</em>.</p>
<p>Consider this example:</p>
<p><!-- let Link = ({href}) => href --></p>
<pre><code>function RecipeName({ name, subrecipe }) {
if (subrecipe) {
return <Link href={`/recipes/${subrecipe.slug}`}>{name}</Link>;
}
return name;
}
</code></pre>
<p><!--
const {container: c1} = RTL.render(<RecipeName name="Tacos" subrecipe={{slug: 'salsa', name: 'Salsa'}} />);
expect(c1.textContent).toEqual('/recipes/salsa')
--></p>
<p>It might be a personal pet peeve, but I dislike when the <code>return</code> statements are on different levels, making them harder to compare. Let’s add an <code>else</code> statement to fix this:</p>
<p><!-- let Link = ({href}) => href --></p>
<pre><code>function RecipeName({ name, subrecipe }) {
if (subrecipe) {
return <Link href={`/recipes/${subrecipe.slug}`}>{name}</Link>;
} else {
return name;
}
}
</code></pre>
<p><!--
const {container: c1} = RTL.render(<RecipeName name="Tacos" subrecipe={{slug: 'salsa', name: 'Salsa'}} />);
expect(c1.textContent).toEqual('/recipes/salsa')
--></p>
<p>Now, both return values are at the same indentation level, making them easier to compare. This pattern works when none of the condition branches are handling errors, in which case an early return would be a better approach.</p>
<p><strong>Info:</strong> We talk about early returns in the <a href="/blog/avoid-conditions/">Avoid conditions</a> chapter.</p>
<p>Here’s another example:</p>
<p><!--
let Button = ({link}) => <button>{link}</button>
let previewLink = 'http://example.com'
let onOpenViewConfirmation = () => {}
let Render = ({platform: Platform}) => { return (
--></p>
<p><!-- eslint-skip --></p>
<pre><code><Button
onPress={Platform.OS !== 'web' ? onOpenViewConfirmation : undefined}
link={Platform.OS === 'web' ? previewLink : undefined}
target="_empty"
>
Continue
</Button>
</code></pre>
<p><!--
)}
const {container: c1} = RTL.render(<Render platform={{OS: 'web'}} />);
expect(c1.textContent).toEqual(previewLink)
const {container: c2} = RTL.render(<Render platform={{OS: 'native'}} />);
expect(c2.textContent).toEqual('')
--></p>
<p>In this example, we have a button that behaves like a link in the browser and shows a confirmation modal in an app. The reversed condition for the <code>onPress</code> prop makes this logic hard to see.</p>
<p>Let’s make both conditions positive:</p>
<p><!--
let Button = ({link}) => <button>{link}</button>
let previewLink = 'http://example.com'
let onOpenViewConfirmation = () => {}
let Render = ({platform: Platform}) => { return (
--></p>
<pre><code><Button
onPress={Platform.OS === 'web' ? undefined : onOpenViewConfirmation}
link={Platform.OS === 'web' ? previewLink : undefined}
target="_empty"
>
Continue
</Button>
</code></pre>
<p><!--
)}
const {container: c1} = RTL.render(<Render platform={{OS: 'web'}} />);
expect(c1.textContent).toEqual(previewLink)
const {container: c2} = RTL.render(<Render platform={{OS: 'native'}} />);
expect(c2.textContent).toEqual('')
--></p>
<p>Now, it’s clear that we either set <code>onPress</code> or <code>link</code> props depending on the platform.</p>
<p>We can stop here or take it a step further, depending on the number of <code>Platform.OS === 'web'</code> conditions in the component or how many props we need to set conditionally.</p>
<p>We can extract the conditional props into a separate variable:</p>
<p><!--
let Platform = {OS: 'web'}
let onOpenViewConfirmation = () => {}
let previewLink = 'http://example.com'
--></p>
<pre><code>const buttonProps =
Platform.OS === 'web'
? {
link: previewLink,
target: '_empty'
}
: {
onPress: onOpenViewConfirmation
};
</code></pre>
<p><!-- expect(buttonProps).toHaveProperty('target', '_empty') --></p>
<p>Then, use it instead of hardcoding the entire condition every time:</p>
<p><!--
let Button = ({link}) => <button>{link}</button>
let previewLink = 'http://example.com'
let onOpenViewConfirmation = () => {}
let Render = ({platform: Platform}) => {
const buttonProps = Platform.OS === 'web'
? {
link: previewLink,
target: '_empty'
}
: {
onPress: onOpenViewConfirmation
};
return (
--></p>
<pre><code><Button {...buttonProps}>Continue</Button>
</code></pre>
<p><!--
)}
const {container: c1} = RTL.render(<Render platform={{OS: 'web'}} />);
expect(c1.textContent).toEqual(previewLink)
const {container: c2} = RTL.render(<Render platform={{OS: 'native'}} />);
expect(c2.textContent).toEqual('')
--></p>
<p>I also moved the <code>target</code> prop to the web branch because it’s not used by the app anyway.</p>
<hr />
<p>When I was in my twenties, remembering things wasn’t much of a problem for me. I could recall books I’d read and all the functions in a project I was working on. Now that I’m in my forties, that’s no longer the case. I now value simple code that doesn’t use any tricks; I value search engines, quick access to the documentation, and tooling that help me to reason about the code and navigate the project without keeping everything in my head.</p>
<p>We shouldn’t write code for our present selves but for who we’ll be a few years from now. Thinking is hard, and programming demands a lot of it, even without having to decipher tricky or unclear code.</p>
<p>Start thinking about:</p>
<ul>
<li>Whether there’s a simpler, more readable way to write that short, clever piece of code you’re feeling smart about.</li>
<li>Whether a condition that makes code slightly different is truly necessary.</li>
<li>Whether a shortcut makes the code shorter but still readable, or just shorter.</li>
</ul>
<hr />
<p>Read other sample chapters of the book:</p>
<ul>
<li><a href="/blog/avoid-loops/">Avoid loops</a></li>
<li><a href="/blog/avoid-conditions/">Avoid conditions</a></li>
<li><a href="/blog/avoid-reassigning-variables/">Avoid reassigning variables</a></li>
<li><a href="/blog/avoid-mutation/">Avoid mutation</a></li>
<li><a href="/blog/avoid-comments/">Avoid comments</a></li>
<li><a href="/blog/naming/">Naming is hard</a></li>
<li><a href="/blog/divide/">Divide and conquer, or merge and relax</a></li>
<li><em>Don’t make me think (this post)</em></li>
</ul>
<hr />
<h1>Washing your code: divide and conquer, or merge and relax</h1>
<p><a href="https://sapegin.me/blog/divide/">https://sapegin.me/blog/divide/</a> · Thu, 10 Oct 2024</p>
<p><!-- description: Splitting code into functions and modules, when the right time is to introduce an abstraction, and when it’s better to sleep on it --></p>
<p>Knowing how to organize code into modules or functions, and when the right time is to introduce an abstraction instead of duplicating code, is an important skill. Writing generic code that others can effectively use is yet another skill. There are just as many reasons to split the code as there are to keep it together. In this chapter, we’ll discuss some of these reasons.</p>
<h2>Let abstractions grow</h2>
<p>We, developers, hate to do the same work twice. DRY is a mantra for many. However, when we have two or three pieces of code that kind of do the same thing, it may still be too early to introduce an abstraction, no matter how tempting it may feel.</p>
<p><strong>Info:</strong> The <a href="https://en.wikipedia.org/wiki/Don%27t_repeat_yourself">Don’t repeat yourself</a> (DRY) principle demands that “every piece of knowledge must have a single, unambiguous, authoritative representation within a system”, which is often interpreted as <em>any code duplication is strictly verboten</em>.</p>
<p>Live with the pain of code duplication for a while; maybe it’s not so bad in the end, and the code is actually not exactly the same. Some level of code duplication is healthy and allows us to iterate and evolve code faster without the fear of breaking something.</p>
<p>It’s also hard to come up with a good API when we only consider a couple of use cases.</p>
<p>Managing shared code in large projects with many developers and teams is difficult. New requirements for one team may not work for another team and break their code, or we end up with an unmaintainable spaghetti monster with dozens of conditions.</p>
<p>Imagine Team A is adding a comment form to their page: a name, a message, and a submit button. Then, Team B needs a feedback form, so they find Team A’s component and try to reuse it. Then, Team A also wants an email field, but they don’t know that Team B uses their component, so they add a required email field and break the feature for Team B users. Then, Team B needs a phone number field, but they know that Team A is using the component without it, so they add an option to show a phone number field. A year later, the two teams hate each other for breaking each other’s code, and the component is full of conditions and is impossible to maintain. Both teams would save a lot of time and have healthier relationships with each other if they maintained separate components composed of lower-level shared components, like an input field or a button.</p>
<p><strong>Tip:</strong> It might be a good idea to forbid other teams from using our code unless it’s designed and marked as shared. The <a href="https://github.com/sverweij/dependency-cruiser">Dependency cruiser</a> is a tool that could help set up such rules.</p>
<p>Sometimes, we have to roll back an abstraction. When we start adding conditions and options, we should ask ourselves: is it still a variation of the same thing or a new thing that should be separated? Adding too many conditions and parameters to a module can make the API hard to use and the code hard to maintain and test.</p>
<p>Duplication is cheaper and healthier than the wrong abstraction.</p>
<p><strong>Info:</strong> See Sandi Metz’s article <a href="https://sandimetz.com/blog/2016/1/20/the-wrong-abstraction">The Wrong Abstraction</a> for a great explanation.</p>
<p>The higher the level of the code, the longer we should wait before we abstract it. Low-level utility abstractions are much more obvious and stable than business logic.</p>
<h2>Size doesn’t always matter</h2>
<p><em>Code reuse</em> isn’t the only, or even most important, reason to extract a piece of code into a separate function or module.</p>
<p><em>Code length</em> is often <a href="https://softwareengineering.stackexchange.com/questions/27798/what-is-proven-as-a-good-maximum-length-of-a-function">used as a metric</a> for when we should split a module or a function, but size alone doesn’t make code hard to read or maintain.</p>
<p>Splitting a linear algorithm, even a long one, into several functions and then calling them one after another rarely makes the code more readable. Jumping between functions (and even more so — files) is harder than scrolling, and if we have to look into each function’s implementation to understand the code, then the abstraction wasn’t the right one.</p>
<p><strong>Info:</strong> Egon Elbre wrote a nice article on <a href="https://egonelbre.com/psychology-of-code-readability/">psychology of code readability</a>.</p>
<p>Here’s an example, adapted from the <a href="https://testing.googleblog.com/2023/09/use-abstraction-to-improve-function.html">Google Testing Blog</a>:</p>
<p><!--
let checkOvenInterval = 100
let cookTime = 2000
let cookingTemp = 240
let vegToppings = ['champignon'], meatToppings = ['pig']
let time = {sleep: () => {}}
let getOvenTemp = () => 250
class Pizza {
toppings = []
baked = false
boxed = null
sliced = null
ready = null
base = null
sauce = null
cheese = null
constructor({base, sauce, cheese}) {
this.base = base
this.sauce = sauce
this.cheese = cheese
}
}
class Oven {
temp = 20
insert() {}
remove() {}
}
class Box {
putIn() { return true }
slicePizza() { return true }
close() { return true }
}
--></p>
<pre><code>function createPizza(order) {
const pizza = new Pizza({
base: order.size,
sauce: order.sauce,
cheese: 'Mozzarella'
});
if (order.kind === 'Veg') {
pizza.toppings = vegToppings;
} else if (order.kind === 'Meat') {
pizza.toppings = meatToppings;
}
const oven = new Oven();
if (oven.temp !== cookingTemp) {
while (oven.temp < cookingTemp) {
time.sleep(checkOvenInterval);
oven.temp = getOvenTemp(oven);
}
}
if (!pizza.baked) {
oven.insert(pizza);
time.sleep(cookTime);
oven.remove(pizza);
pizza.baked = true;
}
const box = new Box();
pizza.boxed = box.putIn(pizza);
pizza.sliced = box.slicePizza(order.size);
pizza.ready = box.close();
return pizza;
}
</code></pre>
<p><!--
let pizza = createPizza({size: 30, sauce: 'red', kind: 'Meat'})
expect(pizza).toEqual({
baked: true,
base: 30,
sauce: 'red',
cheese: 'Mozzarella',
toppings: ['pig'],
boxed: true,
sliced: true,
ready: true
})
--></p>
<p>I have so many questions about the API of the <code>Pizza</code> class, but let’s see what improvements the authors suggest:</p>
<p><!--
let checkOvenInterval = 100
let cookTime = 2000
let cookingTemp = 240
let vegToppings = ['champignon'], meatToppings = ['pig']
let time = {sleep: () => {}}
let getOvenTemp = () => 250</p>
<p>class Pizza {
toppings = []
baked = false
boxed = null
sliced = null
ready = null
base = null
sauce = null
cheese = null
constructor({base, sauce, cheese}) {
this.base = base
this.sauce = sauce
this.cheese = cheese
}
}
class Oven {
temp = 20
insert() {}
remove() {}
}
class Box {
putIn() { return true }
slicePizza() { return true }
close() { return true }
}
--></p>
<pre><code>function prepare(order) {
const pizza = new Pizza({
base: order.size,
sauce: order.sauce,
cheese: 'Mozzarella'
});
addToppings(pizza, order.kind);
return pizza;
}
function addToppings(pizza, kind) {
if (kind === 'Veg') {
pizza.toppings = vegToppings;
} else if (kind === 'Meat') {
pizza.toppings = meatToppings;
}
}
function bake(pizza) {
const oven = new Oven();
heatOven(oven);
bakePizza(pizza, oven);
}
function heatOven(oven) {
if (oven.temp !== cookingTemp) {
while (oven.temp < cookingTemp) {
time.sleep(checkOvenInterval);
oven.temp = getOvenTemp(oven);
}
}
}
function bakePizza(pizza, oven) {
if (!pizza.baked) {
oven.insert(pizza);
time.sleep(cookTime);
oven.remove(pizza);
pizza.baked = true;
}
}
function pack(pizza) {
const box = new Box();
pizza.boxed = box.putIn(pizza);
  pizza.sliced = box.slicePizza(pizza.base);
pizza.ready = box.close();
}
function createPizza(order) {
const pizza = prepare(order);
bake(pizza);
pack(pizza);
return pizza;
}
</code></pre>
<p><!--
let pizza = createPizza({size: 30, sauce: 'red', kind: 'Meat'})
expect(pizza).toEqual({
baked: true,
base: 30,
sauce: 'red',
cheese: 'Mozzarella',
toppings: ['pig'],
boxed: true,
sliced: true,
ready: true
})
--></p>
<p>What was already complex and convoluted is now even more complex and convoluted, and half of the code is just function calls. This doesn’t make the code any easier to understand, but it does make it almost impossible to work with. The article doesn’t show the complete code of the refactored version, perhaps to make the point more compelling.</p>
<p>Pierre “catwell” Chapuis <a href="https://blog.separateconcerns.com/2023-09-11-linear-code.html">suggests in his blog post</a> adding comments instead of extracting new functions:</p>
<p><!--
let checkOvenInterval = 100
let cookTime = 2000
let cookingTemp = 240
let vegToppings = ['champignon'], meatToppings = ['pig']
let time = {sleep: () => {}}
let getOvenTemp = () => 250</p>
<p>class Pizza {
toppings = []
baked = false
boxed = null
sliced = null
ready = null
base = null
sauce = null
cheese = null
constructor({base, sauce, cheese}) {
this.base = base
this.sauce = sauce
this.cheese = cheese
}
}
class Oven {
temp = 20
insert() {}
remove() {}
}
class Box {
putIn() { return true }
slicePizza() { return true }
close() { return true }
}
--></p>
<pre><code>function createPizza(order) {
// Prepare pizza
const pizza = new Pizza({
base: order.size,
sauce: order.sauce,
cheese: 'Mozzarella'
});
// Add toppings
if (order.kind == 'Veg') {
pizza.toppings = vegToppings;
} else if (order.kind == 'Meat') {
pizza.toppings = meatToppings;
}
const oven = new Oven();
if (oven.temp !== cookingTemp) {
// Heat oven
while (oven.temp < cookingTemp) {
time.sleep(checkOvenInterval);
oven.temp = getOvenTemp(oven);
}
}
if (!pizza.baked) {
// Bake pizza
oven.insert(pizza);
time.sleep(cookTime);
oven.remove(pizza);
pizza.baked = true;
}
// Box and slice
const box = new Box();
pizza.boxed = box.putIn(pizza);
pizza.sliced = box.slicePizza(order.size);
pizza.ready = box.close();
return pizza;
}
</code></pre>
<p><!--
let pizza = createPizza({size: 30, sauce: 'red', kind: 'Meat'})
expect(pizza).toEqual({
baked: true,
base: 30,
sauce: 'red',
cheese: 'Mozzarella',
toppings: ['pig'],
boxed: true,
sliced: true,
ready: true
})
--></p>
<p>This is already much better than the split version. An even better solution would be improving the APIs and making the code clearer. Pierre suggests that preheating the oven shouldn’t be part of the <code>createPizza()</code> function (and, having baked many pizzas myself, I totally agree!) because in real life the oven is already there and probably already hot from the previous pizza. Pierre also suggests that the function should return the box, not the pizza, because in the original code, the box kind of disappears after all the slicing and packaging magic, and we end up with the sliced pizza in our hands.</p>
<p>There are many ways to cook a pizza, just as there are many ways to code a problem. The result may look the same, but some solutions are easier to understand, modify, reuse, and delete than others.</p>
<p>Naming can also be a problem when all the extracted functions are parts of the same algorithm. We need to invent names that are clearer than the code and shorter than comments — not an easy task.</p>
<p><strong>Info:</strong> We talk about commenting code in the <a href="/blog/avoid-comments/">Avoid comments</a> chapter, and about naming in the <a href="/blog/naming/">Naming is hard</a> chapter.</p>
<p>You probably won’t find many small functions in my code. In my experience, the most useful reasons to split code are <em>change frequency</em> and <em>change reason</em>.</p>
<h2>Separate code that changes often</h2>
<p>Let’s start with <em>change frequency</em>. Business logic changes much more often than utility functions. It makes sense to separate often-changing code from code that is very stable.</p>
<p>The comment form we discussed earlier in this chapter is an example of the former; a function that converts camelCase strings to kebab-case is an example of the latter. The comment form is likely to change and diverge over time when new business requirements arise; the case conversion function is unlikely to change at all and it’s safe to reuse in many places.</p>
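<p>For illustration, such a stable low-level utility could be as small as this (a sketch of my own, not code from the book; the regex only handles plain camelCase input):</p>

```javascript
// Insert a hyphen before each uppercase letter, then lowercase everything
function camelToKebab(string) {
  return string.replace(/([a-z0-9])([A-Z])/g, '$1-$2').toLowerCase();
}

camelToKebab('borderTopColor');
// → 'border-top-color'
```

<p>There’s no business logic here that could diverge, so it’s safe to reuse this function across the whole codebase.</p>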
<p>Imagine that we’re making a nice-looking table to display some data. We may think we’ll never need this table design again, so we decide to keep all the code for the table in a single module.</p>
<p>Next sprint, we get a task to add another column to the table, so we copy the code of an existing column and change a few lines there. Next sprint, we need to add another table with the same design. Next sprint, we need to change the design of the tables…</p>
<p>Our table module has at least three <em>reasons to change</em>, or <em>responsibilities</em>:</p>
<ul>
<li>new business requirements, like a new table column;</li>
<li>UI or behavior changes, like adding sorting or column resizing;</li>
<li>design changes, like replacing borders with striped row backgrounds.</li>
</ul>
<p>This makes the module harder to understand and harder to change. Presentational code adds a lot of verbosity, making it harder to understand the business logic. To make a change in any of the responsibilities, we need to read and modify more code. This makes it harder and slower to iterate on either.</p>
<p>Having a generic table as a separate module solves this problem. Now, to add another column to a table, we only need to understand and modify one of the two modules. We don’t need to know anything about the generic table module except its public API. To change the design of all tables, we only need to change the generic table module’s code and likely don’t need to touch individual tables at all.</p>
<p>However, depending on the complexity of the problem, it’s okay, and often better, to start with a monolithic approach and extract an abstraction later.</p>
<p>Even code reuse can be a valid reason to separate code: if we use some component on one page, we’ll likely need it on another page soon.</p>
<h2>Keep together code that changes at the same time</h2>
<p>It might be tempting to extract every function into its own module. However, it has downsides too:</p>
<ul>
<li>Other developers may think that they can reuse the function somewhere else, but in reality, this function is likely not generic or tested enough to be reused.</li>
<li>Creating, importing, and switching between multiple files creates unnecessary overhead when the function is only used in one place.</li>
<li>Such functions often stay in the codebase long after the code that used them is gone.</li>
</ul>
<p>I prefer to keep small functions that are used only in one module at the beginning of that module. This way, we don’t need to import them to use them in the same module, but reusing them somewhere else would be awkward.</p>
<p><!--
let PageWithTitle = ({children}) => children
let Stack = ({children}) => children
let Heading = ({children}) => children
let Text = ({children}) => children
let Link = ({children}) => children
--></p>
<pre><code>function FormattedAddress({ address, city, country, district, zip }) {
return [address, zip, district, city, country]
.filter(Boolean)
.join(', ');
}
function getMapLink({ name, address, city, country, zip }) {
return `https://www.google.com/maps/?q=${encodeURIComponent(
[name, address, zip, city, country].filter(Boolean).join(', ')
)}`;
}
function ShopsPage({ url, title, shops }) {
return (
<PageWithTitle url={url} title={title}>
<Stack as="ul" gap="l">
{shops.map(shop => (
<Stack key={shop.name} as="li" gap="m">
<Heading level={2}>
<Link href={shop.url}>{shop.name}</Link>
</Heading>
{shop.address && (
<Text variant="small">
<Link href={getMapLink(shop)}>
<FormattedAddress {...shop} />
</Link>
</Text>
)}
</Stack>
))}
</Stack>
</PageWithTitle>
);
}
</code></pre>
<p><!--
const {container: c1} = RTL.render(
<ShopsPage url="/s" title="Shops" shops={[
{name: 'Tacos', url: '/tacos', address: 'Bright street',
city: 'Valencia', country: 'Spain'},
{name: 'Pizza', url: '/pizza', address: 'Dark street',
city: 'Berlin', country: 'Germany'},
]} />
)
expect(c1.textContent).toEqual(
'TacosBright street, Valencia, SpainPizzaDark street, Berlin, Germany'
)
--></p>
<p>In the code above, we have a component (<code>FormattedAddress</code>) and a function (<code>getMapLink()</code>) that are only used in this module, so they’re defined at the top of the file.</p>
<p>If we need to test these functions (and we should!), we can export them from the module and test them together with the main function of the module.</p>
<p>The same applies to functions that are intended to be used only together with a certain function or component. Keeping them in the same module makes it clearer that all functions belong together and makes these functions more discoverable.</p>
<p>Another benefit is that when we delete a module, we automatically delete its dependencies. Code in shared modules often stays in the codebase forever because it’s hard to know if it’s still used (though TypeScript makes this easier).</p>
<p><strong>Info:</strong> Such modules are sometimes called <em>deep modules</em>: relatively large modules that encapsulate complex problems but have simple APIs. The opposite of deep modules are <em>shallow modules</em>: many small modules that need to interact with each other.</p>
<p>If we often have to change several modules or functions at the same time, it might be better to merge them into a single module or function. This approach is sometimes called <em>colocation</em>.</p>
<p>Here are a couple of examples of colocation:</p>
<ul>
<li>React components: keeping everything a component needs in the same file, including markup (JSX), styles (CSS in JS), and logic, rather than separating each into its own file, likely in a separate folder.</li>
<li>Tests: keeping tests next to the module file rather than in a separate folder.</li>
<li><a href="https://github.com/erikras/ducks-modular-redux">Ducks convention</a> for Redux: keep related actions, action creators, and reducers in the same file rather than having them in three files in separate folders.</li>
</ul>
<p>Here’s how the file tree changes with colocation:</p>
<table>
<thead>
<tr>
<th>Separated</th>
<th>Colocated</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>React components</strong></td>
<td></td>
</tr>
<tr>
<td><code>src/components/Button.tsx</code></td>
<td><code>src/components/Button.tsx</code></td>
</tr>
<tr>
<td><code>styles/Button.css</code></td>
<td></td>
</tr>
<tr>
<td><strong>Tests</strong></td>
<td></td>
</tr>
<tr>
<td><code>src/util/formatDate.ts</code></td>
<td><code>src/util/formatDate.ts</code></td>
</tr>
<tr>
<td><code>tests/formatDate.ts</code></td>
<td><code>src/util/formatDate.test.ts</code></td>
</tr>
<tr>
<td><strong>Ducks</strong></td>
<td></td>
</tr>
<tr>
<td><code>src/actions/feature.js</code></td>
<td><code>src/ducks/feature.js</code></td>
</tr>
<tr>
<td><code>src/actionCreators/feature.js</code></td>
<td></td>
</tr>
<tr>
<td><code>src/reducers/feature.js</code></td>
<td></td>
</tr>
</tbody>
</table>
<p><strong>Info:</strong> To learn more about colocation, read <a href="https://kentcdodds.com/blog/colocation">Kent C. Dodds’s article</a>.</p>
<p>A common complaint about colocation is that it makes components too large. In such cases, it’s better to extract some parts into their own components, along with the markup, styles, and logic.</p>
<p>The idea of colocation also conflicts with <em>separation of concerns</em> — an outdated idea that led web developers to keep HTML, CSS, and JavaScript in separate files (and often in separate parts of the file tree) for too long, forcing us to edit three files at the same time to make even the most basic changes to web pages.</p>
<p><strong>Info:</strong> The <em>change reason</em> is also known as the <a href="https://en.wikipedia.org/wiki/Single_responsibility_principle">single responsibility principle</a>, which states that “every module, class, or function should have responsibility over a single part of the functionality provided by the software, and that responsibility should be entirely encapsulated by the class.”</p>
<h2>Sweep that ugly code under the rug</h2>
<p>Sometimes, we have to work with an API that’s especially difficult to use or prone to errors. For example, it requires several steps in a specific order or calling a function with multiple parameters that are always the same. This is a good reason to create a utility function to make sure we always do it right. As a bonus, we can now write tests for this piece of code.</p>
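<p>A hypothetical example: if every price in an app must be formatted with the same locale and options, a tiny wrapper guarantees we never pass inconsistent parameters to <code>Intl.NumberFormat</code>:</p>

```javascript
// The one place that knows how prices are formatted in this app
function formatPrice(value) {
  return new Intl.NumberFormat('en-US', {
    style: 'currency',
    currency: 'USD'
  }).format(value);
}

formatPrice(1234.5);
// → '$1,234.50'
```

<p>Callers no longer need to remember the right options, and we can test the formatting rules in one place.</p>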
<p>String manipulation — such as working with URLs, filenames, case conversion, or formatting — is a good candidate for abstraction. Most likely, there’s already a library for what we’re trying to do.</p>
<p>Consider this example:</p>
<pre><code>const file = 'pizza.jpg';
const prefix = file.slice(0, -4);
// → 'pizza'
</code></pre>
<p><!-- expect(prefix).toEqual('pizza') --></p>
<p>It takes some time to realize that this code removes the file extension and returns the base name. Not only is it hard to read, but it also assumes the extension is always three characters, which might not be the case.</p>
<p>Let’s rewrite it using Node.js’s built-in <code>path</code> module:</p>
<pre><code>const file = 'pizza.jpg';
const prefix = path.parse(file).name;
// → 'pizza'
</code></pre>
<p><!-- expect(prefix).toEqual('pizza') --></p>
<p>Now it’s clear what’s happening: there are no magic numbers, and the code works with file extensions of any length.</p>
<p>Other candidates for abstraction include dates, device capabilities, forms, data validation, internationalization, and more. I recommend looking for an existing library before writing a new utility function. We often underestimate the complexity of seemingly simple functions.</p>
<p>Here are a few examples of such libraries:</p>
<ul>
<li><a href="https://lodash.com">Lodash</a>: utility functions of all kinds.</li>
<li><a href="https://date-fns.org">date-fns</a>: functions to work with dates, such as parsing, manipulation, and formatting.</li>
<li><a href="https://zod.dev">Zod</a>: schema validation for TypeScript.</li>
</ul>
<h2>Bless the inline refactoring!</h2>
<p>Sometimes, we get carried away and create abstractions that neither simplify the code nor make it shorter:</p>
<pre><code>// my_feature_util.js
const noop = () => {};
export const Utility = {
noop
// Many more functions…
};
// MyComponent.js
function MyComponent({ onClick }) {
return <button onClick={onClick}>Hola!</button>;
}
MyComponent.defaultProps = {
onClick: Utility.noop
};
</code></pre>
<p><!--
expect(Utility.noop()).toEqual(undefined)
expect(MyComponent.defaultProps.onClick()).toEqual(undefined)
--></p>
<p>Another example:</p>
<p><!-- eslint-skip --></p>
<pre><code>const findByReference = (wrapper, reference) =>
wrapper.find(reference);
const favoriteTaco = findByReference(
['Al pastor', 'Cochinita pibil', 'Barbacoa'],
x => x === 'Cochinita pibil'
);
// → 'Cochinita pibil'
</code></pre>
<p><!-- expect(favoriteTaco).toEqual('Cochinita pibil') --></p>
<p>The best thing we can do in such cases is to apply the almighty <em>inline refactoring</em>: replace each function call with its body. No abstraction, no problem.</p>
<p>The first example becomes:</p>
<pre><code>function MyComponent({ onClick }) {
return <button onClick={onClick}>Hola!</button>;
}
MyComponent.defaultProps = {
onClick: () => {}
};
</code></pre>
<p><!--
expect(MyComponent.defaultProps.onClick()).toEqual(undefined)
--></p>
<p>And the second example becomes:</p>
<pre><code>const favoriteTaco = [
'Al pastor',
'Cochinita pibil',
'Barbacoa'
].find(x => x === 'Cochinita pibil');
// → 'Cochinita pibil'
</code></pre>
<p><!-- expect(favoriteTaco).toEqual('Cochinita pibil') --></p>
<p>The result is not just shorter and more readable; readers also no longer need to guess what these functions do, because we use native JavaScript functions and features without home-baked abstractions.</p>
<p>In many cases, a bit of repetition is good. Consider this example:</p>
<pre><code>const baseSpacing = 8;
const spacing = {
tiny: baseSpacing / 2,
small: baseSpacing,
medium: baseSpacing * 2,
large: baseSpacing * 3,
xlarge: baseSpacing * 4,
xxlarge: baseSpacing * 5
};
</code></pre>
<p><!-- expect(spacing.xlarge).toEqual(32) --></p>
<p>It looks perfectly fine and won’t raise any questions during code review. However, when we try to use these values, autocompletion only shows <code>number</code> instead of the actual values (see an illustration). This makes it harder to choose the right value.</p>
<p>We could inline the <code>baseSpacing</code> constant:</p>
<pre><code>const spacing = {
tiny: 4,
small: 8,
medium: 16,
large: 24,
xlarge: 32,
xxlarge: 40
};
</code></pre>
<p><!-- expect(spacing.xlarge).toEqual(32) --></p>
<p>Now, we have less code, it’s just as easy to understand, and autocompletion shows the actual values (see the illustration). And I don’t think this code will change often — probably never.</p>
<h2>Separate “what” and “how”</h2>
<p>Consider this excerpt from a form validation function:</p>
<pre><code>function validate(values) {
const errors = {};
if (!values.name || (values.name && values.name.trim() === '')) {
errors.name = 'Name is required';
}
if (!values.login || (values.login && values.login.trim() === '')) {
errors.login = 'Login is required';
}
if (values.login && values.login.indexOf(' ') > 0) {
errors.login = 'No spaces are allowed in login';
}
// This goes on and on for a dozen of other fields…
return errors;
}
</code></pre>
<p><!--
expect(validate({name: 'Chuck', login: 'chuck'})).toEqual({})
expect(validate({name: '', login: 'chuck'})).toEqual({name: 'Name is required'})
expect(validate({name: 'Chuck', login: ''})).toEqual({login: 'Login is required'})
expect(validate({name: 'Chuck', login: 'c norris'})).toEqual({login: 'No spaces are allowed in login'})
--></p>
<p>It’s quite difficult to grasp what’s going on here: validation logic is mixed with error messages, many checks are repeated…</p>
<p>We can split this function into several pieces, each responsible for one thing only:</p>
<ul>
<li>a list of validations for a particular form;</li>
<li>a collection of validation functions, such as <code>isEmail()</code>;</li>
<li>a function that validates all form values using a list of validations.</li>
</ul>
<p>We can describe the validations declaratively as an array:</p>
<p><!--
let hasStringValue = value => value?.trim() !== ''
let hasNoSpaces = value => value?.includes(' ') === false
--></p>
<pre><code>const validations = [
{
field: 'name',
validation: hasStringValue,
message: 'Name is required'
},
{
field: 'login',
validation: hasStringValue,
message: 'Login is required'
},
{
field: 'login',
validation: hasNoSpaces,
message: 'No spaces are allowed in login'
}
// All other validations…
];
</code></pre>
<p><!--
expect(validations[0].validation('tacocat')).toBe(true)
expect(validations[0].validation('')).toBe(false)
expect(validations[1].validation('tacocat')).toBe(true)
expect(validations[1].validation('')).toBe(false)
expect(validations[2].validation('tacocat')).toBe(true)
expect(validations[2].validation('taco cat')).toBe(false)
--></p>
<p>Each validation function and the function that runs validations are pretty generic, so we can either abstract them or use a third-party library.</p>
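<p>To give an idea of the shape of such a runner, here’s a minimal sketch (the exact API is my assumption; the Avoid conditions chapter has the complete version):</p>

```javascript
// Example rules, matching the declarative list above
const hasStringValue = value => value?.trim() !== '';
const hasNoSpaces = value => value?.includes(' ') === false;

const validations = [
  { field: 'name', validation: hasStringValue, message: 'Name is required' },
  {
    field: 'login',
    validation: hasNoSpaces,
    message: 'No spaces are allowed in login'
  }
];

// Generic runner: checks every rule against the form values
// and records the first failing message for each field
function validateForm(values, validations) {
  const errors = {};
  for (const { field, validation, message } of validations) {
    if (errors[field] === undefined && validation(values[field]) === false) {
      errors[field] = message;
    }
  }
  return errors;
}

validateForm({ name: '', login: 'chuck norris' }, validations);
// → { name: 'Name is required', login: 'No spaces are allowed in login' }
```

<p>The runner knows nothing about any particular form, so the same function can validate all forms in the app.</p>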
<p>Now, we can add validation for any form by describing which fields need which validations and which error to show when a certain check fails.</p>
<p><strong>Info:</strong> See the <a href="/blog/avoid-conditions/">Avoid conditions</a> chapter for the complete code and a more detailed explanation of this example.</p>
<p>I call this process <em>separation of “what” and “how”</em>:</p>
<ul>
<li><strong>the “what”</strong> is the data — the list of validations for a particular form;</li>
<li><strong>the “how”</strong> is the algorithms — the validation functions and the validation runner function.</li>
</ul>
<p>The benefits are:</p>
<ul>
<li><strong>Readability:</strong> often, we can define the “what” declaratively, using basic data structures such as arrays and objects.</li>
<li><strong>Maintainability:</strong> we change the “what” more often than the “how”, and now they are separated. We can import the “what” from a file, such as JSON, or load it from a database, making updates possible without code changes, or allowing non-developers to do them.</li>
<li><strong>Reusability:</strong> often, the “how” is generic, and we can reuse it, or even import it from a third-party library.</li>
<li><strong>Testability:</strong> each validation and the validation runner function are isolated, and we can test them separately.</li>
</ul>
<h2>Avoid monster utility files</h2>
<p>Many projects have a file called <code>utils.js</code>, <code>helpers.js</code>, or <code>misc.js</code> where developers throw in utility functions when they can’t find a better place for them. Often, these functions are never reused anywhere else and stay in the utility file forever, so it keeps growing. That’s how <em>monster utility files</em> are born.</p>
<p>Monster utility files have several issues:</p>
<ul>
<li><strong>Poor discoverability:</strong> since all functions are in the same file, we can’t use the fuzzy file opener in our code editor to find them.</li>
<li><strong>They may outlive their callers:</strong> often such functions are never reused again and stay in the codebase, even after the code that was using them is removed.</li>
<li><strong>Not generic enough:</strong> such functions are often made for a single use case and won’t cover other use cases.</li>
</ul>
<p>These are my rules of thumb:</p>
<ul>
<li>If the function is small and used only once, keep it in the same module where it’s used.</li>
<li>If the function is long or used more than once, put it in a separate file inside a <code>util</code>, <code>shared</code>, or <code>helpers</code> folder.</li>
<li>If we want more organization, instead of creating files like <code>utils/validators.js</code>, we can group related functions (each in its own file) into a folder: <code>utils/validators/isEmail.js</code>.</li>
</ul>
<h2>Avoid default exports</h2>
<p>JavaScript modules have two types of exports. The first is <strong>named exports</strong>:</p>
<p><!-- test-skip --></p>
<pre><code>// button.js
export function Button() {
/* … */
}
</code></pre>
<p>Which can be imported like this:</p>
<p><!-- test-skip --></p>
<pre><code>import { Button } from './button';
</code></pre>
<p>And the second is <strong>default exports</strong>:</p>
<p><!-- test-skip --></p>
<pre><code>// button.js
export default function Button() {
/* … */
}
</code></pre>
<p>Which can be imported like this:</p>
<p><!-- test-skip --></p>
<pre><code>import Button from './button';
</code></pre>
<p>I don’t really see any advantages to default exports, but they have several issues:</p>
<ul>
<li><strong>Poor refactoring:</strong> renaming a module with a default export often leaves existing imports unchanged. This doesn’t happen with named exports, where all imports are updated after renaming a function.</li>
<li><strong>Inconsistency:</strong> default-exported modules can be imported using any name, which reduces the consistency and greppability of the codebase. Named exports can also be imported using a different name using the <code>as</code> keyword to avoid naming conflicts, but it’s more explicit and is rarely done by accident.</li>
</ul>
<p><strong>Info:</strong> We talk more about greppability in the Write greppable code section of the <em>Other techniques</em> chapter.</p>
<p>Unfortunately, some third-party APIs, such as <code>React.lazy()</code>, require default exports, but for all other cases, I stick to named exports.</p>
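<p>Even with <code>React.lazy()</code>, our own modules can keep named exports: the function accepts a promise of a module object, so we can remap a named export to a <code>default</code> property at the call site. Here’s a sketch of the remapping (with a plain resolved promise standing in for the dynamic <code>import()</code> so the example is self-contained; <code>Button</code> is a placeholder name):</p>

```javascript
// In a real app:
// const Button = React.lazy(() =>
//   import('./button').then(module => ({ default: module.Button }))
// );

// The remapping itself: wrap a named export in a `default` property
const toDefaultExport = (modulePromise, name) =>
  modulePromise.then(module => ({ default: module[name] }));

// Stand-in for import('./button')
const fakeImport = Promise.resolve({ Button: () => 'I am a button' });

toDefaultExport(fakeImport, 'Button').then(module => {
  module.default();
  // → 'I am a button'
});
```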
<h2>Avoid barrel files</h2>
<p>A barrel file is a module (usually named <code>index.js</code> or <code>index.ts</code>) that reexports a bunch of other modules:</p>
<p><!-- test-skip --></p>
<pre><code>// components/index.js
export { Box } from './Box';
export { Button } from './Button';
export { Link } from './Link';
</code></pre>
<p>The main advantage is cleaner imports. Instead of importing each module individually:</p>
<p><!-- test-skip --></p>
<pre><code>import { Box } from '../components/Box';
import { Button } from '../components/Button';
import { Link } from '../components/Link';
</code></pre>
<p>We can import all components from a barrel file:</p>
<p><!-- test-skip --></p>
<pre><code>import { Box, Button, Link } from '../components';
</code></pre>
<p>However, barrel files have several issues:</p>
<ul>
<li><strong>Maintenance cost:</strong> we need to add an export for each new component to the barrel file, along with additional items such as types or utility functions.</li>
<li><strong>Performance cost:</strong> setting up tree shaking is complex, and <a href="https://vercel.com/blog/how-we-optimized-package-imports-in-next-js#what's-the-problem-with-barrel-files">barrel files often lead to increased bundle size or runtime costs</a>. This can also slow down hot reload, unit tests, and linters.</li>
<li><strong>Circular imports:</strong> importing from a barrel file can cause a circular import when two modules exported from the same barrel file import each other (for example, the <code>Button</code> component imports the <code>Box</code> component).</li>
<li><strong>Developer experience:</strong> go to definition takes us to the barrel file instead of the function’s source code, and autoimport can get confused about whether to import from the barrel file or from the source file.</li>
</ul>
<p><strong>Info:</strong> TkDodo explains <a href="https://tkdodo.eu/blog/please-stop-using-barrel-files">the drawbacks of barrel files in great detail</a>.</p>
<p>The benefits of barrel files are too minor to justify their use, so I recommend avoiding them.</p>
<p>One type of barrel file I especially dislike is the kind that exports a single component just to allow importing it as <code>./components/button</code> instead of <code>./components/button/button</code>.</p>
<h2>Stay hydrated</h2>
<p>To troll the DRYers (developers who never repeat their code), someone coined another term: <a href="https://overreacted.io/the-wet-codebase/">WET</a>, <em>write everything twice</em>, or <em>we enjoy typing</em>, suggesting we should duplicate code at least twice until we replace it with an abstraction. It is a joke, and I don’t fully agree with the idea (sometimes it’s okay to duplicate some code more than twice), but it’s a good reminder that all good things are best in moderation.</p>
<p>Consider this example:</p>
<p><!--
let visitStory = () => {}
let tester = { should: () => {}, click: () => {} }
let cy = { findByText: () => tester, findByTestId: () => tester, }
let it = (_, fn) => fn()
--></p>
<pre><code>const stories = {
YOUR_RECIPES: 'page--your-recipes',
ALL_RECIPES: 'page--all-recipes',
CUISINES: 'page--cuisines',
RECIPE: 'page--recipe'
};
const testIds = {
ADD_TO_FAVORITES: 'add-to-favs-button',
QR_CLOSE: 'qr-close-button',
QR_CODE: 'qr-code',
MOBILE_CTA: 'transfer-button'
};
const copyTesters = {
titleRecipe: /Cochinita Pibil Tacos/,
titleYourRecipes: /Your favorite recipes on a single page/,
addedToFavorites: /In favorites/
// Many more lines…
};
it('your recipes', () => {
visitStory(stories.RECIPE);
cy.findByText(copyTesters.titleRecipe).should('exist');
cy.findByTestId(testIds.ADD_TO_FAVORITES).click();
cy.findByText(copyTesters.addedToFavorites).should('exist');
// Lots of lines in similar style…
});
</code></pre>
<p><!-- // No actual test, just executing the code --></p>
<p>This is an extreme example of code DRYing, which doesn’t make the code more readable or maintainable, especially when most of these constants are used only once. Seeing variable names instead of actual strings here is unhelpful.</p>
<p>Let’s inline all these extra variables. (Unfortunately, inline refactoring in Visual Studio Code doesn’t support inlining object properties, so we have to do it manually.)</p>
<p><!--
let visitStory = () => {}
let tester = { should: () => {}, click: () => {} }
let cy = { findByText: () => tester, findByTestId: () => tester, }
let it = (_, fn) => fn()
--></p>
<pre><code>it('your recipes', () => {
visitStory('page--recipe');
cy.findByText(/Cochinita Pibil Tacos/).should('exist');
cy.findByTestId('add-to-favs-button').click();
cy.findByText(/In favorites/).should('exist');
// Lots of lines in similar style…
});
</code></pre>
<p><!-- // No actual test, just executing the code --></p>
<p>Now, we have significantly less code, and it’s easier to understand what’s going on and easier to update or delete tests.</p>
<p>I’ve encountered so many hopeless abstractions in tests. For example, this pattern is very common:</p>
<p><!--
let Pony = () => null
let mount = () => ({find: () => ({ prop: () => {} })})
let expect = () => ({toBe: () => {}})
let beforeEach = (fn) => fn()
let test = (_, fn) => fn()
--></p>
<pre><code>let wrapper;
beforeEach(() => {
wrapper = mount(<Pony color="pink" />);
});
test('pony has pink tail', () => {
expect(wrapper.find('.tail').prop('value')).toBe('pink');
});
// More tests that use `wrapper`…
</code></pre>
<p><!-- // No actual test, just executing the code --></p>
<p>This pattern tries to avoid repeating <code>mount(...)</code> calls in each test case, but it makes tests more confusing than they need to be. Let’s inline the <code>mount()</code> calls:</p>
<p><!--
let Pony = () => null
let mount = () => ({find: () => ({ prop: () => {} })})
let expect = () => ({toBe: () => {}})
let beforeEach = (fn) => fn()
let test = (_, fn) => fn()
--></p>
<pre><code>test('pony has pink tail', () => {
const wrapper = mount(<Pony color="pink" />);
expect(wrapper.find('.tail').prop('value')).toBe('pink');
});
// More tests that use `wrapper`…
</code></pre>
<p><!-- // No actual test, just executing the code --></p>
<p>Additionally, the <code>beforeEach</code> pattern works only when we want to initialize each test case with the same values, which is rarely the case:</p>
<p><!--
let Pony = () => null
let mount = () => ({find: () => ({ prop: () => {} })})
let expect = () => ({toBe: () => {}, toBeVisible: () => {}})
let beforeEach = (fn) => fn()
let test = (_, fn) => fn()
--></p>
<pre><code>test('pony has pink tail', () => {
const wrapper = mount(<Pony color="pink" />);
expect(wrapper.find('.tail').prop('value')).toBe('pink');
});
test('pony can breathe fire', () => {
const wrapper = mount(<Pony color="pink" breathFire />);
expect(wrapper.find('.fire')).toBeVisible();
});
</code></pre>
<p><!-- // No actual test, just executing the code --></p>
<p>To avoid <em>some</em> duplication when testing React components, I often add a <code>defaultProps</code> object and spread it inside each test case:</p>
<p><!--
let Pony = () => null
let mount = () => ({find: () => ({ prop: () => {} })})
let expect = () => ({toBe: () => {}, toBeVisible: () => {}})
let beforeEach = (fn) => fn()
let test = (_, fn) => fn()
--></p>
<pre><code>const defaultProps = { color: 'pink' };
test('pony has pink tail', () => {
const wrapper = mount(<Pony {...defaultProps} />);
expect(wrapper.find('.tail').prop('value')).toBe('pink');
});
test('pony can breathe fire', () => {
const wrapper = mount(<Pony {...defaultProps} breathFire />);
expect(wrapper.find('.fire')).toBeVisible();
});
</code></pre>
<p><!-- // No actual test, just executing the code --></p>
<p>This way, we don’t have too much duplication, but at the same time, each test case is isolated and readable. The difference between test cases is now clearer because it’s easier to see the unique props of each test case.</p>
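<p>The pattern relies on plain object spread, where later properties override earlier ones, so each test case spells out only what makes it unique. Here’s a minimal sketch of the mechanics outside any test framework (the props are hypothetical):</p>
<pre><code>// Defaults shared by all test cases
const defaultProps = { color: 'pink', breathFire: false };
// Each test case overrides only what’s unique to it:
// later properties win, so `breathFire: true` replaces the default
const basePony = { ...defaultProps };
const firePony = { ...defaultProps, breathFire: true };
console.log(basePony); // { color: 'pink', breathFire: false }
console.log(firePony); // { color: 'pink', breathFire: true }
</code></pre>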
<p>Here’s a more extreme variation of the same problem:</p>
<p><!--
let getSelector = (x) => x
let expect = () => ({toBe: () => {}})
let beforeEach = (fn) => fn()
let test = (_, fn) => fn()
--></p>
<p><!-- eslint-skip --></p>
<pre><code>let css;
let res;
beforeEach(() => {
css = '';
res = '';
});
test('works with basic selectors', () => {
css = 'div\n{}';
res = 'div\n';
expect(getSelector(css)).toBe(res);
});
test('works with lobotomized owl selector', () => {
css = '.stack > * + *\n{}';
res = '.stack > * + *\n';
expect(getSelector(css)).toBe(res);
});
// More tests that use `css` and `res`…
</code></pre>
<p><!-- // No actual test, just executing the code --></p>
<p>We can inline the <code>beforeEach()</code> function the same way we did in the previous example:</p>
<p><!--
let getSelector = (x) => x
let expect = () => ({toBe: () => {}})
let beforeEach = (fn) => fn()
let test = (_, fn) => fn()
--></p>
<pre><code>test('works with basic selectors', () => {
const css = 'div\n{}';
const expected = 'div\n';
expect(getSelector(css)).toBe(expected);
});
test('works with lobotomized owl selector', () => {
const css = '.stack > * + *\n{}';
const expected = '.stack > * + *\n';
expect(getSelector(css)).toBe(expected);
});
// More tests that use `css` and `expected`…
</code></pre>
<p><!-- // No actual test, just executing the code --></p>
<p>I’d go even further and use the <code>test.each()</code> method, because we run the same test with a bunch of different inputs:</p>
<p><!--
let getSelector = (x) => x
let expect = () => ({toBe: () => {}})
let beforeEach = (fn) => fn()
let test = {each: () => (_, fn) => fn()}
--></p>
<pre><code>test.each([
['div\n{}', 'div\n'],
['.stack > * + *\n{}', '.stack > * + *\n']
// More inputs…
])('selector: %s', (css, expected) => {
expect(getSelector(css)).toBe(expected);
});
</code></pre>
<p><!-- // No actual test, just executing the code --></p>
<p>Now, we’ve gathered all the test inputs with their expected results in one place, making it easier to add new test cases.</p>
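<p>Under the hood, a table-driven helper like <code>test.each()</code> is essentially a loop over <code>[input, expected]</code> tuples. Here’s a simplified sketch with a stand-in <code>getSelector()</code> (the real implementation isn’t shown here, so this one is made up for illustration):</p>
<pre><code>// Stand-in for the real getSelector():
// keeps everything before the trailing declaration block
const getSelector = (css) => css.replace(/\{[^}]*\}$/, '');
// Roughly what test.each() does:
// run the same assertion against each tuple
const cases = [
  ['div\n{}', 'div\n'],
  ['.stack > * + *\n{}', '.stack > * + *\n']
];
for (const [css, expected] of cases) {
  console.assert(getSelector(css) === expected, css);
}
</code></pre>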
<p><strong>Info:</strong> Check out my <a href="https://github.com/sapegin/jest-cheat-sheet">Jest</a> and <a href="https://github.com/sapegin/vitest-cheat-sheet">Vitest</a> cheat sheets.</p>
<hr />
<p>The biggest challenge with abstractions is finding a balance between being too rigid and too flexible, and knowing when to start abstracting things and when to stop. It’s often worth waiting to see if we really need to abstract something — many times, it’s better not to.</p>
<p>It’s nice to have a global button component, but if it’s too flexible and has a dozen boolean props to switch between different variations, it will be difficult to use. However, if it’s too rigid, developers will create their own button components instead of using a shared one.</p>
<p>We should be vigilant about letting others reuse our code. Too often, this creates tight coupling between parts of the codebase that should be independent, slowing down development and leading to bugs.</p>
<p>Start thinking about:</p>
<ul>
<li>Colocating related code in the same file or folder to make it easier to change, move, or delete.</li>
<li>Before adding another option to an abstraction, think whether this new use case truly belongs there.</li>
<li>Before merging several pieces of code that look similar, think whether they actually solve the same problems or just happened to look the same.</li>
<li>Before making tests DRY, think whether it would make them more readable and maintainable, or whether a bit of code duplication is an acceptable price.</li>
</ul>
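<p>As a sketch of the last two points: two functions may look almost identical yet change for entirely different reasons, so merging them would couple unrelated code. The functions below are hypothetical:</p>
<pre><code>// Both format a number with a suffix, but they evolve independently:
// prices follow currency rules, file sizes follow binary units
const formatPrice = (amount) => `${amount.toFixed(2)} €`;
const formatFileSize = (bytes) => `${(bytes / 1024).toFixed(1)} KB`;
console.log(formatPrice(5)); // 5.00 €
console.log(formatFileSize(2048)); // 2.0 KB
</code></pre>
<p>A single <code>formatNumberWithSuffix()</code> would need options for both domains and would have to change whenever either of them does.</p>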
<hr />
<p>Read other sample chapters of the book:</p>
<ul>
<li><a href="/blog/avoid-loops/">Avoid loops</a></li>
<li><a href="/blog/avoid-conditions/">Avoid conditions</a></li>
<li><a href="/blog/avoid-reassigning-variables/">Avoid reassigning variables</a></li>
<li><a href="/blog/avoid-mutation/">Avoid mutation</a></li>
<li><a href="/blog/avoid-comments/">Avoid comments</a></li>
<li><a href="/blog/naming/">Naming is hard</a></li>
<li><strong>Divide and conquer, or merge and relax (<em>this post</em>)</strong></li>
<li><a href="/blog/dont-make-me-think/">Don’t make me think</a></li>
</ul>
Better autosave and autoformat in Visual Studio Codehttps://sapegin.me/blog/vscode-autosave/https://sapegin.me/blog/vscode-autosave/Avoid autoformat messing up your code when you need to look something up in the docs halfway through writing a line of code.Mon, 16 Sep 2024 00:00:00 GMT<p><em>This setup assumes that you have the <a href="https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint">ESLint</a> and <a href="https://marketplace.visualstudio.com/items?itemName=esbenp.prettier-vscode">Prettier</a> extensions installed in Visual Studio Code.</em></p>
<p>I like the autosave feature in editors: when you switch to another app — usually a browser — the file is automatically saved, causing the hot reload to apply the changes to the page right before you look at it.</p>
<p>One thing I’ve struggled with for a long time is that autoformatting (using Prettier) on autosave isn’t always desirable. For example, I start typing something, then google how to use a certain API, come back to the editor, and now everything is messed up by autoformatting.</p>
<p>I solved this by disabling autoformatting on save and running autoformat and save on a custom Cmd+S hotkey.</p>
<p>Two other useful things I like to do here:</p>
<ol>
<li>Disable autosave when there’s a syntax error in the file, so autosave doesn’t break the dev server, which could cause loss of state, such as scroll position or form data.</li>
<li>Hide all autofixable ESLint issues so they don’t distract me while I’m writing code, since these issues don’t require any action from me and will be autofixed the next time I save the file.</li>
</ol>
<p>The <a href="https://code.visualstudio.com/docs/getstarted/settings#_settingsjson">Visual Studio Code config</a> to achieve this could look like this:</p>
<pre><code>// settings.json
{
// Don’t format files on save
"editor.formatOnSave": false,
// Autosave files on focus change
"files.autoSave": "onFocusChange",
// Don’t autosave files with syntax errors
"files.autoSaveWhenNoErrors": true,
"editor.codeActionsOnSave": {
// Trigger lint autofixing on explicit save (not on autosave)
"source.fixAll": "explicit"
},
"eslint.rules.customizations": [
// Change the severity of all autofixable issues to `off`
{ "rule": "*", "fixable": true, "severity": "off" }
]
}
</code></pre>
<p>And the <a href="https://code.visualstudio.com/docs/getstarted/keybindings">keybinding config</a> could look like this:</p>
<pre><code>// keybindings.json
[
{
"comment": "Format and Save (to make autosave save files without formatting)",
"key": "cmd+s",
"command": "runCommands",
"args": {
"commands": [
"editor.action.format",
"workbench.action.files.save"
]
}
},
// Disable the default keybinding
{
"key": "cmd+s",
"command": "-workbench.action.files.save"
}
]
</code></pre>
<p>One downside of this setup is that it’s sometimes unclear why a linter autofixed something, as there’s no log of any kind.</p>
<p>P.S. Check out my other Visual Studio Code extensions: <a href="https://marketplace.visualstudio.com/items?itemName=sapegin.emoji-console-log">Emoji Console Log</a>, <a href="https://marketplace.visualstudio.com/items?itemName=sapegin.just-blame">Just Blame</a>, <a href="https://marketplace.visualstudio.com/items?itemName=sapegin.mini-markdown">Mini Markdown</a>, <a href="https://marketplace.visualstudio.com/items?itemName=sapegin.new-file-now">New File Now</a>, <a href="https://marketplace.visualstudio.com/items?itemName=sapegin.notebox">Notebox</a>, <a href="https://marketplace.visualstudio.com/items?itemName=sapegin.todo-tomorrow">Todo Tomorrow</a>; and my themes: <a href="https://marketplace.visualstudio.com/items?itemName=sapegin.Theme-SquirrelsongLight">Squirrelsong Light</a>, <a href="https://marketplace.visualstudio.com/items?itemName=sapegin.Theme-SquirrelsongDark">Squirrelsong Dark</a>.</p>
Modern React testing, part 5: Playwrighthttps://sapegin.me/blog/react-testing-5-playwright/https://sapegin.me/blog/react-testing-5-playwright/Learn how to test React apps end-to-end with Playwright, how to mock network requests with Mock Service Worker, and how to apply testing best practices to write integration tests.Wed, 01 May 2024 00:00:00 GMT<p>Playwright is a framework-agnostic end-to-end testing (also known as <!-- textlint-disable -->E2E<!-- textlint-enable -->, or integration testing) tool for web apps. Playwright offers a great developer experience and makes it straightforward to write good tests that are resilient to change.</p>
<p><strong>This is the fifth article in the series</strong>, where we learn how to test React apps end-to-end using Playwright and how to mock network requests using Mock Service Worker.</p>
<ul>
<li><a href="/blog/react-testing-1-best-practices/">Modern React testing, part 1: best practices</a></li>
<li><a href="/blog/react-testing-2-jest-and-enzyme/">Modern React testing, part 2: Jest and Enzyme</a></li>
<li><a href="/blog/react-testing-3-jest-and-react-testing-library/">Modern React testing, part 3: Jest and React Testing Library</a></li>
<li><a href="/blog/react-testing-4-cypress/">Modern React testing, part 4: Cypress and Cypress Testing Library</a></li>
<li><strong>Modern React testing, part 5: Playwright (<em>this post</em>)</strong></li>
</ul>
<p><em>Check out <a href="https://github.com/sapegin/playwright-article-2024">the GitHub repository</a> with all the examples.</em></p>
<h2>Getting started with Playwright</h2>
<p>We’ll set up and use these tools:</p>
<ul>
<li><a href="https://playwright.dev/">Playwright</a>, an end-to-end test runner;</li>
<li><a href="https://mswjs.io/">Mock Service Worker</a>, mocking network requests.</li>
</ul>
<h3>Why Playwright</h3>
<p><strong>Playwright</strong> has many benefits over other end-to-end test runners:</p>
<ul>
<li>The best experience writing and debugging tests.</li>
<li>An ability to inspect the page at any moment during the test run using the browser developer tools.</li>
<li>All commands wait for the DOM to change when necessary, which simplifies testing async behavior.</li>
<li>Tests better resemble real user behavior. For example, Playwright checks that a button is present in the DOM, isn’t disabled, and isn’t hidden behind another element or offscreen before clicking it.</li>
<li>Supports Chromium, WebKit, and Firefox, as well as Google Chrome for Android and Mobile Safari.</li>
<li>Convenient semantic queries, like finding elements by their label text or ARIA role, similar to <a href="/blog/react-testing-3-jest-and-react-testing-library/">React Testing Library</a>.</li>
<li>It’s very fast.</li>
</ul>
<p>Semantic queries help us write <a href="/blog/react-testing-1-best-practices/">good tests</a> and make writing bad tests difficult. They allow us to interact with the app the way a real user would: for example, by finding form elements and buttons by their labels. They also help us avoid testing implementation details, making our tests resilient to code changes that don’t change the behavior.</p>
<h3>Why not Cypress</h3>
<p>Playwright is similar to the combination of <a href="/blog/react-testing-4-cypress/">Cypress, Cypress Testing Library, and start-server-and-test</a>, which have been my choice for end-to-end testing for many years, but it gives us all the necessary tools in a single package. Also, the API feels more cohesive and intentional. There wasn’t much new to learn.</p>
<p>Some of the benefits of Playwright over Cypress:</p>
<ul>
<li>better API;</li>
<li>easier setup;</li>
<li>multi-tabs support;</li>
<li>speed.</li>
</ul>
<h3>Setting up Playwright</h3>
<p>First, run the <a href="https://playwright.dev/docs/intro">installation wizard</a>:</p>
<pre><code>npm init playwright@latest
</code></pre>
<p>This will install all the dependencies and generate the config files. We’ll need to choose:</p>
<ul>
<li>whether to use TypeScript or JavaScript (we’ll use JavaScript in this article);</li>
<li>where to put the tests (<code>tests</code> folder in the project root);</li>
<li>whether to generate GitHub Actions to run the tests on CI (we won’t cover this here);</li>
<li>whether we want to install the browsers (it’s a good idea; we’ll need them anyway).</li>
</ul>
<p>Then add two scripts to our <a href="https://github.com/sapegin/playwright-article-2024/blob/master/package.json">package.json</a> file:</p>
<pre><code>{
"name": "pizza",
"version": "1.0.0",
"scripts": {
"start": "react-scripts start",
"build": "react-scripts build",
"test:e2e": "npx playwright test --ui",
"test:e2e:ci": "npx playwright test"
},
"dependencies": {
"react": "18.2.0",
"react-dom": "18.2.0",
"react-scripts": "5.0.1"
},
"devDependencies": {
"@playwright/test": "^1.41.2"
}
}
</code></pre>
<p>Playwright, unlike React Testing Library or Enzyme, tests a real app in a real browser, so we need to run our development server before running Playwright. We can run both commands manually in separate terminal windows — good enough for local development — or we can set up Playwright to run them for us (see below) and have a single command that we can also use on a continuous integration (CI) server.</p>
<p>As a development server, we can use an actual development server of our app, like Create React App (that we use for the examples) or Vite, or another tool like <a href="https://react-styleguidist.js.org/">React Styleguidist</a> or <a href="https://storybook.js.org/">Storybook</a>, to test isolated components.</p>
<p>We’ve added two scripts to run the development server and Playwright together:</p>
<ul>
<li><code>npm run test:e2e</code> to run a development server and Playwright ready for local development;</li>
<li><code>npm run test:e2e:ci</code> to run a development server and all Playwright tests in headless mode, ideal for CI.</li>
</ul>
<p>Then, edit the Playwright config file, <a href="https://github.com/sapegin/playwright-article-2024/blob/master/playwright.config.js">playwright.config.js</a>, in the project root folder:</p>
<pre><code>const { defineConfig, devices } = require('@playwright/test');
module.exports = defineConfig({
testDir: './tests',
// Run tests in files in parallel
fullyParallel: true,
// Fail the build on CI if you accidentally left test.only
// in the source code
forbidOnly: Boolean(process.env.CI),
// Retry on CI only
retries: process.env.CI ? 2 : 0,
// Opt out of parallel tests on CI
workers: process.env.CI ? 1 : undefined,
// Reporter to use
reporter: 'html',
// Shared settings for all the projects below
use: {
// Base URL to use in actions like `await page.goto('/')`
baseURL: 'http://localhost:3000',
// Collect trace when retrying the failed test
trace: 'on-first-retry'
},
// Configure projects for major browsers
projects: [
{
name: 'chromium',
use: { ...devices['Desktop Chrome'] }
},
{
name: 'firefox',
use: { ...devices['Desktop Firefox'] }
},
{
name: 'webkit',
use: { ...devices['Desktop Safari'] }
}
],
// Run your local dev server before starting the tests
webServer: {
command: 'npm run start',
url: 'http://localhost:3000',
reuseExistingServer: !process.env.CI
}
});
</code></pre>
<p>The options we’ve changed are:</p>
<ul>
<li><code>use.baseURL</code> is the URL of our development server to avoid writing it in every test;</li>
<li><code>webServer</code> block describes how to run a development server; we also want to reuse an already-running server unless we’re in the CI environment.</li>
</ul>
<p><strong>Tip:</strong> Read more about all <a href="https://playwright.dev/docs/test-configuration">Playwright config options in the docs</a>.</p>
<h3>Setting up Mock Service Worker</h3>
<p>We’re going to use <a href="https://mswjs.io/">Mock Service Worker</a> (MSW) for mocking network requests in our integration tests and in the app during development.</p>
<ul>
<li>It uses Service Workers, so it intercepts all network requests, no matter how they are made.</li>
<li>A single place to define mocks for the project, with the ability to <a href="https://mswjs.io/docs/api/setup-server/use">override responses</a> for particular tests.</li>
<li>An ability to reuse mocks in integration tests and during development.</li>
<li>Requests are still visible in the network panel of the browser developer tools.</li>
<li>Supports REST API and GraphQL.</li>
</ul>
<p>First, install MSW from npm:</p>
<pre><code>npm install --save-dev msw
</code></pre>
<p>Create <a href="https://mswjs.io/docs/network-behavior/rest">mock definitions</a>, <a href="https://github.com/sapegin/playwright-article-2024/blob/master/src/mocks/handlers.js">src/mocks/handlers.js</a>:</p>
<pre><code>import { http, HttpResponse } from 'msw';
export const handlers = [
// GET requests to https://httpbin.org/anything with any parameters
http.get('https://httpbin.org/anything', () => {
// Return OK status with a JSON object
return HttpResponse.json({
args: {
ingredients: ['bacon', 'tomato', 'mozzarella', 'pineapples']
}
});
})
];
</code></pre>
<p><strong>Note:</strong> To mock GraphQL requests instead of REST, use the <a href="https://mswjs.io/docs/network-behavior/graphql">GraphQL</a> namespace.</p>
<p>Here, we’re intercepting GET requests to <code>https://httpbin.org/anything</code> with any parameters and returning a JSON object with an OK (200) status.</p>
<p>Now we need to <a href="https://mswjs.io/docs/integrations/browser">generate the Service Worker script</a>:</p>
<pre><code>npx msw init ./public --save
</code></pre>
<p>The <code>--save</code> flag will save the public directory path to <code>package.json</code> so we can update the worker script later by running just <code>msw init</code>.</p>
<p><strong>Note:</strong> The public directory <a href="https://mswjs.io/docs/integrations/browser#where-is-my-public-directory">may be different</a> for projects not using Create React App.</p>
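<p>After running the command, <code>package.json</code> gains an <code>msw</code> field pointing at the worker directory. The exact shape may differ between MSW versions, but it looks roughly like this:</p>
<pre><code>// package.json (fragment)
{
  "msw": {
    "workerDirectory": "public"
  }
}
</code></pre>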
<p>Then, create another JavaScript module that will register our Service Worker with our mocks, <a href="https://github.com/sapegin/playwright-article-2024/blob/master/src/mocks/browser.js">src/mocks/browser.js</a>:</p>
<pre><code>import * as msw from 'msw';
import { setupWorker } from 'msw/browser';
import { handlers } from './handlers';
// Configure a Service Worker with the given request handlers
export const worker = setupWorker(...handlers);
// Expose methods globally to make them available in integration tests
window.msw = { worker, ...msw };
</code></pre>
<p>And the last step is to start the worker function when we run our app in development mode. Add these lines to our app root module (<a href="https://github.com/sapegin/playwright-article-2024/blob/master/src/index.js">src/index.js</a> for Create React App):</p>
<pre><code>async function enableMocking() {
if (process.env.NODE_ENV !== 'development') {
return;
}
const { worker } = await import('./mocks/browser');
// `worker.start()` returns a Promise that resolves
// once the Service Worker is up and ready to intercept requests.
return worker.start();
}
</code></pre>
<p>And update the way we render the React app to await the Promise returned by the <code>enableMocking()</code> function before rendering anything:</p>
<pre><code>enableMocking().then(() => {
const root = createRoot(document.querySelector('#root'));
root.render(<App />);
});
</code></pre>
<p>Now, every time we run our app in development mode or on CI, network requests will be mocked without any changes to the application code or tests, apart from these few lines of code in the root module.</p>
<h3>Creating our first test</h3>
<p>As defined in our config file, Playwright looks for test files inside the <a href="https://github.com/sapegin/playwright-article-2024/tree/master/tests">tests/</a> folder. Feel free to remove the <code>example.spec.js</code> file from there — we won’t need it.</p>
<p>So, let’s create our first test, <a href="https://github.com/sapegin/playwright-article-2024/blob/master/tests/hello.spec.js">tests/hello.spec.js</a>:</p>
<pre><code>const { test, expect } = require('@playwright/test');
test('hello world', async ({ page }) => {
await page.goto('/');
await expect(page.getByText('welcome back')).toBeVisible();
});
</code></pre>
<p>Here, we’re visiting the homepage of our app running on the development server, then validating that the text “welcome back” is present on the page using Playwright’s <a href="https://playwright.dev/docs/locators#locate-by-text">getByText()</a> locator, and <a href="https://playwright.dev/docs/api/class-locatorassertions#locator-assertions-to-be-visible">toBeVisible()</a> assertion.</p>
<h3>Running tests</h3>
<p>Start Playwright in UI mode by running <code>npm run test:e2e</code>. From here, we can run a single test or all of them, and press the eye icon next to a test or a group to automatically rerun it on every change to the test code.</p>
<p>When I write tests, I usually <em>watch</em> a single test (meaning Playwright reruns it for me on every change), otherwise it’s too slow and too hard to see what’s wrong if there are any issues.</p>
<p>Run <code>npm run test:e2e:ci</code> to run all tests in headless mode, meaning we won’t see the browser window.</p>
<h3>Querying DOM elements for tests</h3>
<p>Tests should resemble how users interact with the app. That means we shouldn’t rely on implementation details because the implementation can change and we’ll have to update our tests. This also increases the chance of false positives when tests are passing but the actual feature is broken.</p>
<p>Let’s compare different methods of querying DOM elements:</p>
<table>
<thead>
<tr>
<th>Selector</th>
<th>Recommended</th>
<th>Notes</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>button</code></td>
<td>Never</td>
<td>Worst: too generic</td>
</tr>
<tr>
<td><code>.btn.btn-large</code></td>
<td>Never</td>
<td>Bad: coupled to styles</td>
</tr>
<tr>
<td><code>#main</code></td>
<td>Never</td>
<td>Bad: avoid IDs in general</td>
</tr>
<tr>
<td><code>[data-testid="cookButton"]</code></td>
<td>Sometimes</td>
<td>Okay: not visible to the user, but not an implementation detail; use when better options aren’t available</td>
</tr>
<tr>
<td><code>[alt="Chuck Norris"]</code>, <code>[role="banner"]</code></td>
<td>Often</td>
<td>Good: still not visible to users, but already part of the app UI</td>
</tr>
<tr>
<td><code>[children="Cook pizza!"]</code></td>
<td>Always</td>
<td>Best: visible to the user part of the app UI</td>
</tr>
</tbody>
</table>
<p>To summarize:</p>
<ul>
<li>Prefer to query elements by their visible (for example, button label) or accessible name (for example, image alt).</li>
<li>Use test IDs as the last resort. They clutter the markup with props we only need in tests. Test IDs are also something that users of our app don’t see: if we remove a label from a button, a test with test ID will still pass.</li>
</ul>
<p><strong>Note:</strong> I often hear this complaint about using labels to query elements: they break when the app copy is updated. I consider this a feature: I’ve seen more than once that a button label change on one screen broke some other screen where this change was undesired.</p>
<p>Playwright has methods for all good queries, which are called <a href="https://playwright.dev/docs/locators">locators</a>:</p>
<ul>
<li><code>page.getByAltText()</code> finds an image by its alt text;</li>
<li><code>page.getByLabel()</code> finds a form element by its <code><label></code>;</li>
<li><code>page.getByPlaceholder()</code> finds a form element by its placeholder text;</li>
<li><code>page.getByRole()</code> finds an element by its ARIA role;</li>
<li><code>page.getByTestId()</code> finds an element by its test ID;</li>
<li><code>page.getByText()</code> finds an element by its text content;</li>
<li><code>page.getByTitle()</code> finds an element by its <code>title</code> attribute.</li>
</ul>
<p>Let’s see how to use locators. To select this button in a test:</p>
<pre><code><button data-testid="cookButton">Cook pizza!</button>
</code></pre>
<p>We can either query it by its test ID:</p>
<pre><code>page.getByTestId('cookButton');
</code></pre>
<p>Or query it by its text content:</p>
<pre><code>page.getByText('cook pizza');
</code></pre>
<p><strong>Note:</strong> Text locators are partial and case-insensitive by default, which makes them more resilient to small tweaks and changes in the content. For an exact match, use the <code>exact</code> option: <code>page.getByText('Cook pizza!', {exact: true})</code>.</p>
<p>Or, the best method is to query it by its ARIA role and label:</p>
<pre><code>page.getByRole('button', { name: 'cook pizza' });
</code></pre>
<p>Benefits of the last method are:</p>
<ul>
<li>doesn’t clutter the markup with test IDs that aren’t perceived by users;</li>
<li>doesn’t give false positives when the same text is used in non-interactive content;</li>
<li>makes sure that the button is an actual <code>button</code> element or at least has the <code>button</code> ARIA role.</li>
</ul>
<p>Check the Playwright docs for more details on <a href="https://playwright.dev/docs/best-practices">best practices</a>, and <a href="https://github.com/A11yance/aria-query#elements-to-roles">inherent roles of HTML elements</a>.</p>
<h2>Testing React apps end-to-end</h2>
<h3>Testing basic user interaction</h3>
<p>A typical integration test looks like this: visit the page, interact with it, and check the changes on the page after the interaction. <a href="https://github.com/sapegin/playwright-article-2024/blob/master/tests/hello.spec.js">For example</a>:</p>
<pre><code>const { test, expect } = require('@playwright/test');
test('navigates to another page', async ({ page }) => {
await page.goto('/');
// Opening the pizza page
await page.getByRole('link', { name: 'remotepizza' }).click();
// We are on the pizza page
await expect(
page.getByRole('heading', { name: 'pizza' })
).toBeVisible();
});
</code></pre>
<p>Here, we’re finding a link by its ARIA role (<code>link</code>) and text using the Playwright’s <a href="https://playwright.dev/docs/locators#locate-by-role">getByRole()</a> locator, and clicking it using the <a href="https://playwright.dev/docs/input#mouse-click">click()</a> method. Then we’re verifying that we’re on the correct page by checking its heading, first by finding it the same way we found the link before, and testing the heading with the <a href="https://playwright.dev/docs/api/class-locatorassertions#locator-assertions-to-be-visible">toBeVisible()</a> assertion.</p>
<p>With Playwright, we generally don’t have to care if the actions are synchronous or asynchronous: each command will <a href="https://playwright.dev/docs/actionability">wait for some time</a> for the queried element to appear on the page. Though we should explicitly <code>await</code> most operations. This avoids the flakiness and complexity of asynchronous testing and keeps the code straightforward.</p>
<h3>Testing forms</h3>
<p>Playwright’s locators allow us to access any form element by its visible (for example, <code><label></code> element) or accessible (for example, <code>aria-label</code> attribute) label.</p>
<p>For example, we have a <a href="https://github.com/sapegin/playwright-article-2024/blob/master/src/components/SignUpForm.js">registration form</a> with a bunch of text inputs, select boxes, checkboxes, and radio buttons. This is how we can <a href="https://github.com/sapegin/playwright-article-2024/blob/master/tests/signUp.spec.js">test it</a>:</p>
<pre><code>const { test, expect } = require('@playwright/test');
test('should show success page after submission', async ({
page
}) => {
await page.goto('/signup');
// Filling the form
await page.getByLabel('first name').fill('Chuck');
await page.getByLabel('last name').fill('Norris');
await page.getByLabel('country').selectOption({ label: 'Russia' });
await page.getByLabel('english').check();
await page.getByLabel('subscribe to our newsletter').check();
// Submit the form
await page.getByRole('button', { name: 'sign in' }).click();
// We are on the success page
await expect(
page.getByText('thank you for signing up')
).toBeVisible();
});
</code></pre>
<p>Here we’re using <a href="https://playwright.dev/docs/locators#locate-by-label">getByLabel()</a> and <a href="https://playwright.dev/docs/locators#locate-by-role">getByRole()</a> locators to find elements by their label texts or ARIA roles. Then we use the <a href="https://playwright.dev/docs/api/class-locator#locator-fill">fill()</a>, <a href="https://playwright.dev/docs/api/class-locator#locator-select-option">selectOption()</a>, and <a href="https://playwright.dev/docs/api/class-locator#locator-check">check()</a> methods to fill the form, and the <a href="https://playwright.dev/docs/api/class-locator#locator-click">click()</a> method to submit it by clicking the submit button.</p>
<h3>Testing complex forms</h3>
<p>In the previous example, we used the <a href="https://playwright.dev/docs/locators#locate-by-label">getByLabel()</a> locator to find form elements, which works when all form elements have unique labels, but this isn’t always the case.</p>
<p>For example, we have a passport number section in our <a href="https://github.com/sapegin/playwright-article-2024/blob/master/src/components/SignUpForm.js">registration form</a> where multiple inputs have the same label, like “year” of the issue date and “year” of the expiration date. The markup of each field group looks like so:</p>
<pre><code><fieldset>
<legend>Passport issue date</legend>
<input type="number" aria-label="Day" placeholder="Day" />
<select aria-label="Month">
<option value="1">Jan</option>
<option value="2">Feb</option>
...
</select>
<input type="number" aria-label="Year" placeholder="Year" />
</fieldset>
</code></pre>
<p>To access a particular field, we can select a <code>fieldset</code> by its <code>legend</code> text first, and then select an input by its label inside the <code>fieldset</code>.</p>
<pre><code>const passportIssueDateGroup = page.getByRole('group', {
name: 'passport issue date'
});
await passportIssueDateGroup.getByLabel('day').fill('12');
await passportIssueDateGroup
.getByLabel('month')
.selectOption({ label: 'May' });
await passportIssueDateGroup.getByLabel('year').fill('2004');
</code></pre>
<p>We call the <a href="https://playwright.dev/docs/locators#locate-by-role">getByRole()</a> locator with the ARIA role of a <code>fieldset</code> (<code>group</code>) and the text of its <code>legend</code>. Then we chain the <a href="https://playwright.dev/docs/locators#locate-by-label">getByLabel()</a> locator to query form fields by their labels.</p>
<h3>Testing links</h3>
<p>There are several ways to test links that open in a new tab:</p>
<ul>
<li>check the link’s <code>href</code> attribute without clicking it;</li>
<li>click the link, and then get the handle of the new page and use it instead of the current one (<code>page</code>).</li>
</ul>
<p>In the first method, we query the link by its ARIA role and text, and verify that the URL in its <code>href</code> attribute is correct:</p>
<pre><code>await expect(
page.getByRole('link', { name: 'terms and conditions' })
).toHaveAttribute('href', /\/toc/);
</code></pre>
<p>The main drawback of this method is that we’re not testing whether the link is actually clickable. It might be hidden, or it might have a click handler that prevents the default browser behavior.</p>
<p>In the second method, we query the link by its ARIA role and text again, click it, get the handle of the new page, and use that handle instead of the current one (<code>context</code> here is Playwright’s built-in browser context fixture, destructured from the test arguments just like <code>page</code>):</p>
<pre><code>const pagePromise = context.waitForEvent('page');
await page
.getByRole('link', { name: 'terms and conditions' })
.click();
const newPage = await pagePromise;
await expect(newPage.getByText("i'm baby")).toBeVisible();
</code></pre>
<p>Then we verify that we’re on the correct page by finding some text unique to that page.</p>
<p>I recommend the second method because it better resembles the actual user behavior.</p>
<p>There are <a href="https://stackoverflow.com/questions/71843918/open-a-blank-link-and-continue-the-test-with-playwright">other solutions</a>, but I don’t think they are any better than these two.</p>
<h3>Testing network requests and mocks</h3>
<p>With MSW mocks set up (see “Setting up Mock Service Worker” above), happy path tests of pages with asynchronous data fetching aren’t any different from other tests.</p>
<p>For example, we have an API that returns a list of pizza ingredients:</p>
<pre><code>const { test, expect } = require('@playwright/test');
const ingredients = ['bacon', 'tomato', 'mozzarella', 'pineapples'];
test('load ingredients asynchronously', async ({ page }) => {
await page.goto('/remote-pizza');
// Ingredients list is not visible
await expect(page.getByText(ingredients[0])).toBeHidden();
// Load ingredients
await page.getByRole('button', { name: 'cook' }).click();
// All ingredients appear on the screen
for (const ingredient of ingredients) {
await expect(page.getByText(ingredient)).toBeVisible();
}
// The button is not clickable anymore
await expect(
page.getByRole('button', { name: 'cook' })
).toBeDisabled();
});
</code></pre>
<p>Playwright will wait until the data is fetched and rendered on the screen, and thanks to the mocked network calls, it won’t take long.</p>
<p>For not-so-happy-path tests, we may need to override global mocks inside a particular test. For example, we could test what happens when our API returns an error:</p>
<pre><code>test('shows an error message', async ({ page }) => {
await page.goto('/remote-pizza');
await page.evaluate(() => {
// Reference global instances set in src/browser.js
const { worker, http, HttpResponse } = window.msw;
worker.use(
http.get(
'https://httpbin.org/anything',
() => HttpResponse.json(null, { status: 500 }),
{ once: true }
)
);
});
// Ingredients list is not visible
await expect(page.getByText(ingredients[0])).toBeHidden();
// Load ingredients
await page.getByRole('button', { name: 'cook' }).click();
// Ingredients list is still not visible and error message appears
await expect(page.getByText(ingredients[0])).toBeHidden();
await expect(page.getByText('something went wrong')).toBeVisible();
});
</code></pre>
<p>Here, we’re using MSW’s <a href="https://mswjs.io/docs/api/setup-worker/use/">use()</a> method to override the default mock response for our endpoint during a single test. Also note that we’re passing the <a href="https://mswjs.io/docs/api/http/#once">once</a> option to the <code>http.get()</code> method; otherwise, the override would be added permanently and could interfere with other tests.</p>
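<p>As an aside, MSW’s <code>HttpResponse</code> is a drop-in replacement for the standard Fetch API <code>Response</code> class, so the mocked 500 reply above behaves like a plain <code>Response</code>. Here’s a quick sketch (runnable in Node 18+, no MSW required) of what our handler returns:</p>

```javascript
// HttpResponse.json(null, { status: 500 }) from the mock above is
// equivalent to this standard Fetch API Response (Node 18+):
const response = Response.json(null, { status: 500 });

console.log(response.status); // 500
console.log(response.ok); // false; ok is true only for 2xx statuses

// The body is the JSON-serialized `null`
response.json().then((body) => console.log(body)); // null
```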
<h3>Testing complex pages</h3>
<p>We should avoid test IDs wherever possible and use more semantic queries instead. However, sometimes we need to be more precise. For example, we have a “delete profile” button on our user profile page that shows a confirmation modal with “delete profile” and “cancel” buttons inside. We need to know which of the two delete buttons we’re pressing in our tests.</p>
<p>The markup could look <a href="https://github.com/sapegin/playwright-article-2024/blob/master/src/components/Profile.js">like so</a>:</p>
<pre><code><button type="button">Delete profile</button>
<div
role="dialog"
aria-label="Delete profile modal"
aria-modal="true"
>
<h1>Delete profile</h1>
<button type="button">Delete profile</button>
<button type="button">Cancel</button>
</div>
</code></pre>
<p>The first “delete profile” button isn’t a problem: when we click it, it’s the only one present on the page. However, once the modal is open, we have two buttons with the same label.</p>
<p>So, how do we click the one inside the modal dialog?</p>
<p>The first option would be to assign a test ID to this button, and sometimes that’s the only way. Usually, though, we can do better. We can nest locators, so we could target the container (the modal dialog) first, and then the button we need inside the container:</p>
<pre><code>await page
// Locate the dialog by its aria-label
.getByRole('dialog', { name: 'delete profile modal' })
// Locate the button by its label inside the dialog
.getByRole('button', { name: 'delete profile' })
// Click the button
.click();
</code></pre>
<p>It’s slightly more complex when the container doesn’t have any semantic way to target it, like a <code>section</code> with a heading (<code>h1</code>, <code>h2</code>, and so on) inside. In this case, we can target all sections on a page and then <a href="https://playwright.dev/docs/locators#filtering-locators">filter</a> them to find the one we’re looking for.</p>
<p>Imagine markup like so:</p>
<pre><code><section>
<h2>Our newsletter</h2>
<form>
<input
type="email"
name="email"
aria-label="Email"
placeholder="Email"
/>
<button type="submit">Subscribe</button>
</form>
</section>
</code></pre>
<p>We can click the “Subscribe” button inside the “Our newsletter” section in a test like so:</p>
<pre><code>await page
// Locate all sections on a page
.locator('section')
.filter({
// Filter only ones that contain "Our newsletter" heading
has: page.getByRole('heading', { name: 'our newsletter' })
})
// Locate the button inside the section
.getByRole('button', { name: 'subscribe' })
.click();
</code></pre>
<p>Coming back to our profile deletion modal, we can test it <a href="https://github.com/sapegin/playwright-article-2024/blob/master/tests/profile.spec.js">like so</a>:</p>
<pre><code>const { test, expect } = require('@playwright/test');
test('should show success message after profile deletion', async ({
page
}) => {
await page.goto('/profile');
// Attempting to delete profile
await page.getByRole('button', { name: 'delete profile' }).click();
// Confirming deletion
await page
.getByRole('dialog', { name: 'delete profile modal' })
.getByRole('button', { name: 'delete profile' })
.click();
// We are on the success page
await expect(
page.getByRole('heading', { name: 'your profile was deleted' })
).toBeVisible();
});
</code></pre>
<p>Here, we’re using the <a href="https://playwright.dev/docs/locators#locate-by-role">getByRole()</a> locator, as in previous examples, to find all elements we need.</p>
<h2>Debugging</h2>
<p>Playwright docs have a thorough <a href="https://playwright.dev/docs/running-tests">debugging guide</a>.</p>
<p>However, it’s usually enough to check the locator or inspect the DOM for a particular step of the test after running the tests.</p>
<p>Click any operation in the log first, and then do one of the following:</p>
<p><strong>To debug a locator</strong>, click the <a href="https://playwright.dev/docs/test-ui-mode#pick-locator">Pick locator</a> button, and hover over the element we want to target. We can use the <em>Locator</em> tab below to edit it and see whether it still matches the element we need.</p>
<p><strong>To inspect the DOM</strong>, click the <a href="https://playwright.dev/docs/test-ui-mode#pop-out-and-inspect-the-dom">Open snapshot in a new tab</a> button and use the browser developer tools the way we’d normally do.</p>
<p>I also often focus on a single test with <code>test.only()</code>, and watch a single file by toggling the eye button in the Playwright UI. This makes reruns faster and saves me from seeing too many errors while I’m debugging why tests are failing.</p>
<pre><code>test.only('hello world', async ({ page }) => {
// Playwright will skip other tests in this file
});
</code></pre>
<h2>Conclusion</h2>
<p>Good tests interact with the app the way a real user would; they don’t test implementation details, and they are resilient to code changes that don’t change the behavior.</p>
<p>We’ve learned how to write good end-to-end tests using Playwright, how to set it up, and how to mock network requests using Mock Service Worker.</p>
<p>However, Playwright has many more features that we haven’t covered in this article, and they may be useful one day.</p>
How I stay (more) focused with ADHDhttps://sapegin.me/blog/adhd-focus/https://sapegin.me/blog/adhd-focus/I could never stay focused on one thing for a long time. These tips help me stay focused and productive.Thu, 22 Feb 2024 00:00:00 GMT<p>I could never stay focused on one thing for a long time, my mind is always squirreling around. Often, I try to read an article — a few minutes later I realize that I’ve switched to another app or a browser tab in the middle of a paragraph. Also, I’m sensitive to noise and loud sounds, and I feel overwhelmed after a few hours of working in the open space environment of a typical office. Being diagnosed with ADHD at the age of 39 explained these things and many others.</p>
<p>In this article, I share a few things that help me complete my tasks. Mostly, they minimize distractions in different ways and reduce the number of things I need to keep in my memory. Some are specific to my job as a software engineer, but most are fairly generic. I’m only familiar with the Apple ecosystem, so if you have suggestions for Windows, Linux, or Android, please let me know. Even if you don’t have ADHD, you might find something useful for yourself.</p>
<h2>Disable notifications everywhere</h2>
<p>I have notifications disabled for all apps on my Mac and iPad, and for most on my iPhone.</p>
<p>The only exceptions on iPhone are messenger apps, email (without sound though), and a few other apps that are sending infrequent but important updates (like my bank).</p>
<p>I tried disabling email notifications too, but I ended up checking my inbox way too often myself.</p>
<p>My general rule for notifications is: better to have too few than too many. If I get a single spam notification from some app, I immediately disable all notifications for it, unless there’s an option to disable only the promotional ones, which is rare.</p>
<h2>Work with a single screen</h2>
<p>I think multiple-screen setups aren’t worth it: they require too much space on the desk and too much meta work — moving windows between screens and so on. They also tend to have too much information visible at the same time.</p>
<p>I have used a single 27” screen for the past decade or so, and for me, it’s the perfect size.</p>
<h2>Turn on some music</h2>
<p>I like listening to music while I’m working, and usually I wear headphones. The right kind of music helps me to focus on work, and headphones, especially noise-canceling, reduce distractions and sensory overload from things happening around me.</p>
<p>I like different styles of music, depending on what I’m doing:</p>
<ul>
<li><strong>For programming</strong> — something loud, heavy, and energetic, like melodic death metal or post-metal. Be’Lakor, Katatonia, and Russian Circles are among my favorites.</li>
<li><strong>For writing</strong> — something soft and quiet, like lo-fi. The <a href="https://apps.apple.com/us/app/lola-stream-lofi-music/id1596929185">Lola app</a> is great to listen to in a café (when it’s too noisy or I disagree with their choice of music), or <a href="https://www.youtube.com/@LofiGirl">Lofi girl’s</a> playlists.</li>
<li><strong>For reading</strong> — <a href="https://music.apple.com/de/playlist/ambient-night/pl.u-KVXBDYWCLrrEerg?l=en-GB">something quiet</a>, like ambient.</li>
<li><strong>For cooking</strong> — something in my own language, so I can sing along.</li>
</ul>
<p><strong>Note:</strong> I prefer music without lyrics because they distract me, especially if I understand the language. However, death metal, with its growling, is an exception: the singing is so incomprehensible that it essentially becomes music.</p>
<p>When I’m not wearing headphones, we often listen to sets by <a href="https://www.youtube.com/@MyAnalogJournal">My Analog Journal</a> and <a href="https://youtube.com/@chrisluno">Chris Luno</a>, <a href="https://music.apple.com/de/playlist/good-vibes-only/pl.u-2aoqXKzfG66mb65?l=en-GB">our favorite playlist</a>, or start a station on Apple Music based on one of the songs from Pink Floyd, The Cure, or something similar.</p>
<p><strong>Tip:</strong> Noise-cancelling headphones allow quieter music in noisier environments, or even no music at all — they may reduce sounds to a comfortable level.</p>
<h2>Minimize distractions on the screen</h2>
<p>I don’t like to see stuff unrelated to what I’m doing, especially things that change often or with animation, like icons in the dock or the menu bar, so I try to hide as many of them as possible.</p>
<h3>Maximize app windows</h3>
<p>I always maximize app windows. Or, more often, I put two windows side-by-side.</p>
<p>My typical setup is a code editor on the left half of the screen, and the browser or terminal on the right half of the screen.</p>
<p><strong>Tip:</strong> I use <a href="https://rectangleapp.com">Rectangle</a> app to set up keyboard shortcuts to resize windows (I only have three shortcuts there: left/right half of the screen and full screen).</p>
<h3>Disable tabs in your code editor</h3>
<p>My code editor setup is very minimalistic. I disable tabs and all toolbars, so they don’t distract me. Sometimes I have the file tree open, and that’s pretty much all I see.</p>
<p><strong>Tip:</strong> <a href="/blog/hiding-tabs-in-visual-studio-code/">Hiding tabs in Visual Studio Code</a> used to be tricky but now there’s an option to do that. WebStorm always had an option to hide tabs.</p>
<h3>Hide icons on the menu bar</h3>
<p>I don’t like seeing too many icons in the menu bar, especially colorful or animated. Thanks to <a href="https://www.macbartender.com">Bartender</a> most of the menu bar icons are hidden in a menu I can open when I need to.</p>
<p>Or, you could set Bartender up to show icons conditionally. For example, show battery indicator only when it’s below a certain level, so when you work in a café, you know when to wrap up your work and go home.</p>
<p><strong>A little bird told me:</strong> You could also hide the menu bar, and it will appear on hover.</p>
<h3>Disable dock on a Mac</h3>
<p>I’m not sure you can actually disable the Dock on a Mac, but hiding it and <a href="https://github.com/sapegin/dotfiles/blob/d706fd2315e7c58f89ab4c46672a1b165dc0f192/setup/osx.sh#L325">increasing the opening delay</a> to one second is enough to rarely see it.</p>
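<p>For reference, the linked script boils down to a couple of <code>defaults</code> commands (macOS only; the one-second delay is just my preference):</p>

```shell
# Keep the Dock hidden and delay its reveal on hover
defaults write com.apple.dock autohide -bool true
defaults write com.apple.dock autohide-delay -float 1

# Restart the Dock for the settings to take effect
killall Dock
```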
<h3>Install an ad blocker</h3>
<p>Ads on web pages are very distracting, especially animated ones, and I can’t imagine using the internet without an ad blocker again.</p>
<p><strong>Tip:</strong> I use <a href="https://adblockplus.org/">Adblock Plus</a> on desktop, and <a href="https://adguard.com/">AdGuard</a> on iPhone and iPad.</p>
<h2>Use Pomodoro technique</h2>
<p>The <a href="https://en.m.wikipedia.org/wiki/Pomodoro_Technique">Pomodoro technique</a> was inspired by the tomato-shaped kitchen timer. The idea is that you split your work into 25-minute blocks (<em>a pomodoro</em>) with a 5-minute break between each block, and a longer 15-30 minute break after each four pomodoros.</p>
<p>During a pomodoro, you’re supposed to focus on a single task (ideally, the one you select before starting the timer), and avoid distractions like checking your social networks, email, or going to the kitchen to make another cup of coffee.</p>
<p>And it kinda works, though I don’t use it very often, and I can’t imagine working for two hours with only three 5-minute breaks. So feel free to experiment with longer breaks and shorter work intervals (or the opposite if that’s your thing). It’s also useful when you need to report time spent on certain tasks.</p>
<p><strong>Tip:</strong> I used to use the <a href="https://www.tadamapp.com">Tadam</a> app as a Pomodoro timer but recently I switched to <a href="https://www.timetimer.com/collections/all-1/products/time-timer-mod-home-edition">a mechanical timer</a>.</p>
<p><strong>Tip:</strong> Timeboxing tasks, meaning working on something only for, let’s say, 10 or 20 minutes, is a good way of making progress on things you don’t want to do.</p>
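<p>For fellow programmers, the schedule above is easy to express as a tiny function (a toy sketch with the interval lengths as parameters, not a real timer):</p>

```javascript
// Pomodoro schedule: work blocks with short breaks in between,
// and a longer break after every fourth pomodoro.
function pomodoroSchedule(
  pomodoros,
  work = 25,
  shortBreak = 5,
  longBreak = 20
) {
  const schedule = [];
  for (let i = 1; i <= pomodoros; i++) {
    schedule.push({ type: 'work', minutes: work });
    if (i < pomodoros) {
      schedule.push({
        type: 'break',
        // Longer break after every fourth pomodoro
        minutes: i % 4 === 0 ? longBreak : shortBreak,
      });
    }
  }
  return schedule;
}

// Four pomodoros: 25-minute work blocks with three 5-minute breaks
console.log(pomodoroSchedule(4));
```

<p>Tweak the numbers to find the rhythm that works for you.</p>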
<h2>Keep your phone in a drawer</h2>
<p>I try to keep my phone in my desk drawer, or at least on the side of my desk upside down, so I don’t see the notifications.</p>
<p><strong>Tip:</strong> For the same reason I tend to keep my phone on the kitchen counter in the evening, and never bring it to the bedroom.</p>
<h2>Work on an iPad</h2>
<p><a href="https://sapegin.me/blog/writing-on-ipad/">Using an iPad</a> (with an external keyboard) for certain tasks, like writing an article, works very well for me — somehow it feels like a single-task device and it’s easier to focus.</p>
<p><strong>Tip:</strong> Most tech writing I do in Markdown, in the <a href="https://ia.net/writer">iA Writer</a>.</p>
<p><strong>Tip:</strong> I often go to a café to write, and it helps me a lot to focus on writing. As much as I loathe noise and distractions, the café noise and presence of other people (known as <em>body doubling</em>, see below) help me to focus on writing. Unless, of course, I can hear and understand what people are talking about — in this case, I must hear the gossip.</p>
<h2>Unload your brain</h2>
<p>If you’re trying to focus on a task, and suddenly remember that you need to buy tortillas, do laundry, or get an idea for a new awesome article, write it down right away to your shopping list, todo, or your notes.</p>
<p>I use several tools to keep notes and tasks:</p>
<ul>
<li><a href="https://bear.app">Bear</a> for most of my notes and freeform project planning.</li>
<li><a href="https://www.notion.so/">Notion</a> when I need to share a project with someone (it has poor UX but many great features).</li>
<li>Paper todo cards (see “Make a list” section below).</li>
<li><a href="https://www.anylist.com">AnyList</a> for the shopping list because I can share it with my girlfriend, and even ask Alexa to add things there.</li>
<li>Apple Notes occasionally for short-term or throw-away notes.</li>
</ul>
<h2>Auto close distracting apps</h2>
<p>When I suddenly fall into a flow state (rarely, but it happens), I don’t want to be distracted by new messages in Slack or Telegram. To help with that, I use <a href="https://marco.org/apps">Quitter</a> to automatically close such apps after a period of inactivity.</p>
<h2>Improve meeting notifications</h2>
<p>I’ve always struggled with meeting notifications. For some reason, all calendar apps show notifications 10 minutes before the meeting, which is way too early for me. Even in the office, one usually doesn’t need 10 minutes to walk to a meeting room. At home, it’s even worse: these notifications distract me from what I’m doing, but by the time I need to join the meeting, I’ve usually completely forgotten about it.</p>
<p><a href="https://sindresorhus.com/dato">Dato</a> is a calendar app that gives me quick access to my calendar from the menu bar. My favorite feature is that it shows upcoming calls in the menu bar, and there’s a global shortcut to open the call link in the default browser.</p>
<h2>Make a list</h2>
<p>I like making lists, especially when a task is complex and involves many steps. Lists (sometimes) help me stop procrastinating at the very beginning, when a task is so big that I don’t know how to start it for days.</p>
<p>Lists also help me to remember all the little things I need to do, instead of trying to keep them in my head.</p>
<p>I mainly use two kinds of lists:</p>
<ul>
<li><strong>Daily todo cards</strong>, similar to <a href="https://ugmonk.com/en-de/pages/analog">Analog</a>.</li>
<li><strong>Project todos</strong> in Bear or Notion.</li>
</ul>
<p>I’m still figuring out what works best for me, and will probably write another article if I ever find a working solution.</p>
<h2>Work from home</h2>
<p>A typical office is full of distractions, especially one with an open plan, where you sit together with dozens of other people in the same room. Someone is constantly talking to someone else. Someone is having a Zoom call next to you. Someone is constantly distracting you with “quick questions” that take at least 20 minutes to answer and 20 more to remember what you were doing before it.</p>
<p>At home, I have very few distractions, and I can organize my workspace in a way that works for me.</p>
<p>However, I feel more productive when my girlfriend is working next to me. We’re not working for the same employer but it kinda sets the right mood for me: we’re working here. When it’s just me, it’s harder to focus on work and not be distracted by something else.</p>
<p><strong>Tip:</strong> This is called <a href="https://lifehacker.com/use-body-doubling-to-increase-your-productivity-1849021265">body doubling</a>, and if you live alone, you could try to have a video call with a friend, or even watch someone working on YouTube.</p>
<h2>Work with an AI assistant</h2>
<p>I often have trouble starting to work on a new task, or sometimes I don’t know how to move forward and start procrastinating. Or spend way too much time googling a solution to a seemingly simple problem.</p>
<p>Recently, I started using <a href="https://chat.openai.com/">ChatGPT</a> to generate an initial solution or to brainstorm ideas, whether it’s a piece of code or a letter I need to write.</p>
<p>I also use <a href="https://github.com/features/copilot">GitHub Copilot’s</a> inline chat, and I like that it’s unobtrusive, and knows enough context, so the queries could be very short.</p>
<p>I’ve also tried Copilot’s autocomplete and found it infuriatingly annoying and distracting.</p>
<p>Usually, AI could give you a draft you could improve on, or help with the coding “bureaucracy”, like tricky syntax, generating types, and so on. It’s also good for throw-away solutions, and it’s great for writing official letters. However, you really need to understand what’s going on to detect when the AI is hallucinating and gives you the wrong solution.</p>
<p>Think of it as a slightly smarter Stack Overflow or an assistant who can google something for you or write some code but doesn’t understand whether it’s a good solution or not, or whether it’ll work at all.</p>
<p><strong>Tip:</strong> I usually use ChatGPT via the <a href="https://sindresorhus.com/quickgpt">QuickGPT app</a>.</p>
<h2>Choose a non-distracting color scheme for your editor</h2>
<p>I’ve created my own color scheme that gives me enough contrast to distinguish different things in the code but not overly bright to make my eyes bleed. I called it <a href="https://sapegin.me/squirrelsong/">Squirrelsong</a>, because why not?</p>
<p><strong>Tip:</strong> To make the coding experience even better, I use <a href="https://www.monolisa.dev">MonoLisa</a> font in all my code editors and a terminal.</p>
<h2>Conclusion</h2>
<p>Minimizing distractions and knowing what to do next helps a lot in staying focused and productive. If you’ve got any tips to stay productive, I’ll be delighted to learn about them on <a href="https://mastodon.cloud/@sapegin">Mastodon</a> or <a href="https://bsky.app/profile/sapegin.bsky.social">Bluesky</a>.</p>
<p>You may be also curious to read <a href="https://sapegin.me/man/">my personal user manual</a>.</p>
<hr />
<p><em>Thanks to Anna Bulavina, Alexei Crecotun, Margarita Diaz, and Anita Kiss for their suggestions.</em></p>
Typewriter 2.0: search for the perfect writing experience on iPadhttps://sapegin.me/blog/writing-on-ipad/https://sapegin.me/blog/writing-on-ipad/I struggle writing on my desk, so I wanted to find a great mechanical keyboard for iPad that I can take to some nice café with me.Mon, 12 Feb 2024 00:00:00 GMT<p>I like to write: <a href="/blog/">blog articles</a>, journals, and even <a href="/book/">books</a>. However, I struggle writing at home, especially at my desk.</p>
<p>What works best for me is going to a nice café and writing on an iPad with an external keyboard. I’m less distracted by other tasks and somehow the ambient noise helps me to focus on writing (though sometimes I want to know the gossip of that couple at the table nearby or wear headphones when it’s too loud).</p>
<p>We’ll talk about the best environment for writing some other time — this article is about finding the best keyboard and software for writing on an iPad.</p>
<h2>Finding the best keyboard for an iPad</h2>
<h3>Apple Magic Keyboard</h3>
<p>The first keyboard I tried with an iPad was the <a href="https://www.apple.com/shop/product/MK2A3LL/A/magic-keyboard-us-english">Apple Magic Keyboard</a>. It’s a bit better than the built-in keyboard in my MacBook Pro 2019, which is so bad, that it’s hardly an achievement.</p>
<p>In addition, since I started using a mechanical keyboard on my desk, I wanted a similar experience on iPad too.</p>
<p>The overall experience of working on an iPad with a hardware keyboard is nice though. Many shortcuts work the same way as on the desktop (like Cmd+Tab to switch between apps, Caps lock to switch input language, and so on).</p>
<p><strong>Storytime:</strong> I’ve used ergonomic keyboards throughout most of my career as a software engineer (probably for about 15 years or so), with the Microsoft Ergonomic Keyboard 4000 being my favorite — a great 50€ keyboard. I never liked the idea of a mechanical keyboard — I thought I wouldn’t like the noise they make and the pressure one needs to apply to type. However, I was disappointed by the poor quality of the newer and significantly more expensive Microsoft Sculpt and Surface keyboards. So when one of them died just one year after I bought it, I decided to try a mechanical keyboard. I ended up buying a split <a href="https://mistelkeyboard.com/products/bd20945a731491407807e80d48c5d790">Mistel Barocco</a>, which quickly converted me to a mechanical keyboard fan. The split design turned out to be equally, if not more, comfortable than a traditional ergonomic keyboard.</p>
<p><strong>Verdict</strong>: Apple Magic Keyboard is a decent choice for travel — very small and light (240 g). However, the typing experience isn’t great.</p>
<h3>YMDK Air40</h3>
<p>The next keyboard I tried was the 40% <a href="https://ymdkey.com/products/air40-full-keyboard-rgb-via-supported">YMDK Air40</a>. It’s very cute and tiny, and it sounds good.</p>
<p>However, there are a few issues:</p>
<ul>
<li>Typing is difficult, especially in Russian — too many of the Russian alphabet’s 33 letters are hidden on other layers (the keyboard has three layers), and you constantly need to remember how to access them.</li>
<li>I’m not a fan of a brick ortholinear layout (when rows of letter keys are placed directly under each other, without a half-key shift like on most keyboards).</li>
<li>Not great for travel — it’s small but very heavy (495 g without keycaps) thanks to its aluminum case.</li>
<li>It’s the only non-wireless keyboard I’ve tried for an iPad.</li>
</ul>
<p><strong>Tip:</strong> The orange/black keycaps are <a href="https://www.amazon.de/-/en/gp/product/B09ZY6C2JS/">from Amazon</a>.</p>
<p><strong>Tip:</strong> One thing I do on all my keyboards is placing the space key upside down: it makes the bottom edge less sharp when you press it with your thumb.</p>
<p><strong>Note:</strong> All my mechanical keyboards have brown switches, either Cherry or Gateron. I like the subtle resistance (in comparison to linear reds) and relative quietness (in comparison to clicky blues).</p>
<p><strong>Note:</strong> I haven’t done a lot of mods for my keyboards, except adding layers of foam and coins to my split keyboard, because separate halves are too light and move a lot on a desk. The YMDK keyboard already came with foam.</p>
<p>In the end, it just suddenly died after a few months of infrequent use…</p>
<p><strong>Verdict</strong>: YMDK Air40 looks really cool but typing on it is too much work. It’s a bit too esoteric for my taste.</p>
<h3>Anne Pro 2</h3>
<p>The next keyboard I tried was a 60% <a href="https://getannepro.com/products/anne-pro-2-mechanical-keyboard-60-keyboard">Anne Pro 2</a>.</p>
<p>There’s a lot to like about this keyboard:</p>
<ul>
<li>The layout is very nice, almost no need to relearn anything after a full desktop keyboard since most of the keys are accessible on the main layer and are located at familiar places (almost standard US layout). The only hidden keys are F-keys, Insert/Delete/Home/End/Page up/Page down, and <code>~</code> (the only missing Russian letter — <code>ё</code>).</li>
<li>Magic arrow keys — a group of modifier keys, at the bottom right corner, that work as arrow keys when you quickly tap them instead of holding them.</li>
<li>The sound is almost as good as on YMDK Air40 (the aluminum case helps the latter a lot though).</li>
</ul>
<p><strong>Tip:</strong> I also set up the Escape key to be magic: to act as <code>~</code> (or <code>ё</code>) when tapped quickly, and as Escape when pressed for a longer time, which gives me a complete Russian layout on the base layer with all the letters on their standard places.</p>
<p>I really like this keyboard except for two things:</p>
<ul>
<li>It’s big and heavy (670 g), and not very portable.</li>
<li>It’s a bit tall, so when you try to use it on your lap, it’s not very comfortable, and even on a desk it would benefit from a wrist rest.</li>
</ul>
<p><strong>Verdict</strong>: Anne Pro 2 is a great keyboard for its price ($80) and an awesome choice for using at home on a desk. I think it’s my favorite keyboard of the four overall, but portability and height are too important for my use cases.</p>
<h3>NuPhy Air60</h3>
<p>The last keyboard I’ve tried, and so far my favorite, is the low-profile <a href="https://nuphy.com/products/air60">NuPhy Air60</a>. It’s also 60%, though it feels more like 65%.</p>
<p>There are a few improvements over the Anne Pro 2:</p>
<ul>
<li>The layout is very similar to Anne Pro 2, with the addition of actual arrow keys.</li>
<li>It’s very light for a mechanical keyboard, only 455 g.</li>
</ul>
<p>Some things could have been better though:</p>
<ul>
<li>The right Shift is very narrow, which makes it hard to target when touch typing, but you can get used to it eventually.</li>
<li>The right Alt and Control are missing.</li>
<li>There aren’t many alternative keycaps for it, so personalization options are a bit limited.</li>
<li>The sound isn’t as good as on normal-profile keyboards.</li>
<li>The silicone feet keep coming unglued, and I had to attach them with super glue after I lost one of them (luckily, the keyboard comes with some spares).</li>
<li>It forgets the RGB light settings after about a week, no matter how you set them: on the keyboard itself or in the app.</li>
<li><a href="https://nuphy.com/pages/nuphy-console">Software</a> to customize the keyboard is for Windows only. It’s also very limited, and I wasn’t able to change the behavior of the Escape key (the same way I did for Anne Pro 2), so I need to press the Fn key every time I need to type <code>~</code>, or <code>ё</code>, which is quite often.</li>
</ul>
<p>The newer version, <a href="https://nuphy.com/collections/keyboards/products/air60-v2">Air60 V2</a>, possibly fixes some of the issues.</p>
<p><strong>Tip:</strong> I always switch Control and Command on my keyboards because I prefer the Windows-like experience of Ctrl+C/Ctrl+V and other shortcuts.</p>
<p>The main difference from the Anne Pro 2 is that the NuPhy Air60 is a low-profile keyboard, meaning it’s much more compact. It’s still heavier and larger than the Apple Magic Keyboard, but I could certainly take it with me when I go to a café to do some writing. I even traveled with it on a plane.</p>
<p><strong>Tip:</strong> I also got a <a href="https://nuphy.com/collections/keycaps/products/twilight-nsa-dye-sub-pbt-keycaps">COAST Twilight</a> keycaps set, and mixed it with the default keycaps to make the keyboard look a bit more interesting.</p>
<p>The leather case, <a href="https://nuphy.com/collections/accessories/products/nufolio-v3-for-air60-v2">NuFolio V2</a>, is also quite nice and gives the keyboard good protection in a bag.</p>
<p>The case can also be used as a stand for an iPad, though that turned out to be not as good as I thought:</p>
<ul>
<li>The bottom edge of the cover is so sharp that it gets quite painful after using the keyboard on your lap for some time.</li>
<li>The magnets that hold the keyboard in the case aren’t strong enough to keep it from sliding.</li>
<li>It makes the screen too close — I like it at 5-10 cm from the keyboard.</li>
</ul>
<p><strong>Verdict</strong>: NuPhy Air60 is my choice of portable keyboard for an iPad. It’s not as good as Anne Pro 2 but it’s significantly lighter and smaller.</p>
<h2>Finding ~the best~ an okay case for iPad</h2>
<p>I don’t like the traditional triangular design of <a href="https://www.apple.com/shop/product/MQDV3ZM/A/smart-folio-for-ipad-pro-11-inch-4th-generation-marine-blue">Apple’s</a> and most third-party cases, because it makes the bottom surface too small, which makes the iPad unstable when you try to use it on your lap.</p>
<p>The case I ended up buying, ESR Folio, has a large bottom surface and two screen angles, which makes it comfortable for working on a desk and on a lap.</p>
<p>I have a couple of issues with this case though:</p>
<ul>
<li>The iPad doesn’t stay well in the grooves and sometimes slides out (especially when you type on your lap).</li>
<li>When using it on a desk, the keyboard needs to be in a very particular position — somewhere on the edge of the bottom surface of the case — otherwise it’s wobbly. Not a big issue for me; that’s where I like it anyway.</li>
</ul>
<p>However, after two years, it’s falling apart, and looks like <a href="https://www.esrgear.de/categories/handyhuellen/ipad-huellen/">they don’t have this design anymore</a>. Maybe I’ll try something else soon.</p>
<h2>Finding the best writing app for iPad</h2>
<p>I write mostly in Markdown, and I use it for pretty much everything: notes, journals, articles, books…</p>
<p>I’ve been using <a href="https://ia.net/writer">iA Writer</a> on a Mac and an iPad for many years, and I think it’s the best app for writing Markdown.</p>
<p>However, recently I had to switch to another Apple ID (Apple makes it close to impossible to change the country), and I had to rebuy all the apps I had on my old Apple ID.</p>
<p>Unfortunately, iA Writer prices are totally insane now: about 60 euros for <em>each</em> platform. I never thought <em>luxury software</em> would be a thing but here we are. iA Writer is undoubtedly a very polished and high-quality app, though clearly not an essential one.</p>
<p>This made me try several <em>significantly</em> cheaper alternatives to iA Writer: <a href="https://serpensoft.info">iWriter Pro</a>, <a href="https://1writerapp.com">1Writer</a>, and <a href="https://www.bywordapp.com">Byword</a>. All have the same issue: the Markdown syntax highlighting is too bright for my taste. I can recommend iWriter Pro, though, if you can live with bright colors — it’s a decent app and almost ten times cheaper than iA Writer.</p>
<p>I ended up rebuying iA Writer for Mac and iPad. It’s almost perfect for what I want, except for a few minor(ish) things. I love the monochrome Markdown color scheme, and it has the most beautiful fonts. However, I can’t stand its bright teal cursor and selection color, which you can’t change. Still, I can live with that, since the other options are worse.</p>
<p>I also use a few other apps for more specialized writing (all support Markdown):</p>
<ul>
<li><a href="https://code.visualstudio.com/">Visual Studio Code</a> for documentation;</li>
<li><a href="https://bear.app/">Bear</a> for general notes and personal projects;</li>
<li><a href="https://www.notion.so/">Notion</a> for shared projects;</li>
<li><a href="https://dayoneapp.com/">Day One</a> for journaling.</li>
</ul>
<p><strong>Tip:</strong> Check out <a href="https://sapegin.me/squirrelsong/">my color schemes</a> for Visual Studio Code and Bear to have a monochrome Markdown highlighting similar to iA Writer.</p>
<p>Also, the last three apps work on iPad, Mac, and iPhone, so I can read and edit my documents on various devices.</p>
<h2>Conclusion</h2>
<p>I’m quite happy with my current setup:</p>
<ul>
<li>iPad 11” with ESR Folio case;</li>
<li>NuPhy Air60 mechanical keyboard with a leather case;</li>
<li>iA Writer as a main writing app.</li>
</ul>
<p>The overall weight (almost 1.5 kg) is comparable to a MacBook, but I like the comfort and flexibility of the external mechanical keyboard. I also like the single-taskedness of the iPad, which allows me to better focus on writing.</p>
<p>And if I want to go out to do some coding, I’ll take my MacBook instead.</p>
Healthier way to open source your codehttps://sapegin.me/blog/healthy-open-source/https://sapegin.me/blog/healthy-open-source/Open source is about sharing your code. Anything else is optional. Don’t want to spend time answering issues and reviewing pull requests? It’s totally up to you!Tue, 26 Sep 2023 00:00:00 GMT<p>Open source was about sharing the code with fellow developers, learning new skills, and having fun. Somehow, it became for many a threat to their mental health, and an unpaid job. Multi-million corporations take advantage of thousands of developers working for free around the globe. And on top of this, we have a generation of developers who demand that open source maintainers fix their issues for free.</p>
<h2>Why open source is failing for me</h2>
<p>I’ve been <a href="/blog/no-complaints-oss/">struggling with my open source projects</a> for years. I used to enjoy writing code, and learned a lot of things developing these projects, and not just coding skills. However, eventually, <a href="/blog/open-source-no-more/">I burned out</a> solving other people’s problems and trying to maintain projects that were too big for a single person working in his free time.</p>
<h2>What I expect from my open source projects</h2>
<p>I enjoy coding much less than I used to but I still have <a href="/">a few projects</a> that I work on for myself, and I’d like to keep the code open for several reasons:</p>
<ul>
<li>Tooling: GitHub for hosting code and documentation, npm for sharing libraries among several projects, continuous integration, semantic versioning, and so on.</li>
<li>Portfolio of projects that I could use as part of my résumé.</li>
</ul>
<p>However, there are many things I’d like to change:</p>
<ul>
<li>Working with users and contributors: answering questions, fixing bugs, reviewing and merging pull requests.</li>
<li>Constant anxiety caused by unread notifications.</li>
<li>A feeling of guilt caused by unanswered issues, unmerged pull requests, or missed releases.</li>
<li>Being a subject of entitlement and toxicity of users.</li>
<li>Having a second, unpaid job.</li>
<li>And, in general, interacting with strangers on the internet.</li>
</ul>
<p>I’ve tried to reduce user expectations before with the <a href="http://sapegin.github.io/powered-by-you/">Powered by you</a> badge I was adding to projects I wasn’t actively maintaining, but it wasn’t enough. A few other approaches I found: <a href="https://boyter.org/posts/the-three-f-s-of-open-source/">The three Fs of open source development</a> and <a href="https://github.com/ErikMcClure/bad-licenses/blob/master/dont-ask-me.md">Don’t ask me license</a>.</p>
<p>This time I want to go further and give myself the freedom to work only on projects and features I use myself, and ignore everything else. Essentially, I only want to share <em>the code</em>, not my <em>time</em> or my <em>mental capacity</em>.</p>
<p>Curiously, the <a href="https://choosealicense.com/licenses/mit/">MIT license</a> already says that:</p>
<blockquote>
<p>The software is provided “as is,” without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, and noninfringement. In no event shall the authors or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.</p>
</blockquote>
<p>I’m not good with reading this kind of legal language, so I asked ChatGPT to summarize it in peasant English:</p>
<blockquote>
<p>The software is given as it is, with no promises that it will work perfectly. If something goes wrong when using it, the people who made it can’t be held responsible for any problems or damages.</p>
</blockquote>
<p>Totally agree!</p>
<h2>One way to get there</h2>
<p>I think I found a good solution for my projects that was inspired by <a href="https://snarky.ca/the-social-contract-of-open-source/">Brett Cannons’s article</a> and <a href="https://twitter.com/blvdmitry/status/1701916984806383720">Twitter discussion with Dmitry Belyaev</a>:</p>
<ol>
<li>Clearly state the project status.</li>
<li>Convert all open issues to discussions.</li>
<li>Block the creation of new issues.</li>
<li>Unsubscribe from all notifications.</li>
</ol>
<p>Let’s talk about each step in greater detail.</p>
<h3>1. Clearly state the project status</h3>
<p>This is the explanation I came up with:</p>
<hr />
<p>This isn’t a conventional open source project. It’s a personal project that is developed openly. It has been developed with a single user in mind — me, and only has features I use myself.</p>
<p>I adopted an open source development model, including things like hosting code on GitHub, npm packages, and semantic versioning, because it makes it easier for me.</p>
<p>The code is provided for free according to the MIT license (see the License.md file in the root of the repository), but the social interactions between project users and maintainer(s) are restricted.</p>
<p>I don’t look at the issues and discussions; however, you’re free to use them to talk to other users, share workarounds for bugs, and so on.</p>
<p>I appreciate pull requests with bugfixes and documentation improvements, if they are concise and well structured. I may look at them and merge them one day, but no promises.</p>
<p>Contributions of new features will most likely be ignored. I don’t have the time or emotional capacity to review and later maintain them.</p>
<p>I can’t promise that the project will evolve the way you want it to, or that it will evolve at all. However, you are free to fork the project, I won’t feel bad.</p>
<p>And please don’t at-mention me anywhere, that blue dot makes me too anxious.</p>
<p><em>R’amen,<br/>Artem</em></p>
<hr />
<p>I add it <a href="https://github.com/sapegin/mrm/discussions/298">as an announcement</a> in the project’s <a href="https://docs.github.com/en/discussions/quickstart">Discussions on GitHub</a>. Then I pin it and lock the thread.</p>
<h3>2. Convert all open issues to discussions</h3>
<p>Firstly, I <a href="https://docs.github.com/en/discussions/managing-discussions-for-your-community/managing-categories-for-discussions">create a new Discussions category</a> on GitHub — “Bugs”, so I can move all bug reports there.</p>
<p>Then, I label all open issues, so I can <a href="https://docs.github.com/en/discussions/managing-discussions-for-your-community/managing-discussions#converting-issues-based-on-labels">convert all issues labeled with a certain label</a> to discussions of a certain category.</p>
<h3>3. Block the creation of new issues</h3>
<p>This is something that I’ve learned just recently but that changes everything: we could disable the creation of new issues on GitHub without hiding existing ones, so people could still google them and possibly find answers to their questions. We could also redirect folks to the new discussion page when they try to open a new issue.</p>
<p>To disable the creation of new issues, add <a href="https://docs.github.com/en/communities/using-templates-to-encourage-useful-issues-and-pull-requests/configuring-issue-templates-for-your-repository">an issue template</a> at <code>.github/ISSUE_TEMPLATE/config.yml</code> like this:</p>
<pre><code>blank_issues_enabled: false
contact_links:
  - name: Get help
    url: https://github.com/sapegin/mrm/discussions/new?category=q-a
    about: >
      If you can’t get something to work the way you expect, open a question in our discussion forums.
  - name: Report a bug
    url: https://github.com/sapegin/mrm/discussions/new?category=bugs
    about: >
      If something is broken in the project itself, create a bug report.
</code></pre>
<p>In this issue template, the <code>blank_issues_enabled</code> option does the magic of blocking the new issue page, and the <code>contact_links</code> option defines the buttons that appear when the user tries to create a new issue. We could put a link to any site here, not just GitHub, for example <a href="https://letmegooglethat.com">Let me Google that for you</a>.</p>
<p><strong>Sad note:</strong> Unfortunately, the <a href="https://github.com/orgs/community/discussions/4951"><code>blank_issues_enabled</code> option isn’t fully enforced</a>, and I already got a few issues from folks who think they are better than others...</p>
<h3>4. Unsubscribe from all notifications</h3>
<p>The last thing I do is ignore all notifications for the project, even when someone explicitly mentions me. Unfortunately, this is the only way to save ourselves from all kinds of toxicity and bullying in the issues.</p>
<p>Instead of the default “Participating and @mentions”, choose “Ignore”:</p>
<h2>Conclusion</h2>
<p>I think I found a good compromise that would allow me to work on my personal projects in public, and allow users (if there ever are any) to help each other. This approach also keeps all the existing issues accessible and googlable.</p>
<p>I hope this will be enough for me to feel calm, and I won’t have to remove all my work from the internets (<a href="https://news.ycombinator.com/item?id=3073798">this has happened before</a>) or unpublish all my npm packages (<a href="https://www.theregister.com/2016/03/23/npm_left_pad_chaos/">this has happened too</a>).</p>
<p><em>Thanks to <a href="https://twitter.com/blvdmitry">Dmitry Belyaev</a>, <a href="https://drtaco.net">Margarita Diaz</a>.</em></p>
Why I quit open sourcehttps://sapegin.me/blog/open-source-no-more/https://sapegin.me/blog/open-source-no-more/Four main reasons I stopped maintaining most of my open source projects after ten years of contributing regularly.Wed, 13 Sep 2023 00:00:00 GMT<p>GitHub published a curious <a href="https://opensource.guide/maintaining-balance-for-open-source-maintainers/">article on avoiding burnout</a> for open source maintainers. It’s an important topic that should be discussed more widely, and I appreciate that GitHub published it.</p>
<p>The article has a good overview of possible burnout reasons, and gives some suggestions on how to avoid it. However, I feel that the main goal of the article is to convince maintainers to keep doing what they are doing for as long as possible, meaning to keep working for free. The article briefly mentions sponsoring but for most maintainers it’s unrealistic to rely on sponsoring or donations.</p>
<p>I think the healthiest solution for avoiding maintainer burnout is <em>quitting open source entirely</em>; at least, that’s the solution that worked for me. Unfortunately, I had to reach the state of burnout myself to understand that, and then it took me a long time to recover (or rather to replace maintainer burnout with a burnout in other areas of my life).</p>
<p>I’ve talked before <a href="/blog/ex-open-source/">on why open source was attractive to me</a>, and a bit on why I <a href="/blog/no-complaints-oss/">was contributing less</a>, and eventually <a href="/blog/going-offline/">quit open source</a>.</p>
<p>In this article I want to talk more about the reasons that led me to maintainer burnout, and to quitting open source after about ten years of contributing regularly, and publishing many projects.</p>
<p><strong>So what are the reasons?</strong></p>
<h2>Entitlement and toxicity of users</h2>
<p>Somehow <a href="https://mikemcquaid.com/entitlement-in-open-source/">people expect you to solve their issues</a> or implement features they need. They’ll complain that the bug you introduced in the last version broke their build, or that they need this obscure feature for their current project at work, and that if you don’t add it quickly, their boss will go berserk because the deadline is almost there.</p>
<p>Folks seem to miss that you’re working on these projects in your free time, after a long day at your full-time job, and without any compensation. They demand that you work on whatever is, in their opinion, broken or missing, and then they get angry when you’re not doing it or not doing it fast enough.</p>
<p>They miss that you might be the only person working on the project, that you’re not a part of a large team that’s paid to work full-time on maintaining the project and solving issues of its users.</p>
<p>Somehow, <em>open source</em> became a synonym of <em>free labor</em>, not just <em>free code</em>, and it’s harmful for the whole community but mostly it’s harmful for maintainers of open source projects.</p>
<p>And then, there are all the <a href="https://youtu.be/wI7L9ApnvkQ?si=IYLHpM2L4dTyaiMT">toxic comments</a> (<a href="https://medium.com/@d4nyll/the-open-source-community-have-no-place-for-disrespect-70c85d473332">see just a few examples</a>) that tell you that your software is garbage and you should just <s>kill yourself</s> quit programming, all the plus-ones (“I have the same issue”), all the pings (“any update on this?”), and other spam comments that don’t add any value…</p>
<h2>Low quality of contributions</h2>
<p>I often felt like managing contributions took more time than implementing the same features myself.</p>
<p>The overall code quality of pull requests to open source projects is usually very low, and each pull request requires a lot of time and mental effort to review, with many comments and many iterations, to bring it to an acceptable quality.</p>
<p>It often takes several months to merge a single pull request, many get abandoned, or their authors get frustrated and angry. Often someone submits a pull request, and never comes back to it again, so you waste time and energy reviewing their code for nothing (I call such pull requests <em>hit and run pull requests</em>).</p>
<p>People often submit features <em>they</em> want but it doesn’t always match the project’s vision or is outside its intended scope. They also believe that accepting their work is free for you, not thinking that you first need to review the pull request (likely multiple times), and then maintain the feature once it’s merged (likely forever).</p>
<p>And the darkest time for an open source maintainer is October, when during the Hacktoberfest people around the world <a href="https://blog.domenic.me/hacktoberfest/">spam maintainers with total nonsense</a> just to get a free t-shirt.</p>
<h2>Lack of community</h2>
<p>Most of my projects never got popular despite all my efforts to make them useful and to market them. If nobody is using your project, why bother fixing bugs, writing documentation, making a nice site, and so on?</p>
<p>My last project, the <a href="/squirrelsong/">Squirrelsong</a> color theme, is a good example here. I’ve invested a lot of time in making this theme, and I think it’s better than, and different enough from, many existing themes, and yet, it seems that I’m the only user.</p>
<p>My most popular open source project, <a href="https://react-styleguidist.js.org">React Styleguidist</a>, has over 10K stars on GitHub, and yet, I couldn’t manage to build a community around it, and to make it self-sufficient. The project is too big for one person to build it, and to manage issues and pull requests.</p>
<p>I had some good contributions over the years on various projects, but most of the time they required a lot of collaboration from my side. A few people were interested in maintaining some of my projects but, again, they needed a lot of guidance from me, so it never felt like it was saving me any time or effort.</p>
<p>There should be enough people actively working on a project to respond to issues, review pull requests, and work on new features, so even if some of them <s>get hit by a bus</s> can’t work on a project right now, it’ll continue. In reality, however, if I wasn’t doing everything, the projects would stop completely, and the issues would start to pile up.</p>
<h2>Lack of compensation</h2>
<p>Maintaining an open source project is a hard and demanding job, like any other job. The difference is that we usually get paid to do other jobs, but not for open source. Few developers can make a living (or at least any significant money) doing open source; for the majority of us, it’s nothing but frustration.</p>
<p>The most money I got for my open source work was for <a href="https://opencollective.com/styleguidist">React Styleguidist via Open Collective</a>. And it was barely enough to buy a pack of stickers once in a while. The current monthly budget of the project is $8.</p>
<p>I’ve tried <a href="https://github.com/sponsors/sapegin">GitHub sponsors</a>, with zero results, apart from a one-time $550 contribution from GitHub itself that was mysteriously cancelled the same day.</p>
<p>I have a <a href="https://www.buymeacoffee.com/sapegin">Buy me a coffee</a> button on every project’s readme but I don’t think I ever got a single cup from there. (I got some coffees from Unsplash though, which is also nothing for over 1.5 million downloads of <a href="https://unsplash.com/@sapegin">my photos there</a>.)</p>
<h2>Lack of tooling</h2>
<p>There are two problems with tooling that open source maintainers have to deal with.</p>
<p>First, <strong>the complexity of tooling involved in development of a typical open source project</strong>:</p>
<ul>
<li><strong>Publishing</strong> JavaScript code (can’t speak about other languages — most of my work is JavaScript and TypeScript) in a way that it could be used by many people <a href="https://blog.isquaredsoftware.com/2023/08/esm-modernization-lessons/">is very complex</a>, and it’s getting worse.</li>
<li><strong>Dependency upgrades</strong> often take ages, and if you have multiple projects, it could turn into a year-long adventure (I even made <a href="https://mrm.js.org">a tool</a> to help with that).</li>
<li>Generally, the amount of <strong>configuration</strong> (TypeScript, linters, bundlers, releases, dependencies, testing, continuous integration, changelog generation, and more, and more, and more…) is quickly getting out of hand.</li>
</ul>
<p>Second, GitHub could do so much more (more than nothing) to <strong>protect its users from toxic people</strong>. For example, GitHub could:</p>
<ul>
<li><strong>Detect toxic comments</strong>, and either remove them automatically, or mark them for manual review.</li>
<li><strong>Remove spam comments</strong>, and convert plus-ones to thumbs up reactions.</li>
<li><strong>Educate users</strong> posting such comments by teaching them better behaviors, or banning them if they don’t want to change.</li>
<li><strong>Make project status clear</strong>: make it clear whether a project is backed by a company or maintained by someone in their free time.</li>
</ul>
<p>I had to ignore any activity on many projects on GitHub just to avoid people at-mentioning me all the time.</p>
<h2>Conclusion</h2>
<p>Something has to change to make open source healthy but for now I don’t want to be part of it. I don’t want to help corporations make millions on free code, and receive rude comments instead of any kind of recognition.</p>
<p>The worst part is that it’s getting worse, not better.</p>
<p>Now I consider my open source projects as personal projects whose code happened to be open. It’s convenient to keep code on GitHub and use npm to share code among several projects. I only add features that I need myself, and when I need them. I don’t receive notifications on any activity on these projects. I rarely look at the issues or pull requests, and I almost never respond to them.</p>
<p>Perhaps I should either disable the issues entirely, or add a note explaining that they will likely be ignored and that forking the code may be a better option. I guess I still want projects to have a place for users to report bugs so that other users can suggest workarounds. (I explore this approach in <a href="/blog/healthy-open-source/">my next post</a>.)</p>
<p><em>Thanks to <a href="https://drtaco.net/">Margarita Diaz</a>.</em></p>
Hiding tabs completely in Visual Studio Codehttps://sapegin.me/blog/hiding-tabs-in-visual-studio-code/https://sapegin.me/blog/hiding-tabs-in-visual-studio-code/Wed, 30 Aug 2023 00:00:00 GMT<p><em>I always disable tabs in code editors because they distract me.</em></p>
<p><strong>Update:</strong> Looks like there’s finally a built-in way of disabling tabs:</p>
<pre><code>{
"workbench.editor.showTabs": "none",
"window.customTitleBarVisibility": "never"
}
</code></pre>
<p>So the hack explained below is no longer needed.</p>
<hr />
<p>By default Visual Studio Code shows tabs like this:</p>
<p>We could disable the tabs in settings:</p>
<pre><code>{
"workbench.editor.showTabs": false
}
</code></pre>
<p>However, now instead of tabs there’s a bar with a filename that looks like one long tab and takes the same amount of space:</p>
<p>Unfortunately, <a href="https://github.com/Microsoft/vscode/issues/33607">it’s impossible to hide it</a> without any hacks.</p>
<p>The only way to do it is by using the <a href="https://marketplace.visualstudio.com/items?itemName=be5invis.vscode-custom-css">Custom CSS and JS Loader</a> extension:</p>
<ol>
<li>Install the extension.</li>
<li>Create a CSS file with the following:</li>
</ol>
<pre><code>.title.show-file-icons {
display: none !important;
}
</code></pre>
<ol start="3">
<li>Add the following to the Code config file:</li>
</ol>
<pre><code>{
"vscode_custom_css.imports": [
"file:///Users/username/path/to/the/css-file.css"
]
}
</code></pre>
<ol start="4">
<li>Open the command palette (Cmd+Shift+P), and select <strong>Reload Custom CSS and JS</strong>.</li>
</ol>
<p>Finally, there are no distractions:</p>
<p><em>I found this solution <a href="https://github.com/Microsoft/vscode/issues/33607#issuecomment-424193133">in this GitHub comment</a>.</em></p>
Migrating my blog from Gatsby to Astrohttps://sapegin.me/blog/gatsby-to-astro/https://sapegin.me/blog/gatsby-to-astro/Recently, I redesigned and rebuilt my site and blog from Gatsby to Astro, and would like to share my experience.Mon, 07 Aug 2023 00:00:00 GMT<p>Recently, I redesigned and rebuilt <a href="/">my site and blog</a> from <a href="https://www.gatsbyjs.com/">Gatsby</a> to <a href="https://astro.build/">Astro</a>. I had a few goals with this rebuild:</p>
<ul>
<li><strong>Move away from Gatsby.</strong> What sounded like a good idea turned out to be a complete disaster: probably one of the worst developer experiences I’ve seen (after React Native), with poor defaults and unnecessary complexity in its GraphQL API.</li>
<li><strong>Stop shipping React.</strong> The pages are completely static, and using React to render them in the browser is unnecessary.</li>
<li><strong>Merge my homepage and my blog.</strong> The homepage is essentially a single-page site, and it doesn’t make sense to maintain it separately from the blog.</li>
<li><strong>Better represent the current me.</strong> The old homepage was focused on my open source projects, which <a href="/blog/going-offline/">aren’t an important part of my life</a> anymore.</li>
</ul>
<p>I had, however, a few technical requirements:</p>
<ul>
<li><strong>Keep using React</strong> for templates. Many static site generators still use templates, like Handlebars, which makes them hard to work with. I started using <a href="/blog/why-fledermaus/">JSX for templates</a> many years ago and then switched to React. This is still my favorite way.</li>
<li><strong>Keep using primitive components for styling.</strong> I already have <a href="https://github.com/tamiadev/tamia">a component library</a> that is based on primitive components (<code>Box</code>, <code>Flex</code>, <code>Grid</code>, <code>Stack</code>, and such), and it’s my favorite way of styling sites and apps.</li>
</ul>
<p>After some experimentation, I settled on Astro and vanilla-extract.</p>
<h2>Astro</h2>
<p><a href="https://astro.build">Astro</a> is a static site generator that prioritizes performance but is also very flexible. It gives a choice of several UI frameworks to use for templates, including React, Vue, and Svelte. By default, Astro generates static HTML pages at build time, meaning they come with zero client-side JavaScript, but it also allows adding dynamic sections to static pages.</p>
<p>The developer experience is super nice, especially after Gatsby, and Astro comes with most of the things one may need for building a blog or any other content site: <a href="https://docs.astro.build/en/core-concepts/routing/">file-based routing</a>, <a href="https://docs.astro.build/en/guides/content-collections/">content collections</a>, <a href="https://docs.astro.build/en/guides/markdown-content/">Markdown with syntax highlighting</a>, and much, much better <a href="https://docs.astro.build/en/guides/typescript/">TypeScript support</a>. The installation process is much nicer than Gatsby’s or Next.js’s. It’s very fast, and the docs are comprehensive and well written.</p>
<p>Probably the only issue I’ve had so far is that it often <a href="https://mastodon.cloud/@sapegin/110626274894320458">crashes after code changes</a>, though it seems to be a problem with my environment rather than a common issue.</p>
<p>Astro has <a href="https://docs.astro.build/en/core-concepts/astro-components/">its own components</a> that look like a very basic version of React components mixed with MDX, and we could seamlessly use React components inside Astro components:</p>
<p><!-- eslint-skip --></p>
<pre><code>---
import Layout from './Layout.astro';
import { PostPage } from '../templates/PostPage';
import type { Post } from '../types/Post';

type Props = Post & { related: Post[] };

const { url, title, description, date, tags, source, related } =
  Astro.props;
---

<Layout url={url} title={title} description={description}>
  <PostPage
    url={url}
    title={title}
    description={description}
    date={date}
    tags={tags}
    source={source}
    related={related}
  >
    <slot />
  </PostPage>
</Layout>
</code></pre>
<p>In this Astro component, we import another Astro component (<code>Layout</code>) and a React component (<code>PostPage</code>). The <code><slot /></code> is similar to React’s <code>children</code>.</p>
<p>However, my favorite Astro feature is probably <a href="https://docs.astro.build/en/guides/content-collections/">content collections</a>, which allows us to create collections of Markdown or JSON files, type frontmatter fields, and have an API to fetch documents for a collection. Have a look at <a href="https://gist.github.com/sapegin/675cbbc37ad37f2fcbab7f83ad8e3cb9">a comparison of rendering blog pages</a> in Gatsby and Astro.</p>
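<p>As an illustration (the collection name and frontmatter fields below are assumptions, not my actual schema), defining a typed collection looks roughly like this:</p>

```typescript
// src/content/config.ts
// A minimal content collection sketch; the collection name and
// frontmatter fields are assumptions, not this site's real schema
import { defineCollection, z } from 'astro:content';

const blog = defineCollection({
  schema: z.object({
    title: z.string(),
    date: z.date(),
    tags: z.array(z.string()).default([])
  })
});

export const collections = { blog };
```

<p>Pages can then fetch typed entries with <code>getCollection('blog')</code>, and frontmatter that doesn’t match the schema fails the build.</p>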
<h2>Vanilla-extract</h2>
<p>I couldn’t continue using <a href="https://styled-components.com/">styled-components</a> with Astro if I wanted to ship my site without the React runtime: we can’t use anything that relies on React Context, and styled-components relies on it for theming.</p>
<p><a href="https://vanilla-extract.style">Vanilla-extract</a> seems to be a popular choice and solves the problem. It allows one to write zero-runtime styles in JavaScript, supports theming, and has good TypeScript support.</p>
<p>With the <a href="https://vanilla-extract.style/documentation/packages/recipes/">Recipes package</a> we could create variants, and with the <a href="https://vanilla-extract.style/documentation/packages/sprinkles/">Sprinkles package</a> we could access design tokens and create responsive styles.</p>
<p>However, vanilla-extract comes with a lot of limitations:</p>
<ul>
<li>We need to write styles in a separate <code>*.css.ts</code> file.</li>
<li>We cannot export React components from <code>*.css.ts</code> files, only strings containing class names.</li>
<li>We need to write <code>className</code> all the time and use <a href="https://github.com/lukeed/clsx">clsx</a> to combine class names.</li>
<li>It’s possible to create primitive components but we could only use known prop values (for example, we could write <code><Flex alignItems="center"></code> but not <code><Flex maxWidth={640}></code> or <code><Grid gridTemplateColumns="auto 1fr auto"></code>).</li>
<li>Not enough reusable types, which leads to copy-pasting types from vanilla-extract.</li>
<li>Nonsensical limitations, like selectors that can only target one element and can’t use a global class name, which produce convoluted, unreadable, and hard-to-maintain styles in some cases.</li>
<li>Leaking abstractions: for example, one could use Sprinkles in local styles but not in global ones. I know why these limitations exist, but I need to know how the tool works internally to be able to use it, and think about it every time I write styles.</li>
</ul>
<p>Overall, it feels like a huge step back in time, some ten years or so. The developer experience feels similar to CSS Modules, though with better types.</p>
<p>I found that the colocation and component model of styled-components are easier to use and maintain. I prefer to keep styles in the same file as my components and access them as components instead of keeping styles in separate files and working with CSS class names.</p>
<p>Vanilla-extract may work for a simple static site, like a personal blog, but I wouldn’t recommend it for a large app with a big team.</p>
<p>Here’s what I’d write using styled-components:</p>
<pre><code>// Hola.tsx
import type { ReactNode } from 'react';
import styled from 'styled-components';
import { Box, Stack, Heading, IconCoffee } from '.';

type Props = {
  children: ReactNode;
};

const Name = styled.span({
  fontSize: 'clamp(2.6rem, 7vw, 4rem)',
  background: props =>
    `linear-gradient(${props.theme.colors.hover}, ${props.theme.colors.primary})`,
  WebkitBackgroundClip: 'text',
  WebkitTextFillColor: 'transparent'
});

export function Hola({ children }: Props) {
  return (
    <Heading level={1}>
      <Stack
        as="span"
        display="inline-flex"
        direction="row"
        gap="s"
        alignItems="baseline"
      >
        <Name>{children}</Name>
        <Box as="span" mt={-6}>
          <IconCoffee />
        </Box>
      </Stack>
    </Heading>
  );
}
</code></pre>
<p>And here’s what it looks like with vanilla-extract:</p>
<pre><code>// Hola.css.ts
import { style } from '@vanilla-extract/css';
import { vars } from '../styles/theme.css';

export const name = style({
  fontSize: 'clamp(2.6rem, 7vw, 4rem)',
  background: `linear-gradient(${vars.colors.hover}, ${vars.colors.primary})`,
  WebkitBackgroundClip: 'text',
  WebkitTextFillColor: 'transparent'
});

export const icon = style({
  marginTop: -6
});

// Hola.tsx
import type { ReactNode } from 'react';
import { Stack, Heading, IconCoffee } from '.';
import { name, icon } from './Hola.css';

type Props = {
  children: ReactNode;
};

export function Hola({ children }: Props) {
  return (
    <Heading level={1}>
      <Stack
        as="span"
        display="inline-flex"
        direction="row"
        gap="s"
        alignItems="baseline"
      >
        <span className={name}>{children}</span>
        <span>
          <IconCoffee className={icon} />
        </span>
      </Stack>
    </Heading>
  );
}
</code></pre>
<p>The only thing I like more in the vanilla-extract version is accessing design tokens (theme) using an import instead of a function. We could do the same with styled-components if we don’t need contextual styling (changing the theme for part of the app; for example, a sign-up form with a dark background).</p>
<p>I created <a href="https://github.com/sapegin/sapegin.me/tree/master/src/tamia">a light version of my React component library</a>, so I could create layouts without writing custom CSS:</p>
<pre><code>export function Menu({ current }: Props) {
  return (
    <Grid
      as="ul"
      columnGap="m"
      rowGap={{ mobile: 0, tablet: 'm' }}
      justifyItems="center"
      className={menu}
    >
      {ITEMS.map(({ title, href, alt }, index) => (
        <Fragment key={href}>
          {index === HALF && (
            <Box
              as="li"
              aria-hidden="true"
              display={{ mobile: 'none', tablet: 'block' }}
            />
          )}
          <Text as="li" variant="menu">
            <Link
              href={href}
              className={clsx(
                link,
                isCurrent(href, current) && active
              )}
              title={alt}
              aria-label={alt}
            >
              {title}
            </Link>
          </Text>
        </Fragment>
      ))}
    </Grid>
  );
}
</code></pre>
<p>Here, I’m using <code>Grid</code> and <code>Box</code> primitive components to create a responsive layout for a site menu.</p>
<p>And I think I changed my mind about responsive props, and now I prefer objects over arrays:</p>
<pre><code><Stack direction={{ mobile: 'column', tablet: 'row' }}>…</Stack>
</code></pre>
<p>instead of the array notation:</p>
<pre><code><Stack direction={['column', null, 'row']}>…</Stack>
</code></pre>
<p>Both require some learning and getting used to, but the object notation now feels more readable to me. Vanilla-extract supports both.</p>
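<p>Under the hood, notations like this boil down to mapping breakpoint names to media query widths. Here’s a hypothetical sketch of how a responsive prop object could be expanded; the breakpoint names and pixel widths are assumptions, not my library’s actual values:</p>

```javascript
// Hypothetical sketch: expand a responsive prop object into
// per-breakpoint values; breakpoint names and widths are assumptions
const BREAKPOINTS = { mobile: 0, tablet: 768, desktop: 1024 };

function expandResponsiveProp(prop, value) {
  // Plain values apply to all screen sizes
  if (typeof value !== 'object' || value === null) {
    return [{ minWidth: 0, [prop]: value }];
  }
  // Object values map breakpoint names to per-breakpoint values
  return Object.entries(value).map(([breakpoint, propValue]) => ({
    minWidth: BREAKPOINTS[breakpoint],
    [prop]: propValue
  }));
}

const rules = expandResponsiveProp('flexDirection', {
  mobile: 'column',
  tablet: 'row'
});
// → [
//   { minWidth: 0, flexDirection: 'column' },
//   { minWidth: 768, flexDirection: 'row' }
// ]
```

<p>Each resulting entry would then be wrapped in a <code>@media (min-width: …)</code> rule by the styling library.</p>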
<h2>Conclusion</h2>
<p>I’ll definitely use Astro again, and I’m going to rebuild at least <a href="/photos/">my photo gallery</a> and possibly <a href="https://tacohuaco.co/">our recipe site</a> from Gatsby to Astro.</p>
<p>I wish there was a better way to work with styles. Vanilla-extract does the job, but the developer experience is far from great. <a href="https://mastodon.cloud/@sapegin">Let me know</a> if I’m missing anything!</p>
<p>And have a look at <a href="https://github.com/sapegin/sapegin.me">the site’s source code</a> on GitHub.</p>
Washing your code: naming is hard
https://sapegin.me/blog/naming/
We all know that naming is one of the hardest problems in programming. Let’s look at many naming antipatterns, and how to fix them.
Thu, 22 Jun 2023 00:00:00 GMT
<p><!-- description: How clear names make it easier to understand the code, and how to improve naming in our code --></p>
<p>We all know that naming is one of the hardest problems in programming, and most of us have probably written code like this when we just started programming:</p>
<p><!-- prettier-ignore --></p>
<pre><code> // reading file signature
try
AssignFile(fp, path+sr.Name);
Reset(fp, 1);
if FileSize(fp) < sizeof(buf) then
continue
else
BlockRead(fp, buf, sizeof(buf));
CloseFile(fp);
except
on E : Exception do
begin
ShowError(E.Message+#13#10+'('+path+sr.Name+')');
continue;
end; // on
end; // try
// compare
for i:=0 to FormatsCnt do
begin
if AnsiStartsStr(Formats[i].Signature, buf) then
begin
// Check second signature
if (Formats[i].Signature2Offset>0) then
if Formats[i].Signature2Offset <> Pos(Formats[i].Signature2, buf) then
continue;
// Check extension
found := false;
ext := LowerCase(ExtractFileExt(sr.Name));
for j:=0 to High(Formats[i].Extensions) do
begin
if ext='.'+Formats[i].Extensions[j] then
begin
found := true;
break;
end; // if
end; // for j
if found then
break;
// ..
end;
end;
</code></pre>
<p>I wrote this code more than 20 years ago in Delphi, and, honestly, I don’t really remember what the app was supposed to do. It has it all: single-character names (<code>i</code>, <code>j</code>, <code>E</code>), abbreviations (<code>FormatsCnt</code>, <code>buf</code>), acronyms (<code>sr</code>, <code>fp</code>), and a mix of different naming conventions. It has some comments, though! (And I kept the original indentation for complete immersion.)</p>
<p>I once worked with a very seasoned developer who mostly used very short names and never wrote any comments or tests. Working with their code was like working with Assembler — it was very difficult. Often, we wasted days tracking and fixing bugs.</p>
<p>Let’s look at these (and many other) naming antipatterns and how to fix them.</p>
<h2>Name function parameters</h2>
<p>Function calls with multiple parameters can be hard to understand when there are too many of them or when some are optional. Consider this function call:</p>
<p><!--
let x, target, fixedRequest, ctx
const resolver = { doResolve: (...args) => x = args.length }
--></p>
<pre><code>resolver.doResolve(
  target,
  fixedRequest,
  null,
  ctx,
  (error, result) => {
    /* … */
  }
);
</code></pre>
<p><!-- expect(x).toBe(5) --></p>
<p>This <code>null</code> in the middle is grotesque — who knows what was supposed to be there and why it’s missing…</p>
<p>However, the worst programming pattern of all time is likely positional boolean function parameters:</p>
<p><!-- let x; const appendScriptTag = (a, b) => x=b --></p>
<pre><code>appendScriptTag(`https://example.com/falafel.js`, false);
</code></pre>
<p><!-- expect(x).toBe(false) --></p>
<p>What are we disabling here? It’s impossible to answer without reading the <code>appendScriptTag()</code> function’s code.</p>
<p>How many parameters are too many? In my experience, more than two parameters are already too many. Additionally, any boolean parameter is automatically too many.</p>
<p>Some languages have <em>named parameters</em> to solve these problems. For example, in Python we could write this:</p>
<pre><code>appendScriptTag('https://example.com/falafel.js', useCORS=False)
</code></pre>
<p>It’s obvious what the code above does. Names serve as inline documentation.</p>
<p>Unfortunately, JavaScript doesn’t support named parameters yet, but we can use an object instead:</p>
<p><!-- let x; const appendScriptTag = (a, b) => x = b.useCORS --></p>
<pre><code>appendScriptTag(`https://example.com/falafel.js`, {
  useCORS: false
});
</code></pre>
<p><!-- expect(x).toBe(false) --></p>
<p>The code is slightly more verbose than in Python, but it achieves the same outcome.</p>
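<p>A hypothetical implementation of <code>appendScriptTag()</code> with an options object could look like this; the <code>useCORS</code> option and its default value are assumptions for illustration:</p>

```javascript
// A hypothetical sketch of appendScriptTag() taking an options object;
// the useCORS option and its default value are assumptions
function appendScriptTag(src, { useCORS = true } = {}) {
  // In a browser, we'd create a real <script> element and append it
  // to the document; here we only return its attributes
  return {
    src,
    crossOrigin: useCORS ? 'anonymous' : undefined
  };
}

const tag = appendScriptTag('https://example.com/falafel.js', {
  useCORS: false
});
// → { src: 'https://example.com/falafel.js', crossOrigin: undefined }
```

<p>Destructuring with a default (<code>{ useCORS = true } = {}</code>) also makes the option safely omittable, which a positional boolean can’t do without passing <code>undefined</code>.</p>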
<h2>Name complex conditions</h2>
<p>Some conditions are short and obvious, while others are long and require deep code knowledge to understand.</p>
<p>Consider this code:</p>
<p><!-- let x; const useAuth = () => ({status: 'fetched', userDetails: {}}) --></p>
<pre><code>function Toggle() {
  const { userDetails, status } = useAuth();

  if (status === 'fetched' && Boolean(userDetails)) {
    return null;
  }

  /* … */
}
</code></pre>
<p><!-- expect(Toggle()).toBe(null) --></p>
<p>In the code above, it’s hard to understand why we’re shortcutting the component. However, if we give the condition a name:</p>
<p><!-- let x; const useAuth = () => ({status: 'fetched', userDetails: {}}) --></p>
<pre><code>function Toggle() {
  const { userDetails, status } = useAuth();

  const isUserLoggedIn = status === 'fetched' && Boolean(userDetails);
  if (isUserLoggedIn) {
    return null;
  }

  /* … */
}
</code></pre>
</code></pre>
<p><!-- expect(Toggle()).toBe(null) --></p>
<p>This makes the code clear and obvious: if we have user details after the data has been fetched, the user must be logged in.</p>
<h2>Negative booleans are not not hard to read</h2>
<p>Consider this example:</p>
<p><!-- let displayErrors = vi.fn() --></p>
<pre><code>function validateInputs(values) {
  let noErrorsFound = true;
  const errorMessages = [];

  if (!values.firstName) {
    errorMessages.push('First name is required');
    noErrorsFound = false;
  }
  if (!values.lastName) {
    errorMessages.push('Last name is required');
    noErrorsFound = false;
  }

  if (!noErrorsFound) {
    displayErrors(errorMessages);
  }

  return noErrorsFound;
}
</code></pre>
</code></pre>
<p><!--
expect(validateInputs({firstName: 'Chuck', lastName: 'Norris'})).toBe(true)
expect(displayErrors).not.toHaveBeenCalled()</p>
<p>expect(validateInputs({})).toBe(false)
expect(displayErrors).toHaveBeenCalledWith(['First name is required', 'Last name is required'])
--></p>
<p>I can say a lot about this code, but let’s focus on this line first:</p>
<p><!-- const noErrorsFound = false --></p>
<pre><code>if (!noErrorsFound) {
  // No errors were found
}
</code></pre>
<p><!-- expect($1).toBe(true) --></p>
<p>The double negation, “if not no errors found…”, makes my brain itch, and I almost want to take a red marker and start crossing out <code>!</code>s and <code>no</code>s on my screen to be able to read the code.</p>
<p>In most cases, we can significantly improve code readability by converting negative booleans to positive ones:</p>
<p><!-- let displayErrors = vi.fn() --></p>
<pre><code>function validateInputs(values) {
  let errorsFound = false;
  const errorMessages = [];

  if (!values.firstName) {
    errorMessages.push('First name is required');
    errorsFound = true;
  }
  if (!values.lastName) {
    errorMessages.push('Last name is required');
    errorsFound = true;
  }

  if (errorsFound) {
    displayErrors(errorMessages);
  }

  return !errorsFound;
}
</code></pre>
</code></pre>
<p><!--
expect(validateInputs({firstName: 'Chuck', lastName: 'Norris'})).toBe(true)
expect(displayErrors).not.toHaveBeenCalled()</p>
<p>expect(validateInputs({})).toBe(false)
expect(displayErrors).toHaveBeenCalledWith(['First name is required', 'Last name is required'])
--></p>
<p>Positive names and positive conditions are usually easier to read than negative ones.</p>
<p>By this time, we should notice that we don’t need the <code>errorsFound</code> variable at all: its value can be derived from the <code>errorMessages</code> array — <em>errors found</em> when we have any <em>error messages</em> to show:</p>
<p><!-- let displayErrors = vi.fn() --></p>
<pre><code>function validateInputs(values) {
  const errorMessages = [];

  if (!values.firstName) {
    errorMessages.push('First name is required');
  }
  if (!values.lastName) {
    errorMessages.push('Last name is required');
  }

  if (errorMessages.length > 0) {
    displayErrors(errorMessages);
    return false;
  } else {
    return true;
  }
}
</code></pre>
</code></pre>
<p><!--
expect(validateInputs({firstName: 'Chuck', lastName: 'Norris'})).toBe(true)
expect(displayErrors).not.toHaveBeenCalled()</p>
<p>expect(validateInputs({})).toBe(false)
expect(displayErrors).toHaveBeenCalledWith(['First name is required', 'Last name is required'])
--></p>
<p>Here’s another example:</p>
<p><!--
let store = {}
const bookID = 'book', data = [1]
const $ = (key) => ({
toggleClass: (cls, val) => { store[key] = {}, store[key][cls] = val },
attr: (attr, val) => { store[key] = {}, store[key][attr] = val }
})
--></p>
<pre><code>const noData = data.length === 0;
$(`#${bookID}_download`).toggleClass('hidden-node', noData);
$(`#${bookID}_retry`).attr('disabled', !noData);
</code></pre>
<p><!--
expect(store['#book_download']['hidden-node']).toBe(false)
expect(store['#book_retry']['disabled']).toBe(true)
--></p>
<p>Again, every time we read <code>noData</code> in the code, we need to mentally <em>unnegate</em> it to understand what’s really happening. And the negative <code>disabled</code> attribute with double negation (<code>!noData</code>) makes things even worse. Let’s fix it:</p>
<p><!--
let store = {}
const bookID = 'book', data = [1]
const $ = (key) => ({
toggleClass: (cls, val) => { store[key] = {}, store[key][cls] = val },
attr: (attr, val) => { store[key] = {}, store[key][attr] = val }
})
--></p>
<pre><code>const hasData = data.length > 0;
$(`#${bookID}_download`).toggleClass(
  'hidden-node',
  hasData === false
);
$(`#${bookID}_retry`).attr('disabled', hasData);
</code></pre>
<p><!--
expect(store['#book_download']['hidden-node']).toBe(false)
expect(store['#book_retry']['disabled']).toBe(true)
--></p>
<p>Now, it’s much easier to read.</p>
<p><strong>Info:</strong> We talk about names like <code>data</code> later in this chapter.</p>
<h2>The larger the scope, the longer the name</h2>
<p>My rule of thumb is that the shorter the scope of a variable, the shorter its name should be.</p>
<p>I generally avoid very short variable names, but I prefer them for one-liners. Consider this example:</p>
<p><!-- let BREAKPOINT_MOBILE = 480, BREAKPOINT_TABLET = 768, BREAKPOINT_DESKTOP = 1024 --></p>
<pre><code>const breakpoints = [
  BREAKPOINT_MOBILE,
  BREAKPOINT_TABLET,
  BREAKPOINT_DESKTOP
].map(x => `${x}px`);
// → ['480px', '768px', '1024px']
</code></pre>
<p><!--
expect(breakpoints).toEqual(['480px', '768px', '1024px'])
--></p>
<p>It’s clear what <code>x</code> is, and a longer name would bloat the code without making it more readable, and likely make it less so. We already have the full name in the parent function: we map over a list of breakpoints and convert numbers to strings. It also helps that here we only have a single variable, so any short name will be read as “whatever we map over.”</p>
<p>I usually use <code>x</code> in such cases. I think it’s clear enough that it’s a placeholder and not an acronym for a particular word, and it’s a common convention.</p>
<p>Some developers prefer <code>_</code>, and it’s a good choice for any programming language except JavaScript, where <code>_</code> is often used for the <a href="https://lodash.com/">Lodash</a> utility library.</p>
<p>Another convention I’m okay with is using <code>a</code>/<code>b</code> names for sorting and comparison functions:</p>
<p><!-- const dates = ['2022-02-26T00:21:00.000+01:00', '2021-05-11T10:30:00.000+01:00'] --></p>
<pre><code>const sortedDates = dates.toSorted(
  (a, b) => new Date(a).valueOf() - new Date(b).valueOf()
);
</code></pre>
<p><!-- expect(sortedDates).toEqual(['2021-05-11T10:30:00.000+01:00', '2022-02-26T00:21:00.000+01:00']) --></p>
<p>Loop indices <code>i</code>, <code>j</code>, and <code>k</code> are some of the most common variable names ever. They are moderately readable in short, non-nested loops, and only because programmers are so used to seeing them in the code:</p>
<p><!--
let calls = 0
const pizzaController = { one: {mockReset(){ calls++ }}, two: {mockReset(){ calls++ }} }
--></p>
<p><!-- eslint-disable unicorn/no-for-loop --></p>
<pre><code>const keys = Object.keys(pizzaController);
for (let i = 0; i < keys.length; i += 1) {
  pizzaController[keys[i]].mockReset();
}
</code></pre>
<p><!-- expect(calls).toBe(2) --></p>
<p><strong>Info:</strong> I used longer names for index variables, like <code>somethingIdx</code>, for a very long time. Surely, it’s way more readable than <code>i</code>, but, luckily, most modern languages allow us to iterate over things without coding artisan loops and without the need for an index variable. We talk more about this in the <a href="/blog/avoid-loops/">Avoid loops</a> chapter.</p>
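<p>For example, the loop above could iterate over the controller’s values directly, with no index variable at all (using a stand-in object with <code>mockReset()</code> methods in place of the real <code>pizzaController</code>):</p>

```javascript
// The same reset logic without an index variable, iterating over
// values directly; pizzaController here is a stand-in for the
// object in the example above
const pizzaController = {
  one: { resets: 0, mockReset() { this.resets += 1; } },
  two: { resets: 0, mockReset() { this.resets += 1; } }
};

for (const mock of Object.values(pizzaController)) {
  mock.mockReset();
}
```
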
<p>However, in nested loops, it’s difficult to understand which index belongs to which array:</p>
<p><!-- let console = { log: vi.fn() } --></p>
<p><!-- eslint-disable unicorn/no-for-loop --></p>
<pre><code>const array = [
  ['eins', 'zwei', 'drei'],
  ['uno', 'dos', 'tres']
];
for (let i = 0; i < array.length; i++) {
  for (let j = 0; j < array[i].length; j++) {
    console.log(array[i][j]);
  }
}
</code></pre>
<p><!-- expect(console.log.mock.calls).toEqual([
['eins'], ['zwei'], ['drei'], ['uno'], ['dos'], ['tres']
]) --></p>
<p>It’s difficult to understand what’s going on here because variables <code>i</code> and <code>j</code> have no meaning. It works for non-nested loops, where <code>i</code> means “whatever the array contains,” but for nested arrays and loops, it’s not clear enough.</p>
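<p>Rewriting the same loops with <code>for…of</code> and meaningful names removes the indices entirely, so there’s nothing left to confuse:</p>

```javascript
// The same nested iteration without index variables: the names
// tell us what each level of the array contains
const numberWords = [
  ['eins', 'zwei', 'drei'],
  ['uno', 'dos', 'tres']
];

const logged = [];
for (const language of numberWords) {
  for (const word of language) {
    logged.push(word);
  }
}
// → logged: ['eins', 'zwei', 'drei', 'uno', 'dos', 'tres']
```
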
<p>In the end, <code>x</code>, <code>a</code>, <code>b</code>, and <code>i</code> are pretty much all single-character names I ever use.</p>
<p>However, when the scope is longer or when we have multiple variables, short names can be confusing:</p>
<p><!--
let result = [
{edit: { range: [5, 10]}},
{edit: { range: [3, 4]}},
{edit: { range: [12, 20]}},
{edit: { range: [7, 7]}},
{edit: { range: [5, 6]}},
{edit: { range: [12, 12]}},
{edit: { range: [19, 19]}},
{edit: { range: [5, 12]}},
{edit: { range: [3, 3]}},
]
--></p>
<pre><code>result.sort((a, b) => {
  const d0 = a.edit.range[0] - b.edit.range[0];
  if (d0 !== 0) {
    return d0;
  }

  // Both edits have now the same start offset.

  // Length of a and length of b
  const al = a.edit.range[1] - a.edit.range[0];
  const bl = b.edit.range[1] - b.edit.range[0];

  // Both has the same start offset and length.
  if (al === bl) {
    return 0;
  }
  if (al === 0) {
    return -1;
  }
  if (bl === 0) {
    return 1;
  }
  return al - bl;
});
</code></pre>
<p><!--
expect(result).toEqual([
{edit: { range: [3, 3]}},
{edit: { range: [3, 4]}},
{edit: { range: [5, 6]}},
{edit: { range: [5, 10]}},
{edit: { range: [5, 12]}},
{edit: { range: [7, 7]}},
{edit: { range: [12, 12]}},
{edit: { range: [12, 20]}},
{edit: { range: [19, 19]}},
])
--></p>
<p>In the code above, <code>a</code> and <code>b</code> are okay (we talked about them earlier), but <code>d0</code>, <code>al</code>, and <code>bl</code> make this code more complex than it should be.</p>
<p>Let’s try to improve it a bit:</p>
<p><!--
let result = [
{edit: { range: [5, 10]}},
{edit: { range: [3, 4]}},
{edit: { range: [12, 20]}},
{edit: { range: [7, 7]}},
{edit: { range: [5, 6]}},
{edit: { range: [12, 12]}},
{edit: { range: [19, 19]}},
{edit: { range: [5, 12]}},
{edit: { range: [3, 3]}},
]
--></p>
<pre><code>result.sort((a, b) => {
  const startDifference = a.edit.range[0] - b.edit.range[0];

  // If start offsets are different, sort by the start offset
  if (startDifference !== 0) {
    return startDifference;
  }

  // Otherwise, sort by the range length
  const lengthA = a.edit.range[1] - a.edit.range[0];
  const lengthB = b.edit.range[1] - b.edit.range[0];
  return lengthA - lengthB;
});
</code></pre>
<p><!--
expect(result).toEqual([
{edit: { range: [3, 3]}},
{edit: { range: [3, 4]}},
{edit: { range: [5, 6]}},
{edit: { range: [5, 10]}},
{edit: { range: [5, 12]}},
{edit: { range: [7, 7]}},
{edit: { range: [12, 12]}},
{edit: { range: [12, 20]}},
{edit: { range: [19, 19]}},
])
--></p>
<p>Now, it’s clearer what the code is doing, and the comments explain the high-level idea instead of repeating the code.</p>
<p>On the other hand, long names in a short scope make the code cumbersome:</p>
<p><!-- const purchaseOrders = [{poNumber: 11}, {poNumber: 22}], purchaseOrderData = {poNumber: 22} --></p>
<pre><code>const index = purchaseOrders.findIndex(
  purchaseOrder =>
    purchaseOrder.poNumber === purchaseOrderData.poNumber
);
</code></pre>
<p><!-- expect(index).toBe(1) --></p>
<p>We can shorten the names to make the code more readable:</p>
<p><!-- const purchaseOrders = [{poNumber: 11}, {poNumber: 22}], purchaseOrder = {poNumber: 22} --></p>
<pre><code>const index = purchaseOrders.findIndex(
  po => po.poNumber === purchaseOrder.poNumber
);
</code></pre>
<p><!-- expect(index).toBe(1) --></p>
<h2>The shorter the scope, the better</h2>
<p>We talked about the scope in the previous section. A variable’s scope size affects readability too. The shorter the scope, the easier it is to keep track of a variable.</p>
<p>The extreme cases would be:</p>
<ul>
<li>One-liner functions, where the scope of a variable is a single line: easy to follow (example: <code>[8, 16].map(x => x + 'px')</code>).</li>
<li>Global variables, whose scope is infinite: a variable can be used or modified anywhere in the project, and there’s no way to know which value it holds at any given moment, which often leads to bugs. That’s why many developers have been <a href="https://wiki.c2.com/?GlobalVariablesAreBad">advocating against global variables</a> for decades.</li>
</ul>
<p>Usually, the shorter the scope, the better. However, extreme scope shortening has the same issues as splitting code into many teeny-tiny functions: it’s easy to overdo it and hurt readability.</p>
<p><strong>Info:</strong> We talk about splitting code into functions in the <a href="/blog/divide/">Divide and conquer, or merge and relax</a> chapter.</p>
<p>I found that <em>reducing the lifespan of variables</em> works as well and doesn’t produce lots of tiny functions. The idea here is to reduce the number of lines between the variable declaration and the line where the variable is accessed for the last time. A variable’s <em>scope</em> might be a whole 200-line function, but if the lifespan of a particular variable is three lines, then we only need to look at these three lines to understand how this variable is used.</p>
<p><!-- const MAX_RELATED = 3 --></p>
<pre><code>function getRelatedPosts(
  posts: {
    slug: string;
    tags: string[];
    timestamp: string;
  }[],
  { slug, tags }: { slug: string; tags: string[] }
) {
  const weighted = posts
    .filter(post => post.slug !== slug)
    .map(post => {
      const common = (post.tags || []).filter(t =>
        (tags || []).includes(t)
      );
      return {
        ...post,
        weight: common.length * Number(post.timestamp)
      };
    })
    .filter(post => post.weight > 0);

  const sorted = weighted.toSorted((a, b) => b.weight - a.weight);
  return sorted.slice(0, MAX_RELATED);
}
</code></pre>
</code></pre>
<p><!--
const posts = [{slug: 'a', tags: ['cooking'], timestamp: 111}, {slug: 'b', tags: ['cooking', 'sleeping'], timestamp: 222}, {slug: 'c', tags: ['cooking', 'tacos'], timestamp: 333}]
expect(getRelatedPosts(posts, {slug: 'd', tags: ['cooking', 'tacos'], timestamp: 444})).toEqual([{slug: 'c', tags: ['cooking', 'tacos'], timestamp: 333, weight: 666}, {slug: 'b', tags: ['cooking', 'sleeping'], timestamp: 222, weight: 222}, {slug: 'a', tags: ['cooking'], timestamp: 111, weight: 111}])
--></p>
<p>In the code above, the lifespan of the <code>sorted</code> variable is only two lines. This kind of sequential processing is a common use case for this technique.</p>
<p><strong>Tip:</strong> Double-click on a variable name to select all its appearances in the code. This helps to quickly see the variable’s lifespan.</p>
<p><strong>Info:</strong> See a larger example in the <a href="/blog/avoid-reassigning-variables/#avoid-pascal-style-variables">Avoid Pascal-style variables</a> section in the <em>Avoid reassigning variables</em> chapter.</p>
<h2>Making magic numbers less magic</h2>
<p>Magic numbers are any numbers that might be unclear to the code reader. Consider this example:</p>
<pre><code>const getHoursSinceLastChange = timestamp =>
  Math.round(timestamp / 3600);
</code></pre>
<p><!-- expect(getHoursSinceLastChange(36000)).toBe(10) --></p>
<p>A seasoned developer would likely guess that 3600 is the number of seconds in an hour, but the meaning of this number matters more for understanding the code than its exact value. We can make the meaning clearer by moving the magic number into a constant:</p>
<pre><code>const SECONDS_IN_AN_HOUR = 60 * 60;

const getHoursSinceLastChange = timestamp =>
  Math.round(timestamp / SECONDS_IN_AN_HOUR);
</code></pre>
<p><!-- expect(getHoursSinceLastChange(36000)).toBe(10) --></p>
<p>I also like to include a unit in a name if it’s not obvious otherwise:</p>
<pre><code>const FADE_TIMEOUT_MS = 2000;
</code></pre>
<p><!-- expect(FADE_TIMEOUT_MS).toBe(2000) --></p>
<p>A perfect example where constants make code more readable is days of the week:</p>
<p><!--
const Calendar = props => <div>{props.disabledDaysOfWeek.join(':')}</div>;
const Test = () => (
--></p>
<pre><code><Calendar disabledDaysOfWeek={[1, 6]} />
</code></pre>
<p><!--
)
const {container: c1} = RTL.render(<Test />);
expect(c1.textContent).toEqual('1:6')
--></p>
<p>Is 6 a Saturday, Sunday, or Monday? Are we counting days from 0 or 1? Does the week start on Monday or Sunday?</p>
<p>Defining constants for these values makes it clear:</p>
<pre><code>const WEEKDAY_MONDAY = 1;
const WEEKDAY_SATURDAY = 6;
</code></pre>
<p><!-- --></p>
<p><!--
const WEEKDAY_MONDAY = 1;
const WEEKDAY_SATURDAY = 6;
const Calendar = props => <div>{props.disabledDaysOfWeek.join(':')}</div>;
const Test = () => (
--></p>
<pre><code><Calendar disabledDaysOfWeek={[WEEKDAY_MONDAY, WEEKDAY_SATURDAY]} />
</code></pre>
<p><!--
)
const {container: c1} = RTL.render(<Test />);
expect(c1.textContent).toEqual('1:6')
--></p>
<p>Another common use case for magic numbers, which is somehow widely accepted, is HTTP status codes:</p>
<pre><code>function getErrorMessage(error) {
  if (error.response?.status === 404) {
    return 'Not found';
  }
  if (error.response?.status === 429) {
    return 'Rate limit exceeded';
  }
  return 'Something went wrong';
}
</code></pre>
<p><!--
expect(getErrorMessage({ response: { status: 404 } })).toBe('Not found')
expect(getErrorMessage({ response: { status: 429 } })).toBe('Rate limit exceeded')
expect(getErrorMessage({ response: { status: 500 } })).toBe('Something went wrong')
--></p>
<p>I know what the <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/404">404</a> status is, but who remembers what the <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/429">429</a> status means?</p>
<p>Let’s replace the magic numbers with constants:</p>
<pre><code>const STATUS_NOT_FOUND = 404;
const STATUS_TOO_MANY_REQUESTS = 429;

function getErrorMessage(error) {
  if (error.response?.status === STATUS_NOT_FOUND) {
    return 'Not found';
  }
  if (error.response?.status === STATUS_TOO_MANY_REQUESTS) {
    return 'Rate limit exceeded';
  }
  return 'Something went wrong';
}
</code></pre>
<p><!--
expect(getErrorMessage({ response: { status: 404 } })).toBe('Not found')
expect(getErrorMessage({ response: { status: 429 } })).toBe('Rate limit exceeded')
expect(getErrorMessage({ response: { status: 500 } })).toBe('Something went wrong')
--></p>
<p>Now, it’s clear which status codes we’re handling.</p>
<p>Personally, I’d use a library like <a href="https://github.com/prettymuchbryce/http-status-codes">http-status-codes</a> here if I needed to work with status codes often or use not-so-common codes:</p>
<pre><code>import { StatusCodes } from 'http-status-codes';

function getErrorMessage(error) {
  if (error.response?.status === StatusCodes.NOT_FOUND) {
    return 'Not found';
  }
  if (error.response?.status === StatusCodes.TOO_MANY_REQUESTS) {
    return 'Rate limit exceeded';
  }
  return 'Something went wrong';
}
</code></pre>
<p><!--
expect(getErrorMessage({ response: { status: 404 } })).toBe('Not found')
expect(getErrorMessage({ response: { status: 429 } })).toBe('Rate limit exceeded')
expect(getErrorMessage({ response: { status: 500 } })).toBe('Something went wrong')
--></p>
<p>However, having a clear name is sometimes not enough:</p>
<pre><code>const date = '2023-03-22T08:20:00+01:00';
const CHARACTERS_IN_ISO_DATE = 10;
const dateWithoutTime = date.slice(0, CHARACTERS_IN_ISO_DATE);
// → '2023-03-22'
</code></pre>
<p><!-- expect(dateWithoutTime).toBe('2023-03-22') --></p>
<p>In the code above, we remove the time portion of a string containing a date and time in the ISO format (for example, <code>2023-03-22T08:20:00+01:00</code>) by keeping only the first ten characters — the length of the date part. The name is quite clear, but the code is still a bit confusing and brittle. We can do better:</p>
<pre><code>const date = '2023-03-22T08:20:00+01:00';
const DATE_FORMAT_ISO = 'YYYY-MM-DD';
const dateWithoutTime = date.slice(0, DATE_FORMAT_ISO.length);
// → '2023-03-22'
</code></pre>
<p><!-- expect(dateWithoutTime).toBe('2023-03-22') --></p>
<p>Now, it’s easier to visualize what the code does, and we don’t need to count characters manually to be sure that The Very Magic number 10 is correct.</p>
<p>Code reuse is another good reason to introduce constants. However, we need to wait for the moment when the code is actually reused.</p>
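<p>As a sketch of this idea (the names here are hypothetical), a value that appears in several places is a good candidate for a constant, since a single definition keeps all usages in sync:</p>

```typescript
// A limit reused in both the validation and the error message: a single
// constant keeps the two in sync when the limit changes
const MAX_USERNAME_LENGTH = 25;

function isUsernameValid(username: string) {
  return username.length <= MAX_USERNAME_LENGTH;
}

function getUsernameError(username: string) {
  return isUsernameValid(username)
    ? undefined
    : `Username must be at most ${MAX_USERNAME_LENGTH} characters`;
}
```

<p>If the limit ever changes, we update one line instead of hunting for every hardcoded 25 in the codebase.</p>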
<h2>Not all numbers are magic</h2>
<p>Sometimes, programmers replace absolutely all literal values with constants, ideally stored in a separate module:</p>
<pre><code>const ID_COLUMN_WIDTH = 40;
const TITLE_COLUMN_WIDTH = 120;
const TYPE_COLUMN_WIDTH = 60;
const DATE_ADDED_COLUMN_WIDTH = 50;
const CITY_COLUMN_WIDTH = 80;
const COUNTRY_COLUMN_WIDTH = 90;
const USER_COLUMN_WIDTH = 70;
const STATUS_COLUMN_WIDTH = 50;
const columns = [
{
header: 'ID',
accessor: 'id',
width: ID_COLUMN_WIDTH
}
// …
];
</code></pre>
<p><!-- expect(columns[0].width).toBe(40) --></p>
<p>However, not every value is magic; some values are just values. Here, it’s clear that the value is the width of the ID column, and a constant doesn’t add any information that’s not in the code already. Instead, it makes the code harder to read: we need to go to the constant definition to see the actual value.</p>
<p>Often, code reads perfectly even without constants:</p>
<p><!--
const Modal = (props) => <div>{props.title}:{props.minWidth}</div>;
const Test = () => (
--></p>
<pre><code><Modal title="Out of cheese error" minWidth="50vw" />
</code></pre>
<p><!--
)
const {container: c1} = RTL.render(<Test />);
expect(c1.textContent).toEqual('Out of cheese error:50vw')
--></p>
<p>In the code above, it’s clear that the minimum width of a modal is 50vw. Adding a constant won’t make this code any clearer:</p>
<pre><code>const MODAL_MIN_WIDTH = '50vw';
</code></pre>
<p><!-- --></p>
<p><!--
const MODAL_MIN_WIDTH = '50vw';
const Modal = (props) => <div>{props.title}:{props.minWidth}</div>;
const Test = () => (
--></p>
<pre><code><Modal title="Out of cheese error" minWidth={MODAL_MIN_WIDTH} />
</code></pre>
<p><!--
)
const {container: c1} = RTL.render(<Test />);
expect(c1.textContent).toEqual('Out of cheese error:50vw')
--></p>
<p>I avoid such constants unless the values are reused.</p>
<p>Sometimes, such constants are misleading:</p>
<pre><code>const ID_COLUMN_WIDTH = 40;
const columns = [
{
header: 'ID',
accessor: 'id',
minWidth: ID_COLUMN_WIDTH
}
];
</code></pre>
<p><!-- expect(columns[0].minWidth).toBe(40) --></p>
<p>The <code>ID_COLUMN_WIDTH</code> name is imprecise: it says that the value is the <em>width</em>, but it’s the <em>minimum width</em>.</p>
<p>Often, <em>zeroes</em> and <em>ones</em> aren’t magic, and code is easier to understand when we use <code>0</code> and <code>1</code> directly instead of constants with inevitably awkward names:</p>
<p><!--
const addDays = (x, y) => x + y * 10
const addSeconds = (x, y) => x + y
const startOfDay = x => x - 0.1
--></p>
<pre><code>const DAYS_TO_ADD_IN_TO_FIELD = 1;
const SECONDS_TO_REMOVE_IN_TO_FIELD = -1;
function getEndOfDayFromDate(date) {
const nextDay = addDays(startOfDay(date), DAYS_TO_ADD_IN_TO_FIELD);
return addSeconds(nextDay, SECONDS_TO_REMOVE_IN_TO_FIELD);
}
</code></pre>
<p><!-- expect(getEndOfDayFromDate(10)).toBe(18.9) --></p>
<p>This function returns the last second of a given date. Here, 1 and -1 really mean <em>next</em> and <em>previous</em>. They are also an essential part of the algorithm, not a configuration. It doesn’t make sense to change 1 to 2 because it will break the function. Constants make the code longer and don’t help us understand it. Let’s remove them:</p>
<p><!--
const addDays = (x, y) => x + y * 10
const addSeconds = (x, y) => x + y
const startOfDay = x => x - 0.1
--></p>
<pre><code>function getEndOfDayFromDate(date) {
const nextDay = addDays(startOfDay(date), 1);
return addSeconds(nextDay, -1);
}
</code></pre>
<p><!-- expect(getEndOfDayFromDate(10)).toBe(18.9) --></p>
<p>Now, the code is short and clear, with enough information to understand it.</p>
<h2>Group related constants</h2>
<p>We often use constants for ranges of values:</p>
<pre><code>const SMALL = 'small';
const MEDIUM = 'medium';
</code></pre>
<p>These constants are related: they define different values of the same scale (the size of something) and are likely to be used interchangeably. However, it’s not clear from the names that they are related. We could add a suffix:</p>
<pre><code>const SMALL_SIZE = 'small';
const MEDIUM_SIZE = 'medium';
</code></pre>
<p>Now, it’s clear that these values are related, thanks to the <code>_SIZE</code> suffix. But we can do better:</p>
<pre><code>const SIZE_SMALL = 'small';
const SIZE_MEDIUM = 'medium';
</code></pre>
<p>The common part of the names, the <code>SIZE_</code> prefix, is aligned, making it easier to notice related constants in the code.</p>
<p>Another option is to use an object:</p>
<pre><code>const Size = {
Small: 'small',
Medium: 'medium'
};
</code></pre>
<p>It has some additional benefits over separate constants:</p>
<ul>
<li>We only need to import it once (<code>import { Size } from '...'</code> instead of <code>import { SIZE_SMALL, SIZE_MEDIUM } from '...'</code>).</li>
<li>Better autocomplete after typing <code>Size.</code></li>
</ul>
<p>However, my favorite approach is to use a TypeScript enum:</p>
<pre><code>enum Size {
Small = 'small',
Medium = 'medium'
}
</code></pre>
<p><strong>Tip:</strong> Usually, enum names are singular nouns in PascalCase, like <code>Month</code>, <code>Color</code>, <code>OrderStatus</code>, or <code>ProductType</code>.</p>
<p>An enum is essentially the same as an object, but we can also use it as a type:</p>
<pre><code>interface ButtonProps {
size: Size;
}
</code></pre>
<p>This gives us better type checking and even better autocomplete. For example, we can define separate types for button sizes and modal sizes, so the button component will only accept valid button sizes.</p>
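<p>Here’s a hypothetical sketch of that benefit (the enum and component names are made up): separate enums prevent passing a modal size where a button size is expected, even when the underlying string values overlap:</p>

```typescript
enum ButtonSize {
  Small = 'small',
  Medium = 'medium'
}

enum ModalSize {
  Medium = 'medium',
  Large = 'large'
}

interface ButtonProps {
  size: ButtonSize;
}

function getButtonClass({ size }: ButtonProps) {
  return `button--${size}`;
}

getButtonClass({ size: ButtonSize.Small }); // → 'button--small'

// @ts-expect-error: a modal size isn’t a valid button size,
// even though ModalSize.Medium has the same string value
getButtonClass({ size: ModalSize.Large });
```

<p>TypeScript enums are compared by their declared type, not by their string values, which is what makes this check possible.</p>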
<h2>Abbreviations and acronyms</h2>
<p>The road to hell is paved with abbreviations. What do you think OTC, RN, PSP, or SDL mean? I also don’t know, and these are just from one project. That’s why I try to avoid abbreviations almost everywhere, not just in code.</p>
<p>There’s a <a href="https://www.nccmerp.org/recommendations-enhance-accuracy-prescription-writing">list of dangerous abbreviations</a> for doctors prescribing medicine. We should have the same for IT professionals.</p>
<p>I’d even go further and create a list of <em>approved</em> abbreviations. I could only find one example of such a list — <a href="https://developer.apple.com/library/archive/documentation/Cocoa/Conceptual/CodingGuidelines/Articles/APIAbbreviations.html">from Apple</a> — and I think it could be a great start.</p>
<p>Common abbreviations are okay; we don’t even think of most of them as abbreviations:</p>
<table>
<thead>
<tr>
<th>Abbreviation</th>
<th>Full term</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>alt</code></td>
<td>alternative</td>
</tr>
<tr>
<td><code>app</code></td>
<td>application</td>
</tr>
<tr>
<td><code>arg</code></td>
<td>argument</td>
</tr>
<tr>
<td><code>err</code></td>
<td>error</td>
</tr>
<tr>
<td><code>info</code></td>
<td>information</td>
</tr>
<tr>
<td><code>init</code></td>
<td>initialize</td>
</tr>
<tr>
<td><code>max</code></td>
<td>maximum</td>
</tr>
<tr>
<td><code>min</code></td>
<td>minimum</td>
</tr>
<tr>
<td><code>param</code></td>
<td>parameter</td>
</tr>
<tr>
<td><code>prev</code></td>
<td>previous (especially when paired with <code>next</code>)</td>
</tr>
</tbody>
</table>
<p>As well as common acronyms, such as:</p>
<ul>
<li>HTML;</li>
<li>HTTP;</li>
<li>JSON;</li>
<li>PDF;</li>
<li>RGB;</li>
<li>URL.</li>
</ul>
<p>And possibly a few very common ones used on a project, but they still should be documented (new team members will be very thankful for that!) and shouldn’t be ambiguous.</p>
<h2>Prefixes and suffixes</h2>
<p>I like to use the following prefixes for function names:</p>
<ul>
<li><code>get</code>: returns a value (example: <code>getPageTitle</code>).</li>
<li><code>set</code>: stores a value or updates React state (example: <code>setProducts</code>).</li>
<li><code>fetch</code>: fetches data from the backend (example: <code>fetchMessages</code>).</li>
<li><code>reset</code>: resets something to its initial state (example: <code>resetForm</code>).</li>
<li><code>remove</code>: removes something from somewhere (example: <code>removeFilter</code>).</li>
<li><code>to</code>: converts the data to a certain type (examples: <code>toString</code>, <code>hexToRgb</code>, <code>urlToSlug</code>).</li>
<li><code>on</code> and <code>handle</code> for event handlers (examples: <code>onClick</code>, <code>handleSubmit</code>).</li>
</ul>
<p><strong>Info:</strong> Verb prefixes are also called <em>actions</em> in the A/HC/LC pattern. See more in the <a href="/blog/naming/#use-the-ahclc-pattern">Use the A/HC/LC pattern</a> section later in this chapter.</p>
<p>And the following prefixes for boolean variables or functions that return a boolean value:</p>
<ul>
<li><code>is</code>, <code>are</code>, <code>has</code>, or <code>should</code> (examples: <code>isPhoneNumberValid</code>, <code>hasCancelableTickets</code>).</li>
</ul>
<p>These conventions make code easier to read and distinguish functions that return values from those with side effects.</p>
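<p>Applied together (all names below are hypothetical), these prefixes make a small module’s API read almost like a sentence:</p>

```typescript
// A hypothetical shopping cart module that follows the prefix conventions
type CartItem = {
  id: number;
  price: number;
};

type Cart = {
  items: CartItem[];
};

// `get`: returns a value
function getCartTotal(cart: Cart) {
  return cart.items.reduce((total, item) => total + item.price, 0);
}

// `is`: returns a boolean
function isCartEmpty(cart: Cart) {
  return cart.items.length === 0;
}

// `remove`: removes something from somewhere
function removeItem(cart: Cart, id: number): Cart {
  return { items: cart.items.filter(item => item.id !== id) };
}

// `reset`: resets something to its initial state
function resetCart(): Cart {
  return { items: [] };
}
```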
<p><strong>Tip:</strong> Don’t combine <code>get</code> with other prefixes: I often see names like <code>getIsCompaniesFilterDisabled</code> or <code>getShouldShowPasswordHint</code>, which should be just <code>isCompaniesFilterDisabled</code> or <code>shouldShowPasswordHint</code>, or even better <code>isCompaniesFilterEnabled</code>. On the other hand, <code>setIsVisible</code> is perfectly fine when paired with <code>isVisible</code>.</p>
<p>I also make an exception for React components, where I prefer to skip the <code>is</code> prefix, similar to HTML properties like <code><button disabled></code>:</p>
<p><!-- const ButtonStyled = ({children}) => children --></p>
<pre><code>function PayButton({ loading, onClick, id, disabled }) {
return (
<ButtonStyled
id={id}
onClick={onClick}
loading={loading}
disabled={disabled}
>
Pay now!
</ButtonStyled>
);
}
</code></pre>
<p><!--
const {container: c1} = RTL.render(<PayButton />);
expect(c1.textContent).toEqual('Pay now!')
--></p>
<p>And I wouldn’t use <code>get</code> <a href="https://www.nikolaposa.in.rs/blog/2019/01/06/better-naming-convention/">for class property accessors</a> (even read-only):</p>
<pre><code>class User {
#firstName;
#lastName;
constructor(firstName, lastName) {
this.#firstName = firstName;
this.#lastName = lastName;
}
get fullName() {
return [this.#firstName, this.#lastName].join(' ');
}
}
</code></pre>
<p><!--
const user = new User('Chuck', 'Norris')
expect(user.fullName).toBe('Chuck Norris')
--></p>
<p>In general, I don’t like to remember too many rules, and any convention can go too far. A good example, and fortunately almost forgotten, is <a href="https://en.wikipedia.org/wiki/Hungarian_notation">Hungarian notation</a>, where each name is prefixed with its type, or with its intention or kind. For example, <code>lAccountNum</code> (long integer), <!-- cspell:disable --><code>arru8NumberList</code><!-- cspell:enable --> (array of unsigned 8-bit integers), <code>usName</code> (unsafe string).</p>
<p>Hungarian notation made sense for old untyped languages like C, but with modern typed languages and IDEs that show types when you hover over the name, it clutters the code and makes reading each name harder. So, keep it simple.</p>
<p>One of the examples of Hungarian notation in the modern frontend is prefixing TypeScript interfaces with <code>I</code>:</p>
<pre><code>interface ICoordinates {
lat: number;
lon: number;
}
</code></pre>
<p>Luckily, most TypeScript developers prefer to drop it these days:</p>
<pre><code>interface Coordinates {
lat: number;
lon: number;
}
</code></pre>
<p>I would generally avoid repeating information in the name that’s already accessible in its type, class name, or namespace.</p>
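<p>For example (a hypothetical sketch), the type annotation already tells us we’re dealing with an array of <code>User</code> objects, so encoding that in the name is redundant:</p>

```typescript
type User = {
  name: string;
};

// Names like `userObjectArray` or `userList` repeat what the `User[]`
// type annotation already says; a plain plural noun is enough
const users: User[] = [{ name: 'Chuck' }, { name: 'Beans' }];
```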
<p><strong>Info:</strong> We talk more about conventions in the Code style chapter.</p>
<h2>Next and previous values</h2>
<p>Often, we need to create a new value based on the previous value of a certain variable or object.</p>
<p>Consider this example:</p>
<p><!--
let count_
let useState = (initialValue) => {
count_ = initialValue
return [initialValue, fn => count_ = fn(count_)]
}
--></p>
<pre><code>const [count, setCount] = useState(0);
setCount(prevCount => prevCount + 1);
</code></pre>
<p><!-- expect(count_).toBe(1) --></p>
<p>In the code above, we have a basic counter function that returns the next counter value. The <code>prev</code> prefix makes it clear that this value is out of date.</p>
<p>Similarly, when we need to store the new value in a variable, we can use the <code>next</code> prefix:</p>
<p><!--
let window = { location: { href: 'http://example.com/?tacos=many' } }
let history = { replaceState: (x, y, z) => window.location.href = z }
--></p>
<pre><code>function updateUrlState(name, action) {
const url = new URL(window?.location.href);
const value = url.searchParams.get(name);
const nextValue = _.isFunction(action) ? action(value) : action;
url.searchParams.set(name, String(nextValue));
const nextUrl = url.toString();
history.replaceState(null, '', nextUrl);
}
</code></pre>
<p><!--
updateUrlState('tacos', 'lots')
expect(window.location.href).toBe('http://example.com/?tacos=lots')
--></p>
<p>Both conventions are widely used by React developers.</p>
<h2>Beware of incorrect names</h2>
<p><em>Incorrect</em> names are worse than magic numbers. With magic numbers, there’s a possibility of making a correct guess, but with incorrect names, we have no chance of understanding the code.</p>
<p>Consider this example:</p>
<p><!-- eslint-disable unicorn/numeric-separators-style --></p>
<pre><code>// Constant used to correct a Date object's time to reflect
// a UTC timezone
const TIMEZONE_CORRECTION = 60000;
const getUTCDateTime = datetime =>
new Date(
datetime.getTime() -
datetime.getTimezoneOffset() * TIMEZONE_CORRECTION
);
</code></pre>
<p><!-- expect(getUTCDateTime({ getTime: () => 1686815699187, getTimezoneOffset: () => -120 }).toISOString()).toBe('2023-06-15T09:54:59.187Z') --></p>
<p>Even a comment doesn’t help us understand what this code does.</p>
<p>What’s actually happening here is that the <code>getTime()</code> method returns milliseconds, while <code>getTimezoneOffset()</code> returns minutes, so we need to convert minutes to milliseconds by multiplying them by the number of milliseconds in one minute, and 60000 is exactly that number.</p>
<p>Let’s correct the name:</p>
<pre><code>const MILLISECONDS_IN_MINUTE = 60_000;
const getUTCDateTime = datetime =>
new Date(
datetime.getTime() -
datetime.getTimezoneOffset() * MILLISECONDS_IN_MINUTE
);
</code></pre>
<p><!-- expect(getUTCDateTime({ getTime: () => 1686815699187, getTimezoneOffset: () => -120 }).toISOString()).toBe('2023-06-15T09:54:59.187Z') --></p>
<p>Now, it’s much easier to understand the code.</p>
<p><strong>Info:</strong> Underscores (<code>_</code>) as separators for numbers were introduced in ECMAScript 2021 and make long numbers easier to read: <code>60_000</code> instead of <code>60000</code>.</p>
<p>Types often make incorrect names more noticeable:</p>
<pre><code>type Order = {
id: number;
title: string;
};
type State = {
filteredOrder: Order[];
selectedOrder: number[];
};
</code></pre>
<p>By looking at the types, it’s clear that both names should be plural (they contain arrays), and the <code>selectedOrder</code> only contains order IDs, not whole order objects:</p>
<p><!-- type Order = { id: number, title: string } --></p>
<pre><code>type State = {
filteredOrders: Order[];
selectedOrderIds: number[];
};
</code></pre>
<p>We often change the logic but forget to update the names to reflect that. This makes understanding the code much harder and can lead to bugs when we later change the code and make incorrect assumptions based on incorrect names.</p>
<h2>Beware of abstract and imprecise names</h2>
<p><em>Abstract</em> and <em>imprecise</em> names are less dangerous than incorrect names. However, they are unhelpful and make the code harder to understand.</p>
<p><strong>Abstract names</strong> are too generic to give any useful information about the value they hold:</p>
<ul>
<li><code>data</code>;</li>
<li><code>list</code>;</li>
<li><code>array</code>;</li>
<li><code>object</code>.</li>
</ul>
<p>The problem with such names is that any variable contains <em>data</em>, and any array is a <em>list</em> of something. These names don’t say what kind of data it is or what kind of things are in the list. Essentially, such names aren’t better than <code>x</code>/<code>y</code>/<code>z</code>, <code>foo</code>/<code>bar</code>/<code>baz</code>, <code>New Folder 39</code>, or <code>Untitled 47</code>.</p>
<p>Consider this example:</p>
<p><!--
import { Record } from 'immutable'
const UPDATE_RESULTS = 'ur', UPDATE_CART = 'uc'
const Currency = Record({
iso: '',
name: '',
symbol: '',
})
--></p>
<p><!-- eslint-skip --></p>
<pre><code>function currencyReducer(state = new Currency(), action) {
switch (action.type) {
case UPDATE_RESULTS:
case UPDATE_CART:
if (!action.res.data.query) {
return state;
}
const iso = _.get(
action,
'res.data.query.userInfo.userCurrency'
);
const obj = _.get(action, `res.data.currencies[${iso}]`);
return state
.set('iso', iso)
.set('name', _.get(obj, 'name'))
.set('symbol', _.get(obj, 'symbol'));
default:
return state;
}
}
</code></pre>
<p><!--
expect(currencyReducer(undefined, { type: UPDATE_RESULTS, res: { data: { query: { userInfo: { userCurrency: 'eur' } }, currencies: { eur: {name: 'Euro', symbol: '€' } } } } }).toJS()).toEqual({iso: 'eur', name: 'Euro', symbol: '€'})
expect(currencyReducer(undefined, { type: UPDATE_RESULTS, res: { data: { query: { userInfo: { userCurrency: 'eur' } }, currencies: {} } } }).toJS()).toEqual({iso: 'eur', name: '', symbol: ''})
--></p>
<p>Besides using Immutable.js and Lodash’s <a href="https://lodash.com/docs#get"><code>get()</code> method</a>, which already makes the code hard to read, the <code>obj</code> variable makes the code even harder to understand.</p>
<p>All this code does is reorganize the data about the user’s currency into a neat object:</p>
<p><!--
import { Record } from 'immutable'
const UPDATE_RESULTS = 'ur', UPDATE_CART = 'uc'
const Currency = Record({
iso: '',
name: '',
symbol: '',
})
--></p>
<pre><code>const currencyReducer = (state = new Currency(), action) => {
switch (action.type) {
case UPDATE_RESULTS:
case UPDATE_CART: {
const { data } = action.res;
if (data.query === undefined) {
return state;
}
const iso = data.query.userInfo?.userCurrency;
const { name = '', symbol = '' } = data.currencies[iso] ?? {};
return state.merge({ iso, name, symbol });
}
default: {
return state;
}
}
};
</code></pre>
<p><!--
expect(currencyReducer(undefined, { type: UPDATE_RESULTS, res: { data: { query: { userInfo: { userCurrency: 'eur' } }, currencies: { eur: {name: 'Euro', symbol: '€' } } } } }).toJS()).toEqual({iso: 'eur', name: 'Euro', symbol: '€'})
expect(currencyReducer(undefined, { type: UPDATE_RESULTS, res: { data: { query: { userInfo: { userCurrency: 'eur' } }, currencies: {} } } }).toJS()).toEqual({iso: 'eur', name: '', symbol: ''})
--></p>
<p>Now, it’s clearer what shape of data we’re building here, and even Immutable.js isn’t so intimidating. I kept the <code>data</code> name because that’s how it comes from the backend, and it’s commonly used as the root object for whatever the backend API returns. As long as we don’t leak it into the app code and only use it during the initial processing of the raw backend data, it’s okay.</p>
<p>Abstract names are also okay for generic utility functions, like array filtering or sorting:</p>
<pre><code>function findFirstNonEmptyArray(...arrays) {
return (
arrays.find(array => Array.isArray(array) && array.length > 0) ??
[]
);
}
</code></pre>
<p><!-- expect(findFirstNonEmptyArray([], [1], [2,3])).toEqual([1]) --></p>
<p>In the code above, <code>arrays</code> and <code>array</code> are totally fine since that’s exactly what they represent: generic arrays. We don’t yet know what values they will contain, and for the context of this function, it doesn’t matter — it can be anything.</p>
<p><strong>Imprecise names</strong> don’t describe a value enough to be useful. One of the common cases is names with number suffixes. Usually, this happens for the following reasons:</p>
<ul>
<li><strong>Multiple objects:</strong> we have several entities of the same kind.</li>
<li><strong>Data processing:</strong> we process data in some way and use suffixed names to store the result.</li>
<li><strong>New version:</strong> we make a new version of an already existing module, function, or component.</li>
</ul>
<p>In all cases, the solution is to clarify each name.</p>
<p><strong>For multiple objects and data processing</strong>, I try to find something that differentiates the values to make the names more precise.</p>
<p>Consider this example:</p>
<p><!--
const StatusCode = {SuccessCreated: 201}
const test = (comment, fn) => fn(), login = () => {}
const request = () => ({get: () => ({set: () => ({set: () => ({headers: {}, status: 200, body: {data: {}}})})}), post: () => ({send: () => ({set: () => ({headers: {}, status: 200, body: {data: {}}, set: () => ({headers: {}, status: 200, body: { data:{}}})})})})})
const users = [], app = () => {}, usersEndpoint = 'http://localhost', loginEndpoint = 'http://localhost'
const collections = { users: { insertMany: () => {} } }
function expect() { return { toBe: () => {}, toHaveProperty: () => {}, toEqual: () => {} } }
expect.stringContaining = () => {}
expect.stringMatching = () => {}
expect.objectContaining = () => {}
expect.arrayContaining = () => {}
--></p>
<pre><code>test('creates new user', async () => {
const username = 'cosmo';
await collections.users.insertMany(users);
// Log in
const cookies = await login();
// Create user
const response = await request(app)
.post(usersEndpoint)
.send({ username })
.set('Accept', 'application/json')
.set('Cookie', cookies);
expect(response.headers).toHaveProperty(
'content-type',
expect.stringContaining('json')
);
expect(response.status).toBe(StatusCode.SuccessCreated);
expect(response.body).toHaveProperty('data');
expect(response.body.data).toEqual(
expect.objectContaining({
username,
password: expect.stringMatching(/^(?:[a-z]+-){2}[a-z]+$/)
})
);
// Log in with the new user
const response2 = await request(app)
.post(loginEndpoint)
.send({
username,
password: response.body.data.password
})
.set('Accept', 'application/json');
// Fetch users
const response3 = await request(app)
.get(usersEndpoint)
.set('Accept', 'application/json')
.set('Cookie', response2.headers['set-cookie']);
expect(response3.body).toHaveProperty('data');
expect(response3.body.data).toEqual(
expect.arrayContaining([
expect.objectContaining({ username: 'chucknorris' }),
expect.objectContaining({ username })
])
);
});
</code></pre>
<p><!-- // This would be difficult to test so we only run the text function to make sure there are no syntax errors --></p>
<p>In the code above, we send a sequence of network requests to test a REST API. However, the names <code>response</code>, <code>response2</code>, and <code>response3</code> make the code harder to understand, especially when we use the data returned by one request to create the next one. We can make the names more precise:</p>
<p><!--
let test = () => {}, login = () => {}
let collections = { users: { insertMany: () => {} } }
const request = () => ({get: () => ({set: () => ({set: () => ({headers: {}, status: 200, body: {data: {}}})})}), post: () => ({send: () => ({set: () => ({headers: {}, status: 200, body: {data: {}}, set: () => ({headers: {}, status: 200, body: { data:{}}})})})})})
function expect() { return { toBe: () => {}, toHaveProperty: () => {}, toEqual: () => {} } }
expect.stringContaining = () => {}
expect.stringMatching = () => {}
expect.objectContaining = () => {}
--></p>
<pre><code>test('creates new user', async () => {
const username = 'cosmo';
await collections.users.insertMany(users);
// Log in
const cookies = await login();
// Create user
const createResponse = await request(app)
.post(usersEndpoint)
.send({ username })
.set('Accept', 'application/json')
.set('Cookie', cookies);
expect(createResponse.headers).toHaveProperty(
'content-type',
expect.stringContaining('json')
);
expect(createResponse.status).toBe(StatusCode.SuccessCreated);
expect(createResponse.body).toHaveProperty('data');
expect(createResponse.body.data).toEqual(
expect.objectContaining({
username,
password: expect.stringMatching(/^(?:[a-z]+-){2}[a-z]+$/)
})
);
// Log in with the new user
const loginResponse = await request(app)
.post(loginEndpoint)
.send({
username,
password: createResponse.body.data.password
})
.set('Accept', 'application/json');
// Fetch users
const usersResponse = await request(app)
.get(usersEndpoint)
.set('Accept', 'application/json')
.set('Cookie', loginResponse.headers['set-cookie']);
expect(usersResponse.body).toHaveProperty('data');
expect(usersResponse.body.data).toEqual(
expect.arrayContaining([
expect.objectContaining({ username: 'chucknorris' }),
expect.objectContaining({ username })
])
);
});
</code></pre>
<p><!-- // This would be difficult to test so we only run the text function to make sure there are no syntax errors --></p>
<p>Now, it’s clear which request data we’re accessing at any given time.</p>
<p>For a <strong>new version</strong>, I try to rename the old module, function, or component to something like <code>ModuleLegacy</code> instead of naming the new one <code>Module2</code> or <code>ModuleNew</code>, and keep using the original name for the new implementation.</p>
<p>It’s not always possible, but it makes using the old, deprecated module more awkward than the new, improved one — exactly what we want to achieve. Also, names tend to stick forever, even when the original module is long gone. Names like <code>Module2</code> or <code>ModuleNew</code> are fine during development, though, when the new module isn’t yet fully functional or well tested.</p>
<h2>Use the A/HC/LC pattern</h2>
<p>To improve the consistency and clarity of function names, we can follow the A/HC/LC pattern:</p>
<pre><code>prefix? + action (A) + high context (HC) + low context? (LC)
</code></pre>
<p>Let’s see how it works on several examples:</p>
<table>
<thead>
<tr>
<th>Name</th>
<th>Prefix</th>
<th>Action</th>
<th>High context</th>
<th>Low context</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>getRecipe</code></td>
<td></td>
<td><code>get</code></td>
<td><code>recipe</code></td>
<td></td>
</tr>
<tr>
<td><code>getRecipeIngredients</code></td>
<td></td>
<td><code>get</code></td>
<td><code>recipe</code></td>
<td><code>ingredients</code></td>
</tr>
<tr>
<td><code>handleUpdateResponse</code></td>
<td></td>
<td><code>handle</code></td>
<td><code>update</code></td>
<td><code>response</code></td>
</tr>
<tr>
<td><code>shouldShowFooter</code></td>
<td><code>should</code></td>
<td><code>show</code></td>
<td><code>footer</code></td>
<td></td>
</tr>
</tbody>
</table>
<p><strong>Info:</strong> Read more about the <a href="https://github.com/kettanaito/naming-cheatsheet?tab=readme-ov-file#ahclc-pattern">A/HC/LC pattern</a> in Artem Zakharchenko’s Naming cheat sheet.</p>
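<p>As a sketch (the functions below are hypothetical), here’s how names following the pattern might look in code:</p>

```typescript
type Recipe = {
  id: number;
  ingredients: string[];
};

// action (get) + high context (recipe)
function getRecipe(recipes: Recipe[], id: number) {
  return recipes.find(recipe => recipe.id === id);
}

// action (get) + high context (recipe) + low context (ingredients)
function getRecipeIngredients(recipes: Recipe[], id: number) {
  return getRecipe(recipes, id)?.ingredients ?? [];
}

// prefix (should) + action (show) + high context (footer)
function shouldShowFooter(pathname: string) {
  return pathname !== '/checkout';
}
```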
<h2>Use common terms</h2>
<p>It’s a good idea to use well-known and widely adopted terms for programming and domain concepts instead of inventing something cute or clever but likely misunderstood. This is especially problematic for non-native English speakers because we usually don’t know many rare and obscure words.</p>
<p><a href="https://stackoverflow.com/questions/33742899/where-does-reacts-scryrendereddomcomponentswithclass-method-name-come-from">A “great” example</a> of this is the React codebase, where they used “scry” (meaning something like <em>peeping into the future through a crystal ball</em>) instead of “find”.</p>
<h2>Use a single term for each concept</h2>
<p>Using different words for the same concept is confusing. A person reading the code may think that since the words are different, these things aren’t the same and will try to find the difference between the two. It will also make the code less <em>greppable</em>, meaning it will be harder to find all uses of the same thing.</p>
<p><strong>Info:</strong> We talk more about greppability in the Write greppable code section of the <em>Other techniques</em> chapter.</p>
<p><strong>Tip:</strong> Having a project dictionary, or even a linter, might be a good idea to avoid using different words for the same things. <a href="https://cspell.org">CSpell</a> allows us to create a project dictionary and ban certain words that shouldn’t be used. I use a similar approach for writing this book: I use the <a href="https://github.com/sapegin/textlint-rule-terminology">Textlint terminology plugin</a> to make sure I use the terms consistently and spell them correctly in my writing.</p>
<h2>Prefer US English</h2>
<p>Most APIs and programming languages use US English, so it makes a lot of sense to use US English for naming in our project as well. Unless we’re working on a British, Canadian, or Australian project, where the local language may be a better choice.</p>
<p>In any case, consistency is more important than language choice. On several projects, I’ve seen US and UK terms used interchangeably. For example, <em>canceling</em> (US) and <em>cancelling</em> (UK). (Curiously, <em>cancellation</em> is the correct spelling in both.)</p>
<p>Some common words that are spelled differently:</p>
<p><!-- cspell:disable --></p>
<table>
<thead>
<tr>
<th>US English</th>
<th>UK English</th>
</tr>
</thead>
<tbody>
<tr>
<td>behavior</td>
<td>behaviour</td>
</tr>
<tr>
<td>canceling</td>
<td>cancelling</td>
</tr>
<tr>
<td>center</td>
<td>centre</td>
</tr>
<tr>
<td>color</td>
<td>colour</td>
</tr>
<tr>
<td>customize</td>
<td>customise</td>
</tr>
<tr>
<td>favorite</td>
<td>favourite</td>
</tr>
<tr>
<td>license</td>
<td>licence</td>
</tr>
<tr>
<td>math</td>
<td>maths</td>
</tr>
<tr>
<td>optimize</td>
<td>optimise</td>
</tr>
</tbody>
</table>
<p><!-- cspell:enable --></p>
<p><strong>Tip:</strong> <a href="https://cspell.org">CSpell</a> allows us to choose between US and UK English and will highlight inconsistencies in code and comments, though some words are present in both dictionaries.</p>
<h2>Use common opposite pairs</h2>
<p>Often, we create pairs of variables or functions that perform opposite operations or hold values at the opposite ends of a range. For example, <code>startServer</code>/<code>stopServer</code> or <code>minWidth</code>/<code>maxWidth</code>. When we see one, we expect to see the other, and we expect it to have a certain name because it either sounds natural in English (at least to a native speaker) or has been used by generations of programmers before us.</p>
<p>Some of these common pairs are:</p>
<table>
<thead>
<tr>
<th>Term</th>
<th>Opposite</th>
</tr>
</thead>
<tbody>
<tr>
<td>add</td>
<td>remove</td>
</tr>
<tr>
<td>begin</td>
<td>end</td>
</tr>
<tr>
<td>create</td>
<td>delete</td>
</tr>
<tr>
<td>enable</td>
<td>disable</td>
</tr>
<tr>
<td>first</td>
<td>last</td>
</tr>
<tr>
<td>get</td>
<td>set</td>
</tr>
<tr>
<td>increment</td>
<td>decrement</td>
</tr>
<tr>
<td>lock</td>
<td>unlock</td>
</tr>
<tr>
<td>minimum</td>
<td>maximum</td>
</tr>
<tr>
<td>next</td>
<td>previous</td>
</tr>
<tr>
<td>old</td>
<td>new</td>
</tr>
<tr>
<td>open</td>
<td>close</td>
</tr>
<tr>
<td>read</td>
<td>write</td>
</tr>
<tr>
<td>send</td>
<td>receive</td>
</tr>
<tr>
<td>show</td>
<td>hide</td>
</tr>
<tr>
<td>start</td>
<td>stop</td>
</tr>
<tr>
<td>target</td>
<td>source</td>
</tr>
</tbody>
</table>
<p><strong>Tip:</strong> There’s some debate about when to use <em>remove</em> and when to use <em>delete</em>. I’m not so picky about this and recommend sticking to the add/remove and create/delete pairs where it makes sense. Otherwise, I’m okay with either. The difference isn’t as clear as some like to think: for example, on the Unix command line we <em>remove</em> files using the <code>rm</code> command, but on Windows we <em>delete</em> them using the <code>del</code> command.</p>
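<p>To illustrate (with a made-up <code>Door</code> class, not from any real codebase), symmetric pairs make an API predictable: once we’ve seen <code>lock()</code>, we can guess <code>unlock()</code> without reading the docs:</p>

```javascript
// A hypothetical Door class: each operation has its natural opposite
// (lock/unlock, open/close), so the API is easy to guess
class Door {
  constructor() {
    this.locked = false;
    this.opened = false;
  }
  lock() { this.locked = true; }
  unlock() { this.locked = false; }
  open() {
    // Opposite pairs also make guard conditions read naturally
    if (this.locked === false) {
      this.opened = true;
    }
  }
  close() { this.opened = false; }
}
```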
<h2>Check the spelling of your names</h2>
<p>Typos in names and comments are very common. They don’t cause bugs <em>most of the time</em>, but could still reduce readability a bit, and code with many <!-- cspell:disable -->“typoses”<!-- cspell:enable --> looks sloppy. Typos also make the code less greppable. So having a spell checker in the code editor is a good idea.</p>
<p><strong>Info:</strong> We talk more about spell checking in the Spell checking section of the <em>Learn your code editor</em> chapter, and about code greppability in the Write greppable code section of the <em>Other techniques</em> chapter.</p>
<h2>Use established naming conventions</h2>
<p>Each programming language has its own conventions and idiomatic way of doing certain things, including the way programmers spell the names of variables, functions, and other symbols in the code: <em>naming conventions</em>.</p>
<p>The most popular naming conventions are:</p>
<ul>
<li>camelCase;</li>
<li>kebab-case;</li>
<li>PascalCase;</li>
<li>snake_case;</li>
<li>SCREAMING_SNAKE_CASE.</li>
</ul>
<p><strong>Tip:</strong> There are also lowercase, UPPERCASE, and SpoNGEcAsE, but I wouldn’t recommend them because these conventions make it hard to distinguish separate words.</p>
<p>Most JavaScript and TypeScript style guides suggest the following:</p>
<ul>
<li>camelCase for variable names and functions;</li>
<li>PascalCase for class names, types, and components;</li>
<li>SCREAMING_SNAKE_CASE for constants.</li>
</ul>
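<p>For example, all three conventions can appear together in a single module (a hypothetical retry helper, purely for illustration):</p>

```javascript
// SCREAMING_SNAKE_CASE for constants
const MAX_RETRIES = 3;

// PascalCase for classes, types, and components
class RetryPolicy {
  constructor(retries) {
    this.retries = retries;
  }
}

// camelCase for variables and functions
function createDefaultPolicy() {
  return new RetryPolicy(MAX_RETRIES);
}

const defaultPolicy = createDefaultPolicy();
```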
<p><strong>Tip:</strong> One of the benefits of naming conventions that use an underscore (<code>_</code>) or nothing to glue words together, over conventions that use a dash (<code>-</code>), is that we can select a full name with a double-click or the Alt+Shift+Left and Alt+Shift+Right hotkeys (these expand the selection to the word boundary).</p>
<p>Code that doesn’t follow the established naming conventions of a particular language looks awkward to developers who are used to those conventions. For example, here’s a JavaScript snippet that uses snake_case names:</p>
<p><!-- let console = { log: vi.fn() } --></p>
<pre><code>const fruits = ['Guava', 'Papaya', 'Pineapple'];
const loud_fruits = fruits.map(fruit => fruit.toUpperCase());
console.log(loud_fruits);
// → ['GUAVA', 'PAPAYA', 'PINEAPPLE']
</code></pre>
<p><!-- expect(loud_fruits).toEqual(['GUAVA', 'PAPAYA', 'PINEAPPLE']) --></p>
<p>Note the use of different naming conventions: <code>loud_fruits</code> uses snake_case, and <code>toUpperCase</code> uses camelCase.</p>
<p>Now, compare it with the same code using camelCase:</p>
<p><!-- let console = { log: vi.fn() } --></p>
<pre><code>const fruits = ['Guava', 'Papaya', 'Pineapple'];
const loudFruits = fruits.map(fruit => fruit.toUpperCase());
console.log(loudFruits);
// → ['GUAVA', 'PAPAYA', 'PINEAPPLE']
</code></pre>
<p><!-- expect(loudFruits).toEqual(['GUAVA', 'PAPAYA', 'PINEAPPLE']) --></p>
<p>Since JavaScript’s own methods and browser APIs all use camelCase (for example, <code>forEach()</code>, <code>toUpperCase()</code>, or <code>scrollIntoView()</code>), using camelCase for our own variables and functions feels natural.</p>
<p>However, in Python, where snake_case is common, it looks natural:</p>
<pre><code>fruits = ['Guava', 'Papaya', 'Pineapple']
loud_fruits = [fruit.upper() for fruit in fruits]
print(loud_fruits)
</code></pre>
<p>One thing that developers often disagree on is how to spell acronyms (for example, HTML) and words with unusual casing (for example, iOS). There are several approaches:</p>
<ul>
<li>Keep the original spelling: <code>dangerouslySetInnerHTML</code>, <!-- cspell:disable --><code>WebiOS</code><!-- cspell:enable -->;</li>
<li>Do something weird: <code>XMLHttpRequest</code>, <code>DatePickerIOS</code>, <!-- cspell:disable --><code>HTMLHRElement</code><!-- cspell:enable -->;</li>
<li>Normalize the words: <code>WebIos</code>, <code>XmlHttpRequest</code>, <code>HtmlHrElement</code>.</li>
</ul>
<p>Unfortunately, the most readable approach, normalization, seems to be the least popular. Since we can’t use spaces in names, it can be hard to separate words: <!-- cspell:disable --><code>WebiOS</code><!-- cspell:enable --> could be read as <!-- cspell:disable --><code>webi os</code><!-- cspell:enable --> instead of <code>web ios</code>, and it takes extra time to read it correctly. Such names also don’t work well with code spell checkers: they mark <!-- cspell:disable --><code>webi</code><!-- cspell:enable --> and <!-- cspell:disable --><code>htmlhr</code><!-- cspell:enable --> as incorrect words.</p>
<p>The normalized spelling doesn’t have these issues: <code>dangerouslySetInnerHtml</code>, <code>WebIos</code>, <code>XmlHttpRequest</code>, <code>DatePickerIos</code>, or <code>HtmlHrElement</code>. The word boundaries are clear.</p>
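<p>To see why normalized names are easier to process, here’s a toy helper (purely illustrative, not a real API) that splits a PascalCase name into words, treating each uppercase letter as a word boundary:</p>

```javascript
// A toy word splitter: an uppercase letter starts a new word,
// then lowercase letters and digits continue it
const splitWords = name => name.match(/[A-Z][a-z0-9]*|[a-z0-9]+/g);

splitWords('XmlHttpRequest');
// → ['Xml', 'Http', 'Request'] (clean word boundaries)

splitWords('XMLHttpRequest');
// → ['X', 'M', 'L', 'Http', 'Request'] (word boundaries are lost)
```

The original spelling leaves the splitter, and a human reader, guessing where one word ends and the next begins.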
<h2>Avoid unnecessary variables</h2>
<p>Often, we add intermediate variables to store the result of an operation before passing it somewhere else or returning it from the function. In many cases, this variable is unnecessary.</p>
<p>Consider this example:</p>
<p><!--
let state = null;
let setState = (value) => { state = value }
let handleUpdateResponse = x => x
--></p>
<pre><code>function handleUpdate(response) {
const result = handleUpdateResponse(response.status);
setState(result);
}
</code></pre>
<p><!--
handleUpdate({ status: 200 })
expect(state).toBe(200)
--></p>
<p>And this one:</p>
<pre><code>async function handleResponse(response) {
const data = await response.json();
return data;
}
</code></pre>
<p><!-- expect(handleResponse({ json: () => Promise.resolve(42) })).resolves.toBe(42) --></p>
<p>In both cases, the <code>result</code> and <code>data</code> variables don’t add much to the code. The names don’t add new information, and the code is short enough to be inlined:</p>
<p><!--
let state = null;
let setState = (value) => { state = value }
let handleUpdateResponse = x => x
--></p>
<pre><code>function handleUpdate(response) {
setState(handleUpdateResponse(response.status));
}
</code></pre>
<p><!--
handleUpdate({ status: 200 })
expect(state).toBe(200)
--></p>
<p>Or for the second example:</p>
<pre><code>function handleResponse(response) {
return response.json();
}
</code></pre>
<p><!-- expect(handleResponse({ json: () => Promise.resolve(42) })).resolves.toBe(42) --></p>
<p>Here’s another example that checks whether the browser supports CSS transitions by probing available CSS properties:</p>
<p><!-- function test(document) { --></p>
<pre><code>let b = document.body.style;
if (
b.MozTransition == '' ||
b.WebkitTransition == '' ||
b.OTransition == '' ||
b.transition == ''
) {
document.documentElement.className += ' trans';
}
</code></pre>
<p><!--
}
let document1 = {
documentElement: { className: '' },
body: { style: {} }
}
test(document1)
expect(document1.documentElement.className).toBe('')</p>
<p>let document2 = {
documentElement: { className: '' },
body: { style: { transition: '' } }
}
test(document2)
expect(document2.documentElement.className).toBe(' trans')
--></p>
<p>In the code above, the alias <code>b</code> replaces the clear name <code>document.body.style</code> with one that’s not just obscure but misleading: <code>b</code> and <code>style</code> are unrelated. Inlining would make the code too long because the style values are accessed many times, but a clearer shortcut helps a lot:</p>
<p><!-- function test(document) { --></p>
<pre><code>const { style } = document.body;
if (
style.MozTransition === '' ||
style.WebkitTransition === '' ||
style.OTransition === '' ||
style.transition === ''
) {
document.documentElement.className += ' trans';
}
</code></pre>
<p><!--
}
let document1 = {
documentElement: { className: '' },
body: { style: {} }
}
test(document1)
expect(document1.documentElement.className).toBe('')</p>
<p>let document2 = {
documentElement: { className: '' },
body: { style: { transition: '' } }
}
test(document2)
expect(document2.documentElement.className).toBe(' trans')
--></p>
<p>Another case is when we create an object to hold a group of values but never use it as a whole (for example, to pass it to another function), only to access separate properties in it. It makes us waste time inventing a new variable name, and we often end up with something awkward.</p>
<p>For example, we can use such an object to store a function return value:</p>
<p><!--
const console = { log: vi.fn() }
const parseMs = (x) => ({minutes: x, seconds: 0}), durationSec = 5
--></p>
<pre><code>const duration = parseMs(durationSec * 1000);
// Then later we access the values like so:
console.log(duration.minutes, duration.seconds);
</code></pre>
<p><!-- expect(duration.minutes).toBe(5000)--></p>
<p>In the code above, the <code>duration</code> variable is only used as a container for <code>minutes</code> and <code>seconds</code> values. By using destructuring we could skip the intermediate variable:</p>
<p><!-- const parseMs = (x) => ({minutes: x, seconds: 0}), durationSec = 5 --></p>
<pre><code>const { minutes, seconds } = parseMs(durationSec * 1000);
</code></pre>
<p><!-- expect(minutes).toBe(5000)--></p>
<p>Now, we can access <code>minutes</code> and <code>seconds</code> directly.</p>
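<p>As a bonus, destructuring also lets us rename properties on the fly, which helps when the plain names would clash with existing variables in the same scope (using a hypothetical <code>parseMs()</code>, similar to the one above):</p>

```javascript
// A hypothetical parseMs() that converts milliseconds
// into an object with minutes and seconds
const parseMs = milliseconds => ({
  minutes: Math.floor(milliseconds / 60_000),
  seconds: Math.floor(milliseconds / 1000) % 60
});

// Rename while destructuring instead of inventing
// an intermediate container variable
const { minutes: durationMinutes, seconds: durationSeconds } =
  parseMs(90_000);
// durationMinutes === 1, durationSeconds === 30
```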
<p>Functions with optional parameters grouped in an object are another common example:</p>
<p><!--
let document = window.document;
const hiddenInput = (name, value) => {
const input = document.createElement('input');
input.type = 'hidden';
input.name = name;
input.value = value;
return input;
};
--></p>
<p><!-- eslint-skip --></p>
<pre><code>function submitFormData(action, options) {
const form = document.createElement('form');
form.method = options.method;
form.action = action;
form.target = options.target;
if (options.parameters) {
Object.keys(options.parameters)
.map(paramName =>
hiddenInput(paramName, options.parameters[paramName])
)
.forEach(form.appendChild.bind(form));
}
document.body.appendChild(form);
form.submit();
document.body.removeChild(form);
}
</code></pre>
<p><!--
expect(submitFormData('/foo', { method: 'post', target: '_top', parameters: {a: 42} }))
expect(submitFormData('/foo', { method: 'post', target: '_top' }))
--></p>
<p>We can use destructuring again to simplify the code:</p>
<p><!--
let document = window.document;
const hiddenInput = (name, value) => {
const input = document.createElement('input');
input.type = 'hidden';
input.name = name;
input.value = value;
return input;
};
--></p>
<p><!-- eslint-disable unicorn/prefer-dom-node-append, unicorn/prefer-dom-node-remove --></p>
<pre><code>function submitFormData(action, { method, target, parameters }) {
const form = document.createElement('form');
form.method = method;
form.action = action;
form.target = target;
if (parameters) {
for (const [name, parameter] of Object.entries(parameters)) {
const input = hiddenInput(name, parameter);
form.appendChild(input);
}
}
document.body.appendChild(form);
form.submit();
document.body.removeChild(form);
}
</code></pre>
<p><!--
expect(submitFormData('/foo', { method: 'post', target: '_top', parameters: {a: 42} }))
expect(submitFormData('/foo', { method: 'post', target: '_top' }))
--></p>
<p>We removed the <code>options</code> object that was used in almost every line of the function body, making the function shorter and more readable.</p>
<p>Sometimes, intermediate variables can serve as comments, explaining the data they hold that might not otherwise be clear:</p>
<p><!--
const hasTextLikeOnlyChildren = () => false
const Flex = ({children}) => <>{children}</>
const Body = ({children}) => <>{children}</>
--></p>
<pre><code>function Tip({ type, content }) {
const shouldBeWrapped = hasTextLikeOnlyChildren(content);
return (
<Flex alignItems="flex-start">
{shouldBeWrapped ? <Body type={type}>{content}</Body> : content}
</Flex>
);
}
</code></pre>
<p><!--
const {container: c1} = RTL.render(<Tip type="pizza" content="Hola" />);
expect(c1.textContent).toEqual('Hola')
--></p>
<p>Another good reason to use an intermediate variable is to split a long line of code into multiple lines. Consider this example of an SVG image stored as a CSS URL:</p>
<pre><code>const borderImage = `url("data:image/svg+xml,<svg xmlns='http://www.w3.org/2000/svg' width='12' height='12'><path d='M2 2h2v2H2zM4 0h2v2H4zM10 4h2v2h-2zM0 4h2v2H0zM6 0h2v2H6zM8 2h2v2H8zM8 8h2v2H8zM6 10h2v2H6zM0 6h2v2H0zM10 6h2v2h-2zM4 10h2v2H4zM2 8h2v2H2z' fill='%23000'/></svg>")`;
</code></pre>
<p><!-- expect(borderImage).toMatch('<svg ') --></p>
<p>Lack of formatting makes it hard to read and modify. Let’s split it into several variables:</p>
<pre><code>const borderPath = `M2 2h2v2H2zM4 0h2v2H4zM10 4h2v2h-2zM0 4h2v2H0zM6 0h2v2H6zM8 2h2v2H8zM8 8h2v2H8zM6 10h2v2H6zM0 6h2v2H0zM10 6h2v2h-2zM4 10h2v2H4zM2 8h2v2H2z`;
const borderSvg = `<svg xmlns='http://www.w3.org/2000/svg' width='12' height='12'><path d='${borderPath}' fill='%23000'/></svg>`;
const borderImage = `url("data:image/svg+xml,${borderSvg}")`;
</code></pre>
<p><!-- expect(borderImage).toMatch('<svg ') --></p>
<p>While there’s still some line wrapping, it’s now easier to see the separate parts the image is composed of.</p>
<h2>Avoiding name clashes</h2>
<p>We’ve talked about avoiding number suffixes by making names more precise. Now, let’s explore a few other cases of clashing names and <a href="https://gist.github.com/sapegin/a46ab46cdd4d6b5045027d120b9c967d">how to avoid them</a>.</p>
<p>I often struggle with name clashes for two reasons:</p>
<ol>
<li>Storing a function’s return value (example: <code>const isCrocodile = isCrocodile()</code>).</li>
<li>Creating a React component to display an object of a certain TypeScript type (example: <code>const User = (props: { user: User }) => null</code>).</li>
</ol>
<p>Let’s start with function return values. Consider this example:</p>
<p><!-- const getCrocodiles = (x) => ([ x.color ]) --></p>
<pre><code>const crocodiles = getCrocodiles({ color: 'darkolivegreen' });
</code></pre>
<p><!-- expect(crocodiles).toEqual(['darkolivegreen']) --></p>
<p>In the code above, it’s clear which one is the function and which one is the array returned by the function. Now, consider this:</p>
<p><!--
let crocodiles = [{type: 'raccoon'}]
let isCrocodile = x => x.type === 'croc'
--></p>
<pre><code>const _o_0_ = isCrocodile(crocodiles[0]);
</code></pre>
<p><!-- expect(_o_0_).toBe(false) --></p>
<p>In this case, our naming choices are limited:</p>
<ul>
<li><code>isCrocodile</code> is a natural choice but clashes with the function name;</li>
<li><code>crocodile</code> could be interpreted as a variable holding one element of the <code>crocodiles</code> array.</li>
</ul>
<p>So, what can we do about it? Not much:</p>
<ul>
<li>choose a domain-specific name (example: <code>shouldShowGreeting</code>);</li>
<li>inline the function call and avoid a local variable altogether;</li>
<li>choose a more specific name (examples: <code>isFirstItemCrocodile</code> or <code>isGreenCrocodile</code>);</li>
<li>shorten the name if the scope is small (example: <code>isCroc</code>).</li>
</ul>
<p>Unfortunately, none of these options is ideal:</p>
<ul>
<li>Inlining can make the code more verbose, especially if the function’s result is used several times or if the function has multiple parameters. It can also affect performance, though it usually doesn’t.</li>
<li>Longer names can also make the code a bit more verbose.</li>
<li>Short names can be confusing.</li>
</ul>
<p>I usually use domain-specific names or inlining (for very simple calls, used once or twice):</p>
<p><!-- const isCrocodile = x => x.type === 'croc' --></p>
<pre><code>function UserProfile({ user }) {
const shouldShowGreeting = isCrocodile(user);
return (
<section>
{shouldShowGreeting && (
<p>Hola, green crocodile, the ruler of the Galaxy!</p>
)}
<p>Name: {user.name}</p>
<p>Age: {user.age}</p>
</section>
);
}
</code></pre>
<p><!--
const {container: c1} = RTL.render(<UserProfile user={{type: 'croc', name: 'Gena', age: '37'}} />);
expect(c1.textContent).toEqual('Hola, green crocodile, the ruler of the Galaxy!Name: GenaAge: 37')
const {container: c2} = RTL.render(<UserProfile user={{type: 'che', name: 'Cheburashka', age: '12'}} />);
expect(c2.textContent).toEqual('Name: CheburashkaAge: 12')
--></p>
<p>The <code>shouldShowGreeting</code> name describes how the value is used (domain-specific name) — to check <em>whether we need to show a greeting</em>, as opposed to the value itself — <em>whether the user is a crocodile</em>. This has another benefit: if we decide to change the condition, we don’t need to rename the variable.</p>
<p>For example, we could decide to greet crocodiles only in the morning:</p>
<p><!-- const isCrocodile = x => x.type === 'croc' --></p>
<pre><code>function UserProfile({ user, date }) {
const shouldShowGreeting =
isCrocodile(user) && date.getHours() < 10;
return (
<section>
{shouldShowGreeting && (
<p>Hola, green crocodile, the ruler of the Galaxy!</p>
)}
<p>Name: {user.name}</p>
<p>Age: {user.age}</p>
</section>
);
}
</code></pre>
<p><!--
const {container: c1} = RTL.render(<UserProfile user={{type: 'croc', name: 'Gena', age: '37'}} date={new Date(2023,1,1,9,37,0)} />);
expect(c1.textContent).toEqual('Hola, green crocodile, the ruler of the Galaxy!Name: GenaAge: 37')
const {container: c2} = RTL.render(<UserProfile user={{type: 'croc', name: 'Gena', age: '37'}} date={new Date(2023,1,1,17,37,0)} />);
expect(c2.textContent).toEqual('Name: GenaAge: 37')
--></p>
<p>The name still makes sense after this change, whereas something like <code>isCroc</code> would become incorrect.</p>
<p>Unfortunately, I don’t have a good solution for clashing React components and TypeScript types. This usually happens when we create a component to render an object or a certain type:</p>
<pre><code>interface User {
name: string;
email: string;
}
export function User({ user }: { user: User }) {
return (
<p>
{user.name} ({user.email})
</p>
);
}
</code></pre>
<p><!--
const {container: c1} = RTL.render(<User user={{ name: 'Chuck', email: '@' }} />);
expect(c1.textContent).toEqual('Chuck (@)')
--></p>
<p>Though TypeScript allows using a type and a value with the same name in the same scope, it makes the code confusing.</p>
<p>The only solution I see is renaming either the type or the component. I usually try to rename a component, though it requires some creativity to come up with a name that’s not confusing. For example, names like <code>UserComponent</code> or <code>UserView</code> would be confusing because other components don’t have these suffixes, but something like <code>UserProfile</code> may work in this case:</p>
<pre><code>interface User {
name: string;
email: string;
}
export function UserProfile({ user }: { user: User }) {
return (
<p>
{user.name} ({user.email})
</p>
);
}
</code></pre>
<p><!--
const {container: c1} = RTL.render(<UserProfile user={{ name: 'Chuck', email: '@' }} />);
expect(c1.textContent).toEqual('Chuck (@)')
--></p>
<p>This matters most when either the type or the component is exported and reused in other places. Local names are more forgiving since they are only used in the same file, and the definition is right there.</p>
<hr />
<p>Names don’t affect the way our code works, but they do affect the way we read it. Misleading or imprecise names can cause misunderstandings and make the code harder to understand and change. They can even cause bugs when we act based on incorrect assumptions caused by bad names.</p>
<p>Additionally, it’s hard to understand what a certain value is when it doesn’t have a name. For example, it could be a mysterious number, an obscure function parameter, or a complex condition. In all these cases, by naming things, we could tremendously improve code readability.</p>
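<p>For example (a hypothetical expiration check), a named constant turns a mystery number into documentation:</p>

```javascript
// Before: what does 86_400_000 mean here?
const isExpired = timestamp => Date.now() - timestamp > 86_400_000;

// After: the named constant carries the meaning,
// and the arithmetic shows where the number comes from
const MILLISECONDS_IN_DAY = 24 * 60 * 60 * 1000;
const isExpiredClear = timestamp =>
  Date.now() - timestamp > MILLISECONDS_IN_DAY;
```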
<p>Start thinking about:</p>
<ul>
<li>Replacing negative booleans with positive ones.</li>
<li>Reducing the scope or the lifespan of variables.</li>
<li>Choosing more specific names for symbols with a larger scope or longer lifespan.</li>
<li>Choosing shorter names for symbols with a small scope and short lifespan.</li>
<li>Replacing magic numbers with meaningfully named constants.</li>
<li>Merging several constants representing a range or a scale into an object or enum.</li>
<li>Using destructuring or inlining to think less about inventing new names.</li>
<li>Choosing domain-specific names for local variables instead of more literal names.</li>
</ul>
<hr />
<p>Read other sample chapters of the book:</p>
<ul>
<li><a href="/blog/avoid-loops/">Avoid loops</a></li>
<li><a href="/blog/avoid-conditions/">Avoid conditions</a></li>
<li><a href="/blog/avoid-reassigning-variables/">Avoid reassigning variables</a></li>
<li><a href="/blog/avoid-mutation/">Avoid mutation</a></li>
<li><a href="/blog/avoid-comments/">Avoid comments</a></li>
<li><em>Naming is hard</em> (this post)</li>
<li><a href="/blog/divide/">Divide and conquer, or merge and relax</a></li>
<li><a href="/blog/dont-make-me-think/">Don’t make me think</a></li>
</ul>
Healing my open source addictionhttps://sapegin.me/blog/ex-open-source/https://sapegin.me/blog/ex-open-source/I started my first open source project in 2012 but 10 years later I quit it almost entirely because of its hostile culture. Recently, I found a hobby that gives me everything I liked about open source but without any of its downsides.Fri, 26 May 2023 00:00:00 GMT<p>I started <a href="https://github.com/sapegin/richtypo.js">my first open source project</a> in 2012 to share my work with the community, to learn, and as a way to organize and document reusable pieces of code for my personal projects. Since then I’ve created <a href="https://github.com/sapegin">a few relatively popular projects</a> and published <a href="https://www.npmjs.com/~sapegin">85 npm packages</a>.</p>
<p></p>
<p>However, the hostility of the open source community made me stop contributing and even looking at issues and pull requests of most of my projects. I also lost interest in writing code as a hobby over the years.</p>
<p><strong>My open source journey:</strong> I’ve written about open source before: from <a href="/blog/open-source-for-everyone/">encouraging folks to do it</a>, to <a href="/blog/personal-open-source/">explaining why I keep my personal code on GitHub</a>, to <a href="/blog/no-complaints-oss/">ranting about the culture</a>, and <a href="/blog/going-offline/">not contributing anymore</a>.</p>
<p>I liked many things about open source, and it was a bit sad to lose them. Luckily, around the same time I found a new hobby that turned out to be a great, and much healthier, replacement for open source — leathercraft.</p>
<p>Here’s what I like about it:</p>
<ul>
<li><strong>Design.</strong> I always loved to design things, whether it’s a site design, app user interface, or software architecture. Leather has it all: designing a product to solve a set of problems with a bunch of constraints is a lot of fun for me. Sketching, making templates on a computer — either based on a product I’ve seen somewhere else, or completely from scratch.</li>
<li><strong>Iterations.</strong> Improving design over time: starting from paper prototypes and then using an actual thing in leather for some time to see how it performs and thinking how to solve uncovered issues (bugs).</li>
<li><strong>Making something that people will use and love.</strong> Leathercraft is all about making useful products either for yourself, for your family, or friends. Things that <a href="https://klatzleathergoods.etsy.com/">people will use every day</a>: wallets, keyholders, bags, camera straps, and so on.</li>
</ul>
<p>And even more:</p>
<ul>
<li>No more angry users demanding that you fix their problems for free. Leathercraft is primarily a solitary hobby. (It’s probably different when you try to sell your goods, but that’s not the case for me yet.)</li>
<li>No more sitting at the computer until late at night. I only use a computer to draw templates that I later print, and an iPad to sketch new products. It still requires a lot of sitting, though.</li>
<li>Leathercraft is very meditative and relaxing: cutting leather, finishing edges, stitching…</li>
</ul>
<p>I also see some similarities with my work as a software developer, with the things I (still) like about it. I try to find some kind of design system for my leather work: learn certain ways of doing things, like cutting strap ends or finishing edges, or particular design elements like pockets, so all my leather products look consistent and match each other, but at the same time have a unique style.</p>
Washing your code: avoid commentshttps://sapegin.me/blog/avoid-comments/https://sapegin.me/blog/avoid-comments/Some developers never comment their code, some comment too much. The former believe that the code should be self-documenting, the latter read somewhere that they should always comment their code. Both are wrong.Thu, 27 Apr 2023 00:00:00 GMT<p><!-- description: Writing useful comments, when to comment our code, and when it’s better not to --></p>
<p>Some programmers never comment their code, while others comment too much. The former believe code should be self-documenting, while the latter have read somewhere that they should comment every line of their code.</p>
<p>Both are wrong.</p>
<p>I don’t believe in self-documenting code. Yes, we should rewrite unclear code to make it more obvious and use meaningful, correct names, but some things can’t be expressed by the code alone.</p>
<p>Commenting too much doesn’t help either: comments start to repeat the code, and instead of aiding understanding, they clutter the code and distract the reader.</p>
<p>Let’s talk about writing useful comments.</p>
<h2>Getting rid of comments (or not)</h2>
<p>Some programmers have a habit of creating a new function whenever they need to explain a block of code: instead of writing a comment, they turn the comment text into a function name. Most of the time there’s no benefit, and often it even reduces readability, because function names are less expressive than comment text.</p>
<p>Here’s a typical example of code I usually write:</p>
<p><!--
let window = { showInformationMessage: () => {} }
let logMessage = () => {}
class Test {
quickPick = {
hide: () => {}
}
getRelativePath() { return 'src/foo.txt' }
getAbsolutePath() { return '/stuff/src/foo.txt' }
isDirectory() { return true }
ensureFolder() { return true }</p>
<p>test() {
--></p>
<pre><code>async function createNew() {
const relativePath = this.getRelativePath();
const fullPath = this.getAbsolutePath();
if (this.isDirectory()) {
// User types a folder name: foo/bar/
logMessage('Creating a folder:', fullPath);
// Create a folder with all subfolders
const created = await this.ensureFolder(fullPath);
if (created === false) {
return;
}
// There seem to be no API to reveal a folder in Explorer,
// so show a notification instead
window.showInformationMessage(`Folder created: ${relativePath}`);
} else {
// User types a file name: foo/bar.ext
logMessage('Creating a file:', fullPath);
// Check if file already exists
if (fs.existsSync(fullPath)) {
// Open the file and show an info message
await window.showTextDocument(Uri.file(fullPath));
window.showInformationMessage(
`File already exists: ${relativePath}`
);
return;
}
// Create an empty file
const created = await this.ensureFile(fullPath);
if (created === false) {
return;
}
// Open the new file
await window.showTextDocument(Uri.file(fullPath));
}
this.quickPick.hide();
}
</code></pre>
<p><!--
return true;
}
}
let test = new Test()
expect(test.test).not.toThrowError()
--></p>
<p>Let’s try to replace comments with function calls:</p>
<p><!--
let window = { showInformationMessage: () => {} }
let logMessage = () => {}
class Test {
quickPick = {
hide: () => {}
}
getRelativePath() { return 'src/foo.txt' }
getAbsolutePath() { return '/stuff/src/foo.txt' }
isDirectory() { return true }
ensureFolder() { return true }</p>
<p>test() {
--></p>
<pre><code>async function createDirectory(fullPath, relativePath) {
logMessage('Creating a folder:', fullPath);
const created = await this.ensureFolder(fullPath);
if (created === false) {
return;
}
window.showInformationMessage(`Folder created: ${relativePath}`);
}
async function openExistingFile(fullPath, relativePath) {
await window.showTextDocument(Uri.file(fullPath));
window.showInformationMessage(
`File already exists: ${relativePath}`
);
}
async function createFile(fullPath, relativePath) {
logMessage('Creating a file:', fullPath);
if (fs.existsSync(fullPath)) {
await this.openExistingFile(fullPath, relativePath);
return;
}
const created = await this.ensureFile(fullPath);
if (created === false) {
return;
}
await window.showTextDocument(Uri.file(fullPath));
}
async function createNew() {
const relativePath = this.getRelativePath();
const fullPath = this.getAbsolutePath();
if (this.isDirectory()) {
await this.createDirectory(fullPath, relativePath);
} else {
await this.createFile(fullPath, relativePath);
}
this.quickPick.hide();
}
</code></pre>
<p><!--
return true;
}
}
let test = new Test()
expect(test.test).not.toThrowError()
--></p>
<p>The main condition (is directory?) is now more visible and the code has less nesting. However, the <code>openExistingFile()</code> method adds unnecessary indirection: it doesn’t contain any complexity or nesting worth hiding away, but now we need to check the source to see what it actually does. It’s hard to choose a name that is both concise and clearer than the code itself.</p>
<p>Each branch of the main condition has only one level of nesting, and the overall structure is mostly linear, so it doesn’t make sense to split them further than extracting each branch into its own method. Additionally, in the original version, the comments provided a high-level overview and the necessary context, allowing us to skip blocks we aren’t interested in.</p>
<p>On the other hand, the <code>isDirectory()</code> and <code>ensureFile()</code> methods are good examples of extraction, as they abstract away generic low-level functionality.</p>
<p>Overall, I don’t think that splitting a function into many small functions just because it’s “long” makes the code more readable. Often, it has the opposite effect because it hides important details inside other functions, making it harder to modify the code.</p>
<p><strong>Info:</strong> We talk about code splitting in more detail in the <a href="/blog/divide/">Divide and conquer, or merge and relax</a> chapter.</p>
<p>Another common use for comments is complex conditions:</p>
<p><!--
let regExp = /y/
let textEditor = {
document: {
lineCount: 9,
lineAt: () => ({text: 'xxxx'})
}
}
let decorate = vi.fn()
--></p>
<pre><code>function handleChange({ contentChanges, decoratedLines, lineCount }) {
const changedLines = contentChanges.map(
({ range }) => range.start.line
);
// Skip decorating for certain cases to improve performance
if (
// No lines were added or removed
lineCount === textEditor.document.lineCount &&
// All changes are single line changes
contentChanges.every(({ range }) => range.isSingleLine) &&
// Had no decorators on changed lines
changedLines.every(x => decoratedLines.has(x) === false) &&
// No need to add decorators to changed lines
changedLines.some(x =>
regExp?.test(textEditor.document.lineAt(x).text)
) === false
) {
return;
}
decorate();
}
</code></pre>
<p><!--
decorate.mockClear();</p>
<p>handleChange({
contentChanges: [
{range: {start: { line: 4 }, isSingleLine: true }}
],
decoratedLines: new Set([5]),
lineCount: 9
})
expect(decorate).not.toHaveBeenCalled()</p>
<p>handleChange({
contentChanges: [
{range: {start: { line: 5 }, isSingleLine: true }}
],
decoratedLines: new Set([5]),
lineCount: 10
})
expect(decorate).toHaveBeenCalled()</p>
<p>handleChange({
contentChanges: [
{range: {start: { line: 5 }, isSingleLine: false }}
],
decoratedLines: new Set([5]),
lineCount: 9
})
expect(decorate).toHaveBeenCalled()</p>
<p>handleChange({
contentChanges: [
{range: {start: { line: 5 }, isSingleLine: true }}
],
decoratedLines: new Set([5]),
lineCount: 9
})
expect(decorate).toHaveBeenCalled()
--></p>
<p>In the code above, we have a complex condition with multiple clauses. The problem with this code is that it’s hard to see the high-level structure of the condition. Is it <code>something && something else</code>? Or is it <code>something || something else</code>? It’s hard to see what code belongs to the condition itself and what belongs to individual clauses.</p>
<p>We can extract each clause into a separate variable or function and use comments as their names:</p>
<p><!--
let regExp = /y/
let textEditor = {
document: {
lineCount: 9,
lineAt: () => ({text: 'xxxx'})
}
}
let decorate = vi.fn()
--></p>
<pre><code>function handleChange({ contentChanges, decoratedLines, lineCount }) {
const changedLines = contentChanges.map(
({ range }) => range.start.line
);
const lineCountHasChanged =
lineCount !== textEditor.document.lineCount;
const hasMultilineChanges = contentChanges.some(
({ range }) => range.isSingleLine === false
);
const hadDecoratorsOnChangedLines = changedLines.some(x =>
decoratedLines.has(x)
);
const shouldHaveDecoratorsOnChangedLines = changedLines.some(x =>
regExp?.test(textEditor.document.lineAt(x).text)
);
// Skip decorating for certain cases to improve performance
if (
lineCountHasChanged === false &&
hasMultilineChanges === false &&
hadDecoratorsOnChangedLines === false &&
shouldHaveDecoratorsOnChangedLines === false
) {
return;
}
decorate();
}
</code></pre>
<p><!--
decorate.mockClear();</p>
<p>handleChange({
contentChanges: [
{range: {start: { line: 4 }, isSingleLine: true }}
],
decoratedLines: new Set([5]),
lineCount: 9
})
expect(decorate).not.toHaveBeenCalled()</p>
<p>handleChange({
contentChanges: [
{range: {start: { line: 5 }, isSingleLine: true }}
],
decoratedLines: new Set([5]),
lineCount: 10
})
expect(decorate).toHaveBeenCalled()</p>
<p>handleChange({
contentChanges: [
{range: {start: { line: 5 }, isSingleLine: false }}
],
decoratedLines: new Set([5]),
lineCount: 9
})
expect(decorate).toHaveBeenCalled()</p>
<p>handleChange({
contentChanges: [
{range: {start: { line: 5 }, isSingleLine: true }}
],
decoratedLines: new Set([5]),
lineCount: 9
})
expect(decorate).toHaveBeenCalled()
--></p>
<p>In the code above, we separated levels of abstraction, so the implementation details of each clause don’t distract us from the high-level condition. Now, the structure of the condition is clear.</p>
<p>However, I wouldn’t go further and extract each clause into its own function unless they are reused.</p>
<p><strong>Info:</strong> We talk more about conditions in the <a href="/blog/avoid-conditions/">Avoid conditions</a> chapter.</p>
<h2>Good comments</h2>
<p>Good comments explain <em>why</em> code is written in a certain, sometimes mysterious, way:</p>
<ul>
<li>If the code is fixing a bug or is a workaround for a bug in a third-party library, a ticket number or a link to the issue will be useful.</li>
<li>If there’s an obvious, simpler alternative solution, a comment should explain why this solution doesn’t work in this case.</li>
<li>If different platforms behave differently and the code accounts for this, it should be mentioned in a comment.</li>
<li>If the code has known limitations, mentioning them (possibly using todo comments, see below) will help developers working with this code.</li>
</ul>
<p>Such comments save us from accidental “refactoring” that makes the code easier but removes some necessary functionality or breaks it for some users.</p>
<p>High-level comments explaining how the code works are useful too. If the code implements an algorithm that is explained somewhere else, a link to that place would be useful. However, if a piece of code is too difficult to explain and requires a long, convoluted comment, we should probably rewrite it instead.</p>
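<p>A hypothetical example of such a “why” comment (the ticket number and the scenario are made up for illustration):</p>
<pre><code>// Intl.NumberFormat isn't used here on purpose: the target
// devices run an old WebView without Intl support (APP-123),
// so we format the price manually
function formatPrice(cents) {
  const euros = Math.floor(cents / 100);
  const rest = String(cents % 100).padStart(2, '0');
  return `${euros},${rest} €`;
}
</code></pre>
<p>Without the comment, replacing the manual formatting with <code>Intl.NumberFormat</code> would look like an obvious “improvement”, which is exactly the kind of accidental refactoring such comments prevent.</p>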
<h2>Hack comments</h2>
<p>Any hack should be explained in a <em>hack comment</em>:</p>
<p><!-- let Button = {} --></p>
<pre><code>// HACK: Importing defaultProps from another module crashes
// Storybook Docs, so we have to duplicate them here
Button.defaultProps = {
label: ''
};
</code></pre>
<p><!-- expect(Button.defaultProps).toHaveProperty('label') --></p>
<p>Here’s another example:</p>
<p><!--
let item = { image: 'pizza.jpg' }
let SliderSlide = () => null
let Test = () => (
--></p>
<pre><code>// @hack: Use ! to override width:100% and height:100%
// hardcoded in Swiper styles
<SliderSlide
key={item.image ?? item.text}
className="mr-8 !h-auto !w-80 shrink-0 last:mr-0"
>
…
</SliderSlide>
</code></pre>
<p><!--
)
const {container: c1} = RTL.render(<Test />);
expect(c1.textContent).toEqual('')
--></p>
<p><strong>Info:</strong> You may encounter various styles of hack comments: <code>HACK</code>, <code>XXX</code>, <code>@hack</code>, and so on, though I prefer <code>HACK</code>.</p>
<h2>Todo comments</h2>
<p>I like <em>todo comments</em>, and I add plenty of them when writing code. Todo comments can serve several purposes:</p>
<ul>
<li><strong>Temporary comments</strong> that we add while writing code so we don’t forget what we want to do.</li>
<li><strong>Planned improvements:</strong> must-haves that aren’t yet implemented.</li>
<li><strong>Known limitations and dreams:</strong> nice-to-haves that may never be implemented.</li>
</ul>
<p><strong>Temporary comments</strong> help us focus on the essentials when we write code by writing down everything we want to do or try later. Such comments are an essential part of my coding process, and I remove most of them before submitting my code for review.</p>
<p><strong>Info:</strong> You may encounter various styles of todo comments: <code>TODO</code>, <code>FIXME</code>, <code>UNDONE</code>, <code>@todo</code>, <code>@fixme</code>, and so on. I prefer <code>TODO</code>.</p>
<p>Comments for <strong>planned improvements</strong> are useful when we know that we need to do something:</p>
<p><!--
let FavoriteType = {Taco: 'Taco'}
let fetchFavorites = ({favoriteType}) => favoriteType
--></p>
<pre><code>const query = await fetchFavorites({
favoriteType: FavoriteType.Taco
// TODO: Implement pagination (TCO-321)
});
</code></pre>
<p><!-- expect(query).toBe(FavoriteType.Taco) --></p>
<p>It’s a good idea to include a ticket number in such comments, like in the example above.</p>
<p>There might be another condition, like a dependency upgrade, required to complete the task:</p>
<pre><code>/**
* Manually toggle the window scroll functionality,
* important especially when the modal is open from another
* modal. In such cases HeadlessUI messes up the html element
* state and the scroll is not working properly
* See following:
* https://github.com/tailwindlabs/headlessui/issues/1000
* https://github.com/tailwindlabs/headlessui/issues/1199
* The clean up is called after the modal is open and when
* closed.
* @todo [headlessui/react@>=2.0.0]: review if the issue is
* fixed in HeadlessUI, debug in WebDev tools using the 6x
* CPU slowdown
*/
function blockWindowScroll(active) {
/* … */
}
</code></pre>
<p>This is a hell of a comment!</p>
<p>Comments for <strong>known limitations and dreams</strong> express a desire for the code to do more than it does. For example, error handling, special cases, support for more platforms or browsers, minor features, and so on. Such todos don’t have any deadlines or even the expectation that they will ever be resolved:</p>
<p><!--
const Environment = {
DEV: 'DEV',
QA: 'QA',
PROD: 'PROD',
}
--></p>
<pre><code>// TODO: On React Native it always returns DEV, since there's
// no actual location available
const getEnvironment = (hostname = window.location.hostname) => {
if (hostname.includes('qa.')) {
return Environment.QA;
}
if (hostname.includes('example.com')) {
return Environment.PROD;
}
return Environment.DEV;
};
</code></pre>
<p><!--
expect(getEnvironment('qa.example.com')).toBe('QA')
expect(getEnvironment('www.example.com')).toBe('PROD')
expect(getEnvironment('localhost')).toBe('DEV')
--></p>
<p><strong>Tip:</strong> Maybe we should start using <code>DREAM</code> comments for such cases…</p>
<p>However, there’s a type of todo comment I don’t recommend — comments with an expiration date:</p>
<pre><code>// TODO [2024-05-12]: Refactor before the sprint ends
</code></pre>
<p>The <a href="https://github.com/sindresorhus/eslint-plugin-unicorn/blob/main/docs/rules/expiring-todo-comments.md">unicorn/expiring-todo-comments</a> linter rule fails the build after the date mentioned in the comment. This is unhelpful because it usually happens when we work on an unrelated part of the code, forcing us to deal with the comment right away, most likely by adding another month to the date.</p>
<p>There are other conditions in the <code>unicorn/expiring-todo-comments</code> rule that might be more useful, such as the dependency version:</p>
<pre><code>// TODO [react@>=18]: Use useId hook instead of generating
// IDs manually
</code></pre>
<p>This is a better use case because it will fail only when someone updates React, and fixing such todos should probably be part of the upgrade.</p>
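<p>For reference, a minimal ESLint configuration enabling this rule might look like this (a sketch; the actual plugin setup depends on the project):</p>
<pre><code>// .eslintrc.js
module.exports = {
  plugins: ['unicorn'],
  rules: {
    'unicorn/expiring-todo-comments': 'error'
  }
};
</code></pre>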
<p><strong>Tip:</strong> I made a Visual Studio Code extension to highlight todo and hack comments: <a href="https://marketplace.visualstudio.com/items?itemName=sapegin.todo-tomorrow">Todo Tomorrow</a>.</p>
<h2>Comments that reduce confusion</h2>
<p>Comments can make code more intentional. Consider this example:</p>
<p><!--
const doOrDoNot = () => { throw new Error('x') }
function test() {
--></p>
<p><!-- eslint-skip --></p>
<pre><code>try {
doOrDoNot();
} catch {
// eslint-disable-next-line no-empty
}
</code></pre>
<p><!--
}
expect(test).not.toThrowError()
--></p>
<p>In the code above, we disable the linter, which complains about missing error handling. However, it’s unclear why the error handling is missing.</p>
<p>We can make the code clearer by adding a comment:</p>
<p><!--
const doOrDoNot = () => { throw new Error('x') }
function test() {
--></p>
<pre><code>try {
doOrDoNot();
} catch {
// Ignore errors
}
</code></pre>
<p><!--
}
expect(test).not.toThrowError()
--></p>
<p>Now, it’s clear that we don’t care about errors in this piece of code. On the other hand, this comment:</p>
<p><!--
const doOrDoNot = () => { throw new Error('x') }
function test() {
--></p>
<pre><code>try {
doOrDoNot();
} catch {
// TODO: Handle errors
}
</code></pre>
<p><!--
}
expect(test).not.toThrowError()
--></p>
<p>Tells a different story: we want to add error handling in the future.</p>
<h2>Comments with examples</h2>
<p>I like to add examples of input and output in function comments:</p>
<pre><code>/**
* Returns a slug from a Markdown link:
* [#](tres-leches-cake) → tres-leches-cake
*/
function getSubrecipeSlug(markdown) {
/* … */
}
</code></pre>
<p><!-- expect(getSubrecipeSlug).not.toThrowError() --></p>
<p>Such comments help to immediately see what the function does without reading the code.</p>
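<p>For illustration, here’s one possible implementation matching that comment (the regex is my assumption, not the author’s actual code):</p>
<pre><code>/**
 * Returns a slug from a Markdown link:
 * [#](tres-leches-cake) → tres-leches-cake
 */
function getSubrecipeSlug(markdown) {
  // Capture whatever is inside the parentheses of a [#](…) link
  const match = markdown.match(/\[#\]\(([^)]+)\)/);
  return match ? match[1] : undefined;
}
</code></pre>
<p>The input/output example in the comment doubles as a ready-made test case.</p>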
<p>Here’s another example:</p>
<p><!-- cspell:disable --></p>
<pre><code>/**
* Add IDs to headings.
*
* Copy of rehype-slug but handles non-breaking spaces by
* converting them to regular spaces before generating slugs.
* The original rehype-slug swallows non-breaking spaces:
* rehype-slug: `best tacos in\xA0town` → `best-tacos-intown`
* this module: `best tacos in\xA0town` → `best-tacos-in-town`
*/
function rehypeSlug() {
/* … */
}
</code></pre>
<p><!-- expect(rehypeSlug).not.toThrowError() -->
<!-- cspell:enable --></p>
<p>In the code above, we don’t just give an example of the input and output, but also explain the difference with the original <code>rehype-slug</code> package and why a custom implementation exists in the codebase.</p>
<p>Usage examples are another thing to include in function comments:</p>
<pre><code>/**
 * React component whose children are only rendered when
* there is an active authenticated user session
*
* @example
* <AuthenticatedOnly>
* <button>Add to favorites</button>
* </AuthenticatedOnly>
*/
function AuthenticatedOnly({ children }) {
/* … */
}
</code></pre>
<p>Such comments help us understand how to use a function or a component, highlight the necessary context, and show the correct way to pass parameters.</p>
<p><strong>Tip:</strong> When we use the JSDoc <code>@example</code> tag, Visual Studio Code shows a syntax-highlighted example when we hover on the function name anywhere in the code.</p>
<h2>Bad comments</h2>
<p>We’ve talked about useful comments. However, there are many other kinds of comments that we should avoid.</p>
<p>Probably the worst kind of comments are those explaining <em>how</em> code works. They either repeat the code in more verbose language or explain language features:</p>
<pre><code>// Fade timeout = 2 seconds
const FADE_TIMEOUT_MS = 2000;
</code></pre>
<p>Or:</p>
<pre><code>// This will make sure that your code runs
// in the strict mode in the browser
'use strict';
</code></pre>
<p>Such comments are good for coding tutorials, but not for production code. Code comments aren’t the best place to teach teammates how to use certain language features. Code reviews, pair programming sessions, and team documentation are more suitable and efficient.</p>
<p>Next, there are <em>fake</em> comments: they pretend to explain some decision, but they don’t explain anything, and they often blame someone else for poor code and tech debt:</p>
<p><!-- const locale = 'ko' --></p>
<pre><code>// force 24 hours formatting for Chinese and Korean
const hour12 = locale === 'zh' || locale === 'ko' ? false : undefined;
</code></pre>
<p><!-- expect(hour12).toBe(false) --></p>
<p>Why do Chinese and Korean users need a different time format? Who knows; the comment only tells us what’s already clear from the code but doesn’t explain why.</p>
<p>I see lots of these comments in one-off design “adjustments.” For example, a comment might say that there was a <em>design requirement</em> to use a non-standard color, but it won’t explain why it was required and why none of the standard colors worked in that case:</p>
<pre><code>.shareButton {
color: #bada55; // Using non-standard color to match design
}
</code></pre>
<p>And by lots, I mean really <em>plenty</em>:</p>
<pre><code>// Design decision
// This is for biz requirements
// Non-standard background color needed for design
// Designer's choice
</code></pre>
<p><em>Requirement</em> is a very tricky and dangerous word. Often, what’s treated as a requirement is just a lack of education and collaboration between developers, designers, and project managers. If we don’t know why something is required, we should always ask. The answer can be surprising!</p>
<p>There may be no <em>requirement</em> at all, and we can use a standard color from the project theme:</p>
<pre><code>.shareButton {
color: $text-color--link;
}
</code></pre>
<p>Or there may be a real reason to use a non-standard color, which we can then put into a comment:</p>
<pre><code>$color--facebook: #3b5998; // Facebook brand color
.shareButton {
color: $color--facebook;
}
</code></pre>
<p>In any case, it’s our responsibility to ask <em>why</em> as many times as necessary; otherwise we’ll end up with mountains of tech debt that don’t solve any real problems.</p>
<hr />
<p>Comments enrich code with information that cannot be expressed by the code alone. They help us understand why the code is written in a certain way, especially when it’s not obvious. They help us avoid disastrous “refactorings”, when we simplify the code by removing its essential parts.</p>
<p>However, if it’s too hard to explain a certain piece of code in a comment, perhaps we should rewrite such code instead of trying to explain it.</p>
<p>Finding a balance between commenting too much and too little is essential and comes with experience.</p>
<p>Start thinking about:</p>
<ul>
<li>Removing comments that don’t add anything to what’s already in the code.</li>
<li>Adding hack comments to document hacks in the code.</li>
<li>Adding todo comments to document planned improvements and dreams.</li>
<li>Adding examples of input/output, or usage to function comments.</li>
<li>Asking why a commented requirement or decision exists in the first place.</li>
</ul>
<hr />
<p>Read other sample chapters of the book:</p>
<ul>
<li><a href="/blog/avoid-loops/">Avoid loops</a></li>
<li><a href="/blog/avoid-conditions/">Avoid conditions</a></li>
<li><a href="/blog/avoid-reassigning-variables/">Avoid reassigning variables</a></li>
<li><a href="/blog/avoid-mutation/">Avoid mutation</a></li>
<li><em>Avoid comments</em> (this post)</li>
<li><a href="/blog/naming/">Naming is hard</a></li>
<li><a href="/blog/divide/">Divide and conquer, or merge and relax</a></li>
<li><a href="/blog/dont-make-me-think/">Don’t make me think</a></li>
</ul>
A rebel’s guide to pull requests, commits, and code reviewshttps://sapegin.me/blog/rebels-guide-to-pull-requests-commits-code-reviews/https://sapegin.me/blog/rebels-guide-to-pull-requests-commits-code-reviews/Mon, 30 May 2022 00:00:00 GMT<p>I happened to have a somewhat controversial approach to pull requests. This approach has worked well for me for many years, and my colleagues seem to be happy with it. However, many would consider these practices atrocious — read on to find out which side you’re on!</p>
<p>I’ve written before on <a href="/blog/faster-code-reviews/">getting code reviewed faster</a> and have published a <a href="https://github.com/sapegin/frontend-pull-request-checklist">frontend pull request checklist</a> — check them out!</p>
<p><strong>Story time:</strong> I once broke a deployment tool by adding an emoji into a commit message, and I still consider this a tiny victory in my programming career. Funnily, my colleagues split into two camps after this incident. The first camp was saying that I’m an idiot to use emojis in commit messages, and the second — that it shouldn’t be so easy to break the tool.</p>
<h2>Don’t bother with atomic commits</h2>
<p>Some folks prefer to check the changes in a pull request as a whole; others prefer to check each commit separately. The latter explain that they want to understand a developer’s thought process: they prefer to look at smaller changes and see how the developer came up with the solution.</p>
<p>This approach is called <a href="https://www.codewithjason.com/atomic-commits/">atomic commits</a>, meaning a bigger change is split into multiple smaller independentish changes, and each is represented as a separate commit with a meaningful description of a change.</p>
<p>(An illustration from <a href="https://www.frederickvanbrabant.com/posts/atomic-commits">Frederick Vanbrabant’s article</a>.)</p>
<p>Some even go further and want each change requested during code review to be in a separate commit.</p>
<p>I always review the whole pull request because I don’t want to review <em>tiny changes</em> but the <em>state of the codebase</em> after the change. Small changes may look okay independently, but in the end, they may produce messy code, duplication, or inconsistencies with already existing code — even in the same file. And most of the time such small changes don’t make sense on their own, without the context of the whole pull request. In the end, that’s what we should care about: what the codebase will look like after we merge the pull request.</p>
<p>Atomic commits read like a novel, taking us behind the scenes of a pull request, and it takes a lot of time to write and maintain them. I’d prefer developers to spend this energy on writing great and helpful pull request descriptions.</p>
<p>If some of the changes need additional explanation, code or pull request comments are better places for them — depending on whether they help to understand the code or the reason for the changes.</p>
<p>Also, atomic commits will likely require rewriting commit history and force-pushing — another thing I don’t want to waste my time on.</p>
<p>I only use atomic commits after the first code review iteration, so reviewers could better see what was changed since the last review. And still, if the changes are minor, I’d group them into a single commit.</p>
<p>And if the pull request is too big to review, or contains unrelated changes like refactoring or bugfixes, we should split it into several pull requests.</p>
<p><strong>Tip:</strong> To avoid reviewing the same code on each iteration, mark files as reviewed on GitHub, and they will stay collapsed until the author changes anything inside them.</p>
<h2>Squash-merge pull requests</h2>
<p>When I merge pull requests, I always squash all commits into a single commit, and edit the commit message to describe the whole change.</p>
<p>This has several benefits:</p>
<ul>
<li><strong>Clean and readable project history:</strong> developers merge dozens or even hundreds of pull requests every day on a typical project, keeping each commit will make the history enormous and unusable. Also, commit messages will be too low-level to be useful.</li>
<li><strong>No time wasted on managing commits inside a pull request:</strong> squashing discards all the separate commits, so developers don’t have to bother keeping the branch history clean and force pushing.</li>
<li><strong>Easier debugging:</strong> each commit corresponds to a pull request, so once we find where the bug was introduced, we know which pull request is responsible for it.</li>
<li><strong>Easier reverts:</strong> we revert a complete feature, so the author can fix it, retest, and submit it as a new pull request. Reverting a whole pull request brings the project back to a previously known working state; reverting atomic commits will have unpredictable consequences and likely break something.</li>
</ul>
<p><strong>Tip:</strong> Allow only squash merging on GitHub, and disable other merge types. Also, disable force pushing to prevent various problems, like overwriting someone else’s work.</p>
<p>It’s easier and safer to treat pull requests as atomic changes, meaning we merge a pull request as a single commit (a single item in the project history), and we revert a pull request completely if something goes wrong.</p>
<h2>Merge commits in pull requests are fine</h2>
<p>Rebasing a feature branch onto the recent changes in the main branch is such a big pain — and only because some folks don’t want to see merge commits in a pull request! It also requires force pushing the branch — another source of issues, or even lost work when several people are working on the same feature, which is common when one developer helps another with their task.</p>
<p>Merging the main branch into a feature branch is significantly faster, easier, and less error-prone. Squashing commits while merging a pull request keeps the project history clean, so we’ll only see these merge commits on the pull request page.</p>
<p><strong>Tip:</strong> I have a <a href="https://github.com/sapegin/dotfiles/blob/a051afa17b618e7929aabafefdbb7e676513a72a/tilde/.gitconfig#L37-L38">Git alias <code>git mmm</code></a> (“merge master motherfucker”) that fetches the fresh main branch and merges it into my current working branch. I also use <a href="https://github.com/git-friendly/git-friendly">git-friendly</a> scripts for pulling, pushing, and working with branches and stashes.</p>
<h2>Conclusion</h2>
<p>The overall quality of a pull request (a clear description of what was changed and why, screenshots of any UI changes, and so on) is much more important than the quality of each commit in it. Discarding individual commits while merging a pull request saves time, makes debugging and reverting easier, and avoids issues. Less stress and more time to work on something useful for your colleagues.</p>
<p>And don’t forget to put some emojis in your commit messages once in a while — our work would be too sad without them!</p>
<p><em>Thanks to <a href="https://drtaco.net/">Margarita Diaz</a>.</em></p>
Going offlinehttps://sapegin.me/blog/going-offline/https://sapegin.me/blog/going-offline/The Coronavirus allowed me to reflect on what’s important to me, and to see my life from a different point of view regarding work, open source, hobbies, and social networks over the past two years.Wed, 04 May 2022 00:00:00 GMT<p>I don’t know whether I should blame the Coronavirus for the changes in my life and the way I see work, open source, hobbies, and social networks over the past two years, or I’m just getting old. In any case, the Coronavirus allowed me to reflect on what’s important to me, and to see my life from a different point of view.</p>
<h2>Work</h2>
<p>I used to think that I’d keep working even if I had enough money not to work. I was enjoying my work and everything frontend for over a decade. However, recently I realized that this is no longer true.</p>
<p>Work doesn’t excite me as it did a few years ago, sometimes it doesn’t excite me at all for weeks or even months. And I’m still learning to live with that. I think I’d stop working if I had enough money to live comfortably.</p>
<p>Thoughts about a promotion crossed my mind a few times a year. The most common promotion path for a developer — becoming a manager — never appealed to me, but I kept thinking about the next technical level after senior developer. Now I think it’s not really worth it: you get a lot of work and more responsibility for a very small salary increase.</p>
<p>I’m getting dissatisfied with where the frontend community is going. We went from having no tools at all to having way too many. There are more than one million packages on npm, and only a tiny fraction of them do something useful and are compatible with each other. 15 years ago we were wasting days fighting browser bugs; now we waste our days fighting tooling incompatibilities and googling obscure error messages. Both are utterly unproductive, and I’m not quite sure which is worse.</p>
<p>This all happened around the same time as the Coronavirus thing started, and I’m not sure if Coronavirus is to blame here or not; however, it did two things for me.</p>
<p>First, we were all sent to work from home. I always liked working from the office, but that changed after I really tried working from home for several months. Now I have a great home office setup with good hardware, great coffee, and homemade food, without constantly chatting colleagues, the always-occupied toilet, and the everyday commute. I can better plan my day, and my girlfriend <a href="https://drtaco.net/">Margarita</a> is working next to me.</p>
<p>Second, my employer was seriously affected by the virus, and we had to work four days a week, and for some time even two days a week instead of five. I’d never considered reducing my working hours before, but here I am, working four days a week since last summer, and it feels great. A 3/4 split is much healthier than a 2/5 one. Now I can <a href="https://www.onlandscape.co.uk/2022/03/morning-in-magical-forest/">go to a forest in the early morning</a> for some nature photography, come back to have lunch with Margarita, and still have time to do something at home.</p>
<p><a href="/photos/favorites/"></a></p>
<p>I think it’s important to know what you like and don’t like at work and try to find a balance. When you do too much of what you don’t like, work gets depressing very fast. I always liked improving things: doing refactoring or <a href="/blog/">researching better ways to do something</a>, and never liked digging into complex business requirements.</p>
<p>It’s also important to know your limits: it’s better to do great work within your limits than poor work trying to go beyond them. My limits are: I’m good at frontend, not great at backend, and very poor at devops. I’m also not so great at talking to people.</p>
<p>I’ve also realized that most jobs suck, and it doesn’t make sense to change them too often. Too much stress, and the rest stays the same. In most places, employers don’t care about their employees, no matter how loudly they say otherwise or how many happy faces they have on their hiring page. What’s really important is your team: if you find one, stick to it.</p>
<p>I still miss a few things from the office life: meeting my colleagues and postlunch walks; however, I don’t see myself working in the office again, especially one with hot desks, full time.</p>
<p><a href="/photos/favorites/"></a></p>
<p>I’d also like to try actual remote work, not just working from home. Meaning primarily async work, not when we’re all sitting at our homes and having meetings all day.</p>
<h2>Open source</h2>
<p>The way open source works today is completely broken. I’ve read many accounts of prolific open source developers who <a href="https://github.com/bzg/opensource-challenges">burned out and quit</a>. I never thought it was going to happen to me, and I didn’t realize when it did. When you’re in it, it’s hard to see how mentally damaging open source is. You’re like a boiling frog: you don’t realize you’re boiling until you’ve become soup.</p>
<p>For many years I enjoyed working on my open source projects of all sizes: large ones like <a href="https://react-styleguidist.js.org/">React Styleguidist</a> or a tiny library that nobody else is using. However, the expectation that you owe someone free work to fix bugs in their projects and add features they need to do their job, the rude comments on issues, and the hit-and-run pull requests, where you spend an hour reviewing the code and the author never comes back to answer your comments, made it less and less enjoyable, and my attempts to pretend that it didn’t hurt my mental health became less and less successful.</p>
<p>At a certain point, I unsubscribed from my most popular projects on GitHub, then stopped checking issues, then pull requests... I felt anxious every time I thought that I should do something on one of my open source projects. This was no longer fun, this was misery.</p>
<p>In many ways I’m grateful to open source: I’ve learned a lot of things by working on my projects. I’ve also met a lot of great people, partially because of my open source work, on Twitter, and at conferences.</p>
<p>The one thing that never happened to me is a project helping me find a better job. None of my employers were using any of my projects. The only employer that was using one was actively migrating to something else when I joined.</p>
<p>I’ve also never managed to build a community around any of my projects. Any attempts to share the maintenance burden only created more work for me, not less. The moment I click that unsubscribe button on GitHub, the project essentially dies. Even 10000 stars on GitHub or hundreds of thousands of weekly downloads on npm don’t really mean anything.</p>
<h2>Hobbies</h2>
<p>In the past few years, I see a clear shift to making things with my hands instead of looking at the screen all day: either by taking new offline hobbies or spending more time offline for the existing ones, and it feels great!</p>
<p>Of course, I still use computers a lot for most of my hobbies, and I’m glad I have the skills and now can combine both worlds.</p>
<p><strong>Programming</strong> was my main hobby <a href="https://github.com/sapegin/ama/issues/13">since I was 13</a>, and for many years it was the only one. I was always working on some personal projects and later open source. I was always reading books and articles, and learning something new in my free time. Now I don’t feel like spending time programming just for the fun of programming, and only do it to support my other hobbies.</p>
<p><strong>Photography</strong>, which I started in 2014, was probably my first offline hobby. There’s still a lot of sitting in front of the computer involved — editing photos, updating <a href="/photos/">my site</a>, and so on — however, I recently started making more prints and even started <a href="/photos/zine">my own photo zine</a>. It’s nice to see my photos printed! I even started shooting film again with plastic cameras.</p>
<p><strong>Bouldering</strong>, which I started around four years ago, was the first sport in my life that I actually enjoyed. This is probably the only hobby that doesn’t involve any computers; even my phone always stays in the locker. Unfortunately, it isn’t really possible right now, but I hope to do it again one day.</p>
<p><strong>Cooking</strong>. I had never cooked anything more complex than a boiled sausage or baked frozen potatoes covered with grated cheese until I moved to Germany seven years ago. Here, I started by cooking <a href="https://marleyspoon.com/">Marley Spoon</a> boxes twice a week. This was great for learning, since you don’t need to think about what to cook or buy all the ingredients. After several years, it became boring, and the ingredients weren’t so great either. So eventually, with a lot of influence from Margarita (she’s a great teacher!), I stopped my subscription and started cooking on my own, gathering inspiration for recipes mostly from cookbooks. The more I do it, the more I like it, and I enjoy not just the result but the process too. Now Margarita and I cook most of the food we eat, and we maintain <a href="https://tacohuaco.co/">a site with our recipes</a>. My goal is to learn grandma-style cooking: when you take random ingredients from your fridge and pantry, and produce a delicious meal without a recipe.</p>
<p>I’ve been drinking <strong>coffee</strong> since I was a kid, but until a few years ago I didn’t care much what kind of coffee I drank. I only cared that it didn’t have milk or sugar (dark and bitter like life), and most of the coffee I drank was like this: starting with instant coffee many, many years ago, and then through various poorly made americanos, Starbucks, and a one-button coffee machine at home. Only after I moved to Germany did I start drinking mostly filter coffee and going to nice coffee shops. And only when I started working from home did I start making good coffee at home — first with a Chemex, and later a V60 and an Aeropress.</p>
<p>There’s nothing better than to wake up early in the morning (for me it’s usually between 5 and 7), brew a cup of fresh coffee, make breakfast (plain yogurt with <a href="https://tacohuaco.co/recipes/date-granola/">homemade granola</a> and fresh berries), sit on a sofa in my favorite corner next to the window and read a book for an hour or so.</p>
<p>Even better: after a long walk in a forest at sunrise, to sit on a fallen tree, pour delicious coffee from a thermos, and drink it, ideally with some homemade sweet bread.</p>
<p>And the latest one is <strong>leathercraft</strong>, which I just picked up: I’ve finished only a couple of projects, and I already like it very much. I think it has many of the things I liked in open source — problem solving, seeing the result of your work in action, iterating to improve the design — but it’s much, much healthier, without sitting in front of the screen all night and without the toxic community. Here I use a computer only to create and print patterns that I later transfer to the leather.</p>
<p><strong>Writing</strong> is kind of a weird one. I like to <a href="/blog/">write articles</a> and <a href="/book/">even books</a>, and I hope to start writing again after keeping only a private journal for almost two years. Writing is also my favorite way of learning new things and understanding myself. The topics I write about will probably change too.</p>
<p>I also started to enjoy doing things at home, like fixing stuff or putting shelves on the wall, which I never liked before and was never good at (I’m still not!).</p>
<p>I think it’s good to have different hobbies. If <a href="/blog/waves/">you don’t feel like doing one thing</a>, there’s always something else that feels great to do.</p>
<h2>Social networks</h2>
<p>I was always disappointed in social networks. Over the past 20 years, I’ve tried probably all the popular ones, but only two were moderately successful for me: LiveJournal, which barely exists now, and <a href="https://twitter.com/iamsapegin">Twitter</a>, which may die the moment they kill third-party clients completely (the web UI is already unusable).</p>
<p>However, I kept trying until I read Cal Newport’s book <a href="https://www.calnewport.com/books/digital-minimalism/">Digital Minimalism</a>. This book made me rethink what I was doing: I stopped using Facebook and reduced my feeds on other social networks.</p>
<p>I’ve tried many places to share my photography: 500px, <a href="http://instagram.com/sapegin/">Instagram</a>, <a href="https://unsplash.com/@sapegin">Unsplash</a>, and ended up buying photography books and publishing <a href="/photos/zine/">my own photo zine</a>. <a href="/photos/">My site</a> is now the only place to see my photos online.</p>
<p>I try to limit my usage of social networks to a minimum and set up some rules for myself to limit mindless browsing and checking the feed every five minutes. For now, these networks and rules are:</p>
<ul>
<li><a href="https://twitter.com/iamsapegin">Twitter</a>: I only read the feed once a day on working days (Monday to Thursday) in the morning. Later in the day I only check replies to my tweets. The maximum feed size is 50 people. I also use Twitter a lot for work: asking questions, gathering opinions, and staying up to date with the industry.</li>
<li><a href="https://www.youtube.com/channel/UCUhq6nh1EdZaVwddb-x3ISg">YouTube</a>: I don’t really have my own channel, and mostly watch other people. I usually use an iPad, which already limits how much and when I can watch. I also have YouTube Premium to keep my sanity and avoid endless ads.</li>
</ul>
<p>I use other social networks only when I need something there. For example, I use Facebook to sell things but I don’t post anything to my feed there and don’t read the feed.</p>
<h2>Conclusion</h2>
<p>Writing this article was an interesting process. I don’t think I’ve ever been so open about my life and my feelings online but it feels good to share this. I also feel this is just the beginning of a long and very interesting journey, and I’m looking forward to seeing where it will lead me.</p>
<p>Now, moving away from a big city and living somewhere in a village, and raising vegetables and chickens doesn’t feel like <em>just a dream</em>…</p>
Transpiling ESM files inside node_moduleshttps://sapegin.me/blog/transpiling-esm-in-node-modules/https://sapegin.me/blog/transpiling-esm-in-node-modules/Tue, 19 Apr 2022 00:00:00 GMT<p>This is a gigantic hack, but it seems to work until ECMAScript modules (ESM) are more widely supported. Many npm packages are already published as ECMAScript modules, but not all apps can import them. I’m using this approach on <a href="https://tacohuaco.co/">one of my Gatsby sites</a>.</p>
<p>The idea is to use <a href="https://esbuild.github.io/">esbuild</a> to compile ESM files to CommonJS directly inside the <code>node_modules</code> folder on npm install. We’ll detect packages that need to be compiled by checking that the <code>type</code> field in their <code>package.json</code> has the value <code>module</code>.</p>
<ol>
<li>Add dependencies:</li>
</ol>
<pre><code>npm install -D tiny-glob esbuild
</code></pre>
<ol>
<li>Add a new script, <code>scripts/esmtocjs.js</code>:</li>
</ol>
<pre><code>const path = require('path');
const { readFile, writeFile } = require('fs/promises');
const glob = require('tiny-glob');
const { build } = require('esbuild');

/**
 * Read and parse a JSON file
 */
const readJson = async filepath => {
  const buffer = await readFile(filepath);
  const file = buffer.toString();
  try {
    return JSON.parse(file);
  } catch {
    console.error(`Cannot parse JSON file: ${filepath}`);
    return {};
  }
};

async function transpileNodeModules() {
  // Get all package.json files inside node_modules, including in subfolders
  const allPackages = await glob(`node_modules/**/package.json`);
  for (const packageJson of allPackages) {
    const json = await readJson(packageJson);
    // Skip unless the type of the package is ESM
    if (!json.name || json.type !== 'module') {
      continue;
    }

    console.log(`🦀 Transpiling ${json.name}...`);

    const dir = path.dirname(packageJson);

    // Get all .js files unless they are in a nested node_modules folder
    const files = await glob(`${dir}/**/*.js`);
    const entryPoints = files.filter(
      d => !d.startsWith(`${dir}/node_modules/`)
    );
    if (entryPoints.length === 0) {
      continue;
    }

    // Compile all ESM files to CommonJS
    await build({
      entryPoints,
      outdir: dir,
      allowOverwrite: true,
      bundle: false,
      minify: false,
      sourcemap: false,
      logLevel: 'info',
      platform: 'node',
      format: 'cjs',
      target: 'node12'
    });

    // Overwrite the package.json with the type of CommonJS
    await writeFile(
      packageJson,
      JSON.stringify({ ...json, type: 'commonjs' }, null, 2)
    );
  }
}

transpileNodeModules();
</code></pre>
<ol>
<li>Add a <code>postinstall</code> script to the <code>package.json</code>:</li>
</ol>
<pre><code>{
"scripts": {
"postinstall": "node scripts/esmtocjs.js"
}
}
</code></pre>
<p>Now, every time we run <code>npm install</code>, all ESM files inside <code>node_modules</code> will be compiled to CommonJS, so they could be imported by Gatsby or any other tool that doesn’t yet support ESM.</p>
Writing cross-platform components for web and React Nativehttps://sapegin.me/blog/react-native-components/https://sapegin.me/blog/react-native-components/One of the selling points of React Native is code sharing between web, iOS, and Android — “seamless cross-platform” as they say on the homepage. Unfortunately, React Native gives us very few tools to write components that work on web and native, and the experience is far from seamless.Tue, 12 Apr 2022 00:00:00 GMT<p>One of the selling points of React Native is code sharing between web, iOS, and Android — “seamless cross-platform” as they say on the homepage. Unfortunately, React Native gives us very few tools to write components that work on web and native, and the experience is far from seamless.</p>
<h2>Problems of cross-platform development for React Native</h2>
<p>The main obstacles to writing cross-platform components with React Native are:</p>
<ul>
<li><strong>Different elements for web and native</strong>: on the web we use <code>p</code> and <code>div</code>, but on native we should use <code>Text</code> and <code>View</code> from the <code>react-native</code> package. React Native is also picky about rendering text: we should always wrap it in the <code>Text</code> component, which must be its direct parent.</li>
<li><strong>Unforgiving styling</strong>: there’s a custom way of doing <a href="https://reactnative.dev/docs/style">styles in React Native</a> that looks like CSS but doesn’t behave like CSS. If a browser doesn’t understand a certain CSS property, it ignores it; React Native, however, throws an exception, and it supports only a limited subset of CSS properties.</li>
</ul>
<p><a href="https://styled-components.com/docs/basics#react-native">Styled-components</a> solves some of these problems at a low level: primarily, it lets us use the same syntax to write styles for web and native. However, it doesn’t solve the problem of breaking on unsupported properties.</p>
<p>Another issue is the <strong>slowness and generally poor developer experience of the emulators</strong>: iOS, and especially Android. Developing user interfaces using simulators is much harder and slower than using a desktop browser.</p>
<h2>Possible solutions</h2>
<p>My current approach is to develop on desktop web and then test on React Native on emulators and actual devices.</p>
<p>This also allows me to use the same setup for end-to-end tests as I use for web: <a href="/blog/react-testing-4-cypress/">Cypress and Cypress testing library</a>, which is fast to run and easy to write and debug. Then I’d use end-to-end tests with emulators only for smoke tests or functionality that is very different on native platforms.</p>
<p>Following are my solutions to develop cross-platform components for web and React Native, from better to worse.</p>
<h3>Primitive components</h3>
<p>Primitive components solve many problems and they shine for cross-platform development. By having components for layout, typography, UI elements, and so on, we could encapsulate all the platform-specific code into these components, and the consumer doesn’t have to care about supporting React Native anymore:</p>
<pre><code><Stack gap="medium">
<Heading>Do or do not</Heading>
<Paragraph>There is no try</Paragraph>
<Button>Try nonetheless</Button>
</Stack>
</code></pre>
<p>For a consumer, it doesn’t matter that the <code>Stack</code> has completely different implementations for web and React Native, and that the <code>Heading</code> and <code>Paragraph</code> are rendered using different elements. The APIs are the same, and the implementation is hidden.</p>
<p><a href="https://www.component-driven.dev/">Using primitive components</a> instead of custom styles has been my favorite way of building user interfaces for the past few years, and it works well for cross-platform interfaces most of the time. It gives us the cleanest possible markup and design system constraints (it limits our choice of spacing, fonts, sizes, colors, and so on to those supported by the design system).</p>
<p><strong>Note:</strong> I only have experience with <a href="https://styled-system.com/">styled-system</a>, which doesn’t support React Native by default and hasn’t been updated in two years. There might be a better solution now, and I’d like to know about it!</p>
<p>I’ve implemented <a href="https://gist.github.com/sapegin/991704a876057393efe3a3f74d4c8c47">a very primitive React Native support</a> by keeping only the first value (for the narrowest screen) of responsive props. So code like this:</p>
<pre><code><Box width={[1, 1 / 2, 1 / 4]}>...</Box>
</code></pre>
<p>Will be rendered like this on React Native:</p>
<pre><code><Box width={1}>...</Box>
</code></pre>
<p>This isn’t ideal but works okay so far.</p>
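<p>The transformation itself is a one-liner per prop. A rough sketch of the idea (the function name is mine, not part of styled-system):</p>
<pre><code>// Collapse styled-system responsive array props to their first
// (narrowest-screen) value, which is all React Native can use
function collapseResponsiveProps(props) {
  return Object.fromEntries(
    Object.entries(props).map(([key, value]) => [
      key,
      Array.isArray(value) ? value[0] : value
    ])
  );
}

collapseResponsiveProps({ width: [1, 1 / 2, 1 / 4], p: 2 });
// → { width: 1, p: 2 }
</code></pre>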
<h3>Elements object</h3>
<p>Customizing HTML elements of components is a common practice for writing semantic markup. The most common way to do this is by using <a href="https://styled-components.com/docs/api#as-polymorphic-prop">the <code>as</code> prop in styled-components</a>, which would require code splitting to work cross-platform because on React Native all HTML elements should be replaced with <code>View</code> and <code>Text</code> components:</p>
<pre><code>// Web
const Container = ({ children }) => (
<Stack as="form">{children}</Stack>
);
</code></pre>
<pre><code>// React Native
import { View } from 'react-native';
const Container = ({ children }) => (
<Stack as={View}>{children}</Stack>
);
</code></pre>
<p>The same problem occurs when we use the styled-components factory:</p>
<pre><code>// Web
const Heading = styled.p`...`;
// React Native
import { Text } from 'react-native';
const Heading = styled(Text)`...`;
</code></pre>
<p>One way of solving this issue is to create an object with a mapping of elements for both web and React Native, and then use it instead of string literals:</p>
<pre><code>// elements.ts
export const Elements = {
div: 'div',
h1: 'h1',
h2: 'h2',
h3: 'h3',
h4: 'h4',
h5: 'h5',
h6: 'h6',
header: 'header',
footer: 'footer',
main: 'main',
aside: 'aside',
p: 'p',
span: 'span'
} as const;
// elements.native.ts
import { View, Text } from 'react-native';
export const Elements = {
div: View,
h1: Text,
h2: Text,
h3: Text,
h4: Text,
h5: Text,
h6: Text,
header: View,
footer: View,
main: View,
aside: View,
p: Text,
span: Text
} as const;
// Cross-platform component
import { Elements } from './elements';
const Container = ({ children }) => (
<Stack as={Elements.form}>{children}</Stack>
);
</code></pre>
<p>It’s slightly more verbose, but the code is split at a lower level and only once: we don’t need to code-split each component or duplicate the code.</p>
<p><strong>Idea:</strong> Now I think a better way would be encapsulating a mapping inside primitive components and a custom styled-component factory, so we could keep writing <code>as="form"</code> or <code>styled.form</code>, and it will be transparently converted to the correct elements for React Native. I haven’t tried it yet but I think this idea is worth exploring.</p>
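<p>One possible sketch of that mapping, a proxy over the elements object so call sites keep using plain tag names (all names here are illustrative, not a real library API):</p>
<pre><code>// Web map; the React Native version would point the same keys at View/Text
const Elements = { div: 'div', p: 'p', form: 'form' };

// Look up tags through a proxy, falling back to the generic container
const elements = new Proxy(Elements, {
  get: (map, tag) => map[tag] ?? map.div
});

elements.form; // 'form' on web; would be a View-backed component on native
</code></pre>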
<h3>Code splitting</h3>
<p>Code splitting should always be our last resort when better options aren’t available. However, done at the lowest possible level, it could still be a good solution, especially when we need to use some platform-specific APIs.</p>
<p>To split code between web and native, we could use <a href="https://reactnative.dev/docs/platform-specific-code#platform-specific-extensions">platform-specific extensions</a>:</p>
<pre><code>// Link.tsx
export const Link = ({ href, children }) => (
<a href={href}>{children}</a>
);
// Link.native.tsx
import {
Text,
Linking,
TouchableWithoutFeedback
} from 'react-native';
export const Link = ({ href, children }) => (
<TouchableWithoutFeedback onPress={() => Linking.openURL(href)}>
<Text>{children}</Text>
</TouchableWithoutFeedback>
);
</code></pre>
<p>This allows us to import platform-specific modules that would break on one of the platforms.</p>
<p>Code splitting is a good option for making primitive components, which we could later use to write cross-platform markup:</p>
<pre><code><Stack gap="medium">
<Heading>Do or do not</Heading>
<Paragraph>There is no try</Paragraph>
<Link href="/try">Try nonetheless</Link>
</Stack>
</code></pre>
<h2>Conclusion</h2>
<p>Writing cross-platform components for web and React Native isn’t as smooth as promised but by choosing the right abstractions we could make it less painful, and improve the readability and maintainability of the code.</p>
<p>My main advice to create cross-platform interfaces is:</p>
<p><strong>Write platform-specific code on the lowest possible level.</strong></p>
<p>Improve your primitive components, so you don’t have to write custom styles and split code too much.</p>
The most useful accessibility testing tools and techniqueshttps://sapegin.me/blog/accessibility-testing/https://sapegin.me/blog/accessibility-testing/Shipping accessible features is as important for a frontend developer as shipping features without bugs, learn about tools and techniques that will help you achieve that.Wed, 07 Oct 2020 00:00:00 GMT<p>Shipping accessible features is as essential for a frontend developer as shipping features without bugs. Here is a list of tools I regularly use to make sure everything I do is accessible for folks with different abilities, whether they are blind or holding a sandwich in their hand. I’ll start with tools that will give us immediate feedback when we’re writing code, and continue with tools that we have to run ourselves or guide us on how to test things manually. This article will be useful not only for developers but also for designers, project managers, and other team members — many of the tools could be used directly in the browser and don’t require any technical knowledge.</p>
<h2>Getting started with accessibility testing</h2>
<p>If you haven’t done accessibility testing before, or you’ve got a project that’s built without accessibility in mind, I’d recommend starting with the following steps to assess the project’s accessibility and start improving it:</p>
<ol>
<li>(For React projects) Install the React ESLint plugin, and fix all reported issues.</li>
<li>Run FastPass in the Accessibility Insights browser extension to find the two most common accessibility issues, and fix them.</li>
<li>Tab through the site or app with a keyboard to test keyboard navigation and focus states.</li>
</ol>
<p>This won’t make your site or app fully accessible but it’s a good first step in that direction. We’ll talk about each tool and technique in more detail in the rest of the article.</p>
<h2>React ESLint plugin</h2>
<p>I like it when someone tells me as soon as possible that I’m doing something wrong, without me having to ask. A linter is a perfect tool for that because it gives immediate feedback while I’m writing code, right in the editor.</p>
<p>For React projects, <a href="https://github.com/evcohen/eslint-plugin-jsx-a11y">eslint-plugin-jsx-a11y</a> checks many accessibility issues, like missing alternative text on images or incorrect ARIA attributes and roles.</p>
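<p>Enabling it is a small change if ESLint is already set up; for example, in <code>.eslintrc.js</code> (a sketch, adjust to your config format):</p>
<pre><code>// Enable the plugin's recommended accessibility rules
module.exports = {
  plugins: ['jsx-a11y'],
  extends: ['plugin:jsx-a11y/recommended']
};
</code></pre>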
<p>Unfortunately, this plugin is somewhat limited:</p>
<ul>
<li>static analysis of the code can only find <a href="https://github.com/jsx-eslint/eslint-plugin-jsx-a11y#supported-rules">a few problems</a>;</li>
<li>it only works with plain HTML elements but doesn’t know anything about our custom components.</li>
</ul>
<p>However, we’re likely already using ESLint on a project, so the cost of having this plugin is minimal, and occasionally it finds issues before we even look at our site or app in the browser.</p>
<h2>Axe-core</h2>
<p><a href="https://github.com/dequelabs/axe-core">Axe-core</a> is a library that checks the accessibility of the rendered HTML in the browser. This is more powerful than static code analysis, like ESLint, because it can find <a href="https://github.com/dequelabs/axe-core/blob/develop/doc/rule-descriptions.md">more problems</a>, like checking if the text has sufficient color contrast.</p>
<p>There are many tools based on axe-core.</p>
<p>For <a href="https://storybook.js.org/">Storybook</a>, there’s a <a href="https://github.com/storybookjs/storybook/tree/master/addons/a11y">a11y addon</a>:</p>
<p>For <a href="https://react-styleguidist.js.org/">React Styleguidist</a>, we could <a href="https://react-styleguidist.js.org/docs/cookbook#how-to-use-react-axe-to-test-accessibility-of-components">add react-axe manually</a>:</p>
<p>Both don’t check things like the document outline or landmark regions, which would require rendering a complete page. However, we could have quick feedback when we <a href="https://egghead.io/playlists/component-driven-development-in-react-e0bf">develop new components in isolation</a>. We could check each component variant’s accessibility, which is hard to do using the actual site or app.</p>
<h2>Cypress-axe</h2>
<p>Unless we test our site or app’s accessibility after every change, we’ll eventually introduce regressions. That’s why it’s essential to make accessibility testing a part of the continuous integration (CI) process. We should never merge the code to the codebase without testing its accessibility.</p>
<p><a href="https://github.com/avanslaars/cypress-axe">Cypress-axe</a> is based on axe-core. It allows us to run accessibility checks inside <a href="/blog/react-testing-4-cypress/">end-to-end Cypress tests</a>, which is good because we likely already run end-to-end tests during continuous integration, and we render all our pages there. We could also run checks multiple times to test pages in different states. For example, with an open modal or an expanded content section.</p>
<p>Such tests could be a good accessibility <em>smoke test</em> that makes sure we’re not breaking our site or app. However, cypress-axe is inconvenient for analyzing pages that already have accessibility issues. For that, a browser extension, like Axe or Accessibility Insights, is more convenient.</p>
<p>Read more about <a href="/blog/detecting-accessibility-issues-on-ci-with-cypress-axe/">setting up and using cypress-axe</a>.</p>
<p><strong>Tip:</strong> For automated accessibility testing of separate components, <a href="https://github.com/nickcolley/jest-axe">jest-axe</a> could be a good option.</p>
<h2>Axe browser extension</h2>
<p>The Axe browser extension for <a href="https://chrome.google.com/webstore/detail/axe-web-accessibility-tes/lhdoppojpmngadmnindnejefpokejbdd">Chrome</a> and <a href="https://addons.mozilla.org/en-US/firefox/addon/axe-devtools/">Firefox</a> is based on axe-core. However, we run this extension on the actual site or app, so it can find issues that are impossible to find by checking a single component, like an incorrect heading structure or missing landmark regions.</p>
<p>This extension is great for doing an accessibility audit, but we have to remember to run it every time we add or change something on our site or app. Sometimes, it has false positives, for example, when Axe can’t determine the background color and reports text as having insufficient color contrast.</p>
<h2>Accessibility Insights browser extension</h2>
<p>Microsoft’s <a href="https://accessibilityinsights.io/">Accessibility Insights</a> browser extension is also based on axe-core but has a few unique features.</p>
<p>Accessibility Insights has automated checks similar to the Axe extension, but it also highlights all the issues directly on a page:</p>
<p>Accessibility Insights also has instructions for many manual checks that can’t be automated:</p>
<p>The FastPass feature finds the two most common accessibility issues, and is a good first step in improving a site or app’s accessibility.</p>
<p>Finally, it could highlight headings, landmark regions, and tab stops (see “Tab key” below) on a page:</p>
<h2>Contrast app and Chrome DevTools contrast checker</h2>
<p>Sometimes we need to check the color contrast on a mockup or somewhere else, where running a browser extension is inconvenient or impossible.</p>
<p>To check color contrast in the browser, Chrome DevTools contrast checker is a good option (inspect an element, and click a color swatch in the Styles tab):</p>
<p>For all other cases, use <a href="https://usecontrast.com/">Contrast app</a>, and pick any two colors using an eyedropper:</p>
<p><strong>Bonus:</strong> <a href="https://contrast-ratio.com/">Contrast ratio</a> web app by Lea Verou is another option when you want to <a href="https://contrast-ratio.com/#%23fa6b6b-on-white">share a link</a> with the check results.</p>
<h2>Spectrum Chrome extension</h2>
<p>Spectrum browser extension allows us to check how folks with different types of color blindness see our site or app, and make sure there’s enough contrast between different elements.</p>
<p><strong>Update May 2024:</strong> Looks like Spectrum extension is no longer available. <a href="https://chromewebstore.google.com/detail/colorblindly/floniaahmccleoclneebhhmnjgdfijgg">Colorblindly</a> seems to be a good replacement.</p>
<p><strong>Bonus:</strong> Chrome DevTools can emulate some of these vision deficiencies. Press Escape, enable the Rendering tab from the three-dot menu button and scroll to the Emulate vision deficiencies section.</p>
<h2>Tab key</h2>
<p>By <em>tabbing</em> through the app, meaning pressing the Tab key on the keyboard to navigate between interactive elements on the page, we can check that:</p>
<ul>
<li>
<p>all interactive elements are focusable and have a visible focus state;</p>
</li>
<li>
<p>the tab order <a href="https://webaim.org/techniques/keyboard/">should make sense</a>; usually, it should follow the visual order of elements on the page;</p>
</li>
<li>
<p>the focus should be <a href="https://www.w3.org/TR/wai-aria-practices-1.1/examples/dialog-modal/dialog.html">trapped inside modals</a>, meaning we shouldn’t be able to tab back to the page behind the modal until we close it, and once we close the modal, the focus should go back to the element that opened the modal;</p>
</li>
<li>
<p>skip navigation link should appear when we press the Tab key for the first time:</p>
</li>
</ul>
<p>Along with the Tab key, it’s worth checking that other keys work as expected: for example, we can navigate lists using arrow keys, or some validation code doesn’t block arrows and Backspace in text fields.</p>
<p>We should be able to complete all important actions in our site or app without touching a mouse, trackpad, or touchscreen. At any time, we should know which element is in focus.</p>
<p><strong>Tip:</strong> I often use a live expression on <code>document.activeElement</code> in Chrome DevTools to see which element is in focus (“Create live expression” button in the Console tab’s toolbar). It helps to find elements without a visible focus state, or invisible elements that can be focused.</p>
<p><strong>Bonus:</strong> <a href="https://github.com/marcysutton/no-mouse-days">No Mouse Days</a> npm package by Marcy Sutton disables the mouse cursor to encourage better keyboard support in a site or app.</p>
<h2>Zoom</h2>
<p>By zooming in on our site or app, we can check how it handles, well, zooming. Try zooming in to 200% in the browser, and see what breaks. Many people (myself included) zoom in when the text is too small for them, so we should make sure that the layout isn’t breaking, the text isn’t cropped, and elements aren’t overlapping each other.</p>
<p><strong>Tip:</strong> Using <code>rem</code>s for all sizes in CSS, including media query breakpoints, is usually enough to avoid problems with zooming.</p>
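<p>For example (the class name is illustrative):</p>
<pre><code>/* 48rem ≈ 768px at the default 16px root font size, but the
   breakpoint scales together with the user's font size */
@media (min-width: 48rem) {
  .content {
    max-width: 40rem;
  }
}
</code></pre>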
<h2>Screen reader</h2>
<p>By using our site or app with a screen reader, we can check that:</p>
<ul>
<li>all interactive elements are focusable and have accessible labels;</li>
<li>tab order, semantic markup, and textual content make sense;</li>
<li>the skip navigation link brings us directly to the main page content, so we don’t have to listen through all navigation links again and again.</li>
</ul>
<p>Testing with a screen reader is in many ways similar to testing with a keyboard. Since we can’t see the screen (and I’d recommend turning it off or closing your eyes during testing), we can’t use a mouse or a trackpad to <em>select</em> items on a page, we can only tab to them with a keyboard. The main difference is that we can’t recognize elements like buttons by their look, or can’t connect form inputs with labels by their location. We should define these relationships <a href="https://www.ovl.design/text/inclusive-inputs/">using semantic markup or ARIA attributes</a>.</p>
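<p>For example, an explicit label-input association is announced by screen readers regardless of where the label sits visually:</p>
<pre><code><label for="email">Email</label>
<input id="email" type="email" autocomplete="email" />
</code></pre>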
<p>On macOS, we already have VoiceOver. On Windows, there are built-in Narrator, free <a href="https://www.nvaccess.org/">NVDA</a>, or paid <a href="https://www.freedomscientific.com/products/software/jaws/">JAWS</a>. There’s also <a href="https://chrome.google.com/webstore/detail/chromevox-classic-extensi/kgejglhpjiefppelpmljglcjbhoiplfn/related">ChromeVox</a> that we can install as a Chrome extension.</p>
<p><strong>Tip:</strong> To get started with VoiceOver, check out <a href="https://bocoup.com/blog/getting-started-with-voiceover-accessibility">this article</a> and <a href="https://interactiveaccessibility.com/education/training/downloads/VoiceOver-CommandReference.pdf">keep this cheat sheet</a>.</p>
<p><strong>Bonus:</strong> Use the Accessibility tab in Chrome DevTools to check how assisting technologies see a particular element:</p>
<h2>There’s always more</h2>
<p>A few more things that are worth testing:</p>
<ul>
<li>
<p><strong>Browser reading mode</strong> is an accessibility tool itself: it helps readers concentrate on the main content, or make colors readable. We could also use it as a quick way to test the semantic markup of our pages: we should see the main page heading, complete main content, all content images but nothing extra like decorative images or banners.</p>
</li>
<li>
<p><strong>Reduced motion</strong> is an operating system option that tells sites and apps (via <a href="https://developer.mozilla.org/en-US/docs/Web/CSS/@media/prefers-reduced-motion"><code>prefers-reduced-motion</code></a> media query) that the user prefers to minimize non-essential motion on the screen. We could use it to disable animation on things like reveal on scroll or carousels.</p>
</li>
<li>
<p><strong>Dark mode</strong> could be a site or app option, or an operating system option that we can read via the <a href="https://developer.mozilla.org/en-US/docs/Web/CSS/@media/prefers-color-scheme"><code>prefers-color-scheme</code></a> media query. We should ensure that our site or app, especially its colors, is still accessible in dark mode.</p>
</li>
<li>
<p><strong>Hover alternatives</strong> for keyboard and touchscreens: hover shouldn’t be the only way to reveal some content or an interactive element. A common example is a menu that appears on hover on an item in a long list. <a href="https://inclusive-components.design/tooltips-toggletips/">A tooltip</a> is another example. We could show these elements when the container is in focus for keyboard users, and always show them on touchscreens.</p>
</li>
</ul>
<p><strong>Tip:</strong> Use the CSS <a href="https://www.w3.org/TR/mediaqueries-4/#any-input"><code>any-hover</code></a> interaction media feature query to test hover support on the device, though beware of making <a href="https://css-tricks.com/interaction-media-features-and-their-potential-for-incorrect-assumptions/">incorrect assumptions</a>.</p>
<p><strong>Tip:</strong> We could use Cypress and cypress-axe <a href="https://www.cypress.io/blog/2019/12/13/test-your-web-app-in-dark-mode/">to test the accessibility of our site or app in the dark mode</a>.</p>
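<p>Both of these preferences, as well as the <code>any-hover</code> feature, can also be read from JavaScript using the standard <code>matchMedia</code> browser API. Here’s a minimal sketch; <code>getUserPreferences</code> is a made-up helper name, and <code>matchMedia</code> is passed in as a parameter so the logic can also run outside the browser:</p>
<pre><code>// Reads accessibility-related user preferences via media queries;
// matchMedia is injected (in the browser, pass window.matchMedia.bind(window))
function getUserPreferences(matchMedia) {
  return {
    // The user asked to minimize non-essential motion
    reducedMotion: matchMedia('(prefers-reduced-motion: reduce)').matches,
    // The user prefers a dark color scheme
    darkMode: matchMedia('(prefers-color-scheme: dark)').matches,
    // At least one input device supports hover
    canHover: matchMedia('(any-hover: hover)').matches
  };
}

// In the browser:
// const prefs = getUserPreferences(window.matchMedia.bind(window));
// if (prefs.reducedMotion) {
//   // Skip the reveal-on-scroll animation
// }
</code></pre>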
<h2>Resources</h2>
<p><!-- textlint-disable --></p>
<ul>
<li><a href="https://web.dev/accessible/">Accessible to all</a></li>
<li><a href="https://usecontrast.com/guide">Color contrast guide</a></li>
<li><a href="https://accessibility-for-teams.com/">Accessibility for teams</a></li>
<li><a href="https://www.udacity.com/course/web-accessibility--ud891">Web accessibility course</a> by Google</li>
<li><a href="https://www.a11yproject.com/checklist/">The a11y project accessibility checklist</a></li>
<li><a href="https://medium.com/alistapart/writing-html-with-accessibility-in-mind-a62026493412">Writing HTML with accessibility in mind</a> by Manuel Matuzovic</li>
<li><a href="https://medium.com/@matuzo/writing-javascript-with-accessibility-in-mind-a1f6a5f467b9">Writing JavaScript with accessibility in mind</a> by Manuel Matuzovic</li>
<li><a href="https://medium.com/@matuzo/writing-css-with-accessibility-in-mind-8514a0007939">Writing CSS with accessibility in mind</a> by Manuel Matuzovic</li>
<li><a href="https://www.matuzo.at/blog/beyond-automatic-accessibility-testing-6-things-i-check-on-every-website-i-build/">Beyond automatic accessibility testing: 6 things I check on every website I build</a> by Manuel Matuzovic</li>
<li><a href="https://daverupert.com/2018/07/assistive-technologies-i-test-with/">Assistive technologies I test with</a> by Dave Rupert</li>
<li><a href="https://www.adrianbolonio.com/testing-web-accessibility-part-1/">Testing web accessibility</a> by Adrián Bolonio</li>
<li><a href="https://websitesetup.org/web-accessibility-checklist/">16 things to improve your website accessibility (checklist)</a> by Bruce Lawson</li>
<li><a href="https://www.w3.org/WAI/business-case/">The business case for digital accessibility</a></li>
<li><a href="https://bocoup.com/blog/getting-started-with-voiceover-accessibility">Getting Started with VoiceOver & Accessibility</a> by Sue Lockwood
<!-- textlint-enable --></li>
</ul>
<h2>Conclusion</h2>
<p>We’ve covered a lot of different tools and techniques, many of which I use not only to test my work but to be able to use some sites, like zooming in on a site with tiny fonts or using the reading mode on a site with a dark background.</p>
<p>Keep in mind that tools can only detect some issues, and we should find a balance between automated and manual accessibility testing.</p>
<p><strong>Manual accessibility testing</strong>, when done right, allows us to find most of the problems. However, it’s time-consuming, and we have to redo it for every new feature of our site or app.</p>
<p><strong>Automated accessibility testing</strong> is cheap to run, and it protects the site or app from regressions. However, automated testing can only find certain types of issues.</p>
<hr />
<p>Thanks to <a href="https://twitter.com/steppe_fox">Eldar Amantay</a>, <a href="https://twitter.com/drwendyfox">Wendy Fox</a>, Anna Gerus, Anita Kiss, <a href="https://www.matuzo.at/">Manuel Matuzovic</a>, <a href="https://icing.space/">Patrick Smith</a>.</p>
Detecting accessibility issues on CI with cypress-axehttps://sapegin.me/blog/detecting-accessibility-issues-on-ci-with-cypress-axe/https://sapegin.me/blog/detecting-accessibility-issues-on-ci-with-cypress-axe/Tue, 06 Oct 2020 00:00:00 GMT<p>Unless we check the accessibility of our pages every time we change them, it’s too easy to introduce regressions. Therefore, we should test accessibility during our Continuous Integration (CI) checks.</p>
<p><a href="https://github.com/avanslaars/cypress-axe">Cypress-axe</a> allows us to do exactly that, and it’s a good place to do that in Cypress because we already render all the pages in our end-to-end tests and run them during CI.</p>
<h2>Setting up cypress-axe</h2>
<ol>
<li>Install cypress-axe (assuming we already have Cypress installed and configured on our project):</li>
</ol>
<pre><code>npm install --save-dev cypress-axe
</code></pre>
<ol>
<li>Import the commands by adding this line to <strong>cypress/support/index.js</strong>:</li>
</ol>
<pre><code>import 'cypress-axe';
</code></pre>
<ol>
<li>Update the plugins file, <strong>cypress/plugins/index.js</strong>:</li>
</ol>
<pre><code>module.exports = (on, config) => {
  on('task', {
    table(message) {
      console.table(message);
      return null;
    },
  });
  return config;
};
</code></pre>
<p>We need this for printing the results in the terminal.</p>
<ol>
<li>Add a custom Cypress command, <strong>cypress/support/commands.js</strong>:</li>
</ol>
<pre><code>// Print cypress-axe violations to the terminal
function printAccessibilityViolations(violations) {
  cy.task(
    'table',
    violations.map(({ id, impact, description, nodes }) => ({
      impact,
      description: `${description} (${id})`,
      nodes: nodes.length
    }))
  );
}

Cypress.Commands.add(
  'checkAccessibility',
  {
    prevSubject: 'optional'
  },
  (subject, { skipFailures = false } = {}) => {
    cy.checkA11y(
      subject,
      null,
      printAccessibilityViolations,
      skipFailures
    );
  }
);
</code></pre>
<p>This command runs the cypress-axe <a href="https://github.com/avanslaars/cypress-axe#cychecka11y"><code>checkA11y</code></a> method with a custom violation callback that prints a list of violations to the terminal, and it can be chained to queries.</p>
<h2>Running cypress-axe</h2>
<p>To run accessibility checks, we need to do two things:</p>
<ol>
<li>Inject Axe into the page — we need to do it once, after calling <code>cy.visit</code>.</li>
<li>Run the checks using our <code>checkAccessibility</code> command — we can do it multiple times to check the page in different states.</li>
</ol>
<p>A test case could look like this:</p>
<pre><code>describe('Our awesome site', () => {
  it('Happy path', () => {
    // Visiting the page to test
    cy.visit('http://localhost:8000');

    // Injecting Axe runtime into the page
    cy.injectAxe();

    // Waiting for the page to render
    cy.log('Page header is rendered');
    cy.findByRole('heading', { name: /awesome site/i }).should(
      'be.visible'
    );

    // Running accessibility checks
    cy.checkAccessibility();

    // All the regular end-to-end checks
    // ...
  });
});
</code></pre>
<p>If there are any accessibility violations, the test case fails:</p>
<p></p>
<p>And the list of violations is printed in the terminal:</p>
<p></p>
<p>We could also check a particular area of the page, for example, a modal:</p>
<pre><code>describe('Our awesome site', () => {
  it('Happy path', () => {
    // ...

    // Click a button that opens a modal
    cy.findByRole('button', { name: /open modal/i }).click();

    // Check accessibility only inside the modal
    cy.findByTestId('some-modal').checkAccessibility();
  });
});
</code></pre>
<p><strong>Hint:</strong> <code>cy.findByRole</code> and <code>cy.findByTestId</code> are from <a href="https://testing-library.com/docs/cypress-testing-library/intro">Cypress Testing Library</a>, read more about it in <a href="https://sapegin.me/blog/react-testing-4-cypress/">my article on Cypress</a>.</p>
Finding unused and missing npm dependencies with depcheckhttps://sapegin.me/blog/finding-unused-npm-dependencies-with-depcheck/https://sapegin.me/blog/finding-unused-npm-dependencies-with-depcheck/Tue, 29 Sep 2020 00:00:00 GMT<p>Unused dependencies in a project increase installation time, and every time we upgrade dependencies, we spend more time than necessary updating packages we don’t use. <a href="https://github.com/depcheck/depcheck">Depcheck</a> can identify which of the dependencies listed in the project’s <code>package.json</code> aren’t used and could be removed.</p>
<ol>
<li>Run depcheck in the project root directory:</li>
</ol>
<pre><code>npx depcheck . --specials=babel,bin,eslint,husky,jest,lint-staged,prettier,webpack
</code></pre>
<p><strong>Note:</strong> The <code>--specials</code> option defines <a href="https://github.com/depcheck/depcheck#special">additional dependency checkers</a>, like ESLint or webpack configuration files.</p>
<p>We should see something like this:</p>
<pre><code>Unused devDependencies
* @storybook/addon-a11y
* @storybook/addons
* @types/cypress
* @types/jest
* @types/react-dom
* babel-plugin-transform-async-to-promises
* babel-preset-react-app
* core-js
* jest-environment-jsdom-sixteen
* typescript-styled-plugin
* webpack-cli
Missing dependencies
* eslint-config-airbnb-base: ./.eslintrc.js
* @typescript-eslint/eslint-plugin: ./.eslintrc.js
* eslint-plugin-react: ./.eslintrc.js
* eslint-config-prettier: ./.eslintrc.js
* eslint-plugin-import: ./.eslintrc.js
* history: ./src/pages/BookingManagementPage/BookingManagementPage.stories.tsx
</code></pre>
<p>Unfortunately, there are many false positives, but some of the reported dependencies are actually unused, so we should check each one manually before removing anything.</p>
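<p>A possible cleanup workflow could look like this (the <a href="https://github.com/depcheck/depcheck"><code>--ignores</code></a> flag takes a comma-separated list of package names to skip; the package names below are examples taken from the output above):</p>
<pre><code># Suppress known false positives on future runs
npx depcheck . --ignores="@types/jest,babel-preset-react-app" --specials=babel,bin,eslint,husky,jest,lint-staged,prettier,webpack

# Before removing a package reported as unused, make sure
# it is not referenced anywhere outside package.json
git grep -n "core-js" -- ':!package.json' ':!package-lock.json'

# If nothing is found, remove the package
npm uninstall core-js
</code></pre>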