ajames.dev | Blogs · https://ajames.dev

I'm a senior software engineer based in Glasgow, Scotland. My passion for frontend technologies continually drives me to advance my skill set and adopt the latest industry best practices. An analytical mindset and strong communication skills allow me to excel in environments where I can learn from others and inspire my peers. Over the years I've refined a set of technical principles to strive towards, namely: complexity should only be introduced when it's unavoidable; code should be easy to reason with and delete; avoid abstracting too early; and the top priority is always the best possible user experience.

Mon, 25 Nov 2024 01:00:07 GMT · All rights reserved 2024, Andrew James

# So you want to game the system and get promoted?

https://ajames.dev/writing/get-promoted · Thu, 27 Jun 2024

*A guide to stacking the odds in your favour to get to the next level.*

Getting promoted can be a complex and often unpredictable process. Ideally, career promotions should be based solely on merit, where rewards are proportionate to your proven abilities and individual performance. However, demonstrating your capability to deliver at the next level involves more than just personal achievement. Factors largely beyond your control, such as the company's and industry's current performance, internal bureaucracy, and individual biases, can all have a negative influence on the final outcome.

I recently participated in two promotion cycles at Coinbase. Initially, I was passed over during the mid-year review but secured the promotion on my second attempt six months later. With hindsight, I've taken some time to reflect on why I was unsuccessful initially and what changed to lead to success in the following cycle.

By the end of the first quarter, I was responsible for and had successfully launched several major feature releases. End users were pleased. My manager was satisfied. I was confident that a repeat performance in the second quarter would all but guarantee me a senior role. Unfortunately, a team-wide architectural overhaul in the second quarter resulted in code freezes that disrupted code merges and production deployments. Although I had technically completed my assigned tasks, the work was ultimately held up by frozen pull requests and backend dependencies that faced similar issues. Despite these challenges, the promotion packet was submitted at the end of the second quarter. I hoped that the impact of the re-architecture on the entire team would be considered, even though my Objectives and Key Results (OKRs) were not fully met.

A few weeks later, I received the disappointing news. I was devastated. Not just by the result, but by the feeling that an uncontrollable factor had impeded my progress. The decision left me feeling jaded and resentful. It felt like I was back at square one, and what was to stop a similar scenario from unfolding before the next packet submission?

On reflection, I realised that other engineers had been promoted during this period. This led me to two conclusions: either those engineers managed to overcome the code freeze and deliver results, or they demonstrated competency in areas beyond just meeting OKRs. This realisation underscored that promotion considerations extend beyond individual efforts.
As a software engineer, you'll eventually become responsible for areas you don't directly contribute to, even as an Individual Contributor. Besides achieving hard results, cultivating soft skills becomes equally important.

So, how did I game the system and secure the promotion? In truth, I didn't. At least not in that sense. But don't let the truth get in the way of a good clickbait title.

I'm a lifelong gamer, with a particular soft spot for role-playing games. For the uninitiated, these games provide mechanisms for you to improve core abilities to take on greater challenges. Assuming you are meeting the technical requirements for your next role, I'm going to outline the outside factors that I believe contributed to my promotion, and contextualise them within the core statistics of role-playing games.

Special thanks to [Chafin Bryant](https://x.com/dcbthree) for seeing it through with me 👊🏻

***

## Strength

Learn to flex. Create a [brag list](https://blog.pragmaticengineer.com/work-log-template-for-software-engineers/) to document your accomplishments. A brag list is an invaluable tool for tracking your [SMART goals](https://www.forbes.com/advisor/business/smart-goals/) and helping you stay focused throughout the year. Recording progress continually makes it far easier to demonstrate your value during performance reviews and promotion discussions. Rather than scrambling to recall earlier achievements before the packet submission, this living document will already offer all relevant information and context from when it was recorded. When [noting your accomplishments](https://www.umassglobal.edu/news-and-events/blog/writing-an-accomplishment-statement), make sure to highlight the existing problem, what you did to solve it, and the benefits of the result.

## Dexterity

Learn to be agile. The word itself has [picked up baggage](https://www.forbes.com/sites/cognitiveworld/2019/08/23/the-end-of-agile/) in recent years, but here I'm referring to a method of execution rather than an idealistic process. Being agile essentially makes complex projects more approachable. Even if the end goal is not entirely clear, [breaking down large tasks into smaller ones](https://www.linkedin.com/advice/1/how-can-you-break-down-larger-smart-goals) and using project management tools to track them provides a structured path forward. This also helps distribute work effectively to other team members. Most importantly, it provides clear milestones, allowing stakeholders to see tangible progress throughout the project's lifespan, and not just the end result.

## Constitution

Learn to stand firm. Set clear boundaries for yourself and your team, and prioritise [managing your time effectively](https://arc.dev/talent-blog/time-management-skills/). Doing so will help you to control the project's scope and avoid the otherwise inevitable overwork and burnout. Sticking to these principles will ensure sustainable performance over time and a healthier work environment. Regardless, things won't always go your way; it's possible to commit no mistakes and still lose, after all. It's important to remain calm in these situations, and focus on solving problems rather than complaining about them. You'll lead by example and become known as a dependable colleague.

## Intelligence

Learn to solve problems.
Level up your technical skill set outside of your daily workload: write a technical [blog](https://ajames.dev/#writing), work on some fun [side projects](https://findphunk.vercel.app/), and [learn from others](https://ajames.dev/#learning) through educational courses. While these may not directly appear in a promotion packet, they'll benefit your overall growth as a software engineer. These transferable activities will refine your communication skills, increase motivation, and enhance your day-to-day output.

## Wisdom

Learn to solve the right problems. Focus on understanding and aligning your efforts with those of your company. Delivering what your manager (and their manager) are looking for will yield far more impact than pursuing something you personally find interesting. Better still, if your company holds [cultural tenets](https://www.coinbase.com/en-gb/blog/culture-at-coinbase-2021), demonstrate their application in your accomplishments. Additionally, mentor less experienced colleagues to help them solve similar problems. Assign them relevant work, provide regular feedback, and celebrate their successes to support their growth and promotion. A rising tide lifts all ships.

## Charisma

Learn to network. [Attend company and industry events](https://www.linkedin.com/pulse/why-attending-conferences-matters-eprglobal/), and build rapport with your wider network. Promotions are often decided by a committee of senior peers and managers. By collaborating with others, you'll build presence within your company and become known within your professional sphere. Strengthen your relationship with line managers and senior peers by understanding their goals, requesting feedback, and showing appreciation for their efforts. Since they will likely champion you during the promotion cycle, help them see your true worth, both as a professional and an individual.

Finally, I'll leave you with one of my favourite quotes from [Sara Vieira](https://x.com/nikkitaftw): "You can teach someone to code, but you can't undick a person". Sage advice.

If you enjoyed the article, please share it with others on [Bluesky](https://ajames.dev/bluesky) or [LinkedIn](https://ajames.dev/linkedin).

[email protected] (Andrew James) · Self Improvement

# Using Notion and Next.js ISR to sync content across platforms

https://ajames.dev/writing/synchronize-content · Mon, 22 May 2023

*Create an optimised workflow that synchronises content across multiple platforms*

Publishing content across multiple platforms offers a number of advantages for content creators. Third-party platforms provide authors a wider reach and access to a dedicated user base. Publishing content on a personal website gives the author complete creative control over their content and its presentation.

Keeping content in sync across platforms can be laborious, time-consuming, and prone to human error. It requires careful attention to detail, an efficient workflow, and a commitment to maintaining consistency and accuracy across all published platforms.

This article will demonstrate an optimized workflow for creating and publishing synchronized blog content across the web. For additional context, the final solution is available on [GitHub](https://github.com/phunkren/isr-notion-blog) and deployed on [Vercel](https://isr-notion-blog.vercel.app/). The approach primarily uses Notion and the Notion API to create a single source of truth for the content.
It also integrates the Incremental Static Regeneration feature into a Next.js website to ensure effortless synchronization of each article across the platforms.

## Creating a blog database with Notion

Notion is a powerful productivity tool that caters to both individual users and large enterprise teams. This article uses Notion to create a small database of technical blog posts, along with a custom integration that will allow our Next.js website to retrieve the content from a personal workspace.

### Setting up the database

To get started, create a new page in your Notion workspace and select the Table template. With the table created, selecting the "+ New Database" option from the Data Source menu creates a blank table on the page. Each row on the table will contain the following information:

| Column        | Description                                                                    |
| ------------- | ------------------------------------------------------------------------------ |
| **Published** | Control whether or not the page displays on the personal website               |
| **Page**      | A link to a subpage in Notion containing the article's content                 |
| **Canonical** | The preferred URL of a web page for search engine rankings                     |
| **Date**      | The date the article was published                                             |
| **Tags**      | An array of tags that can be used to filter the articles on the personal site  |
| **Abstract**  | A short description of the article. Useful for previews and SEO                |
| **Slug**      | A unique identifier for each post. Used for the URL routing on the website     |

The "Published" column allows the Next.js website to filter any unfinished articles. This also allows a grace period for the article to be published on a third-party platform before it goes live on the personal website.

The "Page" column links to a subpage in Notion with the respective article's content. We'll use this subpage later to retrieve the content blocks for each article in the Next.js website.

The "Canonical" column helps specify the preferred URL of the content when multiple versions exist across various platforms. Where the Notion article can be considered the source of truth for the content, the canonical URL is considered the source of truth for search engine rankings.

| Published | Page                             | Canonical                                                                                                  |
| --------- | -------------------------------- | ---------------------------------------------------------------------------------------------------------- |
| [ ]       | Using Notion and ISR to synchro… |                                                                                                            |
| [x]       | Building an Accessible Menubar…  | [https://blog.logrocket.com/...](https://blog.logrocket.com/building-accessible-menubar-component-react/)  |

The remaining columns contain frontmatter for each article:

* Date — Allows the articles to be sorted into chronological order
* Tags — Can be used to filter the articles by a specific category
* Abstract — Provides a short description of the article to pique the user's interest
* Slug — A unique identifier for retrieving the post's content

| Date       | Tags           | Abstract                                   | Slug                |
| ---------- | -------------- | ------------------------------------------ | ------------------- |
| 04/01/2021 | react, a11y    | How to create an accessible menubar…       | accessible-menubar  |
| 09/27/2021 | react, dev rel | Synchronize content across multiple plat…  | synchronize-content |

### Setting up the custom integration

Let's now create the integration that will allow the Next.js website to make authenticated requests to the database in the workspace.
In Notion, open the My Integrations page and select "Create New Integration." Since this integration will be private, choose "Internal Integration." Give the integration a name and associate it with your workspace. Then, select "Read Content" from the Content Capabilities and check the "No User Information" radio button in User Capabilities. You can leave all Comment Capabilities unchecked. This will permit the website to retrieve the content without assigning any unnecessary permissions. You should end up with something like the following:

![The settings screen for a custom Notion integration](https://i.imgur.com/Y5LQLkn.png)

## Integrating Next.js with Notion

Previously, publishing articles on my personal website was a manual, error-prone task that involved copying, pasting, and refining content into separate markdown files. This process was tedious and made it difficult to maintain consistency across the platforms. Fortunately, with the help of [the Notion API](https://blog.logrocket.com/getting-started-with-the-notion-api/), this task can now be automated!

Let's configure a Next.js project that retrieves all of the content from the database we created earlier. To accomplish this, we'll create a custom client to fetch the posts and use our custom integration from Notion to authenticate the requests.

### Configuring environment variables

There are two environment variables required to authenticate and retrieve the database content. The first is the Internal Integration Token, which can be found in the "Secrets" menu after creating the integration:

![An example Internal Integration Token for a custom Notion integration](https://i.imgur.com/OpGKkmc.png)

The second is the Database ID. You can find this in the Notion URL between the workspace name and the View ID:

```text
https://www.notion.so/{workspace}/{database-id}?{view-id}

https://www.notion.so/phunkren/9d7344da8c66c9a7487577735b83141c?129sj7...
```

Let's add these variables to the Next.js application's local environment and the project dashboard. Paste the following code into the `.env.local` file, using your own variables in place of the dummy values:

```shell
NOTION_INTERNAL_INTEGRATION_TOKEN=secret_hIGy3ihYFsp...
NOTION_DATABASE_ID=9d7344da8c66c9a7487577735b83141c
```

The Environment Variables section in the Settings menu on the Vercel project dashboard should look something like the image below:

![The Environment Variables section in the Settings menu on the Vercel project dashboard](https://i.imgur.com/7ltynow.png)

Note that the token and database ID values presented here are invalid and for demonstration purposes only. You should never share these values outside of your Vercel project dashboard and environment files.

### Fetching data from Notion

With the Notion integration successfully configured, the next step is to create a client that fetches the posts from the Notion database and stores the result in Markdown files. To achieve this, we'll start by creating a new `Client` instance using the [`@notionhq/client`](https://www.npmjs.com/package/@notionhq/client) library. We'll also use the `NOTION_INTERNAL_INTEGRATION_TOKEN` environment variable to authenticate the requests.
Add the following code to the `lib/notion.ts` file:

```typescript
import { Client } from "@notionhq/client";

const notionClient = new Client({
  auth: process.env.NOTION_INTERNAL_INTEGRATION_TOKEN,
});
```

With the authenticated client, we can now make requests to retrieve the content from the Notion database using the `NOTION_DATABASE_ID` environment variable in the same file:

```typescript
import { BlogPost } from "../types/notion";

export async function getPosts(databaseId: string): Promise<BlogPost[]> {
  const response = await notionClient.databases.query({
    database_id: databaseId,
  });

  return response.results as BlogPost[];
}
```

In the code above, we declared an asynchronous function `getPosts` with a single argument of `databaseId`. We used the `await` keyword to pause the function's execution until the `notionClient.databases.query` promise resolves and returns the collection of blog posts from the requested database.

### Creating Markdown files

If we inspect the `response.results`, each chunk of content in Notion is parsed as a block, which is an object containing the raw content and metadata for each chunk. See the following example content block from Notion:

```javascript
{
  "object": "block",
  "id": "c02fc1d3-db8b-45c5-a222-27595b15aea7",
  "parent": {
    "type": "page_id",
    "page_id": "59833787-2cf9-4fdf-8782-e53db20768a5"
  },
  "created_time": "2022-03-01T19:05:00.000Z",
  "last_edited_time": "2022-07-06T19:41:00.000Z",
  "created_by": {
    "object": "user",
    "id": "ee5f0f84-409a-440f-983a-a5315961c6e4"
  },
  "last_edited_by": {
    "object": "user",
    "id": "ee5f0f84-409a-440f-983a-a5315961c6e4"
  },
  "has_children": false,
  "archived": false,
  "type": "heading_2",
  "heading_2": {
    "rich_text": [ ... ],
    "color": "default",
    "is_toggleable": false
  }
}
```

Although it's possible to serialize each block manually, we'll use a third-party wrapper — [`notion-to-md`](https://www.npmjs.com/package/notion-to-md) — for brevity. The package is somewhat literal, in that it will take a collection of Notion blocks and convert them into a single Markdown file.

To convert the Notion content blocks to a Markdown file, we'll create an `n2m` client by passing the `notionClient` we created earlier to the `NotionToMarkdown` function from the `notion-to-md` package. Add the following code to the `lib/notion.ts` file:

```typescript
import { NotionToMarkdown } from "notion-to-md";

const n2m = new NotionToMarkdown({ notionClient });
```

We can then define a `createPosts` function that takes the results of the `getPosts` function and generates a Markdown file for each result. Initially, the `n2m` client will convert the Notion content blocks into Markdown blocks. We'll then reuse the client to format those blocks into a Markdown string.
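As a quick aside, the `BlogPost` type imported from `../types/notion` never appears in the article itself. A minimal sketch of what it might contain, assuming lowercase property names that mirror the database columns from earlier, could look like this:

```typescript
// Hypothetical types/notion.ts. The property names are assumptions based on
// the database schema above; adjust them to match your own column names.
type RichText = { plain_text: string };

export type BlogPost = {
  id: string;
  properties: {
    published: { checkbox: boolean };
    page: { title: RichText[] };
    canonical: { url: string | null };
    date: { date: { start: string } };
    tags: { multi_select: { name: string }[] };
    abstract: { rich_text: RichText[] };
    slug: { rich_text: RichText[] };
  };
};
```

The response types that ship with `@notionhq/client` are considerably broader; narrowing them to the handful of properties the site actually reads keeps the rest of the code easier to follow.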
With the conversion pipeline in place, we can create a Markdown file for each post, using the "Slug" column property as the filename and populating it with the newly created Markdown string:

```typescript
import * as fs from "fs";
import path from "path";

const POSTS_DIR = path.join(process.cwd(), "posts");

export async function createPosts(posts) {
  if (!fs.existsSync(POSTS_DIR)) {
    fs.mkdirSync(POSTS_DIR);
  }

  for (const post of posts) {
    const uuid = post.id;
    const slug = post.properties.slug.rich_text[0].plain_text;
    const mdblocks = await n2m.pageToMarkdown(uuid);
    const mdString = n2m.toMarkdownString(mdblocks);
    const filename = `${POSTS_DIR}/${slug}.mdx`;

    fs.writeFile(filename, mdString, (err) => {
      err !== null && console.log(err);
    });
  }
}
```

### Filtering unpublished articles

To prevent any unpublished articles from appearing on the site, and to mitigate any potentially unnecessary computational cycles when we generate the markdown files, we'll also create a `filterPosts` utility function that uses the "Published" column property from earlier to remove any unwanted results. We can consider an article published if its published checkbox is checked, which we can determine by checking whether the post's `properties.published.checkbox` property is `true`:

```typescript
export function filterPosts(posts: BlogPost[]) {
  const publishedPosts = posts.filter(
    (post) => post.properties.published.checkbox
  );

  return publishedPosts;
}
```

### Creating dynamic routes

With the published blog posts retrieved from Notion, formatted into Markdown, and serialized in their respective files, let's create a dynamic route for each blog post to be pre-rendered at build time. We'll start by exporting a `getPostIds` function from `lib/notion.ts`. This function will return a collection of unique identifiers for each post, derived from the filenames in the `posts` directory.

```typescript
import * as fs from "fs";
import path from "path";

const POSTS_DIR = path.join(process.cwd(), "posts");

export function getPostIds() {
  if (!fs.existsSync(POSTS_DIR)) {
    fs.mkdirSync(POSTS_DIR);
  }

  const fileNames = fs.readdirSync(POSTS_DIR);

  return fileNames.map((fileName) => {
    return {
      params: {
        id: fileName.replace(/\.mdx$/, ""),
      },
    };
  });
}
```

Finally, we'll use the [`getStaticPaths`](https://nextjs.org/docs/basic-features/data-fetching/get-static-paths) function in the `pages/[id].tsx` file to execute our `getPosts`, `filterPosts`, `createPosts`, and `getPostIds` functions to generate the Markdown files and return a collection of paths that will enable Next.js to statically pre-render the respective routes. It's important that we call these methods inside the `getStaticPaths` function.
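As an aside, the `filterPosts` check could also run at query time, so unpublished pages are never fetched in the first place. A minimal sketch, assuming the checkbox column is named "Published" as in the table earlier:

```typescript
// Alternative to filtering client-side: ask Notion to return only the rows
// whose "Published" checkbox is ticked. The column name here is an assumption.
const response = await notionClient.databases.query({
  database_id: databaseId,
  filter: {
    property: "Published",
    checkbox: { equals: true },
  },
});
```

Keeping `filterPosts` in our own code makes the logic visible in one place, while the query filter trims the network payload; either approach works for this workflow.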
`getStaticPaths` will run once during the production build, whereas `getStaticProps` will run once for each dynamic route:

```typescript
import { GetStaticPaths } from "next";
import { createPosts, filterPosts, getPosts, getPostIds } from "../lib/notion";

export const getStaticPaths: GetStaticPaths = async () => {
  const posts = await getPosts(process.env.NOTION_DATABASE_ID);
  const publishedPosts = filterPosts(posts);

  await createPosts(publishedPosts);

  const paths = getPostIds();

  return {
    paths: paths,
    fallback: false,
  };
};
```

### Rendering Markdown content

To parse the Markdown content from each file into HTML, we'll create a `getPostData` function in the `lib/notion.ts` file that uses the IDs generated from the `getPostIds` function and the [`remark`](https://www.npmjs.com/package/remark) npm package to convert the respective Markdown content to a string:

```typescript
import * as fs from "fs";
import path from "path";
import { remark } from "remark";
import mdx from "remark-mdx";

const POSTS_DIR = path.join(process.cwd(), "posts");

export async function getPostData(id: string) {
  const filePath = path.join(POSTS_DIR, `${id}.mdx`);
  const fileContents = fs.readFileSync(filePath, "utf8");
  const processedContent = await remark().use(mdx).process(fileContents);
  const contentHtml = processedContent.toString();

  return contentHtml;
}
```

Finally, in the `pages/[id].tsx` file, we'll use the `ReactMarkdown` component from the [`react-markdown`](https://blog.logrocket.com/how-to-safely-render-markdown-using-react-markdown/) library and the [`remarkMdx`](https://www.npmjs.com/package/remark-mdx) plugin to render the Markdown string as HTML on the `BlogPost` page 💥:

```typescript
import { GetStaticProps } from "next";
import ReactMarkdown from "react-markdown";
import remarkMdx from "remark-mdx";
import { getPostData } from "../lib/notion";

type Props = {
  postData: string;
};

export const getStaticProps: GetStaticProps = async ({ params }) => {
  const postId = params.id as string;
  const postData = await getPostData(postId);

  return {
    props: {
      postData,
    },
  };
};

export default function BlogPost({ postData }: Props) {
  return (
    <ReactMarkdown remarkPlugins={[remarkMdx]}>{postData}</ReactMarkdown>
  );
}
```

### Creating a blog page index

Now that each article has its own individual page in the Next.js project, we'll present a list of them on the homepage. Each list item will display the article's title, frontmatter, and a link to the corresponding page on the website. We'll start by creating a `sortPosts` utility function that sorts the posts so the newest appear first:

```typescript
import { BlogPost } from "../types/notion";

// Sort posts in reverse chronological order (newest first)
export function sortPosts(posts: BlogPost[]) {
  return posts.sort((a, b) => {
    let dateA = new Date(a.properties.date.date.start).getTime();
    let dateB = new Date(b.properties.date.date.start).getTime();

    return dateB - dateA;
  });
}
```

We then use the `getStaticProps` function in the `index.tsx` file to fetch the data with `getPosts`, filter any unpublished articles with `filterPosts`, sort the published posts chronologically with `sortPosts`, and then return the result as the `posts` prop for the `Home` component. Add the following to your `pages/index.tsx` file:

```typescript
import { GetStaticProps } from "next";

export const getStaticProps: GetStaticProps = async () => {
  const posts = await getPosts(process.env.NOTION_DATABASE_ID);
  const publishedPosts = filterPosts(posts);
  const sortedPosts = sortPosts(publishedPosts);

  return {
    props: {
      posts: sortedPosts,
    },
  };
};
```

In the same file, using a `map` function, we render the collection of `posts` on the homepage. For each post, we display the title, the abstract as a description, the publish date, relevant tags, and a link to the dynamic routes that we created earlier:

```typescript
type Props = {
  posts: BlogPost[];
};

export default function Home({ posts }: Props) {
  return (
    <>
      <h1>ISR Notion Example</h1>
      <h2>Blog posts</h2>
      {posts.map((post) => {
        const title = post.properties.page.title[0].plain_text;
        const description = post.properties.abstract.rich_text[0].plain_text;
        const publishDate = post.properties.date.date.start;
        const url = `/${post.properties.slug.rich_text[0].plain_text}`;
        const tags = post.properties.tags.multi_select
          .map(({ name }) => name)
          .join(", ");

        return (
          <article key={url}>
            <a href={url}>
              <h3>{title}</h3>
            </a>
            <p>{description}</p>
            <ul>
              <li>Published: {publishDate}</li>
              <li>Tags: {tags}</li>
            </ul>
          </article>
        );
      })}
    </>
  );
}
```

## Enabling ISR to synchronize content over time

Currently, the Next.js project is statically generated at build time, allowing the resulting content to be cached. This approach, known as [static site generation (SSG)](https://blog.logrocket.com/ssg-vs-ssr-in-next-js/), is known for having great performance and SEO. However, updating the fetched content requires a new build and site deployment. This can be problematic for syncing our content with Notion, as the site will continue to serve cached versions of the articles until a new build is deployed, regardless of whether or not the content has been updated.

[Server-side rendering (SSR)](https://blog.logrocket.com/implementing-ssr-next-js-dynamic-routing-prefetching/) potentially solves this problem, but it can impact performance as each page is now rendered on every request. This leads to an increase in server load and longer page load times, diminishing the overall user experience.

A relatively newer method in Next.js called [Incremental Static Regeneration (ISR)](https://blog.logrocket.com/incremental-static-regeneration-next-js/) is a perfect compromise between the two, as it allows static content to be updated over time without requiring a complete rebuild of the website. When users access the page within the revalidation window, they are served a cached version, regardless of whether or not the content has since been updated. The first user to access the page after the revalidation window has expired will also be served the cached version. At this point, Next.js will re-fetch and cache the latest data on the fly without rebuilding the entire site! If the regeneration is successful, the next user will be served the updated content.

We'll start by exporting a `ONE_MINUTE_IN_SECONDS` constant in the `constants.ts` file to represent a time interval of one minute:

```typescript
export const ONE_MINUTE_IN_SECONDS = 60;
```

To enable ISR for the homepage and the dynamic routes, we need to return the desired revalidation period as the `revalidate` key on the object returned from `getStaticProps`. Add the following code to the `pages/index.tsx` file:

```typescript
import { ONE_MINUTE_IN_SECONDS } from "../constants";

export const getStaticProps: GetStaticProps = async () => {
  const posts = await getPosts(process.env.NOTION_DATABASE_ID);
  const publishedPosts = filterPosts(posts);
  const sortedPosts = sortPosts(publishedPosts);

  return {
    props: {
      posts: sortedPosts,
    },
    revalidate: ONE_MINUTE_IN_SECONDS,
  };
};
```

And finally, add the following to the `pages/[id].tsx` file:

```typescript
import { ONE_MINUTE_IN_SECONDS } from "../constants";

export const getStaticProps: GetStaticProps = async ({ params }) => {
  const postId = params.id as string;
  const postData = await getPostData(postId);

  return {
    props: {
      postData,
    },
    revalidate: ONE_MINUTE_IN_SECONDS,
  };
};
```

With that, we're done. The application will now periodically check to see if any content in Notion has been updated, and update any pages with changes without having to rebuild the entire site!

## Conclusion

By using Incremental Static Regeneration with the Notion API, content creators can now confidently define a single source of truth that synchronizes content across multiple online platforms. Whenever new articles are created or existing articles are updated, the latest version is immediately available to third-party platforms. The author's personal website will also periodically update the content on a per-page basis after a given period of time.
This approach will continue to benefit the author as their collection of articles scales without significantly compromising the overall build times for their personal website. Ultimately, this solution provides an optimized workflow for synchronizing content while also supporting long-term content creation goals.
[email protected] (Andrew James) · react, dev rel
# So you want to work remotely?

https://ajames.dev/writing/work-remote · Sat, 01 Oct 2022

*Tips and product suggestions for setting up an effective remote workstation.*

Back in 2020, I put together a list of basic principles for [working from home](https://ajames.dev/writing/work-home). Where that focused on the ethos of working remotely, this time I'd like to discuss equipment that will allow you to be effective working in a remote environment.

Unfortunately, it's impossible to recommend a setup that covers every use case. Everyone's budget, available space, and personal circumstances are infinitely variable. This post will therefore assume you are working from a laptop, in a dedicated space, and are willing to spend a reasonable sum of money on upgrading your workstation. For those interested in my personal setup, I've also included recommendations below each section that I've either used in the past, or I'm currently using at the time of writing.

## Desk

It's tough to recommend a desk, as it all comes down to available space and personal preference. My overarching advice is to minimise the clutter on the desk to optimise your working space. There should be enough space to type on your keyboard and scribble notes on a pad without rearranging the desk. Bigger is usually better. Motorised standing desks are popular, but they can be expensive. If you don't see yourself standing on a daily basis, spend the money elsewhere. I know plenty of people with height-adjustable desks that are permanently seated.

Try to position the desk close to any natural light. Facing the window is usually best. It can be visually stimulating, and it'll give you the best lighting for conference calls. If you find yourself squinting or getting annoyed by the glare, rotate it perpendicular to the window. Try not to have your back to the window. This can lead to underexposure, making you appear as a silhouette on video calls.

### **Recommendation**

* [IKEA Karlby worktop](https://www.ikea.com/gb/en/p/karlby-worktop-walnut-veneer-00335201/)
* [Flexispot standing desk legs](https://www.amazon.co.uk/FLEXISPOT-Adjustable-Electric-Two-Stage-Automatic/dp/B07HFZP1Q3/ref=sr_1_6?crid=35K9YAEDWXO0Y\&keywords=standing%20desk%20legs\&qid=1665875715\&qu=eyJxc2MiOiI0LjY3IiwicXNhIjoiNC4yNiIsInFzcCI6IjMuMzAifQ%3D%3D\&sprefix=standing%20desk%20legs%2Caps%2C147\&sr=8-6)

## Chair

On average, you'll spend at least a third of the day sitting down, so it makes sense to invest in a high-quality ergonomic chair. My advice is to scan second-hand outlets (Facebook Marketplace, Gumtree, eBay, etc.) for a Herman Miller chair. If space is limited, check out the Setu. In my opinion, the best all-rounder is the Mirra / Mirra 2. If money is no object, treat yourself to an Aeron!

When sat at your desk, you ideally want to be looking directly ahead or slightly below your eye level. Your elbows, knees, and hips should be at around 90 degrees, and your feet should be flat on the floor and in front of your knees.

### **Recommendation**

* [Herman Miller Mirra](https://www.amazon.com/Mirra-Chair-Highly-Adjustable-Herman-Miller/dp/B0035BCRRU/ref=sr_1_15?keywords=Herman%20Miller\&qid=1665876180\&qu=eyJxc2MiOiI1LjY4IiwicXNhIjoiNi4wNyIsInFzcCI6IjUuNzMifQ%3D%3D\&sr=8-15)

## Communication

Virtual meetings come with the territory. It's important to ensure you are broadcasting clearly - both audibly and visually - and can focus on what is being said without becoming distracted.
Laptop speakers traditionally have poor fidelity, making it easy to miss something important. Nearby distractions can also cause you to lose focus. To counter both of these issues, I highly recommend noise-cancelling headphones. Although most conferencing software can detect audio feedback, headphones prevent your speakers from leaking unwanted noise into your microphone.

I also recommend using a dedicated webcam over the laptop's integrated equivalent. The camera should be at least 1080p resolution, with a 30fps frame rate, and a microphone if your headphones do not have their own. If you're paranoid like me, it's also worth looking out for a camera that comes with a physical privacy screen / slide for when it is not in use.

### **Recommendation**

* [Logitech C925-E webcam](https://www.amazon.co.uk/Logitech-Auto-Focus-Webcam-Omni-Directional-Microphones/dp/B01GRE7W9O/ref=sxts_rp_s_1_0?content-id=amzn1.sym.78489f60-7584-4e4f-a0a9-1b053691012c%3Aamzn1.sym.78489f60-7584-4e4f-a0a9-1b053691012c\&crid=3430EUF4UQGOK\&cv_ct_cx=logitech%20webcam\&keywords=logitech%20webcam\&pd_rd_i=B01GRE7W9O\&pd_rd_r=284061e3-e364-428d-932e-007832fe4d88\&pd_rd_w=g5wHl\&pd_rd_wg=JAvno\&pf_rd_p=78489f60-7584-4e4f-a0a9-1b053691012c\&pf_rd_r=1NFFPMSZDA2VZ25SHVJ1\&psc=1\&qid=1665876942\&qu=eyJxc2MiOiI0LjE5IiwicXNhIjoiMy43MyIsInFzcCI6IjMuNjEifQ%3D%3D\&sprefix=logitech%20webca%2Caps%2C117\&sr=1-1-1890b328-3a40-4864-baa0-a8eddba1bf6a)
* [Bose QuietComfort earbuds](https://www.amazon.co.uk/dp/B08CJP6V6W/ref=asc_df_B08C4KWM9T1665615600000?tag=georiot-trd-21\&ascsubtag=trd-gb-2956678197275574000-21\&geniuslink=true\&th=1)

## Monitor

One extra screen is essential. Two is optimal. My general recommendation for each screen is 1080p resolution, 60Hz refresh rate, and a 24" IPS panel. Ideally look for screens with little to no bezels (the plastic frame around the screen), especially if you are positioning them side by side.

Speaking of which, you definitely want to mount your monitors. Not only will it allow you to reposition the monitor to prevent neck strain, but it also frees up space on the desk and improves the overall aesthetic. I've been using the Amazon Basics mounts for years and I love them!

### **Recommendation**

* [PC Part Picker (List)](https://uk.pcpartpicker.com/products/monitor/#F=609600000\&r=192001080\&D=60000\&P=2)
* [ASUS VZ249HE Monitor](https://www.amazon.co.uk/ASUS-VZ249HE-Monitor-Ultra-Slim-Certified/dp/B07281PZWK/ref=psdc_340832031_t3_B0859YRX7C?th=1)
* [Amazon Basic Monitor Arm](https://www.amazon.co.uk/AmazonBasics-Single-Monitor-Display-Mounting/dp/B00MIBN16O/ref=sr_1_3?keywords=amazon%20basics%20monitor%20arm\&qid=1665875343\&qu=eyJxc2MiOiIzLjE0IiwicXNhIjoiMi44MCIsInFzcCI6IjIuNTcifQ%3D%3D\&s=computers\&sprefix=amazon%20basics%2Ccomputers%2C85\&sr=1-3)

## Peripherals

Laptop keyboards are traditionally smaller than their external counterparts, which can feel cramped over longer periods of time. With the exception of a MacBook, trackpads are also notoriously difficult to work with. Even if you are working from a laptop, I'd still recommend using a wireless keyboard and mouse. Mechanical keyboards feel great to type on, and I've always preferred using a wireless mouse for both professional and personal use.

Rather than relying on Wi-Fi, it's a good idea to use a powerline adaptor to connect your machine to your internet router. Depending on the condition of the wiring in your house, this usually leads to faster speeds and a more stable connection.
Be sure to buy an adaptor that is capable of carrying the same speed as your service provides! It's also worth investing in an adaptor that has a passthrough socket. This will prevent you from losing the socket when the adaptor is plugged in.

For those that split their time between the office and working from home, a USB hub is a great investment. It will allow you to connect all of your devices, internet, displays, and even charging to a single source. When it comes to working from home, all you need to do is plug a single cable into your laptop and you're good to go!

### **Recommendation**

* [Logitech MX Master 2s mouse](https://www.amazon.co.uk/Logitech-Rechargeable-Multi-Device-Programmable-Productivity/dp/B071KZS3MF)
* [Keychron K8 keyboard](https://www.keychron.uk/products/keychron-k8-wireless-mechanical-keyboard-uk-iso-layout)
* [Powerline Adaptor](https://www.amazon.co.uk/dp/B08LW5VPPV?tag=pcad03-21\&linkCode=ogi\&th=1\&psc=1\&ascsubtag=4-1-723387-6-664724-14959)
* [Anker USB-C Hub](https://www.amazon.co.uk/Anker-PowerExpand-Adapter-Delivery-Ethernet-Gray/dp/B08NDGD2V5/ref=sr_1_15?crid=3JXOI78MP1MK1\&keywords=anker%20usb%20hub\&qid=1665878993\&qu=eyJxc2MiOiI0LjI0IiwicXNhIjoiMi45NCIsInFzcCI6IjIuNjUifQ%3D%3D\&sprefix=ankey%20usb%20hub%2Caps%2C102\&sr=8-15)

## Ambience

As trivial as it sounds, appropriate lighting is a worthwhile investment. Diffusing the glare from the monitor with ambient light will reduce the strain on your eyes over long periods of time. This is especially important in rooms with low natural light. It also helps to personalise the space beyond the usual white walls. I use a BenQ screen light that is mounted above my primary monitor. The back of my desk is fitted with a Philips Hue light strip, and my keyboard is backlit to make the keys easier to see in low-light conditions.

### **Recommendation**

* [BenQ Screenbar](https://www.amazon.co.uk/BenQ-ScreenBar-Halo-Controller-Temperature/dp/B08WT889V3/ref=sr_1_5?crid=1N9LALPL4K7GI\&keywords=benq%20screenbar\&qid=1665875930\&qu=eyJxc2MiOiIyLjM3IiwicXNhIjoiMi4zNyIsInFzcCI6IjIuMjUifQ%3D%3D\&sprefix=benq%20screenbar%2Caps%2C94\&sr=8-5)
* [Philips Hue lightstrip](https://www.amazon.co.uk/Philips-Lightstrip-Ambiance-Bluetooth-Assistant/dp/B088RX9CSZ/ref=sr_1_4?crid=141DJPMWH7LF\&keywords=hue%20lightstrip\&qid=1665875844\&qu=eyJxc2MiOiIyLjk4IiwicXNhIjoiMi4wNSIsInFzcCI6IjEuOTQifQ%3D%3D\&sprefix=hue%20lightstrip%2Caps%2C102\&sr=8-4)

## Miscellaneous

Plants are great for clearing the air and giving some visual stimulation. If you decide on a plant, I'd recommend something low maintenance - particularly one that survives well in low-light conditions and can go for extended periods if you forget to water it. The only thing worse than no plant is a dead one!

Everything gets dusty and dirty over time. Screen cleaner is great for removing fingerprints and streaks from your monitor, and the microfibre cloth can also be used for dusting the desk off after the weekend.

Instead of a small mouse mat, I prefer one that spans the majority of my working space. It allows for wider sweeping motions with the mouse (this is handy when you have multiple monitors), it gives the keyboard extra grip to prevent slipping, and it helps prevent damage to the desk through continual use.

Repetitive strain injury is common for those of us typing on keyboards for extended periods of time.
Adding a padded rest below the keyboard can help alleviate the strain on the wrists, and allow you to type for extended periods of time without any noticeable discomfort.

Finally, cable tidies are great for removing clutter from your desk. They can really give your desk a clean, minimal feel for a trivial cost.

### **Recommendation**

* [Sansevieria (Snake plant)](https://thelittlebotanical.com/product/sanseveria-punk-succulent-snakeplant/)
* [Microfibre cloth & screen cleaner](https://www.amazon.co.uk/gp/product/B078QQRKWL/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8\&psc=1)
* [XXXL mouse mat](https://www.amazon.co.uk/Corsair-Anti-Fray-Anti-Skid-Optimised-High-Performance/dp/B0833PQPPG/ref=psdc_340832031_t2_B08JH8C5T5?th=1)
* [Keyboard wrist rest](https://www.amazon.co.uk/gp/product/B07HWKFQD6/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8\&psc=1)
* [Cable management](https://www.amazon.co.uk/Anker-PowerExpand-Adapter-Delivery-Ethernet-Gray/dp/B08NDGD2V5/ref=sr_1_15?crid=3JXOI78MP1MK1\&keywords=anker%20usb%20hub\&qid=1665878993\&qu=eyJxc2MiOiI0LjI0IiwicXNhIjoiMi45NCIsInFzcCI6IjIuNjUifQ%3D%3D\&sprefix=ankey%20usb%20hub%2Caps%2C102\&sr=8-15)

[email protected] (Andrew James) · Productivity

# Building an Accessible Menubar Component Using React

https://ajames.dev/writing/accessible-menubar · Fri, 01 Apr 2022

*Create an accessible Menubar based on the WAI-ARIA design pattern for a menubar widget.*

Last week I watched [Pedro Duarte](https://www.youtube.com/watch?v=lY-RQjWeweo)'s excellent *"So You Think You Can Build A Dropdown"* talk at Next.js Conf. It inspired me to write up an accessible component of my own that I recently worked on — the menubar widget.

I have a real interest in accessibility, particularly in frontend web development. Of all the patterns that I've researched to date, the menubar was the most complex. [Reach](https://reach.tech/), [Radix](https://www.radix-ui.com/), and [React Aria](https://react-spectrum.adobe.com/react-aria/index.html) all provide flexible and accessible React components. Yet, I struggled to find any library that provided a menubar component out of the box. Given the complexity and lack of material, I thought I'd share my discoveries with the community.

## Introduction

This article will explain how I created an accessible `Menubar` component with React. The aim was to create a component that adhered to the WAI-ARIA [design pattern](https://www.w3.org/TR/wai-aria-practices/#menu) for a menubar widget. For brevity, the article will focus on a horizontal menubar with a single submenu. It also assumes you are comfortable with React hooks and the [compound component](https://kentcdodds.com/blog/compound-components-with-react-hooks) pattern. I've included the solution as a Code Sandbox link below.

### Useful Links

* [Design pattern](https://www.w3.org/TR/wai-aria-practices/#menu)
* [Navigation Menubar Example](https://www.w3.org/TR/wai-aria-practices/examples/menubar/menubar-1/menubar-1.html)
* [Code Sandbox](https://codesandbox.io/s/a11y-menubar-logrocket-cv5q3w)

## The Menubar

We'll kick off with the requirements. The Mythical University has requested an accessible site navigation for their website. To get started, we'll group a collection of hyperlinks in an unordered list. We'll also wrap the list in a navigation section.
The HTML might look something like this:

```html
<nav>
  <ul>
    <li><a href="/#about">About</a></li>
    <li><a href="/#admissions">Admissions</a></li>
    <li><a href="/#academics">Academics</a></li>
  </ul>
</nav>
```

At first glance, the markup looks comprehensive, but how accessible is it for those reliant on assistive technologies? Additionally, can the user navigate the menubar with the expected keyboard controls? Although we have provided semantic HTML, the current iteration is not considered accessible. The markup is missing critical `aria-` roles that give context to both the links and the widget itself. Poor keyboard support also means the user is only able to tab through the list of links.

Let's improve both of these areas. We'll start by creating two functional components. One is a parent `Menubar` list, and the other is a child `MenuItem` list item. Together we'll use these to compose a compound `<Menubar>` component.

The parent `Menubar` returns an unordered list element. Since it's the widget's root element, we'll assign it the `menubar` role. The `aria-orientation` attribute allows assistive technology to determine the direction of the menu. Finally, let's include a custom `data-` attribute for targeting and styling later on.

```javascript
function Menubar({ children, ...props }) {
  const listProps = {
    ...props,
    "aria-orientation": "horizontal",
    "data-menubar-list": "",
    role: "menubar",
  };

  return <ul {...listProps}>{children}</ul>;
}
```

The second component is the `MenuItem`. It accepts a single node for its `children` prop and returns the node wrapped in a list item element. Assistive technology should only announce the child node. A list item element has the `listitem` role by default. By overriding it to `none`, we completely remove it from the accessibility tree. We then assign the child node the `menuitem` role by [cloning the element](https://reactjs.org/docs/react-api.html#cloneelement) and shallow merging the props.

```javascript
function MenuItem({ children, ...props }) {
  const listItemProps = {
    ...props,
    "data-menubar-listitem": "",
    role: "none",
  };

  const childProps = {
    "data-menubar-menuitem": "",
    role: "menuitem",
  };

  return (
    <li {...listItemProps}>{React.cloneElement(children, childProps)}</li>
  );
}
```

Finally, let's add a matching `aria-label` to the navigation element. The current React markup will look something like this:

```html
<nav aria-label="Mythical University">
  <Menubar>
    <MenuItem>
      <a href="/#about">About</a>
    </MenuItem>
    <MenuItem>
      <a href="/#admissions">Admissions</a>
    </MenuItem>
    <MenuItem>
      <a href="/#academics">Academics</a>
    </MenuItem>
  </Menubar>
</nav>
```

Which will compile into the following HTML:

```html
<nav aria-label="Mythical University">
  <ul aria-orientation="horizontal" data-menubar-list="" role="menubar">
    <li data-menubar-listitem="" role="none">
      <a data-menubar-menuitem="" role="menuitem" href="/#about">About</a>
    </li>
    <li data-menubar-listitem="" role="none">
      <a data-menubar-menuitem="" role="menuitem" href="/#admissions">Admissions</a>
    </li>
    <li data-menubar-listitem="" role="none">
      <a data-menubar-menuitem="" role="menuitem" href="/#academics">Academics</a>
    </li>
  </ul>
</nav>
```

So far we've improved the menubar for those using assistive technology, but what about those who are reliant on keyboard controls? For them to navigate the list of menu items, the `Menubar` component needs to be aware of each child `MenuItem`. We can achieve this by utilizing the React `createContext()` and `useEffect()` hooks. Let's start by creating a new `MenubarContext`:

```javascript
export const MenubarContext = React.createContext(null);
```

The `MenubarContext` will store a [Set](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Set) of nested `MenuItem` nodes within a parent `Menubar`. We contain the `Set` in a mutable ref object created with the `useRef()` hook, and store the `current` value in a variable. This allows us to manipulate the `Set` contents without re-rendering the `Menubar`. Next, we'll memoize an object with the `useMemo()` hook and assign the `menuItems` as a property. Finally, we'll pass the object to the value attribute of the `MenubarContext.Provider`.

```javascript
function Menubar({ children, ...props }) {
  const menuItems = React.useRef(new Set()).current;
  const value = React.useMemo(() => ({ menuItems }), [menuItems]);
  const listProps = { ... };

  return (
    <MenubarContext.Provider value={value}>
      <ul {...listProps}>{children}</ul>
    </MenubarContext.Provider>
  );
}
```

The `MenuItem` should only ever be a child of a `Menubar` component. To enforce this, let's throw an error if the `useContext()` hook cannot find a `MenubarContext`. This allows us to assert that `menuItems` exists below the following conditional statement:

```javascript
const menubarContext = React.useContext(MenubarContext);

if (!menubarContext) {
  throw new Error("MenuItem must be used within a Menubar Context");
}

const { menuItems } = menubarContext;
```

Let's create an object reference to the `MenuItem` DOM node with the `useRef()` hook. Then let's use the `useEffect()` hook to trigger a side-effect that adds the node to the `menuItems` `Set`. We'll also return a cleanup function to remove it from the `Set` if the `MenuItem` unmounts.

```javascript
const { menuItems } = menubarContext;
const menuItemRef = React.useRef(null);

const listItemProps = {
  [ ... ],
  ref: menuItemRef,
};

React.useEffect(() => {
  const menuItemNode = menuItemRef.current;

  if (menuItemNode) {
    menuItems.add(menuItemNode);
  }

  return () => {
    menuItems.delete(menuItemNode);
  };
}, [menuItems]);

return (
  <li {...listItemProps}>{React.cloneElement(children, childProps)}</li>
);
```

### Roving tab index

We now have a reference to each `MenuItem` node. With them, we can apply the [roving tab index](https://www.w3.org/TR/wai-aria-practices-1.1/#kbd_roving_tabindex) pattern to manage focus within the component. To do that, the `Menubar` needs to keep track of the current and previously-focused `MenuItem`. We can do this by storing the indexes of the current and previous nodes in the `Menubar`'s component state.

The current index is a stateful value stored using the React `useState()` hook. When the Menubar first mounts, the first `MenuItem` child should have a tab index of `0`. Thus, we can assign `0` as the default state for the current index. We can use a custom hook to track the previous index. The hook accepts the current index as a function parameter. If the hook does not return a value, we can assume that one does not exist and fall back to `null`.

```javascript
/* https://usehooks.com/usePrevious/ */
const [currentIndex, setCurrentIndex] = React.useState(0);
const previousIndex = usePrevious(currentIndex) ?? null;

function usePrevious(value) {
  const ref = React.useRef();

  React.useEffect(() => {
    ref.current = value;
  }, [value]);

  return ref.current;
}
```

To apply the roving tab index, the `menuItems[currentIndex]` node must have a tab index of `0`. All other elements in the component's tab sequence should have a tab index of `-1`. Whenever the user navigates from one menu item to another, the following should occur:

* The current node should blur and its tab index should be set to `-1`
* The next node's tab index is set to `0`
* The next node receives focus

Let's utilize the React `useEffect()` hook for this. We'll pass the current and previous indexes as effect dependencies. Whenever either index changes, the effect will update all appropriate indexes. Note that we are applying the tab index attribute to the first child of the `MenuItem`, not the list item wrapper.

```javascript
React.useEffect(() => {
  if (currentIndex !== previousIndex) {
    const items = Array.from(menuItems);
    const currentNode = items[currentIndex]?.firstChild;
    const previousNode = items[previousIndex]?.firstChild;

    previousNode?.setAttribute("tabindex", "-1");
    currentNode?.setAttribute("tabindex", "0");
    currentNode?.focus();
  }
}, [currentIndex, previousIndex, menuItems]);
```

We don't have to add the tab index to each menu item; we can update the `MenuItem` component to do that for us! We can assume that if the `menuItems` `Set` is empty, then the node is the first menu item in the sequence. Let's add some component state to track whether the `MenuItem` is the first node in the set. If it is, we can assign its tab index a value of `0` — otherwise, we'll fall back to `-1`.

```javascript
const [isFirstChild, setIsFirstChild] = React.useState(false);
const menuItemRef = React.useRef(null);
const { menuItems } = menubarContext;

const listItemProps = {
  [ ... ],
  ref: menuItemRef,
};

const childProps = {
  [ ... ],
  tabIndex: isFirstChild ? "0" : "-1",
};

React.useEffect(() => {
  const menuItemNode = menuItemRef.current;

  if (menuItemNode) {
    if (!menuItems.size) {
      setIsFirstChild(true);
    }

    menuItems.add(menuItemNode);
  }

  return () => {
    menuItems.delete(menuItemNode);
  };
}, [menuItems]);

return (
  <li {...listItemProps}>{React.cloneElement(children, childProps)}</li>
);
```

### Keyboard controls

Next, we'll use the `Menubar`'s `onKeyDown()` event to update the current index based on the user's keypress. There are five primary ways that a user can navigate through the menu items. They can:

* Return to the previous item
* Advance to the next
* Jump to the first
* Skip to the last
* Move to the next match

Let's encapsulate that logic into some helper methods that we can pass to the `keyDown` event.

```javascript
// Moves focus to the first item in the menubar.
const first = () => setCurrentIndex(0);

// Moves focus to last item in the menubar.
const last = () => setCurrentIndex(menuItems.size - 1);

// Moves focus to the next item in the menubar.
// If focus is on the last item, moves focus to the first item.
const next = () => {
  const index = currentIndex === menuItems.size - 1 ? 0 : currentIndex + 1;
  setCurrentIndex(index);
};

// Moves focus to the previous item in the menubar.
// If focus is on the first item, moves focus to the last item.
const previous = () => {
  const index = currentIndex === 0 ? menuItems.size - 1 : currentIndex - 1;
  setCurrentIndex(index);
};

// Moves focus to next item in the menubar that starts with the character.
// If none of the items start with the typed character, focus does not move.
const match = (e) => {
  const items = Array.from(menuItems);

  const reorderedItems = [
    ...items.slice(currentIndex),
    ...items.slice(0, currentIndex)
  ];

  const matches = reorderedItems.filter((menuItem) => {
    const { textContent } = menuItem.firstChild;
    const firstLetter = textContent.toLowerCase().charAt(0);
    return e.key === firstLetter;
  });

  if (!matches.length) {
    return;
  }

  const currentNode = items[currentIndex];
  const nextMatch = matches.includes(currentNode) ? matches[1] : matches[0];
  const index = items.findIndex((item) => item === nextMatch);
  setCurrentIndex(index);
};
```

With the helper methods defined, we can assign them to the appropriate key codes. We'll check to see if the keypress matches any keys associated with movement; if it doesn't, we'll default to the `match()` helper method.

```javascript
const keyDown = (e) => {
  e.stopPropagation();

  switch (e.code) {
    case "ArrowLeft":
      e.preventDefault();
      previous();
      break;
    case "ArrowRight":
      e.preventDefault();
      next();
      break;
    case "End":
      e.preventDefault();
      last();
      break;
    case "Home":
      e.preventDefault();
      first();
      break;
    default:
      match(e);
      break;
  }
};

const listProps = {
  [ ... ],
  onKeyDown: (e) => {
    keyDown(e);
  },
};
```

Notice that we are calling `e.preventDefault()` on most of the helper methods. This is to suppress any default browser behavior as the user interacts with the menubar. For example, by default, the `End` key scrolls the user to the bottom of the page. Let's say we did not prevent the default behavior; the scroll position would jump to the bottom of the page any time the user tried to skip to the final menu item!

We mustn't call `e.preventDefault()` on the default case. If we did, it would ignore any default browser behavior not captured by a switch case. This could lead to undesired behavior. An example would be if a menu item within the menubar had focus and the user pressed `ctrl + r` to refresh the page. If we called `e.preventDefault()` on the default case, it would ignore the refresh request. It would then pass the `r` key to the `match` helper method.

We now have a fully-accessible Menubar widget for a collection of navigation links! Each menu item provides rich contextual information to assistive technology.
It also allows those reliant on keyboard support to navigate the list of links as they would expect. The component API hasn't changed from the previous example...

```html
<nav aria-label="Mythical University">
  <Menubar>
    <MenuItem>
      <a href="/#about">About</a>
    </MenuItem>
    <MenuItem>
      <a href="/#admissions">Admissions</a>
    </MenuItem>
    <MenuItem>
      <a href="/#academics">Academics</a>
    </MenuItem>
  </Menubar>
</nav>
```

...yet the compiled HTML markup now includes tab indexes on the menu items. Progress!

```html
<nav aria-label="Mythical University">
  <ul aria-orientation="horizontal" data-menubar-list="" role="menubar">
    <li data-menubar-listitem="" role="none">
      <a data-menubar-menuitem="" role="menuitem" tabindex="0" href="/#about">About</a>
    </li>
    <li data-menubar-listitem="" role="none">
      <a data-menubar-menuitem="" role="menuitem" tabindex="-1" href="/#admissions">Admissions</a>
    </li>
    <li data-menubar-listitem="" role="none">
      <a data-menubar-menuitem="" role="menuitem" tabindex="-1" href="/#academics">Academics</a>
    </li>
  </ul>
</nav>
```

## The Submenu

The previous example is great for a single collection of links, but what if we replaced one of them with a dropdown that revealed a secondary set of navigation links?

```html
<nav aria-label="Mythical University">
  <ul>
    <li><a href="/#about">About</a></li>
    <li>
      <button type="button">Admissions</button>
      <ul>
        <li><a href="/#visit">Visit</a></li>
        <li><a href="/#tuition">Tuition</a></li>
      </ul>
    </li>
    <li><a href="/#academics">Academics</a></li>
  </ul>
</nav>
```

For this, we're going to need to create a second compound component — the `<Submenu>`. It is composed of three functional components:

* The `Submenu` will hold shared logic and component state
* The `Trigger` will allow the user to expand the menu
* The `List` will display the expanded menu items

The `MenubarContext` keeps track of menu items within the `Menubar`. In turn, let's create a `SubmenuContext` to keep track of menu items nested within a `Submenu`.

```javascript
export const SubmenuContext = React.createContext(null);
```

Let's start by defining the `Submenu` component. It'll share some similar behaviors and functionality to the `Menubar`. Alongside the index tracking, it also needs to know if its menu has expanded. We could declare another state variable with `useState()`. Instead, it makes more sense to merge the logic into a reducer function.

The purpose of the `Submenu` parent component is to hold the compound component state. It is also responsible for distributing shared logic to its sub-components. We assign the logic to a memoized object, after which that object is then passed to the value attribute of a `SubmenuContext.Provider`.

```javascript
const submenuInitialState = {
  currentIndex: null,
  previousIndex: null,
  isExpanded: false,
};

function submenuReducer(state, action) {
  switch (action.type) {
    case "expand":
      return { ...state, isExpanded: true };
    case "collapse":
      return submenuInitialState;
    case "move":
      return {
        ...state,
        isExpanded: true,
        currentIndex: action.index,
        previousIndex: state.currentIndex,
      };
    default:
      throw new Error(`${action.type} not recognised`);
  }
}

const Submenu = ({ children }) => {
  const menuItems = React.useRef(new Set()).current;
  const [state, dispatch] = React.useReducer(submenuReducer, submenuInitialState);
  const value = React.useMemo(() => ({ menuItems }), [menuItems]);

  return (
    <SubmenuContext.Provider value={value}>
      {children}
    </SubmenuContext.Provider>
  );
};
```

Now, let's define the helper methods for navigating the submenu's menu items. These are almost identical to the `Menubar` helpers. The key difference is they dispatch reducer actions instead of updating the component state directly.

```javascript
const open = React.useCallback(() => dispatch({ type: "expand" }), []);
const close = React.useCallback(() => dispatch({ type: "collapse" }), []);
const first = React.useCallback(() => dispatch({ type: "move", index: 0 }), []);
const last = React.useCallback(
  () => dispatch({ type: "move", index: menuItems.size - 1 }),
  [menuItems.size]
);
const move = React.useCallback((index) => dispatch({ type: "move", index }), []);

const value = React.useMemo(
  () => ({ open, close, first, last, move }),
  [open, close, first, last, move]
);

return (
  <SubmenuContext.Provider value={value}>
    {children}
  </SubmenuContext.Provider>
);
```

Some functional requirements need the subcomponents to have knowledge of their siblings. We can achieve this by defining ids and references for each subcomponent in the `Submenu`. Note that we store the `id` within a reference object. This is to prevent the `uniqueId()` function from regenerating the id on every render. Each subcomponent can now retrieve the values from the `useContext()` hook.
Some functional requirements need the subcomponents to have knowledge of their siblings. We can achieve this by defining ids and references for each subcomponent in the `Submenu`. Note that we store the `menuId` within a reference object. This prevents the `uniqueId()` function from regenerating the id on every render. Each subcomponent can now retrieve the values from the `useContext()` hook.

```javascript
const id = React.useRef(_.uniqueId("submenu--")).current;
const buttonId = `button--${id}`;
const listId = `list--${id}`;

const buttonRef = React.useRef(null);
const listRef = React.useRef(null);

const value = React.useMemo(
  () => ({ buttonId, buttonRef, listId, listRef }),
  [buttonId, buttonRef, listId, listRef]
);
```

Let's now manage focus within the `Submenu`. We'll start by adding another side effect. This one will focus the first child of the current index if the tracked indexes do not match. Whenever we update the current index, we focus the first child of the new current node.

```javascript
React.useEffect(() => {
  const items = Array.from(menuItems);

  if (currentIndex !== previousIndex) {
    const currentNode = items[currentIndex]?.firstChild;
    currentNode?.focus();
  }
}, [menuItems, currentIndex, previousIndex]);
```

Submenus do not follow the roving tab index pattern. Instead, the tab index of each menu item within a submenu will always be `-1`. This requires a small change to the `MenuItem` component. If a `SubmenuContext` exists, we can assume the `MenuItem` is inside a `Submenu` and apply `-1` to its tab index.

```javascript
const [isFirstChild, setIsFirstChild] = React.useState(false);
const submenuContext = React.useContext(SubmenuContext);

const childProps = {
  [ ... ],
  tabIndex: !submenuContext && isFirstChild ? "0" : "-1",
};
```

### Trigger

With the `Submenu` defined, let's create the `Trigger` component. We'll start by retrieving the `buttonId` and `buttonRef` from the `SubmenuContext`. Since a button's default type is `submit`, it's usually a good idea to override it to `button`. Finally, the `Trigger` should only ever be a child of the `Submenu`. Like before, let's throw an error if we use it outside of a `SubmenuContext`.

```javascript
const Trigger = ({ onKeyDown, ...props }) => {
  const context = React.useContext(SubmenuContext);

  if (!context) {
    throw new Error("Trigger must be used within a Submenu Context");
  }

  const { buttonId, buttonRef } = context;

  const buttonProps = {
    ...props,
    id: buttonId,
    ref: buttonRef,
    type: "button",
  };

  return <button {...buttonProps} />;
};
```

Next, let's add the appropriate `aria-` attributes. `aria-haspopup='true'` will inform assistive technology that the button controls a submenu. To go one step further, we can also add the `aria-controls` attribute. This informs the screen reader of the exact submenu controlled by the `Trigger`.

Let's also retrieve the `listId` and the `isExpanded` state from the `SubmenuContext`. We'll assign the `listId` to `aria-controls`. Then, all that's left is to assign the `isExpanded` state to the `aria-expanded` attribute. Assistive technology is now aware of which menu the button controls, and whether it is open or closed.

```javascript
const { buttonId, buttonRef, listId, isExpanded } = submenuContext;

const buttonProps = {
  ...props,
  "aria-haspopup": true,
  "aria-expanded": isExpanded,
  "aria-controls": listId,
  "data-menubar-submenu-trigger": "",
  id: buttonId,
  ref: buttonRef,
  type: "button",
};
```
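As an aside, the `aria-controls` pairing above relies on the stable ids generated earlier. If you'd rather not add lodash to the bundle just for `_.uniqueId()`, a module-level counter gives the same per-mount stability. A minimal stand-in (this `uniqueId` is our own sketch, not lodash's):

```javascript
// A tiny stand-in for _.uniqueId: increments a shared counter per call.
let counter = 0;
const uniqueId = (prefix = "") => `${prefix}${++counter}`;

// As above, the ref ensures the id is generated once per mounted Submenu:
// const id = React.useRef(uniqueId("submenu--")).current;
```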
Now, let's add keyboard support to the `Trigger`. The `Trigger` will be a sibling of the Menubar menu items. That means it should perform the same `keyDown` events as the Menubar links. It also requires some additional functionality. Alongside the menu item behavior, the `Trigger` should:

* `ArrowUp`: Open the submenu and focus the last item
* `ArrowDown`: Open the submenu and focus the first item
* `Space`, `Enter`: Open the submenu and focus the first item

To do this, we'll retrieve some methods from the `SubmenuContext` and assign them to the relevant `e.code`. Note that we only want to execute the `e.stopPropagation()` method on unique events. Doing so allows all other events to bubble up to the `Menubar`. This is what prevents us from having to duplicate the menu item's `keydown` events.

```javascript
const { first, last } = submenuContext;

const keyDown = (e) => {
  switch (e.code) {
    case "ArrowUp":
      e.stopPropagation();
      last();
      break;
    case "ArrowDown":
      e.stopPropagation();
      first();
      break;
    case "Enter":
    case "Space":
      e.stopPropagation();
      first();
      break;
    default:
      break;
  }
};

const buttonProps = {
  [ ... ],
  onKeyDown: (e) => {
    onKeyDown?.(e);
    keyDown(e);
  },
};
```

Let's say a submenu is open when the user presses the `ArrowLeft` or `ArrowRight` key. The submenu should close and focus the previous or next `Menubar` menu item. If the root menu item is also a submenu, it should expand the menu but keep focus on the trigger. The `Trigger` achieves this by checking to see if the event originated from a submenu menu item; we'll retrieve the `open` helper from the `SubmenuContext` for this. The check ensures that the menu does not expand when other `keydown` methods focus the trigger.

```javascript
const { open } = submenuContext;

const buttonProps = {
  [ ... ],
  onFocus: (e) => {
    const isFromSubmenu =
      e.relatedTarget?.getAttribute("data-menubar-submenu-menuitem") === "";

    if (isFromSubmenu) {
      open();
    }
  }
};
```

### List

Now that we have a `Trigger`, all we need to do is create a submenu `List`. Like the `Trigger`, we'll throw an error if the `List` component is not used within a `SubmenuContext`.

Let's also define some attributes. First, we'll apply the `role='menu'` attribute and retrieve the `listId` from the `SubmenuContext`. We'll retrieve `isExpanded` from the context and assign it to the `aria-hidden` attribute. This will hide the List from the accessibility tree if the menu is not expanded. Next, let's label the menu by assigning the `buttonId` to the `aria-labelledby` attribute. Finally, we'll supply the menu's direction to assistive technology with the `aria-orientation` attribute.

```javascript
const List = ({ children, ...props }) => {
  const submenuContext = React.useContext(SubmenuContext);

  if (!submenuContext) {
    throw new Error("List must be used within a Submenu Context");
  }

  const { buttonId, listId, listRef, isExpanded } = submenuContext;

  const listProps = {
    ...props,
    "aria-hidden": !isExpanded,
    "aria-labelledby": buttonId,
    "aria-orientation": "vertical",
    "data-menubar-submenu-list": "",
    id: listId,
    ref: listRef,
    role: "menu",
  };

  return (
    <ul {...listProps}>
      {children}
    </ul>
  );
};
```

Now let's add some `keydown` events specific to the `List` component. We'll retrieve the appropriate helpers from the `SubmenuContext`. Again, we only want to stop propagation on events that we do not want to bubble up to the `Menubar`'s `keydown` event.

```javascript
const { close, first, last, move, next, previous, match } = submenuContext;

const keyDown = (e) => {
  switch (e.code) {
    case "ArrowUp":
      e.stopPropagation();
      e.preventDefault();
      previous();
      break;
    case "ArrowDown":
      e.stopPropagation();
      e.preventDefault();
      next();
      break;
    case "ArrowLeft":
      e.preventDefault();
      close();
      break;
    case "ArrowRight":
      e.preventDefault();
      close();
      break;
    case "Home":
      e.stopPropagation();
      e.preventDefault();
      first();
      break;
    case "End":
      e.stopPropagation();
      e.preventDefault();
      last();
      break;
    case "Enter":
    case "Space":
      close();
      break;
    case "Escape":
      e.stopPropagation();
      e.preventDefault();
      close();
      break;
    case "Tab":
      close();
      break;
    default:
      e.stopPropagation();
      match(e);
      break;
  }
};

const listProps = {
  [ ... ],
  onKeyDown: (e) => {
    keyDown(e);
  },
};
```

The `MenuItem` component will work within a `Submenu` for the most part. We'll need to make a couple of changes to ensure that both the `Menubar` and `Submenu` can make use of the component. The first change is to ensure that the correct `menuItems` `Set` receives the `menuItem` node. We can assert that a submenu is an ancestor element if the `MenuItem` can retrieve a `SubmenuContext`. If it retrieves a falsy value, then the `MenuItem` must belong to the `Menubar`.

Let's update the error to check for the `SubmenuContext`. The error should only throw if both contexts do not exist. A `MenuItem` can now be a child of either a `Menubar` or a `Submenu`.

```javascript
const menubarContext = React.useContext(MenubarContext);
const submenuContext = React.useContext(SubmenuContext);

if (!menubarContext && !submenuContext) {
  throw new Error(
    "MenuItem must be used within either a Menubar or Submenu Context"
  );
}
```

There is one final change that we need to make to the `MenuItem` component. Let's revisit the structure of the `Submenu`. The `MenuItem` currently clones its `children` prop and appends extra props. In the example below, we can see that the `MenuItem`'s child is the `Submenu` component. The `Submenu` returns a context provider as its parent element. The provider renders no DOM element of its own, and so the props are not attached to any DOM node.

```html
```

Instead, we would like to append the `MenuItem`'s `childProps` onto the submenu `Trigger`. To do so, the `MenuItem` component will need to check its `children`'s type. If the type is a node, then we clone it and append the props. If the type is a function, then we instead provide the props as an argument in the function signature. This gives us the flexibility to choose which element receives the props, while retaining the convenience of appending them onto the child by default.

```javascript
return (
  {typeof children === "function"
    ? children(childProps)
    : React.cloneElement(children, childProps)}
);

MenuItem.propTypes = {
  children: PropTypes.oneOfType([PropTypes.node, PropTypes.func]).isRequired,
};
```

That leaves us with this flexible React markup:

```html
```

...which compiles into this beautiful, accessible HTML:

```html
```

Now, all that's left is to add extra logic for mouse pointer events, nested submenus, and a full suite of unit tests! These features are out of scope for this article and warrant a follow-up post of their own, but I've included them all in the [Code Sandbox demo](https://codesandbox.io/s/a11y-menubar-ej7kh?file=%2Fsrc%2FApp.js) at the top of the page.

Special thanks to [Jenna Smith](https://twitter.com/jjenzz) for her invaluable contributions to the initial API design. ]]>
    [email protected] (Andrew James) react a11y
<![CDATA[So you want to write a Groom's speech?]]> https://ajames.dev/writing/wedding-speech https://ajames.dev/writing/wedding-speech Mon, 27 Sep 2021 00:00:00 GMT <![CDATA[Advice for the groom to help him create a memorable and meaningful wedding speech.]]> <![CDATA[When it comes to giving a wedding speech, there are a few tips and tricks that I discovered while researching my own. I'll provide some examples below of what I found to be particularly useful, and I'll leave my speech afterwards.

First and foremost, timing is key. 8 minutes is usually the sweet spot; anything shorter may seem unprepared, and anything longer can lose the attention of the crowd. To minimize distractions, hand out flowers and gifts after the speech, rather than during. Don't expect everyone to be hanging on your every word, but try to deny guests the opportunity to check their phones. Be sure to thank the bride, parents, wedding party, guests, and those who couldn't attend. Remember to speak on behalf of both you and your spouse. Speak from the heart. An honest sentiment will always land easier than a punchline.

When it came to structuring my speech, I found that focusing on a couple of key themes helped to keep it flowing smoothly. One of my main inspirations was the rule of three, which I incorporated by talking about the three relationship-defining moments with my wife. This helped to break up the speech and keep the audience engaged. I also structured the speech chronologically, starting with the moment I first thought about marrying her and ending with the day of the wedding. This allowed me to take the audience on a journey through our relationship and highlight defining moments along the way. Each moment was not only a way to tell our story, but also served as an opportunity to thank everyone who has been a part of our journey so far.

Last but not least: practice, practice, practice. Rehearsing your speech repeatedly will help keep you calm and prevent you from relying too heavily on the paper. And while you're speaking, remember to breathe, pause between sections, avoid filler words (ums and ahs), and try to speak slowly. Good luck (and congratulations)!

***

Hi there. Hi. Good evening! My name is Andrew. This evening I have the privilege of giving thanks to those who not only made today extra special, but all the moments leading up to it as well. I also have some gifts that I'd like to hand out to people without whom today would not be possible. I'll come round and hand those out at some point after the speeches.

I'd like to kick us off with a fresh round of applause for my new *father-in-law.* Thank you for the kind words, Billy. I'll do my best to repay them in kind shortly.

12 years, 1 month, 1 week, 5 days, and some change. After all of that I can finally say: Welcome to the wedding! Finally! It's been a long time. And during that time, there were three distinct moments when I knew I was going to marry this beautiful young...ish woman. (It's been twelve years. Time marches on!)

The first time I wanted to marry her was on September 8th, 2011. I was spending my second year of university studying in Boston, Massachusetts. I woke up on my birthday feeling homesick for the first time since I had arrived. At the bottom of the birthday messages on my social media feed was a video from Catriona. She held out a cupcake with a candle, sang happy birthday to me and blew out the candle.
Not in a weird way like 'Happy birthday, Mr President', but in the thoughtful and endearing manner that only she is truly capable of. I must've watched that video a hundred times.

My beautiful bride. Doesn't she look wonderful? Catriona. My first thanks is to you. Thank you for making me the happiest man in the world. I love you. I love you, and I'd choose you. In a hundred lifetimes, in a hundred worlds, in any version of reality, I'd find you, and I'd choose you. I cannot wait to spend the rest of my life with you. For all who are able, I'd like you to join me in raising a glass as I propose my first toast of the evening. **To the beautiful bride!**

The second time I wanted to marry Catriona was at Queen Street station in 2013. She had recently accepted a job in London, and we were about to spend the next two years apart in a long distance relationship, travelling between Glasgow, Edinburgh, and London. Thinking on it, that was probably the make or break moment in our relationship. We did not break. We made it because we're both resilient, we believe in each other, and because we're both as stubborn as they come. We went for lunch on the day she left, and I think for the first time in my life I'd completely lost my appetite. When she disappeared from sight onto the platform, I made a quiet promise to myself that the next time we found ourselves in the same city, I'd never let her out of my sight again.

What I honestly believe got us through that period of our lives was the continual love and support from our closest friends and family. Whether it be a shoulder to cry on when times were tough, lonely lime-lit bus stops at 3am, or simply always being on hand to offer us advice - and having the courage to do so, even if it wasn't necessarily something that we wanted to hear at the time.

Billy and Jennifer. My next thanks is to you. To be honest…I'm just as surprised at how I turned out as you probably are! With that in mind, thank you for welcoming me into your family with open arms from day one, regardless. I've cherished your love and generosity over the last twelve years, and I look forward to creating many more happy memories with you both as a member of your family.

My Maw and Paw. There are no words. Guess I'll just move on. Thank you for the unconditional love and support over all these years - both emotional and financial. Both of which are debts that I'll never be able to truly repay. I love you both dearly. I hope to make you proud as a son, and know that I couldn't have asked for a better set of parents to guide me to where I stand today.

Thanks to the Maid of Honour, and my awesome new sister-in-law, Jenny. Thanks to the bridesmaids Sinead, and my sister Elaine. Thanks to our flower girl and my other sister-in-law, Miss Abigail. Thanks to my best man, my brother, Iain. And finally thanks to my groomsmen and closest friends Euan, Tony, and Kenny. I'd like you to join me in raising a glass as I propose my second toast of the evening. **To the wedding party!**

The third time I wanted to marry Mrs...Catriona…James, was about four and a half hours ago. And when I first caught her eye as she was walking down the aisle towards me, I knew I was about to make the best decision of my life. My dog Kiwi was an *incredibly* close second. But you pipped her at the post, so well done you! What truly made that moment special was being able to share it with each and every one of you here today.
It is my sincerest hope that we are able to share many more of these memories with you all in the years to come. And with that I'd like to give my final thanks and propose a final toast. Thank you for the children our friends and family have recently brought into this world, thank you to all of you here today, and thank you to all of you here with us in spirit. And on behalf of my wife and I … **Slàinte Mhath** Thank you. ]]> [email protected] (Andrew James) Family <![CDATA[So you want to build a PC?]]> https://ajames.dev/writing/build-pc https://ajames.dev/writing/build-pc Thu, 18 Mar 2021 00:00:00 GMT <![CDATA[An overview of the main hardware components you will need to build a desktop PC.]]> <![CDATA[A few people have recently asked for advice in building their own desktop PC, so I thought I'd throw together a quick introduction to the main pieces of hardware you'll need to get yourself up and running. You can find my current setup at [PC Part Picker](https://uk.pcpartpicker.com/user/funkrenegade/saved/7dVgwP).

**Useful Resources**

* [PCPartPicker](https://pcpartpicker.com/)
* [Discord](https://discordapp.com/invite/buildapc)
* [Reddit](https://www.reddit.com/r/buildapc/)

## MOBO

It's essential that your motherboard is compatible with your case and other components. PCPartPicker's [System Builder](https://uk.pcpartpicker.com/list/) will automatically do this for you and filter out any incompatible parts.

For a starter rig, try to buy a motherboard with four RAM slots. This allows you to purchase two sticks to get you started, with the option to add another two further down the line. This is usually cheaper than buying a two-slot board and having to replace both sticks when you upgrade.

The motherboard's form factor denotes the size of the board. You want to match the form factor to your case type. As a general rule of thumb, expensive things come in small packages. It's worth noting that you can put smaller boards (e.g. micro-ATX) into larger cases (ATX); just be sure to check that the board size is included in the case's list of supported motherboard form factors.

## CPU

AMD is traditionally known for having the best bang for buck. It fell behind Intel in recent years, but it's made a real comeback with low/mid tier machines (it was also less affected by the [Meltdown](https://en.wikipedia.org/wiki/Meltdown_\(security_vulnerability\)) / [Spectre](https://en.wikipedia.org/wiki/Spectre_\(security_vulnerability\)) fiascos).

Some chips can have their performance enhanced by overclocking the base clock speed (it's simpler than it sounds). Note that doing so will also require you to purchase an aftermarket cooling system, otherwise the CPU will cook. For a first-time build I'd recommend a factory-overclocked CPU to save yourself the hassle.

## GPU

Try to select a GPU that complements your monitor. If your monitor is 4k, then you're going to want a GPU that is capable of outputting native 4k resolution. A GPU rendering 1080p to a native 1080p monitor looks better than downscaling a 4k monitor to 1080p. If this ends up happening, try to adjust the resolution on the monitor itself to match your game's video settings, otherwise the screen will appear pixelated and/or blurry.

At the time of writing, 4k gaming is still pretty expensive. I'd advise downscaling to 1080p for a budget build, and 1440p if you want to spend a little more. If you mainly play competitive games, a 144Hz 1080p build could be more favourable than running a 60Hz monitor at 1440p. Work out what you need from your PC, and then pick a GPU that meets the criteria.
Ignore anything about running two GPUs (known as SLI / Crossfire). It's just not worth the effort (imo).

## RAM

Not much to talk about here, other than running dual channel will almost always yield better results over single (i.e. using 2 x 8GB sticks rather than 1 x 16GB). As mentioned above, I'd suggest buying a motherboard with four RAM slots and two sticks, then upgrading to four at a later date. Don't mix and match the sticks, i.e. try to use the same brand and frequency across the board.

## SSD

A single SSD is a solid (pun intended) starting point. It's helpful to know that your case will likely have multiple drive bays, so you can always add more if you're struggling for space. Games take up a surprising amount of space, and, y'know, `node_modules`...

To be cost-efficient you could buy a small SSD exclusively for the operating system and a large hard disk drive (HDD) for storage. I've used this approach a couple of times, but generally find it to be more hassle than it's worth.

## PSU

It's tempting to cheap out here, but make sure your power supply has a branded 80+ Gold energy efficiency rating. AC power from the mains needs to be converted to DC, but some of that power is lost as heat in the transfer. The 80+ rating ensures that at least 80% is converted whilst under 100% load. When creating a build, PCPartPicker will calculate the total wattage required to run your machine, which makes it easier to choose the correct wattage for your power supply.

In terms of cabling, you have the choice of modular, semi-modular, and non-modular. Non-modular comes with a bunch of cables permanently connected to the supply that you might not need to use. Semi-modular has the essential cables connected with the option to add more, and modular has no connected cables and allows you to connect exactly what you need. As with everything, beauty comes at a price: non-modular is cheap and ugly (if you have a Perspex case you'll see a clutter of unused wires), full modular is sleek and expensive, semi is half-way happy. Pick your poison.

## Case

More important than people give it credit for. You want two things from a case: good airflow and cable management. A large 140mm intake fan on the front of the case and a smaller 120mm exhaust fan on the rear to expel hot air is a standard setup, and usually enough for most starter builds. Better cable management means better airflow (and aesthetics), so it's worth putting in the time.

![My desktop PC (side-view)](https://i.imgur.com/Tn78QWZ.jpg) ]]> [email protected] (Andrew James) Technology <![CDATA[Multiple Entry Points in Create React App Without Ejecting]]> https://ajames.dev/writing/multiple-entry https://ajames.dev/writing/multiple-entry Tue, 02 Jun 2020 00:00:00 GMT <![CDATA[Create multiple entry points without ejecting from the safety net of Create React App.]]> <![CDATA[I was recently tasked with building two applications in parallel. The first was a commercial web application, and the second acted as a platform to A|B test content messaging, page layouts, and so on. To be ruthlessly efficient, we wanted to reuse the majority of core components and styles for both applications, and interchange any branded assets (images, fonts, colour palette, etc) with a dummy brand using Styled Components' [theming](https://styled-components.com/docs/advanced#theming) capabilities.
The challenge then was to create multiple applications from a single [Create-React-App](https://github.com/facebook/create-react-app) (CRA henceforth), with each having no trace of the other's branded assets in their bundled build files. Thankfully there are a number of ways to achieve this, ranging in complexity and development effort.

[Lerna](https://github.com/lerna/lerna) is a popular tool that maintains multiple packages under a single repository (commonly referred to as a monorepo). It achieves this by linking identical dependencies across its packages, with the ability to publish them either collectively or individually. Lerna would allow us to create a package for each application and one for the core components to share between them. This certainly solves the use case, but requires us to rearchitect the entire codebase and increases the complexity of the build process. Given that there are no immediate plans to add any other containers to the codebase, and that the testing application will likely not be required beyond the initial development phases, we decided the associated overhead for this scenario was overkill.

A leaner approach would be to rewire the codebase with [React App Rewired](https://github.com/timarney/react-app-rewired), which tweaks the CRA build scripts without having to [eject](https://www.notion.so/phunkren/Multiple-entry-points-in-Create-React-App-without-ejecting-8b9f99a040c04225b4f5f2c19022420b#b2e9e1ca8a0f4141bc0992918bae2a92). In our case we would use rewired to alter the application's entry point at build time. A major drawback here is that in doing so we'd break the guarantees that CRA provides by hiding the configuration files from us, and the software itself is only lightly maintained by the community at the time of writing ([customise-cra](https://github.com/arackaf/customize-cra) is a popular package built on top of rewired that supports CRA v2). This solution could be viable on a personal project, but it wasn't something we were willing to depend on for a commercial application.

[Ejecting](https://create-react-app.dev/docs/available-scripts/#npm-run-eject) is a one-way operation that cannot be undone. It allows us complete control of the project's infrastructure by converting the codebase into a standard React application, at the cost of transferring the responsibility of maintaining the exposed configuration to our team. This option is viable in some scenarios, but it's usually considered a last resort due to the increased complexity and associated maintenance cost.

Each of these - and plenty more - are all viable solutions that come with their own set of benefits and drawbacks. However, for this particular scenario we were keen to investigate a simple solution that allows us to work from a single codebase, does not rely on third-party dependencies, and does not eject from the safety net of Create React App.

## **To infinity, or beyond**

Let's look at the default entry point in a Create React Application. The `src/index.js` file imports the `App` container and renders it inside the `div#root` element defined in `public/index.html`.

```javascript
/* src/index.js */
import React from 'react';
import ReactDOM from 'react-dom';
import App from './App';

ReactDOM.render(<App />, document.getElementById('root'));
```

```html
/* public/index.html */
<div id="root"></div>
```

We can introduce multiple points of entry by importing both containers into the `index.js` file and conditionally rendering them based on a constant variable. This allows us to switch between the containers, but comes with a couple of caveats. In order to switch between the builds we'd need to manually update the `isTestEnv` variable. The variable always needs to be correctly set when each of the sites is deployed, otherwise the wrong code would be deployed to the production environment.

```javascript
/* src/index.js */
import React from "react";
import ReactDOM from "react-dom";
import App from "./app";
import Test from './test';

const isTestEnv = true;

if (isTestEnv) {
  ReactDOM.render(<Test />, document.getElementById("root"));
} else {
  ReactDOM.render(<App />, document.getElementById("root"));
}
```

Let's tighten this up by creating a `.env` file with a [custom environment variable](https://create-react-app.dev/docs/adding-custom-environment-variables/). Now we have the ability to choose the build target before running the local development script, and also permanently assign a value to each of our production environments.

```javascript
/* .env */
REACT_APP_BUILD_TARGET=
```

```javascript
/* index.js */
import React from "react";
import ReactDOM from "react-dom";
import { App } from "./App";
import { Test } from './Test';

if (process.env.REACT_APP_BUILD_TARGET === 'test') {
  ReactDOM.render(<Test />, document.getElementById("root"));
} else {
  ReactDOM.render(<App />, document.getElementById("root"));
}
```

We used [Netlify](https://www.netlify.com/) to create a production environment for each application. Both sites will be virtually identical. They'll both point to the same [GitHub repository](https://github.com/phunkren/multiple-entry-points) and have master set as the production branch. The only difference will be their respective `REACT_APP_BUILD_TARGET` environment variable: `test` is assigned to the [testing site](https://multiple-entry-points-test.netlify.app/), and `app` to the [main application](https://multiple-entry-points-app.netlify.app/).

![Netlify: App test environment](https://i.imgur.com/MiVtOXx.jpg)

![Netlify: App production environment](https://i.imgur.com/UObQijK.jpg)

We now have two production environments with the correct build targets, free from human error. All that's left is to ensure that only the code from the defined container appears in the bundled build. Because both containers are statically imported in the application's current `index.js` file, they would both appear in the production build files, regardless of our build target. To remedy this we can use CommonJS to conditionally require the desired container based on the `REACT_APP_BUILD_TARGET` environment variable.

```javascript
/* index.js */
require(process.env.REACT_APP_BUILD_TARGET === "test"
  ? "./test"
  : "./app"
)
```

This works, but setting the environment variable to anything other than `test` will import the main application. We can fix this with an `if/else` statement, and further refine the solution with ES6 [dynamic imports](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/import#Dynamic_Imports).
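A dynamic `import()` returns a promise for the module's namespace object, which is what makes lazy, build-target-specific loading possible. A tiny illustration of the mechanics, reusing this example's `./app.js` module:

```javascript
// import() resolves to the module namespace object; a module's default
// export is available under the "default" key.
import("./app.js").then((module) => {
  const App = module.default;
  // ...App can now be rendered, and "./test.js" never enters the bundle.
});
```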
The `importBuildTarget()` function below will return a promise for each entry point, and a fallback error in the event that the specified build target is not found. Once the `import` promise has resolved, it will render the requested build target with none of the other entry point files in the bundled build. 💥

```javascript
import React from "react";
import ReactDOM from "react-dom";

function importBuildTarget() {
  if (process.env.REACT_APP_BUILD_TARGET === "app") {
    return import("./app.js");
  } else if (process.env.REACT_APP_BUILD_TARGET === "test") {
    return import("./test.js");
  } else {
    return Promise.reject(
      new Error("No such build target: " + process.env.REACT_APP_BUILD_TARGET)
    );
  }
}

// Import the entry point and render its default export
importBuildTarget().then(({ default: Environment }) =>
  ReactDOM.render(<Environment />, document.getElementById("root"))
);
```

## **TL;DR**

You can create multiple entry points in a Create React Application without ejecting by using an environment variable to conditionally import container files. Doing this prevents code from the other containers appearing in the desired bundled build.

### **Resources**

* [GitHub Repository](/8b9f99a040c04225b4f5f2c19022420b)
* [**Entry point A**](https://multiple-entry-points-app.netlify.app/)
* [**Entry point B**](https://multiple-entry-points-test.netlify.app/)

Special thanks to [Stephen Taylor](https://twitter.com/meandmycode) and [Robin Weston](https://twitter.com/robinweston) for their valuable input, and to [Jonathan Hawkes](https://twitter.com/jonathanhawkes) for his solution to all build target files appearing in the bundle. ]]>
    [email protected] (Andrew James) react infra
<![CDATA[So you want to WFH?]]> https://ajames.dev/writing/work-home https://ajames.dev/writing/work-home Sun, 15 Mar 2020 00:00:00 GMT <![CDATA[Tips for maintaining focus and effective communication whilst working from home.]]> <![CDATA[After spending some time last year working from home, I've quickly written up ten rules that kept me sane and helped retain my focus throughout the day.

## 1: Shower & shave.

Dress for the job as normal - it'll make you less reluctant to go on video (see below), and make the day feel more 'real'.

## 2: Leave the house / exercise.

In the morning I either walk for coffee or go for a short cycle, and then walk the dog after lunch. Try not to use any devices; instead use the time to clear your head.

## 3: Have somewhere you can work both standing and sitting.

Alternate. Do not use the sofa; you are mentally conditioned to relax there. Try to use a chair with good back support, and have your monitor at eye level to prevent neck strain.

## 4: Designate an office space.

Ideally a room with minimal distractions that you can leave once the working day is over. Working and relaxing in your lounge will give you cabin fever.

## 5: Conference calls are synchronous; Slack/email is asynchronous.

Get comfortable with both. Try to respond promptly regardless. Over-communicate and document outcomes.

## 6: Always be ready to go on video.

Cameras should be enabled by default on conference calls, and only disabled on poor connections. Try to establish an open channel that acts as a hangout space for people to be part of (even if they're permanently muted).

## 7: Use headphones with a dedicated microphone.

Record yourself speaking into the microphone and listen to it. Is it clear? Does it pick up a load of background noise? Most communication software will allow you to modify your input threshold to fix this. If you're not speaking, mute yourself.

## 8: Be proficient with all of your communication software.

Get comfortable creating and joining rooms, sharing your screen, generating meeting invites, etc.

## 9: Don't work an extra handful of hours each day.

Spend the time you save commuting on yourself, spend time with family, etc.

## 10: Pimp yo ride.

You'll be spending 40+ hours per week working here, so try to make it as inviting as possible. Plants / artwork / lighting can really make a difference. I recently wrote about [my remote setup](https://ajames.dev/writing/work-remote). ]]> [email protected] (Andrew James) Productivity <![CDATA[Building a Responsive Camera Component Using React Hooks]]> https://ajames.dev/writing/responsive-camera https://ajames.dev/writing/responsive-camera Thu, 07 Nov 2019 00:00:00 GMT <![CDATA[Build a responsive frontend camera component using React hooks, and the getUserMedia API]]> <![CDATA[I was recently tasked with building a frontend camera component that allows users to upload images of their identification cards to a backend service. In this post I'll demonstrate how I created the component by explaining how to configure a live media stream and capture a snapshot with React hooks, and how to style and position the elements using Styled Components. As such, the article assumes a working knowledge of functional components in React 16.x and the Styled Components library. Below you can see a demo of the component in action, and feel free to play around with the complete solution on my [Code Sandbox](https://codesandbox.io/s/react-camera-component-with-hooks-mf1i2) as you read along. Enjoy!
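As an aside before we dive in: the finished component hands its consumer the captured image as a `Blob` via an `onCapture` callback. A hypothetical consumer that uploads the snapshot to a backend service might look like this (the endpoint and field name are illustrative, not from the component itself):

```javascript
// Hypothetical consumer of the camera's onCapture callback: upload the
// captured JPEG blob to a backend service as multipart form data.
async function uploadCapture(blob) {
  const body = new FormData();
  body.append("image", blob, "id-card.jpg"); // field name is illustrative

  const response = await fetch("/api/identity-documents", {
    method: "POST",
    body,
  });

  if (!response.ok) {
    throw new Error(`Upload failed: ${response.status}`);
  }
}
```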
![Camera demo](https://i.imgur.com/sfbwXN1.gif)

## Configuration

Let's begin by accessing the browser navigator and invoking the [`getUserMedia()`](https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getUserMedia) method to display a live video feed from the user's camera. Since the component is designed to take photographs of identity cards, we can pass a configuration object that does not require audio and defaults to the rear-facing camera on mobile devices. By passing an options object to the video property, video is assumed to be `true`.

```javascript
const CAPTURE_OPTIONS = {
  audio: false,
  video: { facingMode: "environment" },
};
```

The `getUserMedia()` method requests permission from the user to access the media defined in the configuration. It then returns a promise that will either resolve and return a [`MediaStream`](https://developer.mozilla.org/en-US/docs/Web/API/MediaStream) object that can be stored in local state, or reject and return an error.

Using one of React's [`useEffect()`](https://reactjs.org/docs/hooks-effect.html) hooks, we create and store the requested stream if none exists (i.e. our local state is empty) or return a cleanup function to prevent any potential memory leaks when the component unmounts. The cleanup simply loops through and stops each of the media tracks stored in local state via the [`getTracks()`](https://developer.mozilla.org/en-US/docs/Web/API/MediaStream/getTracks) method.

With the stream stored in local state it can then be bound to a `video` element. Since React [does not support the srcObject attribute](https://github.com/facebook/react/pull/9146#issuecomment-355584767), we use a ref to target the `video` element and assign the stream to its `srcObject` property. With a valid source the video will trigger an `onCanPlay()` event where we can start video playback. This implementation is necessary since the video `autoPlay` attribute does not work consistently across all platforms. We can abstract all of this logic into a custom hook that takes the configuration object as an argument, creates the cleanup function, and returns the stream to the camera component.

```javascript
import { useState, useEffect } from "react";

export function useUserMedia(requestedMedia) {
  const [mediaStream, setMediaStream] = useState(null);

  useEffect(() => {
    async function enableStream() {
      try {
        const stream = await navigator.mediaDevices.getUserMedia(requestedMedia);
        setMediaStream(stream);
      } catch (err) {
        // Removed for brevity
      }
    }

    if (!mediaStream) {
      enableStream();
    } else {
      return function cleanup() {
        mediaStream.getTracks().forEach(track => {
          track.stop();
        });
      };
    }
  }, [mediaStream, requestedMedia]);

  return mediaStream;
}
```

```javascript
import React, { useRef } from 'react';
import { useUserMedia } from './useUserMedia';

const CAPTURE_OPTIONS = {
  audio: false,
  video: { facingMode: "environment" },
};

function Camera() {
  const videoRef = useRef();
  const mediaStream = useUserMedia(CAPTURE_OPTIONS);

  if (mediaStream && videoRef.current && videoRef.current.srcObject === null) {
    videoRef.current.srcObject = mediaStream;
  }

  function handleCanPlay() {
    videoRef.current.play();
  }

  return (
    <video ref={videoRef} onCanPlay={handleCanPlay} autoPlay playsInline />
  );
}
```

## Positioning

With the media stream configured we can start to position the video within the component. To enhance the user experience, the camera feed should resemble an identification card.
This requires the preview container to maintain a landscape ratio regardless of the native resolution of the camera (desktop cameras typically have a square or landscape ratio, while we assume that mobile devices will capture the images in portrait). This is achieved by keeping the ratio >= 1: we always divide the larger dimension by the smaller one. Once the video is available for playback (i.e. when the `onCanPlay()` event is invoked) we can evaluate the native resolution of the camera and use it to calculate the desired aspect ratio of the parent container.

In order for the component to be responsive, it will need to be notified whenever the width of the parent container has changed so that the height can be recalculated. [`React-measure`](https://www.npmjs.com/package/react-measure) exports a `Measure` component that provides the `DOMRect` of a referenced element as an argument in an `onResize()` callback. Whenever the container mounts or is resized, the argument's `contentRect.bounds.width` property is used to determine the container height by dividing it by the calculated ratio.

Similar to before, the ratio calculation is abstracted into a custom hook and returns both the calculated ratio and a function to recalculate it. Since the ratio will remain constant we can utilise React's [`useCallback()`](https://reactjs.org/docs/hooks-reference.html#usecallback) hook to prevent any unnecessary recalculations.

```javascript
import { useState, useCallback } from "react";

export function useCardRatio(initialRatio) {
  const [aspectRatio, setAspectRatio] = useState(initialRatio);

  const calculateRatio = useCallback((height, width) => {
    if (height && width) {
      const isLandscape = height <= width;
      const ratio = isLandscape ? width / height : height / width;

      setAspectRatio(ratio);
    }
  }, []);

  return [aspectRatio, calculateRatio];
}
```

```javascript
import React, { useRef, useState } from 'react';
import Measure from 'react-measure';
import { useUserMedia } from './useUserMedia';
import { useCardRatio } from './useCardRatio';

const CAPTURE_OPTIONS = {
  audio: false,
  video: { facingMode: "environment" },
};

function Camera() {
  const videoRef = useRef();
  const mediaStream = useUserMedia(CAPTURE_OPTIONS);
  const [container, setContainer] = useState({ height: 0 });
  const [aspectRatio, calculateRatio] = useCardRatio(1.586); // default card ratio

  if (mediaStream && videoRef.current && videoRef.current.srcObject === null) {
    videoRef.current.srcObject = mediaStream;
  }

  function handleResize(contentRect) {
    setContainer({
      height: Math.round(contentRect.bounds.width / aspectRatio)
    });
  }

  function handleCanPlay() {
    calculateRatio(videoRef.current.videoHeight, videoRef.current.videoWidth);
    videoRef.current.play();
  }

  return (
    {({ measureRef }) => (
    )}
  );
}
```

The current solution works well if the video element is smaller than the parent container, but in the event that the native resolution is larger it will overflow and cause layout issues. Adding `overflow: hidden` & `position: relative` to the parent and absolutely positioning the video will prevent the break in layout, but the video will appear off-centre to the user. To compensate for this we centre the feed by calculating axis-offsets that subtract the dimensions of the video element from the parent container and halve the resulting value.

```javascript
const offsetX = Math.round((videoWidth - containerWidth) / 2);
const offsetY = Math.round((videoHeight - containerHeight) / 2);
```

We only want to apply the offsets in the event that the video *(v)* is larger than the parent container *(c)*. We can create another custom hook that uses an effect to evaluate whether an offset is required and returns the updated results whenever any of the values change. At this point we now have a responsive live feed that roughly resembles an identification card and is correctly positioned within the parent container.

```javascript
import { useState, useEffect } from "react";

export function useOffsets(vWidth, vHeight, cWidth, cHeight) {
  const [offsets, setOffsets] = useState({ x: 0, y: 0 });

  useEffect(() => {
    if (vWidth && vHeight && cWidth && cHeight) {
      const x = vWidth > cWidth ? Math.round((vWidth - cWidth) / 2) : 0;
      const y = vHeight > cHeight ? Math.round((vHeight - cHeight) / 2) : 0;

      setOffsets({ x, y });
    }
  }, [vWidth, vHeight, cWidth, cHeight]);

  return offsets;
}
```

```javascript
import React, { useRef, useState } from 'react';
import Measure from 'react-measure';
import { useUserMedia } from './useUserMedia';
import { useCardRatio } from './useCardRatio';
import { useOffsets } from './useOffsets';

const CAPTURE_OPTIONS = {
  audio: false,
  video: { facingMode: "environment" },
};

function Camera() {
  const videoRef = useRef();
  const mediaStream = useUserMedia(CAPTURE_OPTIONS);
  const [container, setContainer] = useState({ height: 0, width: 0 });
  const [aspectRatio, calculateRatio] = useCardRatio(1.586);
  const offsets = useOffsets(
    videoRef.current && videoRef.current.videoWidth,
    videoRef.current && videoRef.current.videoHeight,
    container.width,
    container.height
  );

  if (mediaStream && videoRef.current && videoRef.current.srcObject === null) {
    videoRef.current.srcObject = mediaStream;
  }

  function handleResize(contentRect) {
    setContainer({
      height: Math.round(contentRect.bounds.width / aspectRatio),
      width: contentRect.bounds.width
    });
  }

  function handleCanPlay() {
    calculateRatio(videoRef.current.videoHeight, videoRef.current.videoWidth);
    videoRef.current.play();
  }

  return (
    {({ measureRef }) => (
    )}
  );
}
```

## Capture / Clear

To emulate a camera snapshot, a `canvas` element is positioned on top of the video with matching dimensions. Whenever the user initiates a capture, the current frame in the feed will be drawn onto the canvas and cause the video to become temporarily hidden. This is achieved by creating a two-dimensional rendering context on the canvas, drawing the current frame of the video as an image, and then exporting the resulting `Blob` as an argument in an `onCapture()` callback.

```javascript
function handleCapture() {
  const context = canvasRef.current.getContext("2d");
  context.drawImage(image, sx, sy, sWidth, sHeight, dx, dy, dWidth, dHeight);
  canvasRef.current.toBlob(blob => onCapture(blob), "image/jpeg", 1);
}
```

The arguments supplied to the [`drawImage()`](https://developer.mozilla.org/en-US/docs/Web/API/CanvasRenderingContext2D/drawImage) method are broadly split into three groups: the source image, the source image parameters *(s)*, and the destination canvas parameters *(d)*. We need to consider the potential axis-offsets when drawing the canvas, as we only want to snapshot the section of the video feed that is visible from within the parent container. We'll add our offsets to the source image's starting axis coordinates and use the parent container's width and height for both the source and destination boundaries. Since we want to draw our snapshot onto the entire canvas, no destination offsets are required.

```javascript
context.drawImage(
  videoRef.current, // source
  offsets.x,        // sx
  offsets.y,        // sy
  container.width,  // sWidth
  container.height, // sHeight
  0,                // dx
  0,                // dy
  container.width,  // dWidth
  container.height  // dHeight
);
```

To discard the image, the canvas is reverted to its initial state via a `handleClear()` callback. Calling `handleClear()` will retrieve the same drawing context instance that was previously returned in the `handleCapture()` function. We can then pass the canvas' width and height to the context [`clearRect()`](https://developer.mozilla.org/en-US/docs/Web/API/CanvasRenderingContext2D/clearRect) function to convert the requested pixels to transparent and resume displaying the video feed.
```javascript
function handleClear() {
  const context = canvasRef.current.getContext("2d");
  context.clearRect(0, 0, canvasRef.current.width, canvasRef.current.height);
  onClear();
}
```

```javascript
import React, { useRef, useState } from 'react';
import Measure from 'react-measure';
import { useUserMedia } from './useUserMedia';
import { useCardRatio } from './useCardRatio';
import { useOffsets } from './useOffsets';

const CAPTURE_OPTIONS = {
  audio: false,
  video: { facingMode: "environment" },
};

function Camera() {
  const videoRef = useRef();
  const mediaStream = useUserMedia(CAPTURE_OPTIONS);
  const [container, setContainer] = useState({ height: 0, width: 0 });
  const [aspectRatio, calculateRatio] = useCardRatio(1.586);
  const [isCanvasEmpty, setIsCanvasEmpty] = useState(true);
  const offsets = useOffsets(
    videoRef.current && videoRef.current.videoWidth,
    videoRef.current && videoRef.current.videoHeight,
    container.width,
    container.height
  );

  if (mediaStream && videoRef.current && videoRef.current.srcObject === null) {
    videoRef.current.srcObject = mediaStream;
  }

  function handleResize(contentRect) {
    setContainer({
      height: Math.round(contentRect.bounds.width / aspectRatio),
      width: contentRect.bounds.width
    });
  }

  function handleCanPlay() {
    calculateRatio(videoRef.current.videoHeight, videoRef.current.videoWidth);
    videoRef.current.play();
  }

  function handleCapture() {
    const context = canvasRef.current.getContext("2d");

    context.drawImage(
      videoRef.current,
      offsets.x,
      offsets.y,
      container.width,
      container.height,
      0,
      0,
      container.width,
      container.height
    );

    canvasRef.current.toBlob(blob => onCapture(blob), "image/jpeg", 1);
    setIsCanvasEmpty(false);
  }

  function handleClear() {
    const context = canvasRef.current.getContext("2d");
    context.clearRect(0, 0, canvasRef.current.width, canvasRef.current.height);
    onClear();
    setIsCanvasEmpty(true);
  }

  return (
    {({ measureRef }) => (
    )}
  );
}
```

## Styling

With the ability to capture an image, all that remains is to implement a card-aid overlay, a flash animation on capture, and style the elements using [Styled Components](https://www.styled-components.com/).

The overlay component is a white rounded border layered on top of the video to encourage the user to fit their identification card within the boundary, with an outer box-shadowed area acting as a safe-zone to prevent clipping. The flash component has a solid white background and is also layered on top of the video, but will initially appear hidden due to a default opacity of 0. The keyframes animation triggers whenever the user captures an image, which briefly sets the opacity to 0.75 before quickly reducing it back to zero to emulate a flash effect.

Adding a local state variable, `isVideoPlaying`, keeps the video and overlay elements hidden until the camera begins streaming. We can pass the resolution of the camera as props to the parent container to determine its maximum width and height, and finally add `display: none` to `-webkit-media-controls-play-button` to hide the video's play symbol on iOS devices. 💥

```javascript
import styled, { css, keyframes } from 'styled-components';

const flashAnimation = keyframes`
  from {
    opacity: 0.75;
  }
  to {
    opacity: 0;
  }
`;

export const Wrapper = styled.div`
  display: flex;
  flex-flow: column;
  align-items: center;
  width: 100%;
`;

export const Container = styled.div`
  position: relative;
  overflow: hidden;
  width: 100%;
  max-width: ${({ maxWidth }) => maxWidth && `${maxWidth}px`};
  max-height: ${({ maxHeight }) => maxHeight && `${maxHeight}px`};
`;

export const Canvas = styled.canvas`
  position: absolute;
  top: 0;
  left: 0;
`;

export const Video = styled.video`
  position: absolute;

  &::-webkit-media-controls-play-button {
    display: none !important;
    -webkit-appearance: none;
  }
`;

export const Overlay = styled.div`
  position: absolute;
  top: 20px;
  right: 20px;
  bottom: 20px;
  left: 20px;
  box-shadow: 0px 0px 20px 56px rgba(0, 0, 0, 0.6);
  border: 1px solid #ffffff;
  border-radius: 10px;
`;

export const Flash = styled.div`
  position: absolute;
  top: 0;
  right: 0;
  bottom: 0;
  left: 0;
  opacity: 0;
  background-color: #ffffff;

  ${({ flash }) => {
    if (flash) {
      return css`
        animation: ${flashAnimation} 750ms ease-out;
      `;
    }
  }}
`;

export const Button = styled.button`
  width: 75%;
  min-width: 100px;
  max-width: 250px;
  margin-top: 24px;
  padding: 12px 24px;
  background: silver;
`;
```

```javascript
import React, { useState, useRef } from "react";
import Measure from "react-measure";
import { useUserMedia } from "../hooks/use-user-media";
import { useCardRatio } from "../hooks/use-card-ratio";
import { useOffsets } from "../hooks/use-offsets";
import {
  Video,
  Canvas,
  Wrapper,
  Container,
  Flash,
  Overlay,
  Button
} from "./styles";

const CAPTURE_OPTIONS = {
  audio: false,
  video: { facingMode: "environment" }
};

export function Camera({ onCapture, onClear }) {
  const canvasRef = useRef();
  const videoRef = useRef();

  const [container, setContainer] = useState({ width: 0, height: 0 });
  const [isVideoPlaying, setIsVideoPlaying] = useState(false);
  const [isCanvasEmpty, setIsCanvasEmpty] = useState(true);
  const [isFlashing, setIsFlashing] = useState(false);

  const mediaStream = useUserMedia(CAPTURE_OPTIONS);
  const [aspectRatio, calculateRatio] = useCardRatio(1.586);
  const offsets = useOffsets(
    videoRef.current && videoRef.current.videoWidth,
    videoRef.current && videoRef.current.videoHeight,
    container.width,
    container.height
  );

  if (videoRef.current && videoRef.current.srcObject ===
null) { videoRef.current.srcObject = mediaStream; } function handleResize(contentRect) { setContainer({ width: contentRect.bounds.width, height: Math.round(contentRect.bounds.width / aspectRatio) }); } function handleCanPlay() { calculateRatio(videoRef.current.videoHeight, videoRef.current.videoWidth); setIsVideoPlaying(true); videoRef.current.play(); } function handleCapture() { const context = canvasRef.current.getContext("2d"); context.drawImage( videoRef.current, offsets.x, offsets.y, container.width, container.height, 0, 0, container.width, container.height ); canvasRef.current.toBlob(blob => onCapture(blob), "image/jpeg", 1); setIsCanvasEmpty(false); setIsFlashing(true); } function handleClear() { const context = canvasRef.current.getContext("2d"); context.clearRect(0, 0, canvasRef.current.width, canvasRef.current.height); setIsCanvasEmpty(true); onClear(); } if (!mediaStream) { return null; } return ( {({ measureRef }) => ( setIsFlashing(false)} /> {isVideoPlaying && ( )} )} ); } ``` ## Conclusion For the moment the component serves to provide images as proof of authenticity, and is used alongside a form where users manually input field information from the identification cards. I'm hoping to follow this post up with an integration with [OCR technology](https://en.wikipedia.org/wiki/Optical_character_recognition) to scrape the fields from the images and remove the requirement for the form altogether. Thanks for reading along, and special thanks to [Pete Correia](https://twitter.com/petecorreia) for taking the time to review the component code. ]]>
    [email protected] (Andrew James) react css
<![CDATA[Mastering Modals in React with Context and Portals]]> https://ajames.dev/writing/custom-modal https://ajames.dev/writing/custom-modal Tue, 16 Jul 2019 00:00:00 GMT <![CDATA[Create modals, and trigger them using React context, hooks, and portals]]> <![CDATA[Modals are a useful tool for displaying information on top of your application, and are often used for notifications, alerts, or standalone dialogs such as registration or login forms. Before building a custom modal, it's a good idea to check if there are any pre-existing solutions that meet your needs ([Reach UI's Dialog](https://reacttraining.com/reach-ui/dialog/) and [react-modal](http://reactcommunity.org/react-modal/) are both popular options). If you don't find a suitable solution, let's explore creating a bespoke modal component in React.

To get started, we'll create a basic modal that appears and disappears based on some local state in our React app. The process is simple: when a button in the root of the app is clicked, the modal will appear. Then, when the button inside the modal is clicked, the modal will close. Let's start building!

[embed](https://codesandbox.io/embed/zen-pare-76gl3?autoresize=1\&fontsize=14\&hidenavigation=1\&theme=dark\&view=preview)

If you want to trigger the modal from a nested component rather than just from within `App`, you can pass the `setState` action `setIsModalOpen` as a prop. Then, you can call this action as a callback when a button within the nested component is clicked, which will trigger the modal.

[embed](https://codesandbox.io/embed/peaceful-bardeen-7jexx?autoresize=1\&fontsize=14\&hidenavigation=1\&theme=dark\&view=preview)

This works for a single level of nesting, but it probably won't scale very well. We could keep passing the callback down through the components, but that could get a bit tedious and create a lot of extra code that's tough to manage. Enter [React Context](https://reactjs.org/docs/context.html). Context allows you to store and access a value anywhere in your React app. You can use a Provider to store the value and a Consumer to access it, and the Consumer will search up the component tree for the first Provider that matches its context. This is useful when you want to trigger the modal from a nested component, rather than just from the top-level App component. You can use the `useContext` hook to consume the value in a nested component.

Let's wrap the previous example with a Provider, set the `setIsModalOpen` callback as its value, then utilise the [useContext()](https://reactjs.org/docs/hooks-reference.html#usecontext) hook to consume it in a nested component.

[embed](https://codesandbox.io/embed/sweet-brown-yn44i?autoresize=1\&fontsize=14\&hidenavigation=1\&theme=dark\&view=preview)

We now have a modal that can be opened from anywhere in our app, but it can only display static content for now. If we want it to render dynamic content, we'll need to refactor it to accept children. Plus, since React's data flow only goes one way, we'll need to find a good way to pass data from a nested component back up to the modal at the root level. My former colleague, [Jenna Smith](https://twitter.com/jjenzz), a highly skilled front-end developer, suggested using [React Portal](https://reactjs.org/docs/portals.html) as a solution. Portals are designed to pass children to a DOM node outside the hierarchy of the parent component, which is perfect for our needs. To use a portal, we'll need to provide two arguments: a React element (for our dynamic content) and a DOM element to inject the content into (the modal's container). This should allow us to effectively pass the data from the nested component to the modal at the root level.
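Here's a minimal sketch of those two arguments in action, assuming an element like `<div id="modal-root"></div>` exists in the page's HTML (the final version below swaps this hard-coded node for a ref shared through context):

```javascript
import React from "react";
import ReactDOM from "react-dom";

// Renders children into #modal-root, outside the parent component's DOM
// hierarchy, while remaining part of the React tree for context and events.
function Modal({ children }) {
  const modalRoot = document.getElementById("modal-root"); // assumed node
  return modalRoot ? ReactDOM.createPortal(children, modalRoot) : null;
}
```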
To use a portal, we'll need to provide two arguments: a React element (for our dynamic content) and a DOM element to inject the content into (the modal's container). This should allow us to effectively pass the data from the nested component to the modal at the root level. [embed](https://codesandbox.io/embed/7w6mq72l2q?autoresize=1\&fontsize=14\&hidenavigation=1\&theme=dark\&view=preview) As demonstrated in the sandbox, Jenna created two functional components to provide dynamic content for the modal. The `ModalProvider` component includes a DOM element with a ref attached, as well as a context provider that wraps the entire app and distributes the ref's current value to any relevant consumers within it.
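As a rough sketch of the idea (the sandbox contains the full implementation; the `ModalRefContext` name and the callback-ref detail here are illustrative):

```javascript
import React, { useState, useContext, createContext } from "react";
import ReactDOM from "react-dom";

// Shares the modal's container element with the rest of the app.
const ModalRefContext = createContext(null);

function ModalProvider({ children }) {
  // Storing the node in state (via a callback ref) re-renders
  // consumers once the container actually exists in the DOM.
  const [modalRef, setModalRef] = useState(null);

  return (
    <ModalRefContext.Provider value={modalRef}>
      {children}
      <div ref={setModalRef} />
    </ModalRefContext.Provider>
  );
}

function Modal({ children }) {
  const modalRef = useContext(ModalRefContext);

  // Inject the children into the container via a portal, instead of
  // mounting them where the <Modal> appears in the component tree.
  return modalRef ? ReactDOM.createPortal(children, modalRef) : null;
}
```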
The second component is the modal itself. Each time a `Modal` component is rendered, it will try to retrieve the `modalRef` element using `useContext()`. If the ref exists, the component will create a React portal and inject the modal's children into the ref element, rather than mounting them in their expected position in the DOM tree. With this approach, the `Modal` component can now be used anywhere within the `ModalProvider` to display dynamic content on top of the app. One thing to keep in mind is that the body will still be able to scroll on iOS when the modal is mounted. I highly recommend reading Will Po's article on [body scroll lock](https://medium.com/jsdownunder/locking-body-scroll-for-all-devices-22def9615177) for potential solutions to this issue. ]]> [email protected] (Andrew James) react <![CDATA[So you want to buy a puppy? ]]> https://ajames.dev/writing/buy-puppy https://ajames.dev/writing/buy-puppy Mon, 01 Jul 2019 00:00:00 GMT <![CDATA[Personal experiences of the responsibilities and challenges of owning a puppy.]]> <![CDATA[A few people have recently asked me whether or not to buy a puppy, so I figured I’d collect my thoughts here as a point of reference. I mention advice from friends, family, and trainers throughout, but ultimately all opinions are my own, formulated from experience with my dog only. ## **Bio** * **Name**: Kiwi * **Breed**: Cavapoo (KCS / Toy Poodle cross) * **Age**: 4.5 months (at time of writing) * **Status**: Legend ## **Toilet training** The puppy will pee and shit on the carpet. Make peace with it. After speaking with a trainer, you’re looking at 3 months best case / 6 months worst case to toilet train them. You have to take them out every 30-40 minutes at the start. It’s difficult to focus on anything because you always have one eye on the dog. ## **Crate training** This is probably the toughest part in the beginning, but so worth it. It takes time to get them comfortable just walking into the crate, then being in it whilst you are there with them, until finally you can leave the room. Be prepared to get up 2-3 times every night for bathroom breaks. And fuck me, that whining is heart-wrenching. It will make you argue. ## **Teething** They bite everything. It gets old. Quick. Don’t think they’re going to sit in your lap and cuddle you. Plants, carpet, clothes, fingers; they’ll chew anything for the first few months until their adult teeth grow in. Socialising with other dogs is good for this, as the other dogs will quickly let your puppy know if they bite too hard. ## **Leaving her alone** Some people say to leave and let them cry it out. Our trainer advised the complete opposite: gradually increase the time / distance you are away from them to prevent separation anxiety from developing. This has definitely worked for us, but it meant Cat and I were usually in different rooms until the puppy could be left alone crated in her room. ## **Walks** You can’t take them out for exercise until they have their second injection (~12 weeks), so they have loads of energy and nowhere to really expel it. The cabin fever can really set in if you don’t like being stuck at home for long periods of time. Thankfully mental stimulation also tires them out, so simple training (e.g. responding to their name) can really help out here. They pull on the leash, eat everything on the ground, try to jump up on everyone and chase after everything.
It takes time to desensitise them to the environment and get them to stick with you. Good luck recalling them when they get hooked on a scent! ## **Cost** We've probably spent ~£4k (incl. sale, equipment, vet, toys, food, and training) since we brought her home. ## **Training** There’s loads to train. Toilet, crate, tricks, walking (on/off leash), recalls, scavenging, socialisation, etc. You also have to train them in different environments; teaching her to sit in the house is not the same as in the park. It apparently becomes harder to train them once they reach adulthood, so you really have to commit to all of the training early on. Trying to fit all of that in with a full work schedule and a semblance of a social life is exhausting. ## **Ageing** I have a couple of friends with slightly older dogs. The general curve seems to be: the first week is the honeymoon phase, 1 week-3 months is a riot, 3-6 months is when you start having fun, 6-18 months is adolescence when it goes back to being a riot, and if you can make it past that and you’ve put the time into training them, you’ll have a chilled, obedient adult dog. ## **How my life has changed** I’m now up at 6am every morning. Casual beers now need planning. We’ve been on a handful of date nights since we got her. Everything revolves around her for the moment, and all I seem to talk about is dogs. I can’t imagine life without her now, and it’s amazing how quickly you become attached to them. It’s incredibly rewarding when they start responding to your teachings and start to become independent. It’s also brought Cat and I closer, and given us a real insight into parenthood. Just make sure you’re ready to commit to 12-18 months of training and tough times to get the payoff afterwards. My friend gave some advice the day before we went to collect Kiwi: “I love \[their dog] so much. I would literally peel your face off with a spoon if it meant she would be happy. But \*\*\*\* me she can be a real \*\*\*\* sometimes.” I now fully understand this statement. ]]> [email protected] (Andrew James) Pets Family <![CDATA[Streamlining State Management in Storybook with React]]> https://ajames.dev/writing/storybook-state https://ajames.dev/writing/storybook-state Sat, 29 Jun 2019 00:00:00 GMT <![CDATA[Simplify state management in Storybook with React.]]> <![CDATA[Storybook is an amazing tool for developing UI components in isolation. One of my current projects is a large form with controlled components that rely on their parent container as the source of truth. While Storybook is great for testing individual component state, I found myself writing repetitive code in each story to pass state to a parent container.

```javascript
/* src/stories/index.js */
import React, { useState } from 'react';
import { storiesOf } from '@storybook/react';

storiesOf('Input', module).add('controlled', () => {
  // Render-callback parent that owns the state for this story
  function Parent({ children, ...props }) {
    const [state, setState] = useState({});
    return <div {...props}>{children(state, setState)}</div>;
  }

  return (
    <Parent>
      {(state, setState) => (
        <input
          value={state.value || ''}
          onChange={e => setState({ value: e.target.value })}
        />
      )}
    </Parent>
  );
});
```

This parent component could easily be abstracted and imported into relevant stories, but since each story is effectively a render function, you would ideally pass the state variables through as arguments, i.e.

```javascript
/* src/stories/index.js */
import React, { useState } from "react";
import { storiesOf } from "@storybook/react";

storiesOf("Input", module).add("controlled", (state, setState) => (
  <input
    value={state.value || ''}
    onChange={e => setState({ value: e.target.value })}
  />
));
```

To solve this, I created two components and a custom decorator in the .storybook/config.js file. The first component is a function-as-child that acts as a render callback, emulating the parent component from my project. The second is a presentation component that receives state as a prop and displays the current value below each story. The custom decorator adds these components and state variables to each story, where the components wrap the story and the state values are passed as arguments.

```javascript
/* .storybook/config.js */
import React, { useState } from "react";
import { configure, addDecorator } from "@storybook/react";

function loadStories() {
  require("../src/stories/index.js");
}

// Component 1: render callback that hoists the story's state
function Stage({ children, ...props }) {
  const [state, setState] = useState({});
  return <div {...props}>{children(state, setState)}</div>;
}

// Component 2: presentation component that displays the hoisted state
function State({ state, ...props }) {
  return (
    <div {...props}>
      Parent state: {JSON.stringify(state)}
    </div>
  );
}

// Custom decorator: wraps every story and passes the state values through
addDecorator(story => (
  <Stage>
    {(state, setState) => (
      <>
        {story(state, setState)}
        <State state={state} />
      </>
    )}
  </Stage>
));

configure(loadStories, module);
```

This allows each component to easily set and retrieve hoisted state values from the story itself, without any extra code. 💥

```javascript
/* src/stories/index.js */
import React from 'react';
import { storiesOf } from '@storybook/react';

storiesOf('Input', module)
  // stateless
  .add('uncontrolled', () => <input />)
  // stateful
  .add('controlled', (state, setState) => (
    <input
      value={state.value || ''}
      onChange={e => setState({ value: e.target.value })}
    />
  ));
```

You can find the repository on [GitHub](https://github.com/phunkren/storybook-state), and here’s a quick look at it in action: ![1.1: Test both controlled and uncontrolled inputs in Storybook](https://i.imgur.com/tqd1QZR.gif) ]]>
    [email protected] (Andrew James) react infra
    <![CDATA[Elevating Your Tech Meetup with Live Streaming]]> https://ajames.dev/writing/livestream-meetup https://ajames.dev/writing/livestream-meetup Mon, 05 Nov 2018 00:00:00 GMT <![CDATA[Enhance the live streaming experience of your tech meetup and engage remote attendees]]> <![CDATA[One of my favourite parts of my job is being surrounded by creative professionals who are eager to openly discuss ideas and expand our collective understanding. Some of these professionals have been in the industry for over ten years and, while they have developed their own opinions, they remain open to outside ideas and criticism to continually improve their professional perspective. About a year ago, a group of developers organized the company's first "Egg Talk", an in-house meet-up where developers could discuss any topic they found interesting, such as project challenges, office diversity, emerging technologies, and even banana peeling techniques. These talks were well received, and now events are scheduled at least once a month with several speakers at each event. Recently, we have also started welcoming talks from other business disciplines. Remote work is becoming increasingly common in digital professions, including ours. As a result, we decided to extend our meet-up to employees who were unable to attend in person. Initially, we recorded the talks using a laptop placed on a coffee table facing the presenter and a television, and then uploaded the recordings to YouTube for later viewing. While this solution allowed us to share the talks with a wider audience, it also had its fair share of issues. ![1.1: One of the original Egg talk recordings](https://i.imgur.com/6MhtDD7.jpg) The omnidirectional microphone on the laptop made it difficult to clearly capture the presenter's voice and was prone to picking up background noise. Additionally, the glare on the television and the inconsistency in refresh rate between the television and the laptop's camera made it hard to see the presenter's slides. The talks also tended to last over two hours, which meant that it took several hours to upload and process the files for remote viewers to watch. Ideally, those watching remotely should have the same level of clarity as those in the room. However, the challenge is to improve the quality of the live stream without compromising the experience for those attending in person. One option is to invest in equipment such as a dedicated camera, wireless lapel microphones for each speaker, and a digital mixing board to control and monitor the hardware. However, it's difficult to justify this investment without a baseline for comparison or a clear financial return for the business. Therefore, we decided to explore ways to progressively enhance the stream using existing technology as a baseline for determining the potential return on any future upgrades. Since our office heavily uses Apple products, we decided to try leveraging that technology. We used four Apple devices to set up the live stream: a MacBook Pro running broadcasting software, an Apple TV displaying the presenter's slides, an iPhone serving as the presenter's microphone, and another iPhone capturing a panoramic shot of the room. We used third-party broadcasting software called Open Broadcaster Software (OBS) because it is currently not possible to live stream via QuickTime. However, OBS does not support the Apple TV as a direct input or the iPhone's microphone or camera without additional third-party support. 
To use the iPhone's microphone as an audio input, we had to hardwire it to the MacBook Pro and enable it as an input device in the system preferences. We also needed to download an app called VonBruno Microphone from the App Store to enable the microphone on the iPhone. To use the iPhone's camera as a video input, we downloaded an app called EpocCam HD from the App Store and used the EpocCam Viewer software to connect to it as a video capture device in OBS. In OBS, we created two types of scenes: input scenes and broadcasting scenes. An input scene contains one of the hardware inputs and any related sources, which can then be imported into a broadcasting scene. A broadcasting scene is what the end user will see on the live stream and is composed of sources and input scenes. By creating a scene for each input, we were able to group and control the sources from a single source of truth, while still being able to distribute the group across multiple scenes. ![1.2: A list of scenes and sources in OBS (input scenes as sources are highlighted green)](https://i.imgur.com/oUCr7yw.jpg) In OBS, we created a list of scenes and designated the "Primary" scene as the one that would be broadcast. The Primary scene included the live camera shot, the presenter's slides, and the microphone. We also added a green border and a small logo to the live camera view and used a placeholder image of the company logo to replace the background shot when the slides were not available. Additionally, we created a "Placeholder" scene to be displayed at the beginning and end of the stream, or during intermissions. This scene consisted of a simple background image of our Egg poster with no audio or visual inputs. ![1.3: Broadcasting scene overlay in OBS](https://i.imgur.com/E41JTDl.jpg) The final step was to choose a platform to host the live stream. We decided to use YouTube and had to enable our account for live streaming, which was a one-time process that took about 24 hours for YouTube to approve. Once our account was enabled, we scheduled an event for the upcoming Egg talk. Events provide more control over the broadcast, such as generating an event URL that we can share with invited viewers, displaying a countdown to the start time if the URL is accessed before the event goes live, and allowing us to restrict access to the event. We chose to host our talks privately for now, which means the stream is only accessible to the host and those invited and will not appear in search results or public playlists. We also configured default event settings and a reusable stream key, which allows us to use presets for future events, including privacy options, video categorization, and advanced broadcasting configurations (such as stream optimizations, licensing, and rights). This ensures consistency across the channel and minimizes setup before future events go live. To connect the event to our broadcasting software, we simply pasted the reusable stream key and the server URL from the event into the OBS stream settings. It's important to keep the stream key secret and only share it with trusted individuals. ![1.4: YouTube encoder settings](https://i.imgur.com/qar9gqx.jpg) ![1.5: OBS stream settings](https://i.imgur.com/Dc14Jt0.jpg) While we were able to achieve our desired result, there were a few compromises. For example, a thick red border appeared around the screen when a device was recording the output from the Apple TV, which was visible to those in the room but not on the live stream.
There was also a notification in the top right corner of the screen indicating which device was recording the output, which was visible on both the television and the live stream. Additionally, the camera, which was connected over Wi-Fi through a third-party application, introduced about two seconds of latency between the live camera shot and the presenter's voice and slides, which reduced the overall quality of the stream. Despite these issues, the stream still provided a significant improvement in quality compared to the initial talks. The event is now immediately available to anyone with permission and access to the link, and YouTube automatically archives the broadcast for later viewing. The presenter's voice is clearer, the background noise has been reduced, the slides are the main focus and can be seen clearly, and the camera gives viewers a sense of the room. With some modest upgrades, we believe it is possible to make further refinements, although the benefits of doing so remain to be seen. ]]> [email protected] (Andrew James) dev rel technology <![CDATA[Bridging the Gap: Improving Collaboration Between Engineering and Design]]> https://ajames.dev/writing/grid-overlay https://ajames.dev/writing/grid-overlay Sun, 13 May 2018 00:00:00 GMT <![CDATA[Creating a visual grid overlay in a React app to bridge the gap between engineering and design]]> [email protected] (Andrew James) css react