
JavaScript - Node.js Driver - Example Test Suite

This project contains the infrastructure to test and extract Node.js Driver code examples for use across MongoDB documentation.

The structure of this Node.js project is as follows:

  • /examples: This directory contains example code, marked up with Bluehawk, that is output to the external /content/code-examples/tested/javascript/driver directory when you run the snip script.
  • /tests: This directory contains the test infrastructure that runs the tests by invoking the example code.

Overview

  1. Set up environment
  2. Create a new code example
  3. Add a test for a new code example
  4. Run tests locally (optional) or in CI
  5. Snip code examples for inclusion in docs

Refer to the README at the root of the code-example-tests directory for information about how to use the tested code examples in your documentation project after you complete the snip step.

Set up environment

This test suite requires you to have Node.js v20.9.0 or newer installed. If you do not yet have Node installed, refer to the Node.js installation page for details. We recommend using Node Version Manager (NVM) to manage your Node versions.

From the root of the /javascript/driver directory, run the following command to install dependencies:

npm install

To create a new code example

To create a new code example:

  1. Create a code example file
  2. Create an output file (optional)
  3. Format the code example files
  4. Add a corresponding test - refer to the instructions below for testing
  5. Run the snip command to move the tested code to a docs directory
  6. Use the code example in a literalinclude or io-code-block in your docs set

If you're not comfortable adding a test, create this as an untested code example in your docs project's source/code-examples directory. Then, file a DOCSP ticket with the component set to DevDocs to request the DevDocs team move the file into this test project and add a test.

Create a code example file

Create a new file in the /examples directory. Organize these examples to group related concepts - e.g. aggregation/pipelines or crud/insert. Because the goal is to single-source code examples across different docs projects, avoid matching a specific docs project's page structure; instead, group code examples by related concept or topic for easy reuse.

Refer to the examples/example-stub.js file for an example you can copy/paste to stub out your own example.

Create an output file

If the output from the code example will be shown in the docs, create a file to store the output alongside the example. For example:

  • aggregation/pipelines/filter/tutorial.js
  • aggregation/pipelines/filter/tutorial-output.sh

Format the code example files

This project uses Prettier to enforce style formatting for the files in the examples directory. A GitHub workflow checks formatting automatically when you add or change any files in this directory. You can check and fix formatting manually on your machine before making your PR in a few ways:

  • Install dependencies and run the Prettier formatting tool from the command line
  • Configure VS Code to automatically apply formatting rules when you save a file

Run Prettier from the command line

To check for formatting issues without automatically fixing them, run:

npx prettier --check examples/

To automatically fix any formatting issues, run:

npm run format

Configure VS Code to automatically apply formatting on save

Prettier works with popular editors such as VS Code through extensions. To format automatically when you save a file in VS Code:

  1. Install the Prettier plugin: Prettier - Code Formatter.

  2. Open your settings and enable "editor.formatOnSave":

    "editor.formatOnSave": true
  3. Set Prettier as the default formatter:

    "[javascript]": {
      "editor.defaultFormatter": "esbenp.prettier-vscode"
    }

You can find similar extensions for other editors and IDEs like Sublime Text, Atom, or IntelliJ.

To add a test for a new code example

To add a test for a new code example:

  1. Create a new test case (optionally, in a new test file)
  2. Define logic to verify the output matches expectations
  3. Run the tests to confirm everything works

Create a new test case

This test suite uses the Jest testing framework to verify that our code examples compile, run, and produce the expected output when executed.

Each test file starts with a describe block that groups together related test cases. Within the describe block, you can execute many individual test cases, which are each contained within an it block.

You may choose to add a new it block to a group of related tests; for example, if you have a crud/insert test file, you might add tests for many insert operation examples. If there is no test file and describe block related to your code example, create a new file.

Add a test case to an existing file

Add an import to the top of the file, importing the new code example you created. It should look similar to:

import { yourExampleName } from '../examples/example-stub.js';

After the last it block in the file, create a new it block similar to:

it('Should return the expected text string when executing the example', async () => {
  const actualReturn = await yourExampleName();
  const expectedReturn = 'some output to verify in a test';

  // Insert your logic to verify the output matches your expectations
});

The string following the it is the description of your test case; this is what Jest shows when a test fails. Make it descriptive so it's easy to find and fix the failing test case.

In the test case:

  1. Call the function that runs your example
  2. Capture the output to a variable
  3. Verify that the output from running your example matches what you expect

Refer to the Define logic to verify the output section of this README for examples of different ways you can perform this verification.

Create a new test file/describe block

If there is no test file that relates to your code example's topic, create a new test file. The naming convention is YOUR-EXAMPLE-TOPIC.test.js.

You can nest these test files as deeply as needed to make them easy to find and organize.

Inside the test file, create a new describe block, similar to:

describe('Example tests: show output printed to the console and return a value', () => {
  // Add test cases and setup/teardown code as needed
});

The string following the describe is the description of the concept that this test file is testing. It should broadly fit the group of individual test cases within the file.

Set up and tear down tests

Inside each test file, you can add a beforeEach and an afterEach block to execute code before or after every test case - for example, loading fresh sample data, or dropping the database after a write operation to avoid cross-contaminating the tests. You can define only one beforeEach and one afterEach block per test file, so ensure the logic in these blocks is reusable across all of the file's test cases.

Then, inside the describe block add an it block to add an individual test case. Refer to the "Add a test case to an existing file" section of this README for details.

For an example you can copy/paste to stub out your own test case, refer to tests/example.test.js.

Writing tests that use sample data

If your code examples require MongoDB sample data, import the sample data utilities:

import {
  describeWithSampleData,
  itWithSampleData,
} from '../utils/sampleDataChecker.js';

Use describeWithSampleData() for test suites that entirely depend on sample data, or itWithSampleData() for individual test cases. Tests automatically skip when required sample databases are not available.

// Entire test suite requires sample data
describeWithSampleData(
  'Movie Tests',
  () => {
    it('should find movies', async () => {
      const result = await runMovieQuery();
      expect(result.length).toBeGreaterThan(0);
    });
  },
  'sample_mflix'
);

// Individual test case
itWithSampleData(
  'should query restaurants',
  async () => {
    const result = await runRestaurantQuery();
    expect(result.length).toBeGreaterThan(0);
  },
  'sample_restaurants'
);

Define logic to verify the output

You can verify the output in a few different ways:

  1. Return a simple string from your example function, and use a strict match to confirm it matches expectations.
  2. Read expected output from a file, such as when the docs show the output, and compare it to what the code returns.

Verify a simple string match

Some code examples might return a simple string. For example:

console.log(`Successfully created index named "${result}"`);
return `Successfully created index named "${result}"`; // :remove:

In the test file, you can call the function that executes your code example, establish what the expected string should be, and perform a match to confirm that the code executed correctly:

const actualReturn = await yourExampleName();
const expectedReturn = 'some output to verify in a test';
expect(actualReturn).toStrictEqual(expectedReturn);

Verify output from a file

If you are showing the output in the docs, write the output to a file whose filename matches the example - e.g. tutorial-output.sh. Then, use the outputMatchesExampleOutput helper function to verify that what the example returns matches the contents of that file.

Import the helper function at the top of the test file:

import outputMatchesExampleOutput from '../../../utils/outputMatchesExampleOutput.js';

Use this function to verify the output based on what your output contains:

const result = await runTutorial();
const outputFilepath = 'aggregation/pipelines/filter/tutorial-output.sh';
const comparisonOptions = { comparisonType: 'ordered' };
const arraysMatch = outputMatchesExampleOutput(
  outputFilepath,
  result,
  comparisonOptions
);
expect(arraysMatch).toBe(true);

The comparisonOptions parameter is an object that controls how the comparison is performed. Choose the appropriate options based on your output characteristics:

Verify unordered output (default behavior)

For output that can be in any order (most common case):

// Pass the `comparisonType` option explicitly:
const arraysMatch = outputMatchesExampleOutput(outputFilepath, result, {
  comparisonType: 'unordered',
});

// Omit the options object (unordered comparison is used by default)
const arraysMatch = outputMatchesExampleOutput(outputFilepath, result);
Verify ordered output

For output that must be in a specific order (e.g., when using sort operations):

const arraysMatch = outputMatchesExampleOutput(outputFilepath, result, {
  comparisonType: 'ordered',
});
Handle variable field values

When your output contains fields that will have different values between test runs (such as ObjectIds, timestamps, UUIDs, or other auto-generated values), ignore specific fields during comparison:

const arraysMatch = outputMatchesExampleOutput(outputFilepath, result, {
  comparisonType: 'unordered',
  ignoreFieldValues: ['_id', 'timestamp', 'userId', 'uuid', 'sessionId'],
});

This ensures the comparison validates only that the field names are present, without checking whether the values match exactly. This is particularly useful for:

  • Database IDs: _id, userId, documentId
  • Timestamps: createdAt, updatedAt, timestamp
  • UUIDs and tokens: uuid, sessionId, apiKey
  • Auto-generated values: Any field with dynamic content
Handle flexible content in output files

For output files that truncate the actual output to show only what's relevant to our readers, use ellipsis patterns (...) in your output files to enable flexible content matching. Our tooling automatically detects and handles these patterns.

Shorten string values

You can use an ellipsis at the end of a string value to shorten it in the example output. This matches any number of characters in the actual output after the ....

In the expected output file, add an ellipsis to the end of a string value:

{
  plot: 'A young man is accidentally sent 30 years into the past...',
}

This matches the actual output of:

{
  plot: 'A young man is accidentally sent 30 years into the past in a time-traveling DeLorean invented by his close friend, the maverick scientist Doc Brown.',
}
Omit unimportant values for keys

If it's not important to show the value or type for a given key at all, replace the value with an ellipsis in the expected output file.

{
  _id: ...
}

Matches any value for the key _id in the actual output.

Omit any number of keys and values entirely

If actual output contains many keys and values that are not necessary to show to illustrate an example, add an ellipsis as a standalone line in your expected output file:

{
  full_name: 'Carmen Sandiego',
  ...
}

Matches actual output that contains any number of additional keys and values beyond the full_name field.

You can also interject standalone ... lines between properties, similar to:

{
  full_name: 'Carmen Sandiego',
  ...
  address: 'Somewhere in the world...'
}
Complete options reference

The options object supports these properties:

{
  comparisonType: 'ordered' | 'unordered',        // Default: 'unordered'
  ignoreFieldValues: ['field1', 'field2']         // Default: []
}

To run the tests locally

Create an Atlas cluster

To run these tests locally, you need a local MongoDB deployment or an Atlas cluster. Save the connection string for use in the next step. If needed, refer to the MongoDB documentation for details about how to create a local deployment.

Load sample data

Some of the tests in this project use the MongoDB sample data. The test suite automatically detects whether sample data is available and skips tests that require missing datasets, providing clear feedback about what's available.

Automatic sample data detection

The test suite includes built-in sample data detection that:

  • Automatically skips tests when required sample datasets are not available
  • Shows a status summary at the start of test runs indicating available databases
  • Provides concise warnings about which specific tests are being skipped
  • Caches detection results to avoid repeated database queries during test runs
  • Works seamlessly - no special commands or scripts needed

When you run tests, you'll see a status summary like:

📊 Sample Data Status: 3 database(s) available
   Found: sample_mflix, sample_restaurants, sample_analytics

⚠️  Skipping "Advanced Movie Analysis" - Missing: sample_training

Atlas

To learn how to load sample data in Atlas, refer to the Load Sample Data page in the Atlas documentation.

Local deployment

If you're running MongoDB locally in a docker container:

  1. Install the MongoDB Database Tools.

    You must install the MongoDB Command Line Database Tools to access the mongorestore command, which you'll use to load the sample data. Refer to the Database Tools Installation docs for details.

  2. Download the sample database.

    Run the following command in your terminal to download the sample data:

    curl https://atlas-education.s3.amazonaws.com/sampledata.archive -o sampledata.archive
  3. Load the sample data.

    Run the following command in your terminal to load the data into your deployment, replacing <port-number> with the port where you're hosting the deployment:

    mongorestore --archive=sampledata.archive --port=<port-number>

Create a .env file

Create a file named .env at the root of the /javascript/driver directory. Add the following environment variables:

CONNECTION_STRING="<your-connection-string>"
TZ=UTC

Replace the <your-connection-string> placeholder with the connection string from the Atlas cluster or local deployment you created in the prior step.

The TZ variable sets the Node.js environment to use the UTC time zone. This is required to enforce time zone consistency between dates across different local environments and CI when running the test suite.

Run All Tests from the command line

From the /javascript/driver directory, run:

npm test

This invokes the following command from the package.json test key:

export $(xargs < .env) && jest --runInBand --detectOpenHandles

In the above command:

  • jest is the command to run the test suite
  • --runInBand is a flag that specifies only running one test at a time to avoid collisions when creating/editing/dropping indexes. Otherwise, Jest defaults to running tests in parallel.
  • --detectOpenHandles is a flag that tells Jest to track resource handles or async operations that remain open after the tests are complete. These can cause the test suite to hang, and this flag tells Jest to report info about these instances.

Run Test Suites from the command line

You can run all the tests in a given test suite (file).

From the /javascript/driver directory, run:

npm test -- -t "<text string from the describe() block you want to run>"

Run Individual Tests from the command line

You can run a single test within a given test suite (file).

From the /javascript/driver directory, run:

npm test -- -t "<text string from the it() block you want to run>"

To run the tests in CI

A GitHub workflow runs these tests in CI automatically when you change any files in the examples directory:

  • .github/workflows/node-driver-examples-test-in-docker.yml

GitHub reports the results as passing or failing checks on any PR that changes an example.

If changing an example causes its test to fail, consider that failure blocking: do not merge the example until the test passes.

If changing an example causes an unrelated test to fail, create a Jira ticket to fix the unrelated test, but this should not block merging an example update.

To snip code examples for inclusion in docs

Add markup to the code example files (optional)

You can use markup to replace content that we do not want to show verbatim to users, remove test functionality from the output code examples, or rename awkward variables. You can find guides and reference documentation for this markup syntax in the Bluehawk documentation.

Inside your testable code example, add the comment // :snippet-start: <SNIPPET-NAME> where you want to start the snippet, and add // :snippet-end: to end the snippet. See an example in example-stub.js.
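For example, a marked-up example might look like the following sketch. The snippet name and body are hypothetical; because Bluehawk markers are ordinary comments, the file still runs unchanged:

```javascript
// :snippet-start: create-search-index
// Hypothetical example body - replace with your real driver code.
const indexName = 'title_text';
console.log(`Successfully created index named "${indexName}"`);
// :snippet-end:
```

When you run the snip command, Bluehawk extracts only the lines between the two markers into the generated example file.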

Run the snip script

This test suite uses Bluehawk to generate code examples from the test files.

If you do not already have Bluehawk, install it with the following command:

npm install -g bluehawk

To generate updated example files, from the /javascript/driver directory, run the snip command:

npm run snip

This command executes the snip.js script at the root of the /javascript/driver directory to generate updated example files.

The updated example files output to content/code-examples/tested/javascript/driver/. Subdirectory structure is also automatically transferred. For example, generating updated example files from code-example-tests/javascript/driver/aggregation/filter automatically outputs to content/code-examples/tested/javascript/driver/aggregation/filter.

This script will automatically create the specified output path if it does not exist.