This project contains the infrastructure to test and extract Node.js Driver code examples for use across MongoDB documentation.
The structure of this Node.js project is as follows:
- /examples: This directory contains example code, marked up with Bluehawk, that is output to the external/content/code-examples/tested/javascript/driver directory when we run the snip script.
- /tests: This directory contains the test infrastructure to actually run the tests by invoking the example code.
- Set up environment
- Create a new code example
- Add a test for a new code example
- Run tests locally (optional) or in CI
- Snip code examples for inclusion in docs
Refer to the README at the root of the code-example-tests directory for information
about how to use the tested code examples in your documentation project after you complete the
snip step.
This test suite requires you to have Node.js v20.9.0 or newer installed. If you
do not yet have Node installed, refer to
the Node.js installation page for details.
We recommend using Node Version Manager (NVM) to manage your Node versions.
From the root of the /javascript/driver directory, run the following command
to install dependencies:
npm install

To create a new code example:
- Create a code example file
- Create an output file (optional)
- Format the code example files
- Add a corresponding test - refer to the instructions below for testing
- Run the snip command to move the tested code to a docs directory
- Use the code example in a literalinclude or io-code-block in your docs set
If you're not comfortable adding a test, create this as an untested code example
in your docs project's source/code-examples directory. Then, file a DOCSP ticket
with the component set to DevDocs to request the DevDocs team move the file
into this test project and add a test.
Create a new file in the /examples directory. Organize these examples to group
related concepts - e.g., aggregation/pipelines or crud/insert. With the goal
of single-sourcing code examples across different docs projects, avoid matching
a specific docs project's page structure; instead, group code examples by
related concept or topic for easy reuse.
Refer to the examples/example-stub.js file for an example you can copy/paste
to stub out your own example.
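As a hedged illustration, a minimal example file might look like the following sketch. The file path, function name, and values are all hypothetical; copy examples/example-stub.js for the real starting point.

```javascript
// Hypothetical file: examples/crud/insert/insert-one.js -- all names here
// are illustrative. In the real project this would be an ES module that
// exports the function for the test file to import.
async function insertOneExample() {
  // A real example would call the MongoDB driver here; this stand-in value
  // keeps the sketch self-contained.
  const insertedId = 'abc123';
  console.log(`Inserted document with _id: ${insertedId}`);
  return `Inserted document with _id: ${insertedId}`;
}

// Demo call; in the project, the test file imports and awaits this function.
insertOneExample().then((result) => console.log('returned:', result));
```

Returning the same string that the example logs gives the test something concrete to assert on.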
If the output from the code example will be shown in the docs, create a file to store the output alongside the example. For example:
aggregation/pipelines/filter/tutorial.js
aggregation/pipelines/filter/tutorial-output.sh
This project uses Prettier to enforce style formatting for the files in the
examples directory. A GitHub workflow checks formatting automatically when
you add or change any files in this directory. You can check and fix formatting
manually on your machine before making your PR in a few ways:
- Install dependencies and run the Prettier formatting tool from the command line
- Configure VS Code to automatically apply formatting rules when you save a file
To check for formatting issues without automatically fixing them, run:
npx prettier --check examples/

To automatically fix any formatting issues, run:
npm run format

Prettier works with popular editors such as VS Code through extensions. To format automatically when you save a file in VS Code:
- Install the Prettier plugin: Prettier - Code Formatter.
- Open your settings and enable "editor.formatOnSave":
  "editor.formatOnSave": true
- Set Prettier as the default formatter:
  "[javascript]": { "editor.defaultFormatter": "esbenp.prettier-vscode" }
You can find similar extensions for other editors and IDEs like Sublime Text, Atom, or IntelliJ.
To add a test for a new code example:
- Create a new test case (optionally, in a new test file)
- Define logic to verify the output matches expectations
- Run the tests to confirm everything works
This test suite uses the Jest testing framework to verify that our code examples compile, run, and produce the expected output when executed.
Each test file starts with a describe block that groups together related
test cases. Within the describe block, you can execute many individual test
cases, which are each contained within an it block.
You may choose to add a new it block to a group of related tests; for example,
if you have a crud/insert test file, you might add tests for many insert
operation examples. If there is no test file and describe block related to
your code example, create a new file.
Add an import to the top of the file, importing the new code example you created. It should look similar to:
import { yourExampleName } from '../examples/example-stub.js';

After the last it block in the file, create a new it block similar to:
it('Should return the expected text string when executing the example', async () => {
const actualReturn = await yourExampleName();
const expectedReturn = 'some output to verify in a test';
// Insert your logic to verify the output matches your expectations
});

The string following the it is the description of your test case; this is
what shows when a test fails. Make it descriptive so the failing test case
is easy to find and fix.
In the test case:
- Call the function that runs your example
- Capture the output to a variable
- Verify that the output from running your example matches what you expect
Refer to the Define logic to verify the output section of this README for examples of different ways you can perform this verification.
If there is no test file that relates to your code example's topic, create a
new test file. The naming convention is YOUR-EXAMPLE-TOPIC.test.js.
You can nest these test files as deeply as needed to make them easy to find and organize.
Inside the test file, create a new describe block, similar to:
describe('Example tests: show output printed to the console and return a value', () => {
// Add test cases and setup/teardown code as needed
});

The string following the describe is the description of the concept that this
test file is testing. It should broadly fit the group of individual test cases
within the file.
Inside each test file, you can add a beforeEach and afterEach block
to execute some code before or after every test case, such as loading fresh
sample data or dropping the database after performing a write operation to
avoid cross-contaminating the tests. You can only define one beforeEach
and afterEach block per test file, so ensure the logic in these blocks is
reusable for all test cases.
Then, inside the describe block add an it block to add an individual test
case. Refer to the "Add a test case to an existing file" section of this README
for details.
For an example you can copy/paste to stub out your own test case, refer to
tests/example.test.js.
If your code examples require MongoDB sample data, import the sample data utilities:
import {
describeWithSampleData,
itWithSampleData,
} from '../utils/sampleDataChecker.js';

Use describeWithSampleData() for test suites that entirely depend on sample data,
or itWithSampleData() for individual test cases. Tests automatically skip when
required sample databases are not available.
// Entire test suite requires sample data
describeWithSampleData(
'Movie Tests',
() => {
it('should find movies', async () => {
const result = await runMovieQuery();
expect(result.length).toBeGreaterThan(0);
});
},
'sample_mflix'
);
// Individual test case
itWithSampleData(
'should query restaurants',
async () => {
const result = await runRestaurantQuery();
expect(result.length).toBeGreaterThan(0);
},
'sample_restaurants'
);

You can verify the output in a few different ways:
- Return a simple string from your example function, and use a strict match to confirm it matches expectations.
- Read expected output from a file, such as when we are showing the output in the docs, and compare it to what the code returns.
Some code examples might return a simple string. For example:
console.log(`Successfully created index named "${result}"`);
return `Successfully created index named "${result}"`; // :remove:

In the test file, you can call the function that executes your code example, establish what the expected string should be, and perform a match to confirm that the code executed correctly:
const actualReturn = await yourExampleName();
const expectedReturn = 'some output to verify in a test';
expect(actualReturn).toStrictEqual(expectedReturn);

If you are showing the output in the docs, write the output to a file whose
filename matches the example - e.g., tutorial-output.sh. Then, use the
outputMatchesExampleOutput helper function to verify that the output matches
what the test returns.
Import the helper function at the top of the test file:
import outputMatchesExampleOutput from '../../../utils/outputMatchesExampleOutput.js';

Use this function to verify the output based on what your output contains:
const result = await runTutorial();
const outputFilepath = 'aggregation/pipelines/filter/tutorial-output.sh';
const comparisonOptions = { comparisonType: 'ordered' };
const arraysMatch = outputMatchesExampleOutput(
outputFilepath,
result,
comparisonOptions
);
expect(arraysMatch).toBe(true);

The comparisonOptions parameter is an object that controls how the comparison
is performed. Choose the appropriate options based on your output characteristics:
For output that can be in any order (most common case):
// Pass the `comparisonType` option explicitly:
const arraysMatch = outputMatchesExampleOutput(outputFilepath, result, {
comparisonType: 'unordered',
});
// Omit the options object (unordered comparison is used by default)
const arraysMatch = outputMatchesExampleOutput(outputFilepath, result);

For output that must be in a specific order (e.g., when using sort operations):
const arraysMatch = outputMatchesExampleOutput(outputFilepath, result, {
comparisonType: 'ordered',
});

When your output contains fields that will have different values between test runs (such as ObjectIds, timestamps, UUIDs, or other auto-generated values), ignore specific fields during comparison:
const arraysMatch = outputMatchesExampleOutput(outputFilepath, result, {
comparisonType: 'unordered',
ignoreFieldValues: ['_id', 'timestamp', 'userId', 'uuid', 'sessionId'],
});

This ensures the comparison only validates that the field names are present, without checking whether the values match exactly. This is particularly useful for:
- Database IDs: _id, userId, documentId
- Timestamps: createdAt, updatedAt, timestamp
- UUIDs and tokens: uuid, sessionId, apiKey
- Auto-generated values: any field with dynamic content
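The idea behind ignoring field values can be sketched simply: normalize the listed fields on both sides before comparing. This is a simplified illustration of the concept, not the helper's actual implementation:

```javascript
// Simplified illustration of field-value ignoring (not the real helper).
function stripIgnoredFields(doc, ignoreFieldValues) {
  const copy = { ...doc };
  for (const field of ignoreFieldValues) {
    // Keep the key so its presence is still checked, but neutralize the value.
    if (field in copy) copy[field] = '<ignored>';
  }
  return copy;
}

const expected = { _id: '000000000000000000000000', name: 'Carmen Sandiego' };
const actual = { _id: '65f1c2d3e4a5b6c7d8e9f0a1', name: 'Carmen Sandiego' };

const a = stripIgnoredFields(expected, ['_id']);
const b = stripIgnoredFields(actual, ['_id']);
console.log(JSON.stringify(a) === JSON.stringify(b)); // true
```

The documents differ only in their _id values, so ignoring that field makes them compare equal while a missing or renamed field would still fail.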
For output files that truncate the actual output to show only what's relevant
to our readers, use ellipsis patterns (...) in your output files to enable
flexible content matching. Our tooling automatically detects and handles these
patterns.
You can use an ellipsis at the end of a string value to shorten it in the
example output. This will match any number of characters in the actual return
after the ....
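Conceptually, a trailing ellipsis turns the string comparison into a prefix check, roughly as in this simplified sketch (not the tooling's actual implementation):

```javascript
// Simplified sketch of trailing-ellipsis string matching.
function stringMatches(expected, actual) {
  if (expected.endsWith('...')) {
    // Compare only the text before the ellipsis against the start of the
    // actual value.
    return actual.startsWith(expected.slice(0, -3));
  }
  return expected === actual;
}

console.log(
  stringMatches(
    'A young man is accidentally sent 30 years into the past...',
    'A young man is accidentally sent 30 years into the past in a time-traveling DeLorean'
  )
); // true
```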
In the expected output file, add an ellipsis to the end of a string value:
{
plot: 'A young man is accidentally sent 30 years into the past...',
}

This matches the actual output of:
{
plot: 'A young man is accidentally sent 30 years into the past in a time-traveling DeLorean invented by his close friend, the maverick scientist Doc Brown.',
}

If it's not important to show the value or type for a given key at all, replace the value with an ellipsis in the expected output file.
{_id: ...}

This matches any value for the key _id in the actual output.
If actual output contains many keys and values that are not necessary to show to illustrate an example, add an ellipsis as a standalone line in your expected output file:
{
full_name: 'Carmen Sandiego',
...
}

This matches actual output that contains any number of additional keys and values
beyond the full_name field.
You can also interject standalone ... lines between properties, similar to:
{
full_name: 'Carmen Sandiego',
...
address: 'Somewhere in the world...'
}

The options object supports these properties:
{
comparisonType: 'ordered' | 'unordered', // Default: 'unordered'
ignoreFieldValues: ['field1', 'field2'] // Default: []
To run these tests locally, you need a local MongoDB deployment or an Atlas cluster. Save the connection string for use in the next step. If needed, refer to the MongoDB documentation for details on creating a local deployment.
Some of the tests in this project use the MongoDB sample data. The test suite automatically detects whether sample data is available and skips tests that require missing datasets, providing clear feedback about what's available.
The test suite includes built-in sample data detection that:
- Automatically skips tests when required sample datasets are not available
- Shows a status summary at the start of test runs indicating available databases
- Provides concise warnings about which specific tests are being skipped
- Caches detection results to avoid repeated database queries during test runs
- Works seamlessly - no special commands or scripts needed
When you run tests, you'll see a status summary like:
📊 Sample Data Status: 3 database(s) available
Found: sample_mflix, sample_restaurants, sample_analytics
⚠️ Skipping "Advanced Movie Analysis" - Missing: sample_training
To learn how to load sample data in Atlas, refer to the Load Sample Data page in the Atlas documentation.
If you're running MongoDB locally in a Docker container:

- Install the MongoDB Database Tools.
  You must install the MongoDB Command Line Database Tools to access the mongorestore command, which you'll use to load the sample data. Refer to the Database Tools Installation docs for details.
- Download the sample database.
  Run the following command in your terminal to download the sample data:
  curl https://atlas-education.s3.amazonaws.com/sampledata.archive -o sampledata.archive
- Load the sample data.
  Run the following command in your terminal to load the data into your deployment, replacing <port-number> with the port where you're hosting the deployment:
  mongorestore --archive=sampledata.archive --port=<port-number>
Create a file named .env at the root of the /javascript/driver directory.
Add the following environment variables:
CONNECTION_STRING="<your-connection-string>"
TZ=UTC
Replace the <your-connection-string> placeholder with the connection
string from the Atlas cluster or local deployment you created in the prior step.
The TZ variable sets the Node.js environment to use the UTC time zone. This
is required to enforce time zone consistency between dates across different
local environments and CI when running the test suite.
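To see why this matters, note that formatting the same instant in local time varies across environments, while UTC formatting is stable. A quick sketch:

```javascript
// The same instant formats differently depending on the process time zone,
// which is why the test suite pins TZ=UTC.
const instant = new Date('2024-01-15T00:00:00Z');

console.log(instant.toISOString()); // always '2024-01-15T00:00:00.000Z'
console.log(instant.toString()); // varies with the process time zone
```

Any example output containing locally formatted dates would differ between a contributor's machine and CI without the TZ pin.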
From the /javascript/driver directory, run:
npm test

This invokes the following command from the package.json test key:
export $(xargs < .env) && jest --runInBand --detectOpenHandles
In the above command:
- jest is the command to run the test suite.
- --runInBand is a flag that specifies running only one test at a time to avoid collisions when creating, editing, or dropping indexes. Otherwise, Jest defaults to running tests in parallel.
- --detectOpenHandles is a flag that tells Jest to track resource handles or async operations that remain open after the tests complete. These can cause the test suite to hang, so this flag tells Jest to report info about these instances.
You can run all the tests in a given test suite (file).
From the /javascript/driver directory, run:
npm test -- -t '<text string from the describe() block you want to run>'

You can run a single test within a given test suite (file).
From the /javascript/driver directory, run:
npm test -- -t '<text string from the it() block you want to run>'

A GitHub workflow runs these tests in CI automatically when you change any
files in the examples directory:
.github/workflows/node-driver-examples-test-in-docker.yml
GitHub reports the results as passing or failing checks on any PR that changes an example.
If changing an example causes its test to fail, treat the failure as blocking: fix the example before merging.
If changing an example causes an unrelated test to fail, create a Jira ticket to fix the unrelated test; this should not block merging the example update.
You can use markup to replace content that we do not want to show verbatim to users, remove test functionality from the outputted code examples, or rename awkward variables. Refer to the Bluehawk documentation for guides and reference material on this markup syntax.
Inside your testable code example, add the comment // :snippet-start: <SNIPPET-NAME>
where you want to start the snippet, and add // :snippet-end: to end the snippet.
See an example in example-stub.js.
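Put together, a marked-up example might look like this sketch (the snippet name and values are hypothetical; only the code between the markers, minus any :remove: lines, ends up in the generated example):

```javascript
// :snippet-start: count-movies
async function countMoviesExample() {
  const count = 42; // stand-in for a driver call // :remove:
  console.log(`Found ${count} movies`);
  return count; // :remove:
}
// :snippet-end:

// Demo call; the test file would import and await this function instead.
countMoviesExample().then((count) => console.log('returned:', count));
```

The :remove: lines let the test assert on a return value without that test-only code appearing in the docs.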
This test suite uses Bluehawk to generate code examples from the test files.
If you do not already have Bluehawk, install it with the following command:
npm install -g bluehawk

To generate updated example files, from the /javascript/driver directory, run the snip command:
npm run snip

This command executes the snip.js script at the root of the
/javascript/driver directory to generate updated example files.
The updated example files output to content/code-examples/tested/javascript/driver/.
Subdirectory structure is also automatically transferred. For example, generating
updated example files from code-example-tests/javascript/driver/aggregation/filter
automatically outputs to content/code-examples/tested/javascript/driver/aggregation/filter.
This script will automatically create the specified output path if it does not exist.