
Transports

Pino transports can be used for both transmitting and transforming log output.

The way Pino generates logs:

  1. Reduces the impact of logging on an application to the absolute minimum.
  2. Gives greater flexibility in how logs are processed and stored.

It is recommended that any log transformation or transmission is performed either in a separate thread or a separate process.

Before Pino v7, transports would ideally operate in a separate process; these are now referred to as Legacy Transports.

From Pino v7 and upwards, transports can also operate inside a Worker Thread and can be used or configured via the options object passed to pino on initialization. In this case the transports always operate asynchronously (unless options.sync is set to true in the transport options), and logs are flushed as quickly as possible; no additional action is required.

v7+ Transports

A transport is a module that exports a default function that returns a writable stream:

import { createWriteStream } from 'node:fs'

export default (options) => {
  return createWriteStream(options.destination)
}

Let's imagine the above defines our "transport" as the file my-transport.mjs (ESM files are supported even if the project is written in CJS).

We would set up our transport by creating a transport stream with pino.transport and passing it to the pino function:

const pino = require('pino')
const transport = pino.transport({
  target: '/absolute/path/to/my-transport.mjs'
})
pino(transport)

The transport code will be executed in a separate worker thread. The main thread will write logs to the worker thread, which will write them to the stream returned from the function exported from the transport file/module.

The exported function can also be async. If we use an async function we can throw early if the transport stream could not be opened. As an example:

import fs from 'node:fs'
import { once } from 'node:events'
export default async (options) => {
  const stream = fs.createWriteStream(options.destination)
  await once(stream, 'open')
  return stream
}

While initializing the stream we're able to use await to perform asynchronous operations. In this case, we wait for the write stream's open event.

Let's imagine the above was published to npm with the module name some-file-transport.

The options.destination value can be set when creating the transport stream with pino.transport like so:

const pino = require('pino')
const transport = pino.transport({
  target: 'some-file-transport',
  options: { destination: '/dev/null' }
})
pino(transport)

Note here we've specified a module by package rather than by relative path. The options object we provide is serialized and injected into the transport worker thread, then passed to the module's exported function. This means that the options object can only contain types that are supported by the Structured Clone Algorithm which is used to (de)serialize objects between threads.
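For example, functions are not structured-cloneable, so an options object containing one will fail when it is posted to the worker. A hypothetical illustration (the formatter option is made up for this sketch):

const pino = require('pino')
const transport = pino.transport({
  target: 'some-file-transport',
  options: {
    destination: '/dev/null',
    // Functions are not supported by the Structured Clone Algorithm,
    // so this line causes transport creation to throw (typically a
    // DataCloneError).
    formatter: (line) => line
  }
})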

What if we wanted to use both transports, but send only error logs to my-transport.mjs while sending all logs to some-file-transport? We can use the pino.transport function's level option:

const pino = require('pino')
const transport = pino.transport({
  targets: [
    { target: '/absolute/path/to/my-transport.mjs', level: 'error' },
    { target: 'some-file-transport', options: { destination: '/dev/null' }}
  ]
})
pino(transport)

If we're using custom levels, they should be passed in when using more than one transport.

const pino = require('pino')
const transport = pino.transport({
  targets: [
    { target: '/absolute/path/to/my-transport.mjs', level: 'error' },
    { target: 'some-file-transport', options: { destination: '/dev/null' } }
  ],
  levels: { foo: 35 }
})
pino(transport)

It is also possible to use the dedupe option to send logs only to the stream with the highest matching level.

const pino = require('pino')
const transport = pino.transport({
  targets: [
    { target: '/absolute/path/to/my-transport.mjs', level: 'error' },
    { target: 'some-file-transport', options: { destination: '/dev/null' } }
  ],
  dedupe: true
})
pino(transport)

To make pino log synchronously, pass sync: true to transport options.

const pino = require('pino')
const transport = pino.transport({
  targets: [
    { target: '/absolute/path/to/my-transport.mjs', level: 'error' },
  ],
  dedupe: true,
  sync: true,
});
pino(transport);

For more details, see the API docs for pino.transport.

Writing a Transport

The module pino-abstract-transport provides a simple utility to parse each line. Its usage is highly recommended.

You can see an example using an async iterator with ESM:

import build from 'pino-abstract-transport'
import SonicBoom from 'sonic-boom'
import { once } from 'node:events'

export default async function (opts) {
  // SonicBoom is necessary to avoid loops with the main thread.
  // It is the same as pino.destination().
  const destination = new SonicBoom({ dest: opts.destination || 1, sync: false })
  await once(destination, 'ready')

  return build(async function (source) {
    for await (const obj of source) {
      const toDrain = !destination.write(obj.msg.toUpperCase() + '\n')
      // This block will handle backpressure
      if (toDrain) {
        await once(destination, 'drain')
      }
    }
  }, {
    async close (err) {
      destination.end()
      await once(destination, 'close')
    }
  })
}

or using Node.js streams and CommonJS:

'use strict'

const build = require('pino-abstract-transport')
const SonicBoom = require('sonic-boom')

module.exports = function (opts) {
  const destination = new SonicBoom({ dest: opts.destination || 1, sync: false })
  return build(function (source) {
    source.pipe(destination)
  }, {
    close (err, cb) {
      destination.end()
      destination.on('close', cb.bind(null, err))
    }
  })
}

(It is possible to use the async iterators with CommonJS and streams with ESM.)

To consume async iterators in batches, consider using the hwp library.

The close() function is needed to make sure that the stream is closed and flushed when its callback is called or the returned promise resolves. Otherwise, log lines will be lost.

Writing to a custom transport & stdout

If you want to both use a custom transport and output the log entries with default processing to STDOUT, you can use the 'pino/file' transport configured with destination: 1:

const pino = require('pino')

const transports = [
  {
    target: 'pino/file',
    options: { destination: 1 } // this writes to STDOUT
  },
  {
    target: 'my-custom-transport',
    options: { someParameter: true }
  }
]

const logger = pino(pino.transport({ targets: transports }))

Creating a transport pipeline

As an example, the following transport returns a Transform stream:

import build from 'pino-abstract-transport'
import { pipeline, Transform } from 'node:stream'
export default async function (options) {
  return build(function (source) {
    const myTransportStream = new Transform({
      // Make sure autoDestroy is set,
      // this is needed in Node v12 or when using the
      // readable-stream module.
      autoDestroy: true,

      objectMode: true,
      transform (chunk, enc, cb) {

        // modifies the payload somehow
        chunk.service = 'pino'

        // stringify the payload again
        this.push(`${JSON.stringify(chunk)}\n`)
        cb()
      }
    })
    pipeline(source, myTransportStream, () => {})
    return myTransportStream
  }, {
    // This is needed to be able to pipeline transports.
    enablePipelining: true
  })
}

Then you can pipeline them with:

import pino from 'pino'

const logger = pino({
  transport: {
    pipeline: [{
      target: './my-transform.js'
    }, {
      // Use target: 'pino/file' with STDOUT descriptor 1 to write
      // logs without any change.
      target: 'pino/file',
      options: { destination: 1 }
    }]
  }
})

logger.info('hello world')

NOTE: a pipeline has no "default" destination; it must end with a terminating target, i.e. a Writable stream.

TypeScript compatibility

Pino provides basic support for transports written in TypeScript.

Ideally, they should be transpiled to ensure maximum compatibility, but sometimes you might want to use tools such as TS-Node to execute your TypeScript code without an explicit transpilation step.

You can use your TypeScript code without explicit transpilation, but there are some known caveats:

  • For "pure" TypeScript code, ES imports are still not supported (ES imports are supported once the code is transpiled).
  • Only TS-Node is supported for now; there is no TSM support.
  • Running TypeScript transports with TS-Node seems to be problematic on Windows systems; there is no official support for that yet.
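A minimal sketch of the TS-Node scenario, assuming the application itself is started with ts-node so the worker thread can load the .ts file:

const pino = require('pino')
const transport = pino.transport({
  // my-transport.ts exports a default function that returns a
  // writable stream, just like the JavaScript examples above.
  target: '/absolute/path/to/my-transport.ts'
})
pino(transport)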

Notable transports

pino/file

The pino/file transport routes logs to a file (or file descriptor).

The options.destination property may be set to specify the desired file destination.

const pino = require('pino')
const transport = pino.transport({
  target: 'pino/file',
  options: { destination: '/path/to/file' }
})
pino(transport)

By default, the pino/file transport assumes the directory of the destination file exists. If it does not exist, the transport will throw an error when it attempts to open the file for writing. The mkdir option may be set to true to configure the transport to create the directory, if it does not exist, before opening the file for writing.

const pino = require('pino')
const transport = pino.transport({
  target: 'pino/file',
  options: { destination: '/path/to/file', mkdir: true }
})
pino(transport)

By default, the pino/file transport appends to the destination file if it exists. The append option may be set to false to configure the transport to truncate the file upon opening it for writing.

const pino = require('pino')
const transport = pino.transport({
  target: 'pino/file',
  options: { destination: '/path/to/file', append: false }
})
pino(transport)

The options.destination property may also be a number to represent a file descriptor. Typically this would be 1 to write to STDOUT or 2 to write to STDERR. If options.destination is not set, it defaults to 1 which means logs will be written to STDOUT. If options.destination is a string integer, e.g. '1', it will be coerced to a number and used as a file descriptor. If this is not desired, provide a full path, e.g. /tmp/1.
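For example, to send logs to STDERR via its file descriptor:

const pino = require('pino')
const transport = pino.transport({
  target: 'pino/file',
  options: { destination: 2 } // file descriptor 2 is STDERR
})
pino(transport)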

The difference between using the pino/file transport builtin and using pino.destination is that pino.destination runs in the main thread, whereas pino/file sets up pino.destination in a worker thread.
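For comparison, a sketch of the in-process equivalent using pino.destination:

const pino = require('pino')
// pino.destination writes from the main thread; no worker is spawned.
const logger = pino(pino.destination('/path/to/file'))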

pino-pretty

The pino-pretty transport prettifies logs.

By default the pino-pretty builtin logs to STDOUT.

The options.destination property may be set to log pretty logs to a file descriptor or file. The following would send the prettified logs to STDERR:

const pino = require('pino')
const transport = pino.transport({
  target: 'pino-pretty',
  options: { destination: 2 } // use 1 for stdout, 2 for stderr
})
pino(transport)

Asynchronous startup

The new transports boot asynchronously and calling process.exit() before the transport starts will cause logs to not be delivered.

const pino = require('pino')
const transport = pino.transport({
  targets: [
    { target: '/absolute/path/to/my-transport.mjs', level: 'error' },
    { target: 'some-file-transport', options: { destination: '/dev/null' } }
  ]
})
const logger = pino(transport)

logger.info('hello')

// Logs written before the transport is ready are lost if process.exit(0)
// is called too early; wait for the 'ready' event before exiting.
transport.on('ready', function () {
  process.exit(0)
})

Legacy Transports

A legacy Pino "transport" is a supplementary tool that consumes Pino logs.

Consider the following example for creating a transport:

const { pipeline, Writable } = require('node:stream')
const split = require('split2')

const myTransportStream = new Writable({
  write (chunk, enc, cb) {
    // apply a transform and send to STDOUT
    console.log(chunk.toString().toUpperCase())
    cb()
  }
})

pipeline(process.stdin, split(JSON.parse), myTransportStream)

The above defines our "transport" as the file my-transport-process.js.

Logs can now be consumed using shell piping:

node my-app-which-logs-stuff-to-stdout.js | node my-transport-process.js

Ideally, a transport should consume logs in a separate process from the application. Using transports in the same process causes unnecessary load and slows down Node's single-threaded event loop.

Known Transports

PRs to this document are welcome for any new transports!


@axiomhq/pino

@axiomhq/pino is the official Axiom transport for Pino, using axiom-js.

import pino from 'pino';

const logger = pino(
  { level: 'info' },
  pino.transport({
    target: '@axiomhq/pino',
    options: {
      dataset: process.env.AXIOM_DATASET,
      token: process.env.AXIOM_TOKEN,
    },
  }),
);

Then you can use the logger as usual:

logger.info('Hello from pino!');

For further examples, head over to the examples directory.

@logtail/pino

The @logtail/pino NPM package is a transport that forwards logs to Logtail by Better Stack.

Quick start guide ⇗

@macfja/pino-fingers-crossed

@macfja/pino-fingers-crossed is a Pino v7+ transport that holds logs until a given log level is reached, allowing logs to be emitted only when they matter.

const pino = require('pino');
const { default: fingersCrossed, enable } = require('@macfja/pino-fingers-crossed')

const logger = pino(fingersCrossed());

logger.info('Will appear immediately')
logger.error('Will appear immediately')

logger.setBindings({ [enable]: 50 })
logger.info('Will NOT appear immediately')
logger.info('Will NOT appear immediately')
logger.error('Will appear immediately, as well as the 2 previous messages') // error logs are level 50
logger.info('Will NOT appear')
logger.info({ [enable]: false }, 'Will appear immediately')
logger.info('Will NOT appear')

@openobserve/pino-openobserve

@openobserve/pino-openobserve is a Pino v7+ transport that will send logs to an OpenObserve instance.

const pino = require('pino');

const logger = pino({
  level: 'info',
  transport: {
    // target must be a resolvable module name or absolute path
    target: '@openobserve/pino-openobserve',
    options: {
      url: 'https://your-openobserve-server.com',
      organization: 'your-organization',
      streamName: 'your-stream',
      auth: {
        username: 'your-username',
        password: 'your-password',
      },
    },
  },
});

For full documentation check the README.

pino-airbrake-transport

pino-airbrake-transport is a Pino v7+ compatible transport to forward log events to Airbrake from a dedicated worker:

const pino = require('pino')
const transport = pino.transport({
  target: 'pino-airbrake-transport',
  options: {
    airbrake: {
      projectId: 1,
      projectKey: "REPLACE_ME",
      environment: "production",
      // additional options for airbrake
      performanceStats: false,
    },
  },
  level: "error", // minimum log level that should be sent to airbrake
})
pino(transport)

pino-applicationinsights

The pino-applicationinsights module is a transport that will forward logs to Azure Application Insights.

Given an application foo that logs via pino, you would use pino-applicationinsights like so:

$ node foo | pino-applicationinsights --key blablabla

For full documentation of command line switches read README

pino-axiom

pino-axiom is a transport that will forward logs to Axiom.

const pino = require('pino')
const transport = pino.transport({
  target: 'pino-axiom',
  options: {
    orgId: 'YOUR-ORG-ID', 
    token: 'YOUR-TOKEN', 
    dataset: 'YOUR-DATASET', 
  },
})
pino(transport)

pino-azuretable

The pino-azuretable module is a transport that will forward logs to Azure Table Storage.

Given an application foo that logs via pino, you would use pino-azuretable like so:

$ node foo | pino-azuretable --account storageaccount --key blablabla

For full documentation of command line switches read README

pino-cloudwatch

pino-cloudwatch is a transport that buffers and forwards logs to Amazon CloudWatch.

$ node app.js | pino-cloudwatch --group my-log-group

pino-couch

pino-couch uploads each log line as a CouchDB document.

$ node app.js | pino-couch -U https://couch-server -d mylogs

pino-datadog

The pino-datadog module is a transport that will forward logs to DataDog through its API.

Given an application foo that logs via pino, you would use pino-datadog like so:

$ node foo | pino-datadog --key blablabla

For full documentation of command line switches read README

pino-datadog-transport

pino-datadog-transport is a Pino v7+ compatible transport to forward log events to Datadog from a dedicated worker:

const pino = require('pino')
const transport = pino.transport({
  target: 'pino-datadog-transport',
  options: {
    ddClientConf: {
      authMethods: {
        apiKeyAuth: <your datadog API key>
      }
    },
  },
  level: "error", // minimum log level that should be sent to datadog
})
pino(transport)

Logstash

The pino-socket module can also be used to upload logs to Logstash via:

$ node app.js | pino-socket -a 127.0.0.1 -p 5000 -m tcp

Assuming logstash is running on the same host and configured as follows:

input {
  tcp {
    port => 5000
  }
}

filter {
  json {
    source => "message"
  }
}

output {
  elasticsearch {
    hosts => "127.0.0.1:9200"
  }
}

See https://www.elastic.co/guide/en/kibana/current/setup.html to learn how to set up Kibana.

For Docker users, see https://github.com/deviantony/docker-elk to set up an ELK stack.

pino-discord-webhook

pino-discord-webhook is a Pino v7+ compatible transport to forward log events to a Discord webhook from a dedicated worker.

import pino from 'pino'

const logger = pino({
  transport: {
    target: 'pino-discord-webhook',
    options: {
      webhookUrl: 'https://discord.com/api/webhooks/xxxx/xxxx',
    }
  }
})

pino-elasticsearch

pino-elasticsearch uploads the log lines in bulk to Elasticsearch, to be displayed in Kibana.

It is extremely simple to use and set up:

$ node app.js | pino-elasticsearch

Assuming Elasticsearch is running on localhost.

To connect to an external Elasticsearch instance (recommended for production):

$ node app.js | pino-elasticsearch --node http://192.168.1.42:9200

Assuming Elasticsearch is running on 192.168.1.42.

To connect to AWS Elasticsearch:

$ node app.js | pino-elasticsearch --node https://es-url.us-east-1.es.amazonaws.com --es-version 6

Then create an index pattern on 'pino' (the default index key for pino-elasticsearch) on the Kibana instance.

pino-gelf

Pino GELF (pino-gelf) is a transport for the Pino logger. Pino GELF receives Pino logs from stdin and transforms them into GELF format before sending them to a remote Graylog server via UDP.

$ node your-app.js | pino-gelf log

pino-hana

pino-hana is a Pino v7+ transport that saves pino logs to a SAP HANA database.

const pino = require('pino')
const logger = pino({
  transport: {
    target: 'pino-hana',
    options: {
      connectionOptions: {
        host: <hana db host>,
        port: <hana db port>,
        user: <hana db user>,
        password: <hana db password>,
      },
      schema: <schema of the table in which you want to save the logs>,
      table: <table in which you want to save the logs>,
    },
  },
})

logger.info('hi') // this log will be saved into SAP HANA

For more detailed information about its usage please check the official documentation.

pino-http-send

pino-http-send is a configurable and low overhead transport that will batch logs and send to a specified URL.

$ node app.js | pino-http-send -u http://localhost:8080/logs

pino-kafka

pino-kafka transport to send logs to Apache Kafka.

$ node index.js | pino-kafka -b 10.10.10.5:9200 -d mytopic

pino-logdna

pino-logdna transport to send logs to LogDNA.

$ node index.js | pino-logdna --key YOUR_INGESTION_KEY

Tags and other metadata can be included using the available command line options. See the pino-logdna README for a full list.

pino-logflare

pino-logflare transport to send logs to a Logflare source.

$ node index.js | pino-logflare --key YOUR_KEY --source YOUR_SOURCE

pino-logfmt

pino-logfmt is a Pino v7+ transport that formats logs into logfmt. This transport can output the formatted logs to stdout or file.

import pino from 'pino'

const logger = pino({
  transport: {
    target: 'pino-logfmt'
  }
})

pino-loki

pino-loki is a transport that will forward logs to Grafana Loki. It can be used as a CLI in a separate process, or as a Pino v7+ transport in a dedicated worker:

CLI:

node app.js | pino-loki --hostname localhost:3100 --labels='{ "application": "my-application"}' --user my-username --password my-password

Worker:

const pino = require('pino')
const transport = pino.transport({
  target: 'pino-loki',
  options: { host: 'localhost:3100' }
})
pino(transport)

For full documentation and configuration, see the README.

pino-mq

The pino-mq transport will take all messages received on process.stdin and send them over a message bus using JSON serialization.

This is useful for:

  • moving backpressure from the application to the broker
  • shifting message-processing pressure to another component

node app.js | pino-mq -u "amqp://guest:guest@localhost/" -q "pino-logs"

Alternatively, a configuration file can be used:

node app.js | pino-mq -c pino-mq.json

A base configuration file can be initialized with:

pino-mq -g

For full documentation of command line switches and configuration see the pino-mq README

pino-mysql

pino-mysql loads pino logs into MySQL and MariaDB.

$ node app.js | pino-mysql -c db-configuration.json

pino-mysql can extract and save log fields into corresponding database fields and/or save the entire log stream as a JSON Data Type.

For full documentation and command line switches read the README.

pino-opentelemetry-transport

pino-opentelemetry-transport is a transport that will forward logs to an OpenTelemetry log collector using OpenTelemetry JS instrumentation.

const pino = require('pino')

const transport = pino.transport({
  target: 'pino-opentelemetry-transport',
  options: {
    resourceAttributes: {
      'service.name': 'test-service',
      'service.version': '1.0.0'
    }
  }
})

pino(transport)

Documentation on running a minimal example is available in the README.

pino-papertrail

pino-papertrail is a transport that will forward logs to the Papertrail log service through a UDPv4 socket.

Given an application foo that logs via pino, and a Papertrail destination that collects logs on UDP port 12345 at bar.papertrailapp.com, you would use pino-papertrail like so:

node yourapp.js | pino-papertrail --host bar.papertrailapp.com --port 12345 --appname foo

For full documentation of command line switches read the README.

pino-pg

pino-pg stores logs into PostgreSQL. Full documentation in the README.

pino-redis

pino-redis loads pino logs into Redis.

$ node app.js | pino-redis -U redis://username:password@localhost:6379

pino-sentry

pino-sentry loads pino logs into Sentry.

$ node app.js | pino-sentry --dsn=https://******@sentry.io/12345

For full documentation of command line switches see the pino-sentry README.

pino-sentry-transport

pino-sentry-transport is a Pino v7+ compatible transport to forward log events to Sentry from a dedicated worker:

const pino = require('pino')
const transport = pino.transport({
  target: 'pino-sentry-transport',
  options: {
    sentry: {
      dsn: 'https://******@sentry.io/12345',
    }
  }
})
pino(transport)

pino-seq

pino-seq supports both out-of-process and in-process log forwarding to Seq.

$ node app.js | pino-seq --serverUrl http://localhost:5341 --apiKey 1234567890 --property applicationName=MyNodeApp

pino-seq-transport

pino-seq-transport is a Pino v7+ compatible transport to forward log events to Seq from a dedicated worker:

const pino = require('pino')
const transport = pino.transport({
  target: '@autotelic/pino-seq-transport',
  options: { serverUrl: 'http://localhost:5341' }
})
pino(transport)

pino-slack-webhook

pino-slack-webhook is a Pino v7+ compatible transport to forward log events to Slack from a dedicated worker:

const pino = require('pino')
const transport = pino.transport({
  target: '@youngkiu/pino-slack-webhook',
  options: {
    webhookUrl: 'https://hooks.slack.com/services/xxx/xxx/xxx',
    channel: '#pino-log',
    username: 'webhookbot',
    icon_emoji: ':ghost:'
  }
})
pino(transport)

For full documentation read the README.

pino-socket

pino-socket is a transport that will forward logs to an IPv4 UDP or TCP socket.

As an example, use socat to fake a listener:

$ socat -v udp4-recvfrom:6000,fork exec:'/bin/cat'

Then run an application that uses pino for logging:

$ node app.js | pino-socket -p 6000

Logs from the application should be observed on both consoles.

pino-stackdriver

The pino-stackdriver module is a transport that will forward logs to the Google Stackdriver log service through its API.

Given an application foo that logs via pino, a stackdriver log project bar, and credentials in the file /credentials.json, you would use pino-stackdriver like so:

$ node foo | pino-stackdriver --project bar --credentials /credentials.json

For full documentation of command line switches read README

pino-syslog

pino-syslog is a transforming transport that converts pino NDJSON logs to RFC3164 compatible log messages. The pino-syslog module does not forward the logs anywhere; it merely re-writes the messages to stdout. When used in combination with pino-socket, the log messages can be relayed to a syslog server:

$ node app.js | pino-syslog | pino-socket -a syslog.example.com

Example output for the "hello world" log:

<134>Apr  1 16:44:58 MacBook-Pro-3 none[94473]: {"pid":94473,"hostname":"MacBook-Pro-3","level":30,"msg":"hello world","time":1459529098958}

pino-telegram-webhook

pino-telegram-webhook is a Pino v7+ transport for sending messages to Telegram.

const pino = require('pino');

const logger = pino({
  transport: {
    target: 'pino-telegram-webhook',
    level: 'error',
    options: {
      chatId: -1234567890,
      botToken: "123456:ABC-DEF1234ghIkl-zyx57W2v1u123ew11",
      extra: {
        parse_mode: "HTML",
      },
    },
  },
})

logger.error('<b>test log!</b>');

The extra parameter is optional. Any parameter supported by the sendMessage method can be passed through it.

pino-websocket

pino-websocket is a transport that will forward each log line to a websocket server.

$ node app.js | pino-websocket -a my-websocket-server.example.com -p 3004

For full documentation of command line switches read the README.

pino-yc-transport

pino-yc-transport is a Pino v7+ transport for writing to Yandex Cloud Logging from serverless functions or containers.

const pino = require("pino");

const config = {
  level: "debug",
  transport: {
    target: "pino-yc-transport",
  },
};

const logger = pino(config);

logger.debug("some message")
logger.debug({ foo: "bar" });
logger.debug("some message %o, %s", { foo: "bar" }, "baz");
logger.info("info");
logger.warn("warn");
logger.error("error");
logger.error(new Error("error"));
logger.fatal("fatal");

Communication between Pino and Transports

Here we discuss some technical details of how Pino communicates with its worker threads.

Pino uses thread-stream to create a stream for transports. When we create a stream with thread-stream, thread-stream spawns a worker (an independent JavaScript execution thread).

Error messages

How are error messages propagated from a transport worker to Pino?

Let's assume we have a transport with an error listener:

// index.js
const transport = pino.transport({
  target: './transport.js'
})

transport.on('error', err => {
  console.error('error caught', err)
})

const log = pino(transport)

The worker has listeners for both error events and unhandledRejection. When either fires, the listener sends the error message to the main thread, where Pino is running.

When Pino receives the error message, it re-emits it as an error event. Finally, the error arrives at our index.js and is caught by our error listener.
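As a minimal sketch (building on the pino-abstract-transport examples above), a transport.js whose processing loop throws would exercise this path:

// transport.js
import build from 'pino-abstract-transport'

export default async function () {
  return build(async function (source) {
    for await (const obj of source) {
      // Throwing here rejects the processing promise in the worker;
      // the error is forwarded to the main thread, emitted on the
      // transport stream, and caught by the 'error' listener in index.js.
      throw new Error(`could not process log line: ${obj.msg}`)
    }
  })
}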