There has been a growing sentiment that using node packages directly, with the command line interfaces they provide, is a good route to take, as opposed to abstracting the functionality away behind a task runner. Part of it is this: you use npm anyway, and npm provides scripting functionality, so why not just use that? But there is more to it than that. Let’s walk through the thinking, and also exactly how to accomplish many of the most important tasks in a front-end development build process.
I’ve been using npm scripts in my projects for about the last six months. Before that, I used Gulp, and before that, Grunt. They’ve served me well and helped me perform my work faster and more efficiently by automating many of the things I used to do by hand. However, I started to feel that I was fighting the tools rather than focusing on my own code.
Grunt, Gulp, Broccoli, Brunch and the like all require you to fit your tasks into their paradigms and configurations. Each has its own syntax, quirks and gotchas that you need to learn. This adds code complexity and build complexity, and makes you focus on fixing tooling rather than writing code.
These build tools rely on plugins that wrap a core command line tool. This creates another layer of abstraction away from the core tool, which means more potential for bad things to happen.
Here are three problems I’ve seen multiple times:
- If a plugin doesn’t exist for the command line tool you want to use, you’re out of luck (unless you write it).
- A plugin you’re trying to use wraps an older version of the tool you want to use. Features and documentation don’t always match between the plugin you’re using and the current version of the core tool.
- Errors aren’t always handled well. If a plugin fails, it might not pass along the error from the core tool, resulting in frustration and not really knowing how to debug the problem.
But, bear in mind…
Let me say this: if you are happy with your current build system and it accomplishes all that you need it to do, you can keep using it! Just because npm scripts are becoming more popular doesn’t mean you should jump ship. Keep focusing on writing your code instead of learning more tooling. If you start to get the feeling that you’re fighting with your tools, that’s when I’d suggest considering npm scripts.
If you’ve decided you want to investigate or start using npm scripts, keep reading! You’ll find plenty of example tasks in the rest of this post. Also, I’ve created npm-build-boilerplate with all of these tasks that you can use as a starting point. Let’s get to it!
Writing npm scripts
We’ll be spending a majority of our time in a `package.json` file. This is where all of our dependencies and scripts will live. Here’s a stripped down version from my boilerplate project:
{
  "name": "npm-build-boilerplate",
  "version": "1.0.0",
  "scripts": {
    ...
  },
  "devDependencies": {
    ...
  }
}
We’ll build up our `package.json` file as we go along. Our scripts will go into the `scripts` object, and any tools we want to use will be installed and put into the `devDependencies` object.
Before we begin, here’s a sample structure of the project I’ll be referring to throughout this post:
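Something along these lines, based on the paths the tasks below use:

npm-build-boilerplate/
├── dist/
├── src/
│   ├── images/
│   │   └── icons/
│   ├── js/
│   └── scss/
└── package.json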
Compile SCSS to CSS
I’m a heavy user of SCSS, so that’s what I’ll be working with. To compile SCSS to CSS, I turn to node-sass. First, we need to install `node-sass`; do this by running the following in your command line:
npm install --save-dev node-sass
This will install `node-sass` in your current directory and add it to the `devDependencies` object in your `package.json`. This is especially useful when someone else runs your project, because they will have everything they need to get the project running. Once installed, we can use it on the command line:
node-sass --output-style compressed -o dist/css src/scss
Let’s break down what this command does. Starting at the end, it says: look in the `src/scss` folder for any SCSS files; output (`-o` flag) the compiled CSS to `dist/css`; compress the output (using the `--output-style` flag with “compressed” as the option).
Now that we’ve got that working in the command line, let’s move it to an npm script. In your `package.json` `scripts` object, add it like so:
"scripts": {
"scss": "node-sass --output-style compressed -o dist/css src/scss"
}
Now, head back to the command line and run:
npm run scss
You will see the same output as running the `node-sass` command directly in the command line.
Any time we create an npm script in the remainder of this post, you can run it with a command like the one above; just replace `scss` with the name of the task you want to run.
As you will see, many of the command line tools we’ll use have numerous options you can use to configure them exactly as you see fit. For instance, here’s the list of node-sass options. Here’s a different setup showing how to pass multiple options:
"scripts": {
"scss": "node-sass --output-style nested --indent-type tab --indent-width 4 -o dist/css src/scss"
}
Autoprefix CSS with PostCSS
Now that we’re compiling SCSS to CSS, we can automatically add vendor prefixes using Autoprefixer & PostCSS. We can install multiple modules at the same time, separating them with spaces:
npm install --save-dev postcss-cli autoprefixer
We’re installing two modules because PostCSS doesn’t do anything by default. It relies on other plugins like Autoprefixer to manipulate the CSS you give it.
With the necessary tools installed and saved to `devDependencies`, add a new task in your `scripts` object:
"scripts": {
...
"autoprefixer": "postcss -u autoprefixer -r dist/css/*"
}
This task says: Hey `postcss`, use (`-u` flag) `autoprefixer` to replace (`-r` flag) any `.css` files in `dist/css` with vendor prefixed code. That’s it! Need to change the default browser support for Autoprefixer? It’s easy to add to the script:
"autoprefixer": "postcss -u autoprefixer --autoprefixer.browsers '> 5%, ie 9' -r dist/css/*"
Again, there are lots of options you can use to configure your own build: see postcss-cli and autoprefixer.
Linting JavaScript
Keeping a standard format and style when authoring code is important to keep errors to a minimum and increase developer efficiency. “Linting” helps us do that automatically, so let’s add JavaScript linting by using eslint.
Once again, install the package; this time, let’s use a shortcut:
npm i -D eslint
This is the same as:
npm install --save-dev eslint
Once installed, we’ll set up some basic rules to run our code against using `eslint`. Run the following to start a wizard:
eslint --init
I’d suggest choosing “Answer questions about your style” and answering the questions it asks. This will generate a new file in the root of your project that `eslint` will check your code against.
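The generated file might look something like this (purely illustrative; the exact contents depend on your answers):

{
  "env": {
    "browser": true
  },
  "extends": "eslint:recommended",
  "rules": {
    "indent": ["error", 4],
    "quotes": ["error", "single"],
    "semi": ["error", "always"]
  }
}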
Now, let’s add a lint task to our `package.json` `scripts` object:
"scripts": {
...
"lint": "eslint src/js"
}
Our lint task is 13 characters long! It looks for any JavaScript files in the `src/js` folder and runs them against the configuration it generated earlier. Of course, you can get crazy with the options.
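For instance, here’s an illustrative variation using two common flags (`--fix` auto-corrects simple problems; `--cache` re-lints only files that changed):

"lint": "eslint --fix --cache src/js"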
Uglifying JavaScript files
Let’s work on combining and minifying our JavaScript files, which we can use uglify-js to do. We’ll need to install `uglify-js` first:
npm i -D uglify-js
Then, we can set up our uglify task in `package.json`:
"scripts": {
...
"uglify": "mkdir -p dist/js && uglifyjs src/js/*.js -m -o dist/js/app.js"
}
One of the great things about npm scripts is that they are essentially an alias for a command line task that you want to run over and over. This means that you can use standard command line code right in your script! This task uses two standard command line features, `mkdir` and `&&`.
The first half of this task, `mkdir -p dist/js`, says: create a folder structure (`mkdir`), but only if it doesn’t exist already (`-p` flag). Once that completes successfully, run the `uglifyjs` command. The `&&` lets you chain multiple commands together, running each one sequentially if the previous command completes successfully.
The second half of this task tells `uglifyjs` to start with all of the JS files (`*.js`) in `src/js/`, apply the “mangle” command (`-m` flag), and output the result to `dist/js/app.js`. Once again, check the documentation for the tool in question for a full list of options.
Let’s update our `uglify` task to create a compressed version of `dist/js/app.js`. Chain another `uglifyjs` command, passing the “compress” (`-c`) flag:
"scripts": {
...
"uglify": "mkdir -p dist/js && uglifyjs src/js/*.js -m -o dist/js/app.js && uglifyjs src/js/*.js -m -c -o dist/js/app.min.js"
}
Compressing Images
Let’s now turn our attention to compressing images. According to httparchive.org, the average page weight of the top 1000 URLs on the internet is 1.9 MB, with images accounting for 1.1 MB of that total. One of the best things you can do to increase page speed is reduce the size of your images.
Install imagemin-cli:
npm i -D imagemin-cli
Imagemin is great because it will compress most types of images, including GIF, JPG, PNG and SVG. You can pass it a folder of images and it will crunch all of them, like so:
"scripts": {
...
"imagemin": "imagemin src/images dist/images -p",
}
This task tells `imagemin` to find and compress all images in `src/images` and put them in `dist/images`. The `-p` flag is passed to create “progressive” images when possible. Check the documentation for all available options.
SVG Sprites
The buzz surrounding SVG has increased in the last few years, and for good reason. SVGs are crisp on all devices, editable with CSS, and screen reader friendly. However, SVG editing software usually leaves extraneous and unnecessary code behind. Luckily, svgo can help by removing all of that (we’ll install it below).
You can also automate the process of combining and spriting your SVGs to make a single SVG file (more on that technique here). To automate this process, we can install svg-sprite-generator.
npm i -D svgo svg-sprite-generator
The pattern is probably familiar to you now: once installed, add a task in your `package.json` `scripts` object:
"scripts": {
...
"icons": "svgo -f src/images/icons && mkdir -p dist/images && svg-sprite-generate -d src/images/icons -o dist/images/icons.svg"
}
Notice the `icons` task does three things, based on the presence of two `&&` directives. First, we use `svgo`, passing it a folder (`-f` flag) of SVGs; this will compress all SVGs inside the folder. Second, we’ll make the `dist/images` folder if it doesn’t already exist (using the `mkdir -p` command). Finally, we use `svg-sprite-generator`, passing it a folder of SVGs (`-d` flag) and a path where we want the SVG sprite to output (`-o` flag).
Serve and Automatically Inject Changes with BrowserSync
One of the last pieces to the puzzle is BrowserSync. A few of the things it can do: start a local server, automatically inject updated files into any connected browser, and sync clicks & scrolls between browsers. Install it and add a task:
npm i -D browser-sync
"scripts": {
...
"serve": "browser-sync start --server --files 'dist/css/*.css, dist/js/*.js'"
}
Our BrowserSync task starts a server (`--server` flag) using the current path as the root by default. The `--files` flag tells BrowserSync to watch any CSS or JS file in the `dist` folder; whenever something in there changes, it automatically injects the changed file(s) into the page.
You can open multiple browsers (even on different devices) and they will all get updated file changes in real time!
Grouping tasks
With all of the tasks from above, we’re able to:
- Compile SCSS to CSS and automatically add vendor prefixes
- Lint and uglify JavaScript
- Compress images
- Convert a folder of SVGs to a single SVG sprite
- Start a local server and automatically inject changes into any browser connected to the server
Let’s not stop there!
Combining CSS tasks
Let’s add a task that combines the two CSS related tasks (preprocessing Sass and running Autoprefixer), so we don’t have to run each one separately:
"scripts": {
...
"build:css": "npm run scss && npm run autoprefixer"
}
When you run `npm run build:css`, it will tell the command line to run `npm run scss`; when that completes successfully, it will then (`&&`) run `npm run autoprefixer`.
Combining JavaScript tasks
Just like with our `build:css` task, we can chain our JavaScript tasks together to make them easier to run:
"scripts": {
...
"build:js": "npm run lint && npm run uglify"
}
Now, we can call `npm run build:js` to lint, concatenate and uglify our JavaScript in one step!
Combine remaining tasks
We can do the same thing for our image tasks, as well as a task that combines all build tasks into one:
"scripts": {
...
"build:images": "npm run imagemin && npm run icons",
"build:all": "npm run build:css && npm run build:js && npm run build:images",
}
Watching for changes
Up until this point, our tasks require us to make a change to a file, switch back to the command line, and run the corresponding task(s). One of the most useful things we can do is add tasks that watch our files and run tasks automatically when a file changes. To do this, I recommend onchange. Install as usual:
npm i -D onchange
Let’s set up watch tasks for CSS and JavaScript:
"scripts": {
...
"watch:css": "onchange 'src/scss/*.scss' -- npm run build:css",
"watch:js": "onchange 'src/js/*.js' -- npm run build:js",
}
Here’s the breakdown of these tasks: `onchange` expects you to pass a path, as a string, to the files you want to watch; we’ll pass our source SCSS and JS files. The command we want to run comes after the `--`, and it will run any time a file in the given path is added, changed or deleted.
Let’s add one more watch command to finish off our npm scripts build process.
Install one more package, parallelshell:
npm i -D parallelshell
Once again, add a new task to the `scripts` object:
"scripts": {
...
"watch:all": "parallelshell 'npm run serve' 'npm run watch:css' 'npm run watch:js'"
}
`parallelshell` takes multiple strings, each of which is an `npm run` task to run.
Why use `parallelshell` to combine multiple tasks instead of using `&&` like in previous tasks? At first, I tried this. The problem is that `&&` chains commands together and waits for each command to finish successfully before starting the next. However, since we are running `watch` commands, they never finish! We’d be stuck in an endless loop.
Therefore, using `parallelshell` enables us to run multiple `watch` commands simultaneously.
This task fires up a server with BrowserSync using the `npm run serve` task. Then, it starts our watch commands for both CSS and JavaScript files. Any time a CSS or JavaScript file changes, the watch task performs the respective build task; since BrowserSync is set up to watch for changes in the `dist` folder, it automatically injects new files into any browser connected to its URL. Sweet!
Other useful tasks
`npm` comes with lots of baked-in tasks that you can hook into. Let’s write one more task leveraging one of these built-in scripts.
"scripts": {
...
"postinstall": "npm run watch:all"
}
`postinstall` runs immediately after you run `npm install` in your command line. This is a nice-to-have, especially when working on teams: when someone clones your project and runs `npm install`, our `watch:all` task starts immediately. They’ll automatically have a server started, a browser window opened and files being watched for changes.
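npm also runs `pre` and `post` hooks for any script name, not just the built-in ones. As a quick sketch (the `clean` task here is hypothetical, not part of the boilerplate), the following would wipe `dist` before every full build:

"scripts": {
  ...
  "clean": "rm -rf dist",
  "prebuild:all": "npm run clean"
}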
Wrap Up
Whew! We made it! I hope you’ve been able to learn a few things about using npm scripts as a build process and the command line in general.
Just in case you missed it, I’ve created an npm-build-boilerplate project with all of these tasks that you can use as a starting point. If you have questions or comments, please tweet at me or leave a comment below. I’d be glad to help where I can!
Switching to npm scripts is great, but if you are maintaining an open source project and want to maximize contributions, please ensure that your scripts will also work for non-*nix machines. The biggest cross-platform difficulty is with setting environment variables (e.g. NODE_ENV). A great library to handle this is better-npm-run.
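Roughly, better-npm-run moves the command and its environment variables into a separate `betterScripts` section of `package.json`; a sketch of the general shape (treat the details as illustrative):

"scripts": {
  "build": "better-npm-run build"
},
"betterScripts": {
  "build": {
    "command": "npm run build:all",
    "env": {
      "NODE_ENV": "production"
    }
  }
}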
Another problem is commenting your code, and generally keeping it readable and maintainable. I’ve built gulp-shelter to combine the best of npm-scripts and gulpfiles, without reinventing the wheel.
Instead of parallelshell, I’ve found npm-run-all to be quite useful. It allows for globbing of tasks when executing them, e.g.
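Something like this (a sketch; npm-run-all expands glob patterns in script names):

"watch:all": "npm-run-all --parallel serve watch:*"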
The same goes for the build step:
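"build:all": "npm-run-all build:*"

Without `--parallel`, the matched tasks run one after another.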
Nice! I’d not heard of that package before. Thanks for pointing it out!
I think there are certainly valid arguments against the (perhaps) unnecessary abstraction of your code in task runners.
In all your examples above, are you globally installing all those things? node-sass, postcss, onchange, etc? Or are they still local just to the project directory?
You install them locally, which is what the `-D` flag does (though I think the `eslint --init` command in the article is using a globally installed version initially). But npm scripts have `node_modules/.bin/` available to them, so any modules you install will put their executables there and you can refer to them in your npm scripts as if they were globally installed. So, when installed locally, you can still put `eslint ...` in your lint script and it’ll still use the version local to your project when it’s run.

This is great, thanks — I’ve been using makefiles forever (terrible syntax though), and I’ve always felt the “task runners” were unnecessary.
What upsets me is I never thought of using npm’s “scripts” as a glorified makefile in this way, and I’m off to do some serious soul searching now to discover why :-)
You probably never thought about it because, why would you? If you’re already using make, and you’re developing Unix apps, why use npm scripts instead? Make can already do exactly what all of this does.
I was using make for front-end builds too (including certain app-specific deployment tasks).
As you say, make is good enough for all of this, but since I am using node anyway, there’s no harm in throwing the bombastic task runners overboard (and make, while I’m at it).
But I should have thought of that myself, since my bias is toward simpler build systems to begin with.
Since you’re already putting the scripts in the composer.json file, was there a reason to use npm to execute them instead of composer?
https://getcomposer.org/doc/articles/scripts.md
I’ve been using Gulp with Node for about 8 months now, adding and improving my tasks as I worked. I now have a very decent boilerplate I simply download at the start of each project. Everything is tied to my default task, so I simply enter “gulp” in my Atom text editor’s command line and within seconds it builds everything for me. Haven’t had the need to simplify this any further. It is already stupid easy, and the only “difficulties” I’ve had were solved by running “npm update”. Nevertheless, kudos for the awesome tutorial!
I try to use npm script as much as possible. If a task is too elaborate, I write it as a gulp task. If a sequence of tasks is too slow and can be skipped instead of being repeated, I use Make to orchestrate the sequence.
Here is a post about compiling Sass with gulp being much faster than compiling with npm scripts:
I have used grunt, gulp and npm scripts in some of the projects that I’ve dealt with, and at the moment gulp is my favourite for its simplicity, speed and stream-based nature. We even moved from npm scripts to gulp in one of our projects because the scripts were quite messy: they contained replicated configurations that we could not simplify due to the lack of variables, and the “scripts” JSON object just looked like an ugly block of text, with the risk of not being cross-platform and no chance of documenting it.
But yes, npm scripts remain as my first choice in small projects. In my opinion, the key trick is to know when to choose something more powerful and manageable. Otherwise, just have what you get for free from npm.
I am going for:

:P

`postinstall` is something I will definitely try. I currently use `npm start`, which is a nice shortcut and pointer to the main script to run.

Cool article!
No step in your setup for cachebusting?
I didn’t think it was imperative for the purposes of this post; but, if I needed to add cache busting, I’d use this module: https://www.npmjs.com/package/hashmark.
I’ll take a look at adding this task to the boilerplate referenced in the post.
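Such a task might look roughly like this (a sketch only; hashmark takes a glob of files and an output pattern, and I’m assuming its `{name}`/`{hash}` tokens here):

"cachebust": "hashmark 'dist/js/*.js' 'dist/js/{name}.{hash}.js'"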
I’m definitely interested in the idea of using npm for everything (I recently stopped using Bower because all the front-end packages I use are now available on npm, so why use two package managers?) However, I’ve also recently switched from Grunt to Gulp, mainly because I found it gave me huge speed improvements that presumably come from the way Gulp uses streams to run consecutive tasks in memory rather than on disk. (On my machine, tasks that previously took over a second on Grunt are close to instantaneous on Gulp)
I’m guessing that this advantage of Gulp would be lost by switching to the approach described above…? (As far as I can see, it looks like each consecutive task would be reading from and then writing to disk). I wonder if there’s any way of using npm scripts but using pipes/streams the way Gulp does?
It’s awesome! A few days ago I worked on dockerizing this with Node.js, because Windows doesn’t support wildcards (e.g. `./js/*.js`). Here’s my repo: https://github.com/andru255/learn-npm-buildtool and feedback is welcome :)
First, I want to thank you for the article. You described how easily you can use `npm run` to do some file processing instead of gulp/grunt. Unfortunately, this method can’t do a streaming build system’s job in more complicated use:

How about sourcemaps (extremely useful if Sass/Less were used) – `gulp-sourcemaps`?

How about environment conditions (don’t generate sourcemaps on production, or minify the JS/CSS on production but never on dev) – `gulpIf`?

Same with other cases where the gulp/grunt plugins must be customized by a more refined function that describes more advanced stuff to do. So I would say that replacing, e.g., gulp would be a waste of time, or rather doing exactly the same work you can do just by setting up three or more modules in a gulp task. What is the point in repeating yourself if it is already done with a task runner? Why are we using code snippets or advanced IDEs instead of `nano`/`pico`/`notepad`? It is all about being productive, I think.

Sourcemaps and environment settings are totally the things stopping me from using npm scripts. Would love it if anyone had a neat solution.
"min-sass": "node-sass resources/sass/main.scss | cleancss -o public/stylesheets/main.min.css --source-map"
npm run min-sass
This is another way of doing all the build things, but I would like to check practically whether it enhances any performance factor or saves a considerable amount of time before shifting to this.

If anyone has any statistics about performance/time taken/reducing complexity, please share.
For small projects I am definitely all about NPM scripts because sometimes all I am squashing are CSS and JS files along with Webpack for RiotJS, React and Angular bundles. I usually develop on top of Laravel, Express or Meteor which have a little Sass or JS magic already. I love Meteor because sometimes a lot of magic is nice to get things done. Everything just happens. For some projects, the Meteor magic versus alcohol inducing Gulp build nightmares just to push something live is just what the doctor ordered.
And another thing, if you just want to get going for a project with Stylus (or CSS) and Jade (or HTML):
It’s da baos for smaller things.
Other than that, I’m starting to get an inclination towards keeping things more vanilla Node. With `require('child_process').exec` or `require('child_process').execSync` you can run any command just like you would in the terminal. I created a small script which compiles and runs Java (and my tests) for a school project. Really easy to set up and work with. I used gulp for watching files and for the CLI, but I can’t imagine it would be that difficult to find something for those tasks on npm. (FYI: `execSync` for compiling and `exec` for running the program/tests. When a new execution is performed, I call `runningProgram.kill('SIGKILL')` on the running program.)

npm scripts are currently starting to gain traction, not because npm scripts (read: shell commands) are inherently that great, but because they provide a common ground for all(!) plugins you will use. The common grounds part is nice; let’s build on that. What npm offers that is really unique is the common (shell) interface, and the possibility to reuse that programmatically. Additionally, you no longer need every single dependency you, your dog, your mom and your (tech savvy unborn) grandchild has ever used in your big almighty PATH (note: I am not the one providing the capital case here). You can keep things local.

So npm scripts: Yay! Brillllliant idea. But Node (or Python or Ruby) scripts are where the meat of it all belongs if you want real control.
Anyways… My proposal: download things local to your project through npm, create a folder called ‘build-scripts’, create npm scripts for your build tasks which use node scripts from ‘build-scripts’ to perform whatever task you please, and use exec/execSync to perform actual terminal commands.
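A minimal sketch of that layout, borrowing the node-sass command from the post (the file name and npm script are just examples):

// build-scripts/css.js
// Run the SCSS compile as a child process, streaming its output to the terminal.
const { execSync } = require('child_process');

execSync('node-sass --output-style compressed -o dist/css src/scss', {
  stdio: 'inherit'
});

And the npm script that calls it:

"build:css": "node build-scripts/css.js"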
It’s up to you. Use whatever doesn’t waste too much time configuring. And worst case scenario, you just end up with a bunch of execSync commands referring to your npm scripts.
I’ve had this discussion with my colleagues recently. I think the main thing you lose when switching to pure npm scripts is the notion of a common “task”. I totally agree that gulp/grunt/etc all have restrictive plugin systems, but what both provide is a really simple way to create an abstract task and chain them together in a logical/reproducible way.
For large apps with complex build systems having a simple and consistent way to wrap tasks and chain them together is necessary and, I think, quite easily achievable.
‘mkdir’ and ‘rm’ don’t work on Windows, so the whole process breaks there. Also, it’s more honest to show the scripts hash in its entirety (it’s already quite large) instead of omitting irrelevant parts.
rimraf and mkdirp
That’s not entirely true. If you use a UNIX-like shell (like the bash shell you get if you install Git for Windows rather than Windows’ default command line) as many developers do, then mkdir and rm – and the majority of other common UNIX commands – work fine on Windows.
I love this one: “Let me say this: if you are happy with your current build system and it accomplishes all that you need it to do, you can keep using it!”
Too much ‘don’t use it, use this’ these days…
“When did it become ‘correct’ to write build scripts in a package manifest?” —Jon Schlinkert
Thank you so much for this post! It did help me a lot in struggling with concurrent tasks in a complex setup with both `webpack` and `npm`.

Something I still cannot figure out, though: when running it, the npm task exits after successfully finishing. I would like to also watch the icons task. Is there a way to do this?
Katja,

It looks like you’d need to set up a watch task for icons, then replace `npm run icons` with `npm run watch:icons`.
I like to call the binaries inside of the `node_modules` folder instead of relying on global commands, using the full path to the local binary rather than the bare command name.
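For example, using the `scss` task from the post as an illustration:

# instead of
node-sass --output-style compressed -o dist/css src/scss

# I'd use
./node_modules/.bin/node-sass --output-style compressed -o dist/css src/scss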
After seeing Stephen Sauceda’s comment above I think this makes more sense:
This is actually what npm does under the hood, so you’re unnecessarily cluttering your `package.json`. The npm docs state:
. The npm docs state:It even goes so far as to say:
npm docs source
Nice! It would be good to know if npm prepends or appends to the PATH variable. I didn’t find the answer in the docs. That would determine if npm uses the local or global installation by default, which makes a difference if there’s a version conflict.
Hey Damon, great article!
For all the Windows folks out there using the standard command line and not the UNIX-like one, it’s possible to rewrite the `mkdir -p` part like so:

if not exist dist\\js mkdir dist\\js

Windows slashes need to be escaped.
Thanks Paolo!
If on Windows, I might consider using mkdirp (https://github.com/substack/node-mkdirp) and rimraf (https://www.npmjs.com/package/rimraf).
Am I missing something? “watch” aside (and devDependencies) how is this solution handling the dependency chain? Is npm run-scripts building “everything”… “always”?
Coming from “make-land”, I guess I’m so old school, I just don’t get it – I don’t want a build step to execute ANYTHING if it doesn’t need to :/
Is there a difference between using parallelshell and just a simple pipe `|`?

This will be equivalent to parallelshell, I believe:

"start": "npm run watch | npm run server"

or something like this:

"deploy": "(npm run watch &) && npm run server"