
Part IV. Testing, Building, and Deploying Components with Polymer

Chapter 17. Packaging and Publishing

Jarrod Overson

Build processes in JavaScript are a subject of common debate. On one hand, there is a beautifully romantic simplicity in being able to author code that gets delivered directly to an end user and runs without modification. There are people who optimize for that simplicity, using it as a creative constraint to encourage simple and clever programming. On the other hand, authored code and delivered code exist in different environments and are read by different machines. On the client side, the user’s machine parses the code and runs it to deliver the end user experience. On the development side, the machine is you, and you parse code in a very different way than an executable interpreter does. Good development practice may actually result in code that is very poorly optimized for delivery and parsing.

Once those differences are acknowledged, an intermediary process that translates the code from one environment into the deployable artifact for the other seems like an obvious necessity, and something to optimize for from the start.

The important aspects to consider when determining the extent of a build process are the same as for anything else: what is the cost and value of one choice versus another? It’s impossible for anyone to tell you what you should do in your particular situation, so it’s important to test it yourself.

Oftentimes one of the biggest red flags in browser code is the number of requests being made, and it is trivial to make more requests than you realize when importing a single HTML file. This is more of a concern today than it used to be: there has never been a great way to refer to and load dependencies in widget code like this, but now one poorly optimized HTML import can easily link to dozens of other resources, and neither we nor our implementers want to worry about things like that.

Our element includes a number of external scripts that aren’t immediately reused anywhere else. It also includes dependencies on Polymer, jQuery, and Jenga, which are third-party libraries that we shouldn’t be worrying about at the moment; our main concern is all the internal code. In traditional widgets and modular JavaScript code developed to be distributed, the build and delivery process often consists simply of the minification and concatenation of all the project’s files into one distributable, compressed file. You see this regularly with libraries like jQuery (and even Polymer’s platform.js): there is an uncompressed (though maybe concatenated) deployable called jquery.js and another smaller distributable called jquery.min.js. The source of jQuery is actually a couple dozen AMD modules that can be used independently but go through an extensive build process to arrive at the deployable artifact that optimizes the footprint, produces source maps, and generates multiple versions of “jQuery” proper.

For code meant to be delivered as HTML files and imported by way of HTML imports, though, we can’t use all the same tools because we’re potentially linking to a variety of resources that need to be embedded into the HTML file itself. UglifyJS, Esmangle, and the like only work for JavaScript. Fortunately, the Polymer team already has us covered here and has been working on a tool specifically for this problem.

Vulcanize

Vulcanize is the first major solution for packaging up web components for deployment: it provides a way to manage custom elements written with Polymer in external HTML files. It is both a library and a command-line tool, written in JavaScript on top of Node.js, and can be installed via npm. Because it is a library as well as a tool, Vulcanize can be incorporated into build tools, but the command-line interface is a great way to test its viability before going through the effort of fitting it into a build.

You can install it globally in order to easily access its executable, vulcanize, from your command line:

$ npm install -g vulcanize

vulcanize@0.3.1

├── nopt@2.2.1 (abbrev@1.0.5)

├── clean-css@2.1.8 (commander@2.1.0)

├── cheerio@0.15.0 (entities@1.0.0, CSSselect@0.4.1, lodash@2.4.1, ...)

└── uglify-js@2.4.15 (uglify-to-browserify@1.0.2, async@0.2.10, ...)

NODE.JS COMMAND-LINE TOOLS

Vulcanize follows one of the current established best practices for command-line tools written on Node.js, the “library-first” pattern. This pattern establishes, first and foremost, that the functionality of a tool should be delivered in a reusable library while the command-line interface (CLI) exists only as a very minimal wrapper between the command line and the library, passing options largely unchanged.
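As an illustration, a library-first tool splits into a library module and a deliberately thin executable. Here is a minimal sketch of that shape for a hypothetical mytool, using nopt (the same option parser that appears in vulcanize’s dependency tree) to pass arguments through largely unchanged:

#!/usr/bin/env node
// bin/mytool: a deliberately thin CLI wrapper (hypothetical example).
// All real functionality lives in lib/mytool.js, which exports a single
// function that takes an options object and a callback.
var nopt = require('nopt');
var mytool = require('../lib/mytool');

var options = nopt(
  { output: String, verbose: Boolean }, // known options
  { o: ['--output'], v: ['--verbose'] } // shorthands
);

// Delegate to the library, translating only the exit status.
mytool(options, function (err) {
  if (err) {
    console.error(err.message);
    process.exit(1);
  }
});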

Consider the vulcanize options:

$ vulcanize --help

vulcanize: Concatenate a set of Web Components into one file

Usage:

vulcanize [OPTIONS] <html file>*

Options:

--output, -o: Output file name (defaults to vulcanized.html)

--verbose, -v: More verbose logging

--help, -h, -?: Print this message

--config: Read the given config file

--strip, -s: Remove comments and empty text nodes

--csp: Extract inline scripts to a separate file (uses <output filename>.js)

--inline: The opposite of CSP mode, inline all assets (script and css) into

the document

--csp --inline: Bundle all javascript (inline and external) into <output

filename>.js

Config:

JSON file for additional options

{

"excludes": {

"imports": [ "regex-to-exclude" ],

"styles": [ "regex-to-exclude" ],

"scripts": [ "regex-to-exclude" ],

}

}

You can see where some concessions were made (by way of the JSON configuration). There isn’t an attempt to fully encompass every conceivable option by way of CLI arguments; the important arguments are covered and the rest can be configured via a simple JSON object.

This allows vulcanize to be reused in ways that the authors didn’t necessarily intend. For a tool like vulcanize, this might mean incorporating it as part of tasks in tools like grunt, gulp, or broccoli. For other command-line tools like jshint or coffeescript, this could also mean allowing the library to be consumed by the browser. If all of these tools were written “CLI-first,” this would be much less of a possibility.
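To make that concrete, here is a minimal sketch of driving Vulcanize as a library rather than through its CLI. The API shifted between releases, so treat the option names and the setOptions/processDocument pair (which the 0.x line exposed) as assumptions to verify against the version you have installed:

var vulcan = require('vulcanize');

// Option names mirror the CLI flags; verify them for your version.
vulcan.setOptions({
  input: 'components/src/x-dialog.html',
  output: 'x-dialog.html',
  inline: true
}, function (err) {
  if (err) {
    console.error(err);
    process.exit(1);
  }
  // Reads the input document, inlines its assets, and writes the output.
  vulcan.processDocument();
});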

Running vulcanize without any options other than the input document produces a vulcanized.html file next to your input:

$ vulcanize components/src/x-dialog.html

$ ls -l components/src/*.html

-rw-r--r-- 7224 components/src/vulcanized.html

-rw-r--r-- 1303 components/src/x-dialog.html

Right away we notice that 1) vulcanize did something, since our output file is larger than the input, and 2) it did it fast. (The curious among you will also have noticed that vulcanize didn’t tell us a damned thing about what it did do—and, no, --verbose doesn’t change that.)

By default, vulcanize inlines imported HTML and leaves linked JavaScript in place. Depending on your use cases, that may be exactly what you want, but we’re looking to produce a fully encapsulated deliverable that can be imported easily. In this case, the --inline option is exactly what we want:

$ vulcanize --inline components/src/x-dialog.html

$ ls -l components/src/*html

-rw-r--r-- 357090 components/src/vulcanized.html

-rw-r--r-- 1301 components/src/x-dialog.html

Clearly, a 357 KB web component isn’t what we’re going for. We didn’t specify what we wanted to inline, so vulcanize inlined Polymer, jQuery, and Jenga along with all of our dialog’s source. That’s rarely what anyone would want to do, and fortunately it’s possible to configure what to include and exclude via an external JSON configuration file. I have a habit of prefixing the names of optional configuration files with a dot, so I’ve called this file .vulcanize.json. It looks like this:

{

"excludes": {

"imports": [

"polymer"

],

"scripts": [

"jquery",

"jenga"

]

}

}

Here we specify that we’re not looking to import the polymer HTML file, nor are we looking to inline jquery or jenga. Those values are all regular expressions, so they will match any files whose paths include the terms polymer, jenga, or jquery. We don’t have any other files, but this is worth noting in case you do have anything that might get matched (a common practice for jQuery plugins is to name them jquery.<plugin>.js, and it’s not unreasonable to think that some plugins may be better inlined than depended upon externally). With this configuration file in place, let’s have another go:

$ vulcanize --config .vulcanize.json --inline components/src/x-dialog.html

$ ls -l components/src/*html

-rw-r--r-- 30405 components/src/vulcanized.html

-rw-r--r-- 1301 components/src/x-dialog.html

This is substantially better. But we don’t want our built file to be called vulcanized.html, nor do we want it in our src folder, remember? This is where our choice of directory structure starts paying off when incorporated as part of a build process with a distributable artifact.

ARTIFACTS, DISTRIBUTABLES, AND BUILDS, OH MY!

The terms used here may be foreign to some people, but they go a long way toward describing an end goal effectively. “Widgets” and similar terms aren’t often appropriate because they can mean just about anything to different people. Let’s take a look at a few definitions:

Build process

A pipeline that is initiated and completes on its own.

Build

An individual run of a build pipeline. There can be successful builds and failed builds.

Artifact

A by-product.

Distributable

A self-contained file (or limited set of files) that can be distributed on its own.

Consumer

The end user (developer).

Publish

The act of making a distributable artifact available to consumers.

So, the term “build artifact” simply means a by-product of the build: something that does not exist as part of the source code and probably shouldn’t be tracked as source code.1 Analysis reports and test results are also build artifacts, but they are not (likely) distributable artifacts.

An example outside of web components and Polymer would be a library written in CoffeeScript. The CoffeeScript source code is the important stuff to be tracked by version control, and the build process produces executable JavaScript that is published as the distributable artifact.

In this case, our distributable artifact needs to be importable from the base of our directory structure, both for ease of use and, critically, so that it can access dependencies relatively in exactly the same way we access them from our src folder. This can be done simply with the following command, run in the root of our project directory:

$ vulcanize --config .vulcanize.json \

--inline components/src/x-dialog.html \

-o x-dialog.html

This does just what we want, but there is a concern we’re not addressing. Some of our users may want to use our component in an environment locked down by a security model that supports the W3C’s Content Security Policy (CSP) recommendation. This is a security measure that prevents, among other things, inline scripts from being executed. Vulcanize supports this handily in a couple of different ways. The method that works best for us is to have Vulcanize aggregate, concatenate, and extract all scripts into a separate file:

$ vulcanize --config .vulcanize.json \

--inline --csp components/src/x-dialog.html \

-o x-dialog-csp.html

In order to accommodate both styles, we named the output file x-dialog-csp.html.

THE IMPORTANCE OF AUTOMATION

Now we have two commands we need to run upon every build, both with a number of arguments that can easily be forgotten, munged, or transposed between commands. Even with only one command like this, it is incredibly important that it be condensed down to a single idiotproof script. Why? Because builds are always the last step. They are the step we rush because the “important” stuff is done. We publish prematurely. We shout, “Let me just commit this and I’ll be done.” We run builds and leave without checking the results.

Builds are the shopping carts of the developer world. They are the task at the end of all the real work that isn’t accounted for, is trivialized, and isn’t a priority.

Build pipelines without automation force you to bring the shopping cart all the way back to the front of the store every single time. Automated builds are the equivalent of having a handy SuperMart™ employee waiting to whisk that cart away from you as soon as you are done. Think about it: do you always return your cart exactly where you should? No. You don’t. No one does.

And even if you really want to believe that you do it, your coworker doesn’t. That contributor to your project doesn’t. The user playing with your code won’t.

In order to keep these commands under control, we should put them in some sort of manageable automated build process. There are an infinite number of ways this can be done, depending on your background. It’s not as easy as having a build.sh and a build.bat, because that would still require two commands to be run, maintained, and kept in sync.

If you come from a Unixy upbringing, make is an extremely popular and battle-tested tool that seems to fit the bill entirely. The only major problem is that it isn’t easy to install and use on Windows, and once the commands start scaling you run into cross-compatibility issues at every turn.

Coming from a Java background? If so, ant is probably your go-to and can very easily be made to automate a couple of shell commands with a few deft swishes of the XML pen. It’s a bit heavy-handed, but it can definitely work.

If you come from the Ruby world, rake is very popular and can easily work just as well as the previous two tools. One of the benefits of rake, though, is how well it ties into Ruby tools and libraries in order to automate functionality that would normally be delegated to the command line. That doesn’t apply here.

Fortunately, a few tools have arisen in the JavaScript world that embrace the Node.js and JavaScript ecosystem to provide massive amounts of functionality in tight little automated packages. By far the most popular are grunt and gulp, both of which have plugins that tie directly into vulcanize as a library.

Gulp

Gulp became popular after Grunt and addresses some of the drawbacks of the earlier tool. Most notably, it alleviates the verbose and heavy configuration syntax with pure code and the long wait times with a lot of asynchronous, stream-based beauty.

Gulp’s community has exploded, and a lot of the major tasks that people use Grunt for have Gulp equivalents. The process of using this tool is excellent, straightforward, and scalable.
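For a sense of the style, a Gulp version of our vulcanize step might look roughly like the following. The gulp-vulcanize plugin is real, but the options shown are an assumption that the plugin forwards them to the underlying library; check its documentation for the version you install:

// A rough sketch of our vulcanize step as a Gulp task.
var gulp = require('gulp');
var vulcanize = require('gulp-vulcanize');

gulp.task('vulcanize', function () {
  return gulp.src('components/src/x-dialog.html')
    .pipe(vulcanize({
      inline: true,
      excludes: {
        imports: ['polymer'],
        scripts: ['jquery', 'jenga']
      }
    }))
    .pipe(gulp.dest('.'));
});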

Whatever you choose, you will be taken care of; the drawbacks of one are benefits of the other. We’ll be working with Grunt from this point on because it is the most popular tool at the moment and shows all signs of continuing to improve and grow.

Grunt

Grunt became popular nearly immediately after it came on the scene because it answered a question so many frontend developers were starting to have at the exact same time: “How do I manage these scripts I run every time I am at the end of development?” The road up to Grunt was littered with numerous options that linted your code, optimized it, built its modules, transpiled whatever, and then bundled everything up at the end. All of this quickly became unmaintainable, and something had to be done. Ben Alman, then at Bocoup, was looking for a way to automate the cumbersome maintenance tasks that came with managing many jQuery plugins, and out of that frustration came Grunt.

Grunt has its rough edges, but it was at the forefront of the modular, composable command pipeline, and as such it amassed well over 1,500 plugins in short order, doing everything from simple concatenation to building AMD modules to transpiling ECMAScript 6 code into IE8-compatible JavaScript.

Grunt is not the answer to everything, but it’s a great answer to the problem we have now: how do we automate the usage of Vulcanize to create multiple web component artifacts?

To get started with Grunt, it’s important to know how it’s architected. Grunt leverages Liftoff (written by Tyler Kellen and born out of the experience of developing Grunt), a library that allows a tool to look for local modules and configuration to bootstrap the rest of its functionality. That translates to Grunt being a global tool, grunt, that leverages local modules for the entirety of its functionality. The global grunt executable is installed via the grunt-cli npm package:

$ npm install -g grunt-cli

Whenever this executable is run, it will look for a local installation of the grunt module. The reason it does this is to support various versions of the grunt library via any arbitrary version of the grunt command-line interface. With this pattern, you’ll rarely ever need to update grunt-cli on any local box, since the updated versions will be taken care of on a project-by-project basis. It’s a clever style that has worked out well and is emulated by several projects now.
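A minimal sketch of the Liftoff pattern looks like the following, for a hypothetical tool named hacker (the example name Liftoff’s own documentation uses). The run method on the local module is an assumed convention, not part of Liftoff:

var Liftoff = require('liftoff');

// The global CLI only locates the project-local module and delegates to it.
var cli = new Liftoff({ name: 'hacker' });

cli.launch({}, function (env) {
  if (!env.modulePath) {
    console.error('Unable to find a local hacker install.');
    process.exit(1);
  }
  // env.configPath points at the local config file (e.g., Hackerfile.js);
  // env.modulePath points at the project-local library.
  require(env.modulePath).run(env.configPath);
});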

Running grunt without a local Grunt install will net you the following message:

$ grunt

grunt-cli: The grunt command line interface. (v0.1.11)

Fatal error: Unable to find local grunt.

If you're seeing this message, either a Gruntfile wasn't found or grunt

hasn't been installed locally to your project. For more information about

installing and configuring grunt, please see the Getting Started guide:

http://gruntjs.com/getting-started

In an established project, this almost always means that you have yet to run npm install. In a new project, it just means you haven’t installed grunt locally yet (or, if you have already run npm install, then it means you haven’t saved grunt as a development dependency to the project).

To get started using Grunt for our project, we need to initialize the project with npm so that dependencies can be tracked automatically, similar to how this was done with Bower in the previous chapter:

$ npm init

This utility will walk you through creating a package.json file.

It only covers the most common items, and tries to guess sane defaults.

See `npm help json` for definitive documentation on these fields

and exactly what they do.

Use `npm install <pkg> --save` afterwards to install a package and

save it as a dependency in the package.json file.

Press ^C at any time to quit.

name: (x-dialog-grunt-vulcanize) x-dialog

version: (0.0.0)

[ ...snipped... ]

npm walks through some basic questions in order to set some sensible defaults in our package.json file. This package.json file is npm’s way of tracking the metadata associated with a project that either is published to npm or uses npm in some way. Even if publishing to npm is not in the pipeline (which it’s not, at least until npm explicitly supports client-side modules), it’s important to treat the metadata as relevant, because other projects use it informally as a central metadata touchpoint for JavaScript-related projects. This includes accessing things like version data, test commands, author names, etc.

Installing Grunt plugins is trivially easy thanks to npm, and plugins are searchable from the official Grunt website. Many popular and well-supported tasks are simply named grunt-<task>. For example, Vulcanize’s task is grunt-vulcanize, and it can be installed with npm alongside Grunt itself:

$ npm install grunt grunt-vulcanize
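If you add --save-dev to that command, npm also records both packages as development dependencies in package.json (we’ll use the same flag later when installing grunt-contrib-uglify). The result looks something like the following; the version ranges are illustrative only and will reflect whatever is current when you install:

{
  "name": "x-dialog",
  "version": "0.0.0",
  "devDependencies": {
    "grunt": "~0.4.5",
    "grunt-vulcanize": "~0.6.1"
  }
}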

NOTE

There was an initiative by Tyler Kellen shortly before Grunt 0.4 was released to centralize a wide number of common or first-class Grunt plugins under the umbrella of a Grunt Contrib team. Those on the team committed to keeping a large number of highly depended-upon tasks (the grunt-contrib plugins) as up-to-date as possible with the most current Grunt version. If there is a grunt-contrib-* version of the task you are looking for, it is recommended to start with that and move on only if it doesn’t suit your needs (or unless you know what you are doing).

Ben Alman created Grunt, but a lot of its functionality and consistency is largely due to Tyler Kellen and the Grunt Contrib team, which includes members like Sindre Sorhus, Kyle Young, and Vlad Filippov.

Gruntfiles

The basic configuration for Grunt build chains is done via a Gruntfile (spiritual kin to the Makefile, Rakefile, Jakefile, Gulpfile, and build.xml).

If you run grunt with a local installation but without an available Gruntfile.js, you’ll see the following error:

$ grunt

A valid Gruntfile could not be found. Please see the getting started guide for

more information on how to configure grunt: http://gruntjs.com/getting-started

Fatal error: Unable to find Gruntfile.

The Gruntfile is a plain old Node.js module that exports a function taking one argument, the context for the current project’s installed Grunt version. That grunt object is used to load plugins, configure tasks, and set up build chains. A basic, albeit useless, Gruntfile looks like this:

module.exports = function (grunt) {

};

Running grunt now will net you the error:

$ grunt

Warning: Task "default" not found. Use --force to continue.

Aborted due to warnings.

NOTE

One of the frustrating aspects of getting started with Grunt is the number of ways you can be tripped up without knowing why, or where to turn. Knowing the basic errors and their solutions will allow you to jump the hurdles a little bit faster and get productive sooner.

Grunt Tasks

Grunt runs on the concept of “tasks.” Each task can lead to a JavaScript function or can be made up of an array of other tasks, forming a task chain that in turn can lead to functions or further task chains. Grunt doesn’t have a single task built in, so by default it looks for a task registered as the default and complains if it cannot find one.
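For illustration, here is a sketch of a chain built from two hypothetical tasks; running grunt build or grunt test would run them individually, while the default chain runs both:

module.exports = function (grunt) {
  // Two plain tasks registered as functions.
  grunt.registerTask('build', function () {
    grunt.log.writeln('building...');
  });
  grunt.registerTask('test', function () {
    grunt.log.writeln('testing...');
  });
  // A task chain: running `grunt` with no arguments now runs both, in order.
  grunt.registerTask('default', ['build', 'test']);
};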

If we wanted to, we could define a very basic function that runs as the default task:

module.exports = function (grunt) {

grunt.registerTask('default', function () {

console.log('Running our own task!');

})

};

Now we have a fully qualified Grunt installation that actually does something!

$ grunt

Running "default" task

Running our own task!

Done, without errors.

The text around our printed string is interesting and further clarifies what kind of framework Grunt provides. Chunks of functionality are broken up into “tasks” and can end with success or failure. Any task that causes errors in part of a chain will abort the rest of the execution and cause the process to exit with a failure. If we were to throw an error there:

module.exports = function (grunt) {

grunt.registerTask('default', function () {

throw new Error("I broke.");

})

};

Grunt would respond appropriately, giving a warning and aborting with an error code:

$ grunt

Running "default" task

Warning: I broke. Use --force to continue.

Aborted due to warnings.

Admittedly, this default task is not very useful, and truth be told, we don’t really want to be writing our own plugins at this time—that’s not why one would choose Grunt off the bat. The real benefit comes in the community plugins.

Registering Tasks

In order for Grunt to be aware of a plugin installed via npm, you need to explicitly load it via loadNpmTasks. For example, to load our previously installed grunt-vulcanize plugin, we’d simply run grunt.loadNpmTasks('grunt-vulcanize');.

Many plugins register tasks themselves, but you’ll either need to run grunt --help to find out what the exact names are or read the documentation. There is no standard mapping from plugin name to task name. The rule of thumb, though, is that the registered task is the most obvious possible name for the task. For example, for a plugin like grunt-vulcanize the task is assumed to be (and is) vulcanize, for a plugin like grunt-contrib-jasmine the task is assumed to be jasmine, etc.

TIP

Use grunt --help in order to get a lot of contextual assistance as well as a list of the registered tasks in a project.

We can run our vulcanize task with the command grunt vulcanize, but since that is the main focus of our Grunt configuration right now, it may as well be the default:

module.exports = function (grunt) {

grunt.loadNpmTasks('grunt-vulcanize');

grunt.registerTask('default', [ 'vulcanize' ]);

};

Running grunt now will net us a new error, indicating that there are no “targets” found:

$ grunt

>> No "vulcanize" targets found.

Warning: Task "vulcanize" failed. Use --force to continue.

Aborted due to warnings.

There are two major task concepts in Grunt: regular tasks, which are simple functions run without any explicit configuration, and “multitasks,” which operate over a set of “targets.” Targets are individually configured runs of a task, configured manually or programmatically in the Grunt config setup.
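To make the distinction concrete, here is a minimal multitask sketch (a hypothetical greet task, not from any plugin), using the initConfig method covered in the next section. Each key under the task’s configuration other than options becomes a target:

module.exports = function (grunt) {
  grunt.initConfig({
    greet: {
      english: { options: { phrase: 'Hello' } },
      french: { options: { phrase: 'Bonjour' } }
    }
  });

  // A multitask runs once per target: `grunt greet` runs both targets,
  // while `grunt greet:french` runs only the "french" target.
  grunt.registerMultiTask('greet', function () {
    grunt.log.writeln(this.options().phrase + ' from the "' + this.target + '" target');
  });
};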

Grunt Configuration

The bulk of your Gruntfile will almost always be the configuration block, an object passed into the initConfig method. For people getting started with Grunt, it always starts innocently, but once the Grunt Kool-Aid kicks in and plugins start falling from the sky, the configuration quickly stretches to 50, then 100, then 300 lines. There are ways to manage it, but that’s for another book.

A Grunt configuration loading the vulcanize plugin starts off like this:

module.exports = function (grunt) {

grunt.loadNpmTasks('grunt-vulcanize');

var config = {};

grunt.initConfig(config);

grunt.registerTask('default', [ 'vulcanize' ]);

};

Task configurations are located in configuration properties of the same name. For a task like vulcanize, the configuration is located in a vulcanize property:

module.exports = function (grunt) {

grunt.loadNpmTasks('grunt-vulcanize');

var config = {

vulcanize: {}

};

grunt.initConfig(config);

grunt.registerTask('default', [ 'vulcanize' ]);

};

Multitask “targets” can be seen as individual configurations of a task, living one property deeper than the main task configuration. For example, for a target named “main” the task configuration would start at config.vulcanize.main, and to add another target, you’d add another key.

This is where the “library-first” pattern of tool development starts exhibiting true benefits. With grunt-vulcanize, we’re not interacting with a command line at all; we’re tying directly into the library itself, passing commands in and massaging return values into things that make sense to Grunt. This allows our previous work on the command line to translate over trivially, as the command-line options map to library options. Our primary vulcanize run then looks like this:

var config = {

vulcanize: {

main: {

src: "components/src/x-dialog.html",

dest: "x-dialog.html",

options: {

inline: true,

excludes : {

imports: [

"polymer"

],

scripts: [

"jquery",

"jenga"

]

}

}

}

}

};

Using that configuration, we can now use Grunt to automate our base vulcanize run:

$ grunt

Running "vulcanize:main" (vulcanize) task

OK

Done, without errors.

Now to add our CSP build, we just add a new target with similar configuration and one new option. You may be thinking that sounds like a lot of duplicated configuration if this pattern holds true for other tasks. Thankfully, Grunt has a way of setting common configuration through an options property at the root task configuration level. With no actual configuration except two targets and shared options, the properties of the vulcanize task then look like this:

var config = {

vulcanize: {

options: {},

main: {},

csp: {}

}

};

Adding the configuration, we then have two distinct targets with the same sources and nearly the same options, except the CSP target has a different destination and one added flag:

var config = {

vulcanize: {

options: {

inline: true,

excludes : {

imports: [

"polymer"

],

scripts: [

"jquery",

"jenga"

]

}

},

main: {

src: "components/src/x-dialog.html",

dest: "x-dialog.html"

},

csp: {

src: "components/src/x-dialog.html",

dest: "x-dialog-csp.html",

options: {

csp: true

}

}

}

};

This is already a hefty amount of configuration for a seemingly basic task, and this is one of the primary complaints Grunt detractors have. I can empathize, but there’s very little superfluous configuration there, so it’s mostly a matter of recognizing where that configuration lies. With Grunt, it’s passed in via one object to one method. With other tools, it’s managed at different levels, but it still has to exist somewhere.

NOTE

For those actually following along in code, you’ll notice that vulcanize (and grunt-vulcanize) doesn’t perform much optimization outside of the inlining and concatenation of scripts. The JavaScript file that is created isn’t minified at all and can have a substantial weight to it. Ours, right now, already weighs in at ~30 KB. When gzipped, as it would likely be served from a server, it’s 6.3 KB—which isn’t much, but any extra weight shouldn’t be tolerated. Minified, that code would be around 9 KB, gzipped and minified 2.9 KB—substantial enough savings to worry about.

Fortunately, we’re using Grunt and can tie a virtually infinite number of plugins into our flow!

The method of chaining together disparate tasks may take some creativity, but the overall benefit should be obvious. To minify our JavaScript we’ll use grunt-contrib-uglify, and we’ll vulcanize the CSP build first in order to get all of our JavaScript in one place without excess configuration. We’ll minify that file in place, and then run vulcanize on the CSP build to inline the minified JavaScript into a single HTML file. The total Gruntfile.js follows—see if you can spot the changes and the reasoning behind them (and don’t forget to install grunt-contrib-uglify via npm install --save-dev grunt-contrib-uglify!):

module.exports = function (grunt) {

grunt.loadNpmTasks('grunt-vulcanize');

grunt.loadNpmTasks('grunt-contrib-uglify');

var config = {

uglify: {

csp: {

files: {

'x-dialog-csp.js': ['x-dialog-csp.js']

}

}

},

vulcanize: {

options: {

inline: true,

excludes : {

imports: [

"polymer"

],

scripts: [

"jquery",

"jenga"

]

}

},

main: {

src: "x-dialog-csp.html",

dest: "x-dialog.html"

},

csp: {

src: "components/src/x-dialog.html",

dest: "x-dialog-csp.html",

options: {

csp: true

}

}

}

};

grunt.initConfig(config);

grunt.registerTask('default', [

'vulcanize:csp',

'uglify:csp',

'vulcanize:main'

]);

};

Publishing with Bower

Up until this point, Bower has been used to manage the installation of dependencies. Now that the component is nearing completion, it’s time to think about how consumers of this project might use it. Fortunately, if you’ve been managing dependencies with Bower already, then it becomes trivially easy to publish the package to the Bower registry.

The x-dialog package has already been using a bower.json file to keep track of its dependencies, so half of our metadata is already taken care of. Filling out the rest is fairly straightforward. Here’s what it might look like:

{

"name": "x-dialog",

"version": "0.0.0",

"authors": [

"Your Name <email@email.com>"

],

"description": "a dialog box web component",

"main": "./x-dialog.html",

"keywords": [

"dialog"

],

"license": "MIT",

"homepage": "http://github.com/YOURUSER/x-dialog.html",

"ignore": [

"**/.*",

"node_modules",

"bower_components",

"vendor",

"test",

"tests"

],

"dependencies": {

"polymer": "~0.3.3",

"jquery": "~2.1.1",

"jenga": "*"

}

}

Proper Git tags are an important aspect of using Git repository endpoints for Bower packages. If a consumer specifies a version of a library (say, 1.2.1), the only way Bower can look for that in a Git repository is by looking up the tag name v1.2.1. An extremely important note is that the Git tag will always win. If v1.2.1 is checked out and the bower.json file specifies the package version as 1.0.1, it won’t matter; whatever state the code was in when the repo was tagged v1.2.1 is what will be installed.

THE WONDER OF THE WEB

This is yet another reason why Bower’s usefulness is primarily due to the diligence of the web community, and another example of “worse is better.” A vast majority of projects are actually managed in public Git repositories, and maintainers of public projects try to do a really good job of limiting backward compatibility concerns because they just have to.

The limitations of the Web have encouraged a substantial community of people to adopt standards that “just work.” There have been countless attempts at making things “better,” but they always fail when confronted with “what just works” because “what just works” gets adopted first.

In order to make the tagging easier and to ensure things stay in line, you can tag your Git repository and update the metadata with Bower itself. Issuing a bower version 0.0.1 command will update bower.json and tag our Git repository with v0.0.1 so that all we have to do from this point onward is push our changes live:

$ bower version 0.0.1

$ git tag

v0.0.1

$ git push --tags

Registering the Component

The metadata is in order and the code is working and packaged; it’s time to make it public. Doing so is trivially easy with Bower and GitHub, because all we need to do is pick a name and link it to our public Git location. That’s as easy as:

$ bower register x-dialog git://github.com/WebComponentsBook/x-dialog.git

Bower will indeed make sure this endpoint exists, but it won’t do much more than that. Managing releases from this point onward is as simple as tagging our Git repository when we push new versions public. It’s simple enough to be dangerous, so it pays to follow a proper Git workflow and keep a stable master branch.

To use our new component, install it like anything else:

$ bower install x-dialog

All of the dependencies will be pulled in and it’ll be ready to go!

Summary

In this chapter we’ve solidified our process to account for packaging and deployment of our web component. After this point you should be able to create, test, package, deploy, and install an individual web component.

Some readers may feel compelled to question the value of all this if they never plan to distribute a web component publicly. The benefit of this process is the process. It is overhead that ensures appropriate dependency management, encapsulated logic, faster unit test suites, proper versioning of components, and overall easier maintenance long term. Getting into this flow may be a burden in the short term, but it pays off in spades.

1 JavaScript developers often do check their distributable artifacts into source control because of the nature of how JavaScript is used and because there is no good browser code registry. Even so, the artifacts are treated like versioned binaries, not editable source code with changes tracked by commits.