
Patterns and Code Organization

Common JavaScript “Gotchas”

Original Article

http://www.jblotus.com/2013/01/13/common-javascript-gotchas

James Fuller, jblotus

PHP was my first programming language, and my initial exposure to JavaScript was through libraries like jQuery. There were things about JavaScript that always seemed to trip me up in the beginning because they worked differently than in PHP. Heck, there are still some things today that are confusing. I want to share some of the things that I struggled with when I started working with JavaScript. I am going to cover the global namespace, this, knowing the difference between ECMAScript 3 and ECMAScript 5, asynchronous operations, prototypes, and simple JavaScript inheritance.

The Global Namespace

In PHP specifically, when you declare a variable or function outside of a class (or namespace block) you are essentially adding it to the global namespace. In JavaScript, there is no such thing as a namespace per se; rather, everything is attached to the global object. In the case of the web browser, that is the window object. The other key difference is that in JavaScript, functions and variables are attributes of the global object, which we typically refer to as properties.

This can be troublesome because you won’t get a warning in JavaScript if you overwrite a global function or property and it can actually be quite dangerous.

function globalFunction() {
  console.log('I am the original global function');
}

function overWriteTheGlobal() {
  globalFunction = function() {
    console.log('I am the new global function');
  }
}

globalFunction(); //outputs "I am the original global function"
overWriteTheGlobal(); //this will overwrite the original global function
globalFunction(); //outputs "I am the new global function"

One technique that is useful in JavaScript to ensure that your variables and functions are self-contained is to use an immediately-invoked function expression, commonly known as a self-executing anonymous function. I typically expose things to the outside world by passing in a carrier object to the function. This is a variation of the module pattern.

var module = {};

(function(exports){

  exports.notGlobalFunction = function() {
    console.log('I am not global');
  };

}(module));

function notGlobalFunction() {
  console.log('I am global');
}

notGlobalFunction(); //outputs "I am global"
module.notGlobalFunction(); //outputs "I am not global"

Inside the self-executing anonymous function, nothing leaks into the global scope, and we finish by attaching our function to the module variable. Technically you could just append properties directly to the module variable, but the reason we are passing it in to the function is to make it explicitly clear what we are attaching our function to. It also allows us to alias the passed-in object inside the function. The critical thing here is that we are declaring our dependencies upfront and not relying on global variables, other than the module variable.

You also might have noticed the var keyword. If you aren't sure of how it is used, a basic explanation is that preceding a variable declaration with var creates a variable scoped to the nearest containing function. If you omit the var keyword, then you are saying that you want to assign a new value to an existing variable higher up the scope chain, which may or may not be the global scope.

var imAGlobal = true;

function globalGrabber() {
  imAGlobal = false;
  return imAGlobal;
}

console.log(imAGlobal); //outputs "true"
console.log(globalGrabber()); //outputs "false"
console.log(imAGlobal); //outputs "false"

As you can see, it is quite dangerous to rely on globals in your functions, due to possible side effects and collisions that are bound to occur. Now what happens when we use the var keyword?

var imAGlobal = true;

function globalGrabber() {
  var imAGlobal = false;
  return imAGlobal;
}

console.log(imAGlobal); //outputs "true"
console.log(globalGrabber()); //outputs "false"
console.log(imAGlobal); //outputs "true"

JavaScript hoists the var declaration to the top of the function; the assignment still happens on the line where you wrote it. This is called variable hoisting.

To summarize: all variables are scoped to a function (which is itself an object), and where you declare a variable with var determines the function it is scoped to. Omitting var implies global scope for the variable.

Let’s look at how variable hoisting happens:

function variableHoist() {
  console.log(hoisty);
  hoisty = 1;
  console.log(hoisty);
  var hoisty = 2;
  console.log(hoisty);
}

variableHoist();
//outputs undefined (would get a ReferenceError if no var declaration existed in scope)
//outputs "1"
//outputs "2"

try {
  console.log(hoisty); //outputs ReferenceError (no global var "hoisty")
} catch (e) {
  console.log(e);
}

So as you can see, it doesn't actually matter where you put the var declaration in the function, because the variable gets created before the function executes any code. Now in practice, you generally want to put your var declarations at the top of the function, since that is where they end up anyway. It is also totally acceptable to initialize your variables at the top of the function; just be aware of the order of events here.

Functions declared with the function keyword in JavaScript (not variable assignment) also get hoisted. Behind the scenes, the entire function gets hoisted up and is made available for execution.

myFunction(); //outputs "i exist"

function myFunction() {
  console.log('i exist');
}

This wholesale function hoisting does not occur when you define a function by assigning a function expression to a var:

try {
  myFunction();
} catch (e) {
  console.log(e); //throws "Uncaught TypeError: undefined is not a function"
}

var myFunction = function() {
  console.log('i exist');
}

myFunction(); //outputs "i exist"

Understanding “this”

Since JavaScript uses function scope, the meaning of this is quite different than what you get in PHP, and causes a lot of confusion. Consider the following:

console.log(this); // outputs window object

var myFunction = function() {
  console.log(this);
}

myFunction(); //outputs window object

var newObject = {
  myFunction: myFunction
}

newObject.myFunction(); //outputs newObject

By default, this refers to the object a function is contained in. Since myFunction() is a property of the global object, this is a reference to the global object, which is window. Now when we mix myFunction() into newObject, this refers to newObject. In PHP and other similar languages, this always refers to the instance of a class containing the method. You could argue that JavaScript is doing something stupid here, but truthfully much of the power of the JavaScript language comes from this feature. In fact, we can even replace the value of this when invoking our JavaScript functions by using the call() or apply() methods.

var myFunction = function(arg1, arg2) {
  console.log(this, arg1, arg2);
};

var newObject = {};

myFunction.call(newObject, 'foo', 'bar'); //outputs newObject "foo" "bar"
myFunction.apply(newObject, ['foo', 'bar']); //outputs newObject "foo" "bar"

But let's not get ahead of ourselves. All we are doing here is invoking the function myFunction with a substitute value for this, passed as the first argument. The fundamental difference between call() and apply() is the way you pass arguments to the function: call() takes an unlimited number of arguments after the first one, while apply() expects an array of arguments as its second argument.

Libraries like jQuery perform a lot of magic by invoking things this way. Let’s look at the $.each() method in jQuery:

var $els = [$('div'), $('span')];
var handler = function() {
  console.log(this);
};

$.each($els, handler);

//iteration 1 outputs wrapped jquery dom element for "div" tag
//iteration 2 outputs wrapped jquery dom element for "span" tag

handler.apply({}); //outputs object

jQuery will often rewrite the value of this, so you should always try to be aware of what this means in the context of a jQuery event handler, or other such constructs.
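For instance, here is a minimal sketch (the #button element is hypothetical): inside a jQuery event handler, this is bound to the DOM element that fired the event rather than to window.

$('#button').on('click', function () {
  // jQuery binds "this" to the element that fired the event
  console.log(this); // the button element, not window
});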

Know the difference between ECMAScript 3 and ECMAScript 5

For many years, ECMAScript 3 was the standard in most browsers, but more recently ECMAScript 5 has made its way into most modern browsers (IE is still lagging behind). ECMAScript 5 introduced a lot of common-sense features to JavaScript and some native methods that you previously relied upon a library for, such as String.trim() and Array.forEach(). The problem is you still can't rely on these methods being available in browser environments if you have users on Internet Explorer.

Take a look at what happens when we try to use String.trim in IE 8:

var fatString = " my string ";

//in modern browsers
console.log(fatString); //outputs " my string "
console.log(fatString.trim()); //outputs "my string"

//in IE 8
console.log(fatString.trim()); //error: Object doesn't support property or method 'trim'

So in the interim, we can use methods like jQuery.trim to do this for us, which I believe falls back to String.trim if it is available in your browser for increased performance (native browser implementations are faster).
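If you would rather patch the gap yourself, a minimal polyfill sketch looks like this (the regex simply strips leading and trailing whitespace):

if (!String.prototype.trim) {
  String.prototype.trim = function() {
    // strip leading and trailing whitespace
    return this.replace(/^\s+|\s+$/g, '');
  };
}

" my string ".trim(); // outputs "my string", native or not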

You might not care or even need to know about all of the differences between ECMAScript 3 and ECMAScript 5, but it is generally a good idea to check the Mozilla Developer Network (MDN) function reference first to see which versions of the language a function is available in. Generally speaking, you should be fine if you are using a library like jQuery or underscore to handle this for you.

If you are interested in using a polyfill of ECMAScript 5 for older browsers, please check out https://github.com/kriskowal/es5-shim

Understanding Async

One of the things that tripped me up the most when beginning to work with JavaScript code, and jQuery in particular, was the fact that some operations are asynchronous. There were many times that I wrote code in a procedural manner, expecting a result to be returned immediately, without realizing it.

Take a look at this broken code:

var remoteValue = false;
$.ajax({
  url: 'http://google.com',
  success: function() {
    remoteValue = true;
  }
});

console.log(remoteValue); //outputs "false"

It took me a while to realize that you need to program around asynchronous calls using callbacks to deal with the outcome of your ajax calls.

var remoteValue = false;

var doSomethingWithRemoteValue = function() {
  console.log(remoteValue); //outputs true on success
}

$.ajax({
  url: 'https://google.com',
  complete: function() {
    remoteValue = true;
    doSomethingWithRemoteValue();
  }
});

Another cool thing is deferred objects (sometimes called promises), which you can use to program in a more procedural style:

var remoteValue = false;

var doSomethingWithRemoteValue = function() {
  console.log(remoteValue);
}

var promise = $.ajax({
  url: 'https://google.com'
});

//outputs "true"
promise.always(function() {
  remoteValue = true;
  doSomethingWithRemoteValue();
});

//outputs "foobar"
promise.always(function() {
  remoteValue = 'foobar';
  doSomethingWithRemoteValue();
});

You can use promises to chain callbacks in a style that is, in my opinion, a bit easier to work with than nested callbacks, in addition to a host of other benefits these objects offer.
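As a rough sketch of such a chain (the URLs are placeholders, and this chaining behavior assumes jQuery 1.8+, where then() returns a new promise; older versions used pipe() for this):

$.ajax({ url: '/step1' })
  .then(function() {
    // returning another promise makes the chain wait for it
    return $.ajax({ url: '/step2' });
  })
  .then(function(data) {
    console.log(data); // result of the second request
  })
  .fail(function() {
    console.log('something went wrong');
  });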

Animations in the browser are also asynchronous, so this is another common source of confusion. I'm not going to go into detail here, but you need to treat animations much like ajax requests in the way you handle them via callbacks. I'm not really an expert on the subject though, so please take a look at the jQuery .animate() method.
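To sketch the idea (assuming a positioned #box element), the completion callback is the place for follow-up work, because .animate() returns before the animation finishes:

$('#box').animate({ left: '200px' }, 500, function() {
  // runs only after the 500ms animation has finished
  console.log('animation complete');
});
console.log('this logs first; animate() returns immediately');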

Simple Inheritance in JavaScript

Grossly simplified, JavaScript clones objects to extend them, while PHP, Ruby, Python and Java use and extend classes. In JavaScript you have something called a prototype, and every object has one. In fact, all functions, strings, numbers and objects have a common ancestor, Object. There are two things about prototype to remember: blueprints and chains.

Each prototype is basically an object in itself that describes properties available when creating an instance of an object. The prototype chain is what allows prototypes to extend other prototypes; in fact, prototypes themselves can have prototypes. When a method or attribute does not exist on an object instance, it is looked for on that object's prototype, and the prototype's prototype, and so on, until the lookup finally yields undefined if no such property exists.
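A minimal sketch of that lookup (the Animal constructor is made up for illustration):

function Animal() {}
Animal.prototype.legs = 4;

var cat = new Animal();
console.log(cat.legs);     // 4, found on Animal.prototype
console.log(cat.toString); // found further up the chain, on Object.prototype
console.log(cat.wings);    // undefined, not found anywhere on the chain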

Thankfully, beginners generally don’t need to mess with this stuff at all, since it is easy enough to create an object literal and append properties to it at runtime.

var obj = {};

obj.newFunction = function() {
  console.log('I am a dynamic function');
};

obj.newFunction();

An easy way to extend objects that I use all the time is jQuery.extend()

var obj = {
  a: 'i am a lonely property'
};

var newObj = {
  b: function() {
    return 'i am a lonely function';
  }
};

var finalObj = $.extend({}, obj, newObj);

console.log(finalObj.a); //outputs "i am a lonely property"
console.log(finalObj.b()); //outputs "i am a lonely function"

ECMAScript 5 offers us Object.create(), which you can use to extend from an existing object, but you probably want to avoid it if you need to support older browsers. It does offer distinct advantages for property creation, such as setting the attributes of properties (yes, properties also have properties).

var obj = {
  a: 'i am a lonely property'
};

var finalObj = Object.create(obj, {
  b: {
    get: function() {
      return "i am a lonely function";
    }
  }
});

console.log(finalObj.a); //outputs "i am a lonely property"
console.log(finalObj.b); //outputs "i am a lonely function"
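Those attributes of properties are worth a quick look. A hedged sketch using Object.defineProperty(), which is also ES5:

var config = {};
Object.defineProperty(config, 'answer', {
  value: 42,
  writable: false,    // assignments are ignored (or throw in strict mode)
  enumerable: true,   // shows up in for..in loops
  configurable: false // cannot be deleted or redefined
});

config.answer = 0;
console.log(config.answer); // still 42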

You can get pretty deep into the subject of inheritance in JavaScript but the beautiful thing here again is that you really don’t have to due to the immense power and flexibility of the language.

Bonus Gotcha: Forgetting to use var in for loops

var i = 0;

function iteratorHandler() {
  i = 10;
}

function iterate() {
  //this iteration will only run once
  for (i = 0; i < 10; i++) {
    console.log(i); //outputs 0
    iteratorHandler();
    console.log(i); //outputs 10
  }
}

iterate();

The example is contrived, but you can see the danger here. The solution is to declare your iterator variables with var.

var i = 0;

function iteratorHandler() {
  i = 10;
}

function iterate() {
  //this iteration will run 10 times
  for (var i = 0; i < 10; i++) {
    iteratorHandler();
    console.log(i);
  }
}

iterate();

This all goes back to our scope rules. Remember to use var properly.

Summary

JavaScript may be the only language people don't need to learn before using it, but eventually you are going to run into some unexplained trouble. Other than avoiding your own bugs, learning JavaScript makes a lot of sense these days considering its rebirth and widespread availability. This post is by no means a comprehensive guide, but hopefully it will help a few people understand some of the fundamentals before being forced into writing more awful JavaScript code, secretly hoping to get reassigned to a backend project buried in database queries in happy PHP land.

Asynchronous JS: Callbacks, Listeners, Control Flow Libs and Promises

Original Article

http://sporto.github.io/blog/2012/12/09/callbacks-listeners-promises

Sebastian Porto, sporto.github.io

When it comes to dealing with asynchronous development in JavaScript there are many tools you can use. This post explains four of these tools and what their advantages are: callbacks, listeners, control flow libraries, and promises.

Example Scenario

To illustrate the use of these four tools, let’s create a simple example scenario.

Let’s say that we want to find some records, then process them and finally return the processed results. Both operations (find and process) are asynchronous.

Callbacks

Let's start with the callback pattern; this is the most basic and best-known pattern for dealing with async programming.

A callback looks like this:

finder([1, 2], function(results) {
  // ..do something
});

In the callback pattern we call a function that will do the asynchronous operation. One of the parameters we pass is a function that will be called when the operation is done.

Setup

In order to illustrate how they work we need a couple of functions that will find and process the records. In the real world these functions will make an AJAX request and return the results, but for now let’s just use timeouts.

function finder(records, cb) {
  setTimeout(function () {
    records.push(3, 4);
    cb(records);
  }, 1000);
}
function processor(records, cb) {
  setTimeout(function () {
    records.push(5, 6);
    cb(records);
  }, 1000);
}

Using the callbacks

The code that consumes these functions looks like this:

finder([1, 2], function (records) {
  processor(records, function(records) {
    console.log(records);
  });
});

We call the first function, passing a callback. Inside this callback we call the second function passing another callback.

These nested callbacks can be written more clearly by passing a reference to another function.

function onProcessorDone(records){
  console.log(records);
}

function onFinderDone(records) {
  processor(records, onProcessorDone);
}

finder([1, 2], onFinderDone);

In both cases the console log above will log [1,2,3,4,5,6].


Pros

· They are a very well-known pattern, so they are familiar and thus easy to understand.

· Very easy to implement in your own libraries / functions.

Cons

· Nested callbacks will form the infamous pyramid of doom as shown above, which can get hard to read when you have multiple nested levels. But this is quite easy to fix by splitting the functions, as also shown above.

· You can only pass one callback for a given event; this can be a big limitation in many cases.


Listeners

Listeners are also a well known pattern, mostly made popular by jQuery and other DOM libraries. A Listener might look like this:

finder.on('done', function (event, records) {
  // ..do something
});

We call a function on an object that adds a listener. In that function we pass the name of the event we want to listen to and a callback function. 'on' is one of many common names for this function; other common names you will come across are 'bind', 'listen', 'addEventListener', and 'observe'.

Setup

Let's do some setup for a listener demonstration. Unfortunately, the setup needed is a bit more involved than in the callbacks example.

First we need a couple of objects that will do the work of finding and processing the records.

var finder = {
  run: function (records) {
    var self = this;
    setTimeout(function () {
      records.push(3, 4);
      self.trigger('done', [records]);
    }, 1000);
  }
}
var processor = {
  run: function (records) {
    var self = this;
    setTimeout(function () {
      records.push(5, 6);
      self.trigger('done', [records]);
    }, 1000);
  }
}

Note that they are calling a method trigger when the work is done; I will add this method to these objects using a mix-in. Again, 'trigger' is one of the names you will come across; other common names are 'fire' and 'publish'.

We need a mix-in object that provides the listener behaviour; in this case I will just lean on jQuery for this:

var eventable = {
  on: function(event, cb) {
    $(this).on(event, cb);
  },
  trigger: function (event, args) {
    $(this).trigger(event, args);
  }
}

Then apply the behaviour to our finder and processor objects:

$.extend(finder, eventable);
$.extend(processor, eventable);

Excellent, now our objects can take listeners and trigger events.

Using the listeners

The code that consumes the listeners is simple:

finder.on('done', function (event, records) {
  processor.run(records);
});
processor.on('done', function (event, records) {
  console.log(records);
});
finder.run([1,2]);

Again, the console will log [1,2,3,4,5,6].


Pros

· This is another well understood pattern.

· The big advantage is that you are not limited to one listener per object, you can add as many listeners as you want. E.g.

finder
  .on('done', function (event, records) {
    // .. do something
  })
  .on('done', function (event, records) {
    // .. do something else
  });

Cons

· A bit more difficult to set up than callbacks in your own code; you will probably want to use a library, e.g. jQuery or bean.js.


A Flow Control Library

Flow control libraries are also a very nice way to deal with asynchronous code. One I particularly like is Async.js.

Code using Async.js looks like this:

async.series([
  function(){ ... },
  function(){ ... }
]);

Setup (Example 1)

Again we need a couple of functions that will do the work; as in the other examples, in the real world these functions would probably make an AJAX request and return the results. For now let's just use timeouts.

function finder(records, cb) {
  setTimeout(function () {
    records.push(3, 4);
    cb(null, records);
  }, 1000);
}
function processor(records, cb) {
  setTimeout(function () {
    records.push(5, 6);
    cb(null, records);
  }, 1000);
}

The Node Continuation Passing Style

Note the style used in the callbacks inside the functions above.

cb(null, records);

The first argument in the callback is null if no error occurs, or the error if one occurs. This is a common pattern in Node.js libraries, and Async.js uses it too. By using this style, the flow between Async.js and the callbacks becomes super simple.
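A small sketch of the convention (mightFail is a made-up function):

function mightFail(cb) {
  setTimeout(function () {
    var err = Math.random() < 0.5 ? new Error('boom') : null;
    // error first, result second
    cb(err, err ? null : 'the result');
  }, 100);
}

mightFail(function (err, result) {
  if (err) {
    return console.log('failed: ' + err.message);
  }
  console.log('got: ' + result);
});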

Using Async

The code that will consume these functions looks like this:

async.waterfall([
  function(cb){
    finder([1, 2], cb);
  },
  processor,
  function(records, cb) {
    alert(records);
  }
]);

Async.js takes care of calling each function in order after the previous one has finished. Note how we can just pass the 'processor' function; this is because we are using the Node continuation style. As you can see, this code is quite minimal and easy to understand.
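Error handling rides on the same convention: pass an error to a task's callback and Async.js skips the remaining tasks, jumping straight to an optional final callback. A quick sketch:

async.waterfall([
  function (cb) {
    cb(new Error('stop')); // an error as the first argument...
  },
  function (records, cb) {
    console.log('never runs'); // ...skips every remaining task
  }
], function (err, result) {
  console.log(err.message); // outputs "stop"
});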


Another setup (Example 2)

Now, when doing front-end development it is unlikely that you will have a library that follows the callback(null, results) signature. So a more realistic example will look like this:

function finder(records, cb) {
  setTimeout(function () {
    records.push(3, 4);
    cb(records);
  }, 500);
}
function processor(records, cb) {
  setTimeout(function () {
    records.push(5, 6);
    cb(records);
  }, 500);
}

// using the finder and the processor
async.waterfall([
  function(cb){
    finder([1, 2], function(records) {
      cb(null, records)
    });
  },
  function(records, cb){
    processor(records, function(records) {
      cb(null, records);
    });
  },
  function(records, cb) {
    alert(records);
  }
]);

It becomes a lot more convoluted but at least you can see the flow going from top to bottom.


Pros

· Usually code using a control flow library is easier to understand because it follows a natural order (from top to bottom). This is not true with callbacks and listeners.

Cons

· If the signatures of the functions don’t match as in the second example then you can argue that the flow control library offers little in terms of readability.

Photo credit: Helmut Kaczmarek / Foter / CC BY-NC-SA

Promises

Finally we reach our destination. Promises are a very powerful tool, but they are the least understood.

Code using promises may look like this:

finder([1,2])
  .then(function(records) {
    // .. do something
  });

This will vary widely depending on the promises library you use; in this case I am using when.js.

Setup

Our finder and processor functions look like this:

function finder(records){
  var deferred = when.defer();
  setTimeout(function () {
    records.push(3, 4);
    deferred.resolve(records);
  }, 500);
  return deferred.promise;
}
function processor(records) {
  var deferred = when.defer();
  setTimeout(function () {
    records.push(5, 6);
    deferred.resolve(records);
  }, 500);
  return deferred.promise;
}

Each function creates a deferred object and returns a promise. Then it resolves the deferred when the results arrive.
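Failures follow the same shape: reject the deferred instead of resolving it, and handle the rejection in a second callback passed to then(). A sketch (failingFinder is made up):

function failingFinder(records) {
  var deferred = when.defer();
  setTimeout(function () {
    deferred.reject(new Error('lookup failed'));
  }, 500);
  return deferred.promise;
}

failingFinder([1, 2]).then(
  function (records) { console.log(records); },             // never called
  function (err) { console.log('error: ' + err.message); }  // rejection handler
);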

Using the promises

The code that consumes these functions looks like this:

finder([1,2])
  .then(processor)
  .then(function(records) {
    alert(records);
  });

As you can see, it is quite minimal and easy to understand. When used like this, promises bring a lot of clarity to your code as they follow a natural flow. Note how in the first callback we can simply pass the ‘processor’ function. This is because this function returns a promise itself so everything will just flow nicely.


There is a lot more to promises:

· they can be passed around as regular objects

· aggregated into bigger promises

· you can add handlers for failed promises

The big benefit of promises

Now if you think that this is all there is to promises, you are missing what I consider the biggest advantage. Promises have a neat trick that neither callbacks, listeners nor control flow libraries can match. You can add a listener to a promise even when it has already been resolved; in this case that listener will trigger immediately, meaning that you don't have to worry about whether the event has already happened when you add the listener. This works the same for aggregated promises. Let me show you an example of this:
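A minimal sketch, reusing the finder() from the setup above:

var promise = finder([1, 2]);

// attach a handler two seconds later, long after the promise has resolved
setTimeout(function () {
  promise.then(function (records) {
    console.log(records); // fires right away: [1, 2, 3, 4]
  });
}, 2000);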

This is a huge feature for dealing with user interaction in the browser. In complex applications you may not know the order of actions that the user will take, so you can use promises to track user interaction. See this other post if interested.

Pros

· Really powerful, you can aggregate promises, pass them around, or add listeners when already resolved.

Cons

· The least understood of all these tools.

· They can get difficult to track when you have lots of aggregated promises with added listeners along the way.

Conclusion

That's it! These are, in my opinion, the four main tools for dealing with asynchronous code. Hopefully I have helped you to understand them better and given you more options for your asynchronous needs.

The Design of Code: Organizing JavaScript

Original Article

http://alistapart.com/article/the-design-of-code-organizing-javascript

Anthony Colangelo, alistapart.com

Great design is a product of care and attention applied to areas that matter, resulting in a useful, understandable, and hopefully beautiful user interface. But don’t be fooled into thinking that design is left only for designers.

There is a lot of design in code, and I don’t mean code that builds the user interface–I mean the design of code.

Well-designed code is much easier to maintain, optimize, and extend, making for more efficient developers. That means more focus and energy can be spent on building great things, which makes everyone happy–users, developers, and stakeholders.

There are three high-level, language-agnostic aspects to code design that are particularly important.

1. System architecture–The basic layout of the codebase. Rules that govern how various components, such as models, views, and controllers, interact with each other.

2. Maintainability–How well can the code be improved and extended?

3. Reusability–How reusable are the application’s components? How easily can each implementation of a component be customized?

In looser languages, specifically JavaScript, it takes a bit of discipline to write well-designed code. The JavaScript environment is so forgiving that it’s easy to throw bits and pieces everywhere and still have things work. Establishing system architecture early (and sticking to it!) provides constraints to your codebase, ensuring consistency throughout.

One approach I’m fond of consists of a tried-and-true software design pattern, the module pattern, whose extensible structure lends itself to a solid system architecture and a maintainable codebase. I like building modules within a jQuery plugin, which makes for beautiful reusability, provides robust options, and exposes a well-crafted API.

Below, I’ll walk through how to craft your code into well-organized components that can be reused in projects to come.

The module pattern

There are a lot of design patterns out there, and equally as many resources on them. Addy Osmani wrote an amazing (free!) book on design patterns in JavaScript, which I highly recommend to developers of all levels.

The module pattern is a simple structural foundation that can help keep your code clean and organized. A “module” is just a standard object literal containing methods and properties, and that simplicity is the best thing about this pattern: even someone unfamiliar with traditional software design patterns would be able to look at the code and instantly understand how it works.

In applications that use this pattern, each component gets its own distinct module. For example, to build autocomplete functionality, you’d create a module for the textfield and a module for the results list. These two modules would work together, but the textfield code wouldn’t touch the results list code, and vice versa.

That decoupling of components is why the module pattern is great for building solid system architecture. Relationships within the application are well-defined; anything related to the textfield is managed by the textfield module, not strewn throughout the codebase–resulting in clear code.

Another benefit of module-based organization is that it is inherently maintainable. Modules can be improved and optimized independently without affecting any other part of the application.

I used the module pattern for the basic structure of jPanelMenu, the jQuery plugin I built for off-canvas menu systems. I’ll use that as an example to illustrate the process of building a module.

Building a module

To begin, I define three methods and a property that are used to manage the interactions of the menu system.

var jpm = {
  animated: true,
  openMenu: function( ) {
    ...
    this.setMenuStyle( );
  },
  closeMenu: function( ) {
    ...
    this.setMenuStyle( );
  },
  setMenuStyle: function( ) { ... }
};

The idea is to break down code into the smallest, most reusable bits possible. I could have written just one toggleMenu( ) method, but creating distinct openMenu( ) and closeMenu( ) methods provides more control and reusability within the module.
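For instance, a toggle can then be composed from the two smaller methods instead of duplicating their logic. A rough sketch (the isOpen flag is hypothetical, standing in for the plugin's real state tracking):

var jpm = {
  isOpen: false,
  openMenu: function( ) { this.isOpen = true; },
  closeMenu: function( ) { this.isOpen = false; },
  toggleMenu: function( ) {
    // composed from the smaller methods rather than duplicating them
    if (this.isOpen) {
      this.closeMenu( );
    } else {
      this.openMenu( );
    }
  }
};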

Notice that calls to module methods and properties from within the module itself (such as the calls to setMenuStyle( )) are prefixed with the this keyword–that’s how modules access their own members.

That’s the basic structure of a module. You can continue to add methods and properties as needed, but it doesn’t get any more complex than that. After the structural foundations are in place, the reusability layer–options and an exposed API–can be built on top.

jQuery plugins

The third aspect of well-designed code is probably the most crucial: reusability. This section comes with a caveat. While there are obviously ways to build and implement reusable components in raw JavaScript (we’re about 90 percent of the way there with our module above), I prefer to build jQuery plugins for more complex things, for a few reasons.

Most importantly, it’s a form of unobtrusive communication. If you used jQuery to build a component, you should make that obvious to those implementing it. Building the component as a jQuery plugin is a great way to say that jQuery is required.

In addition, the implementation code will be consistent with the rest of the jQuery-based project code. That’s good for aesthetic reasons, but it also means (to an extent) that developers can predict how to interact with the plugin without too much research. Just one more way to build a better developer interface.

Before you begin building a jQuery plugin, ensure that the plugin does not conflict with other JavaScript libraries using the $ notation. That’s a lot simpler than it sounds–just wrap your plugin code like so:

(function($) {
  // jQuery plugin code here
})(jQuery);

Next, we set up our plugin and drop our previously built module code inside. A plugin is just a method defined on the jQuery ($) object.

(function($) {
  $.jPanelMenu = function( ) {
    var jpm = {
      animated: true,
      openMenu: function( ) {
        ...
        this.setMenuStyle( );
      },
      closeMenu: function( ) {
        ...
        this.setMenuStyle( );
      },
      setMenuStyle: function( ) { ... }
    };
  };
})(jQuery);

All it takes to use the plugin is a call to the function you just created.

var jpm = $.jPanelMenu( );

Options

Options are essential to any truly reusable plugin because they allow for customizations to each implementation. Every project brings with it a slew of design styles, interaction types, and content structures. Customizable options help ensure that you can adapt the plugin to fit within those project constraints.

It’s best practice to provide good default values for your options. The easiest way to do that is to use jQuery’s $.extend( ) method, which accepts (at least) two arguments.

As the first argument of $.extend( ), define an object with all available options and their default values. As the second argument, pass through the passed-in options. This will merge the two objects, overriding the defaults with any passed-in options.

(function($) {
  $.jPanelMenu = function(options) {
    var jpm = {
      options: $.extend({
        'animated': true,
        'duration': 500,
        'direction': 'left'
      }, options),
      openMenu: function( ) {
        ...
        this.setMenuStyle( );
      },
      closeMenu: function( ) {
        ...
        this.setMenuStyle( );
      },
      setMenuStyle: function( ) { ... }
    };
  };
})(jQuery);

Beyond providing good defaults, options become almost self-documenting–someone can look at the code and see all of the available options immediately.

Expose as many options as is feasible. The customization will help in future implementations, and flexibility never hurts.
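Inside the module, methods then read from the merged this.options instead of hard-coded values. A rough sketch of how openMenu( ) might use them (the #menu element is hypothetical):

openMenu: function( ) {
  var $menu = $('#menu'); // hypothetical menu element
  if (this.options.animated) {
    $menu.slideDown(this.options.duration); // options drive behavior at runtime
  } else {
    $menu.show( );
  }
  this.setMenuStyle( );
},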

API

Options are terrific ways to customize how a plugin works. An API, on the other hand, enables extensions to the plugin’s functionality by exposing methods and properties for the implementation code to take advantage of.

While it’s great to expose as much as possible through an API, the outside world shouldn’t have access to all internal methods and properties. Ideally, you should expose only the elements that will be used.

In our example, the exposed API should include calls to open and close the menu, but nothing else. The internal setMenuStyle( ) method runs when the menu opens and closes, but the public doesn’t need access to it.

To expose an API, return an object with any desired methods and properties at the end of the plugin code. You can even map returned methods and properties to those within the module code–this is where the beautiful organization of the module pattern really shines.

(function($) {
  $.jPanelMenu = function(options) {
    var jpm = {
      options: $.extend({
        'animated': true,
        'duration': 500,
        'direction': 'left'
      }, options),
      openMenu: function( ) {
        ...
        this.setMenuStyle( );
      },
      closeMenu: function( ) {
        ...
        this.setMenuStyle( );
      },
      setMenuStyle: function( ) { ... }
    };

    return {
      open: jpm.openMenu,
      close: jpm.closeMenu,
      someComplexMethod: function( ) { ... }
    };
  };
})(jQuery);

API methods and properties will be available through the object returned from the plugin initialization.

var jpm = $.jPanelMenu({
  duration: 1000,
  ...
});
jpm.open( );

Polishing developer interfaces

With just a few simple constructs and guidelines, we’ve built ourselves a reusable, extensible plugin that will help make our lives easier. Like any part of what we do, experiment with this structure to see if it works for you, your team, and your workflow.

Whenever I find myself building something with a potential for reuse, I break it out into a module-based jQuery plugin. The best part about this approach is that it forces you to use–and test–the code you write. By using something as you build it, you’ll quickly identify strengths, discover shortcomings, and plan changes.

This process leads to battle-tested code ready for open-source contributions, or to be sold and distributed. I’ve released my (mostly) polished plugins as open-source projects on GitHub.

Even if you aren’t building something to be released in the wild, it’s still important to think about the design of your code. Your future self will thank you.

Why AMD?

A JavaScript module loader

Original Article

http://requirejs.org/docs/why.html

requirejs.org

This page talks about the design forces and use of the Asynchronous Module Definition (AMD) API for JavaScript modules, the module API supported by RequireJS. There is a different page that talks about general approach to modules on the web.

Module Purposes § 1

What are JavaScript modules? What is their purpose?

· Definition: how to encapsulate a piece of code into a useful unit, and how to register its capability/export a value for the module.

· Dependency References: how to refer to other units of code.

The Web Today § 2

(function () {
    var $ = this.jQuery;

    this.myExample = function () {};
}());

How are pieces of JavaScript code defined today?

· Defined via an immediately executed factory function.

· References to dependencies are done via global variable names that were loaded via an HTML script tag.

· The dependencies are very weakly stated: the developer needs to know the right dependency order. For instance, the file containing Backbone cannot come before the jQuery tag.

· It requires extra tooling to substitute a set of script tags into one tag for optimized deployment.

This can be difficult to manage on large projects, particularly as scripts start to have many dependencies in a way that may overlap and nest. Hand-writing script tags is not very scalable, and it leaves out the capability to load scripts on demand.

CommonJS § 3

var $ = require('jquery');
exports.myExample = function () {};

The original CommonJS (CJS) list participants decided to work out a module format that worked with today’s JavaScript language, but was not necessarily bound to the limitations of the browser JS environment. The hope was to use some stop-gap measures in the browser and hopefully influence the browser makers to build solutions that would enable their module format to work better natively. The stop-gap measures:

· Either use a server to translate CJS modules to something usable in the browser.

· Or use XMLHttpRequest (XHR) to load the text of modules and do text transforms/parsing in browser.

The CJS module format only allowed one module per file, so a “transport format” would be used for bundling more than one module in a file for optimization/bundling purposes.

With this approach, the CommonJS group was able to work out dependency references and how to deal with circular dependencies, and how to get some properties about the current module. However, they did not fully embrace some things in the browser environment that cannot change but still affect module design:

· network loading

· inherent asynchronicity

It also meant they placed more of a burden on web developers to implement the format, and the stop-gap measures meant debugging was worse. eval-based debugging or debugging multiple files that are concatenated into one file have practical weaknesses. Those weaknesses may be addressed in browser tooling some day, but the end result: using CommonJS modules in the most common of JS environments, the browser, is non-optimal today.

AMD § 4

define(['jquery'], function ($) {
    return function () {};
});

The AMD format comes from wanting a module format that was better than today’s “write a bunch of script tags with implicit dependencies that you have to manually order” and something that was easy to use directly in the browser. Something with good debugging characteristics that did not require server-specific tooling to get started. It grew out of Dojo’s real world experience with using XHR+eval and wanting to avoid its weaknesses for the future.

It is an improvement over the web’s current “globals and script tags” because:

· Uses the CommonJS practice of string IDs for dependencies. Clear declaration of dependencies and avoids the use of globals.

· IDs can be mapped to different paths. This allows swapping out implementation. This is great for creating mocks for unit testing (see the configuration sketch after this list). For the above code sample, the code just expects something that implements the jQuery API and behavior. It does not have to be jQuery.

· Encapsulates the module definition. Gives you the tools to avoid polluting the global namespace.

· Clear path to defining the module value. Either use “return value;” or the CommonJS “exports” idiom, which can be useful for circular dependencies.
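As a rough sketch of that ID-to-path mapping using RequireJS configuration (the mock path is made up):

require.config({
    paths: {
        // point the 'jquery' ID at a test double instead of the real library
        'jquery': 'test/mocks/fake-jquery'
    }
});

require(['jquery'], function ($) {
    // receives whatever the 'jquery' ID currently maps to
});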

It is an improvement over CommonJS modules because:

· It works better in the browser; it has the fewest gotchas. Other approaches have problems with debugging, cross-domain/CDN usage, file:// usage and the need for server-specific tooling.

· Defines a way to include multiple modules in one file. In CommonJS terms, the term for this is a “transport format”, and that group has not agreed on a transport format.

· Allows setting a function as the return value. This is really useful for constructor functions. In CommonJS this is more awkward, always having to set a property on the exports object. Node supports module.exports = function () {}, but that is not part of a CommonJS spec.

Module Definition § 5

Using JavaScript functions for encapsulation has been documented as the module pattern:

(function () {
    this.myGlobal = function () {};
}());

That type of module relies on attaching properties to the global object to export the module value, and it is difficult to declare dependencies with this model. The dependencies are assumed to be immediately available when this function executes. This limits the loading strategies for the dependencies.

AMD addresses these issues by:

· Register the factory function by calling define(), instead of immediately executing it.

· Pass dependencies as an array of string values, do not grab globals.

· Only execute the factory function once all the dependencies have been loaded and executed.

· Pass the dependent modules as arguments to the factory function.

//Calling define with a dependency array and a factory function
define(['dep1', 'dep2'], function (dep1, dep2) {
    //Define the module value by returning a value.
    return function () {};
});

Named Modules § 6

Notice that the above module does not declare a name for itself. This is what makes the module very portable. It allows a developer to place the module in a different path to give it a different ID/name. The AMD loader will give the module an ID based on how it is referenced by other scripts.

However, tools that combine multiple modules together for performance need a way to give names to each module in the optimized file. For that, AMD allows a string as the first argument to define():

//Calling define with module ID, dependency array, and factory function
define('myModule', ['dep1', 'dep2'], function (dep1, dep2) {

    //Define the module value by returning a value.
    return function () {};
});

You should avoid naming modules yourself, and only place one module in a file while developing. However, for tooling and performance, a module solution needs a way to identify modules in built resources.

Sugar § 7

The above AMD example works in all browsers. However, there is a risk of mismatched dependency names with named function arguments, and it can start to look a bit strange if your module has many dependencies:

1 define([ "require", "jquery", "blade/object", "blade/fn", "rdapi",

2 "oauth", "blade/jig", "blade/url", "dispatch", "accounts",

3 "storage", "services", "widgets/AccountPanel", "widgets/TabButton",

4 "widgets/AddAccount", "less", "osTheme", "jquery-ui-1.8.7.min",

5 "jquery.textOverflow"],

6 function (require, $, object, fn, rdapi,

7 oauth, jig, url, dispatch, accounts,

8 storage, services, AccountPanel, TabButton,

9 AddAccount, less, osTheme) {

10

11 });

To make this easier, and to make it easy to do a simple wrapping around CommonJS modules, this form of define is supported, sometimes referred to as “simplified CommonJS wrapping”:

define(function (require) {
    var dependency1 = require('dependency1'),
        dependency2 = require('dependency2');

    return function () {};
});

The AMD loader will parse out the require(‘’) calls by using Function.prototype.toString(), then internally convert the above define call into this:

define(['require', 'dependency1', 'dependency2'], function (require) {
    var dependency1 = require('dependency1'),
        dependency2 = require('dependency2');

    return function () {};
});

This allows the loader to load dependency1 and dependency2 asynchronously, execute those dependencies, then execute this function.

Not all browsers give a usable Function.prototype.toString() result. As of October 2011, the PS3 and older Opera Mobile browsers do not. Those browsers are more likely to need an optimized build of the modules for network/device limitations anyway, so just do a build with an optimizer that knows how to convert these files to the normalized dependency array form, like the RequireJS optimizer.

Since the number of browsers that cannot support this toString() scanning is very small, it is safe to use this sugared form for all your modules, particularly if you like to line up the dependency names with the variables that will hold their module values.

CommonJS Compatibility § 8

Even though this sugared form is referred to as the “simplified CommonJS wrapping”, it is not 100% compatible with CommonJS modules. However, the cases that are not supported would likely break in the browser anyway, since they generally assume synchronous loading of dependencies.

Most CJS modules, around 95% based on my (thoroughly unscientific) personal experience, are perfectly compatible with the simplified CommonJS wrapping.

The modules that break are ones that do a dynamic calculation of a dependency, anything that does not use a string literal for the require() call, and anything that does not look like a declarative require() call. So things like this fail:

//BAD
var mod = require(someCondition ? 'a' : 'b');

//BAD
if (someCondition) {
    var a = require('a');
} else {
    var a = require('a1');
}

These cases are handled by the callback-require, require([moduleName], function (){}), normally present in AMD loaders.
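A sketch of the first BAD case rewritten with callback-require (mod.doSomething() is a placeholder):

// the dynamic choice moves into the dependency array
require([someCondition ? 'a' : 'b'], function (mod) {
    mod.doSomething();
});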

The AMD execution model is better aligned with how ECMAScript Harmony modules are being specified. The CommonJS modules that would not work in an AMD wrapper will also not work as a Harmony module. AMD’s code execution behavior is more future compatible.

Verbosity vs. Usefulness

One of the criticisms of AMD, at least compared to CJS modules, is that it requires a level of indent and a function wrapping.

But here is the plain truth: the perceived extra typing and a level of indent to use AMD does not matter. Here is where your time goes when coding:

· Thinking about the problem.

· Reading code.

Your time coding is mostly spent thinking, not typing. While fewer words are generally preferable, there is a limit to that approach paying off, and the extra typing in AMD is not that much more.

Most web developers use a function wrapper anyway, to avoid polluting the page with globals. Seeing a function wrapped around functionality is a very common sight and does not add to the reading cost of a module.

There are also hidden costs with the CommonJS format:

· the tooling dependency cost

· edge cases that break in browsers, like cross domain access

· worse debugging, a cost that continues to add up over time

AMD modules require less tooling, hit fewer edge-case issues, and have better debugging support.

What is important: being able to actually share code with others. AMD is the lowest energy pathway to that goal.

Having a working, easy to debug module system that works in today’s browsers means getting real world experience in making the best module system for JavaScript in the future.

AMD and its related APIs have helped show the following for any future JS module system:

· Returning a function as the module value, particularly a constructor function, leads to better API design. Node has module.exports to allow this, but being able to use “return function (){}” is much cleaner. It means not having to get a handle on “module” to do module.exports, and it is a clearer code expression.

· Dynamic code loading (done in AMD systems via require([], function (){})) is a basic requirement. CJS talked about it, had some proposals, but it was not fully embraced. Node does not have any support for this need, instead relying on the synchronous behavior of require(‘’), which is not portable to the web.

· Loader plugins are incredibly useful. They help avoid the nested-brace indenting common in callback-based programming.

· Selectively mapping one module to load from another location makes it easy to provide mock objects for testing.

· There should only be at most one IO action for each module, and it should be straightforward. Web browsers are not tolerant of multiple IO lookups to find a module. This argues against the multiple path lookups that Node does now, and avoiding the use of a package.json “main” property. Just use module names that map easily to one location based on the project’s location, using a reasonable default convention that does not require verbose configuration, but allow for simple configuration when needed.

· It is best if there is an "opt-in" call that can be done so that older JS code can participate in the new system.

If a JS module system cannot deliver on the above features, it is at a significant disadvantage when compared to AMD and its related APIs around callback-require, loader plugins, and paths-based module IDs.

AMD Used Today § 9

As of mid October 2011, AMD already has good adoption on the web:

· jQuery 1.7

· Dojo 1.7

· EmbedJS

· Ender-associated modules like bonzo, qwery, bean and domready

· Used by Firebug 1.8+

· The simplified CommonJS wrapper can be used in Jetpack/Add-on SDK for Firefox

· Used for parts of sites on the BBC (observed by looking at the source, not an official recommendation of AMD/RequireJS)

What You Can Do § 10

If you write applications:

· Give an AMD loader a try. You have some choices:

o RequireJS

o curl

o lsjs

o Dojo 1.7+

· If you want to use AMD but still load one script at the bottom of the HTML page:

o Use the RequireJS optimizer either in command line mode or as an HTTP service with the almond AMD shim.

If you are a script/library author:

· Optionally call define() if it is available. The nice thing is you can still code your library without relying on AMD, just participate if it is available. This allows consumers of your modules to:

o avoid dumping global variables in the page

o use more options for code loading, delayed loading

o use existing AMD tooling to optimize their project

o participate in a workable module system for JS in the browser today.

If you write code loaders/engines/environments for JavaScript:

· Implement the AMD API. There is a discussion list and compatibility tests. By implementing AMD, you will reduce multi-module system boilerplate and help prove out a workable JavaScript module system on the web. This can be fed back into the ECMAScript process to build better native module support.

· Also support callback-require and loader plugins. Loader plugins are a great way to reduce the nested callback syndrome that can be common in callback/async-style code.

JavaScript Dependency Injection

Original Article

http://merrickchristensen.com/articles/javascript-dependency-injection.html

Merrick Christensen, merrickchristensen.com

Inversion of control and more specifically dependency injection have been growing in popularity in the JavaScript landscape thanks to projects like Require.js and AngularJS. This article is a brief introduction to dependency injection and how it fits into JavaScript. It will also demystify the elegant way AngularJS implements dependency injection.

Dependency Injection In JavaScript

Dependency injection facilitates better testing by allowing us to mock dependencies in testing environments so that we only test one thing at a time. It also enables us to write more maintainable code by decoupling our objects from their implementations.

With dependency injection, your dependencies are given to your object instead of your object creating or explicitly referencing them. This means the dependency injector can provide a different dependency based on the context of the situation. For example, in your tests it might pass a fake version of your services API that doesn’t make requests but returns static objects instead, while in production it provides the actual services API.

Another example could be to pass ZeptoJS to your view objects when the device is running Webkit instead of jQuery to improve performance.
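A tiny sketch of that idea (ListView and the stub are made-up names, not part of any library):

// the view receives its DOM library instead of referencing one globally
function ListView(domLib) {
  this.$ = domLib;
}

// production: hand it whichever library fits the environment
var view = new ListView(window.Zepto || window.jQuery);

// tests: hand it a stub and assert against the calls the view makes
var fakeDomLib = function(selector) { return { on: function() {} }; };
var testView = new ListView(fakeDomLib);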

The main benefits experienced by adopting dependency injection are as follows:

1. Code tends to be more maintainable.

2. APIs are more elegant and abstract.

3. Code is easier to test.

4. Code is more modular and reuseable.

5. Cures cancer. (Not entirely true.)

Holding dependencies to an API-based contract becomes a natural process. Coding to interfaces is nothing new; the server-side world has been battle-testing this idea for a long time, to the extent that the languages themselves implement the concept of interfaces. In JavaScript we have to force ourselves to do this. Fortunately, dependency injection and module systems are a welcome friend.

Now that you have some idea of what dependency injection is, let's take a look at how to build a simple implementation of a dependency injector, using AngularJS-style dependency injection as a reference implementation. This implementation is purely for didactic purposes.

AngularJS Style Injection

AngularJS is one of the only front-end JavaScript frameworks that fully adopts dependency injection, right down to the core of the framework. To a lot of developers the way dependency injection is implemented in AngularJS looks like complete magic.

When creating controllers in AngularJS, the arguments are dependency names that will be injected into your controller. The argument names are the key here: they are leveraged to map a dependency name to an actual dependency. Yeah, the word "key" was used on purpose; you will see why.

/* Injected */
var WelcomeController = function (Greeter) {
  /** I want a different Greeter injected dynamically. **/
  Greeter.greet();
};

Basic Requirements

Let's explore some of the requirements to make this function work as expected.

1. The dependency container needs to know that this function wants to be processed. In the AngularJS world that is done through the Application object and the declarative HTML bindings. In our world we will explicitly ask our injector to process a function.

2. It needs to know what a Greeter is before it can inject one.

Requirement 1: Making the injector aware.

To make our dependency injector aware of our WelcomeController we will simply tell our injector we want a function processed. It's important to know that AngularJS ultimately does this same thing, just using less obvious mechanisms, whether that be the Application object or the HTML declarations.

var Injector = {
  process: function(target) {
    // Time to process
  }
};

Injector.process(WelcomeController);

Ok, now that the Injector has the opportunity to process the WelcomeController we can figure out what dependencies the function wants, and execute it with the proper dependencies. This process is called dependency resolution. Before we can do that we need a way to register dependencies with our Injector object…

Requirement 2: Registering dependencies

We need to be able to tell the dependency injector what a Greeter is before it can provide one. Any dependency injector worth its bits will allow you to describe how a dependency is provided, whether that means being instantiated as a new object or returned as a singleton. Most injection frameworks even have mechanisms to provide a constructor some configuration and register multiple dependencies by the same name. Since our dependency injector is just a simplified way to show how AngularJS does dependency mapping using parameter names, we won't worry about any of that.

Without further excuses, our simple register function:

Injector.dependencies = {};

Injector.register = function(name, dependency) {
  this.dependencies[name] = dependency;
};

All we do is store our dependency by name so the injector knows what to provide when certain dependencies are requested. Let's go ahead and register an implementation of Greeter.

var RobotGreeter = {
  greet: function() {
    return 'Domo Arigato';
  }
};

Injector.register('Greeter', RobotGreeter);

Now our injector knows what to provide when Greeter is specified as a dependency.

Moving Forward

The building blocks are in place; it's time for the sweet part of this article. The reason I wanted to post this article in the first place, the nutrients, the punch line, the hook: a call to toString() with some sweet reflection. This is where the magic is. In JavaScript we don't have to execute a function immediately. The trick is to call toString on your function, which returns the function as a string. This gives us a chance to preprocess our functions as strings and turn them back into functions using the Function constructor, or just execute them with the proper parameters after doing some reflection. The latter is exactly what we will do here.

toString Returns Winning

var WelcomeController = function (Greeter) {
  Greeter.greet();
};

// Returns the function as a string.
var processable = WelcomeController.toString();

You can try it in your console!

Now that we have the WelcomeController as a string we can do some reflection to figure out which dependencies to inject.

Dependency Checking

It's time to implement the process method of our Injector. First let's take a look at injector.js from Angular. You'll notice the reflection starts on line 54 and leverages a few regular expressions to parse the function. Let's take a look at the regular expression, shall we?

var FN_ARGS = /^function\s*[^\(]*\(\s*([^\)]*)\)/m;

The FN_ARGS regular expression is used to select everything inside the parentheses of a function definition. In other words, the parameters of a function; in our case, the dependency list.

var args = WelcomeController.toString().match(FN_ARGS)[1];
console.log(args); // Returns Greeter

Pretty neat, right? We have now parsed out the WelcomeController's dependency list in our Injector prior to executing the WelcomeController function! Suppose the WelcomeController had multiple dependencies; this isn't terribly problematic, since we can just split the arguments on commas!

var MultipleDependenciesController = function(Greeter, OtherDependency) {
  // Implementation of MultipleDependenciesController
};

var args = MultipleDependenciesController
  .toString()
  .match(FN_ARGS)[1]
  .split(/\s*,\s*/); // split on commas, trimming stray whitespace around names

console.log(args); // Returns ['Greeter', 'OtherDependency']

The rest is pretty straightforward: we just grab the requested dependency by name from our dependencies cache and call the target function, passing the requested dependencies as arguments. Let's implement the function that maps our array of dependency names to their dependencies:

Injector.getDependencies = function(arr) {
  var self = this;
  return arr.map(function(dependencyName) {
    return self.dependencies[dependencyName];
  });
};

The getDependencies method takes the array of dependency names and maps it to a corresponding array of actual dependencies. If this map function is foreign to you check out the Array.prototype.map documentation.

Now that we have implemented our dependency resolver, we can head back over to our process method and execute the target function with its proper dependencies.

target.apply(target, this.getDependencies(args));

Pretty awesome, right?

Injector.js

var Injector = {

  dependencies: {},

  process: function(target) {
    var FN_ARGS = /^function\s*[^\(]*\(\s*([^\)]*)\)/m;
    var text = target.toString();
    // split on commas and surrounding whitespace so the names map cleanly
    var args = text.match(FN_ARGS)[1].split(/\s*,\s*/);

    target.apply(target, this.getDependencies(args));
  },

  getDependencies: function(arr) {
    var self = this;
    return arr.map(function(value) {
      return self.dependencies[value];
    });
  },

  register: function(name, dependency) {
    this.dependencies[name] = dependency;
  }

};
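Putting it together, a quick usage sketch of the injector above:

// register a dependency, then let the injector resolve and invoke
Injector.register('Greeter', {
  greet: function() { return 'Domo Arigato'; }
});

var WelcomeController = function (Greeter) {
  console.log(Greeter.greet()); // outputs "Domo Arigato"
};

Injector.process(WelcomeController);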

Example & Excuses

You can see the functioning injector we created in this example on jsFiddle.

This contrived example is not something you would use in an actual codebase; it was simply created to demonstrate the rich functionality JavaScript provides and to explain how AngularJS provides dependency injection. If this interests you, I highly recommend reviewing their code further. It's important to note this approach is not novel; other projects use toString to preprocess code. For example, Require.js uses a similar approach to parse and transpile CommonJS-style modules to AMD-style modules.

I hope you found this article enlightening and continue to explore dependency injection and how it applies to the client side world.

I really think there is something special brewing here.