This happens all the time: I'm having a conversation with someone about the world of software engineering, and I mention that I'm a JavaScript developer. "Oh yeah?" they say, "What frameworks do you use?" When I explain that I mostly don't use any frameworks, my conversation partner's eyes start to bulge, as they assume that I'm either some kind of superhero, or just plain crazy.
There's a shroud of mystery surrounding JavaScript. Most developers who are getting into JavaScript will jump right into frameworks like jQuery, or try to avoid dealing with JavaScript directly altogether and opt for languages that compile to JavaScript, such as CoffeeScript, JSX, and TypeScript. There is an overwhelming multitude of JavaScript frameworks available. Which is the One True Framework? How do I choose? Should I use a combination of different frameworks?
Maintenance
In a perfect world, there would be a framework that makes everything easy: from procedural animations, to AJAX requests, to complex data models and event notifications. In reality, however, you will often find big projects that comprise many different frameworks, added by different developers over the course of time. Some had great visions of a unified object notification system and used Backbone. Others were just searching StackOverflow for a way to create slide shows and fancy drop-down menus. Some brave souls even chose to cede direct control over the Document Object Model (DOM), and picked an MVVM framework like Angular, Ember, or Knockout to handle DOM manipulation automatically via data bindings.
Despite all the best intentions, the greater the number of frameworks used, the harder the code becomes to read and maintain. Resolving conflicts between frameworks becomes a routine part of development; the number of AJAX requests and unhandled errors grows, and render times suffer. Perhaps the least expected and most frustrating part is dealing with bugs within a framework itself. We tend to assume that any major framework is perfect and bug-free. I assure you, that is far from the truth! As you discover this, you'll often have to submit a bug report and put your project on hold while waiting for the bugfix to be released. Sometimes you just can't afford a delay, so you'll have to release that bugfix yourself. Is maintaining a third-party framework really worth what you were trying to accomplish in the first place?
Do you really need a framework?
Most use cases of frameworks that I've come across were trying to do very basic things: finding an element in the DOM, iterating over a collection, or updating a DOM node with a value or attribute. Not only are these operations built into JavaScript, but they are also faster, easier to read, and in most cases easier to use without a framework.
For demonstration purposes, we'll be assuming that Internet Explorer 9 is our oldest supported browser. If you need support for older browsers, consider using polyfills [link], Modernizr [link], or Babel [link].
Query selectors
The most common use of jQuery that I've come across is selecting a DOM node. jQuery makes this simple: when in doubt, wrap it in a $():
$('.my-class');
This was a ground-breaking technology in 2006, when it was introduced in jQuery, but two years later it became part of the Selectors API specification and was integrated into all major browsers:
document.querySelector('.my-class');
"But $()
is so much more consice," you say, "why would I want to type all those extra characters?" Because querySelector
is more descriptive than $
. That's a Good Thing. Not only is it now obvious what operation you're performing, but it's also clear what object it's being applied to, and it will always return either the first matching DOM node or null. This makes it easier to check if a DOM node exists:
// Vanilla JavaScript
var node = document.querySelector('.my-class');
if (node) {
console.log('yay!');
}
// jQuery
var object = $('.my-class');
if (object.length) {
console.log('yay!');
}
Keep in mind that the object that jQuery returns is not a DOM node, so if you're passing it to a function that expects a DOM node, you'll need to unwrap it:
// jQuery
validate(object.get(0));
// Vanilla JavaScript
validate(node);
You can get into a particularly nasty situation when the object returned by the jQuery selector is not what you expected:
// jQuery
var object = $(['.my-class']);
if (object.length) {
validate(object.get(0)); // Oops, passing a String instead of a Node
}
Another advantage of using the native querySelector is that it can be applied to any DOM node. While the jQuery or $ function accepts a second parameter to account for this, the syntax becomes a bit clunky and ambiguous:
// Vanilla JavaScript
var body = document.querySelector('body');
var node = body.querySelector('.my-class');
// jQuery
var body = $('body');
var node = $('.my-class', body);
Take a good look at the examples above. One is obviously shorter than the other, but which one reads better?
There's a hidden benefit to using the native querySelector: strict errors. Suppose that a developer misspelled body as bady in the first selector:
// jQuery
var body = $('bady');
var object = $('.my-class', body);
This will silently give you an empty jQuery object, with no indication that anything went wrong. Now, consider that these lines are separated by a few dozen lines of code. You're debugging your program, and you see that object is empty, without any context. What would your first instinct be? Mine would be that the .my-class DOM node is either missing, or has an incorrect class name. This situation can get much worse if you pass the resulting object to another library for further processing. Consider the same situation with a native selector:
// Vanilla JavaScript
var body = document.querySelector('bady');
var node = body.querySelector('.my-class');
Now you'll just get an error message that says something like TypeError: body is null, which tells you precisely that the problem is the body node, not the .my-class selector.
Changing values and attributes
Another very common use of jQuery is manipulating content inside a DOM node. These can be values, attributes, class names, etc.
// Vanilla JavaScript
console.log(
node.textContent,
node.innerHTML,
node.attributes,
node.attributes.src,
node.classList,
node.classList.contains('my-class')
);
node.textContent = 'Line one.';
node.innerHTML = 'Line one.<br/>Line two.';
node.setAttribute('src', 'http://www.example.com');
node.removeAttribute('src');
node.classList.add('other-class');
node.classList.remove('my-class');
// jQuery
console.log(
object.text(),
object.html(),
// N/A
object.attr('src'),
// N/A
object.hasClass('my-class')
);
object.text('Line one.');
object.html('Line one.<br/>Line two.');
object.attr('src', 'http://www.example.com');
object.attr('src', null);
object.addClass('other-class');
object.removeClass('my-class');
You can read and write contents to DOM nodes just as easily with native JavaScript as you can with jQuery. You can even read all attributes or all classes from the node, which you cannot do with jQuery. classList is not supported in IE 9, but it can be easily polyfilled.
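For instance, here is a minimal sketch of what such a shim could look like, assuming you only need add(), remove(), and contains(); for production code, an established polyfill is the safer choice:
if (!('classList' in document.documentElement)) {
    Object.defineProperty(Element.prototype, 'classList', {
        get: function () {
            var el = this;
            // Split className into individual, non-empty class names
            function classes() {
                return el.className.split(/\s+/).filter(Boolean);
            }
            return {
                contains: function (name) {
                    return classes().indexOf(name) !== -1;
                },
                add: function (name) {
                    if (classes().indexOf(name) === -1) {
                        el.className = (el.className + ' ' + name).trim();
                    }
                },
                remove: function (name) {
                    el.className = classes().filter(function keep(c) {
                        return c !== name;
                    }).join(' ');
                }
            };
        }
    });
}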
The only real advantage that jQuery provides here over native implementations is the attr function. This is because you are able to remove an attribute by specifying its value argument as null or undefined. This comes in handy when using conditionals:
// Vanilla JavaScript
attrVal ? node.setAttribute('src', attrVal) : node.removeAttribute('src');
// jQuery
object.attr('src', attrVal);
Setting or removing the attribute is more concise and readable in jQuery. Unfortunately, setting an attribute to null in native JavaScript will set it to the String literal 'null', so we must use removeAttribute() instead.
Is it worth requiring a framework just to get this neat feature? I would argue that this can easily be solved by overriding the default functionality of setAttribute with a partial polyfill:
(function IIFE() {
var _setAttribute = Element.prototype.setAttribute;
Element.prototype.setAttribute = function setAttribute(attr, value) {
value ? _setAttribute.call(this, attr, value) : this.removeAttribute(attr);
}
})();
Now if you call setAttribute with anything falsey as the second argument, it will remove the attribute. There aren't a lot of valid uses for HTML attributes with a String literal value of 'null', 'undefined', or 'false', but if you do need one, just specify the value explicitly as a String literal:
node.setAttribute('mvvm-boolean-binding', 'false');
Loops
Both jQuery and Underscore are commonly used to iterate over collections. Back in the old days, there were only two ways to iterate over an object: the traditional for loop (or while, which is almost the same aside from syntax), and the for..in loop. Both were clunky and had some serious caveats, so there was a pressing need for a library function to handle iteration gracefully.
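Here's a quick illustration of those caveats (shout is a made-up method, standing in for anything that ends up on the prototype):
var array = ['a', 'b', 'c'];

// The traditional for loop works, but all the index bookkeeping is manual.
for (var i = 0; i < array.length; i++) {
    console.log(array[i]);
}

// for..in iterates over enumerable keys, including anything added to the
// prototype, so it usually needs a hasOwnProperty guard.
Array.prototype.shout = function () {};
for (var key in array) {
    if (array.hasOwnProperty(key)) {
        console.log(array[key]); // without the guard, 'shout' would show up too
    }
}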
However, since ES5, several Array functions were introduced to make loops more manageable, namely forEach, every, and some. Let's take a look at some basic iteration:
// Underscore
_.each(array, function(value, index, array) {
...
});
// jQuery
$.each(array, function(index, value) {
...
});
// Vanilla JavaScript
array.forEach(function(value, index, array) {
...
});
All these are very similar, but the native forEach iterator has the advantage of being called directly on the iterable object, which makes the code easier to read.
What about iterating over an object's properties?
var obj = {
'key1': 'val1',
'key2': 'val2',
'key3': 'val3'
}
An object's named properties are not inherently iterable. Only indexed properties, like those of an Array or a NodeList, are iterable. Libraries like jQuery and Underscore abstract this fact away, but it's best to understand that when you're iterating over an object's property values, you're actually iterating over a list of the object's keys, which is itself an Array, and then fetching each value from the corresponding key. In native JavaScript you can do this explicitly:
Object.keys(obj).forEach(function printKeyVal(key) {
console.log(key, obj[key]);
});
This seems more verbose than object iterators built into libraries like jQuery and Underscore, but it's also more powerful. Since you're explicitly fetching a list of the object's keys, you can sort or reorder it!
Object.keys(obj).reverse().forEach(function printKeyVal(key) {
console.log(key, obj[key]);
});
Neat! What about conditionally breaking out of the loop? You can't do this with forEach, but that's what every and some are for. every will run through the loop until the callback returns a falsey value, while some will run through the loop until the callback returns a truthy value. Bear in mind, though, that if you don't return any value in every, it's considered falsey and the program will break out of the loop. This is different from how jQuery's or Underscore's iterators work, so make sure that you always return a value!
array.every(function conditionalIterator(value) {
console.log(value);
if (timeToLeave) return false; // Gotcha! When timeToLeave is false, nothing is returned, so the loop breaks anyway!
});
array.every(function conditionalIterator(value) {
console.log(value);
return timeToLeave ? false : true;
});
For this reason, it might be best to stick with some for conditional iterators; just keep in mind that unlike jQuery and Underscore, you have to return true to break out of the loop:
array.some(function conditionalIterator(value) {
console.log(value);
return timeToLeave; // This will work!
});
array.some(function conditionalIterator(value) {
console.log(value);
if (timeToLeave) return true; // This will also work!
});
All these native iterator functions are great when applied to an Array, but what about looping over an object with indexed properties that doesn't have any iterator functions? If you use the native querySelectorAll, you will run into this situation quite often, because the object it returns is a NodeList, a perfectly iterable construct that has no iterators built in.
document.querySelectorAll('.my-class').forEach(console.log); // Will throw an error
Enter JavaScript's all-powerful call and apply functions, which allow you to dynamically bind a context to any function call. This means that you can take an array's forEach function and execute it with a NodeList context:
[].forEach.call(document.querySelectorAll('.my-class'), console.log);
"What is this ugly bracket forEach thing?" asks a coworker. []
is shorthand for getting a reference to an Array
object, which we need to be able to call the forEach
function. You could use Array.prototype
instead. Next, we call the forEach
function using call
, which accepts the context (in our case a NodeList
) as the first argument. And finally, the second argument is our callback. This is a pretty standard practice, but admittedly it's not the prettiest thing in the world. You could, of course, make things easier by using another partial polyfill:
NodeList.prototype.forEach = [].forEach;
Now you can call forEach directly on a NodeList!
document.querySelectorAll('.my-class').forEach(console.log); // Works after polyfill!
Map, reduce, filter
These Array functions can help you transform an iterable list into another list or an arbitrary object. Think of them as advanced iterators. map is used to translate each element in the list to a different value, appending that value to a new list. The resulting list will always contain the same number of elements as the original list. The simplest example of how this can be used is translating a list of words into a different language.
var englishWords = [
'All',
'Your',
'Base',
'Are',
'Belong',
'To',
'Us'
];
var frenchWords = englishWords.map(function translateToFrench(word) {
return Babelfish.translate('english', 'french', word);
}); // ['tout', 'votre', 'base', 'sont', 'appartenir', 'à', 'nous']
Similarly, you can use map to collect values from a list of input fields, by using a querySelector and transforming the resulting NodeList into a list of corresponding values.
<input type="text" name="first-name" value="John"/>
<input type="text" name="last-name" value="Doe"/>
[].map.call(document.querySelectorAll('input[type=text]'), function getValue(node) {
return node.value;
}); // ['John', 'Doe']
reduce is used to transform an iterable list into an arbitrary value. This value can be anything, including a String, another list, or an object with named properties. The simplest example of this is producing an aggregate sum of all numbers in a list:
var numbers = [1, 2, 3];
numbers.reduce(function sum(aggregate, value) {
return aggregate + value;
});
This will calculate the sum of all the numbers in the list and return 6. The first argument of the callback is the value that will eventually be returned, the aggregate. The second argument of the callback is the value of each element of the iterable list. The reduce function itself takes an optional second argument that specifies the initial aggregate value. If you do not specify it, the aggregate argument is automatically set to the first element of the list, and the value argument is set to the next element: in our case 1 and 2, respectively. In many cases you will want to avoid this by specifying an appropriate initial aggregate as the second argument to reduce. In our case of calculating the sum of all numbers in a list, the initial aggregate should be 0.
var numbers = [1, 2, 3];
numbers.reduce(function sum(aggregate, value) {
return aggregate + value;
}, 0);
This will produce the same result as before, but the difference is that it will start with an aggregate of 0 and a value of 1.
Another good, practical example of how reduce could be used is converting a list of named properties into a URL query string:
var obj = {
'a': 'val1',
'b': '',
'c': 'val2'
};
Object.keys(obj).reduce(function toQueryString(aggregate, value, index) {
return aggregate
+ (index ? '&' : '?')
+ value
+ '='
+ obj[value];
}, '');
This will produce ?a=val1&b=&c=val2. We used an empty String as the initial aggregate value, because our aggregate is a String. If your aggregate is an object with named properties, you would use {} (an empty object) as the initial aggregate value. This is great for standardizing a dictionary of values to be used in an API call:
var options = {
mainScreen: 'on',
chanceToSurvive: 0,
moveZig: true
}
var commandSequence = Object.keys(options).reduce(function standardize(aggregate, key) {
aggregate[key.replace(/([A-Z])/g, '_$1').toLowerCase()]
= options[key] === 'off' ? false : Boolean(options[key]);
return aggregate;
}, {}); // {chance_to_survive: false, main_screen: true, move_zig: true}
Note that although you're transforming a key-value map into another key-value map with the same number of keys, you cannot use map to do this, since map would always output an Array, while reduce can output any value.
filter allows you to construct a subset of a list, where each element is conditionally kept or discarded:
[-1, -2, 0, 1, 2].filter(function positiveNumbers(value) {
return value > 0;
});
This will give us a list of positive numbers that is a subset of the original list: [1, 2]. Returning a truthy value will cause the element to be added to the resulting subset, while returning a falsey value or not returning anything will cause the element to be excluded from the subset.
A great practical example of using filter is compacting lists that contain empty elements. Let's first search StackOverflow for the most popular solution:
Array.prototype.clean = function(deleteValue) {
for (var i = 0; i < this.length; i++) {
if (this[i] == deleteValue) {
this.splice(i, 1);
i--;
}
}
return this;
};
test = new Array("", "One", "Two", "", "Three", "", "Four").clean("");
Looks fine at first sight, but is there a better way?
var test = ["", "One", "Two", "", "Three", "", "Four"].filter(Boolean);
Hold the phone! What is going on with filter(Boolean)?! Remember, filter takes an argument of a function that returns true for any elements that should be kept, and false for any elements that should be discarded. Boolean happens to be a function that takes a value, and returns true if the value is truthy, false if it's falsey. Empty String literals are falsey, so they are discarded.
Note that in our case, the values of input fields will always be String literals, so the only elements that would be discarded are empty String literals, which is what we want. However, be careful when using the Boolean filter on lists that contain Number literals that are equal to 0, since those would be discarded as well.
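For example, here's a quick sketch of the difference, using a stricter predicate when zeroes are legitimate values (the sample list is made up for illustration):
var quantities = [0, 1, 2, null, 3];
quantities.filter(Boolean); // [1, 2, 3]; the legitimate 0 is dropped
quantities.filter(function keepNumbers(value) {
    return typeof value === 'number';
}); // [0, 1, 2, 3]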
Here is the filter applied to our previous example of iterating over input fields with querySelectorAll, returning a subset of all the non-empty values:
[].map.call(document.querySelectorAll('input[type=text]'), function getValue(node) {
return node.value;
}).filter(Boolean);
There is no magic here, just Boolean logic. (hardy-har-har)
AJAX requests
This is a tough one. After all, how can you argue against being able to process an AJAX request, parse its response, and populate it into an HTML element with a syntax as concise as this?
$('.my-class').load('partial.html');
You might have even encountered something like this:
$('.my-class').load('/update/' + data).load('partial.html');
This will find an element, update something on the server, load and parse a partial, and render it inside the element! Sound too good to be true? It is. Try running it a couple of times. What actually happens is that both requests are initiated, and the responses are returned asynchronously, in no particular order. That means that in our example here, the partial.html content could be loaded before the state-changing /update/ + data request is processed.
Now, what happens if one of the responses returns with an error? It's silently ignored. Of course there are better, more structured ways to do this in jQuery that account for errors and asynchronicity, such as the .ajax() function, but let's look at how requests are processed in native JavaScript first.
var req = new XMLHttpRequest;
req.open('get', 'partial.html');
req.send();
req.addEventListener('error', function logError() {
console.log('Error sending request!');
});
req.addEventListener('progress', function noop() {}); // required by IE9
req.addEventListener('load', function processResponse() {
switch(req.status) {
case 200:
document.querySelector('.my-class').innerHTML = req.responseText;
break;
default:
console.log('Error! Got code:', req.status);
}
});
This is quite verbose, and contains a bit of code that is strictly legacy (IE9), but it gives you a good idea of things that can happen during a request, besides a successful response. Namely, the request could error out on something like domain resolution, or it could succeed but trigger an error response. Let's write a small wrapper that'll make things a bit easier:
function get(url) {
var successCallback;
var req = new XMLHttpRequest;
req.open('get', url);
req.send();
req.addEventListener('error', logError);
req.addEventListener('progress', function noop() {}); // required by IE9
req.addEventListener('load', function processResponse() {
switch(req.status) {
case 200:
successCallback && successCallback(req.responseText);
break;
default:
logError();
}
});
function logError() {
console.log('Error sending request!', req.status ? 'Got code: ' + req.status : '');
}
return {
onsuccess: function onsuccess(callback) {
successCallback = callback;
}
};
}
get('partial.html')
.onsuccess(function render(response) {
document.querySelector('.my-class').innerHTML = response;
});
This helper function will provide the basic functionality for sending a GET request and processing the response, but it doesn't account for other types of requests, or provide an easy way to sequence responses. This is one of the cases where you do need a third-party library. Here's how to do this in jQuery:
$.ajax('partial.html', {
success: function(response) {
document.querySelector('.my-class').innerHTML = response;
},
error: function() {
document.querySelector('.my-class').innerHTML = 'Unavailable';
console.error('Request failed!');
},
timeout: 3 * 1000 // 3 seconds
});
Note that just because we're using jQuery to handle AJAX, doesn't mean we have to use it for everything else. In fact, if we later choose to use a different library for AJAX, we'll have less code to rewrite.
Whether you use jQuery or write your own AJAX wrapper, it's important to account for the inherent asynchronicity of your requests, as well as to handle situations when an error response is received, or when no response is received at all.
There are more sophisticated ways to handle AJAX, such as using promises and libraries like Asynquence. These will allow you to sequence your responses more naturally, while still being able to make the requests asynchronously:
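For illustration, here is a minimal sketch using native Promises (IE 9 would need a Promise polyfill); promiseGet is a hypothetical wrapper around XMLHttpRequest, not part of any particular library:
function promiseGet(url) {
    return new Promise(function (resolve, reject) {
        var req = new XMLHttpRequest();
        req.open('get', url);
        req.addEventListener('load', function () {
            req.status === 200
                ? resolve(req.responseText)
                : reject(new Error('Got code: ' + req.status));
        });
        req.addEventListener('error', function () {
            reject(new Error('Error sending request!'));
        });
        req.send();
    });
}

// Both requests are fired off asynchronously, but the partial is only
// rendered once the state-changing update has also succeeded.
Promise.all([
    promiseGet('/update/' + data),
    promiseGet('partial.html')
]).then(function render(responses) {
    document.querySelector('.my-class').innerHTML = responses[1];
}).catch(function logError(error) {
    console.error(error);
});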
Custom events
The publish/subscribe pattern, or pubsub for short, is useful for having objects communicate with each other using events. The advantage of using this kind of pattern is decoupling the subscribers (the objects that receive messages) from the publisher (the object that emits a message), as well as from other subscribers. This means that neither the publisher, nor other subscribers need to know when a subscriber is added or removed.
DOM elements have this functionality built in, for a set of events that describe user interactions, e.g. clicking.
document.body.addEventListener('click', function printClicked(event) {
console.log('Clicked!');
});
If we want to add another subscriber to the 'click' event, we don't need to modify or override printClicked. We can just add another subscriber, or listener...
document.body.addEventListener('click', function printHooray(event) {
console.log('Hooray!');
});
Sometimes, you will need to generate custom events, which can be tied directly to a DOM event, a sequence of DOM events, or something else entirely. You can do this using CustomEvent and dispatchEvent.
document.body.addEventListener('reset', function handleReset(event) {
console.log('Hooray!');
});
document.body.dispatchEvent(new CustomEvent('reset'));
CustomEvent is not supported in IE 10 and below, but it can be easily polyfilled:
if (typeof window.CustomEvent !== 'function') {
window.CustomEvent = function CustomEvent(event, params) {
params = params || {
bubbles: false,
cancelable: false,
detail: undefined
};
var customEvent = document.createEvent('CustomEvent');
customEvent.initCustomEvent(event, params.bubbles, params.cancelable, params.detail);
return customEvent;
}
window.CustomEvent.prototype = window.Event.prototype;
}
This works great for DOM objects, but there are plenty of other use cases where the publish/subscribe pattern could be used. Wouldn't it be nice if we could just reuse addEventListener, removeEventListener, and dispatchEvent for plain objects? It turns out these are actually very easy to implement:
function MakeEventTarget(obj) {
var listeners = {};
function addEventListener(type, callback) {
listeners[type] = listeners[type] || [];
listeners[type].push(callback);
};
function removeEventListener(type, callback) {
var queue = listeners[type];
if (queue && queue.length) {
// Remove specified callback(s) from the subscriber queue.
// If no callback specified, remove all subscribers of that event type.
if (!callback) {
listeners[type] = null;
} else {
listeners[type] = queue = queue.filter(function removeCallback(listener) {
return listener !== callback;
});
}
}
return queue;
};
function dispatchEvent(event) {
var queue = listeners[event.type];
// A native Event's target property is read-only, so instead of reassigning
// it we pass the object itself as the callback context.
queue && queue.forEach(function trigger(callback) {
callback.call(this, event);
}, this);
};
Object.defineProperties(obj, {
addEventListener: {
value: addEventListener
},
removeEventListener: {
value: removeEventListener
},
dispatchEvent: {
value: dispatchEvent
}
});
return obj;
}
This is a very small and modular implementation that allows you to trigger events the same way you would on DOM objects. Here's how you would use it:
// Create a plain object
var obj = {
justAnObject: true,
isDomObject: false
}
// Make it an EventTarget
MakeEventTarget(obj);
// Event callbacks
function foo(event) {
console.log('foo dets:', event.detail);
}
function bar(event) {
console.log('bar dets:', event.detail);
}
function nope(event) {
console.log('nope dets:', event.detail);
}
// Add one listener to foo event and two listeners to bar event
obj.addEventListener('bar', bar);
obj.addEventListener('foo', foo);
obj.addEventListener('bar', bar);
// Trigger foo and bar events
obj.dispatchEvent(new CustomEvent('foo', {
detail: 'foolicious'
})); // foo
obj.dispatchEvent(new CustomEvent('bar', {
detail: 'barlicious'
})); // bar bar
// Try removing a non-existant listener and trigger an event
// with no listeners
obj.removeEventListener('nope', nope);
obj.dispatchEvent(new CustomEvent('nope')); // no output
// Remove the bar listeners and trigger the bar event
obj.removeEventListener('bar', bar);
obj.dispatchEvent(new CustomEvent('bar')); // no output
// Remove *all* listeners from foo and trigger the foo event
obj.removeEventListener('foo');
obj.dispatchEvent(new CustomEvent('foo')); // no output
For the most part, the usage syntax is the same as the native addEventListener, removeEventListener, and dispatchEvent in DOM objects. There is an added benefit of being able to remove all event listeners of a certain event type at once.
Backbone.js and jQuery provide this capability as well:
...
// Backbone.js
_.extend(obj, Backbone.Events);
obj.on('foo', foo);
obj.trigger('foo', 'foolicious');
obj.off('foo', foo);
// jQuery
$(obj).on('foo', foo);
$(obj).trigger('foo', 'foolicious');
$(obj).off('foo', foo);
Additionally, there are other libraries, such as Emitter, which provide just the pubsub implementation. This may be a better option if you are not already using jQuery or Backbone.js, since it's a much smaller and more modular library.
Having a pubsub library that is decoupled from the rest of your code base can also help you switch your project dependencies with ease. For instance, if you are relying on Backbone.js's pubsub capabilities, but subsequently decide to replace Backbone.js with Angular, you would either have to replace the pubsub component, or include Backbone.js just for the pubsub features. In practice, the latter option is much more common.
This may not seem like a big deal, but included libraries tend to quickly pile up, and cause load times to suffer. And, without added guidance, developers won't realize that a certain library is meant to be used just for a feature like pubsub, and you may wind up with models that are managed by two different frameworks, which can cause problems that are very hard to resolve.
Whichever library you choose to handle pubsub, it's important that the usage syntax is based on some sort of a standard. My approach is to take already existing DOM standards and extend them to other Objects.
MVVM pitfalls and alternatives
There's a pretty big disconnect between HTML and JavaScript. Manually managing the DOM with JavaScript is pretty unintuitive, especially when it comes to creating and rendering HTML tags.
You've probably seen something like this as a way to dynamically insert HTML into a webpage:
document.write('<div>' + text + '</div>');
$(el).html('<div>' + text + '</div>');
This is a very bad practice. For one, rendering variables as HTML content creates a risk of malicious code being injected into your web page. HTML code like this also cannot be validated, so if you've accidentally missed a tag or a trailing slash, you're instantly in a world of hurt.
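For example, imagine that text comes straight from user input (a hypothetical comment field, say):
// Any markup in the variable gets parsed and executed
var text = '<img src="x" onerror="alert(document.cookie)"/>';
$(el).html('<div>' + text + '</div>'); // the onerror handler runs: classic XSS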
There is a much safer and more reliable way to add HTML to your page, by constructing tags programmatically:
var div = document.createElement('div');
div.textContent = text;
el.appendChild(div);
This is both reliable and safe, since you're only using the variable as the text content of your newly created node. However, it isn't pretty, because all the visual cues that HTML markup provides, telling you how the elements are related to each other, are lost.
To solve this problem, template engines like Mustache and Handlebars were created, which allow you to store an HTML template that can be validated, and then populate it programmatically with values via data binding tags. MVVM frameworks such as Angular, Ember, and Knockout take this concept one step further, and have the binding between the model and the DOM object created automatically, by matching a tag attribute or value tag in HTML to an object property in the JavaScript controller:
<div ng-class="textClass">{{ text }}</div>
<div ng-class="textClass" ng-bind="text"></div>
$scope.textClass = 'first-name';
$scope.text = 'John';
For input tags, Angular would create a two-way binding, which allows you to both control your DOM object by changing your model, and to update your model with data that changed in the DOM object!
<input type="text" name="first-name" ng-model="firstName"/>
$scope.firstName = 'John';
The value inside the input box is initialized as "John", but if you change it to "George", the model ($scope.firstName) will be updated to "George".
This is a very powerful feature. However, Angular only allows two-way bindings for form elements. You're out of luck if you try to use this on an editable text node, for instance:
<div ng-bind="text" contenteditable="true">John</div>
console.log($scope.text); // Empty string :(
In order to accomplish this, you'd have to manually attach an event listener to your text node and update the model when it changes. At that point, your text node would be automatically updated again by Angular, because your model has changed. In order to avoid that, you might opt to implement a new type of binding via a custom directive. Suddenly, the rabbit hole got a lot deeper. Implementing a custom directive in Angular is not very straightforward, and it drastically reduces code readability.
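To give you an idea, here's a rough sketch of what such a directive might look like in Angular 1.x, assuming an app module and ngModel on the element; edge cases and sanitization are omitted:
// Two-way binding for contenteditable elements, e.g.
// <div contenteditable ng-model="text"></div>
app.directive('contenteditable', function () {
    return {
        restrict: 'A',
        require: 'ngModel',
        link: function (scope, element, attrs, ngModel) {
            // Model -> view
            ngModel.$render = function () {
                element.text(ngModel.$viewValue || '');
            };
            // View -> model
            element.on('blur keyup change', function () {
                scope.$evalAsync(function () {
                    ngModel.$setViewValue(element.text());
                });
            });
        }
    };
});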
The next problem you will likely encounter with an MVVM framework is using it in tandem with a UI framework like jQuery UI. This is what we attempted to do for one of our projects at SinglePlatform: use Angular for its awesome power of model-to-DOM binding, while combining it with jQuery UI's awesome drag-and-drop capability. The problem, as it turned out, is that Angular loses the binding with the DOM node as soon as it's removed from its container:
[example]
Unfortunately, jQuery UI happens to do precisely that when you drag a 'sortable' DOM element. The cleanest solution to this problem that we've come up with is resetting the entire collection model, forcing all DOM nodes to clear out and re-generate:
$scope.photoGrid = jQuery( "#all-photos" ).disableSelection().sortable({
disabled: true,
placeholder: 'sortable-placeholder',
update: function update(evt, ui) {
var sortOrder = $scope.photoGrid.sortable('toArray');
var allPhotos = $scope.allPhotos;
$scope.allPhotos = [];
$scope.$apply();
$scope.allPhotos = allPhotos;
$scope.allPhotos.forEach(function reorder(photo) {
photo.sort_order = sortOrder.indexOf(photo.photo_id );
});
$scope.$apply();
}
});
Not only is this messy, but it also drastically slows down rendering performance! Needless to say, we quickly learned our lesson: using jQuery UI with Angular is a bad idea. The Angular way is managing the model that is bound to the DOM, and never the DOM directly. However, Angular doesn't provide an implementation for drag-and-drop events. This meant that for our next project, we had to implement those using custom directives. It just seems like there's no winning with this one!
A fresh take on MVVM
Let's take the best of what MVVM frameworks have to offer: two-way model-to-DOM bindings, and make it work for us! First, we need to define a set of strict, simple rules:
- An HTML node has an attribute or tag name to indicate that it's a template.
- A template node is always hidden.
- A template node cannot have an id attribute.
- A template node is cloned every time it needs to be populated with data. The data populates into the cloned node.
- Bindings can be done via HTML attributes only. This way, there's no need to create a template parser; our template is just HTML with binding attributes.
- Only variable names are allowed in binding attributes, not function calls or conditions. This will greatly reduce the potential clutter that can happen inside an attribute value, and keep all implementations out of markup and in the controller, where they belong.
- All nodes can be one-way bound (bind-read), two-way bound (bind-edit), or one-time bound (bind-read-once). This will ensure that we don't trade away any performance, while still having all the flexibility of two-way bindings.
- All bindings will associate with the node's value property for form fields, and the textContent property for all other nodes.
- If a node is bound to an iterable object, such as an Array, it will be rendered once for each of the elements in that object.
- An additional attribute, as, can be specified to scope all the inner elements to the model property that matches the attribute value.
The structure of the template markup will look like this:
<template template-id="menus">
<div bind-read-once="menus" as="menu">
<div bind-edit="menu.title">Main Course</div>
<div bind-read-once="menu.items" as="item">
<div bind-read-edit="item.title">Filet Mignon</div>
<div bind-read-edit="item.description">This classic comes medium-rare, with a side of potatoes au gratin.</div>
<div>
Quantity: <input bind-read-edit="item.quantity" value="1"/>
</div>
<div>
$<span bind-read-edit="item.price">27</span>
</div>
</div>
</div>
</template>
Note that unlike Angular, there are only three types of bindings: bind-read, bind-edit, and bind-read-once. There is no separate attribute to handle repeating nodes, and there are no special attributes to handle form field bindings. All of these details will be abstracted away in the implementation of our custom MVVM framework.
Now let's take a look at our controller code!
app.menus = [
{
title: 'Main Course',
items: [
{
title: 'Filet Mignon',
description: 'This classic comes medium-rare, with a side of potatoes au gratin',
quantity: 1,
price: 27
},
{
title: 'Lamb Tagines',
description: 'Tender pieces of lamb cooked in a clay pot.',
quantity: 1,
price: 35
}
]
}
];
MakeEventTarget(app.menus);
// Assumption: the binding implementation dispatches a 'change' event whenever
// a bound value is edited, passing the binding name and the affected item in
// event.detail. This is a hypothetical convention for our own framework.
app.menus.addEventListener('change', function updateBoundValues(event) {
var attr = event.detail.binding;
var item = event.detail.item;
switch (attr) {
case 'item.quantity':
item.price = item.price * item.quantity;
break;
case 'item.price':
item.quantity = 1;
break;
}
});
You do need a framework!
It's probably not what you thought. No third-party framework is going to perfectly suit your needs. But at the same time, you'll want to organize your project into reusable components that will make future development easier. Now, you're going to be using the most important framework: your own.