Automating your javascript unit tests – Karma

why

  • instant feedback while writing your code, eliminating the need to remember to run tests before checking in, thus leading to stable builds.
  • continuous integration with our hudson build system.
  • testing on real and multiple browsers.

what

  • set up karma to watch your source files and run tests on code changes.
  • set up karma to run the tests during hudson build and validate the build.
  • set up karma to track our code coverage.

how

We use mocha as our test framework. This, along with chai (expect/should – BDD style), worked out great for us, giving effective yet readable tests. I cannot emphasize the importance of readable tests enough. We had a team member who did a feature walkthrough by running through the tests, which I thought was pretty rad. Product and QA could easily see what the feature set was, what was expected and what the outcome was. I guess we have to do a write-up sharing more of our excitement.
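
To illustrate the kind of readability we're after, a spec might read like the sketch below. The feature and the buildSummary function under test are made up for this example, and describe, it and expect are assumed to be provided as globals by karma's mocha and chai adapters, so this fragment only runs inside the test runner.

```javascript
// Illustrative mocha + chai (BDD, expect-style) spec. The feature and
// the buildSummary function are hypothetical; describe/it come from
// mocha and expect from chai, both loaded by the test runner.
describe('portfolio summary', function () {

  it('totals the balances of all linked accounts', function () {
    var summary = buildSummary([{balance: 100}, {balance: 250}]);
    expect(summary.total).to.equal(350);
  });

  it('ignores accounts the user has excluded', function () {
    var summary = buildSummary([{balance: 100, excluded: true}]);
    expect(summary.total).to.equal(0);
  });
});
```

Anyone reading the describe/it strings gets the feature spec for free, which is exactly what made our walkthroughs work.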

Before karma, we were running tests using individual test files. More often than not, you are working on multiple files, and remembering to run tests on all of them manually was becoming cumbersome and error prone. So we started researching test runners, and karma seemed to fit all our necessities: automation, continuous integration, running tests on multiple real browsers and support for mocha.

set up karma to watch your source files and run tests on code changes

This was fairly straightforward. Karma’s setup is driven by a single configuration file wherein you provide the location of the files you want to watch for changes, the browsers you want to run tests on, your testing framework and any preprocessors. Here’s a gist of our configuration file. The only tricky part was preprocessors. We use handlebars along with requirejs-handlebars-plugin for our templating purposes and serve our templates as individual html files. This was causing a problem: karma was converting them into js strings because of its default preprocessor, html2js. It took a bit of reading, but the fix was simple enough. The following additions to the config file fixed the problem.

preprocessors: {'scripts/**/*.html': []}
files: [..., {pattern: 'scripts/**/*.html', served: true, included: false}]

set up karma to run the tests during hudson build and validate the build

We created another karma configuration file for this purpose. We added a junitReporter so that we could export the test results in a format that could be interpreted by our hudson setup. The key differences are as follows. We are currently using PhantomJS for testing on our build systems, but in the near future, we want to extend this to real browsers.

reporters: ['progress', 'junit']
junitReporter: {outputFile: "testReports/unit-tests.xml"}
autoWatch: false
browsers: ['PhantomJS']
singleRun: true

set up karma to track our code coverage

Once we were able to configure karma to run in hudson, this was just a natural addition. The only additions to the karma configuration are as follows.

reporters: ['progress', 'junit', 'coverage']
coverageReporter: {
  type: 'cobertura',
  dir: 'coverage/'
}
preprocessors: {
  '**/scripts/**/*.js': 'coverage'
}

As you may have noticed, I used the words “simple” and “straightforward” quite a few times, and that is what karmajs is all about.

reads

http://karma-runner.github.io/0.10/index.html

http://googletesting.blogspot.com/2012/11/testacular-spectacular-test-runner-for.html


https://www.npmjs.org/package/karma-handlebars-preprocessor

Incremental Web Performance Improvements

Compression (gzip) of front end resources (js/css)

When we moved to Amazon’s Cloudfront (CDN), we lost the ability to serve gzipped versions of our script files and stylesheets. We are a single-page app with a large javascript and css footprint, and this was greatly affecting our application performance. We had two options to fix this.

  • Upload a gzipped version of each resource along with the original and set the content-encoding header for the file to gzip. The CDN would then serve the appropriate resource based on request headers.
  • Use a custom origin server that is capable of compressing resources based on request headers. The default origin, an Amazon Simple Storage Service (S3) bucket, is not capable of this, hence the problem.

Fortunately for us, all our application servers use apache as a web server, and we decided to leverage this setup as our custom origin server. We simply had to change our deployment process to deploy front-end resources to our app servers instead of an S3 bucket. This does make the deployment process a tiny bit more complex, but the benefits are huge.


Dividing our main javascript module into smaller modules.

As mentioned earlier, we are a single-page app and have a large javascript footprint. We bundle all our javascript files into a single module and fetch it during app initialization. As we grow, so will our javascript footprint, and we did not want to run into the long initialization load times demonstrated by Steve Souders in his book, High Performance Web Sites.

We use requirejs to build all our javascript modules into one single module. Fortunately, requirejs provides for combining modules into more than one output module in a very flexible manner. We package all our “common” modules, plus the main module we need on loading the application, into one module. All other modules are dynamically loaded only when they are required. More specific details will be posted soon.

Pre-caching our main javascript module.

I believe this is a very common practice and a simple implementation that reaps a huge performance benefit. We now pre-fetch our main javascript module during our login process using an iframe and the html object tag. The iframe keeps the login page load times independent of the resources being fetched through it. Again, there are many ways to implement this, as described by Steve Souders, but we chose this one for simplicity.
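
As a sketch of the approach (the file names and paths below are illustrative, not our actual ones), the hidden iframe isolates the fetch from the login page, and inside it an object tag pulls the bundle into the browser cache without executing it:

```html
<!-- login page: hidden iframe so the pre-fetch never blocks or slows
     the login page's own load. The src path is illustrative. -->
<iframe src="/prefetch.html" style="display:none" width="0" height="0"></iframe>

<!-- prefetch.html: the object tag downloads the main js bundle into
     the browser cache without executing it in the login page. -->
<object data="/scripts/app_main.js" width="0" height="0"></object>
```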

Additional Links

  • http://stackoverflow.com/questions/5442011/serving-gzipped-css-and-javascript-from-amazon-cloudfront-via-s3
  • http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html
  • http://requirejs.org/docs/optimization.html

Reusing Web Modules with RequireJS

why

  • Our application contains several modules, each of which uniquely addresses a key aspect of wealth management, e.g. Investment Checkup, Net Worth, etc. The application brings together various such aspects to present a comprehensive one-stop wealth management tool to the user.
  • However, each user’s needs are unique, and so are their motivations to use our application. So we wanted to build landing pages that tie a user’s need to a module, providing more context and motivation to use the module and eventually our entire offering.
  • We went native for our mobile presence to provide a great user experience, and that has served us well. However, there are a few modules of our application that are self-contained, with simpler ui design and fewer interactions with the rest of the application. We wanted to embed such modules in our native app and determine if we could still deliver a great, seamless user experience.

what

  • Build “stand-alone” versions of our web modules.
  • Publish the module with a unique url that will be used to embed it in other apps.

how

We use requirejs as our module loader and its optimization tool to package a module and its dependencies into a single unit for production use. The module definition declares all of its dependencies. During the development phase, the dependencies are loaded as required. In production mode, however, all the dependencies are packaged into a single file along with the module code to minimize http calls and optimize load times. For example:

A simple module definition would be as follows:

define([
  'jquery',
  'underscore'
], function ($, _) {

  var foo = {};
  // module code
  return foo;
});

And we reference the above module as follows:

<script data-main="scripts/foo" src="scripts/require.js"></script>

But most real-world applications will have several modules and will share a considerable number of framework libraries (jquery, underscore, backbone, etc.) and utility libraries, and our application is no exception. So we defined a common module that requires all the common libs, and we in turn require it as part of our main module. The sub-modules still continue to list all of their required dependencies, but as part of the build process we exclude them, both for optimization and to prevent any shared lib from being re-initialized/reset.
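
For illustration, such a common module might look like the sketch below. The file name and the exact lib list are our example rather than the actual file, and define() comes from require.js, so this fragment only runs under an AMD loader.

```javascript
// common.js -- illustrative sketch. Requiring 'common' anywhere
// guarantees the shared libs are loaded (and initialized) exactly
// once before any feature module runs. define() comes from require.js.
define([
  'jquery',
  'underscore',
  'backbone'
], function ($, _, Backbone) {
  // Nothing meaningful to export; the dependency list is the point.
  return {};
});
```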

When the application starts, it loads the main module first, and thereafter the required modules as the user navigates the application. Following is a gist of how our build file is defined:

({
  modules: [
    {
      name: 'app_main'
    },
    {
      name: 'foo',
      exclude: [
        'jquery',
        'underscore',
        'backbone',
        'services',
        'hbs'
      ]
    }
  ]
})

app_main will be loaded at the start of the application, and only if a user navigates to the foo module will the application load foo. This worked great for us in keeping our main module lean as we kept adding more features (modules), until the requirement arose to reuse some of the modules in other apps/clients (mobile).

Fortunately for us, this did not involve a lot of work and, more importantly, did not result in any redundant code. All we had to do was define another main module that includes our common module and the foo module. That is it. We had to refactor our common module a little to include only the core libs; other shared libs were loaded as part of the main module (app_main). This way we did not load any libs that were not needed for the foo module.

({
  modules: [
    {
      name: 'app_main'
    },
    {
      name: 'foo',
      exclude: [
        'jquery',
        'underscore',
        'backbone',
        'services',
        'hbs'
      ]
    },
    {
      name: 'app_foo'
    }
  ]
})

// app_main
require(['common'], function (common) {
  require(['main']);
});

// app_foo
require(['common'], function (common) {
  require(['foo']);
});

This was not just fortune. From the very start of this app’s development, we aimed not only for modular code, but also for the ability to extend any module to work by itself. Our entire MV* stack was architected that way, and we are glad the returns are truly amazing.

We have a SPA that is comprised of several modules, and at any time any module can be extended into its own SPA with minimal effort and no code changes to the modules themselves. Thanks to the amazing requirejs optimization tool and a little foresight.

More details on how we use requirejs and its optimization tools are presented here.

reads

http://addyosmani.github.com/backbone-fundamentals

http://requirejs.org/docs/optimization.html

http://requirejs.org/docs/optimization.html#wholemultipage

http://requirejs.org/


2013 HTML5 Dev Conf Key Takeaways

  • Embedding complex SVGs into HTML
    • a great talk about SVG and how box.js has used it to make its amazing viewer. apparently it’s more performant than Chrome’s built-in pdf viewer.
    • highly recommended read.
  • Scale and Performance: Data visualizations in modern browser
    • while the earlier talk was about not using canvas, this one was all about using canvas and its might.
    • very inspiring to see what folks have done; most of them seem to be in the publishing business (NYT and various media companies)
    • this and a couple of other similar talks strongly advocated the use of D3.js for visualizations
    • Learned about Datawrapper, a tool to create simple, embeddable and interactive charts.
  • Constraint validation: Native client side validation web forms
    • a talk about the native form validation that is available and supported in most browsers
  • Transforming the presentation of official statistics
    • amazing infographics; very inspiring to see huge amounts of data represented in easy-to-understand, interactive graphics.
  • ReactJS
    • new mv* js library from Facebook … oh what !!
    • it’s pretty much against everything that the current set of mv* libraries advocates
    • no templates, re-rendering your whole application when data changes… what??
    • and that is why it is an interesting read… never listen to anybody that says “DON’T RE-INVENT THE WHEEL”
  • JS Inconsistencies Across Browsers
    • very illuminating; gives you a brief insight into how browsers work and why the inconsistencies exist.
    • the talk about the event loop, iframes and xhr gives insight into how the browser processes them and advocates good coding practices when using them.
  • Continuous Delivery for JS Apps
    • the highlight of the conference for me. great speaker and very motivating
    • key takeaway: AUTOMATION

State Tracking with Backbone.js

One of the hallmarks of a great web experience is the ability to retrace where you’ve been, either through the browser’s “back” button or by returning to a previously viewed page. It’s a simple html concept, but it can be challenging to implement in a single-page application as big as ours. At Personal Capital, we use Backbone.js as the foundation for our web application, and it’s given us the ability to deliver a ton of feature-rich pages very quickly. But when it comes to tracking and retracing view states, Backbone.js only gives us the tools, not a framework to work with.

Routes, Query Params, and Internal Variables

There are three ways in which we can feed the state settings to our Backbone views:

  • Url Paths – Out of the box, Backbone gives us a router class that maps url paths to our client-side page views. In our single-page application, we’ve segmented our Backbone views into sections, sub-sections, and sub-pages so that they mimic the feel of a conventional site. Using url paths helps to inform the user about the organization of our site. The “portfolio” and “advisor” sections are good examples of where we nest multiple levels of sub-sections and sub-views within a section.
  • Query Params – We use a plug-in called “backbone-query-parameters” to abstract query strings into our application. Many of our pages/views are driven by a combination of factors, e.g. the selected chart in Account Details with a custom date range, and so sometimes it makes sense to expose these values to the url address bar so the state can easily be recreated and bookmarked. Query params are also a great way to feed multiple state settings to a Backbone view without having to worry about their ordinal position.
  • Internal Variables – Values that we do not want to expose to the address bar, and that are only good for a single session, are stored only in memory. Our Account Details pages make use of internal variables, such as the date range, to track and restore the state of the Backbone view when the user navigates between accounts.
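
Conceptually, all three sources end up merged into one settings object that the view consumes. A plain-js sketch of that idea (buildViewState and the sample values are illustrative, not our actual code):

```javascript
// Hypothetical sketch: merge url-path, query-param, and internal
// settings into one state object for a view. Later sources win on
// key collisions, mirroring "most specific setting takes precedence".
function buildViewState(pathSettings, queryParams, internalState) {
  var state = {};
  [pathSettings, queryParams, internalState].forEach(function (src) {
    Object.keys(src || {}).forEach(function (key) {
      state[key] = src[key];
    });
  });
  return state;
}

var state = buildViewState(
  {section: 'portfolio', subSection: 'holdings'},  // from the url path
  {startDate: '20140101'},                         // from ?startDate=...
  {selectedChart: 'balances'}                      // session-only value
);
console.log(state.subSection); // 'holdings'
```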

Using any combination of these three options gives us complete flexibility in setting the state of our Backbone views.  But without a framework, it is difficult to implement state management consistently across a large-scale application such as ours.

View State Model Solution

From the standpoint of a Backbone view, it shouldn’t matter how state settings are provided, e.g. via url paths, query params, or internal variables. Wouldn’t it be nice if the settings were provided all at once, say in a value object?

State Model

At Personal Capital, we built a state management framework that stores state settings in a value object, which we call the “state model”, and which persists in memory throughout the duration of a session. The model is segmented into the three kinds of state tracking mechanisms and looks like this:


function CashFlowState() {
  this.baseUrl = '/cash-flow';
  this.path = '';
  this.upStreamPath = '';

  this.optionalUrlParams = [];
  this.internalStateParams = ['startDate', 'endDate', 'userAccountIds'];
  this.userAccountIds = 'all';
}
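
The state models live in a plain-object datastore keyed by section id (described further below). A runnable sketch of how such a datastore might be populated and queried follows; the getStateModel helper and the 'cash-flow' key are illustrative, not our actual code.

```javascript
// CashFlowState mirrors the value object above; getStateModel is an
// illustrative helper, not part of our actual codebase.
function CashFlowState() {
  this.baseUrl = '/cash-flow';
  this.path = '';
  this.upStreamPath = '';
  this.optionalUrlParams = [];
  this.internalStateParams = ['startDate', 'endDate', 'userAccountIds'];
  this.userAccountIds = 'all';
}

var stateDatastore = {};

// Lazily create and cache the state model for a section so it
// persists in memory for the duration of the session.
function getStateModel(sectionId, Ctor) {
  if (!stateDatastore[sectionId]) {
    stateDatastore[sectionId] = new Ctor();
  }
  return stateDatastore[sectionId];
}

var cashFlowState = getStateModel('cash-flow', CashFlowState);
cashFlowState.path = 'income';

// The same object comes back on later lookups, so internal state
// survives navigating away from and back to the section.
console.log(getStateModel('cash-flow', CashFlowState).path); // 'income'
```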

The three aspects of our state model are:

  • Url Path – Comprised of the following properties:
    • this.baseUrl – Used to establish the base hash fragment that corresponds to the top-level sections of our Backbone application.
    • this.path – This is set by router.processRouteParams() when the url address bar changes.
    • this.upStreamPath – Used to give context to sub-views in relation to this.path.
    • this.optionalUrlParams – An array of names that correspond to the queryString variables a section expects. As indicated by the name, these values are optional and are only defined at runtime as properties on the state model when there is a corresponding query string.
    • this.internalStateParams – An array of property names which correspond to the state values being tracked internally in our Backbone application.

The state model’s role is to be the source of truth for any changes to a section’s state settings, which can come either from a change to the url address bar or from within our Backbone application.

Other System Actors

There are three other main actors that make this whole system work:

  • processRouteParams() in the router – This method takes changes that were made to the url address bar and updates the section’s state model. The updated state model is then passed into the corresponding section and, if appropriate, is passed downstream into the section’s child sub-sections and sub-views.
  • State datastore – Our state models are kept and referenced in a plain js object, just like a dictionary. We use an ID convention for the property name in the datastore to reference the state model. As an example, the state model for our Portfolio section would be referenced from the state datastore as datasource['portfolio'].
  • Backbone.View extensions – Methods added to the prototype provide the following state management functionalities:
    • Backbone.View.prototype.updateView – A public method used to provide a consistent way to send state changes to the view.
    • Backbone.View.prototype.trackViewState – Responsible for storing changes in the views and synchronizing changes between the state of the application and the url address bar.
    • Backbone.View.prototype.saveInternalState – Saves view changes that are meant only for internal storage, which is used as part of the state restoration when the section is viewed.
    • Backbone.View.prototype.pathableSubViews – An array used to register this section’s sub-sections and sub-views that can react to changes to url paths.  Each element in the array is an object with the following properties:
      • nodeName – A string representing the node in the path for the corresponding sub-section or sub-view
      • subView – The Backbone sub-section or sub-view
      • Example – this.pathableSubViews[0] = {nodeName: 'subView1', subView: SubView}
      • So when the path string is 'subView1/subSubView1', we can see that 'subView1' has a mapping to SubView
    • Backbone.View.prototype.processPathForSubView – Clones the state model, removes the 1st element of the this.path string and places it in this.upStreamPath.
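
As a rough, framework-free sketch of how a couple of these extensions might fit together (the method bodies below are illustrative guesses at the shape of the idea, not our actual Backbone.View.prototype code):

```javascript
// Illustrative stand-in for the Backbone.View.prototype extensions.
var viewExtensions = {
  // Single, consistent entry point for sending state changes to a view.
  updateView: function (stateModel) {
    this.stateModel = stateModel;
    this.trackViewState();
    if (this.render) {
      this.render();
    }
  },
  // Record the internal-only values named by internalStateParams so
  // the view's state can be restored when the section is viewed again.
  trackViewState: function () {
    var self = this;
    this.savedState = {};
    (this.stateModel.internalStateParams || []).forEach(function (name) {
      self.savedState[name] = self.stateModel[name];
    });
  }
};

var view = Object.assign({}, viewExtensions);
view.updateView({internalStateParams: ['startDate'], startDate: '20140101'});
console.log(view.savedState.startDate); // '20140101'
```

Because every view funnels state changes through one method, keeping the application and the url address bar in sync becomes a single, centralized concern.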

With this system in place, we now have a highly scalable way to manage state amongst our Backbone views. It’s still a relatively young piece of technology for us, so we welcome your thoughts and suggestions.