Design for Mobile First

A mobile site is required; it is not a nice-to-have. Mobile Internet usage will be more frequent than desktop Internet usage in the near future, so it’s time to stop adapting desktop websites to fit on mobile devices. We’ve seen the rise of responsive layouts, and while they offer an elegant solution for resizing components for varying screen size, they do little to address the true differences between mobile and desktop usage. Both types of sites need to be designed, but from now on we start with mobile.

Mobile devices impose a number of constraints that do not exist on the desktop. They have smaller screens and slower, less reliable connections (for now). Input is touch and gesture based. Mobile users also reach for the Internet in more varied contexts than desktop users: they might be waiting for a restaurant table, watching television, or even multi-tasking alongside their laptop, so they often have less attention to spare. Most importantly, there is a growing number of users who access the Internet almost exclusively via a mobile device. Design for these constraints first, and a desktop version will follow much more easily than trying to squeeze a desktop version onto mobile.

By designing for mobile usage, we force ourselves to be succinct and simple. These constraints will not make our stakeholders happy, but abiding by them allows us to understand the priorities and hierarchy of the site much faster. Stakeholders are more willing to trim the unnecessary and compress their content when crafting a mobile site. Secondary content can be placed on separate screens, making the primary focus of each screen much clearer. When combined with clear and concise content, our users will understand what we want to show them in record time.

If our users can accomplish their tasks on a mobile site, our desktop site will be even better. We now have content that is quick and obvious, an interaction flow that takes little time and attention, and an obvious hierarchy of elements on each screen. Our site is lightweight enough for slow connections, and optimized for distracted users with short attention spans. We have clear calls to action, and large, easy-to-target controls. In theory, both usability and conversions should be better than ever.

Our mobile-optimized site works pretty well on a desktop, but we still need to take advantage of the platform. Fonts and controls sized for touch will likely appear friendlier and easier to use, but on a large screen they are so enormous that users must move their cursors much farther to navigate our site. We also aren’t taking advantage of the extra real estate, screen resolution, computing power, more reliable connection, or the precision and hover abilities of the mouse cursor.

When we repurpose our site for desktop usage, we can expand on any content that required trimming, but if our site already converts well, adding more content may only be a distraction. The main things to do when repurposing a mobile-optimized site for desktop use are to ensure gesture-based interactions are replaced with mouse-friendly alternatives, and to add a little wow factor through animations, transitions, hover states, and higher resolution images that would be too heavy for a mobile site. We may also want to flatten some of the architecture to bring secondary content into sidebars, and provide a larger view of navigation. Just ensure that these additions don’t interfere with the primary tasks the users want to accomplish.

The Evolution of a Startup: Web Technologies

This is a detailed post describing Personal Capital’s transition from Flex to Backbone.js+RequireJS, including the reasoning behind it and the experience of doing it.

By now, it’s clear to anyone who’s actively involved in the web space, including consumers, who’s come out on top in the Flash vs Rest of the World showdown. So, as Adobe transitions into marketing some HTML animation apps after ditching Flex and abandoning Flash Player for mobile browsers, it might be interesting to take a look at why on earth our web product is currently Flex based, and why it won’t be much longer.

Use it or lose it – your market share that is

The years between 2009 and 2012 saw at least two dramatic shifts in the popular technology we deal with day-to-day. That is not all that unusual for technology in general, but in front-end web development the change has been fast and huge. The two big shifts I’m talking about are the decline of Internet Explorer on the one hand and of Flash on the other.

Ironically, the same folks who are displacing both technologies are also currently trading punches: Apple and Google.

Apple got real busy selling iOS devices and banning Flash from them while telling the world, every day, that they were doing it. Apple was also hard at work gaining an extra 10% PC market share with OS X which doesn’t run Internet Explorer at all. In terms of consumer perception, I’d say it was all Apple. But really, in terms of enabling this new generation of technology, Google’s Chrome and V8 teams deserve 99% of the credit.

For your consideration please:

That’s a pretty phenomenal eating of Microsoft’s lunch as Chrome goes from 0 to 34% and IE goes from 69 to 33% (and thankfully 17% of that to IE9). That leaves a big fat 2% for Firefox and Safari to play with. So there’s probably a million reasons why that played out the way it did, but in my mind, the single biggest factor that drove this current round of web innovation is the javascript performance war that Google’s V8 engine started.

Back in September of 2010, when we first started working on the Personal Capital web product you see today, the shiny new IE8 clocked in at 4989.7ms running the SunSpider javascript performance benchmark. This morning, Chrome on my Macbook Air clocked in at a respectable 243ms. The slope of the line on that graph is steeper than the financial cliff we stumbled off of in 2008. It’s the reason we can have smooth animations in javascript and data-tables that sort themselves in milliseconds. Chrome’s speed and fidelity are the reason we’re even talking about this at all, not to mention all the work the Chrome team has put in along with others to get HTML5 and CSS3 out the door. That’s huge. As Javascript developers in 2012 we’ve got great performance and a blossoming feature set.

In 2010 we had neither, which is the void Adobe Flash Player filled. Although I can’t find updated numbers for Flash Player penetration today (surprise), the 2010 Flash Player 10.1 had 87.4% (more than IE) of the US browser market covered with a powerful VM that could execute and render at high fidelity at the same time. That March, Adobe also released Flex 4, a great framework with a lot of thought put into separation of concerns and mature patterns like state management, complex data collections and intelligent view (re)rendering. The icing, though, was the fourth generation of micro-frameworks that sat alongside it. Frameworks like Swiz and Robotlegs made AOP and IoC first-class citizens in the Flash/Flex world and eliminated hundreds of lines of boilerplate and decoupled code in the process.

That’s why our web product is written in Flex. It allowed us to create a highly interactive browser-based product quickly, at high quality, to our design specifications and deliver it to almost everyone while building it on a solid software architecture. Until Adobe ditched it.

You know, one thing that Flex never really had going for it was a great open source developer community, and I’m not quite sure why that is. We had plenty of good examples, from python to ruby to even the jQuery plugin ecosystem. So when it no longer made sense financially for Adobe to continue to drive Flex, there wasn’t enough of a community to catch its fall, or to prevent it in the first place.

So, in 2012, Flex has been dumped unceremoniously into the Apache Foundation’s lap and is being valiantly lipo-sucked by the die-hards in the Spoon Project. Flash Player has been relegated to a video streaming and gaming platform that is currently trying to integrate OpenGL. Internet Explorer, for its part, might make a comeback, as IE10 purports to finally be on par with modern browsers both in terms of performance and a standardized feature set, but I doubt Google or Mozilla is interested in sitting around and letting that happen.

The takeaway here is simple, and it’s something every developer knows in their heart: you’re never done until you have no more customers. Microsoft and Adobe thought they were done, and they both used their dominance to excuse themselves from continually delighting their customers, so Google and Apple came in and did it for them. And they did a damn good job.

But enough of the history lesson! Let’s dive into what we’re actually doing now with Javascript and HTML5 and where we’d like to go in the future.

Our Javascript App Design Refresh

We started converting parts of our web application from Flex to Javascript around September of 2011, first as a proof of concept, then in production, but on a simple jQuery-enhanced, prototype-based architecture. At the same time, we were doing the research necessary to completely rebuild our entire development stack from scratch. Not only were we looking to maintain use of architectural patterns like dependency injection, event mediation and data binding, but we also needed to figure out how we were going to test, package and ship our software.

Here’s the current setup:

Application Framework

Backbone.js provides just enough framework to be useful but not so much that we can’t cultivate our architecture in the way we see fit. We take advantage of the router, propagated model/collection updates and simplified view setup. We’ve extended collections to do smart merges from our API backend, implemented client-side data caching for data injection (think dependency injection but with data) and have a set of rules to determine how we break down designs into individual view instances. We compared all the frameworks currently listed on TodoMVC and were impressed with how the platform is evolving, but it came down to Ember and Backbone at the time. We like the Backbone community and the learning curve although we’re aware we’ll eventually experience some growing pains. We’re closely following projects like Aura, Marionette, Thorax, and Chaplin. A more detailed write-up of our comparison here warrants another post and we are currently also experimenting with AngularJS for some more advanced UI components.
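The “smart merge” idea can be sketched in plain JavaScript (in our code this lives in a Backbone.Collection subclass; the function and field names below are illustrative): instead of resetting the collection on every API response, records are updated in place by id, so views bound to existing models stay attached.

```javascript
// Sketch of a smart merge: update existing records in place by id,
// add genuinely new ones, rather than throwing everything away.
function mergeById(models, apiRows) {
	var byId = {};
	models.forEach(function (m) { byId[m.id] = m; });

	apiRows.forEach(function (row) {
		var existing = byId[row.id];
		if (existing) {
			// update in place; a Backbone model would fire change events here
			Object.keys(row).forEach(function (k) { existing[k] = row[k]; });
		} else {
			models.push(row); // a Backbone collection would fire an add event
		}
	});
	return models;
}

var accounts = [{ id: 1, balance: 100 }];
mergeById(accounts, [{ id: 1, balance: 150 }, { id: 2, balance: 75 }]);
// accounts → [{ id: 1, balance: 150 }, { id: 2, balance: 75 }]
```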

Architectural Framework

RequireJS is not just a script loader; it is really dependency injection, and it gives you Node.js-ish module encapsulation in the browser. Dependency injection matters because it avoids singletons and long-winded model-locator syntax, isolates units under test, and promotes greater code reuse through loose coupling. In our opinion the RequireJS project has the best momentum of the alternatives we evaluated, and their shim support iced the cake. Read more about it here.

View Templating & Layout

Handlebars is the clear favorite here. We started out using Twitter’s Hogan implementation of Mustache, but found Handlebars’ extra bit of flexibility worthwhile for people experienced in developing complicated UIs. Handlebars’ registerHelper mechanism is also a much cleaner way to promote reuse of view transforms than the function mechanism in Mustache proper. And given that we aren’t going with a framework that enforces a view language, like AngularJS or Knockout, there really isn’t anything better for dead-simple view definition.

Visual Design & Styling

Compass (and SASS) was chosen because of our desire to empower our design team more in the actual implementation of products; our goal is to provide the HTML structure and JS behavior and let the more opinionated people perfect the pixels. Given the broad library of mixins that generate cross-browser CSS, and the fact that SASS has no significant disadvantages compared to LESS or Stylus, Compass was a natural choice for us. Currently, developers use Scout for compilation, but this will eventually be integrated into our build process.

Unit Testing

SinonJS with Chai extensions enables simple and powerful behavior expression. If you’re used to BDD testing in Ruby, the patterns here will be familiar. It’s a library that takes full advantage of the language it is written in, as well as the experience of other communities in evolving test frameworks. You get to write easy-to-read unit tests for almost anything without much pain. Seriously, if you haven’t been writing unit tests up until now, go check it out.

Build Process

Hudson is our default continuous integration server because we were already using it. While there are other choices out there, it’s got a great community of plugins and it’s quite impressive how well it’s held up building, testing and deploying everything from ActiveX plugins, WAR files and Flex projects to our Javascript applications. We use the RequireJS optimizer to concatenate and uglify our applications prior to release.
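For reference, a minimal r.js build profile looks something like this (the paths and module names are illustrative, not our actual build file):

```javascript
// build.js: run with `node r.js -o build.js`
({
	baseUrl: 'scripts',
	mainConfigFile: 'scripts/main.js', // reuse the runtime require.config
	name: 'main',                      // the application entry module
	out: 'build/main.min.js',          // concatenated + uglified output
	optimize: 'uglify'                 // the default minifier
})
```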

Graphing

Raphaël could use some documentation love, but that doesn’t take away from its excellence as a cross-browser vector graphics library. All the HTML graphs you see in our site are custom Raphaël plugins that are really close to the heart of our value proposition. We want to give you the best insight into your finances possible, and oftentimes that manifests itself as highly interactive, highly custom visualizations that we hope bring you happiness and prosperity. We also seriously evaluated Highcharts and the graphing extension to Raphaël, but ultimately our passion for numbers (and a great product team that can manipulate them) drove us to pick something productive yet 100% customizable. [TODO: link to Justin’s Raphael post here]

A Quick Note About Debugging

The visual debugger in Flash Builder was the single biggest selling point for Flash Builder, but it required a debug version of Flash Player to be installed and Flash Builder (Eclipse) to be running. Yikes.

Now I want to take a moment to wholeheartedly thank the Firebug team for pioneering the in-browser debugging console back in 2006 and doing it right. These days the Chrome Developer Tools and Safari Web Inspector are top notch and even IE9+ has some semblance of a useful developer console which allows breakpointing, runtime manipulation and analysis AND is deployed on customer machines all without having to boot up a separate program. We’re even able to enable rendering hints in webkit browsers. This is one area in which I feel Javascript development is clearly superior to any other technology I’ve worked with.

Functional Testing

Unfortunately, we haven’t nailed this down yet and both our QA team and developers are still doing evaluations. If you have suggestions we’d love to get some in the comments below.

In Summary

You can see that our philosophical approach to all these areas is “just enough is enough” without having to buy in to an opinionated, vendor-locked or overly complex piece of technology. We’re staying low-level so that we can build out a coherent platform that suits our priorities and not spend a lot of time fighting frameworks. And there’s actually a very good reason for that.

Enough with the boilerplate!

So, you get it, we’re excited about what we can do and how we can do it in HTML and javascript. But here’s some cold hard truth: we lost important architectural patterns when making the move from Flex 4 and Swiz to Backbone. Namely:

  1. Data-binding
  2. An intelligent view component lifecycle
  3. Declarative views
  4. Event mediation

To get these back you either write a lot of boilerplate or use a more abstracted library on top of Backbone (which is fine). A couple of the newer frameworks attempt to implement items 1-3. Ember is actually designed around them, and it feels like AngularJS and Knockout are in some part inspired by these features of Flex, but they are all pretty awkward in their own way. Ember 1.0.PRE explicitly supports a declarative view hierarchy with {{outlets}}, and its Objective-C-style first-responder system is nice, but writing your own bindable EmberHandlebars helpers needs to be streamlined, as does the syntax for declaring view actions. The overly complex interplay between the binding mechanisms and the models, collections and controllers makes them time-consuming to debug when they don’t play nice. The router is actually nice to write for, though the heavy reliance on string literals for instance lookups, and even for property sets and gets (an unnecessary recurring theme in JS frameworks), makes me sad because it’s brittle. Angular, in its own way, is a lovely piece of engineering, but wiring that much functionality into the DOM is a bit adventurous and opaque, and the project just wasn’t far enough along when we first started rewriting our apps. Knockout is in the same family.
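To make the data-binding point concrete, here is roughly what hand-rolled one-way binding amounts to in plain JavaScript (a sketch, not our actual code): every bound property needs its own explicitly wired listener, which is exactly the boilerplate that Flex-style binding eliminated.

```javascript
// A minimal observable: set() notifies any listeners registered per key.
function observable(attrs) {
	var listeners = {};
	return {
		get: function (key) { return attrs[key]; },
		set: function (key, value) {
			attrs[key] = value;
			(listeners[key] || []).forEach(function (fn) { fn(value); });
		},
		onChange: function (key, fn) {
			(listeners[key] = listeners[key] || []).push(fn);
		}
	};
}

// The "binding": one hand-written listener per bound property.
var model = observable({ balance: 0 });
var rendered = [];
model.onChange('balance', function (v) { rendered.push('balance: ' + v); });

model.set('balance', 42);
// rendered → ['balance: 42']
```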

We handle event mediation simply through an application event bus that enables modules like metrics and logging to peek at what’s happening in the running application without their code being strewn everywhere. It also allows modules to request the triggering of an action without needing to care about who actually performs it.
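The bus itself is tiny; in our app it is just an object extended with Backbone.Events, but the pattern can be sketched in plain JavaScript (the module and event names below are illustrative):

```javascript
// A minimal publish/subscribe bus: modules talk to the bus, never to each other.
function createEventBus() {
	var handlers = {};
	return {
		on: function (event, fn) {
			(handlers[event] = handlers[event] || []).push(fn);
		},
		trigger: function (event, payload) {
			(handlers[event] || []).forEach(function (fn) { fn(payload); });
		}
	};
}

var bus = createEventBus();
var log = [];

// Metrics and logging peek at app events without their code being strewn everywhere.
bus.on('user:login', function (user) { log.push('metrics saw ' + user); });
bus.on('user:login', function (user) { log.push('logger saw ' + user); });

bus.trigger('user:login', 'alice');
// log → ['metrics saw alice', 'logger saw alice']
```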

Regardless, we’ve been constantly impressed by all the thought and sweat that’s gone into each of the technologies we’ve talked about, and as part of our maturation as a team and a company, we’d like to start contributing back to this wonderful community. We’re currently pushing to get some of this foundational work on our roadmap. In the meantime, I guess we’ll keep writing our own boilerplate.

But wait! There’s more!

Projects We’re Super Excited About

  1. Meteor
  2. Derby
  3. AngularJS

If you’re excited about any real-time or other technologies that help create a fantastic user experience, please let us know in the comments below; we like great technology and excitement.

Tech Shoutouts

Finally, we’d like to take some space to say thank you to a few of the myriad other open-source projects and utilities that have allowed us to move as quickly as we have and helped us fall in love with the community. In case you haven’t heard of one of these, check them out; they might help you too. In no particular order: livestamp.js, lodash, slimScroll, momentjs, require-handlebars-plugin, express & jade, detectmobilebrowser, and of course, the jquery core team for doing such a good job pushing the edge for so long.

Bundling Front-end Resources with RequireJS

why

  • faster load times because of fewer HTTP calls and less data over the wire.
  • effective invalidation of the browser cache when a newer version of a resource is available.
  • easier management of these resources via a CDN.

what

  • combines and minifies javascript
  • combines and minifies css
  • revises file names of js/css files to support heavy browser caching
  • updates the html to reference these new hyper-optimized js/css files

how

Let’s start with combining javascript. This could be as trivial as writing a shell script to concatenate all of your script files into a single file. That works great for monolithic applications, but for a large-scale javascript application, where a significant amount of data manipulation and display is done in the browser, a modular architecture helps greatly.

Modules are an integral piece of any robust application’s architecture and typically help keep the units of code for a project cleanly separated and organized. They need to be highly decoupled and represent distinct pieces of functionality, which facilitates easier maintainability and makes them easily replaceable without affecting the entire system.

“The secret to building large apps is never build large apps. Break your applications into small pieces. Then, assemble those testable, bite-sized pieces into your big application.” – Justin Meyer, author of JavaScriptMVC

However, there are two main problems in writing modular javascript:

  • No means of importing modules of code in a clean and organized manner.
    • Developers at present use variations of the module or object-literal patterns to write modules. The modules are exported as namespaces of a single global object, where it is possible to incur name collisions as the application and team grow.
  • No clean way to handle dependency management.
    • Developers use the exported namespace to include the modules, and then load the corresponding script files in the order they are included. This becomes really difficult as the application grows, particularly when scripts start to have overlapping and nested dependencies.

There are various script loaders that help out, but there are no native means of solving the two problems in a holistic manner. This is where the Asynchronous Module Definition (AMD) format shines. It has greatly helped in architecting our application (the benefits of which deserve a separate post), but the following excerpt from Addy Osmani should summarize its usefulness well.

The AMD module format itself is a proposal for defining modules where both the module and dependencies can be asynchronously loaded. It has a number of distinct advantages including being both asynchronous and highly flexible by nature which removes the tight coupling one might commonly find between code and module identity. Many developers enjoy using it and one could consider it a reliable stepping stone towards the module system proposed for ES Harmony. – Addy Osmani

The following example demonstrates how we address the first problem with AMD:

define(
	'myModule' /* module id, optional */,
	['dog', 'actions'] /* other module dependencies, if any */,
	function( dog, actions ){ /* factory for instantiating the module or object */
		// create your module here
		var myModule = {
			dogActions: function(){
				console.log('dog actions');
			}
		};
		// return a value that defines the module export
		return myModule;
	}
);

We solved the second problem with RequireJS, a popular script loader that we felt was the natural fit for AMD. It uses the AMD module specification for defining and requiring modules, and it loads those modules via a built-in script loader. It can also load non-script dependencies such as text files, an ability we use to load our Handlebars templates.

Let’s extend the above example to demonstrate how well AMD and RequireJS complement each other: your code stays modular, yet is brought together without you worrying about module dependencies or namespace conflicts.

<script src="scripts/require.js" data-main="scripts/main"></script>

// In scripts/main.js: your application bootstrap
require(
	['dog', 'actions', 'hbs!templates/dogActions'],
	function( Dog, Actions, DogActionsTemplate ){
		document.body.innerHTML = DogActionsTemplate({
			dog: Dog,
			actions: Actions
		});
	}
);

// In scripts/dog.js
define([], function(){
	var dog = {
		name: 'lula'
	};
	return dog;
});

// In scripts/actions.js
define([], function(){
	var actions = ['fetchBall'];
	return actions;
});

// In scripts/templates/dogActions.html
<h1>{{dog.name}}</h1>
<ul>
	{{#each actions}}
	<li>{{this}}</li>
	{{/each}}
</ul>

In addition, RequireJS provides an optimizer tool that we use to build our entire javascript into a single file or two for production. And that is how we address combining javascript.

Combining css was easy. We use sass to write our style sheets; as the name itself suggests, they are awesome, and they were introduced to me by an awesome fellow developer, Justin. We use Scout or similar tools to output a single css file from our sass files, which acts as the stylesheet for the entire application.

Now that we have successfully combined our javascript and css files, let’s compress them. We mentioned earlier how require’s optimizer tool helps out. Currently require provides two minification options: uglifyjs (the default) and Closure Compiler. We use uglifyjs, and have it configured to “beautify” in our dev environment so that we can still debug effectively. Source maps seem to be an interesting concept in this arena that would help you debug “production” scripts.

Now that we have combined and minified our resources, our next step is to add a build fingerprint to the resource file names so that the browser treats each new build as a new resource and invalidates its cached version.

We have used a bunch of maven-ant scripts to achieve this. More on this in my next post.

challenges

One of the major challenges with requireJS is using non-AMD-compatible scripts, mainly third-party libraries. When we initially started with requireJS, we simply got AMD versions of the third-party libraries we needed, or modified the libraries into AMD format. But we quickly realized that this was not a feasible approach; it became a major pain point whenever we had to add a new library. Tim Branyen had a good blog post on how to overcome this problem, which was later incorporated into requireJS 2.0. With the introduction of shim, the problem is easily addressed in the config file as follows:

require.config({
	shim: {
		'backbone': {
			deps: ['underscore', 'jquery'],
			exports: 'Backbone'
		},
		'underscore': {
			exports: '_'
		},
		'raphael': {
			exports: 'Raphael'
		}
	},
	hbs: {
		templateExtension: 'html',
		disableI18n: true,
		helperPathCallback: function(name){
			return 'templates/helpers/' + name;
		}
	},
	paths: {
		jquery: 'libs/vendor/jquery-1.7.2',
		underscore: 'libs/vendor/underscore',
		Handlebars: 'libs/vendor/handlebars_hbs',
		raphael: 'libs/vendor/raphael',
		backbone: 'libs/vendor/backbone',
		hbs: 'libs/vendor/hbs'
	}
});


reads

http://addyosmani.github.com/backbone-fundamentals
https://github.com/h5bp/
http://www.nczonline.net/blog/2010/07/06/data-uris-make-css-sprites-obsolete/
http://addyosmani.com/blog/yeoman-at-your-service/
http://www.youtube.com/watch?v=Mk-tFn2Ix6g
http://www.html5rocks.com/en/tutorials/developertools/sourcemaps/