Six Tips on Defining WebService APIs

Frameworks such as PhoneGap or Sencha created a great promise of rapid development of fast and impressive apps across different platforms by focusing on client code reuse. We want to share a more holistic architecture across the server and client that lets you go native on each client platform and still take advantage of code reuse.

The idea is simple: if you have control over both the server and the client design, you should not settle for a solution that only optimizes one.  If you “look at the whole board” and optimize the server for the clients, you won’t need to sacrifice the user experience for the sake of client code reuse.

Primary design goals for Personal Capital include:

  • Perfect balance between a feature-rich application and an intuitive navigation scheme;
  • Creating the absolute best user experience; and
  • 100% accurate data presentation.

With these design goals, we’re not satisfied with the defaults of each client platform, let alone defaults of shared code across multiple client platforms! At Personal Capital we have customized every component and every interaction in the pursuit of user happiness and achieving UX nirvana: a wowed and happy user each time that she uses our service.

Our Architecture

Our approach is simple: Go Native but optimize the server’s Web Services APIs for the client by following six simple tips to gain the time and flexibility we need to make our app stand out, on each platform, in its own way. We started experimenting with these rules two years ago when we created our Second Generation APIs, and optimized them as much as possible for the clients. These six principles allow us to streamline design and QA, and free up our client developers to spend the majority of their time on what they do best: making amazing user experiences.

Tip #1 Let Your Client Developers Write the Server API Definitions

API definitions are normally dictated by the server engineers; REST enthusiasm and typical server-driven development paradigms have made it uncommon for client developers to be more than tangentially involved in API definition before coding starts. We flipped this model on its head and relied on our client developers to drive the API definition. As a result, the server became much more attuned to how the clients expect the data to be structured, and we minimized re-work later by maximizing communication early on.

Tip #2 Let Your Server Developers Take As Much Logic Out of Client Code as Possible

Avoid replicating code across the many clients that you support: move as much logic as possible to the server. Having business logic in the client code is analogous to hard-coding a configuration value. The pivotal point for us was realizing that it’s better to have simple, thin, and pretty clients than complex, thick ones full of duplicated and hard-coded business logic. Smart clients are a burden. Now every discussion between client and server developers centers on one question: how can the server make the client’s code simpler?

Tip #3 Don’t Be Afraid of Specialized APIs

Let’s just say this: it’s OK to serve the same data from more than one API. Why? Because when clients get data in the format they need, directly from the server, they can get it on screen faster by skipping complex transformations. And when the data format changes on the server, those changes are transparent to the clients. It also means a given screen doesn’t need to care how the data is used elsewhere in the client, making your client code more loosely coupled and modular.
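
For example, the same underlying transaction data might be exposed through two endpoints, each shaped for the client that consumes it. The endpoint paths and field names below are invented for illustration, not our actual API:

// Hypothetical response of GET /api/transactions/summary -- what a phone screen needs.
var summaryResponse = {
     accountId: 123,
     transactionCount: 42,
     totalAmount: '1,234.56'
};

// Hypothetical response of GET /api/transactions/detail -- what tablet and web screens need.
// The rows arrive already shaped for direct rendering; no client-side transformation required.
var detailResponse = {
     accountId: 123,
     transactions: [
          { date: '2012-11-01', description: 'Coffee Shop', amount: '3.50', category: 'Dining' },
          { date: '2012-11-02', description: 'Gas Station', amount: '41.25', category: 'Auto' }
     ]
};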

Tip #4 Let the Server Enforce Uniformity Across Clients

When it comes time to support multiple clients, the more you push responsibilities like rounding of amounts, calculation, and string formatting up to the server, the more time you will save your development and QA teams. You won’t need to spend time re-writing the same formatting and calculation rules on each client and fixing the same bugs on each platform. If it’s correct coming from the server, it will be correct everywhere.
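
As a hypothetical illustration (this payload is invented, not our actual API), the server can send display-ready strings next to raw values so every client renders them identically:

// Hypothetical account payload: the server owns rounding and formatting,
// so the iOS, Android and web clients all display exactly the same strings.
var accountResponse = {
     balance: 10250.4567,             // raw value, available for any client-side math
     balanceDisplay: '$10,250.46',    // rounded and formatted once, on the server
     oneDayChange: -0.004312,
     oneDayChangeDisplay: '-0.43%'
};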

Tip #5 No Workflow State Machines on the Clients

Likewise, when dealing with multiple clients, the more you push the complex state machines that deal with business logic into the server, the faster you will be able to iterate. For example, instead of having logic on the client that says “if user is in state x, and this is the first time that she is attempting to do y, show message z,” just tell the client to show message z. All that logic can be encapsulated on the server, and the API can just tell the client what to do. The time to market gained between each client writing and testing a complex flow versus each client simply responding to server flags is huge. It’s the difference between crazy nested ifs and a simple switch statement. Keep the complex state machines tucked safely in an API. Let your clients focus on display logic, not on managing business state.
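
A minimal sketch of that difference, with an invented flag name and made-up workflow rules:

// Hypothetical API response: the server has already evaluated the workflow rules.
var response = { promptToShow: 'z' };

function showMessage(id) { console.log('showing message', id); }

// Without the server's help, every client re-implements the state machine:
//   if (user.state === 'x' && user.isFirstAttemptAt('y') && ...) { showMessage('z'); }
// With the server's help, the client simply responds to a flag:
switch (response.promptToShow) {
     case 'z':
          showMessage('z');
          break;
     default:
          break; // nothing to show
}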

Tip #6 Fast, Rich and Flexible APIs

If you follow all these tips, the payoff is huge: shorter time to market, simpler client code, fewer bugs. But if you want to pull this off, you must:

  • Make calling an API fast, really fast. The round-trip time of a request has to be as short as possible. This means server-side caching is a must.
  • Create rich APIs that can deliver a lot of data in one call; this is especially important for mobile applications where the network overhead is much greater.
  • Add enough controls in the API definition to allow each client to ask for the right amount of data for its flow. For example, the iPhone may not show transaction details and needs only the summary, while the iPad and web apps do show that data. Give the clients control over how much data they request (see the sketch after this list).
  • Gzip your responses as much as possible. The performance lift you get on mobile and web apps from this simple change is amazing!
  • Client-side caching of API responses is just as important and greatly reduces reliance on network stability.
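
Here is a rough sketch of the third point, with a hypothetical endpoint and “include” parameter (not our actual API): one rich endpoint that each client tunes to its own screen.

// Hypothetical: the same rich endpoint, tuned per client via an "include" parameter.
var phoneRequest  = '/api/accounts?include=summary';
var tabletRequest = '/api/accounts?include=summary,transactions,holdings';

function fetchAccounts(url, onSuccess) {
     var xhr = new XMLHttpRequest();
     xhr.open('GET', url, true);
     xhr.onload = function() {
          if (xhr.status === 200) {
               // Gzipped responses are decompressed transparently by the browser.
               onSuccess(JSON.parse(xhr.responseText));
          }
     };
     xhr.send();
}

fetchAccounts(phoneRequest, function(accounts) {
     console.log(accounts); // a real client would also cache this locally
});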

Go Native!

With loosely coupled client modules that receive pre-formatted data, and client developers that don’t need to implement tons of complex business logic, you can focus your client developers on what they do best. You’ve successfully freed up enough cycles that you can afford the extra spit and polish that will make your app stand out from the rest.

Last month we held a Meetup where we discussed these principles and how this architecture lets you shrink your client code base by sharing server code rather than client code. You can watch the video here.

JS Hacks: Accessing Variables in a Hidden Scope

Modular application structure is great for loosely coupling code to make it more reusable and more maintainable, always has been, always will be.

However, encapsulation has typically flown in the face of cross-cutting concerns like analytics and logging, which for any production web app is absolutely essential. There are a few JavaScript AOP type libraries out there (based on a quick Google search) but in JavaScript we don’t have a class definition lookup like in ActionScript or the forceful reflection APIs that allow us to bust open private scopes like in Ruby.

The idea here is we’d like to be able to track events that happen in our application, along with the associated detailed information, while littering our application code with metrics crap as little as possible. For example, I’d much rather see this:

Analytics.track('signUpComplete');

than:

Analytics.track('signUpComplete', { username: username, browser: 'Chrome 11', donkeys: isItADonkey() });

The point being that I don’t want my application code to care about what the metrics code needs. One solution is to create an accessor function that captures the scope you want to pull additional information from, and pass that function to the external module that needs the additional access.

In your application code:

var accessor = function(name){ try{ return eval(name); }catch(e){ /* log error */ return null; } };
Analytics.track('signUpComplete', accessor);

and in your other module’s code:

var eventsSpec = {
     'signUpComplete': {
          username: 'path.to.username'
          , browser: 'path.to.browser'
          , donkeys: 'isItADonkey()'
     }
};

var track = function(eventName, accessor){
    var capturedProperties = {};
    for(var key in eventsSpec[eventName]){
        capturedProperties[key] = accessor(eventsSpec[eventName][key]);
    }
    // fire off call to google analytics or whatever
};

This does mean that your metrics code is tightly coupled to your application code and the way it is structured, but there isn’t really any way around that: the data lives where it lives and you need to get to it. Your application code, on the other hand, is not coupled to your metrics code, which is the goal. Sweet!

Copy/paste the lines below into your browser’s console to see a full featured example of accessing properties, property chains and functions in the various closures:

var globalScope = 'this is global scope';

var globalScopeObject = {
     prop: 'dot accessor in global scope'
};

var globalFunction = function() {
     return 'global function call';
};

var wrapped = function(){
     var outerBlockScope = 'this is outer block scope';
     var outerBlockScopeObject = {
          prop: 'dot accessor in outer block scope'
     };

     var outerBlockFunction = function() {
          return 'outer block function call';
     };

     return function(name){
          var innerBlockScope = 'this is inner block scope';
          var innerBlockScopeObject = {
               prop: 'dot accessor in inner block scope'
          };
          var innerBlockFunction = function() {
               return 'inner block function call';
          };

          try{
               return eval(name);
          }catch(e){
               if(typeof console != 'undefined'){
                    console.log(e);
               }
               return null;
          };
     };
};

var test = wrapped();

console.log(test('globalScope'));
console.log(test('outerBlockScope'));
console.log(test('innerBlockScope'));
console.log(test('globalScopeObject.prop'));
console.log(test('outerBlockScopeObject.prop'));
console.log(test('innerBlockScopeObject.prop'));
console.log(test('globalFunction()'));
console.log(test('outerBlockFunction()'));
console.log(test('innerBlockFunction()'));

Try it out and let me know!

November Architecture Update


We have a simple engineering philosophy: our user experience has to be the absolute best, and our calculations 100% accurate, while everything else just needs to be “good enough” for “now.” This is why we can release high quality products early and fast. Behind the scenes, however, we are continuously refactoring our code base as “good enough” changes over time.

Our November release went live with a new HTML5 dashboard and account detail screens, as well as many interaction, performance, and refactoring upgrades for the web application as a whole. Here is how we did it:

Distributing Static Content

We now serve all web application assets packaged and minified from Amazon’s CloudFront. This helps us serve our static content (js/css/media) to end users with low latency and high data transfer speeds. It also helps us separate the concerns of static content (front end) and server webpages (back end), which reference the static content. This gives us the ability to change static content at any time without having to re-deploy our entire application. Faster iterations. Better products. You can read more about it here.

Responsive Design

One of the biggest variables in web interface engineering is the client’s screen size. Our new HTML5 dashboard uses CSS media queries to offer a responsive design, rendering a single column for common resolutions and two columns for wide resolutions. The wider layout makes efficient use of space and keeps interface elements accessible without the need to scroll. This gets us one step closer to providing that “financial picture at a glance” goal.

Financial Visualizations

These are the interface embellishments that bring our web application to life. We chose Raphael.js, a vector drawing API which makes programming SVG with javascript a pleasure. We took this customized approach, rather than an out-of-the-box charting solution, because at Personal Capital, the defaults just aren’t good enough. We work very hard to design the most effective charts we can, and even harder to make sure that when we release them they live up to customer expectations. New visualizations featuring custom algorithms can be seen on the new dashboard and account management screens.

Datagrids

This is the bread to our visualization butter. In one month, we switched over to a custom HTML5 implementation (from our previous Flex-based implementation), improving not just performance but the set of features as well. Searching, sorting, and editing are all upgraded, and we added a couple of new features like multi-row editing and tag sorting. Angular.js is the technology that allowed us to achieve these gains in such a short period of time. Angular makes re-rendering based on live updates a non-issue, as the markup itself clearly and concisely documents what it will express as the data changes.

Polymorphic Backbone Views

The Backbone framework has proven to be both scalable and fast. On the scalability side, the ability to extend and polymorph Backbone’s views helped tremendously in simplifying the code for Account Details. On the performance side, we were able to greatly reduce the number of unnecessary HTML redraws by writing custom functions that update specific DOM elements.
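
A minimal sketch of the pattern (the class names and template are invented for illustration, not our production code): a base detail view handles targeted updates, and subclasses specialize it per account type.

// Base view: renders once, then touches only the DOM nodes that actually changed.
var AccountDetailView = Backbone.View.extend({
     initialize: function() {
          this.model.on('change:balance', this.updateBalance, this);
     },
     render: function() {
          this.$el.html(this.template(this.model.toJSON()));
          return this;
     },
     updateBalance: function(model, balance) {
          // Update a single element instead of redrawing the whole view.
          this.$('.balance').text(balance);
     }
});

// A "polymorphic" specialization for a particular account type.
var InvestmentDetailView = AccountDetailView.extend({
     template: _.template('<h2><%= name %></h2><span class="balance"><%= balance %></span>')
});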

That’s it for the new web app updates this month. Stay tuned for the next round of web app improvements in January!

Distributing Static Content From CDN

why

  • Distribute content to end users with low latency and faster response times.
  • Ability to deploy new static content without having to re-deploy the entire application.

what

  • Identify a Content Distribution Network (CDN) solution that meets our needs
  • Setup origin server(s) to store the content that will be distributed via CDN.
  • Setup a distribution to register the origin server(s).
  • Reference all static content in the application with the distribution’s url.
  • Version static content to bust the cache both in the browser and CDN.

how

Before we get into implementation details, what do we mean by “static content”?  Here, we define static content as all the web resources that are not specific to a particular user’s session.  In our case, this includes css files, javascript files and all media files.

Identify CDN solution

Since most of our infrastructure is hosted in Amazon Web Services, the AWS CloudFront CDN was a logical default choice to try. Like most of the AWS application-level services, CloudFront tries to strike a reasonable 80/20 balance: it is low-cost, easy to operate, and covers most of the basic functionality that you want in an application service, but does not offer much in the way of high-end differentiating features.

Setup origin server(s) to store the content that will be distributed via CDN.

An origin server in CloudFront can be either a web server or, as in our case, an AWS S3 bucket. We have set up an origin for each of our environments (dev, qa, prod). We then have a Hudson job (build) for each of our environments. The Hudson job checks out the corresponding branch from our git repo for the static content; processes it as mentioned in this post; updates a version.xml file with the Hudson build number; zips the result; and copies it to an S3 bucket. This bucket, however, is not the CloudFront origin. It is more like a docking area to store all zipped build files from our various Hudson jobs. We have another Hudson job (deploy) that copies a given zipped build file to an origin bucket. More on that in a minute.

We have a different Hudson job for each environment because the static content could be processed differently based on an environment. For example: in our dev environment, we do not combine or minify our js files. In our qa environment, we combine but do not minify our js files. In prod, we combine and minify our js files.

Back to the Hudson deploy job mentioned above. This job takes two input params: the name of the zipped build file to be deployed and the environment to be deployed. It simply unzips the build file into a temporary directory and uses s3sync to upload the content to the appropriate S3 origin bucket for the given environment.  And from there, it is available for distribution via CloudFront.

In addition, in our dev environments, we use a Continuous Integration (CI) process, where our Hudson jobs send SNS messages to our web servers when a build is available. The web servers pull the static content build from the S3 staging bucket and then serve it directly from Apache. This allows for a more targeted integration test of the static content without bringing the CDN mechanisms into the mix.

Setup a CloudFront distribution to register the origin server(s).

We have a CloudFront distribution for each of our origins.  A CloudFront distribution associates a URL with one (or more) origin servers.  Creating the distribution is easy via the AWS Console.  In our distribution, we force HTTPS access – since our containing pages are served via https, we want to ensure the embedded static content is as well, to avoid browser security warnings.

CloudFront does allow you to associate a DNS CNAME with your distribution, so that you can associate your own subdomain (like static.personalcapital.com) with your CloudFront distribution.  This is more user-friendly than the default generated CloudFront domain names, which are like d1q4amq3lgzrzf.cloudfront.net. However, one gotcha is that CloudFront does not allow you to upload your own SSL certificate.   So, you have to choose between either having your own subdomain or having https – you can’t have both (some other CDN services do allow you to load your own SSL certs).   In our case, we chose https, and use the default cloudfront.net URL.

Reference all static content in the application with the distribution’s URL.

We have an environment config property that stores the corresponding CloudFront distribution’s URL. This property is available for all server-side webpages where it is referenced as follows:

 <link rel="stylesheet" type="text/css" href="<%=staticUrl%>/static/styles/css/main.css">

We then needed to make sure that all our static content references its resources via relative URLs to keep them independent of the distribution URL. For example, an image reference in main.css would be as follows:

 background: url('/static/img/dashboard/zeroState.png')

which resolves against the root of the URL that main.css was served from, i.e. the distribution URL. I would like to know if there is a better way to solve this.

All our javascript uses relative paths anyway because of “requirejs” so we did not have to make any changes there.

All other references to static resources were on the server side where they had the config property to correctly reference the static resource.

Version static content to bust the cache both in the browser and CDN.

All our static content has aggressive cache headers. Once a static resource is fetched by the browser from the server, all future requests for that resource will be served from the browser’s cache. This is great, but when a newer version of the resource becomes available on the server, the browser won’t know about it until its cache entry for that resource expires.

To prevent this, we use a common technique called URL fingerprinting, wherein we add a unique fingerprint to the filename of the resource and update all its references in the webpages (JSPs) with the new filename. The browser, as it renders the updated webpage, will then request the resource from the server, since the new name makes it look like an entirely new resource.
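
As a minimal illustration of the idea (our real pipeline is a Hudson job; the file paths and version source below are hypothetical), a Node.js build step might fingerprint a file and rewrite the reference to it:

// Hypothetical Node.js sketch of URL fingerprinting.
var fs = require('fs');
var path = require('path');

function fingerprint(filePath, version) {
     var ext = path.extname(filePath);                            // ".css"
     var versioned = filePath.replace(ext, '.' + version + ext);  // main.css -> main.1234.css
     fs.renameSync(filePath, versioned);
     return versioned;
}

// Rename the resource and rewrite the reference in a (hypothetical) page template.
var versioned = fingerprint('static/styles/css/main.css', process.env.BUILD_NUMBER || 'dev');
var page = fs.readFileSync('webapp/dashboard.jsp', 'utf8');
fs.writeFileSync('webapp/dashboard.jsp', page.replace('main.css', path.basename(versioned)));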

The Hudson build job mentioned above processes our static resources, versions them with the build number, and also stores the build number in version.xml. The version.xml file is then used by the application to retrieve the version number and pass it on to web pages at run-time. This achieves our second goal: keeping our static (front-end) development independent of our server (back-end) development. It is very powerful, as it gives us the ability to change our static content at any time and have it deployed to production without worrying about updating server webpages with the latest version number. Pretty neat, huh?

Versioning the resources also helped us a great deal with our CloudFront distribution. A CloudFront distribution behaves very similarly to the way a browser handles resource caching: it does not fetch a newer version of a resource from the origin server unless you invalidate the current resource in the distribution. Invalidation has to be done per resource, and it has a cost too. The CloudFront documentation offers more considerations regarding invalidation vs. versioning.

There is one other workaround you could use to force the CloudFront distribution to fetch content from its origin server: configure the distribution to consider query strings in resource URLs, then pass a unique query string along with the resource URL to force the distribution to fetch the resource from the origin server.

That is it!


The Evolution of a Startup: Web Technologies

This is a detailed post describing Personal Capital’s transition from Flex to Backbone.js + RequireJS, including the reasoning behind it and the experience of doing it.

By now, it’s clear to anyone who’s actively involved in the web space, including consumers, who’s come out on top in the Flash vs Rest of the World showdown. So, as Adobe transitions into marketing some HTML animation apps after ditching Flex and abandoning Flash Player for mobile browsers, it might be interesting to take a look at why on earth our web product is currently Flex based, and why it won’t be much longer.

Use it or lose it – your market share that is

The years between 2009 and 2012 marked at least two dramatic shifts in the popular technology we deal with day-to-day, which is not all that unusual for technology in general, but in terms of front-end web development the change has been fast and huge. The two big shifts I’m talking about are the decline of Internet Explorer on the one hand and of Flash on the other.

Ironically, the same folks that are relegating both technologies are now trading punches with each other: Apple and Google.

Apple got real busy selling iOS devices and banning Flash from them while telling the world, every day, that they were doing it. Apple was also hard at work gaining an extra 10% PC market share with OS X which doesn’t run Internet Explorer at all. In terms of consumer perception, I’d say it was all Apple. But really, in terms of enabling this new generation of technology, Google’s Chrome and V8 teams deserve 99% of the credit.

For your consideration, the browser market share shift from 2009 to 2012:

That’s a pretty phenomenal eating of Microsoft’s lunch as Chrome goes from 0 to 34% and IE goes from 69 to 33% (and thankfully 17% of that to IE9). That leaves a big fat 2% for Firefox and Safari to play with. So there’s probably a million reasons why that played out the way it did, but in my mind, the single biggest factor that drove this current round of web innovation is the javascript performance war that Google’s V8 engine started.

Back in September of 2010, when we first started working on the Personal Capital web product you see today, the shiny new IE8 clocked in at 4989.7ms running the SunSpider javascript performance benchmark. This morning, Chrome on my Macbook Air clocked in at a respectable 243ms. The slope of the line on that graph is steeper than the financial cliff we stumbled off of in 2008. It’s the reason we can have smooth animations in javascript and data-tables that sort themselves in milliseconds. Chrome’s speed and fidelity are the reason we’re even talking about this at all, not to mention all the work the Chrome team has put in along with others to get HTML5 and CSS3 out the door. That’s huge. As Javascript developers in 2012 we’ve got great performance and a blossoming feature set.

In 2010 we had neither; which is the void that Adobe Flash Player filled. Although I can’t find updated numbers for Flash Player penetration today (surprise), the 2010 Flash Player 10.1 had 87.4% (more than IE) of the US browser market covered with a powerful VM that could execute and render at high fidelity at the same time. That March, Adobe also released Flex 4, which is a great framework with a lot of thought put into separation of concerns and mature patterns like state management, complex data collections and intelligent view (re)rendering. The icing, though, was the fourth-generation micro-frameworks that sat alongside it. Frameworks like Swiz and Robotlegs made AOP and IoC first class citizens in the Flash/Flex world and eliminated hundreds of lines of boilerplate and decoupled code in the process.

That’s why our web product is written in Flex. It allowed us to create a highly interactive browser-based product quickly, at high quality, to our design specifications and deliver it to almost everyone while building it on a solid software architecture. Until Adobe ditched it.

You know, one thing that Flex never really had going for it was a great open source developer community, and I’m not quite sure why that is. We had plenty of good examples from python to ruby to even the jQuery plugin ecosystem. So when it didn’t make sense financially for Adobe to continue to drive Flex, there wasn’t enough of a community there to catch its fall and there wasn’t enough of one there to prevent it in the first place.

So, in 2012, Flex has been dumped unceremoniously into the Apache Foundation’s lap and is being valiantly lipo-sucked by the die-hards in the Spoon Project. Flash Player has been relegated to a video streaming and gaming platform that’s currently trying to integrate OpenGL. Internet Explorer, for its part, might make a comeback, as IE10 purports to finally be on par with modern browsers both in terms of performance and a standardized feature set, but I doubt Google or Mozilla is interested in sitting around and letting that happen.

The takeaway here is simple, and it’s something every developer knows in their heart: you’re never done until you have no more customers. Microsoft and Adobe thought they were done and used their dominance to excuse themselves from continually delighting their customers, so Google and Apple came in and did it for them. And they did a damn good job.

But enough of the history lesson! Let’s dive into what we’re actually doing now with Javascript and HTML5 and where we’d like to go in the future.

Our Javascript App Design Refresh

We started converting parts of our web application from Flex to Javascript around September of 2011, first as a proof of concept, then for production, though initially on a simple jQuery-enhanced, prototype-based architecture. At the same time, we were doing the research necessary to completely rebuild our entire development stack from scratch. Not only were we looking to maintain architectural patterns like dependency injection, event mediation and data binding, but we also needed to figure out how we were going to test, package and ship our software.

Here’s the current setup:

Application Framework

Backbone.js provides just enough framework to be useful but not so much that we can’t cultivate our architecture in the way we see fit. We take advantage of the router, propagated model/collection updates and simplified view setup. We’ve extended collections to do smart merges from our API backend, implemented client-side data caching for data injection (think dependency injection but with data) and have a set of rules to determine how we break down designs into individual view instances. We compared all the frameworks currently listed on TodoMVC and were impressed with how the platform is evolving, but it came down to Ember and Backbone at the time. We like the Backbone community and the learning curve although we’re aware we’ll eventually experience some growing pains. We’re closely following projects like Aura, Marionette, Thorax, and Chaplin. A more detailed write-up of our comparison here warrants another post and we are currently also experimenting with AngularJS for some more advanced UI components.
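
As a rough sketch of what one of those “smart merges” can look like (the method name is ours-for-illustration, not Backbone API): update models in place by id instead of resetting the whole collection, so views bound to existing models keep receiving granular change events.

var SmartCollection = Backbone.Collection.extend({
     // Merge a fresh API payload into the collection in place.
     mergeFetch: function(freshAttrs) {
          var incomingIds = {};
          _.each(freshAttrs, function(attrs) {
               incomingIds[attrs.id] = true;
               var existing = this.get(attrs.id);
               if (existing) {
                    existing.set(attrs);   // existing models fire "change" events for bound views
               } else {
                    this.add(attrs);
               }
          }, this);

          // Remove anything the server no longer returns.
          this.remove(this.filter(function(model) { return !incomingIds[model.id]; }));
     }
});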

Architectural Framework

RequireJS is not just a script loader; it is really dependency injection, and it gives you Node.js-ish module encapsulation in the browser. Dependency injection matters: it avoids singletons and long-winded model-locator syntax, isolates units under test, and promotes greater code reuse through loose coupling. In our opinion the RequireJS project has the best momentum of the alternatives we evaluated, and its shim support iced the cake. Read more about it here.
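
A small example of the module style this buys you (the module names below are hypothetical): a module declares what it depends on, and RequireJS injects those dependencies rather than having the code reach for globals.

// analytics.js -- no globals; dependencies are injected by RequireJS.
define(['jquery', 'app/eventBus'], function($, eventBus) {
     function track(eventName, properties) {
          eventBus.trigger('analytics:track', eventName, properties);
     }
     return { track: track };
});

// Elsewhere, a consumer declares what it needs instead of reaching for a singleton.
require(['analytics'], function(analytics) {
     analytics.track('signUpComplete');
});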

View Templating & Layout

Handlebars is the clear favorite here. Even though we started out using Twitter’s Hogan implementation of Mustache, we found Handlebars’ extra bit of flexibility worthwhile for people experienced in developing complicated UIs. Handlebars’ registerHelper mechanism is also a much cleaner way to promote reuse of view transforms than the function mechanism in Mustache proper. And given that we aren’t going with a framework which enforces a view language, like AngularJS or Knockout, there really isn’t anything better for dead-simple view definition.
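
For example, a formatting transform can be registered once and reused across templates (a hypothetical helper, shown only to illustrate registerHelper):

// Register a reusable view transform instead of pre-formatting data in every view.
Handlebars.registerHelper('currency', function(value) {
     var amount = Number(value).toFixed(2).replace(/\B(?=(\d{3})+(?!\d))/g, ',');
     return '$' + amount;
});

var template = Handlebars.compile('<span class="balance">{{currency balance}}</span>');
console.log(template({ balance: 10250.4567 })); // <span class="balance">$10,250.46</span>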

Visual Design & Styling

Compass (and SASS) was chosen because of our desire to empower our design team more in the actual implementation of products; our goal being to provide the HTML structure and JS behavior and let the more opinionated people perfect the pixels. Given the broad library of mixins that create cross-browser CSS and the fact that SASS doesn’t have any significant disadvantages compared to LESS or Stylus, Compass was a natural choice for us. Currently, developers use Scout for compilation but this will eventually be integrated into our build process.

Unit Testing

SinonJS with Chai extensions enables simple and powerful behavior expression. If you’re used to BDD testing in Ruby, the patterns here will be familiar. It’s a combination that takes full advantage of the language it is written in, as well as the experience of other communities in evolving test frameworks. You get to write easy-to-read unit tests for almost anything without much pain. Seriously, if you haven’t been writing unit tests up until now, go check it out.
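
A small taste of what these tests read like (a hypothetical spec, assuming a mocha-style describe/it runner alongside chai and sinon):

var expect = chai.expect;

describe('event bus', function() {
     it('notifies subscribers exactly once per trigger', function() {
          var bus = _.extend({}, Backbone.Events);   // a minimal event bus
          var onTrack = sinon.spy();

          bus.on('analytics:track', onTrack);
          bus.trigger('analytics:track', 'signUpComplete');

          expect(onTrack.calledOnce).to.be.true;
          expect(onTrack.firstCall.args[0]).to.equal('signUpComplete');
     });
});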

Build Process

Hudson is our default continuous integration server because we were already using it. While there are other choices out there, it’s got a great community of plugins and it’s quite impressive how well it’s held up building, testing and deploying everything from ActiveX plugins, WAR files and Flex projects to our Javascript applications. We use the RequireJS optimizer to concatenate and uglify our applications prior to release.
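
The optimizer step itself is driven by a small build profile (the paths below are hypothetical), run with node r.js -o build.js:

// build.js -- a simplified r.js build profile: concatenate the module graph, then uglify it.
({
     baseUrl: 'static/js',
     mainConfigFile: 'static/js/main.js',
     name: 'main',
     out: 'static/js/main-built.js',
     optimize: 'uglify'
})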

Graphing

Raphaël could use some documentation love, but that doesn’t take away from its excellence as a cross-browser vector graphics library. All the HTML graphs you see on our site are custom Raphaël plugins that are really close to the heart of our value proposition. We want to give you the best insight into your finances possible, and oftentimes that manifests itself as highly interactive, highly custom visualizations that we hope bring you happiness and prosperity. We also seriously evaluated Highcharts and the graphing extension to Raphaël, but ultimately our passion for numbers (and a great product team that can manipulate them) drove us to pick something productive yet 100% customizable.
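
As a trivial taste of the library (an illustrative snippet, not one of our production charts), Raphaël makes drawing and animating vector shapes straightforward:

// Draw into an existing <div id="chart"></div> and grow a bar upward.
var paper = Raphael('chart', 320, 200);
var bar = paper.rect(20, 150, 40, 0).attr({ fill: '#4a90d9', stroke: 'none' });
bar.animate({ y: 60, height: 90 }, 600);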

A Quick Note About Debugging

The visual debugger was the single biggest selling point for Flash Builder, but it required a debug version of Flash Player to be installed and Flash Builder (Eclipse) to be running. Yikes.

Now I want to take a moment to wholeheartedly thank the Firebug team for pioneering the in-browser debugging console back in 2006 and doing it right. These days the Chrome Developer Tools and Safari Web Inspector are top notch and even IE9+ has some semblance of a useful developer console which allows breakpointing, runtime manipulation and analysis AND is deployed on customer machines all without having to boot up a separate program. We’re even able to enable rendering hints in webkit browsers. This is one area in which I feel Javascript development is clearly superior to any other technology I’ve worked with.

Functional Testing

Unfortunately, we haven’t nailed this down yet and both our QA team and developers are still doing evaluations. If you have suggestions we’d love to get some in the comments below.

In Summary

You can see that our philosophical approach to all these areas is “just enough is enough” without having to buy in to an opinionated, vendor-locked or overly complex piece of technology. We’re staying low-level so that we can build out a coherent platform that suits our priorities and not spend a lot of time fighting frameworks. And there’s actually a very good reason for that.

Enough with the boilerplate!

So, you get it, we’re excited about what we can do and how we can do it in HTML and javascript. But here’s some cold hard truth: we lost important architectural patterns when making the move from Flex 4 and Swiz to Backbone. Namely:

  1. Data-binding
  2. An intelligent view component lifecycle
  3. Declarative views
  4. Event mediation

To get these back in you need to write a lot of boilerplate or use a more abstracted library on top of Backbone (which is fine). However, a couple of the newer frameworks attempt to implement items 1-3. Ember is actually designed around them, and it feels like AngularJS and Knockout are in some part inspired by these features of Flex, but they are all pretty awkward in their own way. Ember 1.0.PRE explicitly supports a declarative view hierarchy with {{outlets}}, and its Objective-C-style first responder system is nice, but writing your own bindable EmberHandlebars helpers needs to be streamlined, as does the syntax for declaring view actions. The overly complex interplay between the binding mechanisms and the models, collections and controllers makes them time-consuming to debug when they don’t play nice. The router is actually nice to write for, though the heavy reliance on string literals for instance lookups and even for property sets/gets (an unnecessary recurring theme in JS frameworks) makes me sad, as it’s more brittle.

Angular, in its own way, is a lovely piece of engineering, but wiring that much functionality into the DOM is a bit adventurous and opaque, and the project just wasn’t far enough along when we first started re-writing our apps. Knockout is in the same family.

We handle event mediation simply through an application event bus that enables modules like metrics and logging to peek at what’s happening in the running application without their code being strewn everywhere. It also allows modules to request the triggering of an action without needing to care about who actually performs it.
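
In practice the bus is little more than a shared object extended with Backbone.Events (a simplified sketch; the event names are illustrative):

// One shared mediator; in our RequireJS setup it lives in its own module and is injected.
var eventBus = _.extend({}, Backbone.Events);

// The metrics module listens without the application code knowing it exists.
eventBus.on('signup:complete', function(details) {
     console.log('track signUpComplete', details);   // e.g. hand off to the analytics module
});

// A module can request an action without caring who performs it.
eventBus.on('router:navigate', function(route) {
     console.log('navigate to', route);
});

// Application code simply announces what happened.
eventBus.trigger('signup:complete', { plan: 'free' });
eventBus.trigger('router:navigate', 'dashboard');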

Regardless, we’ve been constantly impressed by all the thought and sweat that’s gone into each of the technologies we’ve talked about, and as part of our maturation as a team and a company, we’d like to start contributing back to this wonderful community. We’re currently pushing to get some of this foundational work on our roadmap. In the meantime, I guess we’ll keep writing our own boilerplate.

But wait! There’s more!

Projects We’re Super Excited About

  1. Meteor
  2. Derby
  3. AngularJS

If you’re excited about any real-time or other technologies that help create a fantastic user experience please let us know in the comments below, we like great technology and excitement.

Tech Shoutouts

Finally, we’d like to take some space to say thank you to a few of the myriad other open-source projects and utilities that have allowed us to move as quickly as we have and helped us fall in love with the community. In case you haven’t heard of one of these, check them out; they might help you too. In no particular order: livestamp.js, lodash, slimScroll, momentjs, require-handlebars-plugin, express & jade, detectmobilebrowser, and of course, the jQuery core team for doing such a good job pushing the edge for so long.