3 Ways to Build Engaging Forms

Why

The input form is an essential element of our web application: it is widely used to gather the key information we need to build personalized features for the user. In many cases, it is also the key engagement driver for our conversion points.

What

Build engaging forms that drive conversion.

How

Add personality

Most of the time, web forms are completed by users without any human guidance, so it is essential that we communicate our personality and are relatable, making the process more enjoyable and human. With our most recent form, we made the language more colloquial, in an effort to make it feel like a conversation with a financial advisor. We also presented the form over a chart background to give it context and demonstrate how the form data affects the chart.

[Screenshot: our form presented over a chart background]

Our next level would be to add some quirkiness and make it more fun. TurboTax does this quite well, with a very casual style for form labels and a quirky/fun response to every user input :) For example, check out the responses in gray text below each input in the following screenshot.

[Screenshot: TurboTax form with playful responses in gray text below each input]

Add interactivity

In general, people want to interact with elements that feel alive and look familiar. The key to making a form feel alive is instant feedback as the user interacts with it. We achieved this in our most recent effort by illustrating how each component of the end feature is built as the user progresses through the form. This way, the user can relate how the inputs they provide help us build the feature, rather than being asked for a bunch of inputs and shown the feature only at the end. It also educates the user about which inputs to adjust to have the desired effect on the end goal.

Other forms of interaction we have used include context-based hints and tips, smart defaults, avoiding unnecessary inputs, minimizing the number of inputs by deriving values from already-provided data, and instant validation. Using the appropriate input control also goes a long way toward making the form more interactive, e.g. a slider for a "how much do you save" input versus a text input for retirement age.

Take a look at this short video to see how these all come together.

Personal Capital is uniquely positioned to suggest values for most of this financial data based on the financial accounts a user has aggregated, making it one less entry for the user and, more importantly, one less mental calculation the user needs to perform. Instead, we use data science to calculate these values more accurately. This will be discussed at length in a different post.


Break up the forms

Last year, we ran an A/B test of a long-form versus a short-form variation. The long form had all the inputs up front, with a chart that updated based on the provided inputs. The short form grouped inputs into smaller sets (2-4 questions per set), presented them as a sequence of steps, and updated the chart at the end of each step as instant feedback on the user's inputs.

The result: the long form was more engaging, and the short form converted better. We learned that breaking forms up into bite-sized chunks, and building a sense that the user is completing steps and working towards the end goal, is better for conversion and drives users to the next level.

So when we built our most recent feature, we used these findings to build two different experiences: users coming to the feature for the first time are taken through the short-form variation, while an engaged (returning) user sees the long form.

This has proved very successful for us in building a complex feature that requires a fair amount of data, and presenting it in a way that is engaging, interactive, and provides a clear path to the end goal.

Reads

http://www.smashingmagazine.com/2011/06/useful-ideas-and-guidelines-for-good-web-form-design/

http://www.lynda.com/Web-Interactive-User-Experience-tutorials/Web-Form-Design-Best-Practices/83786-2.html

Automate E2E Testing with Javascript

Why

  • Faster delivery of software
  • Set us on the path of Continuous Delivery
  • Immediate discovery of regression
  • And why JavaScript? To enable the front-end team to actively contribute to tests alongside feature development

What

  • Build an automation framework that simulates user interactions and runs these tests consistently on real and headless browsers alike
  • Set up integration with our CI server (Jenkins)
  • Import test results into our Test Management Platform (QMetry)

How

The front-end team uses Mocha/Chai/Karma for writing unit tests. The QA team uses Selenium for automation. When we picked these frameworks and tools, we evaluated them thoroughly for our needs. We also wanted to leverage our existing frameworks and tools as much as we could, so that there would be less of a learning curve. Fortunately for us, we found Selenium bindings in JavaScript. There are actually quite a few, but the most prominent of them are webdriver.io and Selenium's webdriverJs.

We chose Selenium’s webdriverJs primarily for the following reasons:

  • It is the official implementation by the Selenium team, who have written bindings in various other languages as well
  • The pattern for writing a test is very similar to tests written in the Java world, with which our QA team was familiar
  • It uses promises to prevent callback hell

For a more detailed explanation with examples, please refer here.
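
To give a flavor of the style, here is a minimal sketch of a webdriverJs test running under Mocha. The URL, CSS selectors, and expected text are made up for illustration; only the overall shape reflects how such tests read.

// A minimal webdriverJs (selenium-webdriver) test under Mocha.
// The URL, selectors, and expected text are hypothetical.
var webdriver = require('selenium-webdriver');
var expect = require('chai').expect;

describe('login page', function () {
 this.timeout(30000); // E2E steps can be slow
 var driver;

 before(function () {
  driver = new webdriver.Builder()
   .withCapabilities(webdriver.Capabilities.firefox())
   .build();
 });

 it('shows an error for a bad password', function (done) {
  // Every command returns a promise and is queued in order,
  // so the test reads top-to-bottom without nested callbacks.
  driver.get('https://app.example.com/login');
  driver.findElement(webdriver.By.css('#email')).sendKeys('user@example.com');
  driver.findElement(webdriver.By.css('#password')).sendKeys('not-my-password');
  driver.findElement(webdriver.By.css('button[type=submit]')).click();
  driver.findElement(webdriver.By.css('.form-error')).getText()
   .then(function (text) {
    expect(text).to.contain('Invalid');
    done();
   });
 });

 after(function (done) {
  driver.quit().then(function () { done(); });
 });
});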

Our next piece of the puzzle was to figure out if we could use PhantomJs (a headless browser) with webdriverJs. We needed this so that we could run the tests on our CI server, where we may not necessarily have a real browser. We did have some initial challenges running webdriverJs in combination with PhantomJs without using Selenium's Remote Control Server, but looking at the source code (in JavaScript) helped us debug and get it to work. The challenges could also be attributed to our incomplete understanding of Selenium's automation world.
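
The working setup boils down to pointing the builder at PhantomJS's capabilities so that webdriverJs drives the browser directly through GhostDriver (PhantomJS's built-in WebDriver implementation). A sketch of the approach, assuming the phantomjs binary is on the PATH:

// Sketch: driving PhantomJS directly, with no Selenium server in between.
var webdriver = require('selenium-webdriver');

var driver = new webdriver.Builder()
 .withCapabilities(webdriver.Capabilities.phantomjs())
 .build();

driver.get('https://app.example.com'); // same API as with a real browser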

The last piece of the puzzle was integration with the CI server. With PhantomJs already in place, all we needed to figure out was a reporting format for the tests that our CI server (Jenkins) could understand. One of the reasons we picked Mocha was its extensive reporting capabilities, and xunit was the obvious choice because of Jenkins' support for it.
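
As an illustration, a runner along the following lines produces xunit XML for Jenkins; the paths and timeout are assumptions. Mocha's xunit reporter writes to stdout, which the build step can redirect to a file for the Jenkins JUnit plugin to pick up.

// run-e2e.js — illustrative programmatic Mocha runner with the xunit reporter.
// Invoked from the Jenkins build step, e.g.: node run-e2e.js > reports/e2e.xml
var Mocha = require('mocha');
var fs = require('fs');

var mocha = new Mocha({ reporter: 'xunit', timeout: 30000 });

// queue up every test file in the (hypothetical) test/e2e directory
fs.readdirSync('test/e2e').forEach(function (file) {
 if (/\.js$/.test(file)) {
  mocha.addFile('test/e2e/' + file);
 }
});

mocha.run(function (failures) {
 process.exit(failures ? 1 : 0); // non-zero exit fails the Jenkins build
});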

And there it is – our automation framework – all of it in the JavaScript stack.

[Diagram: our testing stack for web]

In the past couple of months, we have successfully automated the E2E tests that provide regression coverage for our web platform, and we use them on a daily basis. Now that we have an established framework and have gained immense experience writing tests, we are one step closer to Continuous Delivery. Integration with our Test Management Platform is in the works, and we will post our findings soon.

reads

https://code.google.com/p/selenium/wiki/WebDriverJs
http://visionmedia.github.io/mocha/
http://chaijs.com/
http://chaijs.com/plugins/chai-webdriver
http://phantomjs.org/page-automation.html
https://speakerdeck.com/ariya/phantomjs-for-web-page-automation
http://casperjs.org/
http://engineering.wingify.com/posts/e2e-testing-with-webdriverjs-jasmine/
http://xolv.io/blog/2013/04/end-to-end-testing-for-web-apps-meteor
http://code.tutsplus.com/tutorials/headless-functional-testing-with-selenium-and-phantomjs--net-30545

Automating your javascript unit tests – Karma

why

  • instant feedback while writing your code, eliminating the need to remember to run tests before checking in, thus leading to stable builds.
  • continuous integration with our hudson build system.
  • testing on real and multiple browsers.

what

  • set up karma to watch your source files and run tests on code changes.
  • set up karma to run the tests during hudson build and validate the build.
  • set up karma to track our code coverage.

how

We use mocha as our test framework. This, along with chai (expect/should, BDD style), worked out great for us, producing effective yet readable tests. I cannot emphasize the importance of readable tests enough. We had a team member who did a feature walkthrough by running through the tests, which I thought was pretty rad: product and QA could easily see what the feature set was, what was expected, and what the outcome was. I guess we have to do a write-up sharing more of our excitement.
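
For a flavor of what we mean by readable, here is a contrived mocha + chai expect-style test; the portfolio module and its API are hypothetical.

// A contrived example of an expect-style (BDD) test.
// The 'portfolio' module and its API are made up for illustration.
var expect = require('chai').expect;
var portfolio = require('./portfolio');

describe('portfolio', function () {
 it('computes net worth as assets minus liabilities', function () {
  var p = portfolio.create({ assets: 1000, liabilities: 250 });
  expect(p.netWorth()).to.equal(750);
 });

 it('treats missing liabilities as zero', function () {
  var p = portfolio.create({ assets: 1000 });
  expect(p.netWorth()).to.equal(1000);
 });
});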

Before karma, we were running tests using individual test files. More often than not, you are working on multiple files, and remembering to run tests on all of them manually was becoming cumbersome and error-prone. So we started researching test runners, and karma seemed to fit all our needs: automation, continuous integration, running tests on multiple real browsers, and support for mocha.

set up karma to watch your source files and run tests on code changes

This was fairly straightforward. Karma's setup is driven by a single configuration file in which you provide the location of the files you want to watch for changes, the browsers you want to run tests on, your testing framework, and any preprocessors. Here's a gist of our configuration file. The only tricky part was preprocessors. We use handlebars along with requirejs-handlebars-plugin for our templating purposes and serve our templates as individual html files. This was causing a problem: karma was converting them into JS strings because of its default html2js preprocessor. It needed a bit of reading, but the fix was simple enough. The following additions to the config file fixed the problem.

preprocessors: { 'scripts/**/*.html': [] }
files: [ ... , { pattern: 'scripts/**/*.html', served: true, included: false } ]
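
Put together, a watch-mode configuration along these lines captures the shape of our setup; the file patterns and frameworks here are assumptions, not our exact gist.

// karma.conf.js — a minimal sketch in the karma 0.10 style.
// File patterns and frameworks are assumptions, not our exact config.
module.exports = function (config) {
 config.set({
  frameworks: ['mocha', 'requirejs'],
  files: [
   'test/test-main.js',
   { pattern: 'scripts/**/*.js', included: false },
   // serve templates verbatim; don't inline them as JS strings
   { pattern: 'scripts/**/*.html', served: true, included: false }
  ],
  preprocessors: { 'scripts/**/*.html': [] },
  browsers: ['Chrome'],
  autoWatch: true
 });
};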

set up karma to run the tests during hudson build and validate the build

We created another karma configuration file for this purpose. We added a junitReporter so that we could export the tests in a format that our hudson setup could interpret. The key differences are listed below. We are currently using PhantomJS for testing on our build systems, but in the near future we want to extend this to real browsers.

reporters: ['progress', 'junit']
junitReporter: {outputFile: "testReports/unit-tests.xml"}
autoWatch: false
browsers: ['PhantomJS']
singleRun: true

set up karma to track our code coverage

Once we were able to configure karma to run in hudson, this was just a natural addition. The only additions to the karma configuration are as follows.

reporters: ['progress', 'junit', 'coverage']
coverageReporter: {
 type : 'cobertura',
 dir : 'coverage/'
}
preprocessors : {
 '**/scripts/**/*.js': 'coverage'
}

As you may have noticed, I used the words "simple" and "straightforward" quite a few times, and that is what karma is all about.

reads

http://karma-runner.github.io/0.10/index.html

http://googletesting.blogspot.com/2012/11/testacular-spectacular-test-runner-for.html


https://www.npmjs.org/package/karma-handlebars-preprocessor

Incremental Web Performance Improvements

Compression (gzip) of front end resources (js/css)

When we moved to Amazon's CloudFront (CDN), we lost the ability to serve gzipped versions of our script files and stylesheets. We are a single-page app with a large javascript and css footprint, and this was greatly affecting our application performance. We had two options to fix this.

  • Upload a gzipped version of each resource along with the original and set the content-encoding header for the file to gzip. The CDN would then serve the appropriate resource based on request headers.
  • Use a custom origin server that is capable of compressing resources based on request headers. The default origin, a simple Amazon Simple Storage Service (S3) bucket, is not capable of this; hence the problem.

Fortunately for us, all our application servers use Apache as a web server, so we decided to leverage this setup as our custom origin server. We simply had to change our deployment process to deploy front-end resources to our app servers instead of an S3 bucket. This does make the deployment process a tiny bit more complex, but the benefits are huge.
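
On the Apache side, the compression itself comes down to a mod_deflate rule along these lines (a sketch; the exact MIME types and placement depend on the server setup, and the Vary header needs mod_headers):

# Sketch: gzip js/css at the custom origin so CloudFront serves the
# compressed variant when the request carries Accept-Encoding: gzip.
<IfModule mod_deflate.c>
  AddOutputFilterByType DEFLATE text/css application/javascript application/x-javascript
  # let downstream caches (including the CDN) vary on encoding
  Header append Vary Accept-Encoding
</IfModule>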


Dividing our main javascript module into smaller modules.

As mentioned earlier, we are a single-page app with a large javascript footprint. We bundle all our javascript files into a single module and fetch it during app initialization. As we grow, so will our javascript footprint, and we did not want to run into the long initialization load times demonstrated by Steve Souders in his book High Performance Web Sites.

We use requirejs to build all our javascript modules into one single module. Fortunately, requirejs provides for combining modules into more than one bundle in a very flexible manner. We package all our "common" modules, plus the main module we need on loading the application, into one bundle. All other modules are loaded dynamically, when they are required. More specific details will be posted soon.

Pre-caching our main javascript module.

I believe this is a very common practice and a simple implementation that reaps a huge performance benefit. We now pre-fetch our main javascript module during the login process using an iframe and an html object tag. The iframe keeps the login page load times independent of the resources being fetched through it. Again, there are many ways to implement this, as described by Steve Souders, but we chose this one for simplicity.
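
A sketch of the idea, with hypothetical file names: the login page injects a hidden iframe after it has finished loading, and the page inside the iframe uses an object tag to pull the main bundle into the browser cache.

// On the login page: inject a hidden iframe once login has rendered,
// so the prefetch never competes with the login page's own resources.
// prefetch.html (hypothetical) contains:
//   <object data="/scripts/app_main.js"></object>
window.addEventListener('load', function () {
 var iframe = document.createElement('iframe');
 iframe.style.display = 'none';
 iframe.src = '/prefetch.html';
 document.body.appendChild(iframe);
});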

Additional Links

  • http://stackoverflow.com/questions/5442011/serving-gzipped-css-and-javascript-from-amazon-cloudfront-via-s3
  • http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html
  • http://requirejs.org/docs/optimization.html

Reusing Web Modules with RequireJS

why

  • Our application contains several modules, each uniquely addressing a key aspect of wealth management, e.g. Investment Checkup, Net Worth, etc. The application brings together various such aspects to present a comprehensive one-stop wealth management tool to the user.
  • However, each user's needs are unique, and so are their motivations to use our application. So we wanted to build landing pages that tie a user's need to a module, providing more context and motivation to use that module and, eventually, our entire offering.
  • We went native for our mobile presence to provide a great user experience, and that has served us well. However, a few modules of our application are self-contained, with simpler UI design and fewer interactions with the rest of the application. We wanted to embed such modules in our native app and determine if we could still deliver a great, seamless user experience.

what

  • Build “stand-alone” versions of our web modules.
  • Publish the module with a unique url that will be used to embed it in other apps.

how

We use requirejs as our module loader, and its optimization tool to package a module and its dependencies into a single unit for production use. A module definition declares all of its dependencies. During the development phase, the dependencies are loaded individually as required; in production mode, however, they are packaged into a single file along with the module code to minimize http calls and optimize load times.

A simple module definition would be as follows:

define([
  'jquery'
  , 'underscore'
], function ($, _) {

  var foo = {};
  // module code
  return foo;
});

And we reference the above module as follows:

<script data-main="scripts/foo" src="scripts/require.js"></script>

But most real-world applications have several modules that share a considerable number of framework libraries (jquery, underscore, backbone, etc.) and utility libraries, and our application is no exception. So we defined a common module that requires all the common libs, and in turn require it as part of our main module. The sub-modules still list all of their required dependencies, but as part of the build process we exclude them from the sub-module bundles, both for optimization and to prevent any shared lib from being re-initialized/reset.
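
For illustration, the common module can be as thin as the following; the 'services' and 'hbs' names match the build file shown below.

// common.js — pulls the shared libs into one bundle. Requiring this
// module anywhere guarantees the libs load (and build) exactly once.
define([
  'jquery'
  , 'underscore'
  , 'backbone'
  , 'services'
  , 'hbs'
], function () {
  // nothing to export; loading the shared libs is the point
});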

When the application starts, it loads the main module first, and thereafter the required modules as the user navigates the application. Following is a gist of how our build file is defined:

({
  modules: [
    {
      name: 'app_main'
    }
    , {
      name: 'foo'
      , exclude: [
        'jquery'
        , 'underscore'
        , 'backbone'
        , 'services'
        , 'hbs'
      ]
    }
  ]
})

app_main is loaded at the start of the application, and only if a user navigates to the foo module does the application load foo. This worked great for us in keeping our main module lean as we kept adding more features (modules), until the requirement arrived to reuse some of our modules in other apps/clients (mobile).

Fortunately for us, this did not involve a lot of work and, more importantly, did not result in any redundant code. All we had to do was define another main module (app_foo) that includes our common module and the foo module. That is it. We had to refactor our common module a little to include only the core libs; other shared libs were loaded as part of the main module (app_main). This way, we do not load any libs that the foo module does not need.

({
  modules: [
    {
      name: 'app_main'
    }
    , {
      name: 'foo'
      , exclude: [
        'jquery'
        , 'underscore'
        , 'backbone'
        , 'services'
        , 'hbs'
      ]
    }
    , {
      name: 'app_foo'
    }
  ]
})

// app_main
require(['common'], function (common) {
  require(['main']);
});

// app_foo
require(['common'], function (common) {
  require(['foo']);
});

This was not just good fortune. From the very start of this app's development, we aimed not only for modular code, but also for the ability to extend any module to work by itself. Our entire MV* stack was architected that way, and we are glad the returns are truly amazing.

We have a SPA made up of several modules, and at any time any module can be extended into its own SPA with minimal effort and no code changes to the modules themselves, thanks to the amazing requirejs optimization tool and a little foresight.

More details on how we use requirejs and its optimization tools are presented here.

reads

http://addyosmani.github.com/backbone-fundamentals

http://requirejs.org/docs/optimization.html

http://requirejs.org/docs/optimization.html#wholemultipage

http://requirejs.org/