At Personal Capital, we are creating better financial lives through technology and people.

I have just joined the front-end team and I'm loving it. We move at a startup pace, working in agile cycles and collaborating with amazing people, which makes it a perfect environment for professional development.

We have a great tech/skills stack and I wanted to share it with everybody, especially those who are thinking of joining our team.

We believe in the power of technology to change the financial industry, making it more accessible, affordable, and honest. And we believe in the power of people to change the nature of investment advice, making it more transparent, objective, and personal.

In a world that constantly talks about saving money and figuring out complex investment decisions, our front-end engineers face a huge challenge: making things as simple as one click.

To make that possible, we process lots of data. Every user has what we call a progressive profile, generated from their individual inputs.

By doing this we can recommend plans, proposals, and features so our users get the most out of our platform. This is how our award-winning platform allows 1.2 million users to track and invest their assets.

Ensuring that our technology operates on multiple devices is a must: not only a web platform, but also easy-to-use apps. Our users want access to their finances at home and on the go, and that is what we deliver.

The Front-End Stack

This article covers our web platform tech stack, the one we use day to day to make things as easy as one click/tap.

At development and build time we use Node.js, mixed with some browser-sync flavor to proxy all non-static requests to a development server in the Amazon cloud; Node and browser-sync serve the static resources to keep things simple. The team specifically chose not to use a build system like Gulp or Grunt, to avoid extra external dependencies.

Since the front-end layer is simple, NPM manages all the tasks.
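A minimal sketch of what such an npm-scripts-only setup can look like (the script names, paths, and proxy URL are illustrative assumptions, not our exact configuration):

```json
{
  "scripts": {
    "start": "browser-sync start --proxy \"https://dev.example-aws-host.com\" --serveStatic \"dist\" --files \"dist/**/*\"",
    "test": "karma start karma.conf.js"
  },
  "devDependencies": {
    "browser-sync": "^2.0.0",
    "karma": "^0.13.0"
  }
}
```

With this shape, `npm start` and `npm test` are the whole task interface, so no Gulpfile or Gruntfile has to be maintained alongside the code.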

Single Page Application

Our back-end services are written in Java, and we consume them using Backbone and Angular. This allows us to reuse tons of pre-implemented open-source plug-ins and create reusable components that can be shared across the whole Personal Capital application domain.


Most of our platform is written using Handlebars templates. This gives us the ability to use lots of pre-built helpers, accelerating the prototyping and development process.
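As a small illustration (a hypothetical helper, not one of our actual ones), a formatting helper keeps presentation logic out of templates; with Handlebars loaded it would be registered via `Handlebars.registerHelper` and used as `{{currency balance}}`:

```javascript
// Hypothetical currency helper of the kind Handlebars makes easy to share.
// With Handlebars loaded it would be wired up as:
//   Handlebars.registerHelper('currency', formatCurrency);
function formatCurrency(amount) {
  var sign = amount < 0 ? '-' : '';
  var fixed = Math.abs(amount).toFixed(2);
  // Insert thousands separators: 1234567.89 -> 1,234,567.89
  var parts = fixed.split('.');
  var withCommas = parts[0].replace(/\B(?=(\d{3})+(?!\d))/g, ',');
  return sign + '$' + withCommas + '.' + parts[1];
}

console.log(formatCurrency(1234567.891)); // "$1,234,567.89"
console.log(formatCurrency(-42));         // "-$42.00"
```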


We are migrating from Bootstrap to Inuit, refactoring code and creating our own styling framework based on OOCSS/BEM. By doing this we ensure coding-style consistency across our style sheets, generating maintainable, beautiful code that is reusable and easy to understand.
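A sketch of the BEM naming convention this gives us (the class names here are illustrative, not taken from our actual framework):

```css
/* Block: a standalone component */
.account-card { padding: 16px; }

/* Elements: double underscore ties them to their block */
.account-card__title { font-weight: bold; }
.account-card__balance { text-align: right; }

/* Modifier: double hyphen marks a variation of the block */
.account-card--negative .account-card__balance { color: red; }
```

Because every selector names its block, a new engineer can tell from the class alone where a rule lives and what it affects.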

This is especially important since the company is growing rapidly; by doing this we help our new engineers get up to speed faster and deliver value sooner.

Data Visualization

If you use our platform, you have seen a lot of beautiful visual components in our design. We use graphics whenever possible to make complex data easier to understand.

Our graphics manage complex datasets. They are dynamic, so you can interact with them and drill from the most generic information down to specific transactional details.

To achieve this, we used Raphael. But since it is no longer maintained, we are transitioning to the D3 library, which is much more flexible and powerful, giving us the tools to generate almost any kind of graphic we can think of.
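To give a flavor of why D3 is attractive here: its scales map data values onto pixel coordinates, and everything composes from small functions like that. Below is a minimal pure-JavaScript stand-in for the idea behind `d3.scaleLinear` (a sketch so it runs without the library; D3's real scales add clamping, ticks, inversion, and more):

```javascript
// Minimal stand-in for the idea behind d3.scaleLinear: map a data
// domain onto a pixel range with a returned function.
function linearScale(domain, range) {
  var d0 = domain[0], d1 = domain[1];
  var r0 = range[0], r1 = range[1];
  return function (value) {
    var t = (value - d0) / (d1 - d0); // normalized position in the domain
    return r0 + t * (r1 - r0);        // projected into the pixel range
  };
}

var x = linearScale([0, 100], [0, 500]); // e.g. percent -> pixels
console.log(x(50)); // 250
console.log(x(0));  // 0
```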


The front-end team uses Mocha/Chai/Karma for writing unit tests. The QA team uses Selenium for automation. When we picked those frameworks and tools, we evaluated them thoroughly to make sure they fit our needs.

Mocha, along with Chai (expect/should, BDD style), worked out well for us, producing effective yet readable tests. I cannot emphasize the importance of readable tests enough. Now the feature set can be seen across the organization, comparing the expected outcome with the actual outcome. We'll have to do a write-up sharing more of our excitement.

The use of Mocha and Karma helped us on the path to continuous delivery and immediate regression reports. Tests are executed periodically by our continuous integration server, which monitors the health of the codebase, watches for consistency, and gives us code coverage information. This is really important to ensure that we're delivering a first-class service/product.

Back in 2014 we wrote an article about this; you can find it here.

Automation and CI

We use Jenkins as the orchestrator to build, test, and deploy all of our environments. Our DevOps team wrote the tasks, which run automatically or on demand with one click.

Jenkins triggers Karma to run our Mocha tests and Selenium to run regression. Those tests run in different browsers, both with a head and without one (AKA headless browsers).

BTW, Karma watches our files and runs tests during development too, eliminating the need to do so manually before pushing code and giving us immediate feedback on what we are coding.

We also integrate with Husky, which prevents our dev team from pushing code to the origin/remote if there are failing unit tests.
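With the Husky versions of that era, this is a one-line addition to package.json: Husky maps the `prepush` script onto a Git pre-push hook (the script contents below are illustrative):

```json
{
  "scripts": {
    "test": "karma start --single-run",
    "prepush": "npm test"
  }
}
```

If `npm test` exits non-zero, the push is aborted before it ever reaches the remote.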


We use Git and we love it. The workflow is pretty similar to git-flow. We just extend it, adding a few extra branches like a sandbox one, where we run all kinds of experiments.

Once we run "git push", the linters and code-style police go through our code to ensure consistency and good practices. It wasn't always this way, which was a big problem for our engineers given JavaScript's flexibility. Now the codebase keeps gaining consistency across the apps.


At Personal Capital, everyone is a team player. We volunteer for tasks based on our expertise. We use Scrum across all of the engineering teams, and our sprints last one intense week.


The input form is an essential element of our web application, widely used to gather the key information we need to build personalized features for the user. In many cases, it is the key engagement driver for our conversion points.


Build engaging forms that drive conversion.


Add personality

Most of the time, web forms are completed by users without any human guidance, so it is essential that we communicate our personality and are relatable, making the process more enjoyable and human. With our most recent form, we made the language more colloquial, aiming for the feel of a conversation with a financial advisor. We presented the form over a chart background to make it more contextual and demonstrate how the form data affects the chart.

[Screenshot: our form presented over a chart background]

Our next level would be to add some quirkiness and make it more fun. TurboTax seems to do this quite well, with a very casual style for form labels and a quirky/fun response to every user input 🙂 For example, check out the responses in gray text below each input in the following screenshot.

[Screenshot: TurboTax form with playful responses below each input]

Add interactivity

In general, people want to interact with elements that feel alive and look familiar. The key to making a form feel alive is providing instant feedback as the user interacts with it. In our most recent effort, we achieved this by illustrating how each component of the end feature is built as the user progresses through the form. This way, the user can see how the inputs they provide help us build the feature, rather than being overwhelmed by a bunch of inputs and only seeing the feature at the end. It also educates the user as to what they need to adjust to have the desired effect on the end goal.

Other forms of interaction we have used include context-based helpful hints and tips, smart defaults, avoiding unnecessary inputs, minimizing the number of inputs by deriving them from already-provided data, and instant validation. Using appropriate input controls also goes a long way toward making the form more interactive; for example, using a slider for the "how much do you save" input versus a text input for retirement age.
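As a tiny sketch of instant validation (the rules and names here are hypothetical, not our production logic), the validator runs on every input event rather than on submit, so the feedback appears while the user is still in the field:

```javascript
// Hypothetical instant validation for a retirement-age input: call this on
// every 'input' event so feedback appears immediately, not after submit.
function validateRetirementAge(currentAge, retirementAge) {
  if (!Number.isFinite(retirementAge)) {
    return { ok: false, message: 'Please enter a number.' };
  }
  if (retirementAge <= currentAge) {
    return { ok: false, message: 'Retirement age must be after your current age.' };
  }
  if (retirementAge > 100) {
    return { ok: false, message: 'That seems high; double-check the value.' };
  }
  return { ok: true, message: '' };
}

console.log(validateRetirementAge(35, 65).ok);      // true
console.log(validateRetirementAge(35, 30).message); // inline feedback text
```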

Take a look at this short video to see how these all come together.

Personal Capital is uniquely positioned to suggest values for most of the financial data based on the financial accounts a user has aggregated, making it one less entry for the user and, more importantly, one less mental calculation the user needs to perform. Instead, we use data science to calculate these values more accurately. This will be discussed at length in a different post.


Break up the forms

Last year, we ran an A/B test comparing a long-form and a short-form variation. The long form had all the inputs up front, with a chart that updated based on the provided inputs. The short form grouped inputs into smaller sets (2-4 questions each), presented them as a sequence of steps, and updated the chart at the end of each step as instant feedback on the user's inputs.

The results: the long form was more engaging, but the short form converted better. We learned that breaking forms into bite-sized chunks, building a sense that the user is completing steps toward the end goal, is better for conversion and drives users to the next level.

So, when we built our most recent feature, we used these findings to create two different experiences: users coming to the feature for the first time go through the short-form variation, while engaged (returning) users see the long form.

This has proved very successful for us in building a complex feature that requires a fair amount of data, presenting it in a way that is engaging and interactive and provides a path to completion toward the end goal.




This is your opportunity to be a disruptor in the financial industry while creating a service that serves families and improves lives.

What sets us apart is the way we use both data aggregation and machine learning technologies along with certified financial advisors to model a family’s financial life. No one has ever had the data, the technology and the people to do what we do: combine the power of an abstract computer model with the expertise of certified financial advisors to help families understand their current financial situation and help them optimize it for the next stages of their lives.

The amazing thing about working at Personal Capital is that when I leave the office each evening, I’m pumped with more energy, enthusiasm and optimism than when I came in. And that’s because I get to build a noble service with great people; it can’t get any better than that. Every day I work with the most talented engineers that I have ever worked with, building sophisticated services that empower thousands of American families to take control of their finances.

We are looking for curious engineers. We are looking for thinkers and doers. You need to be smart and build smart products. You need to be ambitious. This is not an easy job. You will need to wear multiple hats, work with many unknowns, travel many unpaved roads to tackle large-scale problems. But it will be your finest work and creation, and an amazing engineering team is here to collaborate with you and support you.

Here are our current open positions:




  • Faster delivery of software
  • Set us on the path of Continuous Delivery
  • Immediate discovery of regressions
  • And why JavaScript: it lets the front-end team actively contribute to tests alongside feature development


  • Build an automation framework which will simulate user interactions, and run these tests consistently on real and headless browsers alike
  • Set up integration with our CI server (Jenkins)
  • Import test results into our Test Management Platform (QMetry)


The front-end team uses Mocha/Chai/Karma for writing unit tests. The QA team uses Selenium for automation. When we picked these frameworks and tools, we evaluated them thoroughly for our needs. We also wanted to leverage our existing frameworks and tools as much as we could so that there would be less of a learning curve. Fortunately for us, we found Selenium bindings in JavaScript. Actually there are quite a few, but the most prominent are webdriver.io and Selenium's webdriverJs.

We chose Selenium’s webdriverJs primarily for the following reasons:

  • It is the official implementation by the Selenium team who have written bindings in various other languages
  • The pattern of writing a test is very similar to a test written in the Java world, with which our QA team was familiar
  • Its use of promises to prevent callback hell

For more detailed explanation with examples, please refer here.
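The promise style is what keeps multi-step browser interactions readable. The sketch below fakes a driver with promise-returning methods so the chaining pattern is visible without a browser; the stub is illustrative and is not the selenium-webdriver API itself:

```javascript
// Stub driver whose methods return promises, mimicking the shape of
// webdriverJs calls so the flat .then() chain is visible.
var fakeDriver = {
  log: [],
  get: function (url) { return this._step('get ' + url); },
  click: function (selector) { return this._step('click ' + selector); },
  getTitle: function () {
    return this._step('getTitle').then(function () { return 'Dashboard'; });
  },
  _step: function (name) {
    var self = this;
    return Promise.resolve().then(function () { self.log.push(name); });
  }
};

// Each step runs after the previous one resolves: no nested callbacks.
fakeDriver.get('https://example.com/login')
  .then(function () { return fakeDriver.click('#submit'); })
  .then(function () { return fakeDriver.getTitle(); })
  .then(function (title) {
    console.log(title);                       // the resolved title
    console.log(fakeDriver.log.join(' -> ')); // the steps, in order
  });
```

Written with nested callbacks instead, each step would add a level of indentation; the promise chain keeps the test flat no matter how many steps it has.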

Our next piece of the puzzle was to figure out if we could use PhantomJs (a headless browser) with webdriverJs. We needed this so that we could run the tests on our CI server, where we may not necessarily have a real browser. We did have some initial challenges running webdriverJs in combination with PhantomJs without using Selenium's Remote Control Server, but looking at the source code (in JavaScript) helped us debug and get it to work. The challenges could also be attributed to our incomplete understanding of Selenium's automation world at the time.

The last piece of the puzzle was integration with the CI server. With PhantomJs already in place, all we needed to figure out was a reporting format for the tests that our CI server (Jenkins) could understand. One of the reasons we picked Mocha was its extensive reporting capabilities. Xunit was the obvious choice because of Jenkins's support for it.
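In practice that wiring is a one-line CI step of this shape (the output path is illustrative; Jenkins then picks the XML up with its JUnit/xUnit result publisher):

```shell
# Illustrative CI step: emit results as xunit XML for Jenkins to parse.
mocha --reporter xunit > reports/unit-tests.xml
```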

And there it is – our automation framework – all of it in the JavaScript stack.

Testing Stack for Web

In the past couple of months, we have successfully automated the E2E tests that provide regression coverage for our web platform, and we use them on a daily basis. Now that we have an established framework and have gained immense experience writing tests, we are one step closer to Continuous Delivery. Integration with our Test Management Platform is in the works and we will post our findings soon.



I grew up on the east coast, currently go to school in the midwest, and was fortunate enough to spend my summer on the west coast working with the Personal Capital engineering team. In addition to working on an amazing engineering team, I became familiar with the workings of a fast-paced tech environment and learned a great deal about web and mobile automation. JavaScript is now my strongest programming language, and I learned to appreciate its value to a commercial company (not just in a coding assignment). I could not have asked for a better summer experience.

My coworker, Nick Fong, already wrote a post here describing the main points of our project this summer. So as not to be repetitive, I will be writing more about the problems and roadblocks we faced along the way and how we overcame them. I highly suggest reading his post first to get a better idea of the general framework that I will be talking about. You can find Nick's post here.

Working with Selenium WebDriverJS, many concepts were new to me, but JavaScript promises and their asynchronous behavior were among the most confusing. First, promises were necessary for the scripts we wrote because they were the only way to access information from the driver. Below is an example of using a promise to access the PIN field while linking accounts; it verifies that a PIN field is present by checking that the list of elements returned by the driver is not empty.

driver.findElements(webdriver.By.css('[name="PASSWORD1"]')).then(function(pin) {
    if (pin.length > 0) { // makes sure the pin field is there
        helper.enterInput('[name="PASSWORD1"]', accounts['L'+index].v); // Name distinct for Firstrade
    }
});

This in itself was not that difficult to do in our scripts. We created many 'helper' functions, which you can see used above, that use promises to access and manipulate the driver. What took some time to grasp was that in asynchronous scripts, anything that happens within a promise stays within that promise. This became a scoping issue when I would edit a global variable inside one promise and have other code read the original value of that variable.
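A minimal illustration of that pitfall (the names here are hypothetical, not our test code): the mutation inside the promise runs later than any synchronous read, so code outside the chain still sees the old value.

```javascript
// Illustration of the async scoping pitfall: a shared variable mutated
// inside a promise is not yet updated when read synchronously.
let linkedCount = 0;

function linkAccount() {
  // Simulates a driver call that resolves asynchronously.
  return Promise.resolve().then(function () {
    linkedCount += 1; // runs later, inside the promise chain
  });
}

const pending = linkAccount();
console.log(linkedCount); // still 0: the .then() callback has not run yet

pending.then(function () {
  console.log(linkedCount); // 1: reads made inside the chain see the update
});
```

The fix is to keep reads inside the same chain (or pass values along through `.then()`) instead of relying on shared globals.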

For the builds to pass, the scripts must run with no errors in PhantomJS, the headless browser we ran everything on before pushing to production. However, just because something worked in PhantomJS did not mean it would work in the other browsers. We found after much trial and error that PhantomJS behaved the most like Safari, though even a script working in Safari was not guaranteed to work in PhantomJS. A very peculiar error I faced occurred when writing automation scripts for www.personalcapital.com in Chrome. When I was testing the links on the page, every one but the last would fail, and for the longest time I had no idea why. Eventually I figured out that the banner that followed the user down the page as they scrolled was blocking the link, because our code would scroll so the link was as close as possible to x:0, y:0 before clicking. To fix this:

driver.executeScript('window.scrollTo(0,' + loc.y + ')');

was changed to this:

driver.executeScript('window.scrollTo(0,' + (loc.y - 50) + ')');

(The parentheses around loc.y - 50 matter: without them, JavaScript concatenates the string with loc.y first and then tries to subtract 50 from the result, producing NaN.)

This change, although extremely simple, took a long time to figure out. It also gave me a greater appreciation for this work and how much time it actually takes. Before working here, I would, like most developers, spend a lot of time debugging my code. Only after working for a company that is actually pushing a product out to a customer did I truly appreciate the time needed to get everything right.

I divided all the tests into two categories: tests that were completely internal and those that used outside information. An internal test would be something like checking that our information-gathering survey worked or that the marketing page's links were working. The latter type consisted of tests such as linking accounts or checking transactions. One of the tests I wrote contained a script for adding accounts to test IDs and checked to make sure everything was linked correctly. Not only did the parameters of this test change three times (thanks, guys), but I also had to deal with naming conventions that were out of our control. For the most part, they were consistent for username and password, but when other fields were added, all bets were off.

Although I joked about the changing parameters, it was actually an important part of my summer because it exposed me to the compromises that automation scripts need to accommodate. The debate was over how dynamic the script should be. Obviously, in an ideal world, the script could link any account in any way. However, after a lot of work, and because we had to rely on third-party information, this was not possible. So the question remained whether we wanted a smoother, simpler script that tests the basic functionality for a few accounts, or one that tries its best to be fully dynamic. Eventually we decided on the former, setting aside five accounts of different types to aggregate.

There is so much more I could talk about, but that is for another time. I would recommend Selenium WebDriverJS, found here, to anyone interested in writing these automation scripts. I want to thank all the people at Personal Capital for making me feel at home this summer; it was a pleasure coding with you.