Efficient SDLC

To be tested continuously, as the term CI suggests, the code under test must run fast, fail fast, be as idempotent as possible, be easily configurable, and produce adequate, quickly actionable output and logging

  • If requirements like these are taken into account up-front, they dramatically change the way we think about the code we release. They may sound like irrelevant concerns at the beginning of a development cycle, but practice shows that in the end, software that satisfies such requirements is stronger, more resilient, and of higher quality. It also means we are “forced” to think about the big picture even when “simply coding” in our seemingly isolated environments
  • If the code we release is to be tested continuously, it must be “nice” to the database, file system, and any limited or expensive resource it may eventually hit. It is also very desirable that the state of any “external” resource be left intact after the test (not always possible, but at least desirable)
  • Notifications of test failures should be concise and well targeted – this is why we should be concerned with efficient and precise error logging and reporting at all stages of development
  • During Agile planning it is very useful to keep Unit Testing in mind. If the scope of a Task makes the Unit Testing effort impossible, it could be a warning sign and prompt an additional breakdown of a Task or a Story (not a rule set in stone, but a useful point to keep in mind)
  • Thinking of Fast tests vs Slow tests, or Unit Tests vs Integration Tests, and their effects on the CI process helps to optimize that process, make the most of it, and, in addition, better understand the code base itself


Knowledge Transfer and Documentation by example

  • One of the best ways to explain how a system works internally is to show a fellow engineer a few running Unit Tests. Of course, tests can be too fine-grained to reveal the big picture. This is why it is always good to have a few very straightforward, high-level tests that simply demonstrate the essence of a feature or selected piece of logic. We can think of them as Demo Tests, or Spec Tests. In a team environment, and especially in a distributed team environment, this can mean a lot
  • A good Unit Test self-documents the code and can efficiently complement or illustrate any Javadoc entry. More importantly, “documentation” of this kind will naturally be kept up to date – otherwise the build will break 🙂
  • A good Unit Test can demonstrate not only a feature, but the idea or concept behind the feature
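The bullets above can be made concrete with a small sketch of a “Spec Test.” Everything here is illustrative – the fee calculator, its tiers, and the rates are hypothetical stand-ins – but it shows how a single high-level test can narrate the essence of a feature rather than poke at its internals:

```javascript
// Hypothetical feature under test: a tiered fee calculator.
// Illustrative pricing: 89 bps on the first $1M, 79 bps on the portion above.
function quoteAnnualFee(assetsUnderManagement) {
    var tier1 = Math.min(assetsUnderManagement, 1000000) * 89 / 10000;
    var tier2 = Math.max(assetsUnderManagement - 1000000, 0) * 79 / 10000;
    return tier1 + tier2;
}

// The Spec Test doubles as documentation: a new engineer can read this
// scenario and understand the pricing model without opening the implementation.
function specTest_secondMillionIsBilledAtTheLowerTier() {
    var feeAt1M = quoteAnnualFee(1000000); // 8900
    var feeAt2M = quoteAnnualFee(2000000); // 8900 + 7900 = 16800
    if (feeAt2M >= feeAt1M * 2) {
        throw new Error('the second million should be billed at the lower tier');
    }
}
specTest_secondMillionIsBilledAtTheLowerTier();
```

A handful of tests like this, kept deliberately coarse, reveal the big picture that fine-grained tests hide.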


POC and R&D work plus some non-standard applications

  • When prototyping a new feature or doing preliminary research, we often end up writing standalone programs that eventually become throwaway code. Implementing these tasks as Unit Tests has certain merit – they become part of the code base (but not part of the production code line), they are easily executable as part of the standard testing framework, available to be revisited later, and discoverable by others for review and study (again, this does not apply to every situation, but it is something to think about)
  • Many proprietary frameworks and 3rd-party systems often feature internal or proprietary configuration parameters that frequently lack documentation. We often make use of such parameters, usually after considerable effort to tune them for the needs of our applications. But those parameters and their effects may change from release to release, or from environment to environment (for example, Oracle DB and its optimizer with its proprietary hints, or the merely suggested JDBC fetch sizes used for batch processing). If the effects and baselines of such “boundary” configurations are captured as part of automated Unit Testing, any unwanted change in behaviour can be caught proactively, before it reaches the Production environment. For example, a Unit Test can automatically produce and capture the explain plan for a query used in Java code, and assert against the absence of Full Table Scans. This technique is also useful to detect changing data selectivity as it affects the applicability of existing indexes.
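A minimal sketch of the explain-plan guard described above. The plan rows here are hard-coded stand-ins; in a real test they would come from running `EXPLAIN PLAN FOR <query>` against the database and reading the plan table:

```javascript
// Fail the test if any step of the (captured) explain plan is a full table scan.
function assertNoFullTableScan(planRows) {
    planRows.forEach(function(row) {
        if (row.operation === 'TABLE ACCESS' && row.options === 'FULL') {
            throw new Error('Full table scan detected on ' + row.objectName);
        }
    });
}

// Hypothetical plan for a query that is expected to use an index:
var plan = [
    { operation: 'SELECT STATEMENT', options: null,             objectName: null },
    { operation: 'TABLE ACCESS',     options: 'BY INDEX ROWID', objectName: 'ACCOUNTS' },
    { operation: 'INDEX',            options: 'RANGE SCAN',     objectName: 'IDX_ACCOUNTS_USER_ID' }
];
assertNoFullTableScan(plan); // passes today; would throw if the index stopped being used
```

The assertion is cheap, but it turns an undocumented optimizer behaviour into a baseline that CI defends on every build.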


Efficient bug fixing and troubleshooting

  • When Unit Test coverage for some module or feature is sufficient, we can often completely avoid the expensive cycle of deploying and running the application in the container, the requirements of database connectivity and other configuration, as well as the need to attach to the running process with the IDE or debugger of choice. Running a few unit tests (even after some modifications) to reproduce the scenario, based on data collected from error logs or from user input, may be all that is required to locate the culprit
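A sketch of replaying a production error locally. The log format and the function under test are both hypothetical; the point is that with enough unit-test coverage, the error log alone can drive a reproduction – no container, no database, no debugger attached:

```javascript
// Parse the failing inputs straight out of a (hypothetical) error-log line,
// e.g. "ERROR transfer failed userId=42 amount=-10"
function parseErrorLog(line) {
    var m = /userId=(\d+) amount=(-?\d+)/.exec(line);
    return { userId: Number(m[1]), amount: Number(m[2]) };
}

// The (illustrative) logic under test:
function validateTransfer(input) {
    if (input.amount <= 0) return { ok: false, reason: 'amount must be positive' };
    return { ok: true };
}

// Replay the exact failing scenario from the log:
var input = parseErrorLog('ERROR transfer failed userId=42 amount=-10');
var result = validateTransfer(input); // result.reason pinpoints the culprit
```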


Happy Unit Testing!

Formal Unit Testing is usually treated as a trivial and boring subject concerned with tedious testing of low-level implementations. Developers often see it as overhead – something that takes time away from feature development and requires constant maintenance as code changes and evolves. But it is not so, or at least this is not the whole story…


The truth is that, if approached properly, Unit Testing becomes a powerful method or technique that not only assures a certain quality of the end product, but naturally enlightens all stages of the Software Development Life Cycle (SDLC). It is not something that needs to be done at the end of the development cycle, but rather a discipline that should be practiced starting at the very inception of the design process. And as such, it becomes a fundamental design principle in its own right – just like the famous Open/Closed principle, for example. For the sake of this post, let’s name this design principle the Principle of Testability (PoT).


This principle by no means guarantees “bug-free software.” It rather implies that the software product should be designed and implemented in such a way that it can be efficiently tested on any chosen level and in any chosen environment (from a local machine to a CI server)! It also promises that if we focus on achieving and retaining Testability throughout the development cycle, the whole software ecosystem naturally aligns, self-organizes, and yields high-quality results on many levels.


Let’s now review a few practical examples that illustrate how PoT naturally leads toward strong design and implementation solutions.


Selected design principles and best practices

  • Favoring Composition over Inheritance

It is much easier to test code designed around the Composition approach than code based on Inheritance – when we can “take the system apart,” we can focus on testing its individual parts in isolation, which is in fact the definition of Unit Testing. Thinking of PoT will innately promote Composition.
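A small sketch of the point (the names are illustrative): instead of a report generator base class that subclasses override, the formatter is a separate part handed in – so each part can be exercised on its own:

```javascript
// One part: a formatter, testable entirely in isolation.
function csvFormatter(rows) {
    return rows.map(function(r) { return r.join(','); }).join('\n');
}

// The other part: a generator that merely composes whatever formatter it is given.
function makeReportGenerator(formatter) {
    return {
        generate: function(rows) {
            return formatter(rows); // the generator is just the composition point
        }
    };
}

// Unit-test the formatter alone...
var csv = csvFormatter([['a', 'b'], ['c', 'd']]); // 'a,b\nc,d'
// ...or the generator with a trivial stand-in formatter:
var report = makeReportGenerator(function(rows) { return 'stub'; }).generate([]);
```

With an inheritance hierarchy, neither piece could be tested without dragging in the other.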

  • Favoring Convention over Configuration

From such trivial aspects as a naming convention for discovering tests based on name patterns (e.g. FooManager vs FooManagerTest), to conventions for discovering properties and context files at certain locations at runtime – lightweight Conventions can be an extremely useful and efficient tool for a multipurpose testing framework. The PoT will lead to giving preference to flexible Conventions over involved Configurations – and not only in the domain of testing, but potentially in the respective areas of the software product itself.
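A sketch of convention-driven test discovery: given a list of file names, the naming pattern alone identifies the tests – no configuration file has to list them:

```javascript
// Discover tests by the FooManager / FooManagerTest naming convention.
function findTestFiles(fileNames) {
    return fileNames.filter(function(name) {
        return /Test\.js$/.test(name);
    });
}

var files = ['FooManager.js', 'FooManagerTest.js', 'BarService.js', 'BarServiceTest.js'];
var tests = findTestFiles(files); // ['FooManagerTest.js', 'BarServiceTest.js']
```

Add a new class with a matching test and it is picked up automatically; nothing to register, nothing to forget.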

  • Adhering to the Open/Closed Principle

All software systems evolve over time and require changes and modifications. Focusing on PoT will naturally promote keeping public APIs sealed, as well as leaving their original tests intact. Instead of changing existing code and making a considerable effort to ensure backward compatibility by modifying all corresponding tests, it is simply wiser and safer to add new flavors of APIs and complement them with a number of new tests – which in essence is favoring OCP over other possible approaches.
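A tiny sketch of the “new flavor” approach (the API is illustrative): the original function and its tests stay sealed, and the extension arrives alongside rather than inside it:

```javascript
// Original API: untouched, its tests remain intact.
function formatAmount(cents) {
    return '$' + (cents / 100).toFixed(2);
}

// New flavor, added alongside with its own new tests:
function formatAmountWithCurrency(cents, currency) {
    if (currency === 'USD') return formatAmount(cents);
    return currency + ' ' + (cents / 100).toFixed(2);
}

formatAmount(1999);                    // '$19.99' – old behaviour preserved
formatAmountWithCurrency(1999, 'EUR'); // 'EUR 19.99'
```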

  • Adopting IoC and DI

For efficient testing of core logic in classes that depend on other classes and resources, it is extremely important to be able to exclude all unrelated dependencies at runtime (example: a database-based audit trail or logging could be completely irrelevant when testing the actual business logic of a given method; but because auditing is an integral part of the method, it requires full setup and access to the database during the test. When DI is used, we can easily provide another audit manager implementation that does nothing at runtime). Thinking of these concerns up-front will naturally lead toward the use of IoC and DI to enable mocking and stubbing later on, for the sake of testing.
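The no-op audit manager idea, sketched with illustrative names: because the business logic receives its auditor rather than constructing it, a test can swap the database-backed implementation for a stub that does nothing:

```javascript
// The service gets its auditor injected, never constructs it itself.
function makeTransferService(auditor) {
    return {
        transfer: function(from, to, amount) {
            if (amount <= 0) throw new Error('invalid amount');
            auditor.record('transfer', { from: from, to: to, amount: amount });
            return { from: from, to: to, amount: amount };
        }
    };
}

// Production wiring would pass a DB-backed auditor; the test passes a no-op:
var noopAuditor = { record: function() { /* intentionally does nothing */ } };
var service = makeTransferService(noopAuditor);
var transfer = service.transfer('A', 'B', 100); // core logic runs, no database in sight
```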

  • Favoring “Fail Fast” behaviour

When we think about PoT up-front, we are aware that any test may legitimately fail at any point. As such, it may leave the system in some “unfinished state,” preventing subsequent re-execution. Detecting failure as early as possible, aborting further processing (including asynchronous processing), and resetting the state of the system are properties required for efficient unit testing. At the same time, they will most probably benefit the system as a whole.
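A minimal fail-fast sketch (the batch-import function is hypothetical): every precondition is checked before any state is touched, so a failing test aborts immediately and leaves nothing to clean up:

```javascript
function importAccounts(batch) {
    // Fail fast: reject bad input before any work begins...
    if (!Array.isArray(batch)) throw new Error('batch must be an array');
    batch.forEach(function(acct, i) {
        // ...and validate every record before a single (hypothetical) write happens.
        if (!acct.id) throw new Error('record ' + i + ' is missing an id');
    });
    // Only now would the writes begin; a failure above leaves the system untouched.
    return batch.length;
}

importAccounts([{ id: 1 }, { id: 2 }]); // 2
```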

  • Built-in retries, recovery, and self-healing features

When an application runs in Production, most failure-related use cases are interactive – the user may be prompted to try again, asked to re-login, or told to call support. During a test we can’t rely on such luxuries. We sometimes need to be able to retry a number of times, simply as part of a valid test, or to allow for a failure in a negative test, followed by restoration of a valid state (even if asynchronously) before the next test. Thinking of these concerns up-front may benefit the overall design. It may also help in choosing an adequate 3rd-party framework based on such capabilities (a good example is the @Rollback(true) feature of the Spring Testing Framework).
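A minimal retry helper of the kind described above (synchronous for brevity; a real one would likely be promise-based and add backoff between attempts):

```javascript
// Run fn up to maxAttempts times, rethrowing the last error if all attempts fail.
function withRetries(maxAttempts, fn) {
    var lastError;
    for (var attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            return fn(attempt);
        } catch (e) {
            lastError = e; // swallow and retry
        }
    }
    throw lastError; // all attempts exhausted
}

// A flaky operation that succeeds on the third try:
var calls = 0;
var result = withRetries(5, function() {
    calls++;
    if (calls < 3) throw new Error('transient failure');
    return 'ok';
});
```

The same helper serves both purposes from the bullet above: retrying as part of a valid test, and bounding how long a negative test tolerates failure.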


Optimal granularity and composition of the system

  • Self-contained and properly scoped classes, packages, jar and war files of optimal composition

It is hard to strike the ideal breakdown, but thinking about PoT in this context will help a lot in the quest for the proper balance. A single class’s content may be revised when we think up-front about the complexity of the corresponding test class. We may opt to break one monolithic class into a number of composing classes, simply to simplify and better scope their corresponding test classes. The same may apply to packages and deployment modules. Another aspect that comes to mind is the benefit of keeping test classes in the same packages as their corresponding “parents.” This allows testing of protected methods (such protected methods can be a legitimate trade-off versus the traditionally prescribed private methods – especially those with heavy logic).

  • Using APIs parameters adequate to the application tier or layer

Often we see parameters that clearly belong to the presentation layer make their way down to, for example, the persistence layer (think of an HTTP Request, or Session, or some Thread Local Context established at the entry point into the system). It simplifies the signature, maybe, but it completely hides the required inputs and the need for validation, making such a method very hard to test. In addition, it is wrong conceptually. This can be avoided by applying PoT early enough – more precisely, at the time of class/method creation.
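A sketch of the layering point (all names are illustrative, and the query objects are stand-ins). The first version drags a presentation-layer object into the persistence layer; the second takes exactly the inputs it needs, so a test can call it directly:

```javascript
// Hard to test: a whole fake HTTP request must be assembled just to run a query,
// and the real inputs (userId, from) are hidden inside it.
function findTransactionsBad(httpRequest) {
    var userId = httpRequest.session.userId;
    var from = httpRequest.params.from;
    return { sql: 'SELECT /* stand-in */', bind: [userId, from] };
}

// Easy to test: required inputs are explicit and validated at the boundary.
function findTransactions(userId, from) {
    if (!userId) throw new Error('userId is required');
    return { sql: 'SELECT /* stand-in */', bind: [userId, from] };
}

findTransactions(42, '2014-01-01'); // no request, no session, no container
```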

  • Resisting the tendency to treat commodity services provided by the Container as a given

Modern application containers provide a full array of commodity services at our disposal: enterprise messaging, transaction management, distributed cache services, etc. These services are easily available almost everywhere in the code. While the container offers ease and transparency in accessing these – otherwise hard to configure – services, the application itself becomes fully entangled with its container. We realize this when trying to unit test areas of the application: as use of the container-provided services proliferates through the code, it becomes harder and harder to test the application outside of a running container. The PoT suggests a number of approaches to this subtle and often overlooked problem. Typically they boil down to creating a well-defined layer where the application registers with the services available in the container and overall treats them as something external. There we can choose to register with services provided in some alternative way (mocked or standalone), or not register them at all – provided, of course, that the code consistently checks for their availability and has some default behavior where that is acceptable (example: methods under test may broadcast audit events using JMS; this can be completely irrelevant to the business logic we are trying to test; if we consistently check for JMS provider availability before using it, we can avoid exceptions during the test phase, log a relevant message, and proceed). In the end, addressing, or at least acknowledging, concerns like these will result in a stronger, more flexible, and more resilient architecture.
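A sketch of that registration layer (names are illustrative): the application looks the messaging service up and checks availability before using it, so a test can simply leave it unregistered:

```javascript
// A minimal service registry standing in for the container boundary.
var registry = {
    services: {},
    register: function(name, svc) { this.services[name] = svc; },
    lookup: function(name) { return this.services[name] || null; }
};

function completeOrder(order) {
    // Core business logic runs unconditionally...
    order.status = 'complete';
    // ...while the broadcast is optional by design: check availability first.
    var messaging = registry.lookup('messaging');
    if (messaging) {
        messaging.broadcast('order.completed', order);
    }
    // With nothing registered (e.g. in a unit test), we skip the broadcast
    // and proceed instead of blowing up.
    return order;
}

completeOrder({ id: 7 }); // works with no messaging service registered at all
```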

  • Wire-On / Wire-Off mechanisms

One setting or wiring may enable or disable a whole feature or module. This can be extremely important when you want to be able to test some potentially invasive new code without “releasing” it or making it available in the UI. Being concerned with mere Testability in a case like this may bring into focus the originally overlooked but beneficial functionality of Wire-On/Wire-Off.
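A minimal wire-on/wire-off sketch (the flag and feature names are hypothetical): the new code path ships dark, stays invisible to users, and yet can be fully exercised by tests:

```javascript
// One setting controls the whole feature.
var features = { newLinkingFlow: false };

function linkAccount(account) {
    if (features.newLinkingFlow) {
        return { account: account, flow: 'new' }; // invasive new code, wired off by default
    }
    return { account: account, flow: 'legacy' };
}

linkAccount('firstrade').flow;   // 'legacy' – what users see
features.newLinkingFlow = true;  // a test can wire the feature on...
linkAccount('firstrade').flow;   // 'new'    – ...and cover the new path
features.newLinkingFlow = false; // ...then wire it back off
```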

(to be continued)


  • Faster delivery of software
  • Sets us on the path of Continuous Delivery
  • Immediate discovery of regressions
  • And why JavaScript – it lets the front-end team actively contribute to tests alongside feature development


  • Build an automation framework that simulates user interactions, and run these tests consistently on real and headless browsers alike
  • Set up integration with our CI server (Jenkins)
  • Import test results into our Test Management Platform (QMetry)


The front-end team uses Mocha/Chai/Karma for writing unit tests. The QA team uses Selenium for automation. When we picked these frameworks and tools, we evaluated them thoroughly for our needs. We also wanted to leverage our existing frameworks and tools as much as we could, so that there would be less of a learning curve. Fortunately for us, we found Selenium bindings in JavaScript. Actually, there are quite a few, but the most prominent of them are webdriver.io and Selenium’s webdriverJs.

We chose Selenium’s webdriverJs primarily for the following reasons:

  • It is the official implementation by the Selenium team who have written bindings in various other languages
  • The pattern of writing a test is very similar to that of a test written in the Java world, with which our QA team was familiar
  • Its use of promises to prevent callback hell

For more detailed explanation with examples, please refer here.

Our next piece of the puzzle was to figure out if we could use PhantomJS (a headless browser) with webdriverJs. We needed this so that we could run the tests on our CI server, where we may not necessarily have a real browser. We did have some initial challenges running webdriverJs in combination with PhantomJS without using Selenium’s Remote Control Server, but looking at the source code (in JavaScript) helped us debug and get this to work. The challenges could also be attributed to our incomplete understanding of Selenium’s automation world.

The last piece of the puzzle was integration with the CI server. With PhantomJS already in place, all we needed to figure out was the reporting format of the tests that our CI server (Jenkins) could understand. One of the reasons we had picked Mocha is its extensive reporting capabilities. xUnit was the obvious choice because of Jenkins’ support for it.

And there it is – our automation framework – all of it in Javascript stack.

Testing Stack for Web

In the past couple months, we have successfully automated our E2E tests that provide coverage for regression of our web platform, and use it on a daily basis. Now that we have an established framework and gained immense experience writing tests, we are one step closer to Continuous Delivery. Integration with our Test Management Platform is in the works and we will post our findings soon.



I grew up on the east coast, currently go to school in the midwest, and was fortunate enough to spend my summer on the west coast working with the Personal Capital engineering team. In addition to working on an amazing engineering team, I became familiar with the workings of a fast-paced tech environment and learned a great deal about web and mobile automation. JavaScript is now my strongest programming language, and I learned to appreciate its value to a commercial company (not just a coding assignment). I could not have asked for a better summer experience.

My coworker, Nick Fong, already wrote a post here describing the main points of our project this summer. So as to not be repetitive, I will be writing more about the problems and road blocks we faced along the way and how we overcame them. I highly suggest reading his post first to get a better idea of the general framework that I will be talking about. You can find Nick’s post here.

Working with Selenium WebDriverJS, there were many concepts that were new to me, but JavaScript promises, and how they work asynchronously, were among the most confusing. First, promises were necessary for the scripts we wrote because they were the only way to access information from the driver. Below is an example of using a promise to access the pin field while linking accounts. In this piece of code, it verifies that a pin field is there by checking that the information returned by the driver is not empty.

driver.findElements(webdriver.By.css('[name="PASSWORD1"]')).then(function(pin) { 
    if (pin.length > 0) { // makes sure the pin field is there 
        helper.enterInput('[name="PASSWORD1"]', accounts['L' + index].v); // name distinct for Firstrade 
    }
});

This in itself was not that difficult to do in our scripts. We created many ‘helper’ functions, which you can see used above, that use promises to access and manipulate the driver. What took some time to grasp was that, in asynchronous scripts, anything that happens within a promise stays within that promise. This turned into a scoping issue when I would edit a global variable inside a promise and have code outside the promise read the original value of that variable.
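The pitfall, distilled (the field name is just an example): the global is updated inside the promise callback, but the line after the `.then()` runs first, so it still sees the original value:

```javascript
var pinCount = 0;
var seenAfter;

Promise.resolve(['PASSWORD1']).then(function(fields) {
    pinCount = fields.length; // runs later, inside the promise callback
});

// This line runs before the callback above, so it sees the original value.
seenAfter = pinCount; // 0, not 1
```

Anything that depends on the updated value has to live inside the `.then()` (or be chained onto it), not after it.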

For the builds to pass, the scripts must run with no errors in PhantomJS, the headless browser we ran everything on before pushing to production. However, just because something worked in PhantomJS did not mean it would work in the other browsers. We found after much trial and error that PhantomJS behaved the most like Safari, but even a script working in Safari was not guaranteed to work in PhantomJS. A very peculiar error I faced occurred when writing automation scripts for www.personalcapital.com in Chrome. When I was testing the links on the page, every one but the last would fail, and for the longest time I had no idea why. Eventually I figured out that the banner that follows the user down the page when scrolling was blocking the link, because our code would scroll so that the link was as close as possible to x:0 y:0 before clicking. To fix this:

driver.executeScript('window.scrollTo(0,' + loc.y + ')');
was changed to this:
driver.executeScript('window.scrollTo(0,' + (loc.y - 50) + ')');

This change, although extremely simple, took a long time to figure out. It also gave me a greater appreciation for this work and how much time it actually takes. Before working here, I would, like most developers, spend a lot of time debugging my code. Only after working for a company that is actually pushing a product out to a customer did I truly appreciate the time needed to get everything right.

I divided all the tests into two categories: tests that were completely internal, and tests that used outside information. An internal test would be something like checking that our information-gathering survey worked, or that the marketing page’s links were working. The latter type consisted of tests such as linking accounts or checking transactions. One of the tests I wrote contained a script for adding accounts to test IDs and checked to make sure everything was linked correctly. Not only did the parameters of this test change three times (thanks, guys), but I also had to deal with naming conventions that were out of our control. For the most part they were consistent for username and password, but when other fields were added, all bets were off.

Although I joked about the changing parameters, it actually was an important part of my summer because it exposed me to the compromises that automation scripts need to accommodate. The debate was over how dynamic the script should be. Obviously, in an ideal world, the script could link any account in any way. However, after a lot of work, and because we had to rely on third-party information, this was not possible. So the question remained whether we wanted a smoother, simpler script that tests the basic functionality for a few accounts, or to try our best to be fully dynamic. Eventually we decided on the former, setting aside five accounts of different types to aggregate.

There is so much more I could talk about, but that is for another time. I would recommend Selenium WebDriverJS, found here, to anyone interested in writing these automation scripts. I want to thank all the people at Personal Capital for making me feel at home this summer; it was a pleasure coding with you.

This summer I have the privilege of working as an Engineering Intern at Personal Capital. Not only do I have the pleasure of working with really great people, I am also learning about how various engineering teams come together to build an awesome product. There is only so much you can learn in a classroom; this is the real world we’re talking about!

My main project is implementing an automated test suite for Personal Capital’s marketing website and web app. Automated tests make the feedback loop faster, reduce the workload on testers, and allow testers to do more exploratory and higher-value activities. Overall, we’re trying to make the release process more efficient.

Our automated testing stack consists of Selenium WebDriverJS, Mocha + Chai, Selenium Server, and PhantomJS. Tests are run with each build by our continuous integration tool Hudson, and we can mark a build as a success or fail based on its results. Our tests are written in JavaScript since our entire WebUI team is familiar with it.

In an effort to keep our test scripts clean and easily readable, Casey, one of our Web Developers, ingeniously thought of creating helper functions. So instead of having numerous driver.findElement()’s and chai.expect()’s throughout our scripts, these are integrated into a single function. An example of one is below.

var expectText = function(selector, text) {
	scrollToElement(selector).then(function(el) {
		el.getText().then(function(actual) {
			chai.expect(actual).to.equal(text);
		});
	});
};

We were having issues when testing in Chrome (while Hudson runs PhantomJS, our tests are written to work in Firefox, Chrome, and Safari) where elements weren’t visible, so we needed to scroll to their location first. We then have our scrollToElement() method that is chained with every other helper function.

var scrollToElement = function(selector) {
	var d = webdriver.promise.defer(),
		el;

	// Get element by CSS selector
	driver.findElement(webdriver.By.css(selector))
		// Get top and left offsets of element
		.then( function(elt)	{
			el = elt;
			return elt.getLocation(); 
		} )
		// Execute JS script to scroll to element's top offset
		.then(	function(loc)	{ 
			driver.executeScript('window.scrollTo(0,' + loc.y + ')');
		} )
		// If successful, fulfill promise.  Else, log ERR
		.then(
			function(success)	{ 
				d.fulfill(el);
			},
			function(err)	{ 
				d.reject('Unable to locate element using selector: ' + selector);
			} );

	return d.promise;
};
Then a typical test script would look like this:
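(A representative sketch – the selector, the expected copy, and the helper internals are hypothetical stand-ins. Here the helpers are stubbed with an in-memory page so the sketch is self-contained; the real ones wrap Selenium WebDriverJS calls.)

```javascript
// Hypothetical page state standing in for the real browser:
var page = { '.hero h1': 'Track your net worth' };

// Stand-ins for the real promise-based helpers:
function cssWait(selector, timeout) {
    return Promise.resolve(selector in page);
}
function expectText(selector, text) {
    return Promise.resolve(page[selector] === text);
}

// The spec itself reduces to a readable chain of helpers:
cssWait('.hero h1', 5000)
    .then(function() { return expectText('.hero h1', 'Track your net worth'); })
    .then(function(passed) { if (!passed) { throw new Error('text mismatch'); } });
```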
Super clean, simple, and awesome. Anyone can write an automation script!

One of the main challenges in automation is timing. Some browsers (I’m looking at you, Chrome) are faster than others, and the driver will attempt to execute commands before elements on the page can be interacted with. To overcome this, we used a mixture of implicit and explicit waits. There are two ways to do an implicit wait. The first is setting WebDriverJS’s implicitlyWait() by adding the following line of code after defining the driver:

driver.manage().timeouts().implicitlyWait(1300);
This is global, so before throwing an error saying an element cannot be found or interacted with, WebDriverJS will wait up to 1.3 seconds. The second method is waiting for an element to be present on the page, with a timeout. This is helpful if we need more than 1.3 seconds for a certain element. We have a helper function called cssWait() that looks like this:

var cssWait = function(selector, timeout) {
	driver.wait(function() {
		return driver.isElementPresent(webdriver.By.css(selector));
	}, timeout);
};

On top of those, we use explicit waits that are simply “driver.sleep(<time>)”. Sometimes we need to hard-code a wait to get the timing just right.

Unfortunately that’s it for this post. If you have any questions feel free to leave a comment and I’ll get back to you. In my next blog post, or one that will be written by Aaron, I will talk more about some of the challenges we faced and how we dealt with them.

To get started with Web Automation, I suggest heading over to SimpleProgrammer.com where John Sonmez put together some instructions on getting your environment set up. While his are for Windows, the Mac version is pretty similar.