Join Personal Capital

We are growing our Engineering team and are looking for talented senior software engineers who want to work in a rewarding, collaborative, fast-paced environment.

The Opportunities: You’ll work on data that will amaze you, work with a team that will inspire you, and help create products that truly add value to our users. Personal Capital’s server-side team is brilliant and agile, our data is rich and large, and this is your chance to be one of the primary team members and see your work make a significant impact not only on Personal Capital but also on the financial lives of our users.

We are looking for very curious engineers – a Java Engineer and a Senior Data Engineer. You need to be a thinker and a doer. You need to be smart and build smart products. You need to be ambitious. These are not easy jobs: you will need to wear multiple hats, work with many unknowns, and travel many unpaved roads to tackle large-scale problems. But it will be your finest work and creation, and an amazing engineering team is here to collaborate with you and support you.

To apply, please visit here.

Template-Based Approach for SOAP Web Services

Problem statement: 

The problem can be generalized as: maintenance issues with auto-generated stubs and skeletons in web services, particularly when the request needs to be generated dynamically based on account types.

Since Pershing already exposes an openAccount web service, we can generate the stubs and skeletons and use them to build the request and response. In this module, however, we have to generate requests dynamically based on the account type. Doing that with generated stubs means maintaining the mapping in two places: one at the business level (i.e., the field mapping between Personal Capital and Pershing) and the other at the database level, so that the server code can access it and use it to generate dynamic requests. This can lead to inconsistencies between the mapping the business maintains and the one the server team maintains in the code base. The code base itself is also difficult to maintain, because the openAccount web service has so many properties to set based on the account type.
Solution:
To solve this, we created the required templates as string-based templates (XML format), and the values to populate are constructed dynamically from the Excel file maintained by the business (so the source of truth is always the Excel file the business maintains). We construct the request based on the account type and invoke the openAccount web service endpoint URL directly, without the auto-generated stubs and skeletons, as shown below. This resulted in less code to maintain and a single place to maintain the mappings, avoiding inconsistencies.
For example:

import java.io.ByteArrayOutputStream;
import java.io.StringReader;

import javax.xml.messaging.URLEndpoint;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPConnection;
import javax.xml.soap.SOAPConnectionFactory;
import javax.xml.soap.SOAPMessage;
import javax.xml.soap.SOAPPart;
import javax.xml.transform.stream.StreamSource;

// Open a SOAP connection.
SOAPConnectionFactory scf = SOAPConnectionFactory.newInstance();
SOAPConnection con = scf.createConnection();

// Create a message factory and an empty message.
MessageFactory mf = MessageFactory.newInstance();
SOAPMessage soapMsg = mf.createMessage();

// Load the dynamically constructed template XML (inputSoapMsg) into the message.
SOAPPart soapPart = soapMsg.getSOAPPart();
StreamSource msgSrc = new StreamSource(new StringReader(inputSoapMsg));
soapPart.setContent(msgSrc);

// Save the message.
soapMsg.saveChanges();
// soapMsg.writeTo(System.out);

// Call the openAccount endpoint directly; no generated stubs or skeletons are involved.
URLEndpoint urlEndpoint = new URLEndpoint(this.getPershingEndPoint());
SOAPMessage reply = con.call(soapMsg, urlEndpoint);

if (reply != null) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    reply.writeTo(out);
    String output = out.toString();
    logger.info("The Response message is: " + output);
}

con.close();
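The template-population step itself is plain string substitution. Below is a minimal sketch, assuming the business’s Excel mapping has already been loaded into a Map keyed by placeholder name (the class, method, and ${...} placeholder convention are illustrative, not our production code):

import java.util.Map;

public final class SoapTemplates {

    // Replace each ${key} placeholder in the XML template with the value
    // mapped for that key (e.g. values chosen based on the account type).
    public static String populate(String template, Map<String, String> fieldValues) {
        String result = template;
        for (Map.Entry<String, String> entry : fieldValues.entrySet()) {
            result = result.replace("${" + entry.getKey() + "}", entry.getValue());
        }
        return result;
    }
}

The populated string is what the snippet above reads as inputSoapMsg.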

Mobile Development: Testing for Multiple Device Configurations

All Android developers should have at least one “old” device running OS 2.3.3 and a current “popular” device. Ideally, one should also have a current device that is considered “maxed out” on specs. A company should additionally have the latest “Google” device (currently the Nexus series), plus an HTC, a Sony, and a Samsung device. (These manufacturers are mentioned because of their popularity and/or significant differences not found when developing on other devices.) Additionally, OS 4.2, 4.3, and 4.4, though minor OS increments, offer differences that should be considered.

Though development for iPhone/iPad is more forgiving given the fewer configurations, it still offers challenges. For example, if you are developing on a Mac running OS X Mavericks with a version of Xcode above 5.0 for a product that still needs to support iOS 5.x, you will need a physical device because the iOS 5.x simulator isn’t available for that development configuration.

If testing mobile websites, the configurations can be endless.

At Apps World 2014, Perfecto Mobile (http://www.perfectomobile.com) introduced me to mobile cloud device testing. Their product offers access to real devices (not emulators or simulators) connected to actual carriers physically hosted at one of their sites around the world.

Mobile cloud device testing makes it possible to test on a multitude of configurations of devices, locations/time zones, carriers, and operating systems.

Beyond access to multiple devices, Perfecto Mobile offers automated testing across these platforms via scripts written in Java. I wasn’t able to delve as far as I wanted into these automation tests, the recording feature, or the object mapper before my trial ran out, but the demo at Apps World gave me the impression it behaves similarly to Xcode’s Automation instrument, expanded to all devices. The scripts enable your team to target certain device configurations and automatically launch the devices, execute the given tests, clean up and close the devices, and export the test results. I wish I could say more because it looked really promising, but without actual usage I can only mention what I viewed during the demo.

It’s impossible to cover every configuration during native Android application development, but for all platforms, if your product is experiencing issues after a release and a crash report doesn’t reveal enough, mobile cloud device testing offers a real option for true coverage.

Below is a list of some of the features of interest Perfecto Mobile offers:
- Access to real devices connected to actual carriers (not emulators or simulators) physically hosted at one of Perfecto’s sites around the world. Since these are real devices, you can dial numbers, make calls, send text messages, and install apps.
- The device-selection UI displays each device’s availability, manufacturer, model, OS and version, location, network, phone number, device ID, firmware, and resolution.
- Ability to open multiple devices at the same time.
- Requests for devices and configurations not currently available are responded to in real time.
- Ability to take screenshots and record sessions to save and/or share results with others.
- Ability to share the device screen in real time with your team.
- Ability to collect information for a device such as battery level, CPU, memory, and network activity.
- Export of device logs.
- A beta MobileCloud for Jenkins plug-in that runs scripts on actual cloud devices after a build, so you can see reports for a single device per build (multiple devices are not available yet).

Apps World 2014

Overall Impression

Last year there were a lot of companies concentrating on analytics and HTML5, while this year the trends seemed more diverse. The companies that caught my eye were:

  • Jumio (http://www.jumio.com/netswipe/). Their Netswipe product provides real-time credit/debit card number recognition. It scans the card by optical recognition (vs. a magnetic stripe swipe).
  • Appsee (http://www.appsee.com). This company concentrates on A/B testing, and they gave me a comprehensive demo of what A/B testing is and what you can do with it.
  • Moxtra (www.moxtra.com). This company presented MeetSDK, an SDK for embedding an online-meeting experience into custom apps. The experience is similar to GoToMeeting or Google Hangouts, where in addition to video chat there is a screen-sharing capability.

Attended talks

Use A/B testing to build better apps (Chris Beauchamp from Kii)

A/B testing seems to be the next evolutionary step beyond simply collecting metrics. The idea is to serve alternative data, screen presentations, and screen flows to live users and then collect metrics on each variant. A/B testing requires a slightly different development approach, and thus most of the talk was broken down into the pros and cons of A/B testing; a minimal sketch of how users might be bucketed into variants follows the pros and cons below.

Pros:

  • Features can be turned on/off instantaneously. A real-life example given by Chris: he accidentally released a feature intended for demo purposes, and the feature was only taken down two weeks later, after an App Store-approved patch release.
  • Better segmentation. Whereas with KISSMetrics we currently collect data on ALL of our users, A/B testing platforms can target only certain segments of the user base without disrupting the core users.

Cons:

  • QA and dev overhead. When we release support for various flow and display options, we must QA them all beforehand. With iOS 7 and auto-updates, it seems like an alternative could be simply treating each bi-weekly release as an experiment.
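As a concrete illustration of the segmentation idea, here is a minimal sketch of deterministic variant assignment (purely illustrative; this is not Kii’s SDK or any specific A/B platform’s API). Hashing a stable user id keeps each user in the same bucket across sessions, and the rollout percentage can be changed server-side to turn a feature on or off without an app-store release:

import java.util.UUID;

public final class Experiment {

    // Deterministically bucket a user into [0, 100); users in the first
    // rolloutPercent buckets see the treatment variant.
    public static boolean inTreatment(UUID userId, int rolloutPercent) {
        int bucket = Math.floorMod(userId.hashCode(), 100);
        return bucket < rolloutPercent;
    }
}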

Agile development for mobile (Crashlytics and Twitter)

A quick and short list of takeaways:

  • Releasing a new version of your mobile app bi-weekly is acceptable amongst users. More frequent updates are not.
  • Dogfooding. Using your company employees as beta testers. While your app is awaiting approval in the App Store, you can have a local dogfood release to your fellow coworkers. So, once the app is in the App Store, you might already know the top 10 issues with it and start working on a patch release immediately.
  • In the experience of Twitter’s mobile devs, the most frequent bugs account for 90% of the issues.

Automating your JavaScript unit tests – Karma

The Spectacular Test Runner for JavaScript indeed!

why

  • instant feedback while writing your code, eliminating the need to remember to run tests before checking in your code, thus leading to stable builds.
  • continuous integration with our Hudson build system.
  • testing on multiple real browsers.

what

  • set up karma to watch your source files and run tests on code changes.
  • set up karma to run the tests during the Hudson build and validate the build.
  • set up karma to track our code coverage.

how

We use mocha as our test framework. This, along with chai (expect/should – BDD style), worked out great for us, giving effective yet readable tests. I cannot emphasize the importance of readable tests enough. We had a team member who did a feature walkthrough by running through the tests, which I thought was pretty rad. Product and QA could easily see what the feature set was, what was expected, and what the outcome was. I guess we have to do a write-up sharing more of our excitement.

Before karma, we were running tests using individual test files. More often than not, you are working on multiple files, and remembering to run the tests for all of them manually was becoming cumbersome and error-prone. So we started researching test runners, and karma seemed to fit all our needs: automation, continuous integration, running tests on multiple real browsers, and support for mocha.

set up karma to watch your source files and run tests on code changes

This was fairly straightforward. Karma’s setup is driven by a single configuration file wherein you provide the locations of the files you want to watch for changes, the browsers you want to run tests on, your testing framework, and any preprocessors. Here’s a gist of our configuration file. The only tricky part was preprocessors. We use handlebars along with the requirejs-handlebars-plugin for our templating purposes and serve our templates as individual html files. This was causing a problem: karma was converting them into js strings because of its default preprocessor, html2js. It took a bit of reading, but the fix was simple enough. The following additions to the config file fixed the problem.

// Map the templates to no preprocessor so karma's default html2js does not
// turn them into js strings, and serve them as static files instead:
preprocessors: {'scripts/**/*.html': []}
files: [..., {pattern: 'scripts/**/*.html', served: true, included: false}]

set up karma to run the tests during the Hudson build and validate the build

We created another karma configuration file for this purpose. We added a junitReporter so that we could export the test results in a format that could be interpreted by our Hudson setup. We are currently using PhantomJS for testing on our build systems, but in the near future we want to extend this to real browsers. The key differences are as follows.

reporters: ['progress', 'junit'],
junitReporter: {outputFile: 'testReports/unit-tests.xml'},  // JUnit-format XML that Hudson can parse
autoWatch: false,         // no file watching on the build machine
browsers: ['PhantomJS'],  // headless browser for the build
singleRun: true           // run the suite once and exit with pass/fail

set up karma to track our code coverage

Once we were able to configure karma to run in Hudson, this was just a natural addition. The only additions to the karma configuration are as follows.

reporters: ['progress', 'junit', 'coverage'],
coverageReporter: {
  type: 'cobertura',  // Cobertura-format XML that our build tooling can read
  dir: 'coverage/'
},
preprocessors: {
  '**/scripts/**/*.js': 'coverage'  // instrument all source files
}

As you may have noticed, I used the words simple and straightforward quite a few times, and that is what karmajs is all about.

reads

http://karma-runner.github.io/0.10/index.html

http://googletesting.blogspot.com/2012/11/testacular-spectacular-test-runner-for.html

 

https://www.npmjs.org/package/karma-handlebars-preprocessor