Evolving End-User Authentication

EV Certificate Display

The adoption of EV Certificates has rendered the login image obsolete.

This week, Personal Capital discontinued the use of the “login image” as part of an upgrade to our security and authentication processes. By “login image,” I mean the little personalized picture that is shown to you on our login page before you enter your password.

Mine was a picture of a starfish.

Several users have asked us about this decision and, beyond the simple assertion that the login image is outmoded, a little more background is offered here.


The founders and technology principals at Personal Capital were responsible for introducing the login image for website authentication a decade ago. In 2004, Personal Capital’s CEO Bill Harris founded, along with Louie Gasparini (now with Cyberflow Analytics), a company called PassMark Security, which invented and patented the login image concept and the associated login flow. Personal Capital’s CTO, Fritz Robbins, and our VP of Engineering, Ehsan Lavassani, led the engineering at PassMark Security and designed and built the login image technology, as well as additional security and authentication capabilities.

Server login images (or phrases, in some implementations) were a response to the spate of phishing scams that became a popular fraud scheme in the early and mid-2000s. In a phishing scam, fraudsters create fake websites that impersonate financial institutions, e-commerce sites, and other secure websites. The fraudsters send spam email containing links to the fake sites, and unsuspecting users click on the links and end up at a fake site. The user then enters their credentials (username/password), thinking they are at the real site. The hacker running the fake site now has the user’s username/password for the real site and, well, you know what happens next. It’s hard to believe that anyone actually falls for those sorts of things, but plenty of people have. (Phishing is still out there, and has gotten a lot more sophisticated; see spear-phishing, for example. But that is a whole other topic.)

So, the login image/phrase was a response to the very real question of:  “How can I tell that I am at the legitimate website rather than a fraudulent site?”  With login image/phrase, the user would pick/upload a personalized image or phrase at the secure website. And the login flow changed to a two-step flow: the user enters their username, then the secure site displays the personal image/phrase, and then, assured that they are at the legitimate secure site when they recognize the image/phrase, the user enters their password. The use of login image/phrase was a simple and elegant solution to a vexing problem. And when the FFIEC (U.S. banking regulatory agency) mandated stronger authentication standards for U.S. banking sites in 2005, login image quickly became ubiquitous across financial websites, including Bank of America and many others, during the mid-2000s.

From a security perspective, the login image/phrase is a kind of shared secret between the secure site and the user. Not as important a secret as the password, of course, but important nonetheless, and here’s why: if a hacker posing as the real user enters the username at the secure site, and the site displays the user’s login image/phrase, then the hacker can steal the image/phrase and use it in constructing their fake website. The fake website would then look like the real website (since it would have the image/phrase) and could fool the user into giving up the real prize (the password) at the fake phishing site. So the question of how to protect the security of the login image itself becomes relevant.

Device identification is the answer:  If the website is able to recognize the device that is sending a request containing the username, and if the site knows that device has been authorized by the user, then the site can safely show the login image/phrase, and the user feels secure, and enters their password. This is essentially a process of exchanging more information in each step of the authentication conversation, a process of incremental and escalating trust, culminating in the user entering their password and being granted full access to the site.

But the use of device identification to protect the login image is secondary to the real technology advance of this approach: the use of device identification and device forensics as a second factor in authentication. Combining the device identity with the password creates a lightweight form of two-factor authentication, widely recognized as being far superior to single-factor (password only) authentication.

The simplest form of device identification involves placing a web cookie in the user’s browser. Anyone out there not heard of cookies and need an explanation? OK, good, I didn’t think so. Cookies work pretty well for a lot of purposes, but they have a couple of problems when being used for device identification: (1) the user can remove them from the machine; and (2) malware on the user’s machine can steal them.

The technology of device identification quickly evolved, at PassMark and other security companies, to move beyond cookies and to look at inherent characteristics of the web request, the browser, and the device being used. Data such as the IP address, User-Agent header (the browser identity information), other HTTP headers, etc. Not just the raw data elements, but derived data as well, such as geolocation and ISP data from the IP address. And, looking at patterns and changes in the data across multiple requests, including request velocity, characteristic time-of-day login patterns, changes in data elements such as User-Agent string etc.  Some providers started using opt-in plugins or browser extensions to extract deeper intrinsic device characteristics, such as hardware network (MAC) address, operating system information, and other identifiers.

“Device forensics” evolved as the practice of assembling large numbers of data points about the device and using sophisticated statistical techniques to create device “fingerprints” with a high degree of accuracy. The whole arena of device identification and device forensics is now leveraged in a variety of authentication and fraud-detection services, including at Personal Capital. This is the real value that grew out of the “login image” effort.
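As a toy illustration of the fingerprinting idea (this is not PassMark’s or Personal Capital’s actual algorithm; the signal names are invented for illustration), a device fingerprint can be sketched as a stable hash over a set of request attributes:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Map;
import java.util.TreeMap;

// Toy device-fingerprint sketch: hash a sorted set of request attributes.
// Real device-forensics systems score many more signals statistically.
class DeviceFingerprint {
    static String fingerprint(Map<String, String> signals) throws Exception {
        // Sort keys so the same signals always produce the same fingerprint,
        // regardless of the order in which they were collected.
        TreeMap<String, String> sorted = new TreeMap<>(signals);
        StringBuilder sb = new StringBuilder();
        sorted.forEach((k, v) -> sb.append(k).append('=').append(v).append(';'));
        byte[] hash = MessageDigest.getInstance("SHA-256")
                .digest(sb.toString().getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : hash) hex.append(String.format("%02x", b));
        return hex.toString();
    }
}
```

Two requests with identical attribute maps hash to the same fingerprint. In practice, forensics engines tolerate small drifts (say, a browser upgrade changing the User-Agent string) by scoring similarity across many signals rather than demanding an exact hash match.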

But, while the use of device identification and device forensics was flourishing and becoming a more central tool in the realm of website authentication, the need for the login image itself was becoming less compelling.

Starting in the late 2000s, the major SSL Certificate Authorities (such as VeriSign) and the major browsers (IE, Firefox, Chrome, Safari) began adopting Extended Validation (EV) certificates. These certificates require a higher level of validation of the certificate owner (i.e., the website operator, such as Personal Capital), so they are more trusted. And, just as important, the browsers adopted a common user interface idiom for EV certificates, which includes displaying the name of the company that owns the certificate (e.g., “Personal Capital Corporation”) in a distinctive color (generally green) in the browser address bar (see picture). The adoption of EV certificates has essentially answered the original question that led to the use of the login image (i.e., “how does the user know they are at the real website?”).

Which brings us to today. Personal Capital has removed the login image from our authentication flow, leaving a simpler and more streamlined login process for our users. It is a security truism that, all else being equal, simpler implementations are more secure implementations – fewer attack vectors, fewer states, fewer opportunities for errors. Personal Capital continues to use device identification and device forensics, allowing users to “remember” authorized devices and to de-authorize devices. We also augment device identification with “out of band” authentication, using one-time codes and even voice-response technology to verify user identity when a user wants to log in from a new or unauthorized device.

I’ll admit that I will miss my little starfish picture when I log in to Personal Capital. But this small loss is offset by my knowledge that we are utilizing best, and current, security practices.

PassMark Security, circa 2005

“Ugly Shirt Fridays” at PassMark Security, circa 2005

Hibernate Second Level Cache Using Ehcache


Putting aside all the good and bad things about the Hibernate second-level cache, Personal Capital has been using the second-level cache for entities and queries over data that rarely changes. We had our issues while adopting it, but once we put those issues behind us, it has performed well for us.

How to enable second-level caching in Hibernate

There are two types of second-level cache we use. One is the entity cache, where Hibernate caches loaded data objects at the SessionFactory level, crossing user, session, and transaction boundaries. The other is the query cache, where the SQL query is the cache key and the query result set is the cached value. Whenever the same SQL query is submitted through Hibernate, Hibernate will try to use the cached result set instead of dipping into the database.

To set up this caching globally, you just need to define a few Hibernate properties. We use JPA, so the following properties need to be defined in the JPA persistence.xml:

hibernate.cache.use_query_cache true
hibernate.cache.use_second_level_cache true
hibernate.cache.region.factory_class org.hibernate.cache.ehcache.EhCacheRegionFactory
hibernate.cache.use_structured_entries true
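In persistence.xml form (the persistence-unit name here is invented for illustration; the EhCacheRegionFactory class assumes Hibernate 4 with the hibernate-ehcache module on the classpath), those properties look like:

```xml
<persistence-unit name="example-unit">
  <properties>
    <property name="hibernate.cache.use_second_level_cache" value="true"/>
    <property name="hibernate.cache.use_query_cache" value="true"/>
    <property name="hibernate.cache.region.factory_class"
              value="org.hibernate.cache.ehcache.EhCacheRegionFactory"/>
    <property name="hibernate.cache.use_structured_entries" value="true"/>
  </properties>
</persistence-unit>
```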

How to enable second-level caching for data objects in the application

Hibernate provides a @Cache annotation to enable second-level caching for data objects in application.

This annotation can be used on @Entity class to enable second-level cache on the table level. For example,

@Table(name = "account_type", catalog = "sp_schema")
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class AccountTypeImpl extends AbstractTableWithDeleteImpl implements AccountType, Serializable
… …

It can also be used at the property level to enable caching for a specific property. For example,

@Table(name = "account", catalog = "sp_schema")
public class AccountImpl extends AbstractTableWithDeleteImpl implements AccountType, Serializable
… …
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public String getAccountDetail()
… …

How to enable second-level query caching in the application

The query cache is different from the data object cache. To enable the query cache, you need to tell the query itself. In JPA, a cache hint should be set on the query:

query.setHint("org.hibernate.cacheable", true);

How to configure ehcache for data object second-level cache

When ehcache is used for the second-level cache, Hibernate tries to work with caches named after the fully qualified class names of the domain objects. Only if there is no cache matching a domain object’s name does Hibernate fall back to caching/retrieving that domain object in the defaultCache.

So ehcache’s defaultCache should be configured carefully, since any @Cache-annotated domain object will be cached there if no cache is configured under its fully qualified class name.

All of ehcache’s caches can be defined and configured in ehcache.xml, which must be on the application’s classpath. A simple example:

<ehcache updateCheck="false">
    <diskStore path="java.io.tmpdir/hibernateCache" />
    <defaultCache maxElementsInMemory="100" eternal="false"
        timeToIdleSeconds="1" timeToLiveSeconds="1"
        overflowToDisk="false" diskPersistent="false"
        diskExpiryThreadIntervalSeconds="1" memoryStoreEvictionPolicy="LRU" />
    <cache name="com.personalcapital.aggregation.data.impl.AccountTypeImpl"
        maxEntriesLocalHeap="1000" eternal="false"
        timeToIdleSeconds="1200" timeToLiveSeconds="1200"
        overflowToDisk="false" diskExpiryThreadIntervalSeconds="60"
        memoryStoreEvictionPolicy="LRU" />
</ehcache>

How to configure ehcache for second-level query cache

For the second-level query cache, Hibernate uses caches named “StandardQueryCache” and “UpdateTimestampsCache” to cache query result sets. If these caches are not configured in ehcache.xml, Hibernate will fall back to using the defaultCache for query result sets, which may not be what is expected.

A note: the fully qualified StandardQueryCache names are different in Hibernate 3 and 4. In Hibernate 3, the name is “org.hibernate.cache.StandardQueryCache”; in Hibernate 4, it is “org.hibernate.cache.internal.StandardQueryCache”. This was one thing that bit us when we were upgrading to Hibernate 4. The following is an example Hibernate 4 ehcache.xml configuration for the query cache.

“UpdateTimestampsCache” is really important too. Here is an excerpt from the ehcache online documentation (http://ehcache.org/documentation/user-guide/hibernate#enable-second-level-cache-and-query-cache-settings), describing what “UpdateTimestampsCache” is: “Tracks the timestamps of the most recent updates to particular tables. It is important that the cache timeout of the underlying cache implementation be set to a higher value than the timeouts of any of the query caches. In fact, it is recommended that the underlying cache not be configured for expiry at all.”
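To see why the timestamps cache must outlive the query caches, here is a toy, self-contained model of timestamp-based invalidation (this is not Hibernate’s code; all names are invented for illustration): a cached query result is valid only if every table it reads was last updated before the result was cached.

```java
import java.util.*;

// Toy model of timestamp-based query-cache invalidation.
// All class and method names are invented for illustration.
class ToyQueryCache {
    // Table name -> timestamp of its most recent update
    // (the role played by UpdateTimestampsCache).
    private final Map<String, Long> tableUpdateTimestamps = new HashMap<>();
    // Query string -> cached result (the role played by StandardQueryCache).
    private final Map<String, CachedResult> results = new HashMap<>();

    static class CachedResult {
        final List<String> rows;
        final long cachedAt;
        final Set<String> tables; // tables the query reads
        CachedResult(List<String> rows, long cachedAt, Set<String> tables) {
            this.rows = rows; this.cachedAt = cachedAt; this.tables = tables;
        }
    }

    void recordTableUpdate(String table, long now) {
        tableUpdateTimestamps.put(table, now);
    }

    void put(String query, List<String> rows, long now, Set<String> tables) {
        results.put(query, new CachedResult(rows, now, tables));
    }

    // A hit is valid only if no table the query reads was updated after caching.
    Optional<List<String>> get(String query) {
        CachedResult r = results.get(query);
        if (r == null) return Optional.empty();
        for (String table : r.tables) {
            Long updated = tableUpdateTimestamps.get(table);
            if (updated != null && updated >= r.cachedAt) return Optional.empty();
        }
        return Optional.of(r.rows);
    }
}
```

If the timestamp entries expired before the cached results did, a stale result would look valid again; that is exactly why the real UpdateTimestampsCache should be eternal, or at least outlive every query cache region.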

… …
<cache name="org.hibernate.cache.internal.StandardQueryCache"
    maxEntriesLocalHeap="2000" eternal="false"
    timeToIdleSeconds="0" timeToLiveSeconds="300"
    overflowToDisk="false" diskExpiryThreadIntervalSeconds="60"
    memoryStoreEvictionPolicy="LRU" />
<cache name="org.hibernate.cache.spi.UpdateTimestampsCache"
    maxEntriesLocalHeap="5000" eternal="true" />
… …

Native Query and Query Cache

The result set of a native query can be cached in the query cache the same way as the result set of a regular HQL query; you just need to tell the query. In JPA, the same cache hint is set to enable the query cache for a native query:

query.setHint("org.hibernate.cacheable", true);

But there is a drawback to the second-level cache when using native queries. Because a native query is not HQL, Hibernate has no idea which data objects are affected by it, so by default it invalidates all cached data objects. This behavior is usually not desired. To avoid this global invalidation, the entities affected by the native query should be indicated before the query executes, so that the query only invalidates cached objects of the indicated entities. If the query affects nothing and you don’t want to invalidate anything, an empty query space should be specified.


Some ending words

Again, the Hibernate second-level cache can turn into a monster if it is not applied carefully. The caching strategy should be thought out before it is applied, or the application may exhibit all kinds of unexpected behavior. What is described above covers only the caching strategy we are using: basically, a simple time-to-live, timeout-bound cache. This works well for our seed data, which is a small data set that doesn’t change frequently.

A Template-Based Approach for SOAP Web Services

Problem statement: 

The problem can be generalized as: maintenance issues with auto-generated stubs and skeletons in web services, particularly when the request needs to be generated dynamically based on account types.

Since Pershing already exposes an openAccount web service, we could generate stubs and skeletons and use them to build the request and response. In this module, however, we have to generate requests dynamically based on the account type. With generated stubs, that means maintaining the mapping in two places: once at the business level (the field mapping between Personal Capital fields and Pershing fields) and once at the database level, so the server code can access it when generating dynamic requests. This can lead to inconsistencies between the mapping the business maintains and the code base the server team maintains. The code base itself is also hard to maintain, because the openAccount web service has so many properties to be set based on the account type.

To solve this, we created the required templates as String-based templates (in XML format), whose values are populated dynamically from the Excel file maintained by the business (so the source of truth is always that Excel file). We construct the request based on the account type and invoke the openAccount web service endpoint URL directly, without the auto-generated stubs and skeletons, as shown below. This resulted in less code to maintain and a single place to maintain the mappings, which avoids inconsistencies.
For example:

SOAPConnectionFactory scf = SOAPConnectionFactory.newInstance();
SOAPConnection con = scf.createConnection();

// Create a message factory and a message from it.
MessageFactory mf = MessageFactory.newInstance();
SOAPMessage soapMsg = mf.createMessage();

// Set the message content from the templated request XML.
SOAPPart soapPart = soapMsg.getSOAPPart();
StreamSource msgSrc = new StreamSource(new StringReader(inputSoapMsg));
soapPart.setContent(msgSrc);

// Save the message.
soapMsg.saveChanges();
// soapMsg.writeTo(System.out);

// Call the endpoint and read the reply.
URLEndpoint urlEndpoint = new URLEndpoint(this.getPershingEndPoint());
SOAPMessage reply = con.call(soapMsg, urlEndpoint);
if (reply != null) {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    reply.writeTo(out);
    String output = out.toString();
    logger.info("The Response message is: " + output);
}
con.close();



Mobile Development: Testing for Multiple Device Configurations

All Android developers should have at least one “old” device running OS 2.3.3 and a current “popular” device. Ideally, one should also have a current device that is considered “maxed out” on specs. A company should additionally have the latest “Google” device (currently the Nexus series), plus an HTC, a Sony, and a Samsung device. (These manufacturers are mentioned because of popularity and/or significant differences not found when developing on other devices.) Additionally, OS 4.2, 4.3, and 4.4, though minor OS increments, offer differences that should be considered.

Though development for iPhone/iPad is more forgiving given the fewer configurations, it still offers challenges. For example, if you are developing on a Mac running OS X Mavericks with a version of Xcode above 5.0 for a product that still needs to support iOS 5.x, you will need a physical device because the iOS 5.x simulator isn’t available for that development configuration.

If testing mobile websites, the configurations can be endless.

At Apps World 2014, Perfecto Mobile (http://www.perfectomobile.com) introduced me to mobile cloud device testing. Their product offers access to real devices (not emulators or simulators) connected to actual carriers physically hosted at one of their sites around the world.

The concept of mobile cloud device testing allows the ability to test on a multitude of configurations of devices, locations/timezones, carriers, and operating systems.

Beyond access to multiple devices, Perfecto Mobile offers automation testing across these platforms via scripts written in Java. I wasn’t able to personally delve as far as I wanted into these automation tests, the recording feature, or the object mapper before my trial ran out, but the demo at Apps World gave me the impression it behaves similar to Xcode’s Automation instrument but expanded to all devices.  The scripts enable your team to target certain device configurations and automatically launch, execute the given tests, clean and close the devices, and export the test results to your team.  I wish I could say more because it looked really promising but without actual usage, I can only mention what I viewed during the demo.

It’s impossible to cover every configuration during native Android application development. But after a release, on any platform, if your product is experiencing issues and a crash report doesn’t reveal enough, mobile cloud device testing offers a real option for true coverage.

Below is a listing of some features of interest Perfecto Mobile offers:
- Access to real devices connected to actual carriers (not emulators or simulators) physically hosted at one of Perfecto’s sites around the world. Since these are real devices, you can dial numbers, make calls, send text messages, and install apps.
- The device-selection UI displays availability, manufacturer, model, OS + version, location, network, phone number, device ID, firmware, and resolution.
- Ability to open multiple devices at the same time.
- Requests for devices and configurations not available are responded to in real-time.
- Ability to take screenshots and record sessions to save and/or share results with others.
- Ability to share the device screen in real-time with your team.
- Ability to collect information for a device such as battery level, CPU, memory, and network activity.
- Export of device logs.
- Beta of MobileCloud for Jenkins plug-in that allows running scripts on actual cloud devices after a build so you can see reports on a single device after a build (multiple devices is not available yet).

Apps World 2014

Overall Impression

Last year, a lot of companies concentrated on analytics and HTML5, while this year the trends seem more diverse. The companies that caught my eye were:

  • Jumio (http://www.jumio.com/netswipe/). Their Netswipe product provides real-time credit/debit card number recognition. It scans the card by optical recognition (vs. a magnetic stripe swipe).
  • Appsee (http://www.appsee.com). This company concentrates on A/B testing, and they gave me a comprehensive demo of what A/B testing is and what you can do with it.
  • Moxtra (www.moxtra.com). This company presented MeetSDK which is an SDK for embedding online meeting experience into custom apps. The experience is similar to Goto meeting or Google hangouts, where in addition to video chat there is a screen sharing capability.

Attended talks

Use A/B testing to build better apps (Chris Beauchamp from Kii)

A/B testing seems to be the next evolutionary step beyond just collecting metrics. The idea is to serve alternative data, screen presentations, and screen flows to live users and then collect metrics on each variant. A/B testing requires a slightly different development approach, and thus most of the talk was broken down into the pros and cons of A/B testing.


Pros:

  • Features can be turned on/off instantaneously. A real-life example from Chris: he accidentally released a feature intended for demo purposes, and it could only be taken down two weeks later, after an App Store-approved patch release.
  • Better segmentation. Whereas with KISSmetrics we currently collect data on ALL of our users, A/B testing platforms can target specific segments of the user base without disrupting the core users.


Cons:

  • QA and dev overhead. When we release support for various flow and display options, we must QA all of them beforehand. With iOS 7 and auto-updates, it seems like an alternative could be treating each bi-weekly release as an experiment.

Agile development for mobile (Crashlytics and Twitter)

Quick and short list of takeaways

  • Releasing a new version of your mobile app bi-weekly is acceptable to users. More frequent updates are not.
  • Dogfooding: using your company employees as beta testers. While your app is awaiting approval in the app store, you can make a local dogfood release to your fellow coworkers. That way, once the app is in the App Store, you may already know its top 10 issues and can start working on a patch release immediately.
  • In the experience of Twitter’s mobile dev team, the most frequent bugs account for 90% of the issues.