Incremental Web Performance Improvements

Compression (gzip) of front-end resources (JS/CSS)

When we moved to Amazon's CloudFront (CDN), we lost the ability to serve gzipped versions of our script files and stylesheets. We are a single-page app with a large JavaScript and CSS footprint, and this was greatly affecting our application performance. We had two options to fix this.

  • Upload a gzipped version of each resource along with the original, and set the Content-Encoding header for the gzipped file to gzip. The CDN would then serve the appropriate resource based on request headers.
  • Use a custom origin server that is capable of compressing resources based on request headers. The default origin, a simple Amazon Simple Storage Service (S3) bucket, is not capable of this; hence the problem.

Fortunately for us, all our application servers use Apache as a web server, and we decided to leverage this setup as our custom origin server. We simply had to change our deployment process so that front-end resources deploy to our app servers instead of an S3 bucket. This does make the deployment process a tiny bit more complex, but the benefits are huge.
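
With Apache as the origin, the compression itself is standard mod_deflate configuration. A minimal sketch, assuming mod_deflate and mod_headers are enabled (illustrative, not our exact config):

    # Compress text-based front-end resources on the fly (requires mod_deflate)
    AddOutputFilterByType DEFLATE text/css application/javascript application/x-javascript
    # Tell caches (including the CDN) to store variants per Accept-Encoding (requires mod_headers)
    Header append Vary Accept-Encoding

CloudFront passes the request through to Apache, which then serves a gzipped or uncompressed response based on the Accept-Encoding request header.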

Dividing our main JavaScript module into smaller modules.

As mentioned earlier, we are a single-page app with a large JavaScript footprint. We bundle all our JavaScript files into a single module and fetch it during app initialization. As we grow, so will our JavaScript footprint, and we did not want to run into the long initialization load times that Steve Souders demonstrates in his book High Performance Web Sites.

We use requirejs to build all our JavaScript modules into one single module. Fortunately, the requirejs optimizer can also combine modules into more than one output module in a very flexible manner. We package all our "common" modules, along with the main module needed at application load, into one module. All other modules are dynamically loaded as they are required. More specific details will be posted soon.
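
The requirejs optimizer supports this through the modules array in its build config. A minimal sketch, with hypothetical module names (not our actual layout):

    // build.js: illustrative r.js build config; module names are hypothetical
    ({
        baseUrl: "js",
        dir: "build",
        modules: [
            // One bundle with the shared libraries plus the startup module
            { name: "main", include: ["jquery", "underscore", "backbone"] },
            // Feature modules built separately; exclude what "main" already bundles
            { name: "modules/dashboard", exclude: ["main"] },
            { name: "modules/portfolio", exclude: ["main"] }
        ]
    })

Modules listed with exclude stay lean, and requirejs loads them on demand at runtime.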

Pre-caching our main JavaScript module.

I believe this is a very common practice, and it is a simple implementation that reaps a huge performance benefit. We now pre-fetch our main JavaScript module during our login process using an iframe and the HTML object tag. The iframe keeps login page load times independent of the resources being fetched through it. Again, there are many ways to implement this, as described by Steve Souders, but we chose this one for simplicity.
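
A minimal sketch of the technique (file names and paths are illustrative): the login page embeds a hidden iframe, and the page loaded in the iframe uses object tags to pull the resources into the browser cache without executing them.

    <!-- On the login page: a hidden iframe, so prefetching never blocks login rendering -->
    <iframe src="/static/prefetch.html" style="display:none" width="0" height="0"></iframe>

    <!-- prefetch.html: object tags fetch resources into cache without executing them -->
    <object data="/static/js/main.js" width="0" height="0"></object>
    <object data="/static/styles/css/main.css" width="0" height="0"></object>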

Additional Links

  • http://stackoverflow.com/questions/5442011/serving-gzipped-css-and-javascript-from-amazon-cloudfront-via-s3
  • http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/ServingCompressedFiles.html
  • http://requirejs.org/docs/optimization.html

Distributing Static Content From a CDN

why

  • Distribute content to end users with low latency and faster response times.
  • Ability to deploy new static content without having to re-deploy the entire application.

what

  • Identify a Content Distribution Network (CDN) solution that meets our needs.
  • Setup origin server(s) to store the content that will be distributed via CDN.
  • Setup a distribution to register the origin server(s).
  • Reference all static content in the application with the distribution's URL.
  • Version static content to bust the cache both in the browser and CDN.

how

Before we get into implementation details, what do we mean by "static content"? Here, we define static content as all the web resources that are not specific to a particular user's session. In our case, this includes CSS files, JavaScript files, and all media files.

Identify CDN solution

Since most of our infrastructure is hosted in Amazon Web Services, the AWS CloudFront CDN was a logical default choice to try. Like most of the AWS application-level services, CloudFront tries to strike a reasonable balance per the 80/20 rule: it is low-cost, easy to operate, and covers most of the basic functionality you want in an application service, but does not offer much in the way of high-end differentiating features.

Setup origin server(s) to store the content that will be distributed via CDN.

An origin server in CloudFront can be either a web server or, as in our case, an AWS S3 bucket. We have set up an origin for each of our environments (dev, qa, prod). We then have a Hudson job (build) for each of our environments. The Hudson job checks out the corresponding branch from our git repo for the static content; processes it as mentioned in this post; updates a version.xml file with the Hudson build number; zips the result; and copies it to an S3 bucket. This bucket, however, is not the CloudFront origin. It is more like a docking area that stores all zipped build files from our various Hudson jobs. We have another Hudson job (deploy) that copies a given zipped build file to an origin bucket. More on that in a minute.

We have a different Hudson job for each environment because the static content can be processed differently per environment. For example, in our dev environment, we do not combine or minify our JS files. In our qa environment, we combine but do not minify our JS files. In prod, we combine and minify our JS files.
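
With the requirejs optimizer, this per-environment processing maps mostly to the optimize flag; a rough sketch (settings are illustrative, not our exact build files):

    // qa build: combine modules, but skip minification
    ({ baseUrl: "js", dir: "build-qa", optimize: "none" })

    // prod build: combine and minify with UglifyJS
    ({ baseUrl: "js", dir: "build-prod", optimize: "uglify" })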

Back to the Hudson deploy job mentioned above. This job takes two input params: the name of the zipped build file to deploy and the target environment. It simply unzips the build file into a temporary directory and uses s3sync to upload the content to the appropriate S3 origin bucket for the given environment. From there, the content is available for distribution via CloudFront.
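
In outline, the deploy job boils down to a few shell steps like these (the bucket naming scheme and s3sync flags are assumptions for illustration, not our exact job):

    # Illustrative Hudson deploy steps; bucket names and flags are assumptions
    BUILD_FILE=$1   # the zipped build file to deploy
    TARGET_ENV=$2   # dev | qa | prod

    # Unzip the requested build into a temporary directory
    rm -rf /tmp/static-deploy && mkdir -p /tmp/static-deploy
    unzip "$BUILD_FILE" -d /tmp/static-deploy

    # Upload the content to the origin bucket for the target environment
    s3sync.rb -r --public-read /tmp/static-deploy/ "static-origin-$TARGET_ENV:"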

In addition, in our dev environments, we use a Continuous Integration (CI) process, where our Hudson jobs send SNS messages to our web servers when a build is available. The web servers pull the static content build from the S3 staging bucket and then serve it directly from Apache. This allows for a more targeted integration test of the static content without bringing the CDN mechanisms into the mix.

Setup a CloudFront distribution to register the origin server(s).

We have a CloudFront distribution for each of our origins. A CloudFront distribution associates a URL with one (or more) origin servers. Creating the distribution is easy via the AWS Console. In our distribution, we force HTTPS access: since our containing pages are served via HTTPS, we want to ensure the embedded static content is as well, to avoid browser security warnings.

CloudFront does allow you to associate a DNS CNAME with your distribution, so that you can associate your own subdomain (like static.personalcapital.com) with your CloudFront distribution. This is more user-friendly than the default generated CloudFront domain names, which look like d1q4amq3lgzrzf.cloudfront.net. However, one gotcha is that CloudFront does not allow you to upload your own SSL certificate. So you have to choose between having your own subdomain and having HTTPS; you can't have both (some other CDN services do allow you to load your own SSL certs). In our case, we chose HTTPS and use the default cloudfront.net URL.

Reference all static content in the application with the distribution’s URL.

We have an environment config property that stores the corresponding CloudFront distribution's URL. This property is available to all server-side webpages, where it is referenced as follows:

 <link rel="stylesheet" type="text/css" href="<%=staticUrl%>/static/styles/css/main.css">

We then needed to make sure that all our static content references its resources via relative URLs, with no hard-coded domain, to keep them independent of the distribution URL. For example, an image reference in main.css would be as follows:

 background: url('/static/img/dashboard/zeroState.png')

which resolves against the root of the domain serving main.css, i.e., the distribution URL. I would like to know if there is a better way to solve this.

All our JavaScript uses relative paths anyway because of requirejs, so we did not have to make any changes there.

All other references to static resources were on the server side, where the config property was available to reference them correctly.

Version static content to bust the cache both in the browser and CDN.

All our static content has aggressive cache headers. Once a static resource is fetched by the browser from the server, all future requests for that resource are served from the browser's cache. This is great, but when a newer version of the resource becomes available on the server, the browser won't know about it until its cache entry for that resource expires.

To prevent this, we use a common technique called URL fingerprinting, wherein we add a unique fingerprint to the filename of the resource and update all references to it in the webpages (JSPs) to the new filename. As the browser renders the updated webpage, it requests the resource from the server, since the new name makes it an entirely new resource.

The Hudson build job mentioned above processes our static resources, versions them with the build number, and also stores the build number in version.xml. The version.xml file is then used by the application to retrieve the version number and pass it on to web pages at run-time. This helps us achieve our second goal: keeping our static (front-end) development independent of our server (back-end) development. This is very powerful, as it gives us the ability to change our static content at any time, have it deployed to production, and not worry about updating server webpages with the latest version number. Pretty neat, huh!
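
Putting it together, a versioned reference in a JSP looks roughly like this (the staticVersion property name and the version-as-path-segment scheme are illustrative assumptions):

 <link rel="stylesheet" type="text/css" href="<%=staticUrl%>/static/<%=staticVersion%>/styles/css/main.css">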

Versioning the resources also helped us a great deal with our CloudFront distribution. A CloudFront distribution behaves very much like a browser with respect to resource caching: it does not fetch a newer version of a resource from the origin server unless you invalidate the current resource in the distribution. Invalidation has to be done per resource, and it has a cost too. The CloudFront documentation offers more considerations regarding invalidation vs. versioning.

There is one other workaround you could use to force the CloudFront distribution to fetch content from its origin server: set the distribution to consider query strings in resource URLs, and then pass a unique query string along with the resource URL (for example, main.css?build=1234). The distribution will treat this as a new resource and fetch it from the origin server.

That is it!!

Queue Management Using Amazon SQS

Personal Capital aggregates financial data from a wide range of third-party financial institutions using web service API calls. Each financial institution can have its own response time and its own polling interval for retries. Since each web service call might take anywhere from 5 to 30 seconds or more to respond, we don't want the user threads in our front-end application to block on the request. These requirements make our financial data aggregation service a good candidate for asynchronous communication. Asynchronous communication allows clients (user agents, like browsers or mobile apps) to initiate aggregation requests that proceed asynchronously without blocking user interaction; when the aggregated data is available, polling threads in the client software update the data displayed to the user. One widely used architectural pattern for asynchronous communication is message-based processing, using message queues to decouple processing steps.

In our first implementation of message queues, we used a MySQL database as a simple way to accept requests from the front-end application and let the back-end workload application pick them up and process them. Over time, as load increased, both from the messaging infrastructure and from application code accessing the shared database, we ran into concurrent request problems, and the "queue" did not scale well. We then evaluated two purpose-built message queue systems: Apache ActiveMQ and Amazon Simple Queue Service (SQS). In our evaluation we found that Amazon SQS works well, and we got it running within 10 minutes. Amazon SQS offers a reliable, highly scalable, hosted queue for securely sending, receiving, deleting, and storing messages as they travel between application servers. And it operates as a service, with no need to deploy or maintain server instances or software. ActiveMQ is a mature framework and one of the leading message queue frameworks in the open-source world. It is scalable and high-performance, and offers a more sophisticated feature set than Amazon SQS. However, ActiveMQ needs a separate server, configuration, and queue maintenance for each environment, along with tuning of the system and infrastructure stacks for performance and scalability. Since our infrastructure is hosted in AWS, we decided to start with SQS for simplicity and see how far it could take us.

Our requirements for our message queue system include: delayed processing of messages; ordered sequence; batch processing; and message priority. Amazon SQS met all of our functional requirements except priority queues. We achieved the equivalent of priority queues by using a pair of queues, QUEUE_TOP and QUEUE_NORMAL, for each application queue. Our front-end applications send messages to QUEUE_TOP, and batch applications send messages to QUEUE_NORMAL. We wrote a generic receiver endpoint that listens to both queues in a pair, processing QUEUE_TOP messages first and processing QUEUE_NORMAL messages only when QUEUE_TOP is empty.

When a user logs in to our system, the front-end application accepts the request, creates N queue messages, and puts them in Amazon SQS. We use JSON as the data format for the message payloads, and the Command pattern for handling messages that invoke specific workload tasks. The queue server checks for available threads and reads available messages from the SQS queue(s), up to a configurable limit (typically 10). The queue server checks the top-priority queue first and then the normal-priority queue, so that top-priority messages are always processed faster. One challenge with using SQS is that it does not support blocking reads, so an empty queue must be polled by your application code. In order to not have our app servers fast-poll an empty queue (and use up all the CPU), we use a configurable polling interval (typically from 100 to 1000 msecs) with some internal buffering of messages in the app server, to ensure that worker threads stay busy between polling calls to the SQS queue.
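
To make this concrete, here is a minimal sketch of the paired-queue polling loop using the AWS SDK for Java. The queue URLs, the batch size of 10, and the 500 msec sleep are illustrative assumptions; our actual receiver adds worker threads and internal buffering as described above.

    // A minimal sketch of the paired-queue receiver (illustrative, not our production code).
    import java.util.List;

    import com.amazonaws.services.sqs.AmazonSQSClient;
    import com.amazonaws.services.sqs.model.DeleteMessageRequest;
    import com.amazonaws.services.sqs.model.Message;
    import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

    public class PairedQueueReceiver {

        // Hypothetical queue URLs for one application queue pair
        private static final String QUEUE_TOP =
                "https://sqs.us-east-1.amazonaws.com/111122223333/aggregation-top";
        private static final String QUEUE_NORMAL =
                "https://sqs.us-east-1.amazonaws.com/111122223333/aggregation-normal";

        public static void main(String[] args) throws InterruptedException {
            AmazonSQSClient sqs = new AmazonSQSClient(); // credentials from the default provider chain
            long pollIntervalMs = 500; // configurable; typically 100 to 1000 msecs

            while (true) {
                // Always read the top-priority queue first...
                String sourceQueue = QUEUE_TOP;
                List<Message> messages = sqs.receiveMessage(
                        new ReceiveMessageRequest(QUEUE_TOP).withMaxNumberOfMessages(10)).getMessages();

                // ...and fall back to the normal queue only when it is empty.
                if (messages.isEmpty()) {
                    sourceQueue = QUEUE_NORMAL;
                    messages = sqs.receiveMessage(
                            new ReceiveMessageRequest(QUEUE_NORMAL).withMaxNumberOfMessages(10)).getMessages();
                }

                for (Message message : messages) {
                    handleCommand(message.getBody()); // JSON payload dispatched via the Command pattern
                    // Delete only after successful processing, or the message becomes visible again
                    sqs.deleteMessage(new DeleteMessageRequest(sourceQueue, message.getReceiptHandle()));
                }

                // SQS has no blocking reads, so sleep between polls when both queues are empty
                if (messages.isEmpty()) {
                    Thread.sleep(pollIntervalMs);
                }
            }
        }

        // Hypothetical dispatcher: parse the JSON body and invoke the matching command handler.
        private static void handleCommand(String jsonBody) {
            System.out.println("processing: " + jsonBody);
        }
    }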

Our implementation has shown us that Amazon SQS is a solid alternative to ActiveMQ for message queues used for simple workflow decomposition. Of course, one fundamental constraint is that Amazon SQS is only available if your infrastructure is deployed in Amazon Web Services. For other hosting providers, or if operating your own data center, open-source software like ActiveMQ or RabbitMQ is the more appropriate choice.

Since introducing SQS into our architecture, we have extended our use of dedicated workload servers, reading SQS queues, for:  integration with financial services partners; integration with SalesForce.com; and compute- and DB-intensive batch processes.  Stepping into the use of message queues to solve a specific problem (asynchronous processing of account aggregation), and succeeding there, has led us to a queue-based architecture for most of our back-end processing, with great improvements in scalability, flexibility, and reliability.