Queue Management Using Amazon SQS

Personal Capital aggregates financial data from a wide range of third-party financial institutions using web service API calls. Each institution has its own response times and its own rules for when to poll again or retry. Since a single web service call can take anywhere from 5 to 30 seconds (or more) to respond, we don’t want the user threads in our front-end application to block on the request. These requirements make our financial data aggregation service a good candidate for asynchronous communication. Asynchronous communication allows clients (user agents such as browsers or mobile apps) to initiate aggregation requests that proceed without blocking user interaction; when the aggregated data is available, polling threads in the client software update the data displayed to the user. One widely used architectural pattern for asynchronous communication is message-based processing, using message queues to decouple processing steps.

In our first implementation of message queues, we used a MySQL database as a simple way to accept requests from the front-end application and let the back-end workload application pick them up and process them. Over time, as load increased from both the messaging infrastructure and application code accessing the shared database, we ran into concurrent-request problems and the “queue” did not scale well. We then evaluated two purpose-built message queue systems: Apache ActiveMQ and Amazon Simple Queue Service (SQS). In our evaluation we found that Amazon SQS works well, and we had it running within 10 minutes. Amazon SQS offers a reliable, highly scalable, hosted queue for securely sending, receiving, deleting, and storing messages as they travel between application servers. It operates as a service, with no need to deploy or maintain server instances or software. ActiveMQ is a mature framework, one of the leading message queue frameworks in the open-source world. It is scalable and high-performance, and offers a more sophisticated feature set than Amazon SQS. However, ActiveMQ needs a separate server, configuration, and queue maintenance for each environment, along with tuning of the system and infrastructure stacks for performance and scalability. Since our infrastructure is hosted in Amazon Web Services, we decided to start with SQS for simplicity and see how far it could take us.

Our requirements for our message queue system include: delayed processing of messages, ordered sequence, batch processing, and message priority. Amazon SQS met all of our functional requirements except priority queues. We achieved equivalent functionality by using a pair of queues, QUEUE_TOP and QUEUE_NORMAL, for each application queue. Our front-end applications send messages to QUEUE_TOP, and batch applications send messages to QUEUE_NORMAL. We wrote a generic receiver endpoint that listens to both queues in a pair, processing QUEUE_TOP messages first and processing QUEUE_NORMAL messages only when QUEUE_TOP is empty.
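
To make the pairing concrete, here is a minimal sketch of the send side. Our workers are part of our Java back end; this example uses the AWS SDK for JavaScript purely to illustrate the pattern, and the queue URLs and the isInteractive flag are hypothetical:

var AWS = require('aws-sdk');
var sqs = new AWS.SQS({ region: 'us-east-1' });

// Hypothetical queue URLs for one application queue pair.
var QUEUE_TOP    = 'https://sqs.us-east-1.amazonaws.com/123456789012/aggregation-top';
var QUEUE_NORMAL = 'https://sqs.us-east-1.amazonaws.com/123456789012/aggregation-normal';

// Front-end (interactive) requests go to the top queue; batch jobs go to the normal queue.
function enqueueAggregationRequest(payload, isInteractive, callback) {
	sqs.sendMessage({
		QueueUrl: isInteractive ? QUEUE_TOP : QUEUE_NORMAL,
		MessageBody: JSON.stringify(payload)   // JSON command message
	}, callback);
}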

When a user logs in to our system, the front-end application accepts the request, creates N queue messages, and puts them in Amazon SQS. We use JSON as the data format for the message payloads, and the Command pattern for handling messages that invoke specific workload tasks. The queue server checks for available threads and reads available messages from the SQS queue(s), up to a configurable limit (typically 10). The queue server checks the top-priority queue first and then the normal-priority queue, so that top-priority messages are always processed faster. One challenge with using SQS is that it does not support blocking reads, so an empty queue must be polled by your application code. To keep our app servers from fast-polling an empty queue (and using up all the CPU), we use a configurable polling interval (typically 100 to 1000 ms) with some internal buffering of messages in the app server, to ensure that worker threads stay busy between polling calls to the SQS queue.
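
The receive side can be sketched the same way (again in JavaScript purely for illustration, reusing the hypothetical client and queue URLs from the snippet above): read up to 10 messages from the top queue, fall back to the normal queue only when the top queue is empty, and back off for a configurable interval when there is nothing to do.

function receiveBatch(queueUrl, done) {
	// SQS returns at most 10 messages per receive call.
	sqs.receiveMessage({ QueueUrl: queueUrl, MaxNumberOfMessages: 10 }, function (err, data) {
		done((err || !data || !data.Messages) ? [] : data.Messages);
	});
}

function processMessage(message) {
	var command = JSON.parse(message.Body);   // Command-pattern payload
	// ...dispatch to the handler registered for this command, then delete the message...
}

var POLL_INTERVAL_MS = 500;   // configurable, typically 100-1000 ms

function poll() {
	receiveBatch(QUEUE_TOP, function (topMessages) {
		if (topMessages.length > 0) {
			topMessages.forEach(processMessage);
			return poll();   // keep draining top-priority work first
		}
		receiveBatch(QUEUE_NORMAL, function (normalMessages) {
			normalMessages.forEach(processMessage);
			// back off only when both queues are empty
			setTimeout(poll, normalMessages.length > 0 ? 0 : POLL_INTERVAL_MS);
		});
	});
}

poll();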

Our implementation has shown us that Amazon SQS is a solid alternative to ActiveMQ for message queues used for simple workflow decomposition. Of course, one fundamental constraint is that Amazon SQS is only available if your infrastructure is deployed in Amazon Web Services. For other hosting providers, or if you operate your own data center, open-source options like ActiveMQ or RabbitMQ are more appropriate choices.

Since introducing SQS into our architecture, we have extended our use of dedicated workload servers, reading SQS queues, for:  integration with financial services partners; integration with SalesForce.com; and compute- and DB-intensive batch processes.  Stepping into the use of message queues to solve a specific problem (asynchronous processing of account aggregation), and succeeding there, has led us to a queue-based architecture for most of our back-end processing, with great improvements in scalability, flexibility, and reliability.

 

Design for Mobile First

A mobile site is required; it is not a nice-to-have. Mobile Internet usage will be more frequent than desktop Internet usage in the near future, so it’s time to stop adapting desktop websites to fit on mobile devices. We’ve seen the rise of responsive layouts, and while they offer an elegant solution for resizing components for varying screen size, they do little to address the true differences between mobile and desktop usage. Both types of sites need to be designed, but from now on we start with mobile.

Mobile devices impose a number of constraints that do not exist on a desktop. They have smaller screens and slower, less reliable connections (for now). Inputs are touch and gesture based. Mobile users also access the Internet more often, and in more distracting contexts, than desktop users: they might be waiting for a restaurant table, watching television, or even multi-tasking with their laptop, so they often have less attention available. Most importantly, a growing number of users access the Internet almost exclusively via a mobile device. Design for these constraints first, and a desktop version will follow much more easily than trying to squeeze a desktop design down to mobile.

By designing for mobile usage, we force ourselves to be succinct and simple. These constraints will not make our stakeholders happy, but abiding by them allows us to understand the priorities and hierarchy of the site much faster. Stakeholders are more willing to trim the unnecessary and compress their content when crafting a mobile site. Secondary content can be placed on separate screens, making the primary focus of each screen much clearer. When combined with clear and concise content, our users will understand what we want to show them in record time.

If our users can accomplish their tasks on a mobile site, our desktop site will be even better. We now have content that is quick and obvious, an interaction flow that takes little time and attention, and an obvious hierarchy of elements on each screen. Our site is lightweight enough for slow connections, and optimized for distracted users with short attention spans. We have clear calls to action, and large, easy-to-target controls. In theory, both usability and conversions should be better than ever.

Our mobile-optimized site will work pretty well on a desktop, but we still need to take advantage of the platform. Fonts and controls will likely appear friendlier and easier to use, but they will be so enormous that users must move their cursors much farther when navigating our site. We also aren’t taking advantage of the extra real estate, screen resolution, computing power, more reliable connection, or the precision and hover abilities of the mouse cursor.

When we repurpose our site for desktop usage, we can expand on any content that required trimming, but if the site already converts well, adding more content will only be a distraction. The main things to do when repurposing a mobile-optimized site for desktop use are to replace gesture-based interactions with mouse-friendly alternatives, and to add a little wow factor through animations, transitions, hover states, and higher-resolution images that would be too heavy for a mobile site. We may also want to flatten some of the architecture to bring secondary content into sidebars, and provide a larger view of navigation. Just ensure that these additions don’t interfere with the primary tasks users want to accomplish.

The Evolution of a Startup: Web Technologies

This is a detailed post describing Personal Capital’s transition from Flex to Backbone.js + RequireJS, including the reasoning behind it and the experience of doing it.

By now, it’s clear to anyone who’s actively involved in the web space, including consumers, who’s come out on top in the Flash vs Rest of the World showdown. So, as Adobe transitions into marketing some HTML animation apps after ditching Flex and abandoning Flash Player for mobile browsers, it might be interesting to take a look at why on earth our web product is currently Flex based, and why it won’t be much longer.

Use it or lose it – your market share that is

The years between 2009 and 2012 saw at least two dramatic shifts in the popular technology we deal with day-to-day, which is not all that unusual for technology in general, but in terms of front-end web development the change has been fast and huge. The two big shifts I’m talking about are the decline of Internet Explorer on the one hand and of Flash on the other.

Ironically, the same folks who are relegating both technologies to history are now trading punches with each other: Apple and Google.

Apple got real busy selling iOS devices and banning Flash from them while telling the world, every day, that they were doing it. Apple was also hard at work gaining an extra 10% PC market share with OS X which doesn’t run Internet Explorer at all. In terms of consumer perception, I’d say it was all Apple. But really, in terms of enabling this new generation of technology, Google’s Chrome and V8 teams deserve 99% of the credit.

For your consideration: between 2009 and 2012, Chrome went from 0 to 34% of the browser market while IE fell from 69% to 33% (and thankfully 17 points of that remaining share is IE9). That’s a pretty phenomenal eating of Microsoft’s lunch, and it leaves a big fat 2% of the shift for Firefox and Safari to play with. There are probably a million reasons why it played out the way it did, but in my mind the single biggest factor driving this current round of web innovation is the javascript performance war that Google’s V8 engine started.

Back in September of 2010, when we first started working on the Personal Capital web product you see today, the shiny new IE8 clocked in at 4989.7ms running the SunSpider javascript performance benchmark. This morning, Chrome on my Macbook Air clocked in at a respectable 243ms. The slope of that line is steeper than the financial cliff we stumbled off of in 2008. It’s the reason we can have smooth animations in javascript and data tables that sort themselves in milliseconds. Chrome’s speed and fidelity are the reason we’re even talking about this at all, not to mention all the work the Chrome team has put in, along with others, to get HTML5 and CSS3 out the door. That’s huge. As javascript developers in 2012 we’ve got great performance and a blossoming feature set.

In 2010 we had neither, which is the void that Adobe Flash Player filled. Although I can’t find updated numbers for Flash Player penetration today (surprise), in 2010 Flash Player 10.1 covered 87.4% of the US browser market (more than IE) with a powerful VM that could execute and render at high fidelity at the same time. That March, Adobe also released Flex 4, a great framework with a lot of thought put into separation of concerns and mature patterns like state management, complex data collections and intelligent view (re)rendering. The icing, though, was the fourth-generation micro-frameworks that sat alongside it. Frameworks like Swiz and Robotlegs made AOP and IoC first-class citizens in the Flash/Flex world and eliminated hundreds of lines of boilerplate while decoupling the code in the process.

That’s why our web product is written in Flex. It allowed us to create a highly interactive browser-based product quickly, at high quality, to our design specifications and deliver it to almost everyone while building it on a solid software architecture. Until Adobe ditched it.

You know, one thing that Flex never really had going for it was a great open source developer community, and I’m not quite sure why that is. We had plenty of good examples from python to ruby to even the jQuery plugin ecosystem. So when it didn’t make sense financially for Adobe to continue to drive Flex, there wasn’t enough of a community there to catch its fall and there wasn’t enough of one there to prevent it in the first place.

So, in 2012, Flex has been dumped unceremoniously into the Apache Foundation’s lap and is being valiantly lipo-sucked by the die-hards in the Spoon Project. Flash Player has been relegated to a video streaming and gaming platform that is currently trying to integrate OpenGL. Internet Explorer, for its part, might make a comeback, as IE10 purports to finally be on par with modern browsers in terms of both performance and a standardized feature set, but I doubt Google or Mozilla is interested in sitting around and letting that happen.

The takeaway here is simple, and it’s something every developer knows in their heart: you’re never done until you have no more customers. Microsoft and Adobe thought they were done, and they both used their dominance to excuse themselves from continually delighting their customers, so Google and Apple came in and did it for them. And they did a damn good job.

But enough of the history lesson! Let’s dive into what we’re actually doing now with Javascript and HTML5 and where we’d like to go in the future.

Our Javascript App Design Refresh

We started converting parts of our web application from Flex to javascript around September of 2011, first as a proof of concept, then for production, but only on a simple jQuery-enhanced, prototype-based architecture. At the same time, we were doing the research necessary to completely rebuild our entire development stack from scratch. Not only were we looking to keep architectural patterns like dependency injection, event mediation and data binding, but we also needed to figure out how we were going to test, package and ship our software.

Here’s the current setup:

Application Framework

Backbone.js provides just enough framework to be useful but not so much that we can’t cultivate our architecture in the way we see fit. We take advantage of the router, propagated model/collection updates and simplified view setup. We’ve extended collections to do smart merges from our API backend, implemented client-side data caching for data injection (think dependency injection but with data) and have a set of rules to determine how we break down designs into individual view instances. We compared all the frameworks currently listed on TodoMVC and were impressed with how the platform is evolving, but it came down to Ember and Backbone at the time. We like the Backbone community and the learning curve although we’re aware we’ll eventually experience some growing pains. We’re closely following projects like Aura, Marionette, Thorax, and Chaplin. A more detailed write-up of our comparison here warrants another post and we are currently also experimenting with AngularJS for some more advanced UI components.
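
As one concrete example of those collection extensions, here is a simplified sketch (the names are illustrative, not our production code) of a collection that merges fresh API results into its existing models instead of resetting, so views bound to individual models only react to real changes:

define(['backbone', 'underscore'], function (Backbone, _) {

	var SmartCollection = Backbone.Collection.extend({
		// Merge server results into existing models rather than resetting the collection.
		smartMerge: function (items) {
			_.each(items, function (attrs) {
				var existing = this.get(attrs.id);
				if (existing) {
					existing.set(attrs);   // fires 'change' only if values actually differ
				} else {
					this.add(attrs);       // fires 'add' for models we have not seen before
				}
			}, this);
		}
	});

	return SmartCollection;
});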

Architectural Framework

RequireJS is not just a script loader; it’s really dependency injection, and it gives you nodejs-ish module encapsulation in the browser. Dependency injection is important because it avoids singletons and long-winded model-locator syntax, isolates units under test, and promotes greater code reuse through loose coupling. In our opinion the RequireJS project has the best momentum of the alternatives we evaluated, and its shim support iced the cake. Read more about it here.
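
For example, isolating a unit under test is mostly configuration: the module under test keeps declaring the same dependency, and the test harness maps that id to a stub. The module names below are hypothetical:

// In the test harness: every module asking for 'services/aggregationApi' gets the stub.
require.config({
	map: {
		'*': {
			'services/aggregationApi': 'test/stubs/aggregationApi'
		}
	}
});

// Code under test is unchanged -- it still declares the same dependency.
define(['services/aggregationApi'], function (api) {
	// in a test run, 'api' is the stub implementation
});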

View Templating & Layout

Handlebars is the clear favorite here: we started out using Twitter’s Hogan implementation of Mustache, but we found the extra bit of flexibility worthwhile for people experienced in developing complicated UIs. Handlebars’ registerHelper mechanism is also a much cleaner way to promote reuse of view transforms than the function mechanism in Mustache proper. And given that we aren’t going with a framework that enforces a view language, like AngularJS or Knockout, there really isn’t anything better for dead-simple view definition.
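
For instance, a formatting concern can be registered once and reused across templates; the currency helper below is a made-up example, not one of ours:

// Register a small helper once...
Handlebars.registerHelper('currency', function (value) {
	return '$' + Number(value).toFixed(2);
});

// ...and reuse it anywhere a template needs it.
var template = Handlebars.compile('<span class="balance">{{currency amount}}</span>');
template({ amount: 1234.5 });   // '<span class="balance">$1234.50</span>'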

Visual Design & Styling

Compass (and SASS) was chosen because of our desire to give our design team a bigger role in the actual implementation of products; our goal is to provide the HTML structure and JS behavior and let the more opinionated people perfect the pixels. Given Compass’s broad library of mixins that generate cross-browser CSS, and the fact that SASS doesn’t have any significant disadvantages compared to LESS or Stylus, it was a natural choice for us. Currently, developers use Scout for compilation, but this will eventually be integrated into our build process.

Unit Testing

SinonJS with Chai extensions enables simple and powerful behavior expression. If you’re used to BDD testing in Ruby, the patterns here will be familiar. It’s a library that takes full advantage of the language it is written in, as well as the experience of other communities in evolving test frameworks. You get to write easy-to-read unit tests for almost anything without much pain. Seriously, if you haven’t been writing unit tests up until now, go check it out.
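
To give a flavor of the style (assuming a BDD-style runner such as Mocha, which the stack above doesn’t name), here is a small spec that uses a sinon spy with chai’s expect assertions against nothing more exotic than a Backbone model:

describe('model change events', function () {
	it('fires a change handler once per real change', function () {
		var model = new Backbone.Model({ balance: 100 });
		var onChange = sinon.spy();
		model.on('change:balance', onChange);

		model.set('balance', 200);
		model.set('balance', 200);   // same value again: no event fires

		chai.expect(onChange.calledOnce).to.be.true;
		chai.expect(onChange.args[0][1]).to.equal(200);   // handler receives (model, newValue)
	});
});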

Build Process

Hudson is our default continuous-integration server because we were already using it. While there are other choices out there, it has a great community of plugins, and it’s quite impressive how well it has held up building, testing and deploying everything from ActiveX plugins, WAR files and Flex projects to our javascript applications. We use the RequireJS optimizer to concatenate and uglify our applications prior to release.

Graphing

Raphaël could use some documentation love, but that doesn’t take away from its excellence as a cross-browser vector graphics library. All the HTML graphs you see on our site are custom Raphaël plugins that are really close to the heart of our value proposition. We want to give you the best insight into your finances possible, and oftentimes that manifests itself as highly interactive, highly custom visualizations that we hope bring you happiness and prosperity. We also seriously evaluated Highcharts and the graphing extension to Raphaël, but ultimately our passion for numbers (and a great product team that can manipulate them) drove us to pick something productive yet 100% customizable.
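
The plugin mechanism itself is tiny: a chart is just a function hung off Raphael.fn, so every paper instance created afterward picks it up. The sparkline below is a made-up, deliberately naive example rather than one of our actual charts:

// A trivial sparkline plugin (naive scaling, no axes).
Raphael.fn.sparkline = function (values, width, height) {
	var max = Math.max.apply(Math, values);
	var step = width / (values.length - 1);
	var points = values.map(function (v, i) {
		return (i * step) + ',' + (height - (v / max) * height);
	});
	return this.path('M' + points.join('L')).attr({ stroke: '#2a79b8', 'stroke-width': 2 });
};

// Usage: draws a 200x50 sparkline into the element with id 'net-worth-graph'.
var paper = Raphael('net-worth-graph', 200, 50);
paper.sparkline([3, 7, 4, 9, 12, 8], 200, 50);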

A Quick Note About Debugging

The visual debugger in Flash Builder was the single biggest selling point for Flash Builder, but it required a debug version of Flash Player to be installed and Flash Builder (Eclipse) to be running. Yikes.

Now I want to take a moment to wholeheartedly thank the Firebug team for pioneering the in-browser debugging console back in 2006 and doing it right. These days the Chrome Developer Tools and Safari Web Inspector are top notch and even IE9+ has some semblance of a useful developer console which allows breakpointing, runtime manipulation and analysis AND is deployed on customer machines all without having to boot up a separate program. We’re even able to enable rendering hints in webkit browsers. This is one area in which I feel Javascript development is clearly superior to any other technology I’ve worked with.

Functional Testing

Unfortunately, we haven’t nailed this down yet and both our QA team and developers are still doing evaluations. If you have suggestions we’d love to get some in the comments below.

In Summary

You can see that our philosophical approach to all these areas is “just enough is enough” without having to buy in to an opinionated, vendor-locked or overly complex piece of technology. We’re staying low-level so that we can build out a coherent platform that suits our priorities and not spend a lot of time fighting frameworks. And there’s actually a very good reason for that.

Enough with the boilerplate!

So, you get it, we’re excited about what we can do and how we can do it in HTML and javascript. But here’s some cold hard truth: we lost important architectural patterns when making the move from Flex 4 and Swiz to Backbone. Namely:

  1. Data-binding
  2. An intelligent view component lifecycle
  3. Declarative views
  4. Event mediation

To get these back, you either write a lot of boilerplate or use a more abstracted library on top of Backbone (which is fine). However, a couple of the newer frameworks attempt to implement items 1-3. Ember is actually designed around them, and it feels like AngularJS and Knockout are in some part inspired by these features of Flex, but they are all pretty awkward in their own way. Ember 1.0.PRE explicitly supports a declarative view hierarchy with {{outlets}}, and its Objective-C-style first-responder system is nice, but writing your own bindable EmberHandlebars helpers needs to be streamlined, as does the syntax for declaring view actions. The overly complex interplay between the binding mechanisms and the models, collections and controllers makes them time-consuming to debug when they don’t play nice. The router is actually nice to write for, though the heavy reliance on string literals for instance lookups, and even for property sets/gets (an unnecessary recurring theme in JS frameworks), makes me sad because it’s more brittle. Angular, in its own way, is a lovely piece of engineering, but wiring that much functionality into the DOM is a bit adventurous and opaque, and the project just wasn’t far enough along when we first started rewriting our apps. Knockout is in the same family.
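
As a concrete (and hypothetical) illustration of that boilerplate, the data-binding that Flex gave us declaratively has to be wired by hand in a Backbone view:

define(['backbone'], function (Backbone) {

	return Backbone.View.extend({
		initialize: function () {
			// Manual "data binding": re-render whenever the model changes...
			this.model.on('change', this.render, this);
		},
		remove: function () {
			// ...and remember to unbind on teardown to avoid leaks.
			this.model.off(null, null, this);
			return Backbone.View.prototype.remove.call(this);
		},
		render: function () {
			this.$el.text(this.model.get('balance'));
			return this;
		}
	});
});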

We handle event mediation simply through an application event bus that enables modules like metrics and logging to peek at what’s happening in the running application without their code being strewn everywhere. It also allows modules to request the triggering of an action without needing to care about who actually performs it.
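
The bus itself is almost no code, since Backbone.Events already provides on/off/trigger; the module and event names below are illustrative:

// In scripts/eventBus.js -- a shared application event bus.
define(['backbone', 'underscore'], function (Backbone, _) {
	return _.extend({}, Backbone.Events);
});

// In scripts/metrics.js -- observes without being referenced by feature code.
define(['eventBus'], function (eventBus) {
	eventBus.on('accounts:refreshed', function (info) {
		// record timings/counts here; the aggregation code never knows metrics exists
	});
});

// In scripts/accounts.js -- feature code announces what happened, or asks for an action.
define(['eventBus'], function (eventBus) {
	return {
		onRefreshComplete: function (accounts) {
			eventBus.trigger('accounts:refreshed', { count: accounts.length });
		}
	};
});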

Regardless, we’ve been constantly impressed by all the thought and sweat that’s gone into each of the technologies we’ve talked about, and as part of our maturation as a team and a company, we’d like to start contributing back to this wonderful community. We’re currently pushing to get some of this foundational work onto our roadmap. In the meantime, I guess we’ll keep writing our own boilerplate.

But wait! There’s more!

Projects We’re Super Excited About

  1. Meteor
  2. Derby
  3. AngularJS

If you’re excited about any real-time or other technologies that help create a fantastic user experience please let us know in the comments below, we like great technology and excitement.

Tech Shoutouts

Finally, we’d like to take some space to say thank you to a few of the myriad other open-source projects and utilities that have allowed us to move as quickly as we have and helped us fall in love with the community. In case you haven’t heard of one of these, check them out; they might help you too. In no particular order: livestamp.js, lodash, slimScroll, momentjs, require-handlebars-plugin, express & jade, detectmobilebrowser, and of course, the jQuery core team for doing such a good job pushing the edge for so long.

Bundling Front-end Resources with RequireJS

why

  • faster load times because of fewer http calls and less data over the wire.
  • effectively invalidate browser cache when a newer version of a resource is available.
  • lends itself to managing these resources via a CDN.

what

  • combines and minifies javascript
  • combines and minifies css
  • revises file names of js/css files to support heavy browser caching
  • updates the html to reference these new hyper-optimized js/css files

how

Let’s start with combining javascript. This could be as trivial as writing a shell script to concatenate all of your script files into a single file. That works great for monolithic applications, but for a large-scale javascript application where a significant amount of data manipulation and display is done in the browser, having a modular architecture greatly helps.

Modules are an integral piece of any robust application’s architecture and typically help keep the units of code for a project cleanly separated and organized. They need to be highly decoupled and represent distinct pieces of functionality, thus facilitating easier maintainability and letting them be replaced without affecting the entire system.

“The secret to building large apps is never build large apps. Break your applications into small pieces. Then, assemble those testable, bite-sized pieces into your big application.” – Justin Meyer, author of JavaScriptMVC

However, there are two main problems in writing modular javascript:

  • No means to import modules of code in a clean and organized manner.
    • Developers at present use variations of the module or object-literal patterns to write modules. The modules are exported as namespaces of a single global object, where it is possible to incur name collisions as the application and team grow.
  • No clean way to handle dependency management.
    • Developers use the exported namespace to include the modules, and then load the corresponding script files in the order they are included. This becomes really difficult as the application grows, particularly when scripts start to have overlapping and nested dependencies.

There are various script loaders that help out, but there are no native means of solving these two problems in a holistic manner. This is where the Asynchronous Module Definition (AMD) format greatly shines. It has been a huge help in architecting our application (the benefits of which deserve a separate post), but the following excerpt from Addy Osmani summarizes its usefulness well.

The AMD module format itself is a proposal for defining modules where both the module and dependencies can be asynchronously loaded. It has a number of distinct advantages including being both asynchronous and highly flexible by nature which removes the tight coupling one might commonly find between code and module identity. Many developers enjoy using it and one could consider it a reliable stepping stone towards the module system proposed for ES Harmony. – Addy Osmani

The following example demonstrates how we address the first problem with AMD:

// Signature: define( module_id /*optional*/, dependencies /*optional*/, factory )
// The module id is usually omitted; RequireJS derives it from the file name.
define(
	['dog', 'actions'],              // other module dependencies, if any
	function (dog, actions) {        // factory for instantiating the module or object
		// create your module here
		var myModule = {
			dogActions: function () {
				console.log('dog actions');
			}
		};
		// return a value that defines the module export
		return myModule;
	}
);

We solved the second problem by using RequireJS, a popular script loader that we felt was the natural fit for AMD. It uses the AMD module specification for defining and requiring modules, and it loads those modules via a built-in script loader. It can also load non-script dependencies such as text files, and we have used that ability to load our Handlebars templates.

Let’s extend the above example to demonstrate how well AMD and RequireJS complement each other: the code stays modular, yet it is brought together without anyone worrying about module dependencies and namespace conflicts.

<script src="scripts/require.js" data-main="scripts/main"></script>
// In scripts/main.js: your application bootstrap
require( 
	['dog', 'actions', 'hbs!templates/dogActions'],
	function( Dog, Actions, DogActionsTemplate ){
 		document.body.innerHTML = DogActionsTemplate({
 			dog: Dog,
 		 	actions: Actions
 		});
 	}
);
// In scripts/dog.js
define([], function( ){
	var dog = {
 		name: 'lula'
 	};
	return dog;
});
// In scripts/actions.js
define([], function( ){
	var actions = {
		name: 'fetchBall'
	};
 	return actions;
});
// In scripts/templates/dogActions
<h1>{{dog.name}}</h1>
<ul>
<ul>{{#each actions}}
	<li>{{this}}</li>
</ul>
</ul>
{{/each}}

In addition, RequireJS provides an optimizer tool that we use to build our entire javascript codebase into a single file (or two) for production. And that is how we address combining javascript.

Combining css was easy. We use SASS to write our style sheets; as the name itself suggests, they are awesome, and they were introduced to me by an awesome fellow developer, Justin. We use Scout (or similar tools) to output a single css file from our sass files, which acts as the stylesheet for the entire application.

Now that we have successfully combined our javascript and css files, let’s compress them. We mentioned script loaders earlier and how require’s optimizer tool helps out. Currently require provides two minification options: uglifyjs (the default) and Closure Compiler. We use uglifyjs and have it configured to “beautify” in our dev environment so that we can still debug effectively. Source maps look like an interesting development in this arena that would let you debug “production” scripts.
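
A trimmed-down r.js build profile for this setup might look like the sketch below (the paths are illustrative); turning beautify on for dev builds keeps the concatenated output readable in the debugger:

// build.js -- run with: node r.js -o build.js
({
	baseUrl: 'scripts',
	mainConfigFile: 'scripts/main.js',   // reuse the app's paths/shim configuration
	name: 'main',
	out: 'build/main-built.js',
	optimize: 'uglify',                  // or 'none' to skip minification entirely
	uglify: {
		beautify: true,                  // dev builds: keep output readable for debugging
		max_line_length: 1000
	}
})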

Now that we have combined and minified our resources, the next step is to add a build fingerprint to the resource file names so that the browser treats them as new resources and invalidates its cached versions.

We use a bunch of maven-ant scripts to achieve this. More on this in my next post.

challenges

One of the major challenges with RequireJS is using non-AMD-compatible scripts, mainly third-party libraries. When we initially started with RequireJS, we simply got the AMD versions of the third-party libraries we needed, or modified the libraries to be in AMD format. We quickly realized that this is not a feasible approach, and it was a major pain point whenever we had to add a new library. Tim Branyen had a good blog post on how to overcome this problem, and the approach was later incorporated into RequireJS 2.0. With the introduction of shim config, the problem is easily addressed in the config file as follows:

require.config({
	shim: {
		'backbone': {
			deps: ['underscore', 'jquery']
			, exports: 'Backbone'
		}
		, 'underscore': {
			exports: '_'
		}
		, 'raphael': {
			exports: 'Raphael'
		}
	}
	, hbs: {
		templateExtension: 'html'
		, disableI18n: true
		, helperPathCallback: function (name) {
			return 'templates/helpers/' + name;
		}
	}
	, paths: {
		jquery: 'libs/vendor/jquery-1.7.2'
		, underscore: 'libs/vendor/underscore'
		, Handlebars: 'libs/vendor/handlebars_hbs'
		, raphael: 'libs/vendor/raphael'
		, backbone: 'libs/vendor/backbone'
		, hbs: 'libs/vendor/hbs'
	}
});

 

reads

http://addyosmani.github.com/backbone-fundamentals
https://github.com/h5bp/
http://www.nczonline.net/blog/2010/07/06/data-uris-make-css-sprites-obsolete/
http://addyosmani.com/blog/yeoman-at-your-service/
http://www.youtube.com/watch?v=Mk-tFn2Ix6g
http://www.html5rocks.com/en/tutorials/developertools/sourcemaps/

 

WYSIWYG…but there’s a whole lot more to see without it

Personal Capital recently updated its iPhone/iPad application to include a new 401k fund fee feature as well as to add support for iOS 6 and the iPhone 5. Adding support for a new OS as well as a new device, especially one with a new screen size, is always a great test of a project’s architecture and code. A properly architected project allows the developer to add support for new features with minimal adjustments to the project.

The Personal Capital iPhone/iPad project is a testament to the value of not using WYSIWYG tools. The application supports operating systems as far back as iOS 4.3 and takes advantage of each device’s retina display if available. Every screen is created purely in code and takes advantage of Apple’s APIs that allow a view to re-size and/or adjust the position and functionality of its objects based on the current screen size and device rotation. Thanks to this architecture, updating the project to support iOS 6 and the iPhone 5’s 4-inch display required only a few changes.

WYSIWYG tools are SDK dependent. This means that when a new feature is introduced, the SDK and its accompanying WYSIWYG tool must be updated to support it, and because the feature did not exist in the previous WYSIWYG, you then need to update each of the corresponding WYSIWYG files to take advantage of it. For example, the latest version of Xcode’s Interface Builder supports the iPhone 5’s new 4-inch retina display by providing an additional “Size” property for views. If your application uses Interface Builder to create its screens, you will need to update each of your nib files if you want your application’s views to re-size correctly. If your project was built without Interface Builder and is coded for a dynamic screen size using the frame and bounds properties of a view, you don’t need to change anything to support additional screen sizes and layouts.

When a new OS rolls out, there are exciting new features to take advantage of as well as deprecated methods to replace. If your project is fully code-based, you can do a simple word search to find the points in your code that you want to update. WYSIWYG tools are unique to each SDK, and most are not easily searchable since they are designed to be visual tools. If the WYSIWYG files aren’t binary, you can do a word search on them, but most likely the keywords you need to search for are specific to the tool and differ from the property or method names of the class’s code-based counterpart. This means that even if you can search a WYSIWYG file, your effort is doubled.

This brings me to the concept of developing for multiple clients. Successful applications will eventually be ported to multiple clients, and not having to “reinvent the wheel” for each client is essential. If you are building an application that doesn’t need UI elements specific to a platform, you can develop your entire application in C so that it compiles for iPhone/iPad devices and also supports Android devices via the Android NDK. Obviously, this approach is great for code reuse, but it doesn’t let you use some platform-specific features that are accessed via platform-specific classes. I’d love to build an application completely in C so that the code is shared across platforms, but that path can be a little masochistic if you actually want to incorporate platform-specific elements into your application.

So what options are left? If the exact code can’t be reused across clients, you can still reuse the logic. The Personal Capital application features a variety of dynamic and interactive visual elements that are all built using custom code. Since they are built entirely in Objective-C without the use of a WYSIWYG, I was able to take the exact code and “Javacize” it for the Android platform. This meant literally taking the code and just changing it to Java while leaving the logic intact. I had not looked at parts of the code in months, so some of it was a little foreign to me, but since I was simply converting Objective-C to Java, it didn’t matter. This means that sections of an application that were complicated to create on one platform can be delegated to anyone who knows both Objective-C and Java, even if they aren’t familiar with the code.

Reusing code in this manner also facilitates the QA process of application development. Since the same logic is used on all platforms, your QA department doesn’t have to test a completely new product, and any issue that arises can be fixed on one client and then quickly mapped to the other clients.

If portions of your application are created with platform-specific WYSIWYG tools, though, this process begins to fall apart. Besides being platform specific, WYSIWYG tools create a black-box effect in a project. Most tools do not explain in depth the one-to-one relationship between what is being achieved visually and how to accomplish the same thing in code. If a developer is familiar with the platform API and its classes, the process becomes mappable, but at that level of understanding most developers opt out of using a WYSIWYG and write the code directly, since it’s more dynamic and customizable.

There are cross-platform WYSIWYG tools, most of which are HTML5 based. Many of the simpler applications that you see in the marketplace could indeed have been built using these tools, which would have saved the developer the hassle of managing multiple code bases. But the key word is simple. HTML5 applications will never perform as well as a native application and can’t integrate platform-specific functionality at the level that a native application can.

There is nothing wrong with using a WYSIWYG if you’re new to a development environment and need to get up and running quickly. But if you’re developing complex, dynamic applications that extend beyond out-of-the-box platform elements, you will find that most of your work is done in code, and you’ll most likely drop any WYSIWYG implementation to maintain a clean code base that can be ported to different platforms.