In our last post I wrote about some of the differences between running your app on public commodity cloud hosting and building your own private cloud and running it there, and how you should choose between the two.
This time, I want to tell you that you don’t necessarily have to choose. When explaining how to choose your hosting environment, I left out the key assumption that made it an either/or proposition: that the entirety of your application has a single, homogeneous set of performance, availability, and budget constraints.
It’s an easy assumption to make, as most applications are developed in a monolithic fashion, with one code base, one standard development environment, and ultimately just one production deployment. By breaking up your monolithic app into smaller components, you’re free to host each component in an environment that ideally suits its own constraints.
Service or Resource Oriented Architecture
This process of splitting up your application is generally referred to as Service Oriented Architecture, or Resource Oriented Architecture. The right name to use depends on the way you slice up your application, and isn’t really of interest to us right now.
In this article I’m talking about the benefits and drawbacks of SOA as it specifically pertains to hosting. The scope of making the change to SOA is much larger than that, and I highly recommend you do a bunch more reading on the topic before committing to it.
A Specific Example
In the last article, I used the example of the digital content marketplace and how we moved it off the cloud onto a private virtualized environment, mostly from a desire for faster performance and more control.
Over time, as we opened up new marketplaces, the load profile of the application changed dramatically. We still had the same steady stream of e-commerce customers requiring speedy service, but with each new marketplace came new kinds of files being uploaded with vastly different post-processing requirements, and they would come in very sporadic windows.
At this point, our production environment was working too hard at being “all things to all people”. If we looked at the app as something primarily designed to make it as easy as possible to buy things, the decision to host on our private cloud was still sound. If, however, we looked at it as primarily a platform for people to contribute content (which is then available for sale) at large scale, but with a very unpredictable load pattern, then the elastic nature of the public cloud made more sense for the app.
Split the application in two
As the cost of maintaining enough slack capacity to handle the big influxes of new content mounted, along with the complexity of maintaining file storage to hold it all, we split the application and moved 100% of asset storage and processing onto the public cloud.
The main production environment would hold onto any uploaded content just long enough to move it to cloud storage, then notify the new asset processing service that there was new content available. The asset service would do all the necessary file validation, transformation, CDN publishing, etc., and then notify the main website that the content was ready for use.
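The handoff described above can be sketched roughly like this. This is a minimal illustration, not our actual implementation: the object store and message queue are stand-ins (a plain dict and an in-memory queue), and every name here (`handle_upload`, `process_assets`, and so on) is hypothetical.

```python
import queue

cloud_storage = {}            # stand-in for a cloud object store
asset_events = queue.Queue()  # stand-in for a message queue between the systems

def handle_upload(file_id, data):
    """Main site: hold the file just long enough to push it to cloud
    storage, then notify the asset service that new content arrived."""
    cloud_storage[file_id] = data
    asset_events.put({"event": "new_content", "file_id": file_id})

def process_assets(notify_site):
    """Asset service: pull events, do the processing (validation,
    transformation, CDN publishing, ...), then tell the site it's ready."""
    while not asset_events.empty():
        event = asset_events.get()
        data = cloud_storage[event["file_id"]]
        processed = data.upper()  # stand-in for real file processing
        cloud_storage[event["file_id"]] = processed
        notify_site(event["file_id"])

ready = []
handle_upload("photo-123", "raw bytes")
process_assets(ready.append)
# ready now contains "photo-123", and the stored asset has been processed
```

The important property is that neither side calls the other directly: everything flows through storage and a queue, which is what lets the two halves live in different environments.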
The first big benefit was on the bottom line. By reducing the amount of slack capacity in our private cloud, and moving the work onto the public cloud where we could add or remove capacity as needed, we spent less to get more “work” done.
The next was reduced complexity in the primary environment. The primary environment was now 100% focused on serving the website, and had fewer moving parts to maintain. As performance was one of the main drivers for choosing the private cloud in the first place, the simplification of that environment gave us fewer potential bottlenecks, and made troubleshooting problems as they came up a lot easier.
Finally, it made maintaining high availability for our customers easier. With a strict separation between the website and asset processing, it became very difficult for problems in our asset system to cascade through to the website.
Unfortunately, you don’t get all of the benefits for free.
The first drawback is that by having different components in different data centres (or even in the same data centre, as some providers offer both physical and cloud hosting), you’re going to introduce latency into your overall system, and it’s going to be highly variable.
You will need to expend a fair amount of engineering effort making all of the interactions between the two systems asynchronous and fault tolerant. There is now a lot of network hardware not under your control between your two systems. Packets will arrive late, or not at all. If your web site only functions if the other service returns data promptly, your web site will stop functioning.
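One common shape for that fault-tolerance work is to wrap every cross-system call in a timeout, a bounded retry with backoff, and a fallback so the page can still render without the remote data. A minimal sketch, with a hypothetical helper name:

```python
import time

def call_with_retries(fn, attempts=3, base_delay=0.1, fallback=None):
    """Call an unreliable remote operation, retrying with exponential
    backoff. If every attempt fails, return a fallback value instead of
    letting the failure take the website down with it."""
    for attempt in range(attempts):
        try:
            return fn()
        except (TimeoutError, ConnectionError):
            if attempt < attempts - 1:
                time.sleep(base_delay * (2 ** attempt))  # back off, then retry
    return fallback

# Usage: if the asset service is unreachable, serve a cached answer.
# status = call_with_retries(fetch_asset_status, fallback=cached_status)
```

The exact policy (number of attempts, backoff curve, what the fallback looks like) depends on your app; the principle is that the caller always has a plan for the remote side not answering.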
Security is also an issue, as your previously “internal” systems communications will now pass over the big bad internet. Setting up a VPN is a good option; alternatively, if you’re doing your inter-service communications over HTTP, ensure all your services have robust authentication and run over SSL.
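One simple way to get that robust authentication is to sign each request body with a shared secret, so each service can verify that a message really came from its peer. A sketch using Python’s standard `hmac` module (the secret and function names are hypothetical; in practice the secret would come from configuration, not source code):

```python
import hashlib
import hmac

SHARED_SECRET = b"example-secret-from-config"  # hypothetical; load from config

def sign_request(body: bytes) -> str:
    """Produce an HMAC-SHA256 signature to send alongside the request."""
    return hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()

def verify_request(body: bytes, signature: str) -> bool:
    """Receiving side: recompute and compare in constant time."""
    expected = sign_request(body)
    return hmac.compare_digest(expected, signature)
```

Signing authenticates the sender and detects tampering, but it doesn’t hide the payload, which is why it belongs on top of SSL rather than instead of it.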
Finally, development and deployment complexity increases when you go from one monolithic app to multiple services independently hosted. Rolling out new functionality will require significant coordination effort to ensure all the services continue to work together.
If you’re having trouble figuring out if a public or private cloud is right for your app, the answer might be both.
Look carefully at your application. There might be two (or more) smaller apps struggling to break out, and when they do, finding the right hosting environment for each one should be a lot easier.