Challenges and Strategies for Log Management in the Cloud

Log management in different cloud environments can be challenging, but having access to log data is essential in order to get the visibility you need to optimize and secure your cloud environment.

One of the biggest challenges for the enterprise is sorting out how to perform proper log management once its operations run from the cloud. On a local network, logging is easy. You point your devices to a local log management solution, and off you go, alerting, reporting and searching through your logs. Chances are, your local network has tons of available bandwidth for not only standard traffic, but log traffic as well.

Event Log Management and Security Information and Event Management (SIEM) are considered IT best practice, and for regulated industries they're a requisite for audit compliance.

A trend that we’ve been seeing in the log management and SIEM space is that SIEM and Log Management vendors are moving toward securing the cloud. It was inevitable.

Some equate log management with simple log aggregation, display, and storage, an approach that fails to address these complex challenges. Most SIEM products offer basic event consolidation, simple correlation rules, limited real-time analysis, poor reporting and investigation flexibility, and no identity or infrastructure context. Many still require special collectors, add-on modules, additional systems and significant expertise.

This raises a number of questions. How valuable is the data I'd like to store in the cloud? Is this data absolutely critical to my business? If so, will I use a private cloud to store and work with that data, where I have full control over how to access and manipulate it? Or will I push that data up to the public cloud, where I have limited access and limited control? Should I segregate aspects of the toolset between private and public cloud? These are all legitimate questions. And if the cloud makes sense in ANY of these situations, it leads to the following question:

“How do we log this stuff?” 

Now the SIEM correlation piece is easy. Correlation is nothing more than a bell to ring, and as noted above, most SIEM products don't go much beyond that. And don't get me started on the professional services conversation.

Security logs for correlation make up a paltry 5-10% of log data. Forwarding the security events to the cloud is easy; filtered and forwarded, it's very low overhead. But the big data, the other 90-95%, is the pain point. This is handled by the front-end log management tool, the workhorse.
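To make that split concrete, here's a minimal Python sketch of the filter-and-forward idea: the small security slice goes to the cloud, while the bulky operational remainder stays with the local workhorse. The keywords and log lines are hypothetical, not from any particular product.

```python
# Hypothetical filter: separate the small security slice of a raw log
# stream (forwarded to the cloud SIEM) from the bulky operational data
# (kept on the local log management tool).
SECURITY_KEYWORDS = ("failed login", "denied", "unauthorized", "malware")

def is_security_event(line):
    """Return True if a raw log line looks security-relevant."""
    lowered = line.lower()
    return any(keyword in lowered for keyword in SECURITY_KEYWORDS)

def split_stream(lines):
    """Split raw log lines into (forward-to-cloud, keep-local) lists."""
    forward, keep = [], []
    for line in lines:
        (forward if is_security_event(line) else keep).append(line)
    return forward, keep

sample = [
    "sshd: Failed login for root from 10.0.0.5",
    "kernel: disk sda1 85% full",
    "fw: connection DENIED tcp 10.0.0.9:445",
]
forward, keep = split_stream(sample)
```

With the sample above, the two security-looking lines land in the forward pile and the disk-space notice stays local, which is exactly the 5-10% versus 90-95% split in practice.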

But what do we do with the rest of the log data? What do we do with that boatload of operational data? What can we do with the forensics data? If we're to push this to the cloud, how do we get it there? Chances are it's a ton of data! For hybrid clouds, will the cloud log management solution save me enough money to justify the bandwidth costs, just to get the data up there?

Well first, let’s examine the use cases for log management.

The most common use case is compliance: SOX, PCI-DSS, HIPAA/HITECH, NERC, GLBA, ISO, ITIL. If you're bound by any one of these mandates, or others, there is generally a requirement to store all log data for one to 10 years. Why? Forensics. Accountability. Responsibility.
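As a sketch of what that retention requirement implies, the following snippet computes the earliest purge date for an archive bound by multiple mandates. The retention periods are illustrative placeholders only, not legal guidance; check your actual mandates.

```python
from datetime import date, timedelta

# Illustrative retention periods in years -- placeholders, not legal advice.
RETENTION_YEARS = {"PCI-DSS": 1, "SOX": 7, "HIPAA": 6}

def earliest_purge_date(created, mandates):
    """An archive may be purged only after the longest applicable period."""
    years = max(RETENTION_YEARS[m] for m in mandates)
    return created + timedelta(days=365 * years)

print(earliest_purge_date(date(2011, 1, 1), ["PCI-DSS", "SOX"]))
```

The point of the `max` is that when multiple mandates apply, the longest retention period wins, which is why "one to 10 years" in practice usually means planning storage for the high end.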

Second? Security. The security team wants not only alerting (correlated, targeted or behavioral) to tell them when something happens, but the forensics data to tell them the “who, what, when and where”. The alert will just announce that something happened. It’s an incomplete conversation if there is no context around that alert.

Third? Operational. Log data is the bee's knees not only for notification of an operational alert (blown hard drive, overheating server, firewall policy changes, downed devices, restarting appliances, etc.), but also for high-level reports that surface anomalies. Reporting is the 50,000-foot view of the forest, whereas searching raw log data is extremely granular and perfect for root cause analysis.
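A toy example of that 50,000-foot view: rolling raw lines up into per-device counts so spikes stand out before anyone reads a single raw line. The `device: message` line format here is an assumption for illustration.

```python
from collections import Counter

def events_per_device(lines):
    """Aggregate raw 'device: message' log lines into per-device counts."""
    return Counter(line.split(":", 1)[0] for line in lines)

lines = [
    "fw01: policy change",
    "fw01: policy change",
    "fw01: policy change",
    "srv02: disk sda failing",
]
counts = events_per_device(lines)
# fw01 shows three times the activity of srv02 -- worth a closer look
# before dropping down to the raw logs for root cause analysis.
```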

Now back to the cloud.

Whether you’re in the cloud now with your business tools and data, or you’re looking to move there, establishing how you’ll do the log management aspect is our topic of discussion.

Private Cloud:

Logging in a private cloud is business as usual. Where the enterprise controls the physical and virtual environments, log management and correlation engines can easily offer visibility into both. This is the easiest and most expensive route. Reliability, responsibility and accountability are those of the enterprise. It's your cloud.

Public Cloud:

Logging in a public cloud, however, is much more challenging. Visibility is severely reduced when system access and system/application controls are limited. Although cloud-based applications can boost productivity and availability of data, they can't offer the same level of activity visibility that more traditional data-centers and private clouds can.

Regardless of whether the IaaS or PaaS environments are segregated by some sort of organizational multi-tenant cloud solution, there remain complications in keeping track of all the activity that occurs at the different virtualized layers. Physical or virtual, identity and access management are still important ingredients in the log management stew, even if the data and the applications exist outside of traditional network boundaries. So essentially, by pushing log management and correlation to the cloud, the current offerings from Log Management and SIEM vendors come with a loss of visibility and control. This is compounded by infrastructures shared across multiple enterprises in the dynamic ebb and flow of resource use.

Hybrid Cloud:

Hybrid clouds can give you the best of both worlds. The local, private cloud is often where the bulk of log data is created and managed. There, operational use is maximized, as is the forensics side of the house. This is the steak and potatoes, whereas the forwarding of that 5% of security events for correlation is the dessert. It's a tiny use case, but when it works, it can be sweet. And there you can offer better evidence of compliance with regulatory and government mandates.

In Conclusion:

Regardless of whether you go hybrid, public or private with the cloud, it is critical to acquire the key log data that offers a clear view into the operational and security events that will benefit the enterprise. Many cloud providers are offering log management in the cloud. Use it. It's not only IT best practice, but it can keep operational and security risks from going unnoticed. And if your cloud provider doesn't offer a log management service, push for one. You need transparency. If they can't offer you what you need, perhaps this isn't the right cloud vendor for you. Lastly, responsibility for a data breach still falls on the shoulders of the customer. That means it's still your problem. Liability is still a risk. Mitigate that with log management and correlation in order to get the visibility you need to optimize and secure your environment.

Securing your Private Cloud Environment

Strategies and Considerations for Securing Private Cloud Environments

On the back end of private cloud environments you’ll find multiple flavors of virtual software loaded directly onto hardware. This virtual software is essentially the host operating system. VM Host is the base hypervisor and hardware. Think of it as the house. The guest operating systems (Guest OSs) are the virtual machines living in the house.

As the basis for all public and private clouds, virtual infrastructure is how it's done. So this conversation we're about to have is related to the back end of the private cloud. If you're building one, it is important for you and your organization to understand how to maximize the benefits and mitigate the risks of a private cloud infrastructure. There are several key things to keep in mind when trying to secure the virtual environment before even loading guest operating systems.

Most virtual solutions are transparent, by design, to the guest operating systems. The same way machines are secured in physical environments, they are secured in a virtual environment. This includes segregating networks, defining domain security policies and installing antivirus.

But unlike security for their physical hardware cousins, virtual machine infrastructure security seems to be lagging behind, and although virtualization is consuming datacenters worldwide, many organizations fail to recognize that security basics are still security basics.

According to Gartner, 16% of server workloads were running on virtual machines by the end of 2009. Gartner expects this to grow to roughly 50% (58 million machines) by 2012. Unfortunately, Gartner also predicts that 60% of these virtual machines will be less secure than their physical hardware predecessors.

Why are Virtual Machines insecure?

The reasons that VM servers are less secure than their traditional hardware counterparts are as follows:

Security isn’t considered at the beginning of the project, which is often the case. In many situations a public cloud project is begun, and from there each project becomes a knee jerk reaction.

If the VM host OS layer is compromised, all guest OSs can be compromised. This is called Hyperjacking. More on that later.

Although most public cloud vendors maintain adequate controls for admin access to the virtual machine monitor, many private clouds do not.

Segregate and separate. VM hosts create flat networks. You’ll need to change that. In a non-virtual world, traditional data-centers had segregation and network traffic could be inspected, filtered and monitored by a number of security products. In a virtual world, these are a rare commodity. The local communication between virtual servers is largely untouched and unmonitored. If the traffic runs through a virtual switch it’s practically invisible because it never hits the wire. It’s just traffic between virtual hosts on virtual links. So virtual traffic between virtual machines needs to be monitored.

Separation of duties is something that we in security often push. Unfortunately, in a virtual server environment, the back-end of a private cloud environment, you’ll often find that the server team and the operations team are the same people who do both provisioning of machines and managing virtual switches. So that means that there’s rarely any integration between the tools and security controls to be implemented for the network and security groups. And what THAT means, is that without visibility into configuration and policy changes, topology specifications and audits, the network and security team has zero view into what’s taking place at the access layer.

In security circles, we also talk about the "principle of least privilege." This says that you should not give anyone more privileges than the minimum they need to do their job. Defining roles that grant different levels of access will make life much easier.
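A minimal sketch of least privilege via roles; the role and permission names here are hypothetical, but the deny-by-default shape is the point:

```python
# Each role grants only the permissions needed for that job; anything
# not explicitly granted is denied. Names are hypothetical.
ROLES = {
    "vm-operator": {"vm.start", "vm.stop"},
    "network-admin": {"vswitch.configure"},
    "auditor": {"logs.read"},
}

def is_allowed(role, permission):
    """Deny by default; grant only what the role explicitly holds."""
    return permission in ROLES.get(role, set())
```

An auditor can read logs but cannot stop a VM, and an unknown role gets nothing at all. That default-deny posture is what "least privilege" buys you.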

How do you secure a traditional server? First, lock down the server OS (usually Windows or Linux). Now, as you go virtual, instead of just securing the server OS, you also have to lock down the VMkernel and the VM layer (the host OS), as well as the console. The same thing you'd do with your weekly Microsoft patch plan is what you should do with your VM infrastructure: even though the platform is quite secure, stay up to date on patches. There are security updates in there, not just bug fixes.

What you really want to avoid is Hyperjacking, which involves compromising the hypervisor. This is the lowest level of the OS stack, and the hypervisor has more privileges than any other account. At this level it’s impossible for any OS running on the hypervisor to even detect that a hack has taken place. So the hacker can control any guest VM running on the host.

When you go virtual, you add that other layer to the mix, the hypervisor. So again, the hypervisor needs to be secured at all costs. It’s mission critical because an attack on the hypervisor can lead to the compromise of all hosted workloads, and successful attacks on virtual workloads can lead to a compromised hypervisor. Another concern would be a VM Escape, which is an exploit that is run on a compromised guest OS to attack and take over the underlying hypervisor, which can then result in a hyperjacking.

Moving up the stack are those OS patches. Although it's super easy to spin up another guest operating system, admins sometimes forget that the virtualization software is there. So make sure that just as fast as virtual machines are spun up, patch distribution software is installed, and antivirus, service packs and security policy changes are applied to all of those virtual guest operating systems.
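One way to enforce that is a provisioning gate: a new guest doesn't go into production until every baseline step is checked off. A hedged sketch, with illustrative step names:

```python
# Baseline steps a new guest VM must complete before going live.
# Step names are illustrative placeholders.
REQUIRED_STEPS = {"os_patches", "antivirus", "security_policy", "patch_agent"}

def ready_for_production(completed):
    """A VM goes live only when every baseline step is complete."""
    return REQUIRED_STEPS.issubset(completed)

# Which steps are still outstanding for a half-provisioned guest?
missing = REQUIRED_STEPS - {"os_patches", "antivirus"}
```

The set difference tells the admin exactly what was skipped in the rush to spin the guest up, which is the failure mode the paragraph above warns about.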

The biggest security concern, however, is the insecurity of the individual guest OS. The VM software you use will separate the guests from both each other and the VM host, so if one of the guest VMs does get compromised, it's unlikely it could affect the host, with the exception of consuming more memory/processor/network resources.

Be aware that the easier move for a hacker looking to steal data would be VM Hopping. This is a situation where an attacker compromises a virtual server and uses it as a staging ground to attack other servers on the same physical hardware.

The last threat to VMs themselves would be VM Theft. This is the virtual equivalent of stealing a physical server: take the whole box, run off with it, then fire it up later and steal the data. Same concept, except in this situation the virtual machine file is stolen electronically and attacked later.

How can you make the private cloud more secure?

Start with the base layer, the lower stack: the hardware and traffic. Force all traffic between hosts to be inspected by an IPS (intrusion prevention system). Each VM host should also have a distinct ingress/egress VLAN pair, with the IPS configured for VLAN translation between each ingress and egress VLAN. The goal is to force all VM-to-VM traffic across the wire, where it can be inspected, monitored and potentially filtered. Of course, as the private cloud grows, this can become complicated and costly between data-centers and DR sites.
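The VLAN-pair idea can be sketched as a simple mapping: every flow leaving a host enters on its ingress VLAN, passes the inspection point, and exits on the paired egress VLAN. The VLAN numbers and host names here are hypothetical.

```python
# Hypothetical ingress/egress VLAN pairs, one distinct pair per VM host,
# so no VM-to-VM flow can bypass the inspection point.
VLAN_PAIRS = {"esx-host-a": (101, 201), "esx-host-b": (102, 202)}

def inspection_path(host):
    """Path a VM-to-VM flow takes so the IPS sees it on the wire."""
    ingress, egress = VLAN_PAIRS[host]
    return [f"vlan{ingress}", "ips", f"vlan{egress}"]
```

Because each host gets its own pair, traffic between two guests on different hosts always crosses the wire through the IPS instead of staying invisible inside a virtual switch.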

Another option is to define an IPS and firewall on each VM host, with policies configured to inspect the traffic. This makes sure all intra-virtual communication is inspected. Of course, there are some performance hits you'll take running all those additional VM IPSs and firewalls, plus the monitoring for all traffic; however, in the end security should be paramount.

Next up is a ‘love connection’ between the above two. It’s a mix of both. Route traffic to an actual IPS where it’s filtered, can be monitored, etc. Then send the traffic off to a destination VM.

Another security option as you move up the stack after traffic would be securing your hypervisor. Keep your hypervisor console patched. Just like any OS, VM Servers will have security patches that need to be deployed. The majority of these patches are related to the Linux-based OS inside most service consoles.

Additionally, ensure that virtual machines are fully updated and patched, and that all provisioning is done with security tools in place, before they are turned on in a production environment. Although VMs are much easier to move around and faster to deploy than physical machines, there is a greater risk that a VM that is not fully updated or patched might be deployed. To manage this risk effectively, use the same methods and procedures to update VMs as you use to update physical servers.

Another tip would be to use a dedicated Network Interface Card (NIC) for management of the virtualization server. By default, NIC0 is for the parent partition. Use this for management of the Host machine. Don’t expose it to untrusted network traffic and please don’t allow any VMs to use this NIC card. Use one or more different dedicated NICs for VM networking.

Lastly, before installing the private cloud services, (the reason you build this infrastructure) I’d recommend using disk encryption to protect the VMs. If one is stolen, it’s worthless to the thief, and the data at rest is also safe.

Summary

When building a private cloud, it’s important to remember that going VM isn’t as easy as moving servers to a host machine. Security is a bit different, but security practices are still the same. Recognize the additional components. Recognize that going VM often ends up in a flat network, and you’ll need to change that for security’s sake. Lock down your console and hypervisor. And remember the same lock down procedure you used in traditional servers is also critical, if not paramount. Follow these tips and you’ll be ready to host those apps to your user community in no time. Securely.

Cloud Providers: A Roadmap of What NOT to do to Your Customers

I’ve gone on and on about how cloud providers offer services as soon as said services are available. Security isn’t the priority. Feature set is driven by monetary momentum. Development dollars swirl around new features for new money. That’s just part of the joy of capitalism. Cloud providers, this one is for you. I present to you a list of “What Not to Do to Your Customers.”

Everyone loves a good upgrade, especially when that upgrade brings new tools, additional time/resource/development savings, a new feature set and helps customers in some way. But as a cloud provider, when you offer this upgrade, it should never be an overnight switch without a whole host of contingency built in. Customers losing access to the services, losing their data or losing their own customers is a risk. Rather than put yourself in a compromised position with your customers, their data and your bottom line, you have some work to do. So no hasty cloud upgrades!

If you are a software-as-a-service provider, whether it be contact management, email, a social media website, or something else entirely, make sure your customers know the upgrade is coming and can take steps to ensure that your upgrade doesn't leave them treading water while you fix those unforeseen bugs. Customers can't tread water forever. And you'd hate for your customers to drown because of your negligence. So, keep your customers in the loop. Make sure they understand what changes are happening, how the changes will benefit them, what the time frame is for the upgrade, and what the potential risks are. Don't just pull the trigger and hope that your customers will be okay with the result.

Customers should have the ability to opt out of the upgrade if necessary. I know it can be a big deal to have a mixed software environment, but losing your customers is a bigger deal. The cloud isn’t a dictatorship.

If the upgrade goes to hell in a handbasket, offer a rollback to the last stable version for your customers. Or do a slow rollout across the customer base so that customer issues can be solved, familiarity with the upgrade problems can be established (lab testing is only so reliable), and resolutions can be found. Real-world upgrades seem to always have a gremlin or two hiding in the data. So when doing the upgrade to a new version, or adding features, keep the installation disks of that prior version around. It's also not a bad idea to offer customers a choice of time frames in which to do the upgrade. Many businesses have a code freeze from December to February because of the Christmas holiday. Hence, it's a bad idea for you to do the upgrade in that time frame. But not every business is bound by the holiday season. For some, the busy season might be tax season. Or summertime. Or Halloween. Who knows. Best to give the option of upgrade time back to the customers based on their "busy season."

For your customers, this isn't a hobby. It's business. And mitigating any risk possible is a requirement, not an option. So as a cloud provider, whether you're offering SaaS, PaaS or IaaS, offer your customers a time frame to opt into for the upgrade. Don't force-feed it. Remember, you're adding value to the software. You're helping them, if you do it right. If you're still hell-bent on doing the upgrade, make sure the configuration settings of the current platform are kept for the new platform. With SaaS, PaaS or IaaS, customers will customize their tool to meet their needs. This requires man-hour resources, potential head-count resources and maybe even development resources. So if you're going to do the upgrade, make sure that at least the configuration of the software is maintained through the upgrade path to ease the process.

Just remember your SLA. Chances are, you're bound by some legal agreement to your customers. They will take you to the cleaners if you can't maintain continuity per your contract, and in today's social media world, bad press can and will stick with you. Best to put your best foot forward at all times, make your customers happy and enjoy the growth of your business as your customers enjoy the growth of theirs. You're in a partnership. Your job is to make them successful. And as a result, you succeed too.

With upgrades come risk. Risk of losing data, risk of mixed-version environments, risk of security breach. Doing a full set of security testing on the software you plan on rolling out is best practice if you want to protect yourself, your customers and all of the people they do business with.

A Deep Dive Into Hyperjacking

In the early 2000s, virus writing was all the rage. Think of the massive virus outbreaks that took place. Millions of infected hosts. The Sober virus infected 218 million machines in seven days. The MyDoom virus sent over 100 million virus-infected emails. The "I Love You" virus infected 55 million machines and collected their usernames and passwords. These attacks were about bragging rights, not money.

Hyperjacking 101 - A Deep Dive

That’s so last decade. Virus writing is so passé. Following it, however, was the age of malware and phishing. This was monetary bound, and sought to phish the user/pass and actually use them. Western Union must have made a bazillion dollars as the fence for criminals and crooks worldwide.

There’s a bigger picture. As we’ve moved from local workstations and servers containing copious amounts of tasty private information and data, toward the cloud where all of that data sexiness lives behind the locked doors of the Cloud Providers, the game is changing.

There are new technical challenges.

The back end to the cloud is Virtual infrastructure. And if you think of a cake, there would be the pan which is the physical piece, or the hardware. Above that is a yummy bit called the hypervisor which is a software piece that governs the host machine. This is essentially the cake part, the soft mushy filler between the pan and the frosting. But it’s software. The hypervisor dictates the basic operation of the host machine. It’s the abstraction layer between physical hardware and virtual machines in any virtualized platform. And on top of that abstraction layer is the frosting, or the guest operating systems. These systems are the “Virtual Machines”.

Enough of the cake references. I’m getting hungry.

In my previous articles I outlined the fact that going virtual adds simplicity for IT departments. It's easier to provision servers, it's easier to move servers, it's easier to decommission servers. It's easier to set up networks. It's easier from a management perspective all around. But in order to attain this simplicity, we are adding complexity on the platform side (the hypervisor), and not enough complexity on the network side (out of the box, virtualization creates flat networks). So in exchange for simplicity on the management side, we need to add complexity in the form of a hypervisor and add security back into a flat network. Segregation. Divide and conquer. Yin and yang.

One element of this added complexity is hyperjacking. Still in its infancy, hyperjacking revolves around the corporate world's newfound enthusiasm for application, operating system and solution virtualization. Hyperjacking means jacking the hypervisor stack: installing a rogue hypervisor that can take complete control of a server. Regular security measures are ineffective because the OS will not even be aware that the machine has been compromised. Sneaky sneaky.

Why? Or rather, how? Because the hypervisor actually runs underneath the operating system, making it a particularly juicy target for nefarious types who want not only to own (or PWN, to those in the know) the host servers in order to attack guest VMs, but also to maintain persistence. (Okay, they've gotten into your hypervisor and stolen your data, but they'd also like the ability to come back whenever they want to steal more data.) Hyperjacking is a great way not only to compromise a server and steal some data, but also to maintain that persistence. Get control of the hypervisor and you control everything running on the machine. The hypervisor is the single point of failure in security, and if you lose it, you lose the protection of sensitive information. This increases the degree of risk and exposure substantially.

Now, what we've seen are some lab-based examples of hyperjacking-style threats. But not many are in the wild. Or rather, we haven't identified anything in the wild yet. Whether or not this has actually taken place across a mass of VMs, we don't know. But some examples of hyperjacking-style threats include a specific type of malware called virtual machine-based rootkits.

For a hyperjacking attempt to succeed, an attacker would need a processor capable of hardware-assisted virtualization. So the best way to avoid hyperjacking is to use hardware-rooted trust and secure launching of the hypervisor. This requires the technology to exist in the processor and chipset themselves, offering trusted execution technology as well as a trusted platform module (TPM) to execute a trusted launch of the system from the hardware up through the hypervisor.

Now many vendors have subscribed to a set of standards from the Trusted Computing Group covering measured launch and roots of trust. And with that, there are also several security vendors offering hypervisor security products that work by checking the integrity of the hypervisor while it's running.

To do this, the programs examine the hypervisor and its program memory, as well as the registers inside the CPU, to see if there are any unknown program elements. Rootkits in the hypervisor hide by modifying certain registers of the CPU and relocating the hypervisor somewhere else. In that case, the hypervisor integrity software locates the relocated hypervisor, and it does so without the hypervisor being aware that it's taking place. Of course, adding hooks into the process for examining the hypervisor also makes me uncomfortable, but that's a conversation for another day.
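The integrity comparison at the heart of such products can be illustrated conceptually in a few lines. Real tools measure hypervisor memory and CPU registers below the OS; this sketch only shows the compare-against-known-good idea, with made-up image bytes.

```python
import hashlib

def measure(image_bytes):
    """Stand-in for measuring the hypervisor image into a hash."""
    return hashlib.sha256(image_bytes).hexdigest()

# The known-good measurement would be recorded at trusted launch.
known_good = measure(b"hypervisor-v1.0")

untampered = measure(b"hypervisor-v1.0")
tampered = measure(b"hypervisor-v1.0+rootkit")
```

Any single-byte change to the measured image yields a completely different hash, so a relocated or modified hypervisor fails the comparison even if it otherwise behaves normally.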

This is all software, and software has rules that can be bent. So while addressing risks associated with the hypervisor we must remember that the hypervisor and the guest operating systems it supports are software and as such, need to be patched and hardened. Period.

Now with that in mind, you’re probably wondering “Well, how can I harden my environment?” Well, there are risks with all, but at least you’re thinking about it. Proactive thinking vs. reactive. Well done. Let’s put some basic design features in your environment to mitigate your risks:

1. Like security? Well, keep the management of said hypervisor in a secure environment. This is more of a network design issue than anything. It's not hypervisor related, but still something to be considered. Never connect your hypervisor management functions to a less secure environment than the virtual instances. Don't put your management function access in the DMZ. Although this sounds like a common-sense thing, we've all heard horror stories. Don't be the horror story.

2. Your guest operating systems should never have access to the hypervisor. Nor should they know about it. Guest operating systems should never know they’re virtual. And your hypervisor should never tell a guest OS that it’s hosted. Secret secret.

3. Embedded solutions are much safer. I know this is common sense to most. Operating system-based solutions are traditionally less secure because there are more moving parts. My father always said, "keep it simple, stupid." Same deal. Embedded modules are simpler and more functional, as well as easier to manage, maintain and secure.

Understanding the hypervisor risks is one step toward securing against them. Although much of the hyperjacking conversation has been theoretical, don't set yourself up in a place where you could be compromised if some 0-day hypervisor attack pops up. Secure your hypervisor, secure your world. Or at least part of it.

The Three Categories of Cloud Computing: What’s Your Flavor?

Cloud computing is made feasible through the deployment and interoperability of three platform types: IaaS, PaaS, and SaaS.

We often hear the term “Cloud Computing.” It’s the pet rock of 2011. The very mention of it gets marketing departments excited, vendors offering all sorts of “cloud doo-dads” and IT departments ramping up for the next best thing. But, let’s break down the parameters of the Cloud in order to identify what is and isn’t cloud. To do this, we’ll talk about the NIST definition of “cloud,” and then tackle the stack.

Cloud computing, if done correctly, delivers unprecedented cost efficiency, scalability, flexibility, elasticity, interoperability, reliability and security. But take a step back. “Cloud Computing” is actually a general classification of three services. It’s the broad term for the stack that NIST breaks down as follows:

“Cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.”

In other words, it’s the ability for end users to quickly and easily acquire and utilize bulk resources. Period. And those resources are pooled across multiple customers. Also included in the NIST definition are a number of characteristics such as “on demand self-service.” This defines the end users’ ability to receive services without any of the usual long delays from IT departments.

The NIST definition also highlights rapid elasticity, which allows the service to scale quickly when necessary, based on high demand peaks. Cloud computing is a metered service much like other service utilities (gas, electric, water).

End users should have the ability to access the services provided via standard platforms such as mobile devices, desktops and laptops.

Tons of information has been written about the benefits of cloud computing in the areas of cost, scalability and security. All of these points are valid. But let’s break down what the different pieces of the cloud stack are, in order to identify how your organization can capitalize on its many benefits.

First we’ll go high level. Cloud is made feasible through the deployment and interoperability of three platform types. These three layers are:

IaaS – Infrastructure as a Service

PaaS – Platform as a Service

SaaS – Software as a Service

Now this stack is easily broken down as follows: think of "Infrastructure-as-a-Service" as the road. It's the basis for communication. It's the bottom layer that you build your platform on. The platform is the cars traveling on that road; PaaS rides on IaaS. And on top of that, the goods and passengers inside the cars are the SaaS. It's the end-user experience. It's the end result. Let's take that a step further.

Infrastructure-as-a-Service (IaaS)

Cloud providers offering Infrastructure as a Service tout data-center space and servers, as well as network equipment such as routers and switches, and software for businesses. These data centers are fully outsourced: you need not lift a finger, upgrade an IOS image or re-route data. Although this is the base layer, it allows for scalability and reliability, as well as better security than an organization may have in a local co-lo or local data center. In addition, these services are billed like utilities, so you pay for what you use, just as with water, electricity and gas. Your payment varies with your capacity and usage.

Because IaaS vendors purchase equipment in such bulk, you, Mr. Customer, get the best gear for the lowest price. Hence, the chief financial benefit of IaaS is cheaper access to infrastructure.

With the pay-as-you-go model, customers are able to save quite a bit of coin compared with investing in a fixed-capacity infrastructure, which will either fall short of or exceed the organization’s need. Buying hardware that’s barely used is a waste of hardware, air conditioning, space and power.

Operational expenses versus capital expenses: cloud is better. Because these computing resources are used and paid for like a utility, they can come out of the operating expenditures budget rather than capital investments. In other words, instead of depreciating the gear over three years, you’re able to expense the monthly charge this year. And the next year. And the year after that. It’s an elastic service.
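A quick back-of-the-envelope comparison makes the capex-versus-opex point concrete. Every figure below is hypothetical, chosen only to illustrate the trade-off between buying for peak capacity and paying for actual usage.

```python
# Back-of-the-envelope: fixed capacity (capex) vs. pay-as-you-go (opex).
# All dollar figures are made up for illustration.

def fixed_capacity_cost(hardware_price, years=3, overhead_per_year=5_000):
    """Capital purchase depreciated over its service life,
    plus power, cooling and floor space each year."""
    return hardware_price + overhead_per_year * years

def metered_cost(monthly_charges):
    """Operating expense: you pay only for what you used each month."""
    return sum(monthly_charges)

# A server sized for the annual peak, mostly idle the rest of the year...
capex = fixed_capacity_cost(hardware_price=20_000)   # 35,000 over 3 years

# ...versus a metered service over the same 36 months, where two months
# a year hit peak usage and the rest stay quiet.
usage = [400 if month % 12 in (10, 11) else 150 for month in range(36)]
opex = metered_cost(usage)                           # 6,900 over 3 years

print(f"fixed capacity: {capex}, metered: {opex}")
```

The exact numbers don’t matter; the shape does. The fixed build pays for peak capacity every day, while the metered bill rises only in the months that actually need it.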

Platform-as-a-Service (PaaS)

Provisioning a full hardware architecture and software framework to allow applications to run is the essence of Platform-as-a-Service. There’s a huge market of customers who require flexible, robust web-based applications. But in order for these applications to run, there needs to be a platform supporting them that is just as robust and flexible. Cloud providers offer this environment and framework as a service. Customers’ developers can write their code without regard to the OS behind it. So instead of software being written for Apple, Linux or Windows, it’s written for a development environment provided by cloud providers such as Amazon, Microsoft and Google.

Software-as-a-Service (SaaS)

Software-as-a-Service (which I’ll refer to simply as SaaS) is the practice of provisioning commercially available software but delivering access over the net. The customer doesn’t have to worry about software licenses, since they’re handled by the service provider, which also handles upgrades, patches and bug fixes. Some examples are office productivity software you access online, like Google Docs. You can also essentially rent contact management software, content management software, email software (Google Mail?), project management software and scheduling software. It’s all online. All easily available on the internet.

Why is this a big deal? Well, you no longer have to pay for expensive hardware to host the software, or to reach the software (VPNs, dedicated links, etc.), and you don’t need the employees (with their associated salaries, benefits, office costs and so on) to install, configure or maintain it. The application is handled on the back end by the SaaS provider. That’s sort of a big deal regardless of the size of your business. Money is money. Your IT staff is then free to spend its time and resources on other projects, or you can simply eliminate unnecessary IT staff.

Just think about how many unnecessary resources can be eliminated when users no longer need all sorts of applications on their local computers, along with the associated troubleshooting frustrations. And because it’s cloud-based, there’s a Service Level Agreement covering problems.

The other thing to think about is that by moving your infrastructure to the cloud (IaaS), you no longer have the headache of building out and maintaining that infrastructure. It scales when you scale. By pushing the development platform out to the cloud (PaaS), your resources can focus on just that: development, in a stable, secure, reliable environment. And by putting software in the cloud and accessing it through a web browser, applications are no longer bound to an operating system. The operating system becomes nothing more than a stage for a web browser to access the software and do the work. Work becomes device-agnostic. So, users who want to be on Linux workstations can do just that. Prefer a Mac? No problem. PC is your game? We can do that, too.

With the cloud, there are many ways to save money, as well as increase reliability and security. Understanding the options will help you do just that.