Disaster Recovery and The Cloud: A Recipe for Success

Cloud Disaster Recovery – Ingredients for a Recipe that Saves Money and Offers a Safer, More Secure Situation with Greater Accessibility

Cloud computing and disaster recovery are like peanut butter and chocolate – two great flavors that taste even better together. Several companies have recently entered the “Disaster Recovery in the Cloud” arena, offering services such as data backup, business continuity and disaster recovery services for MSPs packaged together into a single suite. Before jumping on that bandwagon, let’s take a deeper dive into these three topics.

When businesses hear the phrase “Cloud Computing,” their initial question is (understandably) how much control they will retain. There is the fear and uncertainty of added risk as well as the fear of losing control of their data. This is a common thought pattern, and is completely justified.

So why move to the cloud?

The promise of cost savings derived from cloud computing is very attractive, but concrete financial returns are not always quickly achieved. Except, perhaps, when it comes to disaster recovery.

Cloud computing, by nature, is a distributed concept with some backup already built in. However, the reduced reliance on local, physical infrastructure, and the subsequent perceived risk of trusting another vendor with your business continuity, certainly gives some organizations pause. With due diligence and an understanding of the available feature set, though, cloud disaster recovery is a very attractive solution. The additional cost savings doesn’t hurt, either.

At the end of the day, cloud-based disaster recovery allows you to add important capabilities to your IT infrastructure at a reduced cost—especially when you consider your alternative options.

Companies that have balked at the cost of building out their own disaster recovery infrastructure often find the cloud more cost effective. Buying the hardware, software and network infrastructure for a “what-if” solution can be very expensive. Think about it: paying for primary and secondary gear, plus the maintenance and support of the lot, can be tough to swallow, especially considering failover gear just sits in standby until something fails. Why pay for a room full of gear whose sole purpose is waiting for a failure?

Many companies do in fact use an outside vendor for disaster recovery, so a move to the cloud isn’t much of a change.

Here are some major points you should keep in mind when thinking about your approach to cloud disaster recovery:

1. Make sure your cloud provider offers business continuity as a service, and that it’s part of your SLA.

2. The cloud provider should know its hardware and software inside out and monitor any managed gear for failures. It should have multiple datacenters in multiple locations in order to quickly move data around or bring up backup and additional VMs if necessary.

3. Choose business continuity. Backup solutions are wonderful, but take it a step further with business continuity. Although the two sound one and the same, the key difference is offline backups versus online backups that are accessible from a different location. Simply flip the switch, and you’re back in business.
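That “flip the switch” step boils down to a health-check-driven failover decision. Here’s a minimal Python sketch of the idea; the health-check URL and site names are hypothetical, and a real provider’s DR orchestration would handle this for you:

```python
import urllib.request

def is_healthy(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def pick_active_site(primary_ok: bool, standby_ok: bool) -> str:
    """The 'flip the switch' decision: fail over only when the primary is down."""
    if primary_ok:
        return "primary"
    if standby_ok:
        return "standby"
    raise RuntimeError("both sites are down; invoke the manual DR runbook")
```

In practice you’d feed `pick_active_site` with the results of `is_healthy` checks against your primary site and your provider’s standby replica, then update DNS or a load balancer accordingly.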

While one of the key drivers for cloud computing is reduced cost and a richer feature set, restoring data in the cloud is also much quicker than in other disaster-recovery scenarios, and there’s no hardware to buy. A full disaster recovery solution at a reduced cost sweetens the pot. Cloud computing and disaster recovery, much like peanut butter and chocolate, have a tasty future ahead of them, with the sweetest part coming when you see the savings on your bottom line. So, if you choose to dip your spoon into cloud disaster recovery, these points can be your key ingredients for a delicious recipe that saves your organization money and offers a safer, more secure situation with greater accessibility.

The Haj of Netsec Nerds Worldwide: Blackhat Las Vegas

I arrived yesterday, ready for Blackhat again. Since this time last year, I’ve attended Blackhat: DC, Blackhat: Abu Dhabi and Blackhat: Europe. And here I am again. Blackhat Las Vegas.

It’s bar none my favorite show of the year. This is the big show. The haj of netsec nerds worldwide. This is our mecca. This is Blackhat/Defcon. The anticipation began to creep up a few weeks back when I came to Las Vegas for Cisco Live, which was a great show, too. But it’s not like this. Cisco Live is a networking event supported by sponsors. Blackhat is about the nerds. It’s about those of us who live and breathe security. It’s about the blackhats and the whitehats. And a bit of grey in between. This is a show for nerds, by nerds.

Setup happened today for the training, which starts tomorrow. I’m excited. Tomorrow is BackTrack training, and rumor has it BackTrack 5 is being released. That’s really exciting, as BackTrack is the premier pen testing suite used worldwide by hackers and security engineers.

This may sound like a shock to you, but I’ve seldom used BackTrack. My personal style has involved online tools to mask my identity. Online tools to do hours and hours of recon to craft my attack long before the trigger is pulled. I’ve always had the impression that BackTrack was more or less a brute forcer’s dream. So, I’ve never taken the plunge. I’ve used Metasploit and Wireshark and a host of other recon and/or attack tools, but never once have I used a suite such as BackTrack to take a run at a network, hack hosts or take down applications. It’s such a different animal to me.

There’s a difference between hackers and penetration testers, and much of it comes down to time. A penetration tester’s job is paid by an hourly rate or a salary, and he can’t take six months to pen test a network. So generally pen testers go in and run through their checklist of ports to probe, OSes to fingerprint and SQL to inject. The salaried employee, likewise, will push through the task as quickly as possible just to finish.

But the reality is… that’s not how hackers do it. When you hack… time is on your side. Time is your friend. You have lots of it. You’re not in a rush. Low and slow is the saying, and it’s never been more true than it is now.

As time goes by, I find myself saying that phrase quite a bit more. “Low and slow.” And I can’t help but feel like one basic thing is prompting that.

There are several technologies on the market today which are ridiculously expensive, and I can’t help but feel like they are nothing more than Dumbo’s feather for security architects and CISOs who don’t know any better. Either these tools give them a false sense of confidence, or they lose confidence in security altogether due to the constant stream of false positives.

Tomorrow starts the BackTrack course I’m auditing. And I’m excited to get started.

I’ll post more on how it goes, my thoughts on the tool and the teaching.

Bose Quietcomfort 15 headphones: Noise canceling like wow.

If I were to sit down and go through some boxes to figure out how many pairs of headphones I have in the house, I’d probably come up with five or ten pairs. Maybe closer to 15. I don’t know. I just know that I’ve tried everything from Sony to Shure to Apple to Bang & Olufsen. But as of this moment, I’ve found the holy grail of headphones.
The Bose Quietcomfort 15.

Oh. my. god. I didn’t think there was a way on earth to make the entire world disappear in one flip of a switch. HVAC? Gone. Road noise? Gone. Dryer? Gone. Dishwasher? Gone. Annoying neighbor talking to you? Well, they’re not gone, but at least you hear them less.

If I were to recommend a pair of headphones… THESE would be the ones.
I have a pair of Sony noise-canceling headphones? The Bose shit all over them.
I have a pair of Bang & Olufsen headphones? They’re fancy, but sound like crap.
I even have a pair of Bose “passive noise canceling” headphones. What a joke. They just cup your ears.

These Bose headphones are so serious that when you flip the switch it feels like someone has dunked you into a deprivation chamber. Your ears feel pressure. Your mind thinks you’re at high altitude. But you’re not. You’re in your own world, free to enjoy your time in isolation with music, a good book and/or a life without the neighbor… at least a little bit.

The new iPod Nano: Oh, you sexy little beast.

[Photo: The hot shit from the year 2000 - 32 megs. Bam!]

My first MP3 player, back in 2000, was a Casio G-Shock. It was ahead of its time. 32 megs. It was slick. It almost held an entire CD. Back then people said “WTF is an MP3?” It came in the form factor of a watch. It sat giant on my wrist, the headphone cord running up my shirt to my ears.

Fast forward 7 years.

Commuting from San Francisco to San Jose on my motorcycle, I had a faithful little nugget called the iPod Nano. It clipped to my belt and gave me 1 GB of music all the way to work. Fantastic battery life. Just set the playlist and off I went.

Fast forward to now. The new iPod Nano. Color screen. Radio. Pedometer. And 16 gigs of music. What a wicked little machine.

It’s impossible to change music on the device itself while wearing motorcycle gloves, though. Perhaps I can with the inline button on the headphones, but blasting down the highway that may prove to be a challenge. It’s loud if you want it to be, but it’s clear. Crisp highs. Solid lows. It’s amazing audio in a very, very small form factor.

In the sixth generation of the iPod Nano, Apple has really put together something fantastic.

There are a ton of accessories available for the Nano, including something that hits close to home. A bit of nostalgia. It’s a watch band that turns the Nano into a watch, much like that first MP3 player I had. The G-Shock, but better.

In 11 years we’ve gone from 32 megabytes of music in a watch to 16,384 megabytes.

Where will we go next?

Evaluating Cloud Solutions – What Type of Cloud is Right for Me?

Evaluating Cloud Computing Solutions – Public vs. Private Clouds? Hybrid Clouds? Which is Right for Your Business?

The first known reference to the “Cloud” as it relates to computing was in Douglas Parkhill’s 1966 book The Challenge of Computer Utility. Parkhill explained his conception of a “Private Computer Utility.” He compared computing with the electrical industry and its extensive use of hybrid supply models. When the electricity grid was built, private on-site power generators were quickly cycled out. No longer did local businesses have to build, buy and maintain the hardware to create electricity, which was expensive both from a hardware as well as a human resource perspective. While it did carry some risk, electricity as a utility made sense in terms of finance and risk management. In the world of cloud computing, there are three different types of “clouds” – public clouds, private clouds and hybrid clouds. Depending on what type of service or data you’re dealing with, you’ll want to compare what private, public and hybrid can offer. In most cases, the most important variable is the degree of security and management the hardware or application requires.

While we as an industry like to think that cloud computing is new, it’s not; the concept was described more than forty years ago.

With that said, it’s time to figure out which cloud architecture is right for you.

Private Cloud

A private cloud is one in which the services and infrastructure are maintained on a private network—generally a local datacenter within an organization. These clouds offer the greatest level of security and control, but they still require the company to purchase and maintain all the software and infrastructure, which can significantly reduce cost savings. A private cloud is the obvious choice when:

·   Data is your business, so security and control are paramount on your list of requirements.

·   Your company is large enough to run a hyper-scalable cloud datacenter efficiently and effectively on its own. This generally implies large enterprises.

·   Your business is bound and gagged to conform to strict security and data privacy requirements as well as compliance mandates like PCI-DSS and SOX.

Some vendors use the term “Private Cloud” to describe products and services that are merely “cloud-like,” or that their market-ecture describes as “emulating cloud computing on private networks.” These products are often virtualized solutions that can host applications and virtual machines in a company datacenter. Frankly, I see little value in these “private clouds,” as they’re more focused on virtualization than cloud computing.

Don’t get me wrong, I think virtualization has its place as well. It’s certainly used in cloud computing, but virtualization alone isn’t what makes the cloud the cloud. Virtual technologies are valuable to businesses but often tend to obscure the full capabilities of cloud computing. The term “private cloud” borders on deceptive advertising; it fails to deliver on the potential of cloud computing, and those who use it are hanging onto the coattails of the cloud.

Depending on your industry, though, private clouds do offer some benefits, including shared hardware costs, quick recovery from failure and upscaling/downscaling depending on demand. And that’s fantastic. But the organization still has to buy, build, support and manage the infrastructure. This solution doesn’t eliminate up-front capital costs, and it lacks the economic model that makes cloud computing so compelling in the first place.

Public Cloud

A public cloud is one in which the services and infrastructure are provided off-site over the internet. At its essence, “Cloud Computing” refers to the public cloud. These clouds offer the greatest level of efficiency in shared resources as well as efficiency in cutting spending. However, they are also more vulnerable than private clouds. A public cloud is the obvious choice when:

·   You need incremental capacity: the ability to add computing capacity for peak times. When the proverbial crap hits the fan, you’ll have capacity available to handle it, and those resources can be used by other VMs for their own tasks when you’re not in peak mode.

·   Your standardized tools and applications are used by many employees. Examples include e-mail, contact management systems or a company intranet site.

·   You need a sandbox to develop applications across geographic locations. Development and testing are a great use case for Cloud, especially when collaboration is needed.

·   You have a SaaS (Software as a Service) application from a vendor who takes a hard-line approach to security.

The public cloud offers cheap, commoditized computing resources whose benefits – no capex, access to resources anywhere at any time, minimal support costs and staff, shared overall costs and no peak-load concerns – outweigh those of in-house resources with limited added value.

But chief among the concerns associated with public clouds are security and reliability. Make sure your security and compliance/governance strategies are well planned, as short-term cost savings could become a long-term nightmare.

Hybrid Cloud

A hybrid cloud offers a variety of public and private options with multiple providers. By using a hybrid approach, you’re able to spread things out over a number of providers to keep each aspect of your business in the most efficient possible environment. The major downside here is having to keep track of multiple security platforms and make sure all aspects of your business can communicate with each other. So, if the following situations describe your environment, then the hybrid cloud may be the best option for you:

·   Your company uses a SaaS application, but has security concerns. Private clouds are often used with VPNs (Virtual Private Networks) for additional security.

·   When your market spans multiple verticals, you may want to use a public cloud for client interaction while keeping sensitive data in a private cloud. This is an optimal use case for hybrid clouds.

When managing private, public and traditional datacenter models all at the same time, management can become complex. Maintaining a tool which will federate these separate pieces for the sake of SLAs and troubleshooting becomes the challenge.

Most of what people are calling “private clouds” share a number of qualities with public clouds and can thus be classed as a “hybrid cloud” architecture. Most large enterprises will be looking to run a hybrid architecture for several years to come (though many early adopters have already taken the plunge). The waters are tepid in different clouds for different reasons.

In summary, public, private and hybrid cloud environments can all be viable solutions, depending on your use case. Public clouds offer the greatest cost savings, but the least amount of security and control. Private clouds offer just the opposite, with costs being much higher due to hardware, software and maintenance; however, security and control are supreme. Hybrid is the best of both worlds, but can often be very complex to manage.
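The trade-offs above can be distilled into a rough rule of thumb. This Python sketch is purely illustrative; the inputs and thresholds are my assumptions, not a formal decision framework:

```python
def recommend_cloud(sensitive_data: bool, large_enterprise: bool,
                    cost_driven: bool) -> str:
    """Rough rule of thumb for picking a cloud architecture."""
    if sensitive_data and large_enterprise:
        # Control and security are paramount, and the scale makes it efficient.
        return "private"
    if sensitive_data:
        # Keep the sensitive data private, put everything else in public.
        return "hybrid"
    # No special security constraints: pick by economics.
    return "public" if cost_driven else "hybrid"
```

A regulated bank would land on "private", a startup with a public-facing app on "public", and most mid-size enterprises somewhere in "hybrid" territory.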

Take a step back, identify your use cases and requirements and then take the plunge. Cloud is not just the future. It’s today.

Why Cloud Tenancy and Apartments Have More in Common Than You Think

One of the most common questions about cloud security is around privacy and regulatory compliance. Questions around government mandates and industry requirements abound from IT managers considering a shift to the cloud—most of which relate to multi-tenancy.

Since there’s been so much discussion about multi-tenancy in the cloud lately, I thought I’d explain what it means for both cloud providers and cloud customers.

But first, a tangent:

Cloud Computing Tenancy

I live in an apartment. I have a small apartment that has everything I need. I enjoy living in an apartment because I don’t have to maintain the plumbing or electrical. I don’t have to rake the yard or clean the gutters. I don’t have to clean the pool or fix broken equipment in the gym. I don’t have to worry about security; it’s provided as part of my rent. I only have to worry about my belongings in my apartment. As a result, I get to enjoy the cost savings of a shared environment, and the robust amenities of my building. My apartment building has lots of other residents. Some work different jobs, some work at different companies. But we all share the same building, with the same resources and amenities.

My apartment building is a multi-tenant cloud. More on that later.

Cloud providers (landlords) love multi-tenant clouds (apartment buildings) because they rent the same resources to a large number of renters, and the renters get to enjoy all the financial savings of those shared resources. The cost savings is good news for all parties involved. The cloud, as you know, can provide amazing cost savings, fantastic up-times and someone else to blame or sue when there is a mistake, when there has been an accident or if problems go unfixed.

Today we’ll talk about security, reliability, auditability, quality of service and regulatory compliance in multi-tenant solutions vs. single-tenant solutions.

Reliability:

In my apartment building, there is a bank of washers and dryers. Everyone can use them—10 washers and 10 dryers for 300 families. But if the power goes out in the laundry room, we’re all wearing dirty underwear. This is a multi-tenant problem.

In the cloud, when a multi-tenant app goes down, it takes everyone with it. Take WordPress. They went down a few months back. One app down, everyone went with it. Imagine other Web apps out there. What if Salesforce went down? What if Gmail went down? One App. Life stops. Now what if that’s an industry specific app everyone is using. You see where this is going.

The upside? One application upgrade, one application maintenance: one application across the board saves time and money for the customer.

But what about single tenancy? 

Let’s talk again about washing machines. If we go single tenant on a washing machine, then each apartment that wants to pay for a washing machine can have one. It’s their washing machine, they don’t share it. It’s a single tenant solution. If their washing machine goes down, it doesn’t affect the other washing machines in the building. In this case, the cost savings obviously isn’t as pronounced as in a multi-tenant setting. Because in a building of 300 families, if even half of them want their own washer/dryer, we’re looking at 150 washers and 150 dryers, all of which need to be maintained, all of which can fail, all of which need to be supported individually and all of which carry their own price tag.
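The washing-machine arithmetic above works out as follows. The per-machine price is an assumed figure for illustration only:

```python
FAMILIES = 300
WASHER_COST = 600  # assumed purchase price per machine

# Multi-tenant: 10 shared washers for the whole building.
shared_total = 10 * WASHER_COST

# Single-tenant: half the families buy their own washer.
private_total = (FAMILIES // 2) * WASHER_COST

print(shared_total)   # 6000
print(private_total)  # 90000
```

Fifteen times the capital outlay, before you even count the 150 individual maintenance and support contracts, which is exactly why landlords (and cloud providers) prefer the shared model.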

Security

How about security in a multi-tenant vs. single-tenant situation?

Securing a building is one thing, but what about securing each apartment from other tenants? That also needs to be considered. Firewalls at the front of the network keep external threats out, much like a doorman. But what’s to stop your neighbor from breaking into your apartment? There’s no doorman to stop someone on the inside.

So, because of shared resources, security needs to be handled at a much lower level: segmentation of resources. You have to segment your apartment from your neighbor’s apartment. On the network side, that means segmenting those shared resources using MAC address control pools and VLAN (Virtual Local Area Network) tagging, with more advanced controls such as tagged zone segments, private VLANs and ACLs (access control lists) to define, enforce and maintain a secure environment.
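The ACL idea can be sketched in a few lines. This is a toy model, not real switch configuration: the first matching rule wins, and anything unmatched hits the implicit deny, which is what keeps one tenant’s VLAN out of another’s:

```python
# Hypothetical two-tenant ACL; VLAN IDs are made up for illustration.
ACL = [
    {"src_vlan": 10, "dst_vlan": 10, "action": "permit"},  # tenant A internal
    {"src_vlan": 20, "dst_vlan": 20, "action": "permit"},  # tenant B internal
    # No cross-tenant rule: neighbor-to-neighbor traffic falls through.
]

def check(src_vlan: int, dst_vlan: int) -> str:
    """Evaluate a flow against the ACL: first match wins."""
    for rule in ACL:
        if rule["src_vlan"] == src_vlan and rule["dst_vlan"] == dst_vlan:
            return rule["action"]
    return "deny"  # the implicit deny at the end of every ACL
```

Tenant A talking to tenant A is permitted; tenant A poking at tenant B’s VLAN is denied by default, with no explicit rule needed.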

For storing your business data, your critical data and your customer data, you’ll want to make sure that the architecture uses LUN (Logical Unit Number) masking, at-rest encryption, zoning and VSANs (Virtual Storage Area Networks) to keep cloud insiders and cloud outsiders out. Ultimately, there needs to be as much security between you and your neighbor as there is against an outsider trying to break into the building.

Auditability

If you enter the lobby of my apartment building, the doorman will either allow you in, or he’ll turn you away. In other buildings, you need to authenticate yourself by using a key fob for entry. And make no mistake: there is always an audit trail:

“Dimitri McKay entered the building at 3:05am”

“Ms. Jameson came to visit Dimitri McKay at 4:15am”

If your business is governed by industry mandates or government regulatory compliance, you need to make sure you have data such as raw logs to keep your auditors (and upper management) happy. Local or in the cloud, it’s your responsibility to practice due diligence. There are providers that offer security and accountability. You can have your Kate and Edith too.

Quality of Service

My neighbor complains constantly about noise from my apartment. And he should. My subwoofer at volume level 10 shakes the apartment, his apartment, the people upstairs, the mail room, the garage and three blocks away at the local watering hole. I tend to use it at 3am when I have insomnia. You don’t want your cloud to have the “Dimitri McKay subwoofer” problem. In other words, I’m a tenant who is affecting the processes of another tenant—in this case, their sleep. By putting some quality of service in place, that segregation of work keeps my noise from impacting his sleep. It’s the same situation in the cloud: your workload shouldn’t be affected by your annoying neighbor.

The cloud is a shared environment, much like the apartment building I’ve used as my example of multi-tenancy. In an apartment building you have a “shared environment” where multiple “renters” share a common infrastructure (the plumbing, the electrical grid, hallways, etc.) but still have segregated areas where they keep their stuff (host their applications and/or data).

Multi-tenancy is highly desirable to cloud providers because they can provide a platform or service (applications, infrastructure, etc.) and rent it to a large number of customers without having to make massive customizations, tons of labor-intensive upgrades, troubleshooting sessions and associated costs. Single tenancy has merit in situations where sharing the same app among a broad scope isn’t a viable option.

On a large scale such as the infrastructure side, the cloud provider will always opt for multi-tenant, but the customers themselves will likely seek single tenant in the following situations: custom apps, customers who are bound by specific regulatory compliance mandates, or those who care more about security than price.

One example of this is anyone who needs to have raw log data from all of their IT infrastructure, OS and apps in one place. They could have a single tenant log management tool in the cloud that only collects data from their specific cloud network devices, cloud applications and server operating systems. In this situation, segregation makes more dollars and sense.

Just as is the case with public, private and hybrid clouds, there’s no be-all, end-all situation when it comes to choosing between single- and multi-tenant deployments. It depends on what your goals are, as well as your budget.

Challenges and Strategies for Log Management In The Cloud

Log Management in Different Cloud Environments can be Challenging – But Having Access to Log Data is Essential In Order to Get the Visibility You need to Optimize and Secure your Cloud Environment.

One of the biggest challenges for the enterprise is sorting out how to perform proper log management when its operations run in the cloud. On a local network, logging is easy. You point your devices at a local log management solution, and off you go, alerting, reporting and searching through your logs. Chances are, your local network has tons of available bandwidth for not only standard traffic, but log traffic as well.

Event log management and Security Information and Event Management (SIEM) are considered IT best practices, and for regulated industries they’re a requisite for audit compliance.

A trend that we’ve been seeing in the log management and SIEM space is that SIEM and Log Management vendors are moving toward securing the cloud. It was inevitable.

Some equate log management to simply log aggregation, display and storage – a simple approach that fails to address these complex challenges. Most SIEM products offer basic event consolidation, simple correlation rules, limited real-time analysis, poor reporting and investigation flexibility, and no identity or infrastructure context. Many still require special collectors, add-on modules, additional systems and significant expertise.

This raises a number of questions. How valuable is the data I’d like to store in the cloud? Is this data absolutely critical to my business? If so, will I use a private cloud to store and work with that data, where I have full control over how to access and manipulate it? Or will I push that data up to the cloud, where I have limited access and limited control? Should I segregate aspects of the toolset between private and public cloud? These are all legitimate questions. And if the cloud makes sense in ANY of these situations, it leads to the following question:

“How do we log this stuff?” 

Now the SIEM correlation piece is easy. Correlation is nothing more than a bell to ring, and as I said above, most SIEM products stop at the basics. And don’t get me started on the professional services conversation.

Security logs for correlation make up a paltry 5-10% of log data. Forwarding the security events to the cloud is easy; filtered and forwarded, it’s very low overhead. But the big data, the other 90-95%, is the pain point. That’s handled by the front-end log management tool – the workhorse.
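That split can be sketched as a simple filter: forward the small security slice to the cloud SIEM, keep the bulky operational data local. The facility names here are assumptions for illustration; real syslog facilities and your collector’s filter language will differ:

```python
# Assumed set of "security" log sources; adjust to your environment.
SECURITY_FACILITIES = {"auth", "authpriv", "firewall", "ids"}

def split_logs(events):
    """Partition events into (forward_to_cloud, keep_local)."""
    forward, local = [], []
    for event in events:
        if event["facility"] in SECURITY_FACILITIES:
            forward.append(event)   # the 5-10% security slice goes to the SIEM
        else:
            local.append(event)     # bulky operational data stays on-site
    return forward, local
```

Run against a day of mixed events, the forwarded list stays tiny while the local workhorse keeps everything for forensics and operations.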

But what do we do with the rest of the log data? What do we do with that boat load of operational data? What can we do with the forensics data? If we’re to push this to the cloud, how do we get it there? Chances are it’s a ton of data! For hybrid clouds, will the cloud log management solution save me enough money to justify the bandwidth costs, just to get the data up there?

Well first, let’s examine the use cases for log management.

The most common use case is compliance: SOX, PCI-DSS, HIPAA/HITECH, NERC, GLBA, ISO, ITIL. If you’re bound by any one of these mandates, or others, there is generally a requirement to store all log data for one to ten years. Why? Forensics. Accountability. Responsibility.

Second? Security. The security team wants not only alerting (correlated, targeted or behavioral) to tell them when something happens, but the forensics data to tell them the “who, what, when and where”. The alert will just announce that something happened. It’s an incomplete conversation if there is no context around that alert.

Third? Operational. Log data is the bee’s knees not only for notification of an operational alert (blown hard drive, overheating server, firewall policy changes, downed devices, restarting appliances, etc.), but also for high-level reports that surface anomalies. Reporting is the 50,000-foot view of the forest, versus searching raw log data, which is extremely granular and perfect for root cause analysis.
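A tiny example of that 50,000-foot view: counting events per source so an anomaly jumps out of the summary, instead of digging through raw lines. The field name is illustrative:

```python
from collections import Counter

def top_talkers(events, n=3):
    """Summarize events per host; a noisy device stands out immediately."""
    return Counter(event["host"] for event in events).most_common(n)
```

If one firewall suddenly produces five times its normal event volume, it tops this list long before anyone opens a raw log file; then you drop down to the granular search for root cause.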

Now back to the cloud.

Whether you’re in the cloud now with your business tools and data, or you’re looking to move there, establishing how you’ll do the log management aspect is our topic of discussion.

Private Cloud:

Logging in a private cloud is business as usual. Because the enterprise controls the physical and virtual environments, log management and correlation engines can easily offer visibility into both. This is the easiest and most expensive route. Reliability, responsibility and accountability are those of the enterprise. It’s your cloud.

Public Cloud:

Logging in a public cloud, however, is much more challenging. Visibility is severely reduced when system access and system/application controls are limited. Although cloud-based applications can boost productivity and availability of data, they can’t offer the same level of activity visibility that more traditional datacenters and private clouds can.

Regardless of whether the IaaS or PaaS environments are segregated by some sort of organizational multi-tenant cloud solution, there remain complications in keeping track of all the activity that occurs at the different virtualized layers. Physical or virtual, identity and access management are still important ingredients in the log management stew, even if the data and applications live outside traditional network boundaries. So essentially, by pushing log management and correlation to the cloud, the current offerings from log management and SIEM vendors lose visibility and control. This is compounded by infrastructures shared across multiple enterprises in the dynamic ebb and flow of resource use.

Hybrid Cloud:

Hybrid clouds can give you the best of both worlds. The local, private cloud is often where the bulk of log data is created and managed. There, operational use is maximized, as is the forensics side of the house. This is the steak and potatoes, whereas forwarding that 5% of security events for correlation is the dessert. It’s a tiny use case, but when it works, it can be sweet. And with it you can offer better evidence of regulatory compliance and government mandates.

In Conclusion:

Regardless of whether you go hybrid, public or private with the cloud, it is critical to acquire the key log data that offers a clear view into the operational and security events that affect the enterprise. Many cloud providers are offering log management in the cloud. Use it. It’s not only an IT best practice; it helps keep operational and security risks from going unnoticed. And if your cloud provider doesn’t offer a log management service, push for one. You need transparency. If they can’t offer you what you need, perhaps this isn’t the right cloud vendor for you. Lastly, responsibility for a data breach still falls on the shoulders of the customer. That means it’s still your problem. Liability is still a risk. Mitigate it with log management and correlation in order to get the visibility you need to optimize and secure your environment.

Securing your Private Cloud Environment

Strategies and Considerations for Securing Private Cloud Environments

On the back end of private cloud environments you’ll find multiple flavors of virtual software loaded directly onto hardware. This virtual software is essentially the host operating system. VM Host is the base hypervisor and hardware. Think of it as the house. The guest operating systems (Guest OSs) are the virtual machines living in the house.

As the basis for all public and private clouds, virtual infrastructure is where this all happens. So the conversation we’re about to have concerns the back end of the private cloud. If you’re building one, it is important for you and your organization to understand how to maximize the benefits and mitigate the risks of a private cloud infrastructure. There are several key things to keep in mind when securing the virtual environment, before even loading guest operating systems.

Most virtual solutions are transparent, by design, to the guest operating systems. The same way machines are secured in physical environments, they are secured in a virtual environment. This includes segregating networks, defining domain security policies and installing antivirus.

Security for virtual machine infrastructure, however, lags behind that of its physical hardware cousins. Although virtualization is consuming data centers worldwide, many organizations fail to recognize that security basics are still security basics.

According to Gartner, 16% of server workloads were running on virtual machines by the end of 2009. Gartner expects that share to grow to 50% (roughly 58 million virtual machines) by 2012. Unfortunately, Gartner also predicts that 60% of these virtual machines will be less secure than their physical predecessors.

Why are Virtual Machines insecure?

The reasons that VM servers are less secure than their traditional hardware counterparts are as follows:

Security often isn’t considered at the beginning of the project. In many situations a cloud project is begun, and from there security becomes a knee-jerk reaction.

If the VM host OS layer is compromised, all guest OSs can be compromised. This is called Hyperjacking. More on that later.

Although most public cloud vendors maintain adequate controls for admin access to the virtual machine monitor, many private clouds do not.

Segregate and separate. VM hosts create flat networks; you’ll need to change that. In the non-virtual world, traditional data centers had segregation, and network traffic could be inspected, filtered and monitored by any number of security products. In a virtual world, those controls are a rare commodity. Local communication between virtual servers goes largely untouched and unmonitored. If the traffic runs through a virtual switch, it’s practically invisible because it never hits the wire. It’s just traffic between virtual hosts on virtual links. So virtual traffic between virtual machines needs to be made visible and monitored.

Separation of duties is something that we in security often push. Unfortunately, on the back end of a private cloud environment you’ll often find that the server team and the operations team are the same people, doing both the provisioning of machines and the managing of virtual switches. That means there’s rarely any integration between the tools and security controls implemented by the network and security groups. And what THAT means is that without visibility into configuration and policy changes, topology specifications and audits, the network and security team has zero view into what’s taking place at the access layer.

In security circles, we also talk about the “principle of least privilege.” This says that you should not give anyone more access than the minimum they need to do their job. Defining roles that grant different levels of access will make life much easier.
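As a toy illustration of least privilege, a role-to-permission map like the following keeps every check explicit, with unknown roles and unlisted actions denied by default. The role and permission names here are hypothetical, not drawn from any particular product.

```python
# Hypothetical roles for a virtual environment; each gets only what it needs.
ROLE_PERMISSIONS = {
    "vm_operator": {"start_vm", "stop_vm", "snapshot_vm"},
    "network_admin": {"edit_vswitch", "view_topology"},
    "auditor": {"view_logs", "view_topology"},
}

def is_allowed(role: str, action: str) -> bool:
    """Deny by default: an action is permitted only if the role lists it."""
    return action in ROLE_PERMISSIONS.get(role, set())
```

The design choice worth noting is the deny-by-default lookup: adding a new role grants nothing until someone deliberately lists its permissions.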

How do you secure a traditional server? First, lock down the server OS (usually Windows or Linux). As you go virtual, instead of just securing the server OS, you also have to lock down the VMkernel and the VM layer (the host OS), as well as the console. The same discipline behind your weekly Microsoft patch plan should apply to your VM infrastructure. Even though these platforms are quite secure, stay up to date on patches; there are security updates in there, not just bug fixes.

What you really want to avoid is Hyperjacking, which involves compromising the hypervisor. This is the lowest level of the OS stack, and the hypervisor has more privileges than any other account. At this level it’s impossible for any OS running on the hypervisor to even detect that a hack has taken place. So the hacker can control any guest VM running on the host.

When you go virtual, you add that other layer to the mix, the hypervisor. So again, the hypervisor needs to be secured at all costs. It’s mission critical because an attack on the hypervisor can lead to the compromise of all hosted workloads, and successful attacks on virtual workloads can lead to a compromised hypervisor. Another concern would be a VM Escape, which is an exploit that is run on a compromised guest OS to attack and take over the underlying hypervisor, which can then result in a hyperjacking.

Moving up the stack, we come to OS patches. Because it’s so easy to spin up another guest operating system, admins sometimes forget that the virtualization software is there. So make sure that, just as fast as virtual machines are spun up, patch distribution software is installed, and antivirus, service packs and security policy changes are applied to all of those virtual guest operating systems.

The biggest security concern, however, is the insecurity of the individual guest OSs. The VM software you use separates the guests from each other as well as from the VM host, so if one of the guest VMs does get compromised, it’s unlikely to affect the host beyond consuming more memory, processor or network resources.

Be aware that an easier move for a hacker looking to steal data is VM hopping: an attacker compromises one virtual server and uses it as a staging ground to attack other servers on the same physical hardware.

The last threat to VMs themselves is VM theft. This is the virtual equivalent of stealing a physical server: take the whole box, run off with it, fire it up later and steal the data. Same concept, except here the thief copies the virtual machine’s files electronically and attacks them later.

How can you make the private cloud more secure?

Start with the base layer: the lower stack, the hardware and the traffic. Force all traffic between hosts to be inspected by an IPS (intrusion prevention system). Each VM host should have a different ingress/egress VLAN pair, and the IPS should be configured with VLAN translation between each ingress VLAN and its egress VLAN. The goal is to force all VM-to-VM traffic across the wire, where it can be inspected, monitored and potentially filtered. Of course, as the private cloud grows, this can become complicated and costly across data centers and DR sites.
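The VLAN-pair idea can be sketched in a few lines: give every host a distinct, non-overlapping ingress/egress pair so the IPS can translate between them and nothing stays invisible on a shared VLAN. The numbering scheme below is an illustrative assumption, not a vendor recommendation.

```python
# Sketch: assign each VM host a unique ingress/egress VLAN pair so the
# IPS can translate between them and see all VM-to-VM traffic on the wire.
# The base VLAN ID and host names are illustrative assumptions.

def assign_vlan_pairs(hosts, base=100):
    """Map each host to (ingress_vlan, egress_vlan); pairs never overlap."""
    return {host: (base + 2 * i, base + 2 * i + 1)
            for i, host in enumerate(sorted(hosts))}

pairs = assign_vlan_pairs(["esx01", "esx02", "esx03"])
# each host gets a unique pair, e.g. esx01 -> (100, 101)
```

Sorting the host list first makes the assignment deterministic, so re-running the provisioning script never shuffles a host onto a different pair.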

Another option is to run an IPS and firewall on each VM host, with policies configured to inspect the traffic. This ensures all intra-virtual communication is inspected. There is a performance hit from running all those additional virtual IPSs and firewalls, plus the monitoring of all that traffic; in the end, though, security should be paramount.

Next up is a ‘love connection’ between the two options above: a mix of both. Route traffic out to a physical IPS, where it can be filtered and monitored, then send it on to the destination VM.

Another security option as you move up the stack after traffic would be securing your hypervisor. Keep your hypervisor console patched. Just like any OS, VM Servers will have security patches that need to be deployed. The majority of these patches are related to the Linux-based OS inside most service consoles.

Additionally, ensure that virtual machines are fully updated and patched, and that all provisioning is done with your security tooling in place, before they are turned on in a production environment. Because VMs are much easier to move around and faster to deploy than physical machines, there is a greater risk that a VM that is not fully updated or patched will be deployed. To manage this risk effectively, use the same methods and procedures to update VMs that you use to update physical servers.
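A simple gate in the provisioning pipeline can enforce that idea: no VM is marked production-ready unless it meets the same baseline you hold physical servers to. The baseline keys and version numbers in this sketch are hypothetical.

```python
# Hypothetical minimum versions a VM must meet before production deployment.
REQUIRED_BASELINE = {"os_patch": 42, "antivirus_sig": 1100}

def production_ready(vm: dict) -> bool:
    """A VM is deployable only if every tracked component meets the baseline.
    Missing components count as version 0, so they fail the check."""
    return all(vm.get(key, 0) >= minimum
               for key, minimum in REQUIRED_BASELINE.items())
```

Treating a missing entry as version 0 is deliberate: a VM whose patch state is unknown should be blocked, not waved through.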

Another tip is to use a dedicated network interface card (NIC) for management of the virtualization server. By default, NIC0 serves the parent partition; use it for management of the host machine. Don’t expose it to untrusted network traffic, and please don’t allow any VMs to use it. Use one or more separate, dedicated NICs for VM networking.

Lastly, before installing the private cloud services (the reason you built this infrastructure), I’d recommend using disk encryption to protect the VMs. If one is stolen, it’s worthless to the thief, and the data at rest is safe as well.
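One hedged way to sanity-check that a VM image really is encrypted at rest is a byte-entropy heuristic: well-encrypted data looks nearly random, approaching 8 bits of entropy per byte, while plaintext disk images are full of structure and zero runs. This is a rough audit sketch, not a substitute for verifying your actual encryption configuration.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; well-encrypted data approaches 8.0."""
    if not data:
        return 0.0
    total = len(data)
    counts = Counter(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_encrypted(sample: bytes, threshold: float = 7.5) -> bool:
    """Rough heuristic: high byte entropy suggests the sample is ciphertext.
    Compressed data also scores high, so treat this as a red flag detector:
    a LOW score on a supposedly encrypted image is the real warning sign."""
    return shannon_entropy(sample) >= threshold
```

In an audit script you might read a few megabytes from the middle of each stored VM disk file and alert on any image that scores well below the threshold.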

Summary

When building a private cloud, it’s important to remember that going VM isn’t as easy as moving servers to a host machine. Security is a bit different, but security practices are still the same. Recognize the additional components. Recognize that going VM often ends up in a flat network, and you’ll need to change that for security’s sake. Lock down your console and hypervisor. And remember the same lock down procedure you used in traditional servers is also critical, if not paramount. Follow these tips and you’ll be ready to host those apps to your user community in no time. Securely.

Cloud Providers: A Roadmap of What NOT to do to Your Customers

I’ve gone on and on about how cloud providers offer services as soon as said services are available. Security isn’t the priority. Feature set is driven by monetary momentum. Development dollars swirl around new features for new money. That’s just part of the joy of capitalism. Cloud providers, this one is for you. I present to you a list of “What Not to Do to Your Customers.”

Everyone loves a good upgrade, especially when that upgrade brings new tools, additional time/resource/development savings and a new feature set, and helps customers in some way. But as a cloud provider, when you offer that upgrade, it should never be an overnight switch without a whole host of contingency plans built in. Customers losing access to the services, losing their data or losing their own customers is a real risk. Rather than put yourself in a compromised position with your customers, their data and your bottom line, you have some work to do. So no hasty cloud upgrades!

If you are a software-as-a-service provider, whether it be contact management, email, a social media website, or something else entirely, make sure your customers know the upgrade is coming and can take steps to ensure that your upgrade doesn’t leave them treading water while you fix those unforeseen bugs. Customers can’t tread water forever, and you’d hate for them to drown because of your negligence. So keep your customers in the loop. Make sure they understand what changes are happening, how the changes will benefit them, what the time frame is for the upgrade, and what the potential risks are. Don’t just pull the trigger and hope that your customers will be okay with the result.

Customers should have the ability to opt out of the upgrade if necessary. I know it can be a big deal to have a mixed software environment, but losing your customers is a bigger deal. The cloud isn’t a dictatorship.

If the upgrade goes to hell in a hand basket, offer a rollback to the last stable version for your customers. Or do a slow roll out across the customer base so that customer issues can be solved, familiarity with the upgrade problems can be established (lab testing is only so reliable), and resolutions can be found. Real world upgrades seem to always have a gremlin or two hiding in the data. So when doing the upgrade to a new version, or adding features, keep the installation disks of that prior version around. It’s also not a bad idea to offer customers a choice of time frames in which to do the upgrade. Most businesses have a code freeze from December to February because of the Christmas holiday. Hence, it’s a bad idea for you to do the upgrade in that time frame. But not every business is bound by the holiday season. For some, the busy season might be tax season. Or summertime. Or Halloween. Who knows. Best to give the option of upgrade time back to the customers based on their “busy season.”

For your customers, this isn’t a hobby. It’s business. And mitigating every possible risk is a requirement, not an option. So as a cloud provider, whether you’re offering SaaS, PaaS or IaaS, offer your customers a time frame to opt into for the upgrade. Don’t force-feed it. Remember, you’re adding value to the software. You’re helping them, if you do it right.

If you’re still hell-bent on doing the upgrade, make sure the configuration settings of the current platform carry over to the new platform. With SaaS, PaaS or IaaS, customers will customize their tools to meet their needs. That takes man-hour resources, potential head-count resources and maybe even development resources. So if you’re going to do the upgrade, make sure that at least the configuration of the software is maintained through the upgrade path to ease the process.

Just remember your SLA. Chances are, you’re bound by some legal agreement with your customers. They will take you to the cleaners if you can’t maintain continuity per your contract, and in today’s social media world, bad press can and will stick with you. Best to put your best foot forward at all times, make your customers happy and enjoy the growth of your business as your customers enjoy the growth of theirs. You’re in a partnership. Your job is to make them successful. And as a result, you succeed too.

With upgrades come risk. Risk of losing data, risk of mixed-version environments, risk of security breach. Doing a full set of security testing on the software you plan on rolling out is best practice if you want to protect yourself, your customers and all of the people they do business with.

A Deep Dive Into Hyperjacking

In the early 2000s, virus writing was all the rage. Think of the massive outbreaks that took place, with millions of infected hosts. The Sober virus generated 218 million infected emails in seven days. The MyDoom virus sent over 100 million virus-infected emails. The “I Love You” virus infected 55 million machines and collected their usernames and passwords. These attacks were about bragging rights, not money.

Hyperjacking 101 - A Deep Dive

That’s so last decade. Virus writing is passé. What followed, however, was the age of malware and phishing. This was monetarily motivated: phish the username and password, then actually use them. Western Union must have made a bazillion dollars as the fence for criminals and crooks worldwide.

There’s a bigger picture. As we’ve moved from local workstations and servers containing copious amounts of tasty private information and data, toward the cloud where all of that data sexiness lives behind the locked doors of the Cloud Providers, the game is changing.

There are new technical challenges.

The back end to the cloud is Virtual infrastructure. And if you think of a cake, there would be the pan which is the physical piece, or the hardware. Above that is a yummy bit called the hypervisor which is a software piece that governs the host machine. This is essentially the cake part, the soft mushy filler between the pan and the frosting. But it’s software. The hypervisor dictates the basic operation of the host machine. It’s the abstraction layer between physical hardware and virtual machines in any virtualized platform. And on top of that abstraction layer is the frosting, or the guest operating systems. These systems are the “Virtual Machines”.

Enough of the cake references. I’m getting hungry.

In my previous articles I outlined the fact that going virtual adds simplicity for IT departments. It’s easier to provision servers, easier to move them, easier to decommission them. It’s easier to set up networks. It’s easier from a management perspective all around. But to attain this simplicity, we add complexity on the platform side (the hypervisor) and not enough complexity on the network side (out of the box, virtualization creates flat networks). So in exchange for simplicity on the management side, we take on a hypervisor and must add security back to a flat network. Segregation. Divide and conquer. Yin and yang.

One element of this added complexity is hyperjacking. Still in its infancy, hyperjacking preys on the corporate world’s newfound enthusiasm for application, operating system and solution virtualization. Hyperjacking is the jacking of the hypervisor stack: installing a rogue hypervisor that can take complete control of a server. Regular security measures are ineffective because the OS will not even be aware that the machine has been compromised. Sneaky sneaky.

Why? Or rather, how? Because the hypervisor actually runs underneath the operating system, making it a particularly juicy target for nefarious types who want not only to own (or PWN, to those in the know) the host servers in order to attack guest VMs, but also to maintain persistence. (Okay, they’ve gotten into your hypervisor and stolen your data, but they’d also like the ability to come back whenever they want to steal more.) Hyperjacking is a great way not only to compromise a server and steal some data, but also to maintain that persistence. Get control of the hypervisor and you control everything running on the machine. The hypervisor is the single point of failure in security, and if you lose it, you lose the protection of sensitive information. This increases the degree of risk and exposure substantially.

Now, what we’ve seen are some lab-based examples of hyperjacking-style threats. Few, if any, are in the wild; or rather, we haven’t identified anything in the wild yet. Whether this has actually taken place across large numbers of VMs, we don’t know. But examples of hyperjacking-style threats include a specific type of malware called virtual-machine-based rootkits.

For a hyperjacking attempt to succeed, an attacker needs a processor capable of hardware-assisted virtualization. So the best way to avoid hyperjacking is to use hardware-rooted trust and secure launching of the hypervisor. This requires support in the processor and chipset themselves, offering trusted execution technology and a trusted platform module (TPM) to execute a trusted launch of the system from the hardware up through the hypervisor.

Many vendors have now subscribed to the Trusted Computing Group’s standards for measured launch and roots of trust. Alongside that, several security vendors offer hypervisor security products that work by checking the integrity of the hypervisor while it’s running.

To do this, the programs examine the hypervisor and its program memory, as well as the registers inside the CPU, to see if there are any unknown program elements. Rootkits in the hypervisor hide by modifying certain CPU registers and relocating the hypervisor somewhere else; in that case, the hypervisor integrity software locates the relocated hypervisor. It does this without the hypervisor being aware that it’s taking place. Of course, adding hooks into the process of examining the hypervisor also makes me uncomfortable, but that’s a conversation for another day.

This is all software, and software has rules that can be bent. So while addressing risks associated with the hypervisor we must remember that the hypervisor and the guest operating systems it supports are software and as such, need to be patched and hardened. Period.

Now with that in mind, you’re probably wondering “Well, how can I harden my environment?” Well, there are risks with all, but at least you’re thinking about it. Proactive thinking vs. reactive. Well done. Let’s put some basic design features in your environment to mitigate your risks:

1. Like security? Then keep the management of said hypervisor in a secure environment. This is more of a network design issue than anything; it’s not hypervisor-related, but still something to consider. Never connect your hypervisor management functions to an environment less secure than the virtual instances. Don’t put your management function access in the DMZ. Although this sounds like common sense, we’ve all heard horror stories. Don’t be the horror story.

2. Your guest operating systems should never have access to the hypervisor. Nor should they know about it. Guest operating systems should never know they’re virtual. And your hypervisor should never tell a guest OS that it’s hosted. Secret secret.

3. Embedded solutions are much safer. I know this is common sense to most. Operating system-based solutions are traditionally less secure because there are more moving parts. My father always said, “keep it simple, stupid.” Same deal. Embedded modules are simpler and more functional, as well as easier to manage, maintain and secure.

Understanding the hypervisor risks is one step toward securing them. Although much of the hyper-jacking conversation has been theoretical, don’t set yourself up in a place where you could be compromised if some 0-day hypervisor attack pops up. Secure your hypervisor, secure your world. Or at least part of it.