9 things you shouldn’t virtualize

Virtualization can be a good way to make your servers more efficient. However, it isn’t right for every organization or every use case.


Virtualization has changed a lot in the past decade. It goes hand in hand with buzzwords like cloud computing, but virtualization has been the technology behind shared servers for a long time. A virtual machine gives its tenant a private, isolated environment, unlike a public, shared cloud. Today, virtual machines are often used for server consolidation and application testing, which lets enterprises partition their servers while continuing to run legacy apps across multiple system types.


When should you use virtualization? It has many benefits, including potentially saving space and money on physical hardware and offering better consistency and reliability. Virtualization today is often done through containerization, in which operating system libraries and application dependencies are packaged into a container that can run on any infrastructure. The result is portable, cutting down the time it takes to create, configure and deploy.

However, some things are better served by a physical environment. Knowing what they are will help you decide how to split your setup between the physical and virtual realms.

Top 9 things you shouldn’t virtualize

Projects with small IT teams

Just as it’s possible to have too many cooks in a kitchen, it’s possible to have too many kitchens and too few cooks. Take careful stock of your existing people and resources before making the decision to virtualize.

John Livesay, vice president and chief sales officer of InfraNet Technologies Group, said businesses that have “less IT staff and fewer security concerns” might be better served by a cloud provider than by handling virtualization on their own. Creating a virtual machine won’t iron out the struggles of a team that is already stretched too thin.

Highest-performance systems

Although virtualization has kept pace well enough to handle streaming and other relatively high-performance workloads, some memory-intensive projects aren’t a good fit. Not having enough memory, or overcommitting the memory you do have, can lead to performance problems. Server virtualization may save physical space, but it still requires a lot of memory.
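As a rough illustration, a few lines of Python can sanity-check how far a host’s memory is overcommitted. The host size and per-VM allocations below are hypothetical placeholders, not figures from any particular hypervisor:

    # Sketch: flag memory overcommitment on a virtualization host.
    # Replace these hypothetical numbers with values from your hypervisor.

    def overcommit_ratio(host_ram_gb, vm_allocations_gb):
        """Return the ratio of RAM promised to VMs vs. physical RAM."""
        return sum(vm_allocations_gb) / host_ram_gb

    host_ram_gb = 256
    vm_allocations_gb = [64, 64, 64, 48, 48, 32]  # memory assigned per VM

    ratio = overcommit_ratio(host_ram_gb, vm_allocations_gb)
    print(f"Overcommit ratio: {ratio:.2f}x")
    if ratio > 1.0:
        print("Memory is overcommitted; memory-intensive workloads may suffer.")

Here the VMs have been promised 320GB against 256GB of physical RAM, a 1.25x overcommit that a memory-hungry workload would feel.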

Anything too new to have good redundancy

When it comes to power sources, it’s best practice to always have a backup. The same is true when virtualizing servers. Don’t go out on a limb virtualizing something only to lose the redundancy the original had. Test that the virtualized server and its backup both work before you make any changes you can’t reverse.
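One simple pre-cutover check is to confirm that the virtualized server and its backup both answer on the network before the original is retired. A minimal sketch in Python, with hypothetical hostnames and ports standing in for your own:

    # Sketch: verify a virtualized server and its backup both respond
    # before decommissioning the original. Hostnames are hypothetical.
    import socket

    def is_reachable(host, port, timeout=3):
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    targets = [("vm-app01.example.com", 443),
               ("vm-app01-backup.example.com", 443)]
    if all(is_reachable(host, port) for host, port in targets):
        print("Primary and backup both reachable; cutover can proceed.")
    else:
        print("Redundancy check failed; keep the original system in place.")

A reachability check like this is necessary but not sufficient; you would still want to fail over deliberately at least once before trusting the backup.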

Keystones of your physical environment

What if the VM you’re trying to repair also controls the retinal scanner that’s supposed to let you into the building? Now you have a second problem. Software on a VM shouldn’t be the only way to access physical controls, especially ones that are mission critical or that could endanger the people working on the servers themselves. Also consider whether heavy load could cause physical jitter on older servers.

High-security information

A VM is never quite as secure as a physical machine. Why? It needs some kind of network connectivity to run, and even with restricted permissions, someone within the company might open the VM up to a wider pool of people than it was originally intended for.

One way to mitigate this is to keep an eye on all applicable security regulations and guidelines and make sure the VM can’t be used to get around them. The risk isn’t necessarily someone malicious within your organization: Convenience loopholes that bypass permissions can arise on their own. A read-only domain controller is also a good idea in cases such as a branch office, where a resource needs to be open to another group of people.
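One lightweight way to catch that kind of permission drift is to periodically diff who actually has access to a VM against who was approved. A minimal sketch, with made-up user lists standing in for whatever your identity system actually reports:

    # Sketch: diff a VM's actual access list against an approved baseline
    # to catch convenience loopholes. Both sets are hypothetical examples.

    approved = {"alice", "bob"}                    # users the VM was scoped to
    actual = {"alice", "bob", "charlie", "dana"}   # pulled from your access system

    unexpected = actual - approved
    if unexpected:
        print(f"Access drift detected: {sorted(unexpected)}")
    else:
        print("VM access matches the approved baseline.")

Run on a schedule, even a check this simple turns silent drift into a reviewable alert.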

Anything that could create a circular dependency

Keep a careful eye on VMs for circular dependencies. You don’t want to discover that your VM depends on another virtualized service that can be taken offline unexpectedly and outside your control. Good communication helps, but it doesn’t account for emergencies. Local log-ons, or splitting control between virtual and physical systems, can prevent this. The common theme here is a general best practice: Try to avoid having a single point of failure.
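If you keep even an informal map of which VM depends on which service, finding a loop is a standard graph problem. Here is a sketch of depth-first cycle detection in Python; the dependency map is hypothetical, and in practice you would build it from your CMDB or documentation:

    # Sketch: detect circular dependencies in a VM dependency map.

    def find_cycle(graph):
        """Depth-first search; returns one dependency cycle, or None."""
        visiting, done = set(), set()

        def dfs(node, path):
            visiting.add(node)
            path.append(node)
            for dep in graph.get(node, []):
                if dep in visiting:                  # back edge -> cycle found
                    return path[path.index(dep):] + [dep]
                if dep not in done:
                    cycle = dfs(dep, path)
                    if cycle:
                        return cycle
            visiting.discard(node)
            done.add(node)
            path.pop()
            return None

        for node in graph:
            if node not in done:
                cycle = dfs(node, [])
                if cycle:
                    return cycle
        return None

    deps = {  # hypothetical dependency map
        "auth-vm": ["dns-vm"],
        "dns-vm": ["storage-vm"],
        "storage-vm": ["auth-vm"],   # closes the loop
    }
    print(find_cycle(deps))  # ['auth-vm', 'dns-vm', 'storage-vm', 'auth-vm']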

Too many applications

Microsoft recommends that when you set up a Hyper-V server, you avoid installing any other applications on it. Fewer applications and services running alongside the hypervisor means better performance for the main event. It’s also good security practice, falling neatly under the umbrella of a reduced attack surface: The fewer applications and services available, the fewer openings attackers have to take advantage of.
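A simple way to hold that line is to audit what’s actually running on the host against a known-good baseline. A sketch using the third-party psutil package; the allowlist below is a hypothetical example, not Microsoft’s official list:

    # Sketch: compare processes on a dedicated hypervisor host against
    # a known-good baseline. Requires psutil; the baseline is hypothetical.
    import psutil

    baseline = {"vmms.exe", "vmcompute.exe", "svchost.exe", "System"}

    running = {p.info["name"] for p in psutil.process_iter(["name"])}
    for name in sorted(running - baseline):
        print(f"Unexpected process on hypervisor host: {name}")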

Hyperthreading

Particularly on SQL Server hosts, hyperthreading can be a gift and a curse. Counting hyperthreaded logical cores as real ones can cause performance issues, because the threads end up competing for the same physical resources as SQL Server. A virtualized SQL Server is resource-intensive already and needs its full complement of virtual CPUs, regardless of whether hyperthreading is present.
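The sizing mistake is easy to make because hyperthreading doubles the logical processor count without doubling the hardware underneath. A trivial sanity check, with hypothetical counts you would replace with your host’s real figures:

    # Sketch: sanity-check vCPU sizing against physical cores on the host.
    # These counts are hypothetical; read the real ones from your hypervisor.

    physical_cores = 16          # actual cores on the host
    logical_processors = 32      # doubled by hyperthreading
    vcpus_for_sql_vm = 24        # vCPUs promised to the SQL Server VM

    if vcpus_for_sql_vm > physical_cores:
        print("vCPUs exceed physical cores: hyperthreaded logical processors")
        print("are not full cores, so expect scheduling contention under load.")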

Systems at risk of virtual machine sprawl

Just as with physical systems, it’s important to take inventory of what you already have before starting a new project. With VMs spread across different software platforms and physical locations, it’s easy to end up with VM sprawl: more VMs than you actually need. Left unchecked, sprawl reduces efficiency and causes performance issues.
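Even a crude inventory script can surface sprawl candidates, such as VMs with no owner or long idle periods. The records and threshold below are hypothetical; in practice you would export them from your hypervisor’s management tooling:

    # Sketch: flag sprawl candidates in a VM inventory.
    from datetime import date

    inventory = [  # hypothetical records exported from a hypervisor
        {"name": "build-vm-07", "owner": "ci", "last_used": date(2022, 11, 2)},
        {"name": "demo-vm-03", "owner": None, "last_used": date(2022, 3, 15)},
        {"name": "test-vm-12", "owner": "qa", "last_used": date(2021, 12, 1)},
    ]

    today = date(2022, 11, 30)
    for vm in inventory:
        idle_days = (today - vm["last_used"]).days
        if vm["owner"] is None or idle_days > 180:  # 180-day idle threshold
            print(f"Sprawl candidate: {vm['name']} (idle {idle_days} days)")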
