A quick look at Virtual Kubernetes Clusters

A quick post on running fully working Kubernetes clusters on top of other Kubernetes clusters

Utkarsh Shigihalli
4 min read · Mar 3, 2024

I was recently introduced to the concept of virtual clusters. I spent a few hours with them over the past few days and wanted to capture my observations. This post explores a few basic features and highlights some benefits of vclusters.

What are virtual clusters and why do I need them?

Engineers working on Kubernetes often need to test features, deployments, or configurations for various environments without disrupting existing setups or other developers’ work. Traditionally, this necessitated isolated spaces to ensure changes wouldn’t interfere with other configurations. Kubernetes administrators typically provided this isolation by assigning separate namespaces to each engineer or by setting up distinct clusters altogether. While namespaces offered a degree of isolation, developers still encountered limitations in modifying cluster-wide resources such as Custom Resource Definitions (CRDs) and often required more than one namespace. This meant additional work: administrators had to meticulously plan Role-Based Access Control (RBAC) or establish completely separate isolated clusters designated for different purposes such as sandboxing, development (DEV), or integration (INT). This results in wasted cost and can adversely impact the climate.

See: How To Love Kubernetes and Not Wreck The Planet

This is where virtual clusters come to the rescue.

A virtual cluster, as the name suggests, is a complete working cluster, with its own API server and state, running virtually inside a namespace of a host cluster.

But to users, it appears as a complete, standalone, dedicated cluster; they may not even notice that they are using a virtual cluster.

For more information on the architecture of vclusters see: https://www.vcluster.com/docs/what-are-virtual-clusters

How to create a virtual cluster?

Before you run the virtual cluster, let us first take a look at our current cluster and its namespaces.

Showing namespaces within the host cluster
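The listing in the screenshot above can be reproduced with a single kubectl command (the namespaces you see will of course depend on your own host cluster):

```shell
# List all namespaces in the host cluster
kubectl get namespaces
```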

After installing the vcluster CLI, you start by creating a virtual cluster. The command is vcluster create <cluster-name>
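If you do not have the CLI yet, it can be installed as follows. This is a sketch for macOS with Homebrew; check the official vcluster docs for the installation method for your platform:

```shell
# Install the vcluster CLI on macOS via Homebrew
# (see https://www.vcluster.com/docs for other platforms)
brew install loft-sh/tap/vcluster

# Verify the installation
vcluster --version
```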

I am using Rancher Desktop on my Mac. I created a separate namespace named vcluster-demo and ran the vcluster create my-vcluster command inside that namespace. This command creates the cluster and brings up an isolated API server.
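The steps above look roughly like this (the namespace and cluster names are the ones used in this post):

```shell
# Create a dedicated namespace on the host cluster for the virtual cluster
kubectl create namespace vcluster-demo

# Create the virtual cluster inside that namespace;
# this also switches your kube-context to the new vcluster
vcluster create my-vcluster --namespace vcluster-demo
```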

Once it is done, any kubectl command you run is executed within the virtual cluster. As you can see, you do not see the other namespaces from the host cluster (e.g. keda).

Showing namespaces within the vcluster
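Running the same namespace listing as before, but now connected to the vcluster, demonstrates the isolation:

```shell
# Inside the vcluster: only its own namespaces are visible,
# not host-cluster namespaces such as keda
kubectl get namespaces
```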

Testing a demo deployment

Within the vcluster, let’s create a simple nginx deployment using the nginx image with 2 replicas.
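One simple way to do this, matching the deployment name used later in this post:

```shell
# Create a simple nginx deployment with 2 replicas inside the vcluster
kubectl create deployment nginx-deployment --image=nginx --replicas=2
```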

Now if you check the deployment, you will see its pods.
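For example (kubectl create deployment labels the pods with app=<deployment-name>, which we use here to filter):

```shell
# Check the deployment and list its pods
kubectl get deployment nginx-deployment
kubectl get pods -l app=nginx-deployment
```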

Let us confirm that these exist only in the vcluster, by first disconnecting from it.
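Disconnecting is a single command:

```shell
# Disconnect from the vcluster; this switches the kube-context
# back to the host cluster
vcluster disconnect
```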

Now that you are back in the host cluster, look for the nginx deployment.
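Searching every namespace on the host makes the point clearly:

```shell
# Back on the host cluster: search all namespaces for the deployment
kubectl get deployments --all-namespaces | grep nginx-deployment || echo "not found"
```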

As you can see, there is no deployment named nginx-deployment, because that deployment lives only inside the virtual cluster.

We can verify that by checking the pods, as below.
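One detail worth knowing here: vcluster syncs low-level resources such as pods down to the host namespace so they can actually be scheduled, but with rewritten names that encode which vcluster they belong to (the exact naming scheme may vary by vcluster version). The deployment object itself, however, exists only inside the vcluster:

```shell
# Pods are synced into the host namespace that backs the vcluster,
# with names rewritten to include the vcluster name
kubectl get pods -n vcluster-demo
```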

Isn’t it cool?

Conclusion

In conclusion, by leveraging virtual clusters, developers can test features and configurations without disrupting existing setups or other team members’ work. This technology provides isolated spaces within a single Kubernetes cluster, eliminating the need for separate physical clusters or complex RBAC configurations. This not only enhances productivity but also saves resources by optimizing cluster utilization.
