Running Kubernetes on VMware – a brief overview of the options

After I shared a brief overview of running containers on vSphere, I’d like to go a little further this time and share the VMware integration points with Kubernetes. As you are probably aware, VMware has just released its own Kubernetes distribution called “Pivotal Container Service”, which became generally available on February 12, 2018. While this is VMware’s and Pivotal’s recommended and preferred way to run Kubernetes in an enterprise environment, there are several options to either consume another Kubernetes distribution or build Kubernetes solely from open source components. No matter what path you choose, VMware has a variety of solutions to offer.


Compute / Cloud Provider

Network & Security

Storage

Monitoring

Container Management

  • Project Harbor: An Enterprise-class Container Registry Server based on Docker Distribution, embedded in Pivotal Container Service and vSphere Integrated Containers
  • vRealize Automation: governance and self-service to request K8s clusters, nodes, and resources such as namespaces; custom integrations/blueprints are required

3rd Party Documentation

Running containers on vSphere – a brief overview of the options

Inspired by a tweet by Kendrick Coleman, I decided to quickly summarize the current VMware- and Pivotal-provided options for running containers on vSphere.

  • vSphere Integrated Containers (VIC): running containers as VMs on vSphere
    • Virtual Container Host (VCH): nearly complete Docker API support, one container per Container Host VM (“Container VM”)
    • Docker Container Hosts (DCH): native Docker API support, multiple containers per Container Host VM
  • Pivotal Container Service (PKS): enterprise grade K8s distribution
  • Pivotal Application Service (PAS), formerly known as Pivotal Cloud Foundry (PCF): integrated Platform-as-a-Service offering based on Cloud Foundry
  • VMware Integrated OpenStack with Kubernetes (VIOK): running K8s on top of OpenStack
  • Photon OS: open source, minimal Linux container host optimized for cloud-native applications on vSphere
  • Container Service Extension for vCloud Director: a vCloud Director add-on that manages the life cycle of Kubernetes clusters for tenants


In addition to these integrated solutions, you can also build or bring your own solution and leverage projects such as:

  • Project Hatchway: persistent storage for Cloud Native Applications
  • NSX: software-defined networking and security for containers
  • Project Harbor: Enterprise-class Container Registry Server based on Docker Distribution
  • Project Admiral: Highly Scalable Container Management Platform
  • Weathervane: application-level performance benchmark designed to allow the investigation of performance tradeoffs in modern virtualized and cloud infrastructures

vSphere Integrated Containers 1.1 has been released

Great news for all vSphere customers who are looking for ways to run containers on vSphere. VMware’s product team just released vSphere Integrated Containers 1.1 on VMware.com.

A quick reminder: the VMware product vSphere Integrated Containers comprises the core Engine (an open source project), the Registry (aka the open source project Harbor), and the Management Portal (aka the open source project Admiral). You can pick the individual open source components from GitHub and run them on your own terms (see e.g. the Harbor example from a recent Golang conference in China) or use the integrated and supported product delivered by VMware.
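If you want to try the standalone route, here is a minimal sketch of bringing up open source Harbor on a Docker host. The release version and download URL below are illustrative; check the project’s GitHub releases page for the current installer:

```bash
# Download and unpack the Harbor offline installer (version is illustrative)
wget https://github.com/vmware/harbor/releases/download/v1.1.0/harbor-offline-installer-v1.1.0.tgz
tar xzf harbor-offline-installer-v1.1.0.tgz
cd harbor

# Set at least the "hostname" Harbor should be reachable under
vi harbor.cfg

# Install and start Harbor (requires Docker and Docker Compose on the host)
sudo ./install.sh
```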

Key highlights from the Release Notes:

  • A unified OVA installer for all three components
  • Upgrade from version 1.0
  • Official support for vSphere Integrated Containers Management Portal
  • A unified UI for vSphere Integrated Containers Registry and vSphere Integrated Containers Management Portal
  • A plug-in for the HTML5 vSphere Client
  • Support for Docker Client 1.13 and Docker API version 1.25 (see the client-side sketch below this list)
  • Support for using Notary with vSphere Integrated Containers Registry
  • Support for additional Docker commands. For the list of Docker commands that this release supports, see Supported Docker Commands in Developing Container Applications with vSphere Integrated Containers.
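As a client-side sketch of the Docker 1.13 / API 1.25 support, pointing a Docker client at a VCH endpoint could look roughly like this; the address and port are placeholders, and the TLS options depend on how the VCH was deployed:

```bash
# Point a Docker 1.13 client at the VCH endpoint (address and port are placeholders)
export DOCKER_HOST=tcp://vch01.example.com:2376
# Pin the client to the API version the VCH supports
export DOCKER_API_VERSION=1.25

# With TLS enabled on the VCH, pass the client certificates as usual
docker --tls info
docker --tls run -d -p 8080:80 nginx
```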


You can also use vic-machine upgrade to upgrade the Virtual Container Hosts. From the Upgrade Guide:

When you upgrade a running VCH, the VCH goes temporarily offline, but container workloads continue as normal during the upgrade process. Upgrading a VCH does not affect any mapped container networks that you defined by setting the vic-machine create --container-network option. The following operations are not available during upgrade:

  • You cannot access container logs
  • You cannot attach to a container
  • NAT-based port forwarding is unavailable

IMPORTANT: Upgrading a VCH does not upgrade any existing container VMs that the VCH manages. For container VMs to boot from the latest version of bootstrap.iso, container developers must recreate them.
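For reference, an upgrade invocation with vic-machine looks roughly like the sketch below. The target, credentials, and names are placeholders; check vic-machine upgrade --help for the exact options in your version:

```bash
# Upgrade an existing VCH in place (all values are placeholders)
./vic-machine-linux upgrade \
    --target vcenter.example.com \
    --user administrator@vsphere.local \
    --password 'VMware1!' \
    --name vch01 \
    --compute-resource cluster01
```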

With the release of vSAN 6.6 and vCenter 6.5d, you might want to test VIC 1.1 in your test/lab environment and leverage it to build a great platform for your development teams.


There is also a new demo video that shows the product and the updated user interfaces in more detail.

Time to update the lab!

vSphere Integrated Containers Engine 0.7 has been released


Only a few days ago, the vSphere Integrated Containers team released the newest version, 0.7, on GitHub and Bintray. I want to summarize a few resources for testing this release and document some gotchas that have already been raised. Remember: this code is still a beta release, so don’t deploy it to production immediately. You can also read up on the announcement of VIC as part of vSphere 6.5 in the official press release from VMworld.


During the installation, you can now specify a fixed IP address instead of DHCP for your Virtual Container Host (VCH) – this is one of the new features in the 0.7 release. Please make sure to use --dns-server with your vic-machine command to set the DNS server address in the VCH. Otherwise it will use the network gateway, which results in timeout errors during the installation. There is already an issue documented at https://github.com/vmware/vic/issues/3060.
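A static IP deployment with an explicit DNS server could then look roughly like this. All names and addresses below are placeholders, and the exact flag names may differ slightly between releases (see vic-machine create --help):

```bash
# Deploy a VCH with a static IP and an explicit DNS server (values are placeholders)
./vic-machine-linux create \
    --target vcenter.example.com \
    --user administrator@vsphere.local \
    --password 'VMware1!' \
    --name vch01 \
    --client-network-ip 192.168.1.50/24 \
    --client-network-gateway 192.168.1.1 \
    --dns-server 192.168.1.10
```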

If you deploy VIC in your environment and encounter any issues, please open an issue on GitHub (https://github.com/vmware/vic/issues). You can also reach out to me via Twitter and I’ll try to get back to you as soon as possible.

Reset to Standard vSwitch from Distributed vSwitch on homelab Intel NUC

I just had to reset my homelab Intel NUC’s ESXi 6.0 network configuration because I wanted to test a specific setting in vSphere Integrated Containers. Unfortunately, the Intel NUC only has one physical uplink, and that uplink (and the VMkernel Portgroup) was configured on a Distributed vSwitch – I needed it on a Standard vSwitch for the test. Migrating the VMkernel Portgroup from the Distributed to a Standard vSwitch was a little challenging, and I didn’t want to set up an external monitor to use the Direct Console User Interface (DCUI). But with the help of William’s ESXi virtual appliance and some hints in the vSphere documentation, I was able to reproduce the necessary keyboard inputs and perform the change with only a USB keyboard attached to the NUC. Instead of summarizing it only for myself, I thought I’d share it here, as I couldn’t find similar instructions on Google.

Please don’t do this in a production environment; blindly configuring a system isn’t a good idea.

tl;dr: the steps are: F2 – TAB – <root_password> – ENTER – DOWN – DOWN – DOWN – DOWN – ENTER – DOWN – ENTER – F11


What is actually going on, if you could see the DCUI? First, you need to press F2 (potentially in combination with “Fn” or similar, depending on your keyboard) to get into ESXi’s DCUI system management:

[Screenshot: ESXi DCUI system management]

It will ask you to authenticate first (pressing TAB – <root_password> – ENTER):

[Screenshot: DCUI authentication prompt]

Then, you need to go to “Network Restore Options” in the System Customization menu (pressing DOWN – DOWN – DOWN – DOWN – ENTER):

[Screenshot: System Customization menu with “Network Restore Options”]

And in the “Network Restore Options”, you’ll have the option to “Restore Standard Switch” (pressing DOWN – ENTER – F11):

[Screenshot: “Network Restore Options” menu with “Restore Standard Switch”]

After selecting “Restore Standard Switch”, you’ll need to confirm a dialog with F11, and a new vSwitch will be created on your host. Mine worked like a charm: I found a new Standard vSwitch with vmk0 using my “old” ESXi management IP address.