#vK8s – friends don’t let friends run Kubernetes on bare-metal

Over the past months, I have had multiple conversations about why you would want to virtualize containers or Kubernetes. The “containers already provide a form of virtualization – why should I do it at the server level as well?” myth has been around for some time now. Before I start addressing this, let me take a quick step back. When I started my career roughly 10 years ago in datacenter operations, virtualization wasn’t mainstream in many environments. I learned a lot about operating physical machines before I got to work on virtual infrastructures at scale. I also worked with multiple vendors and used several “Lights Out Management” solutions and their basic automation capabilities to get my hardware up and running. But it was always a “yes, it’s getting easier from now on” moment when vSphere was ready for configuration. While I enjoyed working in operations, I was always happiest when I could set something up without plugging in cables or working on a server in the datacenter.

I have worked with customers that fully embraced virtualization and have been 100% virtualized for years. They have benefited enormously from this move and were able to simplify many of their operational tasks along the way. Even if they chose a 1:1 VM-to-host mapping for a few extremely demanding VMs, this was still the better option. Having a consistent infrastructure and operational framework outweighs the potential drawbacks or “virtualization overhead” (another myth) if you look at the bigger picture. Even though I haven’t been working in operations for some time now, I still remember what it means to be called during the night or to deal with spontaneous changes in plans and projects all the time. And businesses, and therefore IT, are only moving faster – automation, “software-defined” everything, and constant improvement should be part of everyone’s daily business in operations.

For me, this applies to all workloads – from your traditional legacy applications to modern application runtime frameworks such as Kubernetes or event-driven architectures that leverage Functions-as-a-Service capabilities. Most of them co-exist all the time, and it’s not a one-or-the-other conversation but an AND conversation. Even highly demanding workloads such as core telco applications run on virtual infrastructure these days, enabled by automation and Open Source API definitions. All of these can be operated on a consistent infrastructure layer with a consistent operational model. Infrastructure silos have been broken down over the past decade, and VMware has invested a lot to make vSphere a platform for all workloads. So when someone mentions bare-metal these days, all I can ask myself is “why would I ever want to go back?” I sometimes wonder if all the challenges that virtualization took away have simply been forgotten – it just ran too well.

So what are my personal reasons to run containers on a virtual infrastructure, and vSphere specifically?

  1. Agility, Independence & Abstraction: scale, repair, lifecycle & migrate underlying components independently from your workloads; if you ever worked in operations, this is daily business (datacenter move, new server vendor selected, major storage upgrades, … there are tons of reasons why this is still a thing)
  2. Density: run multiple K8s clusters/tenants on the same hardware cluster and avoid idle servers, e.g. due to N+1 availability concepts
  3. Availability and QoS: you can plan for failures without compromising density, and you can even ensure SLOs/SLAs by enforcing policies (networking, storage, compute, memory) that remain in force during outages (NIOC, SIOC, Resource Pools, Reservations, Limits, …)
  4. Performance: better-than-physical performance & resource management (core ESXi scheduling, DRS & vMotion, vGPUs, …)
  5. Infrastructure as Code: automate all the things on an API-driven Software Defined Datacenter stack
  6. Security & Isolation: yep, still a thing
  7. Fun fact: even Google demoes K8s on vSphere as part of their “GKE on-prem” offering 😉 
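To make the density point (#2) above concrete, here is a toy sizing calculation. All numbers (cluster count, node count, consolidation ratio) are made-up assumptions for illustration only – your mileage will vary with real workloads:

```python
import math

clusters = 3            # independent K8s clusters/tenants (assumed)
nodes_per_cluster = 4   # worker nodes each cluster needs (assumed)
vms_per_host = 4        # assumed consolidation ratio of node VMs per ESXi host

# Bare metal: one server per node, plus a dedicated N+1 spare per cluster.
bare_metal_hosts = clusters * (nodes_per_cluster + 1)

# Virtualized: node VMs are packed onto a shared hardware pool, and a
# single N+1 failover host covers all tenants at once.
virtualized_hosts = math.ceil(clusters * nodes_per_cluster / vms_per_host) + 1

print(bare_metal_hosts, virtualized_hosts)  # 15 vs. 4
```

Even with generous assumptions for bare metal, pooling the spare capacity and packing node VMs onto shared hosts cuts the server count dramatically.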

There has been a ton of material published around this topic recently (including some awesome foundational work by Michael Gasch and his KubeCon talk); I want to list a few of the public resources here:

Introducing: #vK8s

So, no matter what your favorite Kubernetes framework is these days – I am convinced it runs best on a virtual infrastructure, and of course even better on vSphere. Friends don’t let friends run Kubernetes on bare-metal. And what hashtag could summarize this better than something short and crisp like #vK8s? I liked this idea so much that I created some “RUN vK8s” images (inspired by my colleagues Frank Denneman and Duncan Epping – guys, it’s been six years since RUN DRS!) that I want to share with all of you. You can find the repository on GitHub – feel free to use them wherever you like.

CNA weekly #009

The good thing about flight delays and spending time in hotel rooms is that it finally gives me the opportunity to do some long overdue work on the CNA weekly. There are so many things that I want to share in this edition and I hope you’ll find it useful again.

Let me start with a loud shout-out to the global Harbor community. I am extremely happy to see this great open source project receiving some well-deserved recognition: Harbor has joined the Cloud Native Computing Foundation (CNCF) and is now its newest sandbox project!

As many of you know, besides its highly successful existence in the Open Source community, Harbor is also an important piece in VMware’s Cloud-Native Applications efforts, specifically in vSphere Integrated Containers as well as Pivotal Container Service. Both of them saw several updates since the last edition of the weekly: PKS 1.1 is now available (incl. K8s 1.10, Multi Availability Zone support, Multi-Master in beta, …) and VIC 1.4 has also been released. Check out the sections below for more details and links to the downloads.

But wait, there is more: VMware also announced a new cloud service called VMware Kubernetes Engine (VKE). VKE will be a multi-cloud managed Kubernetes-as-a-Service offering with some pretty unique features like the “Smart Cluster” implementation that picks the optimal instance types for your k8s cluster, and much more. Right now it is built natively on AWS, but it will head to Azure as well – and you can manage both with the same set of policies! Learn more about VKE in the links below, where you can also sign up for the beta.

Another topic that is very close to my heart: how do you want to run your containers and platforms? When I started my career in IT in a large organization, I quickly learned the value and benefits that virtualization brings not only to the consumers but also to the operators of the infrastructure. And running containers is no exception here. Make sure to look into a great new whitepaper (“Containers on Bare-Metal or Virtual Machines?“) and look out for a must-watch VMworld 2018 session presented by Michael Gasch and Frank Denneman.

But let’s move on to some content:

Open Source & Community updates

Harbor

Pivotal Container Service (PKS)

VMware Kubernetes Engine

vSphere Integrated Containers

Function-as-a-Service & Serverless

Platform Reliability Engineering & Operations

Other news from VMware

Keeping it fun

CNA weekly #008

Hello everyone,

After some pretty exciting weeks, I am finally back with a new edition of my CNA weekly. I had the pleasure of attending KubeCon in Copenhagen and I am still amazed by all the great sessions and the collaborative culture across the event. I had so many energizing conversations and can only confirm the observations that my colleague Tim shared in his blogpost about the “hallway track”.


vSphere Integrated Containers

Pivotal Container Service

Open Source

KubeCon 2018 

Other News

CNA weekly #007

Hello everyone, 

Wow – what an exciting time! While I am super energized about all the news in cloud-native land, I always love to see a new generation of VMware’s core stack being released. The hypervisor and its management software (vSphere) got a significant update, and there are tons of highly interesting changes that could also impact cloud-native platforms running on top of it. Just check out e.g. VMFork, which is now part of the core hypervisor in version 6.7, and imagine the possibilities. What is VMFork, you ask? VMFork enables “forking” instances of a live, powered-on VM, each with its own unique identity. By leveraging the existing linked-clone technology for disks and extending the hypervisor to enable copy-on-write memory and VM state, VMFork allows near-instant creation of VMs with little CPU overhead. William Lam already blogged about some examples as well.
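For intuition only – this is an OS-level analogy, not the vSphere API: the copy-on-write behaviour described above works much like forking a process on Linux, where the child starts as a live copy of the parent and only the memory it actually writes to gets duplicated. A minimal Python sketch:

```python
import os

# Parent "VM" state that exists before the fork.
state = {"image": "golden", "counter": 0}

r, w = os.pipe()
pid = os.fork()  # child starts as a live copy of the parent, sharing pages copy-on-write

if pid == 0:
    # Child: its write triggers a private copy of the touched memory page.
    os.close(r)
    state["counter"] += 1
    os.write(w, str(state["counter"]).encode())
    os._exit(0)

# Parent: its view of `state` is untouched by the child's write.
os.close(w)
child_counter = int(os.read(r, 16))
os.waitpid(pid, 0)
print("parent:", state["counter"], "child:", child_counter)  # parent: 0 child: 1
```

VMFork extends this idea from processes to whole VMs, combining copy-on-write memory with linked clones for disks.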

But I am just scratching the surface here – learn more about the updates from the linked blogposts below… and now into the content!

The world is gathering in Hannover/Germany this week for the Hannover Messe to discuss Industry 4.0 and Digital Transformation across industries. VMware is showcasing some of the solutions at the edge and IoT space as well – find out more in the video on Hannover Messe’s website.

My great colleague Tom Scanlan worked on a slightly smaller IoT & Kubernetes use case and released a new blog article titled “Winery Application Demo: IoT Pipeline with Kubernetes on vSphere” – a great read!

As I mentioned in a previous edition of my updates, there is now a dedicated Special Interest Group (SIG) in the Kubernetes community that is focused on VMware. Our colleagues outlined the structure, the purpose, and how to engage with the SIG in a recent blog article.

William Lam keeps blogging about Pivotal Container Service. This week it’s about a Monitoring Tool Overview – by the way, the Wavefront team also released a blog article on “Develop Cloud-Native Applications, Leave the Heavy Lifting of Monitoring to Wavefront”, which includes a pretty cool 8-minute demo!

Besides William’s excellent series, my colleague Cormac Hogan also shared a great blogpost around a very simple Pivotal Container Service deployment.

There is now also a free new ebook on “Accelerating Digital Transformation with Containers and Kubernetes” available from VMware. The book “introduces you to containers and Kubernetes, explains their business value, explores their use cases, and illuminates how they can accelerate your organization’s digital transformation”.

KubeCon is just around the corner. Make sure to add The NewStack Pancake Breakfast & Podcast: Securing Kubernetes to your schedule! Please reach out if you are attending KubeCon – VMware is a Diamond Sponsor this year and we will have a presence there as well!


Some additional news and updates:

All vSphere 6.7 release notes & download links

New vSphere 6.7 APIs worth checking out

New Instant Clone Architecture in vSphere 6.7 – Part 1

CNA weekly #006

Happy Monday everyone!

Another exciting week has passed and I had several great meetings across the region. And I love my new PKS socks – socks are the new stickers 😉

But let’s take a look at some of the content from last week:

William Lam (@lamw) has continued his work on the Getting started with VMware Pivotal Container Service (PKS) blogpost series with a seventh part about the integration with the container registry Harbor (after Overview, PKS Client, NSX-T, Ops Manager and BOSH, PKS Control Plane, and Kubernetes Go!)

Pivotal Container Service (PKS) 1.0.2 was released last week. It includes a minor update to K8s 1.9.5 and several enhancements; find out more in the Download and Release Notes.

Speaking of PKS: there will be a PKS roadshow across the US, coming to cities across Europe very soon as well. Make sure to sign up here if you are interested in learning more. I’ll publish additional dates as soon as they are released.

At the same time, NSX-T and the NSX Container Plugin (NCP) have been released in version 2.1.2. NSX-T now supports Kubernetes 1.10.

Speaking of NSX-T: the team just released a Terraform provider for NSX-T and demonstrates its capabilities in a 20min video focused on Infrastructure as Code with NSX-T. 

vRealize Automation 7.4 has been released! There are many great updates listed on the Overview Blogpost, but you can also find out more in the Download and Release Notes.

VMware Distributed Resource Scheduler (DRS) has been embedded in VMware’s core virtualization product for over a decade, and VMware customers are leveraging its algorithms to let the infrastructure load-balance itself in a “driverless” fashion. A newly released whitepaper gives some insights into what’s new and current in vSphere 6.5’s implementation of DRS and explains many of the concepts and metrics in more detail.

I somehow missed the release of a great whitepaper titled “Performance of Enterprise Web Applications in Docker Containers on VMware vSphere 6.5”.

Dispatch Framework 0.1.11 has been released as well – “Lots of new stuff including open service broker support for services and language packs to easily expand supported language runtimes.”

An interesting perspective, backed by real-world findings, is included in the blogpost called “Another reason why your Docker containers may be slow”. Quoting the article: “It re-iterates on the fact that containerization != virtualization and demonstrates how containerized processes can compete for resources even if all cgroup limits are set to reasonable values, and there’s plenty of computing power available on the host machine.”