Sunday, September 10, 2017

Kubernetes container services at scale with Dragonflow SDN Controller

The cloud-native ecosystem is getting very popular, but VM-based workloads are not going away. Enabling developers to connect VMs and containers to run hybrid workloads means shorter time to market, a more stable production environment and the ability to leverage the maturity of the VM ecosystem.

Dragonflow is a distributed, modular and extensible SDN controller that can connect cloud network instances (VMs, containers and bare-metal servers) at scale. Kuryr allows you to use Neutron networking to connect the containers in your OpenStack cloud. Combining the two allows you to use the same networking solution for all workloads.

In this post I will briefly cover both Dragonflow and Kuryr, explain how Dragonflow supports Kubernetes cluster networking, and provide details about various Kubernetes cluster deployment options.

Introduction

Dragonflow Controller in a nutshell

Dragonflow adopts a distributed approach to solve the scaling issues of large deployments. With Dragonflow, the load is distributed to the compute nodes, each running a local controller. Dragonflow manages the network services for the OpenStack compute nodes by distributing the network topology and policies to the compute nodes, where they are translated into OpenFlow rules and programmed into the Open vSwitch datapath.
Network services are implemented as applications in the local controller.
OpenStack can use Dragonflow as its network provider through the Modular Layer 2 (ML2) Plugin.

Kuryr

Project Kuryr uses OpenStack Neutron to provide networking for containers. With kuryr-kubernetes, the Kuryr project enables native Neutron-based networking for Kubernetes.
Kuryr provides a solution for hybrid workloads, enabling bare-metal servers, virtual machines and containers to share the same Neutron network or to use different routable network segments.


Kubernetes - Dragonflow Integration

To leverage the Dragonflow SDN controller as the Kubernetes network provider, we use Kuryr as the Container Network Interface (CNI) between Kubernetes and Dragonflow.


Diagram 1: Dragonflow-Kubernetes integration


The Kuryr controller watches the Kubernetes API for events and translates them into Neutron models. Dragonflow translates Neutron model changes into a network topology that is stored in the distributed database, and propagates network policies to its local controllers, which apply the changes to the Open vSwitch pipeline.
The Kuryr CNI driver binds Kubernetes pods on the worker nodes to Dragonflow logical ports, ensuring the requested level of isolation.
As you can see in the diagram above, there is no kube-proxy component. Kubernetes Services are implemented with the help of Neutron load balancers: the Kuryr controller translates a Kubernetes Service into a Load Balancer, a Listener and a Pool, and the Service endpoints are mapped to members of the pool. See the following diagram:
Diagram 2: Kubernetes service translation
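As a concrete sketch of this translation, consider a plain Kubernetes Service manifest (all names, ports and labels here are hypothetical):

```yaml
# Hypothetical example: a plain Kubernetes Service.
# The Kuryr controller would translate it roughly as follows:
#   Service           -> Neutron Load Balancer (VIP = the Service's cluster IP)
#   Service port (80) -> Listener on port 80, with a Pool behind it
#   Endpoints         -> Pool members (one per backing pod), on target port 8080
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  selector:
    app: demo
  ports:
  - port: 80
    targetPort: 8080
```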

Currently, either Octavia or HAProxy can be used as the Neutron LBaaSv2 provider. In the Queens release, Dragonflow will provide a native LBaaS implementation, as drafted in the following specification.

Deployment Scenarios

With Kuryr-Kubernetes you can choose to run both OpenStack VMs and Kubernetes pods on the same network provided by Dragonflow, if your workloads require it, or to use different network segments and, for example, route between them. Below you can see the details of the various scenarios, including devstack recipes.

Bare Metal deployment

A Kubernetes cluster can be deployed on bare-metal servers. Logically, there are three different types of servers.

OpenStack controller hosts - run the required control services, such as the Neutron server, Keystone and the Dragonflow Northbound Database. These can, of course, be distributed across a number of servers.

K8s master hosts - run the components that provide the cluster's control plane. The Kuryr controller is part of the cluster control plane.

K8s worker nodes - host the components that run on every node, maintaining the running pods and providing the Kubernetes runtime environment.

Kuryr-CNI is invoked by the kubelet. It binds pods to the Open vSwitch bridge that is managed by the Dragonflow controller.


If you want to try a bare-metal deployment with devstack, you should enable the Neutron, Keystone, Dragonflow and Kuryr components. You can use this local.conf:
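A minimal sketch of what such a local.conf might contain is shown below. The plugin URLs and service names follow the usual devstack conventions for Dragonflow and kuryr-kubernetes; the exact file referenced by the post may enable additional services:

```ini
[[local|localrc]]
# Illustrative sketch only -- adapted from typical Dragonflow/Kuryr devstack setups.
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD

# Neutron with Dragonflow as the network provider
enable_plugin dragonflow https://git.openstack.org/openstack/dragonflow
enable_service df-controller

# Kuryr controller and CNI, plus the Kubernetes services
enable_plugin kuryr-kubernetes https://git.openstack.org/openstack/kuryr-kubernetes
enable_service kuryr-kubernetes
enable_service kubernetes-api
enable_service kubernetes-controller-manager
enable_service kubernetes-scheduler
enable_service kubelet
```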


Nested (Containers in VMs) deployment

Another deployment option is nested-VLAN, where containers are created inside OpenStack VMs using Neutron trunk-port support. The undercloud OpenStack environment has all the components needed to create VMs (e.g., Glance, Nova, Neutron, Keystone, ...), as well as the required Dragonflow configuration, such as trunk support, which lets the containers running inside a VM use the undercloud networking. The overcloud deployment inside the VM contains the Kuryr components alongside the Kubernetes control-plane components.


If you want to try the nested-VLAN deployment with devstack, you can use the Dragonflow-Kuryr bare-metal config with the following changes:
  1. Do not enable the kuryr-kubernetes plugin and the Kuryr-related services, as they will be installed inside the VM.
  2. Enable the Nova and Glance components so that you can create the VM in which the overcloud will be installed.
  3. Enable the Dragonflow Trunk service plugin to ensure trunk-port support.
Then create a trunk and spawn the overcloud VM on the trunk port.
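With the OpenStack CLI, the trunk creation step might look roughly like this (the network, image and flavor names are placeholders):

```shell
# Illustrative sketch -- resource names are placeholders.
# Create the parent port and a trunk on top of it.
openstack port create --network private overcloud-parent-port
openstack network trunk create --parent-port overcloud-parent-port overcloud-trunk

# Boot the overcloud VM on the trunk's parent port.
openstack server create --image ubuntu-xenial --flavor m1.large \
    --nic port-id=overcloud-parent-port overcloud-vm
```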
Install the overcloud, following the instructions listed here.


Hybrid environment

A hybrid environment enables diverse use cases where containers, regardless of whether they are deployed on bare metal or inside virtual machines, share the same Neutron network with co-located VMs.
To bring up such an environment with devstack, just follow the instructions in the nested deployment section.

Testing the cluster
Once the environment is ready, we can test that network connectivity works among the Kubernetes pods and services. You can check the cluster configuration according to this default configuration guide, run a simple example application, and verify the connectivity and the configuration reflected in the Neutron and Dragonflow data models. Just follow the instructions to try the sample kuryr-kubernetes application.
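Assuming a working kubectl context, a quick smoke test might look like the following sketch (the deployment name is a placeholder, and the exact load-balancer listing command depends on whether Octavia or the Neutron LBaaSv2 CLI is in use):

```shell
# Illustrative smoke test -- names are placeholders.
# Start a simple HTTP server and expose it as a Service.
kubectl run demo --image=celebdor/kuryr-demo --port 8080
kubectl scale deployment demo --replicas=2
kubectl expose deployment demo --port 80 --target-port 8080

# The Service's cluster IP should appear as a Neutron load balancer VIP,
# and the pods as pool members. Check both sides:
kubectl get svc demo
neutron lbaas-loadbalancer-list
```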

Resources

