Friday, 26 February 2021

Docker Networking Types - Bridge - Overlay - Macvlan Networks

 

What is Docker Networking?

Docker containers can talk to each other and to the outside world through the host machine. This is facilitated by a layer of networking. Docker has various types of networks which can be used for different use cases.

Different kinds of applications running on Docker need different network setups. An application could be stand-alone, could depend on a database or load balancer, or could require multiple Docker containers. Thus networking plays a major role in both internal communication and interaction with the outside world.

Here is how to run a simple static website in a Docker container:

https://www.youtube.com/watch?v=4PvlcTtaAhw

Default Networking (docker0) In Docker

docker0 is the default bridge network which gets created when Docker is installed. Whenever a Docker container is spawned, the docker0 network is attached to it automatically, unless another network is specified while launching the container.
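
For example, a quick way to see the default bridge in action is to list the networks and then start a container without specifying any network (the nginx image and the container name demo-web below are just placeholders):

$ docker network ls                       # shows the built-in bridge, host and none networks
$ docker run -d --name demo-web nginx     # no --network given, so the container joins the default bridge
$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' demo-web   # prints the IP assigned from the docker0 subnet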

Other than docker0, there are two more networks which Docker creates automatically (a quick example follows the list):

  • host - No isolation between the host and containers on this network; to the outside world they are on the same network.
  • none - Attached containers run on a container-specific network stack.
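
As a quick illustration (the busybox image is used here only as an example), a container on the none network sees only a loopback interface, while a container on the host network sees the host's own interfaces:

$ docker run --rm --network none busybox ip addr   # only the loopback interface is visible
$ docker run --rm --network host busybox ip addr   # shows the host machine's network interfaces
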
Free Docker Course - https://www.youtube.com/watch?v=JPpmI0ovjdo&list=PLovNQ_x19aUgRwQP-ICRs69bqBrE6Ux-V

Types of Docker Network

Docker comes with network drivers geared towards different use cases. The most common network types are:

  • Bridge Networks
  • Overlay Networks
  • Macvlan Networks

1. Overlay Networks

On top of the physical network, an overlay network uses software virtualization to create an additional layer of network abstraction. Generally, the overlay network driver is used for multi-host communication. To provide portability between cloud, on-premise and virtual environments, the overlay driver uses Virtual Extensible LAN (VXLAN). VXLAN solves portability issues by extending layer 2 subnets across layer 3 network boundaries, so containers can run on foreign IP subnets.

To create an overlay network named "example-overlay-net", you can use the --subnet parameter to specify the network block that Docker will use to assign IP addresses to the containers (note that overlay networks typically require the Docker engine to be part of a swarm). For example:

$ docker network create -d overlay --subnet=192.168.10.0/24 example-overlay-net
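
Overlay networks are typically consumed by swarm services, so the sketch below assumes the host is already a swarm manager (docker swarm init); the service name web and the nginx image are placeholders:

$ docker service create --name web --network example-overlay-net nginx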

2. Bridge Networks

The most common Docker network type is the bridge network. It is limited to containers within a single host running the Docker engine. Bridge networks are easy to create, manage and troubleshoot.

To communicate with or be reachable from the outside world, containers running in bridge network mode need port mapping. For example, suppose a web application running in a Docker container listens on port 80. Because the container is attached to the bridge network on a private subnet, a port on the host system such as 8080 needs to be mapped to port 80 on the container before outside traffic can reach the web application.

To create a bridge network named example-bridge-net, pass the argument bridge to the -d (driver) parameter as shown below:

$ docker network create -d bridge example-bridge-net
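
For instance, to attach a container to this network and make its web server reachable from outside, publish a host port to the container port (the nginx image and the 8080:80 mapping are illustrative):

$ docker run -d --name webapp --network example-bridge-net -p 8080:80 nginx
$ curl http://localhost:8080    # traffic to host port 8080 is forwarded to port 80 in the container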

3. Macvlan Networks

The macvlan driver uses layer 2 segmentation to connect Docker containers directly to the host's network interfaces. In this case there is no need for port mapping or network address translation (NAT): a container can be assigned a public IP address which is directly reachable from the outside world.

The benefit of a macvlan network is low latency, because packets are routed directly from the Docker host's network interface to the containers.

The disadvantage is that macvlan must be configured per host and requires support from the physical NIC; the parent can be a physical interface, a sub-interface, a bonded interface or even a teamed interface.

Traffic is explicitly filtered by the host kernel modules for isolation and security. To create a macvlan network named my-macvlan-net, you’ll need to provide a --gateway parameter to specify the IP address of the gateway for the subnet, and a -o parameter to set driver-specific options. In this example, the parent interface is set to the eth0 interface on the host:

$ docker network create -d macvlan \
  --subnet=192.168.40.0/24 \
  --gateway=192.168.40.1 \
  -o parent=eth0 my-macvlan-net
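
A container attached to this network can optionally be given a fixed address from the subnet; the address and the nginx image below are just examples:

$ docker run -d --name mac-web --network my-macvlan-net --ip 192.168.40.10 nginx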

How Containers Communicate with Each Other

Different networks provide different communication patterns between containers (for example, by IP address only, or by container name), depending on the network type and on whether it is a Docker default network or a user-defined network.

Container discovery (DNS resolution)

Docker assigns a name and hostname to each container, unless a different name/hostname is specified by the user. On user-defined networks, Docker's embedded DNS server maps each container name to the container's IP address, which allows pinging a container by name instead of by IP address. On the default docker0 bridge, this automatic name resolution is not available: containers there can reach each other only by IP address, or via the legacy --link option described below.
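
A minimal sketch of name-based discovery on a user-defined bridge (the network, container and image names are placeholders):

$ docker network create app-net
$ docker run -d --name web --network app-net nginx
$ docker run --rm --network app-net busybox ping -c 1 web   # "web" is resolved by Docker's embedded DNS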

Furthermore, consider the following example which starts a Docker container with a custom name, hostname and DNS server:

$ docker run --name test-container -it \
--hostname=test-con.example.com \
--dns=8.8.8.8 \
ubuntu /bin/bash

In this example, processes running inside test-container, when confronted with a hostname not in /etc/hosts, will connect to address 8.8.8.8 on port 53 expecting a DNS service.
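
Assuming the container from the example above is still running, the custom settings can be verified from inside it:

$ docker exec test-container cat /etc/resolv.conf   # lists nameserver 8.8.8.8
$ docker exec test-container hostname               # prints test-con.example.com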

Directly linking containers

It is possible to directly link one container to another using the --link option when starting a container. This allows containers to discover each other and securely transfer information from one container to another. However, Docker has deprecated this feature and recommends creating user-defined networks instead.

As an example, imagine you have a mydb container running a database service. We can then create an application container named myweb and directly link it to mydb:

$ docker run --name myweb --link mydb:mydb -d -P myapp python app.py
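
The recommended, non-deprecated way to achieve the same communication is to place both containers on a user-defined network; the sketch below reuses the names from the example (the database image name is a placeholder):

$ docker network create app-backend
$ docker run -d --name mydb --network app-backend some-database-image
$ docker run -d --name myweb --network app-backend -P myapp python app.py   # myweb can now reach mydb by name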

How Containers Communicate with the Outside World

There are different ways in which Docker containers can communicate with the outside world, as detailed below.

Docker - Exposing Ports and Forwarding Traffic

In most cases, Docker networks use subnets without access from the outside world. To allow requests from the Internet to reach a container, you’ll need to map container ports to ports on the container’s host. For example, a request to hostname:8000 will be forwarded to whatever service is running inside the container on port 80, provided a mapping from host port 8000 to container port 80 was previously defined.
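
A hedged example of such a mapping, with the nginx image standing in for any service listening on port 80:

$ docker run -d --name portdemo -p 8000:80 nginx
$ docker port portdemo           # shows the host-to-container port mappings
$ curl http://localhost:8000     # forwarded to port 80 inside the container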

Containers Connected to Multiple Networks

Fine-grained network policies for connectivity and isolation can be achieved by joining containers to multiple networks. By default each container will be attached to a single network. More networks can be attached to a container by creating it first with docker create (instead of docker run) and then running the command docker network connect. For example:

$ docker network create net1 # creates a bridge network named net1
$ docker network create net2 # creates a bridge network named net2
$ docker create -it --net net1 --name cont1 busybox sh # creates a container named cont1 attached to network net1
$ docker network connect net2 cont1 # additionally attaches container cont1 to network net2

The container is now connected to two distinct networks simultaneously.
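
Both attachments can be confirmed once the container is started:

$ docker start cont1
$ docker inspect --format '{{json .NetworkSettings.Networks}}' cont1   # lists endpoints for both net1 and net2
$ docker network inspect net2                                          # cont1 now appears in the "Containers" section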

How IPv6 Works on Docker

By default, Docker configures container networks for IPv4 only. To enable IPv4/IPv6 dual stack, the --ipv6 flag needs to be applied when starting the Docker daemon; the docker0 bridge then gets the IPv6 link-local address fe80::1. To assign globally routable IPv6 addresses to your containers, use the --fixed-cidr-v6 flag followed by an IPv6 subnet.
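
A minimal sketch of starting the daemon with dual stack enabled; the 2001:db8:1::/64 prefix is a documentation-only example, so substitute a prefix that is routable in your environment:

$ dockerd --ipv6 --fixed-cidr-v6 2001:db8:1::/64

The same settings can also be placed in the daemon configuration file (/etc/docker/daemon.json) using the ipv6 and fixed-cidr-v6 keys.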

Common Operations

Some common operations with Docker networking include the following; a combined example is shown after the list:

  • Inspect a network: To see a specific network’s configuration details like subnet information, network name, IPAM driver, network ID, network driver, or connected containers, use the docker network inspect command.
  • List all networks: Run docker network ls to display all networks (along with their type and scope) present on the current host.
  • Create a new network: To create a new network, use the docker network create command and specify if it’s of type bridge (default), overlay or macvlan.
  • Run or connect a container to a specific network: Note that the network must already exist on the host. Either specify the network at container creation/startup time (docker create or docker run) with the --net option, or attach an existing container using the docker network connect command. For example:
docker network connect my-network my-container
  • Disconnect a container from a network: The container must be running to disconnect it from the network using the docker network disconnect command.
  • Remove an existing network: A network can only be removed using the command docker network rm if there are no containers attached to it. When a network is removed, the associated bridge will be removed as well.
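
Putting these operations together (my-network, my-container and the nginx image are placeholder names):

$ docker network ls                                           # list all networks on the host
$ docker network create my-network                            # create a new network (bridge is the default driver)
$ docker network inspect my-network                           # show subnet, gateway, connected containers, etc.
$ docker run -d --name my-container --net my-network nginx    # start a container attached to the network
$ docker network disconnect my-network my-container           # detach the running container from the network
$ docker network rm my-network                                # remove the network once nothing is attached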

Docker Networking with Multiple Hosts

When working with multiple hosts, higher-level Docker orchestration tools are needed to ease management of networking across a cluster of machines. Popular orchestration tools today include Docker Swarm, Kubernetes, and Apache Mesos.

Docker Swarm

Docker Swarm is a Docker Inc. native tool used to orchestrate Docker containers. It enables you to manage a cluster of hosts as a single resource pool.

Docker Swarm makes use of overlay networks for inter-host communication. The swarm manager service is responsible for automatically assigning IP addresses to the containers.

For service discovery, each service in the swarm gets assigned a unique DNS name. Additionally, Docker Swarm has an embedded DNS server. You can query every container running in the swarm through this embedded DNS server.
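
As a brief sketch (the service name web, the replica count and the nginx image are illustrative), an overlay network for swarm services is created and consumed like this:

$ docker swarm init                                        # make this host a swarm manager
$ docker network create -d overlay my-swarm-net
$ docker service create --name web --replicas 2 --network my-swarm-net nginx
$ docker service ls                                        # each service is also reachable by its name through the swarm's embedded DNS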

Kubernetes

Kubernetes is a system used for automating deployment, scaling, and management of containerized applications, either on a single host or across a cluster of hosts.

Kubernetes approaches networking differently from Docker, using native concepts like services and pods. Each pod has an IP address; no linking of pods is required, nor do you need to explicitly map container ports to host ports. DNS-based service discovery plugins can be used for service discovery.

Apache Mesos

Apache Mesos is an open-source project used to manage a cluster of containers, providing efficient resource sharing and isolation across distributed applications.

Mesos uses an IP address management (IPAM) server and client to manage container networking. The role of the IPAM server is to assign IP addresses on demand, while the IPAM client acts as a bridge between a network isolator module and the IPAM server. A network isolator module is a lightweight module loaded into the Mesos agent; it looks at the scheduler's task requirements and uses the IPAM and network isolator services to provide IP addresses to the containers.

Mesos-dns is a DNS-based service discovery for Mesos. It allows applications and services running on Mesos to find each other through the DNS service.

Here is the link to a short and precise free Kubernetes course - https://www.youtube.com/watch?v=Ut7qSWUZJ1M&list=PLovNQ_x19aUgNuFSNXGq6MLfFUQQP-ilk

Creating a New Network Driver Plugin

Docker plugins are out-of-process extensions which add capabilities to the Docker Engine. The Docker Engine network plugin API allows support for a wide range of networking technologies. Once a networking plugin has been developed and installed, it behaves just like the built-in bridge, overlay and macvlan network drivers.
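
Installing and using a third-party network plugin generally follows the pattern below; the plugin name vendor/sample-net-plugin is purely a placeholder, so check the vendor's documentation for the real name and options:

$ docker plugin install vendor/sample-net-plugin              # plugin name is a placeholder
$ docker network create -d vendor/sample-net-plugin my-plugin-net
$ docker run -d --net my-plugin-net nginx                     # containers use the plugin-backed network like any other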

Summary

Docker offers a mature networking model. There are three common Docker network types: bridge networks, used within a single host; overlay networks, for multi-host communication; and macvlan networks, which connect Docker containers directly to host network interfaces.

In this page we explained how Docker containers discover and communicate with each other and how they communicate with the outside world. We showed how to perform common operations such as inspecting a network, creating a new network and disconnecting a container from a network. Finally, we briefly reviewed how Docker networking works in the context of common orchestration platforms: Docker Swarm, Kubernetes and Apache Mesos.


