
Docker bridged networking on CentOS 7: putting containers on the host's network with independent IPs and ports

Dec 21 | Author: admin | Category: Container Virtualization


Why bridge Docker onto the physical network?


By default Docker provides an isolated internal network: on startup it creates a virtual interface named docker0, and every container attaches to it. docker0 sits at 172.17.0.1 on the 172.17.0.0/16 range, so for other machines on the host's subnet to reach a container, you must publish ports to the host when starting it, for example: docker run -itd -p 22 centos (which maps container port 22 to a random high host port). That quickly becomes unmanageable: every application needs its own carefully chosen, non-clashing host ports, and an application with many ports makes it far worse. To put containers on the same subnet as the host, we build our own bridge network instead.
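
For comparison, this is what the default NAT workflow looks like; the host port numbers are illustrative, and the centos7-ssh image (used later in this article) is assumed to run sshd:

# Default docker0/NAT behaviour: every service needs its own unique host port.
docker run -itd -p 2201:22 --name app1 centos7-ssh   # first sshd -> host port 2201
docker run -itd -p 2202:22 --name app2 centos7-ssh   # second sshd -> host port 2202
docker port app1                                     # prints e.g.: 22/tcp -> 0.0.0.0:2201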

 

Setting up the Docker bridge to the physical network on a CentOS 7 host


Host physical NIC configuration:

DEVICE=ens33
IPADDR=192.168.179.99
GATEWAY=192.168.179.2
NETMASK=255.255.255.0
DNS1=114.114.114.114
DNS2=8.8.8.8

Creating the bridge to the physical network

(1) Create a br0 bridge (requires bridge-utils; verify with brctl show)

(2) Move the physical NIC's IP, netmask, gateway, and DNS (or DHCP) settings onto br0

(3) Remove those settings from the physical NIC's configuration

(4) Enslave the physical NIC to br0 (a transient command-line equivalent is sketched below)
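
The same four steps can also be done transiently from the command line; a sketch only, and run it on a local console rather than over SSH, since connectivity drops between steps (3) and (4):

brctl addbr br0                                   # (1) create the bridge
ip addr add 192.168.179.99/24 dev br0             # (2) put the host's IP on br0
ip link set br0 up
ip addr flush dev ens33                           # (3) strip the IP from the physical NIC
brctl addif br0 ens33                             # (4) enslave ens33 to br0
ip route add default via 192.168.179.2 dev br0    # restore the default route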

 

The steps in detail


Attach the physical NIC to a custom br0 bridge

[root@localhost ~]# cd  /etc/sysconfig/network-scripts/
[root@localhost network-scripts]# cp ifcfg-ens33 ifcfg-ens33.bak
[root@localhost network-scripts]# vim ifcfg-ens33
BRIDGE="br0"

# Add this line anywhere in the file: it names the bridge, so br0's traffic flows through ens33
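
After the edit, a minimal ifcfg-ens33 might look like this (which other keys remain depends on your original file; the IP settings move to br0 in the next step):

DEVICE=ens33
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=none
BRIDGE="br0"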

Create the br0 interface itself (ens33 is the physical NIC; br0 is the bridge):

[root@localhost network-scripts]# cp ifcfg-ens33 ifcfg-br0
[root@localhost network-scripts]# vim ifcfg-br0 
BOOTPROTO="static"
DEVICE="br0"
ONBOOT="yes"
TYPE="Bridge"  #桥接网卡,不是物理网卡,流量会桥接到真实网卡上面去
IPADDR=192.168.179.99
GATEWAY=192.168.179.2
NETMASK=255.255.255.0
DNS1=114.114.114.114
DNS2=8.8.8.8
[root@localhost network-scripts]# ifup ifcfg-br0 
[root@localhost network-scripts]# systemctl restart network
[root@localhost network-scripts]# 

[root@localhost network-scripts]# ifconfig
br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.179.99  netmask 255.255.255.0  broadcast 192.168.179.255
        inet6 fe80::20c:29ff:feb4:8832  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:b4:88:32  txqueuelen 1000  (Ethernet)
        RX packets 19  bytes 1288 (1.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 18  bytes 1668 (1.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:55ff:feda:5064  prefixlen 64  scopeid 0x20<link>
        ether 02:42:55:da:50:64  txqueuelen 0  (Ethernet)
        RX packets 103  bytes 6328 (6.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 71  bytes 78767 (76.9 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 00:0c:29:b4:88:32  txqueuelen 1000  (Ethernet)
        RX packets 8524  bytes 8954821 (8.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2524  bytes 278175 (271.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@localhost ~]# brctl show
bridge name	bridge id		STP enabled	interfaces
br0		8000.000c29934261	no		ens33
docker0		8000.0242b435f546	no		veth5404ab5

Take docker0 down and delete it

[root@localhost network-scripts]# ifconfig docker0 down
[root@localhost network-scripts]# ifconfig
br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.179.99  netmask 255.255.255.0  broadcast 192.168.179.255
        inet6 fe80::20c:29ff:feb4:8832  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:b4:88:32  txqueuelen 1000  (Ethernet)
        RX packets 159  bytes 10966 (10.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 84  bytes 10820 (10.5 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 00:0c:29:b4:88:32  txqueuelen 1000  (Ethernet)
        RX packets 8664  bytes 8966459 (8.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2591  bytes 288933 (282.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0


[root@localhost ~]# brctl delbr docker0
[root@localhost ~]# brctl show
bridge name	bridge id		STP enabled	interfaces
br0		8000.000c29934261	no		ens33

From now on containers will use the br0 bridge instead of docker0; once a working bridge onto the physical network exists, docker0 is redundant (and with -b br0 configured below, the daemon will not recreate it).

At this point every container's IP sits on the same subnet as the physical host: containers reach the outside directly, other machines can connect to the containers, and access works across subnets through the normal gateway. Point the Docker daemon at br0 with its -b (--bridge) option:

[root@www ~]# vim /usr/lib/systemd/system/docker.service 
ExecStart=/usr/bin/dockerd -b br0 -H fd:// --containerd=/run/containerd/containerd.sock
[root@localhost ~]# systemctl daemon-reload
[root@localhost ~]# systemctl restart docker
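
Equivalently, the bridge can be set in /etc/docker/daemon.json instead of the unit file (set it in only one place; if it appears both as a flag and in the config file, dockerd refuses to start):

{
    "bridge": "br0"
}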


[root@localhost  ~]# docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@localhost  ~]# docker run -itd centos7-ssh 
3a4e99ca426dd1586d831cfb02941a3c0652ade34c14875942dbe34edaca52ab
[root@www ~]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
3a4e99ca426d        centos7-ssh         "/bin/sh -c /usr/sbi…"   3 seconds ago       Up 2 seconds        22/tcp              upbeat_kirch
[root@localhost  ~]# docker exec -it 3a4e99ca426d ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.179.1  netmask 255.255.255.0  broadcast 192.168.179.255
        ether 02:42:c0:a8:b3:01  txqueuelen 0  (Ethernet)
        RX packets 10  bytes 768 (768.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2  bytes 84 (84.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@localhost  ~]# ifconfig | grep veth
veth0635c6e: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
[root@localhost  ~]# brctl show
bridge name	bridge id		STP enabled	interfaces
br0		8000.000c29934261	no		ens33
							veth0635c6e


[root@localhost ~]# docker exec 3a4e99ca426d   route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.179.99  0.0.0.0         UG    0      0        0 eth0
192.168.179.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0
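
Note that Docker's address allocator handed this container 192.168.179.1, which could collide with another machine on the LAN. One mitigation (a sketch using dockerd's --fixed-cidr option) is to confine auto-assignment to an unused sub-range of the subnet:

# in docker.service, restrict auto-assigned container IPs to 192.168.179.128-255
ExecStart=/usr/bin/dockerd -b br0 --fixed-cidr=192.168.179.128/25 -H fd:// --containerd=/run/containerd/containerd.sock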

With external machines able to reach the container directly, no NAT port mapping is needed. Connecting with an SSH client from another machine on the LAN:

Connecting to 192.168.179.1:22...
Connection established.
To escape to local shell, press 'Ctrl+Alt+]'.
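
The same check from any Linux machine on the 192.168.179.0/24 network (assuming the container's sshd permits root logins):

ssh root@192.168.179.1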

 

If you don't want auto-assigned addresses (to keep auto-assigned IPs from colliding with the host's gateway or other machines)


Install pipework

[root@localhost ~]# yum install  git -y
[root@localhost ~]# git clone https://github.com/jpetazzo/pipework
Cloning into 'pipework'...
remote: Enumerating objects: 8, done.
remote: Counting objects: 100% (8/8), done.
remote: Compressing objects: 100% (6/6), done.
remote: Total 518 (delta 2), reused 5 (delta 2), pack-reused 510
Receiving objects: 100% (518/518), 182.72 KiB | 13.00 KiB/s, done.
Resolving deltas: 100% (272/272), done.

[root@localhost ~]# cp  pipework/pipework /usr/local/bin/
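
A quick sanity check that the script is executable and on the PATH; running pipework without arguments should print its usage text:

chmod +x /usr/local/bin/pipework
pipework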

Start the container with networking disabled (--net=none), then use pipework to attach it to br0 with a fixed address:

[root@localhost ~]# docker run -itd  --name=os5 --net=none centos7-ssh
c4d7dfddd5167d7c0d1f2ef0f7c2cf418e6c4e58bb1284ca68494baf3732e480

[root@localhost ~]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
c4d7dfddd516        centos7-ssh         "/bin/sh -c /usr/sbi…"   11 seconds ago      Up 11 seconds                           os5

[root@localhost ~]# docker inspect c4d7dfddd516 | grep -i ipaddr
            "SecondaryIPAddresses": null,
            "IPAddress": "",
                    "IPAddress": "",
# /24 is the CIDR netmask; the address after @ becomes the container's default gateway (here br0's IP)
[root@localhost ~]# pipework br0 c4d7dfddd516  192.168.179.110/24@192.168.179.99
[root@localhost ~]# docker exec -it c4d7dfddd516 /bin/bash
[root@6fd3a25a39e9 ansible]# ifconfig
eth1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.179.110  netmask 255.255.255.0  broadcast 192.168.179.255
        ether da:35:b9:e9:b7:6d  txqueuelen 1000  (Ethernet)
        RX packets 34  bytes 3337 (3.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 11  bytes 873 (873.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
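
Keep in mind that pipework's configuration is not persistent: the veth pair vanishes when the container stops, so the command must be re-run after every restart, for example:

docker restart c4d7dfddd516
pipework br0 c4d7dfddd516 192.168.179.110/24@192.168.179.99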


[root@6fd3a25a39e9 ~]# traceroute www.baidu.com
traceroute to www.baidu.com (180.101.49.11), 30 hops max, 60 byte packets
 1  * * *
 2  192.168.179.2 (192.168.179.2)  0.246 ms  0.155 ms  0.113 ms


[root@6fd3a25a39e9 ansible]# ping www.baidu.com 
PING www.a.shifen.com (180.101.49.12) 56(84) bytes of data.
64 bytes from 180.101.49.12 (180.101.49.12): icmp_seq=1 ttl=50 time=16.6 ms
64 bytes from 180.101.49.12 (180.101.49.12): icmp_seq=2 ttl=50 time=20.4 ms
^C

 
