LVS Layer-4 Load Balancing
Four forwarding modes
1. NAT mode
2. TUN (IP tunneling) mode
3. DR (direct routing) mode, recommended
4. FULLNAT (full NAT) mode
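Each mode corresponds to a per-real-server forwarding flag in ipvsadm. A hedged sketch with illustrative addresses (requires root and the ip_vs module; FULLNAT needs a separately patched kernel, so it has no stock flag and is omitted):

```shell
# -g gatewaying (DR, default), -m masquerading (NAT), -i ipip (TUN)
ipvsadm -A -t 10.0.0.100:80 -s wrr                  # add a virtual service
ipvsadm -a -t 10.0.0.100:80 -r 10.0.0.7:80 -g -w 1  # DR: rewrites dest MAC only
ipvsadm -a -t 10.0.0.100:80 -r 10.0.0.8:80 -m -w 1  # NAT: rewrites dest IP/port
ipvsadm -a -t 10.0.0.100:80 -r 10.0.0.9:80 -i -w 1  # TUN: IPIP-encapsulates to the RS
```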
LVS load balancing in practice
Environment preparation
1. MySQL load balancing
External IP      Internal IP    Role                  Notes
L4:
192.168.238.15   172.16.1.15    LVS director          public service VIP: 192.168.238.17
192.168.238.16   172.16.1.16    LVS director          public service VIP: 192.168.238.17
192.168.238.51   172.16.1.51    RS1 (real server)     MySQL
192.168.238.52   172.16.1.52    RS2 (real server)     MySQL
# 2. L4 + L7 + web: large-scale web load balancing
L4:
192.168.238.15   172.16.1.15    LVS director          public service VIP: 192.168.238.17
192.168.238.16   172.16.1.16    LVS director          public service VIP: 192.168.238.17
L7:
192.168.238.5    172.16.1.5     nginx proxy (lb01)
192.168.238.6    172.16.1.6     nginx proxy (lb02)
192.168.238.7    172.16.1.7     RS1 (real server)     web01
192.168.238.8    172.16.1.8     RS2 (real server)     web02
Installing LVS
# Install LVS on lb4-01 and lb4-02
[root@lb4-01 ~]# yum install ipvsadm -y
[root@lb4-01 ~]# rpm -qa ipvsadm
ipvsadm-1.27-8.el7.x86_64
[root@lb4-01 ~]# modprobe ip_vs #load the ip_vs module into the kernel
[root@lb4-01 ~]# lsmod|grep ip_vs #confirm the kernel has ip_vs loaded
ip_vs 145458 0
nf_conntrack 143360 1 ip_vs
libcrc32c 12644 3 xfs,ip_vs,nf_conntrack
[root@lb4-01 ~]# uname -r
3.10.0-1160.83.1.el7.x86_64
[root@lb4-01 ~]# yum install kernel-devel -y
[root@lb4-01 ~]# ln -s /usr/src/kernels/3.10.0-1160.83.1.el7.x86_64/ /usr/src/linux
[root@lb4-01 ~]# ls -l /usr/src/
lrwxrwxrwx 1 root root 45 Mar 3 10:34 linux -> /usr/src/kernels/3.10.0-1160.83.1.el7.x86_64/
Note:
The ln link target must match the kernel version reported by uname -r.
If no /usr/src/kernels/<version> path exists, install it with yum install kernel-devel -y.
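The link step can avoid hard-coding the version by deriving it from uname -r directly; a convenience sketch (run as root, after kernel-devel is installed):

```shell
# Point /usr/src/linux at the headers for the currently running kernel.
ln -sfn "/usr/src/kernels/$(uname -r)" /usr/src/linux
ls -l /usr/src/linux
```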
Scenario 1: MySQL database load balancing
### Configure the LVS virtual IP (VIP)
ifconfig eth1:17 172.16.1.17/24 up #==> shorthand form
#route add -host 172.16.1.17 dev eth1 #==> add a host route; this line is optional
### Manually add the LVS service and two RSes with ipvsadm
ipvsadm -C #<== -C clear the whole table
ipvsadm --set 30 5 60 #<== --set tcp tcpfin udp set connection timeout values
#ipvsadm -A -t 172.16.1.17:3306 -s wrr #--add-service -A add virtual service with options
ipvsadm -A -t 172.16.1.17:3306 -s wrr -p 20
#DR mode (-g)
ipvsadm -a -t 172.16.1.17:3306 -r 172.16.1.51:3306 -g -w 1
ipvsadm -a -t 172.16.1.17:3306 -r 172.16.1.52:3306 -g -w 1
# ipvsadm -a|e -t|u|f service-address -r server-address [options]
[root@lb4-01 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.16.1.17:3306 wrr persistent 20
-> 172.16.1.51:3306 Route 1 0 0
-> 172.16.1.52:3306 Route 1 0 0
[root@lb4-01 ~]# ipvsadm -Ln --stats
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Conns InPkts OutPkts InBytes OutBytes
-> RemoteAddress:Port
TCP 172.16.1.17:3306 0 0 0 0 0
-> 172.16.1.51:3306 0 0 0 0 0
-> 172.16.1.52:3306 0 0 0 0 0
[Deletion]
ipvsadm -D -t 172.16.1.17:3306 <== delete the virtual service
ipvsadm -d -t 172.16.1.17:3306 -r 172.16.1.51:3306 <== delete a single real server
[Parameter reference]
[root@oldboy ~]# ipvsadm --help
# --clear -C clear the whole table
# --add-service -A add virtual service with options
# --tcp-service -t service-address service-address is host[:port]
# --scheduler -s scheduler one of rr|wrr|lc|wlc|lblc|lblcr|dh|sh|sed|nq,
# --add-server -a add real server with options
# --real-server -r server-address server-address is host (and port)
# --masquerading -m masquerading (NAT)
# --gatewaying -g gatewaying (direct routing) (default)
# --delete-server -d delete real server
# --persistent -p [timeout] persistent service (session persistence)
# --set tcp tcpfin udp set connection timeout values
# --weight -w weight capacity of real server
# --ipip -i ipip encapsulation (tunneling)
Tip: run ipvsadm --help for the full option list.
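To make wrr concrete, here is a toy pure-shell simulation of weighted round-robin. It is a sketch of the scheduling idea only (repeat each server by its weight), not the kernel's actual wrr implementation; the servers and weights are illustrative:

```shell
#!/bin/bash
# Toy weighted round-robin: expand each server into the pool `weight` times,
# then hand out requests by cycling through the pool.
servers=(172.16.1.51 172.16.1.52)
weights=(2 1)                      # .51 gets twice the traffic of .52
pool=()
for i in "${!servers[@]}"; do
    for ((n=0; n<weights[i]; n++)); do
        pool+=("${servers[$i]}")
    done
done
for ((req=0; req<6; req++)); do
    echo "request $req -> ${pool[$((req % ${#pool[@]}))]}"
done
```

With weights 2 and 1, six requests land on .51 four times and on .52 twice, which is the distribution `ipvsadm -Ln --stats` would show accumulating over time.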
Command sequence and verification of the result
ipvsadm -C
ipvsadm --set 30 5 60
ipvsadm -A -t 172.16.1.17:3306 -s wrr -p 20
ipvsadm -a -t 172.16.1.17:3306 -r 172.16.1.51:3306 -g -w 1
ipvsadm -a -t 172.16.1.17:3306 -r 172.16.1.52:3306 -g -w 1
ipvsadm -L -n --sort
ipvsadm -d -t 172.16.1.17:3306 -r 172.16.1.51:3306 #==> deletion test
ipvsadm -L -n --sort
ipvsadm -a -t 172.16.1.17:3306 -r 172.16.1.51:3306 -g -w 1
ipvsadm -L -n --sort
Manually bind the VIP on lo and suppress ARP on each RS
#Run on every real server (db01, db02)
Commands:
ifconfig lo:17 172.16.1.17 up
route add -host 172.16.1.17 dev lo
In production, write it to a config file:
vim /etc/sysconfig/network-scripts/ifcfg-lo:17
#CentOS 7 method
ip addr add 172.16.1.17/32 dev lo label lo:17
route add -host 172.16.1.17 dev lo
Verify:
ifconfig lo:17
route -n|grep 172.16.1.17
#172.16.1.17 0.0.0.0 255.255.255.255 UH 0 0 0 lo
Each cluster node binds the VIP on its loopback (lo) interface with netmask 255.255.255.255 (the broadcast address is the VIP itself; the single-host /32 mask exists to avoid IP address conflicts on the LAN). This lets every node in the LVS-DR cluster accept packets addressed to the VIP. It also creates a serious problem: the real servers would answer ARP broadcasts from clients resolving the VIP, so every real server would claim to own the VIP address. A client could then send its request packets directly to some real server, bypassing the director and defeating the DR cluster's load-balancing policy. All real servers must therefore be stopped from responding to ARP broadcasts whose target is the VIP, leaving ARP responses to the load-balancing director alone.
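The alias label follows a simple convention here: lo: plus the VIP's last octet. A small sketch of deriving it in shell (the awk idiom mirrors the RS script used in these notes):

```shell
#!/bin/bash
# Derive the lo alias label (lo:<last-octet>) for a given VIP.
vip_label() {
    echo "lo:$(echo "$1" | awk -F . '{print $4}')"
}
vip_label 172.16.1.17   # prints lo:17
```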
Manually suppress ARP responses on the RSes
#Run on db01 and db02
ARP suppression commands:
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
cat /proc/sys/net/ipv4/conf/lo/arp_ignore
cat /proc/sys/net/ipv4/conf/all/arp_ignore
cat /proc/sys/net/ipv4/conf/all/arp_announce
cat /proc/sys/net/ipv4/conf/lo/arp_announce
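Writes to /proc do not survive a reboot. A sketch of persisting the same four values via sysctl (assumes the stock /etc/sysctl.conf is applied at boot, as on CentOS 7):

```shell
cat >>/etc/sysctl.conf <<'EOF'
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
EOF
sysctl -p    # apply now
```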
#Install the MySQL client on web01 and test:
yum install mariadb -y
mysql -h 172.16.1.17 -uroot -poldboy123 #172.16.1.17 is the VIP
####After stopping .51, connections fail
LVS has no built-in health checking: with .51 down, requests are still sent to .51, so it must be removed by hand.
[root@lb4-01 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.16.1.17:3306 wrr persistent 20
-> 172.16.1.51:3306 Route 1 0 0
-> 172.16.1.52:3306 Route 1 0 4
[root@lb4-01 ~]# ipvsadm -d -t 172.16.1.17:3306 -r 172.16.1.51:3306
[root@lb4-01 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 172.16.1.17:3306 wrr persistent 20
-> 172.16.1.52:3306 Route 1 0 0
Reconnection now succeeds. In practice, LVS health checking is implemented by pairing it with keepalived.
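What keepalived automates can be approximated by hand. A hypothetical sketch of a one-shot TCP health check that removes dead RSes and re-adds live ones (requires root; errors from re-adding an already-present RS are discarded; a sketch of the idea, not production code):

```shell
#!/bin/bash
# Hypothetical poor-man's health check for the 172.16.1.17:3306 service.
VIP=172.16.1.17; PORT=3306
for rs in 172.16.1.51 172.16.1.52; do
    if timeout 3 bash -c "</dev/tcp/$rs/$PORT" 2>/dev/null; then
        # RS answers on the port: make sure it is in the table.
        ipvsadm -a -t $VIP:$PORT -r $rs:$PORT -g -w 1 2>/dev/null
    else
        # RS is dead: pull it out so clients stop being sent there.
        ipvsadm -d -t $VIP:$PORT -r $rs:$PORT 2>/dev/null
    fi
done
```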
ARP suppression script
Script to configure the LVS real-server side:
#!/bin/bash
# Written by oldboy
# description: Config realserver lo and apply noarp
VIP=(
172.16.1.17
)
. /etc/rc.d/init.d/functions
case "$1" in
start)
        for ((i=0; i<${#VIP[*]}; i++))
        do
                interface="lo:$(echo ${VIP[$i]} | awk -F . '{print $4}')"
                /sbin/ifconfig $interface ${VIP[$i]} broadcast ${VIP[$i]} netmask 255.255.255.255 up
        done
        echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
        echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
        echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
        echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
        action "Start LVS of RearServer.by old1boy"
        ;;
stop)
        for ((i=0; i<${#VIP[*]}; i++))
        do
                interface="lo:$(echo ${VIP[$i]} | awk -F . '{print $4}')"
                /sbin/ifconfig $interface ${VIP[$i]} broadcast ${VIP[$i]} netmask 255.255.255.255 down
        done
        echo "close LVS Directorserver"
        if [ ${#VIP[*]} -eq 1 ]; then
                echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
                echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
                echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
                echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
        fi
        action "Close LVS of RearServer.by old2boy"
        ;;
*)
        echo "Usage: $0 {start|stop}"
        exit 1
esac
In production, the lo alias can be written to a config file.
Run on db01 and db02:
cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-lo:17
[root@db01 network-scripts]# cat /etc/sysconfig/network-scripts/ifcfg-lo:17
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"
DEFROUTE="yes"
NAME="lo:17"
DEVICE="lo:17"
ONBOOT="yes"
IPADDR="172.16.1.17"
PREFIX="32"
ARP suppression kernel parameters
arp_ignore - INTEGER
Defines the reply mode for ARP requests whose target is a local IP address:
0 - (default) reply for any local IP address, configured on any interface.
1 - reply only if the target IP address is configured on the interface that received the request.
2 - reply only if the target IP address is configured on the receiving interface and the sender's IP address is within that interface's subnet.
3 - do not reply for local addresses scoped to this host; reply only for global and link-scoped addresses.
4-7 - reserved, unused.
8 - do not reply for any local address.
arp_announce - INTEGER
Restricts which local source IP address is announced in ARP requests sent out of an interface:
0 - (default) use any local address, configured on any interface (eth0, eth1, lo, ...).
1 - try to avoid local addresses that are not in the target's subnet for this interface. Useful when the ARP target is expected to be reachable through this interface; the kernel checks whether the source IP belongs to one of the subnets on the interface, and if it does not, falls back to level 2.
2 - always use the best local address for this target: the source address of the IP packet is ignored and the kernel prefers a primary local address on the outgoing interface that falls within the target's subnet; if none is found, any address of the outgoing interface or of other interfaces that might receive the reply is used. This prevents a locally bound VIP from being announced as the preferred source address.
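To check the four values at a glance on any RS, a small read-only sketch (Linux only; no root needed):

```shell
#!/bin/bash
# Print the current ARP tuning for the lo and all scopes.
show_arp_tuning() {
    local scope key
    for scope in lo all; do
        for key in arp_ignore arp_announce; do
            printf 'net.ipv4.conf.%s.%s = %s\n' "$scope" "$key" \
                "$(cat /proc/sys/net/ipv4/conf/$scope/$key)"
        done
    done
}
show_arp_tuning
```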
Scenario 2: LVS + keepalived for an HA L4 MySQL cluster
Configure lb4-01/02
[root@lb4-01 ~]# cat /etc/keepalived/keepalived.conf
global_defs {
router_id lb4-01
}
vrrp_instance VI_1 {
state MASTER #state BACKUP on lb4-02
interface eth1
virtual_router_id 52
priority 150 #priority 50 on lb4-02
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
172.16.1.17/24 dev eth1 label eth1:17
}
}
#MySQL port 3306
virtual_server 172.16.1.17 3306 {
delay_loop 6
lb_algo wrr
lb_kind DR
persistence_timeout 20
protocol TCP
real_server 172.16.1.51 3306 {
weight 1
TCP_CHECK {
connect_timeout 5
##nb_get_retry 3
delay_before_retry 3
connect_port 3306
}
}
real_server 172.16.1.52 3306 {
weight 1
TCP_CHECK {
connect_timeout 5
## nb_get_retry 3
delay_before_retry 3
connect_port 3306
}
}
}
Restart keepalived after the configuration change:
[root@lb4-01 ~]# systemctl restart keepalived
Check:
[root@lb4-01 ~]# ifconfig |grep 'eth1:17'
#Test from web01
mysql -h 172.16.1.17 -uroot -poldboy123
#Stop mariadb on db01
[root@db01 ~]# systemctl stop mariadb
#Test again from web01
mysql -h 172.16.1.17 -uroot -poldboy123
use test;
show tables;
Scenario 3: L4 web load balancing in front of L7 reverse proxies and web nodes
Environment
2. L4 + L7 + web: large-scale web load balancing
L4:
192.168.238.15 172.16.1.15 LVS director; public service VIP: 192.168.238.17
192.168.238.16 172.16.1.16 LVS director; public service VIP: 192.168.238.17
L7:
192.168.238.5 172.16.1.5 nginx lb01 (tested)
192.168.238.6 172.16.1.6 nginx lb02
192.168.238.7 172.16.1.7 RS1(真实服务器) web01
192.168.238.8 172.16.1.8 RS2(真实服务器) web02
[root@lb4-02 keepalived]# curl -H "host:www.yunwei.com" 172.16.1.7
web01
[root@lb4-02 keepalived]# curl -H "host:www.yunwei.com" 172.16.1.8
web02
Configure keepalived on lb4-01/02
[root@lb4-01 ~]# vi /etc/keepalived/keepalived.conf
global_defs {
router_id lb4-01
}
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 53
priority 50
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.238.17/24 dev eth0 label eth0:17
}
}
#web config
virtual_server 192.168.238.17 80 {
delay_loop 6
lb_algo wrr
lb_kind DR
persistence_timeout 20
protocol TCP
real_server 192.168.238.5 80 {
weight 1
TCP_CHECK {
connect_timeout 5
#nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
real_server 192.168.238.6 80 {
weight 1
TCP_CHECK {
connect_timeout 5
# nb_get_retry 3
delay_before_retry 3
connect_port 80
}
}
}
===============================================
vrrp_instance VI_2 {
state MASTER #state BACKUP on lb4-02
interface eth1
virtual_router_id 52
priority 150 #priority 50 on lb4-02
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
172.16.1.17/24 dev eth1 label eth1:17
}
}
#MySQL config
virtual_server 172.16.1.17 3306 {
delay_loop 6
lb_algo wrr
lb_kind DR
persistence_timeout 20
protocol TCP
real_server 172.16.1.51 3306 {
weight 1
TCP_CHECK {
connect_timeout 5
#nb_get_retry 3
delay_before_retry 3
connect_port 3306
}
}
real_server 172.16.1.52 3306 {
weight 1
TCP_CHECK {
connect_timeout 5
# nb_get_retry 3
delay_before_retry 3
connect_port 3306
}
}
}
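On lb4-02 the same file is used with the roles swapped, so each director is MASTER for one VIP and BACKUP for the other (an active-active split). A sketch of only the lines that differ (assumption: everything else is identical to lb4-01):

```
# lb4-02, differences only
vrrp_instance VI_1 {
    state MASTER        # web VIP 192.168.238.17 active here
    priority 150
    ...
}
vrrp_instance VI_2 {
    state BACKUP        # MySQL VIP 172.16.1.17 standby here
    priority 50
    ...
}
```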
Bind the VIP and suppress ARP on lb01 and lb02 (script)
[root@lb01 conf.d]# mkdir /server/scripts -p
[root@lb01 conf.d]# cd /server/scripts/
[root@lb01 scripts]# vim ipvs.sh
#!/bin/bash
# Written by oldboy
# description: Config realserver lo and apply noarp
VIP=(
192.168.238.17
)
. /etc/rc.d/init.d/functions
case "$1" in
start)
        for ((i=0; i<${#VIP[*]}; i++))
        do
                interface="lo:$(echo ${VIP[$i]} | awk -F . '{print $4}')"
                /sbin/ifconfig $interface ${VIP[$i]} broadcast ${VIP[$i]} netmask 255.255.255.255 up
        done
        echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
        echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
        echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
        echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
        action "Start LVS of RearServer.by old1boy"
        ;;
stop)
        for ((i=0; i<${#VIP[*]}; i++))
        do
                interface="lo:$(echo ${VIP[$i]} | awk -F . '{print $4}')"
                /sbin/ifconfig $interface ${VIP[$i]} broadcast ${VIP[$i]} netmask 255.255.255.255 down
        done
        echo "close LVS Directorserver"
        if [ ${#VIP[*]} -eq 1 ]; then
                echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
                echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
                echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
                echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce
        fi
        action "Close LVS of RearServer.by old2boy"
        ;;
*)
        echo "Usage: $0 {start|stop}"
        exit 1
esac
[root@lb01 scripts]# sh ipvs.sh start
Start LVS of RearServer.by old1boy [  OK  ]
#Check
[root@lb01 scripts]# ifconfig |grep 'lo:17'
Obtaining the real client IP with LVS + nginx + web
#For this test, proxy_protocol must be commented out, otherwise requests fail.
upstream www {
server 172.16.1.7:80;
server 172.16.1.8:80;
}
server {
listen 80;
#listen 80 proxy_protocol; #commented out for this test
server_name www.yunwei.com;
set_real_ip_from 172.16.1.0/24; #proxy addresses that sit in front of this L7 load balancer
real_ip_header proxy_protocol; #assign the proxy_protocol-derived IP to $remote_addr
location / {
proxy_pass http://www;
proxy_set_header Host $http_host;
#proxy_set_header X-Forwarded-For $proxy_protocol_addr;
#would carry the proxy_protocol client IP to the backend; disabled while proxy_protocol is off
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_headers_hash_max_size 51200;
proxy_headers_hash_bucket_size 6400;
}
}
Enable the line: proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
Clients reach LVS with their source IP intact, and LVS only forwards: DR mode rewrites the destination MAC and never changes the source IP. nginx on lb01 therefore sees the real client IP and passes it to the web nodes via X-Forwarded-For.
#Inspect the access log
[root@web01 conf.d]# tail -f /var/log/nginx/access.log
172.16.1.6 - - [03/Mar/2023:16:14:53 +0800] "GET / HTTP/1.0" 200 6 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36 Edg/108.0.1462.54" "192.168.238.1"
172.16.1.6 - - [03/Mar/2023:16:14:54 +0800] "GET / HTTP/1.0" 200 6 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36 Edg/108.0.1462.54" "192.168.238.1"
172.16.1.6 - - [03/Mar/2023:16:14:54 +0800] "GET / HTTP/1.0" 200 6 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36 Edg/108.0.1462.54" "192.168.238.1"
172.16.1.6 - - [03/Mar/2023:16:14:55 +0800] "GET / HTTP/1.0" 200 6 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36 Edg/108.0.1462.54" "192.168.238.1"
The last quoted field, 192.168.238.1, is the real client IP.
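Pulling that field out programmatically, a small sketch (the X-Forwarded-For value is the last double-quoted field in this log format; the sample line is shortened from the output above):

```shell
#!/bin/bash
# Extract the real client IP (last double-quoted field) from an access-log line.
real_ip() {
    awk -F '"' '{print $(NF-1)}'
}
line='172.16.1.6 - - [03/Mar/2023:16:14:53 +0800] "GET / HTTP/1.0" 200 6 "-" "Mozilla/5.0" "192.168.238.1"'
echo "$line" | real_ip   # prints 192.168.238.1
```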