Highly Available MariaDB Based on DRBD
I. DRBD Overview
DRBD, short for Distributed Replicated Block Device, is an open-source project. It is a block-level replication solution: because it replicates at the block layer it is much faster than file-level software such as NFS or Samba, and it avoids a single point of failure, which makes it a popular shared-storage choice for small and mid-sized businesses.
II. How DRBD Works
As the diagram above shows, DRBD works inside the kernel, hooking in between the buffer cache and the disk scheduler. It duplicates the block data passing through, wraps the copy in TCP/IP, and sends it out through the NIC to the peer DRBD node, which keeps its data in sync.
DRBD can run in primary/secondary mode (one node active, the other holding a replica) or in dual-primary mode (both nodes active at the same time); dual-primary mode must be built on top of a high-availability cluster. A prerequisite for DRBD is that every node provides a disk or partition of the same size and the same name.
III. Building a DRBD Primary/Secondary Setup
1. Prepare the environment
1) OS: CentOS 6.6; kernel 2.6.32-504.el6.x86_64
2) Two nodes:
node1.wuhf.com: 172.16.13.13
node2.wuhf.com: 172.16.13.14
3) A disk partition of the same size on each node:
/dev/sda5, 512 MB
4) Time synchronization:

```
ntpdate -u 172.16.0.1
crontab -e
*/3 * * * * /usr/sbin/ntpdate 172.16.0.1 &> /dev/null
```

5) Key-based SSH access between the nodes:

```
ssh-keygen -t rsa -f /root/.ssh/id_rsa -P ''
ssh-copy-id -i /root/.ssh/id_rsa.pub root@IP
```
2. Install the packages
Download the packages matching your kernel version to a local directory, then install them:

```
# drbd84-utils-8.9.1-1.el6.elrepo.x86_64.rpm
# kmod-drbd84-8.4.5-504.1.el6.x86_64.rpm
rpm -ivh drbd84-utils-8.9.1-1.el6.elrepo.x86_64.rpm kmod-drbd84-8.4.5-504.1.el6.x86_64.rpm
```
3. Configuration files
DRBD's main configuration file is /etc/drbd.conf. For easier management it is usually split into several pieces kept under /etc/drbd.d/, and the main file merely pulls these fragments back in with "include" directives. The /etc/drbd.d/ directory normally holds global_common.conf plus any number of files ending in .res: global_common.conf defines the global and common sections, and each .res file defines one resource.
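For reference, the /etc/drbd.conf shipped with drbd84-utils is essentially nothing more than those two include directives (a sketch; the comments in the packaged file may differ):

```
# /etc/drbd.conf -- only stitches the fragments back together
include "drbd.d/global_common.conf";
include "drbd.d/*.res";
```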
```
vim /etc/drbd.d/global_common.conf
```

```
global {
        usage-count no;                  # opt out of the online usage counter
        # minor-count dialog-refresh disable-ip-verification
}

common {
        protocol C;                      # fully synchronous replication
        handlers {
                pri-on-incon-degr "/usr/lib/drbd/notify-pri-on-incon-degr.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                pri-lost-after-sb "/usr/lib/drbd/notify-pri-lost-after-sb.sh; /usr/lib/drbd/notify-emergency-reboot.sh; echo b > /proc/sysrq-trigger ; reboot -f";
                local-io-error "/usr/lib/drbd/notify-io-error.sh; /usr/lib/drbd/notify-emergency-shutdown.sh; echo o > /proc/sysrq-trigger ; halt -f";
                # fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
                # split-brain "/usr/lib/drbd/notify-split-brain.sh root";
                # out-of-sync "/usr/lib/drbd/notify-out-of-sync.sh root";
                # before-resync-target "/usr/lib/drbd/snapshot-resync-target-lvm.sh -p 15 -- -c 16k";
                # after-resync-target /usr/lib/drbd/unsnapshot-resync-target-lvm.sh;
        }
        startup {
                #wfc-timeout 120;
                #degr-wfc-timeout 120;
        }
        disk {
                on-io-error detach;      # how disk I/O errors are handled
                #fencing resource-only;
        }
        net {
                cram-hmac-alg "sha1";    # peer authentication algorithm
                shared-secret "mydrbdlab";
        }
        syncer {
                rate 500M;               # resynchronization rate limit
        }
}
```

The resource itself goes into its own .res file; the node names must match each host's uname -n:

```
vim /etc/drbd.d/web.res
```

```
resource web {
        on node1.wuhf.com {
                device    /dev/drbd0;
                disk      /dev/sda5;
                address   172.16.13.13:7789;
                meta-disk internal;
        }
        on node2.wuhf.com {
                device    /dev/drbd0;
                disk      /dev/sda5;
                address   172.16.13.14:7789;
                meta-disk internal;
        }
}
```
4. Start the service
```
scp /etc/drbd.d/* node2:/etc/drbd.d/                # copy the config files to node2
drbdadm create-md web                               # initialize the metadata (run on both nodes)
ssh node2 'drbdadm create-md web'
service drbd start; ssh node2 'service drbd start'  # start the service on both nodes together
drbd-overview                                       # check the service status
drbdadm primary --force web                         # force-promote the current node to primary
drbd-overview
```
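drbd-overview and /proc/drbd report the connection state (cs:), the local/peer roles (ro:) and the disk states (ds:). When scripting the promotion steps it helps to extract the local role; a minimal sketch (the status line embedded below is illustrative, not captured from these hosts):

```shell
#!/bin/sh
# Extract the local role from a /proc/drbd status line such as:
#   0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
# The local role is the part of the ro: field before the slash.
drbd_local_role() {
    echo "$1" | sed -n 's/.*ro:\([A-Za-z]*\)\/[A-Za-z]*.*/\1/p'
}

line=" 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----"
drbd_local_role "$line"    # prints: Primary
```

In a real script you would feed it the matching line of /proc/drbd, e.g. `drbd_local_role "$(grep 'ro:' /proc/drbd)"`, and loop until it reports the expected role.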
5. Create and mount the file system

```
mke2fs -t ext4 -L DRBD /dev/drbd0
mkdir /mnt/drbd
mount /dev/drbd0 /mnt/drbd
```
6. Test
On node1:

```
touch /mnt/drbd/{a,b,c}
umount /mnt/drbd
drbdadm secondary web    # demote node1 to secondary
```
On node2:

```
mkdir /mnt/drbd
mount /dev/drbd0 /mnt/drbd
drbdadm primary web
drbd-overview
ls /mnt/drbd             # if files a, b and c are there, replication worked
```
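The manual check above can be scripted: fingerprint the mount on the old primary before demoting, again on the new primary after promoting, and compare. A sketch (the throwaway directory stands in for /mnt/drbd):

```shell
#!/bin/sh
# Fingerprint a directory: a sorted list of file paths with their md5 sums.
# Identical fingerprints on both nodes mean the data replicated intact.
dir_fingerprint() {
    ( cd "$1" && find . -type f -print0 | sort -z | xargs -0 md5sum )
}

# Demo on a throwaway directory (stand-in for /mnt/drbd):
d=$(mktemp -d)
touch "$d"/a "$d"/b "$d"/c
dir_fingerprint "$d"
rm -rf "$d"
```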
IV. Install and Configure corosync + pacemaker + crmsh
1. Install and configure corosync
```
yum install corosync pacemaker -y
rpm -ql corosync
cd /etc/corosync
cp corosync.conf.example corosync.conf
vim corosync.conf
```

```
compatibility: whitetank        # compatibility with the old OpenAIS (whitetank) format

totem {
        secauth: on             # authenticate cluster messages
        threads: 0              # number of worker threads
        interface {
                ringnumber: 0   # ring number of this heartbeat interface
                bindnetaddr: 172.16.0.0         # network address to bind to
                mcastaddr: 239.165.17.13        # multicast address
                mcastport: 5405
                ttl: 1
        }
}

logging {
        fileline: off
        to_stderr: no           # do not send logs to standard error
        to_logfile: yes
        logfile: /var/log/cluster/corosync.log
        to_syslog: no
        debug: off
        timestamp: on           # timestamp each log entry
        logger_subsys {
                subsys: AMF
                debug: off
        }
}
```
Append the following so that pacemaker runs as a corosync plugin:

```
service {
        ver: 0                  # run pacemaker as a corosync plugin
        name: pacemaker
        # use_mgmtd: yes
}

aisexec {
        user: root
        group: root
}
```
```
corosync-keygen                                    # generates the key file authkey
scp -p authkey corosync.conf node2:/etc/corosync/
service corosync start; ssh node2 'service corosync start'
ss -tunl                                           # verify that port 5405 is listening
tail -f /var/log/cluster/corosync.log
grep pcmk_startup /var/log/cluster/corosync.log    # verify that pacemaker started properly
```
2. Install crmsh

```
# crmsh-2.1-1.6.x86_64.rpm      # packages to prepare
# pssh-2.3.1-2.el6.x86_64.rpm
yum --nogpgcheck install crmsh-2.1-1.6.x86_64.rpm pssh-2.3.1-2.el6.x86_64.rpm
crm status                      # check the cluster status
```
V. Install MariaDB
1. On node1
```
# mariadb-5.5.43-linux-x86_64.tar.gz             # installation tarball
tar xf mariadb-5.5.43-linux-x86_64.tar.gz -C /usr/local
cd /usr/local/
ln -sv mariadb-5.5.43-linux-x86_64/ mysql
cd mysql/
cp support-files/my-large.cnf /etc/my.cnf
vim /etc/my.cnf                                  # edit the mysql config; set:
#   thread_concurrency = 4
#   datadir = /mnt/drbd
#   innodb_file_per_table = on
groupadd -r -g 306 mysql                         # add the mysql group and user
useradd -r -u 306 -g 306 mysql
scp /etc/my.cnf node2:/etc/                      # copy the config file to node2
vim /etc/drbd.d/web.res
service drbd start; ssh node2 'service drbd start'   # start DRBD on both nodes
drbdadm primary web
drbd-overview
mount /dev/drbd0 /mnt/drbd                       # mount the drbd device
chown -R mysql:mysql /mnt/drbd                   # the datadir must be writable by mysql
./scripts/mysql_install_db --datadir=/mnt/drbd/ --user=mysql   # initialize mysql
cd /usr/local/mysql/
cp support-files/mysql.server /etc/rc.d/init.d/mysqld
vim /etc/profile.d/mysql.sh                      # add: export PATH=/usr/local/mysql/bin:$PATH
. /etc/profile.d/mysql.sh
service mysqld start
mysql                                            # test mysql
chkconfig --add mysqld
chkconfig --list mysqld
chkconfig mysqld off                             # make sure mysqld does not start at boot
service mysqld stop
umount /mnt/drbd/
drbdadm secondary web
```
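The three my.cnf edits above can also be applied non-interactively instead of through vim, which is convenient when preparing both nodes. A sketch that works on a stub file; on the real hosts the argument would be /etc/my.cnf, and in my-large.cnf the options belong under the [mysqld] section:

```shell
#!/bin/sh
# Append the DRBD-specific mysqld settings from the section above to a
# my.cnf file (appends at end of file; fine for a stub, but in the real
# my-large.cnf place them under the [mysqld] header).
append_drbd_opts() {
    cat >> "$1" <<'EOF'
thread_concurrency = 4
datadir = /mnt/drbd
innodb_file_per_table = on
EOF
}

# Demo on a stub config instead of the real /etc/my.cnf:
cnf=$(mktemp)
printf '[mysqld]\n' > "$cnf"
append_drbd_opts "$cnf"
grep datadir "$cnf"    # prints: datadir = /mnt/drbd
rm -f "$cnf"
```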
2. On node2

```
# mariadb-5.5.43-linux-x86_64.tar.gz             # installation tarball
tar xf mariadb-5.5.43-linux-x86_64.tar.gz -C /usr/local
cd /usr/local/
ln -sv mariadb-5.5.43-linux-x86_64/ mysql
service drbd start; ssh node1 'service drbd start'
drbdadm primary web
mount /dev/drbd0 /mnt/drbd
cd /usr/local/mysql/
cp support-files/mysql.server /etc/rc.d/init.d/mysqld
vim /etc/profile.d/mysql.sh                      # add: export PATH=/usr/local/mysql/bin:$PATH
. /etc/profile.d/mysql.sh
service mysqld start
mysql                                            # test mysql
chkconfig --add mysqld
chkconfig mysqld off
service mysqld stop
umount /mnt/drbd/
drbdadm secondary web                            # demote to secondary, then stop drbd
service drbd stop; ssh node1 'service drbd stop' # stop it on both nodes so pacemaker can manage it
```
VI. Define the DRBD Resources for High Availability
```
crm configure show    # the resulting configuration:
```

```
node node1.wuhf.com \
        attributes standby=off
node node2.wuhf.com \
        attributes standby=on
# file system resource
primitive mydata Filesystem \
        params device="/dev/drbd0" directory="/mnt/drbd" fstype=ext4 \
        op monitor interval=20s timeout=40s \
        op start timeout=60s interval=0 \
        op stop timeout=60s interval=0
# VIP resource
primitive myip IPaddr \
        params ip=172.16.13.209 \
        op monitor interval=10s timeout=20s
# mysql service resource
primitive myserver lsb:mysqld \
        op monitor interval=20s timeout=20s
# drbd resource
primitive mystor ocf:linbit:drbd \
        params drbd_resource=web \
        op monitor role=Master interval=10s timeout=20s \
        op monitor role=Slave interval=20s timeout=20s \
        op start timeout=240s interval=0 \
        op stop timeout=100s interval=0
# master/slave set: two clones, one of them promoted to master
ms ms_mystor mystor \
        meta clone-max=2 clone-node-max=1 master-max=1 master-node-max=1 notify=true target-role=Started
# three colocation constraints
colocation mydata_with_ms_mystor_master inf: mydata ms_mystor:Master
colocation myip_with_ms_mystor_master inf: myip ms_mystor:Master
colocation myserver_with_mydata inf: myserver mydata
# ordering constraints
order mydata_after_ms_mystor_master Mandatory: ms_mystor:promote mydata:start
order myserver_after_mydata Mandatory: mydata:start myserver:start
order myserver_after_myip Mandatory: myip:start myserver:start
# STONITH disabled; quorum loss ignored (two-node cluster)
property cib-bootstrap-options: \
        dc-version=1.1.11-97629de \
        cluster-infrastructure="classic openais (with plugin)" \
        expected-quorum-votes=2 \
        stonith-enabled=false \
        no-quorum-policy=ignore
```
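The same configuration is entered step by step from the crm shell; a sketch of the session for the drbd master/slave pair (prompt text approximate, resource names as above):

```
crm configure
crm(live)configure# primitive mystor ocf:linbit:drbd params drbd_resource=web \
        op monitor role=Master interval=10s timeout=20s \
        op monitor role=Slave interval=20s timeout=20s \
        op start timeout=240s interval=0 op stop timeout=100s interval=0
crm(live)configure# ms ms_mystor mystor \
        meta clone-max=2 clone-node-max=1 master-max=1 master-node-max=1 notify=true
crm(live)configure# verify
crm(live)configure# commit
```

verify checks the pending changes and commit applies them; the remaining primitives and constraints are entered the same way.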
VII. Testing
1. Check the current state of each node
node2 is currently in standby, and all resources are running on node1.
2. Bring node2 online, then put node1 into standby
On node2, run: crm node online
On node1, run: crm node standby
All resources migrate to node2 and keep running normally, which shows that the DRBD-backed highly available MySQL setup succeeded.
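Failover is not instantaneous, so when testing through the VIP it helps to poll until the service answers again. A generic retry helper; the mysql probe against 172.16.13.209 in the comment is only an example of the command you would pass:

```shell
#!/bin/sh
# retry N DELAY CMD... : run CMD until it succeeds, at most N times,
# sleeping DELAY seconds between attempts. Returns 0 on success,
# 1 if all attempts failed.
retry() {
    n=$1; delay=$2; shift 2
    i=0
    while [ "$i" -lt "$n" ]; do
        "$@" && return 0
        i=$((i + 1))
        sleep "$delay"
    done
    return 1
}

# During a failover test one would poll the VIP, e.g.:
#   retry 30 2 mysql -h 172.16.13.209 -e 'SELECT 1;'
```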