
Install GFS2 in CentOS 6.5



In computing, the Global File System 2 (GFS2) is a shared-disk file system for Linux computer clusters. GFS2 differs from distributed file systems (such as AFS, Coda, InterMezzo, or GlusterFS) because it allows all nodes direct concurrent access to the same shared block storage. In addition, GFS or GFS2 can also be used as a local filesystem. GFS has no disconnected operating mode and no client or server roles: all nodes in a GFS cluster function as peers. Using GFS in a cluster requires hardware that allows access to the shared storage, and a lock manager to control access to that storage. The lock manager operates as a separate module, so GFS and GFS2 can use the Distributed Lock Manager (DLM) for cluster configurations and the "nolock" lock manager for local filesystems. Older versions of GFS also support GULM, a server-based lock manager that implements redundancy via failover.

Here is my testing environment:

Virtualbox:

  • Two CentOS 6.5 (64-bit) cluster nodes (node1 and node2).
  • One CentOS 6.5 (64-bit) node for Conga administration and management (node3).
  • A disk shared between node1 and node2: /dev/sdb.
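
Before starting, it is worth confirming that both cluster nodes actually see the same shared disk (the device name /dev/sdb is taken from this setup and may differ in yours):

[root@node1 ~]# fdisk -l /dev/sdb    # both nodes should report the same size
[root@node2 ~]# fdisk -l /dev/sdb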

Edit /etc/hosts on all three nodes so they look like this:

vi /etc/hosts
192.168.1.130  node1
192.168.1.131  node2
192.168.1.132  node3   # management console
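
A quick sanity check that the names resolve and the nodes can reach each other:

[root@node1 ~]# ping -c 1 node2
[root@node1 ~]# ping -c 1 node3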

Install the required package groups on node1 and node2:

[root@node1 ~]# yum groupinstall "iSCSI Storage Client" "High Availability" "Resilient Storage"
[root@node2 ~]# yum groupinstall "iSCSI Storage Client" "High Availability" "Resilient Storage"

Install on node3 (the management node):

[root@node3 ~]# yum groupinstall "High Availability Management" "High Availability"
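
Optionally, enable luci to start at boot as well (its init script is installed by the High Availability Management group):

[root@node3 ~]# chkconfig luci on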

Format a GFS2 partition

Explanation:
Filesystem type: GFS2
Locking protocol: lock_dlm
Cluster name: unixmencluster1
Filesystem name: GFS
Journals: 2 (one per node that will mount the filesystem)
Partition: /dev/sdb

node1 and node2 share the same disk, /dev/sdb.

[root@node1 ~]# mkfs.gfs2 -p lock_dlm -t unixmencluster1:GFS -j 2  /dev/sdb
 This will destroy any data on /dev/sdb.
 It appears to contain: data
Are you sure you want to proceed? [y/n] y
Device:                    /dev/sdb
 Blocksize:                 4096
 Device Size                1.00 GB (262144 blocks)
 Filesystem Size:           1.00 GB (262142 blocks)
 Journals:                  2
 Resource Groups:           4
 Locking Protocol:          "lock_dlm"
 Lock Table:                "unixmencluster1:GFS"
 UUID:                      ef9eda0d-a3da-b62d-f694-d738f5787285

The same command was then run from node2. Note that mkfs.gfs2 now detects the GFS2 filesystem just created from node1, which confirms both nodes see the same device; since the disk is shared, formatting it once from either node is actually sufficient:
[root@node2 ~]# mkfs.gfs2 -p lock_dlm -t unixmencluster1:GFS -j 2  /dev/sdb
 This will destroy any data on /dev/sdb.
 It appears to contain: Linux GFS2 Filesystem (blocksize 4096, lockproto lock_dlm)
Are you sure you want to proceed? [y/n] y
Device:                    /dev/sdb
 Blocksize:                 4096
 Device Size                1.00 GB (262144 blocks)
 Filesystem Size:           1.00 GB (262142 blocks)
 Journals:                  2
 Resource Groups:           4
 Locking Protocol:          "lock_dlm"
 Lock Table:                "unixmencluster1:GFS"
 UUID:                      af45a3c0-da10-97b1-7ade-cb935778da78
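
The -j 2 option creates two journals, one for each node that will mount the filesystem. If a third node joins the cluster later, an extra journal can be added while the filesystem is mounted, and the superblock (lock table, UUID) can be re-read at any time. A sketch, assuming the filesystem is mounted at /GFS as set up below:

[root@node1 ~]# tunegfs2 -l /dev/sdb     # print the superblock
[root@node1 ~]# gfs2_jadd -j 1 /GFS      # add one more journal for a new node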

1- Set a password for the ricci user on both node1 and node2

ricci is an agent that runs on any machine or cluster node to be administered. It is written in C for speed.

[root@node1 ~]# passwd  ricci
 Changing password for user ricci.
 New password:
 BAD PASSWORD: it is too short
 BAD PASSWORD: is too simple
 Retype new password:
 passwd: all authentication tokens updated successfully.
 [root@node1 ~]#
And
[root@node2 ~]#  passwd  ricci
 Changing password for user ricci.
 New password:
 BAD PASSWORD: it is too short
 BAD PASSWORD: is too simple
 Retype new password:
 passwd: all authentication tokens updated successfully.
 [root@node2 ~]#
2- Enable and start the ricci service on both nodes

[root@node1 ~]# chkconfig ricci on
 [root@node1 ~]# /etc/init.d/ricci start
 Starting system message bus:                               [  OK  ]
 Starting oddjobd:                                          [  OK  ]
 generating SSL certificates...  done
 Generating NSS database...  done
 Starting ricci:                                            [  OK  ]
 [root@node1 ~]#
And
[root@node2 ~]# chkconfig ricci on
[root@node2 ~]# /etc/init.d/ricci start
 Starting system message bus:                               [  OK  ]
 Starting oddjobd:                                          [  OK  ]
 generating SSL certificates...  done
 Generating NSS database...  done
 Starting ricci:                                            [  OK  ]
 [root@node2 ~]#
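
ricci listens on TCP port 11111, which luci uses to manage the nodes. A quick check that the agent is reachable (open this port between the nodes if iptables is active):

[root@node1 ~]# netstat -tlnp | grep 11111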
3- Start the Conga service (luci) on node3
 [root@node3 ~]# /etc/init.d/luci start
 Start luci...                                              [  OK  ]
 Point your web browser to https://node3:8084 (or equivalent) to access luci
 [root@node3 ~]#
3-1 Stop NetworkManager and restart the cluster manager (cman) on both cluster nodes

service NetworkManager stop
/etc/init.d/cman restart
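
To make this persist across reboots, also disable NetworkManager and enable cman at boot:

chkconfig NetworkManager off
chkconfig cman on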
4- Log in to the luci console

Log in with the root user and the root password:

https://node3:8084

[Screenshot: luci login page]

Then, under Manage Clusters > Create:

[Screenshot: Manage Clusters > Create]

Enter the cluster name you used earlier when formatting (unixmencluster1), the node names or hostnames, and the password of the ricci user:

[Screenshot: Create New Cluster dialog]

Click Create Cluster.

[Screenshot: cluster creation in progress]

After this, the nodes should be live. If they are not, stop NetworkManager and restart the cman service on each node.

[Screenshot: both cluster nodes live]
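
Behind the scenes, luci writes the cluster configuration to /etc/cluster/cluster.conf on each node. For this two-node setup it should look roughly like the following (a sketch; the config version and exact attributes will vary):

[root@node1 ~]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="1" name="unixmencluster1">
    <clusternodes>
        <clusternode name="node1" nodeid="1"/>
        <clusternode name="node2" nodeid="2"/>
    </clusternodes>
    <cman expected_votes="1" two_node="1"/>
</cluster>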

5- Configure the Quorum

Under Configure > QDisk

[Screenshot: QDisk configuration page]
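
If a dedicated quorum disk is used, it must be initialized once from one node before the cluster can reference it by label. A sketch, assuming a separate shared device /dev/sdc and an illustrative label (this example environment only has /dev/sdb, so this step is optional here):

[root@node1 ~]# mkqdisk -c /dev/sdc -l unixmenqdisk    # device and label are illustrative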

6- Configure a Failover Domain:

Failover Domains > Add

[Screenshot: Add Failover Domain dialog]

7- Create Resources

Create the mount point for the GFS2 partition on node1 and node2:

 [root@node1 ~]# mkdir /GFS
 [root@node2 ~]# mkdir /GFS

Go to Resources > Add > GFS2:

[Screenshot: Add GFS2 resource dialog]
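
The resource dialog asks for a name, the mount point (/GFS), the device (/dev/sdb), and the filesystem type. Before handing the mount over to the cluster, it can be tested manually from one node; this requires cman to be running, since lock_dlm needs the cluster infrastructure:

[root@node1 ~]# mount -t gfs2 /dev/sdb /GFS
[root@node1 ~]# umount /GFS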

8- Create a Service Group

Service Groups > Add

[Screenshot: Add Service Group dialog]

Add the GFS2 resource to the service group:

[Screenshot: GFS2 resource added to the service group]
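
The service group lets rgmanager mount and monitor the filesystem. As a simpler alternative, a GFS2 filesystem that should just be mounted on every node at boot can go into /etc/fstab with the _netdev option and be handled by the gfs2 init script instead (a sketch; use one approach or the other, not both):

/dev/sdb   /GFS   gfs2   defaults,_netdev   0 0

[root@node1 ~]# chkconfig gfs2 on    # the gfs2 init script mounts GFS2 entries from fstab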

Wait a few minutes, and you will see something like this:

[Screenshot: service group running]

Check the status of the cluster and the mount:

[Screenshot: cluster status and the mounted GFS2 filesystem]
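
The same check from the command line (clustat is part of the cman package):

[root@node1 ~]# clustat              # cluster members and service status
[root@node1 ~]# mount | grep gfs2    # confirm /GFS is mounted on both nodes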

This was a small example in VirtualBox with one shared disk.

Cheers!
