
RHEL6 OpenStack Install All-in-One


Installation and configuration are much the same on RHEL and on Ubuntu.

This article installs all of the components on a single machine: Keystone, Nova-compute, Glance, Dashboard (Horizon) and Swift, all backed by MySQL.

hostname             eth0                eth1

control          192.168.8.100(wlan)        192.168.0.100(lan)


Prerequisite: the RHEL 6.2 installation media does not ship OpenStack packages, but EPEL provides all of the components.

EPEL download URL: http://mirrors.ustc.edu.cn/fedora/epel/6/x86_64/epel-release-6-7.noarch.rpm

Part 1. Set up the basic environment

1. Configure the network bridge, under /etc/sysconfig/network-scripts/

ifcfg-eth0

DEVICE="eth0"
NM_CONTROLLED="yes"
ONBOOT="yes"
BOOTPROTO="static"
IPADDR=192.168.8.100
NETMASK=255.255.255.0
GATEWAY=192.168.8.1
DNS1=192.168.3.1

ifcfg-eth1

DEVICE="eth1"
NM_CONTROLLED="yes"
ONBOOT="yes"
BOOTPROTO="none"
BRIDGE=br100
DELAY=0

ifcfg-br100

DEVICE="br100"
NM_CONTROLLED="yes"
ONBOOT="yes"
BOOTPROTO="static"
TYPE=Bridge
IPADDR=192.168.0.100
NETMASK=255.255.255.0

Install the bridge utilities package and restart networking:

yum -y install bridge-utils
/etc/init.d/network restart
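
To confirm the bridge came up before moving on (a quick check, using the bridge-utils package installed above):

brctl show br100      # eth1 should appear as an enslaved interface
ip addr show br100    # should carry 192.168.0.100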

2. Install the EPEL repository

rpm -Uvh http://mirrors.ustc.edu.cn/fedora/epel/6/x86_64/epel-release-6-7.noarch.rpm

3. Disable SELinux and flush iptables

setenforce 0
# make the change permanent: in /etc/selinux/config set
SELINUX=disabled
iptables -F

4. Configure the NTP server

cat > /etc/ntp.conf << _GEEK_
driftfile /var/lib/ntp/drift
restrict default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict 192.168.8.0 mask 255.255.255.0 nomodify notrap
server 192.168.8.100
server  127.127.1.0     # local clock
fudge   127.127.1.0 stratum 10
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
logfile /var/log/ntp
_GEEK_
/etc/init.d/ntpd restart; chkconfig ntpd on
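
To verify that ntpd is serving time (the local clock 127.127.1.0 should show up in the peer list):

ntpq -p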

Part 2. Install and configure the components

 

1. Install MySQL and set the root password to mysql

yum -y install mysql mysql-server mysql-devel
sed -i -e '2 s/^/bind_address=0.0.0.0\n/' /etc/my.cnf    # insert bind_address under [mysqld] so MySQL listens on all interfaces
/etc/init.d/mysqld start; chkconfig mysqld on
mysqladmin -uroot password 'mysql'
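
A quick check that the password took effect and that MySQL accepts network connections on the new bind address:

mysql -uroot -pmysql -h 192.168.8.100 -e "select version();"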

2. Create the MySQL databases and users for each OpenStack component and grant them access

mysql -uroot -pmysql -e"create database keystone;"
mysql -uroot -pmysql -e"create database glance;"
mysql -uroot -pmysql -e"create database nova;"
mysql -uroot -pmysql -e"create database dash;"
mysql -uroot -pmysql -e"grant all on keystone.* to 'keystone'@'%' identified by 'keystone';"
mysql -uroot -pmysql -e"grant all on glance.* to 'glance'@'%' identified by 'glance';"
mysql -uroot -pmysql -e"grant all on nova.* to 'nova'@'%' identified by 'nova';"
mysql -uroot -pmysql -e"grant all on dash.* to 'dash'@'%' identified by 'dash';"

3. Install and configure Keystone

Keystone is the core of OpenStack: every other component authenticates and is authorized through it.

yum -y install openstack-keystone
sed -i 's/ADMIN/token/g' /etc/keystone/keystone.conf               # set the admin service token to "token"
sed -i -e '2 s/^#//g' /etc/keystone/keystone.conf                  # uncomment the admin_token line
sed -i 's/localhost/192.168.8.100/g' /etc/keystone/keystone.conf   # point the SQL connection at our MySQL host
/etc/init.d/openstack-keystone start; chkconfig openstack-keystone on
keystone-manage db_sync

Next, Keystone needs to be loaded with users, tenants, roles, services and endpoints. Following the official documentation, these steps have been consolidated into a script.
Download keystone.sh from the URL below (do not run the script twice, and watch its output for any 'err' messages):

wget http://www.longgeek.com/wp-content/uploads/2012/07/keystone.sh
sh keystone.sh
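
For reference, a script like keystone.sh typically boils down to a sequence of keystone CLI calls along the lines sketched below. This is only an outline using the names and passwords assumed later in this article; the exact option spelling differs between client versions, and the real script also registers services and endpoints for Glance, Swift, EC2 and Keystone itself:

keystone tenant-create --name openstack
keystone user-create --name admin --pass redhat
keystone role-create --name admin
keystone user-role-add --user <admin-user-id> --role <admin-role-id> --tenant <openstack-tenant-id>
keystone service-create --name nova --type compute --description "Nova Compute Service"
keystone endpoint-create --region RegionOne --service-id <nova-service-id> \
    --publicurl "http://192.168.8.100:8774/v2/%(tenant_id)s" \
    --adminurl "http://192.168.8.100:8774/v2/%(tenant_id)s" \
    --internalurl "http://192.168.8.100:8774/v2/%(tenant_id)s"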

Set the environment variables and add them to the profile (if you changed any variable values in keystone.sh, adjust the values below to match):

cat > /opt/export.sh << _GEEK_
export OS_USERNAME=admin
export OS_TENANT_NAME=openstack
export OS_PASSWORD=redhat
export OS_AUTH_URL=http://192.168.8.100:5000/v2.0/
export OS_REGION_NAME=RegionOne
export SERVICE_TOKEN=token
export SERVICE_ENDPOINT=http://192.168.8.100:35357/v2.0/
_GEEK_

echo 'source /opt/export.sh' >> /etc/profile

source /etc/profile

Check that Keystone is working:

keystone user-list
keystone role-list
keystone tenant-list
keystone endpoint-list
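
If any of the commands above fail, you can also request a token directly from the Keystone API. A quick sketch using the admin credentials defined earlier; a JSON reply containing an access/token section means authentication works:

curl -s -X POST http://192.168.8.100:5000/v2.0/tokens \
    -H "Content-Type: application/json" \
    -d '{"auth": {"tenantName": "openstack", "passwordCredentials": {"username": "admin", "password": "redhat"}}}'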

4. Install and configure Glance

Glance provides the image management service. Think of it as a middleware layer: its backend storage can be local disk or Swift.

yum -y install openstack-glance python-glance

Edit /etc/glance/glance-api-paste.ini and /etc/glance/glance-registry-paste.ini, changing the last 9 lines of each to:

service_host = 192.168.8.100
service_port = 5000
auth_host = 192.168.8.100
auth_port = 35357
auth_protocol = http
auth_url = http://192.168.8.100:5000/
admin_tenant_name = openstack
admin_user = admin
admin_password = redhat

Append the following to the end of both /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf:

[paste_deploy]
flavor = keystone

Update the database connection in /etc/glance/glance-registry.conf:

sed -i 's/localhost/192.168.8.100/g' /etc/glance/glance-registry.conf
# ensure the connection string reads:
sql_connection = mysql://glance:glance@192.168.8.100/glance

Sync the Glance database and start the services:

glance-manage db_sync
for i in api registry; do service openstack-glance-$i start; chkconfig openstack-glance-$i on; done

Check that Glance works:

glance index
No output means everything is fine, and you can now upload image files. Later, once Swift is configured, we will switch Glance to use Swift as the image store.

Upload a test image
Download an image: we use the CirrOS image for testing, which is only about 10 MB.

wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
glance add name=cirros-0.3.0-x86_64 is_public=true  container_format=bare disk_format=qcow2 < /root/cirros-0.3.0-x86_64-disk.img

CirrOS can be logged into with a username and password as well as with an SSH key:

user:cirros
password:cubswin:)
List the images:

glance index
ID                                   Name                           Disk Format          Container Format     Size
------------------------------------ ------------------------------ -------------------- -------------------- --------------
44047fc6-8247-420e-a27e-1e3e718c0391 cirros-0.3.0-x86_64            qcow2                bare                        9761280

5. Install and configure Nova

yum -y install openstack-nova openstack-utils python-nova python-novaclient

Edit /etc/nova/nova.conf:

[DEFAULT]
debug = False
verbose = False
logdir = /var/log/nova
state_path = /var/lib/nova
lock_path = /var/lib/nova/tmp
dhcpbridge = /usr/bin/nova-dhcpbridge
dhcpbridge_flagfile = /etc/nova/nova.conf
force_dhcp_release = True
injected_network_template = /usr/share/nova/interfaces.template
libvirt_xml_template = /usr/share/nova/libvirt.xml.template
libvirt_nonblocking = True
vpn_client_template = /usr/share/nova/client.ovpn.template
credentials_template = /usr/share/nova/novarc.template
network_manager = nova.network.manager.FlatDHCPManager
sql_connection = mysql://nova:nova@192.168.8.100/nova
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
rpc_backend = nova.rpc.impl_kombu
root_helper = sudo nova-rootwrap
rabbit_host = 192.168.8.100
rabbit_password = guest

allow_admin_api = True
use_deprecated_auth = False
auth_strategy=keystone
compute_scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler

use_s3 = True
s3_host = 192.168.8.100
s3_port = 8080
ec2_host = 192.168.8.100
ec2_url = http://192.168.8.100:8773/services/Cloud
cc_host = 192.168.8.100
routing_source_ip = 192.168.8.100
glance_api_servers = 192.168.8.100:9292
image_service = nova.image.glance.GlanceImageService
api_paste_config = /etc/nova/api-paste.ini
connection_type = libvirt
libvirt_type = kvm
libvirt_use_virtio_for_bridges = True
libvirt_inject_partition = -1
start_guests_on_host_boot = True
resume_guests_state_on_host_boot = True

snapshot_image_format = qcow2
use_cow_image = True
novncproxy_base_url = http://192.168.8.100:6080/vnc_auto.html
vncserver_proxyclient_address = 192.168.8.100
vncserver_listen = 0.0.0.0

iscsi_helper = tgtadm
iscsi_ip_prefix = 192.168.0.
iscsi_ip_address = 192.168.0.100

public_interface = eth0
flat_interface = eth1
flat_network_bridge = br100
fixed_range = 192.168.0.0/27
floating_range = 192.168.8.32/27
network_size = 32
flat_network_dhcp_start = 192.168.0.10
flat_injected = False

Edit the last 9 lines of /etc/nova/api-paste.ini:

service_host = 192.168.8.100
service_port = 5000
auth_host = 192.168.8.100
auth_port = 35357
auth_protocol = http
auth_uri = http://192.168.8.100:5000/
admin_tenant_name = openstack
admin_user = admin
admin_password = redhat

Create the volume group required by the nova-volume service.

Here I use /dev/sdb2 with partition type 8e (Linux LVM).
Create the nova-volumes volume group (the VG name is the one referenced by nova.conf):
pvcreate /dev/sdb2; vgcreate nova-volumes /dev/sdb2
pvs  # list physical volumes
vgs  # list volume groups

Install libvirt and RabbitMQ

 yum -y install libvirt* rabbitmq-server

Start them:

/etc/init.d/libvirtd start; chkconfig libvirtd on
/etc/init.d/rabbitmq-server start; chkconfig rabbitmq-server on

If rabbitmq-server fails to start, remove qpid:

yum -y remove qpid*
The qpidd service occupies port 5672, which rabbitmq-server needs.
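
To see which process currently owns the port, and to confirm RabbitMQ is healthy afterwards:

netstat -tnlp | grep 5672      # shows qpidd before the removal, beam.smp (RabbitMQ) after
rabbitmqctl status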

Start the Nova services:

for i in api cert compute network volume scheduler; do /etc/init.d/openstack-nova-$i restart; chkconfig openstack-nova-$i on; done

Sync the Nova database:

nova-manage db sync

Then restart the services once more:

for i in api cert compute network volume scheduler; do /etc/init.d/openstack-nova-$i restart; chkconfig openstack-nova-$i on; done

Check that the Nova services are healthy:

nova-manage service-list
2012-07-30 14:22:04 WARNING nova.utils [req-89563aaf-22d5-4b3a-831a-127a09ff3490 None None] /usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.3-py2.6-linux-x86_64.egg/sqlalchemy/pool.py:639: SADeprecationWarning: The 'listeners' argument to Pool (and create_engine()) is deprecated.  Use event.listen().
  Pool.__init__(self, creator, **kw)

2012-07-30 14:22:04 WARNING nova.utils [req-89563aaf-22d5-4b3a-831a-127a09ff3490 None None] /usr/lib64/python2.6/site-packages/SQLAlchemy-0.7.3-py2.6-linux-x86_64.egg/sqlalchemy/pool.py:145: SADeprecationWarning: Pool.add_listener is deprecated.  Use event.listen()
  self.add_listener(l)

Binary           Host                                 Zone             Status     State Updated_At
nova-cert        control                              nova             enabled    :-)   2012-07-30 06:22:02
nova-compute     control                              nova             enabled    :-)   2012-07-30 06:21:54
nova-scheduler   control                              nova             enabled    :-)   2012-07-30 06:21:55
nova-network     control                              nova             enabled    :-)   2012-07-30 06:21:56
nova-volume      control                              nova             enabled    :-)   2012-07-30 06:21:56

A smiley face in the State column means the corresponding service is healthy; if a service shows XXX instead, check its log under /var/log/nova/ (the SQLAlchemy deprecation warnings above are harmless and are probably a version mismatch).

Create the network for instances:

nova-manage network create private --fixed_range_v4=192.168.0.0/24 --num_networks=1 --bridge=br100 --bridge_interface=eth1 --network_size=32
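
Verify that the network was created:

nova-manage network list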

Configure floating IPs
Floating IPs give instances addresses reachable from the public network. The mapping between the floating (public) IP and the instance's fixed IP works like a firewall NAT rule and is in fact implemented with iptables.

nova-manage floating create --ip_range=192.168.8.32/27

Check the result:

nova floating-ip-create
nova-manage floating list
nova-manage fixed list
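
To make the mapping concrete: when a floating IP is later associated with an instance, nova-network installs NAT rules that are conceptually equivalent to the following (the addresses are hypothetical, and nova actually manages its own iptables chains rather than the built-in ones):

iptables -t nat -A PREROUTING -d 192.168.8.40 -j DNAT --to-destination 192.168.0.3    # inbound: floating -> fixed
iptables -t nat -A POSTROUTING -s 192.168.0.3 -j SNAT --to-source 192.168.8.40        # outbound: fixed -> floating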

Open the default security group to allow SSH and ICMP (ping):

nova secgroup-add-rule default tcp 22 22 0.0.0.0/0

nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
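
Verify that both rules were added to the default group:

nova secgroup-list-rules default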

6. Install and configure Swift

yum -y install openstack-swift openstack-swift-proxy openstack-swift-account \
    openstack-swift-container openstack-swift-object rsync xinetd xfsprogs memcached \
    python-memcached libmemcached
swift --version
swift 1.0

Create the Swift storage partition (an ordinary partition).
Assuming your third disk is /dev/sdc, create a partition on it, format it as XFS,
and mount it at /srv/node/sdc1:

fdisk /dev/sdc          # set up a single partition (sdc1)
mkfs.xfs -i size=1024 /dev/sdc1
echo "/dev/sdc1 /srv/node/sdc1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
mkdir -p /srv/node/sdc1
mount /srv/node/sdc1
chown -R swift:swift /srv/node
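
Confirm that the XFS filesystem is mounted and owned by swift:

df -h /srv/node/sdc1
ls -ld /srv/node/sdc1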

Configure rsync

sed -i 's/yes/no/g' /etc/xinetd.d/rsync          # set "disable = no" so xinetd will run rsync

cat > /etc/rsyncd.conf << _GEEK_
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.8.100

[account]
max_connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock

[container]
max_connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock

[object]
max_connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock
_GEEK_
/etc/init.d/xinetd restart; chkconfig xinetd on           # start the rsync service (via xinetd)

Configure swift.conf:

cat > /etc/swift/swift.conf << _GEEK_
[swift-hash]
swift_hash_path_suffix = http://www.longgeek.com                        # any value works; on a multi-node Swift cluster every node must use the same suffix
_GEEK_

Create and configure /etc/swift/account-server/account-server.conf:

cat > /etc/swift/account-server/account-server.conf << _GEEK_
[DEFAULT]
bind_ip = 192.168.8.100
bind_port = 6002
mount_check = false
user = swift
devices = /srv/node
swift_dir = /etc/swift
workers = 2

[pipeline:main]
pipeline = account-server

[app:account-server]
use = egg:swift#account

[account-replicator]

[account-auditor]

[account-reaper]
_GEEK_

Configure /etc/swift/container-server/container-server.conf:

cat > /etc/swift/container-server/container-server.conf << _GEEK_
[DEFAULT]
bind_ip = 192.168.8.100
bind_port = 6001
mount_check = false
user = swift
devices = /srv/node
workers = 2
swift_dir = /etc/swift

[pipeline:main]
pipeline = container-server

[app:container-server]
use = egg:swift#container

[container-replicator]

[container-updater]

[container-auditor]

[container-sync]
_GEEK_

Configure /etc/swift/object-server/object-server.conf:

cat > /etc/swift/object-server/object-server.conf << _GEEK_
[DEFAULT]
bind_ip = 192.168.8.100
bind_port = 6000
user = swift
devices = /srv/node
mount_check = false
swift_dir = /etc/swift
workers = 2

[pipeline:main]
pipeline = object-server

[app:object-server]
use = egg:swift#object

[object-replicator]

[object-updater]

[object-auditor]
_GEEK_

Configure the Swift proxy server:

 cat > /etc/swift/proxy-server/proxy-server.conf << _GEEK_
[DEFAULT]
bind_port = 8080
bind_ip = 192.168.8.100
user = swift
swift_dir = /etc/swift
workers = 8
log_name = swift

[pipeline:main]
pipeline = healthcheck cache swift3 s3token authtoken keystone proxy-server

[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true

[filter:keystone]
paste.filter_factory = keystone.middleware.swift_auth:filter_factory
operator_roles = Member,admin,SwiftOperator
is_admin = true
#reseller_prefix=AUTH

# NOTE(chmou): s3token middleware is not updated yet to use only
# username and password.
[filter:s3token]
paste.filter_factory = keystone.middleware.s3_token:filter_factory
service_host = 192.168.8.100
service_port = 5000
auth_host = 192.168.8.100
auth_port = 35357
auth_protocol = http
auth_token = token
admin_token = token

[filter:authtoken]
paste.filter_factory = keystone.middleware.auth_token:filter_factory
auth_host = 192.168.8.100
auth_port = 35357
auth_protocol = http
auth_uri = http://192.168.8.100:5000/
admin_tenant_name = openstack
admin_user = admin
admin_password = redhat
auth_token = token
admin_token = token

[filter:swift3]
use = egg:swift#swift3

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:cache]
use = egg:swift#memcache
memcache_servers = 192.168.8.100:11211
_GEEK_

Create the Swift rings:

swift-ring-builder account.builder create 18 1 1
swift-ring-builder container.builder create 18 1 1
swift-ring-builder object.builder create 18 1 1

Note that the three numbers passed to create matter. The first (18) is the partition power, so the ring is split into 2^18 partitions. The second (1) is the replica count; set it to match your deployment (one replica is enough for this single-node setup), otherwise you may see errors when the services start. The third (1) is the minimum number of hours before a partition may be moved again, here one hour.

Now add our single storage device (partition sdc1, in zone 1, with weight 100) to each ring:
swift-ring-builder account.builder add z1-192.168.8.100:6002/sdc1 100
swift-ring-builder container.builder add z1-192.168.8.100:6001/sdc1 100
swift-ring-builder object.builder add z1-192.168.8.100:6000/sdc1 100

Once the ring files are created, use the commands below to verify what was just added:

swift-ring-builder account.builder
swift-ring-builder container.builder
swift-ring-builder object.builder

If everything looks correct, build the final rings:

swift-ring-builder account.builder rebalance
swift-ring-builder container.builder rebalance
swift-ring-builder object.builder rebalance

After rebalancing, three .ring.gz files are generated; copy them to /etc/swift/.
Then set the ownership:
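
The copy step itself, assuming the swift-ring-builder commands above were run from the current working directory:

cp account.ring.gz container.ring.gz object.ring.gz /etc/swift/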

chown -R swift:swift /etc/swift/

Start the Swift services:

for i in account container object proxy; do /etc/init.d/openstack-swift-$i restart; chkconfig openstack-swift-$i on; done

Verify:

swift -V 2 -A http://192.168.8.100:5000/v2.0 -U openstack:admin -K redhat stat              # show account statistics
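
As a further check you can upload and list a small object (the container name here is an arbitrary example):

swift -V 2 -A http://192.168.8.100:5000/v2.0 -U openstack:admin -K redhat upload test_container /etc/hosts
swift -V 2 -A http://192.168.8.100:5000/v2.0 -U openstack:admin -K redhat list test_container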

7. Install the Dashboard (Horizon)

yum -y install httpd mod_python mod_wsgi openstack-dashboard Django python-django-horizon

Edit /etc/openstack-dashboard/local_settings and add:

DATABASES = {
	'default': {
		'ENGINE': 'django.db.backends.mysql',
		'NAME': 'dash',
		'USER': 'dash',
		'PASSWORD': 'dash',
		'HOST': '192.168.8.100',
		'PORT': '3306',
	},
}

Then change the following default values:

	OPENSTACK_HOST = "127.0.0.1"
	OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
	OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"

to:

	OPENSTACK_HOST = "192.168.8.100"
	OPENSTACK_KEYSTONE_URL = "http://192.168.8.100:5000/v2.0"
	OPENSTACK_KEYSTONE_ADMIN_URL = "http://192.168.8.100:35357/v2.0"
	OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"

Sync the Dashboard configuration to the database and restart Apache:

/usr/share/openstack-dashboard/manage.py syncdb
/etc/init.d/httpd restart; chkconfig httpd on
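
A quick check from the shell before opening a browser (expect an HTTP 200 or a redirect to the login page):

curl -sI http://192.168.8.100/dashboard | head -1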

Open a browser and go to http://192.168.8.100/dashboard (account: admin, password: redhat).
The noVNC shipped as an RPM does not work very well, so we use the source from GitHub instead:

yum -y install git
git clone http://github.com/cloudbuilders/noVNC.git
mv noVNC /opt/
cd /opt/noVNC/
nohup ./utils/nova-novncproxy --flagfile=/etc/nova/nova.conf > /var/log/noVNC.log 2>&1 &
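
Confirm the proxy is listening on its default port:

netstat -tnlp | grep 6080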