OpenStack Installation and Deployment
# Installing OpenStack
Train is the last OpenStack release on CentOS 7 that supports deployment from binary packages, so we use it to demonstrate installing the OpenStack core components from RPMs, to see which packages make up each core component and which settings are commonly needed. Once the core components are familiar, we will use Kolla, the automated deployment tool developed by the OpenStack project, to install and deploy newer releases.
# Preparation Before Installing OpenStack
Before installing the OpenStack core components, some basic configuration is needed on the freshly installed operating system, mainly:
- Choosing the network architecture. In production OpenStack mainly uses one of two network types:
- Layer-2 networks: OpenStack is only responsible for layer-2 communication; subnets and gateways can be created, but layer-3 routing is handed off to an external physical router. Performance is better, so this is what production usually uses.
- Layer-3 networks: OpenStack creates routers itself, i.e. layer-3 forwarding is done in software. Performance is lower, so this is generally used in test environments or networks with limited facilities.
- Generating component passwords, since every component needs several of them.
- NTP time synchronization: the components' authentication tokens can only be validated correctly when all nodes agree on the time.
- OpenStack base packages, the dependencies for the core components.
- The database: a production cluster uses a highly available MySQL cluster based on Galera; a test environment can run a single MySQL instance.
- The message queue: a production cluster uses a highly available RabbitMQ setup; a test environment can run a single RabbitMQ instance.
- memcached, used as a cache.
Only after these basics are in place do we start installing the OpenStack components. This test environment uses one controller node with 4 vCPUs, 8 GB of RAM and a 100 GB system disk, and one compute node with 4 vCPUs, 8 GB of RAM, a 100 GB system disk and two 300 GB data disks, to demonstrate the basic component installation. If the compute node is itself a virtual machine, for example one running on VMware, remember to enable CPU virtualization support for it, otherwise the later installation steps will fail.
# Choosing the Network Architecture
This test environment uses a layer-2 (provider) network, which means we have to create and configure a software gateway ourselves. The network layout is as follows:
Both the controller and the compute node are connected to the management network, and the compute node has an extra NIC connected to the VM network. The management network and the VM network may be routable to each other through an upstream router, or they may be completely isolated. This lab uses two VMs created in VMware to simulate the setup:
- eth0 on the controller and compute nodes uses a NAT or bridged network, so they can reach the internet;
- eth1 on the controller and compute nodes uses a LAN segment, i.e. a network with no internet access and no gateway device; later we will give it a gateway so that instances can reach the internet.
The network names of your own virtual machines may differ, but keep the roles of these two networks in mind.
# Generating Component Passwords
The OpenStack components are set up in the following order:
- keystone, identity
- placement, resource placement
- glance, images
- neutron, networking
- cinder, block storage
- nova, compute
- horizon, web dashboard
Nova is installed near the end because it depends on the services provided by Glance, Placement, Neutron and Cinder. Every component needs passwords during installation, so generate them all up front:
Component | Password | Notes |
---|---|---|
Cluster administrator | CKD3VAQUSOFYMYVs | user admin |
MariaDB database | n5#XZ6^5eQg2e5bE | user root |
RabbitMQ message queue | i6sxgdW2Jbo3nHNE | user openstack |
keystone database | glrKEib48VYPZBjO | user keystone |
placement database | zEwd43RWhNxYPWVw | user placement |
placement user | Sy2lm71IrMiks3EW | |
glance database | 35ktJazstxE8ZzHv | user glance |
glance user | Tn3Ss1mmh7WPQOpk | |
cinder database | nrKa2GHj3HUZicCF | user cinder |
cinder user | krPNls9to3y54sTG | |
neutron database | tf99MMkexjAX2ncg | user neutron |
neutron user | ulfCxXYP6zlx5EIe | |
metadata proxy secret | INRV1Qqba62akutd | |
nova database | cyaV7zUa8MEdvH8V | user nova |
nova user | rInAhw7qspZTFI4p | |
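If you prefer to generate your own random passwords instead of reusing the ones above, one simple approach (just a sketch; any random-string generator will do) is to produce 16-character alphanumeric strings, avoiding special characters because some OpenStack configuration files handle them poorly:
openssl rand -base64 24 | tr -dc 'a-zA-Z0-9' | head -c 16; echo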
# NTP Time Synchronization
The test environment has two machines, a controller node and a compute node; the compute node syncs its clock from the controller. In a highly available environment, configure an NTP client on every node, point it at an internal NTP server or at the master node's time service, and start the client so that all nodes stay in sync. We use the lightweight chrony as the time service; install it with:
yum install chrony -y
After installation, start the chronyd service directly on the master (controller) node:
systemctl start chronyd
On the slave node, edit /etc/chrony.conf and change the time server to the controller's management network IP address:
server controller iburst
After the change, start the chronyd service:
systemctl start chronyd
Finally check the status of the chronyd service:
systemctl status chronyd
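Besides the service status, you can also confirm that the slave node is actually syncing from the controller (an optional check using chrony's own client tool):
chronyc sources -v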
# Basic System Configuration
A few routine system settings need to be adjusted:
- Confirm that firewalld is stopped and selinux is disabled on all nodes.
- Add the two hosts below to /etc/hosts; this is needed because RabbitMQ communicates using host names. For example, my configuration is:
192.168.31.185 controller.my.com controller
192.168.31.193 compute.my.com compute
# Installing the OpenStack Base Packages
The yum repository for the OpenStack base packages lives in the CentOS Extras repository, so after installing CentOS open /etc/yum.repos.d/CentOS-Base.repo and adjust the extras section, for example:
[extras]
name=CentOS-$releasever - Extras
baseurl=https://mirrors.aliyun.com/centos/$releasever/extras/$basearch/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
If you replace the directory with the yum.repos.d I provide, the configuration above is not needed. Then run the following four commands on each node to configure the OpenStack Train yum repository and switch it to the Aliyun mirror:
yum install centos-release-openstack-train -y
sed -i 's/mirrorlist=/#mirrorlist=/g' /etc/yum.repos.d/CentOS-OpenStack-train.repo
sed -i 's/#baseurl=/baseurl=/g' /etc/yum.repos.d/CentOS-OpenStack-train.repo
sed -i 's@http://mirror.centos.org@https://mirrors.aliyun.com@g' /etc/yum.repos.d/CentOS-OpenStack-train.repo
With the yum repositories in place, install the python-openstackclient package on the controller node; it provides the openstack command used to manage the cluster:
yum install python-openstackclient -y
After installation, verify it with:
openstack --version
If the command returns something like the following, the installation is complete.
openstack 4.0.2
# Installing the Database
Run the following command to install the database packages:
yum install mariadb mariadb-server python2-PyMySQL -y
After installation, configure the database.
# Starting MariaDB
systemctl start mariadb
systemctl status mariadb
systemctl enable mariadb
# Secure Initialization
Run the following command to initialize the freshly installed database:
mysql_secure_installation
The full interactive flow looks like this:
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!
In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.
Enter current password for root (enter for none):
OK, successfully used password, moving on...
Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.
Set root password? [Y/n]
New password:
Re-enter new password:
Password updated successfully!
Reloading privilege tables..
... Success!
By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.
Remove anonymous users? [Y/n]
... Success!
Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.
Disallow root login remotely? [Y/n]
... Success!
By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.
Remove test database and access to it? [Y/n]
- Dropping test database...
... Success!
- Removing privileges on test database...
... Success!
Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.
Reload privilege tables now? [Y/n]
... Success!
Cleaning up...
All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.
Thanks for using MariaDB!
During this process you are asked to set the root password. This installation uses:
n5#XZ6^5eQg2e5bE
Tip: by default the database files live under /var/lib/mysql. A production database normally gets a dedicated disk mounted there, preferably a fast SSD, to guarantee performance; a test environment can simply use the local directory.
# Installing the Message Queue
Run the following command on the controller node to install RabbitMQ:
yum install rabbitmq-server -y
Then start RabbitMQ on the node and enable it at boot:
systemctl start rabbitmq-server
systemctl status rabbitmq-server
systemctl enable rabbitmq-server
# Creating the Management Account
Run the following command:
rabbitmqctl add_user openstack i6sxgdW2Jbo3nHNE
The expected output is:
Creating user "openstack"
Here openstack is the user name and the string after it is the password; the component installations later will need this password.
# Binding the User Role
Give the openstack user the administrator role:
rabbitmqctl set_user_tags openstack administrator
The expected output is:
Setting tags for user "openstack" to [administrator]
# Setting User Permissions
Grant the openstack user the corresponding permissions:
rabbitmqctl set_permissions -p "/" openstack ".*" ".*" ".*"
The expected output is:
Setting permissions for user "openstack" in vhost "/"
# Checking the User
List the users on the current RabbitMQ instance:
rabbitmqctl list_users
The expected output is:
openstack [administrator]
guest [administrator]
# Installing memcached
The keystone service stores authentication tokens in memcached, so this service must be installed on the controller node. Install it with:
yum install memcached python-memcached -y
After installation, adjust the listen address: open /etc/sysconfig/memcached and replace
CACHESIZE="64"
with:
CACHESIZE="512"
OPTIONS="-l 127.0.0.1,::1,controller"
Finally start memcached and enable it at boot:
systemctl start memcached
systemctl status memcached
systemctl enable memcached
Verify that it is listening:
[root@control-01 ~]# ss -ltunp| grep memcached
tcp LISTEN 0 128 192.168.31.185:11211 *:*
users:(("memcached",pid=84481,fd=28))
tcp LISTEN 0 128 127.0.0.1:11211 *:*
users:(("memcached",pid=84481,fd=26))
tcp LISTEN 0 128 [::1]:11211 [::]:*
users:(("memcached",pid=84481,fd=27))
# Creating the Component Databases
Log in to MySQL with the following command; you will be asked for the root password set during the database initialization above:
mysql -u root -p
The password is (no leading or trailing spaces):
n5#XZ6^5eQg2e5bE
Then run the following statements in the mysql prompt to create the databases and users needed by the OpenStack core components and grant each component user access to its database. The database creation statements are:
create database keystone default character set utf8;
create database placement default character set utf8;
create database glance default character set utf8;
create database neutron default character set utf8;
create database cinder default character set utf8;
create database nova default character set utf8;
create database nova_api default character set utf8;
create database nova_cell0 default character set utf8;
The base services need eight databases in total. The grant statements are:
grant all privileges on keystone.* to keystone@'localhost' identified by 'glrKEib48VYPZBjO';
grant all privileges on keystone.* to keystone@'%' identified by 'glrKEib48VYPZBjO';
grant all privileges on placement.* to placement@'localhost' identified by 'zEwd43RWhNxYPWVw';
grant all privileges on placement.* to placement@'%' identified by 'zEwd43RWhNxYPWVw';
grant all privileges on glance.* to 'glance'@'localhost' identified by '35ktJazstxE8ZzHv';
grant all privileges on glance.* to 'glance'@'%' identified by '35ktJazstxE8ZzHv';
grant all privileges on cinder.* to cinder@'localhost' identified by 'nrKa2GHj3HUZicCF';
grant all privileges on cinder.* to cinder@'%' identified by 'nrKa2GHj3HUZicCF';
grant all privileges on neutron.* TO 'neutron'@'localhost' identified by 'tf99MMkexjAX2ncg';
grant all privileges on neutron.* TO 'neutron'@'%' identified by 'tf99MMkexjAX2ncg';
grant all privileges on nova_api.* TO 'nova'@'localhost' identified by 'cyaV7zUa8MEdvH8V';
grant all privileges on nova_api.* TO 'nova'@'%' identified by 'cyaV7zUa8MEdvH8V';
grant all privileges on nova.* TO 'nova'@'localhost' identified by 'cyaV7zUa8MEdvH8V';
grant all privileges on nova.* TO 'nova'@'%' identified by 'cyaV7zUa8MEdvH8V';
grant all privileges on nova_cell0.* TO 'nova'@'localhost' identified by 'cyaV7zUa8MEdvH8V';
grant all privileges on nova_cell0.* TO 'nova'@'%' identified by 'cyaV7zUa8MEdvH8V';
flush privileges;
Because the OpenStack components do not handle special characters in their configuration files very well, avoid special characters in the component database passwords; a combination of upper/lower-case letters and digits is enough. With the databases ready, continue with the installation.
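As an optional sanity check, you can log in as one of the component users to confirm that the grants work (adjust the user and password if you generated your own):
mysql -u keystone -p'glrKEib48VYPZBjO' -h controller -e 'show databases;'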
# Installing the OpenStack Components
# Installing the Component Packages
Run the following command to install the packages for the six major components:
yum install -y openstack-keystone httpd mod_wsgi \
openstack-placement-api \
openstack-glance \
openstack-cinder \
openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge \
openstack-nova \
openstack-dashboard
After all the component packages are installed, configure each component following the steps below. In the command above, each line lists the packages of one component; the remaining dependencies are pulled in automatically.
# Configuring Keystone
# Editing the Configuration File
Open the keystone configuration file /etc/keystone/keystone.conf; the settings to change are:
[database]
#...
connection = mysql+pymysql://keystone:glrKEib48VYPZBjO@controller/keystone
#...
[cache]
backend = oslo_cache.memcache_pool
enabled = true
memcache_servers = localhost:11211
The ... marks omitted content; the rest are the settings to change, in two sections:
- database, the database connection settings
- connection, the database connection string
- cache, the token cache settings
- backend, the cache backend to use
- enabled, whether caching is enabled
- memcache_servers, the memcached address; it is only used when backend is oslo_cache.memcache_pool or dogpile.cache.memcached. Since memcached and keystone run on the same node here, localhost is used.
The memcached settings cache tokens so that keystone does not have to query the database repeatedly.
# Initializing the Keystone Database
- Initialize the identity service database; this command produces no output when it succeeds:
su -s /bin/sh -c "keystone-manage db_sync" keystone
- Initialize the fernet key repositories; these commands generate the keys used to encrypt and decrypt authentication tokens:
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
The two commands above do the following:
- Set up the key repositories for fernet tokens and auth receipts; the default repository path is /etc/keystone/fernet-keys. They also create a primary key used to create and validate fernet tokens and auth receipts.
- Export the administrator password as a variable; this administrator is also the one used later to log in to the OpenStack dashboard:
export ADMIN_PASS=CKD3VAQUSOFYMYVs
- 初始化鉴权服务
keystone-manage bootstrap --bootstrap-password $ADMIN_PASS \
--bootstrap-admin-url http://controller:5000/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne
This bootstrap command creates the default region RegionOne, the default domain Default, the default user admin, the default role admin, the binding between the admin user and the admin role, and the keystone service endpoints.
- Configure the Apache server. Open the httpd configuration file /etc/httpd/conf/httpd.conf; the settings to change are:
ServerName controller:80
Listen controller:80
Set the server name and listen address to the host's address, then symlink the keystone configuration into /etc/httpd/conf.d/:
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
Open /etc/httpd/conf.d/wsgi-keystone.conf and change the Listen directive on the first line to:
Listen controller:5000
Save the file, then start httpd and enable it at boot:
systemctl start httpd
systemctl status httpd
systemctl enable httpd
Now create the admin credentials file on the controller node, in the current user's home directory (~):
vim admin-openrc.sh
The file contents are:
export OS_USERNAME=admin
export OS_PASSWORD=CKD3VAQUSOFYMYVs
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
Save and exit. The keystone component is now installed.
# Verifying the Keystone Service
Run the following commands to verify:
source admin-openrc.sh
openstack user list
+----------------------------------+-------+
| ID | Name |
+----------------------------------+-------+
| 54d64f44f90b403aaf75224621b636db | admin |
+----------------------------------+-------+
If the openstack client can list the actual users, the configuration is correct.
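Another optional check is to request a token directly, which exercises the full authentication path:
openstack token issue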
# Creating the service Project
The command is:
openstack project create --domain default --description "Service project" service
The expected output is:
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Service project |
| domain_id | default |
| enabled | True |
| id | 5e9f30bfff134e21ad86e919a7f6f099 |
| is_domain | False |
| name | service |
| options | {} |
| parent_id | default |
| tags | [] |
+-------------+----------------------------------+
This service project will be used by several components later. Keystone configuration is now complete.
# Configuring Placement
# Creating the placement User
Create the placement user in the OpenStack cluster:
openstack user create --domain default --password-prompt placement
The password is:
Sy2lm71IrMiks3EW
You are prompted for the placement user's password, and the user is then created:
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 8b1e625b90534d14a0d16539a11ad0af |
| name | placement |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
# Binding the Role
Add the placement user to the admin role:
openstack role add --project service --user placement admin
This command produces no output when it succeeds.
# Creating the placement Service
Create the placement service in the cluster:
openstack service create --name placement --description "Placement API" placement
The output looks like this:
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Placement API |
| enabled | True |
| id | bd88d30e494448c99e5aa656c2dc4902 |
| name | placement |
| type | placement |
+-------------+----------------------------------+
# Creating the Service Endpoints
Create the placement service endpoints; the commands and their output are:
openstack endpoint create --region RegionOne placement public http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 346383db4956412cbdd689bfddf895a1 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | bd88d30e494448c99e5aa656c2dc4902 |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
openstack endpoint create --region RegionOne placement internal http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 486d43cbf29c4db69a9973d0cc78ee09 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | bd88d30e494448c99e5aa656c2dc4902 |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
openstack endpoint create --region RegionOne placement admin http://controller:8778
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 3aa3fec1f0a5455e9a750f3271d628c9 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | bd88d30e494448c99e5aa656c2dc4902 |
| service_name | placement |
| service_type | placement |
| url | http://controller:8778 |
+--------------+----------------------------------+
# Editing the Configuration File
Open /etc/placement/placement.conf on the controller node; the settings to change are:
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000/v3
auth_version = v3
service_token_roles = service
service_token_roles_required = true
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = Sy2lm71IrMiks3EW
[placement_database]
connection = mysql+pymysql://placement:zEwd43RWhNxYPWVw@controller/placement
# Initializing the Database
With the configuration in place, initialize the placement database:
su -s /bin/sh -c "placement-manage db sync" placement
This command may print warnings that do not affect the rest of the installation. After the database is initialized, a file named 00-placement-api.conf appears under /etc/httpd/conf.d/ containing placement's virtual host. Open it and change the listen address to:
Listen controller:8778
Also add the block between the ... markers below inside the <VirtualHost *:8778> section of the same file:
<VirtualHost *:8778>
<Directory /usr/bin>
Require all denied
<Files "placement-api">
<RequireAll>
Require all granted
Require not env blockAccess
</RequireAll>
</Files>
</Directory>
...
</VirtualHost>
This accounts for a difference between httpd 2.2 and 2.4; without this block, httpd 2.4 refuses to execute /usr/bin/placement-api and the placement API cannot work. Restart httpd:
systemctl restart httpd
Placement is now installed.
# Verifying the Placement Service
Run the following command to check that placement works correctly:
placement-status upgrade check
The expected output is:
+----------------------------------+
| Upgrade Check Results |
+----------------------------------+
| Check: Missing Root Provider IDs |
| Result: Success |
| Details: None |
+----------------------------------+
| Check: Incomplete Consumers |
| Result: Success |
| Details: None |
+----------------------------------+
Then check the command's exit code:
echo $?
0
The possible return codes are:

Code | Meaning |
---|---|
0 | All upgrade readiness checks passed |
1 | At least one check found an issue that needs investigation; it may be only a warning, and the upgrade may still succeed |
2 | An upgrade status check failed and needs investigation; something may cause the upgrade to fail |
255 | Unknown error |

If everything is fine the exit code is 0. Placement configuration is now complete.
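For an additional end-to-end check, the placement API root can be queried directly; it should return a small JSON document listing the available API versions (optional):
curl http://controller:8778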
# Configuring Glance
# Creating the glance User
With the databases created, create the glance user in the OpenStack cluster:
openstack user create --domain default --password-prompt glance
Enter the glance user's password at the prompt; the creation output then appears:
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | d486f252b7e94e1495e4fd8e6107a41a |
| name | glance |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
The glance user password set here is:
Tn3Ss1mmh7WPQOpk
# Binding the Role
Add the glance user to the admin role:
openstack role add --project service --user glance admin
This command produces no output when it succeeds.
# Creating the glance Service Entry
The command is:
openstack service create --name glance --description "OpenStack Image" image
Example output:
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Image |
| enabled | True |
| id | 2d6c31b6c0be45a4b09a7136b107b4d4 |
| name | glance |
| type | image |
+-------------+----------------------------------+
# Creating the Service Endpoints
# public endpoint
openstack endpoint create --region RegionOne image public http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 6766d903dd184168958e8677d3a0157f |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 2d6c31b6c0be45a4b09a7136b107b4d4 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
# internal endpoint
openstack endpoint create --region RegionOne image internal http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 833473f547fc4e84b802d89386e923d7 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 2d6c31b6c0be45a4b09a7136b107b4d4 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
# admin endpoint
openstack endpoint create --region RegionOne image admin http://controller:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | fd18b04dc5934b33bfbe04eaa3cc6c26 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 2d6c31b6c0be45a4b09a7136b107b4d4 |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
+--------------+----------------------------------+
# Editing the Configuration File
Open /etc/glance/glance-api.conf; several parts of this file need changes.
- Set the bind address, the glance database connection, and the related sections:
[DEFAULT]
# ...
bind_host = 192.168.31.185
# ..
[database]
connection = mysql+pymysql://glance:35ktJazstxE8ZzHv@controller/glance
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_version = v3
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
service_token_roles = service
service_token_roles_required = True
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = Tn3Ss1mmh7WPQOpk
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images
[paste_deploy]
flavor = keystone
Here a local filesystem path is used as the image store, i.e. image files are kept on the controller node's disk. In the production setup later we will integrate glance with Ceph and store images in the backend Ceph cluster instead.
# Initializing the Database
Run the following command to initialize the glance database:
su -s /bin/sh -c "glance-manage db_sync" glance
Output like the following at the end means the sync succeeded:
INFO [alembic.runtime.migration] Context impl MySQLImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
Database is synced successfully.
# Starting the glance-api Service
Start glance-api and enable it at boot:
systemctl start openstack-glance-api.service
systemctl enable openstack-glance-api.service
Then verify with:
source admin-openrc.sh
openstack image list
If there is no error, glance-api queries work.
# Testing Image Creation
Download a CirrOS image to the server's /root directory and use it to test image creation:
openstack image create --file cirros-0.6.2-x86_64-disk.img --disk-format qcow2 --container-format bare --public cirros
If the image is created and its status is active, glance-api is working properly:
openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| e633e629-3c6c-4f86-b8b4-33bffe63a595 | cirros | active |
+--------------------------------------+--------+--------+
Glance is now installed.
# Configuring Cinder
# Creating the cinder User
Run the following command to create the cinder user in the OpenStack cluster:
openstack user create --domain default --password-prompt cinder
The password is the cinder password from the table above, krPNls9to3y54sTG. The output is:
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 7ee77e86b81e46459c00f9732a7980d3 |
| name | cinder |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
# Binding the Role
Bind the cinder user to the admin role:
openstack role add --project service --user cinder admin
# Creating the cinder Services
Both the v2 and v3 block storage services must be created; the differences between them are covered in detail later:
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | c0ee230c7de941e0b25f112ed6017e53 |
| name | cinderv2 |
| type | volumev2 |
+-------------+----------------------------------+
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Block Storage |
| enabled | True |
| id | bc7281663f5943f68c91921eb2d2edd5 |
| name | cinderv3 |
| type | volumev3 |
# Creating the Service Endpoints
First create the v2 endpoints:
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
+--------------+---------------------------------------+
| Field | Value |
+--------------+---------------------------------------+
| enabled | True |
| id | 25dbebfaed4947e386daf08f09f44233 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | c0ee230c7de941e0b25f112ed6017e53 |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+--------------+---------------------------------------+
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
+--------------+---------------------------------------+
| Field | Value |
+--------------+---------------------------------------+
| enabled | True |
| id | f63b79e8582547c88b1be6b905158233 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | c0ee230c7de941e0b25f112ed6017e53 |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+--------------+---------------------------------------+
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
+--------------+---------------------------------------+
| Field | Value |
+--------------+---------------------------------------+
| enabled | True |
| id | b4a094a05a704cdf8f6993a1eb63cc8b |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | c0ee230c7de941e0b25f112ed6017e53 |
| service_name | cinderv2 |
| service_type | volumev2 |
| url | http://controller:8776/v2/%(project_id)s |
+--------------+---------------------------------------+
Then create the v3 endpoints:
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
+--------------+---------------------------------------+
| Field | Value |
+--------------+---------------------------------------+
| enabled | True |
| id | ec16ede9346a493ba108add63930e45a |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | bc7281663f5943f68c91921eb2d2edd5 |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+--------------+---------------------------------------+
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
+--------------+---------------------------------------+
| Field | Value |
+--------------+---------------------------------------+
| enabled | True |
| id | 1967d21b50cb4fb7b021235d8fbbaa3b |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | bc7281663f5943f68c91921eb2d2edd5 |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+--------------+---------------------------------------+
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s
+--------------+---------------------------------------+
| Field | Value |
+--------------+---------------------------------------+
| enabled | True |
| id | 87dcee4419bb41088b792b71c9f573a5 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | bc7281663f5943f68c91921eb2d2edd5 |
| service_name | cinderv3 |
| service_type | volumev3 |
| url | http://controller:8776/v3/%(project_id)s |
+--------------+---------------------------------------+
Note for the following steps: cinder consists of four services:
- openstack-cinder-api, the core API
- openstack-cinder-scheduler, the scheduler, which picks a suitable node to handle each request
- openstack-cinder-volume, which drives the volume backend to create the actual storage volumes
- openstack-cinder-backup, which handles volume backups and snapshots
If the controller and compute nodes are separate, only the first two services need to run on the controller and only the last two on the compute node. Because they run different services, the controller and compute node configurations also differ.
# Editing the Configuration File
Open /etc/cinder/cinder.conf. On the controller node the settings to change are:
[DEFAULT]
# Basic settings
auth_strategy = keystone
glance_api_servers = http://controller:9292
my_ip = 192.168.31.185
osapi_volume_listen = 192.168.31.185
transport_url = rabbit://openstack:i6sxgdW2Jbo3nHNE@controller
[database]
connection = mysql+pymysql://cinder:nrKa2GHj3HUZicCF@controller/cinder
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_version = v3
auth_url = http://controller:5000
memcached_servers = localhost:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = krPNls9to3y54sTG
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
Save the file after making the changes and continue.
# Initializing the Database
Initialize the cinder database and create the required tables:
su -s /bin/sh -c "cinder-manage db sync" cinder
# Starting the Services
On the controller node only these two services need to be started:
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
Once they are running, test with:
cinder list
An empty listing like the following means the configuration is correct:
+----+--------+------+------+-------------+----------+-------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+----+--------+------+------+-------------+----------+-------------+
+----+--------+------+------+-------------+----------+-------------+
An error means something in the configuration is wrong and needs to be checked. Since only cinder-api and cinder-scheduler are set up so far, and cinder-volume and cinder-backup are not, volume creation and snapshot backups cannot be tested yet; we will test them once the compute node is configured.
# Additional Notes
After a data volume is attached to a server instance, an attachment record exists, and it is tracked in the database. Five commands manage these attachments:
- cinder attachment-list, list all attachments
- cinder attachment-show attach_id, show the details of an attachment
- cinder attachment-delete attach_id, delete an attachment
- cinder attachment-create, create an attachment
- cinder attachment-update, update an attachment
The last two take more complex parameters that depend on the specific situation. Disks are normally attached to or detached from VMs directly in the web UI; these commands are only needed when the operation cannot be done there.
# Configuring Neutron
# Creating the neutron User
The command to create the neutron user is:
openstack user create --domain default --password-prompt neutron
The password is the neutron user password from the table at the beginning. The output is:
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | fdb0f541e28141719b6a43c8944bf1fb |
| name | neutron |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
# Binding the Role
Bind the neutron user to the admin role:
openstack role add --project service --user neutron admin
# Creating the neutron Service
Create the neutron networking service in the cluster:
openstack service create --name neutron --description "OpenStack Networking" network
Example output:
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | 8c0358d5d2ba4f929062e0e0680f5147 |
| name | neutron |
| type | network |
+-------------+----------------------------------+
# Creating the Service Endpoints
Create the three neutron endpoints:
openstack endpoint create --region RegionOne network public http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 9c53f31a2a5245eb8d3a3b22437de0b2 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 8c0358d5d2ba4f929062e0e0680f5147 |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
openstack endpoint create --region RegionOne network internal http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 6d2f2106614844c3a77da24ce1a69371 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 8c0358d5d2ba4f929062e0e0680f5147 |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
openstack endpoint create --region RegionOne network admin http://controller:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | c565a698bed649fb96dd92681d460538 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 8c0358d5d2ba4f929062e0e0680f5147 |
| service_name | neutron |
| service_type | network |
| url | http://controller:9696 |
+--------------+----------------------------------+
# Editing the Neutron Configuration
We use a provider network, i.e. a pure layer-2 network. Open /etc/neutron/neutron.conf; the settings to add or change are:
[DEFAULT]
core_plugin = ml2
service_plugins =
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
transport_url = rabbit://openstack:i6sxgdW2Jbo3nHNE@controller
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000/v3
memcached_servers = localhost:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = ulfCxXYP6zlx5EIe
[database]
connection = mysql+pymysql://neutron:tf99MMkexjAX2ncg@controller/neutron
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
# Nova-related Settings
The Neutron service needs to interact with Nova, so add the Nova section at the end of /etc/neutron/neutron.conf:
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = rInAhw7qspZTFI4p
Save and close neutron.conf when you are done.
# ML2 Configuration
Edit the ML2 configuration file /etc/neutron/plugins/ml2/ml2_conf.ini; the settings to add are:
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[securitygroup]
enable_ipset = true
This configuration allows only flat and VLAN network types; the network types and their differences are explained in detail in the Neutron chapter later.
# Linux Bridge Agent Configuration
Open /etc/neutron/plugins/ml2/linuxbridge_agent.ini and add the following:
[linux_bridge]
physical_interface_mappings = provider:ens33
[vxlan]
enable_vxlan = false
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
The ens33 after provider: is the interface on the controller node that connects to the VM network; fill in your own interface name, since the second NIC may be named differently on every machine. For this setup, CentOS 7 also needs a new module-load configuration file:
vim /etc/modules-load.d/neutron.conf
The file contains:
br_netfilter
This makes the br_netfilter module load at boot; load it once manually as well:
modprobe br_netfilter
Then add the following three lines to /etc/sysctl.conf:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
Run sysctl -p to apply the settings.
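To confirm that the module is loaded and the settings took effect, an optional check is:
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward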
# Configuring the DHCP Agent
Edit /etc/neutron/dhcp_agent.ini:
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Append the three lines above at the end of that file.
# Configuring the Metadata Agent
Open /etc/neutron/metadata_agent.ini:
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = INRV1Qqba62akutd
These two settings must also be written into the [DEFAULT] section of /etc/neutron/neutron.conf, otherwise the neutron-metadata-agent will not pick them up.
# Creating the Configuration Symlink
Create the symlink for the ML2 configuration file:
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
# Initializing the Database
Initialize the neutron database:
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Output like the following at the end means the database initialization succeeded:
INFO [alembic.runtime.migration] Running upgrade 2e0d7a8a1586 -> 5c85685d616d
OK
# Starting the Services
Start the networking services and enable them at boot:
systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl status neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
Only enable the services at boot after confirming they are running; otherwise a service that keeps failing will eventually be blocked by systemd after too many restart attempts and cannot be started again until its state is reset with the command below:
systemctl reset-failed neutron-server
The example above resets neutron-server; if any other service ends up in the same state, run the same command with its name.
# Verification
Verify with the following command:
openstack port list
If it runs without error, neutron is working; an error indicates a configuration problem.
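Another useful optional check is to list the neutron agents and confirm that the linuxbridge, DHCP and metadata agents are alive:
openstack network agent list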
# Creating the provider Network
On the controller node, load the admin credentials:
source admin-openrc.sh
Then create a virtual network:
openstack network create --share --external --provider-physical-network provider --provider-network-type flat flat_net
On success the output looks like this:
+---------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+---------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2022-01-12T03:39:09Z |
| description | |
| dns_domain | None |
| id | 694d2510-24a4-4ea0-ab02-ffd783d15322 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | None |
| is_vlan_transparent | None |
| location | cloud='', project.domain_id=,
project.domain_name='Default', project.id='a95171d611aa4ae7a6e05c3d8c3ddb3e',
project.name='admin', region_name='RegionOne', zone= |
| mtu | 1500 |
| name | flat_net |
| port_security_enabled | True |
| project_id | a95171d611aa4ae7a6e05c3d8c3ddb3e |
| provider:network_type | flat |
| provider:physical_network | provider |
| provider:segmentation_id | None |
| qos_policy_id | None |
| revision_number | 1 |
| router:external | External |
| segments | None |
| shared | True |
| status | ACTIVE |
| subnets | |
| tags | |
| updated_at | 2022-01-12T03:39:09Z |
+---------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------+
# Creating a Subnet
With the network created, create a subnet under it so that VMs can be assigned IP addresses:
openstack subnet create --network flat_net --allocation-pool start=192.168.116.150,end=192.168.116.253 --dns-nameserver=192.168.116.2 --gateway=192.168.116.2 --subnet-range=192.168.116.0/24 flat_subnet
This command creates an allocation pool from 192.168.116.150 to 192.168.116.253; the DNS server and the gateway are both 192.168.116.2. On success the output is:
+-------------------+---------------------------------------------------------------------------------+
| Field | Value |
+-------------------+---------------------------------------------------------------------------------+
| allocation_pools | 192.168.116.150-192.168.116.253 |
| cidr | 192.168.116.0/24 |
| created_at | 2023-10-23T14:25:49Z |
| description | |
| dns_nameservers | 192.168.116.2 |
| enable_dhcp | True |
| gateway_ip | 192.168.116.2 |
| host_routes | |
| id | fc819678-b982-4ae9-95dc-8a69bc9f93d5 |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| location | cloud='', project.domain_id=, project.domain_name='Default',
project.id='fed9773f4da748f0a64670cfc22706c9',
project.name='admin', region_name='', zone= |
| name | flat_subnet |
| network_id | 8a1ee5ca-9c82-41fc-b1de-d62b49344e09 |
| prefix_length | None |
| project_id | fed9773f4da748f0a64670cfc22706c9 |
| revision_number | 0 |
| segment_id | None |
| service_types | |
| subnetpool_id | None |
| tags | |
| updated_at | 2023-10-23T14:25:49Z |
+-------------------+---------------------------------------------------------------------------------+
The subnet now also shows up in the network listing:
openstack network list
+--------------------------------------+----------+--------------------------------------+
| ID | Name | Subnets |
+--------------------------------------+----------+--------------------------------------+
| 694d2510-24a4-4ea0-ab02-ffd783d15322 | flat_net | 20c8056f-6cb8-4b11-b41fb275fb8cb2dc |
+--------------------------------------+----------+--------------------------------------+
Whether this network actually works can only be confirmed after Nova is configured, by creating a VM on it.
# Configuring Nova
# Creating the nova User
The command to create the user is:
openstack user create --domain default --password-prompt nova
The password is the nova user password from the password table at the beginning. Example output:
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 1f2ea839b3344c97b3c864f2b4783f59 |
| name | nova |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
# Binding the Role
Bind the nova user to the admin role:
openstack role add --project service --user nova admin
# Creating the nova Service
openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Compute |
| enabled | True |
| id | da1e894174ef4a279dc2d56aa33a1f25 |
| name | nova |
| type | compute |
+-------------+----------------------------------+
# Creating the Service Endpoints
Create the three nova endpoints in turn:
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 1a0eae0ab0384726b246c4fa6756be92 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | da1e894174ef4a279dc2d56aa33a1f25 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 877b57986d7f4d43a65252f5c0edcc09 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | da1e894174ef4a279dc2d56aa33a1f25 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 4db0abf4e8cf424a9984ff1851e4facf |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | da1e894174ef4a279dc2d56aa33a1f25 |
| service_name | nova |
| service_type | compute |
| url | http://controller:8774/v2.1 |
+--------------+----------------------------------+
# Editing the Configuration File
The file to edit is /etc/nova/nova.conf; the settings to change are:
[DEFAULT]
enabled_apis = osapi_compute,metadata
my_ip=192.168.31.185
metadata_host=$my_ip
firewall_driver=nova.virt.firewall.NoopFirewallDriver
transport_url=rabbit://openstack:i6sxgdW2Jbo3nHNE@controller
[api]
auth_strategy=keystone
[api_database]
connection=mysql+pymysql://nova:cyaV7zUa8MEdvH8V@controller/nova_api
[cinder]
catalog_info=volumev3::internalURL
os_region_name=RegionOne
auth_type=password
auth_url=http://controller:5000
project_name=service
project_domain_name=default
username=cinder
user_domain_name=default
password=krPNls9to3y54sTG
[database]
connection=mysql+pymysql://nova:cyaV7zUa8MEdvH8V@controller/nova
[glance]
api_servers=http://controller:9292
[keystone_authtoken]
www_authenticate_uri=http://controller:5000/
auth_url=http://controller:5000
memcached_servers=controller:11211
auth_type=password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = rInAhw7qspZTFI4p
[placement]
auth_type=password
auth_url=http://controller:5000/v3
project_name=service
project_domain_name=default
username=placement
user_domain_name=default
password=Sy2lm71IrMiks3EW
region_name=RegionOne
[vnc]
enabled=true
server_listen=$my_ip
server_proxyclient_address=$my_ip
novncproxy_host=controller
Then add the following to the [neutron] section of /etc/nova/nova.conf:
[neutron]
auth_type = password
auth_url = http://controller:5000
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = ulfCxXYP6zlx5EIe
service_metadata_proxy = true
metadata_proxy_shared_secret = INRV1Qqba62akutd
The metadata_proxy_shared_secret value here must match the one configured in the neutron component, otherwise nova and neutron will fail to communicate.
# Initializing the Databases
First initialize the nova-api database:
su -s /bin/sh -c "nova-manage api_db sync" nova
Register the cell0 database:
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
Create the cell1 cell:
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
Example output:
ba01448c-9822-4c87-bb01-50b0140e85d6
Initialize the nova database:
su -s /bin/sh -c "nova-manage db sync" nova
This command may print some warnings, which can be ignored. Verify that cell0 and cell1 are registered correctly:
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
The expected output is:
+-------+--------------------------------------+------------------------------------------+----------------------------------------------+----------+
| Name | UUID |Transport URL | Database Connection | Disabled |
+-------+--------------------------------------+------------------------------------------+----------------------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 |none:/ | mysql+pymysql://nova:****@controller/nova_cell0 | False |
| cell1 | d4ef4fe6-960b-4875-a786-63b3ce61cc0f | rabbit://openstack:****@controller |mysql+pymysql://nova:****@controller/nova |False |
+-------+--------------------------------------+------------------------------------------+----------------------------------------------+----------+
# Starting the Services
systemctl start openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl status openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
# Testing the Nova Service
Run the following command to check that nova works:
nova list
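You can also check the nova control-plane services with the unified client; this is an optional alternative to the legacy nova command:
openstack compute service list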
Once the main nova services are confirmed to be working, enable them at boot:
systemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
In the next section we configure a compute node.
# Configuring the Horizon Dashboard
First open the configuration file /etc/openstack-dashboard/local_settings and change the following settings in turn:
# Hostnames/IPs allowed to access the dashboard
ALLOWED_HOSTS = ['localhost','192.168.31.185','controller.my.com']
# The OpenStack (keystone) host
OPENSTACK_HOST = "controller.my.com"
# Use memcached as the cache backend
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
},
}
# Session storage engine
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
# API versions for each service; not present in the file by default, add it
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 3,
}
# Default keystone domain and role; not present by default, add right after OPENSTACK_API_VERSIONS
WEBROOT='/dashboard'
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "admin"
# When using a provider network, OPENSTACK_NEUTRON_NETWORK must be changed as follows
OPENSTACK_NEUTRON_NETWORK = {
'enable_auto_allocated_network': False,
'enable_distributed_router': False,
'enable_fip_topology_check': False,
'enable_ha_router': False,
'enable_ipv6': False,
'enable_lb': False,
'enable_firewall': False,
'enable_vpn': False,
#...
}
TIME_ZONE = "Asia/Shanghai"
Open /etc/httpd/conf.d/openstack-dashboard.conf and add the following line:
WSGIApplicationGroup %{GLOBAL}
Finally restart the services:
systemctl restart httpd memcached
Once everything checks out, continue with the steps below. At this point all core components are configured and running, and we can open a browser and access the OpenStack dashboard.
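With the WEBROOT of /dashboard configured above, the panel should be reachable at http://192.168.31.185/dashboard, logging in as admin with the administrator password from the table. An optional reachability check from the command line:
curl -I http://controller/dashboard/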
# Configuring the OpenStack Compute Node
The compute node only has to create VMs and interact with the controller, so it needs far fewer components than the controller node, mainly:
- Neutron, for VM networking
- Cinder, for VM storage
- Nova, for VM management
We configure them in turn.
- Basic setup: check virtualization support and confirm the compute node supports hardware virtualization:
egrep -c '(vmx|svm)' /proc/cpuinfo
If the value is 0, hardware virtualization is not supported; any non-zero value means it is supported.
Also confirm that firewalld is stopped and selinux is disabled on all nodes, and configure the yum repositories as before.
# Network Configuration
# Installing Packages
The packages neutron needs on a compute node differ slightly from the server side:
yum install openstack-neutron-linuxbridge ebtables ipset conntrack-tools -y
# Editing the Configuration Files
First open /etc/neutron/neutron.conf; the settings to change are:
[DEFAULT]
transport_url = rabbit://openstack:i6sxgdW2Jbo3nHNE@controller
auth_strategy = keystone
[database]
connection = mysql+pymysql://neutron:tf99MMkexjAX2ncg@controller/neutron
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
service_token_roles = service
service_token_roles_required = true
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = ulfCxXYP6zlx5EIe
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Configure it for a provider network, matching how the neutron server side was configured earlier. Then open /etc/neutron/plugins/ml2/linuxbridge_agent.ini and add or change the following:
[linux_bridge]
physical_interface_mappings = provider:ens33
[vxlan]
enable_vxlan = false
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
The ens33 after provider: is the interface that connects to the VM network on this node; fill in your own interface name, since the second NIC may be named differently on every machine.
The node also needs the br_netfilter module to load at boot:
a. Create /etc/modules-load.d/neutron.conf with the following content:
br_netfilter
This file makes the br_netfilter module load at boot. Load it once manually as well:
modprobe br_netfilter
Then verify:
#lsmod | grep br_netfilter
br_netfilter 22256 0
bridge 151336 1 br_netfilter
b. Add the following three lines to /etc/sysctl.conf:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
Then apply the settings:
sysctl -p
# Starting the Network Service
Finally start the network agent:
systemctl start neutron-linuxbridge-agent
systemctl status neutron-linuxbridge-agent
systemctl enable neutron-linuxbridge-agent
After starting it, check that the neutron-linuxbridge-agent service is in the running state before enabling it at boot.
# Storage Configuration
The compute node supports several kinds of storage backends, for example local disks on the physical machine or network storage. This test environment uses LVM volumes on local disks, which means the node needs two separate volume groups: one for Nova, to hold VM system disks, and one for Cinder, to hold VM data disks. The compute node therefore uses two 300 GB disks, /dev/sdb and /dev/sdc. First create the volume groups the LVM backend needs, with the commands below.
# Installing Packages
Install the packages required for the storage configuration:
yum install lvm2 device-mapper-persistent-data openstack-cinder \
targetcli python-keystone -y
# Creating the Volume Groups
List all disks:
ls /dev/sd*
The expected output is:
/dev/sda /dev/sda1 /dev/sda2 /dev/sda3 /dev/sdb /dev/sdc
First create the physical volumes. Since both nova and cinder use the LVM backend in this setup, create a PV on sdb and on sdc:
pvcreate /dev/sdb
pvcreate /dev/sdc
Then create the volume groups used by nova and cinder:
vgcreate nova-volumes /dev/sdb
vgcreate cinder-volumes /dev/sdc
Note that nova-volumes and cinder-volumes are the VG names; they are referenced below.
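You can confirm that both volume groups exist with an optional check:
vgs nova-volumes cinder-volumes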
# Adjusting the LVM Configuration
Open /etc/lvm/lvm.conf and add the following filter in the devices section:
global_filter = [ "a|/dev/sd*|", "r/.*/" ]
In this filter, a means accept, so only the physical /dev/sd* disks are accepted as backing devices; r means reject, and .* is a wildcard, so all other devices under /dev/ are rejected. Do not use the filter option described in the official documentation: it has been deprecated because of functional problems, and global_filter should be used instead.
# Editing the Cinder Configuration
Open /etc/cinder/cinder.conf and add the following settings (if any of them already exist, keep the values that fit your actual setup):
[DEFAULT]
# Backup/snapshot settings; only the values that differ are changed, everything else keeps its default
backup_ceph_user = cinder-backup2
backup_driver = cinder.backup.drivers.ceph.CephBackupDriver
# General settings
transport_url = rabbit://openstack:i6sxgdW2Jbo3nHNE@controller
auth_strategy = keystone
glance_api_servers = http://controller:9292
enabled_backends = lvm
[database]
connection = mysql+pymysql://cinder:nrKa2GHj3HUZicCF@controller/cinder
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = krPNls9to3y54sTG
[lvm]
target_helper = lioadm
target_protocol = iscsi
target_ip_address = 192.168.31.193
volume_backend_name = LVM
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
volumes_dir = $state_path/volumes
The storage backends currently supported by cinder-backup include:
- Ceph, block storage
- GCS, object storage
- GlusterFS, file storage
- Swift, object storage
The example above uses the Ceph backend, so two extra files are needed under /etc/ceph/: ceph.conf and ceph.client.cinder-backup2.keyring. Their roles are:
- ceph.conf, the connection configuration for the Ceph cluster
- ceph.client.cinder-backup2.keyring, the Ceph authentication keyring; cinder-backup uses it to connect to the Ceph pool that stores VM snapshots.
# Starting the Services
With the changes in place, start the LVM service:
systemctl start lvm2-lvmetad.service
systemctl enable lvm2-lvmetad.service
Start the cinder-volume service:
systemctl start openstack-cinder-volume
systemctl status openstack-cinder-volume
systemctl enable openstack-cinder-volume
The storage service now has an agent node, so to speak.
# Compute Configuration
# Installing Packages
Install the packages needed by the compute service:
yum install openstack-nova-compute
# Editing the Configuration
Open /etc/nova/nova.conf and change the following:
[DEFAULT]
my_ip=192.168.31.193
enabled_apis=osapi_compute,metadata
transport_url = rabbit://openstack:i6sxgdW2Jbo3nHNE@controller
[api]
auth_strategy=keystone
[cinder]
catalog_info=volumev3::internalURL
os_region_name=RegionOne
auth_type=password
auth_url=http://controller:5000
project_domain_name=Default
username=cinder
user_domain_name=Default
password=krPNls9to3y54sTG
[glance]
api_servers=http://controller:9292
[keystone_authtoken]
www_authenticate_uri=http://controller:5000
auth_url=http://controller:5000
memcached_servers=controller:11211
auth_type=password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = rInAhw7qspZTFI4p
[libvirt]
# LVM backend settings: images_volume_group must point at the nova-volumes
# VG created earlier
# virt_type=kvm
# When the host itself is a VMware VM, use the two settings below instead
virt_type=qemu
cpu_mode=none
snapshot_image_format=qcow2
images_type=lvm
images_volume_group=nova-volumes
[neutron]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = ulfCxXYP6zlx5EIe
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type=password
user_domain_name = default
auth_url = http://controller:5000/v3
username = placement
password = Sy2lm71IrMiks3EW
[vnc]
enabled=true
server_listen=192.168.31.193
server_proxyclient_address = $my_ip
novncproxy_base_url=http://controller:6080/vnc_auto.html
novncproxy_host=192.168.31.193
# Starting the Services
systemctl start libvirtd.service openstack-nova-compute.service
systemctl enable libvirtd.service openstack-nova-compute.service
The compute node is now configured; the remaining steps are done on the controller node.
# Adding the Compute Node to the Cell Database on the Controller
On the controller, check the state of the new nova compute node:
source admin-openrc.sh
openstack compute service list --service nova-compute
The expected output is:
+-----+--------------+------------+------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+-----+--------------+------------+------+---------+-------+----------------------------+
| 133 | nova-compute | compute1 | nova | enabled | up | 2022-01-12T07:37:38.000000 |
+-----+--------------+------------+------+---------+-------+----------------------------+
At this point the compute node's information should be visible. Then run the following command on the controller to discover the compute host and register it in the cell:
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
You should normally see output like this:
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': d4ef4fe6-960b-4875-a786-63b3ce61cc0f
Checking host mapping for compute host 'compute1': b19a3326-dcc0-4e70-848a-557161d3c5e9
Creating host mapping for compute host 'compute1': b19a3326-dcc0-4e70-848a-557161d3c5e9
Found 1 unmapped computes in cell: d4ef4fe6-960b-4875-a786-63b3ce61cc0f
This means the registration succeeded, and you can now try creating and starting a VM instance.
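As a quick end-to-end test (only a sketch; the flavor name and sizes are arbitrary choices), create a small flavor and boot a CirrOS instance on the flat_net network created earlier:
openstack flavor create --vcpus 1 --ram 512 --disk 1 m1.tiny
openstack server create --flavor m1.tiny --image cirros --network flat_net test-vm
openstack server list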
# Managing Compute Nodes
# Removing a Compute Node
First log in to the compute node and stop its services:
systemctl stop openstack-nova-compute
systemctl stop neutron-linuxbridge-agent
The service now shows up as down:
openstack compute service list --service nova-compute
+-----+--------------+------------+------+----------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+-----+--------------+------------+------+----------+-------+----------------------------+
| 133 | nova-compute | compute1 | nova | disabled | down | 2022-01-12T15:52:36.000000 |
+-----+--------------+------------+------+----------+-------+----------------------------+
Remove the service by its ID:
openstack compute service delete 133
Checking with the resource provider listing also shows it is gone:
openstack resource provider list
The node has now been removed.
# Re-adding a Removed Node
On the node, start the openstack-nova-compute and neutron-linuxbridge-agent services again:
systemctl start openstack-nova-compute
systemctl start neutron-linuxbridge-agent
Back on the controller, the node shows up again:
openstack compute service list --service nova-compute
The result is:
+-----+--------------+------------+------+---------+-------+----------------------------+
| ID | Binary |Host | Zone | Status | State | Updated At |
+-----+--------------+------------+------+---------+-------+----------------------------+
| 136 | nova-compute | compute1 | nova | enabled | up | 2022-01-12T16:00:25.000000|
+-----+--------------+------------+------+---------+-------+----------------------------+
2
3
4
5
Then register it again on the controller:
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': ba01448c-9822-4c87-bb01-50b0140e85d6
Checking host mapping for compute host 'compute-04': 64633c9e-b826-459c-b4ca-27a286f93cd0
Creating host mapping for compute host 'compute-04': 64633c9e-b826-459c-b4ca-27a286f93cd0
Found 1 unmapped computes in cell: ba01448c-9822-4c87-bb01-50b0140e85d6
Note: when running this lab on VMware virtual machines, hardware compatibility issues can cause an OpenStack instance to hang at "Booting from hard disk" when it boots. Configuring virt_type as qemu fixes this.