OpenStack configuration files: oslo.config
This post introduces oslo.config, the OpenStack component responsible for parsing CLI and configuration-file (CONF) options.
Before the Essex release this functionality lived in each project's cfg module; the community later decided to pull the pieces common to all of OpenStack out into the Oslo libraries. New OpenStack components will presumably all use the Oslo modules from now on.
Usage is described below.
When Oslo's cfg module is imported (from oslo.config import cfg), the module-level code CONF = ConfigOpts() runs and creates a global configuration-option manager.
As with many configuration libraries, oslo.config requires each option's name, type, help text, and default value to be declared first; the declared options are then used to parse the values supplied on the CLI or in configuration files.
An example option declaration looks like this:
[python]
common_opts = [
    cfg.StrOpt('bind_host',
               default='0.0.0.0',
               help='IP address to listen on'),
    cfg.IntOpt('bind_port',
               default=9292,
               help='Port number to listen on'),
]

Each option type corresponds to a subclass of Opt.
Option definitions are registered with the ConfigOpts manager class at runtime using register_opt()/register_opts(); registration must be completed before the option is first referenced.
[python]
CONF = cfg.CONF
CONF.register_opts(common_opts)

port = CONF.bind_port
With conf.register_cli_opts(), an option can also be registered on the ConfigOpts manager as a CLI option, so that its value is taken from the command-line arguments the program was started with; when a usage error is printed, the help text for the CLI options is emitted automatically.
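A minimal sketch of CLI-option registration; the option names ('verbose', 'log-file') and project name are illustrative and not from the original article:

[python]
from oslo.config import cfg

cli_opts = [
    cfg.BoolOpt('verbose', short='v', default=False,
                help='Print more verbose output'),
    cfg.StrOpt('log-file', default=None,
               help='Log output to the named file'),
]

CONF = cfg.CONF
CONF.register_cli_opts(cli_opts)

# Parses sys.argv[1:]; unknown options or --help print the generated usage text.
CONF(project='example')

if CONF.verbose:
    print('verbose mode, logging to %s' % CONF.log_file)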
Configuration files use an INI-style format:

[plain]
glance-api.conf:

[DEFAULT]
bind_port = 9292
...

[rabbit]
host = localhost
port = 5672
use_ssl = False
userid = guest
password = guest
virtual_host = /

Finally, the ConfigOpts.__call__() method performs the option parsing and reads the option values from the CLI or from the configuration files.
[python]
def __call__(self,
             args=None,
             project=None,
             prog=None,
             version=None,
             usage=None,
             default_config_files=None):
    """Parse command line arguments and config files."""

A complete example:
[python]
from oslo.config import cfg

opts = [
    cfg.StrOpt('bind_host', default='0.0.0.0'),
    cfg.IntOpt('bind_port', default=9292),
]

CONF = cfg.CONF
CONF.register_opts(opts)
CONF(default_config_files=['glance.conf'])

def start(server, app):
    server.start(app, CONF.bind_port, CONF.bind_host)
As in many other open-source Python projects, OpenStack declares configuration options inside the modules that use them: an option is declared wherever it is consumed. I find this a thoroughly Pythonic style; unlike approaches that centralize declarations, a reader can find an option's definition at the top of the file that uses it instead of hunting through some designated file, which keeps things in line with the KISS principle.
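A minimal sketch of that per-module pattern; the module and option names below are hypothetical:

[python]
# myservice/api.py -- a hypothetical module; it declares the options it consumes.
from oslo.config import cfg

api_opts = [
    cfg.IntOpt('api_workers', default=4,
               help='Number of API worker processes'),
]

CONF = cfg.CONF
CONF.register_opts(api_opts)

def get_worker_count():
    # The option is only referenced after the registration above.
    return CONF.api_workers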
OpenStack high-availability environment setup (part 1): building the non-HA environment
1 Design plan
Basic information for the four nodes:
10.192.44.148
10.192.44.149
10.192.44.150
10.192.44.151
Each machine has one 128 GB SSD and four 2 TB data disks.
1.1 Network plan
For now a single-NIC scheme is used: each machine uses one network interface with its current IP address.
Later the management, storage, storage-management, tenant, and external networks will be separated; for now everything runs over the single NIC.
IP address list:
Hostname | IP (eth0)            | IP1 (spare IP)      | Tunnel IP (eth0:1) | OpenStack role         | Ceph mon role | Ceph OSD    | Spec       | VIP
node1    | 10.192.44.148 (eth0) | 172.16.2.148 (eth3) |                    | Controller1 + network1 | Mon0          | Osd0~osd3   | 4 Core 16G |
node2    | 10.192.44.149 (eth0) | 172.16.2.149 (eth1) |                    | Controller2 + network2 | Mon1          | Osd4~osd7   | 4 Core 16G |
node3    | 10.192.44.150 (eth0) | 172.16.2.150 (eth1) |                    | Compute1               | Mon2          | Osd8~osd11  | 4 Core 16G |
node4    | 10.192.44.151 (eth1) | 172.16.2.151 (eth2) |                    | Compute2               | Mon3          | Osd12~osd15 | 8 Core 16G |
Note: this was later adjusted. Because libvirt was upgraded on 150 and 151, and 150 lost network connectivity after every reboot, the plan changed to installing (controller + network node: 148) + (compute node: 149) first, and adding 150 and 151 later when doing high availability:

Hostname | IP (eth0)            | IP1 (spare IP)      | Tunnel IP (eth0:1) | OpenStack role         | Ceph mon role | Ceph OSD  | Spec       | VIP
node1    | 10.192.44.148 (eth0) | 172.16.2.148 (eth3) | eth0:1             | controller1 + network1 | Mon0          | Osd0~osd3 | 4 Core 16G |
node2    | 10.192.44.149 (eth0) | 172.16.2.149 (eth1) | eth0:1             | compute1               | Mon1          | Osd4~osd7 | 4 Core 16G |
IP of the second NIC: 172.16.2.148
[root@compute1 network-scripts]# cat ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
STARTMODE=onboot
BOOTPROTO=static
IPADDR=172.16.2.150
NETMASK=255.255.255.0
GATEWAY=10.192.44.254
Two nodes are used as combined (controller + network) nodes.
Two nodes are used as compute nodes.
The controller nodes run a lot of services, so it is not appropriate to also reuse all of them as compute nodes running VMs.
Compute nodes need the most resources (CPU, memory), so the machine with the most resources serves as a compute node.
1.2 Storage plan
The system disk is a 128 GB SSD; the storage disks are 2 TB SATA drives.
The system disk has spare space (roughly 90 GB), which is used as Ceph journal space and set up as the hda5 partition.
Hostname | Ceph mon | Ceph journal | Ceph OSD
Node1    | Mon0     | /dev/hda5    | osd.0~osd.3: sdb1/sdc1/sdd1/sde1
Node2    | Mon1     | /dev/hda5    | osd.4~osd.7: sdb1/sdc1/sdd1/sde1
Node3    | Mon2     | /dev/hda5    | osd.8~osd.11: sdb1/sdc1/sdd1/sde1
Node4    | Mon3     | /dev/hda5    | osd.12~osd.15: sdb1/sdc1/sdd1/sde1

RBD pools:

Service | RBD pool | PG num
Glance  | Images   | 128
Cinder  | Volumes  | 128
Nova    | Vms      | 128
Note: partition the disks in advance, making each whole disk a single sdx1 partition; if the sdx devices are left for the installer, the sdx1 partition that ceph-deploy creates is only 5 GB.
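Once the cluster is up and the pools from the table above have been created, one hedged way to confirm them from Python is the rados binding. A minimal sketch, assuming python-rados is installed, /etc/ceph/ceph.conf and the client.admin keyring are readable, and the pools were created with the lower-case names images/volumes/vms:

import rados

# Connect with the cluster config and the default client.admin keyring.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    pools = cluster.list_pools()
    for name in ('images', 'volumes', 'vms'):
        print('%-8s %s' % (name, 'present' if name in pools else 'MISSING'))
finally:
    cluster.shutdown()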
1.3 Services per role
OpenStack role | Services
Controller     | httpd, rabbitmq, keystone, glance, neutron-server, nova-api & scheduler, cinder-api & scheduler
Network        | Neutron agents: l3-agent, openvswitch-agent, dhcp-agent
Compute        | nova-compute, neutron-openvswitch, cinder-volume
1.4 Other notes
(1) Ceilometer is not installed for now; it is resource-hungry and there is no HA plan for it at the moment.
(2) Swift object storage is not installed for now.
(3) Resources are very limited; validation may only be able to run two virtual machines.
1.5 Important reminders
1. After every step, verify that creating images, volumes, networks, and virtual machines still works, so that errors do not pile up and force a reinstall.
2. Save every configuration change to git:
.cn/projects/FSDMDEPTHCLOUD/repos/hcloud_install_centos/browse/project_beijing
3. Anything unclear must first be verified on a virtual machine.
4. Still install two Horizon instances for now.
2.1 yum or rpm
Configure yum so that the kernel and the distribution release are not upgraded:
/etc/yum.conf:
keepcache=1
exclude=kernel*
exclude=centos-release*
Remove the old repo files:
# rm yum.repos.d/ -rf
Replace them with the current ones.
Then refresh the repositories:
# yum clean all
# yum makecache
# yum update -y
# yum upgrade -y
Never actually run yum update or yum upgrade (see 2.6).
Future improvement: turn this into an automated script.
Installation approach:
1. On a virtual machine, do a full all-in-one install with yum and keep the cached rpm packages.
2. On the physical machines, install from those rpm packages. Continuous integration will eventually need rpm-based installs anyway, so it is better to sort this out once now and script it.
[root@node1 etc]# vi yum.conf
cachedir=/var/cache/yum
keepcache=1
(1) Installing Ceph with yum works without any problems.
(2) For OpenStack, first install one all-in-one node to check the environment.
2.2 Setting /etc/hostname
Check /etc/hostname on each node:
[root@localhost ~]# cat /etc/hostname
Future improvement: integrate this into the automated script.
2.3 /etc/hosts setup
[root@localhost ~]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.1.14..cn
10.192.44.148 node1
10.192.44.149 node2
10.192.44.150 node3
10.192.44.151 node4
Future improvement: integrate this into the automated script.
2.4 Disable the firewall
systemctl stop firewalld.service
systemctl disable firewalld.service
2.5 Command summary
systemctl stop firewalld.service
systemctl disable firewalld.service
yum install ceph -y
yum install ceph-deploy -y
yum install yum-plugin-priorities -y
yum install snappy leveldb gdisk python-argparse gperftools-libs -y
# ceph-deploy new lxp-node1 lxp-node2 lxp-node3
# ceph-deploy install lxp-node1 lxp-node2 lxp-node3
# ceph-deploy --overwrite-conf mon create-initial
ceph-deploy mon create lxp-node1 lxp-node2 lxp-node3
ceph-deploy gatherkeys lxp-node1 lxp-node2 lxp-node3
/etc/init.d/ceph -a start osd
systemctl enable haproxy
systemctl start haproxy
systemctl enable keepalived
systemctl start keepalived
# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service
# rabbitmqctl add_user guest guest
chown rabbitmq:rabbitmq .erlang.cookie
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@lxp-node1
rabbitmqctl start_app
rabbitmqctl cluster_status
rabbitmqctl set_policy ha-all '^(?!amq\.).*' '{"ha-mode": "all"}'
yum install MySQL-python mariadb-galera-server galera xtrabackup socat
# systemctl enable mariadb.service
# systemctl restart mariadb.service
yum install openstack-keystone httpd mod_wsgi python-openstackclient memcached python-memcached
systemctl enable memcached.service
systemctl start memcached.service
#yum install python-pip
#pip install eventlet
mkdir -p /var/www/cgi-bin/
Copy the keystone directory over from node1 as a tarball and unpack it:
chown -R keystone:keystone /var/www/cgi-bin/keystone
chmod 755 /var/www/cgi-bin/keystone/ -R
重启httpd:
# systemctl enable httpd.service
# systemctl start httpd.service
[root@lxp-node1~]# export OS_TOKEN=c5a16fa64c00554bde49
[root@lxp-node1~]# export OS_URL=http://192.168.129.130:3
# systemctl enable openstack-glance-api.service openstack-glance-registry.service
# systemctl start openstack-glance-api.service openstack-glance-registry.service
systemctl restart openstack-glance-api.service openstack-glance-registry.service
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '6fbbfc';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '6fbbfc';
MariaDB [(none)]> FLUSH PRIVILEGES;
ceph osd tree
/etc/init.d/ceph-a start osd
# ceph-deploy --overwrite-conf osd prepare lxp-node1:/data/osd4.lxp-node1:/dev/sdb2 lxp-node2:/data/osd5.lxp-node2:/dev/sdb2 lxp-node3:/data/osd6.lxp-node3:/dev/sdb2
# ceph-deploy --overwrite-conf osd activate lxp-node1:/data/osd4.lxp-node1:/dev/sdb2 lxp-node2:/data/osd5.lxp-node2:/dev/sdb2 lxp-node3:/data/osd6.lxp-node3:/dev/sdb2
# ceph osd lspools
# ceph pg stat
ceph osd pool create image 32
# ceph osd lspools
yum install openstack-dashboard httpd mod_wsgi memcached python-memcached
# systemctl restart httpd.service
yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient
# systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'b7cf1';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'b7cf1';
FLUSH PRIVILEGES;
# systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
yum install openstack-cinder python-cinderclient python-oslo-db
# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
# systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'afdfc435eb0b4372';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'afdfc435eb0b4372';
FLUSH PRIVILEGES;
yum install openstack-neutron openstack-neutron-ml2 python-neutronclient
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '11becb';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '11becb';
MariaDB [(none)]> FLUSH PRIVILEGES;
# systemctl enable neutron-server.service
# systemctl restart neutron-server.service
# systemctl enable openvswitch.service
# systemctl restart openvswitch.service
# systemctl enable neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service
# systemctl restart neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service
# systemctl enable openstack-cinder-volume.service target.service
# systemctl start openstack-cinder-volume.service target.service
2.6 Differences encountered
yum -y update
Upgrades all packages and changes software and system settings; both the distribution release and the kernel are upgraded.
yum -y upgrade
Upgrades all packages without changing software or system settings; the distribution release is upgraded but the kernel is not.
After running these, the system apparently no longer boots.
2.7 Important reminders
After every configuration change, confirm that virtual machines and VNC still work.
If the system cannot boot and needs a reinstall, communicate immediately.
While the reinstall is being done in Beijing, do as much validation as possible ourselves.
3. Ceph installation
3.1 Disk partitioning: the SSD (/dev/hda5) as the Ceph journal disk
Rationale: putting the journal on the SSD gives Ceph a noticeable performance boost.
Set /dev/hda5 to mount automatically on /data/:
(1) Format hda5 as ext4:
# mkfs.ext4 /dev/hda5
(2) Create the /data directory:
# mkdir /data
(3) Add to /etc/fstab:
/dev/hda5  /data  ext4  defaults,async,noatime,nodiratime,data=writeback,barrier=0  0 0
(4) Reboot and verify that it is mounted:
[root@node1 ~]# mount |grep hda5
/dev/hda5 on /data type ext4(rw,noatime,nodiratime,nobarrier,data=writeback)
OK, the mount succeeded.
To use the SSD (/dev/hda5) as the journal disk, it is auto-mounted at boot as a local directory; ceph-deploy accepts the journal via the JOURNAL field:
[root@lxp-node1 osd.lxp-node1]# ceph-deploy osd prepare --help
usage: ceph-deploy osd prepare HOST:DISK[:JOURNAL] [HOST:DISK[:JOURNAL] ...]
3.2 Installing ceph and ceph-deploy
The Ceph packages currently install from yum without any problems:
yum install ceph -y
yum install ceph-deploy -y
yum install yum-plugin-priorities -y
yum install snappy leveldb gdisk python-argparse gperftools-libs -y
Reboot to check that the machine still comes up. Ceph only needs these packages for now.
The reboot succeeds. Leave Ceph alone for the moment since it carries no risk; deal with OpenStack first and validate the installation approach!
3.3 Ceph mon installation
3.4 Ceph OSD installation
4. Installing the basic OpenStack environment with packstack (4 nodes) -- [this approach was abandoned]
First install an all-in-one deployment to check the environment for conflicts.
Use packstack on 10.192.44.148 to install an all-in-one OpenStack environment
to validate the feasibility of the installation approach.
# yum install openstack-packstack
# yum install screen
# packstack --gen-answer-file=hcloud.txt
Disable the following options:
CONFIG_PROVISION_DEMO=n
CONFIG_CEILOMETER_INSTALL=n
CONFIG_SWIFT_INSTALL=n
CONFIG_NAG_INSTALL=n
# screen packstack --answer-file=hcloud.txt
A problem appears:
10.192.44.151_mariadb.pp: [ ERROR ]
Applying Puppet manifests [ ERROR ]
4.1 Resolving the conflict
ERROR : Error appeared during Puppet run: 10.192.44.151_mariadb.pp
Error: Execution of '/usr/bin/rpm -e mariadb-server-5.5.35-3.el7.x86_64' returned 1: error: Failed dependencies:
You will find full trace in log /var/tmp/packstack/517-yG5qIz/manifests/10.192.44.151_mariadb.pp.log
The database installation failed because of dependency problems.
Remove the existing mariadb packages; packstack will then pull in the Galera build of MariaDB:
[root@localhost ~]# rpm -aq |grep maria
mariadb-devel-5.5.35-3.el7.x86_64
mariadb-5.5.35-3.el7.x86_64
mariadb-test-5.5.35-3.el7.x86_64
mariadb-libs-5.5.35-3.el7.x86_64
mariadb-embedded-5.5.35-3.el7.x86_64
mariadb-embedded-devel-5.5.35-3.el7.x86_64
mariadb-server-5.5.35-3.el7.x86_64
# rpm -e --nodeps mariadb-devel mariadb mariadb-test mariadb-libs mariadb-embedded mariadb-embedded-devel mariadb-server
Reinstall OpenStack on node1.
The problem persists even after the removal.
Try a manual install.
Remove galera as well:
# rpm -e --nodeps mariadb-galera-common mariadb-galera-server galera
# rpm -e --nodeps mariadb-libs mariadb
[root@localhost ~]# rpm -aq | grep maria
[root@localhost ~]# rpm -aq |grep galera
Verify manually:
# yum install mariadb mariadb-server MySQL-python
Error: mariadb-galera-server conflicts with 1:mariadb-server-5.5.44-2.el7.centos.x86_64
OK, it can be installed.
Try packstack again and see whether the error recurs; if it does, check whether setting mariadb_install=n works.
If not, check whether patching packstack is feasible.
The manually installed database keeps failing to start, with the following SQL-related errors:
yum install mariadb-galera-server galera
MySQL-python-1.2.3-11.el7.x86_64 has missing requires of libmysqlclient.so.18()(64bit)
MySQL-python-1.2.3-11.el7.x86_64 has missing requires of libmysqlclient.so.18(libmysqlclient_18)(64bit)
perl-DBD-MySQL-4.023-5.el7.x86_64 has missing requires of libmysqlclient.so.18()(64bit)
perl-DBD-MySQL-4.023-5.el7.x86_64 has missing requires of libmysqlclient.so.18(libmysqlclient_18)(64bit)
Installing : 1:mariadb-libs-5.5.44-2.el7.centos.x86_64    1/5
/sbin/ldconfig: /lib64/libosipparser2.so.3 is not a symbolic link
/sbin/ldconfig: /lib64/libeXosip2.so.4 is not a symbolic link
/sbin/ldconfig: /lib64/libosip2.so.3 is not a symbolic link
/sbin/ldconfig: /lib64/libosipparser2.so.3 is not a symbolic link
/sbin/ldconfig: /lib64/libeXosip2.so.4 is not a symbolic link
/sbin/ldconfig: /lib64/libosip2.so.3 is not a symbolic link
Fix for: /sbin/ldconfig: /lib64/libosipparser2.so.3 is not a symbolic link
[root@localhost lib64]# rm libosipparser2.so.3
[root@localhost lib64]# ln -s libosipparser2.so libosipparser2.so.3
[root@localhost lib64]# ls libosipparser2.* -l
-rw-r--r-- 1 root root 707666 Apr 1 13:52 libosipparser2.a
-rw-r--r-- 1 root root 857 Apr 1 13:52 libosipparser2.la
-rw-r--r-- 1 root root 380223 Apr 1 13:52 libosipparser2.so
lrwxrwxrwx 1 root root 17 May 24 21:06 libosipparser2.so.3 -> libosipparser2.so
/sbin/ldconfig: /lib64/libeXosip2.so.4 is not a symbolic link
[root@localhost lib64]# rm libeXosip2.so.4
[root@localhost lib64]# ln -s libeXosip2.so libeXosip2.so.4
[root@localhost lib64]# ls libeXosip2.so* -l
-rw-r--r-- 1 root root 818385 Apr 1 13:52 libeXosip2.so
lrwxrwxrwx 1 root root 13 May 24 21:08 libeXosip2.so.4 -> libeXosip2.so
(3) /sbin/ldconfig: /lib64/libosip2.so.3 is not a symbolic link
[root@localhost lib64]# rm libosip2.so.3
[root@localhost lib64]# ln -s libosip2.so libosip2.so.3
Reinstall:
yum install mariadb-galera-server galera
These errors no longer appear!
There are still other dependency errors:
MySQL-python-1.2.3-11.el7.x86_64 has missingrequires of libmysqlclient.so.18()(64bit)
MySQL-python-1.2.3-11.el7.x86_64 hasmissing requires of libmysqlclient.so.18(libmysqlclient_18)(64bit)
10:libcacard-1.5.3-60.el7.x86_64 hasmissing requires of libgfapi.so.0()(64bit)
10:libcacard-1.5.3-60.el7.x86_64 hasmissing requires of libgfrpc.so.0()(64bit)
10:libcacard-1.5.3-60.el7.x86_64 hasmissing requires of libgfxdr.so.0()(64bit)
libvirt-daemon-driver-storage-1.1.1-29.el7.x86_64has missing requires of libgfapi.so.0()(64bit)
libvirt-daemon-driver-storage-1.1.1-29.el7.x86_64has missing requires of libgfrpc.so.0()(64bit)
libvirt-daemon-driver-storage-1.1.1-29.el7.x86_64has missing requires of libgfxdr.so.0()(64bit)
libvirt-daemon-driver-storage-1.1.1-29.el7.x86_64has missing requires of libglusterfs.so.0()(64bit)
perl-DBD-MySQL-4.023-5.el7.x86_64 hasmissing requires of libmysqlclient.so.18()(64bit)
perl-DBD-MySQL-4.023-5.el7.x86_64 hasmissing requires of libmysqlclient.so.18(libmysqlclient_18)(64bit)
2:postfix-2.10.1-6.el7.x86_64 has missingrequires of libmysqlclient.so.18()(64bit)
2:postfix-2.10.1-6.el7.x86_64 has missingrequires of libmysqlclient.so.18(libmysqlclient_18)(64bit)
10:qemu-img-1.5.3-60.el7.x86_64 has missingrequires of libgfapi.so.0()(64bit)
10:qemu-img-1.5.3-60.el7.x86_64 has missingrequires of libgfrpc.so.0()(64bit)
10:qemu-img-1.5.3-60.el7.x86_64 has missingrequires of libgfxdr.so.0()(64bit)
10:qemu-kvm-1.5.3-60.el7.x86_64 has missingrequires of libgfapi.so.0()(64bit)
10:qemu-kvm-1.5.3-60.el7.x86_64 has missingrequires of libgfrpc.so.0()(64bit)
10:qemu-kvm-1.5.3-60.el7.x86_64 has missingrequires of libgfxdr.so.0()(64bit)
10:qemu-kvm-common-1.5.3-60.el7.x86_64 hasmissing requires of libgfapi.so.0()(64bit)
10:qemu-kvm-common-1.5.3-60.el7.x86_64 hasmissing requires of libgfrpc.so.0()(64bit)
10:qemu-kvm-common-1.5.3-60.el7.x86_64 hasmissing requires of libgfxdr.so.0()(64bit)
First check whether mysql can be started.
It still fails to start.
Copy these libraries over from a known-good environment:
MySQL-python-1.2.3-11.el7.x86_64 hasmissing requires of libmysqlclient.so.18()(64bit)
MySQL-python-1.2.3-11.el7.x86_64 hasmissing requires of libmysqlclient.so.18(libmysqlclient_18)(64bit)
It still fails to start.
Try starting it by hand:
/usr/bin/mysqld_safe --basedir=/usr
Workaround:
Delete the ib_logfile0 and ib_logfile1 files:
# cd /var/lib/mysql/
# rm ib_logfile0 ib_logfile1
Restart the mysql service.
Still errors:
mysqld_safe mysqld from pid file /var/run/mariadb/mariadb.pid ended
touch /var/run/mariadb/mariadb.pid
# chown mysql:mysql /var/run/mariadb/mariadb.pid
# chmod 0660 /var/run/mariadb/mariadb.pid
Start it again:
# systemctl enable mariadb.service
# systemctl restart mariadb.service
Check /var/log/mariadb/mariadb.log; it reports the following errors:
:34:16 [Note] Plugin 'FEEDBACK' is disabled.
:34:16 [Note] Server socket created on IP:'0.0.0.0'.
:16 [ERROR] Fatal error: Can't open and lock privilege tables: Table'mysql.host' doesn't exist
:16 mysqld_safe mysqld from pid file /var/run/mariadb/mariadb.pid ended
Run mysql_install_db.
mariadb.log then reports the following errors:
:30 [Note] WSREP: Read nil XID from storage engines, skippingposition init
:30 [Note] WSREP: wsrep_load(): loading provider library 'none'
:30 [ERROR] mysqld: Incorrect information in file:'./mysql/tables_priv.frm'
ERROR: 1033Incorrect information in file: './mysql/tables_priv.frm'
:30 [ERROR] Aborting
:42 [ERROR] mysqld: Can't find file: './mysql/host.frm' (errno: 13)
:42 [ERROR] Fatal error: Can't open and lock privilege tables: Can'tfind file: './mysql/host.frm' (errno: 13)
This may be a permissions problem; reference:
/blog/614656
/var/lib/mysql
[root@localhost mysql]# pwd
/var/lib/mysql
[root@localhost mysql]# chmod 770 mysql/ -R
It still reports the file as missing.
chmod 777 mysql/ -R
:20 [ERROR] mysqld: Incorrect information in file:'./mysql/proxies_priv.frm'
:20 [ERROR] Fatal error: Can't open and lock privilege tables:Incorrect information in file: './mysql/proxies_priv.frm'
[root@localhost mysql]# rm proxies*
After that, the restart succeeds,
but the startup log still contains many errors:
tail -f mariadb.log
:38 [ERROR] mysqld: Incorrect information in file: './mysql/tables_priv.frm'
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column 'THREAD_ID' atposition 0 to have type int(11), found type bigint(20) unsigned.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column 'EVENT_NAME' atposition 2, found 'END_EVENT_ID'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column 'EVENT_NAME' atposition 2 to have type varchar(128), found type bigint(20) unsigned.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column 'SOURCE' at position3, found 'EVENT_NAME'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column 'SOURCE' at position 3to have type varchar(64), found type varchar(128).
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column 'TIMER_START' atposition 4, found 'SOURCE'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column 'TIMER_START' atposition 4 to have type bigint(20), found type varchar(64).
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column 'TIMER_END' atposition 5, found 'TIMER_START'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column 'TIMER_WAIT' atposition 6, found 'TIMER_END'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column 'SPINS' at position 7,found 'TIMER_WAIT'.
:38 [ERROR] Incorrect definition of table performance_schema.events_waits_current:expected column 'SPINS' at position 7 to have type int(10), found typebigint(20) unsigned.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column 'OBJECT_SCHEMA' atposition 8, found 'SPINS'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column 'OBJECT_SCHEMA' atposition 8 to have type varchar(64), found type int(10) unsigned.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column 'OBJECT_NAME' atposition 9, found 'OBJECT_SCHEMA'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column 'OBJECT_NAME' atposition 9 to have type varchar(512), found type varchar(64).
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column 'OBJECT_TYPE' atposition 10, found 'OBJECT_NAME'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column 'OBJECT_TYPE' atposition 10 to have type varchar(64), found type varchar(512).
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column'OBJECT_INSTANCE_BEGIN' at position 11, found 'INDEX_NAME'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column'OBJECT_INSTANCE_BEGIN' at position 11 to have type bigint(20), found typevarchar(64).
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column 'NESTING_EVENT_ID' atposition 12, found 'OBJECT_TYPE'.
:38 [ERROR] Incorrect definition of table performance_schema.events_waits_current:expected column 'NESTING_EVENT_ID' at position 12 to have type bigint(20),found type varchar(64).
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column 'OPERATION' at position13, found 'OBJECT_INSTANCE_BEGIN'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column 'OPERATION' atposition 13 to have type varchar(16), found type bigint(20) unsigned.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column 'NUMBER_OF_BYTES' atposition 14, found 'NESTING_EVENT_ID'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column 'FLAGS' at position15, found 'NESTING_EVENT_TYPE'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_current: expected column 'FLAGS' at position 15to have type int(10), found type enum('STATEMENT','STAGE','WAIT').
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column 'THREAD_ID' atposition 0 to have type int(11), found type bigint(20) unsigned.
:38 [ERROR] Incorrect definition of table performance_schema.events_waits_history:expected column 'EVENT_NAME' at position 2, found 'END_EVENT_ID'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column 'EVENT_NAME' atposition 2 to have type varchar(128), found type bigint(20) unsigned.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column 'SOURCE' at position3, found 'EVENT_NAME'.
:38 [ERROR] Incorrect definition of table performance_schema.events_waits_history:expected column 'SOURCE' at position 3 to have type varchar(64), found typevarchar(128).
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column 'TIMER_START' at position4, found 'SOURCE'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column 'TIMER_START' atposition 4 to have type bigint(20), found type varchar(64).
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column 'TIMER_END' atposition 5, found 'TIMER_START'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column 'TIMER_WAIT' at position6, found 'TIMER_END'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column 'SPINS' at position 7,found 'TIMER_WAIT'.
:38 [ERROR] Incorrect definition of table performance_schema.events_waits_history:expected column 'SPINS' at position 7 to have type int(10), found typebigint(20) unsigned.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column 'OBJECT_SCHEMA' atposition 8, found 'SPINS'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column 'OBJECT_SCHEMA' atposition 8 to have type varchar(64), found type int(10) unsigned.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column 'OBJECT_NAME' atposition 9, found 'OBJECT_SCHEMA'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column 'OBJECT_NAME' atposition 9 to have type varchar(512), found type varchar(64).
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column 'OBJECT_TYPE' atposition 10, found 'OBJECT_NAME'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column 'OBJECT_TYPE' atposition 10 to have type varchar(64), found type varchar(512).
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column'OBJECT_INSTANCE_BEGIN' at position 11, found 'INDEX_NAME'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column'OBJECT_INSTANCE_BEGIN' at position 11 to have type bigint(20), found typevarchar(64).
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column 'NESTING_EVENT_ID' atposition 12, found 'OBJECT_TYPE'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column 'NESTING_EVENT_ID' atposition 12 to have type bigint(20), found type varchar(64).
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column 'OPERATION' atposition 13, found 'OBJECT_INSTANCE_BEGIN'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column 'OPERATION' atposition 13 to have type varchar(16), found type bigint(20) unsigned.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column 'NUMBER_OF_BYTES' atposition 14, found 'NESTING_EVENT_ID'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column 'FLAGS' at position15, found 'NESTING_EVENT_TYPE'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history: expected column 'FLAGS' at position 15to have type int(10), found type enum('STATEMENT','STAGE','WAIT').
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column 'THREAD_ID' atposition 0 to have type int(11), found type bigint(20) unsigned.
:38 [ERROR] Incorrect definition of table performance_schema.events_waits_history_long:expected column 'EVENT_NAME' at position 2, found 'END_EVENT_ID'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column 'EVENT_NAME' atposition 2 to have type varchar(128), found type bigint(20) unsigned.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column 'SOURCE' atposition 3, found 'EVENT_NAME'.
:38 [ERROR] Incorrect definition of table performance_schema.events_waits_history_long:expected column 'SOURCE' at position 3 to have type varchar(64), found typevarchar(128).
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column 'TIMER_START' atposition 4, found 'SOURCE'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column 'TIMER_START' atposition 4 to have type bigint(20), found type varchar(64).
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column 'TIMER_END' atposition 5, found 'TIMER_START'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column 'TIMER_WAIT' atposition 6, found 'TIMER_END'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column 'SPINS' atposition 7, found 'TIMER_WAIT'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column 'SPINS' atposition 7 to have type int(10), found type bigint(20) unsigned.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column 'OBJECT_SCHEMA'at position 8, found 'SPINS'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column 'OBJECT_SCHEMA'at position 8 to have type varchar(64), found type int(10) unsigned.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column 'OBJECT_NAME' atposition 9, found 'OBJECT_SCHEMA'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column 'OBJECT_NAME' atposition 9 to have type varchar(512), found type varchar(64).
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column 'OBJECT_TYPE' atposition 10, found 'OBJECT_NAME'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column 'OBJECT_TYPE' atposition 10 to have type varchar(64), found type varchar(512).
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column'OBJECT_INSTANCE_BEGIN' at position 11, found 'INDEX_NAME'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column 'OBJECT_INSTANCE_BEGIN'at position 11 to have type bigint(20), found type varchar(64).
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column'NESTING_EVENT_ID' at position 12, found 'OBJECT_TYPE'.
:48:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column'NESTING_EVENT_ID' at position 12 to have type bigint(20), found typevarchar(64).
:38 [ERROR] Incorrect definition of table performance_schema.events_waits_history_long:expected column 'OPERATION' at position 13, found 'OBJECT_INSTANCE_BEGIN'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column 'OPERATION' atposition 13 to have type varchar(16), found type bigint(20) unsigned.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column 'NUMBER_OF_BYTES'at position 14, found 'NESTING_EVENT_ID'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column 'FLAGS' atposition 15, found 'NESTING_EVENT_TYPE'.
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_history_long: expected column 'FLAGS' atposition 15 to have type int(10), found type enum('STATEMENT','STAGE','WAIT').
:38 [ERROR] Incorrect definition of tableperformance_schema.threads: expected column 'THREAD_ID' at position 0 to havetype int(11), found type bigint(20) unsigned.
:38 [ERROR] Incorrect definition of tableperformance_schema.threads: expected column 'PROCESSLIST_ID' at position 1,found 'NAME'.
:38 [ERROR] Incorrect definition of tableperformance_schema.threads: expected column 'PROCESSLIST_ID' at position 1 tohave type int(11), found type varchar(128).
:38 [ERROR] Incorrect definition of tableperformance_schema.threads: expected column 'NAME' at position 2, found 'TYPE'.
:38 [ERROR] Incorrect definition of tableperformance_schema.threads: expected column 'NAME' at position 2 to have typevarchar(128), found type varchar(10).
:38 [ERROR] Incorrect definition of tableperformance_schema.events_waits_summary_by_thread_by_event_name: expected column'THREAD_ID' at position 0 to have type int(11), found type bigint(20) unsigned.
:38 [ERROR] Incorrect definition of tableperformance_schema.file_summary_by_event_name: expected column 'COUNT_READ' atposition 1, found 'COUNT_STAR'.
:38 [ERROR] Incorrect definition of tableperformance_schema.file_summary_by_event_name: expected column 'COUNT_WRITE' atposition 2, found 'SUM_TIMER_WAIT'.
:38 [ERROR] Incorrect definition of tableperformance_schema.file_summary_by_event_name: expected column'SUM_NUMBER_OF_BYTES_READ' at position 3, found 'MIN_TIMER_WAIT'.
:38 [ERROR] Incorrect definition of tableperformance_schema.file_summary_by_event_name: expected column'SUM_NUMBER_OF_BYTES_WRITE' at position 4, found 'AVG_TIMER_WAIT'.
:38 [ERROR] Incorrect definition of tableperformance_schema.file_summary_by_instance: expected column 'COUNT_READ' atposition 2, found 'OBJECT_INSTANCE_BEGIN'.
:38 [ERROR] Incorrect definition of table performance_schema.file_summary_by_instance:expected column 'COUNT_WRITE' at position 3, found 'COUNT_STAR'.
:38 [ERROR] Incorrect definition of tableperformance_schema.file_summary_by_instance: expected column'SUM_NUMBER_OF_BYTES_READ' at position 4, found 'SUM_TIMER_WAIT'.
:38 [ERROR] Incorrect definition of tableperformance_schema.file_summary_by_instance: expected column'SUM_NUMBER_OF_BYTES_WRITE' at position 5, found 'MIN_TIMER_WAIT'.
:38 [ERROR] Incorrect definition of tableperformance_schema.mutex_instances: expected column 'LOCKED_BY_THREAD_ID' atposition 2 to have type int(11), found type bigint(20) unsigned.
:38 [ERROR] Incorrect definition of tableperformance_schema.rwlock_instances: expected column'WRITE_LOCKED_BY_THREAD_ID' at position 2 to have type int(11), found typebigint(20) unsigned.
:38 [ERROR] mysqld: Incorrect information in file:'./mysql/event.frm'
:38 [ERROR] Cannot open mysql.event
:38 [ERROR] Event Scheduler: An error occurred when initializingsystem tables. Disabling the Event Scheduler.
:38 [Note] WSREP: Read nil XID from storage engines, skippingposition init
Delete all the table files.
Other attempts:
Upgrading the database tables also fails:
# /usr/bin/mysql_upgrade -u root
The final solution:
After removing the packages with rpm, manually clean up the leftover mysql files:
[root@localhost var]# find ./ -name mysql
./lib/mysql
./lib/mysql/mysql
[root@localhost var]# rm ./lib/mysql/ -rf
[root@localhost usr]# find ./ -name mysql | xargs rm -rf
Try installing again:
yum install mariadb-galera-server galera
# systemctl enable mariadb.service
# systemctl start mariadb.service
OK, finally solved.
The leftovers must be cleaned up completely by hand!
Also clean out the leftover my.cnf file and my.cnf.d directory.
4.2 After the database fix, rerun the automated packstack install and check for environment differences
After cleaning up, reinstall with packstack; otherwise the manually installed pieces will keep causing errors.
It still fails, so instead install everything one component at a time with yum.
5. Installing OpenStack module by module: following the official documentation, build the four-node environment, installing controller1 and compute1 first
Hostname    | IP            | Role
controller1 | 10.192.44.148 | Controller1 (network1)
controller2 | 10.192.44.149 | Controller2 (network2)
compute1    | 10.192.44.150 | Compute1
compute2    | 10.192.44.151 | Compute2

Install the following two nodes first:

Hostname    | IP            | Role
controller1 | 10.192.44.148 | Controller1 (network1)
compute1    | 10.192.44.150 | Compute1
5.1 Base environment
Set the hostname and hosts entries:
10.1.14.235 .cn
10.192.44.148 controller1
10.192.44.150 compute1
5.1.1 Database installation
# yum install mariadb mariadb-server MySQL-python
Edit the MariaDB configuration file:
bind-address = 10.192.44.148
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
Start the database:
# systemctl enable mariadb.service
# systemctl start mariadb.service
Set the root password:
# mysql_secure_installation
The root password is 1; answer Y to everything else.
Check the database:
[root@controller1 ~]# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 11
Server version: 5.5.44-MariaDB MariaDB Server
Copyright (c) , , MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> Ctrl-C -- exit!
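Since MySQL-python is installed above, the same check can be scripted; a minimal sketch, assuming the root password '1' that was just set:

import MySQLdb  # provided by the MySQL-python package installed above

# Hypothetical quick check: connect as root and print the server version.
conn = MySQLdb.connect(host='10.192.44.148', user='root', passwd='1')
cur = conn.cursor()
cur.execute('SELECT VERSION()')
print(cur.fetchone()[0])
conn.close()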
5.1.2 Installing rabbitmq
yum install rabbitmq-server
Start rabbitmq-server:
[root@controller1 7]# systemctl enable rabbitmq-server.service
[root@controller1 7]# systemctl start rabbitmq-server.service
Add the openstack user:
# rabbitmqctl add_user openstack 1    # the password here is 1
Set its permissions:
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
systemctl restart rabbitmq-server.service
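kombu is pulled in as a dependency of the OpenStack packages; assuming it is importable, a small hedged check that the openstack/1 credentials are accepted by the broker:

from kombu import Connection

# Hedged check that the 'openstack' user with password '1' can reach the broker.
conn = Connection('amqp://openstack:1@10.192.44.148:5672//')
try:
    conn.connect()
    print('RabbitMQ reachable, credentials accepted')
finally:
    conn.release()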
5.2 Installing keystone
5.2.1 Create the database (password: 1)
MariaDB [(none)]> CREATE DATABASE keystone;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '1';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '1';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> quit
Generate a random admin token:
[root@controller1 7]# openssl rand -hex 10
5.2.2 Install keystone
yum install openstack-keystone httpd mod_wsgi python-openstackclient memcached python-memcached
Start memcached:
[root@controller1 7]# systemctl enable memcached.service
ln -s '/usr/lib/systemd/system/memcached.service' '/etc/systemd/system/multi-user.target.wants/memcached.service'
[root@controller1 7]# systemctl start memcached.service
5.2.3 Edit the keystone configuration
Copy the configuration generated by the packstack install and modify it.
Generate a random admin token:
[root@controller1 7]# openssl rand -hex 10
Modify and check the following fields:
admin_token = 5aa78ddcb
public_port=5000
admin_bind_host=0.0.0.0
public_bind_host=0.0.0.0
admin_port=35357
connection =mysql://keystone:1@10.192.44.148/keystone
rabbit_host = 10.192.44.148
rabbit_port = 5672
rabbit_hosts ="10.192.44.148:5672"
Sync the database:
su -s /bin/sh -c "keystone-manage db_sync" keystone
5.2.4 Configure httpd
Copy the httpd configuration over from the packstack install.
Change the following:
[root@controller1 httpd]#grep node ./ -r
./conf/httpd.conf:ServerName"node1"
./conf.d/15-horizon_vhost.conf: ServerName node1
./conf.d/15-horizon_vhost.conf: ServerAlias node1
./conf.d/10-keystone_wsgi_admin.conf: ServerName node1
./conf.d/10-keystone_wsgi_main.conf: ServerName node1
[root@controller1 httpd]#grep controller1 ./ -r
./conf/httpd.conf:ServerName"controller1"
./conf.d/15-horizon_vhost.conf: ServerName controller1
./conf.d/15-horizon_vhost.conf: ServerAlias controller1
./conf.d/10-keystone_wsgi_admin.conf: ServerName controller1
./conf.d/10-keystone_wsgi_main.conf: ServerName controller1
[root@controller1 httpd]#
[root@controller1 httpd]# grep 192 ./ -r
./conf.d/15-horizon_vhost.conf: ServerAlias 192.168.129.131
ServerAlias 10.192.44.148
Create the keystone WSGI site:
mkdir -p /var/www/cgi-bin/keystone
Copy the files over from the packstack environment:
[root@controller1 keystone]# chown -R keystone:keystone /var/www/cgi-bin/keystone
[root@controller1 keystone]# chmod 755 /var/www/cgi-bin/keystone/*
Start the httpd service:
# systemctl enable httpd.service
# systemctl start httpd.service
In 15-default.conf:
ServerName controller1
httpd restarts successfully,
but logging in does not work yet.
Install horizon first, then verify and troubleshoot.
5.2.5 Create the service and endpoint
[root@controller1 ~]# export OS_TOKEN=5aa78ddcb
[root@controller1 ~]# export OS_URL=http://10.192.44.148:3
[root@controller1 ~]# openstack service list
Create the service:
[root@controller1 ~]# openstack service create --name keystone --description "OpenStack Identity" identity
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Identity |
| enabled | True |
| id | 69ccf6b2be |
| name | keystone |
| type | identity |
+-------------+----------------------------------+
Create the endpoint:
openstack endpoint create \
  --publicurl http://controller1: \
  --internalurl http://controller1: \
  --adminurl http://controller1:3 \
  --region RegionOne \
  identity
# openstack endpoint create --publicurl http://controller1: --internalurl http://controller1: --adminurl http://controller1:3 --region RegionOne identity
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| adminurl | http://controller1:3 |
| id | 6df505cf8dc42d64879c69 |
| internalurl | http://controller1: |
| publicurl | http://controller1: |
| region | RegionOne |
| service_id | 69ccf6b2be |
| service_name | keystone |
| service_type | identity |
+--------------+----------------------------------+
5.2.6 Create the project, user, and role
[root@controller1 ~]# openstack project create --description "Admin Project" admin
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Admin Project |
| enabled | True |
| id | 617e98e151b245d081203adcbb0ce7a4 |
| name | admin |
+-------------+----------------------------------+
[root@controller1 ~]# openstack user create --password-prompt admin
User Password:
Repeat User Password:
+----------+----------------------------------+
| Field| Value|
+----------+----------------------------------+
| email| None|
| enabled| True |
| id| cfcade990b52ad341a06f0 |
| name| admin|
| username | admin |
+----------+----------------------------------+
[root@controller1 ~]# openstack role create admin
+-------+----------------------------------+
| Field | Value |
+-------+----------------------------------+
| id| 6c89e70e3b274c44b068dbd6aef08bb2 |
| name| admin|
+-------+----------------------------------+
[root@controller1 ~]#
[root@controller1 ~]# openstack role add --project admin --user admin admin
+-------+----------------------------------+
| Field | Value |
+-------+----------------------------------+
| id| 6c89e70e3b274c44b068dbd6aef08bb2 |
| name| admin|
+-------+----------------------------------+
[root@controller1 ~]# openstack project create --description "Service Project" service
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Service Project |
| enabled | True |
| id | 165f6edf748d4bff957beada1f2a728e |
| name | service |
+-------------+----------------------------------+
5.2.7 Verifying keystone
unset OS_TOKEN OS_URL
[root@controller1 ~]# openstack --os-auth-url http://controller1:35357 --os-project-name admin --os-username admin --os-auth-type password token issue
+------------+----------------------------------+
| Field| Value|
+------------+----------------------------------+
| expires| T03:27:46Z|
| id| 2b1325bdd1c643ad9b6ceed17e663913 |
| project_id |617e98e151b245d081203adcbb0ce7a4 |
| user_id| cfcade990b52ad341a06f0 |
+------------+----------------------------------+
# openstack --os-auth-url http://controller1:35357 --os-project-domain-id default --os-user-domain-id default --os-project-name admin --os-username admin --os-auth-type password token issue
+------------+----------------------------------+
| Field| Value|
+------------+----------------------------------+
| expires| T03:30:03.368364Z|
| id| 5c8f0e1ac4fdff3b232d8 |
| project_id |617e98e151b245d081203adcbb0ce7a4 |
| user_id| cfcade990b52ad341a06f0 |
+------------+----------------------------------+
Create an environment-variable script:
[root@controller1 ~(keystone_admin)]# cat admin_keystone
unset OS_SERVICE_TOKEN OS_TOKEN OS_URL
export OS_USERNAME=admin
export OS_PASSWORD=1
export OS_AUTH_URL=http://10.192.44.148:3
export PS1='[\u@\h \W(keystone_admin)]\$ '
export OS_TENANT_NAME=admin
export OS_REGION_NAME=RegionOne
[root@controller1 ~(keystone_admin)]# openstack user list
+----------------------------------+-------+
| ID | Name |
+----------------------------------+-------+
| cfcade990b52ad341a06f0 | admin|
+----------------------------------+-------+
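The same verification can be scripted with python-keystoneclient; a sketch assuming the v2.0 API and the standard :5000/v2.0 public endpoint (the port and path are not taken from this article, whose URLs are truncated):

from keystoneclient.v2_0 import client as ks_client

keystone = ks_client.Client(username='admin',
                            password='1',
                            tenant_name='admin',
                            auth_url='http://10.192.44.148:5000/v2.0')
# Programmatic equivalent of `openstack user list`.
for user in keystone.users.list():
    print(user.name)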
5.3 Installing horizon
5.3.1 Install horizon
yum install openstack-dashboard httpd mod_wsgi memcached python-memcached
5.3.2 Edit the horizon configuration
Copy /etc/openstack-dashboard over from the packstack install and change the following:
./local_settings:OPENSTACK_KEYSTONE_URL = http://192.168.129.131:
OPENSTACK_KEYSTONE_URL ="http://10.192.44.148:"
Nothing else needs changing.
setsebool -P httpd_can_network_connect on
# chown -R apache:apache /usr/share/openstack-dashboard/static
Restart httpd:
# systemctl enable httpd.service memcached.service
# systemctl restart httpd.service memcached.service
5.3.3 Login verification
Internal Server Error
The server encountered an internal error or misconfiguration and was unable to complete your request.
Please contact the server administrator at [no address given] to inform them of the time this error occurred, and the actions you performed just before this error.
More information about this error may be available in the server error log.
This problem was seen before; see PART 3.
The owner of /var/log/horizon/horizon.log is wrong:
[root@lxp-node2 horizon(keystone_admin)]# ls -l
-rw-r--r-- 1 root root 0 May 20 23:44 horizon.log
[root@lxp-node1 horizon(keystone_admin)]# ls -l
-rw-r-----. 1 apache apache 316 May 18 19:35 horizon.log
# chown apache:apache horizon.log
OK, the dashboard login now works.
The other components are not installed yet,
so errors after logging in are expected.
5.4 Installing glance
5.4.1 Create the database
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '1';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '1';
[root@controller1 ~(keystone_admin)]# openstack user create --password-prompt glance
User Password:    (all passwords are 1)
Repeat User Password:
+----------+----------------------------------+
| Field | Value |
+----------+----------------------------------+
| email | None |
| enabled | True |
| id | 9b9b7d340f5c47fa8ead236b |
| name | glance |
| username | glance |
+----------+----------------------------------+
# openstack role add --project service --user glance admin
+-------+----------------------------------+
| Field | Value |
+-------+----------------------------------+
| id| 6c89e70e3b274c44b068dbd6aef08bb2 |
| name| admin|
+-------+----------------------------------+
# openstack service create --name glance --description "OpenStack Image service" image
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Imageservice |
| enabled | True |
| id | a0ccbb2f |
| name | glance |
| type | image |
+-------------+----------------------------------+
openstack endpoint create \
  --publicurl http://10.192.44.148:9292 \
  --internalurl http://10.192.44.148:9292 \
  --adminurl http://10.192.44.148:9292 \
  --region RegionOne \
  image
# openstack endpoint create --publicurl http://10.192.44.148:9292 --internalurl http://10.192.44.148:9292 --adminurl http://10.192.44.148:9292 --region RegionOne image
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| adminurl | http://10.192.44.148:9292 |
| id | 49a032e19ff |
| internalurl | http://10.192.44.148:9292 |
| publicurl | http://10.192.44.148:9292 |
| region | RegionOne |
| service_id | a0ccbb2f |
| service_name | glance |
| service_type | image |
+--------------+----------------------------------+
5.4.2 Install glance
yum install openstack-glance python-glance python-glanceclient
5.4.3 Configure glance
Copy the glance configuration over from the packstack install and modify it:
[root@controller1 glance(keystone_admin)]# grep 192 ./ -r
./glance-registry.conf:connection=mysql://glance:b859cde598ec474f@192.168.129.131/glance
./glance-registry.conf:auth_uri=http://192.168.129.131:
./glance-registry.conf:identity_uri=http://192.168.129.131:35357
./glance-api.conf:connection=mysql://glance:b859cde598ec474f@192.168.129.131/glance
./glance-api.conf:auth_uri=http://192.168.129.131:
./glance-api.conf:identity_uri=http://192.168.129.131:35357
After the changes:
[root@controller1 glance(keystone_admin)]# grep 192 ./ -r
./glance-registry.conf:connection=mysql://glance:1@10.192.44.148/glance
./glance-registry.conf:auth_uri=http://10.192.44.148:
./glance-registry.conf:identity_uri=http://10.192.44.148:35357
./glance-api.conf:connection=mysql://glance:1@10.192.44.148/glance
./glance-api.conf:auth_uri=http://10.192.44.148:
./glance-api.conf:identity_uri=http://10.192.44.148:353
Sync the database:
su -s /bin/sh -c "glance-manage db_sync" glance
Restart the services:
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service
5.4.4 Verify glance by uploading an image
echo "export OS_IMAGE_API_VERSION=2" | tee -a ./admin_keystone
[root@controller1 ~(keystone_admin)]# cat admin_keystone
unset OS_SERVICE_TOKEN OS_TOKEN OS_URL
export OS_USERNAME=admin
export OS_PASSWORD=1
export OS_AUTH_URL=http://10.192.44.148:3
export PS1='[\u@\h \W(keystone_admin)]\$ '
export OS_TENANT_NAME=admin
export OS_REGION_NAME=RegionOne
export OS_IMAGE_API_VERSION=2
[root@controller1 ~(keystone_admin)]# . admin_keystone
The network components are not installed yet, so the image cannot be uploaded for now.
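For when the upload becomes possible, a minimal python-glanceclient sketch; the endpoint port, the image file path, and the token handling below are assumptions, reusing the admin credentials from the keystone section:

from keystoneclient.v2_0 import client as ks_client
import glanceclient

keystone = ks_client.Client(username='admin', password='1', tenant_name='admin',
                            auth_url='http://10.192.44.148:5000/v2.0')
glance = glanceclient.Client('2', endpoint='http://10.192.44.148:9292',
                             token=keystone.auth_token)

# Hypothetical upload of a local qcow2 file through the v2 image API.
image = glance.images.create(name='cirros', disk_format='qcow2',
                             container_format='bare')
glance.images.upload(image.id, open('/root/cirros.qcow2', 'rb'))
print(image.id)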
5.5 Installing nova: controller node
5.5.1 Create the database
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '1';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '1';
Create the user (all passwords are 1):
# openstack user create --password-prompt nova
User Password:
Repeat User Password:
+----------+----------------------------------+
| Field| Value|
+----------+----------------------------------+
| email| None|
| enabled| True|
| id| f4c238ef96c66dc9d7ba6 |
| name| nova|
| username | nova |
+----------+----------------------------------+
# openstack role add --project service --user nova admin
+-------+----------------------------------+
| Field | Value |
+-------+----------------------------------+
| id| 6c89e70e3b274c44b068dbd6aef08bb2 |
| name| admin|
+-------+----------------------------------+
# openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Compute |
| enabled| True |
| id | f82dbb5b6be918b826f0 |
| name | nova |
| type | compute |
+-------------+----------------------------------+
Create the endpoint:
openstack endpoint create \
  --publicurl http://10.192.44.148:8774/v2/%\(tenant_id\)s \
  --internalurl http://10.192.44.148:8774/v2/%\(tenant_id\)s \
  --adminurl http://10.192.44.148:8774/v2/%\(tenant_id\)s \
  --region RegionOne \
  compute
# openstack endpoint create --publicurl http://10.192.44.148:8774/v2/%\(tenant_id\)s --internalurl http://10.192.44.148:8774/v2/%\(tenant_id\)s --adminurl http://10.192.44.148:8774/v2/%\(tenant_id\)s --region RegionOne compute
+--------------+--------------------------------------------+
| Field | Value |
+--------------+--------------------------------------------+
| adminurl |http://10.192.44.148:8774/v2/%(tenant_id)s |
| id | c34d670ee15b47bdac4ef2 |
| internalurl | http://10.192.44.148:8774/v2/%(tenant_id)s|
| publicurl |http://10.192.44.148:8774/v2/%(tenant_id)s |
| region | RegionOne |
| service_id | f82dbb5b6be918b826f0 |
| service_name | nova |
| service_type | compute |
+--------------+--------------------------------------------+
5.5.2 Install the controller-node packages
yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient
5.5.3 Configuration: based on the Yingshi Cloud (萤石云) config, the official docs, and the packstack config
The nova.conf generated by packstack has far too many options; start from the Yingshi Cloud configuration and then check the handful of settings from the official guide:
[root@controller1 nova(keystone_admin)]#
[root@controller1 nova(keystone_admin)]#cat nova.conf
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.192.44.148
vncserver_listen = 10.192.44.148
vncserver_proxyclient_address = 10.192.44.148
memcached_servers = controller1:11211
[database]
connection =mysql://nova:1@10.192.44.148/nova
[oslo_messaging_rabbit]
rabbit_hosts=10.192.44.148:5672
rabbit_userid = openstack
rabbit_password = 1
[keystone_authtoken]
auth_uri = http://10.192.44.148:5000
auth_url = http://10.192.44.148:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = 1
host = 10.192.44.148
[oslo_concurrency]
lock_path = /var/lock/nova
[root@controller1 nova(keystone_admin)]#
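Tying back to the oslo.config article above, a hedged way to confirm that this nova.conf parses and that the values landed where expected; only the options of interest are registered here:

from oslo_config import cfg   # on older releases: from oslo.config import cfg

CONF = cfg.ConfigOpts()
CONF.register_opts([cfg.StrOpt('my_ip'), cfg.StrOpt('vncserver_listen')])
CONF.register_opt(cfg.StrOpt('connection'), group='database')

# Parse the file only, with no CLI arguments.
CONF(args=[], default_config_files=['/etc/nova/nova.conf'])
print('%s %s %s' % (CONF.my_ip, CONF.vncserver_listen, CONF.database.connection))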
Sync the database:
su -s /bin/sh -c "nova-manage db sync" nova
Start the services:
# systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
nova-api fails to start; the other services are OK.
[root@controller1 nova(keystone_admin)]# systemctl restart openstack-nova-cert.service
[root@controller1 nova(keystone_admin)]# systemctl restart openstack-nova-consoleauth.service
[root@controller1 nova(keystone_admin)]# systemctl restart openstack-nova-scheduler.service
[root@controller1 nova(keystone_admin)]# systemctl restart openstack-nova-conductor.service
[root@controller1 nova(keystone_admin)]# systemctl restart openstack-nova-novncproxy.service
[root@controller1 nova(keystone_admin)]#
Troubleshooting why nova-api fails to start:
13:46:00.431 21599 ERROR nova OSError: [Errno 13] Permission denied: '/var/lock/nova'
Try creating the directory manually:
[root@controller1 lock(keystone_admin)]# mkdir nova
[root@controller1 lock(keystone_admin)]# chmod 777 nova
OK, the restart now succeeds.
5.5.4 Verify with nova service-list
[root@controller1 ~(keystone_admin)]# nova service-list
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary | Host | Zone | Status| State | Updated_at| Disabled Reason |
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
| 1| nova-cert | controller1 |internal | enabled | up |T05:49:02.000000 | -|
| 2| nova-consoleauth | controller1 | internal | enabled | up | T05:48:57.000000 | - |
| 3| nova-conductor | controller1 |internal | enabled | up |T05:48:59.000000 | -|
| 4| nova-scheduler | controller1 |internal | enabled | up |T05:49:03.000000 | -|
+----+------------------+-------------+----------+---------+-------+----------------------------+-----------------+
OK, all of nova's controller-node services are up and running.
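The same check can be done from Python with python-novaclient; a sketch assuming the v2 client and the standard :5000/v2.0 auth URL, which is not spelled out in this article:

from novaclient import client as nova_client

# Hypothetical programmatic equivalent of `nova service-list`.
nova = nova_client.Client('2', 'admin', '1', 'admin',
                          'http://10.192.44.148:5000/v2.0')
for svc in nova.services.list():
    print('%-18s %-12s %s' % (svc.binary, svc.host, svc.state))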
5.6 Installing nova: compute node (compute1) [abandoned: the libvirt upgrade causes problems]
5.6.1 Install
# yum install openstack-nova-compute sysfsutils
5.6.2 Configure
The [neutron] section is kept for now and will be tidied up later.
---------------------------------------------------------------------------------------------------------------------------------------------------
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.192.44.150
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 10.192.44.150
novncproxy_base_url =http://10.192.44.148:6080/vnc_auto.html
memcached_servers = controller1:11211
[database]
connection =mysql://nova:1@10.192.44.148/nova
[oslo_messaging_rabbit]
rabbit_host=10.192.44.148
rabbit_hosts=10.192.44.148:5672
rabbit_userid = openstack
rabbit_password = 1
[keystone_authtoken]
auth_uri = http://10.192.44.148:5000
auth_url = http://10.192.44.148:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = 1
host = 10.192.44.148
host=10.192.44.148
[oslo_concurrency]
lock_path = /var/lock/nova
virt_type=qemu
---------------------------------------------------------------------------------------------------------------------------------------------------
egrep -c '(vmx|svm)' /proc/cpuinfo
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
Startup fails; troubleshooting:
oslo_config.cfg.ConfigFilesPermissionDeniedError: Failed to open some config files: /etc/nova/nova.conf
Fix the ownership of nova.conf:
-rw-r----- 1 root root 805 May 25 15:32 nova.conf
# chown root:nova nova.conf
Restart again:
OK, it starts successfully.
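A small standard-library sketch of the kind of pre-check that would have caught this ConfigFilesPermissionDeniedError before the service was started; the nova user and group names are the package defaults:

import grp
import os
import pwd
import stat

conf = '/etc/nova/nova.conf'
st = os.stat(conf)
owner = pwd.getpwuid(st.st_uid).pw_name
group = grp.getgrgid(st.st_gid).gr_name

# nova-compute runs as the 'nova' user, so the file must be readable by it;
# root:root with mode 640 is exactly what broke the service here.
readable_by_nova = (owner == 'nova' or
                    (group == 'nova' and st.st_mode & stat.S_IRGRP) or
                    st.st_mode & stat.S_IROTH)
if not readable_by_nova:
    print('%s is %s:%s mode %o; the nova user cannot read it' %
          (conf, owner, group, stat.S_IMODE(st.st_mode)))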
5.6.3 Verify: nova service-list
Why does nova-compute not show up?
With a complete packstack install, nova-compute is visible here.
Noting this for now; it will be investigated again after neutron is set up.
5.7 Installing neutron (controller node)
5.7.1 Create the database
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '1';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '1';
# openstack user create --password-prompt neutron
User Password:
Repeat User Password:
+----------+----------------------------------+
| Field| Value|
+----------+----------------------------------+
| email| None|
| enabled| True|
| id|2398cfe405acdfba36b64b4 |
| name| neutron|
| username | neutron |
+----------+----------------------------------+
# openstack role add --project service --user neutron admin
+-------+----------------------------------+
| Field | Value |
+-------+----------------------------------+
| id| 6c89e70e3b274c44b068dbd6aef08bb2 |
| name| admin|
+-------+----------------------------------+
# openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | a3f4980ffbca7d3a2b01 |
| name | neutron |
| type | network |
+-------------+----------------------------------+
Create the endpoint:
openstack endpoint create \
  --publicurl http://10.192.44.148:9696 \
  --adminurl http://10.192.44.148:9696 \
  --internalurl http://10.192.44.148:9696 \
  --region RegionOne \
  network
# openstack endpoint create --publicurl http://10.192.44.148:9696 --adminurl http://10.192.44.148:9696 --internalurl http://10.192.44.148:9696 --region RegionOne network
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| adminurl | http://10.192.44.148:9696 |
| id | 63fa679e443aff17387b9f |
| internalurl | http://10.192.44.148:9696 |
| publicurl | http://10.192.44.148:9696 |
| region | RegionOne |
| service_id | a3f4980ffbca7d3a2b01 |
| service_name | neutron |
| service_type | network |
+--------------+----------------------------------+
5.7.2 Install the networking components (controller node)
yum install openstack-neutron openstack-neutron-ml2 python-neutronclient which
5.7.3 Configure neutron
Mainly based on the Yingshi Cloud configuration; the packstack config carries many options that are not needed and would be hard to maintain later.
neutron.conf:
[root@controller1 neutron(keystone_admin)]# cat neutron.conf
rpc_backend = rabbit
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://10.192.44.148:8774/v2
[database]
connection =mysql://neutron:1@10.192.44.148/neutron
[oslo_messaging_rabbit]
rabbit_hosts = 10.192.44.148:5672
rabbit_userid = openstack
rabbit_password = 1
[keystone_authtoken]
auth_uri = http://10.192.44.148:5000
auth_url = http://10.192.44.148:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = 1
[nova]
auth_url = http://10.192.44.148:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = 1
[root@controller1 neutron(keystone_admin)]#
ml2_conf.ini
/etc/neutron/plugins/ml2/ml2_conf.ini:
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
Create the symlink:
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
nova.conf (also change this on the compute node):
network_api_class =nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver =nova.network.linux_net.OVSInterfaceDriver
firewall_driver =nova.virt.firewall.NoopFirewallDriver
url = http://10.192.44.148:9696
auth_strategy = keystone
admin_auth_url =http://10.192.44.148:3
admin_tenant_name = service
admin_username = neutron
admin_password = 1
5.7.4 Sync the database and start the services
Sync the database:
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service
Restart nova-compute:
systemctl start libvirtd.service openstack-nova-compute.service
Start neutron-server:
# systemctl enable neutron-server.service
# systemctl start neutron-server.service
5.7.5 Verify
[root@controller1 ml2(keystone_admin)]# neutron ext-list
+-----------------------+-----------------------------------------------+
| alias | name |
+-----------------------+-----------------------------------------------+
| flavors | Neutron Service Flavors |
| security-group | security-group |
| dns-integration | DNS Integration |
| l3_agent_scheduler | L3 Agent Scheduler |
| net-mtu | Network MTU |
| ext-gw-mode | Neutron L3 Configurable externalgateway mode |
| binding | Port Binding |
| provider | Provider Network |
| agent | agent |
| quotas | Quota management support |
| subnet_allocation | Subnet Allocation |
| dhcp_agent_scheduler | DHCP Agent Scheduler |
| rbac-policies | RBAC Policies |
| l3-ha | HA Router extension |
| multi-provider | Multi Provider Network |
| external-net | Neutron external network |
| router | Neutron L3 Router |
| allowed-address-pairs | Allowed AddressPairs |
| extraroute | Neutron Extra Route |
| extra
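The extension listing can also be fetched from Python with python-neutronclient; a sketch, again assuming the standard :5000/v2.0 keystone endpoint and the admin credentials used throughout:

from neutronclient.v2_0 import client as neutron_client

neutron = neutron_client.Client(username='admin',
                                password='1',
                                tenant_name='admin',
                                auth_url='http://10.192.44.148:5000/v2.0')
# Programmatic equivalent of `neutron ext-list`.
for ext in neutron.list_extensions()['extensions']:
    print('%-22s %s' % (ext['alias'], ext['name']))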
