OpenStack really is a painful thing to set up. Fortunately there are automated deployment tools that make it easy to deploy and use; but if you are learning, deploy by hand the first time, because a manual deployment makes OpenStack's workflow and the relationships between its components much clearer.



OpenStack Mitaka Installation and Configuration Tutorial



1. Lab environment:

OS: centos7.2-minimal

Network: management network on eth0, VM instance network on eth1

controller: 192.168.22.202 (eth0), 192.168.30.202 (eth1)

compute01: 192.168.22.203 (eth0), 192.168.30.203 (eth1)


     
I followed the official OpenStack manual. Installing keystone, glance and nova went smoothly, but neutron is where the pain started: a quick search for articles about neutron turned up nothing but complaints about how complicated it is, which is a real blow to a beginner. (No way around it; keep going step by step.) I failed many times along the way, and after two weeks finally got it all working.

2. Environment preparation:

1. On all nodes: disable Firewalls, NetworkManager and SELinux, and set each hostname to the node's name.

2. Install the chrony time synchronization service:

# yum install chrony -y

3. On the controller node, configure: allow 192.168.21.0/22

4. On the compute node, sync time from the controller: server controller iburst

5. Start the service and enable it at boot:

# systemctl enable chronyd.service

# systemctl start chronyd.service
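The lines in steps 3 and 4 above go into /etc/chrony.conf on the respective nodes. As a sketch (the upstream pool server name is an assumption; CentOS ships with similar defaults), the relevant fragments look like:

```ini
# /etc/chrony.conf on the controller: keep time from an upstream pool
# and serve it to the other nodes in this deployment's subnets
server 0.centos.pool.ntp.org iburst
allow 192.168.21.0/22

# /etc/chrony.conf on the compute node: follow the controller only
server controller iburst
```

After restarting chronyd, `chronyc sources` on the compute node should list the controller as a source.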

6. Prepare the Aliyun and EPEL repositories:

# yum install -y centos-release-openstack-mitaka

# yum install https://repos.fedorapeople.org/repos/openstack/openstack-mitaka/rdo-release-mitaka-6.noarch.rpm -y

# yum install python-openstackclient -y    #### the required OpenStack client ####

# yum install openstack-selinux -y

# yum upgrade

# reboot

7. Install the database (MariaDB)    ### controller ###

# yum install mariadb mariadb-server python2-PyMySQL -y

###### Database configuration ######

### Create and edit /etc/my.cnf.d/openstack.cnf:

[mysqld]

default-storage-engine = innodb

innodb_file_per_table

max_connections = 4096

collation-server = utf8_general_ci

character-set-server = utf8

###### Start the service ######

# systemctl enable mariadb.service

# systemctl start mariadb.service

###### Secure the database ######

# mysql_secure_installation

### Check that the port is listening: netstat -lnp | grep 3306 ###

8. Install RabbitMQ (uses port 5672)    ## controller ##

# yum install rabbitmq-server -y    ### install ###

# systemctl enable rabbitmq-server.service    ### enable at boot ###

# systemctl start rabbitmq-server.service    ### start the service ###

# rabbitmqctl add_user openstack zx123456    ### add the openstack user with password zx123456 ###

# rabbitmqctl set_permissions openstack ".*" ".*" ".*"    ### grant the new user full permissions ###

9. Install memcached (uses port 11211)    ## controller ##

# yum install memcached python-memcached -y    ### install ###

# systemctl enable memcached.service    ### enable at boot ###

# systemctl start memcached.service    ### start the service ###

10. Install keystone    ## controller ##

###### Log in to the database and create the keystone database:

# mysql -uroot -pzx123456

CREATE DATABASE keystone;

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'zx123456';

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'zx123456';

### set up the authorized user and password ###
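Every service below (keystone, glance, nova, neutron) repeats the same create-database-and-grant dance. A small helper can print those statements for review; this is a sketch of my own (the function name and the piping idea are not from the official docs), using the password from this guide:

```shell
# grant_sql prints the CREATE/GRANT statements for one OpenStack service
# database; pipe the output into "mysql -uroot -p" to apply it.
grant_sql() {
  db=$1; user=$2; pass=$3
  echo "CREATE DATABASE IF NOT EXISTS $db;"
  echo "GRANT ALL PRIVILEGES ON $db.* TO '$user'@'localhost' IDENTIFIED BY '$pass';"
  echo "GRANT ALL PRIVILEGES ON $db.* TO '$user'@'%' IDENTIFIED BY '$pass';"
}

# the keystone database from this step:
grant_sql keystone keystone zx123456
```

Later sections can reuse it, e.g. `grant_sql glance glance zx123456 | mysql -uroot -p` for glance, and likewise for nova and neutron.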

**Generate a random value for admin_token: openssl rand -hex 10**
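If openssl happens not to be available, the same kind of random hex token can be produced with plain POSIX tools; a minimal sketch (the pipeline and function name are my own, not from the guide):

```shell
# Emit 20 hex characters (10 random bytes) suitable for admin_token,
# using only /dev/urandom, od and tr.
gen_token() {
  head -c 10 /dev/urandom | od -An -tx1 | tr -d ' \n'
}
gen_token
echo
```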

# yum install openstack-keystone httpd mod_wsgi -y    ## controller ##

Configure: vi /etc/keystone/keystone.conf

admin_token = <the random value>  (mainly for security; you can also leave it unchanged)

connection = mysql+pymysql://keystone:zx123456@192.168.22.202/keystone

provider = fernet

# Initialize the identity service database:

# su -s /bin/sh -c "keystone-manage db_sync" keystone

# Initialize the Fernet keys:

# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

# Configure the Apache HTTP service

Edit /etc/httpd/conf/httpd.conf:

ServerName controller

Create /etc/httpd/conf.d/wsgi-keystone.conf with the following content:

Listen 5000
Listen 35357

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /usr/bin/keystone-wsgi-public
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost *:35357>
    WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
    WSGIProcessGroup keystone-admin
    WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    ErrorLogFormat "%{cu}t %M"
    ErrorLog /var/log/httpd/keystone-error.log
    CustomLog /var/log/httpd/keystone-access.log combined

    <Directory /usr/bin>
        Require all granted
    </Directory>
</VirtualHost>

Start the Apache HTTP service:

# systemctl enable httpd.service

# systemctl start httpd.service

# Create the service entity and API endpoints

Configure the authentication token:

# export OS_TOKEN=2e8cd090b7b50499d5f9

Configure the endpoint URL:

# export OS_URL=http://controller:35357/v3

Configure the identity API version:

# export OS_IDENTITY_API_VERSION=3

# Create the service entity for the identity service:

# openstack service create --name keystone --description "OpenStack Identity" identity

# Create the identity service API endpoints:

# openstack endpoint create --region RegionOne identity public http://controller:5000/v3

# openstack endpoint create --region RegionOne identity internal http://controller:5000/v3

# openstack endpoint create --region RegionOne identity admin http://controller:35357/v3
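The same public/internal/admin triple recurs for every service in this guide. A small dry-run helper (my own sketch, not part of the official workflow) prints the three commands so they can be reviewed before piping the output to sh:

```shell
# endpoint_cmds prints the three "openstack endpoint create" commands for a
# service; $2 is the public/internal URL and $3 the admin URL.
endpoint_cmds() {
  svc=$1; pub_url=$2; adm_url=$3
  echo "openstack endpoint create --region RegionOne $svc public $pub_url"
  echo "openstack endpoint create --region RegionOne $svc internal $pub_url"
  echo "openstack endpoint create --region RegionOne $svc admin $adm_url"
}

# the identity endpoints from this step:
endpoint_cmds identity http://controller:5000/v3 http://controller:35357/v3
```

Pipe the output through `| sh` on the controller to actually create the endpoints; glance and nova later follow the same pattern with their own URLs.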

# Create the domain, projects, users and roles

Create the "default" domain:

# openstack domain create --description "Default Domain" default

Create the admin project:

# openstack project create --domain default --description "Admin Project" admin

Create the admin user:

# openstack user create --domain default --password-prompt admin

## prompts for the admin user's password ##

Create the admin role:

openstack role create admin

Add the admin role to the admin project and user:

openstack role add --project admin --user admin admin

Create the "service" project:

openstack project create --domain default --description "Service Project" service

Create the "demo" project:

openstack project create --domain default --description "Demo Project" demo

Create the "demo" user:

openstack user create --domain default --password-prompt demo

## prompts for the demo user's password ##

Create the user role:

openstack role create user

Add the user role to the demo project and user:

openstack role add --project demo --user demo user

Verification:

Disable the temporary token authentication mechanism:

Edit /etc/keystone/keystone-paste.ini and remove admin_token_auth from the [pipeline:public_api], [pipeline:admin_api] and [pipeline:api_v3] sections.

Unset the OS_TOKEN and OS_URL environment variables:

unset OS_TOKEN OS_URL

As the admin user, test whether a token can be obtained:

# openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue

(screenshot: token issue output)

Create environment variable files for the admin and demo projects

admin project: add the following

vim admin-openrc

export OS_PROJECT_DOMAIN_NAME=default

export OS_USER_DOMAIN_NAME=default

export OS_PROJECT_NAME=admin

export OS_USERNAME=admin

export OS_PASSWORD=zx123456

export OS_AUTH_URL=http://controller:35357/v3

export OS_IDENTITY_API_VERSION=3

export OS_IMAGE_API_VERSION=2

demo project:

vim demo-openrc

export OS_PROJECT_DOMAIN_NAME=default

export OS_USER_DOMAIN_NAME=default

export OS_PROJECT_NAME=demo

export OS_USERNAME=demo

export OS_PASSWORD=zx123456

export OS_AUTH_URL=http://controller:35357/v3

export OS_IDENTITY_API_VERSION=3

export OS_IMAGE_API_VERSION=2

Load the environment variables and get a token:

# source admin-openrc

# openstack token issue

(screenshot: token issue output)



3. Installing and configuring glance

Install glance on the controller node

1. Log in to MySQL, create the database and the user:

mysql -uroot -pzx123456

CREATE DATABASE glance;    ## create the glance database ##

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'zx123456';

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'zx123456';

2. Set up the keystone credentials: user, password and role

source admin-openrc

Create the glance user:

openstack user create --domain default --password-prompt glance

## prompts for the glance password ##

Add the admin role to the glance user and service project:

openstack role add --project service --user glance admin

3. Create the "glance" service entity:

openstack service create --name glance --description "OpenStack Image" image

4. Create the image service API endpoints:

openstack endpoint create --region RegionOne image public http://controller:9292

openstack endpoint create --region RegionOne image internal http://controller:9292

openstack endpoint create --region RegionOne image admin http://controller:9292

5. Install the glance package    # controller #

yum install openstack-glance -y

6. Configure glance-api:

vim /etc/glance/glance-api.conf

[database]

connection = mysql+pymysql://glance:zx123456@controller/glance

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = glance

password = zx123456

[paste_deploy]

flavor = keystone    ### specifies the authentication mechanism ###

[glance_store]

stores = file,http

default_store = file

filesystem_store_datadir = /var/lib/glance

7. Configure /etc/glance/glance-registry.conf:

vim /etc/glance/glance-registry.conf

[database]

connection =
mysql+pymysql://glance:zx123456@controller/glance

[keystone_authtoken]

auth_uri =
http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = glance

password = zx123456

[paste_deploy]

flavor = keystone

8. Create the image storage directory and change its owner:

mkdir /var/lib/glance

chown glance. /var/lib/glance

9. Populate the database:

su -s /bin/sh -c "glance-manage db_sync" glance

10. Enable at boot and start:

# systemctl enable openstack-glance-api.service openstack-glance-registry.service

# systemctl start openstack-glance-api.service openstack-glance-registry.service

View the service endpoint information:

# openstack catalog list

Verification:

# source admin-openrc

# wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

## download the image ##

openstack image create "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --public

## upload the image ##

openstack image list    ## check the result ##


Note 1: in Neutron's configuration files, replace auth_uri with identity_uri (the other services can use auth_uri, but neutron requires identity_uri, otherwise it will not start properly).

4. Installing and configuring nova

Controller node

1. Create the databases and the user/password for connecting:

mysql -uroot -pzx123456

CREATE DATABASE nova_api;

CREATE DATABASE nova;

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'zx123456';

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'zx123456';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'zx123456';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'zx123456';

flush privileges;

2. Check the result:

select user,host from mysql.user where user='nova';

3. Create the service entity, keystone user and role assignment

Create the nova service entity:

openstack service create --name nova --description "OpenStack Compute" compute

Create the user:

openstack user create --domain default --password-prompt nova

## prompts for the nova password ##

Associate the user, role and project:

openstack role add --project service --user nova admin

Create the compute API endpoints:

openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1/%\(tenant_id\)s

openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1/%\(tenant_id\)s

openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1/%\(tenant_id\)s

4. Check the result:

openstack catalog list

5. Install the nova packages:

yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler -y

6. Edit the nova configuration file:

vim /etc/nova/nova.conf

[DEFAULT]

enabled_apis = osapi_compute,metadata

rpc_backend = rabbit

auth_strategy = keystone

my_ip = 192.168.22.202

use_neutron = True

firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]

connection = mysql+pymysql://nova:zx123456@controller/nova_api

[database]

# nova database connection

connection = mysql+pymysql://nova:zx123456@controller/nova

[oslo_messaging_rabbit]

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = zx123456

[keystone_authtoken]

# keystone authentication settings

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = nova

password = zx123456

[glance]

api_servers = http://controller:9292

[vnc]

vncserver_listen = 192.168.22.202

vncserver_proxyclient_address = 192.168.22.202

[oslo_concurrency]

# lock file location

lock_path = /var/lib/nova/tmp

7. Sync the databases:

su -s /bin/sh -c "nova-manage api_db sync" nova

su -s /bin/sh -c "nova-manage db sync" nova

## the warnings can be ignored ##

8. Verify:

mysql -uroot -pzx123456

use nova;

show tables;

9. Start the services and enable them at boot:

# systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

# systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Compute node

1. Install the nova-compute service:

yum install openstack-nova-compute -y

2. Edit the configuration file:

vim /etc/nova/nova.conf

[DEFAULT]

rpc_backend = rabbit

auth_strategy = keystone

# compute node IP

my_ip = 192.168.22.203

use_neutron = True

firewall_driver = nova.virt.firewall.NoopFirewallDriver

[oslo_messaging_rabbit]

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = zx123456

[vnc]

enabled = True

vncserver_listen = 0.0.0.0

# management-network IP of the compute node

vncserver_proxyclient_address = 192.168.22.203

novncproxy_base_url = http://192.168.22.202:6080/vnc_auto.html

[glance]

api_servers = http://controller:9292

[oslo_concurrency]

# lock file

lock_path = /var/lib/nova/tmp

egrep -c '(vmx|svm)' /proc/cpuinfo

## determine whether the compute node supports hardware acceleration for virtual machines ##

If it returns 0, add the following:

[libvirt]

virt_type = qemu
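The check above can be wrapped in a tiny helper that maps cpuinfo flags to a virt_type value; a sketch of my own (the function name is made up):

```shell
# pick_virt_type echoes "kvm" when the given cpuinfo text contains the
# vmx (Intel) or svm (AMD) flag, and "qemu" (software emulation) otherwise.
pick_virt_type() {
  if echo "$1" | grep -Eq '(vmx|svm)'; then
    echo kvm
  else
    echo qemu
  fi
}

# on the compute node:
pick_virt_type "$(cat /proc/cpuinfo)"
```

Whatever it prints is the value to put in the [libvirt] section's virt_type.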

3. Start the services:

# systemctl enable libvirtd.service openstack-nova-compute.service

# systemctl start libvirtd.service openstack-nova-compute.service

Verification

Run the following on the controller:

# source /root/admin-openrc

# openstack compute service list

(screenshot: compute service list output)


Note 2: each configuration file's group owner must be the corresponding service user; otherwise the service cannot read it and will fail to start.

5. Installing and configuring Neutron

Controller node

1. Create the neutron database and grant privileges:

mysql -uroot -pzx123456

CREATE DATABASE neutron;

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'zx123456';

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'zx123456';

2. Source the admin credentials and create the neutron user:

# source admin-openrc

# openstack user create --domain default --password-prompt neutron

## prompts for the neutron password ##

3. Add the admin role to the neutron user:

openstack role add --project service --user neutron admin

4. Create the neutron service entity:

openstack service create --name neutron --description "OpenStack Networking" network

5. Create the networking service API endpoints:

openstack endpoint create --region RegionOne network public http://controller:9696

openstack endpoint create --region RegionOne network internal http://controller:9696

openstack endpoint create --region RegionOne network admin http://controller:9696

6. Networking option: self-service networks

Install the neutron packages:

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y

7. Neutron service configuration files:

mv /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak

vim /etc/neutron/neutron.conf

[DEFAULT]

core_plugin = ml2

service_plugins = router

allow_overlapping_ips = True

rpc_backend = rabbit

auth_strategy = keystone

notify_nova_on_port_status_changes = True

notify_nova_on_port_data_changes = True

[database]

connection = mysql+pymysql://neutron:zx123456@controller/neutron   # change to your own database password

[oslo_messaging_rabbit]

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = zx123456   # change to your RabbitMQ password

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = zx123456   # change to your own neutron service password

[nova]

auth_url = http://controller:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = nova

password = zx123456   # change to your own nova service password

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

Configure the ML2 plugin:

mv /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak

vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]

type_drivers = flat,vlan,vxlan

tenant_network_types = vxlan

mechanism_drivers = linuxbridge,l2population

extension_drivers = port_security

[ml2_type_flat]

flat_networks = *

[ml2_type_vxlan]

vni_ranges = 1:1000

[securitygroup]

enable_ipset = True

Configure the linuxbridge agent:

mv /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak

vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]

physical_interface_mappings = provider:eth1   # set this to the provider network's NIC; here it is eth1

[vxlan]

enable_vxlan = True

local_ip = 192.168.22.202   # this node's management-network IP (192.168.22.202)

l2_population = True

[securitygroup]

enable_security_group = True

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Configure the L3 agent:

mv /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.bak

vim /etc/neutron/l3_agent.ini

[DEFAULT]

interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

external_network_bridge =    # intentionally left empty

Configure the DHCP agent:

mv /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.bak

vim /etc/neutron/dhcp_agent.ini

[DEFAULT]

interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

enable_isolated_metadata = True

Configure the metadata agent:

mv /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.bak

vim /etc/neutron/metadata_agent.ini

[DEFAULT]

nova_metadata_ip = controller

metadata_proxy_shared_secret = zx123456   # replace with your own METADATA_SECRET (or keep this one); it must match the nova configuration

Configure the nova service to use neutron:

vim /etc/nova/nova.conf    # add the following

[neutron]

url = http://controller:9696

auth_url = http://controller:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = neutron

password = zx123456   # change to your own neutron service password

service_metadata_proxy = True

metadata_proxy_shared_secret = zx123456   # must match the METADATA_SECRET above

8. Create a symlink to the ML2 plugin configuration:

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

9. Sync the database:

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

10. Restart nova-api:

systemctl restart openstack-nova-api.service

11. Start the neutron services and enable them at boot:

systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service

# systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service

Compute node configuration

1. Install the neutron package:

yum install openstack-neutron-linuxbridge ebtables ipset

2. Configuration

Neutron service configuration:

mv /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak

vim /etc/neutron/neutron.conf

[DEFAULT]

rpc_backend = rabbit

auth_strategy = keystone

[oslo_messaging_rabbit]

rabbit_host = controller

rabbit_userid = openstack

rabbit_password = zx123456   # change to the rabbit password

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = zx123456   # change to your own neutron service password

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

Linuxbridge agent configuration:

mv /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak

vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]

physical_interface_mappings = provider:eth1   # set to the provider network's NIC; here it is eth1

[vxlan]

enable_vxlan = True

local_ip = 192.168.22.203   # this node's management-network IP (192.168.22.203)

l2_population = True

[securitygroup]

enable_security_group = True

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Configure nova to use neutron:

vim /etc/nova/nova.conf    # add the following

[neutron]

url = http://controller:9696

auth_url = http://controller:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = neutron

password = zx123456   # change to your own neutron service password

3. Restart the nova service:

systemctl restart openstack-nova-compute.service

4. Start neutron:

systemctl enable neutron-linuxbridge-agent.service

systemctl start neutron-linuxbridge-agent.service

Verification

Run on the controller node:

source /root/admin-openrc

neutron ext-list

(screenshot: neutron ext-list output)

neutron agent-list

(screenshot: neutron agent-list output)

The Neutron service installation is complete.

 

6. Installing and configuring the Dashboard

Controller node

1. Install the dashboard:

yum install openstack-dashboard -y

2. Adjust the settings:

vim /etc/openstack-dashboard/local_settings

Modify the following settings:

OPENSTACK_HOST = "controller"

ALLOWED_HOSTS = ['*', ]

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '192.168.22.202:11211',
    },
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "default"

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

TIME_ZONE = "UTC"

3. Restart the apache and memcached services:

systemctl enable httpd.service memcached.service

systemctl restart httpd.service memcached.service

systemctl status httpd.service memcached.service

Verification

http://192.168.22.202/dashboard

On the NovaException: Unexpected vif_type=binding_failed error when starting an instance:

1. First check that the ml2 configuration file is correct.
2. Check metadata_agent.ini on the network node for mistakes; the metadata agent is responsible for recording neutron operations in the database. (Errors in the metadata_agent config do not show up in the logs, e.g. writing admin_tenant_name = service as dmin_tenant_name = service.)
3. Disable the instance's networking and see whether it starts normally; if it does, the problem lies in neutron, and if it still will not start you need to look elsewhere.

 

Problems encountered while setting up cinder:

1. When configuring cinder, change volumes_dir=$state_path/volumes in the config file on the cinder-volume node to volumes_dir=/etc/cinder/volumes.
2. In /etc/rc.d/init.d/openstack-cinder-volume, keep only --config-file $config and delete --config-file $distconfig to avoid mistakes, e.g.: daemon --user cinder --pidfile $pidfile "$exec --config-file $config --logfile $logfile &>/dev/null & echo \$! > $pidfile"
3. In the cinder-volume node's config file, volume_group = stack-volumes-lvmdriver-1 names the default VG; a volume group called stack-volumes-lvmdriver-1 must be created before starting cinder-volume.

Steps to delete a neutron network:

1. router-gateway-clear
2. router-interface-delete
3. subnet-delete
4. router-delete

Problems with the neutron service:

If the logs show no errors but the service misbehaves, for example instances cannot obtain an IP:

1. Use neutron agent-list to check whether every component is working properly.

If an agent's state is abnormal, check whether the nodes' clocks are out of sync. (When the logs are clean but the state is still wrong, it is almost always caused by unsynchronized clocks.)

7. Summary

1. When you run into a problem, stay calm, don't give up, and think it through.
2. OpenStack problems are usually caused by mistakes in the configuration files.
3. Restart each service a few extra times to see whether it reports errors; some services claim OK on startup but are not actually running.
4. Always check the logs after starting a service (grep -i 'error').
5. The clocks on all hosts must be synchronized.

 

Finally, a screenshot of the finished deployment:

(screenshot: the Horizon dashboard)