Environment:

CentOS 7.3

OpenStack Ocata




I. Basic environment configuration

1. Host table

192.168.130.101 controller

192.168.130.111 block1

192.168.130.201 compute1

Tip: production environments usually bond the NICs.

The text UI nmtui (the NetworkManager-tui package plus the NetworkManager service) makes it very easy to generate a NIC configuration template; generate a standard network template once, then simply adapt it when adding further compute nodes.
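The host table above has to land in /etc/hosts on every node; a minimal sketch (HOSTS points at a scratch file here so the snippet is safe to rehearse anywhere; set HOSTS=/etc/hosts on a real node):

```shell
# Append the cluster host table. HOSTS is a scratch file for safe
# rehearsal; on a real node use HOSTS=/etc/hosts instead.
HOSTS=$(mktemp)
cat >>"$HOSTS" <<'EOF'
192.168.130.101 controller
192.168.130.111 block1
192.168.130.201 compute1
EOF
# Sanity check: each hostname appears exactly once.
for h in controller block1 compute1; do
  test "$(grep -cw "$h" "$HOSTS")" = "1"
done
```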



2. NTP

https://docs.openstack.org/ocata/install-guide-rdo/environment-ntp-controller.html

On the controller:

yum -y install chrony

sed -i '/server 0.centos.pool.ntp.org iburst/i server time.nist.gov iburst' /etc/chrony.conf

sed -i '/.centos.pool.ntp.org iburst/d' /etc/chrony.conf

sed -i '/#allow 192.168/c allow 192.168.130.0/24' /etc/chrony.conf

systemctl enable chronyd.service

systemctl restart chronyd.service

chronyc sources
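Because these sed one-liners are easy to mangle when copied, they can be rehearsed on a scratch copy first. A sketch, assuming the stock CentOS pool entries and commented allow line:

```shell
# Rehearse the controller-side chrony.conf edits on a scratch copy
# (assumes the stock CentOS server/allow lines shown below).
CONF=$(mktemp)
cat >"$CONF" <<'EOF'
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
#allow 192.168.0.0/16
EOF
# Insert the preferred upstream before the stock pool entries,
sed -i '/server 0.centos.pool.ntp.org iburst/i server time.nist.gov iburst' "$CONF"
# drop the stock pool entries,
sed -i '/.centos.pool.ntp.org iburst/d' "$CONF"
# and let the management subnet sync from this node.
sed -i '/#allow 192.168/c allow 192.168.130.0/24' "$CONF"
cat "$CONF"
```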

On all other nodes (block1, compute1):

yum -y install chrony

sed -i '/server 0.centos.pool.ntp.org iburst/i server controller iburst' /etc/chrony.conf

sed -i '/.centos.pool.ntp.org iburst/d' /etc/chrony.conf

systemctl enable chronyd.service

systemctl restart chronyd.service

chronyc sources

3. OpenStack client packages (all nodes)

https://docs.openstack.org/ocata/install-guide-rdo/environment-packages.html

cat >/etc/yum.repos.d/extras.repo <<'HERE'

[extras]

name=CentOS-$releasever - extras


baseurl=http://mirror.centos.org/centos/$releasever/extras/$basearch/

gpgcheck=1

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

enabled=1

HERE

yum -y install centos-release-openstack-ocata

yum -y install python-openstackclient openstack-selinux


Tip: you can build a local yum repository; using the Ocata release as an example:

mkdir openstack-ocata

yum -y install centos-release-openstack-ocata

yum -y install yum-utils

yumdownloader --destdir openstack-ocata chrony python-openstackclient \
  openstack-selinux mariadb mariadb-server python2-PyMySQL \
  rabbitmq-server memcached python-memcached openstack-keystone httpd \
  mod_wsgi openstack-glance openstack-nova-api \
  openstack-nova-conductor openstack-nova-console \
  openstack-nova-novncproxy openstack-nova-scheduler \
  openstack-nova-placement-api openstack-nova-compute \
  openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables ipset openstack-dashboard \
  openstack-cinder lvm2 targetcli \
  python-keystone

Or, to pull dependencies along as well:

yum -y --downloadonly --downloaddir=openstack-ocata install ...
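The downloaded directory still needs repo metadata before yum can consume it. A sketch (createrepo is shown commented out so the snippet runs anywhere; the repo id local-ocata is an arbitrary name chosen here):

```shell
# Turn the downloaded RPM directory into a local yum repository.
# The repo id "local-ocata" is an arbitrary name invented for this sketch.
REPODIR=${REPODIR:-openstack-ocata}
mkdir -p "$REPODIR"
# Build repodata/ metadata (needs: yum -y install createrepo):
# createrepo "$REPODIR"
# Point yum at it via a file:// baseurl; copy this file to
# /etc/yum.repos.d/ on each node.
cat >local-ocata.repo <<EOF
[local-ocata]
name=Local OpenStack Ocata packages
baseurl=file://$(readlink -f "$REPODIR")
gpgcheck=0
enabled=1
EOF
cat local-ocata.repo
```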

4. SQL database (can live on a dedicated node)

Tip: lab resources are limited here, so it is installed directly on the controller.

yum -y install mariadb mariadb-server  

yum -y install python2-PyMySQL

cat >/etc/my.cnf.d/openstack.cnf <<HERE

[mysqld]

bind-address = 192.168.130.101

default-storage-engine = innodb

innodb_file_per_table

max_connections = 4096

collation-server = utf8_general_ci

character-set-server = utf8

HERE

systemctl enable mariadb.service

systemctl start mariadb.service

mysql_secure_installation

5. Message queue (can live on a dedicated node)

Tip: lab resources are limited here, so it is installed directly on the controller.

yum -y install rabbitmq-server

systemctl enable rabbitmq-server.service

systemctl start rabbitmq-server.service

Add the user and grant permissions:

rabbitmqctl add_user openstack RABBIT_PASS

rabbitmqctl set_permissions openstack ".*" ".*" ".*"


Management web UI at http://controller:15672, username/password guest/guest:

rabbitmq-plugins enable rabbitmq_management

systemctl restart rabbitmq-server.service



Tip: raising the connection limits appropriately can increase throughput.

/etc/security/limits.conf

*    soft    nproc
*    hard    nproc
*    soft    nofile    65536
*    hard    nofile    65536

/usr/lib/systemd/system/rabbitmq-server.service

[Service]

LimitNOFILE=655360
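Editing the packaged unit file is lost on RPM updates; a systemd drop-in survives them. A sketch, writing under a scratch prefix so it can be rehearsed safely (set ROOT= on a real node, then run systemctl daemon-reload):

```shell
# Raise RabbitMQ's file-descriptor limit via a systemd drop-in
# instead of editing the packaged unit file (overwritten on update).
# ROOT is a scratch prefix for rehearsal; set ROOT= on a real node.
ROOT=$(mktemp -d)
DROPIN="$ROOT/etc/systemd/system/rabbitmq-server.service.d"
mkdir -p "$DROPIN"
cat >"$DROPIN/limits.conf" <<'EOF'
[Service]
LimitNOFILE=655360
EOF
# Real node: systemctl daemon-reload && systemctl restart rabbitmq-server
cat "$DROPIN/limits.conf"
```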



On Ubuntu, edit instead:

/etc/default/rabbitmq-server

ulimit -S -n 655360

6. Memcached (can live on a dedicated node)

Tip: lab resources are limited here, so it is installed directly on the controller.

yum -y install memcached python-memcached

sed -i '/OPTIONS=/c OPTIONS="-l 192.168.130.101"' /etc/sysconfig/memcached

systemctl enable memcached.service

systemctl start memcached.service



II. Identity service (controller node)

1. Create the database

mysql -u root -proot

CREATE DATABASE keystone;

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';

FLUSH PRIVILEGES;
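Every service in this guide repeats the same create/grant boilerplate with different names; a small generator avoids typos. mk_db_sql is a helper name invented for this sketch, not an OpenStack tool; pipe its output into mysql -u root -proot:

```shell
# Generate the per-service database DDL used throughout this guide.
# mk_db_sql is a hypothetical helper, not part of any OpenStack package.
mk_db_sql() {
  local db=$1 user=$2 pass=$3
  cat <<EOF
CREATE DATABASE $db;
GRANT ALL PRIVILEGES ON $db.* TO '$user'@'localhost' IDENTIFIED BY '$pass';
GRANT ALL PRIVILEGES ON $db.* TO '$user'@'%' IDENTIFIED BY '$pass';
EOF
}
mk_db_sql keystone keystone KEYSTONE_DBPASS > keystone.sql
cat keystone.sql   # review, then: mysql -u root -proot < keystone.sql
```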

2. Install the Identity components

yum -y install openstack-keystone httpd mod_wsgi

3. Configure Keystone

mv /etc/keystone/keystone.conf{,.default}

cat >/etc/keystone/keystone.conf <<HERE

[DEFAULT]

[assignment]

[auth]

[cache]

[catalog]

[cors]

[cors.subdomain]

[credential]

[database]

connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

[domain_config]

[endpoint_filter]

[endpoint_policy]

[eventlet_server]

[federation]

[fernet_tokens]

[healthcheck]

[identity]

[identity_mapping]

[kvs]

[ldap]

[matchmaker_redis]

[memcache]

[oauth1]

[oslo_messaging_amqp]

[oslo_messaging_kafka]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

[oslo_messaging_zmq]

[oslo_middleware]

[oslo_policy]

[paste_deploy]

[policy]

[profiler]

[resource]

[revoke]

[role]

[saml]

[security_compliance]

[shadow_users]

[signing]

[token]

provider = fernet

[tokenless_auth]

[trust]

HERE

4. Initialize Keystone

https://docs.openstack.org/ocata/install-guide-rdo/keystone-install.html

su -s /bin/sh -c "keystone-manage db_sync" keystone

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://controller:35357/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne




5. Configure Apache

sed -i '/^#ServerName/c ServerName controller' /etc/httpd/conf/httpd.conf

ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

systemctl enable httpd.service

systemctl start httpd.service


export OS_USERNAME=admin

export OS_PASSWORD=ADMIN_PASS

export OS_PROJECT_NAME=admin

export OS_USER_DOMAIN_NAME=Default

export OS_PROJECT_DOMAIN_NAME=Default

export OS_AUTH_URL=http://controller:35357/v3

export OS_IDENTITY_API_VERSION=3

6. Create domains, projects, users, and roles

openstack project create --domain default --description "Service Project" service

openstack project create --domain default --description "Demo Project" demo

openstack user create --domain default --password DEMO_PASS demo

openstack role create user

openstack role add --project demo --user demo user

7. Verify the Identity service configuration

Test that admin can obtain a token:

openstack --os-auth-url http://controller:35357/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue



8. OpenStack client rc environment files

cat >admin-openrc <<HERE

export OS_PROJECT_DOMAIN_NAME=Default

export OS_USER_DOMAIN_NAME=Default

export OS_PROJECT_NAME=admin

export OS_USERNAME=admin

export OS_PASSWORD=ADMIN_PASS

export OS_AUTH_URL=http://controller:35357/v3

export OS_IDENTITY_API_VERSION=3

export OS_IMAGE_API_VERSION=2

HERE

cat >demo-openrc <<HERE

export OS_PROJECT_DOMAIN_NAME=Default

export OS_USER_DOMAIN_NAME=Default

export OS_PROJECT_NAME=demo

export OS_USERNAME=demo

export OS_PASSWORD=DEMO_PASS

export OS_AUTH_URL=http://controller:5000/v3

export OS_IDENTITY_API_VERSION=3

export OS_IMAGE_API_VERSION=2

HERE
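Before chasing authentication errors, it is worth confirming that an rc file actually sets what you expect; sourcing it in a subshell keeps the current environment clean. A sketch using a scratch copy:

```shell
# Verify an openrc file in a subshell so the current shell's
# environment stays untouched (scratch copy used here for illustration).
cat >rc.scratch <<'EOF'
export OS_USERNAME=admin
export OS_AUTH_URL=http://controller:35357/v3
EOF
OUT=$( ( . ./rc.scratch && echo "$OS_USERNAME $OS_AUTH_URL" ) )
echo "$OUT"
rm -f rc.scratch
```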


III. Glance

1. Create the database

mysql -u root -proot

CREATE DATABASE glance;

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';

FLUSH PRIVILEGES;

2. Create credentials

source admin-openrc

openstack user create --domain default --password GLANCE_PASS glance

openstack role add --project service --user glance admin

3. Create the service

openstack service create --name glance --description "OpenStack Image" image

4. Create the API endpoints

openstack endpoint create --region RegionOne image public http://controller:9292

openstack endpoint create --region RegionOne image internal http://controller:9292

openstack endpoint create --region RegionOne image admin http://controller:9292

5. Install Glance

https://docs.openstack.org/ocata/install-guide-rdo/glance-install.html

yum -y install openstack-glance

6. Configure Glance

mv /etc/glance/glance-api.conf{,.default}

cat >/etc/glance/glance-api.conf <<HERE

[DEFAULT]

[cors]

[cors.subdomain]

[database]

connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[glance_store]

stores = file,http

default_store = file

filesystem_store_datadir = /var/lib/glance/images/

[image_format]

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = Default

user_domain_name = Default

project_name = service

username = glance

password = GLANCE_PASS

[matchmaker_redis]

[oslo_concurrency]

[oslo_messaging_amqp]

[oslo_messaging_kafka]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

[oslo_messaging_zmq]

[oslo_middleware]

[oslo_policy]

[paste_deploy]

flavor = keystone

[profiler]

[store_type_location_strategy]

[task]

[taskflow_executor]

HERE


mv /etc/glance/glance-registry.conf{,.default}

cat >/etc/glance/glance-registry.conf <<HERE

[DEFAULT]

[database]

connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = Default

user_domain_name = Default

project_name = service

username = glance

password = GLANCE_PASS

[matchmaker_redis]

[oslo_messaging_amqp]

[oslo_messaging_kafka]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

[oslo_messaging_zmq]

[oslo_policy]

[paste_deploy]

flavor = keystone

[profiler]

HERE

7. Sync the Glance database

su -s /bin/sh -c "glance-manage db_sync" glance

8. Start the services

systemctl enable openstack-glance-api.service openstack-glance-registry.service

systemctl start openstack-glance-api.service openstack-glance-registry.service

9. Verify the Glance service configuration

curl http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img -o cirros-0.3.5-x86_64-disk.img

source admin-openrc

openstack image create "cirros" \
  --file cirros-0.3.5-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --public

openstack image list


https://docs.openstack.org/image-guide/obtain-images.html

http://cloud.centos.org/centos/7/images/

Modifying images:

https://docs.openstack.org/image-guide/modify-images.html

Building images:

https://docs.openstack.org/image-guide/centos-image.html

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/2/html/Getting_Started_Guide/chap-Deploying_Image_Services.html

http://cloudinit.readthedocs.io/en/latest/topics/examples.html

http://people.redhat.com/mskinner/rhug/q3.2014/cloud-init.pdf

https://access.redhat.com/documentation/zh-CN/Red_Hat_Enterprise_Linux_OpenStack_Platform/7/pdf/Instances_and_Images_Guide/Red_Hat_Enterprise_Linux_OpenStack_Platform-7-Instances_and_Images_Guide-zh-CN.pdf

CentOS 7:

virt-install --virt-type kvm --name ct7-cloud --ram 1024 \
  ...

Or import an existing VM image directly:

virt-install --name ct7-cloud --vcpus 2 --memory 2048 --disk ct7-cloud.img --import

virsh console ct7-cloud

http://ftp.redhat.com/pub/redhat/linux/enterprise/6Server/en/RH-COMMON/SRPMS/

http://ftp.redhat.com/pub/redhat/linux/enterprise/7Server/en/RH-COMMON/SRPMS/

yum -y install acpid cloud-init cloud-utils-growpart

systemctl enable acpid

echo "NOZEROCONF=yes" > /etc/sysconfig/network

grubby --update-kernel=ALL --remove-args="rhgb quiet"

grubby --update-kernel=ALL --args="console=tty0 console=ttyS0,115200n8"

grub2-mkconfig -o /boot/grub2/grub.cfg

poweroff

On the host, run virt-sysprep to depersonalize the image:

yum -y install libguestfs-tools

echo root >/tmp/rootpw

virt-sysprep -a /var/lib/libvirt/images/ct7-cloud.img --root-password file:/tmp/rootpw

virt-sparsify --compress /var/lib/libvirt/images/ct7-cloud.img ct7-cloud.qcow2

Sample virt-sysprep output:

root@router:images# virt-sysprep -a /var/lib/libvirt/images/centos.qcow2 --root-password file:/tmp/rootpw
[ ...] Examining the guest ...
[ ...] Performing "abrt-data" ...
[ ...] Performing "bash-history" ...
[ ...] Performing "blkid-tab" ...
[ ...] Performing "crash-data" ...
[ ...] Performing "cron-spool" ...
[ ...] Performing "dhcp-client-state" ...
[ ...] Performing "dhcp-server-state" ...
[ ...] Performing "dovecot-data" ...
[ ...] Performing "logfiles" ...
[ ...] Performing "machine-id" ...
[ ...] Performing "mail-spool" ...
[ ...] Performing "net-hostname" ...
[ ...] Performing "net-hwaddr" ...
[ ...] Performing "pacct-log" ...
[ ...] Performing "package-manager-cache" ...
[ ...] Performing "pam-data" ...
[ ...] Performing "puppet-data-log" ...
[ ...] Performing "rh-subscription-manager" ...
[ ...] Performing "rhn-systemid" ...
[ ...] Performing "rpm-db" ...
[ ...] Performing "samba-db-log" ...
[ ...] Performing "script" ...
[ ...] Performing "smolt-uuid" ...
[ ...] Performing "ssh-hostkeys" ...
[ ...] Performing "ssh-userdir" ...
[ ...] Performing "sssd-db-log" ...
[ ...] Performing "tmp-files" ...
[ ...] Performing "udev-persistent-net" ...
[ ...] Performing "utmp" ...
[ ...] Performing "yum-uuid" ...
[ ...] Performing "customize" ...
[ ...] Setting a random seed
[ ...] Performing "lvm-uuids" ...

root@router:images# virt-sparsify --compress ...
[ ...] Create overlay file in /tmp to protect source disk
[ ...] Examine source disk
[ ...] Fill free space in /dev/cl/root with zero
 100% ⟦...⟧ 00:00
[ ...] Clearing Linux swap on /dev/cl/swap
[ ...] Fill free space in /dev/sda1 with zero
 100% ⟦...⟧ 00:00
[ ...] Copy to destination and make sparse
[ 243.9] Sparsify operation completed with no errors.
virt-sparsify: Before deleting the old disk, carefully check that the
target disk boots and works correctly.

root@router:images# ls -lh ct7-cloud.*
-rw-r--r-- 1 qemu qemu 1.4G May ... ct7-cloud.img
-rw-r--r-- 1 root root 474M May ... ct7-cloud.qcow2

After compression the CentOS 7 image is about one third of its original size.



Addendum:

Building images with oz:

https://github.com/rcbops/oz-image-build

yum -y install oz

sed -i '/image_type = raw/s/raw/qcow2/' /etc/oz/oz.cfg

oz-install -p -u -d3 centos7.3.tdl

Modifying an image with guestfs:

guestmount -a ct7-cloud.qcow2 -i --rw /mnt/cloud   # then adjust anything else that needs changing

Ubuntu 16.04:

apt-get install cloud-init

dpkg-reconfigure cloud-init

virt-sysprep -d ubuntu16.04



IV. Nova

Unlike Newton, the Placement service is now a required component: http://docs.openstack.org/developer/nova/placement.html


A. Prepare the environment

https://docs.openstack.org/ocata/install-guide-rdo/nova-controller-install.html

1. Create the databases

mysql -u root -proot

CREATE DATABASE nova_api;

CREATE DATABASE nova;

CREATE DATABASE nova_cell0;

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';

GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';

GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';

FLUSH PRIVILEGES;
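Nova needs identical grants on three databases; generating them in a loop avoids copy/paste slips. A sketch that writes the SQL to a file for review before piping it into mysql:

```shell
# Emit the grant statements for all three nova databases; review the
# file, then feed it to: mysql -u root -proot < nova-grants.sql
{
  for db in nova_api nova nova_cell0; do
    echo "CREATE DATABASE IF NOT EXISTS $db;"
    for host in localhost '%'; do
      echo "GRANT ALL PRIVILEGES ON $db.* TO 'nova'@'$host' IDENTIFIED BY 'NOVA_DBPASS';"
    done
  done
} > nova-grants.sql
cat nova-grants.sql
```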

Run the following with admin privileges:

source admin-openrc

2. Create the nova user

openstack user create --domain default --password NOVA_PASS nova

openstack role add --project service --user nova admin

3. Create the compute service

openstack service create --name nova --description "OpenStack Compute" compute

4. Create the compute API endpoints

openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1/%\(tenant_id\)s

openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1/%\(tenant_id\)s

openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1/%\(tenant_id\)s

5. Create the placement user

openstack user create --domain default --password PLACEMENT_PASS placement

openstack role add --project service --user placement admin

6. Create the placement service

openstack service create --name placement --description "Placement API" placement

7. Create the placement API endpoints

openstack endpoint create --region RegionOne placement public http://controller:8778

openstack endpoint create --region RegionOne placement internal http://controller:8778

openstack endpoint create --region RegionOne placement admin http://controller:8778




Note: following the official document and serving /placement on port 80 caused the placement API to fail to start; calls then log a 404 for /placement. The fix is to use port 8778 explicitly.

https://ask.openstack.org/en/question/103860/ocata-the-placement-api-endpoint-not-found-on-ubuntu/

https://docs.openstack.org/developer/nova/placement.html

To inspect or remove stale endpoints:

openstack endpoint list

openstack endpoint delete <endpoint-id>





B. Install and configure Nova

On the controller:

1. Install

yum -y install openstack-nova-api openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy \
  openstack-nova-scheduler openstack-nova-placement-api

2. Configure

mv /etc/nova/nova.conf{,.default}

cat >/etc/nova/nova.conf <<'HERE'

[DEFAULT]

enabled_apis = osapi_compute,metadata

transport_url = rabbit://openstack:RABBIT_PASS@controller

auth_strategy = keystone

my_ip = 192.168.130.101

use_neutron = True

firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]

[api_database]

connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

[barbican]

[cache]

[cells]

[cinder]

[cloudpipe]

[conductor]

[console]

[consoleauth]

[cors]

[cors.subdomain]

[crypto]

[database]

connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova

[ephemeral_storage_encryption]

[filter_scheduler]

[glance]

api_servers = http://controller:9292

[guestfs]

[healthcheck]

[hyperv]

[image_file_url]

[ironic]

[key_manager]

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = Default

user_domain_name = Default

project_name = service

username = nova

password = NOVA_PASS

[libvirt]

[matchmaker_redis]

[metrics]

[mks]

[neutron]

[notifications]

[osapi_v21]

[oslo_concurrency]

lock_path = /var/lib/nova/tmp

[oslo_messaging_amqp]

[oslo_messaging_kafka]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

[oslo_messaging_zmq]

[oslo_middleware]

[oslo_policy]

[pci]

[placement]

os_region_name = RegionOne

project_domain_name = Default

project_name = service

auth_type = password

user_domain_name = Default

auth_url = http://controller:35357/v3

username = placement

password = PLACEMENT_PASS

[placement_database]

[quota]

[rdp]

[remote_debug]

[scheduler]

discover_hosts_in_cells_interval = 300

[serial_console]

[service_user]

[spice]

[ssl]

[trusted_computing]

[upgrade_levels]

[vendordata_dynamic_auth]

[vmware]

[vnc]

vncserver_listen = $my_ip

vncserver_proxyclient_address = $my_ip

[workarounds]

[wsgi]

[xenserver]

[xvp]

HERE

Fix for https://bugzilla.redhat.com/show_bug.cgi?id=1430540

In /etc/httpd/conf.d/00-nova-placement-api.conf, add the following inside the VirtualHost section, then restart httpd:

<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>

3. Update the databases

https://docs.openstack.org/developer/nova/cells.html#step-by-step-for-common-use-cases

su -s /bin/sh -c "nova-manage api_db sync" nova

su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

su -s /bin/sh -c "nova-manage db sync" nova




4. Start the Nova services

systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service

On the compute node:

1. Install the Nova components

yum -y install openstack-nova-compute

2. Configure nova-compute

mv /etc/nova/nova.conf{,.default}

cat >/etc/nova/nova.conf <<'HERE'

[DEFAULT]

enabled_apis = osapi_compute,metadata

transport_url = rabbit://openstack:RABBIT_PASS@controller

my_ip = 192.168.130.201

use_neutron = True

firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]

auth_strategy = keystone

[api_database]

[barbican]

[cache]

[cells]

[cinder]

[cloudpipe]

[conductor]

[console]

[consoleauth]

[cors]

[cors.subdomain]

[crypto]

[database]

[ephemeral_storage_encryption]

[filter_scheduler]

[glance]

api_servers = http://controller:9292

[oslo_concurrency]

lock_path = /var/lib/nova/tmp

[guestfs]

[healthcheck]

[hyperv]

[image_file_url]

[ironic]

[key_manager]

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = Default

user_domain_name = Default

project_name = service

username = nova

password = NOVA_PASS

[libvirt]

[matchmaker_redis]

[metrics]

[mks]

[neutron]

[notifications]

[osapi_v21]

[oslo_concurrency]

[oslo_messaging_amqp]

[oslo_messaging_kafka]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

[oslo_messaging_zmq]

[oslo_middleware]

[oslo_policy]

[pci]

[placement]

os_region_name = RegionOne

project_domain_name = Default

project_name = service

auth_type = password

user_domain_name = Default

auth_url = http://controller:35357/v3

username = placement

password = PLACEMENT_PASS

[placement_database]

[quota]

[rdp]

[remote_debug]

[scheduler]

[serial_console]

[service_user]

[spice]

[ssl]

[trusted_computing]

[upgrade_levels]

[vendordata_dynamic_auth]

[vmware]

[vnc]

enabled = True

vncserver_listen = 0.0.0.0

vncserver_proxyclient_address = $my_ip

novncproxy_base_url = http://controller:6080/vnc_auto.html

[workarounds]

[wsgi]

[xenserver]

[xvp]

HERE


systemctl enable libvirtd.service openstack-nova-compute.service

systemctl restart libvirtd.service openstack-nova-compute.service

C. Verify the Nova service configuration

source admin-openrc

openstack compute service list

openstack catalog list




D. Add the compute node to the cell database

source admin-openrc

openstack hypervisor list

su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova





To use Hyper-V as a compute node, see:

https://cloudbase.it/openstack-hyperv-driver/

http://libvirt.org/drvhyperv.html

https://docs.openstack.org/juno/config-reference/content/hyper-v-virtualization-platform.html



V. Neutron

A. Prepare the environment

1. Create the database

mysql -u root -proot

CREATE DATABASE neutron;

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';

FLUSH PRIVILEGES;

Run the following with admin privileges:

source admin-openrc

2. Create the user

openstack user create --domain default --password NEUTRON_PASS neutron

openstack role add --project service --user neutron admin

3. Create the service

openstack service create --name neutron --description "OpenStack Networking" network

4. Create the API endpoints

openstack endpoint create --region RegionOne network public http://controller:9696

openstack endpoint create --region RegionOne network internal http://controller:9696

openstack endpoint create --region RegionOne network admin http://controller:9696

B. Install Neutron

On the controller:

1. Install the Neutron controller components

yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables

2. Choose a network option (pick one)

Provider networks option: configure the following files as described in the official guide (settings omitted here):

/etc/neutron/neutron.conf

Modular Layer 2 (ML2) plug-in (/etc/neutron/plugins/ml2/ml2_conf.ini)

Linux bridge agent (/etc/neutron/plugins/ml2/linuxbridge_agent.ini)

DHCP agent (/etc/neutron/dhcp_agent.ini)

Self-service networks option

cat >/etc/neutron/neutron.conf <<HERE

[DEFAULT]

core_plugin = ml2

service_plugins = router

allow_overlapping_ips = True

transport_url = rabbit://openstack:RABBIT_PASS@controller

auth_strategy = keystone

notify_nova_on_port_status_changes = True

notify_nova_on_port_data_changes = True

[agent]

[cors]

[cors.subdomain]

[database]

connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = Default

user_domain_name = Default

project_name = service

username = neutron

password = NEUTRON_PASS

[matchmaker_redis]

[nova]

auth_url = http://controller:35357

auth_type = password

project_domain_name = Default

user_domain_name = Default

region_name = RegionOne

project_name = service

username = nova

password = NOVA_PASS

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

[oslo_messaging_amqp]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

[oslo_messaging_zmq]

[oslo_middleware]

[oslo_policy]

[qos]

[quotas]

[ssl]

HERE

Layer 2 (ML2) plug-in:

cat >/etc/neutron/plugins/ml2/ml2_conf.ini <<HERE
[DEFAULT]

[ml2]

type_drivers = flat,vlan,vxlan

tenant_network_types = vxlan

mechanism_drivers = linuxbridge,l2population

extension_drivers = port_security

[ml2_type_flat]

flat_networks = provider

[ml2_type_geneve]

[ml2_type_gre]

[ml2_type_vlan]

[ml2_type_vxlan]

vni_ranges = 1:1000

[securitygroup]

enable_ipset = True

HERE

Linux bridge agent:

cat >/etc/neutron/plugins/ml2/linuxbridge_agent.ini <<HERE
[DEFAULT]

[agent]

[linux_bridge]

physical_interface_mappings = provider:ens3

[securitygroup]

enable_security_group = True

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[vxlan]

enable_vxlan = True

local_ip = 192.168.130.101

l2_population = True

HERE

Layer-3

cat >/etc/neutron/l3_agent.ini <<HERE

[DEFAULT]

interface_driver = linuxbridge

[agent]

[ovs]

HERE

DHCP agent

cat >/etc/neutron/dhcp_agent.ini <<HERE

[DEFAULT]

interface_driver = linuxbridge

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

enable_isolated_metadata = True

[agent]

[ovs]

HERE

metadata agent

cat >/etc/neutron/metadata_agent.ini <<HERE

[DEFAULT]

nova_metadata_ip = controller

metadata_proxy_shared_secret = METADATA_SECRET

[agent]

[cache]

HERE

4. Configure Nova to use Neutron

Add to /etc/nova/nova.conf:

[neutron]

url = http://controller:9696

auth_url = http://controller:35357

auth_type = password

project_domain_name = Default

user_domain_name = Default

region_name = RegionOne

project_name = service

username = neutron

password = NEUTRON_PASS

service_metadata_proxy = True

metadata_proxy_shared_secret = METADATA_SECRET

5. Update the database

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
6. Start the services

systemctl restart openstack-nova-api.service

systemctl enable neutron-server.service neutron-linuxbridge-agent.service \
  neutron-dhcp-agent.service neutron-metadata-agent.service

systemctl start neutron-server.service neutron-linuxbridge-agent.service \
  neutron-dhcp-agent.service neutron-metadata-agent.service

Note: the self-service network option additionally needs the layer-3 agent:

systemctl enable neutron-l3-agent.service

systemctl start neutron-l3-agent.service


On the compute node:

1. Install the Neutron compute components

yum -y install openstack-neutron-linuxbridge ebtables ipset

2. Common configuration

mv /etc/neutron/neutron.conf{,.default}

cat >/etc/neutron/neutron.conf <<HERE

[DEFAULT]

transport_url = rabbit://openstack:RABBIT_PASS@controller

auth_strategy = keystone

[agent]

[cors]

[cors.subdomain]

[database]

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = Default

user_domain_name = Default

project_name = service

username = neutron

password = NEUTRON_PASS

[matchmaker_redis]

[nova]

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

[oslo_messaging_amqp]

[oslo_messaging_kafka]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

[oslo_messaging_zmq]

[oslo_middleware]

[oslo_policy]

[qos]

[quotas]

[ssl]

HERE

2. Network configuration (pick the same option as on the controller)

Provider networks option: Linux bridge agent (/etc/neutron/plugins/ml2/linuxbridge_agent.ini); settings omitted here, see the official guide.

Self-service networks option

Linux bridge agent:

mv /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.default}

cat >/etc/neutron/plugins/ml2/linuxbridge_agent.ini <<HERE

[DEFAULT]

[agent]

[linux_bridge]

physical_interface_mappings = provider:ens33

[securitygroup]

enable_security_group = True

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[vxlan]

enable_vxlan = True

local_ip = 192.168.130.201

l2_population = True

HERE

3. Configure Nova to use Neutron

Add to /etc/nova/nova.conf:

[neutron]

url = http://controller:9696

auth_url = http://controller:35357

auth_type = password

project_domain_name = Default

user_domain_name = Default

region_name = RegionOne

project_name = service

username = neutron

password = NEUTRON_PASS

4. Start the services

systemctl restart openstack-nova-compute.service

systemctl enable neutron-linuxbridge-agent.service

systemctl restart neutron-linuxbridge-agent.service

C. Verify the Neutron service configuration

source admin-openrc

neutron ext-list

openstack network agent list






VI. Dashboard (controller node)

https://docs.openstack.org/ocata/install-guide-rdo/horizon-install.html#install-and-configure-components

yum -y install openstack-dashboard

For the configuration details, see the official documentation.

systemctl restart httpd.service memcached.service







VII. Cinder block storage

A. Prepare the environment

1. Create the database

mysql -u root -proot

CREATE DATABASE cinder;

GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'CINDER_DBPASS';

GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'CINDER_DBPASS';

FLUSH PRIVILEGES;

Run the following with admin privileges:

source admin-openrc

2. Create the user

openstack user create --domain default --password CINDER_PASS cinder

openstack role add --project service --user cinder admin

3. Create the services

openstack service create --name cinder --description "OpenStack Block Storage" volume

openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2

4. Create the API endpoints

openstack endpoint create --region RegionOne volume public http://controller:8776/v1/%\(tenant_id\)s

openstack endpoint create --region RegionOne volume internal http://controller:8776/v1/%\(tenant_id\)s

openstack endpoint create --region RegionOne volume admin http://controller:8776/v1/%\(tenant_id\)s

openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(tenant_id\)s

openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(tenant_id\)s

openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
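Those six commands differ only in API version and interface, so a loop can print them for review first. A sketch (the generated file name cinder-endpoints.sh is arbitrary):

```shell
# Print the six cinder endpoint-create commands (2 API versions x
# 3 interfaces) into a file for review; run with: sh cinder-endpoints.sh
{
  for pair in "volume v1" "volumev2 v2"; do
    set -- $pair                 # $1 = service name, $2 = URL version
    for iface in public internal admin; do
      echo "openstack endpoint create --region RegionOne $1 $iface http://controller:8776/$2/%\\(tenant_id\\)s"
    done
  done
} > cinder-endpoints.sh
cat cinder-endpoints.sh
```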


B. Install Cinder

On the controller:

1. Install the Cinder components

yum -y install openstack-cinder

2. Configure Cinder

mv /etc/cinder/cinder.conf{,.default}

cat >/etc/cinder/cinder.conf <<HERE

[DEFAULT]

transport_url = rabbit://openstack:RABBIT_PASS@controller

auth_strategy = keystone

my_ip = 192.168.130.101

[BACKEND]

[BRCD_FABRIC_EXAMPLE]

[CISCO_FABRIC_EXAMPLE]

[COORDINATION]

[FC-ZONE-MANAGER]

[KEY_MANAGER]

[barbican]

[cors]

[cors.subdomain]

[database]

connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[key_manager]

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = Default

user_domain_name = Default

project_name = service

username = cinder

password = CINDER_PASS

[matchmaker_redis]

[oslo_concurrency]

lock_path = /var/lib/cinder/tmp

[oslo_messaging_amqp]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

[oslo_messaging_zmq]

[oslo_middleware]

[oslo_policy]

[oslo_reports]

[oslo_versionedobjects]

[ssl]

HERE

3. Configure Nova to use Cinder

Add to /etc/nova/nova.conf:

[cinder]

os_region_name = RegionOne

4. Update the database schema

su -s /bin/sh -c "cinder-manage db sync" cinder

5. Start the Cinder services

systemctl restart openstack-nova-api.service

systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service

systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service


On block1:

1. Create the storage device

yum -y install lvm2

systemctl enable lvm2-lvmetad.service

systemctl start lvm2-lvmetad.service

pvcreate /dev/sdb

vgcreate cinder-volumes /dev/sdb

In /etc/lvm/lvm.conf:

devices {
    filter = [ "a/sdb/", "r/.*/" ]
}
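One caveat that filter hides: LVM scans only devices the filter accepts, so if the block node's OS disk is itself on LVM (the CentOS default layout), that disk must be accepted too or the root volume group vanishes from LVM's view. A sketch of the adjusted snippet, written to a scratch file purely for illustration:

```shell
# lvm.conf devices filter for a block node whose OS disk (sda) also
# uses LVM; without "a/sda/" the root volume group would be rejected.
SNIP=$(mktemp)
cat >"$SNIP" <<'EOF'
devices {
    filter = [ "a/sda/", "a/sdb/", "r/.*/" ]
}
EOF
cat "$SNIP"
```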

2. Install Cinder

yum -y install openstack-cinder targetcli python-keystone

mv /etc/cinder/cinder.conf{,.default}

cat >/etc/cinder/cinder.conf <<HERE

[DEFAULT]

transport_url = rabbit://openstack:RABBIT_PASS@controller

auth_strategy = keystone

my_ip = 192.168.130.111

enabled_backends = lvm

glance_api_servers = http://controller:9292

[BACKEND]

[BRCD_FABRIC_EXAMPLE]

[CISCO_FABRIC_EXAMPLE]

[COORDINATION]

[FC-ZONE-MANAGER]

[KEY_MANAGER]

[barbican]

[cors]

[cors.subdomain]

[database]

connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

[key_manager]

[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = Default

user_domain_name = Default

project_name = service

username = cinder

password = CINDER_PASS

[matchmaker_redis]

[oslo_concurrency]

lock_path =/var/lib/cinder/tmp

[oslo_messaging_amqp]

[oslo_messaging_notifications]

[oslo_messaging_rabbit]

[oslo_messaging_zmq]

[oslo_middleware]

[oslo_policy]

[oslo_reports]

[oslo_versionedobjects]

[ssl]

[lvm]

volume_driver =cinder.volume.drivers.lvm.LVMVolumeDriver

volume_group= cinder-volumes

iscsi_protocol = iscsi

iscsi_helper= lioadm

HERE


systemctl enable openstack-cinder-volume.service target.service

systemctl restart openstack-cinder-volume.service target.service


C.verify the cinder services are configured correctly

source admin-openrc

openstack volume service list
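The output can also be checked mechanically. A hedged sketch: pipe `openstack volume service list -f value` (machine-readable columns: binary, host, zone, status, state, updated_at) through awk and fail when any service is not up. The here-document below is sample output standing in for the real command, not real data:

```shell
# flag any service whose state column is not "up"; exit nonzero if one is found
check_up() {
  awk '$5 != "up" { print $1 " on " $2 " is " $5; bad=1 } END { exit bad }'
}

# sample output standing in for:
#   openstack volume service list -f value | check_up
check_up <<'EOF'
cinder-scheduler controller nova enabled up 2017-06-01T00:00:00.000000
cinder-volume block1@lvm nova enabled up 2017-06-01T00:00:00.000000
EOF
echo "all cinder services are up"
```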





Tip: if the block storage service state shows down even though the configuration files are correct, check whether the clocks on the block node and the controller are in sync; clock skew alone is enough to make the state report down.
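One hedged way to quantify the skew: `chronyc tracking` reports a "System time" line with the node's current offset. The snippet below parses a sample of that line; on real nodes replace the sample with the actual command output and compare the values from controller and block1:

```shell
# sample "System time" line; on a real node obtain it with:
#   chronyc tracking | grep '^System time'
sample='System time     : 0.000123 seconds fast of NTP time'

# take the field after ": ", then its first word (the offset in seconds)
offset=$(echo "$sample" | awk -F': ' '{print $2}' | awk '{print $1}')
echo "clock offset: ${offset} seconds"
```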



八.launch an instance

1.network

Self-service network

https://docs.openstack.org/newton/install-guide-rdo/launch-instance-networks-selfservice.html



Create the public network (the floating IP pool) as the admin user

https://docs.openstack.org/install-guide/launch-instance-networks-provider.html

source admin-openrc.sh

openstack network create public --external \

--provider-network-type flat \

--provider-physical-network provider

openstack subnet create --network public \

--subnet-range 192.168.130.0/24 \

--allocation-pool start=192.168.130.31,end=192.168.130.99 sub-public


A floating IP pool normally does not need DHCP (it can be disabled with --no-dhcp); it is enabled here only to make testing easier.





Note:

a. The official docs default to a flat network, so for VMware Workstation to use its built-in NAT network directly, set the provider network type to flat; instances created this way can reach other VMs on the Workstation NAT network directly;

If the environment supports VXLAN networks, the network can instead be created with:


openstack network create public --external \




--provider-physical-network provider \




b. If the network is only a floating IP pool, drop the shared attribute and disable DHCP. If you want instances to use the Workstation NAT network directly and conveniently, make the network shared and enable DHCP;

c. The physical network name must match the one in the configuration file

grep 'physical_interface_mappings' \
/etc/neutron/plugins/ml2/linuxbridge_agent.ini

physical_interface_mappings = provider:ens33
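A hedged sketch of checking that the mapped interface really exists on the node. The sample line below stands in for reading linuxbridge_agent.ini with the grep shown above:

```shell
# sample mapping line; on a real node read it from linuxbridge_agent.ini
line='physical_interface_mappings = provider:ens33'

iface=${line##*:}   # strip everything up to the last colon
echo "mapped interface: $iface"

# then verify on the real node that the interface exists:
#   ip link show "$iface"
```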





Create the private network and the router as the demo user

source demo-openrc.sh

Create the private network

openstack network create private --share

openstack subnet create --network private \





Create the router

openstack router create myrouter

Add the internal interface

openstack router add subnet myrouter sub-private



Set the external gateway

openstack router set --external-gateway public myrouter








Verify the networks were created correctly

ip netns


openstack router list



openstack router show myrouter



2.flavor

openstack flavor create --vcpus 1 --ram 64 --disk 1 m1.tiny

openstack flavor create --vcpus 1 --ram 512 --disk 10 m1.nano

openstack flavor create --vcpus 1 --ram 1024 --disk 10 m1.micro

openstack flavor create --vcpus 2 --ram 2048 --disk 100 m2.micro

openstack flavor create --vcpus 2 --ram 4096 --disk 100 m2.medium

openstack flavor create --vcpus 4 --ram 4096 --disk 100 m2.large

3.keypair

source demo-openrc

ssh-keygen -t rsa -b 2048 -N '' -f ~/.ssh/id_rsa -q


openstack keypair create --public-key ~/.ssh/id_rsa.pub mykey

openstack keypair list

4.security group

openstack security group rule create --proto icmp default

openstack security group rule create --proto tcp --dst-port 22 default

5.instance

https://docs.openstack.org/ocata/install-guide-rdo/launch-instance-selfservice.html

source demo-openrc

openstack flavor list

openstack image list

openstack network list

openstack security group list


openstack server create --flavor m1.nano --image cirros \

--nic net-id=f28fb8ff-9b84-418f-81f4-1e40b6a6ba8e --security-group default \

--key-name mykey --max 1 selfservice-instance

openstack server list

openstack console url show selfservice-instance

6.floating ip

Allocate a floating IP

openstack floating ip create public



Associate the floating IP

openstack server add floating ip selfservice-instance 192.168.130.34
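The address 192.168.130.34 above is simply whatever the pool handed out. A hedged sketch of capturing it instead of hard-coding it: `-f value -c floating_ip_address` is openstackclient's machine-readable output, printing only the new address. Here `printf` stands in for the real openstack call so the snippet is runnable anywhere:

```shell
# stand-in for:
#   FIP=$(openstack floating ip create public -f value -c floating_ip_address)
FIP=$(printf '192.168.130.34')

echo "associating $FIP"
# on the real controller:
#   openstack server add floating ip selfservice-instance "$FIP"
```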





7.block

https://docs.openstack.org/ocata/install-guide-rdo/launch-instance-cinder.html

source admin-openrc

openstack volume create --size 1 volume1

openstack volume list

openstack server add volume selfservice-instance volume1

8.quota


openstack quota show demo

openstack quota set --ports 100 demo



Troubleshooting

https://docs.openstack.org/developer/devstack/guides/devstack-with-nested-kvm.html
https://ask.openstack.org/en/question/101928/instance-get-stuck-when-booting-grub/?answer=102645
https://ask.openstack.org/en/question/104408/nested-kvm-w-centos-73/
https://bugs.launchpad.net/nova/+bug/1653430
https://ask.openstack.org/en/question/100543/instance-cant-boot-without-throwing-any-errors/
https://ask.openstack.org/en/question/100756/instances-stuck-on-booting-from-hard-drive/
https://bugzilla.redhat.com/show_bug.cgi?id=1404627
https://github.com/AJNOURI/COA/issues/49

Symptom

Newton/Ocata: server instance stuck in BUILD state






The libvirt log shows unsupported CPU feature flags:

char device redirected to /dev/pts/1 (label charserial1)

warning: host doesn't support requested feature: CPUID.01H:EDX.ds [bit 21]

warning: host doesn't support requested feature: CPUID.01H:ECX.osxsave [bit 27]

warning: host doesn't support requested feature: CPUID.07H:EBX.erms [bit 9]

warning: host doesn't support requested feature: CPUID.80000001H:EDX.pdpe1gb [bit 26]

warning: host doesn't support requested feature: CPUID.01H:EDX.ds [bit 21]

warning: host doesn't support requested feature: CPUID.01H:ECX.osxsave [bit 27]

warning: host doesn't support requested feature: CPUID.07H:EBX.erms [bit 9]

warning: host doesn't support requested feature: CPUID.80000001H:EDX.pdpe1gb [bit 26]

Cause

VT-x nested virtualization was enabled in VMware Fusion, and guests started directly with virt-manager or virsh boot normally with KVM hardware acceleration. Instances created through OpenStack, however, never get KVM acceleration and can only boot under plain QEMU, which is dramatically slower than KVM. The then-latest VirtualBox for Mac (5.1.22) was also tried; it does not even expose an Enable VT-x/AMD-V option.

Workaround

https://docs.openstack.org/mitaka/config-reference/compute/hypervisor-kvm.html

https://docs.openstack.org/project-deploy-guide/kolla-ansible/pike/quickstart.html#install-kolla-for-development


Set virt_type in /etc/nova/nova.conf on the compute node:

[libvirt]

virt_type =qemu

cpu_mode =none

Restart the nova-compute service

Note: the cpu_mode line must be added; without it instances still fail to boot.
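Whether the qemu fallback is needed can be decided mechanically: KVM requires the vmx (Intel) or svm (AMD) flag in the guest's /proc/cpuinfo, which is exactly what working nested virtualization exposes. A hedged sketch:

```shell
# choose virt_type based on hardware virtualization support;
# a missing /proc/cpuinfo or missing flags both fall back to qemu
if grep -Eq '(vmx|svm)' /proc/cpuinfo 2>/dev/null; then
  echo "virt_type = kvm"
else
  echo "virt_type = qemu"
fi
```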



Enable root password login


#!/bin/sh

sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/g' /etc/ssh/sshd_config

sed -i 's/PasswordAuthentication no/PasswordAuthentication yes/g' /etc/ssh/sshd_config

sed -i '/^#UseDNS yes/c UseDNS no' /etc/ssh/sshd_config

cp -f /home/centos/.ssh/authorized_keys /root/.ssh/

cp -f /home/ubuntu/.ssh/authorized_keys /root/.ssh/

cp -f /home/cirros/.ssh/authorized_keys /root/.ssh/

service sshd restart

passwd root <<EOF

root

root

EOF


reboot