PG1X WIKI

My Knowledge Base



Deploy Full HA Charmed OpenStack Stein series Bionic

This procedure describes a fully virtualized, HA-topology Charmed OpenStack deployment running on a single reasonably powerful desktop.

- Deployment is a complex task.
- Even so, deploying OpenStack is easier than operating it.

- OpenStack Stein
- All deployment scripts are available

- Ubuntu Server 18.04 LTS (Bionic)

More detailed diagrams will be added later.

- VMware Workstation Pro 15.5.5 build-16285975
- Memory: 128GB DDR4
- CPU: Core i9-9900 3.10GHz 8C/16T
- Crucial NVMe CT1000P1SSD8 1TB
- Virtual Machine Requirements
  - Virtual machines require nested virtualization to be enabled.
  - 13 Nodes
  - MAAS, the Juju bootstrap controller, and Nagios are not redundant.
  - 2 NICs (on the neutron-gateway / nova-compute nodes)

    1. NIC 1 for
      1. Management
      2. Tenant overlay network
    2. NIC 2 for
      1. Provider Network
    3. No VLAN

  - 2 Disks (on the ceph-osd nodes)

    1. Disk 1 for
      1. System
    2. Disk 2 for
      1. Data

Using VMware Workstation Pro

ubuntu
password

DNS

8.8.8.8
8.8.4.4
1.1.1.1
8.8.8.8 8.8.4.4 1.1.1.1

NTP

ntp.nict.jp
ntp1.jst.mfeed.ad.jp
ntp2.jst.mfeed.ad.jp
ntp3.jst.mfeed.ad.jp
0.pool.ntp.org
1.pool.ntp.org
2.pool.ntp.org
3.pool.ntp.org
ntp.nict.jp ntp1.jst.mfeed.ad.jp ntp2.jst.mfeed.ad.jp ntp3.jst.mfeed.ad.jp 0.pool.ntp.org 1.pool.ntp.org 2.pool.ntp.org 3.pool.ntp.org
  1. Setup
  2. DHCP
  3. DNS (DNSSEC disabled)
  4. NTP
  5. Image

vmrest

PS C:\Program Files (x86)\VMware\VMware Workstation> vmrest.exe -C
vmrest.exe : The term 'vmrest.exe' is not recognized as the name of a cmdlet, function, script file, or operable
program. Check the spelling of the name, or if a path was included, verify that the path is correct and try
again.
At line:1 char:1
+ vmrest.exe -C
+ ~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (vmrest.exe:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException


Suggestion [3,General]: The command vmrest.exe was not found, but does exist at the current location. Windows PowerShell does not load commands from the current location by default. If you trust this command, instead type: ".\vmrest.exe". See "get-help about_Command_Precedence" for more details.
PS C:\Program Files (x86)\VMware\VMware Workstation> .\vmrest.exe -C
VMware Workstation REST API
Copyright (C) 2018-2020 VMware Inc.
All Rights Reserved

vmrest 1.2.0 build-15785246
Username:wnoguchi
New password:
Password does not meet complexity requirements:
- Minimum 1 uppercase character
- Minimum 1 lowercase character
- Minimum 1 numeric digit
- Minimum 1 special character(!#$%&'()*+,-./:;<=>?@[]^_`{|}~)
- Length between 8 and 12
New password:
Password does not meet complexity requirements:
- Minimum 1 uppercase character
- Minimum 1 lowercase character
- Minimum 1 numeric digit
- Minimum 1 special character(!#$%&'()*+,-./:;<=>?@[]^_`{|}~)
- Length between 8 and 12
New password:
Retype new password:
Processing...
Credential updated successfully
PS C:\Program Files (x86)\VMware\VMware Workstation>
PS C:\Program Files (x86)\VMware\VMware Workstation> .\vmrest.exe
VMware Workstation REST API
Copyright (C) 2018-2020 VMware Inc.
All Rights Reserved

vmrest 1.2.0 build-15785246
-
Using the VMware Workstation UI while API calls are in progress is not recommended and may yield unexpected results.
-
Serving HTTP on 127.0.0.1:8697
-
Press Ctrl+C to stop.
interrupt
VMware Workstation REST API server is stopped.
PS C:\Program Files (x86)\VMware\VMware Workstation> vmrest -h
vmrest : The term 'vmrest' is not recognized as the name of a cmdlet, function, script file, or operable
program. Check the spelling of the name, or if a path was included, verify that the path is correct and try
again.
At line:1 char:1
+ vmrest -h
+ ~~~~~~
    + CategoryInfo          : ObjectNotFound: (vmrest:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException


Suggestion [3,General]: The command vmrest was not found, but does exist at the current location. Windows PowerShell does not load commands from the current location by default. If you trust this command, instead type: ".\vmrest". See "get-help about_Command_Precedence" for more details.
PS C:\Program Files (x86)\VMware\VMware Workstation> .\vmrest.exe -h
VMware Workstation REST API
Copyright (C) 2018-2020 VMware Inc.
All Rights Reserved

vmrest 1.2.0 build-15785246
Usage of C:\Program Files (x86)\VMware\VMware Workstation\vmrest.exe:
  -c, --cert-path <cert-path>
        REST API Server certificate path
  -C, --config
        Configure credential
  -d, --debug
        Enable debug logging
  -h, --help
        Print usage
  -i, --ip <ip>
        REST API Server IP binding (default 127.0.0.1)
  -k, --key-path <key-path>
        REST API Server private key path
  -p, --port <port>
        REST API Server port (default 8697)
  -v, --version
        Print version information
PS C:\Program Files (x86)\VMware\VMware Workstation> .\vmrest.exe -i 10.0.12.191
VMware Workstation REST API
Copyright (C) 2018-2020 VMware Inc.
All Rights Reserved

vmrest 1.2.0 build-15785246
missing required value : https is mandatory for non-localhost IP bindings.
Usage of C:\Program Files (x86)\VMware\VMware Workstation\vmrest.exe:
  -c, --cert-path <cert-path>
        REST API Server certificate path
  -C, --config
        Configure credential
  -d, --debug
        Enable debug logging
  -h, --help
        Print usage
  -i, --ip <ip>
        REST API Server IP binding (default 127.0.0.1)
  -k, --key-path <key-path>
        REST API Server private key path
  -p, --port <port>
        REST API Server port (default 8697)
  -v, --version
        Print version information
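Once vmrest is serving, the API can be exercised with curl from another shell. This is a minimal sketch: USER:PASS stands for the credentials configured with `vmrest.exe -C` above, and `<id>` is a VM id taken from the listing response.

```shell
# Base URL for the REST API on the default localhost binding shown above.
VMREST_API="http://127.0.0.1:8697/api"

# List the VMs Workstation knows about (returns JSON with id and path):
#   curl -s -u "USER:PASS" "$VMREST_API/vms"
# Query the power state of one VM, using an id from that listing:
#   curl -s -u "USER:PASS" "$VMREST_API/vms/<id>/power"
```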

Install Juju Client

Install OSC

sudo apt install git

sudo apt install open-vm-tools-desktop
sudo apt install open-vm-tools
sudo apt install openssh-server
sudo apt install vim
sudo apt install python3-venv
sudo apt install tmux
cat <<EOF | sudo tee /etc/sudoers.d/ubuntu
ubuntu ALL=(ALL) NOPASSWD: ALL
EOF
sudo chmod 640 /etc/sudoers.d/ubuntu

ubuntu@os-client:~$ cd work/
ubuntu@os-client:~/work$ python3 -m venv venv
ubuntu@os-client:~/work$ ls
venv
ubuntu@os-client:~/work$ source venv/bin/activate
(venv) ubuntu@os-client:~/work$ python -V
pip install wheel
pip install python-openstackclient
pip install python-swiftclient
pip freeze >requirements.txt
(venv) ubuntu@os-client:~/work$ openstack
(openstack) exit
(venv) ubuntu@os-client:~/work$ openstack --version
openstack 5.2.0
(venv) ubuntu@os-client:~/work$ deactivate
ubuntu@os-client:~/work$ openstack

Command 'openstack' not found, but can be installed with:

sudo snap install openstackclients         # version train, or
sudo apt  install python-openstackclient
sudo apt  install python3-openstackclient

See 'snap info openstackclients' for additional versions.

(venv) ubuntu@os-client:~/work$ cat requirements.txt
appdirs==1.4.3
Babel==2.8.0
certifi==2020.4.5.1
cffi==1.14.0
chardet==3.0.4
cliff==3.1.0
cmd2==0.8.9
cryptography==2.9.2
debtcollector==2.0.1
decorator==4.4.2
dogpile.cache==0.9.2
idna==2.9
iso8601==0.1.12
jmespath==0.9.5
jsonpatch==1.25
jsonpointer==2.0
keystoneauth1==4.0.0
msgpack==1.0.0
munch==2.5.0
netaddr==0.7.19
netifaces==0.10.9
openstacksdk==0.46.0
os-service-types==1.7.0
osc-lib==2.0.0
oslo.config==8.0.2
oslo.i18n==4.0.1
oslo.serialization==3.1.1
oslo.utils==4.1.1
pbr==5.4.5
pkg-resources==0.0.0
prettytable==0.7.2
pycparser==2.20
pyparsing==2.4.7
pyperclip==1.8.0
python-cinderclient==7.0.0
python-keystoneclient==4.0.0
python-novaclient==17.0.0
python-openstackclient==5.2.0
pytz==2020.1
PyYAML==5.3.1
requests==2.23.0
requestsexceptions==1.4.0
rfc3986==1.4.0
simplejson==3.17.0
six==1.14.0
stevedore==1.32.0
urllib3==1.25.9
wcwidth==0.1.9
wrapt==1.12.1

OpenStack Docs: Install Juju

sudo snap install juju --classic

OpenStack Docs: OpenStack Charms Deployment Guide

maas-cloud.yaml

clouds:
  mymaas:
    type: maas
    auth-types: [oauth1]
    endpoint: http://10.0.12.11:5240/MAAS
juju add-cloud --client -f maas-cloud.yaml mymaas
ubuntu@os-client:~/work/openstack$ juju add-cloud --client -f maas-cloud.yaml mymaas
Cloud "mymaas" successfully added to your local client.
You will need to add a credential for this cloud (`juju add-credential mymaas`)
before you can use it to bootstrap a controller (`juju bootstrap mymaas`) or
to create a model (`juju add-model <your model name> mymaas`).
ubuntu@os-client:~/work/openstack$ juju add-credential --client -f maas-creds.yaml mymaas
Credential "ubuntu" added locally for cloud "mymaas".
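The contents of maas-creds.yaml are not shown above. For a MAAS cloud it typically has the following shape (a sketch: `ubuntu` matches the credential name reported in the output above, and the `maas-oauth` value is the API key copied from your user's page in the MAAS web UI):

```yaml
credentials:
  mymaas:
    ubuntu:
      auth-type: oauth1
      maas-oauth: <MAAS API key>
```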

Kill Juju Bootstrap Controller

If you have already deployed the Juju bootstrap controller, charms, and applications, and you want to deploy OpenStack again, destroy everything first.

juju destroy-controller -y --destroy-all-models --destroy-storage maas-controller

If that does not work, use the last resort.

juju kill-controller -y maas-controller
bash 00100-remove-juju-bootstrap-controller.sh
ubuntu@os-client:~/work/openstack/deploy$ bash 00100-remove-juju-bootstrap-controller.sh
Destroying controller
Waiting for hosted model resources to be reclaimed
Waiting on 1 model, 49 machines, 31 applications

(snip)

Waiting on 1 model, 11 machines, 31 applications
Waiting on 1 model, 11 machines, 31 applications
Waiting on 1 model, 10 machines, 31 applications
Waiting on 1 model, 10 machines, 31 applications
Waiting on 1 model, 9 machines, 31 applications
Waiting on 1 model, 8 machines, 31 applications
Waiting on 1 model, 8 machines, 31 applications
Waiting on 1 model, 6 machines, 31 applications
Waiting on 1 model, 4 machines, 31 applications
Waiting on 1 model, 3 machines, 31 applications
Waiting on 1 model, 2 machines, 31 applications
Waiting on 1 model, 29 applications
Waiting on 1 model, 25 applications
Waiting on 1 model, 19 applications
Waiting on 1 model, 12 applications
Waiting on 1 model, 4 applications
Waiting on 1 model, 4 applications
Waiting on 1 model, 4 applications
Waiting on 1 model, 4 applications
Waiting on 1 model, 4 applications
Waiting on 1 model, 3 applications
Waiting on 1 model, 2 applications
Waiting on 1 model, 2 applications
Waiting on 1 model, 2 applications
All hosted models reclaimed, cleaning up controller machines
ubuntu@os-client:~/work/openstack/deploy$

Create Juju Bootstrap Controller

juju bootstrap --to os-juju-bootstrap.os.pg1x.net mymaas maas-controller
ubuntu@os-client:~/work/openstack/deploy$ bash 00200-create-juju-bootstrap-controller.sh
Creating Juju controller "maas-controller" on mymaas/default
Looking for packaged Juju agent version 2.7.6 for amd64
Launching controller instance(s) on mymaas/default...
 - nw44h3 (arch=amd64 mem=8G cores=8)
Installing Juju agent on bootstrap instance
Fetching Juju GUI 2.15.3
Waiting for address
Attempting to connect to 10.0.12.73:22
Connected to 10.0.12.73
Running machine configuration script...
Bootstrap agent now started
Contacting Juju controller at 10.0.12.73 to verify accessibility...

Bootstrap complete, controller "maas-controller" is now available
Controller machines are in the "controller" model
Initial model "default" added

Add machines

juju add-machine os-controller1.os.pg1x.net
juju add-machine os-controller2.os.pg1x.net
juju add-machine os-controller3.os.pg1x.net
juju add-machine os-compute1.os.pg1x.net
juju add-machine os-compute2.os.pg1x.net
juju add-machine os-compute3.os.pg1x.net
juju add-machine os-ceph1.os.pg1x.net
juju add-machine os-ceph2.os.pg1x.net
juju add-machine os-ceph3.os.pg1x.net
juju add-machine os-ceph-backup1.os.pg1x.net
juju add-machine os-ceph-backup2.os.pg1x.net
juju add-machine os-ceph-backup3.os.pg1x.net
juju add-machine os-nagios1.os.pg1x.net
ubuntu@os-client:~/work/openstack/deploy$ bash 00300-add-machines.sh
created machine 0
created machine 1
created machine 2
created machine 3
created machine 4
created machine 5
created machine 6
created machine 7
created machine 8
created machine 9
created machine 10
created machine 11
created machine 12
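The thirteen add-machine calls above can also be expressed as a loop. This is a convenience sketch, with `echo` left in so it only prints the commands; drop `echo` to run them:

```shell
# Same hosts as the add-machine list above, in the same order.
set -- os-controller1 os-controller2 os-controller3 \
       os-compute1 os-compute2 os-compute3 \
       os-ceph1 os-ceph2 os-ceph3 \
       os-ceph-backup1 os-ceph-backup2 os-ceph-backup3 \
       os-nagios1
for host in "$@"; do
  echo juju add-machine "${host}.os.pg1x.net"
done
```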
juju machines
juju status
ubuntu@os-client:~/work/openstack/deploy$ juju machines
Machine  State    DNS  Inst id          Series  AZ       Message
0        pending       os-controller1   bionic  default  starting
1        pending       os-controller2   bionic  default  starting
2        pending       os-controller3   bionic  default  starting
3        pending       os-compute1      bionic  default  starting
4        pending       os-compute2      bionic  default  starting
5        pending       os-compute3      bionic  default  starting
6        pending       os-ceph1         bionic  default  starting
7        pending       os-ceph2         bionic  default  starting
8        pending       os-ceph3         bionic  default  starting
9        pending       os-ceph-backup1  bionic  default  starting
10       pending       os-ceph-backup2  bionic  default  starting
11       pending       os-ceph-backup3  bionic  default  starting
12       pending       os-nagios1       bionic  default  starting
ubuntu@os-client:~/work/openstack/deploy$ juju status
Model    Controller       Cloud/Region    Version  SLA          Timestamp
default  maas-controller  mymaas/default  2.7.6    unsupported  15:51:55+09:00

Machine  State    DNS         Inst id          Series  AZ       Message
0        pending  10.0.12.84  os-controller1   bionic  default  Deploying: From 'Allocated' to 'Deploying'
1        pending  10.0.12.86  os-controller2   bionic  default  Deploying: From 'Allocated' to 'Deploying'
2        pending  10.0.12.77  os-controller3   bionic  default  Deploying: From 'Allocated' to 'Deploying'
3        pending  10.0.12.74  os-compute1      bionic  default  Deploying: From 'Allocated' to 'Deploying'
4        pending  10.0.12.80  os-compute2      bionic  default  Deploying: From 'Allocated' to 'Deploying'
5        pending  10.0.12.82  os-compute3      bionic  default  Deploying: From 'Allocated' to 'Deploying'
6        pending  10.0.12.79  os-ceph1         bionic  default  Deploying: From 'Allocated' to 'Deploying'
7        pending  10.0.12.78  os-ceph2         bionic  default  Deploying: From 'Allocated' to 'Deploying'
8        pending  10.0.12.83  os-ceph3         bionic  default  Deploying: From 'Allocated' to 'Deploying'
9        pending  10.0.12.76  os-ceph-backup1  bionic  default  Deploying: From 'Allocated' to 'Deploying'
10       pending  10.0.12.85  os-ceph-backup2  bionic  default  Deploying: From 'Allocated' to 'Deploying'
11       pending  10.0.12.75  os-ceph-backup3  bionic  default  Deploying: From 'Allocated' to 'Deploying'
12       pending  10.0.12.81  os-nagios1       bionic  default  Deploying: From 'Allocated' to 'Deploying'
watch -n1 --color juju machines --color
watch -n1 --color juju status --color
ubuntu@os-client:~/work/openstack/deploy$ juju machines
Machine  State    DNS         Inst id          Series  AZ       Message
0        started  10.0.12.84  os-controller1   bionic  default  Deployed
1        started  10.0.12.86  os-controller2   bionic  default  Deployed
2        started  10.0.12.77  os-controller3   bionic  default  Deployed
3        started  10.0.12.74  os-compute1      bionic  default  Deployed
4        started  10.0.12.80  os-compute2      bionic  default  Deployed
5        started  10.0.12.82  os-compute3      bionic  default  Deployed
6        started  10.0.12.79  os-ceph1         bionic  default  Deployed
7        started  10.0.12.78  os-ceph2         bionic  default  Deployed
8        started  10.0.12.83  os-ceph3         bionic  default  Deployed
9        started  10.0.12.76  os-ceph-backup1  bionic  default  Deployed
10       started  10.0.12.85  os-ceph-backup2  bionic  default  Deployed
11       started  10.0.12.75  os-ceph-backup3  bionic  default  Deployed
12       started  10.0.12.81  os-nagios1       bionic  default  Deployed

os-client, os-maas1

cat <<EOF | sudo tee /etc/sudoers.d/ubuntu
ubuntu ALL=(ALL) NOPASSWD: ALL
EOF
sudo chmod 640 /etc/sudoers.d/ubuntu
juju ssh 0 sudo systemctl poweroff
juju ssh 1 sudo systemctl poweroff
juju ssh 2 sudo systemctl poweroff
juju ssh 3 sudo systemctl poweroff
juju ssh 4 sudo systemctl poweroff
juju ssh 5 sudo systemctl poweroff
juju ssh 6 sudo systemctl poweroff
juju ssh 7 sudo systemctl poweroff
juju ssh 8 sudo systemctl poweroff
juju ssh 9 sudo systemctl poweroff
juju ssh 10 sudo systemctl poweroff
juju ssh 11 sudo systemctl poweroff
juju ssh 12 sudo systemctl poweroff
juju ssh -m controller 0 sudo systemctl poweroff
ssh os-maas1.os.pg1x.net sudo systemctl poweroff
sudo systemctl poweroff
ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIbn2VyO9Mby6BwkijQmGfH8O2+Uqewn0/oIOXOxMNgCZiztR3v2o5n1l9ET1GuN7iVMe9whoUiNuZMUVEv0INb+A6Yd0M/37tlWlC+qbIjjqL6UzJAqRISdGP1oVmnV2g== wnoguchi@lasthope.pg1x.net
ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJtKr5f5IYZ5whdy8qdPsUDcgxngHaOkZJiNCdIG30PeiDYqTPoawf3NQexYXdG4EABQtcW8oselfwL0hJ5t+AARvsPxUnGLguXZiUaV9W0DsmB65p1gNxMQ8xK1cygFGg== nopass wnoguchi@lasthope.pg1x.net
ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMiv5gZE9dJZw09qYSFyElBqf+d3/QHJhjetW19Ur4oFqqiRU5cAbW8+SON4Qb5yj4QBAzNdRC/jyKURHJ/2tDroGXJe2zKYu9jTlIh6IW4vGrAAmxKLi+bwRSu8muHGbA== ubuntu@os-client
ubuntu@os-client:~/work/openstack$ bash poweroff.sh
ERROR cannot connect to any address: [10.0.12.23:22 10.0.12.23:22]
Connection to 10.0.12.22 closed.
Connection to 10.0.12.26 closed.
Connection to 10.0.12.34 closed.
Connection to 10.0.12.29 closed.
Connection to 10.0.12.30 closed.
Connection to 10.0.12.25 closed.
Connection to 10.0.12.31 closed.
Connection to 10.0.12.32 closed.
Connection to 10.0.12.24 closed.
Connection to 10.0.12.33 closed.
Connection to 10.0.12.28 closed.
Connection to 10.0.12.21 closed.

Deploy ceph-osd

juju deploy --config config/ceph-osd.yaml -n 3 --to os-ceph1.os.pg1x.net,os-ceph2.os.pg1x.net,os-ceph3.os.pg1x.net cs:ceph-osd ceph-osd

ceph-osd:
  osd-devices: /dev/sdb
  source: cloud:bionic-stein

ubuntu@os-client:~/work/openstack/deploy$ bash 00400-deploy-ceph-osd.sh
Located charm "cs:ceph-osd-303".
Deploying charm "cs:ceph-osd-303".

tmux
juju debug-log
juju status "ceph*"
watch -n 1 --color juju status "ceph*" --color

Once you see the "Missing relation: monitor" message, you can go on to the next step.

ubuntu@os-client:~/work/openstack/deploy$ bash 00400-deploy-ceph-osd.sh
Located charm "cs:ceph-osd-301".
Deploying charm "cs:ceph-osd-301".
ubuntu@os-client:~/work/openstack/deploy$ juju status "ceph*"
Model    Controller       Cloud/Region    Version  SLA          Timestamp
default  maas-controller  mymaas/default  2.7.6    unsupported  17:41:28+09:00

App       Version  Status   Scale  Charm     Store       Rev  OS      Notes
ceph-osd  13.2.8   blocked      3  ceph-osd  jujucharms  301  ubuntu

Unit         Workload  Agent  Machine  Public address  Ports  Message
ceph-osd/0*  blocked   idle   6        10.0.12.25             Missing relation: monitor
ceph-osd/1   blocked   idle   7        10.0.12.31             Missing relation: monitor
ceph-osd/2   blocked   idle   8        10.0.12.32             Missing relation: monitor

Machine  State    DNS         Inst id   Series  AZ       Message
6        started  10.0.12.25  os-ceph1  bionic  default  Deployed
7        started  10.0.12.31  os-ceph2  bionic  default  Deployed
8        started  10.0.12.32  os-ceph3  bionic  default  Deployed

Deploy ceph-osd for cinder backup

00450-deploy-ceph-osd-backup.sh
#!/bin/bash
juju deploy --config config/ceph-osd-backup.yaml -n 3 --to 9,10,11 cs:ceph-osd ceph-osd-backup
ceph-osd-backup.yaml
ceph-osd-backup:
  osd-devices: /dev/sdb
  source: cloud:bionic-stein

At this point I mistakenly deployed to the same machines 6,7,8. The correct targets are 9,10,11…

bash 00450-deploy-ceph-osd-backup.sh
ubuntu@os-client:~/work/openstack/deploy$ bash 00450-deploy-ceph-osd-backup.sh
Located charm "cs:ceph-osd-303".
Deploying charm "cs:ceph-osd-303".
juju debug-log --include ceph-osd-backup --include ceph-mon-backup
juju status "ceph*backup"
watch -n 1 --color juju status "ceph*backup" --color

Deploy ceph-mon

This command deploys ceph-mon to machines 0, 1, and 2 as LXD containers.

00500-deploy-ceph-mon.sh
#!/bin/bash
juju deploy --config config/ceph-mon.yaml -n 3 --to lxd:0,lxd:1,lxd:2 cs:ceph-mon ceph-mon
juju add-relation ceph-mon:osd ceph-osd:mon
ceph-mon.yaml
ceph-mon:
  expected-osd-count: 3
  monitor-count: 3
  # ceph-authtool /dev/stdout --name=mon. --gen-key
  monitor-secret: 'AQACsMFeYPKUChAAIaA94CWemo92sLiCteCk3A=='
  source: cloud:bionic-stein
(venv) ubuntu@os-client:~/work/openstack/workspace$ juju ssh ceph-mon/0 ceph-authtool /dev/stdout --name=mon. --gen-key
[mon.]
        key = AQACsMFeYPKUChAAIaA94CWemo92sLiCteCk3A==
Connection to 10.0.12.35 closed.

The YAML above is the correct configuration. However, when I deployed for this document, the expected-osd-count setting was missing, so I have to configure it in a later section. If you use the configuration above as-is, you can skip that later configuration step.

ubuntu@os-client:~/work/openstack/deploy$ bash 00500-deploy-ceph-mon.sh
Located charm "cs:ceph-mon-48".
Deploying charm "cs:ceph-mon-48".
juju debug-log
juju status "ceph*"
watch -n 1 --color juju status "ceph*" --color

If you can see all unit active/idle state, you can go to next step.

ubuntu@os-client:~/work/openstack/deploy$ juju status "ceph*"
Model    Controller       Cloud/Region    Version  SLA          Timestamp
default  maas-controller  mymaas/default  2.7.6    unsupported  18:33:28+09:00

App       Version  Status  Scale  Charm     Store       Rev  OS      Notes
ceph-mon  13.2.8   active      3  ceph-mon  jujucharms   46  ubuntu
ceph-osd  13.2.8   active      3  ceph-osd  jujucharms  301  ubuntu

Unit         Workload  Agent  Machine  Public address  Ports  Message
ceph-mon/0*  active    idle   0/lxd/0  10.0.12.35             Unit is ready and clustered
ceph-mon/1   active    idle   1/lxd/0  10.0.12.36             Unit is ready and clustered
ceph-mon/2   active    idle   2/lxd/0  10.0.12.37             Unit is ready and clustered
ceph-osd/0*  active    idle   6        10.0.12.25             Unit is ready (1 OSD)
ceph-osd/1   active    idle   7        10.0.12.31             Unit is ready (1 OSD)
ceph-osd/2   active    idle   8        10.0.12.32             Unit is ready (1 OSD)

Machine  State    DNS         Inst id              Series  AZ       Message
0        started  10.0.12.23  os-controller1       bionic  default  Deployed
0/lxd/0  started  10.0.12.35  juju-a5ab4c-0-lxd-0  bionic  default  Container started
1        started  10.0.12.22  os-controller2       bionic  default  Deployed
1/lxd/0  started  10.0.12.36  juju-a5ab4c-1-lxd-0  bionic  default  Container started
2        started  10.0.12.26  os-controller3       bionic  default  Deployed
2/lxd/0  started  10.0.12.37  juju-a5ab4c-2-lxd-0  bionic  default  Container started
6        started  10.0.12.25  os-ceph1             bionic  default  Deployed
7        started  10.0.12.31  os-ceph2             bionic  default  Deployed
8        started  10.0.12.32  os-ceph3             bionic  default  Deployed

Deploy ceph-mon for cinder backup

00550-deploy-ceph-mon-backup.sh
#!/bin/bash
juju deploy --config config/ceph-mon-backup.yaml -n 3 --to lxd:0,lxd:1,lxd:2 cs:ceph-mon ceph-mon-backup
juju add-relation ceph-mon-backup:osd ceph-osd-backup:mon
ceph-mon-backup.yaml
ceph-mon-backup:
  expected-osd-count: 3
  monitor-count: 3
  # ceph-authtool /dev/stdout --name=mon. --gen-key
  monitor-secret: 'AQAFsMFew+tBCRAAOW0wwXDlsgVi2IdtR4rjzw=='
  source: cloud:bionic-stein

(venv) ubuntu@os-client:~/work/openstack/workspace$ juju ssh ceph-mon/0 ceph-authtool /dev/stdout --name=mon. --gen-key
[mon.]
        key = AQAFsMFew+tBCRAAOW0wwXDlsgVi2IdtR4rjzw==
Connection to 10.0.12.35 closed.

bash 00550-deploy-ceph-mon-backup.sh

ubuntu@os-client:~/work/openstack/deploy$ bash 00550-deploy-ceph-mon-backup.sh
Located charm "cs:ceph-mon-48".
Deploying charm "cs:ceph-mon-48".
juju debug-log --include ceph-osd-backup --include ceph-mon-backup
juju status "ceph*backup"
watch -n 1 --color juju status "ceph*backup" --color

Deploy rabbitmq-server

juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 cs:rabbitmq-server rabbitmq-server
ubuntu@os-client:~/work/openstack/deploy$ bash 00700-deploy-rabbitmq-server.sh
Located charm "cs:rabbitmq-server-100".
Deploying charm "cs:rabbitmq-server-100".
juju debug-log --include rabbitmq-server
juju status "rabbitmq-server"
watch -n 1 --color juju status "rabbitmq-server" --color
ubuntu@os-client:~$ juju status "rabbitmq-server"
Model    Controller       Cloud/Region    Version  SLA          Timestamp
default  maas-controller  mymaas/default  2.7.6    unsupported  20:39:18+09:00

App              Version  Status  Scale  Charm            Store       Rev  OS      Notes
rabbitmq-server  3.6.10   active      3  rabbitmq-server  jujucharms  100  ubuntu

Unit                Workload  Agent  Machine  Public address  Ports     Message
rabbitmq-server/0   active    idle   0/lxd/2  10.0.12.41      5672/tcp  Unit is ready and clustered
rabbitmq-server/1*  active    idle   1/lxd/2  10.0.12.40      5672/tcp  Unit is ready and clustered
rabbitmq-server/2   active    idle   2/lxd/2  10.0.12.42      5672/tcp  Unit is ready and clustered

Machine  State    DNS         Inst id              Series  AZ       Message
0        started  10.0.12.23  os-controller1       bionic  default  Deployed
0/lxd/2  started  10.0.12.41  juju-a5ab4c-0-lxd-2  bionic  default  Container started
1        started  10.0.12.22  os-controller2       bionic  default  Deployed
1/lxd/2  started  10.0.12.40  juju-a5ab4c-1-lxd-2  bionic  default  Container started
2        started  10.0.12.26  os-controller3       bionic  default  Deployed
2/lxd/2  started  10.0.12.42  juju-a5ab4c-2-lxd-2  bionic  default  Container started
juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 cs:memcached memcached
ubuntu@os-client:~/work/openstack/deploy$ bash 00800-deploy-memcached.sh
Located charm "cs:memcached-28".
Deploying charm "cs:memcached-28".
juju debug-log --include memcached
juju status "memcached"
watch -n 1 --color juju status "memcached" --color
juju ssh 0 sudo systemctl poweroff
juju ssh 1 sudo systemctl poweroff
juju ssh 2 sudo systemctl poweroff
juju ssh -m controller 0 sudo systemctl poweroff
ssh ubuntu@10.0.12.11 sudo systemctl poweroff

take a snapshot "rabbitmq, memcached"

Verify Operations

juju ssh memcached/0
echo "stats settings" | nc localhost 11211 | head
ubuntu@juju-a5ab4c-0-lxd-3:~$ echo "stats settings" | nc localhost 11211 | head
STAT maxbytes 805306368
STAT maxconns 1024
STAT tcpport 11211
STAT udpport 0
STAT inter 0.0.0.0
STAT verbosity 0
STAT oldest 0
STAT evictions on
STAT domain_socket NULL
STAT umask 700

Percona cluster

juju deploy --config config/mysql.yaml -n 3 --to lxd:0,lxd:1,lxd:2 cs:percona-cluster mysql
juju deploy --config config/mysql.yaml cs:hacluster mysql-hacluster
juju add-relation mysql:ha mysql-hacluster:ha
mysql:
  max-connections: 20000
  min-cluster-size: 3
  innodb-buffer-pool-size: 512M
  performance-schema: true
  root-password: password
  source: cloud:bionic-stein
  vip: 10.0.14.130
mysql-hacluster:
  corosync_transport: unicast
ubuntu@os-client:~/work/openstack/deploy$ bash 00900-deploy-mysql.sh
Located charm "cs:percona-cluster-286".
Deploying charm "cs:percona-cluster-286".
Located charm "cs:hacluster-66".
Deploying charm "cs:hacluster-66".
juju debug-log --include mysql
juju status "mysql"
watch -n 1 --color juju status "mysql" --color

Verify Operation

juju status "mysql"
juju ssh mysql/0 sudo crm status
juju ssh mysql/1 ip address show
ping -c 4 10.0.14.130
juju ssh mysql/1 mysql -u root -p"password"
SHOW DATABASES;
SELECT user, host, plugin FROM mysql.user;
EXIT;
ubuntu@os-client:~$ juju status "mysql"
Model    Controller       Cloud/Region    Version  SLA          Timestamp
default  maas-controller  mymaas/default  2.7.6    unsupported  21:47:18+09:00

App              Version  Status  Scale  Charm            Store       Rev  OS      Notes
mysql            5.7.20   active      3  percona-cluster  jujucharms  286  ubuntu
mysql-hacluster           active      3  hacluster        jujucharms   66  ubuntu

Unit                  Workload  Agent  Machine  Public address  Ports     Message
mysql/0               active    idle   0/lxd/4  10.0.12.47      3306/tcp  Unit is ready
  mysql-hacluster/1   active    idle            10.0.12.47                Unit is ready and clustered
mysql/1*              active    idle   1/lxd/4  10.0.12.46      3306/tcp  Unit is ready
  mysql-hacluster/2   active    idle            10.0.12.46                Unit is ready and clustered
mysql/2               active    idle   2/lxd/4  10.0.12.48      3306/tcp  Unit is ready
  mysql-hacluster/0*  active    idle            10.0.12.48                Unit is ready and clustered

Machine  State    DNS         Inst id              Series  AZ       Message
0        started  10.0.12.23  os-controller1       bionic  default  Deployed
0/lxd/4  started  10.0.12.47  juju-a5ab4c-0-lxd-4  bionic  default  Container started
1        started  10.0.12.22  os-controller2       bionic  default  Deployed
1/lxd/4  started  10.0.12.46  juju-a5ab4c-1-lxd-4  bionic  default  Container started
2        started  10.0.12.26  os-controller3       bionic  default  Deployed
2/lxd/4  started  10.0.12.48  juju-a5ab4c-2-lxd-4  bionic  default  Container started
ubuntu@os-client:~$ juju ssh mysql/0 sudo crm status
Stack: corosync
Current DC: juju-a5ab4c-1-lxd-4 (version 1.1.18-2b07d5c5a9) - partition with quorum
Last updated: Sat May  9 12:47:41 2020
Last change: Sat May  9 12:30:22 2020 by hacluster via crmd on juju-a5ab4c-1-lxd-4

3 nodes configured
4 resources configured

Online: [ juju-a5ab4c-0-lxd-4 juju-a5ab4c-1-lxd-4 juju-a5ab4c-2-lxd-4 ]

Full list of resources:

 Resource Group: grp_mysql_vips
     res_mysql_2e4d5b2_vip      (ocf::heartbeat:IPaddr2):       Started juju-a5ab4c-1-lxd-4
 Clone Set: cl_mysql_monitor [res_mysql_monitor]
     Started: [ juju-a5ab4c-0-lxd-4 juju-a5ab4c-1-lxd-4 juju-a5ab4c-2-lxd-4 ]

Connection to 10.0.12.47 closed.
ubuntu@os-client:~$ juju ssh mysql/1 ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:aa:e8:eb brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.12.46/22 brd 10.0.15.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.0.14.130/22 brd 10.0.15.255 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:feaa:e8eb/64 scope link
       valid_lft forever preferred_lft forever
Connection to 10.0.12.46 closed.
ubuntu@os-client:~$ ping -c 4 10.0.14.130
PING 10.0.14.130 (10.0.14.130) 56(84) bytes of data.
64 bytes from 10.0.14.130: icmp_seq=1 ttl=64 time=1.10 ms
64 bytes from 10.0.14.130: icmp_seq=2 ttl=64 time=0.343 ms
64 bytes from 10.0.14.130: icmp_seq=3 ttl=64 time=1.40 ms
64 bytes from 10.0.14.130: icmp_seq=4 ttl=64 time=0.484 ms

--- 10.0.14.130 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3021ms
rtt min/avg/max/mdev = 0.343/0.835/1.407/0.438 ms
ubuntu@os-client:~$ juju ssh mysql/1 mysql -u root -p "password"
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
Connection to 10.0.12.46 closed.
ubuntu@os-client:~$ juju ssh mysql/1 mysql -u root -p"password"
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1017
Server version: 5.7.20-18-18 Percona XtraDB Cluster (GPL), Release rel18, Revision e19a6b7, WSREP version 29.24, wsrep_29.24

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.00 sec)

mysql> select user, host, plugin from mysql.user;
+---------------+---------------+-----------------------+
| user          | host          | plugin                |
+---------------+---------------+-----------------------+
| root          | localhost     | mysql_native_password |
| mysql.session | localhost     | mysql_native_password |
| mysql.sys     | localhost     | mysql_native_password |
| sstuser       | localhost     | mysql_native_password |
| sstuser       | ip6-localhost | mysql_native_password |
+---------------+---------------+-----------------------+
5 rows in set (0.00 sec)

mysql> quit
Bye
Connection to 10.0.12.46 closed.
juju ssh 0 sudo systemctl poweroff
juju ssh 1 sudo systemctl poweroff
juju ssh 2 sudo systemctl poweroff
juju ssh -m controller 0 sudo systemctl poweroff
ssh ubuntu@10.0.12.11 sudo systemctl poweroff

take a snapshot “mysql(percona cluster)”

mysql outage

https://jaas.ai/percona-cluster

Cold Boot section

ubuntu@os-client:~$ juju status mysql
Model    Controller       Cloud/Region    Version  SLA          Timestamp
default  maas-controller  mymaas/default  2.7.6    unsupported  22:08:47+09:00

App              Version  Status   Scale  Charm            Store       Rev  OS      Notes
mysql            5.7.20   blocked      3  percona-cluster  jujucharms  286  ubuntu
mysql-hacluster           active       3  hacluster        jujucharms   66  ubuntu

Unit                  Workload  Agent  Machine  Public address  Ports     Message
mysql/0*              blocked   idle   0/lxd/4  10.0.12.47      3306/tcp  Services not running that should be: mysql
  mysql-hacluster/1*  active    idle            10.0.12.47                Unit is ready and clustered
mysql/1               blocked   idle   1/lxd/4  10.0.12.46      3306/tcp  MySQL is down. Sequence Number: 2. Safe To Bootstrap: 0
  mysql-hacluster/2   active    idle            10.0.12.46                Unit is ready and clustered
mysql/2               blocked   idle   2/lxd/4  10.0.12.48      3306/tcp  Services not running that should be: mysql
  mysql-hacluster/0   active    idle            10.0.12.48                Unit is ready and clustered

Machine  State    DNS         Inst id              Series  AZ       Message
0        started  10.0.12.23  os-controller1       bionic  default  Deployed
0/lxd/4  started  10.0.12.47  juju-a5ab4c-0-lxd-4  bionic  default  Container started
1        started  10.0.12.22  os-controller2       bionic  default  Deployed
1/lxd/4  started  10.0.12.46  juju-a5ab4c-1-lxd-4  bionic  default  Container started
2        started  10.0.12.26  os-controller3       bionic  default  Deployed
2/lxd/4  started  10.0.12.48  juju-a5ab4c-2-lxd-4  bionic  default  Container started
ubuntu@os-client:~/work/openstack/deploy$ juju ssh mysql/0 sudo cat /var/lib/percona-xtradb-cluster/grastate.dat
# GALERA saved state
version: 2.1
uuid:    7fbcf7ac-91ef-11ea-9a8a-dffd5f25db10
seqno:   2
safe_to_bootstrap: 0
 Connection to 10.0.12.47 closed.
ubuntu@os-client:~/work/openstack/deploy$ juju ssh mysql/1 sudo cat /var/lib/percona-xtradb-cluster/grastate.dat
# GALERA saved state
version: 2.1
uuid:    7fbcf7ac-91ef-11ea-9a8a-dffd5f25db10
seqno:   2
safe_to_bootstrap: 0
 Connection to 10.0.12.46 closed.
ubuntu@os-client:~/work/openstack/deploy$ juju ssh mysql/2 sudo cat /var/lib/percona-xtradb-cluster/grastate.dat
# GALERA saved state
version: 2.1
uuid:    7fbcf7ac-91ef-11ea-9a8a-dffd5f25db10
seqno:   2
safe_to_bootstrap: 1
 Connection to 10.0.12.48 closed.

If the unit with the latest sequence number is the leader:

juju run-action mysql/leader bootstrap-pxc --wait
juju run-action mysql/non-leader notify-bootstrapped --wait

If the unit with the latest sequence number is not the leader (here mysql/2):

juju run-action mysql/2 bootstrap-pxc --wait
juju run-action mysql/leader notify-bootstrapped --wait
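The rule above boils down to: bootstrap from the unit whose grastate.dat carries the highest seqno. A minimal helper to pick that unit from collected "unit seqno" pairs (a sketch; the pairs are assumed to have been gathered with the juju ssh commands shown earlier):

```shell
# Hypothetical helper: given "unit seqno" pairs on stdin, print the
# unit with the highest sequence number -- run bootstrap-pxc there.
pick_bootstrap_unit() {
  sort -k2,2n | tail -n 1 | cut -d' ' -f1
}

printf 'mysql/0 13886\nmysql/1 13888\nmysql/2 13888\n' | pick_bootstrap_unit
# → mysql/2
```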
ubuntu@os-client:~/work/openstack/deploy$ juju status mysql
Model    Controller       Cloud/Region    Version  SLA          Timestamp
default  maas-controller  mymaas/default  2.7.6    unsupported  10:27:27+09:00

App              Version  Status   Scale  Charm            Store       Rev  OS      Notes
mysql            5.7.20   blocked      3  percona-cluster  jujucharms  286  ubuntu
mysql-hacluster           active       3  hacluster        jujucharms   66  ubuntu

Unit                  Workload  Agent  Machine  Public address  Ports     Message
mysql/0*              blocked   idle   0/lxd/4  10.0.12.47      3306/tcp  MySQL is down. Sequence Number: 13886. Safe To Bootstrap: 0
  mysql-hacluster/1*  active    idle            10.0.12.47                Unit is ready and clustered
mysql/1               blocked   idle   1/lxd/4  10.0.12.46      3306/tcp  MySQL is down. Sequence Number: 13888. Safe To Bootstrap: 0
  mysql-hacluster/2   active    idle            10.0.12.46                Unit is ready and clustered
mysql/2               blocked   idle   2/lxd/4  10.0.12.48      3306/tcp  MySQL is down. Sequence Number: 13888. Safe To Bootstrap: 1
  mysql-hacluster/0   active    idle            10.0.12.48                Unit is ready and clustered

Machine  State    DNS         Inst id              Series  AZ       Message
0        started  10.0.12.23  os-controller1       bionic  default  Deployed
0/lxd/4  started  10.0.12.47  juju-a5ab4c-0-lxd-4  bionic  default  Container started
1        started  10.0.12.22  os-controller2       bionic  default  Deployed
1/lxd/4  started  10.0.12.46  juju-a5ab4c-1-lxd-4  bionic  default  Container started
2        started  10.0.12.26  os-controller3       bionic  default  Deployed
2/lxd/4  started  10.0.12.48  juju-a5ab4c-2-lxd-4  bionic  default  Container started

ubuntu@os-client:~/work/openstack/deploy$ juju run-action mysql/2 bootstrap-pxc --wait
unit-mysql-2:
  UnitId: mysql/2
  id: "19"
  results:
    Stderr: |
      Unknown operation bootstrap-pxc.
    Stdout: |
      active
    output: Bootstrap succeeded. Wait for the other units to run update-status
  status: completed
  timing:
    completed: 2020-05-18 01:29:23 +0000 UTC
    enqueued: 2020-05-18 01:28:34 +0000 UTC
    started: 2020-05-18 01:28:39 +0000 UTC
ubuntu@os-client:~/work/openstack/deploy$ juju status mysql
Model    Controller       Cloud/Region    Version  SLA          Timestamp
default  maas-controller  mymaas/default  2.7.6    unsupported  10:32:13+09:00

App              Version  Status   Scale  Charm            Store       Rev  OS      Notes
mysql            5.7.20   waiting      3  percona-cluster  jujucharms  286  ubuntu
mysql-hacluster           active       3  hacluster        jujucharms   66  ubuntu

Unit                  Workload  Agent  Machine  Public address  Ports     Message
mysql/0*              waiting   idle   0/lxd/4  10.0.12.47      3306/tcp  Unit waiting for cluster bootstrap
  mysql-hacluster/1*  active    idle            10.0.12.47                Unit is ready and clustered
mysql/1               waiting   idle   1/lxd/4  10.0.12.46      3306/tcp  Unit waiting for cluster bootstrap
  mysql-hacluster/2   active    idle            10.0.12.46                Unit is ready and clustered
mysql/2               waiting   idle   2/lxd/4  10.0.12.48      3306/tcp  Unit waiting for cluster bootstrap
  mysql-hacluster/0   active    idle            10.0.12.48                Unit is ready and clustered

Machine  State    DNS         Inst id              Series  AZ       Message
0        started  10.0.12.23  os-controller1       bionic  default  Deployed
0/lxd/4  started  10.0.12.47  juju-a5ab4c-0-lxd-4  bionic  default  Container started
1        started  10.0.12.22  os-controller2       bionic  default  Deployed
1/lxd/4  started  10.0.12.46  juju-a5ab4c-1-lxd-4  bionic  default  Container started
2        started  10.0.12.26  os-controller3       bionic  default  Deployed
2/lxd/4  started  10.0.12.48  juju-a5ab4c-2-lxd-4  bionic  default  Container started

ubuntu@os-client:~/work/openstack/deploy$ juju run-action mysql/leader notify-bootstrapped --wait
unit-mysql-0:
  UnitId: mysql/0
  id: "20"
  results: {}
  status: completed
  timing:
    completed: 2020-05-18 01:33:23 +0000 UTC
    enqueued: 2020-05-18 01:33:21 +0000 UTC
    started: 2020-05-18 01:33:22 +0000 UTC
ubuntu@os-client:~/work/openstack/deploy$ juju status mysql
Model    Controller       Cloud/Region    Version  SLA          Timestamp
default  maas-controller  mymaas/default  2.7.6    unsupported  10:35:50+09:00

App              Version  Status  Scale  Charm            Store       Rev  OS      Notes
mysql            5.7.20   active      3  percona-cluster  jujucharms  286  ubuntu
mysql-hacluster           active      3  hacluster        jujucharms   66  ubuntu

Unit                  Workload  Agent  Machine  Public address  Ports     Message
mysql/0*              active    idle   0/lxd/4  10.0.12.47      3306/tcp  Unit is ready
  mysql-hacluster/1*  active    idle            10.0.12.47                Unit is ready and clustered
mysql/1               active    idle   1/lxd/4  10.0.12.46      3306/tcp  Unit is ready
  mysql-hacluster/2   active    idle            10.0.12.46                Unit is ready and clustered
mysql/2               active    idle   2/lxd/4  10.0.12.48      3306/tcp  Unit is ready
  mysql-hacluster/0   active    idle            10.0.12.48                Unit is ready and clustered

Machine  State    DNS         Inst id              Series  AZ       Message
0        started  10.0.12.23  os-controller1       bionic  default  Deployed
0/lxd/4  started  10.0.12.47  juju-a5ab4c-0-lxd-4  bionic  default  Container started
1        started  10.0.12.22  os-controller2       bionic  default  Deployed
1/lxd/4  started  10.0.12.46  juju-a5ab4c-1-lxd-4  bionic  default  Container started
2        started  10.0.12.26  os-controller3       bionic  default  Deployed
2/lxd/4  started  10.0.12.48  juju-a5ab4c-2-lxd-4  bionic  default  Container started
unit-mysql-2: 22:16:25 WARNING unit.mysql/2.juju-log min-cluster-size is not defined, race conditions may occur if this is not a single unit deployment.
mysql:
  min-cluster-size: 3
juju config mysql --file config/mysql.yaml
juju ssh mysql/0 sudo systemctl poweroff
juju ssh mysql/1 sudo systemctl poweroff
juju ssh mysql/2 sudo systemctl poweroff
juju ssh 0 sudo systemctl poweroff
juju ssh 1 sudo systemctl poweroff
juju ssh 2 sudo systemctl poweroff
juju ssh -m controller 0 sudo systemctl poweroff
ssh ubuntu@10.0.12.11 sudo systemctl poweroff

take a snapshot “mysql(percona cluster) fix”

  • Garbage
ubuntu@os-client:~/work/openstack/deploy$ seq 0 2 | xargs -I{} juju ssh mysql/{} sudo cat /var/lib/percona-xtradb-cluster/grastate.dat
# GALERA saved state
version: 2.1
uuid:    720546f7-91f9-11ea-9158-9ef3e369231b
seqno:   0
safe_to_bootstrap: 0
 # GALERA saved state
version: 2.1
uuid:    720546f7-91f9-11ea-9158-9ef3e369231b
seqno:   0
safe_to_bootstrap: 0
 # GALERA saved state
version: 2.1
uuid:    720546f7-91f9-11ea-9158-9ef3e369231b
seqno:   0
safe_to_bootstrap: 1
 ubuntu@os-client:~/work/openstack/deploy$ seq 0 2 | xargs -I{} echo juju ssh mysql/{} sudo cat /var/lib/percona-xtradb-cluster/grastate.dat
juju ssh mysql/0 sudo cat /var/lib/percona-xtradb-cluster/grastate.dat
juju ssh mysql/1 sudo cat /var/lib/percona-xtradb-cluster/grastate.dat
juju ssh mysql/2 sudo cat /var/lib/percona-xtradb-cluster/grastate.dat
  • Service is stopped

This method is also applicable to other application components.

juju run-action mysql/1 resume
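Following the xargs pattern used earlier in this page, the resume action can be repeated across all units. A dry-run sketch (hypothetical; assumes three units of an application named mysql, and the leading echo is dropped to actually run the actions):

```shell
# Print the resume action for each unit of the application;
# remove "echo" after the -I{} to execute for real.
app=mysql
seq 0 2 | xargs -I{} echo juju run-action "${app}/{}" resume --wait
```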

ceph-mon missing setting

juju config ceph-mon --file config/ceph-mon.yaml
ceph-mon:
  expected-osd-count: 3
juju ssh 0 sudo systemctl poweroff
juju ssh 1 sudo systemctl poweroff
juju ssh 2 sudo systemctl poweroff
juju ssh ceph-osd/0 sudo systemctl poweroff
juju ssh ceph-osd/1 sudo systemctl poweroff
juju ssh ceph-osd/2 sudo systemctl poweroff
juju ssh -m controller 0 sudo systemctl poweroff
ssh ubuntu@10.0.12.11 sudo systemctl poweroff

take a snapshot “ceph-mon config expected-osd-count”

keystone

juju deploy --config config/keystone.yaml -n 3 --to lxd:0,lxd:1,lxd:2 cs:keystone keystone
juju deploy --config config/keystone.yaml cs:hacluster keystone-hacluster
juju add-relation keystone:ha keystone-hacluster:ha
#
juju add-relation keystone:shared-db mysql:shared-db
keystone:
  admin-password: password
  openstack-origin: cloud:bionic-stein
  vip: 10.0.14.131
keystone-hacluster:
  corosync_transport: unicast
ubuntu@os-client:~/work/openstack/deploy$ bash 01000-deploy-keystone.sh
Located charm "cs:keystone-312".
Deploying charm "cs:keystone-312".
Located charm "cs:hacluster-66".
Deploying charm "cs:hacluster-66".
juju debug-log --include keystone
juju status "keystone"
watch -n 1 --color juju status "keystone" --color

Verify Operation

juju status "keystone"
juju ssh keystone/0 sudo crm status
juju ssh keystone/0 ip address show
ping -c 4 10.0.14.131
juju ssh mysql/0 mysql -u root -p"password"
SHOW DATABASES;
SHOW TABLES FROM keystone;
mkdir -p ~/.config/openstack
vim ~/work/openstack/workspace/clouds.yaml
ln -sf ~/work/openstack/workspace/clouds.yaml ~/.config/openstack/clouds.yaml
ll ~/.config/openstack/clouds.yaml
source ~/work/venv/bin/activate
openstack --os-cloud default token issue
alias openstack="openstack --os-cloud default"
openstack token issue
deactivate

OpenStack Docs: Configuration

clouds:
  default:
    auth:
      auth_url: http://10.0.14.131:35357/
      project_name: admin
      username: admin
      password: password
    region_name: RegionOne
    project_domain_name: admin_domain
    user_domain_name: admin_domain
ubuntu@os-client:~$ juju status "keystone"
Model    Controller       Cloud/Region    Version  SLA          Timestamp
default  maas-controller  mymaas/default  2.7.6    unsupported  00:03:00+09:00

App                 Version  Status  Scale  Charm      Store       Rev  OS      Notes
keystone            15.0.0   active      3  keystone   jujucharms  312  ubuntu
keystone-hacluster           active      3  hacluster  jujucharms   66  ubuntu

Unit                     Workload  Agent  Machine  Public address  Ports     Message
keystone/0               active    idle   0/lxd/5  10.0.12.49      5000/tcp  Unit is ready
  keystone-hacluster/1   active    idle            10.0.12.49                Unit is ready and clustered
keystone/1*              active    idle   1/lxd/5  10.0.12.50      5000/tcp  Unit is ready
  keystone-hacluster/0*  active    idle            10.0.12.50                Unit is ready and clustered
keystone/2               active    idle   2/lxd/5  10.0.12.51      5000/tcp  Unit is ready
  keystone-hacluster/2   active    idle            10.0.12.51                Unit is ready and clustered

Machine  State    DNS         Inst id              Series  AZ       Message
0        started  10.0.12.23  os-controller1       bionic  default  Deployed
0/lxd/5  started  10.0.12.49  juju-a5ab4c-0-lxd-5  bionic  default  Container started
1        started  10.0.12.22  os-controller2       bionic  default  Deployed
1/lxd/5  started  10.0.12.50  juju-a5ab4c-1-lxd-5  bionic  default  Container started
2        started  10.0.12.26  os-controller3       bionic  default  Deployed
2/lxd/5  started  10.0.12.51  juju-a5ab4c-2-lxd-5  bionic  default  Container started

ubuntu@os-client:~$ juju ssh keystone/0 sudo crm status
Stack: corosync
Current DC: juju-a5ab4c-0-lxd-5 (version 1.1.18-2b07d5c5a9) - partition with quorum
Last updated: Sat May  9 15:03:08 2020
Last change: Sat May  9 14:51:26 2020 by hacluster via crmd on juju-a5ab4c-1-lxd-5

3 nodes configured
4 resources configured

Online: [ juju-a5ab4c-0-lxd-5 juju-a5ab4c-1-lxd-5 juju-a5ab4c-2-lxd-5 ]

Full list of resources:

 Resource Group: grp_ks_vips
     res_ks_56e2056_vip (ocf::heartbeat:IPaddr2):       Started juju-a5ab4c-0-lxd-5
 Clone Set: cl_ks_haproxy [res_ks_haproxy]
     Started: [ juju-a5ab4c-0-lxd-5 juju-a5ab4c-1-lxd-5 juju-a5ab4c-2-lxd-5 ]

Connection to 10.0.12.49 closed.
ubuntu@os-client:~$ juju ssh keystone/0 ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
16: eth0@if17: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:e6:11:8b brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.12.49/22 brd 10.0.15.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.0.14.131/22 brd 10.0.15.255 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fee6:118b/64 scope link
       valid_lft forever preferred_lft forever
Connection to 10.0.12.49 closed.
ubuntu@os-client:~$ ping -c 4 10.0.14.131
PING 10.0.14.131 (10.0.14.131) 56(84) bytes of data.
64 bytes from 10.0.14.131: icmp_seq=1 ttl=64 time=0.181 ms
64 bytes from 10.0.14.131: icmp_seq=2 ttl=64 time=0.980 ms
64 bytes from 10.0.14.131: icmp_seq=3 ttl=64 time=3.35 ms
64 bytes from 10.0.14.131: icmp_seq=4 ttl=64 time=0.251 ms

--- 10.0.14.131 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3026ms
rtt min/avg/max/mdev = 0.181/1.192/3.358/1.289 ms
ubuntu@os-client:~$ juju ssh mysql/0 mysql -u root -p"password"
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2831
Server version: 5.7.20-18-18 Percona XtraDB Cluster (GPL), Release rel18, Revision e19a6b7, WSREP version 29.24, wsrep_29.24

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> SHOW DATABASES;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| keystone           |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.00 sec)

mysql> SHOW TABLES FOR keystone;
ERROR 1046 (3D000): No database selected
mysql> SHOW TABLES FROM keystone;
+------------------------------------+
| Tables_in_keystone                 |
+------------------------------------+
| access_rule                        |
| access_token                       |
| application_credential             |
| application_credential_access_rule |
| application_credential_role        |
| assignment                         |
| config_register                    |
| consumer                           |
| credential                         |
| endpoint                           |
| endpoint_group                     |
| federated_user                     |
| federation_protocol                |
| group                              |
| id_mapping                         |
| identity_provider                  |
| idp_remote_ids                     |
| implied_role                       |
| limit                              |
| local_user                         |
| mapping                            |
| migrate_version                    |
| nonlocal_user                      |
| password                           |
| policy                             |
| policy_association                 |
| project                            |
| project_endpoint                   |
| project_endpoint_group             |
| project_tag                        |
| region                             |
| registered_limit                   |
| request_token                      |
| revocation_event                   |
| role                               |
| sensitive_config                   |
| service                            |
| service_provider                   |
| system_assignment                  |
| token                              |
| trust                              |
| trust_role                         |
| user                               |
| user_group_membership              |
| user_option                        |
| whitelisted_config                 |
+------------------------------------+
46 rows in set (0.00 sec)

mysql> SELECT * FROM `keystone`.`user`;
+----------------------------------+-----------------------------+---------+----------------------------------+---------------------+----------------+----------------------------------+
| id                               | extra                       | enabled | default_project_id               | created_at          | last_active_at | domain_id                        |
+----------------------------------+-----------------------------+---------+----------------------------------+---------------------+----------------+----------------------------------+
| 7d295fc2efef4f51b9fcd57202b04207 | {"email": "juju@localhost"} |       1 | af6a392e752a4e188bc1f3e215d913c3 | 2020-05-09 14:53:59 | NULL           | e3d249510ae44d319268a984919e4a90 |
| ae645266bd8a43398cbe81ebb73cc603 | {"email": "juju@localhost"} |       1 | 386976af481d4f108d6e72112615d143 | 2020-05-09 14:53:57 | NULL           | 0b28f522ee3641e09e260d970601f11a |
| c34133f8c318436f97b7d55ed904585c | {"email": "juju@localhost"} |       1 | NULL                             | 2020-05-09 14:42:27 | NULL           | 09bf4b4adc6449d0805f40fe44d172dc |
+----------------------------------+-----------------------------+---------+----------------------------------+---------------------+----------------+----------------------------------+
3 rows in set (0.00 sec)

mysql> SELECT * FROM `keystone`.`service`;
+----------------------------------+--------------+---------+------------------------------------------------------------------+
| id                               | type         | enabled | extra                                                            |
+----------------------------------+--------------+---------+------------------------------------------------------------------+
| 6f24e9d01ce145baacbf153a5dc3e9d4 | identity     |       1 | {"name": "keystone", "description": "Keystone Identity Service"} |
| 9425fec231a0486bb6a765d5277b40ba | object-store |       1 | {"name": "swift", "description": "Swift Object Storage Service"} |
+----------------------------------+--------------+---------+------------------------------------------------------------------+
2 rows in set (0.00 sec)

mysql> SELECT * FROM `keystone`.`local_user`;
+----+----------------------------------+----------------------------------+-------+-------------------+----------------+
| id | user_id                          | domain_id                        | name  | failed_auth_count | failed_auth_at |
+----+----------------------------------+----------------------------------+-------+-------------------+----------------+
|  1 | c34133f8c318436f97b7d55ed904585c | 09bf4b4adc6449d0805f40fe44d172dc | admin |                 0 | NULL           |
|  4 | ae645266bd8a43398cbe81ebb73cc603 | 0b28f522ee3641e09e260d970601f11a | swift |                 0 | NULL           |
|  7 | 7d295fc2efef4f51b9fcd57202b04207 | e3d249510ae44d319268a984919e4a90 | swift |                 0 | NULL           |
+----+----------------------------------+----------------------------------+-------+-------------------+----------------+
3 rows in set (0.00 sec)

mysql> SELECT * FROM `keystone`.`nonlocal_user`;
Empty set (0.00 sec)

mysql> SELECT * FROM `keystone`.`user_option`;
Empty set (0.00 sec)

mysql> EXIT;
Bye
Connection to 10.0.12.47 closed.
(venv) ubuntu@os-client:~/work/openstack/workspace$ ln -sf ~/work/openstack/workspace/clouds.yaml ~/.config/openstack/clouds.yaml
(venv) ubuntu@os-client:~/work/openstack/workspace$ ll ~/.config/openstack/clouds.yaml
lrwxrwxrwx 1 ubuntu ubuntu 49 May 10 00:00 /home/ubuntu/.config/openstack/clouds.yaml -> /home/ubuntu/work/openstack/workspace/clouds.yaml
(venv) ubuntu@os-client:~/work/openstack/workspace$ cat /home/ubuntu/.config/openstack/clouds.yaml
clouds:
  default:
    auth:
      auth_url: http://10.0.14.131:35357/
      project_name: admin
      username: admin
      password: password
    region_name: RegionOne
    project_domain_name: admin_domain
    user_domain_name: admin_domain


ubuntu@os-client:~/work/openstack/workspace$ source ~/work/venv/bin/activate
(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack token issue
Missing value auth-url required for auth plugin password
(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack --os-cloud default token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2020-05-09T15:57:22+0000                                                                                                                                                                |
| id         | gAAAAABetsTSlpp_Mje3MQhcfFDkanyhAV2JsGjDwbuKEJxKD9ENpLbipymxdCyBIsgWpz5A7sT1zr3DRc3HkbLjxy48xB5RUCgi_AaNxoxX8utsDtq2MZBtb3QO5m6P04wT7EJPpsaa0l9QrUUOltJ-bN4kaNyMUzLMkTi0A_fPuTJQhU2JhR0 |
| project_id | f4b32f7133004e30a770ca7ef4084856                                                                                                                                                        |
| user_id    | c34133f8c318436f97b7d55ed904585c                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack --os-cloud default token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2020-05-09T15:57:27+0000                                                                                                                                                                |
| id         | gAAAAABetsTXmUrHIsaLdc-Zsf5l_DgaJihabjp1y_eLhkfhjdO-BvRicS-RdNd7U4sGtKgfri1YPwcgobUpr67WKbG4Xsp2p6FJte7DQX4DnEFW0dIRTOrS4ZT1oaaf48BYlTT98MwYL4wQ-Ygo7CkrLpvHOjcmm3lxKzQC1UGoE-A9sURPyj0 |
| project_id | f4b32f7133004e30a770ca7ef4084856                                                                                                                                                        |
| user_id    | c34133f8c318436f97b7d55ed904585c                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack --os-cloud default token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2020-05-09T15:57:29+0000                                                                                                                                                                |
| id         | gAAAAABetsTZAuqq52Vy7LxYgl3fW851su7ytFjMSik4UlfBsHD6-cGJ7y_T6QrcxQ8XkVDLgP-ZHoK60GQn8NRpD6wuQ4o-FbR7oYcuFt5Wh1BjVKcBFlmMZXGGKPMo4VDSamCLTDK4DP75ipzGfRTz2HypcKcWpIQMcJEpEPXnFtEanVsv8lE |
| project_id | f4b32f7133004e30a770ca7ef4084856                                                                                                                                                        |
| user_id    | c34133f8c318436f97b7d55ed904585c                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
(venv) ubuntu@os-client:~/work/openstack/workspace$ alias openstack="openstack --os-cloud default"
(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2020-05-09T16:01:57+0000                                                                                                                                                                |
| id         | gAAAAABetsXlugoL0QNFbmHRF_ylS15cOmgSfmfYYFvYXtS278NH1eT953rvmBwVqjmoZRCb6JNjx745aIYQ5GJlSRo1I6HdXkvjR3x9i8EjGVJQTXwMdIvi13aV3fVZQ7LTq5CBYPhzZXgi3Z8WYp1xJjyynbn7Ia8LRpaZKUI0ejv4ugm3CBY |
| project_id | f4b32f7133004e30a770ca7ef4084856                                                                                                                                                        |
| user_id    | c34133f8c318436f97b7d55ed904585c                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2020-05-09T16:01:59+0000                                                                                                                                                                |
| id         | gAAAAABetsXnVrytuWAwC3qrjOBwyEkhPk8Vg9N6mQhmRlC1T4-AFxW2u6YP1pvCcmyI0NuoQ_7bHqYNGpwCEWXm8BGcV0Xt78eM4BjArI2_x80loLhmxuUXBurIgyn4e5GJiwabRbiE6n-autj3f9zk_6azx_hj43kA44s3V00tZXmiTd1s-7g |
| project_id | f4b32f7133004e30a770ca7ef4084856                                                                                                                                                        |
| user_id    | c34133f8c318436f97b7d55ed904585c                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2020-05-09T16:02:00+0000                                                                                                                                                                |
| id         | gAAAAABetsXoX3fPDB7wi3vPXMDqW-VgDjxTcN8lo8Ey3gCiI9ejJu6ECvEv-F6iAHSm51-dh2pW2tqDl7WL15r3pO1RpngtYUVin2O3wtbv8mBY8Kl8J-ljikyjtqZMBQ60U9ZhiLBI9_9SD8l_CfgtMgjPcqkZ16cFVnOpMUVMYu6HDcnWqkg |
| project_id | f4b32f7133004e30a770ca7ef4084856                                                                                                                                                        |
| user_id    | c34133f8c318436f97b7d55ed904585c                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
(venv) ubuntu@os-client:~/work/openstack/workspace$ swift stat

Command 'swift' not found, but can be installed with:

sudo snap install openstackclients     # version train, or
sudo apt  install python-swiftclient
sudo apt  install python3-swiftclient

See 'snap info openstackclients' for additional versions.

(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack container list

(venv) ubuntu@os-client:~/work/openstack/workspace$ deactivate
ubuntu@os-client:~/work/openstack/workspace$ snap info openstackclients
name:      openstackclients
summary:   OpenStack Client tools
publisher: James Page
store-url: https://snapcraft.io/openstackclients
contact:   james.page@canonical.com
license:   Apache-2.0
description: |
  OpenStackClient (aka OSC) is a command-line client for OpenStack
  that brings the command set for Compute, Identity, Image, Object
  Store and Block Storage APIs together in a single shell with a
  uniform command structure.

  The primary goal is to provide a unified shell command structure
  and a common language to describe operations in OpenStack.

  This snap provides the openstack command-line client and other
  project specific command-line clients.
snap-id: n61HGFwQwCYUizp8TcdUWRitZwbumzVR
channels:
  latest/stable:    train 2019-11-12 (38) 46MB classic
  latest/candidate: train 2019-11-12 (44) 47MB classic
  latest/beta:      train 2020-05-06 (51) 47MB classic
  latest/edge:      train 2019-12-12 (51) 47MB classic
admin-openrc
_OS_PARAMS=$(env | awk 'BEGIN {FS="="} /^OS_/ {print $1;}' | paste -sd ' ')
for param in $_OS_PARAMS; do
    if [ "$param" = "OS_AUTH_PROTOCOL" ]; then continue; fi
    if [ "$param" = "OS_CACERT" ]; then continue; fi
    unset $param
done
unset _OS_PARAMS
 
_keystone_ip=$(juju run $_juju_model_arg --unit keystone/leader 'unit-get private-address')
_password=$(juju run $_juju_model_arg --unit keystone/leader 'leader-get admin_passwd')
 
export OS_AUTH_URL=${OS_AUTH_PROTOCOL:-http}://${_keystone_ip}:5000/v3
export OS_USERNAME=admin
export OS_PASSWORD=${_password}
export OS_USER_DOMAIN_NAME=admin_domain
export OS_PROJECT_DOMAIN_NAME=admin_domain
export OS_PROJECT_NAME=admin
export OS_REGION_NAME=RegionOne
export OS_IDENTITY_API_VERSION=3
# Swift needs this:
export OS_AUTH_VERSION=3
# Gnocchi needs this
export OS_AUTH_TYPE=password
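The cleanup loop at the top of admin-openrc can be exercised in isolation. A self-contained sketch (variable names and values here are illustrative, not real credentials):

```shell
# Sketch of the admin-openrc cleanup loop: every OS_* variable except
# OS_AUTH_PROTOCOL and OS_CACERT is unset before fresh credentials
# are exported, so stale settings cannot leak into the new session.
export OS_USERNAME=stale-user OS_PASSWORD=stale-pass OS_AUTH_PROTOCOL=https

_OS_PARAMS=$(env | awk 'BEGIN {FS="="} /^OS_/ {print $1;}' | paste -sd ' ')
for param in $_OS_PARAMS; do
    if [ "$param" = "OS_AUTH_PROTOCOL" ]; then continue; fi
    if [ "$param" = "OS_CACERT" ]; then continue; fi
    unset $param
done
unset _OS_PARAMS

env | grep '^OS_'    # only OS_AUTH_PROTOCOL (and OS_CACERT, if set) survive
```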
source ~/work/openstack/workspace/admin-openrc
juju ssh 0 sudo systemctl poweroff
juju ssh 1 sudo systemctl poweroff
juju ssh 2 sudo systemctl poweroff
juju ssh ceph-osd/0 sudo systemctl poweroff
juju ssh ceph-osd/1 sudo systemctl poweroff
juju ssh ceph-osd/2 sudo systemctl poweroff
juju ssh -m controller 0 sudo systemctl poweroff
ssh ubuntu@10.0.12.11 sudo systemctl poweroff

Take a snapshot: “keystone”

Deploy ceph-radosgw

01050-deploy-ceph-radosgw.sh
#!/bin/bash
juju deploy --config config/ceph-radosgw.yaml -n 3 --to lxd:0,lxd:1,lxd:2 cs:ceph-radosgw ceph-radosgw
juju deploy --config config/ceph-radosgw.yaml cs:hacluster ceph-radosgw-hacluster
juju add-relation ceph-radosgw:ha ceph-radosgw-hacluster:ha
#
juju add-relation ceph-radosgw:mon ceph-mon:radosgw
juju add-relation ceph-radosgw:identity-service keystone:identity-service
ceph-radosgw.yaml
ceph-radosgw:
  namespace-tenants: true
  source: cloud:bionic-stein
  vip: 10.0.14.129
ceph-radosgw-hacluster:
  corosync_transport: unicast
ubuntu@os-client:~/work/openstack/deploy$ bash 01050-deploy-ceph-radosgw.sh
Located charm "cs:ceph-radosgw-286".
Deploying charm "cs:ceph-radosgw-286".
Located charm "cs:hacluster-66".
Deploying charm "cs:hacluster-66".
juju debug-log
juju status "ceph*"
watch -n 1 --color juju status "ceph*" --color

Verify Operation

juju status "ceph*"
juju ssh ceph-mon/0 sudo ceph status
juju ssh ceph-mon/0 sudo ceph osd status
juju ssh ceph-radosgw/0 sudo crm status
juju ssh ceph-radosgw/1 ip address show
ping -c 4 10.0.14.129
ubuntu@os-client:~/work/openstack/deploy$ juju status "ceph*"
Model    Controller       Cloud/Region    Version  SLA          Timestamp
default  maas-controller  mymaas/default  2.7.6    unsupported  20:19:48+09:00

App                     Version  Status  Scale  Charm         Store       Rev  OS      Notes
ceph-mon                13.2.8   active      3  ceph-mon      jujucharms   46  ubuntu
ceph-osd                13.2.8   active      3  ceph-osd      jujucharms  301  ubuntu
ceph-radosgw            13.2.8   active      3  ceph-radosgw  jujucharms  286  ubuntu
ceph-radosgw-hacluster           active      3  hacluster     jujucharms   66  ubuntu

Unit                         Workload  Agent  Machine  Public address  Ports   Message
ceph-mon/0*                  active    idle   0/lxd/0  10.0.12.35              Unit is ready and clustered
ceph-mon/1                   active    idle   1/lxd/0  10.0.12.36              Unit is ready and clustered
ceph-mon/2                   active    idle   2/lxd/0  10.0.12.37              Unit is ready and clustered
ceph-osd/0*                  active    idle   6        10.0.12.25              Unit is ready (1 OSD)
ceph-osd/1                   active    idle   7        10.0.12.31              Unit is ready (1 OSD)
ceph-osd/2                   active    idle   8        10.0.12.32              Unit is ready (1 OSD)
ceph-radosgw/0               active    idle   0/lxd/1  10.0.15.0       80/tcp  Unit is ready
  ceph-radosgw-hacluster/1   active    idle            10.0.15.0               Unit is ready and clustered
ceph-radosgw/1               active    idle   1/lxd/1  10.0.12.39      80/tcp  Unit is ready
  ceph-radosgw-hacluster/0*  active    idle            10.0.12.39              Unit is ready and clustered
ceph-radosgw/2*              active    idle   2/lxd/1  10.0.12.38      80/tcp  Unit is ready
  ceph-radosgw-hacluster/2   active    idle            10.0.12.38              Unit is ready and clustered

Machine  State    DNS         Inst id              Series  AZ       Message
0        started  10.0.12.23  os-controller1       bionic  default  Deployed
0/lxd/0  started  10.0.12.35  juju-a5ab4c-0-lxd-0  bionic  default  Container started
0/lxd/1  started  10.0.15.0   juju-a5ab4c-0-lxd-1  bionic  default  Container started
1        started  10.0.12.22  os-controller2       bionic  default  Deployed
1/lxd/0  started  10.0.12.36  juju-a5ab4c-1-lxd-0  bionic  default  Container started
1/lxd/1  started  10.0.12.39  juju-a5ab4c-1-lxd-1  bionic  default  Container started
2        started  10.0.12.26  os-controller3       bionic  default  Deployed
2/lxd/0  started  10.0.12.37  juju-a5ab4c-2-lxd-0  bionic  default  Container started
2/lxd/1  started  10.0.12.38  juju-a5ab4c-2-lxd-1  bionic  default  Container started
6        started  10.0.12.25  os-ceph1             bionic  default  Deployed
7        started  10.0.12.31  os-ceph2             bionic  default  Deployed
8        started  10.0.12.32  os-ceph3             bionic  default  Deployed

ubuntu@os-client:~$ juju ssh ceph-mon/0 sudo ceph status
  cluster:
    id:     c08e5b6e-91d7-11ea-8df0-00163e820de3
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum juju-a5ab4c-0-lxd-0,juju-a5ab4c-1-lxd-0,juju-a5ab4c-2-lxd-0
    mgr: juju-a5ab4c-2-lxd-0(active), standbys: juju-a5ab4c-1-lxd-0, juju-a5ab4c-0-lxd-0
    osd: 3 osds: 3 up, 3 in
    rgw: 3 daemons active

  data:
    pools:   15 pools, 46 pgs
    objects: 187  objects, 1.1 KiB
    usage:   3.0 GiB used, 237 GiB / 240 GiB avail
    pgs:     46 active+clean

Connection to 10.0.12.35 closed.
ubuntu@os-client:~$ juju ssh ceph-mon/0 sudo ceph osd status
+----+----------+-------+-------+--------+---------+--------+---------+-----------+
| id |   host   |  used | avail | wr ops | wr data | rd ops | rd data |   state   |
+----+----------+-------+-------+--------+---------+--------+---------+-----------+
| 0  | os-ceph2 | 1032M | 78.9G |    0   |     0   |    0   |     0   | exists,up |
| 1  | os-ceph3 | 1032M | 78.9G |    0   |     0   |    3   |     0   | exists,up |
| 2  | os-ceph1 | 1032M | 78.9G |    0   |     0   |    1   |     0   | exists,up |
+----+----------+-------+-------+--------+---------+--------+---------+-----------+
Connection to 10.0.12.35 closed.
ubuntu@os-client:~$ juju ssh ceph-radosgw/0 sudo crm status
Stack: corosync
Current DC: juju-a5ab4c-1-lxd-1 (version 1.1.18-2b07d5c5a9) - partition with quorum
Last updated: Sat May  9 11:20:44 2020
Last change: Sat May  9 11:14:48 2020 by hacluster via crmd on juju-a5ab4c-1-lxd-1

3 nodes configured
4 resources configured

Online: [ juju-a5ab4c-0-lxd-1 juju-a5ab4c-1-lxd-1 juju-a5ab4c-2-lxd-1 ]

Full list of resources:

 Resource Group: grp_cephrg_vips
     res_cephrg_abc1e3c_vip     (ocf::heartbeat:IPaddr2):       Started juju-a5ab4c-1-lxd-1
 Clone Set: cl_cephrg_haproxy [res_cephrg_haproxy]
     Started: [ juju-a5ab4c-0-lxd-1 juju-a5ab4c-1-lxd-1 juju-a5ab4c-2-lxd-1 ]

Connection to 10.0.15.0 closed.
ubuntu@os-client:~$ juju ssh ceph-radosgw/1 ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
8: eth0@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:e2:04:08 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.12.39/22 brd 10.0.15.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.0.14.129/22 brd 10.0.15.255 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fee2:408/64 scope link
       valid_lft forever preferred_lft forever
Connection to 10.0.12.39 closed.
ubuntu@os-client:~$ ping -c 4 10.0.14.129
PING 10.0.14.129 (10.0.14.129) 56(84) bytes of data.
64 bytes from 10.0.14.129: icmp_seq=1 ttl=64 time=1.21 ms
64 bytes from 10.0.14.129: icmp_seq=2 ttl=64 time=0.797 ms
64 bytes from 10.0.14.129: icmp_seq=3 ttl=64 time=0.358 ms
64 bytes from 10.0.14.129: icmp_seq=4 ttl=64 time=0.283 ms

--- 10.0.14.129 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3036ms
rtt min/avg/max/mdev = 0.283/0.663/1.216/0.375 ms

Swift API (ceph-radosgw) Verify Operation

Now we can verify that ceph-radosgw works as a Swift-compatible object store: the Ceph RADOS Gateway provides Swift-compatible APIs.

At this point I had forgotten to enable namespace-tenants in ceph-radosgw. I also didn't know how ceph-radosgw works, so I had not verified this charm for a long time. I had always deployed the swift-storage and swift-proxy charms to use the Swift API… Now, if you deploy ceph-radosgw, you no longer need to deploy swift-* at all.

OpenStack Docs: Verify operation

source ~/work/openstack/workspace/admin-openrc
swift stat
openstack container create container1
vim lorem-ipsum.txt
sha256sum lorem-ipsum.txt
cat lorem-ipsum.txt
openstack object create container1 lorem-ipsum.txt
openstack object list container1
mkdir -p tmp
cd tmp
ls lorem-ipsum.txt
openstack object save container1 lorem-ipsum.txt
ls lorem-ipsum.txt
sha256sum lorem-ipsum.txt
cat lorem-ipsum.txt
rm lorem-ipsum.txt
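The sequence above is an integrity check: the checksum computed after `openstack object save` must equal the one recorded before `openstack object create`. A self-contained sketch of that comparison (file names are illustrative, and the local `cp` stands in for the upload/download against the cluster):

```shell
# Compare the checksum taken before upload with the checksum of the
# downloaded copy; fail loudly on any mismatch.
check_roundtrip() {
    before_sum=$(sha256sum "$1" | awk '{print $1}')
    after_sum=$(sha256sum "$2" | awk '{print $1}')
    if [ "$before_sum" = "$after_sum" ]; then
        echo "checksums match"
    else
        echo "checksum mismatch" >&2
        return 1
    fi
}

printf 'Lorem ipsum dolor sit amet\n' > original.txt
cp original.txt downloaded.txt          # stand-in for object create + save
check_roundtrip original.txt downloaded.txt   # prints: checksums match
```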
(venv) ubuntu@os-client:~/work/openstack/workspace$ source ~/work/openstack/workspace/admin-openrc
(venv) ubuntu@os-client:~/work/openstack/workspace$ swift stat
                                    Account: AUTH_f4b32f7133004e30a770ca7ef4084856
                                 Containers: 0
                                    Objects: 0
                                      Bytes: 0
Objects in policy "default-placement-bytes": 0
  Bytes in policy "default-placement-bytes": 0
   Containers in policy "default-placement": 0
      Objects in policy "default-placement": 0
        Bytes in policy "default-placement": 0
                                X-Timestamp: 1589708529.28326
                X-Account-Bytes-Used-Actual: 0
                                 X-Trans-Id: tx000000000000000000001-005ec106f1-b3573-default
                     X-Openstack-Request-Id: tx000000000000000000001-005ec106f1-b3573-default
                              Accept-Ranges: bytes
                               Content-Type: text/plain; charset=utf-8
(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack container create container1
+---------------------------------------+------------+--------------------------------------------------+
| account                               | container  | x-trans-id                                       |
+---------------------------------------+------------+--------------------------------------------------+
| AUTH_f4b32f7133004e30a770ca7ef4084856 | container1 | tx000000000000000000001-005ec106fd-b35f1-default |
+---------------------------------------+------------+--------------------------------------------------+
(venv) ubuntu@os-client:~/work/openstack/workspace$ vim lorem-ipsum.txt
(venv) ubuntu@os-client:~/work/openstack/workspace$ sha256sum lorem-ipsum.txt
9a7884748fa090de828586132d104cdfb6bbcc228f6dacf30e0497d9ebf5732b  lorem-ipsum.txt
(venv) ubuntu@os-client:~/work/openstack/workspace$
(venv) ubuntu@os-client:~/work/openstack/workspace$ cat lorem-ipsum.txt
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Quisque ultricies mauris maximus libero condimentum semper. Pellentesque varius neque at felis dignissim aliquam. Sed at erat in justo faucibus egestas ut eget turpis. Curabitur cursus ante eu faucibus consectetur. Sed non lacus sit amet enim efficitur dignissim. Nullam a arcu sed nisi mattis posuere eu sed leo. Vestibulum pellentesque orci quis elit rutrum suscipit. Nullam porttitor metus at nulla lobortis, ac auctor dui congue. Aenean at fermentum tellus, ac auctor felis.

Cras dignissim sem a elit ultricies vestibulum. Nulla tempor metus ac odio tincidunt, at blandit lacus condimentum. Fusce fermentum ligula fringilla tellus interdum ornare. Phasellus vehicula diam molestie, facilisis justo nec, sollicitudin est. Vestibulum non lacus metus. Curabitur in justo in nisl ornare dictum sit amet in lectus. Donec dui felis, lacinia sit amet semper non, commodo varius est. Cras sodales erat dolor. Phasellus rhoncus nunc at lectus ultrices sagittis. Nunc id sollicitudin lorem, ut vestibulum nulla. Ut lobortis porta turpis, quis pellentesque risus venenatis at. Etiam tincidunt imperdiet neque, eget sollicitudin est lobortis et. Curabitur eget ante consectetur, vestibulum massa eget, vestibulum leo. Nunc cursus consectetur justo, a mollis justo consectetur nec. Duis dapibus mauris ac quam tincidunt, sit amet volutpat ipsum condimentum. Phasellus sit amet lorem vel orci tincidunt malesuada consequat ac orci.

Vestibulum convallis lacus quis tortor consectetur scelerisque. Duis vitae purus quam. Nullam finibus viverra purus et tincidunt. Cras ullamcorper elementum ante nec auctor. Aenean tortor lorem, eleifend vitae nunc nec, elementum lobortis est. Sed maximus ipsum justo, quis venenatis mauris eleifend consequat. Morbi lacinia arcu ex. Aenean eu semper lorem. Maecenas porta lectus vel tellus molestie imperdiet. Aenean urna ante, mollis eu luctus id, tempus at dolor. Phasellus tempus, arcu maximus porta gravida, lectus augue venenatis ligula, vel euismod ex elit eget lacus. Ut consequat urna eu turpis auctor dictum. Duis vitae odio tellus. In non elit a eros semper sodales. Pellentesque non mattis enim.

Phasellus in sem posuere, ullamcorper neque in, ultrices enim. Mauris non elementum arcu, ut facilisis tortor. Nullam a lectus sed tellus rhoncus tempor sit amet vestibulum purus. Ut tristique tellus ac venenatis rutrum. Proin quis dapibus metus, ac elementum libero. Sed at leo molestie, accumsan arcu sit amet, vestibulum sapien. Duis vitae orci nunc.

Nam aliquam mauris a ultricies tempor. Duis turpis ipsum, vulputate nec tincidunt eu, cursus et ipsum. Pellentesque interdum nibh magna, quis dignissim nisl pretium ut. Nam libero orci, blandit in rutrum at, egestas sed ipsum. Integer eget nisi nec risus venenatis faucibus. Quisque magna ligula, venenatis semper velit sit amet, tempus imperdiet lacus. Fusce vitae mollis neque, sit amet vulputate nunc. Mauris gravida mollis arcu at sollicitudin. Suspendisse metus orci, laoreet in dignissim vel, tristique quis eros. Sed ullamcorper condimentum arcu sed aliquet. Suspendisse vulputate tristique lacus quis vestibulum. Phasellus imperdiet varius magna, at euismod arcu. Nam at turpis congue, ultrices urna vitae, cursus lectus. Phasellus purus erat, suscipit malesuada justo eget, imperdiet tempor lectus. Nullam porta erat diam, vel malesuada eros fermentum eu.

(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack object create container1 lorem-ipsum.txt
+-----------------+------------+----------------------------------+
| object          | container  | etag                             |
+-----------------+------------+----------------------------------+
| lorem-ipsum.txt | container1 | 6671724651e1d8efd499b5b2c3f5d35b |
+-----------------+------------+----------------------------------+
(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack object list container1
+-----------------+
| Name            |
+-----------------+
| lorem-ipsum.txt |
+-----------------+
(venv) ubuntu@os-client:~/work/openstack/workspace$ cd tmp
(venv) ubuntu@os-client:~/work/openstack/workspace/tmp$ ls lorem-ipsum.txt
ls: cannot access 'lorem-ipsum.txt': No such file or directory
(venv) ubuntu@os-client:~/work/openstack/workspace/tmp$ openstack object save container1 lorem-ipsum.txt
(venv) ubuntu@os-client:~/work/openstack/workspace/tmp$ ls lorem-ipsum.txt
lorem-ipsum.txt
(venv) ubuntu@os-client:~/work/openstack/workspace/tmp$ sha256sum lorem-ipsum.txt
9a7884748fa090de828586132d104cdfb6bbcc228f6dacf30e0497d9ebf5732b  lorem-ipsum.txt
(venv) ubuntu@os-client:~/work/openstack/workspace/tmp$ cat lorem-ipsum.txt
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Quisque ultricies mauris maximus libero condimentum semper. Pellentesque varius neque at felis dignissim aliquam. Sed at erat in justo faucibus egestas ut eget turpis. Curabitur cursus ante eu faucibus consectetur. Sed non lacus sit amet enim efficitur dignissim. Nullam a arcu sed nisi mattis posuere eu sed leo. Vestibulum pellentesque orci quis elit rutrum suscipit. Nullam porttitor metus at nulla lobortis, ac auctor dui congue. Aenean at fermentum tellus, ac auctor felis.

Cras dignissim sem a elit ultricies vestibulum. Nulla tempor metus ac odio tincidunt, at blandit lacus condimentum. Fusce fermentum ligula fringilla tellus interdum ornare. Phasellus vehicula diam molestie, facilisis justo nec, sollicitudin est. Vestibulum non lacus metus. Curabitur in justo in nisl ornare dictum sit amet in lectus. Donec dui felis, lacinia sit amet semper non, commodo varius est. Cras sodales erat dolor. Phasellus rhoncus nunc at lectus ultrices sagittis. Nunc id sollicitudin lorem, ut vestibulum nulla. Ut lobortis porta turpis, quis pellentesque risus venenatis at. Etiam tincidunt imperdiet neque, eget sollicitudin est lobortis et. Curabitur eget ante consectetur, vestibulum massa eget, vestibulum leo. Nunc cursus consectetur justo, a mollis justo consectetur nec. Duis dapibus mauris ac quam tincidunt, sit amet volutpat ipsum condimentum. Phasellus sit amet lorem vel orci tincidunt malesuada consequat ac orci.

Vestibulum convallis lacus quis tortor consectetur scelerisque. Duis vitae purus quam. Nullam finibus viverra purus et tincidunt. Cras ullamcorper elementum ante nec auctor. Aenean tortor lorem, eleifend vitae nunc nec, elementum lobortis est. Sed maximus ipsum justo, quis venenatis mauris eleifend consequat. Morbi lacinia arcu ex. Aenean eu semper lorem. Maecenas porta lectus vel tellus molestie imperdiet. Aenean urna ante, mollis eu luctus id, tempus at dolor. Phasellus tempus, arcu maximus porta gravida, lectus augue venenatis ligula, vel euismod ex elit eget lacus. Ut consequat urna eu turpis auctor dictum. Duis vitae odio tellus. In non elit a eros semper sodales. Pellentesque non mattis enim.

Phasellus in sem posuere, ullamcorper neque in, ultrices enim. Mauris non elementum arcu, ut facilisis tortor. Nullam a lectus sed tellus rhoncus tempor sit amet vestibulum purus. Ut tristique tellus ac venenatis rutrum. Proin quis dapibus metus, ac elementum libero. Sed at leo molestie, accumsan arcu sit amet, vestibulum sapien. Duis vitae orci nunc.

Nam aliquam mauris a ultricies tempor. Duis turpis ipsum, vulputate nec tincidunt eu, cursus et ipsum. Pellentesque interdum nibh magna, quis dignissim nisl pretium ut. Nam libero orci, blandit in rutrum at, egestas sed ipsum. Integer eget nisi nec risus venenatis faucibus. Quisque magna ligula, venenatis semper velit sit amet, tempus imperdiet lacus. Fusce vitae mollis neque, sit amet vulputate nunc. Mauris gravida mollis arcu at sollicitudin. Suspendisse metus orci, laoreet in dignissim vel, tristique quis eros. Sed ullamcorper condimentum arcu sed aliquet. Suspendisse vulputate tristique lacus quis vestibulum. Phasellus imperdiet varius magna, at euismod arcu. Nam at turpis congue, ultrices urna vitae, cursus lectus. Phasellus purus erat, suscipit malesuada justo eget, imperdiet tempor lectus. Nullam porta erat diam, vel malesuada eros fermentum eu.

(venv) ubuntu@os-client:~/work/openstack/workspace/tmp$ rm lorem-ipsum.txt

glance (Image Service)

Take a live snapshot of all VMs: “before install glance”

01100-deploy-glance.sh
#!/bin/bash
juju deploy --config config/glance.yaml -n 3 --to lxd:0,lxd:1,lxd:2 cs:glance glance
juju deploy --config config/glance.yaml cs:hacluster glance-hacluster
juju add-relation glance:ha glance-hacluster:ha
#
juju add-relation glance:shared-db mysql:shared-db
juju add-relation glance:identity-service keystone:identity-service
juju add-relation glance:amqp rabbitmq-server:amqp
#
juju add-relation glance:ceph ceph-mon:client
glance.yaml
glance:
  openstack-origin: cloud:bionic-stein
  vip: 10.0.14.132
glance-hacluster:
  corosync_transport: unicast
ubuntu@os-client:~/work/openstack/deploy$ bash 01100-deploy-glance.sh
Located charm "cs:glance-295".
Deploying charm "cs:glance-295".
Located charm "cs:hacluster-66".
Deploying charm "cs:hacluster-66".

watch status

juju debug-log --include glance
juju status "glance"
watch -n 1 --color juju status "glance" --color

Verify Operation

juju status "glance"

juju ssh glance/0 sudo crm status
juju ssh glance/0 ip address show

ping -c 4 10.0.14.132
openstack image list
cd ~/work/openstack/workspace
mkdir cloud-images
mkdir cloud-images/ubuntu/
touch cloud-images/.keep
touch cloud-images/ubuntu/.keep
mkdir cloud-images/ubuntu/{focal,bionic,xenial}
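The same directory tree can be created in one step; a minimal equivalent of the mkdir/touch sequence above:

```shell
# mkdir -p creates missing parent directories, and brace expansion fans out
# the three series directories, so the whole tree appears with one command.
mkdir -p cloud-images/ubuntu/{focal,bionic,xenial}
touch cloud-images/.keep cloud-images/ubuntu/.keep
ls cloud-images/ubuntu    # bionic, focal, xenial
```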

gpg --recv-keys 7DB87C81

cd cloud-images
wget http://download.cirros-cloud.net/0.5.1/cirros-0.5.1-x86_64-disk.img
sha256sum cirros-0.5.1-x86_64-disk.img
md5sum cirros-0.5.1-x86_64-disk.img

cd focal
wget https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img
wget https://cloud-images.ubuntu.com/focal/current/SHA256SUMS
wget https://cloud-images.ubuntu.com/focal/current/SHA256SUMS.gpg
gpg --verify SHA256SUMS.gpg SHA256SUMS
sha256sum -c <(grep focal-server-cloudimg-amd64.img SHA256SUMS)
cd ..
cd bionic
wget https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img
wget https://cloud-images.ubuntu.com/bionic/current/SHA256SUMS
wget https://cloud-images.ubuntu.com/bionic/current/SHA256SUMS.gpg
gpg --verify SHA256SUMS.gpg SHA256SUMS
sha256sum -c <(grep bionic-server-cloudimg-amd64.img SHA256SUMS)
cd ..
cd xenial
wget https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img
wget https://cloud-images.ubuntu.com/xenial/current/SHA256SUMS
wget https://cloud-images.ubuntu.com/xenial/current/SHA256SUMS.gpg
gpg --verify SHA256SUMS.gpg SHA256SUMS
sha256sum -c <(grep xenial-server-cloudimg-amd64-disk1.img SHA256SUMS)
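The `sha256sum -c <(grep …)` pattern used above checks a single image against the multi-entry SHA256SUMS list. A throwaway demonstration (file names and contents are illustrative, not real cloud images):

```shell
# Build two dummy files and a combined checksum list, then verify only one.
tmpdir=$(mktemp -d)
cd "$tmpdir"
echo "focal image payload"  > focal-server-cloudimg-amd64.img
echo "bionic image payload" > bionic-server-cloudimg-amd64.img
sha256sum focal-server-cloudimg-amd64.img bionic-server-cloudimg-amd64.img > SHA256SUMS

# grep narrows the list to the one entry we care about;
# sha256sum -c then verifies just that file.
sha256sum -c <(grep focal-server-cloudimg-amd64.img SHA256SUMS)
# prints: focal-server-cloudimg-amd64.img: OK
```

Process substitution (`<(…)`) requires bash; in a plain POSIX shell, write the grep output to a temporary file instead.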


openstack image create "cirros-0.5.1-x86_64" \
  --file cloud-images/cirros-0.5.1-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --property architecture=x86_64 \
  --property hw_disk_bus=virtio \
  --property hw_vif_model=virtio \
  --public

openstack image create "ubuntu-server-20.04-x86_64-focal" \
  --file cloud-images/ubuntu/focal/focal-server-cloudimg-amd64.img \
  --disk-format qcow2 --container-format bare \
  --property architecture=x86_64 \
  --property hw_disk_bus=virtio \
  --property hw_vif_model=virtio \
  --public

openstack image create "ubuntu-server-18.04-x86_64-bionic" \
  --file cloud-images/ubuntu/bionic/bionic-server-cloudimg-amd64.img \
  --disk-format qcow2 --container-format bare \
  --property architecture=x86_64 \
  --property hw_disk_bus=virtio \
  --property hw_vif_model=virtio \
  --public

openstack image create "ubuntu-server-16.04-x86_64-xenial" \
  --file cloud-images/ubuntu/xenial/xenial-server-cloudimg-amd64-disk1.img \
  --disk-format qcow2 --container-format bare \
  --property architecture=x86_64 \
  --property hw_disk_bus=virtio \
  --property hw_vif_model=virtio \
  --public

openstack image list
(venv) ubuntu@os-client:~/work/openstack/workspace$ gpg --recv-keys 7DB87C81
gpg: key 1A5D6C4C7DB87C81: 2 signatures not checked due to missing keys
gpg: key 1A5D6C4C7DB87C81: public key "UEC Image Automatic Signing Key <cdimage@ubuntu.com>" imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg:               imported: 1
wnoguchi@LASTHOPE MINGW64 ~/Downloads/ubuntu/cloud-images
$ cd focal

wnoguchi@LASTHOPE MINGW64 ~/Downloads/ubuntu/cloud-images/focal
$ sha256sums -c <(grep focal-server-cloudimg-amd64.img SHA256SUMS)
bash: sha256sums: command not found

wnoguchi@LASTHOPE MINGW64 ~/Downloads/ubuntu/cloud-images/focal
$ sha256sum -c <(grep focal-server-cloudimg-amd64.img SHA256SUMS)
focal-server-cloudimg-amd64.img: OK

wnoguchi@LASTHOPE MINGW64 ~/Downloads/ubuntu/cloud-images/focal
$ cd ..

wnoguchi@LASTHOPE MINGW64 ~/Downloads/ubuntu/cloud-images
$ cd bionic

wnoguchi@LASTHOPE MINGW64 ~/Downloads/ubuntu/cloud-images/bionic
$ sha256sum -c <(grep bionic-server-cloudimg-amd64.img SHA256SUMS)
bionic-server-cloudimg-amd64.img: OK

wnoguchi@LASTHOPE MINGW64 ~/Downloads/ubuntu/cloud-images/bionic
$ cd ..

wnoguchi@LASTHOPE MINGW64 ~/Downloads/ubuntu/cloud-images
$ cd xenial

wnoguchi@LASTHOPE MINGW64 ~/Downloads/ubuntu/cloud-images/xenial
$ sha256sum -c <(grep xenial-server-cloudimg-amd64-disk1.img SHA256SUMS)
xenial-server-cloudimg-amd64-disk1.img: OK
(venv) ubuntu@os-client:~/work/openstack/workspace/cloud-images$ sha256sum cirros-0.5.1-x86_64-disk.img
c4110030e2edf06db87f5b6e4efc27300977683d53f040996d15dcc0ad49bb5a  cirros-0.5.1-x86_64-disk.img
(venv) ubuntu@os-client:~/work/openstack/workspace/cloud-images$ md5sum cirros-0.5.1-x86_64-disk.img
1d3062cd89af34e419f7100277f38b2b  cirros-0.5.1-x86_64-disk.img
+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| checksum         | 1d3062cd89af34e419f7100277f38b2b                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |
| container_format | bare                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
| created_at       | 2020-05-10T02:03:36Z                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
| disk_format      | qcow2                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
| file             | /v2/images/ff4d5798-7675-45b1-84c4-e76844691947/file                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
| id               | ff4d5798-7675-45b1-84c4-e76844691947                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
| min_disk         | 0                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |
| min_ram          | 0                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |
| name             | cirros-0.5.1-x86_64                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
| owner            | f4b32f7133004e30a770ca7ef4084856                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |
| properties       | direct_url='rbd://c08e5b6e-91d7-11ea-8df0-00163e820de3/glance/ff4d5798-7675-45b1-84c4-e76844691947/snap', locations='[{'url': 'rbd://c08e5b6e-91d7-11ea-8df0-00163e820de3/glance/ff4d5798-7675-45b1-84c4-e76844691947/snap', 'metadata': {}}]', os_hash_algo='sha512', os_hash_value='553d220ed58cfee7dafe003c446a9f197ab5edf8ffc09396c74187cf83873c877e7ae041cb80f3b91489acf687183adcd689b53b38e3ddd22e627e7f98a09c46', os_hidden='False', owner_specified.openstack.md5='1d3062cd89af34e419f7100277f38b2b', owner_specified.openstack.object='images/cirros-0.5.1-x86_64', owner_specified.openstack.sha256='c4110030e2edf06db87f5b6e4efc27300977683d53f040996d15dcc0ad49bb5a', self='/v2/images/ff4d5798-7675-45b1-84c4-e76844691947' |
| protected        | False                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
| schema           | /v2/schemas/image                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |
| size             | 16338944                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
| status           | active                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
| tags             |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
| updated_at       | 2020-05-10T02:03:38Z                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
| visibility       | public                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
+------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack image create "ubuntu-server-20.04-x86_64-focal" \
>   --file cloud-images/ubuntu/focal/focal-server-cloudimg-amd64.img \
>   --disk-format qcow2 --container-format bare \
>   --public
+------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
+------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| checksum         | a0a570ad022bbd1cd1711acbc171d0b3                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
| container_format | bare                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
| created_at       | 2020-05-10T02:04:32Z                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
| disk_format      | qcow2                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
| file             | /v2/images/da2da418-818d-4f32-9aad-fad2a5af1d67/file                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
| id               | da2da418-818d-4f32-9aad-fad2a5af1d67                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
| min_disk         | 0                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
| min_ram          | 0                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
| name             | ubuntu-server-20.04-x86_64-focal                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
| owner            | f4b32f7133004e30a770ca7ef4084856                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
| properties       | direct_url='rbd://c08e5b6e-91d7-11ea-8df0-00163e820de3/glance/da2da418-818d-4f32-9aad-fad2a5af1d67/snap', locations='[{'url': 'rbd://c08e5b6e-91d7-11ea-8df0-00163e820de3/glance/da2da418-818d-4f32-9aad-fad2a5af1d67/snap', 'metadata': {}}]', os_hash_algo='sha512', os_hash_value='317be5956466e6dd7083bc56e8d7cc32d53233c286b8cde0c7edb025bc0e43aac39d28ad5ef54d95896568387e445e214558286777b08b17d358aec0756a7ba8', os_hidden='False', owner_specified.openstack.md5='a0a570ad022bbd1cd1711acbc171d0b3', owner_specified.openstack.object='images/ubuntu-server-20.04-x86_64-focal', owner_specified.openstack.sha256='f8fea6a80ced88eabe9d41eb61d4d9970348c025fe303583183ab81347ceea82', self='/v2/images/da2da418-818d-4f32-9aad-fad2a5af1d67' |
| protected        | False                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
| schema           | /v2/schemas/image                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
| size             | 533135360                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
| status           | active                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
| tags             |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |
| updated_at       | 2020-05-10T02:04:41Z                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
| visibility       | public                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
+------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
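The focal and bionic uploads above are identical except for the image name and file path. A small Python sketch that generates the equivalent `openstack image create` invocations for a list of releases; the names and paths mirror the ones used above, and this only builds the argv lists, it does not talk to the cloud:

```python
# Build "openstack image create" commands for a set of Ubuntu cloud images,
# mirroring the focal/bionic uploads shown in the transcript above.
IMAGES = {
    "ubuntu-server-20.04-x86_64-focal":
        "cloud-images/ubuntu/focal/focal-server-cloudimg-amd64.img",
    "ubuntu-server-18.04-x86_64-bionic":
        "cloud-images/ubuntu/bionic/bionic-server-cloudimg-amd64.img",
}

def image_create_cmd(name, path):
    """Return the CLI invocation as argv tokens (suitable for subprocess.run)."""
    return [
        "openstack", "image", "create", name,
        "--file", path,
        "--disk-format", "qcow2",
        "--container-format", "bare",
        "--public",
    ]

for name, path in IMAGES.items():
    print(" ".join(image_create_cmd(name, path)))
```

Building argv lists instead of one shell string avoids quoting problems if a path ever contains spaces; each list can be passed straight to `subprocess.run` once the OpenStack credentials are sourced in the environment.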
(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack image create "ubuntu-server-18.04-x86_64-bionic" \
>   --file cloud-images/ubuntu/bionic/bionic-server-cloudimg-amd64.img \
>   --disk-format qcow2 --container-format bare \
>   --public
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| checksum         | 63dd6d369c5cc81d4587d8e74f94eb07                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |
| container_format | bare                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
| created_at       | 2020-05-10T02:04:50Z                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
| disk_format      | qcow2                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
| file             | /v2/images/c9474e52-b514-4cb2-9430-1367238adf8b/file                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
| id               | c9474e52-b514-4cb2-9430-1367238adf8b                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
| min_disk         | 0                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
| min_ram          | 0                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
| name             | ubuntu-server-18.04-x86_64-bionic                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
| owner            | f4b32f7133004e30a770ca7ef4084856                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |
| properties       | direct_url='rbd://c08e5b6e-91d7-11ea-8df0-00163e820de3/glance/c9474e52-b514-4cb2-9430-1367238adf8b/snap', locations='[{'url': 'rbd://c08e5b6e-91d7-11ea-8df0-00163e820de3/glance/c9474e52-b514-4cb2-9430-1367238adf8b/snap', 'metadata': {}}]', os_hash_algo='sha512', os_hash_value='e7a533ca39e71dc30958f394f251ca5de9d3c08c7c139d7aa2a7037f68a0754b7846765e42134a762c4932ee791f9a3e246bbd3b39646aa827cf372b39355bfb', os_hidden='False', owner_specified.openstack.md5='63dd6d369c5cc81d4587d8e74f94eb07', owner_specified.openstack.object='images/ubuntu-server-18.04-x86_64-bionic', owner_specified.openstack.sha256='cc13bc739f89060565e76f4327fad8e7a01dc75442d87095a9d558ca3370ff80', self='/v2/images/c9474e52-b514-4cb2-9430-1367238adf8b' |
| protected        | False                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
| schema           | /v2/schemas/image                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
| size             | 345767936                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              |
| status           | active                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
| tags             |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |
| updated_at       | 2020-05-10T02:05:00Z                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
| visibility       | public                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack image create "ubuntu-server-16.04-x86_64-xenial" \
>   --file cloud-images/ubuntu/xenial/xenial-server-cloudimg-amd64-disk1.img \
>   --disk-format qcow2 --container-format bare \
>   --public
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| checksum         | d25b30f2d9ced9af21de737022097082                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |
| container_format | bare                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
| created_at       | 2020-05-10T02:05:04Z                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
| disk_format      | qcow2                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
| file             | /v2/images/31a177db-97ee-4e6a-a7a0-0e2094e2a4ea/file                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
| id               | 31a177db-97ee-4e6a-a7a0-0e2094e2a4ea                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
| min_disk         | 0                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
| min_ram          | 0                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
| name             | ubuntu-server-16.04-x86_64-xenial                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
| owner            | f4b32f7133004e30a770ca7ef4084856                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |
| properties       | direct_url='rbd://c08e5b6e-91d7-11ea-8df0-00163e820de3/glance/31a177db-97ee-4e6a-a7a0-0e2094e2a4ea/snap', locations='[{'url': 'rbd://c08e5b6e-91d7-11ea-8df0-00163e820de3/glance/31a177db-97ee-4e6a-a7a0-0e2094e2a4ea/snap', 'metadata': {}}]', os_hash_algo='sha512', os_hash_value='4f6e14aade8aaa390428a1041950eecdb5e7193f259f510304feb0f05b786b9f24af23a0eb58a4d7dcbef5d23d274c04570e70a1b7fa4595aa6fa12e6b614352', os_hidden='False', owner_specified.openstack.md5='d25b30f2d9ced9af21de737022097082', owner_specified.openstack.object='images/ubuntu-server-16.04-x86_64-xenial', owner_specified.openstack.sha256='c713e26c35be1e6cf394a062ab12d4196e291c802bd99d4fcad03cc4017cf640', self='/v2/images/31a177db-97ee-4e6a-a7a0-0e2094e2a4ea' |
| protected        | False                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
| schema           | /v2/schemas/image                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
| size             | 297926656                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              |
| status           | active                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
| tags             |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |
| updated_at       | 2020-05-10T02:05:11Z                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
| visibility       | public                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack image list
+--------------------------------------+-----------------------------------+--------+
| ID                                   | Name                              | Status |
+--------------------------------------+-----------------------------------+--------+
| ff4d5798-7675-45b1-84c4-e76844691947 | cirros-0.5.1-x86_64               | active |
| 31a177db-97ee-4e6a-a7a0-0e2094e2a4ea | ubuntu-server-16.04-x86_64-xenial | active |
| c9474e52-b514-4cb2-9430-1367238adf8b | ubuntu-server-18.04-x86_64-bionic | active |
| da2da418-818d-4f32-9aad-fad2a5af1d67 | ubuntu-server-20.04-x86_64-focal  | active |
+--------------------------------------+-----------------------------------+--------+
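With the images registered, a quick integrity spot-check is to compare the checksum glance stores against a fresh download (glance's `checksum` field is an MD5 digest). This is a minimal sketch; the image name and temporary path are arbitrary choices, not part of the deployment scripts:

```shell
#!/bin/bash
# Spot-check a glance image: download it and compare the local MD5
# against the checksum recorded by glance. Image name is one of the
# images listed above; /tmp path is arbitrary.
verify_image() {
    local img=${1:-cirros-0.5.1-x86_64}
    local stored actual
    stored=$(openstack image show "$img" -f value -c checksum)
    openstack image save "$img" --file "/tmp/$img.img"
    actual=$(md5sum "/tmp/$img.img" | awk '{print $1}')
    if [ "$stored" = "$actual" ]; then
        echo "checksum OK for $img"
    else
        echo "checksum MISMATCH for $img: stored=$stored actual=$actual"
    fi
    rm -f "/tmp/$img.img"
}

# Run only when the openstack client is actually configured.
if command -v openstack >/dev/null 2>&1; then
    verify_image
fi
```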

cinder

01200-deploy-cinder.sh
#!/bin/bash
juju deploy --config config/cinder.yaml -n 3 --to lxd:0,lxd:1,lxd:2 cs:cinder cinder
juju deploy --config config/cinder.yaml cs:hacluster cinder-hacluster
juju deploy cs:cinder-ceph cinder-ceph
juju add-relation cinder:ha cinder-hacluster:ha
#
juju add-relation cinder:shared-db mysql:shared-db
juju add-relation cinder:identity-service keystone:identity-service
juju add-relation cinder:amqp rabbitmq-server:amqp
#
juju add-relation cinder:image-service glance:image-service
#
juju add-relation cinder-ceph:storage-backend cinder:storage-backend
juju add-relation cinder-ceph:ceph ceph-mon:client
config/cinder.yaml
cinder:
  block-device: None
  glance-api-version: 2
  openstack-origin: cloud:bionic-stein
  vip: 10.0.14.133
cinder-hacluster:
  corosync_transport: unicast
ubuntu@os-client:~/work/openstack/deploy$ bash 01200-deploy-cinder.sh
Located charm "cs:cinder-301".
Deploying charm "cs:cinder-301".
Located charm "cs:hacluster-66".
Deploying charm "cs:hacluster-66".
Located charm "cs:cinder-ceph-254".
Deploying charm "cs:cinder-ceph-254".

watch status

juju debug-log --include cinder
juju status "cinder*"
watch -n 1 --color juju status "cinder*" --color
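Instead of watching the status by eye, a small polling helper can block until the units settle. This is a sketch only: the 5-second interval and the matching on `juju status` tabular workload states (`blocked`, `waiting`, `maintenance`, `error`) are assumptions about the output format, not an official Juju API:

```shell
#!/bin/bash
# Poll `juju status` until no unit of the given application filter is in
# a non-settled workload state. Interval and state keywords are
# illustrative assumptions about the tabular output.
wait_for_active() {
    local apps="$1"
    local pending
    while true; do
        pending=$(juju status "$apps" 2>/dev/null \
            | awk '$2 ~ /blocked|waiting|maintenance|error/' | wc -l)
        [ "$pending" -eq 0 ] && break
        sleep 5
    done
    echo "all units matching $apps look settled"
}

# Run only when the juju CLI is available.
if command -v juju >/dev/null 2>&1; then
    wait_for_active "cinder*"
fi
```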

Verify Operation

juju status "cinder*"

juju ssh cinder/0 sudo crm status
juju ssh cinder/1 ip address show

ping -c 4 10.0.14.133

openstack volume service list

juju run-action cinder/leader remove-services --wait

openstack volume service list
(venv) ubuntu@os-client:~/work/openstack/workspace$ juju status "cinder*"
Model    Controller       Cloud/Region    Version  SLA          Timestamp
default  maas-controller  mymaas/default  2.7.6    unsupported  11:52:03+09:00

App               Version  Status  Scale  Charm        Store       Rev  OS      Notes
cinder            14.0.4   active      3  cinder       jujucharms  301  ubuntu
cinder-ceph       14.0.4   active      3  cinder-ceph  jujucharms  254  ubuntu
cinder-hacluster           active      3  hacluster    jujucharms   66  ubuntu

Unit                   Workload  Agent  Machine  Public address  Ports     Message
cinder/0*              active    idle   0/lxd/7  10.0.12.55      8776/tcp  Unit is ready
  cinder-ceph/2        active    idle            10.0.12.55                Unit is ready
  cinder-hacluster/2   active    idle            10.0.12.55                Unit is ready and clustered
cinder/1               active    idle   1/lxd/7  10.0.12.57      8776/tcp  Unit is ready
  cinder-ceph/1        active    idle            10.0.12.57                Unit is ready
  cinder-hacluster/1   active    idle            10.0.12.57                Unit is ready and clustered
cinder/2               active    idle   2/lxd/7  10.0.12.56      8776/tcp  Unit is ready
  cinder-ceph/0*       active    idle            10.0.12.56                Unit is ready
  cinder-hacluster/0*  active    idle            10.0.12.56                Unit is ready and clustered

Machine  State    DNS         Inst id              Series  AZ       Message
0        started  10.0.12.23  os-controller1       bionic  default  Deployed
0/lxd/7  started  10.0.12.55  juju-a5ab4c-0-lxd-7  bionic  default  Container started
1        started  10.0.12.22  os-controller2       bionic  default  Deployed
1/lxd/7  started  10.0.12.57  juju-a5ab4c-1-lxd-7  bionic  default  Container started
2        started  10.0.12.26  os-controller3       bionic  default  Deployed
2/lxd/7  started  10.0.12.56  juju-a5ab4c-2-lxd-7  bionic  default  Container started

(venv) ubuntu@os-client:~/work/openstack/workspace$ juju ssh cinder/0 sudo crm status
Stack: corosync
Current DC: juju-a5ab4c-2-lxd-7 (version 1.1.18-2b07d5c5a9) - partition with quorum
Last updated: Sun May 10 02:52:17 2020
Last change: Sun May 10 02:40:03 2020 by hacluster via crmd on juju-a5ab4c-2-lxd-7

3 nodes configured
4 resources configured

Online: [ juju-a5ab4c-0-lxd-7 juju-a5ab4c-1-lxd-7 juju-a5ab4c-2-lxd-7 ]

Full list of resources:

 Resource Group: grp_cinder_vips
     res_cinder_8d00f28_vip     (ocf::heartbeat:IPaddr2):       Started juju-a5ab4c-1-lxd-7
 Clone Set: cl_cinder_haproxy [res_cinder_haproxy]
     Started: [ juju-a5ab4c-0-lxd-7 juju-a5ab4c-1-lxd-7 juju-a5ab4c-2-lxd-7 ]

Connection to 10.0.12.55 closed.
(venv) ubuntu@os-client:~/work/openstack/workspace$ juju ssh cinder/1 ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
20: eth0@if21: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:9c:2f:26 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.12.57/22 brd 10.0.15.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.0.14.133/22 brd 10.0.15.255 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe9c:2f26/64 scope link
       valid_lft forever preferred_lft forever
Connection to 10.0.12.57 closed.
(venv) ubuntu@os-client:~/work/openstack/workspace$ ping -c 4 10.0.14.133
PING 10.0.14.133 (10.0.14.133) 56(84) bytes of data.
64 bytes from 10.0.14.133: icmp_seq=1 ttl=64 time=1.53 ms
64 bytes from 10.0.14.133: icmp_seq=2 ttl=64 time=0.785 ms
64 bytes from 10.0.14.133: icmp_seq=3 ttl=64 time=0.241 ms
64 bytes from 10.0.14.133: icmp_seq=4 ttl=64 time=0.229 ms

--- 10.0.14.133 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3036ms
rtt min/avg/max/mdev = 0.229/0.697/1.534/0.533 ms
(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack volume service list
+------------------+-------------------------+------+---------+-------+----------------------------+
| Binary           | Host                    | Zone | Status  | State | Updated At                 |
+------------------+-------------------------+------+---------+-------+----------------------------+
| cinder-scheduler | juju-a5ab4c-0-lxd-7     | nova | enabled | down  | 2020-05-10T02:40:58.000000 |
| cinder-scheduler | juju-a5ab4c-2-lxd-7     | nova | enabled | down  | 2020-05-10T02:43:19.000000 |
| cinder-volume    | juju-a5ab4c-2-lxd-7@LVM | nova | enabled | down  | 2020-05-10T02:41:58.000000 |
| cinder-volume    | juju-a5ab4c-0-lxd-7@LVM | nova | enabled | down  | 2020-05-10T02:40:48.000000 |
| cinder-volume    | juju-a5ab4c-1-lxd-7@LVM | nova | enabled | down  | 2020-05-10T02:39:58.000000 |
| cinder-scheduler | juju-a5ab4c-1-lxd-7     | nova | enabled | down  | 2020-05-10T02:39:58.000000 |
| cinder-volume    | cinder@cinder-ceph      | nova | enabled | up    | 2020-05-10T02:53:16.000000 |
| cinder-scheduler | cinder                  | nova | enabled | up    | 2020-05-10T02:53:14.000000 |
+------------------+-------------------------+------+---------+-------+----------------------------+
(venv) ubuntu@os-client:~/work/openstack/workspace$ juju actions cinder
Action                  Description
openstack-upgrade       Perform openstack upgrades. Config option action-managed-upgrade must be set to True.
pause                   Pause the cinder unit.  This action will stop cinder services.
remove-services         Remove unused services entities from the database after enabling HA with a stateless backend such as cinder-ceph.
rename-volume-host      Update the host attribute of volumes from currenthost to newhost
resume                  No description
security-checklist      Validate the running configuration against the OpenStack security guides checklist
volume-host-add-driver  Update the os-vol-host-attr:host volume attribute to include driver and volume name. Used for migrating volumes to multi-backend and Ocata+ configurtation.
(venv) ubuntu@os-client:~/work/openstack/workspace$ juju run-action cinder/0 remove-services
Action queued with id: "15"
(venv) ubuntu@os-client:~/work/openstack/workspace$ juju run-action cinder/0 remove-services --wait
unit-cinder-0:
  UnitId: cinder/0
  id: "16"
  results:
    removed: ""
  status: completed
  timing:
    completed: 2020-05-10 02:55:12 +0000 UTC
    enqueued: 2020-05-10 02:54:51 +0000 UTC
    started: 2020-05-10 02:55:10 +0000 UTC
(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack volume service list
+------------------+--------------------+------+---------+-------+----------------------------+
| Binary           | Host               | Zone | Status  | State | Updated At                 |
+------------------+--------------------+------+---------+-------+----------------------------+
| cinder-volume    | cinder@cinder-ceph | nova | enabled | up    | 2020-05-10T02:55:21.000000 |
| cinder-scheduler | cinder             | nova | enabled | up    | 2020-05-10T02:55:22.000000 |
+------------------+--------------------+------+---------+-------+----------------------------+

OpenStack Docs: Verify Cinder operation
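Beyond the service list, a simple end-to-end check is to create and delete a throwaway volume against the cinder-ceph backend. A sketch, assuming a configured `openstack` client; the volume name, 1 GiB size, and polling interval are arbitrary:

```shell
#!/bin/bash
# Smoke-test cinder: create a 1 GiB volume, wait for it to reach
# "available", then delete it. Name/size/interval are arbitrary.
smoke_test_volume() {
    local name=${1:-smoke-test-vol}
    local status=""
    openstack volume create --size 1 "$name"
    # Poll for up to ~60 s for the volume to leave "creating".
    for _ in $(seq 12); do
        status=$(openstack volume show "$name" -f value -c status)
        [ "$status" = "available" ] && break
        sleep 5
    done
    echo "volume $name status: $status"
    openstack volume delete "$name"
}

if command -v openstack >/dev/null 2>&1; then
    smoke_test_volume
fi
```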

cinder-backup

01250-deploy-cinder-backup.sh
#!/bin/bash
juju deploy cs:~openstack-charmers/cinder-backup cinder-backup
#
juju add-relation cinder-backup:backup-backend cinder:backup-backend
juju add-relation cinder-backup:ceph ceph-mon-backup:client

Do not relate cinder-backup to ceph-mon; relate it to ceph-mon-backup instead, so that backups are stored on a separate Ceph cluster.

(venv) ubuntu@os-client:~/work/openstack/deploy$ bash 01250-deploy-cinder-backup.sh
Located charm "cs:~openstack-charmers/cinder-backup-250".
Deploying charm "cs:~openstack-charmers/cinder-backup-250".

The following charm appears to be the correct one (probably):

https://jaas.ai/u/openstack-charmers/cinder-backup

https://jaas.ai/u/openstack-charmers

Do not use the charm below, even though it is the top Google result; it has no recent revisions, and it is unclear why it is still published under openstack-charmers:

https://jaas.ai/cinder-backup/17

watch status

juju debug-log --include cinder --include ceph-osd-backup --include ceph-mon-backup
juju status "cinder*" "ceph-osd-backup" "ceph-mon-backup"
watch -n 1 --color juju status "cinder*" "ceph-osd-backup" "ceph-mon-backup" --color

Verify Operation

juju status "cinder*" "ceph-osd-backup" "ceph-mon-backup"

openstack volume service list

juju run-action cinder/leader remove-services --wait

openstack volume service list
ubuntu@os-client:~/work/openstack/deploy$ juju status "cinder*" "ceph-osd-backup" "ceph-mon-backup"
Model    Controller       Cloud/Region    Version  SLA          Timestamp
default  maas-controller  mymaas/default  2.7.6    unsupported  11:29:37+09:00

App               Version  Status  Scale  Charm          Store       Rev  OS      Notes
ceph-mon-backup   13.2.8   active      3  ceph-mon       jujucharms   46  ubuntu
ceph-osd-backup   13.2.8   active      3  ceph-osd       jujucharms  301  ubuntu
cinder            14.0.4   active      3  cinder         jujucharms  301  ubuntu
cinder-backup     14.0.4   active      3  cinder-backup  jujucharms  248  ubuntu
cinder-ceph       14.0.4   active      3  cinder-ceph    jujucharms  254  ubuntu
cinder-hacluster           active      3  hacluster      jujucharms   66  ubuntu

Unit                   Workload  Agent  Machine   Public address  Ports     Message
ceph-mon-backup/0      active    idle   0/lxd/11  10.0.12.68                Unit is ready and clustered
ceph-mon-backup/1*     active    idle   1/lxd/11  10.0.12.67                Unit is ready and clustered
ceph-mon-backup/2      active    idle   2/lxd/11  10.0.12.69                Unit is ready and clustered
ceph-osd-backup/0      active    idle   9         10.0.12.24                Unit is ready (1 OSD)
ceph-osd-backup/1*     active    idle   10        10.0.12.33                Unit is ready (1 OSD)
ceph-osd-backup/2      active    idle   11        10.0.12.28                Unit is ready (1 OSD)
cinder/0*              active    idle   0/lxd/7   10.0.12.55      8776/tcp  Unit is ready
  cinder-backup/0      active    idle             10.0.12.55                Unit is ready
  cinder-ceph/2*       active    idle             10.0.12.55                Unit is ready
  cinder-hacluster/2*  active    idle             10.0.12.55                Unit is ready and clustered
cinder/1               active    idle   1/lxd/7   10.0.12.57      8776/tcp  Unit is ready
  cinder-backup/2*     active    idle             10.0.12.57                Unit is ready
  cinder-ceph/1        active    idle             10.0.12.57                Unit is ready
  cinder-hacluster/1   active    idle             10.0.12.57                Unit is ready and clustered
cinder/2               active    idle   2/lxd/7   10.0.12.56      8776/tcp  Unit is ready
  cinder-backup/1      active    idle             10.0.12.56                Unit is ready
  cinder-ceph/0        active    idle             10.0.12.56                Unit is ready
  cinder-hacluster/0   active    idle             10.0.12.56                Unit is ready and clustered

Machine   State    DNS         Inst id               Series  AZ       Message
0         started  10.0.12.23  os-controller1        bionic  default  Deployed
0/lxd/7   started  10.0.12.55  juju-a5ab4c-0-lxd-7   bionic  default  Container started
0/lxd/11  started  10.0.12.68  juju-a5ab4c-0-lxd-11  bionic  default  Container started
1         started  10.0.12.22  os-controller2        bionic  default  Deployed
1/lxd/7   started  10.0.12.57  juju-a5ab4c-1-lxd-7   bionic  default  Container started
1/lxd/11  started  10.0.12.67  juju-a5ab4c-1-lxd-11  bionic  default  Container started
2         started  10.0.12.26  os-controller3        bionic  default  Deployed
2/lxd/7   started  10.0.12.56  juju-a5ab4c-2-lxd-7   bionic  default  Container started
2/lxd/11  started  10.0.12.69  juju-a5ab4c-2-lxd-11  bionic  default  Container started
9         started  10.0.12.24  os-swift1             bionic  default  Deployed
10        started  10.0.12.33  os-swift2             bionic  default  Deployed
11        started  10.0.12.28  os-swift3             bionic  default  Deployed
ubuntu@os-client:~/work/openstack/deploy$ juju run-action cinder/leader remove-services --wait
unit-cinder-0:
  UnitId: cinder/0
  id: "24"
  results:
    Stdout: |
      Service cinder-volume on host juju-a5ab4c-0-lxd-7@cinder-ceph removed.
      Service cinder-volume on host juju-a5ab4c-2-lxd-7@cinder-ceph removed.
      Service cinder-backup on host juju-a5ab4c-2-lxd-7 removed.
      Service cinder-scheduler on host juju-a5ab4c-2-lxd-7 removed.
      Service cinder-backup on host juju-a5ab4c-0-lxd-7 removed.
      Service cinder-scheduler on host juju-a5ab4c-0-lxd-7 removed.
      Service cinder-volume on host juju-a5ab4c-1-lxd-7@cinder-ceph removed.
      Service cinder-scheduler on host juju-a5ab4c-1-lxd-7 removed.
      Service cinder-backup on host juju-a5ab4c-1-lxd-7 removed.
    removed: juju-a5ab4c-0-lxd-7@cinder-ceph,juju-a5ab4c-2-lxd-7@cinder-ceph,juju-a5ab4c-2-lxd-7,juju-a5ab4c-2-lxd-7,juju-a5ab4c-0-lxd-7,juju-a5ab4c-0-lxd-7,juju-a5ab4c-1-lxd-7@cinder-ceph,juju-a5ab4c-1-lxd-7,juju-a5ab4c-1-lxd-7
  status: completed
  timing:
    completed: 2020-05-17 22:12:36 +0000 UTC
    enqueued: 2020-05-17 22:12:08 +0000 UTC
    started: 2020-05-17 22:12:11 +0000 UTC
(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack volume service list
+------------------+--------------------+------+---------+-------+----------------------------+
| Binary           | Host               | Zone | Status  | State | Updated At                 |
+------------------+--------------------+------+---------+-------+----------------------------+
| cinder-volume    | cinder@cinder-ceph | nova | enabled | up    | 2020-05-18T02:31:46.000000 |
| cinder-scheduler | cinder             | nova | enabled | up    | 2020-05-18T02:31:43.000000 |
| cinder-backup    | cinder             | nova | enabled | up    | 2020-05-18T02:31:40.000000 |
+------------------+--------------------+------+---------+-------+----------------------------+

Great.

If you see the following HEALTH_WARN, there is nothing to worry about; go ahead.

(venv) ubuntu@os-client:~/work/openstack/deploy$ juju ssh ceph-mon-backup/0 sudo ceph status
  cluster:
    id:     eddf10d4-9cc4-11ea-90d8-00163e9411be
    health: HEALTH_WARN
            too few PGs per OSD (8 < min 30)

  services:
    mon: 3 daemons, quorum juju-64142c-2-lxd-1,juju-64142c-0-lxd-1,juju-64142c-1-lxd-1
    mgr: juju-64142c-0-lxd-1(active), standbys: juju-64142c-2-lxd-1, juju-64142c-1-lxd-1
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   1 pools, 8 pgs
    objects: 0  objects, 0 B
    usage:   3.0 GiB used, 237 GiB / 240 GiB avail
    pgs:     8 active+clean

Connection to 10.0.12.91 closed.
(venv) ubuntu@os-client:~/work/openstack/deploy$ juju ssh ceph-mon-backup/0 sudo ceph osd status
+----+-----------------+-------+-------+--------+---------+--------+---------+-----------+
| id |       host      |  used | avail | wr ops | wr data | rd ops | rd data |   state   |
+----+-----------------+-------+-------+--------+---------+--------+---------+-----------+
| 0  | os-ceph-backup3 | 1025M | 78.9G |    0   |     0   |    0   |     0   | exists,up |
| 1  | os-ceph-backup2 | 1025M | 78.9G |    0   |     0   |    0   |     0   | exists,up |
| 2  | os-ceph-backup1 | 1025M | 78.9G |    0   |     0   |    0   |     0   | exists,up |
+----+-----------------+-------+-------+--------+---------+--------+---------+-----------+
Connection to 10.0.12.91 closed.
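To exercise the backup path itself, create a small volume and back it up; the backup should land on the ceph-mon-backup cluster rather than the primary one. A sketch only; the names are arbitrary and the fixed sleeps are a crude stand-in for proper status polling:

```shell
#!/bin/bash
# End-to-end check of cinder-backup: create a throwaway volume, back it
# up, list backups, then clean up. Volume/backup names are arbitrary and
# the sleeps are rough waits, not guarantees.
smoke_test_backup() {
    openstack volume create --size 1 backup-test-vol
    sleep 30    # give the volume time to become available
    openstack volume backup create --name backup-test backup-test-vol
    sleep 30    # give the backup time to complete
    openstack volume backup list
    openstack volume backup delete backup-test
    openstack volume delete backup-test-vol
}

# Run only when the openstack client is present.
if command -v openstack >/dev/null 2>&1; then
    smoke_test_backup
fi
```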

nova-cloud-controller

01300-deploy-nova-cloud-controller.sh
#!/bin/bash
juju deploy --config config/nova-cloud-controller.yaml -n 3 --to lxd:0,lxd:1,lxd:2 cs:nova-cloud-controller nova-cloud-controller
juju deploy --config config/nova-cloud-controller.yaml cs:hacluster ncc-hacluster
juju add-relation nova-cloud-controller:ha ncc-hacluster:ha
#
juju add-relation nova-cloud-controller:shared-db mysql:shared-db
juju add-relation nova-cloud-controller:identity-service keystone:identity-service
juju add-relation nova-cloud-controller:amqp rabbitmq-server:amqp
#
juju add-relation nova-cloud-controller:memcache memcached:cache
juju add-relation nova-cloud-controller:image-service glance:image-service
juju add-relation nova-cloud-controller:cinder-volume-service cinder:cinder-volume-service
nova-cloud-controller.yaml
nova-cloud-controller:
  network-manager: Neutron
  openstack-origin: cloud:bionic-stein
  vip: 10.0.14.134
ncc-hacluster:
  corosync_transport: unicast
ubuntu@os-client:~/work/openstack/deploy$ bash 01300-deploy-nova-cloud-controller.sh
Located charm "cs:nova-cloud-controller-343".
Deploying charm "cs:nova-cloud-controller-343".
Located charm "cs:hacluster-66".
Deploying charm "cs:hacluster-66".
ERROR application "memcached" has no "memcache" relation
ubuntu@os-client:~/work/openstack/deploy$ juju add-relation nova-cloud-controller:memcache memcached:cache
(venv) ubuntu@os-client:~/work/openstack/workspace$ juju status "nova*"
Model    Controller       Cloud/Region    Version  SLA          Timestamp
default  maas-controller  mymaas/default  2.7.6    unsupported  18:59:52+09:00

App                    Version  Status   Scale  Charm                  Store       Rev  OS      Notes
ncc-hacluster                   active       3  hacluster              jujucharms   66  ubuntu
nova-cloud-controller  19.1.0   blocked      3  nova-cloud-controller  jujucharms  343  ubuntu

Unit                      Workload  Agent      Machine  Public address  Ports                       Message
nova-cloud-controller/0   blocked   executing  0/lxd/8  10.0.12.59      8774/tcp,8775/tcp,8778/tcp  Missing relations: compute; incomplete relations: identity
  ncc-hacluster/2         active    idle                10.0.12.59                                  Unit is ready and clustered
nova-cloud-controller/1*  blocked   executing  1/lxd/8  10.0.12.58      8774/tcp,8775/tcp,8778/tcp  Missing relations: compute
  ncc-hacluster/0*        active    idle                10.0.12.58                                  Unit is ready and clustered
nova-cloud-controller/2   blocked   executing  2/lxd/8  10.0.12.60      8774/tcp,8775/tcp,8778/tcp  Missing relations: compute; incomplete relations: identity
  ncc-hacluster/1         active    idle                10.0.12.60                                  Unit is ready and clustered

Machine  State    DNS         Inst id              Series  AZ       Message
0        started  10.0.12.23  os-controller1       bionic  default  Deployed
0/lxd/8  started  10.0.12.59  juju-a5ab4c-0-lxd-8  bionic  default  Container started
1        started  10.0.12.22  os-controller2       bionic  default  Deployed
1/lxd/8  started  10.0.12.58  juju-a5ab4c-1-lxd-8  bionic  default  Container started
2        started  10.0.12.26  os-controller3       bionic  default  Deployed
2/lxd/8  started  10.0.12.60  juju-a5ab4c-2-lxd-8  bionic  default  Container started

watch status

juju debug-log --include nova
juju status "nova*"
watch -n 1 --color juju status "nova*" --color

nova-compute

01400-deploy-nova-compute.sh
#!/bin/bash
juju deploy --config config/nova-compute.yaml -n 3 --to 3,4,5 cs:nova-compute nova-compute
#
juju add-relation nova-compute:cloud-compute nova-cloud-controller:cloud-compute
juju add-relation nova-compute:amqp rabbitmq-server:amqp
juju add-relation nova-compute:image-service glance:image-service
juju add-relation nova-compute:ceph ceph-mon:client
juju add-relation nova-compute:ceph-access cinder-ceph:ceph-access
nova-compute.yaml
nova-compute:
  config-flags: default_ephemeral_format=ext4
  cpu-mode: custom
  cpu-model: kvm64
  enable-live-migration: true
  enable-resize: true
  migration-auth-type: ssh
  openstack-origin: cloud:bionic-stein
ubuntu@os-client:~/work/openstack/deploy$ bash 01400-deploy-nova-compute.sh
Located charm "cs:nova-compute-314".
Deploying charm "cs:nova-compute-314".

watch status

juju debug-log --include nova
juju status "nova*"
watch -n 1 --color juju status "nova*" --color

Verify Operation

juju status "nova*"

juju ssh nova-cloud-controller/0 sudo crm status
juju ssh nova-cloud-controller/0 ip address show

ping -c 4 10.0.14.134

openstack compute service list
(venv) ubuntu@os-client:~/work/openstack/workspace$ juju status "nova*"
Model    Controller       Cloud/Region    Version  SLA          Timestamp
default  maas-controller  mymaas/default  2.7.6    unsupported  19:22:22+09:00

App                    Version  Status  Scale  Charm                  Store       Rev  OS      Notes
ncc-hacluster                   active      3  hacluster              jujucharms   66  ubuntu
nova-cloud-controller  19.1.0   active      3  nova-cloud-controller  jujucharms  343  ubuntu
nova-compute           19.1.0   active      3  nova-compute           jujucharms  314  ubuntu

Unit                      Workload  Agent  Machine  Public address  Ports                       Message
nova-cloud-controller/0   active    idle   0/lxd/8  10.0.12.59      8774/tcp,8775/tcp,8778/tcp  Unit is ready
  ncc-hacluster/2         active    idle            10.0.12.59                                  Unit is ready and clustered
nova-cloud-controller/1*  active    idle   1/lxd/8  10.0.12.58      8774/tcp,8775/tcp,8778/tcp  Unit is ready
  ncc-hacluster/0*        active    idle            10.0.12.58                                  Unit is ready and clustered
nova-cloud-controller/2   active    idle   2/lxd/8  10.0.12.60      8774/tcp,8775/tcp,8778/tcp  Unit is ready
  ncc-hacluster/1         active    idle            10.0.12.60                                  Unit is ready and clustered
nova-compute/0*           active    idle   3        10.0.12.34                                  Unit is ready
nova-compute/1            active    idle   4        10.0.12.29                                  Unit is ready
nova-compute/2            active    idle   5        10.0.12.30                                  Unit is ready

Machine  State    DNS         Inst id              Series  AZ       Message
0        started  10.0.12.23  os-controller1       bionic  default  Deployed
0/lxd/8  started  10.0.12.59  juju-a5ab4c-0-lxd-8  bionic  default  Container started
1        started  10.0.12.22  os-controller2       bionic  default  Deployed
1/lxd/8  started  10.0.12.58  juju-a5ab4c-1-lxd-8  bionic  default  Container started
2        started  10.0.12.26  os-controller3       bionic  default  Deployed
2/lxd/8  started  10.0.12.60  juju-a5ab4c-2-lxd-8  bionic  default  Container started
3        started  10.0.12.34  os-compute1          bionic  default  Deployed
4        started  10.0.12.29  os-compute2          bionic  default  Deployed
5        started  10.0.12.30  os-compute3          bionic  default  Deployed

(venv) ubuntu@os-client:~/work/openstack/workspace$ ping -c 4 10.0.14.134
PING 10.0.14.134 (10.0.14.134) 56(84) bytes of data.
64 bytes from 10.0.14.134: icmp_seq=1 ttl=64 time=1.85 ms
64 bytes from 10.0.14.134: icmp_seq=2 ttl=64 time=0.608 ms
64 bytes from 10.0.14.134: icmp_seq=3 ttl=64 time=0.266 ms
64 bytes from 10.0.14.134: icmp_seq=4 ttl=64 time=0.466 ms

--- 10.0.14.134 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3041ms
rtt min/avg/max/mdev = 0.266/0.799/1.856/0.622 ms
(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack compute service list
+----+----------------+-------------------------+----------+---------+-------+----------------------------+
| ID | Binary         | Host                    | Zone     | Status  | State | Updated At                 |
+----+----------------+-------------------------+----------+---------+-------+----------------------------+
|  1 | nova-scheduler | juju-a5ab4c-1-lxd-8     | internal | enabled | up    | 2020-05-10T10:22:33.000000 |
| 16 | nova-conductor | juju-a5ab4c-1-lxd-8     | internal | enabled | up    | 2020-05-10T10:22:35.000000 |
| 22 | nova-scheduler | juju-a5ab4c-0-lxd-8     | internal | enabled | up    | 2020-05-10T10:22:39.000000 |
| 34 | nova-conductor | juju-a5ab4c-0-lxd-8     | internal | enabled | up    | 2020-05-10T10:22:31.000000 |
| 40 | nova-conductor | juju-a5ab4c-2-lxd-8     | internal | enabled | up    | 2020-05-10T10:22:34.000000 |
| 43 | nova-scheduler | juju-a5ab4c-2-lxd-8     | internal | enabled | up    | 2020-05-10T10:22:38.000000 |
| 55 | nova-compute   | os-compute1.os.pg1x.net | nova     | enabled | up    | 2020-05-10T10:22:34.000000 |
| 58 | nova-compute   | os-compute3.os.pg1x.net | nova     | enabled | up    | 2020-05-10T10:22:34.000000 |
| 61 | nova-compute   | os-compute2.os.pg1x.net | nova     | enabled | up    | 2020-05-10T10:22:32.000000 |
+----+----------------+-------------------------+----------+---------+-------+----------------------------+

OpenStack Docs: Verify operation
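As a complement to `openstack compute service list`, the hypervisor count can be checked against the topology (three nova-compute hosts in this deployment). A minimal sketch; the expected count of 3 is specific to this lab:

```shell
#!/bin/bash
# Confirm that all nova-compute hosts registered as hypervisors.
# The expected count (3) matches this deployment's topology.
check_hypervisors() {
    local expected=${1:-3}
    local actual
    actual=$(openstack hypervisor list -f value -c "Hypervisor Hostname" | wc -l)
    if [ "$actual" -eq "$expected" ]; then
        echo "OK: $actual hypervisors registered"
    else
        echo "WARNING: expected $expected hypervisors, found $actual"
    fi
}

if command -v openstack >/dev/null 2>&1; then
    check_hypervisors 3
fi
```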

neutron

01500-deploy-neutron.sh
#!/bin/bash
juju deploy --config config/neutron.yaml -n 3 --to 0,1,2 cs:neutron-gateway neutron-gateway
juju deploy --config config/neutron.yaml -n 3 --to lxd:0,lxd:1,lxd:2 cs:neutron-api neutron-api
juju deploy cs:neutron-openvswitch neutron-openvswitch
#
juju deploy --config config/neutron.yaml cs:hacluster neutron-hacluster
juju add-relation neutron-api:ha neutron-hacluster:ha
#
juju add-relation neutron-gateway:quantum-network-service nova-cloud-controller:quantum-network-service
juju add-relation neutron-gateway:amqp rabbitmq-server:amqp
#
juju add-relation neutron-api:shared-db mysql:shared-db
juju add-relation neutron-api:identity-service keystone:identity-service
juju add-relation neutron-api:amqp rabbitmq-server:amqp
#
juju add-relation neutron-api:neutron-plugin-api neutron-gateway:neutron-plugin-api
juju add-relation neutron-api:neutron-plugin-api neutron-openvswitch:neutron-plugin-api
juju add-relation neutron-api:neutron-api nova-cloud-controller:neutron-api
#
juju add-relation neutron-openvswitch:amqp rabbitmq-server:amqp
juju add-relation neutron-openvswitch:neutron-plugin nova-compute:neutron-plugin
neutron.yaml
neutron-gateway:
  data-port: br-ex:ens34
  bridge-mappings: physnet1:br-ex
  openstack-origin: cloud:bionic-stein
neutron-api:
  default-tenant-network-type: vxlan
  enable-l3ha: true
  flat-network-providers: physnet1
  max-l3-agents-per-router: 3
  neutron-security-groups: true
  openstack-origin: cloud:bionic-stein
  overlay-network-type: vxlan
  vip: 10.0.14.135
neutron-hacluster:
  corosync_transport: unicast
ubuntu@os-client:~/work/openstack/deploy$ bash 01500-deploy-neutron.sh
Located charm "cs:neutron-gateway-280".
Deploying charm "cs:neutron-gateway-280".
Located charm "cs:neutron-api-284".
Deploying charm "cs:neutron-api-284".
Located charm "cs:neutron-openvswitch-274".
Deploying charm "cs:neutron-openvswitch-274".
Located charm "cs:hacluster-66".
Deploying charm "cs:hacluster-66".

watch status

juju debug-log --include neutron --include nova-compute
juju status "neutron*"
watch -n 1 --color juju status "neutron*" --color

Verify Operation

juju status "neutron*"

juju ssh neutron-api/0 sudo crm status
juju ssh neutron-api/0 ip address show

ping -c 4 10.0.14.135

openstack extension list --network
openstack network agent list

juju ssh neutron-gateway/0 sudo ovs-vsctl show
juju ssh neutron-gateway/1 sudo ovs-vsctl show
juju ssh neutron-gateway/2 sudo ovs-vsctl show
juju ssh nova-compute/0 sudo ovs-vsctl show
juju ssh nova-compute/1 sudo ovs-vsctl show
juju ssh nova-compute/2 sudo ovs-vsctl show
juju ssh neutron-gateway/0 ip link show
juju ssh neutron-gateway/1 ip link show
juju ssh neutron-gateway/2 ip link show
juju ssh nova-compute/0 ip link show
juju ssh nova-compute/1 ip link show
juju ssh nova-compute/2 ip link show
(venv) ubuntu@os-client:~/work/openstack/workspace$ juju status "neutron*"
Model    Controller       Cloud/Region    Version  SLA          Timestamp
default  maas-controller  mymaas/default  2.7.6    unsupported  20:43:28+09:00

App                  Version  Status  Scale  Charm                Store       Rev  OS      Notes
neutron-api          14.1.0   active      3  neutron-api          jujucharms  284  ubuntu
neutron-gateway      14.1.0   active      3  neutron-gateway      jujucharms  280  ubuntu
neutron-hacluster             active      3  hacluster            jujucharms   66  ubuntu
neutron-openvswitch  14.1.0   active      3  neutron-openvswitch  jujucharms  274  ubuntu
nova-compute         19.1.0   active      3  nova-compute         jujucharms  314  ubuntu

Unit                      Workload  Agent  Machine  Public address  Ports     Message
neutron-api/0             active    idle   0/lxd/9  10.0.12.62      9696/tcp  Unit is ready
  neutron-hacluster/2     active    idle            10.0.12.62                Unit is ready and clustered
neutron-api/1             active    idle   1/lxd/9  10.0.12.61      9696/tcp  Unit is ready
  neutron-hacluster/1     active    idle            10.0.12.61                Unit is ready and clustered
neutron-api/2*            active    idle   2/lxd/9  10.0.12.63      9696/tcp  Unit is ready
  neutron-hacluster/0*    active    idle            10.0.12.63                Unit is ready and clustered
neutron-gateway/0*        active    idle   0        10.0.12.23                Unit is ready
neutron-gateway/1         active    idle   1        10.0.12.22                Unit is ready
neutron-gateway/2         active    idle   2        10.0.12.26                Unit is ready
nova-compute/0*           active    idle   3        10.0.12.34                Unit is ready
  neutron-openvswitch/1   active    idle            10.0.12.34                Unit is ready
nova-compute/1            active    idle   4        10.0.12.29                Unit is ready
  neutron-openvswitch/2   active    idle            10.0.12.29                Unit is ready
nova-compute/2            active    idle   5        10.0.12.30                Unit is ready
  neutron-openvswitch/0*  active    idle            10.0.12.30                Unit is ready

Machine  State    DNS         Inst id              Series  AZ       Message
0        started  10.0.12.23  os-controller1       bionic  default  Deployed
0/lxd/9  started  10.0.12.62  juju-a5ab4c-0-lxd-9  bionic  default  Container started
1        started  10.0.12.22  os-controller2       bionic  default  Deployed
1/lxd/9  started  10.0.12.61  juju-a5ab4c-1-lxd-9  bionic  default  Container started
2        started  10.0.12.26  os-controller3       bionic  default  Deployed
2/lxd/9  started  10.0.12.63  juju-a5ab4c-2-lxd-9  bionic  default  Container started
3        started  10.0.12.34  os-compute1          bionic  default  Deployed
4        started  10.0.12.29  os-compute2          bionic  default  Deployed
5        started  10.0.12.30  os-compute3          bionic  default  Deployed
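In a healthy HA deployment every unit in the table above reports `active`/`idle`, so a quick sanity check is to count units whose workload status is anything else. A sketch over a few lines captured from the output above (in practice, pipe live `juju status` output into the same awk filter):

```shell
# Count units whose workload status (2nd column) is not "active"; expect 0.
# Sample lines copied verbatim from the status output above.
juju_status_units() {
  cat <<'EOF'
neutron-api/0             active    idle   0/lxd/9  10.0.12.62      9696/tcp  Unit is ready
neutron-gateway/0*        active    idle   0        10.0.12.23                Unit is ready
nova-compute/0*           active    idle   3        10.0.12.34                Unit is ready
EOF
}
juju_status_units | awk '$2 != "active" {bad++} END {print bad+0}'
```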

(venv) ubuntu@os-client:~/work/openstack/workspace$ juju ssh neutron-api/0 sudo crm status
Stack: corosync
Current DC: juju-a5ab4c-1-lxd-9 (version 1.1.18-2b07d5c5a9) - partition with quorum
Last updated: Sun May 10 11:43:42 2020
Last change: Sun May 10 11:35:13 2020 by hacluster via crmd on juju-a5ab4c-0-lxd-9

3 nodes configured
4 resources configured

Online: [ juju-a5ab4c-0-lxd-9 juju-a5ab4c-1-lxd-9 juju-a5ab4c-2-lxd-9 ]

Full list of resources:

 Resource Group: grp_neutron_vips
     res_neutron_e72968a_vip    (ocf::heartbeat:IPaddr2):       Started juju-a5ab4c-0-lxd-9
 Clone Set: cl_neutron_haproxy [res_neutron_haproxy]
     Started: [ juju-a5ab4c-0-lxd-9 juju-a5ab4c-1-lxd-9 juju-a5ab4c-2-lxd-9 ]

Connection to 10.0.12.62 closed.
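The `crm status` output shows the `IPaddr2` VIP resource started on exactly one node while the haproxy clone runs on all three. To find the current VIP holder programmatically, the resource line can be parsed; a sketch against the line captured above (pipe live `sudo crm status` output instead):

```shell
# Print the node currently hosting the neutron VIP (last field of the
# IPaddr2 resource line). Sample line copied from the crm status output above.
crm_out() {
  cat <<'EOF'
 Resource Group: grp_neutron_vips
     res_neutron_e72968a_vip    (ocf::heartbeat:IPaddr2):       Started juju-a5ab4c-0-lxd-9
EOF
}
crm_out | awk '/IPaddr2/ {print $NF}'
```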
(venv) ubuntu@os-client:~/work/openstack/workspace$ juju ssh neutron-api/0 ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
24: eth0@if25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:6c:69:bf brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.12.62/22 brd 10.0.15.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.0.14.135/22 brd 10.0.15.255 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe6c:69bf/64 scope link
       valid_lft forever preferred_lft forever
Connection to 10.0.12.62 closed.
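Note the `secondary` address on `eth0`: 10.0.14.135 is the hacluster-managed VIP, layered on top of the unit's own 10.0.12.62. Extracting it from `ip address show` output is a one-liner; a sketch over the captured lines (pipe the live command output instead):

```shell
# Print any secondary (VIP) addresses, stripping the /22 prefix length.
# Sample lines copied from the `ip address show` output above.
ip_out() {
  cat <<'EOF'
    inet 10.0.12.62/22 brd 10.0.15.255 scope global eth0
    inet 10.0.14.135/22 brd 10.0.15.255 scope global secondary eth0
EOF
}
ip_out | awk '/secondary/ {sub(/\/.*/, "", $2); print $2}'
```

The ping test that follows confirms this VIP answers from the client network.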
(venv) ubuntu@os-client:~/work/openstack/workspace$ ping -c 4 10.0.14.135
PING 10.0.14.135 (10.0.14.135) 56(84) bytes of data.
64 bytes from 10.0.14.135: icmp_seq=1 ttl=64 time=0.529 ms
64 bytes from 10.0.14.135: icmp_seq=2 ttl=64 time=0.315 ms
64 bytes from 10.0.14.135: icmp_seq=3 ttl=64 time=0.337 ms
64 bytes from 10.0.14.135: icmp_seq=4 ttl=64 time=0.483 ms

--- 10.0.14.135 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3052ms
rtt min/avg/max/mdev = 0.315/0.416/0.529/0.091 ms
(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack extension list --network
+----------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
| Name                                                                                                                                                           | Alias                            | Description                                                                                                                                              |
+----------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
| Address scope                                                                                                                                                  | address-scope                    | Address scopes extension.                                                                                                                                |
| agent                                                                                                                                                          | agent                            | The agent management extension.                                                                                                                          |
| Agent's Resource View Synced to Placement                                                                                                                      | agent-resources-synced           | Stores success/failure of last sync to Placement                                                                                                         |
| Allowed Address Pairs                                                                                                                                          | allowed-address-pairs            | Provides allowed address pairs                                                                                                                           |
| Auto Allocated Topology Services                                                                                                                               | auto-allocated-topology          | Auto Allocated Topology Services.                                                                                                                        |
| Availability Zone                                                                                                                                              | availability_zone                | The availability zone extension.                                                                                                                         |
| Availability Zone Filter Extension                                                                                                                             | availability_zone_filter         | Add filter parameters to AvailabilityZone resource                                                                                                       |
| Default Subnetpools                                                                                                                                            | default-subnetpools              | Provides ability to mark and use a subnetpool as the default.                                                                                            |
| DHCP Agent Scheduler                                                                                                                                           | dhcp_agent_scheduler             | Schedule networks among dhcp agents                                                                                                                      |
| Distributed Virtual Router                                                                                                                                     | dvr                              | Enables configuration of Distributed Virtual Routers.                                                                                                    |
| Empty String Filtering Extension                                                                                                                               | empty-string-filtering           | Allow filtering by attributes with empty string value                                                                                                    |
| Neutron external network                                                                                                                                       | external-net                     | Adds external network attribute to network resource.                                                                                                     |
| Neutron Extra DHCP options                                                                                                                                     | extra_dhcp_opt                   | Extra options configuration for DHCP. For example PXE boot options to DHCP clients can be specified (e.g. tftp-server, server-ip-address, bootfile-name) |
| Neutron Extra Route                                                                                                                                            | extraroute                       | Extra routes configuration for L3 router                                                                                                                 |
| Floating IP Port Details Extension                                                                                                                             | fip-port-details                 | Add port_details attribute to Floating IP resource                                                                                                       |
| Neutron Service Flavors                                                                                                                                        | flavors                          | Flavor specification for Neutron advanced services.                                                                                                      |
| Floating IP Pools Extension                                                                                                                                    | floatingip-pools                 | Provides a floating IP pools API.                                                                                                                        |
| IP Allocation                                                                                                                                                  | ip_allocation                    | IP allocation extension.                                                                                                                                 |
| IP address substring filtering                                                                                                                                 | ip-substring-filtering           | Provides IP address substring filtering when listing ports                                                                                               |
| L2 Adjacency                                                                                                                                                   | l2_adjacency                     | Display L2 Adjacency for Neutron Networks.                                                                                                               |
| Neutron L3 Router                                                                                                                                              | router                           | Router abstraction for basic L3 forwarding between L2 Neutron networks and access to external networks via a NAT gateway.                                |
| Neutron L3 Configurable external gateway mode                                                                                                                  | ext-gw-mode                      | Extension of the router abstraction for specifying whether SNAT should occur on the external gateway                                                     |
| HA Router extension                                                                                                                                            | l3-ha                            | Adds HA capability to routers.                                                                                                                           |
| Router Flavor Extension                                                                                                                                        | l3-flavors                       | Flavor support for routers.                                                                                                                              |
| Prevent L3 router ports IP address change extension                                                                                                            | l3-port-ip-change-not-allowed    | Prevent change of IP address for some L3 router ports                                                                                                    |
| L3 Agent Scheduler                                                                                                                                             | l3_agent_scheduler               | Schedule routers among l3 agents                                                                                                                         |
| Neutron Metering                                                                                                                                               | metering                         | Neutron Metering extension.                                                                                                                              |
| Multi Provider Network                                                                                                                                         | multi-provider                   | Expose mapping of virtual networks to multiple physical networks                                                                                         |
| Network MTU                                                                                                                                                    | net-mtu                          | Provides MTU attribute for a network resource.                                                                                                           |
| Network MTU (writable)                                                                                                                                         | net-mtu-writable                 | Provides a writable MTU attribute for a network resource.                                                                                                |
| Network Availability Zone                                                                                                                                      | network_availability_zone        | Availability zone support for network.                                                                                                                   |
| Network IP Availability                                                                                                                                        | network-ip-availability          | Provides IP availability data for each network and subnet.                                                                                               |
| Pagination support                                                                                                                                             | pagination                       | Extension that indicates that pagination is enabled.                                                                                                     |
| Neutron Port MAC address regenerate                                                                                                                            | port-mac-address-regenerate      | Network port MAC address regenerate                                                                                                                      |
| Port Binding                                                                                                                                                   | binding                          | Expose port bindings of a virtual port to external application                                                                                           |
| Port Bindings Extended                                                                                                                                         | binding-extended                 | Expose port bindings of a virtual port to external application                                                                                           |
| project_id field enabled                                                                                                                                       | project-id                       | Extension that indicates that project_id field is enabled.                                                                                               |
| Provider Network                                                                                                                                               | provider                         | Expose mapping of virtual networks to physical networks                                                                                                  |
| Quota management support                                                                                                                                       | quotas                           | Expose functions for quotas management per tenant                                                                                                        |
| Quota details management support                                                                                                                               | quota_details                    | Expose functions for quotas usage statistics per project                                                                                                 |
| RBAC Policies                                                                                                                                                  | rbac-policies                    | Allows creation and modification of policies that control tenant access to resources.                                                                    |
| Add security_group type to network RBAC                                                                                                                        | rbac-security-groups             | Add security_group type to network RBAC                                                                                                                  |
| If-Match constraints based on revision_number                                                                                                                  | revision-if-match                | Extension indicating that If-Match based on revision_number is supported.                                                                                |
| Resource revision numbers                                                                                                                                      | standard-attr-revisions          | This extension will display the revision number of neutron resources.                                                                                    |
| Router Availability Zone                                                                                                                                       | router_availability_zone         | Availability zone support for router.                                                                                                                    |
| Port filtering on security groups                                                                                                                              | port-security-groups-filtering   | Provides security groups filtering when listing ports                                                                                                    |
| security-group                                                                                                                                                 | security-group                   | The security groups extension.                                                                                                                           |
| Segment                                                                                                                                                        | segment                          | Segments extension.                                                                                                                                      |
| Segments peer-subnet host routes                                                                                                                               | segments-peer-subnet-host-routes | Add host routes to subnets on a routed network (segments)                                                                                                |
| Neutron Service Type Management                                                                                                                                | service-type                     | API for retrieving service providers for Neutron advanced services                                                                                       |
| Sorting support                                                                                                                                                | sorting                          | Extension that indicates that sorting is enabled.                                                                                                        |
| Standard Attribute Segment Extension                                                                                                                           | standard-attr-segment            | Add standard attributes to Segment resource                                                                                                              |
| standard-attr-description                                                                                                                                      | standard-attr-description        | Extension to add descriptions to standard attributes                                                                                                     |
| Subnet Onboard                                                                                                                                                 | subnet_onboard                   | Provides support for onboarding subnets into subnet pools                                                                                                |
| Subnet SegmentID (writable)                                                                                                                                    | subnet-segmentid-writable        | Provides a writable segment_id attribute for a subnet resource.                                                                                          |
| Subnet service types                                                                                                                                           | subnet-service-types             | Provides ability to set the subnet service_types field                                                                                                   |
| Subnet Allocation                                                                                                                                              | subnet_allocation                | Enables allocation of subnets from a subnet pool                                                                                                         |
| Tag support for resources with standard attribute: port, subnet, subnetpool, network, router, floatingip, policy, security_group, trunk, network_segment_range | standard-attr-tag                | Enables to set tag on resources with standard attribute.                                                                                                 |
| Resource timestamps                                                                                                                                            | standard-attr-timestamp          | Adds created_at and updated_at fields to all Neutron resources that have Neutron standard attributes.                                                    |
| FWaaS v2                                                                                                                                                       | fwaas_v2                         | Provides support for firewall-as-a-service version 2                                                                                                     |
| Neutron BGP Dynamic Routing Extension                                                                                                                          | bgp                              | Discover and advertise routes for Neutron prefixes dynamically via BGP                                                                                   |
| BGP 4-byte AS numbers                                                                                                                                          | bgp_4byte_asn                    | Support bgp 4-byte AS numbers                                                                                                                            |
| BGP Dynamic Routing Agent Scheduler                                                                                                                            | bgp_dragent_scheduler            | Schedules BgpSpeakers on BgpDrAgent                                                                                                                      |
| Add a fall threshold to health monitor                                                                                                                         | hm_max_retries_down              | Add a fall threshold to health monitor                                                                                                                   |
| L7 capabilities for LBaaSv2                                                                                                                                    | l7                               | Adding L7 policies and rules support for LBaaSv2                                                                                                         |
| Load Balancer Graph                                                                                                                                            | lb-graph                         | Extension for allowing the creation of load balancers with a full graph in one API request.                                                              |
| Create loadbalancer with network_id                                                                                                                            | lb_network_vip                   | Create loadbalancer with network_id                                                                                                                      |
| Loadbalancer Agent Scheduler V2                                                                                                                                | lbaas_agent_schedulerv2          | Schedule load balancers among lbaas agents                                                                                                               |
| LoadBalancing service v2                                                                                                                                       | lbaasv2                          | Extension for LoadBalancing service v2 (deprecated)                                                                                                      |
| Shared pools for LBaaSv2                                                                                                                                       | shared_pools                     | Allow pools to be shared among listeners for LBaaSv2                                                                                                     |
+----------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack network agent list
+--------------------------------------+----------------------+-------------------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type           | Host                    | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+----------------------+-------------------------+-------------------+-------+-------+---------------------------+
| 00727e53-5de0-4e70-8736-497fdf6f671b | Metadata agent       | os-controller1          | None              | :-)   | UP    | neutron-metadata-agent    |
| 1cf41443-22c4-4201-a415-a109a28ea15c | Loadbalancerv2 agent | os-controller3          | None              | :-)   | UP    | neutron-lbaasv2-agent     |
| 20e38481-1f48-4aee-8e16-b1163d890a5b | Open vSwitch agent   | os-controller1          | None              | :-)   | UP    | neutron-openvswitch-agent |
| 323471db-4541-496e-ad90-7012acafd6a0 | L3 agent             | os-controller2          | nova              | :-)   | UP    | neutron-l3-agent          |
| 34f21131-158d-4f1d-8a8d-efcd0655d322 | Loadbalancerv2 agent | os-controller2          | None              | :-)   | UP    | neutron-lbaasv2-agent     |
| 37aa27ad-10f3-4c5e-8e48-3d8a9f949c11 | L3 agent             | os-controller1          | nova              | :-)   | UP    | neutron-l3-agent          |
| 439c8f99-ce62-45a4-9948-6ac7b8d11053 | Loadbalancerv2 agent | os-controller1          | None              | :-)   | UP    | neutron-lbaasv2-agent     |
| 464fda10-7b43-40db-bbfa-0a2a396d1c97 | L3 agent             | os-controller3          | nova              | :-)   | UP    | neutron-l3-agent          |
| 5bec8f36-1f45-471c-9e72-d4eb8195fdc0 | Metering agent       | os-controller3          | None              | :-)   | UP    | neutron-metering-agent    |
| 634cb29c-9327-4384-b706-24dc19cd3711 | Open vSwitch agent   | os-compute1.os.pg1x.net | None              | :-)   | UP    | neutron-openvswitch-agent |
| 6bacb8fe-f874-4564-8be1-9eca490352d4 | Metadata agent       | os-controller3          | None              | :-)   | UP    | neutron-metadata-agent    |
| 72e48b44-747a-4c9f-b04a-4a2aa13d6b2b | Metadata agent       | os-controller2          | None              | :-)   | UP    | neutron-metadata-agent    |
| 76d6c91e-0670-42ea-b955-582002c68160 | Metering agent       | os-controller2          | None              | :-)   | UP    | neutron-metering-agent    |
| 7c5c656e-fa5e-408c-b3d5-220f1a1cb329 | Open vSwitch agent   | os-controller3          | None              | :-)   | UP    | neutron-openvswitch-agent |
| 7f8beba6-244d-478f-8393-279dab0b1a18 | DHCP agent           | os-controller3          | nova              | :-)   | UP    | neutron-dhcp-agent        |
| a2d4c176-906c-4482-828b-d2e3f5461f22 | Open vSwitch agent   | os-controller2          | None              | :-)   | UP    | neutron-openvswitch-agent |
| b40f71ad-8164-4070-ad5a-52f16ef99519 | Open vSwitch agent   | os-compute3.os.pg1x.net | None              | :-)   | UP    | neutron-openvswitch-agent |
| e62005c8-5996-47eb-9c9f-7959e9044246 | DHCP agent           | os-controller1          | nova              | :-)   | UP    | neutron-dhcp-agent        |
| ed8f85ab-7dbb-407c-831c-234990edf2c6 | Metering agent       | os-controller1          | None              | :-)   | UP    | neutron-metering-agent    |
| f9837426-0bc9-41e4-b6df-2ae1670cba11 | DHCP agent           | os-controller2          | nova              | :-)   | UP    | neutron-dhcp-agent        |
| fecd0d19-3cbf-490e-b4a1-e453caa74921 | Open vSwitch agent   | os-compute2.os.pg1x.net | None              | :-)   | UP    | neutron-openvswitch-agent |
+--------------------------------------+----------------------+-------------------------+-------------------+-------+-------+---------------------------+
(venv) ubuntu@os-client:~/work/openstack/workspace$ juju ssh neutron-gateway/0 sudo ovs-vsctl show
ecbdf222-8d46-4708-83f4-dd60bef30625
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-ex
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "ens34"
            Interface "ens34"
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
    ovs_version: "2.11.0"
Connection to 10.0.12.23 closed.
(venv) ubuntu@os-client:~/work/openstack/workspace$ juju ssh neutron-gateway/1 sudo ovs-vsctl show
bc43cb61-0038-4aae-91a2-3e914db8af84
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
    Bridge br-ex
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-ex
            Interface br-ex
                type: internal
        Port "ens34"
            Interface "ens34"
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    ovs_version: "2.11.0"
Connection to 10.0.12.22 closed.
(venv) ubuntu@os-client:~/work/openstack/workspace$ juju ssh neutron-gateway/2 sudo ovs-vsctl show
9c50d98e-4637-4bd0-b698-018dd64eda5d
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-ex
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "ens34"
            Interface "ens34"
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "2.11.0"
Connection to 10.0.12.26 closed.
(venv) ubuntu@os-client:~/work/openstack/workspace$ juju ssh nova-compute/0 sudo ovs-vsctl show
9c9c1811-47a3-4ce9-8baa-dba35340dcf2
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-data
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port phy-br-data
            Interface phy-br-data
                type: patch
                options: {peer=int-br-data}
        Port br-data
            Interface br-data
                type: internal
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port int-br-data
            Interface int-br-data
                type: patch
                options: {peer=phy-br-data}
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.11.0"
Connection to 10.0.12.34 closed.
(venv) ubuntu@os-client:~/work/openstack/workspace$ juju ssh nova-compute/1 sudo ovs-vsctl show
347cd81a-e30a-4f60-b390-bc9a3bf8d4e1
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-data
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port phy-br-data
            Interface phy-br-data
                type: patch
                options: {peer=int-br-data}
        Port br-data
            Interface br-data
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port int-br-data
            Interface int-br-data
                type: patch
                options: {peer=phy-br-data}
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "2.11.0"
Connection to 10.0.12.29 closed.
(venv) ubuntu@os-client:~/work/openstack/workspace$ juju ssh nova-compute/2 sudo ovs-vsctl show
d287b121-de0a-4535-a6f3-b63d91a33ba4
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-data
            Interface int-br-data
                type: patch
                options: {peer=phy-br-data}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-data
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port phy-br-data
            Interface phy-br-data
                type: patch
                options: {peer=int-br-data}
        Port br-data
            Interface br-data
                type: internal
    ovs_version: "2.11.0"
Connection to 10.0.12.30 closed.
(venv) ubuntu@os-client:~/work/openstack/workspace$ juju ssh neutron-gateway/0 ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:9e:fe:75 brd ff:ff:ff:ff:ff:ff
3: ens34: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc fq_codel master ovs-system state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:9e:fe:7f brd ff:ff:ff:ff:ff:ff
4: br-ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 8a:0b:db:82:e9:82 brd ff:ff:ff:ff:ff:ff
5: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 2a:40:a6:a7:c5:cc brd ff:ff:ff:ff:ff:ff
7: vethVIQQ7X@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether fe:26:d5:70:6b:8b brd ff:ff:ff:ff:ff:ff link-netnsid 0
9: veth66FHX4@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether fe:2d:96:9d:32:1d brd ff:ff:ff:ff:ff:ff link-netnsid 1
11: vethLENBAF@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether fe:b0:88:a6:e8:24 brd ff:ff:ff:ff:ff:ff link-netnsid 2
13: vethGC6UWW@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether fe:bf:4e:da:a6:e3 brd ff:ff:ff:ff:ff:ff link-netnsid 3
15: vethNWSBV6@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether fe:91:23:0a:2f:5b brd ff:ff:ff:ff:ff:ff link-netnsid 4
17: vethP3S8IX@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether fe:96:c2:fe:89:1f brd ff:ff:ff:ff:ff:ff link-netnsid 5
19: vethFHE4VR@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether fe:45:24:be:0d:75 brd ff:ff:ff:ff:ff:ff link-netnsid 6
21: vethLK38H5@if20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether fe:93:1b:62:6b:ce brd ff:ff:ff:ff:ff:ff link-netnsid 7
23: vethX2INUE@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether fe:ba:29:62:bc:74 brd ff:ff:ff:ff:ff:ff link-netnsid 8
25: veth4B56C6@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether fe:67:51:07:b1:f7 brd ff:ff:ff:ff:ff:ff link-netnsid 9
26: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 16:7a:ad:a0:1e:bb brd ff:ff:ff:ff:ff:ff
27: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether ce:e8:33:ac:2d:4f brd ff:ff:ff:ff:ff:ff
28: br-ex: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:9e:fe:7f brd ff:ff:ff:ff:ff:ff
29: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 3e:d8:ff:17:7b:4c brd ff:ff:ff:ff:ff:ff
Connection to 10.0.12.23 closed.
(venv) ubuntu@os-client:~/work/openstack/workspace$ juju ssh neutron-gateway/1 ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:0e:75:e3 brd ff:ff:ff:ff:ff:ff
3: ens34: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc fq_codel master ovs-system state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:0e:75:ed brd ff:ff:ff:ff:ff:ff
4: br-ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether fa:c8:af:6c:47:fe brd ff:ff:ff:ff:ff:ff
5: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 3a:b5:94:e2:2d:c5 brd ff:ff:ff:ff:ff:ff
7: vethU3HVRG@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether fe:c9:a3:65:19:14 brd ff:ff:ff:ff:ff:ff link-netnsid 0
9: vethTN1IGA@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether fe:3a:a2:13:4b:d7 brd ff:ff:ff:ff:ff:ff link-netnsid 1
11: vethSVPAIW@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether fe:67:ca:e8:07:98 brd ff:ff:ff:ff:ff:ff link-netnsid 2
13: veth1VOOM6@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether fe:ec:d9:48:ab:bc brd ff:ff:ff:ff:ff:ff link-netnsid 3
15: vethAIEK1W@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether fe:84:b7:50:f9:a2 brd ff:ff:ff:ff:ff:ff link-netnsid 4
17: vethSGCTWO@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether fe:dc:04:f5:a6:cf brd ff:ff:ff:ff:ff:ff link-netnsid 5
19: vethPKF1DK@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether fe:c2:02:b8:b2:87 brd ff:ff:ff:ff:ff:ff link-netnsid 6
21: vethS5W69S@if20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether fe:29:cf:d7:c4:e3 brd ff:ff:ff:ff:ff:ff link-netnsid 7
23: vethAI2BTT@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether fe:28:06:ae:d5:a3 brd ff:ff:ff:ff:ff:ff link-netnsid 8
25: vethN82DI4@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether fe:f8:2c:84:86:47 brd ff:ff:ff:ff:ff:ff link-netnsid 9
26: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 86:30:0a:54:05:22 brd ff:ff:ff:ff:ff:ff
27: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 7e:b5:72:d8:e5:4e brd ff:ff:ff:ff:ff:ff
28: br-ex: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:0e:75:ed brd ff:ff:ff:ff:ff:ff
29: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 8e:d4:c7:45:e1:4f brd ff:ff:ff:ff:ff:ff
Connection to 10.0.12.22 closed.
(venv) ubuntu@os-client:~/work/openstack/workspace$ juju ssh neutron-gateway/2 ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:d5:ae:46 brd ff:ff:ff:ff:ff:ff
3: ens34: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc fq_codel master ovs-system state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:d5:ae:50 brd ff:ff:ff:ff:ff:ff
4: br-ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether fe:f8:85:f3:3b:6c brd ff:ff:ff:ff:ff:ff
5: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether be:11:8d:2b:75:59 brd ff:ff:ff:ff:ff:ff
7: veth5IKG7L@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether fe:76:e7:fe:2f:8e brd ff:ff:ff:ff:ff:ff link-netnsid 0
9: vethQS9TMX@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether fe:25:08:3c:87:bf brd ff:ff:ff:ff:ff:ff link-netnsid 1
11: vethTOA1OT@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether fe:02:7d:0d:b4:95 brd ff:ff:ff:ff:ff:ff link-netnsid 2
13: vethTJ16DK@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether fe:b7:d0:3a:5f:c8 brd ff:ff:ff:ff:ff:ff link-netnsid 3
15: veth9UHKQE@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether fe:c2:85:6b:63:2e brd ff:ff:ff:ff:ff:ff link-netnsid 4
17: veth30AKNE@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether fe:49:e3:b3:05:dd brd ff:ff:ff:ff:ff:ff link-netnsid 5
19: veth2SVN8L@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether fe:e1:7a:c7:7e:62 brd ff:ff:ff:ff:ff:ff link-netnsid 6
21: vethTP947O@if20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether fe:22:a3:9a:b6:b0 brd ff:ff:ff:ff:ff:ff link-netnsid 7
23: vethFFYBNY@if22: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether fe:2c:0e:12:6d:26 brd ff:ff:ff:ff:ff:ff link-netnsid 8
25: vethKQVDUF@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-ens33 state UP mode DEFAULT group default qlen 1000
    link/ether fe:1f:f4:5e:55:36 brd ff:ff:ff:ff:ff:ff link-netnsid 9
26: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 52:59:0c:bf:98:c7 brd ff:ff:ff:ff:ff:ff
27: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 4e:f4:9e:c5:f0:4b brd ff:ff:ff:ff:ff:ff
28: br-ex: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:d5:ae:50 brd ff:ff:ff:ff:ff:ff
29: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether c2:26:c9:e5:d9:4c brd ff:ff:ff:ff:ff:ff
Connection to 10.0.12.26 closed.
(venv) ubuntu@os-client:~/work/openstack/workspace$ juju ssh nova-compute/0 ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:66:59:b9 brd ff:ff:ff:ff:ff:ff
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:66:59:c3 brd ff:ff:ff:ff:ff:ff
6: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 92:1c:c6:a3:0f:05 brd ff:ff:ff:ff:ff:ff
7: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 02:01:7c:25:f7:42 brd ff:ff:ff:ff:ff:ff
8: br-ex: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 9a:fe:84:a0:9a:46 brd ff:ff:ff:ff:ff:ff
9: br-data: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 9e:b1:7d:06:ad:4d brd ff:ff:ff:ff:ff:ff
10: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 02:0e:70:f2:c1:4a brd ff:ff:ff:ff:ff:ff
Connection to 10.0.12.34 closed.
(venv) ubuntu@os-client:~/work/openstack/workspace$ juju ssh nova-compute/1 ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:2d:6e:18 brd ff:ff:ff:ff:ff:ff
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:2d:6e:22 brd ff:ff:ff:ff:ff:ff
6: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 72:2d:96:41:4a:76 brd ff:ff:ff:ff:ff:ff
7: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 4e:4c:9f:a8:de:4e brd ff:ff:ff:ff:ff:ff
8: br-ex: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether e2:be:cd:88:4a:45 brd ff:ff:ff:ff:ff:ff
9: br-data: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 6e:e5:53:83:84:46 brd ff:ff:ff:ff:ff:ff
10: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 3a:38:05:fa:bb:48 brd ff:ff:ff:ff:ff:ff
Connection to 10.0.12.29 closed.
(venv) ubuntu@os-client:~/work/openstack/workspace$ juju ssh nova-compute/2 ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:8f:ad:93 brd ff:ff:ff:ff:ff:ff
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:8f:ad:9d brd ff:ff:ff:ff:ff:ff
6: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether d2:26:94:3d:96:67 brd ff:ff:ff:ff:ff:ff
7: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 46:ff:18:49:1f:40 brd ff:ff:ff:ff:ff:ff
8: br-ex: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether be:de:d3:5a:e3:42 brd ff:ff:ff:ff:ff:ff
9: br-data: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether 66:68:89:95:95:42 brd ff:ff:ff:ff:ff:ff
10: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
    link/ether be:11:cb:17:06:40 brd ff:ff:ff:ff:ff:ff
Connection to 10.0.12.30 closed.
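
The per-unit `ovs-vsctl show` / `ip link show` checks above can be collected in one pass with a small loop. This is only a convenience sketch; it assumes the juju client is already pointed at this model, and the unit names match this deployment:

```shell
#!/bin/sh
# Dump OVS topology and host link state from every gateway and compute unit.
# Assumption: juju is configured for this model (not verified here).
for unit in neutron-gateway/0 neutron-gateway/1 neutron-gateway/2 \
            nova-compute/0 nova-compute/1 nova-compute/2; do
    echo "=== ${unit} ==="
    juju ssh "${unit}" sudo ovs-vsctl show
    juju ssh "${unit}" ip link show
done
```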

take snapshot “neutron”

Oops, I forgot to enable namespace-tenants in ceph-radosgw

If you already enabled namespace-tenants, skip this step.

If ceph-radosgw is already deployed, changing the namespace-tenants config does not take effect.

You must re-deploy ceph-radosgw.

ubuntu@os-client:~/work/openstack/deploy$ juju config ceph-radosgw namespace-tenants
falseubuntu@os-client:~/work/openstack/deploy$
ubuntu@os-client:~/work/openstack/deploy$ juju status "ceph*"
Model    Controller       Cloud/Region    Version  SLA          Timestamp
default  maas-controller  mymaas/default  2.7.6    unsupported  18:16:52+09:00

App                     Version  Status  Scale  Charm         Store       Rev  OS      Notes
ceph-mon                13.2.8   active      3  ceph-mon      jujucharms   46  ubuntu
ceph-osd                13.2.8   active      3  ceph-osd      jujucharms  301  ubuntu
ceph-radosgw            13.2.8   active      3  ceph-radosgw  jujucharms  286  ubuntu
ceph-radosgw-hacluster           active      3  hacluster     jujucharms   66  ubuntu

Unit                         Workload  Agent  Machine  Public address  Ports   Message
ceph-mon/0*                  active    idle   0/lxd/0  10.0.12.35              Unit is ready and clustered
ceph-mon/1                   active    idle   1/lxd/0  10.0.12.36              Unit is ready and clustered
ceph-mon/2                   active    idle   2/lxd/0  10.0.12.37              Unit is ready and clustered
ceph-osd/0*                  active    idle   6        10.0.12.25              Unit is ready (1 OSD)
ceph-osd/1                   active    idle   7        10.0.12.31              Unit is ready (1 OSD)
ceph-osd/2                   active    idle   8        10.0.12.32              Unit is ready (1 OSD)
ceph-radosgw/0*              active    idle   0/lxd/1  10.0.15.0       80/tcp  Unit is ready
  ceph-radosgw-hacluster/1*  active    idle            10.0.15.0               Unit is ready and clustered
ceph-radosgw/1               active    idle   1/lxd/1  10.0.12.39      80/tcp  Unit is ready
  ceph-radosgw-hacluster/0   active    idle            10.0.12.39              Unit is ready and clustered
ceph-radosgw/2               active    idle   2/lxd/1  10.0.12.38      80/tcp  Unit is ready
  ceph-radosgw-hacluster/2   active    idle            10.0.12.38              Unit is ready and clustered

Machine  State    DNS         Inst id              Series  AZ       Message
0        started  10.0.12.23  os-controller1       bionic  default  Deployed
0/lxd/0  started  10.0.12.35  juju-a5ab4c-0-lxd-0  bionic  default  Container started
0/lxd/1  started  10.0.15.0   juju-a5ab4c-0-lxd-1  bionic  default  Container started
1        started  10.0.12.22  os-controller2       bionic  default  Deployed
1/lxd/0  started  10.0.12.36  juju-a5ab4c-1-lxd-0  bionic  default  Container started
1/lxd/1  started  10.0.12.39  juju-a5ab4c-1-lxd-1  bionic  default  Container started
2        started  10.0.12.26  os-controller3       bionic  default  Deployed
2/lxd/0  started  10.0.12.37  juju-a5ab4c-2-lxd-0  bionic  default  Container started
2/lxd/1  started  10.0.12.38  juju-a5ab4c-2-lxd-1  bionic  default  Container started
6        started  10.0.12.25  os-ceph1             bionic  default  Deployed
7        started  10.0.12.31  os-ceph2             bionic  default  Deployed
8        started  10.0.12.32  os-ceph3             bionic  default  Deployed

(venv) ubuntu@os-client:~/work/openstack/workspace$ swift stat
                                    Account: v1
                                 Containers: 0
                                    Objects: 0
                                      Bytes: 0
   Containers in policy "default-placement": 0
      Objects in policy "default-placement": 0
        Bytes in policy "default-placement": 0
Objects in policy "default-placement-bytes": 0
  Bytes in policy "default-placement-bytes": 0
                                X-Timestamp: 1589707062.39004
                X-Account-Bytes-Used-Actual: 0
                                 X-Trans-Id: tx000000000000000000001-005ec10136-ae580-default
                     X-Openstack-Request-Id: tx000000000000000000001-005ec10136-ae580-default
                              Accept-Ranges: bytes
                               Content-Type: text/plain; charset=utf-8
(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack container list
juju remove-application ceph-radosgw
juju remove-application ceph-radosgw-hacluster
# re-deploy with namespace-tenants enabled
juju deploy --config config/ceph-radosgw.yaml -n 3 --to lxd:0,lxd:1,lxd:2 cs:ceph-radosgw ceph-radosgw
juju deploy --config config/ceph-radosgw.yaml cs:hacluster ceph-radosgw-hacluster
juju add-relation ceph-radosgw:mon ceph-mon:radosgw
juju add-relation ceph-radosgw:ha ceph-radosgw-hacluster:ha
juju add-relation keystone:identity-service ceph-radosgw:identity-service
ceph-radosgw:
  source: cloud:bionic-stein
  namespace-tenants: true
  vip: 10.0.14.129
ceph-radosgw-hacluster:
  corosync_transport: unicast
ubuntu@os-client:~/work/openstack/deploy$ juju remove-application ceph-radosgw
removing application ceph-radosgw
ubuntu@os-client:~/work/openstack/deploy$ juju remove-application ceph-radosgw-hacluster
removing application ceph-radosgw-hacluster
ubuntu@os-client:~/work/openstack/deploy$ juju status "ceph*"
Model    Controller       Cloud/Region    Version  SLA          Timestamp
default  maas-controller  mymaas/default  2.7.6    unsupported  18:21:38+09:00

App                     Version  Status       Scale  Charm         Store       Rev  OS      Notes
ceph-mon                13.2.8   active           3  ceph-mon      jujucharms   46  ubuntu
ceph-osd                13.2.8   active           3  ceph-osd      jujucharms  301  ubuntu
ceph-radosgw            13.2.8   blocked          1  ceph-radosgw  jujucharms  286  ubuntu
ceph-radosgw-hacluster           maintenance      1  hacluster     jujucharms   66  ubuntu

Unit                         Workload     Agent      Machine  Public address  Ports   Message
ceph-mon/0*                  active       idle       0/lxd/0  10.0.12.35              Unit is ready and clustered
ceph-mon/1                   active       idle       1/lxd/0  10.0.12.36              Unit is ready and clustered
ceph-mon/2                   active       idle       2/lxd/0  10.0.12.37              Unit is ready and clustered
ceph-osd/0*                  active       idle       6        10.0.12.25              Unit is ready (1 OSD)
ceph-osd/1                   active       idle       7        10.0.12.31              Unit is ready (1 OSD)
ceph-osd/2                   active       idle       8        10.0.12.32              Unit is ready (1 OSD)
ceph-radosgw/0*              blocked      executing  0/lxd/1  10.0.15.0       80/tcp  Services not running that should be: haproxy
  ceph-radosgw-hacluster/1*  maintenance  executing           10.0.15.0               (stop) cleaning up prior to charm deletion

Machine  State    DNS         Inst id              Series  AZ       Message
0        started  10.0.12.23  os-controller1       bionic  default  Deployed
0/lxd/0  started  10.0.12.35  juju-a5ab4c-0-lxd-0  bionic  default  Container started
0/lxd/1  started  10.0.15.0   juju-a5ab4c-0-lxd-1  bionic  default  Container started
1        started  10.0.12.22  os-controller2       bionic  default  Deployed
1/lxd/0  started  10.0.12.36  juju-a5ab4c-1-lxd-0  bionic  default  Container started
2        started  10.0.12.26  os-controller3       bionic  default  Deployed
2/lxd/0  started  10.0.12.37  juju-a5ab4c-2-lxd-0  bionic  default  Container started
6        started  10.0.12.25  os-ceph1             bionic  default  Deployed
7        started  10.0.12.31  os-ceph2             bionic  default  Deployed
8        started  10.0.12.32  os-ceph3             bionic  default  Deployed

Wait for the removal to complete.
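Rather than re-running juju status by hand, you can poll until the application disappears. A minimal sketch of that loop; the juju call is stubbed with a counter here so the logic is self-contained, and in practice you would replace app_gone with a real check such as `! juju status ceph-radosgw --format=short | grep -q ceph-radosgw` and sleep a few seconds between checks:

```shell
#!/bin/sh
# Poll until the application is gone, then report how many checks it took.
# Stub: pretend the app disappears on the 3rd check. Against a live model,
# replace app_gone with a real juju status query (see above).
n=0
app_gone() {
  n=$((n + 1))
  [ "$n" -ge 3 ]
}
until app_gone; do
  sleep 0   # use e.g. 'sleep 10' against a real controller
done
echo "removal complete after $n checks"
```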

ubuntu@os-client:~/work/openstack/deploy$ juju status "ceph*"
Model    Controller       Cloud/Region    Version  SLA          Timestamp
default  maas-controller  mymaas/default  2.7.6    unsupported  18:25:06+09:00

App       Version  Status  Scale  Charm     Store       Rev  OS      Notes
ceph-mon  13.2.8   active      3  ceph-mon  jujucharms   46  ubuntu
ceph-osd  13.2.8   active      3  ceph-osd  jujucharms  301  ubuntu

Unit         Workload  Agent  Machine  Public address  Ports  Message
ceph-mon/0*  active    idle   0/lxd/0  10.0.12.35             Unit is ready and clustered
ceph-mon/1   active    idle   1/lxd/0  10.0.12.36             Unit is ready and clustered
ceph-mon/2   active    idle   2/lxd/0  10.0.12.37             Unit is ready and clustered
ceph-osd/0*  active    idle   6        10.0.12.25             Unit is ready (1 OSD)
ceph-osd/1   active    idle   7        10.0.12.31             Unit is ready (1 OSD)
ceph-osd/2   active    idle   8        10.0.12.32             Unit is ready (1 OSD)

Machine  State    DNS         Inst id              Series  AZ       Message
0        started  10.0.12.23  os-controller1       bionic  default  Deployed
0/lxd/0  started  10.0.12.35  juju-a5ab4c-0-lxd-0  bionic  default  Container started
1        started  10.0.12.22  os-controller2       bionic  default  Deployed
1/lxd/0  started  10.0.12.36  juju-a5ab4c-1-lxd-0  bionic  default  Container started
2        started  10.0.12.26  os-controller3       bionic  default  Deployed
2/lxd/0  started  10.0.12.37  juju-a5ab4c-2-lxd-0  bionic  default  Container started
6        started  10.0.12.25  os-ceph1             bionic  default  Deployed
7        started  10.0.12.31  os-ceph2             bionic  default  Deployed
8        started  10.0.12.32  os-ceph3             bionic  default  Deployed
ubuntu@os-client:~/work/openstack/deploy$ juju deploy --config config/ceph-radosgw.yaml -n 3 --to lxd:0,lxd:1,lxd:2 cs:ceph-radosgw ceph-radosgw
Located charm "cs:ceph-radosgw-286".
Deploying charm "cs:ceph-radosgw-286".
ubuntu@os-client:~/work/openstack/deploy$ juju deploy --config config/ceph-radosgw.yaml cs:hacluster ceph-radosgw-hacluster
Located charm "cs:hacluster-66".
Deploying charm "cs:hacluster-66".
ubuntu@os-client:~/work/openstack/deploy$ juju add-relation ceph-radosgw:mon ceph-mon:radosgw
ubuntu@os-client:~/work/openstack/deploy$ juju add-relation ceph-radosgw:ha ceph-radosgw-hacluster:ha
ubuntu@os-client:~/work/openstack/deploy$ juju add-relation keystone:identity-service ceph-radosgw:identity-service

Make sure namespace-tenants is enabled:

ubuntu@os-client:~/work/openstack/deploy$ juju config ceph-radosgw namespace-tenants
true
ubuntu@os-client:~/work/openstack/deploy$ juju status "ceph*"
Model    Controller       Cloud/Region    Version  SLA          Timestamp
default  maas-controller  mymaas/default  2.7.6    unsupported  18:41:16+09:00

App                     Version  Status  Scale  Charm         Store       Rev  OS      Notes
ceph-mon                13.2.8   active      3  ceph-mon      jujucharms   46  ubuntu
ceph-osd                13.2.8   active      3  ceph-osd      jujucharms  301  ubuntu
ceph-radosgw            13.2.8   active      3  ceph-radosgw  jujucharms  286  ubuntu
ceph-radosgw-hacluster           active      3  hacluster     jujucharms   66  ubuntu

Unit                         Workload  Agent  Machine   Public address  Ports   Message
ceph-mon/0*                  active    idle   0/lxd/0   10.0.12.35              Unit is ready and clustered
ceph-mon/1                   active    idle   1/lxd/0   10.0.12.36              Unit is ready and clustered
ceph-mon/2                   active    idle   2/lxd/0   10.0.12.37              Unit is ready and clustered
ceph-osd/0*                  active    idle   6         10.0.12.25              Unit is ready (1 OSD)
ceph-osd/1                   active    idle   7         10.0.12.31              Unit is ready (1 OSD)
ceph-osd/2                   active    idle   8         10.0.12.32              Unit is ready (1 OSD)
ceph-radosgw/3               active    idle   0/lxd/10  10.0.12.65      80/tcp  Unit is ready
  ceph-radosgw-hacluster/5   active    idle             10.0.12.65              Unit is ready and clustered
ceph-radosgw/4               active    idle   1/lxd/10  10.0.12.66      80/tcp  Unit is ready
  ceph-radosgw-hacluster/4   active    idle             10.0.12.66              Unit is ready and clustered
ceph-radosgw/5*              active    idle   2/lxd/10  10.0.12.64      80/tcp  Unit is ready
  ceph-radosgw-hacluster/3*  active    idle             10.0.12.64              Unit is ready and clustered

Machine   State    DNS         Inst id               Series  AZ       Message
0         started  10.0.12.23  os-controller1        bionic  default  Deployed
0/lxd/0   started  10.0.12.35  juju-a5ab4c-0-lxd-0   bionic  default  Container started
0/lxd/10  started  10.0.12.65  juju-a5ab4c-0-lxd-10  bionic  default  Container started
1         started  10.0.12.22  os-controller2        bionic  default  Deployed
1/lxd/0   started  10.0.12.36  juju-a5ab4c-1-lxd-0   bionic  default  Container started
1/lxd/10  started  10.0.12.66  juju-a5ab4c-1-lxd-10  bionic  default  Container started
2         started  10.0.12.26  os-controller3        bionic  default  Deployed
2/lxd/0   started  10.0.12.37  juju-a5ab4c-2-lxd-0   bionic  default  Container started
2/lxd/10  started  10.0.12.64  juju-a5ab4c-2-lxd-10  bionic  default  Container started
6         started  10.0.12.25  os-ceph1              bionic  default  Deployed
7         started  10.0.12.31  os-ceph2              bionic  default  Deployed
8         started  10.0.12.32  os-ceph3              bionic  default  Deployed

Verify Operation

OpenStack Docs: Verify operation

source ~/work/openstack/workspace/admin-openrc
swift stat
openstack container create container1
vim lorem-ipsum.txt
sha256sum lorem-ipsum.txt
cat lorem-ipsum.txt
openstack object create container1 lorem-ipsum.txt
openstack object list container1
cd tmp
cat lorem-ipsum.txt
openstack object save container1 lorem-ipsum.txt
ls lorem-ipsum.txt
sha256sum lorem-ipsum.txt
cat lorem-ipsum.txt
rm lorem-ipsum.txt
cd ..
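The point of this sequence is a round-trip integrity check: the sha256 of the downloaded object must match the original. A self-contained sketch of that comparison; the upload/download pair is simulated with a local copy here, whereas in the real verification those two steps are `openstack object create` and `openstack object save`:

```shell
#!/bin/sh
set -eu
# Create a test file and record its checksum before "upload".
printf 'Lorem ipsum dolor sit amet.\n' > lorem-ipsum.txt
before=$(sha256sum lorem-ipsum.txt | awk '{print $1}')

# Simulate upload + download with a local copy (stand-in for
# 'openstack object create' / 'openstack object save').
mkdir -p tmp
cp lorem-ipsum.txt tmp/lorem-ipsum.txt

# Verify the round trip preserved the content byte-for-byte.
after=$(sha256sum tmp/lorem-ipsum.txt | awk '{print $1}')
if [ "$before" = "$after" ]; then
  echo "checksums match"
else
  echo "checksum MISMATCH" >&2
  exit 1
fi
rm -r tmp lorem-ipsum.txt
```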
(venv) ubuntu@os-client:~/work/openstack/workspace$ source ~/work/openstack/workspace/admin-openrc
(venv) ubuntu@os-client:~/work/openstack/workspace$ swift stat
                                    Account: AUTH_f4b32f7133004e30a770ca7ef4084856
                                 Containers: 0
                                    Objects: 0
                                      Bytes: 0
Objects in policy "default-placement-bytes": 0
  Bytes in policy "default-placement-bytes": 0
   Containers in policy "default-placement": 0
      Objects in policy "default-placement": 0
        Bytes in policy "default-placement": 0
                                X-Timestamp: 1589708529.28326
                X-Account-Bytes-Used-Actual: 0
                                 X-Trans-Id: tx000000000000000000001-005ec106f1-b3573-default
                     X-Openstack-Request-Id: tx000000000000000000001-005ec106f1-b3573-default
                              Accept-Ranges: bytes
                               Content-Type: text/plain; charset=utf-8
(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack container create container1
+---------------------------------------+------------+--------------------------------------------------+
| account                               | container  | x-trans-id                                       |
+---------------------------------------+------------+--------------------------------------------------+
| AUTH_f4b32f7133004e30a770ca7ef4084856 | container1 | tx000000000000000000001-005ec106fd-b35f1-default |
+---------------------------------------+------------+--------------------------------------------------+
(venv) ubuntu@os-client:~/work/openstack/workspace$ vim lorem-ipsum.txt
(venv) ubuntu@os-client:~/work/openstack/workspace$ sha256sum lorem-ipsum.txt
9a7884748fa090de828586132d104cdfb6bbcc228f6dacf30e0497d9ebf5732b  lorem-ipsum.txt
(venv) ubuntu@os-client:~/work/openstack/workspace$
(venv) ubuntu@os-client:~/work/openstack/workspace$ cat lorem-ipsum.txt
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Quisque ultricies mauris maximus libero condimentum semper. Pellentesque varius neque at felis dignissim aliquam. Sed at erat in justo faucibus egestas ut eget turpis. Curabitur cursus ante eu faucibus consectetur. Sed non lacus sit amet enim efficitur dignissim. Nullam a arcu sed nisi mattis posuere eu sed leo. Vestibulum pellentesque orci quis elit rutrum suscipit. Nullam porttitor metus at nulla lobortis, ac auctor dui congue. Aenean at fermentum tellus, ac auctor felis.

Cras dignissim sem a elit ultricies vestibulum. Nulla tempor metus ac odio tincidunt, at blandit lacus condimentum. Fusce fermentum ligula fringilla tellus interdum ornare. Phasellus vehicula diam molestie, facilisis justo nec, sollicitudin est. Vestibulum non lacus metus. Curabitur in justo in nisl ornare dictum sit amet in lectus. Donec dui felis, lacinia sit amet semper non, commodo varius est. Cras sodales erat dolor. Phasellus rhoncus nunc at lectus ultrices sagittis. Nunc id sollicitudin lorem, ut vestibulum nulla. Ut lobortis porta turpis, quis pellentesque risus venenatis at. Etiam tincidunt imperdiet neque, eget sollicitudin est lobortis et. Curabitur eget ante consectetur, vestibulum massa eget, vestibulum leo. Nunc cursus consectetur justo, a mollis justo consectetur nec. Duis dapibus mauris ac quam tincidunt, sit amet volutpat ipsum condimentum. Phasellus sit amet lorem vel orci tincidunt malesuada consequat ac orci.

Vestibulum convallis lacus quis tortor consectetur scelerisque. Duis vitae purus quam. Nullam finibus viverra purus et tincidunt. Cras ullamcorper elementum ante nec auctor. Aenean tortor lorem, eleifend vitae nunc nec, elementum lobortis est. Sed maximus ipsum justo, quis venenatis mauris eleifend consequat. Morbi lacinia arcu ex. Aenean eu semper lorem. Maecenas porta lectus vel tellus molestie imperdiet. Aenean urna ante, mollis eu luctus id, tempus at dolor. Phasellus tempus, arcu maximus porta gravida, lectus augue venenatis ligula, vel euismod ex elit eget lacus. Ut consequat urna eu turpis auctor dictum. Duis vitae odio tellus. In non elit a eros semper sodales. Pellentesque non mattis enim.

Phasellus in sem posuere, ullamcorper neque in, ultrices enim. Mauris non elementum arcu, ut facilisis tortor. Nullam a lectus sed tellus rhoncus tempor sit amet vestibulum purus. Ut tristique tellus ac venenatis rutrum. Proin quis dapibus metus, ac elementum libero. Sed at leo molestie, accumsan arcu sit amet, vestibulum sapien. Duis vitae orci nunc.

Nam aliquam mauris a ultricies tempor. Duis turpis ipsum, vulputate nec tincidunt eu, cursus et ipsum. Pellentesque interdum nibh magna, quis dignissim nisl pretium ut. Nam libero orci, blandit in rutrum at, egestas sed ipsum. Integer eget nisi nec risus venenatis faucibus. Quisque magna ligula, venenatis semper velit sit amet, tempus imperdiet lacus. Fusce vitae mollis neque, sit amet vulputate nunc. Mauris gravida mollis arcu at sollicitudin. Suspendisse metus orci, laoreet in dignissim vel, tristique quis eros. Sed ullamcorper condimentum arcu sed aliquet. Suspendisse vulputate tristique lacus quis vestibulum. Phasellus imperdiet varius magna, at euismod arcu. Nam at turpis congue, ultrices urna vitae, cursus lectus. Phasellus purus erat, suscipit malesuada justo eget, imperdiet tempor lectus. Nullam porta erat diam, vel malesuada eros fermentum eu.

(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack object create container1 lorem-ipsum.txt
+-----------------+------------+----------------------------------+
| object          | container  | etag                             |
+-----------------+------------+----------------------------------+
| lorem-ipsum.txt | container1 | 6671724651e1d8efd499b5b2c3f5d35b |
+-----------------+------------+----------------------------------+
(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack object list container1
+-----------------+
| Name            |
+-----------------+
| lorem-ipsum.txt |
+-----------------+
(venv) ubuntu@os-client:~/work/openstack/workspace$ cd tmp
(venv) ubuntu@os-client:~/work/openstack/workspace/tmp$ ls lorem-ipsum.txt
ls: cannot access 'lorem-ipsum.txt': No such file or directory
(venv) ubuntu@os-client:~/work/openstack/workspace/tmp$ openstack object save container1 lorem-ipsum.txt
(venv) ubuntu@os-client:~/work/openstack/workspace/tmp$ ls lorem-ipsum.txt
lorem-ipsum.txt
(venv) ubuntu@os-client:~/work/openstack/workspace/tmp$ sha256sum lorem-ipsum.txt
9a7884748fa090de828586132d104cdfb6bbcc228f6dacf30e0497d9ebf5732b  lorem-ipsum.txt
(venv) ubuntu@os-client:~/work/openstack/workspace/tmp$ cat lorem-ipsum.txt
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Quisque ultricies mauris maximus libero condimentum semper. Pellentesque varius neque at felis dignissim aliquam. Sed at erat in justo faucibus egestas ut eget turpis. Curabitur cursus ante eu faucibus consectetur. Sed non lacus sit amet enim efficitur dignissim. Nullam a arcu sed nisi mattis posuere eu sed leo. Vestibulum pellentesque orci quis elit rutrum suscipit. Nullam porttitor metus at nulla lobortis, ac auctor dui congue. Aenean at fermentum tellus, ac auctor felis.

Cras dignissim sem a elit ultricies vestibulum. Nulla tempor metus ac odio tincidunt, at blandit lacus condimentum. Fusce fermentum ligula fringilla tellus interdum ornare. Phasellus vehicula diam molestie, facilisis justo nec, sollicitudin est. Vestibulum non lacus metus. Curabitur in justo in nisl ornare dictum sit amet in lectus. Donec dui felis, lacinia sit amet semper non, commodo varius est. Cras sodales erat dolor. Phasellus rhoncus nunc at lectus ultrices sagittis. Nunc id sollicitudin lorem, ut vestibulum nulla. Ut lobortis porta turpis, quis pellentesque risus venenatis at. Etiam tincidunt imperdiet neque, eget sollicitudin est lobortis et. Curabitur eget ante consectetur, vestibulum massa eget, vestibulum leo. Nunc cursus consectetur justo, a mollis justo consectetur nec. Duis dapibus mauris ac quam tincidunt, sit amet volutpat ipsum condimentum. Phasellus sit amet lorem vel orci tincidunt malesuada consequat ac orci.

Vestibulum convallis lacus quis tortor consectetur scelerisque. Duis vitae purus quam. Nullam finibus viverra purus et tincidunt. Cras ullamcorper elementum ante nec auctor. Aenean tortor lorem, eleifend vitae nunc nec, elementum lobortis est. Sed maximus ipsum justo, quis venenatis mauris eleifend consequat. Morbi lacinia arcu ex. Aenean eu semper lorem. Maecenas porta lectus vel tellus molestie imperdiet. Aenean urna ante, mollis eu luctus id, tempus at dolor. Phasellus tempus, arcu maximus porta gravida, lectus augue venenatis ligula, vel euismod ex elit eget lacus. Ut consequat urna eu turpis auctor dictum. Duis vitae odio tellus. In non elit a eros semper sodales. Pellentesque non mattis enim.

Phasellus in sem posuere, ullamcorper neque in, ultrices enim. Mauris non elementum arcu, ut facilisis tortor. Nullam a lectus sed tellus rhoncus tempor sit amet vestibulum purus. Ut tristique tellus ac venenatis rutrum. Proin quis dapibus metus, ac elementum libero. Sed at leo molestie, accumsan arcu sit amet, vestibulum sapien. Duis vitae orci nunc.

Nam aliquam mauris a ultricies tempor. Duis turpis ipsum, vulputate nec tincidunt eu, cursus et ipsum. Pellentesque interdum nibh magna, quis dignissim nisl pretium ut. Nam libero orci, blandit in rutrum at, egestas sed ipsum. Integer eget nisi nec risus venenatis faucibus. Quisque magna ligula, venenatis semper velit sit amet, tempus imperdiet lacus. Fusce vitae mollis neque, sit amet vulputate nunc. Mauris gravida mollis arcu at sollicitudin. Suspendisse metus orci, laoreet in dignissim vel, tristique quis eros. Sed ullamcorper condimentum arcu sed aliquet. Suspendisse vulputate tristique lacus quis vestibulum. Phasellus imperdiet varius magna, at euismod arcu. Nam at turpis congue, ultrices urna vitae, cursus lectus. Phasellus purus erat, suscipit malesuada justo eget, imperdiet tempor lectus. Nullam porta erat diam, vel malesuada eros fermentum eu.

(venv) ubuntu@os-client:~/work/openstack/workspace/tmp$ rm lorem-ipsum.txt

openstack-dashboard(Horizon)

01600-deploy-openstack-dashboard.sh
#!/bin/bash
juju deploy --config config/openstack-dashboard.yaml -n 3 --to lxd:0,lxd:1,lxd:2 cs:openstack-dashboard openstack-dashboard
juju deploy --config config/openstack-dashboard.yaml cs:hacluster openstack-dashboard-hacluster
juju add-relation openstack-dashboard:ha openstack-dashboard-hacluster:ha
#
juju add-relation openstack-dashboard:shared-db mysql:shared-db
juju add-relation openstack-dashboard:identity-service keystone:identity-service
openstack-dashboard:
  cinder-backup: true
  webroot: /
  openstack-origin: cloud:bionic-stein
  vip: 10.0.14.136
openstack-dashboard-hacluster:
  corosync_transport: unicast
ubuntu@os-client:~/work/openstack/deploy$ bash 01600-deploy-openstack-dashboard.sh
Located charm "cs:openstack-dashboard-302".
Deploying charm "cs:openstack-dashboard-302".
Located charm "cs:hacluster-66".
Deploying charm "cs:hacluster-66".

watch status

juju debug-log --include openstack-dashboard
juju status "openstack-dashboard"
watch -n 1 --color juju status "openstack-dashboard" --color

Verify Operation

juju status "openstack-dashboard"

juju ssh openstack-dashboard/0 sudo crm status
juju ssh openstack-dashboard/0 ip address show

ping -c 4 10.0.14.136
ubuntu@os-client:~/work/openstack/deploy$ juju status "openstack-dashboard"
Model    Controller       Cloud/Region    Version  SLA          Timestamp
default  maas-controller  mymaas/default  2.7.6    unsupported  08:23:14+09:00

App                            Version  Status  Scale  Charm                Store       Rev  OS      Notes
openstack-dashboard            15.2.0   active      3  openstack-dashboard  jujucharms  302  ubuntu
openstack-dashboard-hacluster           active      3  hacluster            jujucharms   66  ubuntu

Unit                                Workload  Agent  Machine   Public address  Ports           Message
openstack-dashboard/0               active    idle   0/lxd/12  10.0.12.72      80/tcp,443/tcp  Unit is ready
  openstack-dashboard-hacluster/2   active    idle             10.0.12.72                      Unit is ready and clustered
openstack-dashboard/1*              active    idle   1/lxd/12  10.0.12.70      80/tcp,443/tcp  Unit is ready
  openstack-dashboard-hacluster/1   active    idle             10.0.12.70                      Unit is ready and clustered
openstack-dashboard/2               active    idle   2/lxd/12  10.0.12.71      80/tcp,443/tcp  Unit is ready
  openstack-dashboard-hacluster/0*  active    idle             10.0.12.71                      Unit is ready and clustered

Machine   State    DNS         Inst id               Series  AZ       Message
0         started  10.0.12.23  os-controller1        bionic  default  Deployed
0/lxd/12  started  10.0.12.72  juju-a5ab4c-0-lxd-12  bionic  default  Container started
1         started  10.0.12.22  os-controller2        bionic  default  Deployed
1/lxd/12  started  10.0.12.70  juju-a5ab4c-1-lxd-12  bionic  default  Container started
2         started  10.0.12.26  os-controller3        bionic  default  Deployed
2/lxd/12  started  10.0.12.71  juju-a5ab4c-2-lxd-12  bionic  default  Container started

ubuntu@os-client:~/work/openstack/deploy$ juju ssh openstack-dashboard/0 sudo crm status
Stack: corosync
Current DC: juju-a5ab4c-0-lxd-12 (version 1.1.18-2b07d5c5a9) - partition with quorum
Last updated: Sun May 17 23:23:25 2020
Last change: Sun May 17 23:15:33 2020 by hacluster via crmd on juju-a5ab4c-0-lxd-12

3 nodes configured
4 resources configured

Online: [ juju-a5ab4c-0-lxd-12 juju-a5ab4c-1-lxd-12 juju-a5ab4c-2-lxd-12 ]

Full list of resources:

 Resource Group: grp_horizon_vips
     res_horizon_904eab7_vip    (ocf::heartbeat:IPaddr2):       Started juju-a5ab4c-0-lxd-12
 Clone Set: cl_horizon_haproxy [res_horizon_haproxy]
     Started: [ juju-a5ab4c-0-lxd-12 juju-a5ab4c-1-lxd-12 juju-a5ab4c-2-lxd-12 ]

Connection to 10.0.12.72 closed.
ubuntu@os-client:~/work/openstack/deploy$ juju ssh openstack-dashboard/0 ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
38: eth0@if39: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:16:3e:48:7c:e8 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.0.12.72/22 brd 10.0.15.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.0.14.136/22 brd 10.0.15.255 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe48:7ce8/64 scope link
       valid_lft forever preferred_lft forever
Connection to 10.0.12.72 closed.
ubuntu@os-client:~/work/openstack/deploy$ ping -c 4 10.0.14.136
PING 10.0.14.136 (10.0.14.136) 56(84) bytes of data.
64 bytes from 10.0.14.136: icmp_seq=1 ttl=64 time=0.742 ms
64 bytes from 10.0.14.136: icmp_seq=2 ttl=64 time=0.209 ms
64 bytes from 10.0.14.136: icmp_seq=3 ttl=64 time=0.435 ms
64 bytes from 10.0.14.136: icmp_seq=4 ttl=64 time=0.217 ms

--- 10.0.14.136 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3074ms
rtt min/avg/max/mdev = 0.209/0.400/0.742/0.218 ms

Let's access the dashboard:

http://10.0.14.136/

  1. Domain: admin_domain
  2. User Name: admin
  3. Password: password

http://10.0.14.136/admin/info/

If the Admin > System Information page reports no errors, your OpenStack deployment is broadly operational.
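You can also probe the dashboard VIP from the shell before opening a browser. A hedged sketch; the status code is canned here so the check is self-contained, and against the live VIP you would obtain it with `curl -s -o /dev/null -w '%{http_code}' http://10.0.14.136/`:

```shell
#!/bin/sh
# Classify an HTTP status code from a dashboard probe.
# Canned sample value; on the client, fetch it with the curl command above.
code=302
case "$code" in
  200|301|302) result="dashboard reachable (HTTP $code)" ;;
  *)           result="unexpected status: $code" ;;
esac
echo "$result"
```

Horizon typically answers the bare URL with a redirect to the login page, so a 301/302 counts as healthy here.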

Go ahead. There are a few tasks left.

NTP

Time synchronization is one of the most important tasks in a clustered environment.

01700-deploy-ntp.sh
#!/bin/bash
juju deploy --config config/ntp.yaml cs:ntp ntp
juju add-relation ntp:juju-info ceph-osd:juju-info
juju add-relation ntp:juju-info ceph-osd-backup:juju-info
juju add-relation ntp:juju-info nova-compute:juju-info
juju add-relation ntp:juju-info neutron-gateway:juju-info
ntp.yaml
ntp:
  # https://www.ntppool.org/en/use.html
  pools: 0.pool.ntp.org 1.pool.ntp.org 2.pool.ntp.org 3.pool.ntp.org
  # Japan
  #source: ntp.nict.jp ntp1.jst.mfeed.ad.jp ntp2.jst.mfeed.ad.jp ntp3.jst.mfeed.ad.jp
ubuntu@os-client:~/work/openstack/deploy$ bash 01700-deploy-ntp.sh
Located charm "cs:ntp-39".
Deploying charm "cs:ntp-39".

watch status

juju debug-log --include ntp
juju status "ntp"
watch -n 1 --color juju status "ntp" --color

I realized the deployment targets were wrong…

ubuntu@os-client:~/work/openstack/deploy$ juju add-unit --to 9,10,11 -n 3 ceph-osd-backup
ubuntu@os-client:~/work/openstack/deploy$ juju remove-unit ceph-osd-backup/0
removing unit ceph-osd-backup/0
ubuntu@os-client:~/work/openstack/deploy$ juju remove-unit ceph-osd-backup/1
removing unit ceph-osd-backup/1
ubuntu@os-client:~/work/openstack/deploy$ juju remove-unit ceph-osd-backup/2
removing unit ceph-osd-backup/2

Verify Operation

juju status "ntp"

juju ssh ceph-osd/0 sudo chronyc sources
juju ssh ceph-osd-backup/0 sudo chronyc sources
juju ssh nova-compute/0 sudo chronyc sources
juju ssh neutron-gateway/0 sudo chronyc sources

From man 1 chronyc:

* indicates the source to which chronyd is currently synchronised.
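That marker can be extracted mechanically instead of scanned by eye. A small sketch that pulls the synchronized source out of chronyc sources output; it is fed one captured sample line here, and on a real unit you would pipe `juju ssh <unit> sudo chronyc sources` into the same awk filter:

```shell
#!/bin/sh
# Print the source chronyd is currently synchronised to: the line whose
# mode/state field is '^*'. Sample line captured from the transcripts
# below; substitute live 'chronyc sources' output in practice.
sample='^* ntp-b2.nict.go.jp             1   6   377    38   -162us[ -292us] +/- 3105us'
synced=$(printf '%s\n' "$sample" | awk '$1 == "^*" { print $2 }')
echo "synced to: $synced"
```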
ubuntu@os-client:~/work/openstack/deploy$ juju status "ntp"
Model    Controller       Cloud/Region    Version  SLA          Timestamp
default  maas-controller  mymaas/default  2.7.6    unsupported  12:40:12+09:00

App                  Version  Status  Scale  Charm                Store       Rev  OS      Notes
ceph-osd             13.2.8   active      3  ceph-osd             jujucharms  301  ubuntu
ceph-osd-backup      13.2.8   active      3  ceph-osd             jujucharms  301  ubuntu
neutron-gateway      14.1.0   active      3  neutron-gateway      jujucharms  280  ubuntu
neutron-openvswitch  14.1.0   active      0  neutron-openvswitch  jujucharms  274  ubuntu
nova-compute         19.1.0   active      3  nova-compute         jujucharms  314  ubuntu
ntp                  3.2      active     12  ntp                  jujucharms   39  ubuntu

Unit                      Workload  Agent      Machine  Public address  Ports    Message
ceph-osd-backup/0         active    idle       9        10.0.12.24               Unit is ready (1 OSD)
  ntp/5                   active    executing           10.0.12.24      123/udp  chrony: Ready
ceph-osd-backup/1*        active    idle       10       10.0.12.33               Unit is ready (1 OSD)
  ntp/3                   active    executing           10.0.12.33      123/udp  chrony: Ready
ceph-osd-backup/2         active    idle       11       10.0.12.28               Unit is ready (1 OSD)
  ntp/4                   active    executing           10.0.12.28      123/udp  chrony: Ready
ceph-osd/0*               active    idle       6        10.0.12.25               Unit is ready (1 OSD)
  ntp/2                   active    executing           10.0.12.25      123/udp  chrony: Ready
ceph-osd/1                active    idle       7        10.0.12.31               Unit is ready (1 OSD)
  ntp/1                   active    executing           10.0.12.31      123/udp  chrony: Ready
ceph-osd/2                active    idle       8        10.0.12.32               Unit is ready (1 OSD)
  ntp/0*                  active    executing           10.0.12.32      123/udp  chrony: Ready
neutron-gateway/0*        active    idle       0        10.0.12.23               Unit is ready
  ntp/10                  active    executing           10.0.12.23      123/udp  chrony: Ready
neutron-gateway/1         active    idle       1        10.0.12.22               Unit is ready
  ntp/9                   active    executing           10.0.12.22      123/udp  chrony: Ready
neutron-gateway/2         active    idle       2        10.0.12.26               Unit is ready
  ntp/11                  active    executing           10.0.12.26      123/udp  chrony: Ready
nova-compute/0*           active    idle       3        10.0.12.34               Unit is ready
  neutron-openvswitch/1   active    idle                10.0.12.34               Unit is ready
  ntp/6                   active    executing           10.0.12.34      123/udp  chrony: Ready
nova-compute/1            active    idle       4        10.0.12.29               Unit is ready
  neutron-openvswitch/2   active    idle                10.0.12.29               Unit is ready
  ntp/7                   active    executing           10.0.12.29      123/udp  chrony: Ready
nova-compute/2            active    idle       5        10.0.12.30               Unit is ready
  neutron-openvswitch/0*  active    idle                10.0.12.30               Unit is ready
  ntp/8                   active    executing           10.0.12.30      123/udp  chrony: Ready

Machine  State    DNS         Inst id         Series  AZ       Message
0        started  10.0.12.23  os-controller1  bionic  default  Deployed
1        started  10.0.12.22  os-controller2  bionic  default  Deployed
2        started  10.0.12.26  os-controller3  bionic  default  Deployed
3        started  10.0.12.34  os-compute1     bionic  default  Deployed
4        started  10.0.12.29  os-compute2     bionic  default  Deployed
5        started  10.0.12.30  os-compute3     bionic  default  Deployed
6        started  10.0.12.25  os-ceph1        bionic  default  Deployed
7        started  10.0.12.31  os-ceph2        bionic  default  Deployed
8        started  10.0.12.32  os-ceph3        bionic  default  Deployed
9        started  10.0.12.24  os-swift1       bionic  default  Deployed
10       started  10.0.12.33  os-swift2       bionic  default  Deployed
11       started  10.0.12.28  os-swift3       bionic  default  Deployed

ubuntu@os-client:~/work/openstack/deploy$ juju ssh ceph-osd/0 sudo chronyc sources
210 Number of sources = 16
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^- jiro.paina.net                2   6   277    20  -3085us[-3217us] +/-   68ms
^- time.cloudflare.com           3   6   377    18  +3958us[+3827us] +/-   65ms
^- ntp-5.jonlight.com            2   6    73    25   -352us[ -485us] +/-   29ms
^- y.ns.gin.ntt.net              2   6   377    13  -3280us[-3280us] +/-  100ms
^- 122x215x240x52.ap122.ftt>     2   6   377    19   +441us[ +310us] +/-   37ms
^- 153.127.161.248               2   6   377    18   -749us[ -749us] +/-   51ms
^? kuroa.me                      2   7     3    93   -503us[ -656us] +/-   60ms
^- ntp.arupaka.net               2   6   377    29  -1936us[-2066us] +/-   16ms
^? time.cloudflare.com           0   8     0     -     +0ns[   +0ns] +/-    0ns
^? time.cloudflare.com           0   8     0     -     +0ns[   +0ns] +/-    0ns
^? 2001:ce8:78::2                0   8     0     -     +0ns[   +0ns] +/-    0ns
^? y.ns.gin.ntt.net              0   8     0     -     +0ns[   +0ns] +/-    0ns
^- tama.paina.net                2   7    32   355    +19us[ -377us] +/-   24ms
^- time.cloudflare.com           3   6   377    41  +4731us[+4599us] +/-   64ms
^* ntp-b2.nict.go.jp             1   6   377    38   -162us[ -292us] +/- 3105us
^- 30-213-226-103-static.ch>     1   6   377    39  -4959us[-5089us] +/-   25ms
Connection to 10.0.12.25 closed.
ubuntu@os-client:~/work/openstack/deploy$ juju ssh ceph-osd-backup/0 sudo chronyc sources
210 Number of sources = 16
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^- jiro.paina.net                2   6   277    54  -4570us[-4570us] +/-   70ms
^- time.cloudflare.com           3   6   377    55  +4668us[+4668us] +/-   64ms
^- ntp-5.jonlight.com            2   6   377    55   +764us[ +764us] +/-   29ms
^- y.ns.gin.ntt.net              2   6   377    52  -3281us[-3281us] +/-   99ms
^- 122x215x240x52.ap122.ftt>     2   6   377    53   +308us[ +308us] +/-   37ms
^- 153.127.161.248               2   6   377    57   +395us[ +296us] +/-   51ms
^- ntp.arupaka.net               2   6   377     1  -2168us[-2168us] +/-   17ms
^- kuroa.me                      2   7    73   122   -771us[ -896us] +/-   60ms
^? time.cloudflare.com           0   8     0     -     +0ns[   +0ns] +/-    0ns
^? time.cloudflare.com           0   8     0     -     +0ns[   +0ns] +/-    0ns
^? 2403:71c0:2000::d:8b97        0   8     0     -     +0ns[   +0ns] +/-    0ns
^? y.ns.gin.ntt.net              0   8     0     -     +0ns[   +0ns] +/-    0ns
^- tama.paina.net                2   6   157    22   +211us[ +167us] +/-   30ms
^- time.cloudflare.com           3   6   377    13  +4771us[+4726us] +/-   64ms
^* ntp-b2.nict.go.jp             1   6   377    12   +258us[ +213us] +/- 3523us
^- 30-213-226-103-static.ch>     1   6   357    74  -2184us[-2225us] +/-   22ms
Connection to 10.0.12.24 closed.
ubuntu@os-client:~/work/openstack/deploy$ juju ssh nova-compute/0 sudo chronyc sources
210 Number of sources = 16
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^+ ntp-b2.nict.go.jp             1  10   377   479  +1290us[+1290us] +/- 3287us
^- time.cloudflare.com           3  10   377   33m  +4806us[+4953us] +/-   64ms
^* ntp-a2.nict.go.jp             1  10   377   768   +728us[ +824us] +/- 3617us
^- x.ns.gin.ntt.net              2  10   377   780   -278us[ -183us] +/-   73ms
^- 185.137.97.5                  2  10   377    19  +7197us[+7197us] +/-  166ms
^- corona.gora.si                2  10   377    41  -5740us[-5740us] +/-  141ms
^- unifi.versadns.com            2  10   377   950  +2902us[+2996us] +/-  188ms
^- mail.light-speed.de           2  10   377   757  +1666us[+1666us] +/-  145ms
^- frome.mc.man.ac.uk            2  10   377   821  -2555us[-2461us] +/-  150ms
^- 119.28.206.193                2  10   377   766    +10ms[  +10ms] +/-   56ms
^- x8d1ee404.agdsn.tu-dresd>     2  10   377   586  +3751us[+3751us] +/-  202ms
^- zero.gotroot.ca               2  10   377   128  -1015us[-1015us] +/-  115ms
^- y.ns.gin.ntt.net              2   8   377   223  +3484us[+3484us] +/-  123ms
^- 122x215x240x51.ap122.ftt>     2  10   377   911  +1541us[+1635us] +/-   41ms
^- extendwings.com               2  10   377   278   +608us[ +608us] +/-   16ms
^- tama.paina.net                2  10   377   586  +1755us[+1755us] +/-   21ms
Connection to 10.0.12.34 closed.
ubuntu@os-client:~/work/openstack/deploy$ juju ssh neutron-gateway/0 sudo chronyc sources
210 Number of sources = 16
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^- static.226.144.216.95.cl>     2   6   377    46  -1613us[-1613us] +/-  151ms
^+ 44.190.6.254                  2   6   377    52  -3243us[-4872us] +/-   88ms
^- tor-relais1.link38.eu         2   6   377    59  -1465us[-3091us] +/-  171ms
^- main24.anyplace-hosting.>     2   6   377    54    +22ms[  +22ms] +/-  132ms
^+ ntp.kiba.net                  2   6   377    57  +3746us[+2124us] +/-   46ms
^* ntp-5.jonlight.com            2   6   217    60  -2185us[-3812us] +/-   30ms
^+ time.cloudflare.com           3   6   377    60  +1805us[ +179us] +/-   64ms
^+ tama.paina.net                2   6    37     2  -2111us[-2111us] +/-   30ms
^- ns2.infomir.com.ua            3   6   377    56    +33ms[  +33ms] +/-  170ms
^- telesto.host.static.dont>     2   6   377    57  -1546us[-1546us] +/-  169ms
^- ntp.vives.be                  2   6   377    59    +10ms[+8695us] +/-  162ms
^- 141.255.175.253               2   6   377    57  -8053us[-8053us] +/-  166ms
^- anduin.net                    2   6   377     3  -4833us[-4833us] +/-  170ms
^- ntp1.flashdance.cx            2   6   377     4  +5362us[+5362us] +/-  161ms
^+ 202.118.1.130                 1   6   377     5  +6354us[+6354us] +/-   44ms
^- time.panq.nl                  2   6   377    13  +4789us[+4789us] +/-  165ms
Connection to 10.0.12.23 closed.

nagios

This is the final charm to deploy.

Let's go!

01800-deploy-nagios.sh
#!/bin/bash
juju deploy --config config/nagios.yaml -n 1 --to 12 cs:nagios nagios
juju deploy --config config/nagios.yaml cs:ntp nagios-ntp
juju deploy cs:nrpe nrpe
juju add-relation nagios:juju-info nagios-ntp:juju-info
juju add-relation nagios:monitors nrpe:monitors
juju add-relation nrpe:nrpe-external-master ceph-mon:nrpe-external-master
juju add-relation nrpe:nrpe-external-master ceph-mon-backup:nrpe-external-master
juju add-relation nrpe:nrpe-external-master ceph-osd:nrpe-external-master
juju add-relation nrpe:nrpe-external-master ceph-osd-backup:nrpe-external-master
juju add-relation nrpe:nrpe-external-master ceph-radosgw:nrpe-external-master
juju add-relation nrpe:nrpe-external-master ceph-radosgw-hacluster:nrpe-external-master
juju add-relation nrpe:nrpe-external-master cinder:nrpe-external-master
juju add-relation nrpe:nrpe-external-master cinder-hacluster:nrpe-external-master
juju add-relation nrpe:nrpe-external-master glance:nrpe-external-master
juju add-relation nrpe:nrpe-external-master glance-hacluster:nrpe-external-master
juju add-relation nrpe:nrpe-external-master keystone:nrpe-external-master
juju add-relation nrpe:nrpe-external-master keystone-hacluster:nrpe-external-master
juju add-relation nrpe:nrpe-external-master memcached:nrpe-external-master
juju add-relation nrpe:nrpe-external-master mysql:nrpe-external-master
juju add-relation nrpe:nrpe-external-master mysql-hacluster:nrpe-external-master
juju add-relation nrpe:nrpe-external-master ncc-hacluster:nrpe-external-master
juju add-relation nrpe:nrpe-external-master neutron-api:nrpe-external-master
juju add-relation nrpe:nrpe-external-master neutron-gateway:nrpe-external-master
juju add-relation nrpe:nrpe-external-master neutron-hacluster:nrpe-external-master
juju add-relation nrpe:nrpe-external-master nova-cloud-controller:nrpe-external-master
juju add-relation nrpe:nrpe-external-master nova-compute:nrpe-external-master
juju add-relation nrpe:nrpe-external-master ntp:nrpe-external-master
juju add-relation nrpe:nrpe-external-master openstack-dashboard:nrpe-external-master
juju add-relation nrpe:nrpe-external-master openstack-dashboard-hacluster:nrpe-external-master
juju add-relation nrpe:nrpe-external-master rabbitmq-server:nrpe-external-master
nagios.yaml
nagios:
  nagiosadmin: admin
  password: password
nagios-ntp:
  # https://www.ntppool.org/en/use.html
  pools: 0.pool.ntp.org 1.pool.ntp.org 2.pool.ntp.org 3.pool.ntp.org
  # Japan
  #source: ntp.nict.jp ntp1.jst.mfeed.ad.jp ntp2.jst.mfeed.ad.jp ntp3.jst.mfeed.ad.jp
ubuntu@os-client:~/work/openstack/deploy$ juju deploy --config config/nagios.yaml -n 1 --to 12 cs:nagios nagios
Located charm "cs:nagios-36".
Deploying charm "cs:nagios-36".
ubuntu@os-client:~/work/openstack/deploy$ juju deploy --config config/nagios.yaml cs:ntp nagios-ntp
Located charm "cs:ntp-39".
Deploying charm "cs:ntp-39".
ubuntu@os-client:~/work/openstack/deploy$ juju deploy cs:nrpe nrpe
Located charm "cs:nrpe-63".
Deploying charm "cs:nrpe-63".
ubuntu@os-client:~/work/openstack/deploy$ juju status | head -n32 | tail -n28
ceph-mon                       13.2.8   active           3  ceph-mon               jujucharms   46  ubuntu
ceph-mon-backup                13.2.8   active           3  ceph-mon               jujucharms   46  ubuntu
ceph-osd                       13.2.8   active           3  ceph-osd               jujucharms  301  ubuntu
ceph-osd-backup                13.2.8   active           3  ceph-osd               jujucharms  301  ubuntu
ceph-radosgw                   13.2.8   active           3  ceph-radosgw           jujucharms  286  ubuntu
ceph-radosgw-hacluster                  active           3  hacluster              jujucharms   66  ubuntu
cinder                         14.0.4   active           3  cinder                 jujucharms  301  ubuntu
cinder-backup                  14.0.4   active           3  cinder-backup          jujucharms  248  ubuntu
cinder-ceph                    14.0.4   active           3  cinder-ceph            jujucharms  254  ubuntu
cinder-hacluster                        active           3  hacluster              jujucharms   66  ubuntu
glance                         18.0.1   active           3  glance                 jujucharms  295  ubuntu
glance-hacluster                        active           3  hacluster              jujucharms   66  ubuntu
keystone                       15.0.0   active           3  keystone               jujucharms  312  ubuntu
keystone-hacluster                      active           3  hacluster              jujucharms   66  ubuntu
memcached                               active           3  memcached              jujucharms   28  ubuntu
mysql                          5.7.20   active           3  percona-cluster        jujucharms  286  ubuntu
mysql-hacluster                         active           3  hacluster              jujucharms   66  ubuntu
ncc-hacluster                           active           3  hacluster              jujucharms   66  ubuntu
neutron-api                    14.1.0   active           3  neutron-api            jujucharms  284  ubuntu
neutron-gateway                14.1.0   active           3  neutron-gateway        jujucharms  280  ubuntu
neutron-hacluster                       active           3  hacluster              jujucharms   66  ubuntu
neutron-openvswitch            14.1.0   active           3  neutron-openvswitch    jujucharms  274  ubuntu
nova-cloud-controller          19.1.0   active           3  nova-cloud-controller  jujucharms  343  ubuntu
nova-compute                   19.1.0   active           3  nova-compute           jujucharms  314  ubuntu
ntp                            3.2      maintenance     12  ntp                    jujucharms   39  ubuntu
openstack-dashboard            15.2.0   active           3  openstack-dashboard    jujucharms  302  ubuntu
openstack-dashboard-hacluster           active           3  hacluster              jujucharms   66  ubuntu
rabbitmq-server                3.6.10   active           3  rabbitmq-server        jujucharms  100  ubuntu
ubuntu@os-client:~/work/openstack/deploy$ juju status | head -n32 | tail -n28 | awk '{ print $1 }'
ceph-mon
ceph-mon-backup
ceph-osd
ceph-osd-backup
ceph-radosgw
ceph-radosgw-hacluster
cinder
cinder-backup
cinder-ceph
cinder-hacluster
glance
glance-hacluster
keystone
keystone-hacluster
memcached
mysql
mysql-hacluster
ncc-hacluster
neutron-api
neutron-gateway
neutron-hacluster
neutron-openvswitch
nova-cloud-controller
nova-compute
ntp
openstack-dashboard
openstack-dashboard-hacluster
rabbitmq-server
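The application list above maps one-to-one onto the nrpe relation lines in 01800-deploy-nagios.sh, so those commands can be generated instead of typed by hand. A minimal sketch (the loop is my own convenience helper, not part of the original script; the app list here is only an excerpt):

```shell
#!/bin/bash
# Generate the nrpe add-relation commands from an application list
# (excerpt of the list above; extend it with the remaining apps).
apps="ceph-mon ceph-osd cinder glance keystone mysql rabbitmq-server"
for app in $apps; do
  echo "juju add-relation nrpe:nrpe-external-master ${app}:nrpe-external-master"
done
```

Piping the output through a review before executing it avoids typos in the long relation list.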

watch status

juju debug-log --include nagios --include nrpe --include nagios-ntp
juju status "nagios" "nrpe"
watch -n 1 --color juju status "nagios" "nrpe" --color

Verify Operation

juju status "nagios" "nrpe"
juju ssh nagios/0 sudo chronyc sources
_nagios_ip=$(juju run --unit nagios/leader 'unit-get private-address')
echo "http://${_nagios_ip}/"
ubuntu@os-client:~/work/openstack/deploy$ juju ssh nagios/0 sudo chronyc sources
210 Number of sources = 17
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^- ntp.kiba.net                  2   6   167    28  +3909us[+3909us] +/-   76ms
^- extendwings.com               2   6    77    30   -699us[ -699us] +/- 9648us
^- tama.paina.net                2   6    77    29   -331us[ -331us] +/-   38ms
^- kuroa.me                      2   6    77    29   -719us[ -719us] +/-   34ms
^- ntp-5.jonlight.com            2   6    77    31    -80us[  -80us] +/-   27ms
^- jiro.paina.net                2   6    77    30  -4750us[-4750us] +/-   53ms
^- y.ns.gin.ntt.net              2   6    77    31  -3752us[-3752us] +/-  109ms
^* ntp-a2.nict.go.jp             1   6    77    32    -75us[ -911us] +/- 2988us
^? 2403:71c0:2000::d:8b97        0   8     0     -     +0ns[   +0ns] +/-    0ns
^? time.cloudflare.com           0   8     0     -     +0ns[   +0ns] +/-    0ns
^? 2001:ce8:78::2                0   8     0     -     +0ns[   +0ns] +/-    0ns
^? t2.time.ir2.yahoo.com         0   8     0     -     +0ns[   +0ns] +/-    0ns
^- x.ns.gin.ntt.net              2   6    77    52  +2308us[+1475us] +/-   75ms
^- 153.127.161.248               2   6    77    49   -513us[ -513us] +/-   60ms
^- sv1.localdomain1.com          2   6    77    63   -182us[-1008us] +/-   27ms
^- time.cloudflare.com           3   6    77    60  +4146us[+4146us] +/-   65ms
^- ntp.arupaka.net               2   6    77    60  -3128us[-3128us] +/-   17ms
Connection to 10.0.12.27 closed.
ubuntu@os-client:~/work/openstack/deploy$ _nagios_ip=$(juju run --unit nagios/leader 'unit-get private-address')
ubuntu@os-client:~/work/openstack/deploy$ echo "http://${_nagios_ip}/"
http://10.0.12.27/
  1. ID: admin
  2. PASSWORD: password

Discovering and collecting all metrics takes quite a long time, depending on your machine spec.

Deployment issues are frequently caused by an underpowered machine.

The total metric count is now 637.

If you see a persistent problem metric such as “service is down” even though you believe the service actually works fine, remember that the monitoring status should eventually turn all green. Whether it is a false positive or a genuine outage, the following workarounds help; in my experience they resolve almost every problem.

juju run-action unit/number resume --wait

or

juju run-action unit/number pause --wait
juju run-action unit/number resume --wait

or

juju ssh unit/number sudo systemctl reboot
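When several units need the same treatment, the pause/resume cycle can be scripted. A minimal sketch (the unit names are from this deployment; the loop only prints the commands so they can be reviewed before running):

```shell
#!/bin/bash
# Print (rather than execute) a pause/resume cycle for each unit,
# so the commands can be reviewed first.
for unit in mysql/0 mysql/1 mysql/2; do
  echo "juju run-action ${unit} pause --wait"
  echo "juju run-action ${unit} resume --wait"
done
```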

In this case, the mysql/2 (Percona cluster) service was reported as down and CRITICAL, and the report was true.

ubuntu@os-client:~/work/openstack/deploy$ juju ssh mysql/2 systemctl status mysql.service
● mysql.service - Percona XtraDB Cluster daemon
   Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since Mon 2020-05-18 01:29:07 UTC; 3h 50min ago

May 18 01:28:28 juju-a5ab4c-2-lxd-4 systemd[1]: Starting Percona XtraDB Cluster daemon...
May 18 01:28:31 juju-a5ab4c-2-lxd-4 mysql[21549]: WSREP: Recovered position f88cb036-92a1-11ea-9440-c3f6eafa530e:13888
May 18 01:28:31 juju-a5ab4c-2-lxd-4 mysql[21553]:  * Starting MySQL (Percona XtraDB Cluster) database server mysqld
May 18 01:29:07 juju-a5ab4c-2-lxd-4 systemd[1]: Stopped Percona XtraDB Cluster daemon.
Connection to 10.0.12.48 closed.
ubuntu@os-client:~/work/openstack/deploy$ juju ssh mysql/0 systemctl status mysql.service
● mysql.service - Percona XtraDB Cluster daemon
   Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2020-05-18 01:30:08 UTC; 3h 49min ago
 Main PID: 49445 (mysqld_safe)
    Tasks: 83 (limit: 19147)
   CGroup: /system.slice/mysql.service
           ├─49445 /bin/sh /usr/bin/mysqld_safe
           └─50050 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/percona-xtradb-cluster --plugin-dir=/usr/lib/mysql/plugin --wsrep-provider=/usr/lib/galera3/libgalera_smm

May 18 01:30:08 juju-a5ab4c-0-lxd-4 systemd[1]: Started Percona XtraDB Cluster daemon.
May 18 04:04:40 juju-a5ab4c-0-lxd-4 systemd[1]: mysql.service: Failed to reset devices.list: Operation not permitted
May 18 04:04:41 juju-a5ab4c-0-lxd-4 systemd[1]: mysql.service: Failed to reset devices.list: Operation not permitted
May 18 04:04:42 juju-a5ab4c-0-lxd-4 systemd[1]: mysql.service: Failed to reset devices.list: Operation not permitted
May 18 04:05:58 juju-a5ab4c-0-lxd-4 systemd[1]: mysql.service: Failed to reset devices.list: Operation not permitted
May 18 04:06:02 juju-a5ab4c-0-lxd-4 systemd[1]: mysql.service: Failed to reset devices.list: Operation not permitted
May 18 04:06:06 juju-a5ab4c-0-lxd-4 systemd[1]: mysql.service: Failed to reset devices.list: Operation not permitted
May 18 04:06:07 juju-a5ab4c-0-lxd-4 systemd[1]: mysql.service: Failed to reset devices.list: Operation not permitted
May 18 04:06:09 juju-a5ab4c-0-lxd-4 systemd[1]: mysql.service: Failed to reset devices.list: Operation not permitted
May 18 04:07:31 juju-a5ab4c-0-lxd-4 systemd[1]: mysql.service: Failed to reset devices.list: Operation not permitted
Connection to 10.0.12.47 closed.
ubuntu@os-client:~/work/openstack/deploy$ juju ssh mysql/1 systemctl status mysql.service
● mysql.service - Percona XtraDB Cluster daemon
   Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2020-05-18 01:29:47 UTC; 3h 50min ago
 Main PID: 39609 (mysqld_safe)
    Tasks: 314 (limit: 19147)
   CGroup: /system.slice/mysql.service
           ├─39609 /bin/sh /usr/bin/mysqld_safe
           └─40216 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/percona-xtradb-cluster --plugin-dir=/usr/lib/mysql/plugin --wsrep-provider=/usr/lib/galera3/libgalera_smm

May 18 03:58:23 juju-a5ab4c-1-lxd-4 systemd[1]: mysql.service: Failed to reset devices.list: Operation not permitted
May 18 04:05:39 juju-a5ab4c-1-lxd-4 systemd[1]: mysql.service: Failed to reset devices.list: Operation not permitted
May 18 04:05:41 juju-a5ab4c-1-lxd-4 systemd[1]: mysql.service: Failed to reset devices.list: Operation not permitted
May 18 04:05:43 juju-a5ab4c-1-lxd-4 systemd[1]: mysql.service: Failed to reset devices.list: Operation not permitted
May 18 04:07:17 juju-a5ab4c-1-lxd-4 systemd[1]: mysql.service: Failed to reset devices.list: Operation not permitted
May 18 04:07:19 juju-a5ab4c-1-lxd-4 systemd[1]: mysql.service: Failed to reset devices.list: Operation not permitted
May 18 04:07:19 juju-a5ab4c-1-lxd-4 systemd[1]: mysql.service: Failed to reset devices.list: Operation not permitted
May 18 04:07:20 juju-a5ab4c-1-lxd-4 systemd[1]: mysql.service: Failed to reset devices.list: Operation not permitted
May 18 04:07:21 juju-a5ab4c-1-lxd-4 systemd[1]: mysql.service: Failed to reset devices.list: Operation not permitted
May 18 04:08:38 juju-a5ab4c-1-lxd-4 systemd[1]: mysql.service: Failed to reset devices.list: Operation not permitted
Connection to 10.0.12.46 closed.

Let's recover it.

ubuntu@os-client:~/work/openstack/deploy$ juju actions mysql
Action         Description
backup         Full database backup
bootstrap-pxc  Bootstrap this unit of Percona.
*WARNING* This action will bootstrap this unit of Percona cluster. This
should only occur in a recovery scenario. Make sure this unit has the
highest sequence number in grastate.dat or data loss may occur.
See upstream Percona documentation for context
https://www.percona.com/blog/2014/09/01/galera-replication-how-to-recover-a-pxc-cluster/
complete-cluster-series-upgrade  Perform final operations post series upgrade. Inform all nodes in the
cluster the upgrade is complete cluster wide. Update configuration with all
peers for wsrep replication.
This action should be performed on the current leader. Note the leader may
have changed during the series upgrade process.
notify-bootstrapped  No description
pause                Pause the MySQL service.
resume               Resume the MySQL service.

The action list above shows that the pause and resume actions are supported.

juju run-action mysql/2 resume --wait

juju status mysql shows a (resume) message.

Model    Controller       Cloud/Region    Version  SLA          Timestamp
default  maas-controller  mymaas/default  2.7.6    unsupported  14:23:38+09:00

App              Version  Status  Scale  Charm            Store       Rev  OS      Notes
mysql            5.7.20   active      3  percona-cluster  jujucharms  286  ubuntu
mysql-hacluster           active      3  hacluster        jujucharms   66  ubuntu
nrpe                      active      3  nrpe             jujucharms   63  ubuntu

Unit                  Workload  Agent      Machine  Public address  Ports          Message
mysql/0*              active    idle       0/lxd/4  10.0.12.47      3306/tcp       Unit is ready
  mysql-hacluster/1*  active    idle                10.0.12.47                     Unit is ready and clustered
  nrpe/27             active    idle                10.0.12.47      icmp,5666/tcp  ready
mysql/1               active    idle       1/lxd/4  10.0.12.46      3306/tcp       Unit is ready
  mysql-hacluster/2   active    idle                10.0.12.46                     Unit is ready and clustered
  nrpe/28             active    idle                10.0.12.46      icmp,5666/tcp  ready
mysql/2               active    executing  2/lxd/4  10.0.12.48      3306/tcp       (resume) Unit is ready
  mysql-hacluster/0   active    idle                10.0.12.48                     Unit is ready and clustered
  nrpe/26             active    idle                10.0.12.48      icmp,5666/tcp  ready

Machine  State    DNS         Inst id              Series  AZ       Message
0        started  10.0.12.23  os-controller1       bionic  default  Deployed
0/lxd/4  started  10.0.12.47  juju-a5ab4c-0-lxd-4  bionic  default  Container started
1        started  10.0.12.22  os-controller2       bionic  default  Deployed
1/lxd/4  started  10.0.12.46  juju-a5ab4c-1-lxd-4  bionic  default  Container started
2        started  10.0.12.26  os-controller3       bionic  default  Deployed
2/lxd/4  started  10.0.12.48  juju-a5ab4c-2-lxd-4  bionic  default  Container started

That did not take effect, so move on to the next workaround: pause, then resume.

ubuntu@os-client:~/work/openstack/deploy$ juju run-action mysql/2 pause --wait
unit-mysql-2:
  UnitId: mysql/2
  id: "25"
  results:
    Stderr: |
      Removed /etc/systemd/system/multi-user.target.wants/mysql@bootstrap.service.
      Created symlink /etc/systemd/system/mysql@bootstrap.service → /dev/null.
    Stdout: |
      active
      active
      active
      inactive
  status: completed
  timing:
    completed: 2020-05-18 05:26:00 +0000 UTC
    enqueued: 2020-05-18 05:25:39 +0000 UTC
    started: 2020-05-18 05:25:40 +0000 UTC
ubuntu@os-client:~/work/openstack/deploy$ juju run-action mysql/2 resume --wait
unit-mysql-2:
  UnitId: mysql/2
  id: "26"
  results:
    Stderr: |
      Synchronizing state of mysql.service with SysV service script with /lib/systemd/systemd-sysv-install.
      Executing: /lib/systemd/systemd-sysv-install enable mysql
    Stdout: |
      inactive
      inactive
      inactive
      active
      Reading package lists...
      Building dependency tree...
      Reading state information...
      python-dbus is already the newest version (1.2.6-1).
      The following package was automatically installed and is no longer required:
        libfreetype6
      Use 'apt autoremove' to remove it.
      0 upgraded, 0 newly installed, 0 to remove and 13 not upgraded.
  status: completed
  timing:
    completed: 2020-05-18 05:26:48 +0000 UTC
    enqueued: 2020-05-18 05:26:11 +0000 UTC
    started: 2020-05-18 05:26:12 +0000 UTC
ubuntu@os-client:~/work/openstack/deploy$ juju ssh mysql/2 systemctl status mysql.service
● mysql.service - Percona XtraDB Cluster daemon
   Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2020-05-18 05:26:30 UTC; 55s ago
  Process: 46780 ExecStart=/etc/init.d/mysql start (code=exited, status=0/SUCCESS)
  Process: 46637 ExecStartPre=/usr/share/mysql/mysql-systemd-start pre (code=exited, status=0/SUCCESS)
 Main PID: 46818 (mysqld_safe)
    Tasks: 82 (limit: 19147)
   CGroup: /system.slice/mysql.service
           ├─46818 /bin/sh /usr/bin/mysqld_safe
           └─47428 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/percona-xtradb-cluster --plugin-dir=/usr/lib/mysql/plugin --wsrep-provider=/usr/lib/galera3/libgalera_smm

May 18 05:26:13 juju-a5ab4c-2-lxd-4 systemd[1]: Starting Percona XtraDB Cluster daemon...
May 18 05:26:17 juju-a5ab4c-2-lxd-4 mysql[46776]: WSREP: Recovered position fbc937e1-98a6-11ea-aa76-1e3dcaac60a4:35784
May 18 05:26:17 juju-a5ab4c-2-lxd-4 mysql[46780]:  * Starting MySQL (Percona XtraDB Cluster) database server mysqld
May 18 05:26:19 juju-a5ab4c-2-lxd-4 mysql[46780]:  * State transfer in progress, setting sleep higher mysqld
May 18 05:26:30 juju-a5ab4c-2-lxd-4 mysql[46780]:    ...done.
May 18 05:26:30 juju-a5ab4c-2-lxd-4 systemd[1]: Started Percona XtraDB Cluster daemon.
Connection to 10.0.12.48 closed.

Problem resolved.

SWAP WARNING - 34% free (1353 MB out of 4095 MB)

This is a transient deployment issue.

WARNING: too few PGs per OSD (8 < min 30)

This can be ignored.
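The warning comes from Ceph's placement-group sizing heuristic. A rough sketch of the common rule of thumb (an assumption on my part: roughly 100 PGs per OSD divided by the replica count, rounded down to a power of two; use the official Ceph PG calculator for real sizing):

```shell
#!/bin/bash
# Rule-of-thumb PG count: (OSDs * 100) / replicas, rounded down to a power of two.
osds=3        # OSDs serving the pool (this deployment has 3 per cluster)
replicas=3    # replication factor
target=$(( osds * 100 / replicas ))
pg_num=1
while [ $(( pg_num * 2 )) -le "$target" ]; do
  pg_num=$(( pg_num * 2 ))
done
echo "suggested pg_num: ${pg_num}"
```

With only a few OSDs the suggested value is small, which is why a tiny lab cluster trips the "too few PGs" warning without indicating a real problem.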

WARNING: reachability is too low (72.66%) - should be greater than 75.00%

This is an NTP issue.

ubuntu@os-client:~/work/openstack/deploy$ juju ssh ceph-osd-backup/2 sudo systemctl restart chrony.service
Connection to 10.0.12.28 closed.
ubuntu@os-client:~/work/openstack/deploy$ juju ssh ceph-osd-backup/2 sudo systemctl status chrony.service
● chrony.service - chrony, an NTP client/server
   Loaded: loaded (/lib/systemd/system/chrony.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2020-05-18 06:00:55 UTC; 6s ago
     Docs: man:chronyd(8)
           man:chronyc(1)
           man:chrony.conf(5)
  Process: 85517 ExecStartPost=/usr/lib/chrony/chrony-helper update-daemon (code=exited, status=0/SUCCESS)
  Process: 85494 ExecStart=/usr/lib/systemd/scripts/chronyd-starter.sh $DAEMON_OPTS (code=exited, status=0/SUCCESS)
 Main PID: 85512 (chronyd)
    Tasks: 1 (limit: 9470)
   CGroup: /system.slice/chrony.service
           └─85512 /usr/sbin/chronyd

May 18 06:00:55 os-swift3 systemd[1]: Starting chrony, an NTP client/server...
May 18 06:00:55 os-swift3 chronyd[85512]: chronyd version 3.2 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SECHASH +SIGND +ASYNCDNS +IPV6 -DEBUG)
May 18 06:00:55 os-swift3 chronyd[85512]: Frequency 5.849 +/- 0.110 ppm read from /var/lib/chrony/chrony.drift
May 18 06:00:55 os-swift3 systemd[1]: Started chrony, an NTP client/server.
Connection to 10.0.12.28 closed.
ubuntu@os-client:~/work/openstack/deploy$ juju ssh ceph-osd-backup/2 sudo chronyc sources
210 Number of sources = 19
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^- y.ns.gin.ntt.net              2   6    17    22  -2808us[-3661us] +/-  113ms
^+ 122x215x240x52.ap122.ftt>     2   6    17    20   +690us[ -162us] +/-   29ms
^+ jiro.paina.net                2   6    17    23  -4871us[-5724us] +/-   62ms
^* kuroa.me                      2   6    17    19   +195us[ -658us] +/-   36ms
^- time.richiemcintosh.com       2   6    17    19  -2739us[-2739us] +/-  115ms
^- bo.leptonics.com              3   6    17    19  +3451us[+3451us] +/-  204ms
^- www.kapos-net.hu              3   6    17    22    +15ms[  +14ms] +/-  187ms
^- 108.61.73.243                 2   6    17    21  +8468us[+7615us] +/-  124ms
^? ipv6.ntp2.rbauman.com         0   7     0     -     +0ns[   +0ns] +/-    0ns
^? time.cloudflare.com           0   7     0     -     +0ns[   +0ns] +/-    0ns
^? time.paina.net                0   7     0     -     +0ns[   +0ns] +/-    0ns
^? y.ns.gin.ntt.net              0   7     0     -     +0ns[   +0ns] +/-    0ns
^+ ntp-5.jonlight.com            2   6    17    33   +929us[  +76us] +/-   45ms
^- mx.execve.net                 3   6    17    32  +3768us[+2915us] +/-  116ms
^+ x.ns.gin.ntt.net              2   6    17    32  +1476us[ +623us] +/-   74ms
^- nettuno.ntp.irh.it            2   6    17    30  +6166us[+6166us] +/-  159ms
^- 51-159-6-183.rev.poneyte>     3   6    17    31  -5809us[-6662us] +/-  206ms
^- 202-65-114-202.jogja.cit>     2   6    17    34  +1867us[+1167us] +/-   98ms
^- up2.com                       2   6    17    31  +1336us[ +484us] +/-  153ms
Connection to 10.0.12.28 closed.

Review juju status

ubuntu@os-client:~/work/openstack/deploy$ juju status
Model    Controller       Cloud/Region    Version  SLA          Timestamp
default  maas-controller  mymaas/default  2.7.6    unsupported  15:27:50+09:00

App                            Version  Status  Scale  Charm                  Store       Rev  OS      Notes
ceph-mon                       13.2.8   active      3  ceph-mon               jujucharms   46  ubuntu
ceph-mon-backup                13.2.8   active      3  ceph-mon               jujucharms   46  ubuntu
ceph-osd                       13.2.8   active      3  ceph-osd               jujucharms  301  ubuntu
ceph-osd-backup                13.2.8   active      3  ceph-osd               jujucharms  301  ubuntu
ceph-radosgw                   13.2.8   active      3  ceph-radosgw           jujucharms  286  ubuntu
ceph-radosgw-hacluster                  active      3  hacluster              jujucharms   66  ubuntu
cinder                         14.0.4   active      3  cinder                 jujucharms  301  ubuntu
cinder-backup                  14.0.4   active      3  cinder-backup          jujucharms  248  ubuntu
cinder-ceph                    14.0.4   active      3  cinder-ceph            jujucharms  254  ubuntu
cinder-hacluster                        active      3  hacluster              jujucharms   66  ubuntu
glance                         18.0.1   active      3  glance                 jujucharms  295  ubuntu
glance-hacluster                        active      3  hacluster              jujucharms   66  ubuntu
keystone                       15.0.0   active      3  keystone               jujucharms  312  ubuntu
keystone-hacluster                      active      3  hacluster              jujucharms   66  ubuntu
memcached                               active      3  memcached              jujucharms   28  ubuntu
mysql                          5.7.20   active      3  percona-cluster        jujucharms  286  ubuntu
mysql-hacluster                         active      3  hacluster              jujucharms   66  ubuntu
nagios                                  active      1  nagios                 jujucharms   36  ubuntu
nagios-ntp                     3.2      active      1  ntp                    jujucharms   39  ubuntu
ncc-hacluster                           active      3  hacluster              jujucharms   66  ubuntu
neutron-api                    14.1.0   active      3  neutron-api            jujucharms  284  ubuntu
neutron-gateway                14.1.0   active      3  neutron-gateway        jujucharms  280  ubuntu
neutron-hacluster                       active      3  hacluster              jujucharms   66  ubuntu
neutron-openvswitch            14.1.0   active      3  neutron-openvswitch    jujucharms  274  ubuntu
nova-cloud-controller          19.1.0   active      3  nova-cloud-controller  jujucharms  343  ubuntu
nova-compute                   19.1.0   active      3  nova-compute           jujucharms  314  ubuntu
nrpe                                    active     48  nrpe                   jujucharms   63  ubuntu
ntp                            3.2      active     12  ntp                    jujucharms   39  ubuntu
openstack-dashboard            15.2.0   active      3  openstack-dashboard    jujucharms  302  ubuntu
openstack-dashboard-hacluster           active      3  hacluster              jujucharms   66  ubuntu
rabbitmq-server                3.6.10   active      3  rabbitmq-server        jujucharms  100  ubuntu

Unit                                Workload  Agent  Machine   Public address  Ports                       Message
ceph-mon-backup/0                   active    idle   0/lxd/11  10.0.12.68                                  Unit is ready and clustered
  nrpe/3                            active    idle             10.0.12.68      icmp,5666/tcp               ready
ceph-mon-backup/1*                  active    idle   1/lxd/11  10.0.12.67                                  Unit is ready and clustered
  nrpe/4                            active    idle             10.0.12.67      icmp,5666/tcp               ready
ceph-mon-backup/2                   active    idle   2/lxd/11  10.0.12.69                                  Unit is ready and clustered
  nrpe/5                            active    idle             10.0.12.69      icmp,5666/tcp               ready
ceph-mon/0*                         active    idle   0/lxd/0   10.0.12.35                                  Unit is ready and clustered
  nrpe/0*                           active    idle             10.0.12.35      icmp,5666/tcp               ready
ceph-mon/1                          active    idle   1/lxd/0   10.0.12.36                                  Unit is ready and clustered
  nrpe/1                            active    idle             10.0.12.36      icmp,5666/tcp               ready
ceph-mon/2                          active    idle   2/lxd/0   10.0.12.37                                  Unit is ready and clustered
  nrpe/2                            active    idle             10.0.12.37      icmp,5666/tcp               ready
ceph-osd-backup/0                   active    idle   9         10.0.12.24                                  Unit is ready (1 OSD)
  nrpe/21                           active    idle             10.0.12.24      icmp,5666/tcp               ready
  ntp/5                             active    idle             10.0.12.24      123/udp                     chrony: Ready, OK: offset is 0.000075
ceph-osd-backup/1*                  active    idle   10        10.0.12.33                                  Unit is ready (1 OSD)
  nrpe/19                           active    idle             10.0.12.33      icmp,5666/tcp               ready
  ntp/3                             active    idle             10.0.12.33      123/udp                     chrony: Ready, OK: offset is 0.000388
ceph-osd-backup/2                   active    idle   11        10.0.12.28                                  Unit is ready (1 OSD)
  nrpe/20                           active    idle             10.0.12.28      icmp,5666/tcp               ready
  ntp/4                             active    idle             10.0.12.28      123/udp                     chrony: Ready, OK: offset is -0.000297
ceph-osd/0*                         active    idle   6         10.0.12.25                                  Unit is ready (1 OSD)
  nrpe/6                            active    idle             10.0.12.25      icmp,5666/tcp               ready
  ntp/2                             active    idle             10.0.12.25      123/udp                     chrony: Ready, OK: offset is 0.000912
ceph-osd/1                          active    idle   7         10.0.12.31                                  Unit is ready (1 OSD)
  nrpe/8                            active    idle             10.0.12.31      icmp,5666/tcp               ready
  ntp/1                             active    idle             10.0.12.31      123/udp                     chrony: Ready, OK: offset is 0.000045
ceph-osd/2                          active    idle   8         10.0.12.32                                  Unit is ready (1 OSD)
  nrpe/7                            active    idle             10.0.12.32      icmp,5666/tcp               ready
  ntp/0*                            active    idle             10.0.12.32      123/udp                     chrony: Ready, OK: offset is 0.000400
ceph-radosgw/3                      active    idle   0/lxd/10  10.0.12.65      80/tcp                      Unit is ready
  ceph-radosgw-hacluster/3*         active    idle             10.0.12.65                                  Unit is ready and clustered
  nrpe/11                           active    idle             10.0.12.65      icmp,5666/tcp               ready
ceph-radosgw/4                      active    idle   1/lxd/10  10.0.12.66      80/tcp                      Unit is ready
  ceph-radosgw-hacluster/4          active    idle             10.0.12.66                                  Unit is ready and clustered
  nrpe/12                           active    idle             10.0.12.66      icmp,5666/tcp               ready
ceph-radosgw/5*                     active    idle   2/lxd/10  10.0.12.64      80/tcp                      Unit is ready
  ceph-radosgw-hacluster/5          active    idle             10.0.12.64                                  Unit is ready and clustered
  nrpe/13                           active    idle             10.0.12.64      icmp,5666/tcp               ready
cinder/0*                           active    idle   0/lxd/7   10.0.12.55      8776/tcp                    Unit is ready
  cinder-backup/0                   active    idle             10.0.12.55                                  Unit is ready
  cinder-ceph/2*                    active    idle             10.0.12.55                                  Unit is ready
  cinder-hacluster/2*               active    idle             10.0.12.55                                  Unit is ready and clustered
  nrpe/10                           active    idle             10.0.12.55      icmp,5666/tcp               ready
cinder/1                            active    idle   1/lxd/7   10.0.12.57      8776/tcp                    Unit is ready
  cinder-backup/2*                  active    idle             10.0.12.57                                  Unit is ready
  cinder-ceph/1                     active    idle             10.0.12.57                                  Unit is ready
  cinder-hacluster/1                active    idle             10.0.12.57                                  Unit is ready and clustered
  nrpe/9                            active    idle             10.0.12.57      icmp,5666/tcp               ready
cinder/2                            active    idle   2/lxd/7   10.0.12.56      8776/tcp                    Unit is ready
  cinder-backup/1                   active    idle             10.0.12.56                                  Unit is ready
  cinder-ceph/0                     active    idle             10.0.12.56                                  Unit is ready
  cinder-hacluster/0                active    idle             10.0.12.56                                  Unit is ready and clustered
  nrpe/22                           active    idle             10.0.12.56      icmp,5666/tcp               ready
glance/0*                           active    idle   0/lxd/6   10.0.12.53      9292/tcp                    Unit is ready
  glance-hacluster/2*               active    idle             10.0.12.53                                  Unit is ready and clustered
  nrpe/14                           active    idle             10.0.12.53      icmp,5666/tcp               ready
glance/1                            active    idle   1/lxd/6   10.0.12.54      9292/tcp                    Unit is ready
  glance-hacluster/1                active    idle             10.0.12.54                                  Unit is ready and clustered
  nrpe/16                           active    idle             10.0.12.54      icmp,5666/tcp               ready
glance/2                            active    idle   2/lxd/6   10.0.12.52      9292/tcp                    Unit is ready
  glance-hacluster/0                active    idle             10.0.12.52                                  Unit is ready and clustered
  nrpe/15                           active    idle             10.0.12.52      icmp,5666/tcp               ready
keystone/0*                         active    idle   0/lxd/5   10.0.12.49      5000/tcp                    Unit is ready
  keystone-hacluster/1*             active    idle             10.0.12.49                                  Unit is ready and clustered
  nrpe/17                           active    idle             10.0.12.49      icmp,5666/tcp               ready
keystone/1                          active    idle   1/lxd/5   10.0.12.50      5000/tcp                    Unit is ready
  keystone-hacluster/0              active    idle             10.0.12.50                                  Unit is ready and clustered
  nrpe/18                           active    idle             10.0.12.50      icmp,5666/tcp               ready
keystone/2                          active    idle   2/lxd/5   10.0.12.51      5000/tcp                    Unit is ready
  keystone-hacluster/2              active    idle             10.0.12.51                                  Unit is ready and clustered
  nrpe/46                           active    idle             10.0.12.51      icmp,5666/tcp               ready
memcached/0*                        active    idle   0/lxd/3   10.0.12.43      11211/tcp                   Unit is ready
  nrpe/23                           active    idle             10.0.12.43      icmp,5666/tcp               ready
memcached/1                         active    idle   1/lxd/3   10.0.12.44      11211/tcp                   Unit is ready
  nrpe/25                           active    idle             10.0.12.44      icmp,5666/tcp               ready
memcached/2                         active    idle   2/lxd/3   10.0.12.45      11211/tcp                   Unit is ready
  nrpe/24                           active    idle             10.0.12.45      icmp,5666/tcp               ready
mysql/0*                            active    idle   0/lxd/4   10.0.12.47      3306/tcp                    Unit is ready
  mysql-hacluster/1*                active    idle             10.0.12.47                                  Unit is ready and clustered
  nrpe/27                           active    idle             10.0.12.47      icmp,5666/tcp               ready
mysql/1                             active    idle   1/lxd/4   10.0.12.46      3306/tcp                    Unit is ready
  mysql-hacluster/2                 active    idle             10.0.12.46                                  Unit is ready and clustered
  nrpe/28                           active    idle             10.0.12.46      icmp,5666/tcp               ready
mysql/2                             active    idle   2/lxd/4   10.0.12.48      3306/tcp                    Unit is ready
  mysql-hacluster/0                 active    idle             10.0.12.48                                  Unit is ready and clustered
  nrpe/26                           active    idle             10.0.12.48      icmp,5666/tcp               ready
nagios/0*                           active    idle   12        10.0.12.27      80/tcp                      ready
  nagios-ntp/0*                     active    idle             10.0.12.27      123/udp                     chrony: Ready
neutron-api/0*                      active    idle   0/lxd/9   10.0.12.62      9696/tcp                    Unit is ready
  neutron-hacluster/2*              active    idle             10.0.12.62                                  Unit is ready and clustered
  nrpe/30                           active    idle             10.0.12.62      icmp,5666/tcp               ready
neutron-api/1                       active    idle   1/lxd/9   10.0.12.61      9696/tcp                    Unit is ready
  neutron-hacluster/1               active    idle             10.0.12.61                                  Unit is ready and clustered
  nrpe/33                           active    idle             10.0.12.61      icmp,5666/tcp               ready
neutron-api/2                       active    idle   2/lxd/9   10.0.12.63      9696/tcp                    Unit is ready
  neutron-hacluster/0               active    idle             10.0.12.63                                  Unit is ready and clustered
  nrpe/29                           active    idle             10.0.12.63      icmp,5666/tcp               ready
neutron-gateway/0*                  active    idle   0         10.0.12.23                                  Unit is ready
  nrpe/32                           active    idle             10.0.12.23      icmp,5666/tcp               ready
  ntp/10                            active    idle             10.0.12.23      123/udp                     chrony: Ready, OK: offset is 0.001769
neutron-gateway/1                   active    idle   1         10.0.12.22                                  Unit is ready
  nrpe/34                           active    idle             10.0.12.22      icmp,5666/tcp               ready
  ntp/9                             active    idle             10.0.12.22      123/udp                     chrony: Ready, OK: offset is 0.000338
neutron-gateway/2                   active    idle   2         10.0.12.26                                  Unit is ready
  nrpe/31                           active    idle             10.0.12.26      icmp,5666/tcp               ready
  ntp/11                            active    idle             10.0.12.26      123/udp                     chrony: Ready, OK: offset is -0.001703
nova-cloud-controller/0*            active    idle   0/lxd/8   10.0.12.59      8774/tcp,8775/tcp,8778/tcp  Unit is ready
  ncc-hacluster/2*                  active    idle             10.0.12.59                                  Unit is ready and clustered
  nrpe/36                           active    idle             10.0.12.59      icmp,5666/tcp               ready
nova-cloud-controller/1             active    idle   1/lxd/8   10.0.12.58      8774/tcp,8775/tcp,8778/tcp  Unit is ready
  ncc-hacluster/0                   active    idle             10.0.12.58                                  Unit is ready and clustered
  nrpe/47                           active    idle             10.0.12.58      icmp,5666/tcp               ready
nova-cloud-controller/2             active    idle   2/lxd/8   10.0.12.60      8774/tcp,8775/tcp,8778/tcp  Unit is ready
  ncc-hacluster/1                   active    idle             10.0.12.60                                  Unit is ready and clustered
  nrpe/35                           active    idle             10.0.12.60      icmp,5666/tcp               ready
nova-compute/0*                     active    idle   3         10.0.12.34                                  Unit is ready
  neutron-openvswitch/1             active    idle             10.0.12.34                                  Unit is ready
  nrpe/38                           active    idle             10.0.12.34      icmp,5666/tcp               ready
  ntp/6                             active    idle             10.0.12.34      123/udp                     chrony: Ready, OK: offset is 0.000147
nova-compute/1                      active    idle   4         10.0.12.29                                  Unit is ready
  neutron-openvswitch/2             active    idle             10.0.12.29                                  Unit is ready
  nrpe/39                           active    idle             10.0.12.29      icmp,5666/tcp               ready
  ntp/7                             active    idle             10.0.12.29      123/udp                     chrony: Ready, OK: offset is 0.000979
nova-compute/2                      active    idle   5         10.0.12.30                                  Unit is ready
  neutron-openvswitch/0*            active    idle             10.0.12.30                                  Unit is ready
  nrpe/37                           active    idle             10.0.12.30      icmp,5666/tcp               ready
  ntp/8                             active    idle             10.0.12.30      123/udp                     chrony: Ready, OK: offset is 0.000559
openstack-dashboard/0               active    idle   0/lxd/12  10.0.12.70      80/tcp,443/tcp              Unit is ready
  nrpe/42                           active    idle             10.0.12.70      icmp,5666/tcp               ready
  openstack-dashboard-hacluster/2   active    idle             10.0.12.70                                  Unit is ready and clustered
openstack-dashboard/1               active    idle   1/lxd/12  10.0.12.71      80/tcp,443/tcp              Unit is ready
  nrpe/40                           active    idle             10.0.12.71      icmp,5666/tcp               ready
  openstack-dashboard-hacluster/0*  active    idle             10.0.12.71                                  Unit is ready and clustered
openstack-dashboard/2*              active    idle   2/lxd/12  10.0.12.72      80/tcp,443/tcp              Unit is ready
  nrpe/41                           active    idle             10.0.12.72      icmp,5666/tcp               ready
  openstack-dashboard-hacluster/1   active    idle             10.0.12.72                                  Unit is ready and clustered
rabbitmq-server/0*                  active    idle   0/lxd/2   10.0.12.41      5672/tcp                    Unit is ready and clustered
  nrpe/44                           active    idle             10.0.12.41      icmp,5666/tcp               ready
rabbitmq-server/1                   active    idle   1/lxd/2   10.0.12.40      5672/tcp                    Unit is ready and clustered
  nrpe/43                           active    idle             10.0.12.40      icmp,5666/tcp               ready
rabbitmq-server/2                   active    idle   2/lxd/2   10.0.12.42      5672/tcp                    Unit is ready and clustered
  nrpe/45                           active    idle             10.0.12.42      icmp,5666/tcp               ready

Machine   State    DNS         Inst id               Series  AZ       Message
0         started  10.0.12.23  os-controller1        bionic  default  Deployed
0/lxd/0   started  10.0.12.35  juju-a5ab4c-0-lxd-0   bionic  default  Container started
0/lxd/2   started  10.0.12.41  juju-a5ab4c-0-lxd-2   bionic  default  Container started
0/lxd/3   started  10.0.12.43  juju-a5ab4c-0-lxd-3   bionic  default  Container started
0/lxd/4   started  10.0.12.47  juju-a5ab4c-0-lxd-4   bionic  default  Container started
0/lxd/5   started  10.0.12.49  juju-a5ab4c-0-lxd-5   bionic  default  Container started
0/lxd/6   started  10.0.12.53  juju-a5ab4c-0-lxd-6   bionic  default  Container started
0/lxd/7   started  10.0.12.55  juju-a5ab4c-0-lxd-7   bionic  default  Container started
0/lxd/8   started  10.0.12.59  juju-a5ab4c-0-lxd-8   bionic  default  Container started
0/lxd/9   started  10.0.12.62  juju-a5ab4c-0-lxd-9   bionic  default  Container started
0/lxd/10  started  10.0.12.65  juju-a5ab4c-0-lxd-10  bionic  default  Container started
0/lxd/11  started  10.0.12.68  juju-a5ab4c-0-lxd-11  bionic  default  Container started
0/lxd/12  started  10.0.12.70  juju-a5ab4c-0-lxd-12  bionic  default  Container started
1         started  10.0.12.22  os-controller2        bionic  default  Deployed
1/lxd/0   started  10.0.12.36  juju-a5ab4c-1-lxd-0   bionic  default  Container started
1/lxd/2   started  10.0.12.40  juju-a5ab4c-1-lxd-2   bionic  default  Container started
1/lxd/3   started  10.0.12.44  juju-a5ab4c-1-lxd-3   bionic  default  Container started
1/lxd/4   started  10.0.12.46  juju-a5ab4c-1-lxd-4   bionic  default  Container started
1/lxd/5   started  10.0.12.50  juju-a5ab4c-1-lxd-5   bionic  default  Container started
1/lxd/6   started  10.0.12.54  juju-a5ab4c-1-lxd-6   bionic  default  Container started
1/lxd/7   started  10.0.12.57  juju-a5ab4c-1-lxd-7   bionic  default  Container started
1/lxd/8   started  10.0.12.58  juju-a5ab4c-1-lxd-8   bionic  default  Container started
1/lxd/9   started  10.0.12.61  juju-a5ab4c-1-lxd-9   bionic  default  Container started
1/lxd/10  started  10.0.12.66  juju-a5ab4c-1-lxd-10  bionic  default  Container started
1/lxd/11  started  10.0.12.67  juju-a5ab4c-1-lxd-11  bionic  default  Container started
1/lxd/12  started  10.0.12.71  juju-a5ab4c-1-lxd-12  bionic  default  Container started
2         started  10.0.12.26  os-controller3        bionic  default  Deployed
2/lxd/0   started  10.0.12.37  juju-a5ab4c-2-lxd-0   bionic  default  Container started
2/lxd/2   started  10.0.12.42  juju-a5ab4c-2-lxd-2   bionic  default  Container started
2/lxd/3   started  10.0.12.45  juju-a5ab4c-2-lxd-3   bionic  default  Container started
2/lxd/4   started  10.0.12.48  juju-a5ab4c-2-lxd-4   bionic  default  Container started
2/lxd/5   started  10.0.12.51  juju-a5ab4c-2-lxd-5   bionic  default  Container started
2/lxd/6   started  10.0.12.52  juju-a5ab4c-2-lxd-6   bionic  default  Container started
2/lxd/7   started  10.0.12.56  juju-a5ab4c-2-lxd-7   bionic  default  Container started
2/lxd/8   started  10.0.12.60  juju-a5ab4c-2-lxd-8   bionic  default  Container started
2/lxd/9   started  10.0.12.63  juju-a5ab4c-2-lxd-9   bionic  default  Container started
2/lxd/10  started  10.0.12.64  juju-a5ab4c-2-lxd-10  bionic  default  Container started
2/lxd/11  started  10.0.12.69  juju-a5ab4c-2-lxd-11  bionic  default  Container started
2/lxd/12  started  10.0.12.72  juju-a5ab4c-2-lxd-12  bionic  default  Container started
3         started  10.0.12.34  os-compute1           bionic  default  Deployed
4         started  10.0.12.29  os-compute2           bionic  default  Deployed
5         started  10.0.12.30  os-compute3           bionic  default  Deployed
6         started  10.0.12.25  os-ceph1              bionic  default  Deployed
7         started  10.0.12.31  os-ceph2              bionic  default  Deployed
8         started  10.0.12.32  os-ceph3              bionic  default  Deployed
9         started  10.0.12.24  os-swift1             bionic  default  Deployed
10        started  10.0.12.33  os-swift2             bionic  default  Deployed
11        started  10.0.12.28  os-swift3             bionic  default  Deployed
12        started  10.0.12.27  os-nagios1            bionic  default  Deployed
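With this many units, eyeballing the status table for problems is error-prone. Below is a minimal sketch of a machine check that flags any unit whose workload is not active or whose agent is not idle. A two-line stand-in is used here instead of calling `juju status` directly; the awk field positions assume the plain-text column layout shown above.

```shell
# Stand-in for `juju status` unit lines (field 2 = workload, field 3 = agent).
status_output='ceph-mon/0*  active  idle  0/lxd/0  10.0.12.35  Unit is ready
cinder/0*    blocked idle  0/lxd/7  10.0.12.55  Missing relation'

# Print every unit that is not "active idle" -- no output means all healthy.
echo "$status_output" | awk '$2 != "active" || $3 != "idle" {print $1, $2}'
```

Against a live model you would pipe `juju status | tail -n +N` (units section only) into the same awk filter.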

Export Juju Model to Bundle

Now, before preparing the OpenStack environment, let's export the current Juju model as a bundle.

The exported bundle includes the deployment configuration shown above.

juju export-bundle --filename bundle/openstack-bundle-stein-bionic-full-ha-6630a1bf-d134-417b-9906-f693e17861cc.yaml
ubuntu@os-client:~/work/openstack/deploy$ juju export-bundle --filename bundle/openstack-bundle-stein-bionic-full-ha.yaml
Bundle successfully exported to bundle/openstack-bundle-stein-bionic-full-ha.yaml

In principle, you can then deploy this bundle as follows:

juju deploy bundle/openstack-bundle-stein-bionic-full-ha.yaml

Unfortunately, deploying High-Availability Charmed OpenStack from a bundle file did not work 100% for me.

The next day, when I checked juju status, some deployment tasks were stuck in an error state….

OpenStack Docs: Install OpenStack from a bundle

However, I did verify that the simpler openstack-base bundle deploys almost entirely successfully.

Also, the exported bundle is SCM-friendly: by keeping exported bundles under version control you can see exactly how the configuration and relations have changed, and use the diffs to debug the model.

OpenStack is a complicated system, so this feature is a real benefit.
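The SCM-friendly workflow can be sketched as follows: export the bundle before and after a model change, then diff the two snapshots. Stand-in YAML fragments are used here in place of real `juju export-bundle` output, so the example is runnable as-is.

```shell
workdir=$(mktemp -d)

# Stand-in for: juju export-bundle --filename bundle-before.yaml
printf 'applications:\n  mysql:\n    num_units: 3\n' > "$workdir/bundle-before.yaml"

# ... change the model (e.g. `juju add-unit mysql -n 2`), then export again ...
printf 'applications:\n  mysql:\n    num_units: 5\n' > "$workdir/bundle-after.yaml"

# The unified diff pinpoints exactly what changed in the model.
diff -u "$workdir/bundle-before.yaml" "$workdir/bundle-after.yaml" || true
```

With git instead of plain `diff`, each export becomes a commit and `git diff` gives the same view across the model's whole history.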

(venv) ubuntu@os-client:~/work/openstack/deploy$ juju actions mysql
Action         Description
backup         Full database backup
bootstrap-pxc  Bootstrap this unit of Percona.
*WARNING* This action will bootstrap this unit of Percona cluster. This
should only occur in a recovery scenario. Make sure this unit has the
highest sequence number in grastate.dat or data loss may occur.
See upstream Percona documentation for context
https://www.percona.com/blog/2014/09/01/galera-replication-how-to-recover-a-pxc-cluster/
complete-cluster-series-upgrade  Perform final operations post series upgrade. Inform all nodes in the
cluster the upgrade is complete cluster wide. Update configuration with all
peers for wsrep replication.
This action should be performed on the current leader. Note the leader may
have changed during the series upgrade process.
mysqldump  MySQL dump of databases. Action will return mysqldump-file location of the
requested backup in the results. If the databases parameter is unset all
databases will be dumped. If the databases parameter is set only the
databases specified will be dumped. Note it may be necessary to use the
set-pxc-strict-mode action first to set either PERMISSIVE or MASTER to
allow locking of tables for mysqldump to complete successfully.
See https://www.percona.com/doc/percona-xtradb-cluster/LATEST/features/pxc-strict-mode.html
for more detail.
notify-bootstrapped  No description
pause                Pause the MySQL service.
resume               Resume the MySQL service.
set-pxc-strict-mode  Set PXC strict mode.
(venv) ubuntu@os-client:~/work/openstack/deploy$ juju run-action mysql/leader backup --wait
unit-mysql-0:
  UnitId: mysql/0
  id: "6"
  results:
    Stderr: |
      200523 22:17:47 innobackupex: Starting the backup operation

      IMPORTANT: Please check that the backup run completes successfully.
                 At the end of a successful backup run innobackupex
                 prints "completed OK!".

      200523 22:17:47  version_check Connecting to MySQL server with DSN 'dbi:mysql:;mysql_read_default_group=xtrabackup;port=3306;mysql_socket=/var/run/mysqld/mysqld.sock' as 'sstuser'  (using password: YES).
      200523 22:17:47  version_check Connected to MySQL server
      200523 22:17:47  version_check Executing a version check against the server...
      200523 22:17:47  version_check Done.
      200523 22:17:47 Connecting to MySQL server host: localhost, user: sstuser, password: set, port: 3306, socket: /var/run/mysqld/mysqld.sock
      Using server version 5.7.20-18-18
      innobackupex version 2.4.9 based on MySQL server 5.7.13 Linux (x86_64) (revision id: a467167cdd4)
      xtrabackup: uses posix_fadvise().
      xtrabackup: cd to /var/lib/percona-xtradb-cluster
      xtrabackup: open files limit requested 0, set to 1048576
      xtrabackup: using the following InnoDB configuration:
      xtrabackup:   innodb_data_home_dir = .
      xtrabackup:   innodb_data_file_path = ibdata1:12M:autoextend
      xtrabackup:   innodb_log_group_home_dir = ./
      xtrabackup:   innodb_log_files_in_group = 2
      xtrabackup:   innodb_log_file_size = 50331648
      InnoDB: Number of pools: 1
      200523 22:17:47 >> log scanned up to (88948363)
      xtrabackup: Generating a list of tablespaces
      InnoDB: Allocated tablespace ID 33 for keystone/policy, old maximum was 0
      200523 22:17:48 [01] Copying ./ibdata1 to /opt/backups/mysql/2020-05-23_22-17-47/ibdata1
      200523 22:17:48 [01]        ...done
      200523 22:17:48 [01] Copying ./keystone/policy.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/policy.ibd
      200523 22:17:48 [01]        ...done
      200523 22:17:48 [01] Copying ./keystone/nonlocal_user.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/nonlocal_user.ibd
      200523 22:17:48 [01]        ...done
      200523 22:17:48 [01] Copying ./keystone/role.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/role.ibd
      200523 22:17:48 [01]        ...done
      200523 22:17:48 >> log scanned up to (88950174)
      200523 22:17:48 [01] Copying ./keystone/user_option.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/user_option.ibd
      200523 22:17:48 [01]        ...done
      200523 22:17:48 [01] Copying ./keystone/access_token.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/access_token.ibd
      200523 22:17:48 [01]        ...done
      200523 22:17:48 [01] Copying ./keystone/service.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/service.ibd
      200523 22:17:48 [01]        ...done
      200523 22:17:48 [01] Copying ./keystone/assignment.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/assignment.ibd
      200523 22:17:48 [01]        ...done
      200523 22:17:48 [01] Copying ./keystone/password.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/password.ibd
      200523 22:17:48 [01]        ...done
      200523 22:17:48 [01] Copying ./keystone/user.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/user.ibd
      200523 22:17:48 [01]        ...done
      200523 22:17:48 [01] Copying ./keystone/migrate_version.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/migrate_version.ibd
      200523 22:17:48 [01]        ...done
      200523 22:17:48 [01] Copying ./keystone/project_tag.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/project_tag.ibd
      200523 22:17:48 [01]        ...done
      200523 22:17:48 [01] Copying ./keystone/credential.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/credential.ibd
      200523 22:17:48 [01]        ...done
      200523 22:17:48 [01] Copying ./keystone/sensitive_config.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/sensitive_config.ibd
      200523 22:17:48 [01]        ...done
      200523 22:17:48 [01] Copying ./keystone/policy_association.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/policy_association.ibd
      200523 22:17:48 [01]        ...done
      200523 22:17:48 [01] Copying ./keystone/config_register.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/config_register.ibd
      200523 22:17:48 [01]        ...done
      200523 22:17:48 [01] Copying ./keystone/trust.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/trust.ibd
      200523 22:17:48 [01]        ...done
      200523 22:17:48 [01] Copying ./keystone/registered_limit.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/registered_limit.ibd
      200523 22:17:48 [01]        ...done
      200523 22:17:48 [01] Copying ./keystone/idp_remote_ids.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/idp_remote_ids.ibd
      200523 22:17:48 [01]        ...done
      200523 22:17:48 [01] Copying ./keystone/trust_role.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/trust_role.ibd
      200523 22:17:48 [01]        ...done
      200523 22:17:48 [01] Copying ./keystone/identity_provider.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/identity_provider.ibd
      200523 22:17:48 [01]        ...done
      200523 22:17:48 [01] Copying ./keystone/local_user.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/local_user.ibd
      200523 22:17:48 [01]        ...done
      200523 22:17:48 [01] Copying ./keystone/federated_user.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/federated_user.ibd
      200523 22:17:48 [01]        ...done
      200523 22:17:48 [01] Copying ./keystone/project_endpoint.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/project_endpoint.ibd
      200523 22:17:48 [01]        ...done
      200523 22:17:48 [01] Copying ./keystone/request_token.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/request_token.ibd
      200523 22:17:48 [01]        ...done
      200523 22:17:48 [01] Copying ./keystone/project_endpoint_group.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/project_endpoint_group.ibd
      200523 22:17:48 [01]        ...done
      200523 22:17:48 [01] Copying ./keystone/system_assignment.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/system_assignment.ibd
      200523 22:17:48 [01]        ...done
      200523 22:17:48 [01] Copying ./keystone/service_provider.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/service_provider.ibd
      200523 22:17:48 [01]        ...done
      200523 22:17:48 [01] Copying ./keystone/application_credential.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/application_credential.ibd
      200523 22:17:48 [01]        ...done
      200523 22:17:48 [01] Copying ./keystone/project.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/project.ibd
      200523 22:17:48 [01]        ...done
      200523 22:17:48 [01] Copying ./keystone/implied_role.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/implied_role.ibd
      200523 22:17:48 [01]        ...done
      200523 22:17:48 [01] Copying ./keystone/token.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/token.ibd
      200523 22:17:48 [01]        ...done
      200523 22:17:48 [01] Copying ./keystone/federation_protocol.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/federation_protocol.ibd
      200523 22:17:48 [01]        ...done
      200523 22:17:49 [01] Copying ./keystone/group.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/group.ibd
      200523 22:17:49 [01]        ...done
      200523 22:17:49 [01] Copying ./keystone/access_rule.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/access_rule.ibd
      200523 22:17:49 [01]        ...done
      200523 22:17:49 [01] Copying ./keystone/endpoint.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/endpoint.ibd
      200523 22:17:49 [01]        ...done
      200523 22:17:49 [01] Copying ./keystone/application_credential_role.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/application_credential_role.ibd
      200523 22:17:49 [01]        ...done
      200523 22:17:49 [01] Copying ./keystone/user_group_membership.ibd to /opt/backups/mysql/2020-05-23_22-17-47/keystone/user_group_membership.ibd
      200523 22:17:49 [01]        ...done

(snip)

      200523 22:17:54 [01] Copying ./nova/shadow_floating_ips.ibd to /opt/backups/mysql/2020-05-23_22-17-47/nova/shadow_floating_ips.ibd
      200523 22:17:54 [01]        ...done
      200523 22:17:54 [01] Copying ./nova/shadow_instance_type_extra_specs.ibd to /opt/backups/mysql/2020-05-23_22-17-47/nova/shadow_instance_type_extra_specs.ibd
      200523 22:17:54 [01]        ...done
      200523 22:17:54 [01] Copying ./nova/resource_provider_aggregates.ibd to /opt/backups/mysql/2020-05-23_22-17-47/nova/resource_provider_aggregates.ibd
      200523 22:17:54 [01]        ...done
      200523 22:17:54 [01] Copying ./sys/sys_config.ibd to /opt/backups/mysql/2020-05-23_22-17-47/sys/sys_config.ibd
      200523 22:17:54 [01]        ...done
      200523 22:17:54 Starting prep copy of non-InnoDB tables and files
      200523 22:17:54 Starting rsync as: rsync -t . --files-from=/tmp/xtrabackup_rsyncfiles_pass1 /opt/backups/mysql/2020-05-23_22-17-47/
      200523 22:17:54 rsync finished successfully.
      200523 22:17:54 Finished a prep copy of non-InnoDB tables and files
      200523 22:17:54 Executing LOCK TABLES FOR BACKUP...
      200523 22:17:54 Starting to backup non-InnoDB tables and files
      200523 22:17:54 [00] Writing /opt/backups/mysql/2020-05-23_22-17-47/.gnupg/db.opt
      200523 22:17:54 [00]        ...done
      200523 22:17:54 Starting rsync as: rsync -t . --files-from=/tmp/xtrabackup_rsyncfiles_pass2 /opt/backups/mysql/2020-05-23_22-17-47/
      200523 22:17:54 rsync finished successfully.
      200523 22:17:54 Finished backing up non-InnoDB tables and files
      200523 22:17:54 Executing LOCK BINLOG FOR BACKUP...
      200523 22:17:54 Executing FLUSH NO_WRITE_TO_BINLOG ENGINE LOGS...
      xtrabackup: The latest check point (for incremental): '87788405'
      xtrabackup: Stopping log copying thread.
      .200523 22:17:54 >> log scanned up to (88962905)

      200523 22:17:54 Executing UNLOCK BINLOG
      200523 22:17:54 Executing UNLOCK TABLES
      200523 22:17:54 All tables unlocked
      200523 22:17:54 Backup created in directory '/opt/backups/mysql/2020-05-23_22-17-47/'
      200523 22:17:54 [00] Writing /opt/backups/mysql/2020-05-23_22-17-47/backup-my.cnf
      200523 22:17:54 [00]        ...done
      200523 22:17:54 [00] Writing /opt/backups/mysql/2020-05-23_22-17-47/xtrabackup_info
      200523 22:17:54 [00]        ...done
      xtrabackup: Transaction log of lsn (87788405) to (88962905) was copied.
      200523 22:17:55 completed OK!
    outcome: Success
    time-completed: "2020-05-23 22:17:55"
  status: completed
  timing:
    completed: 2020-05-23 22:17:56 +0000 UTC
    enqueued: 2020-05-23 22:17:43 +0000 UTC
    started: 2020-05-23 22:17:47 +0000 UTC

(venv) ubuntu@os-client:~/work/openstack/deploy$ juju run-action mysql/leader mysqldump --wait
unit-mysql-0:
  UnitId: mysql/0
  id: "7"
  message: mysqldump failed
  results:
    Stderr: |
      mysqldump: Got error: 1105: Percona-XtraDB-Cluster prohibits use of LOCK TABLE/FLUSH TABLE <table> WITH READ LOCK with pxc_strict_mode = ENFORCING when using LOCK TABLES
    output: None
    traceback: |
      Traceback (most recent call last):
        File "/var/lib/juju/agents/unit-mysql-0/charm/actions/mysqldump", line 213, in mysqldump
          filename = percona_utils.mysqldump(basedir, databases=databases)
        File "/var/lib/juju/agents/unit-mysql-0/charm/hooks/percona_utils.py", line 1693, in mysqldump
          subprocess.check_call(bucmd)
        File "/usr/lib/python3.6/subprocess.py", line 311, in check_call
          raise CalledProcessError(retcode, cmd)
      subprocess.CalledProcessError: Command '['/usr/bin/mysqldump', '-u', 'root', '--default-character-set=utf8', '--triggers', '--routines', '--events', '--ignore-table=mysql.event', '--result-file', '/var/backups/mysql/mysqldump-all-databases-202005232219', '--all-databases']' returned non-zero exit status 2.
  status: failed
  timing:
    completed: 2020-05-23 22:19:16 +0000 UTC
    enqueued: 2020-05-23 22:19:12 +0000 UTC
    started: 2020-05-23 22:19:14 +0000 UTC

Take a snapshot “Before Prepare OpenStack Platform”.

Prepare OpenStack Platform

OK. Go ahead and prepare your own cloud.

At this point there is no provider network configuration, no project, and no instance flavor…

(venv) ubuntu@os-client:~/work/openstack/workspace$ cat ~/.config/openstack/clouds.yaml
clouds:
  admin: &admin
    auth:
      auth_url: http://10.0.14.131:35357/
      project_name: admin
      username: admin
      password: password
    region_name: RegionOne
    project_domain_name: admin_domain
    user_domain_name: admin_domain
  lasthope: &lasthope
    auth:
      auth_url: http://10.0.14.131:35357/
      project_name: LastHopeProject
      username: LastHopeUser
      password: its3m1r6cl9
    region_name: RegionOne
    project_domain_name: LastHopeDomain
    user_domain_name: LastHopeDomain
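The openstack client selects one of these entries via `--os-cloud`. A minimal sketch of that lookup, using only the standard library and a plain dict in place of the parsed YAML (the real client delegates this to openstacksdk / os-client-config):

```python
# Sketch of how --os-cloud picks an entry out of clouds.yaml.
# A plain dict stands in for the parsed YAML; passwords omitted.

clouds = {
    "admin": {
        "auth": {
            "auth_url": "http://10.0.14.131:35357/",
            "project_name": "admin",
            "username": "admin",
        },
        "region_name": "RegionOne",
    },
    "lasthope": {
        "auth": {
            "auth_url": "http://10.0.14.131:35357/",
            "project_name": "LastHopeProject",
            "username": "LastHopeUser",
        },
        "region_name": "RegionOne",
    },
}

def auth_for(cloud_name: str) -> dict:
    """Return the auth section for --os-cloud <cloud_name>."""
    try:
        return clouds[cloud_name]["auth"]
    except KeyError:
        raise SystemExit(f"Cloud {cloud_name!r} not found in clouds.yaml")

print(auth_for("admin")["project_name"])   # admin
```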

Cloud images

openstack --os-cloud admin image create "cirros-0.5.1-x86_64" \
  --file cloud-images/cirros-0.5.1-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --property architecture=x86_64 \
  --public

openstack --os-cloud admin image create "ubuntu-server-20.04-x86_64-focal" \
  --file cloud-images/ubuntu/focal/focal-server-cloudimg-amd64.img \
  --disk-format qcow2 --container-format bare \
  --property architecture=x86_64 \
  --property hw_disk_bus=virtio \
  --property hw_vif_model=virtio \
  --public

openstack --os-cloud admin image create "ubuntu-server-18.04-x86_64-bionic" \
  --file cloud-images/ubuntu/bionic/bionic-server-cloudimg-amd64.img \
  --disk-format qcow2 --container-format bare \
  --property architecture=x86_64 \
  --property hw_disk_bus=virtio \
  --property hw_vif_model=virtio \
  --public

openstack --os-cloud admin image create "ubuntu-server-16.04-x86_64-xenial" \
  --file cloud-images/ubuntu/xenial/xenial-server-cloudimg-amd64-disk1.img \
  --disk-format qcow2 --container-format bare \
  --property architecture=x86_64 \
  --property hw_disk_bus=virtio \
  --property hw_vif_model=virtio \
  --public
openstack --os-cloud admin image list
(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack image list
+--------------------------------------+-----------------------------------+--------+
| ID                                   | Name                              | Status |
+--------------------------------------+-----------------------------------+--------+
| 6d3d3d26-226a-4386-a310-d7fb53d1473a | cirros-0.5.1-x86_64               | active |
| 03276bca-3bb3-4412-bd65-945270c4cdde | ubuntu-server-16.04-x86_64-xenial | active |
| 32d7479a-ad0e-49b4-8977-03583011b444 | ubuntu-server-18.04-x86_64-bionic | active |
| 75b410a1-9f09-4050-8444-fcea1bcea0a3 | ubuntu-server-20.04-x86_64-focal  | active |
+--------------------------------------+-----------------------------------+--------+
openstack --os-cloud admin image show ubuntu-server-20.04-x86_64-focal
+------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field            | Value                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
+------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| checksum         | a0a570ad022bbd1cd1711acbc171d0b3                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
| container_format | bare                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              |
| created_at       | 2020-05-30T06:50:27Z                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              |
| disk_format      | qcow2                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
| file             | /v2/images/75b410a1-9f09-4050-8444-fcea1bcea0a3/file                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              |
| id               | 75b410a1-9f09-4050-8444-fcea1bcea0a3                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              |
| min_disk         | 0                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
| min_ram          | 0                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
| name             | ubuntu-server-20.04-x86_64-focal                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
| owner            | acb1d3f2eb8b42d3bbd08e0ab6677724                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
| properties       | architecture='x86_64', direct_url='rbd://d76e3208-9cc4-11ea-adf1-00163ed2207f/glance/75b410a1-9f09-4050-8444-fcea1bcea0a3/snap', hw_disk_bus='virtio', hw_vif_model='virtio', locations='[{'url': 'rbd://d76e3208-9cc4-11ea-adf1-00163ed2207f/glance/75b410a1-9f09-4050-8444-fcea1bcea0a3/snap', 'metadata': {}}]', os_hash_algo='sha512', os_hash_value='317be5956466e6dd7083bc56e8d7cc32d53233c286b8cde0c7edb025bc0e43aac39d28ad5ef54d95896568387e445e214558286777b08b17d358aec0756a7ba8', os_hidden='False', owner_specified.openstack.md5='a0a570ad022bbd1cd1711acbc171d0b3', owner_specified.openstack.object='images/ubuntu-server-20.04-x86_64-focal', owner_specified.openstack.sha256='f8fea6a80ced88eabe9d41eb61d4d9970348c025fe303583183ab81347ceea82' |
| protected        | False                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
| schema           | /v2/schemas/image                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
| size             | 533135360                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |
| status           | active                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            |
| tags             |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
| updated_at       | 2020-05-30T06:50:45Z                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              |
| visibility       | public                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            |
+------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
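The four image uploads above share the same boilerplate; only the name, file path, and extra properties change. A hypothetical helper that generates those `openstack image create` command lines from a small table (names and paths taken from the commands above):

```python
# Build `openstack image create` argument lists from a table,
# mirroring the upload commands above. Hypothetical helper for
# illustration; run the generated commands with subprocess if desired.

IMAGES = [
    ("cirros-0.5.1-x86_64",
     "cloud-images/cirros-0.5.1-x86_64-disk.img", {}),
    ("ubuntu-server-20.04-x86_64-focal",
     "cloud-images/ubuntu/focal/focal-server-cloudimg-amd64.img",
     {"hw_disk_bus": "virtio", "hw_vif_model": "virtio"}),
]

def image_create_cmd(name, path, extra_props):
    cmd = ["openstack", "--os-cloud", "admin", "image", "create", name,
           "--file", path,
           "--disk-format", "qcow2", "--container-format", "bare",
           "--property", "architecture=x86_64"]
    for key, value in extra_props.items():
        cmd += ["--property", f"{key}={value}"]
    cmd.append("--public")
    return cmd

for name, path, props in IMAGES:
    print(" ".join(image_create_cmd(name, path, props)))
```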

Create instance flavor

An instance flavor defines the resource allocation specification for an instance (a.k.a. an instance type on AWS).

openstack --os-cloud admin flavor create --ram 64 --vcpus 1 --disk 1 m1.pico
openstack --os-cloud admin flavor create --ram 1024 --vcpus 1 --disk 1 m1.nano
openstack --os-cloud admin flavor create --ram 2048 --vcpus 2 --disk 20 m1.micro
openstack --os-cloud admin flavor create --ram 4096 --vcpus 4 --disk 20 m1.medium
openstack --os-cloud admin flavor create --ram 8192 --vcpus 8 --disk 20 c1.large

This lab is effectively limited to m1.pico and m1.nano; larger instances may fail to run for lack of lab power… Creating more powerful instances tends to hit issues with nested virtualization or a heavily overcommitted CPU… If you have a more powerful machine, or can do a multi-node bare-metal deployment, you can try launching higher-spec instances.

+----------------------------+--------------------------------------+
| Field                      | Value                                |
+----------------------------+--------------------------------------+
| OS-FLV-DISABLED:disabled   | False                                |
| OS-FLV-EXT-DATA:ephemeral  | 0                                    |
| disk                       | 1                                    |
| id                         | deda6dee-e721-4feb-a71f-236494cf34de |
| name                       | m1.pico                              |
| os-flavor-access:is_public | True                                 |
| properties                 |                                      |
| ram                        | 64                                   |
| rxtx_factor                | 1.0                                  |
| swap                       |                                      |
| vcpus                      | 1                                    |
+----------------------------+--------------------------------------+
openstack --os-cloud admin flavor list
+--------------------------------------+-----------+------+------+-----------+-------+-----------+
| ID                                   | Name      |  RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+-----------+------+------+-----------+-------+-----------+
| 8b8d2354-5b5d-47c8-a24d-ce6219f676d8 | m1.nano   | 1024 |    1 |         0 |     1 | True      |
| b15d66f4-c2f9-4a35-a88d-b9a8f5c6ba5a | m1.medium | 4096 |   20 |         0 |     4 | True      |
| d7ed9288-9050-4464-8f4f-34f38c46cab8 | c1.large  | 8192 |   20 |         0 |     8 | True      |
| deda6dee-e721-4feb-a71f-236494cf34de | m1.pico   |   64 |    1 |         0 |     1 | True      |
| f0df4f4e-3d10-4f9c-86af-13ec047d590c | m1.micro  | 2048 |   20 |         0 |     2 | True      |
+--------------------------------------+-----------+------+------+-----------+-------+-----------+
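The warning about lab power can be made concrete. A rough capacity estimate for one nova-compute node with the flavors just defined, assuming a small nested host and the Nova default allocation ratios (16x CPU, 1.5x RAM) — the host sizes here are assumptions for this lab, not values from the deployment:

```python
# Rough instance-capacity estimate for one nova-compute node, using
# the flavors defined above. Host sizes are assumptions for this lab;
# 16.0 / 1.5 are the Nova default cpu/ram allocation ratios.

FLAVORS = {                    # name: (ram_mb, vcpus)
    "m1.pico":   (64, 1),
    "m1.nano":   (1024, 1),
    "m1.micro":  (2048, 2),
    "m1.medium": (4096, 4),
    "c1.large":  (8192, 8),
}

def max_instances(flavor, host_ram_mb, host_vcpus,
                  cpu_ratio=16.0, ram_ratio=1.5):
    ram, vcpus = FLAVORS[flavor]
    by_ram = int(host_ram_mb * ram_ratio) // ram
    by_cpu = int(host_vcpus * cpu_ratio) // vcpus
    return min(by_ram, by_cpu)      # the tighter limit wins

# e.g. an 8 GB / 2-vCPU nested compute VM:
for name in ("m1.pico", "m1.nano", "m1.medium"):
    print(name, max_instances(name, host_ram_mb=8192, host_vcpus=2))
```

On such a host, RAM is the bottleneck for everything above m1.pico, which is why the small flavors are the practical choice here.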

Define an external network

openstack --os-cloud admin network list
openstack --os-cloud admin network create Pub_Net \
  --share \
  --external \
  --default \
  --provider-network-type flat \
  --provider-physical-network physnet1
+---------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                     | Value                                                                                                                                                                        |
+---------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up            | UP                                                                                                                                                                           |
| availability_zone_hints   |                                                                                                                                                                              |
| availability_zones        |                                                                                                                                                                              |
| created_at                | 2020-05-30T07:54:45Z                                                                                                                                                         |
| description               |                                                                                                                                                                              |
| dns_domain                | None                                                                                                                                                                         |
| id                        | 68fb14e8-4716-4709-a80a-37536ac8d66d                                                                                                                                         |
| ipv4_address_scope        | None                                                                                                                                                                         |
| ipv6_address_scope        | None                                                                                                                                                                         |
| is_default                | True                                                                                                                                                                         |
| is_vlan_transparent       | None                                                                                                                                                                         |
| location                  | cloud='default', project.domain_id=, project.domain_name='admin_domain', project.id='acb1d3f2eb8b42d3bbd08e0ab6677724', project.name='admin', region_name='RegionOne', zone= |
| mtu                       | 1500                                                                                                                                                                         |
| name                      | Pub_Net                                                                                                                                                                      |
| port_security_enabled     | False                                                                                                                                                                        |
| project_id                | acb1d3f2eb8b42d3bbd08e0ab6677724                                                                                                                                             |
| provider:network_type     | flat                                                                                                                                                                         |
| provider:physical_network | physnet1                                                                                                                                                                     |
| provider:segmentation_id  | None                                                                                                                                                                         |
| qos_policy_id             | None                                                                                                                                                                         |
| revision_number           | 1                                                                                                                                                                            |
| router:external           | External                                                                                                                                                                     |
| segments                  | None                                                                                                                                                                         |
| shared                    | True                                                                                                                                                                         |
| status                    | ACTIVE                                                                                                                                                                       |
| subnets                   |                                                                                                                                                                              |
| tags                      |                                                                                                                                                                              |
| updated_at                | 2020-05-30T07:54:45Z                                                                                                                                                         |
+---------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
openstack --os-cloud admin network list
+--------------------------------------+---------+---------+
| ID                                   | Name    | Subnets |
+--------------------------------------+---------+---------+
| 68fb14e8-4716-4709-a80a-37536ac8d66d | Pub_Net |         |
+--------------------------------------+---------+---------+
openstack --os-cloud admin subnet create Pub_Subnet \
  --allocation-pool start=198.51.100.10,end=198.51.100.191 \
  --subnet-range 198.51.100.0/24 \
  --no-dhcp \
  --gateway 198.51.100.1 \
  --dns-nameserver 8.8.8.8 \
  --dns-nameserver 8.8.4.4 \
  --dns-nameserver 1.1.1.1 \
  --network Pub_Net
+----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                | Value                                                                                                                                                                        |
+----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| allocation_pools     | 198.51.100.10-198.51.100.191                                                                                                                                                 |
| cidr                 | 198.51.100.0/24                                                                                                                                                              |
| created_at           | 2020-05-30T08:19:49Z                                                                                                                                                         |
| description          |                                                                                                                                                                              |
| dns_nameservers      | 1.1.1.1, 8.8.4.4, 8.8.8.8                                                                                                                                                    |
| dns_publish_fixed_ip | None                                                                                                                                                                         |
| enable_dhcp          | False                                                                                                                                                                        |
| gateway_ip           | 198.51.100.1                                                                                                                                                                 |
| host_routes          |                                                                                                                                                                              |
| id                   | b00b9b8f-0efe-42c2-9ee8-6d17593ee189                                                                                                                                         |
| ip_version           | 4                                                                                                                                                                            |
| ipv6_address_mode    | None                                                                                                                                                                         |
| ipv6_ra_mode         | None                                                                                                                                                                         |
| location             | cloud='default', project.domain_id=, project.domain_name='admin_domain', project.id='acb1d3f2eb8b42d3bbd08e0ab6677724', project.name='admin', region_name='RegionOne', zone= |
| name                 | Pub_Subnet                                                                                                                                                                   |
| network_id           | 68fb14e8-4716-4709-a80a-37536ac8d66d                                                                                                                                         |
| prefix_length        | None                                                                                                                                                                         |
| project_id           | acb1d3f2eb8b42d3bbd08e0ab6677724                                                                                                                                             |
| revision_number      | 0                                                                                                                                                                            |
| segment_id           | None                                                                                                                                                                         |
| service_types        |                                                                                                                                                                              |
| subnetpool_id        | None                                                                                                                                                                         |
| tags                 |                                                                                                                                                                              |
| updated_at           | 2020-05-30T08:19:49Z                                                                                                                                                         |
+----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Working with domains and projects

  • Create Domain
openstack --os-cloud admin domain create LastHopeDomain
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description |                                  |
| enabled     | True                             |
| id          | b7302fd074b94f9aa668261eed9d3bae |
| name        | LastHopeDomain                   |
| tags        | []                               |
+-------------+----------------------------------+
  • Create Project
openstack --os-cloud admin project create --domain LastHopeDomain \
  --description 'Last Hope Project' LastHopeProject
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Last Hope Project                |
| domain_id   | b7302fd074b94f9aa668261eed9d3bae |
| enabled     | True                             |
| id          | e4d11ab5285f4a5eae38aa0b72d9dc34 |
| is_domain   | False                            |
| name        | LastHopeProject                  |
| parent_id   | b7302fd074b94f9aa668261eed9d3bae |
| tags        | []                               |
+-------------+----------------------------------+
  • Create User
openstack --os-cloud admin user create --domain LastHopeDomain \
  --project-domain LastHopeDomain --project LastHopeProject \
  --password-prompt LastHopeUser

If you only need a throwaway PoC user, you can set the password non-interactively instead:

openstack --os-cloud admin user create --domain LastHopeDomain \
  --project-domain LastHopeDomain --project LastHopeProject \
  --password its3m1r6cl9 LastHopeUser
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| default_project_id  | e4d11ab5285f4a5eae38aa0b72d9dc34 |
| domain_id           | b7302fd074b94f9aa668261eed9d3bae |
| enabled             | True                             |
| id                  | 57ff551a119b4844af104fad89ec5007 |
| name                | LastHopeUser                     |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
  • list roles
openstack --os-cloud admin role list
+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 02bd1ebc74d24ef7878cc2bdb7ebf188 | service |
| 2f6216f7bbca4c9fa2ca28ab44c8a6cd | member  |
| 4e5f778ba8a849fab71ba1aa02438ac1 | Admin   |
| 656bbea3986a4bf4bc7cfa07ee3ac372 | reader  |
+----------------------------------+---------+
  • Assign a role to the user on the project
openstack --os-cloud admin role add \
  --project-domain LastHopeDomain \
  --project LastHopeProject \
  --user-domain LastHopeDomain \
  --user LastHopeUser \
  member

This command produces no output.
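Since the role add command is silent, you can confirm the assignment explicitly before testing the credentials. This is an optional check (it assumes a working admin credential and a live Keystone, as set up above):

```shell
# List role assignments for the new user, resolved to names.
openstack --os-cloud admin role assignment list \
  --user-domain LastHopeDomain \
  --user LastHopeUser \
  --project-domain LastHopeDomain \
  --project LastHopeProject \
  --names
```

The output should show a single row assigning the member role to LastHopeUser on LastHopeProject.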

  • Verify credential
openstack --os-cloud lasthope token issue
openstack --os-cloud lasthope project list
(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack --os-cloud lasthope token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2020-05-30T10:36:12+0000                                                                                                                                                                |
| id         | gAAAAABe0ikMQZrWSbIpXL0TZX36DoCOyAVmgthzvSiNOQcRsnU1ndL8mOtwHlDq5ToMJdb0gzPE6pdJmG17o8OAWfkloNCHSTOCZ4AzmLyUCB8fPtCMWWycBMa8SPwVH6jNqwPW8DcPABi4alvYTYYUhaqs4X_1OHZ78KSiSQJsGwvyfKSg-v4 |
| project_id | e4d11ab5285f4a5eae38aa0b72d9dc34                                                                                                                                                        |
| user_id    | 57ff551a119b4844af104fad89ec5007                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack --os-cloud lasthope project list
+----------------------------------+-----------------+
| ID                               | Name            |
+----------------------------------+-----------------+
| e4d11ab5285f4a5eae38aa0b72d9dc34 | LastHopeProject |
+----------------------------------+-----------------+

Create a virtual network

From here on we switch to project-specific work, always using --os-cloud lasthope as the credential.

You can also use an environment-variable-based openrc file instead.
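For reference, an openrc-style equivalent of the lasthope cloud entry would look roughly like this. The auth URL below is a placeholder — substitute your own Keystone public endpoint; the other values come from the domain, project, and user created above:

```shell
# openrc sketch for the LastHopeUser credential (source this file before
# running openstack commands without --os-cloud).
export OS_AUTH_URL=https://keystone.example.com:5000/v3   # placeholder endpoint
export OS_IDENTITY_API_VERSION=3
export OS_USER_DOMAIN_NAME=LastHopeDomain
export OS_PROJECT_DOMAIN_NAME=LastHopeDomain
export OS_PROJECT_NAME=LastHopeProject
export OS_USERNAME=LastHopeUser
export OS_PASSWORD=its3m1r6cl9
export OS_REGION_NAME=RegionOne
```

After sourcing this file, `openstack token issue` (with no --os-cloud flag) should behave the same as `openstack --os-cloud lasthope token issue`.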

openstack --os-cloud lasthope network create LastHopeNetwork
+---------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                     | Value                                                                                                                                                                                     |
+---------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up            | UP                                                                                                                                                                                        |
| availability_zone_hints   |                                                                                                                                                                                           |
| availability_zones        |                                                                                                                                                                                           |
| created_at                | 2020-05-30T09:43:53Z                                                                                                                                                                      |
| description               |                                                                                                                                                                                           |
| dns_domain                | None                                                                                                                                                                                      |
| id                        | 42b9c40d-d549-45fa-a133-a22a5498254f                                                                                                                                                      |
| ipv4_address_scope        | None                                                                                                                                                                                      |
| ipv6_address_scope        | None                                                                                                                                                                                      |
| is_default                | False                                                                                                                                                                                     |
| is_vlan_transparent       | None                                                                                                                                                                                      |
| location                  | cloud='lasthope', project.domain_id=, project.domain_name='LastHopeDomain', project.id='e4d11ab5285f4a5eae38aa0b72d9dc34', project.name='LastHopeProject', region_name='RegionOne', zone= |
| mtu                       | 1450                                                                                                                                                                                      |
| name                      | LastHopeNetwork                                                                                                                                                                           |
| port_security_enabled     | False                                                                                                                                                                                     |
| project_id                | e4d11ab5285f4a5eae38aa0b72d9dc34                                                                                                                                                          |
| provider:network_type     | None                                                                                                                                                                                      |
| provider:physical_network | None                                                                                                                                                                                      |
| provider:segmentation_id  | None                                                                                                                                                                                      |
| qos_policy_id             | None                                                                                                                                                                                      |
| revision_number           | 1                                                                                                                                                                                         |
| router:external           | Internal                                                                                                                                                                                  |
| segments                  | None                                                                                                                                                                                      |
| shared                    | False                                                                                                                                                                                     |
| status                    | ACTIVE                                                                                                                                                                                    |
| subnets                   |                                                                                                                                                                                           |
| tags                      |                                                                                                                                                                                           |
| updated_at                | 2020-05-30T09:43:53Z                                                                                                                                                                      |
+---------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
openstack --os-cloud lasthope subnet create LastHopeSubnet \
  --allocation-pool start=172.16.0.10,end=172.16.15.254 \
  --subnet-range 172.16.0.0/20 \
  --gateway 172.16.0.1 \
  --dns-nameserver 172.16.0.3 \
  --dns-nameserver 8.8.8.8 \
  --dns-nameserver 8.8.4.4 \
  --dns-nameserver 1.1.1.1 \
  --network LastHopeNetwork
+----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                | Value                                                                                                                                                                                     |
+----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| allocation_pools     | 172.16.0.10-172.16.15.254                                                                                                                                                                 |
| cidr                 | 172.16.0.0/20                                                                                                                                                                             |
| created_at           | 2020-05-30T09:51:39Z                                                                                                                                                                      |
| description          |                                                                                                                                                                                           |
| dns_nameservers      | 1.1.1.1, 172.16.0.3, 8.8.4.4, 8.8.8.8                                                                                                                                                     |
| dns_publish_fixed_ip | None                                                                                                                                                                                      |
| enable_dhcp          | True                                                                                                                                                                                      |
| gateway_ip           | 172.16.0.1                                                                                                                                                                                |
| host_routes          |                                                                                                                                                                                           |
| id                   | 7e64e941-cd38-497c-8ac7-65a324d07749                                                                                                                                                      |
| ip_version           | 4                                                                                                                                                                                         |
| ipv6_address_mode    | None                                                                                                                                                                                      |
| ipv6_ra_mode         | None                                                                                                                                                                                      |
| location             | cloud='lasthope', project.domain_id=, project.domain_name='LastHopeDomain', project.id='e4d11ab5285f4a5eae38aa0b72d9dc34', project.name='LastHopeProject', region_name='RegionOne', zone= |
| name                 | LastHopeSubnet                                                                                                                                                                            |
| network_id           | 42b9c40d-d549-45fa-a133-a22a5498254f                                                                                                                                                      |
| prefix_length        | None                                                                                                                                                                                      |
| project_id           | e4d11ab5285f4a5eae38aa0b72d9dc34                                                                                                                                                          |
| revision_number      | 0                                                                                                                                                                                         |
| segment_id           | None                                                                                                                                                                                      |
| service_types        |                                                                                                                                                                                           |
| subnetpool_id        | None                                                                                                                                                                                      |
| tags                 |                                                                                                                                                                                           |
| updated_at           | 2020-05-30T09:51:39Z                                                                                                                                                                      |
+----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
  • Create Router
openstack --os-cloud lasthope router create LastHopeRouter
+-------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                   | Value                                                                                                                                                                                     |
+-------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up          | UP                                                                                                                                                                                        |
| availability_zone_hints |                                                                                                                                                                                           |
| availability_zones      |                                                                                                                                                                                           |
| created_at              | 2020-05-30T11:25:30Z                                                                                                                                                                      |
| description             |                                                                                                                                                                                           |
| external_gateway_info   | null                                                                                                                                                                                      |
| flavor_id               | None                                                                                                                                                                                      |
| id                      | be762041-8aa9-4bce-8655-0792681dd5d5                                                                                                                                                      |
| location                | cloud='lasthope', project.domain_id=, project.domain_name='LastHopeDomain', project.id='e4d11ab5285f4a5eae38aa0b72d9dc34', project.name='LastHopeProject', region_name='RegionOne', zone= |
| name                    | LastHopeRouter                                                                                                                                                                            |
| project_id              | e4d11ab5285f4a5eae38aa0b72d9dc34                                                                                                                                                          |
| revision_number         | 2                                                                                                                                                                                         |
| routes                  |                                                                                                                                                                                           |
| status                  | ACTIVE                                                                                                                                                                                    |
| tags                    |                                                                                                                                                                                           |
| updated_at              | 2020-05-30T11:25:30Z                                                                                                                                                                      |
+-------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
  • Connect Router to External Provider Network
openstack --os-cloud lasthope router set LastHopeRouter --external-gateway Pub_Net

This command produces no output.

  • Add Subnet to Router
openstack --os-cloud lasthope router add subnet LastHopeRouter LastHopeSubnet

This command produces no output.

OK. Now verify the details of the created router.

openstack --os-cloud lasthope router show LastHopeRouter
+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                   | Value                                                                                                                                                                                       |
+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up          | UP                                                                                                                                                                                          |
| availability_zone_hints |                                                                                                                                                                                             |
| availability_zones      | nova                                                                                                                                                                                        |
| created_at              | 2020-05-30T11:25:30Z                                                                                                                                                                        |
| description             |                                                                                                                                                                                             |
| external_gateway_info   | {"network_id": "68fb14e8-4716-4709-a80a-37536ac8d66d", "external_fixed_ips": [{"subnet_id": "b00b9b8f-0efe-42c2-9ee8-6d17593ee189", "ip_address": "198.51.100.166"}], "enable_snat": true}  |
| flavor_id               | None                                                                                                                                                                                        |
| id                      | be762041-8aa9-4bce-8655-0792681dd5d5                                                                                                                                                        |
| interfaces_info         | [{"port_id": "84398475-58c2-4045-9e10-a5cd1738229a", "ip_address": "172.16.0.1", "subnet_id": "7e64e941-cd38-497c-8ac7-65a324d07749"}]                                                      |
| location                | cloud='lasthope', project.domain_id=, project.domain_name='LastHopeDomain', project.id='e4d11ab5285f4a5eae38aa0b72d9dc34', project.name='LastHopeProject', region_name='RegionOne', zone=   |
| name                    | LastHopeRouter                                                                                                                                                                              |
| project_id              | e4d11ab5285f4a5eae38aa0b72d9dc34                                                                                                                                                            |
| revision_number         | 9                                                                                                                                                                                           |
| routes                  |                                                                                                                                                                                             |
| status                  | ACTIVE                                                                                                                                                                                      |
| tags                    |                                                                                                                                                                                             |
| updated_at              | 2020-05-30T11:26:44Z                                                                                                                                                                        |
+-------------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

And since you are also the OpenStack cloud platform administrator, you can check more detailed information.

openstack --os-cloud admin router show LastHopeRouter
+-------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                   | Value                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
+-------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up          | UP                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
| availability_zone_hints |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |
| availability_zones      | nova                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
| created_at              | 2020-05-30T11:25:30Z                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
| description             |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |
| distributed             | False                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
| external_gateway_info   | {"network_id": "68fb14e8-4716-4709-a80a-37536ac8d66d", "external_fixed_ips": [{"subnet_id": "b00b9b8f-0efe-42c2-9ee8-6d17593ee189", "ip_address": "198.51.100.166"}], "enable_snat": true}                                                                                                                                                                                                                                                                                                                                                                              |
| flavor_id               | None                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
| ha                      | True                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
| id                      | be762041-8aa9-4bce-8655-0792681dd5d5                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
| interfaces_info         | [{"port_id": "04c0aa18-ad67-4217-929f-546d07602d72", "ip_address": "169.254.194.237", "subnet_id": "10129050-76c7-4d04-82fe-b7b9f94cd3d1"}, {"port_id": "84398475-58c2-4045-9e10-a5cd1738229a", "ip_address": "172.16.0.1", "subnet_id": "7e64e941-cd38-497c-8ac7-65a324d07749"}, {"port_id": "b47e1d6d-6850-490a-9f79-256df8dd85c1", "ip_address": "169.254.192.184", "subnet_id": "10129050-76c7-4d04-82fe-b7b9f94cd3d1"}, {"port_id": "cc49271e-af80-4487-b24c-96090a2b4811", "ip_address": "169.254.195.213", "subnet_id": "10129050-76c7-4d04-82fe-b7b9f94cd3d1"}] |
| location                | cloud='admin', project.domain_id=, project.domain_name=, project.id='e4d11ab5285f4a5eae38aa0b72d9dc34', project.name=, region_name='RegionOne', zone=                                                                                                                                                                                                                                                                                                                                                                                                                   |
| name                    | LastHopeRouter                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
| project_id              | e4d11ab5285f4a5eae38aa0b72d9dc34                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |
| revision_number         | 9                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |
| routes                  |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |
| status                  | ACTIVE                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
| tags                    |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |
| updated_at              | 2020-05-30T11:26:44Z                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
+-------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

The notable output is ha = True. Three router instances are created, and the provided gateway address is a VIP made redundant by VRRP. This behavior is configured by the neutron-api charm options enable-l3ha: true and max-l3-agents-per-router: 3.
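As a sketch of how to double-check this, the charm options can be read back with juju, and the L3 agents hosting the router can be listed from the admin cloud (assuming the application name neutron-api and router name used throughout this guide):

```shell
# Confirm the charm options driving HA router scheduling
juju config neutron-api enable-l3ha
juju config neutron-api max-l3-agents-per-router

# List the L3 agents hosting the router; with HA, one agent should be
# active and the others standby (VRRP keepalived instances)
openstack --os-cloud admin network agent list --router LastHopeRouter --long
```

With --long and --router together, the agent list includes an HA State column showing which agent currently holds the active VRRP role.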

Prepare and register SSH Key

ssh-keygen -t rsa -b 4096 -N '' -C "openstack-lasthope-key" -f ~/.ssh/lasthope
mv ~/.ssh/lasthope{,.pem}
(venv) ubuntu@os-client:~/work/openstack/workspace$ ssh-keygen -t rsa -b 4096 -N '' -C "openstack-lasthope-key" -f ~/.ssh/lasthope
Generating public/private rsa key pair.
Your identification has been saved in /home/ubuntu/.ssh/lasthope.
Your public key has been saved in /home/ubuntu/.ssh/lasthope.pub.
The key fingerprint is:
SHA256:VK9DfFXt2U9OsO1Jo+TvDKH5Z/c7Rb8ubuemA/Qc4PY openstack-lasthope-key
The key's randomart image is:
+---[RSA 4096]----+
|          .   ..o|
|         o o .. .|
|        . + +  =o|
|       . . * o.+B|
|        S = *.+*=|
|           ooE..*|
|           o... o|
|            .+oO.|
|            o+%**|
+----[SHA256]-----+
(venv) ubuntu@os-client:~/work/openstack/workspace$ mv ~/.ssh/lasthope{,.pem}
(venv) ubuntu@os-client:~/work/openstack/workspace$ ls -l ~/.ssh/
total 28
-rw-r--r-- 1 ubuntu ubuntu  471 May  9 11:33 authorized_keys
-rw-rw-r-- 1 ubuntu ubuntu   51 May 14 07:56 config
-rw------- 1 ubuntu ubuntu  288 May  9 12:27 id_ecdsa
-rw-r--r-- 1 ubuntu ubuntu  222 May  9 12:27 id_ecdsa.pub
-rw-r--r-- 1 ubuntu ubuntu  888 May 23 14:54 known_hosts
-rw------- 1 ubuntu ubuntu 3243 May 30 20:59 lasthope.pem
-rw-r--r-- 1 ubuntu ubuntu  748 May 30 20:59 lasthope.pub
openstack --os-cloud lasthope keypair create --public-key ~/.ssh/lasthope.pub lasthope-key
+-------------+-------------------------------------------------+
| Field       | Value                                           |
+-------------+-------------------------------------------------+
| fingerprint | db:7d:db:ea:94:39:7f:96:27:de:1b:c4:0b:c5:db:16 |
| name        | lasthope-key                                    |
| user_id     | 57ff551a119b4844af104fad89ec5007                |
+-------------+-------------------------------------------------+
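The fingerprint reported by Nova is the MD5 digest of the public key. As a quick sanity check, the same digest can be computed locally with ssh-keygen and compared against the value in the keypair create output:

```shell
# Compute the MD5 fingerprint of the local public key; it should match
# the fingerprint shown by "openstack keypair create" above
ssh-keygen -l -E md5 -f ~/.ssh/lasthope.pub
```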

Configure Security Groups

openstack --os-cloud lasthope security group create --description 'Allow Essential' Allow_Essential
openstack --os-cloud lasthope security group rule create --proto icmp Allow_Essential

openstack --os-cloud lasthope security group create --description 'Allow SSH' Allow_SSH
openstack --os-cloud lasthope security group rule create --proto tcp --dst-port 22 Allow_SSH

openstack --os-cloud lasthope security group create --description 'Allow WEB' Allow_WEB
openstack --os-cloud lasthope security group rule create --proto tcp --dst-port 80 Allow_WEB
openstack --os-cloud lasthope security group rule create --proto tcp --dst-port 443 Allow_WEB
(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack --os-cloud lasthope security group create --description 'Allow SSH' Allow_SSH
+-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field           | Value                                                                                                                                                                                     |
+-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at      | 2020-05-30T12:04:04Z                                                                                                                                                                      |
| description     | Allow SSH                                                                                                                                                                                 |
| id              | edb5e07e-21eb-41b0-b1a8-e832dd5d2de8                                                                                                                                                      |
| location        | cloud='lasthope', project.domain_id=, project.domain_name='LastHopeDomain', project.id='e4d11ab5285f4a5eae38aa0b72d9dc34', project.name='LastHopeProject', region_name='RegionOne', zone= |
| name            | Allow_SSH                                                                                                                                                                                 |
| project_id      | e4d11ab5285f4a5eae38aa0b72d9dc34                                                                                                                                                          |
| revision_number | 1                                                                                                                                                                                         |
| rules           | created_at='2020-05-30T12:04:04Z', direction='egress', ethertype='IPv6', id='4c6a6334-2b91-4090-9bda-f92a789bf427', updated_at='2020-05-30T12:04:04Z'                                     |
|                 | created_at='2020-05-30T12:04:04Z', direction='egress', ethertype='IPv4', id='c6cb6e2e-bbcd-4598-8df5-6d789920f467', updated_at='2020-05-30T12:04:04Z'                                     |
| stateful        | None                                                                                                                                                                                      |
| tags            | []                                                                                                                                                                                        |
| updated_at      | 2020-05-30T12:04:04Z                                                                                                                                                                      |
+-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack --os-cloud lasthope security group rule create --proto tcp --dst-port 22 Allow_SSH
+-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field             | Value                                                                                                                                                                                     |
+-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at        | 2020-05-30T12:04:11Z                                                                                                                                                                      |
| description       |                                                                                                                                                                                           |
| direction         | ingress                                                                                                                                                                                   |
| ether_type        | IPv4                                                                                                                                                                                      |
| id                | 1b4bb193-6de0-42d3-8396-85e3c377b124                                                                                                                                                      |
| location          | cloud='lasthope', project.domain_id=, project.domain_name='LastHopeDomain', project.id='e4d11ab5285f4a5eae38aa0b72d9dc34', project.name='LastHopeProject', region_name='RegionOne', zone= |
| name              | None                                                                                                                                                                                      |
| port_range_max    | 22                                                                                                                                                                                        |
| port_range_min    | 22                                                                                                                                                                                        |
| project_id        | e4d11ab5285f4a5eae38aa0b72d9dc34                                                                                                                                                          |
| protocol          | tcp                                                                                                                                                                                       |
| remote_group_id   | None                                                                                                                                                                                      |
| remote_ip_prefix  | 0.0.0.0/0                                                                                                                                                                                 |
| revision_number   | 0                                                                                                                                                                                         |
| security_group_id | edb5e07e-21eb-41b0-b1a8-e832dd5d2de8                                                                                                                                                      |
| tags              | []                                                                                                                                                                                        |
| updated_at        | 2020-05-30T12:04:11Z                                                                                                                                                                      |
+-------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
(snip)
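After creating the groups, it is worth reviewing the result. A minimal check, using the group names created above:

```shell
# List all security groups in the project
openstack --os-cloud lasthope security group list

# Show the rules attached to one group, e.g. Allow_SSH
openstack --os-cloud lasthope security group rule list Allow_SSH
```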

Launch an Instance with OSC

  • Allocate a floating IP.
openstack --os-cloud lasthope floating ip create Pub_Net
+---------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field               | Value                                                                                                                                                                                                               |
+---------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at          | 2020-05-30T11:40:37Z                                                                                                                                                                                                |
| description         |                                                                                                                                                                                                                     |
| dns_domain          | None                                                                                                                                                                                                                |
| dns_name            | None                                                                                                                                                                                                                |
| fixed_ip_address    | None                                                                                                                                                                                                                |
| floating_ip_address | 198.51.100.165                                                                                                                                                                                                      |
| floating_network_id | 68fb14e8-4716-4709-a80a-37536ac8d66d                                                                                                                                                                                |
| id                  | 1f4b3227-057a-464f-ba66-8a71eeadfd38                                                                                                                                                                                |
| location            | Munch({'cloud': 'lasthope', 'region_name': 'RegionOne', 'zone': None, 'project': Munch({'id': 'e4d11ab5285f4a5eae38aa0b72d9dc34', 'name': 'LastHopeProject', 'domain_id': None, 'domain_name': 'LastHopeDomain'})}) |
| name                | 198.51.100.165                                                                                                                                                                                                      |
| port_details        | None                                                                                                                                                                                                                |
| port_id             | None                                                                                                                                                                                                                |
| project_id          | e4d11ab5285f4a5eae38aa0b72d9dc34                                                                                                                                                                                    |
| qos_policy_id       | None                                                                                                                                                                                                                |
| revision_number     | 0                                                                                                                                                                                                                   |
| router_id           | None                                                                                                                                                                                                                |
| status              | DOWN                                                                                                                                                                                                                |
| subnet_id           | None                                                                                                                                                                                                                |
| tags                | []                                                                                                                                                                                                                  |
| updated_at          | 2020-05-30T11:40:37Z                                                                                                                                                                                                |
+---------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
  • Review the available images, flavors, and defined networks.
(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack --os-cloud lasthope image list
+--------------------------------------+-----------------------------------+--------+
| ID                                   | Name                              | Status |
+--------------------------------------+-----------------------------------+--------+
| 6d3d3d26-226a-4386-a310-d7fb53d1473a | cirros-0.5.1-x86_64               | active |
| 03276bca-3bb3-4412-bd65-945270c4cdde | ubuntu-server-16.04-x86_64-xenial | active |
| 32d7479a-ad0e-49b4-8977-03583011b444 | ubuntu-server-18.04-x86_64-bionic | active |
| 75b410a1-9f09-4050-8444-fcea1bcea0a3 | ubuntu-server-20.04-x86_64-focal  | active |
+--------------------------------------+-----------------------------------+--------+
(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack --os-cloud lasthope flavor list
+--------------------------------------+-----------+------+------+-----------+-------+-----------+
| ID                                   | Name      |  RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+-----------+------+------+-----------+-------+-----------+
| 8b8d2354-5b5d-47c8-a24d-ce6219f676d8 | m1.nano   | 1024 |    1 |         0 |     1 | True      |
| b15d66f4-c2f9-4a35-a88d-b9a8f5c6ba5a | m1.medium | 4096 |   20 |         0 |     4 | True      |
| d7ed9288-9050-4464-8f4f-34f38c46cab8 | c1.large  | 8192 |   20 |         0 |     8 | True      |
| deda6dee-e721-4feb-a71f-236494cf34de | m1.pico   |   64 |    1 |         0 |     1 | True      |
| f0df4f4e-3d10-4f9c-86af-13ec047d590c | m1.micro  | 2048 |   20 |         0 |     2 | True      |
+--------------------------------------+-----------+------+------+-----------+-------+-----------+
(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack --os-cloud lasthope network list
+--------------------------------------+-----------------+--------------------------------------+
| ID                                   | Name            | Subnets                              |
+--------------------------------------+-----------------+--------------------------------------+
| 42b9c40d-d549-45fa-a133-a22a5498254f | LastHopeNetwork | 7e64e941-cd38-497c-8ac7-65a324d07749 |
| 68fb14e8-4716-4709-a80a-37536ac8d66d | Pub_Net         | b00b9b8f-0efe-42c2-9ee8-6d17593ee189 |
+--------------------------------------+-----------------+--------------------------------------+
  • Launch an instance.
openstack --os-cloud lasthope server create lasthope-web \
  --availability-zone nova \
  --image 'ubuntu-server-20.04-x86_64-focal' \
  --boot-from-volume 100 \
  --flavor m1.nano \
  --key-name lasthope-key \
  --security-group Allow_Essential \
  --security-group Allow_SSH \
  --security-group Allow_WEB \
  --network LastHopeNetwork
+-----------------------------+------------------------------------------------+
| Field                       | Value                                          |
+-----------------------------+------------------------------------------------+
| OS-DCF:diskConfig           | MANUAL                                         |
| OS-EXT-AZ:availability_zone | nova                                           |
| OS-EXT-STS:power_state      | NOSTATE                                        |
| OS-EXT-STS:task_state       | scheduling                                     |
| OS-EXT-STS:vm_state         | building                                       |
| OS-SRV-USG:launched_at      | None                                           |
| OS-SRV-USG:terminated_at    | None                                           |
| accessIPv4                  |                                                |
| accessIPv6                  |                                                |
| addresses                   |                                                |
| adminPass                   | qna6JQTCj4jq                                   |
| config_drive                |                                                |
| created                     | 2020-05-30T22:40:18Z                           |
| flavor                      | m1.nano (8b8d2354-5b5d-47c8-a24d-ce6219f676d8) |
| hostId                      |                                                |
| id                          | ac93038c-2299-41a4-b062-69c6167c538e           |
| image                       |                                                |
| key_name                    | lasthope-key                                   |
| name                        | lasthope-web                                   |
| progress                    | 0                                              |
| project_id                  | e4d11ab5285f4a5eae38aa0b72d9dc34               |
| properties                  |                                                |
| security_groups             | name='4c7627ed-366d-45ee-ba29-a2e45bc1445c'    |
|                             | name='edb5e07e-21eb-41b0-b1a8-e832dd5d2de8'    |
|                             | name='885135e9-7215-45df-a0e3-e8ff20d2883f'    |
| status                      | BUILD                                          |
| updated                     | 2020-05-30T22:40:18Z                           |
| user_id                     | 57ff551a119b4844af104fad89ec5007               |
| volumes_attached            |                                                |
+-----------------------------+------------------------------------------------+
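The instance builds asynchronously (note `status | BUILD` above), so instead of re-running `server show` by hand, the wait can be scripted. A minimal sketch; `wait_for_active` is a hypothetical helper name, not part of the openstack CLI:

```shell
# Poll a server until it leaves the BUILD state.
# wait_for_active is a hypothetical helper written for this page.
wait_for_active() {
  server="$1"
  while :; do
    status=$(openstack --os-cloud lasthope server show "$server" -f value -c status)
    case "$status" in
      ACTIVE) echo "$server is ACTIVE"; return 0 ;;
      ERROR)  echo "$server entered ERROR" >&2; return 1 ;;
      *)      sleep 5 ;;
    esac
  done
}
```

Usage: `wait_for_active lasthope-web`.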
openstack --os-cloud lasthope server show lasthope-web
+-----------------------------+----------------------------------------------------------+
| Field                       | Value                                                    |
+-----------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig           | MANUAL                                                   |
| OS-EXT-AZ:availability_zone | nova                                                     |
| OS-EXT-STS:power_state      | NOSTATE                                                  |
| OS-EXT-STS:task_state       | block_device_mapping                                     |
| OS-EXT-STS:vm_state         | building                                                 |
| OS-SRV-USG:launched_at      | None                                                     |
| OS-SRV-USG:terminated_at    | None                                                     |
| accessIPv4                  |                                                          |
| accessIPv6                  |                                                          |
| addresses                   |                                                          |
| config_drive                |                                                          |
| created                     | 2020-05-30T22:40:17Z                                     |
| flavor                      | m1.nano (8b8d2354-5b5d-47c8-a24d-ce6219f676d8)           |
| hostId                      | b4f8fb86fc7d5355f23a0b40e9da982aaf20798f2c08ac0b97ebdd8f |
| id                          | ac93038c-2299-41a4-b062-69c6167c538e                     |
| image                       |                                                          |
| key_name                    | lasthope-key                                             |
| name                        | lasthope-web                                             |
| progress                    | 0                                                        |
| project_id                  | e4d11ab5285f4a5eae38aa0b72d9dc34                         |
| properties                  |                                                          |
| security_groups             | name='Allow_Essential'                                   |
|                             | name='Allow_WEB'                                         |
|                             | name='Allow_SSH'                                         |
| status                      | BUILD                                                    |
| updated                     | 2020-05-30T22:40:21Z                                     |
| user_id                     | 57ff551a119b4844af104fad89ec5007                         |
| volumes_attached            |                                                          |
+-----------------------------+----------------------------------------------------------+
openstack --os-cloud lasthope volume list
+--------------------------------------+------+-------------+------+-------------+
| ID                                   | Name | Status      | Size | Attached to |
+--------------------------------------+------+-------------+------+-------------+
| a2a66254-d2d9-477d-bc49-974903eed372 |      | downloading |  100 |             |
+--------------------------------------+------+-------------+------+-------------+
Re-running the volume list after the image finishes downloading shows the volume in-use and attached:
+--------------------------------------+------+--------+------+---------------------------------------+
| ID                                   | Name | Status | Size | Attached to                           |
+--------------------------------------+------+--------+------+---------------------------------------+
| a2a66254-d2d9-477d-bc49-974903eed372 |      | in-use |  100 | Attached to lasthope-web on /dev/vda  |
+--------------------------------------+------+--------+------+---------------------------------------+
openstack --os-cloud lasthope server show lasthope-web
+-----------------------------+----------------------------------------------------------+
| Field                       | Value                                                    |
+-----------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig           | MANUAL                                                   |
| OS-EXT-AZ:availability_zone | nova                                                     |
| OS-EXT-STS:power_state      | Running                                                  |
| OS-EXT-STS:task_state       | None                                                     |
| OS-EXT-STS:vm_state         | active                                                   |
| OS-SRV-USG:launched_at      | 2020-05-30T22:41:13.000000                               |
| OS-SRV-USG:terminated_at    | None                                                     |
| accessIPv4                  |                                                          |
| accessIPv6                  |                                                          |
| addresses                   | LastHopeNetwork=172.16.1.251                             |
| config_drive                |                                                          |
| created                     | 2020-05-30T22:40:17Z                                     |
| flavor                      | m1.nano (8b8d2354-5b5d-47c8-a24d-ce6219f676d8)           |
| hostId                      | b4f8fb86fc7d5355f23a0b40e9da982aaf20798f2c08ac0b97ebdd8f |
| id                          | ac93038c-2299-41a4-b062-69c6167c538e                     |
| image                       |                                                          |
| key_name                    | lasthope-key                                             |
| name                        | lasthope-web                                             |
| progress                    | 0                                                        |
| project_id                  | e4d11ab5285f4a5eae38aa0b72d9dc34                         |
| properties                  |                                                          |
| security_groups             | name='Allow_Essential'                                   |
|                             | name='Allow_WEB'                                         |
|                             | name='Allow_SSH'                                         |
| status                      | ACTIVE                                                   |
| updated                     | 2020-05-30T22:41:14Z                                     |
| user_id                     | 57ff551a119b4844af104fad89ec5007                         |
| volumes_attached            | id='a2a66254-d2d9-477d-bc49-974903eed372'                |
+-----------------------------+----------------------------------------------------------+
openstack --os-cloud lasthope server add floating ip lasthope-web 198.51.100.165

This command produces no output on success.

openstack --os-cloud lasthope server show lasthope-web
+-----------------------------+----------------------------------------------------------+
| Field                       | Value                                                    |
+-----------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig           | MANUAL                                                   |
| OS-EXT-AZ:availability_zone | nova                                                     |
| OS-EXT-STS:power_state      | Running                                                  |
| OS-EXT-STS:task_state       | None                                                     |
| OS-EXT-STS:vm_state         | active                                                   |
| OS-SRV-USG:launched_at      | 2020-05-30T22:41:13.000000                               |
| OS-SRV-USG:terminated_at    | None                                                     |
| accessIPv4                  |                                                          |
| accessIPv6                  |                                                          |
| addresses                   | LastHopeNetwork=172.16.1.251, 198.51.100.165             |
| config_drive                |                                                          |
| created                     | 2020-05-30T22:40:17Z                                     |
| flavor                      | m1.nano (8b8d2354-5b5d-47c8-a24d-ce6219f676d8)           |
| hostId                      | b4f8fb86fc7d5355f23a0b40e9da982aaf20798f2c08ac0b97ebdd8f |
| id                          | ac93038c-2299-41a4-b062-69c6167c538e                     |
| image                       |                                                          |
| key_name                    | lasthope-key                                             |
| name                        | lasthope-web                                             |
| progress                    | 0                                                        |
| project_id                  | e4d11ab5285f4a5eae38aa0b72d9dc34                         |
| properties                  |                                                          |
| security_groups             | name='Allow_Essential'                                   |
|                             | name='Allow_WEB'                                         |
|                             | name='Allow_SSH'                                         |
| status                      | ACTIVE                                                   |
| updated                     | 2020-05-30T22:41:14Z                                     |
| user_id                     | 57ff551a119b4844af104fad89ec5007                         |
| volumes_attached            | id='a2a66254-d2d9-477d-bc49-974903eed372'                |
+-----------------------------+----------------------------------------------------------+
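Since `server add floating ip` prints nothing on success, a scripted check of the `addresses` field gives a pass/fail signal. A sketch; `has_floating_ip` is a hypothetical helper, not part of the openstack CLI:

```shell
# Return success if the floating IP appears in the server's addresses field.
# has_floating_ip is a hypothetical helper written for this page.
has_floating_ip() {
  server="$1"; fip="$2"
  openstack --os-cloud lasthope server show "$server" -f value -c addresses \
    | grep -q "$fip"
}
```

Usage: `has_floating_ip lasthope-web 198.51.100.165 && echo associated`.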
  • Verify basic IPv4 reachability.
(venv) ubuntu@os-client:~/work/openstack/workspace$ ping -c4 198.51.100.165
PING 198.51.100.165 (198.51.100.165) 56(84) bytes of data.
64 bytes from 198.51.100.165: icmp_seq=1 ttl=50 time=19.2 ms
64 bytes from 198.51.100.165: icmp_seq=2 ttl=50 time=17.6 ms
64 bytes from 198.51.100.165: icmp_seq=3 ttl=50 time=17.2 ms
64 bytes from 198.51.100.165: icmp_seq=4 ttl=50 time=16.9 ms

--- 198.51.100.165 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3006ms
rtt min/avg/max/mdev = 16.959/17.797/19.295/0.910 ms
  • SSH into the instance.
alias skipssh='ssh -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no"'
skipssh ubuntu@198.51.100.165 -i ~/.ssh/lasthope.pem
(venv) ubuntu@os-client:~/work/openstack/workspace$ skipssh ubuntu@198.51.100.165 -i ~/.ssh/lasthope.pem
Warning: Permanently added '198.51.100.165' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 20.04 LTS (GNU/Linux 5.4.0-29-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Sat May 30 22:44:04 UTC 2020

  System load:  0.45              Processes:             98
  Usage of /:   1.2% of 96.75GB   Users logged in:       0
  Memory usage: 18%               IPv4 address for ens3: 172.16.1.251
  Swap usage:   0%

0 updates can be installed immediately.
0 of these updates are security updates.


The list of available updates is more than a week old.
To check for new updates run: sudo apt update


The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@lasthope-web:~$
  • Verify internet reachability from the instance.
ping -c4 www.google.com
ubuntu@lasthope-web:~$ ping -c4 www.google.com
PING www.google.com (172.217.175.36) 56(84) bytes of data.
64 bytes from nrt20s19-in-f4.1e100.net (172.217.175.36): icmp_seq=1 ttl=51 time=14.9 ms
64 bytes from nrt20s19-in-f4.1e100.net (172.217.175.36): icmp_seq=2 ttl=51 time=10.1 ms
64 bytes from nrt20s19-in-f4.1e100.net (172.217.175.36): icmp_seq=3 ttl=51 time=10.7 ms
64 bytes from nrt20s19-in-f4.1e100.net (172.217.175.36): icmp_seq=4 ttl=51 time=11.4 ms

--- www.google.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3006ms
rtt min/avg/max/mdev = 10.106/11.764/14.914/1.871 ms
  • Bring all packages up to date.
sudo apt update && sudo apt upgrade --auto-remove && sudo systemctl reboot
ubuntu@lasthope-web:~$ sudo apt update && sudo apt upgrade --auto-remove && sudo systemctl reboot
Get:1 http://security.ubuntu.com/ubuntu focal-security InRelease [107 kB]
Get:2 http://security.ubuntu.com/ubuntu focal-security/main amd64 Packages [90.1 kB]
Get:3 http://security.ubuntu.com/ubuntu focal-security/main Translation-en [34.1 kB]
Get:4 http://security.ubuntu.com/ubuntu focal-security/main amd64 c-n-f Metadata [2,548 B]
Get:5 http://security.ubuntu.com/ubuntu focal-security/restricted amd64 Packages [8,924 B]
Get:6 http://security.ubuntu.com/ubuntu focal-security/restricted Translation-en [2,516 B]
Get:7 http://security.ubuntu.com/ubuntu focal-security/universe amd64 Packages [31.7 kB]
Get:8 http://security.ubuntu.com/ubuntu focal-security/universe Translation-en [15.0 kB]
Get:9 http://security.ubuntu.com/ubuntu focal-security/universe amd64 c-n-f Metadata [1,220 B]
Get:10 http://security.ubuntu.com/ubuntu focal-security/multiverse amd64 Packages [1,172 B]
Get:11 http://security.ubuntu.com/ubuntu focal-security/multiverse Translation-en [540 B]
Get:12 http://security.ubuntu.com/ubuntu focal-security/multiverse amd64 c-n-f Metadata [116 B]
Hit:13 http://nova.clouds.archive.ubuntu.com/ubuntu focal InRelease
Get:14 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates InRelease [107 kB]
Get:15 http://nova.clouds.archive.ubuntu.com/ubuntu focal-backports InRelease [98.3 kB]
Get:16 http://nova.clouds.archive.ubuntu.com/ubuntu focal/universe amd64 Packages [8,628 kB]
Get:17 http://nova.clouds.archive.ubuntu.com/ubuntu focal/universe Translation-en [5,124 kB]
Get:18 http://nova.clouds.archive.ubuntu.com/ubuntu focal/universe amd64 c-n-f Metadata [265 kB]
Get:19 http://nova.clouds.archive.ubuntu.com/ubuntu focal/multiverse amd64 Packages [144 kB]
Get:20 http://nova.clouds.archive.ubuntu.com/ubuntu focal/multiverse Translation-en [104 kB]
Get:21 http://nova.clouds.archive.ubuntu.com/ubuntu focal/multiverse amd64 c-n-f Metadata [9,136 B]
Get:22 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages [148 kB]
Get:23 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates/main Translation-en [55.4 kB]
Get:24 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates/main amd64 c-n-f Metadata [3,736 B]
Get:25 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates/restricted amd64 Packages [8,924 B]
Get:26 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates/restricted Translation-en [2,516 B]
Get:27 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates/universe amd64 Packages [74.9 kB]
Get:28 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates/universe Translation-en [32.7 kB]
Get:29 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates/universe amd64 c-n-f Metadata [2,608 B]
Get:30 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates/multiverse amd64 Packages [1,172 B]
Get:31 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates/multiverse Translation-en [540 B]
Get:32 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates/multiverse amd64 c-n-f Metadata [116 B]
Get:33 http://nova.clouds.archive.ubuntu.com/ubuntu focal-backports/main amd64 c-n-f Metadata [112 B]
Get:34 http://nova.clouds.archive.ubuntu.com/ubuntu focal-backports/restricted amd64 c-n-f Metadata [116 B]
Get:35 http://nova.clouds.archive.ubuntu.com/ubuntu focal-backports/universe amd64 Packages [2,792 B]
Get:36 http://nova.clouds.archive.ubuntu.com/ubuntu focal-backports/universe Translation-en [1,280 B]
Get:37 http://nova.clouds.archive.ubuntu.com/ubuntu focal-backports/universe amd64 c-n-f Metadata [188 B]
Get:38 http://nova.clouds.archive.ubuntu.com/ubuntu focal-backports/multiverse amd64 c-n-f Metadata [116 B]
Fetched 15.1 MB in 15s (978 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
31 packages can be upgraded. Run 'apt list --upgradable' to see them.
Reading package lists... Done
Building dependency tree
Reading state information... Done
Calculating upgrade... Done
The following NEW packages will be installed:
  linux-headers-5.4.0-33 linux-headers-5.4.0-33-generic linux-image-5.4.0-33-generic linux-modules-5.4.0-33-generic
The following packages will be upgraded:
  apport apt apt-utils bind9-dnsutils bind9-host bind9-libs glib-networking glib-networking-common glib-networking-services libapt-pkg6.0 libjson-c4 libnetplan0 libnss-systemd libpam-systemd libsystemd0 libudev1
  linux-headers-generic linux-headers-virtual linux-image-virtual linux-virtual netplan.io python3-apport python3-problem-report systemd systemd-sysv systemd-timesyncd tzdata ubuntu-minimal ubuntu-server
  ubuntu-standard udev
31 upgraded, 4 newly installed, 0 to remove and 0 not upgraded.
Need to get 45.6 MB of archives.
After this operation, 170 MB of additional disk space will be used.
Do you want to continue? [Y/n]

(snip)

Unpacking python3-apport (2.20.11-0ubuntu27.2) over (2.20.11-0ubuntu27) ...
Preparing to unpack .../12-apport_2.20.11-0ubuntu27.2_all.deb ...
Unpacking apport (2.20.11-0ubuntu27.2) over (2.20.11-0ubuntu27) ...
Preparing to unpack .../13-glib-networking-common_2.64.2-1build1_all.deb ...
Unpacking glib-networking-common (2.64.2-1build1) over (2.64.1-1) ...
Preparing to unpack .../14-glib-networking_2.64.2-1build1_amd64.deb ...
Unpacking glib-networking:amd64 (2.64.2-1build1) over (2.64.1-1) ...
Preparing to unpack .../15-glib-networking-services_2.64.2-1build1_amd64.deb ...
Unpacking glib-networking-services (2.64.2-1build1) over (2.64.1-1) ...
Selecting previously unselected package linux-headers-5.4.0-33.
Preparing to unpack .../16-linux-headers-5.4.0-33_5.4.0-33.37_all.deb ...
Unpacking linux-headers-5.4.0-33 (5.4.0-33.37) ...
Selecting previously unselected package linux-headers-5.4.0-33-generic.
Preparing to unpack .../17-linux-headers-5.4.0-33-generic_5.4.0-33.37_amd64.deb ...
Unpacking linux-headers-5.4.0-33-generic (5.4.0-33.37) ...
Selecting previously unselected package linux-modules-5.4.0-33-generic.
Preparing to unpack .../18-linux-modules-5.4.0-33-generic_5.4.0-33.37_amd64.deb ...
Unpacking linux-modules-5.4.0-33-generic (5.4.0-33.37) ...
Selecting previously unselected package linux-image-5.4.0-33-generic.
Preparing to unpack .../19-linux-image-5.4.0-33-generic_5.4.0-33.37_amd64.deb ...
Unpacking linux-image-5.4.0-33-generic (5.4.0-33.37) ...
Preparing to unpack .../20-linux-virtual_5.4.0.33.38_amd64.deb ...
Unpacking linux-virtual (5.4.0.33.38) over (5.4.0.29.34) ...
Preparing to unpack .../21-linux-image-virtual_5.4.0.33.38_amd64.deb ...
Unpacking linux-image-virtual (5.4.0.33.38) over (5.4.0.29.34) ...
Preparing to unpack .../22-linux-headers-virtual_5.4.0.33.38_amd64.deb ...
Unpacking linux-headers-virtual (5.4.0.33.38) over (5.4.0.29.34) ...
Preparing to unpack .../23-linux-headers-generic_5.4.0.33.38_amd64.deb ...
Unpacking linux-headers-generic (5.4.0.33.38) over (5.4.0.29.34) ...
Preparing to unpack .../24-ubuntu-server_1.450.1_amd64.deb ...
Unpacking ubuntu-server (1.450.1) over (1.450) ...
Setting up apt-utils (2.0.2ubuntu0.1) ...
Setting up linux-modules-5.4.0-33-generic (5.4.0-33.37) ...
Setting up linux-headers-5.4.0-33 (5.4.0-33.37) ...
Setting up linux-image-5.4.0-33-generic (5.4.0-33.37) ...
I: /boot/vmlinuz is now a symlink to vmlinuz-5.4.0-33-generic
I: /boot/initrd.img is now a symlink to initrd.img-5.4.0-33-generic
Setting up python3-problem-report (2.20.11-0ubuntu27.2) ...
Setting up libnetplan0:amd64 (0.99-0ubuntu3~20.04.1) ...
Setting up python3-apport (2.20.11-0ubuntu27.2) ...
Setting up tzdata (2020a-0ubuntu0.20.04) ...

Current default time zone: 'Etc/UTC'
Local time is now:      Sat 30 May 2020 01:42:16 PM UTC.
Universal Time is now:  Sat May 30 13:42:16 UTC 2020.
Run 'dpkg-reconfigure tzdata' if you wish to change it.

Setting up udev (245.4-4ubuntu3.1) ...
update-initramfs: deferring update (trigger activated)
Setting up linux-headers-5.4.0-33-generic (5.4.0-33.37) ...
Setting up linux-image-virtual (5.4.0.33.38) ...
Setting up libjson-c4:amd64 (0.13.1+dfsg-7ubuntu0.3) ...
Setting up glib-networking-common (2.64.2-1build1) ...
Setting up glib-networking-services (2.64.2-1build1) ...
Setting up bind9-libs:amd64 (1:9.16.1-0ubuntu2.1) ...
Setting up linux-headers-generic (5.4.0.33.38) ...
Setting up apport (2.20.11-0ubuntu27.2) ...
apport-autoreport.service is a disabled or a static unit, not starting it.
Setting up ubuntu-server (1.450.1) ...
Setting up glib-networking:amd64 (2.64.2-1build1) ...
Setting up bind9-host (1:9.16.1-0ubuntu2.1) ...
Setting up linux-headers-virtual (5.4.0.33.38) ...
Setting up linux-virtual (5.4.0.33.38) ...
Setting up bind9-dnsutils (1:9.16.1-0ubuntu2.1) ...
Setting up systemd (245.4-4ubuntu3.1) ...
Setting up netplan.io (0.99-0ubuntu3~20.04.1) ...
Setting up systemd-timesyncd (245.4-4ubuntu3.1) ...
packet_write_wait: Connection to 198.51.100.165 port 22: Broken pipe
(venv) ubuntu@os-client:~/work/openstack/workspace$
The `packet_write_wait ... Broken pipe` message is expected: the final `systemctl reboot` drops the SSH session. Reconnect once the instance has restarted.
(venv) ubuntu@os-client:~/work/openstack/workspace$ skipssh ubuntu@198.51.100.165 -i ~/.ssh/lasthope.pem
Warning: Permanently added '198.51.100.165' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 20.04 LTS (GNU/Linux 5.4.0-33-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Sat May 30 22:53:33 UTC 2020

  System load:  0.06              Processes:             97
  Usage of /:   1.6% of 96.75GB   Users logged in:       0
  Memory usage: 16%               IPv4 address for ens3: 172.16.1.251
  Swap usage:   0%


0 updates can be installed immediately.
0 of these updates are security updates.


Last login: Sat May 30 22:53:30 2020 from 39.111.157.168
ubuntu@lasthope-web:~$ sudo apt update
Hit:1 http://security.ubuntu.com/ubuntu focal-security InRelease
Hit:2 http://nova.clouds.archive.ubuntu.com/ubuntu focal InRelease
Hit:3 http://nova.clouds.archive.ubuntu.com/ubuntu focal-updates InRelease
Hit:4 http://nova.clouds.archive.ubuntu.com/ubuntu focal-backports InRelease
Reading package lists... Done
Building dependency tree
Reading state information... Done
All packages are up to date.
  • Set a password for console access.
sudo passwd ubuntu
ubuntu@lasthope-web:~$ sudo passwd ubuntu
New password:
Retype new password:
passwd: password updated successfully

Let's install WordPress!!

How To Install Linux, Apache, MySQL, PHP (LAMP) stack on Ubuntu 20.04 | DigitalOcean

sudo apt install jq
sudo apt install software-properties-common
sudo add-apt-repository universe
sudo apt update
sudo apt install apache2 php libapache2-mod-php php-mysql mysql-server certbot python3-certbot-apache
ubuntu@lasthope-web:~$ sudo apt install apache2 php libapache2-mod-php php-mysql mysql-server certbot python3-certbot-apache
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  apache2-bin apache2-data apache2-utils augeas-lenses libapache2-mod-php7.4 libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libaugeas0 libcgi-fast-perl libcgi-pm-perl
  libencode-locale-perl libevent-core-2.1-7 libfcgi-perl libhtml-parser-perl libhtml-tagset-perl libhtml-template-perl libhttp-date-perl libhttp-message-perl libio-html-perl libjansson4 liblua5.2-0
  liblwp-mediatypes-perl libmecab2 libtimedate-perl liburi-perl mecab-ipadic mecab-ipadic-utf8 mecab-utils mysql-client-8.0 mysql-client-core-8.0 mysql-common mysql-server-8.0 mysql-server-core-8.0
  php-common php7.4 php7.4-cli php7.4-common php7.4-json php7.4-mysql php7.4-opcache php7.4-readline python3-acme python3-augeas python3-certbot python3-configargparse python3-future python3-icu
  python3-josepy python3-mock python3-parsedatetime python3-pbr python3-requests-toolbelt python3-rfc3339 python3-tz python3-zope.component python3-zope.event python3-zope.hookable ssl-cert
Suggested packages:
  apache2-doc apache2-suexec-pristine | apache2-suexec-custom www-browser augeas-doc python3-certbot-nginx python-certbot-doc php-pear augeas-tools libdata-dump-perl libipc-sharedcache-perl
  libwww-perl mailx tinyca python-acme-doc python-certbot-apache-doc python-future-doc python-mock-doc openssl-blacklist
The following NEW packages will be installed:
  apache2 apache2-bin apache2-data apache2-utils augeas-lenses certbot libapache2-mod-php libapache2-mod-php7.4 libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap libaugeas0
  libcgi-fast-perl libcgi-pm-perl libencode-locale-perl libevent-core-2.1-7 libfcgi-perl libhtml-parser-perl libhtml-tagset-perl libhtml-template-perl libhttp-date-perl libhttp-message-perl
  libio-html-perl libjansson4 liblua5.2-0 liblwp-mediatypes-perl libmecab2 libtimedate-perl liburi-perl mecab-ipadic mecab-ipadic-utf8 mecab-utils mysql-client-8.0 mysql-client-core-8.0
  mysql-common mysql-server mysql-server-8.0 mysql-server-core-8.0 php php-common php-mysql php7.4 php7.4-cli php7.4-common php7.4-json php7.4-mysql php7.4-opcache php7.4-readline python3-acme
  python3-augeas python3-certbot python3-certbot-apache python3-configargparse python3-future python3-icu python3-josepy python3-mock python3-parsedatetime python3-pbr python3-requests-toolbelt
  python3-rfc3339 python3-tz python3-zope.component python3-zope.event python3-zope.hookable ssl-cert
0 upgraded, 67 newly installed, 0 to remove and 0 not upgraded.
Need to get 38.0 MB of archives.
After this operation, 282 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
  • Apache modules
sudo ls -l /etc/apache2/mods-enabled/
ubuntu@lasthope-web:~$ sudo ls -l /etc/apache2/mods-enabled/
total 0
lrwxrwxrwx 1 root root 36 May 31 00:20 access_compat.load -> ../mods-available/access_compat.load
lrwxrwxrwx 1 root root 28 May 31 00:20 alias.conf -> ../mods-available/alias.conf
lrwxrwxrwx 1 root root 28 May 31 00:20 alias.load -> ../mods-available/alias.load
lrwxrwxrwx 1 root root 33 May 31 00:20 auth_basic.load -> ../mods-available/auth_basic.load
lrwxrwxrwx 1 root root 33 May 31 00:20 authn_core.load -> ../mods-available/authn_core.load
lrwxrwxrwx 1 root root 33 May 31 00:20 authn_file.load -> ../mods-available/authn_file.load
lrwxrwxrwx 1 root root 33 May 31 00:20 authz_core.load -> ../mods-available/authz_core.load
lrwxrwxrwx 1 root root 33 May 31 00:20 authz_host.load -> ../mods-available/authz_host.load
lrwxrwxrwx 1 root root 33 May 31 00:20 authz_user.load -> ../mods-available/authz_user.load
lrwxrwxrwx 1 root root 32 May 31 00:20 autoindex.conf -> ../mods-available/autoindex.conf
lrwxrwxrwx 1 root root 32 May 31 00:20 autoindex.load -> ../mods-available/autoindex.load
lrwxrwxrwx 1 root root 30 May 31 00:20 deflate.conf -> ../mods-available/deflate.conf
lrwxrwxrwx 1 root root 30 May 31 00:20 deflate.load -> ../mods-available/deflate.load
lrwxrwxrwx 1 root root 26 May 31 00:20 dir.conf -> ../mods-available/dir.conf
lrwxrwxrwx 1 root root 26 May 31 00:20 dir.load -> ../mods-available/dir.load
lrwxrwxrwx 1 root root 26 May 31 00:20 env.load -> ../mods-available/env.load
lrwxrwxrwx 1 root root 29 May 31 00:20 filter.load -> ../mods-available/filter.load
lrwxrwxrwx 1 root root 27 May 31 00:20 mime.conf -> ../mods-available/mime.conf
lrwxrwxrwx 1 root root 27 May 31 00:20 mime.load -> ../mods-available/mime.load
lrwxrwxrwx 1 root root 34 May 31 00:20 mpm_prefork.conf -> ../mods-available/mpm_prefork.conf
lrwxrwxrwx 1 root root 34 May 31 00:20 mpm_prefork.load -> ../mods-available/mpm_prefork.load
lrwxrwxrwx 1 root root 34 May 31 00:20 negotiation.conf -> ../mods-available/negotiation.conf
lrwxrwxrwx 1 root root 34 May 31 00:20 negotiation.load -> ../mods-available/negotiation.load
lrwxrwxrwx 1 root root 29 May 31 00:20 php7.4.conf -> ../mods-available/php7.4.conf
lrwxrwxrwx 1 root root 29 May 31 00:20 php7.4.load -> ../mods-available/php7.4.load
lrwxrwxrwx 1 root root 33 May 31 00:20 reqtimeout.conf -> ../mods-available/reqtimeout.conf
lrwxrwxrwx 1 root root 33 May 31 00:20 reqtimeout.load -> ../mods-available/reqtimeout.load
lrwxrwxrwx 1 root root 30 May 31 00:40 rewrite.load -> ../mods-available/rewrite.load
lrwxrwxrwx 1 root root 31 May 31 00:20 setenvif.conf -> ../mods-available/setenvif.conf
lrwxrwxrwx 1 root root 31 May 31 00:20 setenvif.load -> ../mods-available/setenvif.load
lrwxrwxrwx 1 root root 36 May 31 00:39 socache_shmcb.load -> ../mods-available/socache_shmcb.load
lrwxrwxrwx 1 root root 26 May 31 00:39 ssl.conf -> ../mods-available/ssl.conf
lrwxrwxrwx 1 root root 26 May 31 00:39 ssl.load -> ../mods-available/ssl.load
lrwxrwxrwx 1 root root 29 May 31 00:20 status.conf -> ../mods-available/status.conf
lrwxrwxrwx 1 root root 29 May 31 00:20 status.load -> ../mods-available/status.load
  • Verify DNS
(venv) ubuntu@os-client:~/work/openstack/workspace$ dig host-198-51-100-165.pg1x.com

; <<>> DiG 9.11.3-1ubuntu1.12-Ubuntu <<>> host-198-51-100-165.pg1x.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 61567
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;host-198-51-100-165.pg1x.com.  IN      A

;; ANSWER SECTION:
host-198-51-100-165.pg1x.com. 299 IN    A       198.51.100.165

;; Query time: 12 msec
;; SERVER: 127.0.0.53#53(127.0.0.53)
;; WHEN: Sun May 31 09:39:04 JST 2020
;; MSG SIZE  rcvd: 73
  • Verify PHP Installation
sudo rm /var/www/html/index.html
echo "<?php phpinfo();" | sudo tee /var/www/html/index.php

http://198.51.100.165/

sudo rm /var/www/html/index.php
  • Enable Let's Encrypt

Certbot - Apache on Ubuntu 20.04 (focal)

sudo certbot --apache
ubuntu@lasthope-web:~$ sudo certbot --apache
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator apache, Installer apache
Enter email address (used for urgent renewal and security notices) (Enter 'c' to
cancel): sre@pg1x.com

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Please read the Terms of Service at
https://letsencrypt.org/documents/LE-SA-v1.2-November-15-2017.pdf. You must
agree in order to register with the ACME server at
https://acme-v02.api.letsencrypt.org/directory
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(A)gree/(C)ancel: A

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Would you be willing to share your email address with the Electronic Frontier
Foundation, a founding partner of the Let's Encrypt project and the non-profit
organization that develops Certbot? We'd like to send you email about our work
encrypting the web, EFF news, campaigns, and ways to support digital freedom.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: Y
No names were found in your configuration files. Please enter in your domain
name(s) (comma and/or space separated)  (Enter 'c' to cancel): host-198-51-100-165.pg1x.com
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for host-198-51-100-165.pg1x.com
Enabled Apache rewrite module
Waiting for verification...
Cleaning up challenges
Created an SSL vhost at /etc/apache2/sites-available/000-default-le-ssl.conf
Enabled Apache socache_shmcb module
Enabled Apache ssl module
Deploying Certificate to VirtualHost /etc/apache2/sites-available/000-default-le-ssl.conf
Enabling available site: /etc/apache2/sites-available/000-default-le-ssl.conf

Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: No redirect - Make no further changes to the webserver configuration.
2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for
new sites, or if you're confident your site works on HTTPS. You can undo this
change by editing your web server's configuration.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 2
Enabled Apache rewrite module
Redirecting vhost in /etc/apache2/sites-enabled/000-default.conf to ssl vhost in /etc/apache2/sites-available/000-default-le-ssl.conf

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Congratulations! You have successfully enabled
https://host-198-51-100-165.pg1x.com

You should test your configuration at:
https://www.ssllabs.com/ssltest/analyze.html?d=host-198-51-100-165.pg1x.com
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/host-198-51-100-165.pg1x.com/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/host-198-51-100-165.pg1x.com/privkey.pem
   Your cert will expire on 2020-08-28. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot again
   with the "certonly" option. To non-interactively renew *all* of
   your certificates, run "certbot renew"
 - Your account credentials have been saved in your Certbot
   configuration directory at /etc/letsencrypt. You should make a
   secure backup of this folder now. This configuration directory will
   also contain certificates and private keys obtained by Certbot so
   making regular backups of this folder is ideal.
 - If you like Certbot, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

ubuntu@lasthope-web:~$ systemctl list-timers
NEXT                        LEFT          LAST                        PASSED       UNIT                         ACTIVATES
Sun 2020-05-31 01:09:00 UTC 24min left    Sun 2020-05-31 00:39:05 UTC 5min ago     phpsessionclean.timer        phpsessionclean.service
Sun 2020-05-31 03:10:07 UTC 2h 25min left Sat 2020-05-30 22:41:52 UTC 2h 3min ago  e2scrub_all.timer            e2scrub_all.service
Sun 2020-05-31 06:27:13 UTC 5h 42min left Sat 2020-05-30 22:41:52 UTC 2h 3min ago  apt-daily-upgrade.timer      apt-daily-upgrade.service
Sun 2020-05-31 10:39:10 UTC 9h left       Sat 2020-05-30 22:41:52 UTC 2h 3min ago  fwupd-refresh.timer          fwupd-refresh.service
Sun 2020-05-31 14:49:29 UTC 14h left      n/a                         n/a          certbot.timer                certbot.service
Sun 2020-05-31 16:41:39 UTC 15h left      Sat 2020-05-30 22:41:52 UTC 2h 3min ago  apt-daily.timer              apt-daily.service
Sun 2020-05-31 21:34:50 UTC 20h left      Sun 2020-05-31 00:20:44 UTC 24min ago    motd-news.timer              motd-news.service
Sun 2020-05-31 23:05:56 UTC 22h left      Sat 2020-05-30 23:05:56 UTC 1h 39min ago systemd-tmpfiles-clean.timer systemd-tmpfiles-clean.service
Mon 2020-06-01 00:00:00 UTC 23h left      Sat 2020-05-30 22:41:52 UTC 2h 3min ago  fstrim.timer                 fstrim.service
Mon 2020-06-01 00:00:00 UTC 23h left      Sun 2020-05-31 00:00:38 UTC 44min ago    logrotate.timer              logrotate.service
Mon 2020-06-01 00:00:00 UTC 23h left      Sun 2020-05-31 00:00:38 UTC 44min ago    man-db.timer                 man-db.service

11 timers listed.
Pass --all to see loaded but inactive timers, too.

https://host-198-51-100-165.pg1x.com

  • Prepare MySQL 8.0
sudo mysql
ubuntu@lasthope-web:~$ sudo mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.0.20-0ubuntu0.20.04.1 (Ubuntu)

Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
CREATE USER 'wordpress'@'localhost' IDENTIFIED BY 'g2jWAkvsZfcGRA54';
GRANT ALL ON `wordpress`.* TO 'wordpress'@'localhost';
CREATE DATABASE `wordpress`;
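The same provisioning can be captured in a file and replayed non-interactively (e.g. `sudo mysql < wordpress-provision.sql`), which is handy when rebuilding the instance. This is a sketch using the example password from this walkthrough; `IF NOT EXISTS` is added so the script is safe to re-run.

```shell
# Sketch: write the WordPress DB provisioning SQL to a file for reuse.
# Password is the example value from this walkthrough -- change it for real use.
cat > wordpress-provision.sql <<'EOF'
CREATE DATABASE IF NOT EXISTS `wordpress`;
CREATE USER IF NOT EXISTS 'wordpress'@'localhost' IDENTIFIED BY 'g2jWAkvsZfcGRA54';
GRANT ALL ON `wordpress`.* TO 'wordpress'@'localhost';
EOF
```

Apply it on the web server with `sudo mysql < wordpress-provision.sql`.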
  • Get and Install WordPress

https://wordpress.org/

cd /var/www
sudo mv html{,.bak}
sudo wget https://wordpress.org/latest.tar.gz
sudo tar xf latest.tar.gz
ls -la
sudo mv wordpress html
ls -la
sudo chown -R www-data:www-data html
ls -ld html
ls -la html
ubuntu@lasthope-web:/var/www$ ls -la
total 11964
drwxr-xr-x  4 root   root        4096 May 31 00:54 .
drwxr-xr-x 14 root   root        4096 May 31 00:18 ..
drwxr-xr-x  2 root   root        4096 May 31 00:27 html.bak
-rw-r--r--  1 root   root    12234700 Apr 29 18:59 latest.tar.gz
drwxr-xr-x  5 nobody nogroup     4096 Apr 29 18:58 wordpress
ubuntu@lasthope-web:/var/www$ sudo mv wordpress html
ubuntu@lasthope-web:/var/www$ ls -la
total 11964
drwxr-xr-x  4 root   root        4096 May 31 00:55 .
drwxr-xr-x 14 root   root        4096 May 31 00:18 ..
drwxr-xr-x  5 nobody nogroup     4096 Apr 29 18:58 html
drwxr-xr-x  2 root   root        4096 May 31 00:27 html.bak
-rw-r--r--  1 root   root    12234700 Apr 29 18:59 latest.tar.gz
ubuntu@lasthope-web:/var/www$ ls -ld html
drwxr-xr-x 5 www-data www-data 4096 Apr 29 18:58 html
ubuntu@lasthope-web:/var/www$ ls -la html
total 216
drwxr-xr-x  5 www-data www-data  4096 Apr 29 18:58 .
drwxr-xr-x  4 root     root      4096 May 31 00:55 ..
-rw-r--r--  1 www-data www-data   405 Feb  6 06:33 index.php
-rw-r--r--  1 www-data www-data 19915 Feb 12 11:54 license.txt
-rw-r--r--  1 www-data www-data  7278 Jan 10 14:05 readme.html
-rw-r--r--  1 www-data www-data  6912 Feb  6 06:33 wp-activate.php
drwxr-xr-x  9 www-data www-data  4096 Apr 29 18:58 wp-admin
-rw-r--r--  1 www-data www-data   351 Feb  6 06:33 wp-blog-header.php
-rw-r--r--  1 www-data www-data  2275 Feb  6 06:33 wp-comments-post.php
-rw-r--r--  1 www-data www-data  2913 Feb  6 06:33 wp-config-sample.php
drwxr-xr-x  4 www-data www-data  4096 Apr 29 18:58 wp-content
-rw-r--r--  1 www-data www-data  3940 Feb  6 06:33 wp-cron.php
drwxr-xr-x 21 www-data www-data 12288 Apr 29 18:58 wp-includes
-rw-r--r--  1 www-data www-data  2496 Feb  6 06:33 wp-links-opml.php
-rw-r--r--  1 www-data www-data  3300 Feb  6 06:33 wp-load.php
-rw-r--r--  1 www-data www-data 47874 Feb 10 03:50 wp-login.php
-rw-r--r--  1 www-data www-data  8509 Apr 14 11:34 wp-mail.php
-rw-r--r--  1 www-data www-data 19396 Apr 10 03:59 wp-settings.php
-rw-r--r--  1 www-data www-data 31111 Feb  6 06:33 wp-signup.php
-rw-r--r--  1 www-data www-data  4755 Feb  6 06:33 wp-trackback.php
-rw-r--r--  1 www-data www-data  3133 Feb  6 06:33 xmlrpc.php
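Instead of entering the database credentials in the web installer, `wp-config.php` can be generated from the shipped `wp-config-sample.php`. The sketch below runs on a scratch copy with only the three DB `define()` lines, so the substitution logic is visible and self-contained; on the server the source file is `/var/www/html/wp-config-sample.php` and the edits need `sudo`.

```shell
# Sketch: derive wp-config.php from the sample by substituting the DB settings.
# Simulated on a minimal local copy of the sample file (assumption: only the
# placeholder names matter; the real file has many more lines).
WORK=$(mktemp -d)
cat > "$WORK/wp-config-sample.php" <<'EOF'
define( 'DB_NAME', 'database_name_here' );
define( 'DB_USER', 'username_here' );
define( 'DB_PASSWORD', 'password_here' );
EOF
sed -e "s/database_name_here/wordpress/" \
    -e "s/username_here/wordpress/" \
    -e "s/password_here/g2jWAkvsZfcGRA54/" \
    "$WORK/wp-config-sample.php" > "$WORK/wp-config.php"
cat "$WORK/wp-config.php"
```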

https://host-198-51-100-165.pg1x.com

The first instance has been launched.

Launch an Instance from the OpenStack Dashboard (Horizon)

The tasks above can also be performed from the OpenStack Dashboard UI, called Horizon.

http://10.0.14.136/

Check the detailed terminology.

Instance status

ubuntu@os-client:~$ source ~/work/venv/bin/activate
(venv) ubuntu@os-client:~$ openstack --os-cloud admin server list --all-projects
+--------------------------------------+---------------+--------+-----------------------------------------------+-------+---------+
| ID                                   | Name          | Status | Networks                                      | Image | Flavor  |
+--------------------------------------+---------------+--------+-----------------------------------------------+-------+---------+
| 4466efaf-c9ae-44be-8b02-66cbbd388824 | hello-horizon | ACTIVE | LastHopeNetwork=172.16.2.194, 198.51.100.165 |       | m1.nano |
+--------------------------------------+---------------+--------+-----------------------------------------------+-------+---------+
(venv) ubuntu@os-client:~$ openstack --os-cloud admin server show 4466efaf-c9ae-44be-8b02-66cbbd388824
+-------------------------------------+----------------------------------------------------------+
| Field                               | Value                                                    |
+-------------------------------------+----------------------------------------------------------+
| OS-DCF:diskConfig                   | AUTO                                                     |
| OS-EXT-AZ:availability_zone         | nova                                                     |
| OS-EXT-SRV-ATTR:host                | os-compute2.os.pg1x.net                                  |
| OS-EXT-SRV-ATTR:hypervisor_hostname | os-compute2.os.pg1x.net                                  |
| OS-EXT-SRV-ATTR:instance_name       | instance-00000022                                        |
| OS-EXT-STS:power_state              | Running                                                  |
| OS-EXT-STS:task_state               | None                                                     |
| OS-EXT-STS:vm_state                 | active                                                   |
| OS-SRV-USG:launched_at              | 2020-05-31T01:18:24.000000                               |
| OS-SRV-USG:terminated_at            | None                                                     |
| accessIPv4                          |                                                          |
| accessIPv6                          |                                                          |
| addresses                           | LastHopeNetwork=172.16.2.194, 198.51.100.165             |
| config_drive                        |                                                          |
| created                             | 2020-05-31T01:17:11Z                                     |
| flavor                              | m1.nano (8b8d2354-5b5d-47c8-a24d-ce6219f676d8)           |
| hostId                              | b4f8fb86fc7d5355f23a0b40e9da982aaf20798f2c08ac0b97ebdd8f |
| id                                  | 4466efaf-c9ae-44be-8b02-66cbbd388824                     |
| image                               |                                                          |
| key_name                            | lasthope-key                                             |
| name                                | hello-horizon                                            |
| progress                            | 0                                                        |
| project_id                          | e4d11ab5285f4a5eae38aa0b72d9dc34                         |
| properties                          |                                                          |
| security_groups                     | name='Allow_Essential'                                   |
|                                     | name='Allow_WEB'                                         |
|                                     | name='default'                                           |
|                                     | name='Allow_SSH'                                         |
| status                              | ACTIVE                                                   |
| updated                             | 2020-05-31T01:18:24Z                                     |
| user_id                             | 57ff551a119b4844af104fad89ec5007                         |
| volumes_attached                    | id='ffc0623a-9c18-41aa-aec0-efa67b218304'                |
+-------------------------------------+----------------------------------------------------------+
ubuntu@os-client:~$ juju ssh nova-compute/0 sudo virsh list
 Id   Name   State
--------------------

Connection to 10.0.12.74 closed.
ubuntu@os-client:~$ juju ssh nova-compute/1 sudo virsh list
 Id   Name                State
-----------------------------------
 11   instance-00000022   running

Connection to 10.0.12.80 closed.
ubuntu@os-client:~$ juju ssh nova-compute/2 sudo virsh list
 Id   Name   State
--------------------

Connection to 10.0.12.82 closed.
(venv) ubuntu@os-client:~$ alias skipssh='ssh -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no"'
(venv) ubuntu@os-client:~$ skipssh ubuntu@198.51.100.165 -i ~/.ssh/lasthope.pem
Warning: Permanently added '198.51.100.165' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 20.04 LTS (GNU/Linux 5.4.0-33-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Sun May 31 02:04:57 UTC 2020

  System load:  0.0               Processes:             95
  Usage of /:   8.2% of 19.21GB   Users logged in:       0
  Memory usage: 16%               IPv4 address for ens3: 172.16.2.194
  Swap usage:   0%


0 updates can be installed immediately.
0 of these updates are security updates.


Last login: Sun May 31 01:22:32 2020
ubuntu@hello-horizon:~$ cat .ssh/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDD3J5YjGc+XeSygZpElCha6JuZxyIP/HE3eVE6ENWyup0vnIlJXHt9DfLy99+tpRDFZBhzfg+0gQ5AQ/CxBk5XCYVHN+pQ54wlROtg23DrUpGpieYz+K87+Mk7wZH3O0WG0aKVEt2w5nZlvvk8xluxQyGvn4ZBLRuKHxpidvVeKMdacIae9Ldgz1R3OvpgiahPVg4vwTJsEYK6GmMH8TMzhDa3waU3Mvz239LI2EdGf5LBe2yHFMugZM3S3dBSGmdkIqYDShlconLNpppoEpxU9j34IXLp3dfztd1u2v1xOQovAEWSJMWGIeAhfC8xoRxX5AQ41ejk6bYIH7J9yL5r/Ya4B43LdOkEeK+pXqz9OmPy4Ra56GXfVcIBp3a50EztjNryC6rLEuiNJaPYR1JSmvY6mGJdZJ/RqugbsC1rG8D4Der1yIr8VPN4TfRLLr5DotAoeqoQdQn4cmn6B5ajqS1qvNLo/7imVkq6FU1vE3rUzmCAi2i1jjBx0BQjYuCjPEs7T6Zv1rYRk+0twCZOlNn1Mwf9UVjS6ARcDOM2L4t7t1nl7gFAaWlFYQKSd9CYsfU7wGktzYmodKlHPF9g11AQ2YB/I9MBZdxFA6dclh0rQgRin0cUuAjJJa/DaSVroOTfFVoErVuWA7UjI0T6u6hr3FCdZZhBczaK3UoNCw== openstack-lasthope-key

The SSH public key above was deployed by cloud-init via the Nova metadata service; it is the keypair registered earlier.
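Each `authorized_keys` entry has the form `<type> <base64-key> <comment>`, and for Nova-injected keys the comment is the keypair name given to `openstack keypair create`. A small sketch (with shortened example key material, since only the field layout matters here):

```shell
# Sketch: split an authorized_keys entry into its fields with awk.
# The key body is shortened example data, not the real key from this deploy.
KEY='ssh-rsa AAAAB3NzaC1yc2EAAAADAQAB openstack-lasthope-key'
echo "$KEY" | awk '{print $1}'   # key type
echo "$KEY" | awk '{print $3}'   # comment, i.e. the registered keypair name
```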

Neutron Networking SDN component status

Neutron's standard implementation uses Open vSwitch (OVS). The overlay tenant network is configured with VXLAN via the neutron-api charm's overlay-network-type: vxlan option. How is Open vSwitch configured? Let's check.

ubuntu@os-client:~$ juju ssh neutron-gateway/0 ip address show ens34
3: ens34: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc fq_codel master ovs-system state UP group default qlen 1000
    link/ether 00:0c:29:9e:fe:7f brd ff:ff:ff:ff:ff:ff
    inet6 fe80::20c:29ff:fe9e:fe7f/64 scope link
       valid_lft forever preferred_lft forever
Connection to 10.0.12.84 closed.
ubuntu@os-client:~$ juju ssh neutron-gateway/1 ip address show ens34
3: ens34: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc fq_codel master ovs-system state UP group default qlen 1000
    link/ether 00:0c:29:0e:75:ed brd ff:ff:ff:ff:ff:ff
    inet6 fe80::20c:29ff:fe0e:75ed/64 scope link
       valid_lft forever preferred_lft forever
Connection to 10.0.12.86 closed.
ubuntu@os-client:~$ juju ssh neutron-gateway/2 ip address show ens34
3: ens34: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc fq_codel master ovs-system state UP group default qlen 1000
    link/ether 00:0c:29:d5:ae:50 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::20c:29ff:fed5:ae50/64 scope link
       valid_lft forever preferred_lft forever
Connection to 10.0.12.77 closed.
(venv) ubuntu@os-client:~$ juju ssh neutron-gateway/0 ip netns
qrouter-be762041-8aa9-4bce-8655-0792681dd5d5 (id: 13)
qdhcp-42b9c40d-d549-45fa-a133-a22a5498254f (id: 12)
Connection to 10.0.12.84 closed.
(venv) ubuntu@os-client:~$ juju ssh neutron-gateway/1 ip netns
qrouter-be762041-8aa9-4bce-8655-0792681dd5d5 (id: 12)
Connection to 10.0.12.86 closed.
(venv) ubuntu@os-client:~$ juju ssh neutron-gateway/2 ip netns
qrouter-be762041-8aa9-4bce-8655-0792681dd5d5 (id: 12)
Connection to 10.0.12.77 closed.
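The `qrouter` namespace exists on all three gateways because this is an HA router: keepalived elects one active instance, and each gateway records its role in a state file under `/var/lib/neutron/ha_confs/<router-id>/state` ("master" or "backup"). The sketch below simulates that layout locally so the path logic is self-contained; on a real gateway you would read the file with `sudo`.

```shell
# Sketch: where neutron's HA router election state lives (simulated locally;
# on a gateway node: sudo cat /var/lib/neutron/ha_confs/<router-id>/state).
ROUTER=be762041-8aa9-4bce-8655-0792681dd5d5
STATE_DIR=$(mktemp -d)/ha_confs/$ROUTER
mkdir -p "$STATE_DIR"
echo master > "$STATE_DIR/state"   # what the active gateway reports
cat "$STATE_DIR/state"
```

Only the gateway whose state file reads "master" holds the router's external and internal IP addresses, as the `ip address show` output below confirms.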
(venv) ubuntu@os-client:~$ juju ssh neutron-gateway/0 ip netns exec qrouter-be762041-8aa9-4bce-8655-0792681dd5d5 ip address show
setting the network namespace "qrouter-be762041-8aa9-4bce-8655-0792681dd5d5" failed: Operation not permitted
Connection to 10.0.12.84 closed.
(venv) ubuntu@os-client:~$ juju ssh neutron-gateway/0 sudo ip netns exec qrouter-be762041-8aa9-4bce-8655-0792681dd5d5 ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ha-b47e1d6d-68@if37: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether fa:16:3e:58:a6:3a brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 169.254.192.184/18 brd 169.254.255.255 scope global ha-b47e1d6d-68
       valid_lft forever preferred_lft forever
    inet 169.254.0.43/24 scope global ha-b47e1d6d-68
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe58:a63a/64 scope link
       valid_lft forever preferred_lft forever
3: qg-260e9c8c-a6@if39: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fa:16:3e:d7:85:45 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 198.51.100.166/28 scope global qg-260e9c8c-a6
       valid_lft forever preferred_lft forever
    inet 198.51.100.165/32 scope global qg-260e9c8c-a6
       valid_lft forever preferred_lft forever
    inet6 2408:211:c221:a600:f816:3eff:fed7:8545/64 scope global dynamic mngtmpaddr
       valid_lft 2591883sec preferred_lft 604683sec
    inet6 fe80::f816:3eff:fed7:8545/64 scope link
       valid_lft forever preferred_lft forever
4: qr-84398475-58@if40: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether fa:16:3e:b4:03:66 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.16.0.1/20 scope global qr-84398475-58
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:feb4:366/64 scope link
       valid_lft forever preferred_lft forever
Connection to 10.0.12.84 closed.
(venv) ubuntu@os-client:~$ juju ssh neutron-gateway/1 sudo ip netns exec qrouter-be762041-8aa9-4bce-8655-0792681dd5d5 ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ha-cc49271e-af@if36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether fa:16:3e:f1:56:05 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 169.254.195.213/18 brd 169.254.255.255 scope global ha-cc49271e-af
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fef1:5605/64 scope link
       valid_lft forever preferred_lft forever
3: qg-260e9c8c-a6@if38: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fa:16:3e:d7:85:45 brd ff:ff:ff:ff:ff:ff link-netnsid 0
4: qr-84398475-58@if39: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether fa:16:3e:b4:03:66 brd ff:ff:ff:ff:ff:ff link-netnsid 0
Connection to 10.0.12.86 closed.
(venv) ubuntu@os-client:~$ juju ssh neutron-gateway/2 sudo ip netns exec qrouter-be762041-8aa9-4bce-8655-0792681dd5d5 ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ha-04c0aa18-ad@if36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether fa:16:3e:02:78:64 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 169.254.194.237/18 brd 169.254.255.255 scope global ha-04c0aa18-ad
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe02:7864/64 scope link
       valid_lft forever preferred_lft forever
3: qg-260e9c8c-a6@if38: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fa:16:3e:d7:85:45 brd ff:ff:ff:ff:ff:ff link-netnsid 0
4: qr-84398475-58@if39: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether fa:16:3e:b4:03:66 brd ff:ff:ff:ff:ff:ff link-netnsid 0
Connection to 10.0.12.77 closed.
ubuntu@os-client:~$ juju ssh neutron-gateway/0 sudo ovs-vsctl show
cad58b5b-c1d9-4e22-bed5-7b99a746671d
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-0a000c4d"
            Interface "vxlan-0a000c4d"
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="10.0.12.84", out_key=flow, remote_ip="10.0.12.77"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-0a000c50"
            Interface "vxlan-0a000c50"
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="10.0.12.84", out_key=flow, remote_ip="10.0.12.80"}
        Port "vxlan-0a000c56"
            Interface "vxlan-0a000c56"
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="10.0.12.84", out_key=flow, remote_ip="10.0.12.86"}
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "tap84398475-58"
            tag: 1
            Interface "tap84398475-58"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port "tap6a67b0a0-a4"
            tag: 1
            Interface "tap6a67b0a0-a4"
                type: internal
        Port "tapb47e1d6d-68"
            tag: 2
            Interface "tapb47e1d6d-68"
        Port "tap260e9c8c-a6"
            tag: 3
            Interface "tap260e9c8c-a6"
    Bridge br-ex
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-ex
            Interface br-ex
                type: internal
        Port "ens34"
            Interface "ens34"
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    ovs_version: "2.11.0"
Connection to 10.0.12.84 closed.
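In the output above, each VXLAN port name encodes the peer's tunnel endpoint IP as eight hex digits: `vxlan-0a000c4d` is 10.0.12.77, `vxlan-0a000c50` is 10.0.12.80, and so on (an observation from these outputs; the naming comes from the OVS agent). A tiny helper makes the mapping explicit:

```shell
# Sketch: reproduce the OVS agent's vxlan port naming from a peer IP.
# Each dotted-quad octet becomes two hex digits.
vxlan_port() { printf 'vxlan-%02x%02x%02x%02x\n' $(echo "$1" | tr '.' ' '); }
vxlan_port 10.0.12.77   # vxlan-0a000c4d
vxlan_port 10.0.12.80   # vxlan-0a000c50
vxlan_port 10.0.12.86   # vxlan-0a000c56
```

This makes it easy to confirm that each gateway holds a full VXLAN mesh to the other tunnel endpoints.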
ubuntu@os-client:~$ juju ssh neutron-gateway/1 sudo ovs-vsctl show
b41c1f9a-bf38-4c7a-854f-31c5934c8182
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "tap84398475-58"
            tag: 3
            Interface "tap84398475-58"
        Port "tapcc49271e-af"
            tag: 1
            Interface "tapcc49271e-af"
        Port br-int
            Interface br-int
                type: internal
        Port "tap260e9c8c-a6"
            tag: 2
            Interface "tap260e9c8c-a6"
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "vxlan-0a000c4d"
            Interface "vxlan-0a000c4d"
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="10.0.12.86", out_key=flow, remote_ip="10.0.12.77"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-0a000c54"
            Interface "vxlan-0a000c54"
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="10.0.12.86", out_key=flow, remote_ip="10.0.12.84"}
        Port "vxlan-0a000c50"
            Interface "vxlan-0a000c50"
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="10.0.12.86", out_key=flow, remote_ip="10.0.12.80"}
    Bridge br-ex
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "ens34"
            Interface "ens34"
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    ovs_version: "2.11.0"
Connection to 10.0.12.86 closed.
ubuntu@os-client:~$ juju ssh neutron-gateway/2 sudo ovs-vsctl show
1fb09226-6075-40ca-ab7b-625be9d049b0
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-0a000c50"
            Interface "vxlan-0a000c50"
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="10.0.12.77", out_key=flow, remote_ip="10.0.12.80"}
        Port "vxlan-0a000c56"
            Interface "vxlan-0a000c56"
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="10.0.12.77", out_key=flow, remote_ip="10.0.12.86"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-0a000c54"
            Interface "vxlan-0a000c54"
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="10.0.12.77", out_key=flow, remote_ip="10.0.12.84"}
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tap04c0aa18-ad"
            tag: 1
            Interface "tap04c0aa18-ad"
        Port "tap84398475-58"
            tag: 3
            Interface "tap84398475-58"
        Port "tap260e9c8c-a6"
            tag: 2
            Interface "tap260e9c8c-a6"
    Bridge br-ex
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "ens34"
            Interface "ens34"
    ovs_version: "2.11.0"
Connection to 10.0.12.77 closed.
(venv) ubuntu@os-client:~$ juju ssh nova-compute/0 ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:66:59:b9 brd ff:ff:ff:ff:ff:ff
    inet 10.0.12.74/22 brd 10.0.15.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe66:59b9/64 scope link
       valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:66:59:c3 brd ff:ff:ff:ff:ff:ff
    inet6 2408:211:c221:a600:20c:29ff:fe66:59c3/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 2591806sec preferred_lft 604606sec
    inet6 fe80::20c:29ff:fe66:59c3/64 scope link
       valid_lft forever preferred_lft forever
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 42:e5:3c:f7:7c:5f brd ff:ff:ff:ff:ff:ff
5: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 12:eb:4e:54:12:44 brd ff:ff:ff:ff:ff:ff
6: br-ex: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ea:99:9c:c2:f0:48 brd ff:ff:ff:ff:ff:ff
7: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 06:63:61:6e:54:4e brd ff:ff:ff:ff:ff:ff
8: br-data: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether d6:c9:2b:44:33:4c brd ff:ff:ff:ff:ff:ff
Connection to 10.0.12.74 closed.
(venv) ubuntu@os-client:~$ juju ssh nova-compute/1 ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:2d:6e:18 brd ff:ff:ff:ff:ff:ff
    inet 10.0.12.80/22 brd 10.0.15.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe2d:6e18/64 scope link
       valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:2d:6e:22 brd ff:ff:ff:ff:ff:ff
    inet6 2408:211:c221:a600:20c:29ff:fe2d:6e22/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 2591803sec preferred_lft 604603sec
    inet6 fe80::20c:29ff:fe2d:6e22/64 scope link
       valid_lft forever preferred_lft forever
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 6e:db:bd:31:8a:5b brd ff:ff:ff:ff:ff:ff
5: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether da:26:5d:8e:c9:4a brd ff:ff:ff:ff:ff:ff
6: br-int: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN group default qlen 1000
    link/ether a6:86:5a:fc:61:45 brd ff:ff:ff:ff:ff:ff
7: br-ex: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 5a:4f:99:c5:e3:4c brd ff:ff:ff:ff:ff:ff
8: br-data: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether aa:b1:fa:63:53:4d brd ff:ff:ff:ff:ff:ff
51: qbr86829de3-ae: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000
    link/ether 32:f1:84:6a:12:53 brd ff:ff:ff:ff:ff:ff
52: qvo86829de3-ae@qvb86829de3-ae: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master ovs-system state UP group default qlen 1000
    link/ether 06:c6:5f:77:3e:2d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::4c6:5fff:fe77:3e2d/64 scope link
       valid_lft forever preferred_lft forever
53: qvb86829de3-ae@qvo86829de3-ae: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue master qbr86829de3-ae state UP group default qlen 1000
    link/ether 32:f1:84:6a:12:53 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::30f1:84ff:fe6a:1253/64 scope link
       valid_lft forever preferred_lft forever
54: tap86829de3-ae: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc fq_codel master qbr86829de3-ae state UNKNOWN group default qlen 1000
    link/ether fe:16:3e:44:71:4d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc16:3eff:fe44:714d/64 scope link
       valid_lft forever preferred_lft forever
55: vxlan_sys_4789: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65000 qdisc noqueue master ovs-system state UNKNOWN group default qlen 1000
    link/ether 72:0f:de:d5:24:f8 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::700f:deff:fed5:24f8/64 scope link
       valid_lft forever preferred_lft forever
Connection to 10.0.12.80 closed.
(venv) ubuntu@os-client:~$ juju ssh nova-compute/2 ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:8f:ad:93 brd ff:ff:ff:ff:ff:ff
    inet 10.0.12.82/22 brd 10.0.15.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe8f:ad93/64 scope link
       valid_lft forever preferred_lft forever
3: ens34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:8f:ad:9d brd ff:ff:ff:ff:ff:ff
    inet6 2408:211:c221:a600:20c:29ff:fe8f:ad9d/64 scope global dynamic mngtmpaddr noprefixroute
       valid_lft 2591797sec preferred_lft 604597sec
    inet6 fe80::20c:29ff:fe8f:ad9d/64 scope link
       valid_lft forever preferred_lft forever
4: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 32:66:d3:83:50:97 brd ff:ff:ff:ff:ff:ff
5: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether c2:96:f2:63:ef:4c brd ff:ff:ff:ff:ff:ff
6: br-ex: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 12:bb:3b:8b:b2:48 brd ff:ff:ff:ff:ff:ff
7: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ee:6c:9a:33:f9:4f brd ff:ff:ff:ff:ff:ff
8: br-data: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 42:5b:de:bb:06:44 brd ff:ff:ff:ff:ff:ff
Connection to 10.0.12.82 closed.
(venv) ubuntu@os-client:~$ juju ssh nova-compute/0 sudo ovs-vsctl show
aa549baa-ed5c-43cb-8d65-cec047c40e15
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-data
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port phy-br-data
            Interface phy-br-data
                type: patch
                options: {peer=int-br-data}
        Port br-data
            Interface br-data
                type: internal
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port int-br-data
            Interface int-br-data
                type: patch
                options: {peer=phy-br-data}
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "2.11.0"
Connection to 10.0.12.74 closed.
(venv) ubuntu@os-client:~$ juju ssh nova-compute/1 sudo ovs-vsctl show
0d8941aa-f3c2-46da-a3c9-a45ba818baf5
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-data
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port phy-br-data
            Interface phy-br-data
                type: patch
                options: {peer=int-br-data}
        Port br-data
            Interface br-data
                type: internal
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "vxlan-0a000c4d"
            Interface "vxlan-0a000c4d"
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="10.0.12.80", out_key=flow, remote_ip="10.0.12.77"}
        Port "vxlan-0a000c54"
            Interface "vxlan-0a000c54"
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="10.0.12.80", out_key=flow, remote_ip="10.0.12.84"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port "vxlan-0a000c56"
            Interface "vxlan-0a000c56"
                type: vxlan
                options: {df_default="true", egress_pkt_mark="0", in_key=flow, local_ip="10.0.12.80", out_key=flow, remote_ip="10.0.12.86"}
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "qvo86829de3-ae"
            tag: 10
            Interface "qvo86829de3-ae"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-data
            Interface int-br-data
                type: patch
                options: {peer=phy-br-data}
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.11.0"
Connection to 10.0.12.80 closed.
(venv) ubuntu@os-client:~$ juju ssh nova-compute/2 sudo ovs-vsctl show
2129e1c4-0aa6-4765-a9da-01224564786e
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port int-br-data
            Interface int-br-data
                type: patch
                options: {peer=phy-br-data}
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-data
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port phy-br-data
            Interface phy-br-data
                type: patch
                options: {peer=int-br-data}
        Port br-data
            Interface br-data
                type: internal
    ovs_version: "2.11.0"
Connection to 10.0.12.82 closed.
  1. The provider network interface ens34 is attached to Open vSwitch (bridge br-ex on the neutron-gateway nodes).
  2. The tenant network is carried over VXLAN tunnels by Open vSwitch; each vxlan port name encodes the remote endpoint IP in hex (e.g. vxlan-0a000c4d terminates at 10.0.12.77).
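The VXLAN mesh can be summarized by pulling the tunnel endpoints out of the `ovs-vsctl show` output. A minimal sketch against a saved copy of that output (the file path and trimmed sample are illustrative; in practice pipe `juju ssh ... sudo ovs-vsctl show` into the file):

```shell
# Extract the remote VXLAN endpoints from saved `ovs-vsctl show` output.
cat > /tmp/ovs-show.txt <<'EOF'
        Port "vxlan-0a000c4d"
            Interface "vxlan-0a000c4d"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.12.86", out_key=flow, remote_ip="10.0.12.77"}
        Port "vxlan-0a000c54"
            Interface "vxlan-0a000c54"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="10.0.12.86", out_key=flow, remote_ip="10.0.12.84"}
EOF
# One line per tunnel peer.
grep -o 'remote_ip="[0-9.]*"' /tmp/ovs-show.txt | cut -d'"' -f2
```

Each gateway and compute node should list every other tunnel endpoint as a peer, forming a full mesh.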

Metadata agent service

This service provides metadata to instances for cloud-init. The same mechanism is also used internally by public cloud providers.
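Inside an instance, the metadata is fetched from http://169.254.169.254, as the transcript below shows. As an offline sketch of the jq parsing used there, here is jq run against a trimmed sample of the meta_data.json returned in this walkthrough (the sample file and its path are illustrative):

```shell
# Parse a saved copy of the instance metadata with jq.
cat > /tmp/meta_data.json <<'EOF'
{"uuid": "4466efaf-c9ae-44be-8b02-66cbbd388824",
 "hostname": "hello-horizon.novalocal",
 "availability_zone": "nova"}
EOF
jq -r '.hostname' /tmp/meta_data.json   # hello-horizon.novalocal
jq -r '.uuid' /tmp/meta_data.json       # 4466efaf-c9ae-44be-8b02-66cbbd388824
```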

(venv) ubuntu@os-client:~$ alias skipssh='ssh -o "UserKnownHostsFile=/dev/null" -o "StrictHostKeyChecking=no"'
(venv) ubuntu@os-client:~$ skipssh ubuntu@198.51.100.165 -i ~/.ssh/lasthope.pemem
Warning: Permanently added '198.51.100.165' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 20.04 LTS (GNU/Linux 5.4.0-33-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  System information as of Sun May 31 02:04:57 UTC 2020

  System load:  0.0               Processes:             95
  Usage of /:   8.2% of 19.21GB   Users logged in:       0
  Memory usage: 16%               IPv4 address for ens3: 172.16.2.194
  Swap usage:   0%


0 updates can be installed immediately.
0 of these updates are security updates.


Last login: Sun May 31 01:22:32 2020
ubuntu@hello-horizon:~$ sudo apt install jq
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  libjq1 libonig5
The following NEW packages will be installed:
  jq libjq1 libonig5
0 upgraded, 3 newly installed, 0 to remove and 0 not upgraded.
Need to get 313 kB of archives.
After this operation, 1,062 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://nova.clouds.archive.ubuntu.com/ubuntu focal/universe amd64 libonig5 amd64 6.9.4-1 [142 kB]
Get:2 http://nova.clouds.archive.ubuntu.com/ubuntu focal/universe amd64 libjq1 amd64 1.6-1 [121 kB]
Get:3 http://nova.clouds.archive.ubuntu.com/ubuntu focal/universe amd64 jq amd64 1.6-1 [50.2 kB]
Fetched 313 kB in 2s (176 kB/s)
Selecting previously unselected package libonig5:amd64.
(Reading database ... 94099 files and directories currently installed.)
Preparing to unpack .../libonig5_6.9.4-1_amd64.deb ...
Unpacking libonig5:amd64 (6.9.4-1) ...
Selecting previously unselected package libjq1:amd64.
Preparing to unpack .../libjq1_1.6-1_amd64.deb ...
Unpacking libjq1:amd64 (1.6-1) ...
Selecting previously unselected package jq.
Preparing to unpack .../archives/jq_1.6-1_amd64.deb ...
Unpacking jq (1.6-1) ...
Setting up libonig5:amd64 (6.9.4-1) ...
Setting up libjq1:amd64 (1.6-1) ...
Setting up jq (1.6-1) ...
Processing triggers for man-db (2.9.1-1) ...
Processing triggers for libc-bin (2.31-0ubuntu9) ...
ubuntu@hello-horizon:~$
ubuntu@hello-horizon:~$ curl http://169.254.169.254/latest/meta-data/public-ipv4
198.51.100.165
ubuntu@hello-horizon:~$ curl http://169.254.169.254/openstack/latest/meta_data.json
{"uuid": "4466efaf-c9ae-44be-8b02-66cbbd388824", "public_keys": {"lasthope-key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDD3J5YjGc+XeSygZpElCha6JuZxyIP/HE3eVE6ENWyup0vnIlJXHt9DfLy99+tpRDFZBhzfg+0gQ5AQ/CxBk5XCYVHN+pQ54wlROtg23DrUpGpieYz+K87+Mk7wZH3O0WG0aKVEt2w5nZlvvk8xluxQyGvn4ZBLRuKHxpidvVeKMdacIae9Ldgz1R3OvpgiahPVg4vwTJsEYK6GmMH8TMzhDa3waU3Mvz239LI2EdGf5LBe2yHFMugZM3S3dBSGmdkIqYDShlconLNpppoEpxU9j34IXLp3dfztd1u2v1xOQovAEWSJMWGIeAhfC8xoRxX5AQ41ejk6bYIH7J9yL5r/Ya4B43LdOkEeK+pXqz9OmPy4Ra56GXfVcIBp3a50EztjNryC6rLEuiNJaPYR1JSmvY6mGJdZJ/RqugbsC1rG8D4Der1yIr8VPN4TfRLLr5DotAoeqoQdQn4cmn6B5ajqS1qvNLo/7imVkq6FU1vE3rUzmCAi2i1jjBx0BQjYuCjPEs7T6Zv1rYRk+0twCZOlNn1Mwf9UVjS6ARcDOM2L4t7t1nl7gFAaWlFYQKSd9CYsfU7wGktzYmodKlHPF9g11AQ2YB/I9MBZdxFA6dclh0rQgRin0cUuAjJJa/DaSVroOTfFVoErVuWA7UjI0T6u6hr3FCdZZhBczaK3UoNCw== openstack-lasthope-key\n"}, "keys": [{"name": "lasthope-key", "type": "ssh", "data": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDD3J5YjGc+XeSygZpElCha6JuZxyIP/HE3eVE6ENWyup0vnIlJXHt9DfLy99+tpRDFZBhzfg+0gQ5AQ/CxBk5XCYVHN+pQ54wlROtg23DrUpGpieYz+K87+Mk7wZH3O0WG0aKVEt2w5nZlvvk8xluxQyGvn4ZBLRuKHxpidvVeKMdacIae9Ldgz1R3OvpgiahPVg4vwTJsEYK6GmMH8TMzhDa3waU3Mvz239LI2EdGf5LBe2yHFMugZM3S3dBSGmdkIqYDShlconLNpppoEpxU9j34IXLp3dfztd1u2v1xOQovAEWSJMWGIeAhfC8xoRxX5AQ41ejk6bYIH7J9yL5r/Ya4B43LdOkEeK+pXqz9OmPy4Ra56GXfVcIBp3a50EztjNryC6rLEuiNJaPYR1JSmvY6mGJdZJ/RqugbsC1rG8D4Der1yIr8VPN4TfRLLr5DotAoeqoQdQn4cmn6B5ajqS1qvNLo/7imVkq6FU1vE3rUzmCAi2i1jjBx0BQjYuCjPEs7T6Zv1rYRk+0twCZOlNn1Mwf9UVjS6ARcDOM2L4t7t1nl7gFAaWlFYQKSd9CYsfU7wGktzYmodKlHPF9g11AQ2YB/I9MBZdxFA6dclh0rQgRin0cUuAjJJa/DaSVroOTfFVoErVuWA7UjI0T6u6hr3FCdZZhBczaK3UoNCw== openstack-lasthope-key\n"}], "hostname": "hello-horizon.novalocal", "name": "hello-horizon", "launch_index": 0, "availability_zone": "nova", "random_seed": 
"FVrhqMFx83wyY2+rQM9hbbT0PIc1FRRJMeVy0623pZaV4cVAwERcn2eZt0ekH2n1UC/aSrX5pz5vtDLKYxbYo/CAYS3ciDCoKISF+lVFDMgwA9GekQHQ0IEJZzBtz9IaIUJaazDX5MAOiFnTPM+b1++BGwcYgsJ7qYnEhEXDCAdh74ssLlEIwVUBgYRo2VXFdNQrNimzwCcN4seMJhm8SArf810LPq6+pWDDiDFFGNz1ZHzn1sThmim0HYO25h8HCzjA9qvY0HDyFA73J1yktN0djs0iy2aT9DWNORVY3yUAWfHAYGty0ecXNzvc3w7FEi+zuKU0aF1IqliSl4FnZDOkz+QScnpf++FzUIEMWxDaWcEvbwJOs4ZpOg2cV9Q5CRhGQ/DxavfIU2TSgIJ++PpvyGs5TX76eHbggiGzSfKhrTo9t19n6aG9Oo/XzsJuZcZA+nSnz82Gm3LP5TSBUL48VKJuvMKS1sCFwZVTMzVpMR6MIs0xwD54/3Ebyym07WUpkcfSKLfa7ikCe0E7nqwsfizb5PbZr4RfuvCjDzP1w2A8RwE8Ah/PbnmzEDZREmlZIPOlN9ymeh21BWBIsh9GRWRqESr9T7nQf7U7JTDsS8WrnC8FwokotkywG6HufqUyCRvqBvYpeOyLIWyMPcACkmbSDvUvk2pBkk3f4Fk=", "project_id": "e4d11ab5285f4a5eae38aa0b72d9dc34", "devices": []}ubuntu@hello-horizon:~$ curl http://169.254.169.254/openstack/latest/meta_data.json | jq '.'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2526  100  2526    0     0  19136      0 --:--:-- --:--:-- --:--:-- 19136
{
  "uuid": "4466efaf-c9ae-44be-8b02-66cbbd388824",
  "public_keys": {
    "lasthope-key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDD3J5YjGc+XeSygZpElCha6JuZxyIP/HE3eVE6ENWyup0vnIlJXHt9DfLy99+tpRDFZBhzfg+0gQ5AQ/CxBk5XCYVHN+pQ54wlROtg23DrUpGpieYz+K87+Mk7wZH3O0WG0aKVEt2w5nZlvvk8xluxQyGvn4ZBLRuKHxpidvVeKMdacIae9Ldgz1R3OvpgiahPVg4vwTJsEYK6GmMH8TMzhDa3waU3Mvz239LI2EdGf5LBe2yHFMugZM3S3dBSGmdkIqYDShlconLNpppoEpxU9j34IXLp3dfztd1u2v1xOQovAEWSJMWGIeAhfC8xoRxX5AQ41ejk6bYIH7J9yL5r/Ya4B43LdOkEeK+pXqz9OmPy4Ra56GXfVcIBp3a50EztjNryC6rLEuiNJaPYR1JSmvY6mGJdZJ/RqugbsC1rG8D4Der1yIr8VPN4TfRLLr5DotAoeqoQdQn4cmn6B5ajqS1qvNLo/7imVkq6FU1vE3rUzmCAi2i1jjBx0BQjYuCjPEs7T6Zv1rYRk+0twCZOlNn1Mwf9UVjS6ARcDOM2L4t7t1nl7gFAaWlFYQKSd9CYsfU7wGktzYmodKlHPF9g11AQ2YB/I9MBZdxFA6dclh0rQgRin0cUuAjJJa/DaSVroOTfFVoErVuWA7UjI0T6u6hr3FCdZZhBczaK3UoNCw== openstack-lasthope-key\n"
  },
  "keys": [
    {
      "name": "lasthope-key",
      "type": "ssh",
      "data": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDD3J5YjGc+XeSygZpElCha6JuZxyIP/HE3eVE6ENWyup0vnIlJXHt9DfLy99+tpRDFZBhzfg+0gQ5AQ/CxBk5XCYVHN+pQ54wlROtg23DrUpGpieYz+K87+Mk7wZH3O0WG0aKVEt2w5nZlvvk8xluxQyGvn4ZBLRuKHxpidvVeKMdacIae9Ldgz1R3OvpgiahPVg4vwTJsEYK6GmMH8TMzhDa3waU3Mvz239LI2EdGf5LBe2yHFMugZM3S3dBSGmdkIqYDShlconLNpppoEpxU9j34IXLp3dfztd1u2v1xOQovAEWSJMWGIeAhfC8xoRxX5AQ41ejk6bYIH7J9yL5r/Ya4B43LdOkEeK+pXqz9OmPy4Ra56GXfVcIBp3a50EztjNryC6rLEuiNJaPYR1JSmvY6mGJdZJ/RqugbsC1rG8D4Der1yIr8VPN4TfRLLr5DotAoeqoQdQn4cmn6B5ajqS1qvNLo/7imVkq6FU1vE3rUzmCAi2i1jjBx0BQjYuCjPEs7T6Zv1rYRk+0twCZOlNn1Mwf9UVjS6ARcDOM2L4t7t1nl7gFAaWlFYQKSd9CYsfU7wGktzYmodKlHPF9g11AQ2YB/I9MBZdxFA6dclh0rQgRin0cUuAjJJa/DaSVroOTfFVoErVuWA7UjI0T6u6hr3FCdZZhBczaK3UoNCw== openstack-lasthope-key\n"
    }
  ],
  "hostname": "hello-horizon.novalocal",
  "name": "hello-horizon",
  "launch_index": 0,
  "availability_zone": "nova",
  "random_seed": "i/3auYVany5QDxiR6NnnKWKg3L8ZGUX+v0Huj+KmcceUJ2ZGiXpdqyKlrUhxRde3LhEfXWYstM6v64ZQy/cHm2By2DWA4HMCw2xpCDhdkzfAxgNsBXBNK9YxzLDCGDOXGc+z76WHOTN7YmB+A7xVc+XpQb4JK8w8uGUk6TKEZKvLVesyYB2vhvgKxjMXQIDJ6DpkhDuqnlcbnYBg3qw1Ln3WByWlr2hwlcovdxrSL0HBHc0dd4o5p8crFjvOJ3m1aQiYwQHHFQFTO1ryD4Ry7NGYxmFpFBZzRruJpR10L5vllaoGe7jryoLN3JuyLAFP/W8IzKB2Zt0IBPbVQZKrUy9NRVkj2oPBh/N34inPqLr+ASluz7kR36mzvhSSrSgYO0ko89Z6+G43MYzJKfpSQxxYz6ULEXOekgQgW1f/xt88JOYEdCEwi/dlNqiCF8PljYgp7McUQtkQvqgy/yyjjhfrSJGW88kAwY36/3BLcftx4+yWoz/NKl6AcIu9DShfB9sOnNlvm5krZY9oNnx92Nks/C723vcAaxxaKh99mma7CsiV8bOYBPpvkj1PXyQL3UJBdVY8R8MeEXvggzrjUytQF/OJnivb884HNbcupELscIxZzofXJYwG0UKE+9MCbnAbJXFQDs+bYyTeb12ZbNDOf8ZxkXOUDkibv1SxmnU=",
  "project_id": "e4d11ab5285f4a5eae38aa0b72d9dc34",
  "devices": []
}
ubuntu@hello-horizon:~$ curl http://169.254.169.254/openstack/latest/meta_data.json | jq '.uuid'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2526  100  2526    0     0    495      0  0:00:05  0:00:05 --:--:--   648
"4466efaf-c9ae-44be-8b02-66cbbd388824"

Common Issues

After a shutdown or snapshot, Percona XtraDB Cluster does not come up even when all controller nodes are up

See the Cold Boot section.

Check the Galera saved state on each unit:

ubuntu@os-client:~/work/openstack$ juju run --application mysql cat /var/lib/percona-xtradb-cluster/grastate.dat
- Stdout: |
    # GALERA saved state
    version: 2.1
    uuid:    c5132ae1-9cc5-11ea-8109-8b9a8e1bbad7
    seqno:   98150
    safe_to_bootstrap: 1
  UnitId: mysql/0
- Stdout: |
    # GALERA saved state
    version: 2.1
    uuid:    c5132ae1-9cc5-11ea-8109-8b9a8e1bbad7
    seqno:   98150
    safe_to_bootstrap: 0
  UnitId: mysql/1
- Stdout: |
    # GALERA saved state
    version: 2.1
    uuid:    c5132ae1-9cc5-11ea-8109-8b9a8e1bbad7
    seqno:   98119
    safe_to_bootstrap: 0
  UnitId: mysql/2
juju run-action mysql/0 bootstrap-pxc --wait
juju run-action mysql/1 notify-bootstrapped --wait
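The unit to bootstrap is the one whose grastate.dat has the highest seqno (above, mysql/0 and mysql/1 are both at 98150, and mysql/0 also reports safe_to_bootstrap: 1). A minimal sketch of picking it mechanically, using local copies of the grastate output (file names are illustrative; in practice collect the files with `juju run` as shown above):

```shell
# Pick the Galera unit with the highest saved seqno.
mkdir -p /tmp/grastate
printf 'seqno:   98150\n' > /tmp/grastate/mysql-0
printf 'seqno:   98150\n' > /tmp/grastate/mysql-1
printf 'seqno:   98119\n' > /tmp/grastate/mysql-2
# Highest seqno first; the top line names a bootstrap candidate.
grep -H 'seqno' /tmp/grastate/* | tr -d ' ' | sort -t: -k3 -rn | head -n1
```

When seqnos tie, prefer the unit that reports safe_to_bootstrap: 1.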

I have tried this recovery process many times; if the shutdown timing goes wrong, the cluster can become unrecoverable forever…

So you should back up Percona XtraDB Cluster and your deployed OpenStack.

Ideally, implement a regular backup task.

In my experience, the Percona XtraDB Cluster recovery process is the most complicated and difficult task here. The cluster may recover and work partially, but not completely; I have never recovered a cluster back to a fully healthy state…

Resource: res_mysql_2e4d5b2_vip not yet configured

TODO: document how to recover from "Resource: res_mysql_2e4d5b2_vip not yet configured".

Services not running that should be: *

juju run-action mysql/0 resume --wait
seq 0 2 | xargs -I{} juju run-action nova-cloud-controller/{} resume --wait
seq 0 2 | xargs -I{} juju ssh rabbitmq-server/{} sudo systemctl reboot
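The `seq 0 2 | xargs -I{}` idiom above simply expands one command per unit index. A quick illustration with echo standing in for juju:

```shell
# Expand a per-unit command for units 0..2 (echo stands in for juju).
seq 0 2 | xargs -I{} echo juju run-action nova-cloud-controller/{} resume --wait
```

This prints three command lines, one per nova-cloud-controller unit.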

ceph-osd units never join the cluster

  1. First, boot the Ceph OSD nodes and give them time to stabilize.
  2. Then boot the ceph-mon (Ceph Monitor) nodes.
  3. Wait a few minutes; the OSDs join the cluster gradually.

Reference Deployment

https://api.jujucharms.com/charmstore/v5/bundle/openstack-base-64/archive/repo-info

commit-sha-1: 68e8037e562a6bee2a8222523f45d693386bc79e
commit-short: 68e8037
branch: master
remote: https://github.com/openstack-charmers/openstack-bundles
info-generated: Wed 04 Dec 2019 07:17:25 PM UTC
note: This file should exist only in a built or released bundle artifact (not in the bundle source code tree).

git checkout 68e8037 -b bundle/openstack-stein-bionic-64

ubuntu@os-client:~/work/openstack/tmp/openstack-bundles$ git checkout 68e8037 -b bundle/openstack-stein-bionic-64
Switched to a new branch 'bundle/openstack-stein-bionic-64'

ubuntu@os-client:~/work/openstack/tmp/openstack-bundles$ git log -n1
commit 68e8037e562a6bee2a8222523f45d693386bc79e (HEAD -> bundle/openstack-stein-bionic-64)
Merge: 73e7771 f583d1c
Author: Frode Nordahl <frode.nordahl@canonical.com>
Date:   Wed Dec 4 17:24:53 2019 +0100

    Merge pull request #146 from pmatulis/improve-and-correct-readme

    Improve and correct readme

Misc Notes

(venv) ubuntu@os-client:~/work/openstack$ openstack image list --format json | jq -r '.[].ID' | xargs -n1 echo openstack image delete
openstack image delete 57441468-5d77-4e10-8fbc-b69c87a1ba9e
openstack image delete 8a9358da-1e5c-4e09-a4cb-5cd8fa06dfd1
openstack image delete dfbabf9a-f9d5-4d69-b6c2-f671ff5c1f87
openstack image delete 8b093082-0b27-403a-bf98-8d5f2ada90fd
(venv) ubuntu@os-client:~/work/openstack$ openstack image list --format json | jq -r '.[].ID' | xargs -n1 openstack --os-cloud default image delete
(venv) ubuntu@os-client:~/work/openstack$ openstack image list

(venv) ubuntu@os-client:~/work/openstack$

Workaround for launching instances in previous releases

In the current OpenStack Stein release, openstackclient (OSC) is more operator- and developer-friendly.

In previous releases (Pike…?), OSC required the following method to prepare an instance: create a volume and specify the network explicitly.

openstack --os-cloud lasthope network show LastHopeNetwork --format json
{
  "admin_state_up": true,
  "availability_zone_hints": [],
  "availability_zones": [
    "nova"
  ],
  "created_at": "2020-05-30T09:43:53Z",
  "description": "",
  "dns_domain": null,
  "id": "42b9c40d-d549-45fa-a133-a22a5498254f",
  "ipv4_address_scope": null,
  "ipv6_address_scope": null,
  "is_default": null,
  "is_vlan_transparent": null,
  "location": {
    "cloud": "lasthope",
    "region_name": "RegionOne",
    "zone": null,
    "project": {
      "id": "e4d11ab5285f4a5eae38aa0b72d9dc34",
      "name": "LastHopeProject",
      "domain_id": null,
      "domain_name": "LastHopeDomain"
    }
  },
  "mtu": 1450,
  "name": "LastHopeNetwork",
  "port_security_enabled": false,
  "project_id": "e4d11ab5285f4a5eae38aa0b72d9dc34",
  "provider:network_type": null,
  "provider:physical_network": null,
  "provider:segmentation_id": null,
  "qos_policy_id": null,
  "revision_number": 2,
  "router:external": false,
  "segments": null,
  "shared": false,
  "status": "ACTIVE",
  "subnets": [
    "7e64e941-cd38-497c-8ac7-65a324d07749"
  ],
  "tags": [],
  "updated_at": "2020-05-30T09:51:39Z"
}

Get the network ID using the JSON processor jq. jq is a powerful tool for slicing JSON, in the spirit of sed, awk, and grep.

sudo apt install jq

jq

openstack --os-cloud lasthope network show LastHopeNetwork --format json | jq -r '.id' | tr -d '\n'
42b9c40d-d549-45fa-a133-a22a5498254f

Amazing…
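The extracted ID is usually captured into a shell variable for reuse in later commands. A sketch against a saved copy of the `network show` JSON above (the file path and trimmed sample are illustrative):

```shell
# Capture the network ID into a variable from saved JSON output.
cat > /tmp/LastHopeNetwork.json <<'EOF'
{"id": "42b9c40d-d549-45fa-a133-a22a5498254f", "name": "LastHopeNetwork"}
EOF
NET_ID=$(jq -r '.id' /tmp/LastHopeNetwork.json)
echo "$NET_ID"   # 42b9c40d-d549-45fa-a133-a22a5498254f
```

The variable can then be passed to e.g. `--nic net-id=$NET_ID` when creating a server.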

  • Create a volume as the instance's persistent root volume
openstack --os-cloud lasthope volume create \
  --image ubuntu-server-20.04-x86_64-focal \
  --size 100 \
  --bootable \
  vol-lasthope-web-root
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2020-05-30T12:40:49.000000           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 5a8ab603-02dc-4117-805d-0ad8b2910363 |
| multiattach         | False                                |
| name                | vol-lasthope-web-root                |
| properties          |                                      |
| replication_status  | None                                 |
| size                | 100                                  |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | None                                 |
| updated_at          | None                                 |
| user_id             | 57ff551a119b4844af104fad89ec5007     |
+---------------------+--------------------------------------+
openstack --os-cloud lasthope volume list
(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack --os-cloud lasthope volume list
+--------------------------------------+-----------------------+----------+------+-------------+
| ID                                   | Name                  | Status   | Size | Attached to |
+--------------------------------------+-----------------------+----------+------+-------------+
| 5a8ab603-02dc-4117-805d-0ad8b2910363 | vol-lasthope-web-root | creating |  100 |             |
+--------------------------------------+-----------------------+----------+------+-------------+
(venv) ubuntu@os-client:~/work/openstack/workspace$ openstack --os-cloud lasthope volume list
+--------------------------------------+-----------------------+-----------+------+-------------+
| ID                                   | Name                  | Status    | Size | Attached to |
+--------------------------------------+-----------------------+-----------+------+-------------+
| 5a8ab603-02dc-4117-805d-0ad8b2910363 | vol-lasthope-web-root | available |  100 |             |
+--------------------------------------+-----------------------+-----------+------+-------------+
openstack --os-cloud lasthope volume show 5a8ab603-02dc-4117-805d-0ad8b2910363
+------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                        | Value                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
+------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| attachments                  | []                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |
| availability_zone            | nova                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
| bootable                     | true                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
| consistencygroup_id          | None                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
| created_at                   | 2020-05-30T12:40:49.000000                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               |
| description                  | None                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
| encrypted                    | False                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
| id                           | 5a8ab603-02dc-4117-805d-0ad8b2910363                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
| multiattach                  | False                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
| name                         | vol-lasthope-web-root                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
| os-vol-tenant-attr:tenant_id | e4d11ab5285f4a5eae38aa0b72d9dc34                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |
| properties                   |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
| replication_status           | None                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
| size                         | 100                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
| snapshot_id                  | None                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
| source_volid                 | None                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
| status                       | available                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
| type                         | None                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
| updated_at                   | 2020-05-30T12:41:43.000000                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               |
| user_id                      | 57ff551a119b4844af104fad89ec5007                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |
| volume_image_metadata        | {'signature_verified': 'False', 'architecture': 'x86_64', 'hw_disk_bus': 'virtio', 'hw_vif_model': 'virtio', 'owner_specified.openstack.md5': 'a0a570ad022bbd1cd1711acbc171d0b3', 'owner_specified.openstack.object': 'images/ubuntu-server-20.04-x86_64-focal', 'owner_specified.openstack.sha256': 'f8fea6a80ced88eabe9d41eb61d4d9970348c025fe303583183ab81347ceea82', 'image_id': '75b410a1-9f09-4050-8444-fcea1bcea0a3', 'image_name': 'ubuntu-server-20.04-x86_64-focal', 'checksum': 'a0a570ad022bbd1cd1711acbc171d0b3', 'container_format': 'bare', 'disk_format': 'qcow2', 'min_disk': '0', 'min_ram': '0', 'size': '533135360'} |
+------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
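The `volume show` table above confirms `status: available` and `bootable: true` (the volume was built from the Focal image, per `volume_image_metadata`). When scripting such checks, `-f value -c <field>` prints bare field values instead of the full table. A minimal sketch; the `volume_fields` wrapper is hypothetical and assumes the same `lasthope` cloud:

```shell
# Hypothetical wrapper: print only the status and bootable fields of a
# volume, one value per line, suitable for shell conditionals.
volume_fields() {
  openstack --os-cloud lasthope volume show "$1" -f value -c status -c bootable
}

# Usage (volume ID from the output above):
#   volume_fields 5a8ab603-02dc-4117-805d-0ad8b2910363
```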


tech/cloud/openstack/stein/deploy-charmed-openstack-ha-bionic/deploy-charmed-openstack-ha-bionic.1590894895.txt.gz · Last modified: 2020/05/31 12:14 by wnoguchi