The OSP Director Operator creates a set of Custom Resource Definitions on top of OpenShift to manage resources normally created by the TripleO undercloud. These CRDs are split into two types for hardware provisioning and software configuration.
The OSP Director Operator is installed and managed via OLM, the Operator Lifecycle Manager. OLM is installed automatically with your OpenShift installation. To obtain the latest OSP Director Operator snapshot you need to create the appropriate CatalogSource, OperatorGroup, and Subscription to drive the installation with OLM:
oc new-project openstack
apiVersion : operators.coreos.com/v1alpha1
kind : CatalogSource
metadata :
name : osp-director-operator-index
namespace : openstack
spec :
sourceType : grpc
image : quay.io/openstack-k8s-operators/osp-director-operator-index:0.0.1
apiVersion : operators.coreos.com/v1
kind : OperatorGroup
metadata :
name : " osp-director-operator-group "
namespace : openstack
spec :
targetNamespaces :
- openstack
apiVersion : operators.coreos.com/v1alpha1
kind : Subscription
metadata :
name : osp-director-operator-subscription
namespace : openstack
spec :
config :
env :
- name : WATCH_NAMESPACE
value : openstack,openshift-machine-api,openshift-sriov-network-operator
source : osp-director-operator-index
sourceNamespace : openstack
name : osp-director-operator
startingCSV : osp-director-operator.v0.0.1
channel : alpha
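If you save the three manifests above into a single file (separated by ---), they can be created in one step; the file name below is arbitrary. OLM then drives the install, and the operator's CSV should eventually report a Succeeded phase:
oc apply -f osp-director-operator-olm.yaml
oc get csv -n openstack -w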
We have a script to automate the installation here with OLM for a specific tag: script to automate the installation
NOTE: At some point in the future we may integrate into OperatorHub so that the OSP Director Operator is available automatically in your OCP installation's default OLM catalog sources.
Create a base RHEL data volume prior to deploying OpenStack. This will be used by the Controller VMs provisioned via OpenShift Virtualization. The approach to doing this is as follows:
Install the virtctl client tool:
sudo subscription-manager repos --enable=cnv-2.6-for-rhel-8-x86_64-rpms
sudo dnf install -y kubevirt-virtctl
Download the RHEL QCOW2 base image:
curl -O http://download.devel.redhat.com/brewroot/packages/rhel-guest-image/8.4/1168/images/rhel-guest-image-8.4-1168.x86_64.qcow2
If the rhel-guest-image is used as the base, remove the net.ifnames=0 kernel parameter from the image to get biosdev network interface naming:
dnf install -y libguestfs-tools-c
virt-customize -a <rhel guest image> --run-command 'sed -i -e "s/^\(kernelopts=.*\)net.ifnames=0 \(.*\)/\1\2/" /boot/grub2/grubenv'
virt-customize -a <rhel guest image> --run-command 'sed -i -e "s/^\(GRUB_CMDLINE_LINUX=.*\)net.ifnames=0 \(.*\)/\1\2/" /etc/default/grub'
Add the following to /etc/hosts:
<cluster ingress VIP> cdi-uploadproxy-openshift-cnv.apps.<cluster name>.<domain name>
Upload the image to OpenShift Virtualization using virtctl:
virtctl image-upload dv openstack-base-img -n openstack --size=50Gi --image-path=<local path to image> --storage-class <desired storage class> --insecure
For the storage-class, pick one from those shown by:
oc get storageclass
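Putting it together, a complete upload might look like this (host-nfs-storageclass is a placeholder; substitute a storage class from your cluster):
virtctl image-upload dv openstack-base-img -n openstack --size=50Gi --image-path=./rhel-guest-image-8.4-1168.x86_64.qcow2 --storage-class host-nfs-storageclass --insecure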
Define your OpenStackNetConfig custom resource. At least one network is required for the ctlplane. Optionally you may define multiple networks in the CR to be used with TripleO's network isolation architecture. In addition to the network definitions, OpenStackNetConfig provides information used to define the network configuration policy that attaches any VMs to these networks via OpenShift Virtualization. The following is an example of a simple IPv4 ctlplane network which uses a Linux bridge for its host configuration.
apiVersion : osp-director.openstack.org/v1beta1
kind : OpenStackNetConfig
metadata :
name : openstacknetconfig
spec :
attachConfigurations :
br-osp :
nodeNetworkConfigurationPolicy :
nodeSelector :
node-role.kubernetes.io/worker : " "
desiredState :
interfaces :
- bridge :
options :
stp :
enabled : false
port :
- name : enp7s0
description : Linux bridge with enp7s0 as a port
name : br-osp
state : up
type : linux-bridge
mtu : 1500
# optional DnsServers list
dnsServers :
- 192.168.25.1
# optional DnsSearchDomains list
dnsSearchDomains :
- osptest.test.metalkube.org
- some.other.domain
# DomainName of the OSP environment
domainName : osptest.test.metalkube.org
networks :
- name : Control
nameLower : ctlplane
subnets :
- name : ctlplane
ipv4 :
allocationEnd : 192.168.25.250
allocationStart : 192.168.25.100
cidr : 192.168.25.0/24
gateway : 192.168.25.1
attachConfiguration : br-osp
# optional: (OSP17 only) specify all phys networks with optional MAC address prefix, used to
# create static OVN Bridge MAC address mappings. A unique OVN bridge MAC address per node is
# dynamically allocated by creating an OpenStackMACAddress resource, which creates a MAC per physnet per node.
# - If PhysNetworks is not provided, the tripleo default physnet datacentre gets created.
# - If the macPrefix is not specified for a physnet, the default macPrefix "fa:16:3a" is used.
# - If PreserveReservations is not specified, the default is true.
ovnBridgeMacMappings :
preserveReservations : True
physNetworks :
- macPrefix : fa:16:3a
name : datacentre
- macPrefix : fa:16:3b
name : datacentre2
# optional: configure static mapping for the networks per nodes. If there is none, a random gets created
reservations :
controller-0 :
macReservations :
datacentre : fa:16:3a:aa:aa:aa
datacentre2 : fa:16:3b:aa:aa:aa
compute-0 :
macReservations :
datacentre : fa:16:3a:bb:bb:bb
datacentre2 : fa:16:3b:bb:bb:bb
If you write the above YAML into a file called networkconfig.yaml you can create the OpenStackNetConfig via this command:
oc create -n openstack -f networkconfig.yaml
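As an optional sanity check, the CR and the per-network resources it renders should appear shortly afterwards (osnet is the shortname used later in this document):
oc get -n openstack openstacknetconfig
oc get -n openstack osnet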
To use network isolation with VLANs, add the VLAN ID to the spec of the network definition:
apiVersion : osp-director.openstack.org/v1beta1
kind : OpenStackNetConfig
metadata :
name : openstacknetconfig
spec :
attachConfigurations :
br-osp :
nodeNetworkConfigurationPolicy :
nodeSelector :
node-role.kubernetes.io/worker : " "
desiredState :
interfaces :
- bridge :
options :
stp :
enabled : false
port :
- name : enp7s0
description : Linux bridge with enp7s0 as a port
name : br-osp
state : up
type : linux-bridge
mtu : 1500
br-ex :
nodeNetworkConfigurationPolicy :
nodeSelector :
node-role.kubernetes.io/worker : " "
desiredState :
interfaces :
- bridge :
options :
stp :
enabled : false
port :
- name : enp6s0
description : Linux bridge with enp6s0 as a port
name : br-ex-osp
state : up
type : linux-bridge
mtu : 1500
# optional DnsServers list
dnsServers :
- 192.168.25.1
# optional DnsSearchDomains list
dnsSearchDomains :
- osptest.test.metalkube.org
- some.other.domain
# DomainName of the OSP environment
domainName : osptest.test.metalkube.org
networks :
- name : Control
nameLower : ctlplane
subnets :
- name : ctlplane
ipv4 :
allocationEnd : 192.168.25.250
allocationStart : 192.168.25.100
cidr : 192.168.25.0/24
gateway : 192.168.25.1
attachConfiguration : br-osp
- name : InternalApi
nameLower : internal_api
mtu : 1350
subnets :
- name : internal_api
attachConfiguration : br-osp
vlan : 20
ipv4 :
allocationEnd : 172.17.0.250
allocationStart : 172.17.0.10
cidr : 172.17.0.0/24
- name : External
nameLower : external
subnets :
- name : external
ipv6 :
allocationEnd : 2001:db8:fd00:1000:ffff:ffff:ffff:fffe
allocationStart : 2001:db8:fd00:1000::10
cidr : 2001:db8:fd00:1000::/64
gateway : 2001:db8:fd00:1000::1
attachConfiguration : br-ex
- name : Storage
nameLower : storage
mtu : 1350
subnets :
- name : storage
ipv4 :
allocationEnd : 172.18.0.250
allocationStart : 172.18.0.10
cidr : 172.18.0.0/24
vlan : 30
attachConfiguration : br-osp
- name : StorageMgmt
nameLower : storage_mgmt
mtu : 1350
subnets :
- name : storage_mgmt
ipv4 :
allocationEnd : 172.19.0.250
allocationStart : 172.19.0.10
cidr : 172.19.0.0/24
vlan : 40
attachConfiguration : br-osp
- name : Tenant
nameLower : tenant
vip : False
mtu : 1350
subnets :
- name : tenant
ipv4 :
allocationEnd : 172.20.0.250
allocationStart : 172.20.0.10
cidr : 172.20.0.0/24
vlan : 50
attachConfiguration : br-osp
When using VLANs for network isolation with a linux-bridge:
NOTE: To use jumbo frames on a bridge, create a configuration for the underlying device to configure the correct MTU:
apiVersion : osp-director.openstack.org/v1beta1
kind : OpenStackNetConfig
metadata :
name : openstacknetconfig
spec :
attachConfigurations :
br-osp :
nodeNetworkConfigurationPolicy :
nodeSelector :
node-role.kubernetes.io/worker : " "
desiredState :
interfaces :
- bridge :
options :
stp :
enabled : false
port :
- name : enp7s0
description : Linux bridge with enp7s0 as a port
name : br-osp
state : up
type : linux-bridge
mtu : 9000
- name : enp7s0
description : Configuring enp7s0 on workers
type : ethernet
state : up
mtu : 9000
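The attachConfigurations above are realized as kubernetes-nmstate policies on the workers. As a sketch, assuming the standard nmstate shortnames, their rollout can be checked with:
oc get nncp
oc get nnce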
Create ConfigMaps which define any custom Heat environments, Heat templates, and a custom roles file (the name must be roles_data.yaml) used for TripleO network configuration. Any administrator-defined Heat environment files can be provided in the ConfigMap and will be used as a convention in later steps to create the Heat stack for the Overcloud deployment. As a convention, each OSP Director installation uses 2 ConfigMaps, named heat-env-config and tripleo-tarball-config, to provide this information. The heat-env-config configmap holds all deployment environment files, where each file gets added as -e file.yaml to the openstack stack create command. A good example is:
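A hypothetical layout of the directory used to build the heat-env-config ConfigMap; the file names below are only placeholders, and every file in the directory ends up as an extra -e argument:
ls heat-env-config/
# cloud-names.yaml  deploy-identifier.yaml  storage-backend.yaml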
A "tarball config map" can be used to provide (binary) tarballs which are extracted in the tripleo-heat-templates when the playbooks are generated. Each tarball should contain a directory of files relative to the root of a t-h-t directory. You will want to store things like the following examples in a config map containing custom tarballs:
Net-Config files
Network environment files
NOTE: Net-Config files for the virtual machines are created by the operator, but can be overwritten using the "tarball config map". To overwrite a pre-rendered Net-Config, use the <role lowercase>-nic-template.yaml file name for OSP16.2 or <role lowercase>-nic-template.j2 for OSP17. NOTE: network interface names for the VMs created by the OpenStackVMSet controller are ordered alphabetically by the network names assigned to the VM role. An exception is the default network interface of the VM pod, which is always the first interface. The resulting interfaces section of the virtual machine definition will look like this:
interfaces :
- masquerade : {}
model : virtio
name : default
- bridge : {}
model : virtio
name : ctlplane
- bridge : {}
model : virtio
name : external
- bridge : {}
model : virtio
name : internalapi
- bridge : {}
model : virtio
name : storage
- bridge : {}
model : virtio
name : storagemgmt
- bridge : {}
model : virtio
name : tenant
With this ordering, the ctlplane interface is nic2, external is nic3, and so on.
NOTE: FIP traffic does not pass to a VLAN tenant network with ML2/OVN and DVR. DVR is enabled by default. If you need VLAN tenant networks with OVN, you can disable DVR. To disable DVR, include the following lines in an environment file:
parameter_defaults :
NeutronEnableDVR : false
Support for "Distributed VLAN traffic in OVN" is tracked in the bug "Add support in tripleo for distributed VLAN traffic in OVN" (https://bugs.launchpad.net/tripleo/+bug/1881593).
Git repo config map: this ConfigMap contains the SSH key and URL for the git repo used to store the generated playbooks (below).
Once you customize the above templates/examples for your environment you can create ConfigMaps for both the 'heat-env-config' and 'tripleo-tarball-config' (tarballs) ConfigMaps by using these example commands on the files containing each respective configmap type (one directory per configmap type):
# create the configmap for heat-env-config
oc create configmap -n openstack heat-env-config --from-file=heat-env-config/ --dry-run=client -o yaml | oc apply -f -
# create the configmap containing a tarball of t-h-t network config files. NOTE: these files may overwrite default t-h-t files so keep this in mind when naming them.
cd < dir with net config files >
tar -cvzf tarball-config.tar.gz *.yaml
oc create configmap -n openstack tripleo-tarball-config --from-file=tarball-config.tar.gz
# create the Git secret used for the repo where Ansible playbooks are stored
oc create secret generic git-secret -n openstack --from-file=git_ssh_identity= < path to git id_rsa > --from-literal=git_url= < your git server URL (git@...) >
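Before moving on, it can be worth verifying that both ConfigMaps and the Git secret exist:
oc get -n openstack configmap heat-env-config tripleo-tarball-config
oc get -n openstack secret git-secret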
(Optional) Create a Secret for your OpenStackControlPlane. This Secret will provide the default password for your virtual machines and baremetal hosts. If no Secret is provided, you will only be able to log in with the SSH keys defined in the osp-controlplane-ssh-keys Secret.
apiVersion : v1
kind : Secret
metadata :
name : userpassword
namespace : openstack
data :
# 12345678
NodeRootPassword : MTIzNDU2Nzg=
If you write the above YAML into a file called ctlplane-secret.yaml you can create the Secret via this command:
oc create -n openstack -f ctlplane-secret.yaml
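The NodeRootPassword value is plain base64, not encryption; the value in the example above can be generated like this:
echo -n '12345678' | base64
# MTIzNDU2Nzg=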
Define your OpenStackControlPlane custom resource. The OpenStackControlPlane custom resource provides a central place to create and scale VMs used for the OSP Controllers, along with any additional vmsets for your deployment. At least 1 Controller VM is required for a basic demo installation, and 3 Controller VMs are recommended per the OSP High Availability guidelines.
NOTE : If the rhel-guest-image is used as base to deploy the OpenStackControlPlane virtual machines, make sure to remove the net.ifnames=0 kernel parameter from the image to have the biosdev network interface naming. This can be done like:
dnf install -y libguestfs-tools-c
virt-customize -a bms-image.qcow2 --run-command 'sed -i -e "s/^\(kernelopts=.*\)net.ifnames=0 \(.*\)/\1\2/" /boot/grub2/grubenv'
apiVersion : osp-director.openstack.org/v1beta1
kind : OpenStackControlPlane
metadata :
name : overcloud
namespace : openstack
spec :
openStackClientImageURL : quay.io/openstack-k8s-operators/rhosp16-openstack-tripleoclient:16.2_20210713.1
openStackClientNetworks :
- ctlplane
- external
- internalapi
# openStackClientStorageClass must support RWX
# https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
openStackClientStorageClass : host-nfs-storageclass
passwordSecret : userpassword
gitSecret : git-secret
virtualMachineRoles :
controller :
roleName : Controller
roleCount : 3
networks :
- ctlplane
- internalapi
- external
- tenant
- storage
- storagemgmt
cores : 6
memory : 12
rootDisk :
diskSize : 50
baseImageVolumeName : openstack-base-img
# storageClass must support RWX to be able to live migrate VMs
storageClass : host-nfs-storageclass
storageAccessMode : ReadWriteMany
# When using OpenShift Virtualization with OpenShift Container Platform Container Storage,
# specify RBD block mode persistent volume claims (PVCs) when creating virtual machine disks. With virtual machine disks,
# RBD block mode volumes are more efficient and provide better performance than Ceph FS or RBD filesystem-mode PVCs.
# To specify RBD block mode PVCs, use the 'ocs-storagecluster-ceph-rbd' storage class and VolumeMode: Block.
storageVolumeMode : Filesystem
# Optional
# DedicatedIOThread - Disks with dedicatedIOThread set to true will be allocated an exclusive thread.
# This is generally useful if a specific Disk is expected to have heavy I/O traffic, e.g. a database spindle.
dedicatedIOThread : false
additionalDisks :
# name must be unique and must not be rootDisk
- name : dataDisk1
diskSize : 100
storageClass : host-nfs-storageclass
storageAccessMode : ReadWriteMany
storageVolumeMode : Filesystem
# Optional block storage settings
# IOThreadsPolicy - IO thread policy for the domain. Currently valid policies are shared and auto.
# However, if any disk requests a dedicated IOThread, ioThreadsPolicy will be enabled and default to shared.
# When ioThreadsPolicy is set to auto IOThreads will also be "isolated" from the vCPUs and placed on the same physical CPU as the QEMU emulator thread.
# An ioThreadsPolicy of shared indicates that KubeVirt should use one thread that will be shared by all disk devices.
ioThreadsPolicy : auto
# Block Multi-Queue is a framework for the Linux block layer that maps Device I/O queries to multiple queues.
# This splits I/O processing up across multiple threads, and therefore multiple CPUs. libvirt recommends that the
# number of queues used should match the number of CPUs allocated for optimal performance.
blockMultiQueue : false
If you write the above YAML into a file called openstackcontrolplane.yaml you can create the OpenStackControlPlane via this command:
oc create -f openstackcontrolplane.yaml
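To follow progress, a couple of hedged checks (osctlplane is the shortname used later in this document; vms comes from OpenShift Virtualization):
oc get -n openstack osctlplane overcloud
oc get -n openstack vms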
NOTE: With a pod anti-affinity rule (preferredDuringSchedulingIgnoredDuringExecution), VMs within the same VMSet (VM role) are distributed across available worker nodes. As a result, if no other worker resources are available (e.g. a worker reboot during an update), multiple VMs of a role may end up on the same worker node. There is no automatic live migration if a node comes back up after maintenance/reboot; the VM is only relocated on the next scheduling request.
Define an OpenStackBaremetalSet to scale out OSP Compute hosts. The OpenStackBaremetalSet resource can be used to define and scale Compute resources, and optionally to define and scale out baremetal hosts for other types of TripleO roles. The example below defines a single Compute host to be created.
NOTE: If the rhel-guest-image is used as base to deploy the OpenStackBaremetalSet compute nodes, make sure to remove the net.ifnames=0 kernel parameter from the image to have the biosdev network interface naming. This can be done like:
dnf install -y libguestfs-tools-c
virt-customize -a bms-image.qcow2 --run-command 'sed -i -e "s/^\(kernelopts=.*\)net.ifnames=0 \(.*\)/\1\2/" /boot/grub2/grubenv'
apiVersion : osp-director.openstack.org/v1beta1
kind : OpenStackBaremetalSet
metadata :
name : compute
namespace : openstack
spec :
# How many nodes to provision
count : 1
# The image to install on the provisioned nodes. NOTE: needs to be accessible on the OpenShift Metal3 provisioning network.
baseImageUrl : http://host/images/rhel-image-8.4.x86_64.qcow2
# NOTE: these are automatically created via the OpenStackControlplane CR above
deploymentSSHSecret : osp-controlplane-ssh-keys
# The interface on the nodes that will be assigned an IP from the mgmtCidr
ctlplaneInterface : enp7s0
# Networks to associate with this host
networks :
- ctlplane
- internalapi
- tenant
- storage
roleName : Compute
passwordSecret : userpassword
If you write the above YAML into a file called compute.yaml you can create the OpenStackBaremetalSet via this command:
oc create -f compute.yaml
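Provisioning can be followed via the OpenStackBaremetalSet status and the underlying BareMetalHost resources, mirroring commands used later in this document:
oc get -n openstack osbms compute -o json | jq .status.provisioningStatus
oc get -n openshift-machine-api bmh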
Node registration (registering the overcloud systems to the required channels)
Wait for the above resources to finish deploying (Compute and ControlPlane). Once the resources finish deploying, proceed with node registration.
Use the procedure described in 5.9. Running Ansible-based registration manually to do so.
NOTE: We recommend manual registration, as it works regardless of the base image chosen. If you are using overcloud-full as your base deployment image then automatic RHSM registration could be used via the t-h-t rhsm.yaml environment role/file as an alternative to this approach.
oc rsh openstackclient
bash
cd /home/cloud-admin
< create the ansible playbook for the overcloud nodes - e.g. rhsm.yaml >
# register the overcloud nodes to required repositories
ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory ./rhsm.yaml
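A minimal sketch of what such an rhsm.yaml could look like, using the stock redhat_subscription and rhsm_repository Ansible modules; the credentials and repository list are placeholders and deployment-specific:
cat <<'EOF' > rhsm.yaml
---
- name: Register overcloud nodes and enable required repositories
  hosts: all
  become: yes
  tasks:
    - name: Register with Red Hat Subscription Manager
      redhat_subscription:
        username: "<RHSM username>"
        password: "<RHSM password>"
        force_register: yes
    - name: Enable the required repositories
      rhsm_repository:
        name:
          - rhel-8-for-x86_64-baseos-eus-rpms
          - rhel-8-for-x86_64-appstream-eus-rpms
          - openstack-16.2-for-rhel-8-x86_64-rpms
        state: enabled
EOF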
(Optional) Create a roles file. a) Use the openstackclient pod to generate a custom roles file:
oc rsh openstackclient
unset OS_CLOUD
cd /home/cloud-admin/
openstack overcloud roles generate Controller ComputeHCI > roles_data.yaml
exit
b) Copy the custom roles file out of the openstackclient pod:
oc cp openstackclient:/home/cloud-admin/roles_data.yaml roles_data.yaml
Update the tarballConfigMap configmap to add the roles_data.yaml file to the tarball and update the configmap.
NOTE: Make sure to use roles_data.yaml as the file name.
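One way to do this, mirroring the tarball commands used earlier (paths are placeholders):
cp roles_data.yaml <dir with net config files>/
cd <dir with net config files>
tar -cvzf tarball-config.tar.gz *.yaml
oc create configmap -n openstack tripleo-tarball-config --from-file=tarball-config.tar.gz --dry-run=client -o yaml | oc apply -f -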
Define an OpenStackConfigGenerator to generate Ansible playbooks for the OSP cluster deployment.
apiVersion : osp-director.openstack.org/v1beta1
kind : OpenStackConfigGenerator
metadata :
name : default
namespace : openstack
spec :
enableFencing : False
imageURL : quay.io/openstack-k8s-operators/rhosp16-openstack-tripleoclient:16.2_20210713.1
gitSecret : git-secret
heatEnvConfigMap : heat-env-config
tarballConfigMap : tripleo-tarball-config
# (optional) for debugging it is possible to set the interactive mode.
# In this mode the playbooks won't get rendered automatically. Just the environment to start the rendering gets created
# interactive: true
# (optional) provide custom registry or specific container versions via the ephemeralHeatSettings
# ephemeralHeatSettings:
# heatAPIImageURL: quay.io/tripleotraincentos8/centos-binary-heat-api:current-tripleo
# heatEngineImageURL: quay.io/tripleotraincentos8/centos-binary-heat-engine:current-tripleo
# mariadbImageURL: quay.io/tripleotraincentos8/centos-binary-mariadb:current-tripleo
# rabbitImageURL: quay.io/tripleotraincentos8/centos-binary-rabbitmq:current-tripleo
If you write the above YAML into a file called generator.yaml you can create the OpenStackConfigGenerator via this command:
oc create -f generator.yaml
The osconfiggenerator created above will automatically generate playbooks any time you scale or modify the ConfigMaps for your OSP deployment. Generating these playbooks takes several minutes. You can monitor the osconfiggenerator's status condition for it to finish.
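A sketch for polling that status condition; the exact layout of the status fields may differ between operator versions:
oc get -n openstack openstackconfiggenerator default -o yaml | grep -A 5 ' conditions:'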
Obtain the latest OpenStackConfigVersion (Ansible playbooks). Select the hash/digest of the latest OpenStackConfigVersion to use in the next step.
oc get -n openstack --sort-by {.metadata.creationTimestamp} osconfigversions -o json
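As a convenience sketch, the name (hash) of the newest OpenStackConfigVersion can be pulled directly with jq:
oc get -n openstack --sort-by {.metadata.creationTimestamp} osconfigversions -o json | jq -r '.items[-1].metadata.name'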
NOTE: OpenStackConfigVersion objects also have a 'git diff' attribute that can be used to easily compare the changes between Ansible playbook versions.
Create an OpenStackDeploy (executes the Ansible playbooks)
apiVersion : osp-director.openstack.org/v1beta1
kind : OpenStackDeploy
metadata :
name : default
spec :
configVersion : n5fch96h548h75hf4hbdhb8hfdh676h57bh96h5c5h59hf4h88h...
configGenerator : default
If you write the above YAML into a file called deploy.yaml you can create the OpenStackDeploy via this command:
oc create -f deploy.yaml
As the deployment runs, it creates a Kubernetes Job to execute the Ansible playbooks. You can tail the logs of this job/pod to watch the Ansible playbooks run. Additionally, you can manually access the executed Ansible playbooks by logging into the 'openstackclient' pod and going into the /home/cloud-admin/work// directory. There you will find the ansible playbooks along with the ansible.log file for the running deployment.
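For example, the job and its pod can be located and followed like this (the job name is generated, so list first):
oc get -n openstack jobs
oc logs -f -n openstack job/<deploy job name>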
It is possible to deploy TripleO's Hyper-Converged Infrastructure, where Compute nodes also act as Ceph OSD nodes. The workflow to install Ceph via TripleO is:
Make sure to use quay.io/openstack-k8s-operators/rhosp16-openstack-tripleoclient:16.2_20210521.1 or later for the openstackclient openStackClientImageURL.
Have Compute nodes with extra disks to be used as OSDs, and create a BaremetalSet for the ComputeHCI role which has the storagemgmt network in addition to the default Compute networks and the IsHCI parameter set to true.
NOTE: If the rhel-guest-image is used as base to deploy the OpenStackBaremetalSet compute nodes, make sure to remove the net.ifnames=0 kernel parameter from the image to have the biosdev network interface naming. This can be done like:
dnf install -y libguestfs-tools-c
virt-customize -a bms-image.qcow2 --run-command 'sed -i -e "s/^\(kernelopts=.*\)net.ifnames=0 \(.*\)/\1\2/" /boot/grub2/grubenv'
apiVersion : osp-director.openstack.org/v1beta1
kind : OpenStackBaremetalSet
metadata :
name : computehci
namespace : openstack
spec :
# How many nodes to provision
count : 2
# The image to install on the provisioned nodes
baseImageUrl : http://host/images/rhel-image-8.4.x86_64.qcow2
# The secret containing the SSH pub key to place on the provisioned nodes
deploymentSSHSecret : osp-controlplane-ssh-keys
# The interface on the nodes that will be assigned an IP from the mgmtCidr
ctlplaneInterface : enp7s0
# Networks to associate with this host
networks :
- ctlplane
- internalapi
- tenant
- storage
- storagemgmt
roleName : ComputeHCI
passwordSecret : userpassword
Create a custom roles file and the ConfigMaps as explained in Deploying OpenStack once you have the OSP Director Operator installed, including the ComputeHCI role, the /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml environment, and any other customization to the TripleO deploy custom configMap, e.g. storage-backend.yaml:
resource_registry :
OS::TripleO::Services::CephMgr : deployment/ceph-ansible/ceph-mgr.yaml
OS::TripleO::Services::CephMon : deployment/ceph-ansible/ceph-mon.yaml
OS::TripleO::Services::CephOSD : deployment/ceph-ansible/ceph-osd.yaml
OS::TripleO::Services::CephClient : deployment/ceph-ansible/ceph-client.yaml
parameter_defaults :
# needed for now because of the repo used to create tripleo-deploy image
CephAnsibleRepo : " rhelosp-ceph-4-tools "
CephAnsiblePlaybookVerbosity : 3
CinderEnableIscsiBackend : false
CinderEnableRbdBackend : true
CinderBackupBackend : ceph
CinderEnableNfsBackend : false
NovaEnableRbdBackend : true
GlanceBackend : rbd
CinderRbdPoolName : " volumes "
NovaRbdPoolName : " vms "
GlanceRbdPoolName : " images "
CephPoolDefaultPgNum : 32
CephPoolDefaultSize : 2
CephAnsibleDisksConfig :
devices :
- ' /dev/sdb '
- ' /dev/sdc '
- ' /dev/sdd '
osd_scenario : lvm
osd_objectstore : bluestore
CephAnsibleExtraConfig :
is_hci : true
CephConfigOverrides :
rgw_swift_enforce_content_length : true
rgw_swift_versioning_enabled : true
Once you have customized the above templates/examples for your environment, create/update the ConfigMaps as explained in Deploying OpenStack once you have the OSP Director Operator installed.
Define an OpenStackConfigGenerator as explained in Deploying OpenStack once you have the OSP Director Operator installed and specify the generated roles file. NOTE: Make sure to use quay.io/openstack-k8s-operators/rhosp16-openstack-tripleoclient:16.2_20210521.1 or later for the osconfiggenerator imageURL.
Wait for the OpenStackConfigGenerator to finish the playbook rendering job.
Obtain the hash/digest of the latest OpenStackConfigVersion.
Create an OpenStackDeploy for the specified OpenStackConfigVersion. This will deploy the Ansible playbooks.
Removing a baremetal Compute host requires the following steps:
In case a Compute node gets removed, disable the Compute service on the outgoing node on the overcloud to prevent the node from scheduling new instances:
openstack compute service list
openstack compute service set < hostname > nova-compute --disable
Annotate the BMH resource for deletion:
oc annotate -n openshift-machine-api bmh/openshift-worker-3 osp-director.openstack.org/delete-host=true --overwrite
The annotation status is reflected in the OSBaremetalSet/OSVMSet via the annotatedForDeletion parameter:
oc get osbms computehci -o json | jq .status
{
" baremetalHosts " : {
" computehci-0 " : {
" annotatedForDeletion " : true,
" ctlplaneIP " : " 192.168.25.105/24 " ,
" hostRef " : " openshift-worker-3 " ,
" hostname " : " computehci-0 " ,
" networkDataSecretName " : " computehci-cloudinit-networkdata-openshift-worker-3 " ,
" provisioningState " : " provisioned " ,
" userDataSecretName " : " computehci-cloudinit-userdata-openshift-worker-3 "
},
" computehci-1 " : {
" annotatedForDeletion " : false,
" ctlplaneIP " : " 192.168.25.106/24 " ,
" hostRef " : " openshift-worker-4 " ,
" hostname " : " computehci-1 " ,
" networkDataSecretName " : " computehci-cloudinit-networkdata-openshift-worker-4 " ,
" provisioningState " : " provisioned " ,
" userDataSecretName " : " computehci-cloudinit-userdata-openshift-worker-4 "
}
},
" provisioningStatus " : {
" readyCount " : 2,
" reason " : " All requested BaremetalHosts have been provisioned " ,
" state " : " provisioned "
}
}
Reducing the resource count of the OSBaremetalSet will trigger the corresponding controller to handle the resource deletion:
oc patch osbms computehci --type=merge --patch ' {"spec":{"count":1}} '
As a result:
oc get osnet ctlplane -o json | jq .status.roleReservations.ComputeHCI
{
" addToPredictableIPs " : true,
" reservations " : [
{
" deleted " : true,
" hostname " : " computehci-0 " ,
" ip " : " 192.168.25.105 " ,
" vip " : false
},
{
" deleted " : false,
" hostname " : " computehci-1 " ,
" ip " : " 192.168.25.106 " ,
" vip " : false
}
]
}
This results in the following behavior:
Right now, if a Compute node was removed, there are several leftover entries registered on the OpenStack control plane which are not cleaned up automatically. To clean them up, perform the following steps.
openstack compute service list
openstack compute service delete < service-id >
openstack network agent list
for AGENT in $( openstack network agent list --host < scaled-down-node > -c ID -f value ) ; do openstack network agent delete $AGENT ; done
Removing a VM requires the following steps:
If the VM hosts any OSP service which should be disabled before the removal, do so.
Annotate the VM resource for deletion:
oc annotate -n openstack vm/controller-1 osp-director.openstack.org/delete-host=true --overwrite
Reduce the roleCount of the virtualMachineRoles in the OpenStackControlPlane CR. The corresponding controller handles the resource deletion:
oc patch osctlplane overcloud --type=merge --patch ' {"spec":{"virtualMachineRoles":{"<RoleName>":{"roleCount":2}}}} '
As a result, this triggers the following behavior:
If the VM did host any OSP service which should be removed, delete the service using the corresponding openstack command.
It is possible to deploy TripleO's routed networks (spine/leaf networking) architecture to configure the overcloud networks. Use the subnets parameter to define additional leaf subnets of a base network.
A current limitation is that there can only be one provisioning network for Metal3.
The workflow to install an overcloud using multiple subnets is:
Define your OpenStackNetConfig custom resource and specify all subnets for the overcloud networks. The operator will render the TripleO network_data.yaml for the OSP release in use.
apiVersion : osp-director.openstack.org/v1beta1
kind : OpenStackNetConfig
metadata :
name : openstacknetconfig
spec :
attachConfigurations :
br-osp :
nodeNetworkConfigurationPolicy :
nodeSelector :
node-role.kubernetes.io/worker : " "
desiredState :
interfaces :
- bridge :
options :
stp :
enabled : false
port :
- name : enp7s0
description : Linux bridge with enp7s0 as a port
name : br-osp
state : up
type : linux-bridge
mtu : 1500
br-ex :
nodeNetworkConfigurationPolicy :
nodeSelector :
node-role.kubernetes.io/worker : " "
desiredState :
interfaces :
- bridge :
options :
stp :
enabled : false
port :
- name : enp6s0
description : Linux bridge with enp6s0 as a port
name : br-ex-osp
state : up
type : linux-bridge
mtu : 1500
# optional DnsServers list
dnsServers :
- 192.168.25.1
# optional DnsSearchDomains list
dnsSearchDomains :
- osptest.test.metalkube.org
- some.other.domain
# DomainName of the OSP environment
domainName : osptest.test.metalkube.org
networks :
- name : Control
nameLower : ctlplane
subnets :
- name : ctlplane
ipv4 :
allocationEnd : 192.168.25.250
allocationStart : 192.168.25.100
cidr : 192.168.25.0/24
gateway : 192.168.25.1
attachConfiguration : br-osp
- name : InternalApi
nameLower : internal_api
mtu : 1350
subnets :
- name : internal_api
ipv4 :
allocationEnd : 172.17.0.250
allocationStart : 172.17.0.10
cidr : 172.17.0.0/24
routes :
- destination : 172.17.1.0/24
nexthop : 172.17.0.1
- destination : 172.17.2.0/24
nexthop : 172.17.0.1
vlan : 20
attachConfiguration : br-osp
- name : internal_api_leaf1
ipv4 :
allocationEnd : 172.17.1.250
allocationStart : 172.17.1.10
cidr : 172.17.1.0/24
routes :
- destination : 172.17.0.0/24
nexthop : 172.17.1.1
- destination : 172.17.2.0/24
nexthop : 172.17.1.1
vlan : 21
attachConfiguration : br-osp
- name : internal_api_leaf2
ipv4 :
allocationEnd : 172.17.2.250
allocationStart : 172.17.2.10
cidr : 172.17.2.0/24
routes :
- destination : 172.17.1.0/24
nexthop : 172.17.2.1
- destination : 172.17.0.0/24
nexthop : 172.17.2.1
vlan : 22
attachConfiguration : br-osp
- name : External
nameLower : external
subnets :
- name : external
ipv4 :
allocationEnd : 10.0.0.250
allocationStart : 10.0.0.10
cidr : 10.0.0.0/24
gateway : 10.0.0.1
attachConfiguration : br-ex
- name : Storage
nameLower : storage
mtu : 1350
subnets :
- name : storage
ipv4 :
allocationEnd : 172.18.0.250
allocationStart : 172.18.0.10
cidr : 172.18.0.0/24
routes :
- destination : 172.18.1.0/24
nexthop : 172.18.0.1
- destination : 172.18.2.0/24
nexthop : 172.18.0.1
vlan : 30
attachConfiguration : br-osp
- name : storage_leaf1
ipv4 :
allocationEnd : 172.18.1.250
allocationStart : 172.18.1.10
cidr : 172.18.1.0/24
routes :
- destination : 172.18.0.0/24
nexthop : 172.18.1.1
- destination : 172.18.2.0/24
nexthop : 172.18.1.1
vlan : 31
attachConfiguration : br-osp
- name : storage_leaf2
ipv4 :
allocationEnd : 172.18.2.250
allocationStart : 172.18.2.10
cidr : 172.18.2.0/24
routes :
- destination : 172.18.0.0/24
nexthop : 172.18.2.1
- destination : 172.18.1.0/24
nexthop : 172.18.2.1
vlan : 32
attachConfiguration : br-osp
- name : StorageMgmt
nameLower : storage_mgmt
mtu : 1350
subnets :
- name : storage_mgmt
ipv4 :
allocationEnd : 172.19.0.250
allocationStart : 172.19.0.10
cidr : 172.19.0.0/24
routes :
- destination : 172.19.1.0/24
nexthop : 172.19.0.1
- destination : 172.19.2.0/24
nexthop : 172.19.0.1
vlan : 40
attachConfiguration : br-osp
- name : storage_mgmt_leaf1
ipv4 :
allocationEnd : 172.19.1.250
allocationStart : 172.19.1.10
cidr : 172.19.1.0/24
routes :
- destination : 172.19.0.0/24
nexthop : 172.19.1.1
- destination : 172.19.2.0/24
nexthop : 172.19.1.1
vlan : 41
attachConfiguration : br-osp
- name : storage_mgmt_leaf2
ipv4 :
allocationEnd : 172.19.2.250
allocationStart : 172.19.2.10
cidr : 172.19.2.0/24
routes :
- destination : 172.19.0.0/24
nexthop : 172.19.2.1
- destination : 172.19.1.0/24
nexthop : 172.19.2.1
vlan : 42
attachConfiguration : br-osp
- name : Tenant
nameLower : tenant
vip : False
mtu : 1350
subnets :
- name : tenant
ipv4 :
allocationEnd : 172.20.0.250
allocationStart : 172.20.0.10
cidr : 172.20.0.0/24
routes :
- destination : 172.20.1.0/24
nexthop : 172.20.0.1
- destination : 172.20.2.0/24
nexthop : 172.20.0.1
vlan : 50
attachConfiguration : br-osp
- name : tenant_leaf1
ipv4 :
allocationEnd : 172.20.1.250
allocationStart : 172.20.1.10
cidr : 172.20.1.0/24
routes :
- destination : 172.20.0.0/24
nexthop : 172.20.1.1
- destination : 172.20.2.0/24
nexthop : 172.20.1.1
vlan : 51
attachConfiguration : br-osp
- name : tenant_leaf2
ipv4 :
allocationEnd : 172.20.2.250
allocationStart : 172.20.2.10
cidr : 172.20.2.0/24
routes :
- destination : 172.20.0.0/24
nexthop : 172.20.2.1
- destination : 172.20.1.0/24
nexthop : 172.20.2.1
vlan : 52
attachConfiguration : br-osp
If you write the above YAML into a file called networkconfig.yaml you can create the OpenStackNetConfig via this command:
oc create -n openstack -f networkconfig.yaml
...
# ##############################################################################
# Role: ComputeLeaf1 #
# ##############################################################################
- name : ComputeLeaf1
description : |
Basic ComputeLeaf1 Node role
# Create external Neutron bridge (unset if using ML2/OVS without DVR)
tags :
- external_bridge
networks :
InternalApi :
subnet : internal_api_leaf1
Tenant :
subnet : tenant_leaf1
Storage :
subnet : storage_leaf1
HostnameFormatDefault : ' %stackname%-novacompute-leaf1-%index% '
...
# ##############################################################################
# Role: ComputeLeaf2 #
# ##############################################################################
- name : ComputeLeaf2
description : |
Basic ComputeLeaf2 Node role
# Create external Neutron bridge (unset if using ML2/OVS without DVR)
tags :
- external_bridge
networks :
InternalApi :
subnet : internal_api_leaf2
Tenant :
subnet : tenant_leaf2
Storage :
subnet : storage_leaf2
HostnameFormatDefault : ' %stackname%-novacompute-leaf2-%index% '
...
Update the tarballConfigMap configmap to add the roles_data.yaml file to the tarball and update the configmap.
NOTE: Make sure to use roles_data.yaml as the file name.
OSP 16.2 TripleO NIC templates include the InterfaceRoutes parameter per network by default. The routes parameter rendered in environments/network-environment.yaml (named <Network>Routes) is normally set on the Neutron network host_routes property and added to the role's InterfaceRoutes parameter. Since there is no Neutron, it is required to add the {{network.name}}Routes parameter to the NIC template where needed and concatenate the two lists:
parameters:
...
{{ $net.Name }}Routes:
default: []
description: >
Routes for the storage network traffic.
JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
Unless the default is changed, the parameter is automatically resolved
from the subnet host_routes attribute.
type: json
...
- type: interface
...
routes:
list_concat_unique:
- get_param: {{ $net.Name }}Routes
- get_param: {{ $net.Name }}InterfaceRoutes
The routes subnet information gets automatically rendered into the TripleO environment file environments/network-environment.yaml, which is used by the script that renders the Ansible playbooks. In the NIC templates, therefore, use Routes_<subnet_name>, e.g. StorageRoutes_storage_leaf1, to set the correct routing on the host.
For the ComputeLeaf1 compute role, the NIC template needs to be modified to use these:
...
StorageRoutes_storage_leaf1 :
default : []
description : >
Routes for the storage network traffic.
JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
Unless the default is changed, the parameter is automatically resolved
from the subnet host_routes attribute.
type : json
...
InternalApiRoutes_internal_api_leaf1 :
default : []
description : >
Routes for the internal_api network traffic.
JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
Unless the default is changed, the parameter is automatically resolved
from the subnet host_routes attribute.
type : json
...
TenantRoutes_tenant_leaf1 :
default : []
description : >
Routes for the tenant network traffic.
JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
Unless the default is changed, the parameter is automatically resolved
from the subnet host_routes attribute.
type : json
...
get_param : StorageIpSubnet
routes :
list_concat_unique :
- get_param : StorageRoutes_storage_leaf1
- type : vlan
...
get_param : InternalApiIpSubnet
routes :
list_concat_unique :
- get_param : InternalApiRoutes_internal_api_leaf1
...
get_param : TenantIpSubnet
routes :
list_concat_unique :
- get_param : TenantRoutes_tenant_leaf1
- type : ovs_bridge
...
Update the tarballConfigMap configmap to add the NIC templates and the roles_data.yaml file to the tarball and update the configmap.
NOTE: Make sure to use roles_data.yaml as the file name.
So far, deployment using multiple subnets has only been tested with OSP16.2, and single-subnet deployment is compatible with OSP17.0.
TBD
Make sure to add the newly created NIC templates to the environment file in the resource_registry for the new node roles:
resource_registry :
OS::TripleO::Compute::Net::SoftwareConfig : net-config-two-nic-vlan-compute.yaml
OS::TripleO::ComputeLeaf1::Net::SoftwareConfig : net-config-two-nic-vlan-compute_leaf1.yaml
OS::TripleO::ComputeLeaf2::Net::SoftwareConfig : net-config-two-nic-vlan-compute_leaf2.yaml
At this point we can provision the overcloud.
apiVersion : osp-director.openstack.org/v1beta1
kind : OpenStackControlPlane
metadata :
name : overcloud
namespace : openstack
spec :
gitSecret : git-secret
openStackClientImageURL : registry.redhat.io/rhosp-rhel8/openstack-tripleoclient:16.2
openStackClientNetworks :
- ctlplane
- external
- internal_api
- internal_api_leaf1 # optionally the openstackclient can also be connected to subnets
openStackClientStorageClass : host-nfs-storageclass
passwordSecret : userpassword
domainName : ostest.test.metalkube.org
virtualMachineRoles :
Controller :
roleName : Controller
roleCount : 1
networks :
- ctlplane
- internal_api
- external
- tenant
- storage
- storage_mgmt
cores : 6
memory : 20
rootDisk :
diskSize : 40
baseImageVolumeName : controller-base-img
storageClass : host-nfs-storageclass
storageAccessMode : ReadWriteMany
storageVolumeMode : Filesystem
apiVersion : osp-director.openstack.org/v1beta1
kind : OpenStackBaremetalSet
metadata :
name : computeleaf1
namespace : openstack
spec :
# How many nodes to provision
count : 1
# The image to install on the provisioned nodes
baseImageUrl : http://192.168.111.1/images/rhel-guest-image-8.4-1168.x86_64.qcow2
provisionServerName : openstack
# The secret containing the SSH pub key to place on the provisioned nodes
deploymentSSHSecret : osp-controlplane-ssh-keys
# The interface on the nodes that will be assigned an IP from the mgmtCidr
ctlplaneInterface : enp7s0
# Networks to associate with this host
networks :
- ctlplane
- internal_api_leaf1
- external
- tenant_leaf1
- storage_leaf1
roleName : ComputeLeaf1
passwordSecret : userpassword
apiVersion : osp-director.openstack.org/v1beta1
kind : OpenStackBaremetalSet
metadata :
name : computeleaf2
namespace : openstack
spec :
# How many nodes to provision
count : 1
# The image to install on the provisioned nodes
baseImageUrl : http://192.168.111.1/images/rhel-guest-image-8.4-1168.x86_64.qcow2
provisionServerName : openstack
# The secret containing the SSH pub key to place on the provisioned nodes
deploymentSSHSecret : osp-controlplane-ssh-keys
# The interface on the nodes that will be assigned an IP from the mgmtCidr
ctlplaneInterface : enp7s0
# Networks to associate with this host
networks :
- ctlplane
- internal_api_leaf2
- external
- tenant_leaf2
- storage_leaf2
roleName : ComputeLeaf2
passwordSecret : userpassword
Define an OpenStackConfigGenerator to generate Ansible playbooks for the OSP cluster deployment as in Deploying OpenStack once you have the OSP Director Operator installed, and specify the generated roles file.
As described before in Run the software deployment, check, apply, and register the overcloud nodes to the required repositories, and run the software deployment from inside the openstackclient pod.
The OSP-D operator provides an API to create and restore backups of its current CR, ConfigMap, and Secret configurations. This API consists of two CRDs:
OpenStackBackupRequest
OpenStackBackup
The OpenStackBackupRequest CRD is used to initiate the creation or restoration of a backup, while the OpenStackBackup CRD is used to actually store the CR, ConfigMap, and Secret data that belongs to the operator. This allows for several benefits:
By representing a backup as a single OpenStackBackup CR, the user does not need to manually export/import each piece of the operator's configuration.
To create a new OpenStackBackup, create an OpenStackBackupRequest with its mode set to save in its spec. For example:
apiVersion : osp-director.openstack.org/v1beta1
kind : OpenStackBackupRequest
metadata :
name : openstackbackupsave
namespace : openstack
spec :
mode : save
additionalConfigMaps : []
additionalSecrets : []
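As with the other CRs in this document, write the YAML to a file and create it:
oc create -f openstackbackupsave.yaml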
Spec fields are as follows:
mode: save indicates that this is a request to create a backup.
The additionalConfigMaps and additionalSecrets lists may be used to include supplemental ConfigMaps and Secrets of which the operator is otherwise unaware (i.e. ConfigMaps and Secrets manually created for certain purposes).
The operator will attempt to include all ConfigMaps and Secrets associated with the various CRs (OpenStackControlPlane, OpenStackBaremetalSet, etc) in the namespace, without requiring the user to include them in these additional lists.
Once the OpenStackBackupRequest has been created, monitor its status:
oc get -n openstack osbackuprequest openstackbackupsave
Something like this should appear:
NAME OPERATION SOURCE STATUS COMPLETION TIMESTAMP
openstackbackupsave save Quiescing
The Quiescing state indicates that the operator is waiting for the provisioning state of all OSP-D operator CRs to reach their "finished" equivalent. The time required for this will vary based on the quantity of OSP-D operator CRs and the happenstance of their current provisioning states. NOTE: It is possible that the operator will never fully quiesce due to errors and/or "waiting" states in existing CRs. To see which CRDs/CRs are preventing quiescence, investigate the operator logs. For example:
oc logs < OSP-D operator pod > -c manager -f
...
2022-01-11T18:26:15.180Z INFO controllers.OpenStackBackupRequest Quiesce for save for OpenStackBackupRequest openstackbackupsave is waiting for: [OpenStackBaremetalSet: compute, OpenStackControlPlane: overcloud, OpenStackVMSet: controller]
If the OpenStackBackupRequest enters the Error state, look at its full contents to see the error that was encountered (oc get -n openstack openstackbackuprequest <name> -o yaml).
When the OpenStackBackupRequest has been honored by creating and saving an OpenStackBackup representing the current OSP-D operator configuration, it will enter the Saved state. For example:
oc get -n openstack osbackuprequest
NAME OPERATION SOURCE STATUS COMPLETION TIMESTAMP
openstackbackupsave save Saved 2022-01-11T19:12:58Z
The associated OpenStackBackup will be created as well. For example:
oc get -n openstack osbackup
NAME AGE
openstackbackupsave-1641928378 6m7s
To restore an OpenStackBackup, create an OpenStackBackupRequest with its mode set to restore in its spec. For example:
apiVersion : osp-director.openstack.org/v1beta1
kind : OpenStackBackupRequest
metadata :
name : openstackbackuprestore
namespace : openstack
spec :
mode : restore
restoreSource : openstackbackupsave-1641928378
Spec fields are as follows:
mode: restore indicates that this is a request to restore an existing OpenStackBackup.
restoreSource indicates which OpenStackBackup should be restored.
With mode set to restore, the OSP-D operator will take the contents of the restoreSource OpenStackBackup and attempt to apply them against the existing CRs, ConfigMaps and Secrets currently present within the namespace. Thus it will overwrite any existing OSP-D operator resources in the namespace with the same names as those in the OpenStackBackup, and will create new resources for those not currently found in the namespace. If desired, mode can be set to cleanRestore to completely wipe out the existing OSP-D operator resources within the namespace before attempting the restoration, such that all resources within the OpenStackBackup are created completely anew.
Once the OpenStackBackupRequest has been created, monitor its status:
oc get -n openstack osbackuprequest openstackbackuprestore
Something like this should appear to indicate that all resources from the OpenStackBackup
are being applied against the cluster:
NAME OPERATION SOURCE STATUS COMPLETION TIMESTAMP
openstackbackuprestore restore openstackbackupsave-1641928378 Loading
Then, once all resources have been loaded, the operator will begin reconciling to attempt to provision all resources:
NAME OPERATION SOURCE STATUS COMPLETION TIMESTAMP
openstackbackuprestore restore openstackbackupsave-1641928378 Reconciling
If the OpenStackBackupRequest enters the Error state, look at its full contents to see the error that was encountered (oc get -n openstack openstackbackuprequest <name> -o yaml).
When the OpenStackBackupRequest has been honored by fully restoring the OpenStackBackup, it will enter the Restored state. For example:
oc get -n openstack osbackuprequest
NAME OPERATION SOURCE STATUS COMPLETION TIMESTAMP
openstackbackuprestore restore openstackbackupsave-1641928378 Restored 2022-01-12T13:48:57Z
At this point, all resources contained within the chosen OpenStackBackup should be restored and fully provisioned.
The OSP Director Operator automatically creates a ConfigMap after each OSDeploy resource finishes execution. This ConfigMap is named after the OSDeploy resource name with a tripleo-exports- prefix. For example, tripleo-exports-default would be the ConfigMap name for the 'default' OSDeploy resource. Each ConfigMap contains 2 YAML files:
Filename | Description | TripleO command equivalent
---|---|---
ctlplane-export.yaml | Used with multiple stacks for DCN | overcloud export
ctlplane-export-filter.yaml | Used with multiple stacks with 'controller' stacks | overcloud cell export
Use the command below to extract the YAML files from the ConfigMap. Once extracted, the YAML files can be added to the custom Heat parameters on OpenStackConfigGenerator resources.
oc extract cm/tripleo-exports-default
NOTE: The OSP Director Operator does not yet generate exports for Ceph stacks.
If required, it is possible to change the CPU/RAM of an OpenStackVMSet configured via the OpenStackControlPlane. The workflow is as follows:
For example, change the Controller virtualMachineRole to have 8 cores and 22GB of RAM:
oc patch -n openstack osctlplane overcloud --type= ' json ' -p= ' [{"op": "add", "path": "/spec/virtualMachineRoles/controller/cores", "value": 8 }] '
oc patch -n openstack osctlplane overcloud --type= ' json ' -p= ' [{"op": "add", "path": "/spec/virtualMachineRoles/controller/memory", "value": 22 }] '
oc get osvmset
NAME CORES RAM DESIRED READY STATUS REASON
controller 8 22 1 1 Provisioned All requested VirtualMachines have been provisioned
Power the VM back on with virtctl start <VM>. Refer to the OSP update procedure documentation.
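If the VM was already running when the role was patched, it needs a power cycle for the new resources to take effect; a sketch for a single Controller VM (the VM name controller-0 is an assumption):
virtctl stop controller-0
# wait until the VM reports as stopped, then power it back on
virtctl start controller-0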