The OSP Director Operator creates a set of Custom Resource Definitions on top of OpenShift to manage resources normally created by the TripleO undercloud. These CRDs are split into two types, one for hardware provisioning and one for software configuration.
The OSP Director Operator is installed and managed via the OLM Operator Lifecycle Manager. OLM is installed automatically with your OpenShift installation. To get the latest OSP Director Operator snapshot you need to create the appropriate CatalogSource, OperatorGroup, and Subscription to drive the install with OLM:
oc new-project openstack
apiVersion : operators.coreos.com/v1alpha1
kind : CatalogSource
metadata :
name : osp-director-operator-index
namespace : openstack
spec :
sourceType : grpc
image : quay.io/openstack-k8s-operators/osp-director-operator-index:0.0.1
apiVersion : operators.coreos.com/v1
kind : OperatorGroup
metadata :
name : "osp-director-operator-group"
namespace : openstack
spec :
targetNamespaces :
- openstack
apiVersion : operators.coreos.com/v1alpha1
kind : Subscription
metadata :
name : osp-director-operator-subscription
namespace : openstack
spec :
config :
env :
- name : WATCH_NAMESPACE
value : openstack,openshift-machine-api,openshift-sriov-network-operator
source : osp-director-operator-index
sourceNamespace : openstack
name : osp-director-operator
startingCSV : osp-director-operator.v0.0.1
channel : alpha
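Assuming the three manifests above are saved to a single file (the name osp-director-operator.yaml is illustrative), they can be applied and the install verified as follows:
oc apply -n openstack -f osp-director-operator.yaml
# once OLM processes the Subscription, the operator's CSV should reach the Succeeded phase
oc get csv -n openstack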
We have a script to automate the installation here with OLM for a specific tag: script to automate the install.
Note: At some point in the future we may integrate into OperatorHub so that the OSP Director Operator is available automatically in your OCP installation's default OLM catalog sources.
Create a base RHEL data volume prior to deploying OpenStack. This will be used by the Controller VMs provisioned via OpenShift Virtualization. The approach to do this is as follows:
Install the virtctl client tool:
sudo subscription-manager repos --enable=cnv-2.6-for-rhel-8-x86_64-rpms
sudo dnf install -y kubevirt-virtctl
curl -O http://download.devel.redhat.com/brewroot/packages/rhel-guest-image/8.4/1168/images/rhel-guest-image-8.4-1168.x86_64.qcow2
If using a RHEL guest image, remove the net.ifnames=0 kernel parameter from the image to have biosdev network interface naming:
dnf install -y libguestfs-tools-c
virt-customize -a <rhel guest image> --run-command 'sed -i -e "s/^\(kernelopts=.*\)net.ifnames=0 \(.*\)/\1\2/" /boot/grub2/grubenv'
virt-customize -a <rhel guest image> --run-command 'sed -i -e "s/^\(GRUB_CMDLINE_LINUX=.*\)net.ifnames=0 \(.*\)/\1\2/" /etc/default/grub'
Add the following to /etc/hosts:
<cluster ingress VIP> cdi-uploadproxy-openshift-cnv.apps.<cluster name>.<domain name>
Upload the image to OpenShift Virtualization via virtctl:
virtctl image-upload dv openstack-base-img -n openstack --size=50Gi --image-path=<local path to image> --storage-class <desired storage class> --insecure
For the storage-class, pick one from those shown by:
oc get storageclass
Define your OpenStackNetConfig custom resource. At least one network is required for the ctlplane. Optionally you may define multiple networks in the CR to be used with TripleO's network isolation architecture. In addition to the network definitions, OpenStackNetConfig includes information used to define the network configuration policy that attaches any VMs to these networks via OpenShift Virtualization. The following is an example of a simple IPv4 ctlplane network which uses a linux bridge for its host configuration.
apiVersion : osp-director.openstack.org/v1beta1
kind : OpenStackNetConfig
metadata :
name : openstacknetconfig
spec :
attachConfigurations :
br-osp :
nodeNetworkConfigurationPolicy :
nodeSelector :
node-role.kubernetes.io/worker : ""
desiredState :
interfaces :
- bridge :
options :
stp :
enabled : false
port :
- name : enp7s0
description : Linux bridge with enp7s0 as a port
name : br-osp
state : up
type : linux-bridge
mtu : 1500
# optional DnsServers list
dnsServers :
- 192.168.25.1
# optional DnsSearchDomains list
dnsSearchDomains :
- osptest.test.metalkube.org
- some.other.domain
# DomainName of the OSP environment
domainName : osptest.test.metalkube.org
networks :
- name : Control
nameLower : ctlplane
subnets :
- name : ctlplane
ipv4 :
allocationEnd : 192.168.25.250
allocationStart : 192.168.25.100
cidr : 192.168.25.0/24
gateway : 192.168.25.1
attachConfiguration : br-osp
# optional: (OSP17 only) specify all phys networks with optional MAC address prefix, used to
# create static OVN Bridge MAC address mappings. Unique OVN bridge mac address per node is
# dynamically allocated by creating OpenStackMACAddress resource and create a MAC per physnet per node.
# - If PhysNetworks is not provided, the tripleo default physnet datacentre gets created.
# - If the macPrefix is not specified for a physnet, the default macPrefix "fa:16:3a" is used.
# - If PreserveReservations is not specified, the default is true.
ovnBridgeMacMappings :
preserveReservations : True
physNetworks :
- macPrefix : fa:16:3a
name : datacentre
- macPrefix : fa:16:3b
name : datacentre2
# optional: configure static mapping for the networks per nodes. If there is none, a random gets created
reservations :
controller-0 :
macReservations :
datacentre : fa:16:3a:aa:aa:aa
datacentre2 : fa:16:3b:aa:aa:aa
compute-0 :
macReservations :
datacentre : fa:16:3a:bb:bb:bb
datacentre2 : fa:16:3b:bb:bb:bb
If you write the above YAML into a file called networkconfig.yaml you can create the OpenStackNetConfig via this command:
oc create -n openstack -f networkconfig.yaml
To use network isolation with VLANs, add the VLAN ID to the spec of the network definition:
apiVersion : osp-director.openstack.org/v1beta1
kind : OpenStackNetConfig
metadata :
name : openstacknetconfig
spec :
attachConfigurations :
br-osp :
nodeNetworkConfigurationPolicy :
nodeSelector :
node-role.kubernetes.io/worker : ""
desiredState :
interfaces :
- bridge :
options :
stp :
enabled : false
port :
- name : enp7s0
description : Linux bridge with enp7s0 as a port
name : br-osp
state : up
type : linux-bridge
mtu : 1500
br-ex :
nodeNetworkConfigurationPolicy :
nodeSelector :
node-role.kubernetes.io/worker : ""
desiredState :
interfaces :
- bridge :
options :
stp :
enabled : false
port :
- name : enp6s0
description : Linux bridge with enp6s0 as a port
name : br-ex-osp
state : up
type : linux-bridge
mtu : 1500
# optional DnsServers list
dnsServers :
- 192.168.25.1
# optional DnsSearchDomains list
dnsSearchDomains :
- osptest.test.metalkube.org
- some.other.domain
# DomainName of the OSP environment
domainName : osptest.test.metalkube.org
networks :
- name : Control
nameLower : ctlplane
subnets :
- name : ctlplane
ipv4 :
allocationEnd : 192.168.25.250
allocationStart : 192.168.25.100
cidr : 192.168.25.0/24
gateway : 192.168.25.1
attachConfiguration : br-osp
- name : InternalApi
nameLower : internal_api
mtu : 1350
subnets :
- name : internal_api
attachConfiguration : br-osp
vlan : 20
ipv4 :
allocationEnd : 172.17.0.250
allocationStart : 172.17.0.10
cidr : 172.17.0.0/24
- name : External
nameLower : external
subnets :
- name : external
ipv6 :
allocationEnd : 2001:db8:fd00:1000:ffff:ffff:ffff:fffe
allocationStart : 2001:db8:fd00:1000::10
cidr : 2001:db8:fd00:1000::/64
gateway : 2001:db8:fd00:1000::1
attachConfiguration : br-ex
- name : Storage
nameLower : storage
mtu : 1350
subnets :
- name : storage
ipv4 :
allocationEnd : 172.18.0.250
allocationStart : 172.18.0.10
cidr : 172.18.0.0/24
vlan : 30
attachConfiguration : br-osp
- name : StorageMgmt
nameLower : storage_mgmt
mtu : 1350
subnets :
- name : storage_mgmt
ipv4 :
allocationEnd : 172.19.0.250
allocationStart : 172.19.0.10
cidr : 172.19.0.0/24
vlan : 40
attachConfiguration : br-osp
- name : Tenant
nameLower : tenant
vip : False
mtu : 1350
subnets :
- name : tenant
ipv4 :
allocationEnd : 172.20.0.250
allocationStart : 172.20.0.10
cidr : 172.20.0.0/24
vlan : 50
attachConfiguration : br-osp
When using VLANs for network isolation with a linux-bridge, note: to use jumbo frames on the bridge, create a configuration for the underlying device to set the correct MTU:
apiVersion : osp-director.openstack.org/v1beta1
kind : OpenStackNetConfig
metadata :
name : openstacknetconfig
spec :
attachConfigurations :
br-osp :
nodeNetworkConfigurationPolicy :
nodeSelector :
node-role.kubernetes.io/worker : ""
desiredState :
interfaces :
- bridge :
options :
stp :
enabled : false
port :
- name : enp7s0
description : Linux bridge with enp7s0 as a port
name : br-osp
state : up
type : linux-bridge
mtu : 9000
- name : enp7s0
description : Configuring enp7s0 on workers
type : ethernet
state : up
mtu : 9000
Create ConfigMaps which define any custom Heat environments, Heat templates, and a custom roles file (must be named roles_data.yaml) used for TripleO network configuration. Any administrator-defined Heat environment files can be provided in the ConfigMap and will be used in a later step to create the Heat stack for the overcloud deployment. As a convention, each OSP Director installation uses two ConfigMaps, named heat-env-config and tripleo-tarball-config, to provide this information. The heat-env-config ConfigMap holds all deployment environment files, where each file gets added as -e file.yaml to the openstack stack create command. A good example is:
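As an illustrative sketch (the file names here are hypothetical; storage-backend.yaml is the Ceph example shown later in this document), the directory passed to oc create configmap might look like:
$ ls heat-env-config/
cloud-names.yaml
storage-backend.yaml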
The "Tarball Config Map" can be used to provide (binary) tarballs which are extracted into the tripleo-heat-templates when the playbooks are generated. Each tarball should contain a directory of files relative to the root of a t-h-t directory. You will want to store things like the following examples in a ConfigMap containing custom tarballs (see the sketch below):
Net-Config files.
Net-Config environment.
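For instance, a listing of such a tarball might show NIC templates like those referenced in the resource_registry example later in this document (the names are illustrative):
$ tar -tzf net-config.tar.gz
net-config-two-nic-vlan-compute.yaml
net-config-two-nic-vlan-compute_leaf1.yaml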
Note: Net-Config files for the virtual machines are created by the operator, but can be overwritten using the "Tarball Config Map". To overwrite a pre-rendered Net-Config, use the file name <role lowercase>-nic-template.yaml for OSP16.2 or <role lowercase>-nic-template.j2 for OSP17. Note: network interface names of the VMs created by the OpenStackVMSet controller are ordered alphabetically by the network names assigned to the VM role. An exception is the default network interface of the VM pod, which is always the first interface. The resulting interfaces section of the virtual machine definition will look like this:
interfaces :
- masquerade : {}
model : virtio
name : default
- bridge : {}
model : virtio
name : ctlplane
- bridge : {}
model : virtio
name : external
- bridge : {}
model : virtio
name : internalapi
- bridge : {}
model : virtio
name : storage
- bridge : {}
model : virtio
name : storagemgmt
- bridge : {}
model : virtio
name : tenant
With this, the ctlplane interface is nic2, the external interface nic3, and so on.
Note: FIP traffic does not pass to a VLAN tenant network with ML2/OVN and DVR. DVR is enabled by default. If you need VLAN tenant networks with OVN, you can disable DVR. To do so, include the following lines in an environment file:
parameter_defaults :
NeutronEnableDVR : false
Support for distributed VLAN traffic in OVN is tracked in "Add support in tripleo for distributed vlan traffic in ovn" (https://bugs.launchpad.net/tripleo/+bug/1881593).
Git repo config map: this ConfigMap contains the SSH key and URL for the git repo used to store the generated playbooks (below).
Once you have customized the above templates/examples for your environment, you can create the ConfigMaps for both the "heat-env-config" and "tripleo-tarball-config" (tarballs) ConfigMaps by using these example commands on the files containing each respective ConfigMap type (one directory per type of ConfigMap):
# create the configmap for heat-env-config
oc create configmap -n openstack heat-env-config --from-file=heat-env-config/ --dry-run=client -o yaml | oc apply -f -
# create the configmap containing a tarball of t-h-t network config files. NOTE: these files may overwrite default t-h-t files so keep this in mind when naming them.
cd <dir with net config files>
tar -cvzf net-config.tar.gz *.yaml
oc create configmap -n openstack tripleo-tarball-config --from-file=net-config.tar.gz
# create the Git secret used for the repo where Ansible playbooks are stored
oc create secret generic git-secret -n openstack --from-file=git_ssh_identity=<path to git id_rsa> --from-literal=git_url=<your git server URL (git@...)>
(Optional) Create a Secret for your OpenStackControlPlane. This secret provides the default password for your virtual machines and baremetal hosts. If no secret is provided, you will only be able to log in with the SSH keys defined in the osp-controlplane-ssh-keys secret.
apiVersion : v1
kind : Secret
metadata :
name : userpassword
namespace : openstack
data :
# 12345678
NodeRootPassword : MTIzNDU2Nzg=
If you write the above YAML into a file called ctlplane-secret.yaml you can create the Secret via this command:
oc create -n openstack -f ctlplane-secret.yaml
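The NodeRootPassword value must be base64-encoded; the value in the example decodes to the password shown in its comment:
# base64-encode the desired root password; the output matches the value above
echo -n 12345678 | base64
# MTIzNDU2Nzg=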
Define your OpenStackControlPlane custom resource. The OpenStackControlPlane custom resource provides a central place to create and scale the VMs used for the OSP Controllers, along with any additional vmsets for your deployment. At least 1 Controller VM is required for a basic demo installation, and 3 Controller VMs are recommended per the OSP high-availability guidelines.
Note: If a RHEL guest image is used as the base to deploy the OpenStackControlPlane virtual machines, make sure to remove the net.ifnames=0 kernel parameter from the image to have biosdev network interface naming. This can be done like:
dnf install -y libguestfs-tools-c
virt-customize -a bms-image.qcow2 --run-command 'sed -i -e "s/^\(kernelopts=.*\)net.ifnames=0 \(.*\)/\1\2/" /boot/grub2/grubenv'
apiVersion : osp-director.openstack.org/v1beta1
kind : OpenStackControlPlane
metadata :
name : overcloud
namespace : openstack
spec :
openStackClientImageURL : quay.io/openstack-k8s-operators/rhosp16-openstack-tripleoclient:16.2_20210713.1
openStackClientNetworks :
- ctlplane
- external
- internalapi
# openStackClientStorageClass must support RWX
# https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
openStackClientStorageClass : host-nfs-storageclass
passwordSecret : userpassword
gitSecret : git-secret
virtualMachineRoles :
controller :
roleName : Controller
roleCount : 3
networks :
- ctlplane
- internalapi
- external
- tenant
- storage
- storagemgmt
cores : 6
memory : 12
rootDisk :
diskSize : 50
baseImageVolumeName : openstack-base-img
# storageClass must support RWX to be able to live migrate VMs
storageClass : host-nfs-storageclass
storageAccessMode : ReadWriteMany
# When using OpenShift Virtualization with OpenShift Container Platform Container Storage,
# specify RBD block mode persistent volume claims (PVCs) when creating virtual machine disks. With virtual machine disks,
# RBD block mode volumes are more efficient and provide better performance than Ceph FS or RBD filesystem-mode PVCs.
# To specify RBD block mode PVCs, use the 'ocs-storagecluster-ceph-rbd' storage class and VolumeMode: Block.
storageVolumeMode : Filesystem
# Optional
# DedicatedIOThread - Disks with dedicatedIOThread set to true will be allocated an exclusive thread.
# This is generally useful if a specific Disk is expected to have heavy I/O traffic, e.g. a database spindle.
dedicatedIOThread : false
additionalDisks :
# name must be unique and must not be rootDisk
- name : dataDisk1
diskSize : 100
storageClass : host-nfs-storageclass
storageAccessMode : ReadWriteMany
storageVolumeMode : Filesystem
# Optional block storage settings
# IOThreadsPolicy - IO thread policy for the domain. Currently valid policies are shared and auto.
# However, if any disk requests a dedicated IOThread, ioThreadsPolicy will be enabled and default to shared.
# When ioThreadsPolicy is set to auto IOThreads will also be "isolated" from the vCPUs and placed on the same physical CPU as the QEMU emulator thread.
# An ioThreadsPolicy of shared indicates that KubeVirt should use one thread that will be shared by all disk devices.
ioThreadsPolicy : auto
# Block Multi-Queue is a framework for the Linux block layer that maps Device I/O queries to multiple queues.
# This splits I/O processing up across multiple threads, and therefore multiple CPUs. libvirt recommends that the
# number of queues used should match the number of CPUs allocated for optimal performance.
blockMultiQueue : false
If you write the above YAML into a file called openstackcontrolplane.yaml you can create the OpenStackControlPlane via this command:
oc create -f openstackcontrolplane.yaml
Note: Via pod anti-affinity rules (preferredDuringSchedulingIgnoredDuringExecution), VMs within the same vmset (VM role) get distributed across available worker nodes. Because of this, if no other resources are available (e.g. during a worker reboot while updating), multiple VMs of a role may end up on the same worker node. There is no automatic live migration if a node comes back up after maintenance/reboot; on the next scheduling request the VM gets relocated again.
Define an OpenStackBaremetalSet to scale out OSP Compute hosts. The OpenStackBaremetalSet resource can be used to define and scale Compute resources, and optionally to define and scale out baremetal hosts for other types of TripleO roles. The example below defines a single Compute host to be created.
Note: If a RHEL guest image is used as the base to deploy the OpenStackBaremetalSet compute nodes, make sure to remove the net.ifnames=0 kernel parameter from the image to have biosdev network interface naming. This can be done like:
dnf install -y libguestfs-tools-c
virt-customize -a bms-image.qcow2 --run-command 'sed -i -e "s/^\(kernelopts=.*\)net.ifnames=0 \(.*\)/\1\2/" /boot/grub2/grubenv'
apiVersion : osp-director.openstack.org/v1beta1
kind : OpenStackBaremetalSet
metadata :
name : compute
namespace : openstack
spec :
# How many nodes to provision
count : 1
# The image to install on the provisioned nodes. NOTE: needs to be accessible on the OpenShift Metal3 provisioning network.
baseImageUrl : http://host/images/rhel-image-8.4.x86_64.qcow2
# NOTE: these are automatically created via the OpenStackControlplane CR above
deploymentSSHSecret : osp-controlplane-ssh-keys
# The interface on the nodes that will be assigned an IP from the mgmtCidr
ctlplaneInterface : enp7s0
# Networks to associate with this host
networks :
- ctlplane
- internalapi
- tenant
- storage
roleName : Compute
passwordSecret : userpassword
If you write the above YAML into a file called compute.yaml you can create the OpenStackBaremetalSet via this command:
oc create -f compute.yaml
Node registration (register the overcloud systems to the required channels).
Wait for the above resources (Compute and ControlPlane) to finish deploying. Once they have finished deploying, proceed with node registration.
To do so, use the procedure described in "5.9. Running Ansible-based registration manually".
Note: We recommend using manual registration as it works regardless of base-image choice. If you are using overcloud-full as your base deploy image, automatic RHSM registration via the t-h-t rhsm.yaml environment roles/files could be used as an alternative to this approach.
oc rsh openstackclient
bash
cd /home/cloud-admin
<create the ansible playbook for the overcloud nodes - e.g. rhsm.yaml>
# register the overcloud nodes to required repositories
ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory ./rhsm.yaml
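A minimal sketch of what such an rhsm.yaml playbook could look like, assuming RHSM username/password registration (the credentials and repository names are placeholders to adapt to your subscription):
---
- name: Register overcloud nodes with RHSM
  hosts: all
  become: yes
  tasks:
    - name: Register the system and auto-attach a subscription
      redhat_subscription:
        username: <RHSM username>
        password: <RHSM password>
        auto_attach: true
    - name: Enable the required repositories
      rhsm_repository:
        name:
          - rhel-8-for-x86_64-baseos-eus-rpms
          - rhel-8-for-x86_64-appstream-eus-rpms
          - openstack-16.2-for-rhel-8-x86_64-rpms
        state: enabled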
(Optional) Create a roles file. a) Use the openstackclient pod to generate a custom roles file:
oc rsh openstackclient
unset OS_CLOUD
cd /home/cloud-admin/
openstack overcloud roles generate Controller ComputeHCI > roles_data.yaml
exit
b) Copy the custom roles file out of the openstackclient pod:
oc cp openstackclient:/home/cloud-admin/roles_data.yaml roles_data.yaml
Update the tarballConfigMap ConfigMap to add the roles_data.yaml file to the tarball and update the ConfigMap; one way to do this is shown below.
Note: Make sure to use roles_data.yaml as the file name.
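A sketch reusing the tarball commands from the earlier step (paths are illustrative):
cd <dir with net config files>
cp <path to exported roles_data.yaml> .
# recreate the tarball, now including roles_data.yaml, and update the existing configmap in place
tar -cvzf net-config.tar.gz *.yaml
oc create configmap -n openstack tripleo-tarball-config --from-file=net-config.tar.gz --dry-run=client -o yaml | oc apply -f -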
Define an OpenStackConfigGenerator to generate the Ansible playbooks for the OSP cluster deployment.
apiVersion : osp-director.openstack.org/v1beta1
kind : OpenStackConfigGenerator
metadata :
name : default
namespace : openstack
spec :
enableFencing : False
imageURL : quay.io/openstack-k8s-operators/rhosp16-openstack-tripleoclient:16.2_20210713.1
gitSecret : git-secret
heatEnvConfigMap : heat-env-config
tarballConfigMap : tripleo-tarball-config
# (optional) for debugging it is possible to set the interactive mode.
# In this mode the playbooks won't get rendered automatically. Just the environment to start the rendering gets created
# interactive: true
# (optional) provide custom registry or specific container versions via the ephemeralHeatSettings
# ephemeralHeatSettings:
# heatAPIImageURL: quay.io/tripleotraincentos8/centos-binary-heat-api:current-tripleo
# heatEngineImageURL: quay.io/tripleotraincentos8/centos-binary-heat-engine:current-tripleo
# mariadbImageURL: quay.io/tripleotraincentos8/centos-binary-mariadb:current-tripleo
# rabbitImageURL: quay.io/tripleotraincentos8/centos-binary-rabbitmq:current-tripleo
If you write the above YAML into a file called generator.yaml you can create the OpenStackConfigGenerator via this command:
oc create -f generator.yaml
The OpenStackConfigGenerator created above automatically generates playbooks any time you scale or modify the ConfigMaps for your OSP deployment. Generating these playbooks takes several minutes. You can monitor the OpenStackConfigGenerator's status conditions for it to finish.
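One way to watch for completion, assuming the resource name default used above (inspect the status conditions in the output):
oc get -n openstack openstackconfiggenerator default -o yaml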
Get the latest OpenStackConfigVersion (Ansible playbooks). Select the hash/digest of the latest OpenStackConfigVersion for use in the next step.
oc get -n openstack --sort-by {.metadata.creationTimestamp} osconfigversions -o json
Note: OpenStackConfigVersion objects also have a "git diff" attribute that can be used to easily compare the changes between Ansible playbook versions.
Create an OpenStackDeploy (executes the Ansible playbooks):
apiVersion : osp-director.openstack.org/v1beta1
kind : OpenStackDeploy
metadata :
name : default
spec :
configVersion : n5fch96h548h75hf4hbdhb8hfdh676h57bh96h5c5h59hf4h88h...
configGenerator : default
If you write the above YAML into a file called deploy.yaml you can create the OpenStackDeploy via this command:
oc create -f deploy.yaml
As the deployment runs, it creates a Kubernetes Job to execute the Ansible playbooks. You can tail the logs of this job/pod to watch the Ansible playbooks run. Additionally, you can manually access the executed Ansible playbooks by logging into the openstackclient pod and going into the /home/cloud-admin/work// directory. There you will find the Ansible playbooks along with the ansible.log file for the running deployment.
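A sketch of locating and following that job's logs (the job name is generated by the operator, so list the jobs first):
oc get jobs -n openstack
oc logs -n openstack -f job/<deploy job name>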
TripleO's Hyper-Converged Infrastructure, where compute nodes also act as Ceph OSD nodes, can be deployed. The workflow to install Ceph via TripleO is:
Make sure to use quay.io/openstack-k8s-operators/rhosp16-openstack-tripleoclient:16.2_20210521.1 or later for the openstackclient openStackClientImageURL.
Have compute nodes with extra disks to be used as OSDs, and create a baremetalset for the ComputeHCI role which has the storagemgmt network in addition to the default compute networks, and the IsHCI parameter set.
Note: If a RHEL guest image is used as the base to deploy the OpenStackBaremetalSet compute nodes, make sure to remove the net.ifnames=0 kernel parameter from the image to have biosdev network interface naming. This can be done like:
dnf install -y libguestfs-tools-c
virt-customize -a bms-image.qcow2 --run-command 'sed -i -e "s/^\(kernelopts=.*\)net.ifnames=0 \(.*\)/\1\2/" /boot/grub2/grubenv'
apiVersion : osp-director.openstack.org/v1beta1
kind : OpenStackBaremetalSet
metadata :
name : computehci
namespace : openstack
spec :
# How many nodes to provision
replicas : 2
# The image to install on the provisioned nodes
baseImageUrl : http://host/images/rhel-image-8.4.x86_64.qcow2
# The secret containing the SSH pub key to place on the provisioned nodes
deploymentSSHSecret : osp-controlplane-ssh-keys
# The interface on the nodes that will be assigned an IP from the mgmtCidr
ctlplaneInterface : enp7s0
# Networks to associate with this host
networks :
- ctlplane
- internalapi
- tenant
- storage
- storagemgmt
roleName : ComputeHCI
passwordSecret : userpassword
Create a custom roles file which includes the ComputeHCI role, as described in Deploying OpenStack once you have the OSP Director Operator installed.
Create a storage-backend.yaml which includes /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml, the relevant deploy parameters, and any other customizations for the TripleO deployment custom config:
resource_registry :
OS::TripleO::Services::CephMgr : deployment/ceph-ansible/ceph-mgr.yaml
OS::TripleO::Services::CephMon : deployment/ceph-ansible/ceph-mon.yaml
OS::TripleO::Services::CephOSD : deployment/ceph-ansible/ceph-osd.yaml
OS::TripleO::Services::CephClient : deployment/ceph-ansible/ceph-client.yaml
parameter_defaults :
# needed for now because of the repo used to create tripleo-deploy image
CephAnsibleRepo : "rhelosp-ceph-4-tools"
CephAnsiblePlaybookVerbosity : 3
CinderEnableIscsiBackend : false
CinderEnableRbdBackend : true
CinderBackupBackend : ceph
CinderEnableNfsBackend : false
NovaEnableRbdBackend : true
GlanceBackend : rbd
CinderRbdPoolName : "volumes"
NovaRbdPoolName : "vms"
GlanceRbdPoolName : "images"
CephPoolDefaultPgNum : 32
CephPoolDefaultSize : 2
CephAnsibleDisksConfig :
devices :
- ' /dev/sdb '
- ' /dev/sdc '
- ' /dev/sdd '
osd_scenario : lvm
osd_objectstore : bluestore
CephAnsibleExtraConfig :
is_hci : true
CephConfigOverrides :
rgw_swift_enforce_content_length : true
rgw_swift_versioning_enabled : true
Once you have customized the above templates/examples for your environment, create/update the ConfigMaps as explained in Deploying OpenStack once you have the OSP Director Operator installed.
Define an OpenStackConfigGenerator as explained in Deploying OpenStack once you have the OSP Director Operator installed, and specify the generated roles file.
Note: Make sure to use quay.io/openstack-k8s-operators/rhosp16-openstack-tripleoclient:16.2_20210521.1 or later for the OpenStackConfigGenerator imageURL.
Wait for the OpenStackConfigGenerator to finish the playbook-rendering job.
Get the hash/digest of the latest OpenStackConfigVersion.
Create an OpenStackDeploy for the specified OpenStackConfigVersion. This deploys the Ansible playbooks.
Removing baremetal compute hosts requires the following steps:
In case a compute node gets removed, disable the Compute service on the node being removed on the overcloud to prevent it from scheduling new instances:
openstack compute service list
openstack compute service set <hostname> nova-compute --disable
Annotate the BMH resource:
oc annotate -n openshift-machine-api bmh/openshift-worker-3 osp-director.openstack.org/delete-host=true --overwrite
The annotation status is reflected in the annotatedForDeletion parameter of the OSBaremetalSet/OSVMSet:
oc get osbms computehci -o json | jq .status
{
  "baremetalHosts": {
    "computehci-0": {
      "annotatedForDeletion": true,
      "ctlplaneIP": "192.168.25.105/24",
      "hostRef": "openshift-worker-3",
      "hostname": "computehci-0",
      "networkDataSecretName": "computehci-cloudinit-networkdata-openshift-worker-3",
      "provisioningState": "provisioned",
      "userDataSecretName": "computehci-cloudinit-userdata-openshift-worker-3"
    },
    "computehci-1": {
      "annotatedForDeletion": false,
      "ctlplaneIP": "192.168.25.106/24",
      "hostRef": "openshift-worker-4",
      "hostname": "computehci-1",
      "networkDataSecretName": "computehci-cloudinit-networkdata-openshift-worker-4",
      "provisioningState": "provisioned",
      "userDataSecretName": "computehci-cloudinit-userdata-openshift-worker-4"
    }
  },
  "provisioningStatus": {
    "readyCount": 2,
    "reason": "All requested BaremetalHosts have been provisioned",
    "state": "provisioned"
  }
}
Reducing the resource count of the OSBaremetalSet triggers the corresponding controller to handle the resource deletion:
oc patch osbms computehci --type=merge --patch ' {"spec":{"count":1}} '
As a result, the IP reservation for the removed host is flagged as deleted:
oc get osnet ctlplane -o json | jq .status.roleReservations.ComputeHCI
{
  "addToPredictableIPs": true,
  "reservations": [
    {
      "deleted": true,
      "hostname": "computehci-0",
      "ip": "192.168.25.105",
      "vip": false
    },
    {
      "deleted": false,
      "hostname": "computehci-1",
      "ip": "192.168.25.106",
      "vip": false
    }
  ]
}
This results in the following behavior: if a compute node gets removed, several leftover entries remain registered on the OpenStack control plane and are not cleaned up automatically. To clean them up, perform the following steps:
openstack compute service list
openstack compute service delete <service-id>
openstack network agent list
for AGENT in $( openstack network agent list --host <scaled-down-node> -c ID -f value ) ; do openstack network agent delete $AGENT ; done
Removing a VM requires the following steps:
If the VM hosts any OSP service which should be disabled before removal, do so.
Annotate the VM resource:
oc annotate -n openstack vm/controller-1 osp-director.openstack.org/delete-host=true --overwrite
Reduce the roleCount of the virtualMachineRoles in the OpenStackControlPlane CR; the corresponding controller handles the resource deletion:
oc patch osctlplane overcloud --type=merge --patch ' {"spec":{"virtualMachineRoles":{"<RoleName>":{"roleCount":2}}}} '
As a result, the VM is deleted and, as with baremetal hosts above, its IP reservation is flagged as deleted.
If the VM did host any OSP service which should be removed, delete the service using the corresponding openstack command.
TripleO's routed-networks (spine/leaf networking) architecture can be deployed to configure the overcloud networks. Use the subnets parameter to define additional leaf subnets off a base network.
A current limitation is that there can only be one provisioning network for Metal3.
The workflow to install an overcloud using multiple subnets is:
Define your OpenStackNetConfig custom resource and specify all subnets for the overcloud networks. The operator renders the TripleO network_data.yaml for the OSP version in use.
apiVersion : osp-director.openstack.org/v1beta1
kind : OpenStackNetConfig
metadata :
name : openstacknetconfig
spec :
attachConfigurations :
br-osp :
nodeNetworkConfigurationPolicy :
nodeSelector :
node-role.kubernetes.io/worker : ""
desiredState :
interfaces :
- bridge :
options :
stp :
enabled : false
port :
- name : enp7s0
description : Linux bridge with enp7s0 as a port
name : br-osp
state : up
type : linux-bridge
mtu : 1500
br-ex :
nodeNetworkConfigurationPolicy :
nodeSelector :
node-role.kubernetes.io/worker : ""
desiredState :
interfaces :
- bridge :
options :
stp :
enabled : false
port :
- name : enp6s0
description : Linux bridge with enp6s0 as a port
name : br-ex-osp
state : up
type : linux-bridge
mtu : 1500
# optional DnsServers list
dnsServers :
- 192.168.25.1
# optional DnsSearchDomains list
dnsSearchDomains :
- osptest.test.metalkube.org
- some.other.domain
# DomainName of the OSP environment
domainName : osptest.test.metalkube.org
networks :
- name : Control
nameLower : ctlplane
subnets :
- name : ctlplane
ipv4 :
allocationEnd : 192.168.25.250
allocationStart : 192.168.25.100
cidr : 192.168.25.0/24
gateway : 192.168.25.1
attachConfiguration : br-osp
- name : InternalApi
nameLower : internal_api
mtu : 1350
subnets :
- name : internal_api
ipv4 :
allocationEnd : 172.17.0.250
allocationStart : 172.17.0.10
cidr : 172.17.0.0/24
routes :
- destination : 172.17.1.0/24
nexthop : 172.17.0.1
- destination : 172.17.2.0/24
nexthop : 172.17.0.1
vlan : 20
attachConfiguration : br-osp
- name : internal_api_leaf1
ipv4 :
allocationEnd : 172.17.1.250
allocationStart : 172.17.1.10
cidr : 172.17.1.0/24
routes :
- destination : 172.17.0.0/24
nexthop : 172.17.1.1
- destination : 172.17.2.0/24
nexthop : 172.17.1.1
vlan : 21
attachConfiguration : br-osp
- name : internal_api_leaf2
ipv4 :
allocationEnd : 172.17.2.250
allocationStart : 172.17.2.10
cidr : 172.17.2.0/24
routes :
- destination : 172.17.1.0/24
nexthop : 172.17.2.1
- destination : 172.17.0.0/24
nexthop : 172.17.2.1
vlan : 22
attachConfiguration : br-osp
- name : External
nameLower : external
subnets :
- name : external
ipv4 :
allocationEnd : 10.0.0.250
allocationStart : 10.0.0.10
cidr : 10.0.0.0/24
gateway : 10.0.0.1
attachConfiguration : br-ex
- name : Storage
nameLower : storage
mtu : 1350
subnets :
- name : storage
ipv4 :
allocationEnd : 172.18.0.250
allocationStart : 172.18.0.10
cidr : 172.18.0.0/24
routes :
- destination : 172.18.1.0/24
nexthop : 172.18.0.1
- destination : 172.18.2.0/24
nexthop : 172.18.0.1
vlan : 30
attachConfiguration : br-osp
- name : storage_leaf1
ipv4 :
allocationEnd : 172.18.1.250
allocationStart : 172.18.1.10
cidr : 172.18.1.0/24
routes :
- destination : 172.18.0.0/24
nexthop : 172.18.1.1
- destination : 172.18.2.0/24
nexthop : 172.18.1.1
vlan : 31
attachConfiguration : br-osp
- name : storage_leaf2
ipv4 :
allocationEnd : 172.18.2.250
allocationStart : 172.18.2.10
cidr : 172.18.2.0/24
routes :
- destination : 172.18.0.0/24
nexthop : 172.18.2.1
- destination : 172.18.1.0/24
nexthop : 172.18.2.1
vlan : 32
attachConfiguration : br-osp
- name : StorageMgmt
nameLower : storage_mgmt
mtu : 1350
subnets :
- name : storage_mgmt
ipv4 :
allocationEnd : 172.19.0.250
allocationStart : 172.19.0.10
cidr : 172.19.0.0/24
routes :
- destination : 172.19.1.0/24
nexthop : 172.19.0.1
- destination : 172.19.2.0/24
nexthop : 172.19.0.1
vlan : 40
attachConfiguration : br-osp
- name : storage_mgmt_leaf1
ipv4 :
allocationEnd : 172.19.1.250
allocationStart : 172.19.1.10
cidr : 172.19.1.0/24
routes :
- destination : 172.19.0.0/24
nexthop : 172.19.1.1
- destination : 172.19.2.0/24
nexthop : 172.19.1.1
vlan : 41
attachConfiguration : br-osp
- name : storage_mgmt_leaf2
ipv4 :
allocationEnd : 172.19.2.250
allocationStart : 172.19.2.10
cidr : 172.19.2.0/24
routes :
- destination : 172.19.0.0/24
nexthop : 172.19.2.1
- destination : 172.19.1.0/24
nexthop : 172.19.2.1
vlan : 42
attachConfiguration : br-osp
- name : Tenant
nameLower : tenant
vip : False
mtu : 1350
subnets :
- name : tenant
ipv4 :
allocationEnd : 172.20.0.250
allocationStart : 172.20.0.10
cidr : 172.20.0.0/24
routes :
- destination : 172.20.1.0/24
nexthop : 172.20.0.1
- destination : 172.20.2.0/24
nexthop : 172.20.0.1
vlan : 50
attachConfiguration : br-osp
- name : tenant_leaf1
ipv4 :
allocationEnd : 172.20.1.250
allocationStart : 172.20.1.10
cidr : 172.20.1.0/24
routes :
- destination : 172.20.0.0/24
nexthop : 172.20.1.1
- destination : 172.20.2.0/24
nexthop : 172.20.1.1
vlan : 51
attachConfiguration : br-osp
- name : tenant_leaf2
ipv4 :
allocationEnd : 172.20.2.250
allocationStart : 172.20.2.10
cidr : 172.20.2.0/24
routes :
- destination : 172.20.0.0/24
nexthop : 172.20.2.1
- destination : 172.20.1.0/24
nexthop : 172.20.2.1
vlan : 52
attachConfiguration : br-osp
If you write the above YAML into a file called networkconfig.yaml you can create the OpenStackNetConfig via this command:
oc create -n openstack -f networkconfig.yaml
Create a custom roles file with a compute role per leaf, each using the leaf subnets, for example:
...
# ##############################################################################
# Role: ComputeLeaf1 #
# ##############################################################################
- name : ComputeLeaf1
description : |
Basic ComputeLeaf1 Node role
# Create external Neutron bridge (unset if using ML2/OVS without DVR)
tags :
- external_bridge
networks :
InternalApi :
subnet : internal_api_leaf1
Tenant :
subnet : tenant_leaf1
Storage :
subnet : storage_leaf1
HostnameFormatDefault : ' %stackname%-novacompute-leaf1-%index% '
...
# ##############################################################################
# Role: ComputeLeaf2 #
# ##############################################################################
- name : ComputeLeaf2
description : |
Basic ComputeLeaf2 Node role
# Create external Neutron bridge (unset if using ML2/OVS without DVR)
tags :
- external_bridge
networks :
InternalApi :
subnet : internal_api_leaf2
Tenant :
subnet : tenant_leaf2
Storage :
subnet : storage_leaf2
HostnameFormatDefault : ' %stackname%-novacompute-leaf2-%index% '
...
Update the tarballConfigMap ConfigMap to add the roles_data.yaml file to the tarball and update the ConfigMap (see the example commands earlier).
Note: Make sure to use roles_data.yaml as the file name.
The OSP 16.2 TripleO NIC templates include the InterfaceRoutes parameter by default. The routes parameter rendered in environments/network-environment.yaml, named <network>Routes, is usually set on the neutron network host_routes property and gets added to the role's InterfaceRoutes parameter. Since there is no neutron here, it is required to add the {{network.name}}Routes parameter to the NIC template where needed and concatenate the two lists:
parameters:
...
{{ $net.Name }}Routes:
default: []
description: >
Routes for the storage network traffic.
JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
Unless the default is changed, the parameter is automatically resolved
from the subnet host_routes attribute.
type: json
...
- type: interface
...
routes:
list_concat_unique:
- get_param: {{ $net.Name }}Routes
- get_param: {{ $net.Name }}InterfaceRoutes
The routing information of the routed subnets is automatically rendered into the TripleO environment file environments/network-environment.yaml, which is used by the script that renders the Ansible playbooks. In the NIC templates, therefore, use Routes_<subnet_name>, e.g. StorageRoutes_storage_leaf1, to set the correct routing on the host.
For the ComputeLeaf1 compute role, the NIC template needs to be modified to use these:
...
StorageRoutes_storage_leaf1 :
default : []
description : >
Routes for the storage network traffic.
JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
Unless the default is changed, the parameter is automatically resolved
from the subnet host_routes attribute.
type : json
...
InternalApiRoutes_internal_api_leaf1 :
default : []
description : >
Routes for the internal_api network traffic.
JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
Unless the default is changed, the parameter is automatically resolved
from the subnet host_routes attribute.
type : json
...
TenantRoutes_tenant_leaf1 :
default : []
description : >
Routes for the internal_api network traffic.
JSON route e.g. [{'destination':'10.0.0.0/16', 'nexthop':'10.0.0.1'}]
Unless the default is changed, the parameter is automatically resolved
from the subnet host_routes attribute.
type : json
...
get_param : StorageIpSubnet
routes :
list_concat_unique :
- get_param : StorageRoutes_storage_leaf1
- type : vlan
...
get_param : InternalApiIpSubnet
routes :
list_concat_unique :
- get_param : InternalApiRoutes_internal_api_leaf1
...
get_param : TenantIpSubnet
routes :
list_concat_unique :
- get_param : TenantRoutes_tenant_leaf1
- type : ovs_bridge
...
Update the tarballConfigMap ConfigMap to add the NIC templates and the roles_data.yaml file to the tarball, and update the ConfigMap.
Note: Make sure to use roles_data.yaml as the file name.
So far, deployment with multiple subnets has only been tested with OSP 16.2; it is compatible with OSP 17.0 single subnet.
TBD
Make sure to add the newly created NIC templates to the resource_registry in the environment file for the new node roles:
resource_registry :
OS::TripleO::Compute::Net::SoftwareConfig : net-config-two-nic-vlan-compute.yaml
OS::TripleO::ComputeLeaf1::Net::SoftwareConfig : net-config-two-nic-vlan-compute_leaf1.yaml
OS::TripleO::ComputeLeaf2::Net::SoftwareConfig : net-config-two-nic-vlan-compute_leaf2.yaml
At this point we can provision the overcloud:
apiVersion : osp-director.openstack.org/v1beta1
kind : OpenStackControlPlane
metadata :
name : overcloud
namespace : openstack
spec :
gitSecret : git-secret
openStackClientImageURL : registry.redhat.io/rhosp-rhel8/openstack-tripleoclient:16.2
openStackClientNetworks :
- ctlplane
- external
- internal_api
- internal_api_leaf1 # optionally the openstackclient can also be connected to subnets
openStackClientStorageClass : host-nfs-storageclass
passwordSecret : userpassword
domainName : ostest.test.metalkube.org
virtualMachineRoles :
Controller :
roleName : Controller
roleCount : 1
networks :
- ctlplane
- internal_api
- external
- tenant
- storage
- storage_mgmt
cores : 6
memory : 20
rootDisk :
diskSize : 40
baseImageVolumeName : controller-base-img
storageClass : host-nfs-storageclass
storageAccessMode : ReadWriteMany
storageVolumeMode : Filesystem
apiVersion : osp-director.openstack.org/v1beta1
kind : OpenStackBaremetalSet
metadata :
name : computeleaf1
namespace : openstack
spec :
# How many nodes to provision
count : 1
# The image to install on the provisioned nodes
baseImageUrl : http://192.168.111.1/images/rhel-guest-image-8.4-1168.x86_64.qcow2
provisionServerName : openstack
# The secret containing the SSH pub key to place on the provisioned nodes
deploymentSSHSecret : osp-controlplane-ssh-keys
# The interface on the nodes that will be assigned an IP from the mgmtCidr
ctlplaneInterface : enp7s0
# Networks to associate with this host
networks :
- ctlplane
- internal_api_leaf1
- external
- tenant_leaf1
- storage_leaf1
roleName : ComputeLeaf1
passwordSecret : userpassword
apiVersion : osp-director.openstack.org/v1beta1
kind : OpenStackBaremetalSet
metadata :
name : computeleaf2
namespace : openstack
spec :
# How many nodes to provision
count : 1
# The image to install on the provisioned nodes
baseImageUrl : http://192.168.111.1/images/rhel-guest-image-8.4-1168.x86_64.qcow2
provisionServerName : openstack
# The secret containing the SSH pub key to place on the provisioned nodes
deploymentSSHSecret : osp-controlplane-ssh-keys
# The interface on the nodes that will be assigned an IP from the mgmtCidr
ctlplaneInterface : enp7s0
# Networks to associate with this host
networks :
- ctlplane
- internal_api_leaf2
- external
- tenant_leaf2
- storage_leaf2
roleName : ComputeLeaf2
passwordSecret : userpassword
Define an OpenStackConfigGenerator to generate the Ansible playbooks for the OSP cluster deployment as in Deploying OpenStack once you have the OSP Director Operator installed, and specify the generated roles file.
As described before in Run the software deployment, apply it, register the overcloud nodes to the required repositories, and run the software deployment from inside the openstackclient pod.
The OSP-D operator provides an API to create and restore backups of its current CR, ConfigMap, and Secret configurations. This API consists of two CRDs:
OpenStackBackupRequest
OpenStackBackup
The OpenStackBackupRequest CRD is used to initiate the creation or restoration of a backup, while the OpenStackBackup CRD is used to actually store the CR, ConfigMap, and Secret data that belong to the operator. This allows for several benefits:
By representing a backup as a single OpenStackBackup CR, the user does not have to manually export/import each piece of the operator's configuration.
To create a new OpenStackBackup, create an OpenStackBackupRequest with its mode set to save in its spec. For example:
apiVersion : osp-director.openstack.org/v1beta1
kind : OpenStackBackupRequest
metadata :
name : openstackbackupsave
namespace : openstack
spec :
mode : save
additionalConfigMaps : []
additionalSecrets : []
The spec fields are as follows:
mode: save indicates that this is a request to create a backup.
The additionalConfigMaps and additionalSecrets lists can be used to include supplemental ConfigMaps and Secrets of which the operator is otherwise unaware (i.e. ConfigMaps and Secrets manually created for certain purposes).
The operator will attempt to include all ConfigMaps and Secrets associated with the various CRs (OpenStackControlPlane, OpenStackBaremetalSet, etc.) in the namespace, without requiring the user to include them in these additional lists.
Once the OpenStackBackupRequest has been created, monitor its status:
oc get -n openstack osbackuprequest openstackbackupsave
Something like this should appear:
NAME OPERATION SOURCE STATUS COMPLETION TIMESTAMP
openstackbackupsave save Quiescing
The Quiescing state indicates that the operator is waiting for the provisioning state of all OSP-D operator CRs to reach their "finished" equivalent. The time required for this varies with the quantity of OSP-D operator CRs and the happenstance of their current provisioning states. NOTE: It is possible that the operator will never fully quiesce due to errors and/or "waiting" states in existing CRs. To see which CRDs/CRs are preventing quiescence, investigate the operator logs. For example:
oc logs <OSP-D operator pod> -c manager -f
...
2022-01-11T18:26:15.180Z INFO controllers.OpenStackBackupRequest Quiesce for save for OpenStackBackupRequest openstackbackupsave is waiting for: [OpenStackBaremetalSet: compute, OpenStackControlPlane: overcloud, OpenStackVMSet: controller]
If the OpenStackBackupRequest enters the Error state, look at its full contents to see the error that was encountered (oc get -n openstack openstackbackuprequest <name> -o yaml).
When the OpenStackBackupRequest has been honored by creating and saving an OpenStackBackup representing the current OSP-D operator configuration, it enters the Saved state. For example:
oc get -n openstack osbackuprequest
NAME OPERATION SOURCE STATUS COMPLETION TIMESTAMP
openstackbackupsave save Saved 2022-01-11T19:12:58Z
The associated OpenStackBackup will have been created as well. For example:
oc get -n openstack osbackup
NAME AGE
openstackbackupsave-1641928378 6m7s
To restore an OpenStackBackup, create an OpenStackBackupRequest with its mode set to restore in its spec. For example:
apiVersion : osp-director.openstack.org/v1beta1
kind : OpenStackBackupRequest
metadata :
name : openstackbackuprestore
namespace : openstack
spec :
mode : restore
restoreSource : openstackbackupsave-1641928378
The spec fields are as follows:
mode: restore indicates that this is a request to restore an existing OpenStackBackup.
restoreSource indicates which OpenStackBackup should be restored.
With mode set to restore, the OSP-D operator takes the contents of the restoreSource OpenStackBackup and attempts to apply them against the existing CRs, ConfigMaps, and Secrets currently present within the namespace. Thus it will overwrite any existing OSP-D operator resources in the namespace with the same names as those in the OpenStackBackup, and will create new resources for those not currently found in the namespace. If desired, mode can instead be set to cleanRestore to completely wipe the existing OSP-D operator resources within the namespace before attempting a restoration, such that all resources within the OpenStackBackup are created completely anew.
Once the OpenStackBackupRequest has been created, monitor its status:
oc get -n openstack osbackuprequest openstackbackuprestore
Something like this should appear to indicate that all resources from the OpenStackBackup
are being applied against the cluster:
NAME OPERATION SOURCE STATUS COMPLETION TIMESTAMP
openstackbackuprestore restore openstackbackupsave-1641928378 Loading
Then, once all resources have been loaded, the operator begins reconciling to attempt to provision all resources:
NAME OPERATION SOURCE STATUS COMPLETION TIMESTAMP
openstackbackuprestore restore openstackbackupsave-1641928378 Reconciling
If the OpenStackBackupRequest enters the Error state, look at its full contents to see the error that was encountered (oc get -n openstack openstackbackuprequest <name> -o yaml).
When the OpenStackBackupRequest has been honored by fully restoring the OpenStackBackup, it enters the Restored state. For example:
oc get -n openstack osbackuprequest
NAME OPERATION SOURCE STATUS COMPLETION TIMESTAMP
openstackbackuprestore restore openstackbackupsave-1641928378 Restored 2022-01-12T13:48:57Z
At this point, all resources contained within the chosen OpenStackBackup should be restored and fully provisioned.
The OSP Director Operator automatically creates a ConfigMap after each OSDeploy resource finishes execution. This ConfigMap is named after the OSDeploy resource name with a tripleo-exports- prefix. For example, tripleo-exports-default would be the ConfigMap name for the "default" OSDeploy resource. Each ConfigMap contains 2 YAML files:
Filename | Description | TripleO command equivalent
---|---|---
ctlplane-export.yaml | Used with multiple stacks for DCN | overcloud export
ctlplane-export-filter.yaml | Used with multiple stacks with "controller" stacks | overcloud cell export
Use the command below to extract the YAML files from the ConfigMap. Once extracted, the YAML files can be added to custom Heat parameters on OpenStackConfigGenerator resources.
oc extract cm/tripleo-exports-default
Note: The OSP Director Operator does not yet generate exports for Ceph stacks.
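A sketch of feeding the extracted exports back into a deployment, assuming the heat-env-config directory convention used earlier:
# extract the export YAML files into the heat-env-config directory
oc extract -n openstack cm/tripleo-exports-default --to=heat-env-config/
# update the heat-env-config configmap so the OpenStackConfigGenerator picks up the exports
oc create configmap -n openstack heat-env-config --from-file=heat-env-config/ --dry-run=client -o yaml | oc apply -f -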
If required, it is possible to change the CPU/RAM of an OpenStackVMSet configured via the OpenStackControlPlane. The workflow is as follows:
For example, change the Controller virtualMachineRole to have 8 cores and 22GB of RAM:
oc patch -n openstack osctlplane overcloud --type= ' json ' -p= ' [{"op": "add", "path": "/spec/virtualMachineRoles/controller/cores", "value": 8 }] '
oc patch -n openstack osctlplane overcloud --type= ' json ' -p= ' [{"op": "add", "path": "/spec/virtualMachineRoles/controller/memory", "value": 22 }] '
oc get osvmset
NAME CORES RAM DESIRED READY STATUS REASON
controller 8 22 1 1 Provisioned All requested VirtualMachines have been provisioned
Power the VM back on with virtctl start <VM>. See the OSP update procedure documentation.