This article describes how to migrate applications together with their data from one namespace to another. This procedure is primarily useful for the migration to version 1.18, where two features (logging and monitoring) were moved to another namespace. Administrators may also use it in some other specific scenarios.
You have installed a Kublr cluster v1.16.0 with the Logging or Monitoring feature enabled.
Migrate the cluster from one control plane to another as described in the documentation.
Edit the cluster spec: turn off the logging and monitoring features and add custom values for custom PVCs.
Spec patch
For logging
logging:
  logCollection:
    enabled: false
For monitoring
monitoring:
  enabled: false
Delete the helm2 releases for logging and monitoring.
Console
$ helm2 delete --purge kublr-logging
release "kublr-logging" deleted
$ helm2 delete --purge kublr-monitoring
release "kublr-monitoring" deleted
Delete the PVCs for logging and monitoring.
Console
$ kubectl -n kube-system delete pvc -l 'app in (elasticsearch, kublr-monitoring-grafana, kublr-monitoring-prometheus)'
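As an optional check, the same label selector can be used with get to confirm that the old PVCs are gone; the command should return no resources:
$ kubectl -n kube-system get pvc -l 'app in (elasticsearch, kublr-monitoring-grafana, kublr-monitoring-prometheus)'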
Enable the features in the spec and wait for the installation to complete.
For logging
logging:
  logCollection:
    enabled: true
For monitoring
monitoring:
  enabled: true
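Once the features are reinstalled, you can verify that the new workloads have started. This assumes they are deployed into the kublr namespace, as the PVC listing later in this article suggests:
$ kubectl -n kublr get pods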
Migrate the cluster from one control plane to another as described in the documentation.
If you have custom-created PVCs for logging or monitoring, add one of the labels app=elasticsearch, app=kublr-monitoring-grafana, or app=kublr-monitoring-prometheus to each PVC so that it is migrated automatically (see the example below).
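For example, for a hypothetical custom Elasticsearch PVC named my-custom-es-data that still lives in the old kube-system namespace (the name is only an illustration), the label can be added with:
$ kubectl -n kube-system label pvc my-custom-es-data app=elasticsearch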
Create a folder and download the prepare.sh and patch.sh scripts.
Open the script files and edit the path to helm2 in the HELM2 variable (Helm v2.14.0 or later is required), for example as shown below.
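The assignment in the scripts will look similar to the line below; the path shown is only an assumption and should point to your local Helm 2 binary:
# adjust the path to wherever your Helm 2 binary is installed
HELM2=/usr/local/bin/helm2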
Run the prepare.sh script, which will:
Edit the cluster spec: turn off the logging and monitoring features and add custom values for custom PVCs.
Spec patch
For logging
logging:
  logCollection:
    enabled: false
For monitoring
monitoring:
  enabled: false
Run the patch.sh script, which will:
Wait about 20 seconds and check the status of the new PVCs; it should be Bound.
Console
$ kubectl get pvc -n kublr -l 'app in (elasticsearch, kublr-monitoring-grafana, kublr-monitoring-prometheus)'
NAME                                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-kublr-logging-elasticsearch-data-0      Bound    pvc-9d4c97ad-d9fc-4d11-b2b4-eb253383602c   120Gi      RWO            kublr-system   34m
data-kublr-logging-elasticsearch-master-0    Bound    pvc-e2de333f-3177-486a-9a5e-35a67a968972   4Gi        RWO            kublr-system   34m
kublr-monitoring-grafana                     Bound    pvc-89dc4a54-7d3d-4c51-9807-1714576cdb2e   10Gi       RWO            kublr-system   34m
kublr-monitoring-prometheus                  Bound    pvc-7cf1877d-8316-4356-a806-8b6484093e96   120Gi      RWO            kublr-system   34m
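If a PVC stays in Pending instead of Bound, its events usually explain why; a quick way to check (not part of the original procedure) is to describe it, for example:
$ kubectl -n kublr describe pvc data-kublr-logging-elasticsearch-data-0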
Edit the cluster spec. Turn on the features and specify the custom PVCs in the spec (if you used different PVC names, use those instead).
For logging
logging:
  logCollection:
    enabled: true
For monitoring
monitoring:
  enabled: true
  values:
    grafana:
      persistence:
        preconfiguredPersistentVolumeClaim: kublr-monitoring-grafana
    prometheus:
      persistence:
        preconfiguredPersistentVolumeClaim: kublr-monitoring-prometheus
Check the status of the features in the control plane.
Check that everything is working as expected and, if needed, change the PV reclaim policy to Delete (see the example below).
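A reclaim policy can be changed with kubectl patch; for example, for one of the volumes listed above (substitute the PV name from your own cluster):
$ kubectl patch pv pvc-9d4c97ad-d9fc-4d11-b2b4-eb253383602c -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'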