Configure storage

Storage defaults

SUSE Observability does not specify a storage class on its PVCs (persistent volume claims) by default. On managed Kubernetes services such as EKS and AKS, this means the cluster's default storage class is used.

Those storage classes typically have a reclaim policy that deletes the PV (persistent volume) when the PVC is deleted. However, even when running helm delete to remove a SUSE Observability release, the PVCs remain present in the namespace and will be reused if helm install is run in the same namespace with the same release name.

To remove the PVCs, either delete them manually with kubectl delete pvc or delete the entire namespace.
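
For example, a minimal sketch of removing the leftover PVCs, assuming the release was installed in a namespace called suse-observability (adjust the namespace to match your installation):

# List the PVCs that are still present in the namespace
kubectl get pvc --namespace suse-observability

# Delete all PVCs in the namespace; the bound PVs (and their data) are also
# removed if the storage class reclaim policy is Delete
kubectl delete pvc --all --namespace suse-observability

# Alternatively, delete the entire namespace, including its PVCs
kubectl delete namespace suse-observability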

For production environments, NFS is neither recommended nor supported for storage provisioning in SUSE Observability, because of the risk of data corruption.

Customize storage

You can customize the storageClass and size settings for the different volumes in the Helm chart. The example values files below show how to change the storage class or the volume sizes; they can be merged to change both at the same time. For the volume sizes, an example is provided for both the HA and non-HA setup, matching the sizing profile chosen during installation.

  • Changing storage class

  • Changing volume size for HA

  • Changing volume size for non-HA

Changing storage class:

global:
  # The storage class for all of the persistent volumes
  storageClass: "standard"
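
To find out which storage classes are available on your cluster, and which one is the default, you can list them with kubectl (the default class is marked as such in the output):

# List the available storage classes, their provisioners and reclaim policies
kubectl get storageclass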

Changing volume size for HA:

clickhouse:
  persistence:
    # Size of persistent volume for each clickhouse pod
    size: 50Gi
elasticsearch:
  volumeClaimTemplate:
    resources:
      requests:
        # size of volume for each Elasticsearch pod
        storage: 250Gi

hbase:
  hdfs:
    datanode:
      persistence:
        # size of volume for HDFS data nodes
        size: 250Gi

    namenode:
      persistence:
        # size of volume for HDFS name nodes
        size: 20Gi


kafka:
  persistence:
    # size of persistent volume for each Kafka pod
    size: 100Gi


zookeeper:
  persistence:
    # size of persistent volume for each Zookeeper pod
    size: 8Gi

victoria-metrics-0:
  server:
    persistentVolume:
      # size of persistent volume for this VictoriaMetrics instance
      size: 250Gi
victoria-metrics-1:
  server:
    persistentVolume:
      # size of persistent volume for this VictoriaMetrics instance
      size: 250Gi

stackstate:
  components:
    checks:
      tmpToPVC:
        volumeSize: 2Gi
    healthSync:
      tmpToPVC:
        volumeSize: 2Gi
    state:
      tmpToPVC:
        volumeSize: 2Gi
    sync:
      tmpToPVC:
        volumeSize: 2Gi
    vmagent:
      persistence:
        size: 10Gi
  experimental:
    storeTransactionLogsToPVC:
      volumeSize: 600Mi
  stackpacks:
    pvc:
      size: 1Gi

backup:
  configuration:
    scheduled:
      pvc:
        size: 1Gi
minio:
  persistence:
    size: 500Gi

Changing volume size for non-HA:

clickhouse:
  persistence:
    # Size of persistent volume for each clickhouse pod
    size: 50Gi

elasticsearch:
  volumeClaimTemplate:
    resources:
      requests:
        # size of volume for each Elasticsearch pod
        storage: 250Gi

hbase:
  stackgraph:
    persistence:
      # Size of persistent volume for the single stackgraph hbase pod
      size: 100Gi

kafka:
  persistence:
    # size of persistent volume for each Kafka pod
    size: 100Gi


zookeeper:
  persistence:
    # size of persistent volume for each Zookeeper pod
    size: 8Gi

victoria-metrics-0:
  server:
    persistentVolume:
      # size of persistent volume for the VictoriaMetrics pod
      size: 50Gi

stackstate:
  components:
    checks:
      tmpToPVC:
        volumeSize: 2Gi
    healthSync:
      tmpToPVC:
        volumeSize: 2Gi
    state:
      tmpToPVC:
        volumeSize: 2Gi
    sync:
      tmpToPVC:
        volumeSize: 2Gi
    vmagent:
      persistence:
        size: 10Gi
  experimental:
    storeTransactionLogsToPVC:
      volumeSize: 600Mi
  stackpacks:
    localpvc:
      size: 1Gi
    pvc:
      size: 1Gi

backup:
  configuration:
    scheduled:
      pvc:
        size: 1Gi
minio:
  persistence:
    size: 500Gi

The non-HA example above corresponds to the largest non-HA sizing profile, which is intended to observe 100 nodes and retain data for 2 weeks.
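
To apply a customized values file, pass it to Helm when installing or upgrading the release, in addition to the values files you normally use. A minimal sketch, assuming the release and namespace are both called suse-observability, the overrides above are saved as storage_values.yaml, and the chart is referenced as suse-observability/suse-observability (adjust all of these names to match your own installation):

# Upgrade or install the release with the storage overrides merged in on top
# of the regular values file(s)
helm upgrade --install suse-observability suse-observability/suse-observability \
  --namespace suse-observability \
  --values values.yaml \
  --values storage_values.yaml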