* Lowered BOOKIE_MEM and PULSAR_MEM in init containers. Default BOOKIE_MEM and PULSAR_MEM settings from conf/pulsar_env.sh and conf/bkenv.sh (-Xms2g -Xmx2g -XX:MaxDirectMemorySize=4g) are too high for low-memory systems.
### Motivation
Enables support for using the Pulsar bookies as persistent state storage for functions.
### Modifications
- Added an option to enable/disable using bookies as state storage
- Adds the extra server components option to BookKeeper to enable the features needed for bookies to be used as state storage
- Adds stateStorageServiceUrl to the broker configmap
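A minimal sketch of the resulting configuration, assuming the key names and service address below (the service name and port are illustrative):
```yaml
# Illustrative sketch (key names and service address assumed): bookies load
# the stream storage component, and brokers are pointed at it.
bookkeeper:
  configData:
    extraServerComponents: "org.apache.bookkeeper.stream.server.StreamStorageLifecycleComponent"
broker:
  configData:
    stateStorageServiceUrl: "bk://pulsar-bookkeeper:4181"
```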
* To address the function role vs clusterrole issue
* Made it backwards compatible
* Updated values.yaml to include the option to limit functions to their namespace
* Added documentation to clarify the new attribute
* moved limit_to_namespace under functions.rbac
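A minimal sketch of the relocated setting, assuming the surrounding keys:
```yaml
# Illustrative sketch (surrounding keys assumed) of the moved attribute:
functions:
  rbac:
    limit_to_namespace: true
```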
* Refactor GitHub Actions CI to a single workflow
* Handle case where "ct lint" fails because of no chart changes
* Re-order scenarios
* Remove excessive default GC logging
* Bump cert-manager version to v1.12.2
* Use compatible cert-manager version
* Install debugging tools (k9s) for ssh access
* Only apply for interactive shells
* Fix JWT symmetric test
* Fix part that was missing from #356
* Install k9s on the fly when k9s is used
- set KUBECONFIG on the fly for kubectl too
* Upgrade kind, chart-releaser and helm versions
* Disable podMonitor for the values-broker-tls.yaml file
- was missing from #317
* Use k8s 1.18.20
* Use ubuntu-20.04 runtime
- k8s < 1.19 doesn't support cgroup v2
* Upgrade to k8s 1.19 as baseline
* Baseline to k8s 1.20
* Set ip family to ipv4
* Add more logging to kind cluster creation
* Simplify duplicate job deletion
* use verbosity flag
* Upgrade to k8s 1.24
* Replace removed tolerate-unready-endpoints annotation with publishNotReadyAddresses
(cherry picked from commit e90926053a2b01bb95529fbaddc8d2ce2cdeec63)
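For context, a minimal sketch of the replacement on a headless Service (the field is standard Kubernetes; the surrounding manifest is assumed):
```yaml
# Illustrative sketch: publishNotReadyAddresses replaces the removed
# service.alpha.kubernetes.io/tolerate-unready-endpoints annotation.
apiVersion: v1
kind: Service
metadata:
  name: pulsar-zookeeper   # hypothetical name
spec:
  clusterIP: None
  publishNotReadyAddresses: true
```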
* Use k8s 1.21 as baseline
* Run on ubuntu-22.04
* Use Pulsar 2.10.4
* Fix PodMonitor name conflicts for multiple releases in same namespace
Signed-off-by: Edward Zeng <jie.zeng@zilliz.com>
* Use pulsar.fullname for PodMonitor name prefix
Signed-off-by: Edward Zeng <jie.zeng@zilliz.com>
Co-authored-by: Michael Marshall <mmarshall@apache.org>
Signed-off-by: Edward Zeng <jie.zeng@zilliz.com>
Fixes #257
### Motivation
Fix PodMonitor name conflicts for multiple releases in same namespace
### Modifications
Use the release name instead of the hardcoded `pulsar.name` for the PodMonitor name, as sketched below.
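A minimal sketch of the renamed metadata, assuming the template structure:
```yaml
# Illustrative sketch (template structure assumed): prefixing with the
# release-scoped fullname keeps PodMonitor names unique per release.
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: {{ template "pulsar.fullname" . }}-broker
```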
### Verifying this change
- [x] Make sure that the change passes the CI checks.
* Add missing license headers and .rat-excludes
* Fix .rat-excludes files
### Motivation
As part of our updated release process, we need to make sure that all relevant files have license headers.
### Modifications
* Add license headers formatted appropriately for each file type
### Verifying this change
The following command shows that the solution is complete:
```shell
$ java -jar ../apache-rat-0.15/apache-rat-0.15.jar . -E .rat-excludes
Ignored 18 lines in your exclusion files as comments or empty lines.
*****************************************************
Summary
-------
Generated at: 2022-10-20T17:54:42-05:00
Notes: 4
Binaries: 1
Archives: 0
Standards: 92
Apache Licensed: 92
Generated Documents: 0
JavaDocs are generated, thus a license header is optional.
Generated files do not require license headers.
0 Unknown Licenses
*****************************************************
Files with Apache License headers will be marked AL
Binary files (which do not require any license headers) will be marked B
Compressed archives will be marked A
Notices, licenses etc. will be marked N
AL ./.asf.yaml
AL ./.rat-excludes
N ./LICENSE
N ./NOTICE
AL ./README.md
AL ./Vagrantfile
AL ./license_test.go
AL ./charts/pulsar/.helmignore
AL ./charts/pulsar/Chart.yaml
N ./charts/pulsar/LICENSE
N ./charts/pulsar/NOTICE
AL ./charts/pulsar/values.yaml
B ./charts/pulsar/charts/kube-prometheus-stack-41.5.1.tgz
AL ./charts/pulsar/templates/_autorecovery.tpl
AL ./charts/pulsar/templates/_bookkeeper.tpl
AL ./charts/pulsar/templates/_broker.tpl
AL ./charts/pulsar/templates/_configurationstore.tpl
AL ./charts/pulsar/templates/_helpers.tpl
AL ./charts/pulsar/templates/_toolset.tpl
AL ./charts/pulsar/templates/_zookeeper.tpl
AL ./charts/pulsar/templates/autorecovery-configmap.yaml
AL ./charts/pulsar/templates/autorecovery-podmonitor.yaml
AL ./charts/pulsar/templates/autorecovery-rbac.yaml
AL ./charts/pulsar/templates/autorecovery-service.yaml
AL ./charts/pulsar/templates/autorecovery-statefulset.yaml
AL ./charts/pulsar/templates/bookkeeper-cluster-initialize.yaml
AL ./charts/pulsar/templates/bookkeeper-configmap.yaml
AL ./charts/pulsar/templates/bookkeeper-pdb.yaml
AL ./charts/pulsar/templates/bookkeeper-podmonitor.yaml
AL ./charts/pulsar/templates/bookkeeper-rbac.yaml
AL ./charts/pulsar/templates/bookkeeper-service.yaml
AL ./charts/pulsar/templates/bookkeeper-statefulset.yaml
AL ./charts/pulsar/templates/bookkeeper-storageclass.yaml
AL ./charts/pulsar/templates/broker-cluster-role-binding.yaml
AL ./charts/pulsar/templates/broker-configmap.yaml
AL ./charts/pulsar/templates/broker-hpa.yaml
AL ./charts/pulsar/templates/broker-pdb.yaml
AL ./charts/pulsar/templates/broker-podmonitor.yaml
AL ./charts/pulsar/templates/broker-rbac.yaml
AL ./charts/pulsar/templates/broker-service-account.yaml
AL ./charts/pulsar/templates/broker-service.yaml
AL ./charts/pulsar/templates/broker-statefulset.yaml
AL ./charts/pulsar/templates/dashboard-deployment.yaml
AL ./charts/pulsar/templates/dashboard-ingress.yaml
AL ./charts/pulsar/templates/dashboard-service.yaml
AL ./charts/pulsar/templates/function-worker-configmap.yaml
AL ./charts/pulsar/templates/keytool.yaml
AL ./charts/pulsar/templates/namespace.yaml
AL ./charts/pulsar/templates/proxy-configmap.yaml
AL ./charts/pulsar/templates/proxy-hpa.yaml
AL ./charts/pulsar/templates/proxy-ingress.yaml
AL ./charts/pulsar/templates/proxy-pdb.yaml
AL ./charts/pulsar/templates/proxy-podmonitor.yaml
AL ./charts/pulsar/templates/proxy-rbac.yaml
AL ./charts/pulsar/templates/proxy-service.yaml
AL ./charts/pulsar/templates/proxy-statefulset.yaml
AL ./charts/pulsar/templates/pulsar-cluster-initialize.yaml
AL ./charts/pulsar/templates/pulsar-manager-admin-secret.yaml
AL ./charts/pulsar/templates/pulsar-manager-configmap.yaml
AL ./charts/pulsar/templates/pulsar-manager-deployment.yaml
AL ./charts/pulsar/templates/pulsar-manager-ingress.yaml
AL ./charts/pulsar/templates/pulsar-manager-service.yaml
AL ./charts/pulsar/templates/tls-cert-internal-issuer.yaml
AL ./charts/pulsar/templates/tls-certs-internal.yaml
AL ./charts/pulsar/templates/toolset-configmap.yaml
AL ./charts/pulsar/templates/toolset-rbac.yaml
AL ./charts/pulsar/templates/toolset-service.yaml
AL ./charts/pulsar/templates/toolset-statefulset.yaml
AL ./charts/pulsar/templates/zookeeper-configmap.yaml
AL ./charts/pulsar/templates/zookeeper-pdb.yaml
AL ./charts/pulsar/templates/zookeeper-podmonitor.yaml
AL ./charts/pulsar/templates/zookeeper-rbac.yaml
AL ./charts/pulsar/templates/zookeeper-service.yaml
AL ./charts/pulsar/templates/zookeeper-statefulset.yaml
AL ./charts/pulsar/templates/zookeeper-storageclass.yaml
AL ./examples/values-bookkeeper-aws.yaml
AL ./examples/values-cs.yaml
AL ./examples/values-jwt-asymmetric.yaml
AL ./examples/values-jwt-symmetric.yaml
AL ./examples/values-local-cluster.yaml
AL ./examples/values-local-pv.yaml
AL ./examples/values-minikube.yaml
AL ./examples/values-no-persistence.yaml
AL ./examples/values-one-node.yaml
AL ./examples/values-tls.yaml
AL ./examples/values-zookeeper-aws.yaml
AL ./hack/common.sh
AL ./hack/kind-cluster-build.sh
AL ./scripts/set-pulsar-version.sh
AL ./scripts/cert-manager/install-cert-manager.sh
AL ./scripts/pulsar/cleanup_helm_release.sh
AL ./scripts/pulsar/common.sh
AL ./scripts/pulsar/common_auth.sh
AL ./scripts/pulsar/generate_token.sh
AL ./scripts/pulsar/generate_token_secret_key.sh
AL ./scripts/pulsar/get_token.sh
AL ./scripts/pulsar/prepare_helm_release.sh
*****************************************************
```
Fixes #309
### Motivation
Fix the metadataPrefix initialization.
### Modifications
* Fix the script by adding `&& echo`
### Verifying this change
I manually verified that this change works and correctly puts the metadata in the prefixed location.
* Allow to use selectors with volumeClaimTemplates
* Fixed naming inconsistency, added null value
Co-authored-by: Claudio Vellage <claudio.vellage@pm.me>
Co-authored-by: Michael Marshall <mmarshall@apache.org>
### Motivation
Currently it's not possible to use selectors with volumeClaimTemplates, which makes it hard or impossible to bind statically provisioned PVs.
### Modifications
Added (optional) selectors to `volumeClaimTemplates` and documented them in the values file; a sketch follows.
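A minimal sketch of the new option, assuming the key placement (the label is illustrative):
```yaml
# Illustrative sketch (key placement assumed): a selector on a volume claim
# template, used to bind a statically provisioned PV by label.
bookkeeper:
  volumes:
    journal:
      selector:
        matchLabels:
          pv-pool: bookkeeper-journal   # hypothetical label
```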
### Verifying this change
- [ ] Make sure that the change passes the CI checks.
* allow specifying the nodeSelector for the init jobs
* Use pulsar_metadata.nodeSelector
Co-authored-by: samuel <samuel.verstraete@aprimo.com>
### Motivation
When deploying Pulsar to an AKS cluster with Windows node pools, I was unable to specify that the release's initialization Jobs had to run on Linux nodes. With this change I can now specify a node selector for the init jobs.
### Modifications
Add nodeSelector support to the pulsar_init and bookie_init jobs, as sketched below.
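A minimal sketch of the new setting, assuming the key placement described above:
```yaml
# Illustrative sketch (key placement assumed): keep the init jobs off
# Windows node pools by selecting Linux nodes.
pulsar_metadata:
  nodeSelector:
    kubernetes.io/os: linux
```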
### Verifying this change
- [ ] Make sure that the change passes the CI checks.
### Motivation
In #269, we added a way to configure external zookeeper servers. However, it was added to the wrong section of the zookeeper config. The `zookeeper.configData` section is mapped directly into the zookeeper configmap.
### Modifications
Move `zookeeper.configData.ZOOKEEPER_SERVERS` to `zookeeper.externalZookeeperServerList`
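A minimal sketch of the corrected placement (the server list is illustrative):
```yaml
# Illustrative sketch: the external server list now lives outside
# configData, which maps directly into the zookeeper configmap.
zookeeper:
  externalZookeeperServerList: "zk-0.example.com:2181,zk-1.example.com:2181"
```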
### Verifying this change
This is a cosmetic change on an unreleased feature.
* Replace monitoring solution with kube-prometheus-stack dependency
* Enable pod monitors
* Download necessary chart dependencies for CI
* Actually run dependency update
* Enable missed podMonitor
* Disable alertmanager by default for feature parity
Related issues: #294, #65
Supersedes #296 and #297
### Motivation
Our helm chart is out of date. I propose we make a breaking change for the monitoring solution and start using the `kube-prometheus-stack` as a dependency. This should make upgrades easier and will let users leverage all of that chart's features.
This change will result in the removal of the StreamNative Grafana Dashboards. We'll need to figure out the right way to address that. The apache/pulsar project has grafana dashboards, but they have not been maintained. With this added dependency, we'll have the benefit of being able to use k8s `ConfigMap`s to configure grafana dashboards.
### Modifications
* Remove old prometheus and grafana configuration
* Add kube-prometheus-stack chart as a dependency
* Enable several components by default. I am not opinionated on these, but it is based on the other values in the chart.
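A minimal sketch of the resulting values, assuming the dependency's toggle keys:
```yaml
# Illustrative sketch (key names assumed): toggles for the new dependency.
kube-prometheus-stack:
  enabled: true
  alertmanager:
    enabled: false   # off by default for feature parity with the old setup
  grafana:
    enabled: true
  prometheus:
    enabled: true
```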
### Verifying this change
This is a large change that will require manual validation, and may break deployments. I propose this triggers a helm chart 3.0.0 release.
* added pdb version detection
* refresh
* Update bookkeeper-pdb.yaml
update the capabilities syntax
* Update broker-pdb.yaml
update capability syntax
* Update proxy-pdb.yaml
update capability version syntax
* Update zookeeper-pdb.yaml
update capability version syntax
* Update zookeeper-pdb.yaml
fix typo
* Update bookkeeper-pdb.yaml
Co-authored-by: Marvin Cai <cai19930303@gmail.com>
Fixes pod disruption budget version warning
### Motivation
The PDB `policy/v1beta1` API version is deprecated in k8s 1.21+ (and removed in 1.25+).
### Modifications
The zookeeper-pdb, proxy-pdb, broker-pdb, and bookkeeper-pdb templates are modified: if the cluster's API resources contain `policy/v1`, the `*-pdb.yaml` templates generate that apiVersion; otherwise they fall back to `policy/v1beta1`, as sketched below.
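One common way to express the check, using Helm's built-in capabilities lookup (the chart's exact template wording is assumed):
```yaml
# Illustrative sketch of the apiVersion selection in each *-pdb.yaml:
{{- if .Capabilities.APIVersions.Has "policy/v1/PodDisruptionBudget" }}
apiVersion: policy/v1
{{- else }}
apiVersion: policy/v1beta1
{{- end }}
kind: PodDisruptionBudget
```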
### Verifying this change
- [ ] Make sure that the change passes the CI checks.
Co-authored-by: Stepan Mazurov <smazurov@quantummetric.com>
### Motivation
In #204, the API version of the cert resources was updated to v1. This was insufficient because `v1` has a different spec from `v1alpha1`.
This MR finishes the work that #204 and @lhotari started.
### Modifications
Changed the spec of the certs to match the v1 cert-manager spec.
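A minimal sketch of a `cert-manager.io/v1` Certificate with illustrative names; the notable difference from `v1alpha1` is that key settings are nested under `privateKey`:
```yaml
# Illustrative sketch (names assumed) of the v1 spec shape:
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: pulsar-tls-broker        # hypothetical
spec:
  secretName: pulsar-tls-broker  # hypothetical
  dnsNames:
    - "*.pulsar-broker.pulsar.svc.cluster.local"
  privateKey:                    # v1alpha1 used flat keyAlgorithm/keySize
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: pulsar-internal-issuer # hypothetical
    kind: Issuer
```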
### Verifying this change
- [ ] Make sure that the change passes the CI checks.
* Bump Apache Pulsar 2.10.1
* Do not bump .Chart.version
* Remove unnecessary jq download that was failing with Permission Denied
Co-authored-by: Michael Marshall <mmarshall@apache.org>
### Motivation
This is essentially the same as https://github.com/apache/pulsar-helm-chart/pull/176. Without this change, an init pod can fail and be in `Error` state even though the second pod succeeded. This will prevent misleading errors.
### Modifications
* Replace `Never` with `OnFailure`
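A minimal sketch of the change in the init Job's pod template (surrounding context assumed):
```yaml
# Illustrative sketch: the init Job's pod spec after the change.
spec:
  template:
    spec:
      restartPolicy: OnFailure   # was: Never
```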
### Verifying this change
This is a trivial change.
Co-authored-by: Michael Marshall <mmarshall@apache.org>
### Motivation
There was a suggestion [in a dev mailing list discussion](https://lists.apache.org/thread/bgkvcyt1qq6h67p2k8xwp89xlncbqn3d) that the Helm chart's appVersion should be used as the default image tag.
### Additional context
There are some limitations in Helm. It is not possible to set "appVersion" from the command line. There's an open feature request (https://github.com/helm/helm/issues/8194) to add such a feature to Helm.
### Modifications
- change default values.yaml and set the tags for the images that use the Pulsar image to an empty value
- add "defaultPulsarImageTag" to values.yaml
- add a helper template "pulsar.imageFullName" that contains the logic to fall back to .Values.defaultPulsarImageTag and, if that's not set, to .Chart.AppVersion (see the sketch after this list)
- use the helper template in all other templates that require the logic
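A minimal sketch of such a helper, assuming how the image values are passed in (the real template may differ):
```yaml
# Illustrative sketch (argument shape assumed) of the fallback chain:
# explicit tag -> .Values.defaultPulsarImageTag -> .Chart.AppVersion
{{- define "pulsar.imageFullName" -}}
{{ .image.repository }}:{{ .image.tag | default .root.Values.defaultPulsarImageTag | default .root.Chart.AppVersion }}
{{- end -}}
```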
Fixes #288
### Motivation
When specifying multiple roles in `.Values.auth.superUsers`, the values are converted to a comma-separated list by piping the dict through `values` and `join` in Helm templating. `values`, however, doesn't guarantee that the order of elements will be the same every time, so it's recommended to also pass the result through `sortAlpha` to sort the list alphabetically.
This is problematic when `.Values.broker.restartPodsOnConfigMapChange` is enabled, because the checksum of the configmap changes every time the list's order changes, resulting in the statefulsets rolling out a new version of the pods.
### Modifications
Pass the list through `sortAlpha`, as sketched below.
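A minimal sketch of the template pipeline after the fix (the exact surrounding template is assumed):
```yaml
# Illustrative sketch: sortAlpha makes the rendered order deterministic,
# so the configmap checksum no longer changes between renders.
superUserRoles: {{ .Values.auth.superUsers | values | sortAlpha | join "," }}
```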
### Verifying this change
- [x] Make sure that the change passes the CI checks.
* Add nodeSelector to cluster initialize pod
* Add option to values file
* Update charts/pulsar/templates/pulsar-cluster-initialize.yaml
Co-authored-by: Michael Marshall <mikemarsh17@gmail.com>
* Fix typo in values
Co-authored-by: Michael Marshall <mikemarsh17@gmail.com>
### Motivation
Add an option to choose where to run the pulsar-cluster-initialize pod. Sometimes it's necessary to run it only on certain nodes.
### Modifications
Added nodeSelector option to the pulsar-cluster-initialize job.
* Add imagePullSecrets for zookeeper
* Add imagePullSecrets for zookeeper
Co-authored-by: Kevin Huynh <khuynh@littlebigcode.fr>
All components except zookeeper already set imagePullSecrets, which the pods need in order to initialize correctly when the registry enforces quota limits.
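A minimal sketch of what the zookeeper pods now render (the secret name is hypothetical):
```yaml
# Illustrative sketch: the zookeeper statefulset's pod spec now includes
# the same pull secrets as the other components.
spec:
  imagePullSecrets:
    - name: my-registry-credentials   # hypothetical
```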
Fixes https://github.com/apache/pulsar-helm-chart/issues/250
### Motivation
`httpNumThreads` is hardcoded to 8 in `charts/pulsar/templates/proxy-configmap.yaml`
When trying to override in `values.yaml` by using `proxy.configData.httpNumThreads` we get an error because the keys get duplicated.
This happens because `{{ toYaml .Values.proxy.configData | indent 2 }}` doesn't deduplicate the keys, and there is no other way to set `httpNumThreads`.
### Modifications
Removing the key from charts/pulsar/templates/proxy-configmap.yaml and adding it to values.yaml solves the problem; see the sketch below.
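A minimal sketch of the override that now works (the default value is taken from the description above):
```yaml
# Illustrative sketch: httpNumThreads is now an ordinary configData entry
# and can be overridden without duplicate keys.
proxy:
  configData:
    httpNumThreads: "8"
```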
### Verifying this change
- [x] Make sure that the change passes the CI checks.
Master Issue: https://github.com/apache/pulsar/issues/11269
### Motivation
Apache Pulsar's docker images for 2.10.0 and above are non-root by default. In order to ensure there is a safe upgrade path, we need to expose the `securityContext` for the Bookkeeper and Zookeeper StatefulSets. Here is the relevant k8s documentation on this k8s feature: https://kubernetes.io/docs/tasks/configure-pod-container/security-context.
Once released, all deployments using the default `values.yaml` configuration for the `securityContext` will pay a one time penalty on upgrade where the kubelet will recursively chown files to be root group writable. It's possible to temporarily avoid this penalty by setting `securityContext: {}`.
### Modifications
* Add config blocks for the `bookkeeper.securityContext` and `zookeeper.securityContext`.
* Default to `fsGroup: 0`. This is already the default group id in the docker image, and the docker image assumes the user has root group permission.
* Default to `fsGroupChangePolicy: "OnRootMismatch"`. This configuration will work for all deployments where the user id is stable. If the user id switches between restarts, like it does in OpenShift, please set to `Always`.
* Remove the GC logging configuration that wrote to a directory the user lacks permission for. (Perhaps we want to write to `/pulsar/log/bookie-gc.log`?)
* Add documentation to the README.
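A minimal sketch of the resulting defaults, taken from the list above (the exact key nesting is assumed):
```yaml
# Illustrative sketch of the new defaults; set fsGroupChangePolicy to
# "Always" on platforms with unstable user ids, such as OpenShift.
bookkeeper:
  securityContext:
    fsGroup: 0
    fsGroupChangePolicy: "OnRootMismatch"
zookeeper:
  securityContext:
    fsGroup: 0
    fsGroupChangePolicy: "OnRootMismatch"
```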
### Verifying this change
I first attempted verification of this change with minikube. It did not work because minikube uses hostPath volumes by default. I then tested on EKS v1.21.9-eks-0d102a7. I tested by deploying the current, latest version of the helm chart (2.9.3) and then upgrading to this PR's version of the helm chart along with using the 2.10.0 docker image. I also tested upgrading from a default version of the chart.
Test 1 is a plain upgrade using the default 2.9.3 version of the chart, then upgrading to this PR's version of the chart with the modification to use the 2.10.0 docker images. It worked as expected.
```bash
$ helm install test apache/pulsar
$ # Wait for chart to deploy, then run the following, which uses Pulsar version 2.10.0:
$ helm upgrade test -f charts/pulsar/values.yaml charts/pulsar/
```
Test 2 is a plain upgrade using the default 2.9.3 version of the chart, then an upgrade to this PR's version of the chart, then an upgrade to this PR's version of the chart using 2.10.0 docker images. There is a minor error described in the `README.md`. The solution is to chown the bookie's data directory.
```bash
$ helm install test apache/pulsar
$ # Wait for chart to deploy, then run the following, which uses Pulsar version 2.9.2:
$ helm upgrade test -f charts/pulsar/values.yaml charts/pulsar/
$ # Upgrade using Pulsar version 2.10.0
$ helm upgrade test -f charts/pulsar/values.yaml charts/pulsar/
```
### GC Logging
In my testing, I ran into the following errors when using `-Xlog:gc:/var/log/bookie-gc.log`:
```
pulsar-bookkeeper-verify-clusterid [0.008s] Error opening log file '/var/log/bookie-gc.log': Permission denied
pulsar-bookkeeper-verify-clusterid [0.008s] Initialization of output 'file=/var/log/bookie-gc.log' using options '(null)' failed.
pulsar-bookkeeper-verify-clusterid [0.005s] Error opening log file '/var/log/bookie-gc.log': Permission denied
pulsar-bookkeeper-verify-clusterid [0.006s] Initialization of output 'file=/var/log/bookie-gc.log' using options '(null)' failed.
pulsar-bookkeeper-verify-clusterid Invalid -Xlog option '-Xlog:gc:/var/log/bookie-gc.log', see error log for details.
pulsar-bookkeeper-verify-clusterid Error: Could not create the Java Virtual Machine.
pulsar-bookkeeper-verify-clusterid Error: A fatal exception has occurred. Program will exit.
pulsar-bookkeeper-verify-clusterid Invalid -Xlog option '-Xlog:gc:/var/log/bookie-gc.log', see error log for details.
pulsar-bookkeeper-verify-clusterid Error: Could not create the Java Virtual Machine.
pulsar-bookkeeper-verify-clusterid Error: A fatal exception has occurred. Program will exit.
```
I resolved the error by removing the setting.
### OpenShift Observations
I wanted to seamlessly support OpenShift, so I investigated configuring the bookkeeper and zookeeper processes with `umask 002` so that they would create files and directories that are group writable (OpenShift has a stable group id, but gives the process a random user id). That worked for most tools when switching the user id, but not for RocksDB, which creates a lock file at `/pulsar/data/bookkeeper/ledgers/current/ledgers/LOCK` with permission `0644`, ignoring the umask. Here is the relevant error:
```
2022-05-14T03:45:06,903+0000 ERROR org.apache.bookkeeper.server.Main - Failed to build bookie server
java.io.IOException: Error open RocksDB database
at org.apache.bookkeeper.bookie.storage.ldb.KeyValueStorageRocksDB.<init>(KeyValueStorageRocksDB.java:199) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
at org.apache.bookkeeper.bookie.storage.ldb.KeyValueStorageRocksDB.<init>(KeyValueStorageRocksDB.java:88) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
at org.apache.bookkeeper.bookie.storage.ldb.KeyValueStorageRocksDB.lambda$static$0(KeyValueStorageRocksDB.java:62) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
at org.apache.bookkeeper.bookie.storage.ldb.LedgerMetadataIndex.<init>(LedgerMetadataIndex.java:68) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
at org.apache.bookkeeper.bookie.storage.ldb.SingleDirectoryDbLedgerStorage.<init>(SingleDirectoryDbLedgerStorage.java:169) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
at org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage.newSingleDirectoryDbLedgerStorage(DbLedgerStorage.java:150) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
at org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage.initialize(DbLedgerStorage.java:129) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
at org.apache.bookkeeper.bookie.Bookie.<init>(Bookie.java:818) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
at org.apache.bookkeeper.proto.BookieServer.newBookie(BookieServer.java:152) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
at org.apache.bookkeeper.proto.BookieServer.<init>(BookieServer.java:120) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
at org.apache.bookkeeper.server.service.BookieService.<init>(BookieService.java:52) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
at org.apache.bookkeeper.server.Main.buildBookieServer(Main.java:304) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
at org.apache.bookkeeper.server.Main.doMain(Main.java:226) [org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
at org.apache.bookkeeper.server.Main.main(Main.java:208) [org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
Caused by: org.rocksdb.RocksDBException: while open a file for lock: /pulsar/data/bookkeeper/ledgers/current/ledgers/LOCK: Permission denied
at org.rocksdb.RocksDB.open(Native Method) ~[org.rocksdb-rocksdbjni-6.10.2.jar:?]
at org.rocksdb.RocksDB.open(RocksDB.java:239) ~[org.rocksdb-rocksdbjni-6.10.2.jar:?]
at org.apache.bookkeeper.bookie.storage.ldb.KeyValueStorageRocksDB.<init>(KeyValueStorageRocksDB.java:196) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
... 13 more
```
As such, in order to support OpenShift, I exposed the `fsGroupChangePolicy`, which allows for OpenShift support, but not necessarily _seamless_ support.
* Bump version to `2.9.2`
* Because the latest Pulsar image is based on Java 11, some JVM params for printing GC information have been removed; change to the new unified logging (`-Xlog`) params. Refer to https://docs.oracle.com/en/java/javase/11/tools/java.html#GUID-BE93ABDC-999C-4CB5-A88B-1994AAAC74D5 and https://issues.redhat.com/browse/CLOUD-3040.
| original param | new param |
| --- | --- |
| `-XX:+PrintGCDetails` | `-Xlog:gc*` |
| `-XX:+PrintGCApplicationStoppedTime` | `-Xlog:safepoint` |
| `-XX:+PrintHeapAtGC` | `-Xlog:gc+heap=trace` |
| `-XX:+PrintGCTimeStamps` | `-Xlog:gc::utctime` |
* remove JVM param `-XX:G1LogLevel=finest`
- NOTICE: we are no longer using "bin/pulsar-zookeeper-ruok.sh" from the apachepulsar/pulsar docker image. The probe script is part of the chart.
* Pass "-q 1" to netcat (nc) to fix issue with Zookeeper ruok probe
- see https://github.com/apache/pulsar/pull/14088
* Send ruok to TLS port when TLS is enabled
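A minimal sketch of the probe idea (the chart ships its own script, so the exact command and port handling are assumed):
```yaml
# Illustrative sketch: "-q 1" closes the connection after sending, and the
# probe succeeds only when ZooKeeper answers "imok".
readinessProbe:
  exec:
    command: ["sh", "-c", "echo ruok | nc -q 1 127.0.0.1 2181 | grep imok"]
```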
* Bump chart version
- allows having multiple Pulsar clusters in different K8s namespaces with the same helm release name
- PodSecurityPolicy is a cluster-level resource, and names would collide without this change