### Motivation
This is essentially the same as https://github.com/apache/pulsar-helm-chart/pull/176. Without this change, an init pod can fail and remain in the `Error` state even though a subsequent pod succeeded. This change prevents misleading errors.
### Modifications
* Replace `Never` with `OnFailure`
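A minimal sketch of the resulting Job pod spec; the name and command are illustrative, and the chart's real template carries more fields:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pulsar-init   # hypothetical name
spec:
  template:
    spec:
      # OnFailure lets the kubelet retry the container in place instead of
      # leaving a failed pod behind in Error state while a later pod succeeds.
      restartPolicy: OnFailure
      containers:
        - name: init
          image: apachepulsar/pulsar:latest
          command: ["sh", "-c", "echo 'initialization work goes here'"]
```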
### Verifying this change
This is a trivial change.
Co-authored-by: Michael Marshall <mmarshall@apache.org>
### Motivation
There was a suggestion [in a dev mailing list discussion](https://lists.apache.org/thread/bgkvcyt1qq6h67p2k8xwp89xlncbqn3d) that the Helm chart's appVersion should be used as the default image tag.
### Additional context
There are some limitations in Helm: it is not possible to set "appVersion" from the command line. There is an open feature request, https://github.com/helm/helm/issues/8194, to add such a feature to Helm.
### Modifications
- change default values.yaml and set the tags for the images that use the Pulsar image to an empty value
- add "defaultPulsarImageTag" to values.yaml
- add a helper template "pulsar.imageFullName" that falls back to `.Values.defaultPulsarImageTag` and, if that is not set, to `.Chart.AppVersion` (a sketch follows this list)
- use the helper template in all other templates that require the logic
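Roughly, such a helper could look like the sketch below; the argument shape (a `dict` with `image` and `root`) and the call site are assumptions, only the fallback order comes from the list above:
```yaml
{{/*
pulsar.imageFullName: resolve "<repository>:<tag>" for images that use the
Pulsar image. An empty per-image tag falls back to .Values.defaultPulsarImageTag
and finally to .Chart.AppVersion.
*/}}
{{- define "pulsar.imageFullName" -}}
{{- $tag := .image.tag | default .root.Values.defaultPulsarImageTag | default .root.Chart.AppVersion -}}
{{- printf "%s:%s" .image.repository $tag -}}
{{- end -}}
```
A template would then render, for example, the broker image with something like `image: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.broker "root" .) }}"`.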
### Motivation
When using the standard bookkeeper installation on a cluster with PodSecurityPolicies (PSP), cluster initialization fails because it has to be started as root.
### Modifications
Add the same ServiceAccount and SecurityContext to bookkeeper-cluster-initialize as in the bookkeeper specification.
UPDATE: It seems that when using in-cluster TLS encryption, other components also require read-write access to the root filesystem, so I added PSP support for proxy, zookeeper, broker, and toolset.
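A hypothetical excerpt of the kind of change made to the `bookkeeper-cluster-initialize` Job template; the helper name and exact security fields are assumptions that mirror what the bookkeeper StatefulSet would declare:
```yaml
spec:
  template:
    spec:
      # Reuse the bookkeeper service account so the pod is matched by the
      # same PodSecurityPolicy as the bookkeeper StatefulSet.
      serviceAccountName: "{{ template "pulsar.fullname" . }}-bookie"
      securityContext:
        # The initialization step needs to run as root, which a default
        # restricted PSP would otherwise forbid.
        runAsUser: 0
        fsGroup: 0
```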
### Verifying this change
- [x] Make sure that the change passes the CI checks.
It remains possible to override the current release namespace by setting
the `namespace` value, though this may lead to having the Helm metadata
and the Pulsar components in different namespaces.
Fixes #66
### Motivation
Trying to deploy the chart in a namespace using the usual Helm pattern fails, for example:
```
kubectl create ns pulsartest
helm upgrade --install pulsar -n pulsartest apache/pulsar
Error: namespaces "pulsar" not found
```
Fixing that while keeping the Helm metadata and the deployed objects in the same namespace requires declaring the namespace twice:
```
kubectl create ns pulsartest
helm upgrade --install pulsar -n pulsartest apache/pulsar --set namespace=pulsartest
```
This is needlessly confusing for newcomers who follow the helm documentation and is contrary to helm best practices.
### Modifications
I changed the chart to use the context namespace `.Release.Namespace` by default while preserving the ability to override it by explicitly providing a namespace on the command line. With this modification, both examples behave as expected.
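A minimal sketch of the fallback, assuming a small helper (name illustrative) that every manifest uses for its `metadata.namespace`:
```yaml
{{/* Prefer an explicitly provided .Values.namespace, otherwise fall back to
     the namespace of the Helm release itself. */}}
{{- define "pulsar.namespace" -}}
{{- default .Release.Namespace .Values.namespace -}}
{{- end -}}
```
Manifests would then declare `namespace: {{ template "pulsar.namespace" . }}`, so `helm upgrade --install pulsar -n pulsartest apache/pulsar` deploys everything into `pulsartest` without any extra flag.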
### Verifying this change
- [x] Make sure that the change passes the CI checks.
Co-authored-by: Elad Dolev <elad@firebolt.io>
### Motivation
Give the ability to deploy a multi-cluster Pulsar instance on K8s clusters with a non-default `clusterDomain`, and to connect to an external configuration store.
### Modifications
- give the ability to change the cluster's name
- give the ability to change `clusterDomain` (see the sketch after this list)
- fix external configuration store functionality
- use broker ports variables
- use label templates, and add `component` label in several places
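A hypothetical `values.yaml` excerpt, only to illustrate where these knobs plug in; the key names and values here are illustrative, not necessarily the chart's exact ones:
```yaml
# Run on a Kubernetes cluster whose DNS domain is not cluster.local and
# register the Pulsar cluster under its own name.
clusterDomain: my-k8s.example                     # non-default Kubernetes cluster domain
clusterName: us-east                              # Pulsar cluster name used in metadata
configurationStore: zk-global.example.com:2181    # external configuration store
```
Service FQDNs built by the templates would then end in `.svc.my-k8s.example` instead of always `.svc.cluster.local`.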
### Verifying this change
- [x] Make sure that the change passes the CI checks.
Fixes #47
### Motivation
Only create the initialize job on install.
### Modifications
- Added an initialize value that can be set to true on install, matching the documentation in the README.md
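A minimal sketch of the gating, assuming the cluster-metadata Job template is simply wrapped in a conditional:
```yaml
{{- if .Values.initialize }}
# ... full cluster-metadata initialization Job manifest ...
{{- end }}
```
A fresh install would then pass something like `--set initialize=true` (as described in the README.md), while upgrades leave the value at its default and skip the Job.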
## Motivation
### Case
I have a physical ZooKeeper cluster and want to configure bookkeeper, broker, and proxy to use it.
So I set `components.zookeeper` to `false`, and the only option I found to set my physical ZooKeeper address was `pulsar.zookeeper.connect`.
But the deployment got stuck in the bookkeeper `wait-zookeeper-ready` init container.
### Issue
The `wait-zookeeper-ready` initContainer in the `bookkeeper-cluster-initialize` Job uses ZooKeeper Service hostnames spliced together by the chart to detect whether ZooKeeper is ready, and the init containers of the other components' Jobs do the same. Those Service hosts are unreachable because I disabled the ZooKeeper component.
## Modifications
- Add an optional `pulsar_metadata.userProvidedZookeepers` config for this case, and make each component's init Job use the user-provided ZooKeeper servers to detect liveness instead of the chart-generated Service hosts (see the sketch below).
- Delete a redundant image reference in the bookkeeper init Job.
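A rough sketch of the resulting configuration; the ZooKeeper addresses and the `pulsar.zookeeper.connect` helper name are illustrative, while the two value keys come from the description above:
```yaml
# values.yaml excerpt: disable the chart's ZooKeeper and point the init
# Jobs at an existing ensemble instead (addresses are examples).
components:
  zookeeper: false
pulsar_metadata:
  userProvidedZookeepers: "zk-0.example.com:2181,zk-1.example.com:2181"
```
Inside the init-container templates the server list would then be chosen with something like:
```yaml
{{- $zkServers := .Values.pulsar_metadata.userProvidedZookeepers | default (include "pulsar.zookeeper.connect" .) -}}
```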