* Set template for CA issuer name and secret name + geo-replication installation example
* Remove geo-replication from this PR
* Use certs template to define CA name and secret name
* Handle proxy, toolset and zookeeper in the same way as the others
* Make the logic more consistent by separating the self-signing issuer configuration
---------
Co-authored-by: GLECROC <guillaume.lecroc@cnp.fr>
Co-authored-by: Lari Hotari <lhotari@users.noreply.github.com>
Co-authored-by: Lari Hotari <lhotari@apache.org>
- Add timeouts for waiting for zk and bk to become available.
- If the waiting gets stuck for some reason, the Pulsar deployment never starts the broker services.
- Timeouts will help failures recover eventually.
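A bounded wait could look roughly like the sketch below (image, service name and timeout are illustrative, not the chart's actual choices):
```
# Illustrative init container: retry the ZooKeeper lookup for ~10 minutes and
# then fail, so the pod gets restarted instead of hanging forever.
initContainers:
  - name: wait-zookeeper-ready
    image: busybox
    command: ["sh", "-c"]
    args:
      - |
        for i in $(seq 1 120); do
          nslookup pulsar-zookeeper && exit 0
          echo "zookeeper not ready yet, retry $i"; sleep 5
        done
        echo "timed out waiting for zookeeper" && exit 1
```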
* Add missing section in values.yaml for pulsar_metadata resources
* Add resources to all init containers and an additional section to specify them in values.yaml (see the sketch after this list)
* Increase memory defaults for init containers
* Remove empty lines
* Add newline to end of file
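For illustration only, the new values.yaml sections for init container resources might look something like this (key names and amounts are indicative, not the chart's exact schema):
```
# Hypothetical values.yaml excerpt: resources for the metadata-initialization
# job and for a component's init containers.
pulsar_metadata:
  resources:
    requests:
      memory: 256Mi
      cpu: 100m
bookkeeper:
  initContainer:
    resources:
      requests:
        memory: 256Mi
        cpu: 100m
```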
* Proposal: service account creation should be decoupled from PodSecurityPolicy.
* Rename *-rbac.yaml to *-psp.yaml and move service account to *-service-account.yaml
* Test with PSP enabled
Co-authored-by: Lari Hotari <lhotari@apache.org>
* Lowered BOOKIE_MEM and PULSAR_MEM in init containers. The default BOOKIE_MEM and PULSAR_MEM settings from conf/pulsar_env.sh and conf/bkenv.sh (-Xms2g -Xmx2g -XX:MaxDirectMemorySize=4g) are too high for low-memory systems.
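Purely as an illustration of the idea (the values actually chosen in the chart may differ), the init containers can run with much smaller JVM settings than those defaults:
```
# Hypothetical env overrides for an init container; a metadata-initialization
# step does not need the 2g heap / 4g direct memory defaults from the image.
env:
  - name: PULSAR_MEM
    value: "-Xms128m -Xmx256m -XX:MaxDirectMemorySize=128m"
  - name: BOOKIE_MEM
    value: "-Xms128m -Xmx256m -XX:MaxDirectMemorySize=128m"
```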
Co-authored-by: Michael Marshall <mmarshall@apache.org>
### Motivation
There was a suggestion [in a dev mailing list discussion](https://lists.apache.org/thread/bgkvcyt1qq6h67p2k8xwp89xlncbqn3d) that the Helm chart's appVersion should be used as the default image tag.
### Additional context
There are some limitations in Helm: it is not possible to set "appVersion" from the command line. There's an open feature request (https://github.com/helm/helm/issues/8194) to add such a feature to Helm.
### Modifications
- change the default values.yaml and set the tags for the images that use the Pulsar image to an empty value
- add "defaultPulsarImageTag" to values.yaml
- add a helper template "pulsar.imageFullName" that contains the logic to fall back to .Values.defaultPulsarImageTag and, if that is not set, to .Chart.AppVersion (a sketch follows this list)
- use the helper template in all other templates that require this logic
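A minimal sketch of such a helper, assuming it is invoked with a dict carrying the per-image config and the chart root context (the real template's calling convention and name resolution may differ):
```
{{/* Hypothetical helper: per-image tag wins, then defaultPulsarImageTag, then the chart appVersion. */}}
{{- define "pulsar.imageFullName" -}}
{{- $tag := .image.tag | default .root.Values.defaultPulsarImageTag | default .root.Chart.AppVersion -}}
{{- printf "%s:%s" .image.repository $tag -}}
{{- end -}}
```
A template could then render the broker image as, for example, `{{ template "pulsar.imageFullName" (dict "image" .Values.images.broker "root" .) }}`.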
* [Security] Workaround for CVE-2021-44228 Log4J RCE when Log4J >= 2.10.0
- prevents the exploit by disabling message pattern lookups (see the sketch below)
* Bump the chart version
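Conceptually, disabling message pattern lookups amounts to one of the following (how the chart actually injects the setting may differ):
```
# Either mechanism disables JNDI message-pattern lookups in Log4j >= 2.10,
# the behavior exploited by CVE-2021-44228.
env:
  - name: LOG4J_FORMAT_MSG_NO_LOOKUPS
    value: "true"
  # or, as a JVM flag appended through the Pulsar start scripts:
  - name: PULSAR_EXTRA_OPTS
    value: "-Dlog4j2.formatMsgNoLookups=true"
```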
Updates CA name generation to be configurable, allowing a new CA to be swapped in.
### Motivation
We recently swapped out cert issuers and found that with the current Helm chart we were unable to do a hot swap without downtime (via Helm) because the CA cert name is not configurable. Being able to change the name of the CA allows us to create a new CA first, validate it, and then swap over in a follow-up apply/release.
### Modifications
Adds the ability to specify the suffix used to generate the CA name (not the whole name, in order to preserve backward compatibility regardless of the release name).
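For example, with a hypothetical values key (the actual key name in the chart may differ), a second CA can be issued alongside the existing one before switching over:
```
# Hypothetical values.yaml excerpt: only the suffix of the generated CA
# issuer/secret name changes, so names still derive from the release name,
# e.g. <release>-ca-v2 instead of <release>-ca.
tls:
  ca_suffix: ca-v2
```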
Fixes #147
### Motivation
This gives the Helm chart user the ability to specify a secret or other type of volume to be mounted into any of the statefulset pods.
### Modifications
* Added conditionals to `bookkeeper`, `broker`, `proxy`, `toolset`, and `zookeeper` statefulsets which allow the chart user to specify extraVolumes and extraVolumeMounts for deployed pods.
* Added `extraVolumes` and `extraVolumeMounts` parameters to values.yaml
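For example, mounting an existing secret into the broker pods might look like this in values.yaml (secret name and mount path are illustrative):
```
broker:
  extraVolumes:
    - name: broker-keytab
      secret:
        secretName: broker-keytab
  extraVolumeMounts:
    - name: broker-keytab
      mountPath: /pulsar/keytab
      readOnly: true
```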
Fixes #125
### Motivation
The default images in the values.yaml are hosted on Docker Hub. This PR allows us to provide image pull secrets for the containers, which lets us get around Docker Hub's rate limiting if the nodes are not logged into Docker Hub.
### Modifications
Added a new template to generate `imagePullSecrets`, and included them in the deployments and statefulsets. This will only add them if they are specified under `images.imagePullSecrets`.
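A values.yaml excerpt for this might look like the following (the secret name is illustrative); the template then renders a matching `imagePullSecrets` entry into each pod spec:
```
# values.yaml
images:
  imagePullSecrets:
    - my-registry-credentials

# rendered into each deployment/statefulset pod spec as:
#   imagePullSecrets:
#     - name: my-registry-credentials
```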
### Verifying this change
- [ ] Make sure that the change passes the CI checks.
### Motivation
When using the standard BookKeeper installation on a PSP-enabled cluster, cluster initialization fails because it has to be started as root.
### Modifications
Add the same ServiceAccount and SecurityContext to bookkeeper-cluster-initialize as in the bookkeeper specification.
UPDATE: It seems that when using in-cluster TLS encryption, other components also require read-write access to the root filesystem, so I added a PSP for proxy, zookeeper, broker and toolset.
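Roughly, the initialization Job's pod template gains the same service account and security context as the component it initializes, along the lines of this sketch (names are illustrative):
```
# Hypothetical excerpt from the bookkeeper-cluster-initialize Job pod template:
# reuse the bookie service account so the PSP bound to it also admits this Job.
spec:
  template:
    spec:
      serviceAccountName: pulsar-bookie
      securityContext:
        runAsUser: 0    # the initialization step needs to run as root
```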
### Verifying this change
- [x] Make sure that the change passes the CI checks.
Fixes #71
### Motivation
Pods do not restart when their config maps change after modifying the values.yaml file, so they have to be restarted manually in order to pick up the new values from the config map.
### Modifications
A `restartPodsOnConfigMapChange` flag for each component is added in the values.yaml file to control whether pods restart on configmap changes; the default is `false`.
When `restartPodsOnConfigMapChange` is `true`, each component's statefulset template adds an annotation containing the hash of the corresponding configmap, which causes the pods to restart whenever that configmap changes (https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments).
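This uses the standard Helm checksum-annotation trick; a sketch for the broker (configmap path and flag location are indicative):
```
# In the broker statefulset's pod template metadata:
{{- if .Values.broker.restartPodsOnConfigMapChange }}
annotations:
  checksum/config: {{ include (print $.Template.BasePath "/broker/broker-configmap.yaml") . | sha256sum }}
{{- end }}
```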
### Verifying this change
- [ ] Make sure that the change passes the CI checks.
It remains possible to override the current release namespace by setting the `namespace` value, though this may lead to having the Helm metadata and the Pulsar components in different namespaces.
Fixes #66
### Motivation
Trying to deploy the chart into a namespace using the usual Helm pattern fails, for example:
```
kubectl create ns pulsartest
helm upgrade --install pulsar -n pulsartest apache/pulsar
Error: namespaces "pulsar" not found
```
Fixing that while keeping the Helm metadata and the deployed objects in the same namespace requires declaring the namespace twice:
```
kubectl create ns pulsartest
helm upgrade --install pulsar -n pulsartest apache/pulsar --set namespace=pulsartest
```
This is needlessly confusing for newcomers who follow the Helm documentation and is contrary to Helm best practices.
### Modifications
I changed the chart to use the release namespace `.Release.Namespace` by default, while preserving the ability to override it by explicitly providing a namespace on the command line. With this modification both examples behave as expected.
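The resolution logic boils down to something like this hypothetical helper (the chart may express it differently):
```
{{/* Hypothetical helper: an explicit .Values.namespace wins, otherwise use the
     namespace the release is installed into (helm -n <ns>). */}}
{{- define "pulsar.namespace" -}}
{{- .Values.namespace | default .Release.Namespace -}}
{{- end -}}
```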
### Verifying this change
- [x] Make sure that the change passes the CI checks.
Co-authored-by: Elad Dolev <elad@firebolt.io>
### Motivation
Give the ability to deploy a multi-cluster Pulsar instance on K8s clusters with a non-default `clusterDomain`, and to connect to an external configuration store.
### Modifications
- give the ability to change the cluster's name
- give the ability to change `clusterDomain` (see the sketch after this list)
- fix external configuration store functionality
- use broker ports variables
- use label templates, and add `component` label in several places
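For illustration, a deployment on a non-default cluster domain with an external configuration store might be configured along these lines (key names are indicative, not necessarily the chart's exact schema):
```
# Hypothetical values.yaml excerpt for one cluster of a multi-cluster instance.
clusterDomain: cluster-2.example.local
clusterName: pulsar-cluster-eu
pulsar_metadata:
  configurationStore: zk-global.example.com:2181
```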
### Verifying this change
- [x] Make sure that the change passes the CI checks.
Fixes #39
### Motivation
The match expression for the "app" label was incorrect, breaking the anti-affinity since the expressions would never match. Fixing this makes the podAntiAffinity work, but it now requires at least N nodes in the cluster, where N is the size of the largest replica set with affinity. Added the option to set the affinity type to preferredDuringSchedulingIgnoredDuringExecution, where the scheduler tries to honor the affinity but will still deploy a pod if it has to break it.
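The soft variant looks roughly like this in the rendered pod spec (label keys and values are indicative):
```
# The scheduler tries to spread broker pods across nodes but will still place a
# pod on an already-used node if there are fewer nodes than replicas.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          topologyKey: kubernetes.io/hostname
          labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - pulsar
              - key: component
                operator: In
                values:
                  - broker
```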
### Modifications
- Fixed app matchExpression
- Added option to set the affinity type
- Bumped the chart version
### Verifying this change
- [X] Make sure that the change passes the CI checks.
## Motivation
### Case
I have a physical zk cluster and want to configure bookkeeper, broker and proxy to use it.
So I set components.zookeeper to false, and the only setting I found for pointing to my physical zk address was pulsar.zookeeper.connect.
But the deploy stage got stuck in the bookkeeper wait-zookeeper-ready container.
### Issue
The wait-zookeeper-ready initContainer in the bookkeeper-cluster-initialize Job uses the assembled zk Service hosts to detect whether zk is ready, and the init Job initContainers of the other components do the same thing. That zk Service is unreachable because I disabled the zk component.
## Modifications
- Add an optional pulsar_metadata.userProvidedZookeepers config for this case, and make each component's init Job use the user-provided zk to detect liveness instead of the assembled Service hosts (see the sketch after this list).
- Delete a redundant image reference in the bookkeeper init Job.
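A sketch of the resulting configuration (hostnames are illustrative):
```
# Hypothetical values.yaml excerpt: the built-in zookeeper component is disabled
# and the wait-zookeeper-ready init containers probe the external ensemble instead.
components:
  zookeeper: false
pulsar_metadata:
  userProvidedZookeepers: "zk-1.example.com:2181,zk-2.example.com:2181"
```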