* Add multi-volume support in bookkeeper. (#112)
* Add multi-volume support in the bookkeeper configmap.
Co-authored-by: druidliu <druidliu@tencent.com>
Fixes#112
### Motivation
*Add an option that lets users choose whether to use multiple volumes in bookkeeper, which is especially useful with `local-storage`.*
### Modifications
Add a `useMultiVolumes` option under `.Values.bookkeeper.volumes.journal` and `.Values.bookkeeper.volumes.ledgers`.
Users can choose how many volumes are used for the bookkeeper journal or ledgers.
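A minimal `values.yaml` sketch of the new option. Only `useMultiVolumes` comes from this change; the volume-count key shown here is a hypothetical name, so check the chart's `values.yaml` for the real schema.

```yaml
bookkeeper:
  volumes:
    journal:
      useMultiVolumes: true   # option added by this change
      numVolumes: 2           # hypothetical key: how many journal volumes to create
      size: 10Gi
    ledgers:
      useMultiVolumes: true
      numVolumes: 4           # hypothetical key: how many ledger volumes to create
      size: 50Gi
```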
### Verifying this change
- [x] Make sure that the change passes the CI checks.
* Added -Dlog4j2.formatMsgNoLookups=true to PULSAR_MANAGER_OPTS
* Bump the chart version to release changes
Co-authored-by: Lari Hotari <lhotari@apache.org>
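For illustration, a sketch of the kind of change described above: passing the mitigation flag to pulsar-manager through its JVM options environment variable. Only the flag itself is taken from the change; the surrounding template structure is assumed.

```yaml
# pulsar-manager deployment container env (sketch)
env:
  - name: PULSAR_MANAGER_OPTS
    value: "-Dlog4j2.formatMsgNoLookups=true"  # disables Log4j message pattern lookups
```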
Replace the folded block with a multiline string to work around https://github.com/kubernetes-sigs/kustomize/issues/4201
There are other places where this bug is hit as well, but there the extra generated newline is not significant.
Co-authored-by: Lari Hotari <lhotari@users.noreply.github.com>
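For illustration only, the difference between the two block styles; the extra newline mentioned above comes from how folded blocks are re-emitted.

```yaml
# Folded block scalar (">"): newlines are folded into spaces and the emitter may
# append an extra newline, which triggers the kustomize issue referenced above.
folded: >
  first line
  second line

# Literal block scalar ("|"): content is preserved exactly as written.
literal: |
  first line
  second line
```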
* Update the Ingress API version; `extensions/v1beta1` is not supported in newer Kubernetes versions, and this change keeps backward compatibility with older Kubernetes versions
* Replace the deprecated `Capabilities.KubeVersion.GitVersion` helper with `Capabilities.KubeVersion.Version`
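A common pattern for this kind of compatibility shim, shown as a sketch rather than the chart's exact template; Ingress moved to `networking.k8s.io/v1beta1` in Kubernetes 1.14 and to `networking.k8s.io/v1` in 1.19.

```yaml
{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.Version }}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.Version }}
apiVersion: networking.k8s.io/v1beta1
{{- else }}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
```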
* [Security] Workaround for CVE-2021-44228 Log4J RCE when Log4J >= 2.10.0
- prevents the exploit by disabling message pattern lookups
* Bump the chart version
* Fixes#173 Support both RoleBinding and ClusterRoleBinding depending on `rbac.limit_to_namespace`, as sketched below
* Rev version
* Get Role/Cluster the right way around
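A minimal sketch of the conditional described in the first bullet, keyed off the existing `rbac.limit_to_namespace` value; the resource names, namespace, and subject are illustrative.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: {{ if .Values.rbac.limit_to_namespace }}RoleBinding{{ else }}ClusterRoleBinding{{ end }}
metadata:
  name: example-binding       # illustrative
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: {{ if .Values.rbac.limit_to_namespace }}Role{{ else }}ClusterRole{{ end }}
  name: example-role          # illustrative
subjects:
  - kind: ServiceAccount
    name: example-sa          # illustrative
    namespace: pulsar         # illustrative
```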
Updates CA name generation to be configurable, allowing a different CA to be swapped in.
### Motivation
We recently swapped out cert issuers and found that with the current Helm chart we were unable to do a hot swap without downtime (via Helm) because the CA cert name is not configurable. Being able to change the name of the CA allows us to create a new CA first -> validate -> then swap over in a follow-up apply/release.
### Modifications
Adds the ability to specify the suffix used to generate the CA name (not the whole name, in order to preserve backward compatibility regardless of the release name).
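A sketch of how a configurable suffix can feed into the CA name. The value key `tls.ca_suffix` is a hypothetical name used for illustration; check `values.yaml` for the key this change actually introduces.

```yaml
# values.yaml (sketch)
tls:
  ca_suffix: ca-tls   # hypothetical key; the generated CA name becomes <release-name>-<suffix>

# template usage (sketch)
# secretName: "{{ .Release.Name }}-{{ .Values.tls.ca_suffix }}"
```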
Fixes#142
### Motivation
Expose the HTTP port on the ZooKeeper service so we can scrape it with Prometheus
### Modifications
Bug fix to expose the HTTP port on the ZooKeeper service
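A sketch of the fix: adding the HTTP metrics port to the ZooKeeper Service so Prometheus can scrape it. Port 8000 is an assumption about the metrics port; the other ports are shown only for context.

```yaml
# zookeeper-service.yaml (sketch)
spec:
  ports:
    - name: http
      port: 8000            # assumed Prometheus metrics port
    - name: follower
      port: 2888
    - name: leader-election
      port: 3888
```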
Fixes#147
### Motivation
This gives the helm chart user the ability to specify a secret or other type of volume to be mounted into any of the statefulset pods
### Modifications
* Added conditionals to `bookkeeper`, `broker`, `proxy`, `toolset`, and `zookeeper` statefulsets which allow the chart user to specify extraVolumes and extraVolumeMounts for deployed pods.
* Added `extraVolumes` and `extraVolumeMounts` parameters to values.yaml
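A minimal `values.yaml` sketch of the new parameters, mounting a Secret into the broker pods; the nesting under the component key and all names shown are assumptions.

```yaml
broker:
  extraVolumes:
    - name: my-auth            # illustrative volume name
      secret:
        secretName: my-auth    # illustrative Secret
  extraVolumeMounts:
    - name: my-auth
      mountPath: /pulsar/auth  # illustrative mount path
      readOnly: true
```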
Fixes#125
### Motivation
The default images in values.yaml are hosted on Docker Hub. This PR lets us provide image pull secrets for the containers, so we can work around Docker Hub's rate limiting if the nodes are not logged into Docker Hub.
### Modifications
Added a new template to generate `imagePullSecrets`, and included them in the deployments and statefulsets. This will only add them if they are specified under `images.imagePullSecrets`
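A minimal `values.yaml` sketch. The Secret name is illustrative and would be created separately (for example with `kubectl create secret docker-registry`); whether entries are plain names or `name:` objects depends on the template.

```yaml
images:
  imagePullSecrets:
    - name: regcred   # illustrative docker-registry Secret
```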
### Verifying this change
- [ ] Make sure that the change passes the CI checks.
Fixes #<xyz>
### Motivation
It would be nice to have this option here so people can run admin commands against Prometheus.
### Modifications
Added a new value and modified the deployment, following the official Prometheus Helm chart.
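For illustration, assuming the new value toggles Prometheus's standard `--web.enable-admin-api` flag; the value name `enableAdminApi` is hypothetical.

```yaml
# prometheus deployment args (sketch)
args:
  - "--config.file=/etc/prometheus/prometheus.yml"
  {{- if .Values.prometheus.enableAdminApi }}
  - "--web.enable-admin-api"
  {{- end }}
```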
### Verifying this change
- [ ] Make sure that the change passes the CI checks.
Fixes#116
### Motivation
There are indentation issues with the `checksum/config` annotation in these templates, which would either fail linting or not apply at all in some situations.
### Modifications
I've added indentation at the affected places so that this is no longer an issue.
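The pattern in question, with the indentation Helm expects; the configmap path is illustrative.

```yaml
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/broker-configmap.yaml") . | sha256sum }}
```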
### Verifying this change
- [ ] Make sure that the change passes the CI checks.
### Motivation
* While component certs can be configured with a custom duration, the CA cert for the self-signed configuration uses default values. It can be convenient to have this certificate expire more than a month out.
### Modifications
* Updates the internal issuer `{{ .Release.Name }}-ca-tls` certificate to make `duration` and `renewBefore` configurable. Does not use `common` so that the CA can be configured to last much longer than individual component certs if desired.
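A sketch of what the configuration could look like; the key names and durations are examples, not the chart's confirmed schema.

```yaml
# values.yaml (sketch)
certs:
  internal_issuer:
    duration: 2160h     # 90 days; example value
    renewBefore: 360h   # 15 days; example value

# CA Certificate template usage (sketch)
# duration: {{ .Values.certs.internal_issuer.duration }}
# renewBefore: {{ .Values.certs.internal_issuer.renewBefore }}
```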
### Verifying this change
- [x] Make sure that the change passes the CI checks.
This commit lets users override the apiVersion referenced in this
chart so that the chart can be used with newer cert-manager releases.
(script/cert-manager/install-cert-manager.sh installs 0.13.0 while the
current cert-manager version is 1.2.0...)
Fixes#68
### Motivation
The cert-manager apiVersion changed after cert-manager 1.0.0 was released, which prevents the chart from provisioning certificates with newer cert-manager installations because of an incompatible apiVersion.
I have a cluster with cert-manager >1.0.0 installed; making `apiVersion` overridable makes it easy for me to install Pulsar on that cluster.
### Modifications
I introduced the value `certs.internal_issuer.apiVersion`, which by default uses the apiVersion that was previously hardcoded (`cert-manager.io/v1alpha2`)
I replaced all occurrences of that apiVersion with a reference to the value so that users can override it to `cert-manager.io/v1` if they have a newer version of cert-manager installed.
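A sketch of the override, using the value name introduced by this change; the default stays at the previously hardcoded apiVersion.

```yaml
# values.yaml
certs:
  internal_issuer:
    apiVersion: cert-manager.io/v1   # override for newer cert-manager; default is cert-manager.io/v1alpha2

# Certificate/Issuer template usage (sketch)
# apiVersion: {{ .Values.certs.internal_issuer.apiVersion }}
```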
### Verifying this change
- [x] Make sure that the change passes the CI checks.
Adds dynamic superusers configuration
### Motivation
Allow dynamic superuser management. Adding a new superuser entry to `.Values.auth.superUsers` results in the concatenated list being added to the config.
### Modifications
Change the static list to a dynamic one.
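A minimal sketch of such a dynamic entry, assuming `.Values.auth.superUsers` is a flat list of role names (the chart may instead key the roles by component, in which case the values would need to be collected first).

```yaml
# broker configmap (sketch)
superUserRoles: {{ join "," .Values.auth.superUsers }}
```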
### Motivation
In some cases a k8s node has only one large-capacity SSD, so to deploy one bookie I need to either:
- Partition the SSD into two disks and create two PVs on top of it, or
- Create just one PV, with the journal and ledgers under the same mount path (what this PR does).
Neither approach isolates I/O between the journal and ledgers, so I prefer the second one for reusability.
### Modifications
`values.yaml`
- add `useSingleCommonVolume` option, default false
`bookkeeper-statefulset.yaml`
- mount the only PV to path `/pulsar/data/bookkeeper`
- use configured common storageClassName
`bookkeeper-storageclass.yaml`
- use configured provisioner for the common storageClass
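A `values.yaml` sketch of the option described above; only `useSingleCommonVolume` comes from this change, and the `common` block is an assumption about how the shared volume might be described.

```yaml
bookkeeper:
  volumes:
    useSingleCommonVolume: true   # default: false
    common:                       # hypothetical block for the single shared PV
      name: common
      size: 100Gi
      storageClassName: local-ssd # illustrative storage class
```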
### Others
This may not be an issue for everyone; if it's not necessary to merge, I'll just use it locally.
### Verifying this change
- [x] Make sure that the change passes the CI checks.
Fixes#94
### Motivation
fix `io.kubernetes.client.openapi.ApiException: Forbidden`
### Modifications
fix typo
### Verifying this change
- [x] Make sure that the change passes the CI checks.
Fixes for wrong namespace handling in some RBAC and missing dnsNames for TLS
### Motivation
Fixes old, unused handling of the namespace name in the RBAC definitions for autorecovery and bookkeeper.
Fixes a Helm missing-key exception when TLS dnsNames are not defined.
### Modifications
Use the namespace template in the RBAC definitions for bookkeeper and autorecovery. Add an `if` around every `toYaml .Values.tls.bookie.dnsNames` clause in the TLS cert definitions.
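A sketch of the guard added around the dnsNames block; the surrounding indentation is illustrative.

```yaml
{{- if .Values.tls.bookie.dnsNames }}
  dnsNames:
    {{- toYaml .Values.tls.bookie.dnsNames | nindent 4 }}
{{- end }}
```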
### Verifying this change
- [x] Make sure that the change passes the CI checks.
### Motivation
As I wanted to use [streamnative/apache-pulsar-grafana-dashboard](https://github.com/streamnative/apache-pulsar-grafana-dashboard) with this Helm chart and my own cluster-wide Prometheus stack, I decided that using the PodMonitor CRD is a good approach. Unfortunately, the Prometheus config has some metric relabelings that are required by the Grafana dashboard, so I decided to port them directly to the PodMonitor definitions.
### Modifications
* Added missing PodMonitor for autorecovery
* Ported relabelings from `prometheus-configmap.yaml` to each PodMonitor
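A trimmed PodMonitor sketch showing where the ported relabelings live; the selector, port name, and the single relabeling shown are illustrative rather than the full set from `prometheus-configmap.yaml`.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: broker            # illustrative
spec:
  selector:
    matchLabels:
      component: broker   # illustrative label
  podMetricsEndpoints:
    - port: http          # illustrative port name
      path: /metrics
      relabelings:
        - sourceLabels: [__meta_kubernetes_pod_name]
          action: replace
          targetLabel: kubernetes_pod_name
```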
### Verifying this change
- [x] Make sure that the change passes the CI checks.
### Motivation
When using the standard bookkeeper installation on a PSP-enabled cluster, initialization fails because it has to be started as root
### Modifications
Add the same ServiceAccount and SecurityContext to bookkeeper-cluster-initialize as in the bookkeeper specification.
UPDATE: It seems that when using in-cluster TLS encryption other components also require RW access to the root FS, so I added PSPs for proxy, zookeeper, broker, and toolset
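A rough sketch of the addition to the bookkeeper-cluster-initialize job; the service account name and securityContext values are illustrative and should mirror whatever the bookkeeper statefulset already uses.

```yaml
# bookkeeper-cluster-initialize job template (sketch)
spec:
  template:
    spec:
      serviceAccountName: pulsar-bookie   # illustrative; same SA as the bookkeeper statefulset
      securityContext:                    # illustrative; same context as the bookkeeper statefulset
        runAsUser: 0
        fsGroup: 0
```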
### Verifying this change
- [x] Make sure that the change passes the CI checks.