* Bump version to `2.9.2`
* Because the latest Pulsar image is based on Java 11, some of the JVM parameters for printing GC information have been removed; switch to the new unified logging parameters instead. See https://docs.oracle.com/en/java/javase/11/tools/java.html#GUID-BE93ABDC-999C-4CB5-A88B-1994AAAC74D5 and https://issues.redhat.com/browse/CLOUD-3040.
Original param | New param
--|--
`-XX:+PrintGCDetails` | `-Xlog:gc*`
`-XX:+PrintGCApplicationStoppedTime` | `-Xlog:safepoint`
`-XX:+PrintHeapAtGC` | `-Xlog:gc+heap=trace`
`-XX:+PrintGCTimeStamps` | `-Xlog:gc::utctime`
* Remove the JVM param `-XX:G1LogLevel=finest`
* Add multi-volume support in bookkeeper. (#112)
* Add multi-volume support in the bookkeeper configmap.
Co-authored-by: druidliu <druidliu@tencent.com>
Fixes #112
### Motivation
*Add an option for users to choose whether to use multiple volumes in bookkeeper, especially when using `local-storage`.*
### Modifications
Add a `useMultiVolumes` option under `.Values.bookkeeper.volumes.journal` and `.Values.bookkeeper.volumes.ledgers`.
Users can choose how many volumes are used for the bookkeeper journal or ledgers; a hedged sketch follows.
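A minimal values sketch, assuming the `useMultiVolumes` flag sits inside the existing journal/ledgers volume sections; only `useMultiVolumes` is named by this PR, the companion keys shown are hypothetical:
```
bookkeeper:
  volumes:
    journal:
      useMultiVolumes: true
      # hypothetical companion settings; check values.yaml for the real key names
      volumes:
        - name: journal0
          size: 10Gi
        - name: journal1
          size: 10Gi
    ledgers:
      useMultiVolumes: true
```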
### Verifying this change
- [x] Make sure that the change passes the CI checks.
Updates CA name generation to be configurable, allowing a replacement CA to be swapped in.
### Motivation
We recently swapped out cert issuers and found that with the current helm chart we were unable to do a hot swap without downtime (via helm) because the CA cert name is not configurable. Being able to change the name of the CA allows us to create a new CA first -> validate -> then swap over in a follow-up apply/release.
### Modifications
Adds the ability to specify the suffix used to generate the CA name (not the whole name, in order to preserve backward compatibility regardless of the release name).
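As a hedged illustration only: the value key and default below are assumptions inferred from the generated name `<release>-ca-tls`, not text from this PR:
```
tls:
  # hypothetical key: suffix appended to the release name when generating the CA cert name
  ca_suffix: ca-tls-v2
```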
Fixes #147
### Motivation
This gives the helm chart user the ability to specify a secret or another type of volume to be mounted into any of the statefulset pods.
### Modifications
* Added conditionals to the `bookkeeper`, `broker`, `proxy`, `toolset`, and `zookeeper` statefulsets which allow the chart user to specify `extraVolumes` and `extraVolumeMounts` for deployed pods.
* Added `extraVolumes` and `extraVolumeMounts` parameters to values.yaml (a hedged sketch follows this list)
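A hedged values sketch for one component; the volume name, secret name, and mount path are illustrative only:
```
broker:
  extraVolumes:
    - name: extra-secret
      secret:
        secretName: my-extra-secret
  extraVolumeMounts:
    - name: extra-secret
      mountPath: /pulsar/extra-secret
      readOnly: true
```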
Fixes #<xyz>
### Motivation
It would be nice to have this option here so people can run admin commands against Prometheus.
### Modifications
Added a new value and modified the deployment, taking the approach from the official Prometheus Helm chart.
### Verifying this change
- [ ] Make sure that the change passes the CI checks.
### Motivation
* While component certs can be configured with a custom duration, the CA cert for the self-signed configuration uses default values. It can be convenient to have this certificate expire more than a month out.
### Modifications
* Updates the internal issuer `{{ .Release.Name }}-ca-tls` certificate to make `duration` and `renewBefore` configurable. Does not use `common` so that the CA can be configured to last much longer than individual component certs if desired. A hedged example follows.
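A hedged values sketch, assuming the new settings live under `certs.internal_issuer` alongside the existing issuer options (the placement is an assumption; the values use cert-manager's Go duration format):
```
certs:
  internal_issuer:
    # hypothetical placement: roughly 5 years, renewed two weeks before expiry
    duration: 43800h
    renewBefore: 336h
```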
### Verifying this change
- [x] Make sure that the change passes the CI checks.
This commit lets users override the apiVersion referenced in this
chart so that the chart can be used with newer cert-manager releases.
(script/cert-manager/install-cert-manager.sh installs 0.13.0 while
the current cert-manager release is 1.2.0...)
Fixes #68
### Motivation
The cert-manager apiVersion changed after cert-manager 1.0.0 was released, which prevents the chart from provisioning certificates on newer cert-manager installations because of an incompatible apiVersion.
I have a cluster with cert-manager >1.0.0 installed; making `apiVersion` overridable makes it easy for me to install pulsar on that cluster.
### Modifications
I introduced the value `certs.internal_issuer.apiVersion`, which defaults to the apiVersion that was previously hardcoded (`cert-manager.io/v1alpha2`).
I replaced all occurrences of that apiVersion with a reference to the value so that users can override it to `cert-manager.io/v1` if they have a newer version of cert-manager installed (example below).
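For instance, a values override along these lines (the keys come from this PR; the block is only an illustrative sketch):
```
certs:
  internal_issuer:
    apiVersion: cert-manager.io/v1
```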
### Verifying this change
- [x] Make sure that the change passes the CI checks.
### Motivation
In some cases my k8s node only has one large-capacity SSD; to deploy one bookie on it, I need to either:
- Partition the SSD into 2 disks and create 2 PVs over it.
- Create just 1 PV over it, with journal & ledgers under the same mount path (what this PR does).
Neither approach isolates IO between journal & ledgers, so I prefer the second one for reusability.
### Modifications
values.yaml
- add a `useSingleCommonVolume` option, default `false` (see the sketch below)
bookkeeper-statefulset.yaml
- mount the single PV at path `/pulsar/data/bookkeeper`
- use the configured common storageClassName
bookkeeper-storageclass.yaml
- use the configured provisioner for the common storageClass
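A hedged values sketch; only `useSingleCommonVolume` is named by this PR, the nesting and the companion size/storage-class keys are assumptions:
```
bookkeeper:
  volumes:
    useSingleCommonVolume: true
    # hypothetical companion settings for the shared volume
    common:
      size: 100Gi
      storageClassName: local-ssd
```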
### Others
This may not be an issue for everyone; if it's not necessary to merge, I'll just use it locally.
### Verifying this change
- [x] Make sure that the change passes the CI checks.
### Motivation
As I wanted to use [streamnative/apache-pulsar-grafana-dashboard](https://github.com/streamnative/apache-pulsar-grafana-dashboard) with this helm chart and my own cluster-wide Prometheus stack, I decided that using the PodMonitor CRD is a good approach. Unfortunately the prometheus config has some metric relabelings that are required by the grafana dashboard, so I decided to port them directly into the PodMonitor definitions.
### Modifications
* Added the missing PodMonitor for autorecovery
* Ported the relabelings from `prometheus-configmap.yaml` to each PodMonitor (a hedged sketch follows)
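A minimal sketch of what a PodMonitor with relabelings can look like, assuming the prometheus-operator CRD; the selector labels and the single relabeling shown are illustrative, not the chart's actual rules:
```
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: pulsar-broker
spec:
  selector:
    matchLabels:
      component: broker
  podMetricsEndpoints:
    - port: http
      path: /metrics
      # illustrative relabeling: expose the pod name as a "kubernetes_pod_name" label
      relabelings:
        - sourceLabels: [__meta_kubernetes_pod_name]
          targetLabel: kubernetes_pod_name
```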
### Verifying this change
- [x] Make sure that the change passes the CI checks.
Fixes #71
### Motivation
Pods are not restarted when config maps change after updating the values.yaml file, so they need to be restarted manually in order to pick up new values from the config map.
### Modifications
A `restartPodsOnConfigMapChange` flag is added in the values.yaml file for each component, controlling whether to restart its pods on configmap change; the default is `false`.
The statefulset template for each component gains a section that adds an annotation containing the hash of the corresponding configmap when `restartPodsOnConfigMapChange` is `true`, which causes the pods to restart whenever that configmap changes (https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments). A hedged sketch of the annotation follows.
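A minimal sketch of the checksum-annotation pattern from the linked Helm documentation; the template path is generic, the chart's actual configmap file names will differ:
```
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```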
### Verifying this change
- [ ] Make sure that the change passes the CI checks.
Add PSP and add/modify RBAC. I'm open to any discussion.
### Motivation
On clusters which use PSP and a restrictive default policy, pulsar cannot be installed, because it uses the root user and requires a writable container root directory. Additionally, the default RBAC for the broker is too permissive (usage of ClusterRoleBinding) in my opinion.
### Modifications
Add PSP and RBAC for bookkeeper and autorecovery to add an exception that allows startup even in secure environments where containers cannot write to the root filesystem by default.
Add an option to limit the broker ClusterRoleBinding to a single namespace by replacing it with a RoleBinding.
### Verifying this change
- [x] Make sure that the change passes the CI checks.
It remains possible to override the current release namespace by setting
the `namespace` value, though this may lead to having the helm metadata
and the pulsar components in different namespaces.
Fixes #66
### Motivation
Trying to deploy the chart in a namespace using the usual helm pattern fails, for example:
```
kubectl create ns pulsartest
helm upgrade --install pulsar -n pulsartest apache/pulsar
Error: namespaces "pulsar" not found
```
Fixing that while keeping the helm metadata and the deployed objects in the same namespace requires declaring the namespace twice:
```
kubectl create ns pulsartest
helm upgrade --install pulsar -n pulsartest apache/pulsar --set namespace=pulsartest
Error: namespaces "pulsar" not found
```
This is needlessly confusing for newcomers who follow the helm documentation and is contrary to helm best practices.
### Modifications
I changed the chart to use the context namespace `.Release.Namespace` by default while preserving the ability to override that by explicitly providing a namespace on the command line. With this modification both examples behave as expected.
### Verifying this change
- [x] Make sure that the change passes the CI checks.
Signed-off-by: xiaolong.ran <rxl@apache.org>
### Motivation
Bump the image version to 2.6.2
### Verifying this change
- [x] Make sure that the change passes the CI checks.
### Motivation
* ```publishNotReadyAddresses``` is a service spec field and not a service annotation. This is mentioned in the K8s API docs at https://v1-17.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.15/#servicespec-v1-core
### Modifications
* Moved ```publishNotReadyAddresses``` from an annotation to the service spec, as sketched below
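A minimal sketch of the spec-level field on a Service; the service name and ports are illustrative only:
```
apiVersion: v1
kind: Service
metadata:
  name: pulsar-broker
spec:
  # spec field, not an annotation
  publishNotReadyAddresses: true
  clusterIP: None
  ports:
    - name: http
      port: 8080
```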
### Verifying this change
- [x] Make sure that the change passes the CI checks.
### Motivation
* It's not recommended to run a production ZooKeeper cluster with forceSync set to "no". This is also mentioned in the forceSync section of https://pulsar.apache.org/docs/en/next/reference-configuration/#zookeeper
### Modifications
* Removed ```-Dzookeeper.forceSync=no``` from ```values.yaml```, as the default ```forceSync``` is ```yes```.
Fixes #50
### Motivation
The host option is not required to set up an ingress, so I made it an optional value.
### Modifications
Made setting the host optional.
Co-authored-by: Elad Dolev <elad@firebolt.io>
### Motivation
Give the ability to deploy a multi-cluster instance on K8s clusters with a non-default `clusterDomain`, and to connect to an external configuration store.
### Modifications
- give the ability to change the cluster's name
- give the ability to change `clusterDomain`
- fix the external configuration store functionality
- use the broker port variables
- use label templates, and add a `component` label in several places
### Verifying this change
- [x] Make sure that the change passes the CI checks.
Fixes#47
### Motivation
Only create the initialize job on install.
### Modifications
- Added an `initialize` value that can be set to true on install, matching the documentation in the README.md; a hedged example follows.
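For instance, as an install-time values override (the flag name comes from this PR; treating it as a top-level value is an assumption, and it can equally be passed with `--set initialize=true` on the initial install):
```
# values override used only for the initial install
initialize: true
```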
Fixes #39
### Motivation
The match expression for the "app" label was incorrect, breaking the anti-affinity since the expressions would never match. Fixing this makes the podAntiAffinity work, but it now requires at least N nodes in the cluster, where N = the largest replica set with affinity. Added the option to set the affinity type to preferredDuringSchedulingIgnoredDuringExecution, where it will try to follow the affinity but will still deploy a pod if it needs to break it.
### Modifications
- Fixed the app matchExpression
- Added an option to set the affinity type (a hedged sketch follows this list)
- Bumped the chart version
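A hedged values sketch; the key names here are hypothetical, since this PR only states that the affinity type becomes configurable:
```
affinity:
  anti_affinity: true
  # hypothetical key; either requiredDuringSchedulingIgnoredDuringExecution
  # or preferredDuringSchedulingIgnoredDuringExecution
  type: preferredDuringSchedulingIgnoredDuringExecution
```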
### Verifying this change
- [X] Make sure that the change passes the CI checks.
### Motivation
Allow Grafana to be served from a sub path.
### Modifications
- Added a config map to add extra environment variables to the grafana deployment. As the grafana image adds new features that require environment variables, this can be used to set them.
- Bumped the grafana image to allow a reverse proxy.
- Removed the ingress annotations, as they are specific to nginx, and to match all the other ingresses.
- Bumped the chart version as per the README.
Example values:
```
grafana:
  configData:
    GRAFANA_ROOT_URL: /pulsar/grafana
    GRAFANA_SERVE_FROM_SUB_PATH: "true"
  ingress:
    enabled: true
    port: 3000
    path: "/pulsar/grafana/?(.*)"
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /$1
```
* Add an 'http' port specification to the zookeeper statefulset
This brings the zookeeper spec in line with the other statefulset specs
in this chart and provides a port target for custom podMonitors
* Added PodMonitors for bookie, broker, proxy, and zookeeper
New PodMonitors are needed for prometheus-operator to pick up scrape
targets.
They default to disabled, so users need to opt in to deploy them.
* Added Apache license info to podmonitor yamls
### Motivation
#### Case
I have a physical zk cluster and want to configure bookkeeper & broker & proxy to use it.
So I set components.zookeeper to false, and pulsar.zookeeper.connect was the only place I found to set my physical zk address.
But the deploy stage got stuck in the bookkeeper wait-zookeeper-ready container.
#### Issue
The wait-zookeeper-ready initContainer in the bookkeeper-cluster-initialize Job uses the spliced zk Service hosts to detect whether zk is ready, and the initContainers of the other components' init Jobs do the same thing. But the zk Service is unreachable because I disabled the zk component.
### Modifications
- Add an optional pulsar_metadata.userProvidedZookeepers config for this case, and make each component's init Job use the user-provided zk to detect liveness, instead of the spliced Service hosts (a hedged example follows this list).
- Delete a redundant image reference in the bookkeeper init Job.
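A hedged values sketch; the key comes from this PR, while the host list format and hostnames are illustrative only:
```
pulsar_metadata:
  userProvidedZookeepers: "zk-0.example.com:2181,zk-1.example.com:2181"
```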
### Motivation
We need to be able to change annotations to inject an AWS IAM role (EKS-based deployment).
https://docs.aws.amazon.com/eks/latest/userguide/specify-service-account-role.html
With 2.6.0 and this annotation change we were able to use Tiered Storage with S3 and EKS/IAM (OIDC).
e.g.:
```
annotations:
  eks.amazonaws.com/role-arn: arn:aws:iam::66666:role/my-iam-role-with-s3-access
```
In `values.yaml`:
```
broker:
  service_account:
    annotations:
      eks.amazonaws.com/role-arn: arn:aws:iam::66666:role/my-iam-role-with-s3-access
```
### Modifications
Added a value to allow changing the annotations for the broker service account.
I've tried to follow the style from other parts of the code.
### Verifying this change
- [ ] Make sure that the change passes the CI checks.