* [Security] Workaround for CVE-2021-44228 Log4J RCE when Log4J >= 2.10.0
- prevents the exploit by disabling message pattern lookups
* Bump the chart version
* Fixes #173: Support both RoleBinding and ClusterRoleBinding depending on rbac.limit_to_namespace
* Rev version
* Get Role/Cluster the right way around
Updates CA name generation to be configurable, allowing a CA to be swapped in.
### Motivation
We recently swapped out cert issuers and found that with the current Helm chart we were unable to do a hot swap without downtime (via Helm) because the CA cert name is not configurable. Being able to change the name of the CA allows us to create a new CA first, validate it, and then swap over in a follow-up apply/release.
### Modifications
Adds the ability to specify the suffix used to generate the CA name (not the whole name, in order to preserve backward compatibility regardless of the release name).
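For example, a minimal values sketch (the value name used here, `tls.ca_suffix`, is an assumption; check the chart's values.yaml for the actual key):
```yaml
tls:
  # Hypothetical key: suffix appended to the release name when generating
  # the CA certificate/secret name, e.g. <release>-ca-tls.
  ca_suffix: ca-tls
```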
Fixes #142
### Motivation
Expose the HTTP port on the ZooKeeper service so we can scrape it with Prometheus
### Modifications
Bug fix to expose HTTP port on ZooKeeper service
Fixes #147
### Motivation
This gives the helm chart user the ability to specify a secret or other type of volume to be mounted into any of the statefulset pods
### Modifications
* Added conditionals to `bookkeeper`, `broker`, `proxy`, `toolset`, and `zookeeper` statefulsets which allow the chart user to specify extraVolumes and extraVolumeMounts for deployed pods.
* Added `extraVolumes` and `extraVolumeMounts` parameters to values.yaml
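For example, a hedged values sketch mounting an extra secret into the broker pods (the secret name and mount path are placeholders):
```yaml
broker:
  extraVolumes:
    - name: extra-secret
      secret:
        secretName: my-extra-secret   # hypothetical, pre-created secret
  extraVolumeMounts:
    - name: extra-secret
      mountPath: /pulsar/extra-secret
      readOnly: true
```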
Fixes #125
### Motivation
The default images in values.yaml are hosted on Docker Hub. This PR allows us to provide image pull secrets for the containers, which lets us get around Docker Hub's rate limiting when the nodes are not logged into Docker Hub.
### Modifications
Added a new template to generate `imagePullSecrets`, and included them in the deployments and statefulsets. This will only add them if they are specified under `images.imagePullSecrets`
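A possible values sketch (the secret name is a placeholder and must already exist in the release namespace):
```yaml
images:
  imagePullSecrets:
    - docker-hub-credentials   # hypothetical secret, e.g. created as a docker-registry secret
```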
### Verifying this change
- [ ] Make sure that the change passes the CI checks.
Fixes #<xyz>
### Motivation
It would be nice to have this option here so people can run admin commands against Prometheus.
### Modifications
Added a new value and modified the deployment, taken from the official Prometheus Helm chart.
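A hedged sketch of what the value could look like (the value name is an assumption; it corresponds to Prometheus' `--web.enable-admin-api` flag):
```yaml
prometheus:
  enableAdminApi: true   # hypothetical value name; adds --web.enable-admin-api to the Prometheus args
```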
### Verifying this change
- [ ] Make sure that the change passes the CI checks.
Fixes #116
### Motivation
There are indentation issues with the `checksum/config` annotation in these templates, which would either fail linting or not apply at all in some situations.
### Modifications
I've added indentation at the specified places such that this isn't an issue anymore.
### Verifying this change
- [ ] Make sure that the change passes the CI checks.
### Motivation
* While component certs can be configured with a custom duration, the CA cert for the self-signed configuration uses default values. It can be convenient to have this certificate expire more than a month out.
### Modifications
* Updates the internal issuer `{{ .Release.Name }}-ca-tls` certificate to make `duration` and `renewBefore` configurable. Does not use `common` so that the CA can be configured to last much longer than individual component certs if desired.
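A possible values sketch (the exact nesting under `certs.internal_issuer` is an assumption):
```yaml
certs:
  internal_issuer:
    # Applies to the {{ .Release.Name }}-ca-tls certificate only,
    # independent of the per-component cert durations.
    duration: 17520h    # ~2 years
    renewBefore: 720h   # renew ~30 days before expiry
```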
### Verifying this change
- [x] Make sure that the change passes the CI checks.
This commit lets users override the apiVersion referenced in this chart so that the chart can be used with newer cert-manager releases.
(script/cert-manager/install-cert-manager.sh installs 0.13.0 while the current version is 1.2.0...)
Fixes #68
### Motivation
The cert-manager apiVersion changed after cert-manager 1.0.0 was released, which prevents the chart from provisioning certificates with newer cert-manager installations because of an incompatible apiVersion.
I have a cluster with cert-manager >1.0.0 installed; making `apiVersion` overridable makes it easy for me to install Pulsar on that cluster.
### Modifications
I introduced the value `certs.internal_issuer.apiVersion`, which by default uses the apiVersion that was previously hardcoded (`cert-manager.io/v1alpha2`)
I replaced all occurrences of that apiVersion with a reference to the value so that users can override it to `cert-manager.io/v1` if they have a newer version of cert-manager installed.
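For a cluster running cert-manager >= 1.0.0 the override looks like this:
```yaml
certs:
  internal_issuer:
    # Default (previously hardcoded): cert-manager.io/v1alpha2
    apiVersion: cert-manager.io/v1
```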
### Verifying this change
- [x] Make sure that the change passes the CI checks.
Adds dynamic superusers configuration
### Motivation
Allow dynamic superuser management. Adding a new superuser entry to `.Values.auth.superUsers` results in the concatenated list being added to the config.
### Modifications
Changed the static list to a dynamic one.
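A values sketch of what this enables (the existing keys shown are assumptions; any extra entry is now picked up automatically):
```yaml
auth:
  superUsers:
    broker: broker-admin   # assumed existing entries
    proxy: proxy-admin
    client: admin
    # Any additional entry is now concatenated into the superuser list as well:
    ops-team: ops-admin
```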
### Motivation
In some cases, my k8s node has only one large-capacity SSD, so to deploy one bookie I need to either:
- partition the SSD into two disks and create two PVs over it, or
- create just one PV over it, with the journal & ledgers under the same mount path (what this PR does).
Neither approach isolates IO for the journal and ledgers, so I prefer the second one for reusability.
### Modifications
values.yaml
- add `useSingleCommonVolume` option, default false
bookkeeper-statefulset.yaml
- mount the only PV to path `/pulsar/data/bookkeeper`
- use configured common storageClassName
bookkeeper-storageclass.yaml
- use configured provisioner for the common storageClass
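A hedged sketch of the resulting values (the nesting under `bookkeeper.volumes` and the `common` block are assumptions):
```yaml
bookkeeper:
  volumes:
    # Hypothetical layout: one PV for both journal and ledgers,
    # mounted at /pulsar/data/bookkeeper
    useSingleCommonVolume: true
    common:
      name: common
      size: 100Gi
      # storageClassName: fast-ssd   # optional, otherwise the common storageClass is used
```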
### Others
This may not be an issue for everyone; if it's not necessary to merge, I'll just use it locally.
### Verifying this change
- [x] Make sure that the change passes the CI checks.
Fixes #94
### Motivation
fix `io.kubernetes.client.openapi.ApiException: Forbidden`
### Modifications
fix typo
### Verifying this change
- [x] Make sure that the change passes the CI checks.
Fixes for wrong namespace handling in some RBAC and missing dnsNames for TLS
### Motivation
Fixes old unused handling of namespace name in RBAC for autorecovery and bookkeeper.
Fixes the Helm "missing key" exception when TLS dnsNames are not defined.
### Modifications
Use the namespace template in the RBAC definitions for bookkeeper and autorecovery. Add an `if` guard around every `toYaml .Values.tls.bookie.dnsNames` clause in the TLS cert definitions.
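The guard looks roughly like this in the cert templates (indentation is illustrative):
```yaml
{{- if .Values.tls.bookie.dnsNames }}
  dnsNames:
{{ toYaml .Values.tls.bookie.dnsNames | indent 4 }}
{{- end }}
```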
### Verifying this change
- [x] Make sure that the change passes the CI checks.
### Motivation
I wanted to use [streamnative/apache-pulsar-grafana-dashboard](https://github.com/streamnative/apache-pulsar-grafana-dashboard) with this Helm chart and my own cluster-wide Prometheus stack, so I decided that using the PodMonitor CRD was a good approach. Unfortunately, the Prometheus config has some metrics relabelings that are required by the Grafana dashboard, so I decided to port them directly to the PodMonitor definitions.
### Modifications
* Added the missing PodMonitor for autorecovery
* Ported the relabelings from `prometheus-configmap.yaml` to each PodMonitor (sketched below)
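A trimmed sketch of one of the PodMonitors (the selector and the specific relabeling are illustrative; the real set mirrors `prometheus-configmap.yaml`):
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: pulsar-bookie
spec:
  selector:
    matchLabels:
      component: bookie
  podMetricsEndpoints:
    - port: http
      path: /metrics
      relabelings:
        - sourceLabels: [__meta_kubernetes_pod_name]
          targetLabel: kubernetes_pod_name   # illustrative, ported from the Prometheus config
```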
### Verifying this change
- [x] Make sure that the change passes the CI checks.
### Motivation
When using the standard bookkeeper installation on a PSP cluster, initialization fails because it has to be started as root.
### Modifications
Add the same ServiceAccount and SecurityContext to bookkeeper-cluster-initialize as in the bookkeeper specification.
UPDATE: It seems that when using in-cluster TLS encryption, other components also require RW access to the root FS, so I added PSPs for proxy, zookeeper, broker, and toolset.
### Verifying this change
- [x] Make sure that the change passes the CI checks.
Fixes #71
### Motivation
Pods do not restart when config maps are changed after updating the values.yaml file, so they need to be restarted manually in order to pick up new values from the config map.
### Modifications
As mentioned, a `restartPodsOnConfigMapChange` flag is added for each component in the values.yaml file to control whether pods restart on configmap changes; the default is `false`.
In the statefulset template for each component, a block is added that sets an annotation containing the hash of the corresponding configmap when `restartPodsOnConfigMapChange` is `true`, which causes the pods to restart whenever that configmap changes (https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments).
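The annotation block looks roughly like this in each statefulset template (the broker is used as the example and the configmap template path is illustrative):
```yaml
{{- if .Values.broker.restartPodsOnConfigMapChange }}
annotations:
  checksum/config: {{ include (print $.Template.BasePath "/broker-configmap.yaml") . | sha256sum }}
{{- end }}
```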
### Verifying this change
- [ ] Make sure that the change passes the CI checks.
Add PSP and add/modify RBAC. I'm open to all discussion.
### Motivation
On clusters which use PSP and a restrictive default policy, Pulsar cannot be installed because it uses the root user and requires a writable container root directory. Additionally, the default RBAC for the broker is too permissive (use of ClusterRoleBinding) in my opinion.
### Modifications
Add PSP and RBAC for bookkeeper and autorecovery to add an exception allowing startup even in a secure environment where containers cannot access RW on root by default.
Add an option for limiting the broker ClusterRoleBinding to a single namespace by replacing it with a RoleBinding.
### Verifying this change
- [x] Make sure that the change passes the CI checks.
Co-authored-by: Sijie Guo <sijie@apache.org>
Fixes inability to validate self-signed certs from external clients
### Motivation
Currently, self-signed certificates can only be used inside the same cluster, as they are issued with internal DNS names and there is no way to append additional values. Some use cases require connecting external clients. This PR aims to allow users to add additional dnsNames (IP or domain) to the self-signed certificates.
### Modifications
* Adds the ability to add `dnsNames` to self-signed certificates to any component like so:
```yaml
tls:
  enabled: true
  proxy:
    enabled: true
    dnsNames:
      - test.example.com
```
### Verifying this change
- [x] Make sure that the change passes the CI checks.
It remains possible to override the current release namespace by setting the `namespace` value, though this may lead to having the Helm metadata and the Pulsar components in different namespaces.
Fixes #66
### Motivation
Trying to deploy the chart in a namespace using the usual Helm pattern fails, for example:
```
kubectl create ns pulsartest
helm upgrade --install pulsar -n pulsartest apache/pulsar
Error: namespaces "pulsar" not found
```
Fixing that while keeping the Helm metadata and the deployed objects in the same namespace requires declaring the namespace twice:
```
kubectl create ns pulsartest
helm upgrade --install pulsar -n pulsartest apache/pulsar --set namespace=pulsartest
```
This is needlessly confusing for newcomers who follow the helm documentation and is contrary to helm best practices.
### Modifications
I changed the chart to use the context namespace `.Release.Namespace` by default, while preserving the ability to override that by explicitly providing a namespace on the command line. With this modification both examples behave as expected.
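A sketch of how such a default can be expressed in a template helper (the helper name is illustrative):
```yaml
{{/* Resolve the namespace: an explicit .Values.namespace wins, otherwise the Helm release namespace is used */}}
{{- define "pulsar.namespace" -}}
{{- default .Release.Namespace .Values.namespace -}}
{{- end -}}
```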
### Verifying this change
- [x] Make sure that the change passes the CI checks.
This allows operation in environments where direct installation of objects into the Kubernetes cluster is not desired or possible, for example when using sealed-secrets or SOPS, where the secrets are first encrypted, then committed into a repository and deployed later by some other deployment system.
Co-authored-by: Jiří Pinkava <jiri.pinkava@rossum.ai>
Signed-off-by: xiaolong.ran <rxl@apache.org>
### Motivation
Bump the image version to 2.6.2
### Verifying this change
- [x] Make sure that the change passes the CI checks.
* Fix "unknown apiVersion: kind.sigs.k8s.io/v1alpha3"
*Motivation*
The api version `kind.sigs.k8s.io/v1alpha3` is not available anymore for kind clusters.
So all the CI actions are broken now. This PR fixes the issue.
Additionally it adds a helm chart lint job to lint the chart changes.
* Trigger CI when kind cluster build script is changed
### Motivation
* ```publishNotReadyAddresses``` is a service spec field and not a service annotation. This is mentioned in the K8s API docs at https://v1-17.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.15/#servicespec-v1-core
### Modifications
* Moved ```publishNotReadyAddresses``` from an annotation to the service spec
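In the rendered manifest the field now sits in the spec rather than under metadata.annotations, roughly:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: pulsar-zookeeper   # illustrative name
spec:
  clusterIP: None
  publishNotReadyAddresses: true
```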
### Verifying this change
- [x] Make sure that the change passes the CI checks.
### Motivation
* It's not recommended to run a production ZooKeeper cluster with forceSync set to "no". This is also mentioned in the forceSync section of https://pulsar.apache.org/docs/en/next/reference-configuration/#zookeeper
### Modifications
* Removed ```-Dzookeeper.forceSync=no``` from ```values.yaml``` as default ```forceSync``` is ```yes```.
Fixes #50
### Motivation
The host option is not required to set up an ingress, so I made it an optional value.
### Modifications
Made setting the host optional.
Co-authored-by: Elad Dolev <elad@firebolt.io>
### Motivation
Give the ability to deploy multi-cluster instance on K8s clusters with non-default `clusterDomain`, and connect to external configuration-store
### Modifications
- give the ability to change cluster's name
- give the ability to change `clusterDomain`
- fix external configuration store functionality
- use broker ports variables
- use label templates, and add `component` label in several places
### Verifying this change
- [x] Make sure that the change passes the CI checks.
Fixes #47
### Motivation
Only create the initialize job on install.
### Modifications
- Added an initialize value that can be set to true on install, matching the documentation in the README.md
Fixes #39
### Motivation
The match expression for the "app" label was incorrect, breaking the anti-affinity since it would never match. Fixing this makes the podAntiAffinity work, but it now requires at least N nodes in the cluster, where N is the size of the largest replica set with affinity. Added an option to set the affinity type to preferredDuringSchedulingIgnoredDuringExecution, where the scheduler will try to follow the affinity but will still deploy a pod if it needs to break it.
### Modifications
- Fixed the app matchExpression
- Added an option to set the affinity type (sketched below)
- Bumped the chart version
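A hedged values sketch (the exact value names are assumptions):
```yaml
affinity:
  anti_affinity: true
  # Hypothetical key: either requiredDuringSchedulingIgnoredDuringExecution (hard)
  # or preferredDuringSchedulingIgnoredDuringExecution (soft)
  type: preferredDuringSchedulingIgnoredDuringExecution
```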
### Verifying this change
- [X] Make sure that the change passes the CI checks.
Fixes #46
### Motivation
There were some templates that relied on extra values that are deprecated.
### Modifications
Modified the checks to accept either the non-deprecated values or the deprecated ones.
### Verifying this change
- [X] Make sure that the change passes the CI checks.
### Motivation
Allow Grafana to be served from a sub path.
### Modifications
- Added a config map to add extra environment variables to the Grafana deployment. As the Grafana image adds new features that require environment variables, this can be used to set them.
- Bumped the Grafana image to allow running behind a reverse proxy.
- Removed ingress annotations, as they are specific to nginx, and to match all the other ingresses.
- Bumped the chart version as per the README.
Example values:
```yaml
grafana:
  configData:
    GRAFANA_ROOT_URL: /pulsar/grafana
    GRAFANA_SERVE_FROM_SUB_PATH: "true"
  ingress:
    enabled: true
    port: 3000
    path: "/pulsar/grafana/?(.*)"
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /$1
```
* Add 'http' port specification to zookeeper statefulset
This makes the zookeeper spec consistent with the other statefulset specs in this chart and provides a port target for custom PodMonitors.
* Added PodMonitors for bookie, broker, proxy, and zookeeper
New PodMonitors are needed for prometheus-operator to pick up scrape targets.
They default to disabled, so users need to opt in to deploy them.
* Added Apache license info to podmonitor yamls