172 Commits

Author SHA1 Message Date
Michael Marshall
48501ebe84
Allow bk cluster init to restart on failure (#303)
### Motivation

This is essentially the same as https://github.com/apache/pulsar-helm-chart/pull/176. Without this change, an init pod can fail and remain in the `Error` state even though a subsequent pod succeeded. This change prevents those misleading errors.

### Modifications

* Replace `Never` with `OnFailure`
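
A minimal sketch of what this looks like in the cluster-initialize job spec; the job/container names and command are illustrative, not the chart's exact template:

```yaml
# bookkeeper cluster-initialize job (sketch; names and command illustrative)
apiVersion: batch/v1
kind: Job
spec:
  template:
    spec:
      # Previously "Never": a failed attempt left an Errored pod behind even if a
      # later attempt succeeded. "OnFailure" restarts the same pod until it succeeds.
      restartPolicy: OnFailure
      containers:
        - name: bookkeeper-init
          image: apachepulsar/pulsar:2.10.2
          command: ["sh", "-c", "bin/bookkeeper shell initnewcluster"]
```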

### Verifying this change

This is a trivial change.
2022-10-17 17:59:05 -05:00
Lari Hotari
25f355e6e2
Use appVersion as default tag for Pulsar images (#200)
Co-authored-by: Michael Marshall <mmarshall@apache.org>

### Motivation

There was a suggestion [in a dev mailing list discussion](https://lists.apache.org/thread/bgkvcyt1qq6h67p2k8xwp89xlncbqn3d) that the Helm chart's appVersion should be used as the default image tag.

### Additional context

There are some limitations in Helm. It is not possible to set "appVersion" from the command line. There's an open feature request, https://github.com/helm/helm/issues/8194, to add such a feature to Helm.

### Modifications

- change the default values.yaml so that the tags for images based on the Pulsar image are empty
- add "defaultPulsarImageTag" to values.yaml
- add a helper template "pulsar.imageFullName" that contains the fallback logic: use `.Values.defaultPulsarImageTag` and, if that is not set, fall back to `.Chart.AppVersion` (see the sketch below)
- use the helper template in all other templates that require this logic
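
A rough sketch of the fallback such a helper could implement; the helper name matches the PR description, but the argument structure and the `repository` field are assumptions rather than the chart's exact code:

```yaml
# _helpers.tpl (Helm template, sketch only)
{{- define "pulsar.imageFullName" -}}
{{- /* expects a dict: "image" = an entry like .Values.images.broker, "root" = the chart root context */ -}}
{{- $tag := .image.tag | default .root.Values.defaultPulsarImageTag | default .root.Chart.AppVersion -}}
{{- printf "%s:%s" .image.repository $tag -}}
{{- end -}}

# usage in a statefulset template (sketch)
# image: "{{ include "pulsar.imageFullName" (dict "image" .Values.images.broker "root" .) }}"
```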
2022-10-17 15:42:58 -05:00
Arnar
f3ba780ab5
Alphabetically sort list of super users (#291)
Fixes #288 

### Motivation

When specifying multiple roles in `.Values.auth.superUsers`, the values are converted to a comma-separated list by piping the dict through `values` and `join` in Helm templating. `values`, however, doesn't guarantee that the order of elements will be the same every time, so the result should also be passed through `sortAlpha` to sort the list alphabetically.

This is problematic when `.Values.broker.restartPodsOnConfigMapChange` is enabled, because the checksum of the ConfigMap changes every time the list's order changes, causing the StatefulSets to roll out a new version of the pods.

### Modifications

Pass list through `sortAlpha`.
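
A minimal sketch of the resulting template pipeline (the surrounding ConfigMap context is illustrative); `sortAlpha` makes the rendered list deterministic, so the checksum annotation stays stable:

```yaml
# broker-configmap.yaml (sketch)
superUserRoles: {{ .Values.auth.superUsers | values | sortAlpha | join "," }}
```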

### Verifying this change

- [x] Make sure that the change passes the CI checks.
2022-10-17 14:36:22 -05:00
Aliaksandr Shulyak
8b42a61f2e
Add nodeSelector to cluster initialize pod (#284)
* Add nodeSelector to cluster initialize pod

* Add option to values file

* Update charts/pulsar/templates/pulsar-cluster-initialize.yaml

Co-authored-by: Michael Marshall <mikemarsh17@gmail.com>

* Fix typo in values

Co-authored-by: Michael Marshall <mikemarsh17@gmail.com>

### Motivation

Add an option to choose where the pulsar-cluster-initialize pod runs. Sometimes it is necessary to run it only on certain nodes.

### Modifications

Added nodeSelector option to the pulsar-cluster-initialize job.
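
A hypothetical values.yaml entry and the corresponding job-template guard; the exact key path used by the chart may differ:

```yaml
# values.yaml (sketch; key path assumed)
pulsar_metadata:
  nodeSelector:
    node-role.kubernetes.io/worker: "true"   # illustrative label
```

```yaml
# pulsar-cluster-initialize.yaml pod spec (sketch)
    spec:
      {{- with .Values.pulsar_metadata.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
```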
2022-10-14 13:44:47 -05:00
HuynhKevin
3c59b43f28
Add imagePullSecrets zookeeper (#244)
* Add imagePullSecrets for zookeeper

* Add imagePullSecrets for zookeeper

Co-authored-by: Kevin Huynh <khuynh@littlebigcode.fr>

All components except ZooKeeper already support `imagePullSecrets`, which is needed so pods can pull images correctly without hitting registry rate limits; this adds the same support to ZooKeeper.
2022-06-26 00:01:48 -05:00
Filipe Caixeta
c05f659ff4
make proxy httpNumThreads configurable (#251)
Fixes https://github.com/apache/pulsar-helm-chart/issues/250

### Motivation

`httpNumThreads` is hardcoded to 8 in `charts/pulsar/templates/proxy-configmap.yaml`.
When trying to override it in `values.yaml` via `proxy.configData.httpNumThreads`, we get an error because the key is duplicated.
This happens because `{{ toYaml .Values.proxy.configData | indent 2 }}` doesn't deduplicate keys, and there is no other way to set `httpNumThreads`.

### Modifications

Removing the key from `charts/pulsar/templates/proxy-configmap.yaml` and adding it to values.yaml solves the problem.
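
A minimal sketch of the resulting configuration, with the default taken from the previously hardcoded value:

```yaml
# values.yaml (sketch)
proxy:
  configData:
    httpNumThreads: "8"   # previously hardcoded in proxy-configmap.yaml; now overridable here
```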

### Verifying this change

- [x] Make sure that the change passes the CI checks.
2022-06-25 23:57:30 -05:00
Marvin Cai
c6ab1d18e3
Support defining extra env for broker and proxy statefulsets. (#273) 2022-06-20 07:59:43 -07:00
Michael Marshall
428736c788
Add bk, zk securityContext to support upgrade to non-root docker image (#266)
Master Issue: https://github.com/apache/pulsar/issues/11269

### Motivation

Apache Pulsar's docker images for 2.10.0 and above are non-root by default. In order to ensure there is a safe upgrade path, we need to expose the `securityContext` for the Bookkeeper and Zookeeper StatefulSets. Here is the relevant k8s documentation on this k8s feature: https://kubernetes.io/docs/tasks/configure-pod-container/security-context.

Once released, all deployments using the default `values.yaml` configuration for the `securityContext` will pay a one-time penalty on upgrade, where the kubelet recursively chowns files to make them root-group-writable. It's possible to temporarily avoid this penalty by setting `securityContext: {}`.

### Modifications

* Add config blocks for the `bookkeeper.securityContext` and `zookeeper.securityContext`.
* Default to `fsGroup: 0`. This is already the default group id in the docker image, and the docker image assumes the user has root group permission.
* Default to `fsGroupChangePolicy: "OnRootMismatch"`. This configuration will work for all deployments where the user id is stable. If the user id switches between restarts, like it does in OpenShift, please set to `Always`.
* Remove the GC logging configuration that writes to a directory the user lacks permission to write to. (Perhaps we want to write to `/pulsar/log/bookie-gc.log`?)
* Add documentation to the README.
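
A sketch of the corresponding values.yaml defaults described above (the same block is assumed for `zookeeper`):

```yaml
# values.yaml (sketch)
bookkeeper:
  securityContext:
    fsGroup: 0                               # matches the image's default group id
    fsGroupChangePolicy: "OnRootMismatch"    # set to "Always" if the user id changes between restarts (e.g. OpenShift)
```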

### Verifying this change

I first attempted verification of this change with minikube. It did not work because minikube uses hostPath volumes by default. I then tested on EKS v1.21.9-eks-0d102a7. I tested by deploying the current, latest version of the helm chart (2.9.3) and then upgrading to this PR's version of the helm chart along with using the 2.10.0 docker image. I also tested upgrading from a default version 

Test 1 is a plain upgrade using the default 2.9.3 version of the chart, then upgrading to this PR's version of the chart with the modification to use the 2.10.0 docker images. It worked as expected.

```bash
$ helm install test apache/pulsar
$ # Wait for chart to deploy, then run the following, which uses Pulsar version 2.10.0:
$  helm upgrade test -f charts/pulsar/values.yaml charts/pulsar/
```

Test 2 is a plain upgrade using the default 2.9.3 version of the chart, then an upgrade to this PR's version of the chart, then an upgrade to this PR's version of the chart using 2.10.0 docker images. There is a minor error described in the `README.md`. The solution is to chown the bookie's data directory.

```bash
$ helm install test apache/pulsar
$ # Wait for chart to deploy, then run the following, which uses Pulsar version 2.9.2:
$  helm upgrade test -f charts/pulsar/values.yaml charts/pulsar/
$ # Upgrade using Pulsar version 2.10.0
$  helm upgrade test -f charts/pulsar/values.yaml charts/pulsar/
```

### GC Logging

In my testing, I ran into the following errors when using `-Xlog:gc:/var/log/bookie-gc.log`:

```
pulsar-bookkeeper-verify-clusterid [0.008s] Error opening log file '/var/log/bookie-gc.log': Permission denied
pulsar-bookkeeper-verify-clusterid [0.008s] Initialization of output 'file=/var/log/bookie-gc.log' using options '(null)' failed.
pulsar-bookkeeper-verify-clusterid [0.005s] Error opening log file '/var/log/bookie-gc.log': Permission denied
pulsar-bookkeeper-verify-clusterid [0.006s] Initialization of output 'file=/var/log/bookie-gc.log' using options '(null)' failed.
pulsar-bookkeeper-verify-clusterid Invalid -Xlog option '-Xlog:gc:/var/log/bookie-gc.log', see error log for details.
pulsar-bookkeeper-verify-clusterid Error: Could not create the Java Virtual Machine.
pulsar-bookkeeper-verify-clusterid Error: A fatal exception has occurred. Program will exit.
pulsar-bookkeeper-verify-clusterid Invalid -Xlog option '-Xlog:gc:/var/log/bookie-gc.log', see error log for details.
pulsar-bookkeeper-verify-clusterid Error: Could not create the Java Virtual Machine.
pulsar-bookkeeper-verify-clusterid Error: A fatal exception has occurred. Program will exit.
```

I resolved the error by removing the setting.

### OpenShift Observations

I wanted to seamlessly support OpenShift, so I investigated configuring the bookkeeper and zookeeper processes with `umask 002` so that they would create files and directories that are group writable (OpenShift has a stable group id, but gives the process a random user id). That worked for most tools when switching the user id, but not for RocksDB, which creates a lock file at `/pulsar/data/bookkeeper/ledgers/current/ledgers/LOCK` with permission `0644`, ignoring the umask. Here is the relevant error:

```
2022-05-14T03:45:06,903+0000  ERROR org.apache.bookkeeper.server.Main - Failed to build bookie server
java.io.IOException: Error open RocksDB database
    at org.apache.bookkeeper.bookie.storage.ldb.KeyValueStorageRocksDB.<init>(KeyValueStorageRocksDB.java:199) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.bookie.storage.ldb.KeyValueStorageRocksDB.<init>(KeyValueStorageRocksDB.java:88) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.bookie.storage.ldb.KeyValueStorageRocksDB.lambda$static$0(KeyValueStorageRocksDB.java:62) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.bookie.storage.ldb.LedgerMetadataIndex.<init>(LedgerMetadataIndex.java:68) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.bookie.storage.ldb.SingleDirectoryDbLedgerStorage.<init>(SingleDirectoryDbLedgerStorage.java:169) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage.newSingleDirectoryDbLedgerStorage(DbLedgerStorage.java:150) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage.initialize(DbLedgerStorage.java:129) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.bookie.Bookie.<init>(Bookie.java:818) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.proto.BookieServer.newBookie(BookieServer.java:152) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.proto.BookieServer.<init>(BookieServer.java:120) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.server.service.BookieService.<init>(BookieService.java:52) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.server.Main.buildBookieServer(Main.java:304) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.server.Main.doMain(Main.java:226) [org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.server.Main.main(Main.java:208) [org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
Caused by: org.rocksdb.RocksDBException: while open a file for lock: /pulsar/data/bookkeeper/ledgers/current/ledgers/LOCK: Permission denied
    at org.rocksdb.RocksDB.open(Native Method) ~[org.rocksdb-rocksdbjni-6.10.2.jar:?]
    at org.rocksdb.RocksDB.open(RocksDB.java:239) ~[org.rocksdb-rocksdbjni-6.10.2.jar:?]
    at org.apache.bookkeeper.bookie.storage.ldb.KeyValueStorageRocksDB.<init>(KeyValueStorageRocksDB.java:196) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    ... 13 more
```

As such, in order to support OpenShift, I exposed `fsGroupChangePolicy`, which allows OpenShift to work, though not necessarily _seamlessly_.
2022-06-13 22:11:13 -05:00
Frank Kelly
bfb6985de8
Add support for Horizontal Pod Autoscaling for Broker and Proxy. (#262)
* Add support for Horizontal Pod Autoscaling for Broker and Proxy.

* Add license
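
A hypothetical values.yaml sketch of what broker autoscaling might look like; the key names and thresholds are assumptions, not the chart's exact schema:

```yaml
# values.yaml (sketch; key names assumed)
broker:
  autoscaling:
    enabled: true
    minReplicas: 3
    maxReplicas: 6
    metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 80
```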
2022-05-06 08:04:13 -06:00
Chirag Modi
192b3ca2ef
Remove completed init jobs using ttl (#235)
* feat: added ttlSecondsAfterFinished configuration to delete completed jobs

* added comments for clarification
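
For reference, `ttlSecondsAfterFinished` is a standard Kubernetes Job field; a minimal sketch of how it could be wired into the init-job templates (the values key names are assumptions):

```yaml
# cluster/bookkeeper initialize job (sketch; values key names assumed)
apiVersion: batch/v1
kind: Job
spec:
  {{- if .Values.job.ttl.enabled }}
  ttlSecondsAfterFinished: {{ .Values.job.ttl.secondsAfterFinished | default 600 }}
  {{- end }}
```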
2022-02-23 08:24:37 -08:00
Lari Hotari
1c4f745941
Improve Zookeeper "ruok" probes: use TLS port when TLS is enabled, specify "-q 1" for nc (#223)
- NOTICE: we are no longer using "bin/pulsar-zookeeper-ruok.sh" from the apachepulsar/pulsar docker image. The probe script is part of the chart.

* Pass "-q 1" to netcat (nc) to fix issue with Zookeeper ruok probe

- see https://github.com/apache/pulsar/pull/14088

* Send ruok to TLS port when TLS is enabled

* Bump chart version
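
A sketch of what the resulting exec probe roughly looks like; the chart ships its own probe script, and the port and grep target are illustrative:

```yaml
# zookeeper-statefulset.yaml probe (sketch)
readinessProbe:
  exec:
    command:
      - sh
      - -c
      # "-q 1" makes nc exit shortly after sending "ruok" instead of waiting for
      # more input, so the probe cannot hang; when TLS is enabled, the chart's
      # script targets the TLS port instead.
      - 'echo ruok | nc -q 1 localhost 2181 | grep imok'
```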
2022-02-17 07:48:20 +02:00
Frank Kelly
9613ee0292
Make PodSecurityPolicy name unique in k8s cluster when rbac.limit_to_namespace is true (#224)
- allows having multiple Pulsar clusters in different K8s namespaces that share the same Helm release name
  - PodSecurityPolicy is a cluster-level resource, so the name would collide without this change
2022-02-04 10:41:10 +02:00
MMeent
c0a8c1b97f
Use the 'pulsar.matchLabels' template for matching components of this chart. (#118)
This also limits the scope of the PodMonitors to the resources of only this install, instead of all installs that share `component:` label values.

Co-authored-by: Matthias van de Meent <matthias.vandemeent@cofano.nl>
2022-01-26 15:38:52 +02:00
Lari Hotari
22f4b9b3bd
Wrap Zookeeper probe script with timeout command (#214)
so that the probe doesn't continue running indefinitely

- resolves the issue with Kubernetes <1.20
  "Before Kubernetes 1.20, the field timeoutSeconds was not respected for exec probes:
    probes continued running indefinitely, even past their configured deadline,
    until a result was returned."
    in https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes

- #179 already fixed the issue for Kubernetes 1.20+
2022-01-26 15:17:15 +02:00
Shen Liu
1b3e875ba2
Fix CI error caused by a wrong if-clause block. (#208)
Co-authored-by: druidliu <druidliu@tencent.com>
2022-01-25 07:44:08 +02:00
Shen Liu
91f8b6f6b1
Add multi volume support in bookkeeper. (#113)
* Add multi volume support in bookkeeper. (#112)

* Add multi volumes support in bookkeeper configmap.

Co-authored-by: druidliu <druidliu@tencent.com>

Fixes #112 

### Motivation

*Add an option for the user to choose whether to use multiple volumes in bookkeeper, especially when using `local-storage`.*

### Modifications

Add a `useMultiVolumes` option under `.Values.bookkeeper.volumes.journal` and `.Values.bookkeeper.volumes.ledgers`.
The user can choose how many volumes to use for the bookkeeper journal or ledgers (see the sketch below).
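
A sketch of how these options could appear in values.yaml; the nested key names and sizes are assumptions:

```yaml
# values.yaml (sketch; nested key names assumed)
bookkeeper:
  volumes:
    journal:
      useMultiVolumes: true
      multiVolumes:
        - name: journal0
          size: 10Gi
        - name: journal1
          size: 10Gi
    ledgers:
      useMultiVolumes: false
```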

### Verifying this change

- [x] Make sure that the change passes the CI checks.
2022-01-22 23:08:07 -06:00
cogito-kyle
adbc6b7fcf
Add custom labels to all k8s objects in chart (#201) 2022-01-18 08:47:49 +02:00
csthomas1
ccf78f1c9d
Added -Dlog4j2.formatMsgNoLookups=true to PULSAR_MANAGER_OPTS (#198)
* Added -Dlog4j2.formatMsgNoLookups=true to PULSAR_MANAGER_OPTS

* Bump the chart version to release changes

Co-authored-by: Lari Hotari <lhotari@apache.org>
2022-01-12 10:42:43 +02:00
Aaron Johnson
cee3b5c5e6
added additionalCommand parameter (#150)
Co-authored-by: Aaron Johnson <aaron.johnson@crowdstrike.com>
2022-01-05 10:26:55 -06:00
Frank Kelly
a919f309c6
Add ability to run extra commands in the initialization jobs e.g. to quit istio sidecars (#181) 2022-01-05 16:24:19 +02:00
shaoyue
41dd2f5034
Fix #175 cluster initialize blocked when it fails (#176) 2022-01-05 16:10:09 +02:00
Valeriano Manassero
25e997a425
Automate initialize (#138)
- no need to do "--set initialize=true" anymore
2022-01-05 16:08:11 +02:00
matejhasul
706c8c292b
Workaround kustomize bug in pulsar cluster init (#166)
Replace the folding block with a multiline string to work around https://github.com/kubernetes-sigs/kustomize/issues/4201

There are also other places where this bug is hit, but the extra generated newline is not significant there.

Co-authored-by: Lari Hotari <lhotari@users.noreply.github.com>
2022-01-04 11:08:52 -06:00
Shu.Wang
83bb8bd60f
Conditionally update ingress api version based on k8s version (#183)
* Update the Ingress API version: `extensions/v1beta1` is no longer supported in newer Kubernetes versions; this change keeps backward compatibility with older Kubernetes versions

* Update the deprecated `Capabilities.KubeVersion.GitVersion` to `Capabilities.KubeVersion.Version` (see the sketch below)
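
A sketch of the kind of capability check involved; the exact version cutoffs are illustrative:

```yaml
# ingress template (sketch; version cutoffs illustrative)
{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.Version }}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.Version }}
apiVersion: networking.k8s.io/v1beta1
{{- else }}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
```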
2022-01-04 00:53:21 -06:00
Shu.Wang
0a82ab0f9a
Fixes #177 Fix indentation of component, as it should be under the label tag (#182) 2022-01-03 21:57:45 +02:00
Lari Hotari
d74d08a89d
Use NIOServerCnxnFactory for Zookeeper to fix NPE issues with Pulsar 2.8.x+ (#180)
- follow recommendation in https://github.com/apache/pulsar/issues/11070#issuecomment-936539979
2022-01-03 11:59:58 +01:00
Lari Hotari
b4b2fa7b80
[Security] Workaround for CVE-2021-44228 Log4J RCE when Log4J >= 2.10.0 (#186)
* [Security] Workaround for CVE-2021-44228 Log4J RCE when Log4J >= 2.10.0

- prevents the exploit by disabling message pattern lookups

* Bump the chart version
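
A sketch of how such a workaround is typically applied through JVM options; the exact environment variable the chart sets may differ:

```yaml
# component statefulset env (sketch; variable name may differ in the chart)
env:
  - name: PULSAR_EXTRA_OPTS
    value: "-Dlog4j2.formatMsgNoLookups=true"
```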
2021-12-10 18:30:01 +02:00
Lari Hotari
a16c6bbf19
Make k8s probe timeoutSeconds configurable and set default to 5s for k8s 1.20+ compatibility (#179)
- set to 5 seconds by default

- address compatibility with Kubernetes 1.20+. This impacts "bin/pulsar-zookeeper-ruok.sh" exec probe used in ZK.
  "Before Kubernetes 1.20, the field timeoutSeconds was not respected for exec probes: probes continued running indefinitely, even past their configured deadline, until a result was returned."
   https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes
2021-11-25 08:46:42 +01:00
Frank Kelly
1956a870ff
Fixes #173 Support both Role Binding and Cluster Role Binding dependi… (#174)
* Fixes #173 Support both Role Binding and Cluster Role Binding depending on rbac.limit_to_namespace

* Rev version

* Get Role/Cluster the right way around
2021-11-12 07:56:35 -08:00
Frank Kelly
617308147d
Missing fix for #152. Bookie Service also needs the prefix on the port name (#172)
Fixes #158 (This is the second PR - see also https://github.com/apache/pulsar-helm-chart/pull/162)

### Motivation

* All non-standard port-names need a proper protocol prefix to support Istio
 https://istio.io/latest/docs/ops/configuration/traffic-management/protocol-selection/#explicit-protocol-selection
 
### Modifications

Add the prefix value before `bookie`
2021-11-09 09:18:26 -08:00
Frank Kelly
5b10f48f5b
Fix #152 Add Helm chart support for Istio port naming (attempt 2) (#162)
Fixes #152 

### Motivation

Support prefix in front of port names to abide by Istio protocol rules
https://istio.io/latest/docs/ops/configuration/traffic-management/protocol-selection/#explicit-protocol-selection

### Modifications

Support adding a prefix
- pulsar -> tcp-pulsar
- pulsarssl -> tls-pulsarssl, etc.
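
A sketch of the resulting service port naming; the prefix value keys are assumptions:

```yaml
# broker-service.yaml ports (sketch; prefix value keys assumed)
ports:
  - name: "{{ .Values.tcpPrefix }}pulsar"      # renders e.g. "tcp-pulsar"
    port: 6650
  - name: "{{ .Values.tlsPrefix }}pulsarssl"   # renders e.g. "tls-pulsarssl"
    port: 6651
```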
2021-09-10 08:56:16 +08:00
Peter Tinti
f307cc32af
updates pulsar ca name generation to use suffix making cert swappable (#141)
Updates CA name generation to be configurable allowing the swapping in of a CA.

### Motivation

We recently swapped out cert issuers and found that with the current Helm chart we were unable to do a hot swap without downtime (via Helm) because the CA cert name is not configurable. Being able to change the name of the CA allows us to create a new CA first, validate it, and then swap over in a follow-up apply/release.

### Modifications

Adds the ability to specify the suffix used to generate the CA name (not the whole name, in order to preserve backward compatibility regardless of the release name).
2021-08-25 23:14:03 -07:00
Frank Kelly
65dc68654b
ZooKeeper HTTP port should be exposed by service so we can use prometheus (#143)
Fixes #142 

### Motivation

Expose HTTP Port on ZooKeeper service so we can use Prometheus

### Modifications

Bug fix to expose HTTP port on ZooKeeper service
2021-08-25 23:13:47 -07:00
Aaron Johnson
c45813ffe5
added extraVolumes and extraVolumeMounts (#149)
Fixes #147

### Motivation
This gives the Helm chart user the ability to specify a secret or other type of volume to be mounted into any of the StatefulSet pods.

### Modifications
* Added conditionals to `bookkeeper`, `broker`, `proxy`, `toolset`, and `zookeeper` statefulsets which allow the chart user to specify extraVolumes and extraVolumeMounts for deployed pods.
* Added `extraVolumes` and `extraVolumeMounts` parameters to values.yaml
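
A sketch of how the new parameters could be used; the secret name and mount path are illustrative:

```yaml
# values.yaml (sketch)
broker:
  extraVolumes:
    - name: extra-certs               # illustrative
      secret:
        secretName: extra-certs
  extraVolumeMounts:
    - name: extra-certs
      mountPath: /pulsar/certs/extra  # illustrative
      readOnly: true
```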
2021-08-25 23:13:27 -07:00
Thomas O'Neill
19d6ce6488
Add Support for imagePullSecrets (#140)
Fixes #125

### Motivation

The default images in the values.yaml are hosted on Docker Hub. This PR allows us to provide image pull secrets for the containers, which lets us get around Docker Hub's rate limiting if the nodes are not logged into Docker Hub.

### Modifications

Added a new template to generate `imagePullSecrets` and included it in the deployments and statefulsets. The secrets are only added if they are specified under `images.imagePullSecrets` (see the sketch below).
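
A sketch of how such values could be consumed; the rendering shown is an assumption implied by the description, and the secret name is illustrative:

```yaml
# values.yaml (sketch)
images:
  imagePullSecrets:
    - my-registry-credentials        # illustrative secret name
```

```yaml
# statefulset/deployment pod spec (sketch)
    spec:
      {{- if .Values.images.imagePullSecrets }}
      imagePullSecrets:
        {{- range .Values.images.imagePullSecrets }}
        - name: {{ . }}
        {{- end }}
      {{- end }}
```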

### Verifying this change

- [ ] Make sure that the change passes the CI checks.
2021-08-20 17:22:50 -07:00
Lari Hotari
c3e4ea272b
Fix deprecation warning about rbac.authorization.k8s.io/v1beta1 (#135) 2021-07-03 10:56:58 +03:00
TC-robV
75169707fb
add enableAdminApi for prometheus (#121)
Fixes #<xyz>

### Motivation

It would be nice to have this option here so people can run admin commands against Prometheus.

### Modifications

Added a new value and modified the deployment, adapted from the official Prometheus Helm chart (see the sketch below).
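
A sketch of the kind of change involved; `--web.enable-admin-api` is the standard Prometheus flag, while the value key path is an assumption:

```yaml
# prometheus deployment args (sketch; value key path assumed)
args:
  - --config.file=/etc/config/prometheus.yml
  {{- if .Values.monitoring.prometheus.enableAdminApi }}
  - --web.enable-admin-api
  {{- end }}
```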

### Verifying this change

- [ ] Make sure that the change passes the CI checks.
2021-06-23 21:12:20 -07:00
MMeent
11a1d578dd
Fix indentation issue on checksum/config (#117)
Fixes #116

### Motivation

There are indentation issues with the `checksum/config` annotation in these templates, which would either fail linting or not apply at all in some situations.

### Modifications

I've added indentation at the specified places such that this isn't an issue anymore.

### Verifying this change

- [ ] Make sure that the change passes the CI checks.
2021-06-23 21:11:38 -07:00
Peter Tinti
d6d240a123
Updates internal issuer cert to include duration and renew configs (#131)
### Motivation
* While component certs can be configured with a custom duration, the CA cert for the self-signed configuration uses default values. It can be convenient to have this certificate expire more than a month out.

### Modifications
* Updates the internal issuer `{{ .Release.Name }}-ca-tls` certificate to make `duration` and `renewBefore` configurable. Does not use `common` so that the CA can be configured to last much longer than individual components certs if desired.
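
A sketch of the relevant cert-manager `Certificate` fields; the durations, common name, and issuer name are illustrative:

```yaml
# ca certificate (sketch; durations and issuer name illustrative)
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: "{{ .Release.Name }}-ca-tls"
spec:
  secretName: "{{ .Release.Name }}-ca-tls"
  commonName: "{{ .Release.Name }}-ca"
  isCA: true
  duration: 87600h      # e.g. 10 years
  renewBefore: 720h     # e.g. 30 days
  issuerRef:
    name: "{{ .Release.Name }}-self-signed-issuer"
    kind: Issuer
```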

### Verifying this change
- [x] Make sure that the change passes the CI checks.
2021-06-23 21:00:17 -07:00
Yong Zhang
0816ac2dfd
Reduce the TLS common name length (#115)
---

*Motivation*

Reduce the TLS common name to avoid generating a name that is too long
to be used in a certificate.
2021-04-23 12:43:44 +08:00
Jean Helou
ba356e5df7
makes cert-manager apiVersion configurable (#107)
This commit lets users override the apiVersion referenced in this
chart so that the chart can be used with newer cert-manager releases.
(script/cert-manager/install-cert-manager.sh installs 0.13.0 while the
current version is 1.2.0...)

Fixes #68

### Motivation

The cert-manager apiVersion changed after cert-manager 1.0.0 was released, which prevents the chart from provisioning certificates with newer cert-manager installations because of an incompatible apiVersion.

I have a cluster with cert-manager >1.0.0 installed; making `apiVersion` overridable makes it easy for me to install Pulsar on that cluster.

### Modifications

I introduced the value `certs.internal_issuer.apiVersion`, which by default uses the apiVersion that was previously hardcoded (`cert-manager.io/v1alpha2`).
I replaced all occurrences of that apiVersion with a reference to the value so that users can override it to `cert-manager.io/v1` if they have a newer version of cert-manager installed.
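
A sketch of the resulting values entry and how the templates could consume it:

```yaml
# values.yaml (sketch)
certs:
  internal_issuer:
    apiVersion: cert-manager.io/v1alpha2   # override to cert-manager.io/v1 for newer cert-manager
```

```yaml
# issuer/certificate templates (sketch)
apiVersion: "{{ .Values.certs.internal_issuer.apiVersion }}"
kind: Issuer
```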

### Verifying this change

- [x] Make sure that the change passes the CI checks.
2021-03-16 00:44:38 -07:00
Miecio
c059ea25d8
Feat: Dynamic superusers configuration (#104)
Adds dynamic superusers configuration

### Motivation

Allow dynamic superuser management. Adding a new superuser entry to `.Values.auth.superUsers` results in the concatenated list being added to the config.

### Modifications

Change the static list to a dynamic one.
2021-02-09 00:59:54 -08:00
wuYin
67818a48cb
Support common volume for journal and ledgers (#93)
### Motivation

In some cases, my k8s node has only one large-capacity SSD; to deploy one bookie on it, I need to either:

- Partition the SSD into 2 disks and create 2 PVs over it, or
- Create just 1 PV over it, with journal & ledgers under the same mount path (what this PR does).

Neither approach can isolate I/O for the journal & ledgers, so I prefer the second one for reusability.


### Modifications

values.yaml
  - add `useSingleCommonVolume` option, default false

bookkeeper-statefulset.yaml
  - mount the only PV to path `/pulsar/data/bookkeeper`
  - use the configured common storageClassName

bookkeeper-storageclass.yaml
  - use the configured provisioner for the common storageClass (see the sketch below)
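
A sketch of what the single-common-volume option could look like in values.yaml; the nested key names are assumptions:

```yaml
# values.yaml (sketch; nested key names assumed)
bookkeeper:
  volumes:
    useSingleCommonVolume: false   # set true to keep journal and ledgers on one PV
    common:
      name: common
      size: 60Gi
      storageClassName: local-storage
```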

### Others
This may not be an issue for everyone; if it's not necessary to merge, I'll just use it locally.

### Verifying this change

- [x] Make sure that the change passes the CI checks.
2021-01-30 09:28:45 -08:00
wangyufan
d73361eb1e
fix broker configmap forbidden (#95)
Fixes #94

### Motivation

fix `io.kubernetes.client.openapi.ApiException: Forbidden`

### Modifications

fix typo

### Verifying this change

- [x] Make sure that the change passes the CI checks.
2021-01-30 09:28:00 -08:00
Miecio
b24ba1adf5
Fix namespace handling and missing dnsNames (#99)
Fixes for wrong namespace handling in some RBAC and missing dnsNames for TLS

### Motivation

Fixes the old, unused handling of the namespace name in the RBAC definitions for autorecovery and bookkeeper.
Fixes the Helm "missing key" exception raised when TLS dnsNames are not defined.

### Modifications

Use the namespace template in the RBAC definitions for bookkeeper and autorecovery. Add an `if` guard around every `toYaml .Values.tls.bookie.dnsNames` clause in the TLS cert definitions (see the sketch below).
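
A minimal sketch of the guard described above:

```yaml
# tls certificate template (sketch)
{{- if .Values.tls.bookie.dnsNames }}
dnsNames:
{{ toYaml .Values.tls.bookie.dnsNames | indent 2 }}
{{- end }}
```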

### Verifying this change

- [x] Make sure that the change passes the CI checks.
2021-01-30 09:27:18 -08:00
Miecio
025b263206
Extend podmonitor and add relabels (#100)
### Motivation

As I wanted to use [streamnative/apache-pulsar-grafana-dashboard](https://github.com/streamnative/apache-pulsar-grafana-dashboard) with this Helm chart and my own cluster-wide Prometheus stack, I decided that using the PodMonitor CRD was a good approach. Unfortunately, the Prometheus config has some metric relabelings that are required by the Grafana dashboards, so I decided to port them directly to the PodMonitor definitions.

### Modifications

* Added missing PodMonitor for autorecovery
* Port relabelings from `prometheus-configmap.yaml` to each PodMonitor

### Verifying this change

- [x] Make sure that the change passes the CI checks.
2021-01-30 09:24:21 -08:00
Miecio
23ba8ac948
Fix for missing PSP for bookie initialize and other (#101)
### Motivation

When using the standard bookkeeper installation on a PSP-enabled cluster, initialization fails because it has to be started as root.

### Modifications

Add the same ServiceAccount and SecurityContext to bookkeeper-cluster-initialize as in the bookkeeper specification.

UPDATE: It seems that when using in-cluster TLS encryption, other components also require RW access to the root FS, so I added PSPs for proxy, zookeeper, broker and toolset.

### Verifying this change

- [x] Make sure that the change passes the CI checks.
2021-01-30 09:22:52 -08:00
Miloš Matijašević
c2f672881e
Updating pods on configmap change (#73)
Fixes #71 

### Motivation

Pods do not restart when ConfigMaps change after editing the values.yaml file, so they have to be restarted manually in order to pick up new values from the ConfigMap.

### Modifications

As mentioned, a `restartPodsOnConfigMapChange` flag for each component is added in the values.yaml file to control whether to restart pods on ConfigMap change; the default is `false`.
The StatefulSet template of each component gains an annotation containing a hash of the corresponding ConfigMap when `restartPodsOnConfigMapChange` is `true`, which causes the pods to restart whenever that ConfigMap changes (https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments).
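
A sketch of the standard Helm checksum-annotation trick this relies on; the configmap template path is illustrative:

```yaml
# broker-statefulset.yaml (sketch; configmap path illustrative)
  template:
    metadata:
      annotations:
        {{- if .Values.broker.restartPodsOnConfigMapChange }}
        checksum/config: {{ include (print $.Template.BasePath "/broker-configmap.yaml") . | sha256sum }}
        {{- end }}
```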

### Verifying this change

- [ ] Make sure that the change passes the CI checks.
2021-01-07 21:28:11 -08:00
Miecio
667e634af0
Add basic PSP and RBAC for core components (#87)
Add PSP and add/modify RBAC. I'm open to discussion on all of it.

### Motivation

On clusters that use PSP with a restrictive default policy, Pulsar cannot be installed, because it runs as the root user and requires a writable container root directory. Additionally, the default RBAC for the broker is, in my opinion, too permissive (it uses a ClusterRoleBinding).

### Modifications

Add a PSP and RBAC for bookkeeper and autorecovery to allow
startup even in a secure environment where containers cannot
write to the root filesystem by default.

Add an option for limiting the broker's ClusterRoleBinding
to a single namespace by replacing it with a RoleBinding.

### Verifying this change

- [x] Make sure that the change passes the CI checks.
2021-01-07 21:26:44 -08:00
Jiří Pinkava
8d5339f9ff
Allow use of existing secret for pulsar manager credentials (#69)
Signed-off-by: Jiří Pinkava <jiri.pinkava@rossum.ai>

Co-authored-by: Jiri Pinkava <jiri.pinkava@rossum.ai>
2021-01-07 21:24:52 -08:00