330 Commits

Author SHA1 Message Date
Michael Marshall
a41fbb2582
Do not require version bump (#314)
* [CI] Do not require version bump when linting

* Fix formatting

### Motivation

With #292, we made the lint CI step require chart version bumps. That is an unnecessary requirement since we have a manual release process. Also, we didn't require it previously.

### Modifications

* Disable chart version bump

### Verifying this change

This is a trivial change.
2022-10-20 00:12:38 -05:00
Samuel Verstraete
8f033bd1a5
allow specifying the nodeSelector for the init jobs (#225)
* allow specifying the nodeSelector for the init jobs

* Use pulsar_metadata.nodeSelector

Co-authored-by: samuel <samuel.verstraete@aprimo.com>

### Motivation

When deploying Pulsar to an AKS cluster with Windows node pools, I was unable to specify that the Jobs of the initialize release had to run on Linux nodes. With this change, I can now specify a node selector for the init jobs.

### Modifications

add nodeSelector on pulsar_init and bookie_init
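
For illustration, a hedged sketch of the kind of values.yaml override this enables; the `pulsar_metadata.nodeSelector` key follows the commit message above, while the label used is just an example:

```yaml
# Illustrative values.yaml excerpt: pin the metadata init jobs to Linux nodes.
# The pulsar_metadata.nodeSelector key is taken from this PR; the label is an example.
pulsar_metadata:
  nodeSelector:
    kubernetes.io/os: linux
```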

### Verifying this change

- [ ] Make sure that the change passes the CI checks.
2022-10-19 23:41:39 -05:00
Michael Marshall
2410743cdb
[test] Add a consumer to the helm tests (#312)
### Motivation

The current tests only produce a message. This change adds a consumer for the produced message.

### Modifications

* Add new section to the test where we consume the produced message
2022-10-19 23:38:42 -05:00
JiangHaiting
da6ce85c66
Bump 2.10.2 (#310)
### Motivation

Bump Apache Pulsar 2.10.2


### Verifying this change

- [ ] Make sure that the change passes the CI checks.
2022-10-19 22:51:08 -05:00
Michael Marshall
bd00842800
Fix monitoring configuration broken by #299 (#313)
Related to #311

### Motivation

In #299, I updated the values without also updating the test values. As a result, I unintentionally enabled the monitoring stack in the tests and broke some examples. Because we are deploying all resources to a single node, it is possible that we are resource constrained, so I am going to re-disable the monitoring stack.

### Modifications

* Update test cluster configurations to re-disable deploying the monitoring stack
* Update examples with the new configuration

### Verifying this change

- [ ] Make sure that the change passes the CI checks.
2022-10-19 22:50:31 -05:00
Michael Marshall
3ef2d80dec
Upgrade to Cert Manager 1.7.3 (#307)
* Upgrade to Cert Manager 1.10.0

* Fail fast when installing cert manager

* Upgrade to 1.7.3

Here is the relevant documentation for k8s compatibility:
https://cert-manager.io/docs/installation/supported-releases/

### Motivation

The current version is out of date.

### Modifications

* Upgrade from 1.5.4 to 1.7.3

### Verifying this change

Once #306 is merged, the test suite will verify this PR.
2022-10-19 16:29:19 -05:00
Michael Marshall
42ce7caa55
Update how to configure external zookeeper servers (#308)
### Motivation

In #269, we added a way to configure external zookeeper servers. However, it was added to the wrong section of the zookeeper config: the `zookeeper.configData` section is mapped directly into the zookeeper configmap.

### Modifications

Move `zookeeper.configData.ZOOKEEPER_SERVERS` to `zookeeper.externalZookeeperServerList`
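
A hedged sketch of the resulting configuration, assuming a comma-separated server list (the exact entry format is an assumption):

```yaml
# Illustrative values.yaml excerpt after this change: the external server list
# lives outside configData, which is mapped verbatim into the zookeeper configmap.
zookeeper:
  externalZookeeperServerList: "zk-0.zk.ns-a.svc,zk-1.zk.ns-b.svc,zk-2.zk.ns-c.svc"
```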

### Verifying this change
This is a cosmetic change on an unreleased feature.
2022-10-19 16:28:33 -05:00
tison
fd71b46b1a
Replace handmade lint script with official action (#292)
* replace homemade release script with official action

Signed-off-by: tison <wander4096@gmail.com>

* bundle helm/chart-releaser-action

Signed-off-by: tison <wander4096@gmail.com>

* update .asf.yaml

Signed-off-by: tison <wander4096@gmail.com>

* fix helm/chart-testing-action is not allowed

Signed-off-by: tison <wander4096@gmail.com>

* try azure/setup-helm is allowed

Signed-off-by: tison <wander4096@gmail.com>

* Revert "try azure/setup-helm is allowed"

This reverts commit 7ee6fc0b3d4584127568fe607732b9c3aa70f031.

* replace handmade lint script with official action

Signed-off-by: tison <wander4096@gmail.com>

Signed-off-by: tison <wander4096@gmail.com>
2022-10-19 15:34:22 -05:00
Michael Marshall
7f23af26b7
Replace monitoring solution with kube-prometheus-stack dependency (#299)
* Replace monitoring solution with kube-prometheus-stack dependency

* Enable pod monitors

* Download necessary chart dependencies for CI

* Actually run dependency update

* Enable missed podMonitor

* Disable alertmanager by default for feature parity

Related issues #294 #65

Supersedes #296 and #297

### Motivation

Our helm chart is out of date. I propose we make a breaking change for the monitoring solution and start using the `kube-prometheus-stack` as a dependency. This should make upgrades easier and will let users leverage all of that chart's features.

This change will result in the removal of the StreamNative Grafana Dashboards. We'll need to figure out the right way to address that. The apache/pulsar project has grafana dashboards, but they have not been maintained. With this added dependency, we'll have the benefit of being able to use k8s `ConfigMap`s to configure grafana dashboards.
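
As a rough sketch, adding the `kube-prometheus-stack` dependency typically looks like the following Chart.yaml entry; the version pin and condition key here are illustrative assumptions, not necessarily what this PR uses:

```yaml
# Illustrative Chart.yaml dependency entry (version and condition are placeholders).
dependencies:
  - name: kube-prometheus-stack
    version: "x.y.z"
    repository: https://prometheus-community.github.io/helm-charts
    condition: kube-prometheus-stack.enabled
```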

### Modifications

* Remove old prometheus and grafana configuration
* Add kube-prometheus-stack chart as a dependency
* Enable several components by default. I am not opinionated on these, but it is based on the other values in the chart.

### Verifying this change

This is a large change that will require manual validation, and may break deployments. I propose this triggers a helm chart 3.0.0 release.
2022-10-19 10:23:08 -05:00
Michael Marshall
62a0d2b8a4
Use cert-manager to generate certs for tests (#306)
* Use cert-manager to generate certs for tests

* Install Cert-Manager in test env

### Motivation

Currently, we use hard-coded certificates for the tests. Instead, we can use Cert Manager to generate the certificates. The primary benefit of this change is that it ensures we're testing the cert manager integration.
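
A minimal sketch of the kind of cert-manager manifests this refers to, assuming a self-signed issuer for the test certificates (resource names and DNS entries are hypothetical):

```yaml
# Hypothetical test manifests: a self-signed Issuer plus a Certificate whose
# resulting Secret the chart's TLS settings would point at.
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: pulsar-selfsigned-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: pulsar-tls-cert
spec:
  secretName: pulsar-tls-secret
  dnsNames:
    - "*.pulsar.svc.cluster.local"
  issuerRef:
    name: pulsar-selfsigned-issuer
    kind: Issuer
```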

### Modifications

* Remove `.ci/tls` directory since we no longer need these certs.
* Remove `scripts/pulsar/clean_tls.sh` (it wasn't used)
* Remove `scripts/pulsar/upload_tls.sh` since we are not uploading any certs
* Update the `helm.sh` test script
* Update the `.ci/clusters` configurations to generate the relevant cert manager manifests

### Verifying this change

- [ ] Make sure that the change passes the CI checks.
2022-10-19 10:22:22 -05:00
Yuwei Sung
816d88c942
added pdb version detection (#260)
* added pdb version detection

* refresh

* Update bookkeeper-pdb.yaml

update the capabilities syntax

* Update broker-pdb.yaml

update capability syntax

* Update proxy-pdb.yaml

update capability version syntax

* Update zookeeper-pdb.yaml

update capability version syntax

* Update zookeeper-pdb.yaml

fix typo

* Update bookkeeper-pdb.yaml

Co-authored-by: Marvin Cai <cai19930303@gmail.com>

Fixes pod disruption budget version warning

### Motivation

The PDB policy API version v1beta1 is deprecated in Kubernetes 1.21+ (and not available in 1.25+).

### Modifications

The zookeeper-pdb, proxy-pdb, broker-pdb, and bookkeeper-pdb templates are modified. If the cluster's API resources contain policy/v1, the *-pdb.yaml templates will generate the respective apiVersion.
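
A hedged sketch of the capability check this describes, as it might appear at the top of one of the *-pdb.yaml templates (the exact wording in the chart may differ):

```yaml
# Illustrative apiVersion selection based on the cluster's advertised API versions.
{{- if .Capabilities.APIVersions.Has "policy/v1/PodDisruptionBudget" }}
apiVersion: policy/v1
{{- else }}
apiVersion: policy/v1beta1
{{- end }}
kind: PodDisruptionBudget
```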

### Verifying this change

- [ ] Make sure that the change passes the CI checks.
2022-10-18 22:52:11 -05:00
Rajan Dhabalia
89f28bca9c
Support mechanism to provide external zookeeper-server list to build global/configuration zookeeper (#269)
* Support mechanism to provide external zookeeper-server list to build global/configuration zookeeper

* Add external zk example

* add external zk list into values.yaml

Fixes #268

### Motivation
Right now, the [chart dynamically](https://github.com/apache/pulsar-helm-chart/blob/master/charts/pulsar/templates/zookeeper-statefulset.yaml#L140) creates a zk cluster with zk pods initialized in the same namespace. However, for a global/configuration zookeeper, users need to build zk clusters with pods deployed in different namespaces. Therefore, users need a mechanism to pass an external list of zk servers to the chart and build a zk cluster with pods across different namespaces.

### Modification
- The chart should consider the zookeeper values configuration for an external zookeeper and generate the zk configuration file with the appropriate zk-server list and the unique id of that zookeeper host.

This PR sets the `ZOOKEEPER_SERVERS` value provided by the user and also sets an override flag which is used by [generate-zookeeper-config.sh](https://github.com/apache/pulsar/blob/master/docker/pulsar/scripts/generate-zookeeper-config.sh) to override the external zk list in the config file and assign the appropriate id to the host.

https://github.com/apache/pulsar/pull/15987 fixes [generate-zookeeper-config.sh](https://github.com/apache/pulsar/blob/master/docker/pulsar/scripts/generate-zookeeper-config.sh) changes.


### Result
- Users can add a `ZOOKEEPER_SERVERS` string to `zookeeper.configData` in [Values.yaml](https://github.com/apache/pulsar-helm-chart/blob/master/charts/pulsar/values.yaml#L385) to override the external zk-server list (see the sketch below).
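
For illustration, a hedged values.yaml sketch of that result; the hostnames are placeholders:

```yaml
# Illustrative values.yaml excerpt: ZOOKEEPER_SERVERS overrides the generated
# server list so zk pods in different namespaces can form one ensemble.
zookeeper:
  configData:
    ZOOKEEPER_SERVERS: "zk-0.zk.ns-a.svc,zk-1.zk.ns-b.svc,zk-2.zk.ns-c.svc"
```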
2022-10-18 17:41:43 -05:00
Stepan Mazurov
1bcf255e12
feat(certs): use actual v1 spec for certs (#233)
Co-authored-by: Stepan Mazurov <smazurov@quantummetric.com>

### Motivation

In #204, the API version of the cert resources was updated to v1. This was insufficient because `v1` has a different spec from `v1alpha1`.

This PR finishes the work that #204 and @lhotari started.

### Modifications

Changed the spec of certs to match v1 cert manager spec.

### Verifying this change

- [ ] Make sure that the change passes the CI checks.
2022-10-18 15:40:43 -05:00
Penghui Li
8f1ca065b3
Bump Apache Pulsar 2.10.1 (#274)
* Bump Apache Pulsar 2.10.1

* Do not bump .Chart.version

* Remove unnecessary jq download that was failing with Permission Denied

Co-authored-by: Michael Marshall <mmarshall@apache.org>
2022-10-18 13:16:51 -05:00
Michael Marshall
58cd43fe8b
Remove '|| yes' in bk cluster init script (#305) 2022-10-18 18:46:07 +03:00
Michael Marshall
48501ebe84
Allow bk cluster init to restart on failure (#303)
### Motivation

This is essentially the same as https://github.com/apache/pulsar-helm-chart/pull/176. Without this change, an init pod can fail and be in `Error` state even though the second pod succeeded. This will prevent misleading errors.

### Modifications

* Replace `Never` with `OnFailure` (see the sketch below)
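
A minimal sketch of what that amounts to in the init Job's pod spec (surrounding fields omitted):

```yaml
# Illustrative Job pod spec excerpt: let a failed init container retry instead
# of leaving a pod stuck in the Error state.
spec:
  template:
    spec:
      restartPolicy: OnFailure
```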

### Verifying this change

This is a trivial change.
2022-10-17 17:59:05 -05:00
Lari Hotari
25f355e6e2
Use appVersion as default tag for Pulsar images (#200)
Co-authored-by: Michael Marshall <mmarshall@apache.org>

### Motivation

There was a suggestion [in a dev mailing list discussion](https://lists.apache.org/thread/bgkvcyt1qq6h67p2k8xwp89xlncbqn3d) that the Helm chart's appVersion should be used as the default image tag.

### Additional context

There are some limitations in Helm. It is not possible to set "appVersion" from the command line. There's an open feature request, https://github.com/helm/helm/issues/8194, to add such a feature to Helm.

### Modifications

- change default values.yaml and set the tags for the images that use the Pulsar image to an empty value
- add "defaultPulsarImageTag" to values.yaml
- add a helper template "pulsar.imageFullName" that contains the logic to fall back to .Values.defaultPulsarImageTag and, if that's not set, to .Chart.AppVersion (see the sketch after this list)
- use the helper template in all other templates that require the logic
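
A hedged sketch of that fallback logic, assuming the helper is passed a dict with the component image and the chart root (the real helper in the chart may be structured differently):

```yaml
# Illustrative _helpers.tpl definition: the image tag falls back to
# defaultPulsarImageTag and then to the chart's appVersion.
{{- define "pulsar.imageFullName" -}}
{{- $tag := .image.tag | default .root.Values.defaultPulsarImageTag | default .root.Chart.AppVersion -}}
{{- printf "%s:%s" .image.repository $tag -}}
{{- end -}}
```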
2022-10-17 15:42:58 -05:00
Michael Marshall
6a00845670
Remove GitHub Action Workflows that release the chart (#300)
Relates to: https://github.com/apache/pulsar-helm-chart/issues/290

### Motivation

We should not use GitHub Actions to release the helm chart. As such, we can remove the relevant workflow code from this repo while we build the relevant process to officially release the helm chart.

The main risk with this kind of change is that we won't have a way to "release" the chart. However, it is relevant to point out that we have not had any official releases of the chart given that the PMC has not been voting on the releases. I think we need to prioritize fixing this process as a community.

### Modifications

* Remove all scripts and configuration files that enabled GitHub Actions to release the helm chart.

### Verifying this change

This is a trivial change.
2022-10-17 14:39:04 -05:00
Arnar
f3ba780ab5
Alphabetically sort list of super users (#291)
Fixes #288 

### Motivation

When specifying multiple roles in `.Values.auth.superUsers`, the values are converted to a comma-separated list by piping the dict through `values` and `join` in helm templating. However, `values` doesn't guarantee that the order of elements will be the same every time, so it is recommended to also pass the list through `sortAlpha` to sort it alphabetically.

This is problematic when `.Values.broker.restartPodsOnConfigMapChange` is enabled because the checksum of the configmap changes every time the list's order changes, resulting in the statefulsets rolling out a new version of the pods.

### Modifications

Pass list through `sortAlpha`.
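
A hedged sketch of the resulting template pipeline (the surrounding config key is illustrative):

```yaml
# Illustrative broker config excerpt: sort the dict values before joining so the
# rendered string, and thus the configmap checksum, is stable across renders.
superUserRoles: {{ .Values.auth.superUsers | values | sortAlpha | join "," }}
```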

### Verifying this change

- [x] Make sure that the change passes the CI checks.
2022-10-17 14:36:22 -05:00
Michael Marshall
20c55022df
Only send notifications to commits@ ML (#302)
This PR modifies the Apache mailing list notifications so that commit, issue, and pull request notifications are sent to the commits@pulsar.apache.org mailing list. If you would like these notifications, please subscribe to the commits mailing list or use the GitHub "watch" feature.

Mailing list discussion for this change: https://lists.apache.org/thread/j6y57kr4180xblh7voyrjw47blgmghwt
2022-10-17 14:12:21 -05:00
Aliaksandr Shulyak
8b42a61f2e
Add nodeSelector to cluster initialize pod (#284)
* Add nodeSelector to cluster initialize pod

* Add option to values file

* Update charts/pulsar/templates/pulsar-cluster-initialize.yaml

Co-authored-by: Michael Marshall <mikemarsh17@gmail.com>

* Fix typo in values

Co-authored-by: Michael Marshall <mikemarsh17@gmail.com>

### Motivation

Add an option to choose where to run the pulsar-cluster-initialize pod. Sometimes it needs to run only on certain nodes.

### Modifications

Added nodeSelector option to the pulsar-cluster-initialize job.
2022-10-14 13:44:47 -05:00
Michael Marshall
9e10d1ff6d
Update README.md links to Pulsar Docs (#298)
### Motivation

Some of the links in the README are out of date. This PR fixes the ones that I found. Note that the ones with `/en` were not technically broken.
2022-10-13 21:17:28 -05:00
Qiang Zhao
465d1726e2
Bump Apache Pulsar version to 2.9.3 (#277) pulsar-2.9.4 2022-07-18 23:24:46 +08:00
Paul Gier
a2d3f3ef41
scripts: provide an error if the namespace was not created (#276)
Signed-off-by: Paul Gier <paul.gier@datastax.com>

This is just a minor improvement to the error handling of one of the bash scripts.

### Motivation

Currently, if you run `./scripts/pulsar/prepare_helm_release.sh` and the pulsar namespace does not exist, you get several error messages that do not make it clear what still needs to be done next.

```
generate the token keys for the pulsar cluster
The private key and public key are generated to /var/folders/cn/r5tb0zln1bgbfzz_7x72tgzm0000gn/T/tmp.ITrq1a4C and /var/folders/cn/r5tb0zln1bgbfzz_7x72tgzm0000gn/T/tmp.qi0dl2WO successfully.
error: failed to create secret namespaces "pulsar" not found
generate the tokens for the super-users: proxy-admin,broker-admin,admin
generate the token for proxy-admin
pulsar-dev-token-asymmetric-key
kubectl get -n pulsar secrets pulsar-dev-token-asymmetric-key -o jsonpath={.data.PRIVATEKEY} | base64 --decode > /var/folders/cn/r5tb0zln1bgbfzz_7x72tgzm0000gn/T/tmp.CikEhIxe
Error from server (NotFound): namespaces "pulsar" not found
generate the token for broker-admin
pulsar-dev-token-asymmetric-key
kubectl get -n pulsar secrets pulsar-dev-token-asymmetric-key -o jsonpath={.data.PRIVATEKEY} | base64 --decode > /var/folders/cn/r5tb0zln1bgbfzz_7x72tgzm0000gn/T/tmp.G1PU9MMj
Error from server (NotFound): namespaces "pulsar" not found
generate the token for admin
pulsar-dev-token-asymmetric-key
kubectl get -n pulsar secrets pulsar-dev-token-asymmetric-key -o jsonpath={.data.PRIVATEKEY} | base64 --decode > /var/folders/cn/r5tb0zln1bgbfzz_7x72tgzm0000gn/T/tmp.HddlCq8e
Error from server (NotFound): namespaces "pulsar" not found
-------------------------------------

The jwt token secret keys are generated under:
    - 'pulsar-dev-token-asymmetric-key'

The jwt tokens for superusers are generated and stored as below:
    - 'proxy-admin':secret('pulsar-dev-token-proxy-admin')
    - 'broker-admin':secret('pulsar-dev-token-broker-admin')
    - 'admin':secret('pulsar-dev-token-admin')
```

### Modifications

I added a check for the existence of the namespace which fails immediately instead of continuing, and added an error message that describes what the problem is and how to resolve it.

```
error: failed to get namespace 'pulsar'
please check that this namespace exists, or use the '-c' option to create it
```

### Verifying this change

- [X] Make sure that the change passes the CI checks.
2022-07-13 21:38:50 -05:00
Michael Marshall
26bc26028b
Use https to get Apache Pulsar icon in Chart.yaml 2022-06-26 00:39:09 -05:00
HuynhKevin
3c59b43f28
Add imagePullSecrets zookeeper (#244)
* Add imagePullSecrets for zookeeper

* Add imagePullSecrets for zookeeper

Co-authored-by: Kevin Huynh <khuynh@littlebigcode.fr>

All components except zookeeper have imagePullSecrets, which is needed to avoid quota limits so the pods can initialize correctly.
2022-06-26 00:01:48 -05:00
Filipe Caixeta
c05f659ff4
make proxy httpNumThreads configurable (#251)
Fixes https://github.com/apache/pulsar-helm-chart/issues/250

### Motivation

`httpNumThreads` is hardcoded to 8 in `charts/pulsar/templates/proxy-configmap.yaml`
When trying to override it in `values.yaml` by using `proxy.configData.httpNumThreads`, we get an error because the keys get duplicated.
This happens because `{{ toYaml .Values.proxy.configData | indent 2 }}` doesn't deduplicate the keys, and there is no other way to set `httpNumThreads`.

### Modifications

Removing the key from charts/pulsar/templates/proxy-configmap.yaml and adding it to values.yaml solves the problem.
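
A hedged sketch of the resulting default, which users can then override like any other `configData` entry:

```yaml
# Illustrative values.yaml excerpt: the default now lives only in configData,
# so overriding it no longer produces a duplicate key in the rendered configmap.
proxy:
  configData:
    httpNumThreads: "8"
```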

### Verifying this change

- [x] Make sure that the change passes the CI checks.
2022-06-25 23:57:30 -05:00
Yong Zhang
6afab51bad
Upgrade the pulsar manager image version to 0.3.0 (#271)
---

**Motivation**

Pulsar Manager 0.3.0 has been released, so we can upgrade it in our charts.
2022-06-25 23:52:20 -05:00
Marvin Cai
c6ab1d18e3
Support defining extra env for broker and proxy statefulset. (#273) 2022-06-20 07:59:43 -07:00
Yong Zhang
f2266c4295
Enable the pulsar manager in the minikube values (#270)
---

Fixes: https://github.com/apache/pulsar/issues/15927

### Motivation

We have documented using the pulsar manager in the Getting Started with Helm guide on the Pulsar website. We should enable the pulsar manager by default in the minikube values.

### Modifications

- enable the pulsar manager by default in the minikube values.
2022-06-15 09:42:16 +08:00
Michael Marshall
428736c788
Add bk, zk securityContext to support upgrade to non-root docker image (#266)
Master Issue: https://github.com/apache/pulsar/issues/11269

### Motivation

Apache Pulsar's docker images for 2.10.0 and above are non-root by default. In order to ensure there is a safe upgrade path, we need to expose the `securityContext` for the Bookkeeper and Zookeeper StatefulSets. Here is the relevant k8s documentation on this k8s feature: https://kubernetes.io/docs/tasks/configure-pod-container/security-context.

Once released, all deployments using the default `values.yaml` configuration for the `securityContext` will pay a one time penalty on upgrade where the kubelet will recursively chown files to be root group writable. It's possible to temporarily avoid this penalty by setting `securityContext: {}`.

### Modifications

* Add config blocks for the `bookkeeper.securityContext` and `zookeeper.securityContext` (see the sketch after this list).
* Default to `fsGroup: 0`. This is already the default group id in the docker image, and the docker image assumes the user has root group permission.
* Default to `fsGroupChangePolicy: "OnRootMismatch"`. This configuration will work for all deployments where the user id is stable. If the user id switches between restarts, like it does in OpenShift, please set to `Always`.
* Remove GC configuration writing to a directory for which the user lacks permission. (Perhaps we want to write to `/pulsar/log/bookie-gc.log`?)
* Add documentation to the README.
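
A hedged values.yaml sketch of the defaults described above; on platforms where the user id changes between restarts (such as OpenShift), `fsGroupChangePolicy` would instead be set to `Always`:

```yaml
# Illustrative values.yaml excerpt with the defaults described in this PR.
bookkeeper:
  securityContext:
    fsGroup: 0
    fsGroupChangePolicy: "OnRootMismatch"
zookeeper:
  securityContext:
    fsGroup: 0
    fsGroupChangePolicy: "OnRootMismatch"
```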

### Verifying this change

I first attempted verification of this change with minikube. It did not work because minikube uses hostPath volumes by default. I then tested on EKS v1.21.9-eks-0d102a7. I tested by deploying the current, latest version of the helm chart (2.9.3) and then upgrading to this PR's version of the helm chart along with using the 2.10.0 docker image. I also tested upgrading from a default version 

Test 1 is a plain upgrade using the default 2.9.3 version of the chart, then upgrading to this PR's version of the chart with the modification to use the 2.10.0 docker images. It worked as expected.

```bash
$ helm install test apache/pulsar
$ # Wait for chart to deploy, then run the following, which uses Pulsar version 2.10.0:
$  helm upgrade test -f charts/pulsar/values.yaml charts/pulsar/
```

Test 2 is a plain upgrade using the default 2.9.3 version of the chart, then an upgrade to this PR's version of the chart, then an upgrade to this PR's version of the chart using 2.10.0 docker images. There is a minor error described in the `README.md`. The solution is to chown the bookie's data directory.

```bash
$ helm install test apache/pulsar
$ # Wait for chart to deploy, then run the following, which uses Pulsar version 2.9.2:
$  helm upgrade test -f charts/pulsar/values.yaml charts/pulsar/
$ # Upgrade using Pulsar version 2.10.0
$  helm upgrade test -f charts/pulsar/values.yaml charts/pulsar/
```

### GC Logging

In my testing, I ran into the following errors when using `-Xlog:gc:/var/log/bookie-gc.log`:

```
pulsar-bookkeeper-verify-clusterid [0.008s] Error opening log file '/var/log/bookie-gc.log': Permission denied
pulsar-bookkeeper-verify-clusterid [0.008s] Initialization of output 'file=/var/log/bookie-gc.log' using options '(null)' failed.
pulsar-bookkeeper-verify-clusterid [0.005s] Error opening log file '/var/log/bookie-gc.log': Permission denied
pulsar-bookkeeper-verify-clusterid [0.006s] Initialization of output 'file=/var/log/bookie-gc.log' using options '(null)' failed.
pulsar-bookkeeper-verify-clusterid Invalid -Xlog option '-Xlog:gc:/var/log/bookie-gc.log', see error log for details.
pulsar-bookkeeper-verify-clusterid Error: Could not create the Java Virtual Machine.
pulsar-bookkeeper-verify-clusterid Error: A fatal exception has occurred. Program will exit.
pulsar-bookkeeper-verify-clusterid Invalid -Xlog option '-Xlog:gc:/var/log/bookie-gc.log', see error log for details.
pulsar-bookkeeper-verify-clusterid Error: Could not create the Java Virtual Machine.
pulsar-bookkeeper-verify-clusterid Error: A fatal exception has occurred. Program will exit.
```

I resolved the error by removing the setting.

### OpenShift Observations

I wanted to seamlessly support OpenShift, so I investigated configuring the bookkeeper and zookeeper processes with `umask 002` so that they would create files and directories that are group-writable (OpenShift has a stable group id, but gives the process a random user id). That worked for most tools when switching the user id, but not for RocksDB, which creates a lock file at `/pulsar/data/bookkeeper/ledgers/current/ledgers/LOCK` with the permission `0644`, ignoring the umask. Here is the relevant error:

```
2022-05-14T03:45:06,903+0000  ERROR org.apache.bookkeeper.server.Main - Failed to build bookie server
java.io.IOException: Error open RocksDB database
    at org.apache.bookkeeper.bookie.storage.ldb.KeyValueStorageRocksDB.<init>(KeyValueStorageRocksDB.java:199) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.bookie.storage.ldb.KeyValueStorageRocksDB.<init>(KeyValueStorageRocksDB.java:88) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.bookie.storage.ldb.KeyValueStorageRocksDB.lambda$static$0(KeyValueStorageRocksDB.java:62) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.bookie.storage.ldb.LedgerMetadataIndex.<init>(LedgerMetadataIndex.java:68) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.bookie.storage.ldb.SingleDirectoryDbLedgerStorage.<init>(SingleDirectoryDbLedgerStorage.java:169) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage.newSingleDirectoryDbLedgerStorage(DbLedgerStorage.java:150) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage.initialize(DbLedgerStorage.java:129) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.bookie.Bookie.<init>(Bookie.java:818) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.proto.BookieServer.newBookie(BookieServer.java:152) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.proto.BookieServer.<init>(BookieServer.java:120) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.server.service.BookieService.<init>(BookieService.java:52) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.server.Main.buildBookieServer(Main.java:304) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.server.Main.doMain(Main.java:226) [org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    at org.apache.bookkeeper.server.Main.main(Main.java:208) [org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
Caused by: org.rocksdb.RocksDBException: while open a file for lock: /pulsar/data/bookkeeper/ledgers/current/ledgers/LOCK: Permission denied
    at org.rocksdb.RocksDB.open(Native Method) ~[org.rocksdb-rocksdbjni-6.10.2.jar:?]
    at org.rocksdb.RocksDB.open(RocksDB.java:239) ~[org.rocksdb-rocksdbjni-6.10.2.jar:?]
    at org.apache.bookkeeper.bookie.storage.ldb.KeyValueStorageRocksDB.<init>(KeyValueStorageRocksDB.java:196) ~[org.apache.bookkeeper-bookkeeper-server-4.14.4.jar:4.14.4]
    ... 13 more
```

As such, in order to support OpenShift, I exposed the `fsGroupChangePolicy`, which allows for OpenShift support, but not necessarily _seamless_ support.
2022-06-13 22:11:13 -05:00
Li Li
0429adb3d2
[Build] Publish charts to apache/pulsar-site branch asf-site-next (#264) 2022-05-12 11:09:14 +08:00
Frank Kelly
bfb6985de8
Add support for Horizontal Pod Autoscaling for Broker and Proxy. (#262)
* Add support for Horizontal Pod Autoscaling for Broker and Proxy.

* Add license
pulsar-2.9.3
2022-05-06 08:04:13 -06:00
ran
cee3fcfe56
Bump version to 2.9.2 (#255)
* Bump version to `2.9.2`

* Because the latest Pulsar image is based on Java 11, some JVM params for printing GC information are no longer supported, so we switch to the new JVM params. Refer to https://docs.oracle.com/en/java/javase/11/tools/java.html#GUID-BE93ABDC-999C-4CB5-A88B-1994AAAC74D5 and https://issues.redhat.com/browse/CLOUD-3040.

original param | new param
--|--
`-XX:+PrintGCDetails` | `-Xlog:gc*`
`-XX:+PrintGCApplicationStoppedTime` | `-Xlog:safepoint`
`-XX:+PrintHeapAtGC` | `-Xlog:gc+heap=trace`
`-XX:+PrintGCTimeStamps` | `-Xlog:gc::utctime`
* remove JVM param `-XX:G1LogLevel=finest`
pulsar-2.9.2
2022-04-11 15:33:29 +08:00
Chirag Modi
192b3ca2ef
Remove completed init jobs using ttl (#235)
* feat: added ttlSecondsAfterFinished configuration to delete completed jobs

* added comments for clarification
pulsar-2.7.13
2022-02-23 08:24:37 -08:00
Lari Hotari
3918ee36f0
[Build] Revert chart index publishing to new website (#234)
- publish to the old website location, apache/pulsar , branch asf-site
2022-02-17 12:56:34 -08:00
Lari Hotari
1c4f745941
Improve Zookeeper "ruok" probes: use TLS port when TLS is enabled, specify "-q 1" for nc (#223)
- NOTICE: we are no longer using "bin/pulsar-zookeeper-ruok.sh" from the apachepulsar/pulsar docker image. The probe script is part of the chart.

* Pass "-q 1" to netcat (nc) to fix issue with Zookeeper ruok probe

- see https://github.com/apache/pulsar/pull/14088

* Send ruok to TLS port when TLS is enabled

* Bump chart version
pulsar-2.7.12
2022-02-17 07:48:20 +02:00
Lari Hotari
5b90c5195c
[Build] Publish charts to apache/pulsar-site branch asf-site-next (#232)
- also use shallow cloning
2022-02-17 07:46:45 +02:00
Frank Kelly
9613ee0292
Make PodSecurityPolicy name unique in k8s cluster when rbac.limit_to_namespace is true (#224)
- allows having multiple Pulsar clusters in different K8S namespaces but having the same helm release name
  - PodSecurityPolicy is a cluster-level resource, and the name would collide without this change
pulsar-2.7.11
2022-02-04 10:41:10 +02:00
Lari Hotari
dd0e6d827d
Increase Zookeeper probe timeouts (#220)
- 5 seconds seems to be too short a probe timeout on a system with low resources, such as in CI
2022-01-31 19:24:19 +02:00
Lari Hotari
dc97bd4ac6
[CI] Tolerate errors when collecting k8s logs in CI (#217)
- The log collection failed after a command failed.
- tolerate errors
2022-01-26 14:50:48 -06:00
Lari Hotari
d3e7a7e6c9
[CI] Fix issue with k8s log collection (#216)
- slash needs to be replaced with underscore
2022-01-26 20:49:06 +02:00
Lari Hotari
0093f91410
[CI] Collect and upload k8s logs on failure (#215) 2022-01-26 19:43:49 +02:00
MMeent
c0a8c1b97f
Use the 'pulsar.matchLabels' template for matching components of this chart. (#118)
This also limits the scope of the PodMonitors to the resources of only this install, instead of all installs that share `component:` label values.

Co-authored-by: Matthias van de Meent <matthias.vandemeent@cofano.nl>
2022-01-26 15:38:52 +02:00
Lari Hotari
41ff20ec5e
Don't enable pulsar manager by default (#213)
- because of security reasons
  - it increases the attack surface
- it's an unnecessary feature for most users
  - wasted resource consumption
2022-01-26 15:34:30 +02:00
Lari Hotari
fdf9dd7757
Add -XX:+ExitOnOutOfMemoryError to Zookeeper's PULSAR_GC parameters in default values.yaml (#211) 2022-01-26 15:34:07 +02:00
Lari Hotari
22f4b9b3bd
Wrap Zookeeper probe script with timeout command (#214)
so that the probe doesn't continue running indefinitely

- resolves the issue with Kubernetes <1.20
  "Before Kubernetes 1.20, the field timeoutSeconds was not respected for exec probes:
    probes continued running indefinitely, even past their configured deadline,
    until a result was returned."
    in https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes

- #179 already fixed the issue for Kubernetes 1.20+
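
A hedged sketch of the kind of exec probe wrapping described above; the script path and timings are placeholders, not the chart's actual values:

```yaml
# Illustrative probe excerpt: the coreutils timeout command bounds the probe's
# runtime even on Kubernetes versions that ignore timeoutSeconds for exec probes.
readinessProbe:
  exec:
    command: ["timeout", "30", "bash", "-c", "/pulsar/bin/probe-zookeeper-ruok.sh"]
  timeoutSeconds: 30
```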
2022-01-26 15:17:15 +02:00
Lari Hotari
475a4b0b39
Remove references to tag: 2.6.0 in examples (#210)
### Motivation

It's better not to maintain outdated examples referencing the 2.6.0 tag version.

### Modifications

- remove out-dated examples
2022-01-25 23:30:46 -06:00
Lari Hotari
fa9c22d895
Upgrade default images for Grafana & Pulsar Manager (#206)
- Grafana Dashboard image from v0.0.10 to v0.0.16
  - changes:
    https://github.com/streamnative/apache-pulsar-grafana-dashboard/compare/d50e2758...v0.0.16

- Pulsar Manager from v0.1.0 to v0.2.0
  - changes:
    https://github.com/apache/pulsar-manager/compare/v0.1.0...v0.2.0
2022-01-25 10:11:33 +02:00
Shen Liu
1b3e875ba2
Fix ci error caused by wrong block of if clause. (#208)
Co-authored-by: druidliu <druidliu@tencent.com>
2022-01-25 07:44:08 +02:00