- Add timeouts when waiting for ZooKeeper (zk) and BookKeeper (bk) to become available.
- If the wait gets stuck for some reason, the Pulsar deployment never
starts the broker services.
- Timeouts let such failures recover eventually (see the sketch below).
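A minimal sketch of what a bounded wait can look like in an init container; the container name, image, service name, and 300-second limit are illustrative, not the chart's actual values:

```yaml
# Hypothetical init container that fails fast instead of waiting forever;
# Kubernetes then restarts it, so transient outages still recover.
initContainers:
  - name: wait-zookeeper-ready          # illustrative name
    image: apachepulsar/pulsar:latest   # illustrative image
    command: ["sh", "-c"]
    args:
      - |
        timeout 300 sh -c 'until nslookup pulsar-zookeeper; do sleep 3; done' \
          || { echo "timed out waiting for zookeeper"; exit 1; }
```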
* Added support for a JWT secretRef and key volume mount; added admin user auto-creation (see the sketch below).
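A hedged sketch of what the values could look like for symmetric-key JWT auth; the `auth.authentication.jwt.usingSecretKey` and `auth.superUsers` keys are assumptions based on the chart's auth section and should be verified against the released values.yaml:

```yaml
auth:
  authentication:
    enabled: true
    provider: "jwt"
    jwt:
      usingSecretKey: true   # symmetric key mounted from a Secret (assumed key name)
  superUsers:
    client: "admin"          # admin user auto-created by the init job (assumed key name)
```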
* Removed accidentally re-added variables and comments that are no longer relevant
* Enabled the pulsar-manager test with a manager-admin superuser for symmetric and asymmetric JWT tests
* Added verification of communication with the broker to the test-pulsar-manager CI job
* Fixing error on line 115 of helm.sh
* More fixes
* Adding echo of envs and tenants
* Fixing LOGIN_JSESSIONID variable name
* Add missing section in values.yaml for pulsar_metadata resources
* Add resources to all init containers, plus a section in values.yaml to specify them
* Increase memory defaults for init containers (see the sketch below)
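A sketch of the added section; the key names are inferred from the commit messages and the request sizes are illustrative:

```yaml
pulsar_metadata:
  resources:           # previously missing from values.yaml
    requests:
      memory: 256Mi    # illustrative; memory defaults were raised for init containers
      cpu: 100m
```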
* remove empty lines
* Add newline to end of file
* Proposal: service account creation should be decoupled from PodSecurityPolicy (see the sketch below).
* Rename *-rbac.yaml to *-psp.yaml and move service accounts to *-service-account.yaml
* Test with PSP enabled
Co-authored-by: Lari Hotari <lhotari@apache.org>
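A hedged sketch of how the decoupled toggles could look in values.yaml (key names assumed, not confirmed by the source):

```yaml
rbac:
  enabled: true   # renders *-service-account.yaml and the role bindings (assumed key)
  psp: false      # renders *-psp.yaml only when true (assumed key)
```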
### Motivation
Enables support for using the Pulsar bookies as persistent state storage for functions.
### Modifications
- Added an option to enable/disable using bookies as state storage
- Added extra server component options to the BookKeeper configuration to enable the features bookies need to act as state storage
- Added stateStorageServiceUrl to the broker configmap (see the sketch below)
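A hedged sketch of the relevant values, inferred from the list above; `extraServerComponents` is BookKeeper's standard setting for loading the stream-storage component and 4181 is its conventional port, but the exact placement under `configData` should be verified:

```yaml
bookkeeper:
  configData:
    # Load the stream-storage component so bookies can hold function state.
    extraServerComponents: "org.apache.bookkeeper.stream.server.StreamStorageLifecycleComponent"
broker:
  configData:
    # Point the functions worker at the bookies' stream-storage endpoint
    # (service name is illustrative).
    stateStorageServiceUrl: "bk://pulsar-bookie:4181"
```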
* To address the functions Role vs. ClusterRole issue
* Made the change backwards compatible
* Updated values.yaml to include an option to limit functions to a namespace
* Added documentation to clarify the new attribute
* Moved limit_to_namespace under functions.rbac (see the sketch below)
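Per the commit messages, the attribute now lives under `functions.rbac`, with the old placement kept for backwards compatibility; a minimal sketch:

```yaml
functions:
  rbac:
    # Use a namespaced Role for function pods instead of a ClusterRole.
    limit_to_namespace: true
```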
* Refactor GitHub Actions CI to a single workflow
* Handle case where "ct lint" fails because of no chart changes
* Re-order scenarios
* Remove excessive default GC logging
* Bump cert-manager version to v1.12.2
* Use compatible cert-manager version
* Install debugging tools (k9s) for ssh access
* Only apply for interactive shells
* Fix JWT symmetric test
* Fix part that was missing from #356
* Install k9s on the fly when k9s is used
- set KUBECONFIG on the fly for kubectl too
* Upgrade kind, chart-releaser, and helm versions
* Disable podMonitor for the values-broker-tls.yaml file
- was missing from #317
* Use k8s 1.18.20
* Use ubuntu-20.04 runtime
- k8s < 1.19 doesn't support cgroup v2
* Upgrade to k8s 1.19 as baseline
* Baseline to k8s 1.20
* Set IP family to IPv4
* Add more logging to kind cluster creation
* Simplify duplicate job deletion
* use verbosity flag
* Upgrade to k8s 1.24
* Replace removed tolerate-unready-endpoints annotation with publishNotReadyAddresses
(cherry picked from commit e90926053a2b01bb95529fbaddc8d2ce2cdeec63)
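For reference, the removed annotation maps directly onto a field of the Service spec; the service name below is illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: pulsar-zookeeper   # illustrative headless service
spec:
  clusterIP: None
  # Replaces the removed service.alpha.kubernetes.io/tolerate-unready-endpoints
  # annotation: DNS publishes pod addresses even before readiness checks pass.
  publishNotReadyAddresses: true
```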
* Use k8s 1.21 as baseline
* Run on ubuntu-22.04
* Use Pulsar 2.10.4
* Allow to use selectors with volumeClaimTemplates
* Fixed a naming inconsistency and added a null default value
Co-authored-by: Claudio Vellage <claudio.vellage@pm.me>
Co-authored-by: Michael Marshall <mmarshall@apache.org>
### Motivation
Currently it's not possible to use selectors with `volumeClaimTemplates`, which makes it hard or impossible to bind statically provisioned PVs.
### Modifications
Added (optional) selectors to `volumeClaimTemplates` and documented them in the values file (see the sketch below).
### Verifying this change
- [ ] Make sure that the change passes the CI checks.
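A hedged sketch of how such a selector might be used to bind a statically provisioned PV; the nesting under the component's `volumes` section and the label are illustrative:

```yaml
bookkeeper:
  volumes:
    journal:
      name: journal
      size: 10Gi
      # Optional selector: only PVs carrying this label can satisfy the claim.
      selector:
        matchLabels:
          pv-role: bookie-journal   # illustrative label on pre-provisioned PVs
```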
* allow specifying the nodeSelector for the init jobs
* Use pulsar_metadata.nodeSelector
Co-authored-by: samuel <samuel.verstraete@aprimo.com>
### Motivation
When deploying Pulsar to an AKS cluster with Windows node pools, I was unable to specify that the Jobs of the initialize release had to run on Linux nodes. With this change, I can now specify a node selector for the init jobs.
### Modifications
Add nodeSelector to pulsar_init and bookie_init (see the sketch below)
### Verifying this change
- [ ] Make sure that the change passes the CI checks.
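For example, on a mixed-OS cluster the init jobs can be pinned to Linux nodes via the `pulsar_metadata.nodeSelector` mentioned above (`kubernetes.io/os` is the standard well-known node label):

```yaml
pulsar_metadata:
  # Keep the metadata and bookie init jobs off Windows node pools.
  nodeSelector:
    kubernetes.io/os: linux
```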
### Motivation
In #269, we added a way to configure external zookeeper servers. However, it was added to the wrong section of the zookeeper config. The `zookeeper.configData` section is mapped directly into the zookeeper configmap.
### Modifications
Move `zookeeper.configData.ZOOKEEPER_SERVERS` to `zookeeper.externalZookeeperServerList`
### Verifying this change
This is a cosmetic change on an unreleased feature.
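The corrected placement, with illustrative hostnames:

```yaml
zookeeper:
  # Correct home for the list; zookeeper.configData is copied verbatim into
  # the ZooKeeper configmap, so ZOOKEEPER_SERVERS did not belong there.
  externalZookeeperServerList: "zk-0.example.com:2181,zk-1.example.com:2181"
```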
* Replace monitoring solution with kube-prometheus-stack dependency
* Enable pod monitors
* Download necessary chart dependencies for CI
* Actually run dependency update
* Enable missed podMonitor
* Disable alertmanager by default for feature parity
Related issues: #294, #65
Supersedes #296 and #297
### Motivation
Our helm chart is out of date. I propose we make a breaking change for the monitoring solution and start using the `kube-prometheus-stack` as a dependency. This should make upgrades easier and will let users leverage all of that chart's features.
This change will result in the removal of the StreamNative Grafana Dashboards. We'll need to figure out the right way to address that. The apache/pulsar project has grafana dashboards, but they have not been maintained. With this added dependency, we'll have the benefit of being able to use k8s `ConfigMap`s to configure grafana dashboards.
### Modifications
* Remove old prometheus and grafana configuration
* Add kube-prometheus-stack chart as a dependency
* Enable several components by default. I am not opinionated on these, but the choices are based on the other values in the chart.
### Verifying this change
This is a large change that will require manual validation, and may break deployments. I propose this triggers a helm chart 3.0.0 release.
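As a sketch, the dependency wiring in Chart.yaml would look roughly like this; the version pin and condition key are illustrative:

```yaml
# Chart.yaml
dependencies:
  - name: kube-prometheus-stack
    version: "x.y.z"   # pin to a tested release
    repository: "https://prometheus-community.github.io/helm-charts"
    condition: kube-prometheus-stack.enabled   # assumed condition key
```

CI then needs a `helm dependency update` before linting, which is what the dependency-download commits above address.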