Compare commits

...

151 Commits

Author SHA1 Message Date
gulecroc
e8ab0c6ded
Feat/cacerts (#619) 2025-06-21 23:13:35 +03:00
Artem Nosulchyk
3e5c82c229
extra volume mounts for oxia coordinator (#618)
* extra volume mounts for oxia coordinator

* .

* .
2025-06-13 10:55:02 -07:00
Lari Hotari
7cd7078695
Add labels to all k8s objects (#617)
* Add labels to all k8s objects

* Add labels to initialization job pods
2025-06-09 21:27:23 +03:00
Lari Hotari
2d16ffefd4
Use PEM files directly as ZooKeeper keystore and truststore (#613) 2025-05-30 18:16:04 +03:00
Lari Hotari
fdcfe60fe9 Chart: Bump version to 4.1.0 2025-05-23 16:52:39 +03:00
gulecroc
1180db46cd
add template for ca issuer name and secret name (#565)
* set template for ca issuer name and secret name + geo-replication installation example

* remove geo-replication from this PR

* use certs template to define ca name and secret name

* Handle proxy, toolset and zookeeper in the same way as others

* Make the logic more consistent by separating the selfsigning issuer configuration

---------

Co-authored-by: GLECROC <guillaume.lecroc@cnp.fr>
Co-authored-by: Lari Hotari <lhotari@users.noreply.github.com>
Co-authored-by: Lari Hotari <lhotari@apache.org>
2025-05-23 16:22:17 +03:00
Lari Hotari
51a535d83d
Upgrade to Pulsar 4.0.5 (#612) 2025-05-23 15:28:31 +03:00
trynocoding
352ed0846b
Fix broker initialization error when using global Zookeeper (#602) (#603) 2025-05-21 12:20:41 +03:00
Bruno Domenici
a9f2ba76ae
OpenID: introducing support for OpenID configuration (#509)
* feat!(openid): introducing support for openid configuration

BREAKING CHANGE: provider configuration changed from auth.authentication.provider to auth.authentication.jwt.enabled

* add upgrading to 4.1.0

* add validation for deprecated values

* add openid CI with keycloak

* fix chart-testing lint new-line-at-end-of-file

* fix keycloak dependency repository

* fix keycloak repository

* fix yaml to json convert error

* disable keycloak to validate github actions before re-enable it

* disable openid test scenario

* disable keycloak in values

* enable keycloak without authentication and authorization

* add openid test scenario

* disable test scenario other than openid

* enable all test scenario

* disable functions component

* create openid resources

* test truncate command

* test truncate command

* change client_secret generator

* change client_secret generator

* test python

* fix script

* fix script

* print python result

* test python

* test python

* fix client_secret generation

* fix create openid resources

* fix secret name

* fix mount keycloak config

* fix keycloak service

* exclude keycloak from chart

* add license

* add license

* wait keycloak is alive

* fix keycloak chart install namespace

* add test pulsar real openid config

* fix keycloak issuer url

* fix pod name

* remove check keycloak alive

* check realm pulsar openid configuration

* change keycloak service

* remove test keyclock service

* remove selector to get all pod log

* wait keycloak is alive

* check keycloak realm pulsar urls

* wait until keycloak is ready

* add wait timeout

* fix realm pulsar name

* add log to debug

* add openid for toolset

* set authorization

* set authorization

* fix client template filename

* fix install keycloak

* disable authorization

* debug sub claim value

* fix sub claim value

* cleanup

* enable all build

---------

Co-authored-by: glecroc <guillaume.lecroc@cnp.fr>
2025-05-20 14:09:12 +03:00
Lari Hotari
52d3164b8d
Upgrade oxia image to 0.12.0 in default values.yaml (#611) 2025-05-20 03:29:49 -07:00
Artem Nosulchyk
9ddbf4bc86
extra containers and volumes for oxia coordinator (#609) 2025-05-20 13:13:07 +03:00
Artem Nosulchyk
fa1456ea4d
configurable oxia coordinator configmap and entrypoint (#606) 2025-05-19 16:16:40 +03:00
Artem Nosulchyk
8382906775
annotations (#610) 2025-05-13 16:35:44 -07:00
Austin Poole
57fa527b04
update nodeSelector for bookkeeper cluster initializer (#608) 2025-05-10 11:57:16 +03:00
Haim Kortovich
77ec4cedfb
Add appAnnotations for all statefulsets (#604) 2025-05-07 09:05:19 +03:00
Artem Nosulchyk
cd701ecedd
add support of extra volumes and mounts for autorecovery (#607) 2025-05-07 08:44:11 +03:00
Artem Nosulchyk
d4afc985d2
oxia components podmonitor match labels (#605) 2025-05-06 22:27:27 +03:00
Lari Hotari
7833e51c28 Chart: Bump version to 4.0.1 2025-04-15 11:05:33 +03:00
gulecroc
6e824f0c4e
Fix bookkeeper.extraVolumes (#596) 2025-04-15 01:04:10 -07:00
Lari Hotari
b703761a52
Upgrade Oxia to 0.11.15 (#600) 2025-04-15 00:50:32 -07:00
Lari Hotari
8d889eb971
Upgrade to Pulsar 4.0.4 (#599) 2025-04-15 00:24:48 -07:00
Lari Hotari
6ff77e8c65
Update RELEASE.md 2025-03-14 00:51:58 -07:00
Lari Hotari
e7b08065a1
Update RELEASE.md 2025-03-14 00:46:19 -07:00
Lari Hotari
3f75320f18 Update RELEASE.md 2025-03-11 02:44:10 +02:00
Lari Hotari
a30291e7df
Update RELEASE.md 2025-03-10 17:22:39 -07:00
Lari Hotari
20f7fc8d79 Update README 2025-03-11 02:19:27 +02:00
Lari Hotari
637cf11d1a
Fix Grafana dashboards for Broker with honorLabels, remove unnecessary *_created metrics and improve docs (#593)
* Drop _created metrics for broker and proxy

* Enable all metrics by default for broker

* change default dashboard

* Remove messy dashboards

* Enable default dashboards in Grafana

* Add testing values with more aggressive disk cleanup

* Add VictoriaMetrics debugging instructions

* Set honorLabels to true

* Document disabling monitoring

* Set password in testing values

* Fix linting issue detected by kubeconform
2025-03-10 16:46:28 -07:00
Lari Hotari
e6f05809bd
Migrate from kube-prometheus-metrics to victoria-metrics-k8s-stack (#592) 2025-03-08 16:36:41 -08:00
Lari Hotari
302db43e91
Remove PSP support (#591) 2025-03-08 12:00:35 -08:00
Lari Hotari
75119dd6d7
Remove Prometheus scrape annotations when podmonitors are enabled (#590) 2025-03-07 09:51:06 -08:00
Lari Hotari
6fe37a373f
Use bookkeeperMetadataServiceUri in broker and make PulsarMetadataClientDriver configurable (#589) 2025-03-07 09:24:03 -08:00
Lari Hotari
dd1325216f
Change Pulsar Proxy service load balancer type to ClusterIP (#588) 2025-03-06 05:03:42 -08:00
Lari Hotari
976ba92e3b
Test with k8s 1.32.2 and upgrade tool versions used in CI (#587)
- kind 0.22.0 -> 0.27.0
- test with k8s 1.32.2 instead of 1.29.2 to ensure compatibility with latest k8s release
- default helm version 3.14.4 -> 3.16.4
- chart releaser 1.6.0 -> 1.7.0
- ubuntu 22.04 -> 24.04
- chart testing 3.11.0 -> 3.12.0
- yamllint 1.33.0 -> 1.35.1
- yamale 4.0.4 -> 6.0.0
2025-03-05 23:50:44 -08:00
Lari Hotari
18c4cc5440 Add comment warning about enabling PulsarMetadataBookieDriver
- upgrade compatibility tests didn't pass with this setting, so more testing is needed
2025-03-06 09:49:56 +02:00
Lari Hotari
601e78d8a5
Add Broker Cache and Sockets dashboards (#586) 2025-03-05 23:24:19 -08:00
Lari Hotari
80999ff1d8
Use BookKeeper BP-29 metadataServiceUri to configure bookie metadata store, also when using Zookeeper (#585) 2025-03-05 23:24:07 -08:00
Lari Hotari
87b48d0610
Update RELEASE.md 2025-03-04 13:16:33 -08:00
Lari Hotari
9f61859d19
Use PIP-45 metadata store config to replace deprecated ZK config and make PulsarMetadataBookieDriver configurable in BK (#576) 2025-03-04 20:23:35 +02:00
Lari Hotari
a55b1bb560
Remove the dependency to pulsarctl when generating JWT tokens (#584) 2025-03-04 20:18:10 +02:00
Lari Hotari
43f8dfa04e
Revisit solution to configure Bookkeeper RocksDB settings - default to individual config files (#583) 2025-03-04 04:04:38 -08:00
Lari Hotari
f98ee7d69c
Replace ">" with "|" to avoid Go Yaml issue go-yaml/yaml#789 (#582) 2025-03-04 02:21:39 -08:00
Lari Hotari
589b0b1b24
Upgrade default cert-manager version to 1.12.16 (#581) 2025-03-04 01:02:25 -08:00
Lari Hotari
5c1b7a9288
Restore support for dbStorage_rocksDB_* settings defined in bookkeeper.configData (#580) 2025-03-03 22:05:59 -08:00
Lari Hotari
4bdf6d51eb
Improve kube-prometheus-stack config in values.yaml by adding missing key and some basic comments (#579)
* Enable prometheusOperator in CI test

* Add comments and add offloader dashboard
2025-03-03 11:09:25 -08:00
Lari Hotari
4de387e726
Workaround issue with Prometheus 3.0 and metrics (#577)
* Add "fallbackScrapeProtocol: PrometheusText0.0.4" to all pod monitors
2025-03-03 06:26:04 -08:00
Lari Hotari
492e273d82
Upgrade to kube-prometheus-stack 69.x including prometheus-operator 0.80.0 defaulting to Prometheus 3.x (#578)
* Upgrade to kube-prometheus-stack 67.x
  * Prometheus operator is upgraded to 0.80.0
  * Prometheus is upgraded from 2.55.0 to 3.2.1

* Enable pod monitors to test them

* Run linting with kube-prometheus-stack enabled

* Validate all CI configs
2025-03-03 05:49:03 -08:00
Lari Hotari
afca5aaf08
Upgrade to Pulsar 4.0.3 (#575) 2025-02-28 09:18:10 -08:00
Lari Hotari
4386eacba8
[fix] Fix broker service annotations issue and other annotations issues (#574)
* Fix broker services annotations issues

* Add annotations support to autorecovery.service

* Consistently use similar way to handle annotations

* Add autorecovery service annotations key to values.yaml
2025-02-28 09:17:54 -08:00
Lari Hotari
f928380124
Fix pulsar-cluster-initialize / pulsar-init rendering with kustomize (#572)
* Fix pulsar-cluster-initialize / pulsar-init rendering with kustomize

- reapply #166 changes that were reverted by #544 changes

* Add validation for kustomize output in CI
2025-02-19 00:46:24 -08:00
Philipp Dolif
ab46d2165e
Increase defaults for ensemble size, write quorum, and ack quorum to 2 (#570) 2025-02-18 22:27:34 -08:00
Alejandro Ramirez
0b6b03002c
Fix OOM issue on broker wait-zookeeper-ready initContainer (#568) 2025-02-18 22:26:39 -08:00
Lari Hotari
e55405cbe2 Improve RELEASE.md
- address word wrap issue in validation instructions
2025-01-20 19:22:51 +02:00
Lari Hotari
7717adfab4 Chart: Bump version to 3.9.0 2025-01-20 19:11:45 +02:00
Lari Hotari
ee119d4f29
Use Pulsar 3.0.9 as previous LTS version in CI (#564) 2025-01-20 09:06:01 -08:00
Lari Hotari
dd1aa5e119
Use Pulsar 4.0.2 image by default (#563) 2025-01-20 08:22:16 -08:00
Eric Shen
b5ff00b16b
feat(tls): support ca type issuer and v1alpha* version cert-manager api (#561) 2024-12-18 07:11:54 -08:00
Raúl Sánchez
df9284dc97
Fix helm chart to allow configurable ingress pathType (#558) 2024-12-11 07:21:03 -08:00
Lari Hotari
05c78df4c5 Chart: Bump version to 3.8.0 2024-12-05 21:28:53 +02:00
Lari Hotari
d09ab8c4a7
Upgrade to Pulsar 4.0.1 image (#557) 2024-12-05 11:26:21 -08:00
Lari Hotari
0eeb7830a9
Revert "Wrap Zookeeper probe script with timeout command (#214)" (#556)
This reverts commit 22f4b9b3bd18a16c477003338464dfe5a689e074.
2024-12-02 01:35:22 -08:00
Lari Hotari
07689860f6
Fix Oxia config so that it includes a list of all pods in the statefulset (#553)
* Fix Oxia config so that it includes a list of all pods in the statefulset

* Test Oxia with 3 replicas since some issues only come up with more nodes

* Make internal name not a fqdn

* Fix issue with insufficient cpu requests in CI
2024-11-22 05:54:11 -08:00
Lari Hotari
cc12992d8f
Fix invalid internal server name in Oxia config (#552)
.svc doesn't resolve. it's better to use the fully qualified name
2024-11-22 04:35:54 -08:00
Yuwei Sung
c6ce11a9b7
Add support for using Oxia as the metadata store for Pulsar and BookKeeper (#544)
Co-authored-by: Lari Hotari <lhotari@apache.org>
2024-11-21 16:52:20 -08:00
Liam Gibson
17b739d10a
Add support for admin port on ZooKeeper (#550)
* Add support for admin port on ZooKeeper

* Make ZK admin port conditional
2024-11-20 09:27:44 -08:00
doug-ba
f6b6d88847
Correct pulsar proxy prometheus.io/port annotation (#548) 2024-11-18 21:39:24 -08:00
lenglet-k
ed50c68633
feat: add loadBalancerClass for proxy and pulsar-manager (#546)
* feat: add loadBalancerClass for proxy and pulsar-manager

Co-authored-by: Lari Hotari <lhotari@users.noreply.github.com>
2024-11-08 07:23:45 -08:00
Lari Hotari
d877fc3312
Use Pulsar 4.0.0 image, bump chart version to 3.7.0, kube-prometheus-stack to 65.x (#542)
* Use Pulsar 4.0.0 image, bump chart version to 3.7.0

* Bump kube-prometheus-stack to 65.x.x

* Remove testing with latest and test with previous LTS version

- run kube-prometheus-stack test with previous LTS version since
  the older chart version doesn't support Pulsar 4.0.0 image

* Fix passing "--values" to helm command

* Move ci runner config to a script

* Attempt to fix pulsar-manager-cluster-initialize
2024-10-29 15:29:27 -07:00
ChaoYang
64e67c1a88
update role (#543) 2024-10-29 15:28:47 -07:00
lenglet-k
db20c2bfa6
fix: broker extraEnv variable (#540)
* fix: broker extraEnv variable

* fix: comment extraEnv for broker as default values

* fix(typo): rename extreEnvs to extraEnvs
2024-10-18 00:07:24 -07:00
Lari Hotari
9e499db308
Test with 3.3.2 image (#541) 2024-10-18 00:06:49 -07:00
lenglet-k
346c5cdcd4
feat! add extraVolumes and Mounts for pulsar-manager (#535) 2024-10-08 05:00:00 -07:00
Lari Hotari
727e8c8b0d Chart: Bump version to 3.6.0 2024-10-04 23:01:20 +03:00
Lari Hotari
64b0769dc1
Use Pulsar 3.0.7 image by default (#536) 2024-10-04 12:55:06 -07:00
lenglet-k
75c00ebc7a
feat: add imagepullsecrets on pulsar-manager-initialize job (#533) 2024-10-02 17:15:46 -07:00
Lari Hotari
fffdcfc1ad
Fix compatibility with Pulsar 3.3.x+ docker images where /pulsar isn't writable (#531) 2024-09-27 12:17:12 -07:00
Shu.Wang
a45bc4bfe1
Add topologyspreadconstraint to deploy pods in sts cross different az evenly (#526)
Signed-off-by: Wang, Shu <shu.wang@fmr.com>
2024-09-26 21:37:15 -07:00
Lari Hotari
5276bd69ad Upgrade deprecated GitHub Actions in the CI workflow 2024-09-27 07:29:24 +03:00
Lari Hotari
6b31946fc7 Upgrade deprecated actions/upload-artifact@v2 to v4 2024-09-26 20:39:41 +03:00
ludmanl
54401c0b9a
feat: Support to customize broker podManagementPolicy from values.yaml (#525) 2024-09-03 03:47:52 -07:00
Duncan Schulze
0031827761
Support using self generated certificates (#523)
* Support using self generated certificates

* chore: fix linting
2024-08-23 17:49:36 +03:00
Lari Hotari
ac4f5a6627
Upgrade cert-manager to v1.12.13 (#517)
- cert-manager 1.12 is a LTS release, EOL until May 2025
2024-08-15 01:34:20 -07:00
Lari Hotari
dc817205a1
Bump minimum k8s version to 1.23.0 (#518) 2024-08-15 00:55:22 -07:00
Starry
093fa273f8
Add initContainers to templates (#516) 2024-08-05 09:40:55 -07:00
Lari Hotari
7675e4270d
Test compatibility with Pulsar 3.3.1 (#515) 2024-08-01 12:46:21 -07:00
Lari Hotari
70c4779542
Bump app version to 3.0.6 (#514) 2024-08-01 12:42:23 -07:00
Lari Hotari
70f36ffe43
Add timeouts for cluster metadata initialization and for init containers (#218)
- Add timeouts for waiting for zk and bk to become available.
- If the waiting gets stuck for some reason, the Pulsar deployment never
  becomes starts the broker services.
  - timeouts will help failures recover eventually
2024-06-20 10:07:48 -07:00
Lari Hotari
023f902a02
Allow specifying default pull policy and functions pull policy (#507) 2024-06-12 04:16:48 -07:00
Lari Hotari
9db0cccaca
Make zookeeper healthchecks compatible with Alpine's busybox nc (#504)
* Make zookeeper healthchecks compatible with Alpine's busybox nc

* Test Pulsar 3.3.0 image

* Use 127.0.0.1 instead of localhost in zookeeper healthchecks

- Alpine nc fails if "localhost" is used.
  - perhaps it defaults to use IPv6?

* Disable testing with Pulsar 3.3.0 image until 3.3.1 is released

- the image needs "apk add bind-tools" since busybox nslookup isn't compatible with kubernetes
2024-06-08 08:52:06 +03:00
Lari Hotari
47c2ac442a
Add defaultPulsarImageRepository configuration (#503)
- makes it easier to use a custom image
2024-06-05 04:20:16 -07:00
Lari Hotari
aebf5fb0d5
Upgrade kube-prometheus-stack to 59.x.x (#502) 2024-06-05 04:20:07 -07:00
Massimiliano Mirelli
6e84409b48
Support NodePort Proxy service (#500)
* Enables nodeport support for the proxy

* Correct indentation and remove null `nodePort`

Removing null `nodePort` causes k8s to pick up a random port

* Address review comment

https://github.com/apache/pulsar-helm-chart/pull/500/files#r1605762312
2024-06-04 08:46:16 -07:00
Massimiliano Mirelli
cb5c44f8ec
Allow broker's service clusterIP customisation (#498)
* Allow broker's service clusterIP customisation

This customisation is useful to configure headless vs non-headless
broker's service. The default is headless broker service, i.e. a
service for which kubernetes  does not allocate an IP
address (https://kubernetes.io/docs/concepts/services-networking/service/#type-clusterip). A
headless service is a very simple type of service that doesn't seem to work well
when pulsar service is exposed by pulsar-proxy via a nodeport.

Addresses #497.

* Address review comments

https://github.com/apache/pulsar-helm-chart/pull/498/files#r1605762934
and https://github.com/apache/pulsar-helm-chart/pull/498/files#r1605763245

* Move doc to Values.broker.service
2024-06-04 08:45:14 -07:00
Lari Hotari
3ecc2baab8 Chart: Bump version to 3.4.1 2024-05-17 17:55:45 +03:00
Lari Hotari
6795ad5c2c
Use Pulsar 3.0.5 as the default Pulsar version (appVersion) (#499) 2024-05-17 07:54:09 -07:00
MonicaMagoniCom
c4941b32d1
Add namespace to hpa templates (#494) 2024-05-03 11:48:02 -07:00
Lari Hotari
bd8bc633df
Change default statusFilePath to /pulsar/logs/status (#489)
* Change default statusFilePath to /pulsar/logs/status

* Write OK to statusFilePath
2024-04-15 05:41:17 -07:00
Lari Hotari
59f6f74fd7
Fix prometheus node-exporter crashloop (#488) 2024-04-12 03:10:24 -07:00
Lari Hotari
ee4b7a7988
Increase default Prometheus scrape interval to 60s (#487) 2024-04-11 07:35:57 -07:00
Martin
7c7ca4a7bc
enable message peeking (#486) 2024-04-10 23:20:37 -07:00
Martin
347326e0c3
Fix pulsar-manager persistence (#485)
- only setup environment in pulsar manager if broker is deployed
- fix indent
- enable persistence for manager and move configs around
2024-04-03 21:28:46 -07:00
Lari Hotari
d9e65836e8 Chart: Bump version to 3.4.0 2024-04-02 16:31:14 +03:00
Lari Hotari
a8776fd76c
Upgrade appVersion to 3.0.4 to use Pulsar 3.0.4 by default (#484) 2024-04-02 06:28:38 -07:00
Lari Hotari
88638d6b66 Increase timeouts in CI
- metallb timeout from 90s to 120s
- chart installation timeout from 300s to 360s
2024-04-02 10:14:09 +03:00
Lari Hotari
fdd46f9b74
Add basic NOTES.txt (#482) 2024-03-27 04:32:36 -07:00
Lari Hotari
cc0a1acf22
Disable functions by default in values.yaml (#483) 2024-03-26 23:17:40 +01:00
Lari Hotari
fdec9c69ef
Use podManagementPolicy OrderedReady for Broker sts when Functions are enabled (#474)
* Use podManagementPolicy OrderedReady for Broker sts when Functions are enabled

* Don't change podManagementPolicy when the sts already exists

* Fix template issue

* Fix apiVersion
2024-03-26 10:49:33 -07:00
doug-ba
9929b80b3c
add ability to use separate disk for zookeeper tx log (#476)
* add ability to use separate disk for zookeeper tx log

* Use absolute path

---------

Co-authored-by: Lari Hotari <lhotari@users.noreply.github.com>
2024-03-26 07:51:31 -07:00
Lari Hotari
eb0a878d9c
Make job.ttl.enabled consistent and effective only when k8s >= 1.23 (#481) 2024-03-26 06:23:15 -07:00
doug-ba
bc5862d4b0
pulsar-manager adding support for existing secret (#478) 2024-03-26 05:26:37 -07:00
doug-ba
3dee8dfe3b
making .ReleaseIsInstall optional for init jobs (#480)
* making .ReleasIsInstall optional for init jobs

* initialize simplifying an if condition based on feedback
2024-03-25 22:26:32 -07:00
Lari Hotari
43ed6f5434 Chart: Bump version to 3.3.1 2024-03-15 14:31:23 +02:00
Heesung Sohn
7eb8ce0ff3
Bump appVersion to 3.0.3 (#469) 2024-03-10 08:37:17 +02:00
Nathan Clayton
b4241f984b
Update broker statefulset to check if AWS keys secret name is defined before adding to environment. (#466) 2024-03-03 10:38:31 +02:00
Lari Hotari
0b130fafa9
Fix typo in script name in README.md 2024-03-01 05:39:21 -08:00
Lari Hotari
be62fef11c
Add security disclaimer for Helm chart usage 2024-02-29 10:04:03 -08:00
Lari Hotari
aeae9d72e5 Chart: Bump version to 3.3.0 2024-02-23 21:26:19 +02:00
Martin
89c5987b17
Bugfix/pulsar manager init (#463)
* add some more logs to the pulsar manager test

* fix admin secret "double-encoding"

* make pulsar-manager-cluster-initialize.yaml "rerunnable"
2024-02-22 17:37:25 +02:00
Lari Hotari
17a4239733
Remove buggy and useless function-worker-config-map (#462)
Fixes #56
2024-02-21 13:47:23 -08:00
Lari Hotari
0e3251bea8
Remove deprecated "extra" key to configure components, also remove dashboard that has been replaced (#461)
- the "extra" key has been deprecated a long time ago
- the dashboard is outdated and there's a replacement with kube-prometheus-stack and #439
2024-02-21 04:53:29 -08:00
csthomas1
cb269bbaf3
Feature/pulsar manager v0.2.0 with jwt setup admin account creation (#219)
* Added support for JWT secretref and key volume mount. Added admin user auto-creation.

* Removed variables accidentally re-added and comments no longer relevant

* Enabling pulsar manager test w/ manager-admin superuser for symmetric and asymmetric jwt tests

* Added verification of communication with broker to ci test-pulsar-manager

* Fixing error on line 115 of helm.sh

* More fixes

* Adding echo of envs and tenants

* Fixing LOGIN_JSESSIONID variable name
2024-02-21 04:25:23 -08:00
Victor Fauth
29ea17b3fc
Enable persistence for pulsar-manager (#343)
* Enable persistence for pulsar-manager

* Upgrade to v0.4.0 version of pulsar-manager to get required fix

- contains https://github.com/apache/pulsar-manager/pull/501
  in https://github.com/apache/pulsar-manager/releases/tag/v0.4.0

---------

Co-authored-by: Victor Fauth <victor.fauth@thalesgroup.com>
Co-authored-by: Lari Hotari <lhotari@apache.org>
2024-02-15 01:27:40 -08:00
Lari Hotari
ad65ac9941
Prepare scripts for arm64 / aarch64 support (#459)
- GitHub Actions will be adding arm64 support soon
  https://resources.github.com/devops/accelerate-your-cicd-with-arm-and-gpu-runners-in-github-actions/
2024-02-14 23:49:15 -08:00
Lari Hotari
a1cf2ac6ad
Upgrade to recent version of pulsarctl (#458) 2024-02-14 23:25:55 -08:00
Martin
d0b784a953
Feature/pulsar manager initialize (#457)
* add better pulsar manager integration and init along with tests & docs

* fix pulsar manager startup args

* update pulsar manager service to ClusterIP + remove duplicate
2024-02-14 10:13:54 -08:00
Lari Hotari
1f20887f09
Fix kubeconform check and improve it (#456)
- do "helm repo add" for the prometheus-community repo
- run checks for all k8s versions between 1.21.0-1.29.0
2024-02-13 01:43:16 -08:00
Lari Hotari
24b80c1986
Add validation using kubeconform (#449) 2024-01-31 04:21:27 -08:00
Lari Hotari
9cbe03c7ee
Improve Bookkeeper default configuration (#454)
- remove minimal memory settings
- add more optimal data compaction settings
2024-01-31 03:21:04 -08:00
Martin
4daf6d88a2
grouped init containers (#441) 2024-01-26 03:09:57 -08:00
Lari Hotari
8d2d567b30
Remove pulsar_detector dash board (#446)
- not applicable for Apache Pulsar Helm chart's Pulsar deployment
2024-01-26 03:09:11 -08:00
Lari Hotari
72a8fb6b3e
Upgrade kube-prometheus-stack to 56.x.x version (#445)
* Upgrade to kube-prometheus-stack 56.x.x

* Add CI test case for kube-prometheus-stack upgrade

* Add "--force-conflicts" flag
2024-01-26 03:07:10 -08:00
Lari Hotari
727dccb013
Update RELEASE.md 2024-01-25 07:36:51 -08:00
Martin
8cd3a04812
expose admin port of pulsar manager in service (#440) 2024-01-24 23:32:25 -08:00
Lari Hotari
de4d2e7dc8
Add kubeVersion to Chart.yaml to enforce minimum Kubernetes version (#443) 2024-01-24 11:46:59 -08:00
Lari Hotari
65a5fc0002 Fix typo in Apache License 2.0 abbrev, should be AL 2.0 2024-01-24 21:44:24 +02:00
Lari Hotari
d486e4a42d
Add default configuration for Pulsar Grafana dashboards (#439)
* Add default configuration for dashboards
2024-01-24 11:12:57 -08:00
Lari Hotari
a75508862f Update helm version requirement in docs 2024-01-19 20:26:39 +02:00
Lari Hotari
e058aa581d
Require helm version 3.10 or newer (#436)
* Add check for required helm version

* Add test scenario for helm 3.10.0
2024-01-18 19:28:09 +02:00
Lari Hotari
1cb83398c8
Don't use TLS from function instances to brokers by default (#435)
- Function instances don't currently have the TLS CA cert available
2024-01-17 21:04:43 -08:00
Lari Hotari
9461dfc280
Update RELEASE.md 2024-01-17 14:56:41 -08:00
Lari Hotari
aae69e897e
Update README.md
there is no `helm delete`, it is `helm uninstall`
2024-01-17 14:53:28 -08:00
Lari Hotari
584b18ad3c
Update RELEASE.md
Cover gaps in release instructions.
- missed pushing the version bump commit
- release notes creation instructions were missing
2024-01-17 14:51:51 -08:00
Lari Hotari
6db886f078 Chart: Bump version to 3.2.0
(cherry picked from commit 03b3888df449796f815ce90d12a3c64ab661ea30)
2024-01-18 00:45:52 +02:00
Lari Hotari
89602c39e2 Improve functions testing logging 2024-01-17 18:18:32 +02:00
Lari Hotari
23211c998a Fix creating namespace for cert-manager deployment 2024-01-17 18:18:28 +02:00
Lari Hotari
e49bd32378 Fix indent for Tiered storage offload environment 2024-01-17 18:11:30 +02:00
Lari Hotari
e6ccd93d4f
Test Pulsar Functions in CI (#434) 2024-01-17 04:12:37 -08:00
pellicano
cfa156f738
Tiered Storage config (#205)
* Add tiered storage config

* Check Tiered Storage on README

* GitHub PR #205 changes (1st round)

Remove <= 2.6.0 configs.
Add missing GCS secret volumeMount.
Update GCS example name.

* Cleanup comments

* Bump chart version

* GitHub PR #205 changes (2nd round)

Moved storageOffload under broker section.
Fixed some typos.
Added AWS S3 IRSA annotation comment.

* GitHub PR #205  changes (3rd round)

Moved AWS and Azure credentials into K8S secrets using same StreamNative Helm Chart approach.

* Trim trailing spaces

---------

Co-authored-by: Lari Hotari <lhotari@apache.org>
Co-authored-by: Marcelo Pellicano <mpellicanodeoliveira@bluecatnetworks.com>
2024-01-17 03:06:16 -08:00
Lari Hotari
18e67f2bf8 Update RELEASE.md 2024-01-17 12:07:47 +02:00
Lari Hotari
f0844d1d38 Update RELEASE.md 2024-01-17 11:49:07 +02:00
Lari Hotari
0197e0846d Update RELEASE.md 2024-01-17 11:12:55 +02:00
Lari Hotari
5c0d56cdbf
Update RELEASE.md 2024-01-17 01:07:06 -08:00
127 changed files with 7388 additions and 2152 deletions

File diff suppressed because it is too large.


@@ -0,0 +1,73 @@
{
  "clientId": $ARGS.named.CLIENT_ID,
  "enabled": true,
  "clientAuthenticatorType": "client-secret",
  "secret": $ARGS.named.CLIENT_SECRET,
  "standardFlowEnabled": false,
  "implicitFlowEnabled": false,
  "serviceAccountsEnabled": true,
  "protocol": "openid-connect",
  "attributes": {
    "realm_client": "false",
    "oidc.ciba.grant.enabled": "false",
    "client.secret.creation.time": "1735689600",
    "backchannel.logout.session.required": "true",
    "standard.token.exchange.enabled": "false",
    "frontchannel.logout.session.required": "true",
    "oauth2.device.authorization.grant.enabled": "false",
    "display.on.consent.screen": "false",
    "backchannel.logout.revoke.offline.tokens": "false"
  },
  "protocolMappers": [
    {
      "name": "sub",
      "protocol": "openid-connect",
      "protocolMapper": "oidc-hardcoded-claim-mapper",
      "consentRequired": false,
      "config": {
        "introspection.token.claim": "true",
        "claim.value": $ARGS.named.SUB_CLAIM_VALUE,
        "userinfo.token.claim": "true",
        "id.token.claim": "true",
        "lightweight.claim": "false",
        "access.token.claim": "true",
        "claim.name": "sub",
        "jsonType.label": "String",
        "access.tokenResponse.claim": "false"
      }
    },
    {
      "name": "nbf",
      "protocol": "openid-connect",
      "protocolMapper": "oidc-hardcoded-claim-mapper",
      "consentRequired": false,
      "config": {
        "introspection.token.claim": "true",
        "claim.value": "1735689600",
        "userinfo.token.claim": "true",
        "id.token.claim": "true",
        "lightweight.claim": "false",
        "access.token.claim": "true",
        "claim.name": "nbf",
        "jsonType.label": "long",
        "access.tokenResponse.claim": "false"
      }
    }
  ],
  "defaultClientScopes": [
    "web-origins",
    "service_account",
    "acr",
    "profile",
    "roles",
    "basic",
    "email"
  ],
  "optionalClientScopes": [
    "address",
    "phone",
    "organization",
    "offline_access",
    "microprofile-jwt"
  ]
}


@@ -0,0 +1,26 @@
# Keycloak
Keycloak is used to validate the OIDC configuration.
To create the pulsar realm configuration, we use:
* `0-realm-pulsar-partial-export.json`: after creating the pulsar realm in the Keycloak UI, this file is the result of a partial export from the Keycloak UI with no options selected.
* `1-client-template.json`: the template used to create pulsar clients. Note that it is a jq program rather than plain JSON: the `$ARGS.named.*` placeholders are filled in from `--arg` values.
To create the final `realm-pulsar.json`, merge the files with `jq`:
* render a client with `CLIENT_ID`, `CLIENT_SECRET` and `SUB_CLAIM_VALUE` (the template is passed with `-f` since it is a jq filter, not an input file):
```
CLIENT_ID=xx
CLIENT_SECRET=yy
SUB_CLAIM_VALUE=zz
jq -n --arg CLIENT_ID "$CLIENT_ID" --arg CLIENT_SECRET "$CLIENT_SECRET" --arg SUB_CLAIM_VALUE "$SUB_CLAIM_VALUE" -f 1-client-template.json > client.json
```
* then merge the client into the realm:
```
jq '.clients += [input]' 0-realm-pulsar-partial-export.json client.json > realm-pulsar.json
```
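The merge flow can be run end to end. The snippet below is a sketch that substitutes minimal stand-ins for the two input files (the real template carries the full set of client attributes), so file contents here are illustrative only; it shows why the template must be passed to `jq` with `-f` (it is a jq program whose `$ARGS.named.*` placeholders are bound by `--arg`):

```shell
#!/usr/bin/env bash
set -euo pipefail
cd "$(mktemp -d)"

# Minimal stand-in for 1-client-template.json: a jq program, not plain JSON,
# because $ARGS.named.* placeholders are filled in from --arg values.
cat > 1-client-template.json <<'EOF'
{
  "clientId": $ARGS.named.CLIENT_ID,
  "secret": $ARGS.named.CLIENT_SECRET
}
EOF

# Minimal stand-in for the partial realm export.
echo '{"realm": "pulsar", "clients": []}' > 0-realm-pulsar-partial-export.json

CLIENT_ID=xx
CLIENT_SECRET=yy

# -n starts with no input; -f reads the template as the jq filter to run.
jq -n --arg CLIENT_ID "$CLIENT_ID" --arg CLIENT_SECRET "$CLIENT_SECRET" \
  -f 1-client-template.json > client.json

# Append the rendered client to the realm's client list: the first file is
# "." and `input` reads the next input (client.json).
jq '.clients += [input]' 0-realm-pulsar-partial-export.json client.json > realm-pulsar.json

jq -r '.clients[0].clientId' realm-pulsar.json   # prints: xx
```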


@@ -0,0 +1,34 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
tls:
  enabled: false
# This block sets up an example Pulsar Realm
# https://www.keycloak.org/server/importExport#_importing_a_realm_from_a_directory
extraEnvVars:
  - name: KEYCLOAK_EXTRA_ARGS
    value: "--import-realm"
extraVolumes:
  - name: realm-config
    secret:
      secretName: keycloak-ci-realm-config
extraVolumeMounts:
  - name: realm-config
    mountPath: "/opt/bitnami/keycloak/data/import"
    readOnly: true


@@ -0,0 +1,5 @@
{
  "type": "client_credentials",
  "client_id": $ARGS.named.CLIENT_ID,
  "client_secret": $ARGS.named.CLIENT_SECRET
}


@@ -27,40 +27,73 @@ VALUES_FILE=$1
TLS=${TLS:-"false"}
SYMMETRIC=${SYMMETRIC:-"false"}
FUNCTION=${FUNCTION:-"false"}
MANAGER=${MANAGER:-"false"}
ALLOW_LOADBALANCERS=${ALLOW_LOADBALANCERS:-"false"}
source ${PULSAR_HOME}/.ci/helm.sh
# create cluster
ci::create_cluster
extra_opts=""
ci::helm_repo_add
extra_opts=()
# Add any arguments after $1 to extra_opts
shift # Remove $1 from the argument list
while [[ $# -gt 0 ]]; do
extra_opts+=("$1")
shift
done
if [[ "x${SYMMETRIC}" == "xtrue" ]]; then
extra_opts="-s"
extra_opts+=("-s")
fi
if [[ "x${EXTRA_SUPERUSERS}" != "x" ]]; then
extra_opts+=("--pulsar-superusers" "proxy-admin,broker-admin,admin,${EXTRA_SUPERUSERS}")
fi
install_type="install"
test_action="produce-consume"
if [[ "$UPGRADE_FROM_VERSION" != "" ]]; then
ALLOW_LOADBALANCERS="true"
# install older version of pulsar chart
PULSAR_CHART_VERSION="$UPGRADE_FROM_VERSION"
ci::install_pulsar_chart install ${PULSAR_HOME}/.ci/values-common.yaml ${PULSAR_HOME}/${VALUES_FILE} ${extra_opts}
# Install Prometheus Operator CRDs using the upgrade script since kube-prometheus-stack is now disabled before the upgrade
${PULSAR_HOME}/scripts/kube-prometheus-stack/upgrade_prometheus_operator_crds.sh
ci::install_pulsar_chart install ${PULSAR_HOME}/.ci/values-common.yaml ${PULSAR_HOME}/${VALUES_FILE} --set kube-prometheus-stack.enabled=false "${extra_opts[@]}"
install_type="upgrade"
echo "Wait 10 seconds"
sleep 10
# check pulsar environment
ci::check_pulsar_environment
# test that we can access the admin api
ci::test_pulsar_admin_api_access
# produce messages with old version of pulsar and consume with new version
ci::test_pulsar_producer_consumer "produce"
test_action="consume"
if [[ "$(ci::helm_values_for_deployment | yq .victoria-metrics-k8s-stack.enabled)" == "true" ]]; then
echo "Upgrade Victoria Metrics Operator CRDs before upgrading the deployment"
${PULSAR_HOME}/scripts/victoria-metrics-k8s-stack/upgrade_vm_operator_crds.sh
fi
fi
PULSAR_CHART_VERSION="local"
# install (or upgrade) pulsar chart
ci::install_pulsar_chart ${install_type} ${PULSAR_HOME}/.ci/values-common.yaml ${PULSAR_HOME}/${VALUES_FILE} ${extra_opts}
ci::install_pulsar_chart ${install_type} ${PULSAR_HOME}/.ci/values-common.yaml ${PULSAR_HOME}/${VALUES_FILE} "${extra_opts[@]}"
echo "Wait 10 seconds"
sleep 10
# check that there aren't any loadbalancers if ALLOW_LOADBALANCERS is false
if [[ "${ALLOW_LOADBALANCERS}" == "false" ]]; then
ci::check_loadbalancers
fi
# check pulsar environment
ci::check_pulsar_environment
@@ -69,10 +102,15 @@ ci::test_pulsar_admin_api_access
# test producer/consumer
ci::test_pulsar_producer_consumer "${test_action}"
if [[ "x${FUNCTION}" == "xtrue" ]]; then
if [[ "$(ci::helm_values_for_deployment | yq .components.functions)" == "true" ]]; then
# test functions
ci::test_pulsar_function
fi
if [[ "$(ci::helm_values_for_deployment | yq .components.pulsar_manager)" == "true" ]]; then
# test manager
ci::test_pulsar_manager
fi
# delete the cluster
ci::delete_cluster
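The switch above from a plain `extra_opts=""` string to a bash array expanded as `"${extra_opts[@]}"` is what keeps multi-word options such as `--pulsar-superusers "proxy-admin,..."` intact through word splitting. A minimal sketch of the difference (the names here are illustrative, not from the chart):

```shell
#!/usr/bin/env bash
# A string option whose value contains a space is word-split when expanded
# unquoted; a quoted array expansion hands each element over as one argument.

count_args() { echo "$#"; }

opts_string="--set key=a value"
count_args ${opts_string}            # → 3 (split into --set, key=a, value)

opts_array=("--set" "key=a value")
count_args "${opts_array[@]}"        # → 2 (--set and "key=a value")
```

This is why the diff also changes every call site, including `ci::install_pulsar_chart` and `prepare_helm_release.sh`, to the quoted `"${extra_opts[@]}"` form.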


@@ -0,0 +1,105 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# enable TLS with cacerts
tls:
enabled: true
proxy:
enabled: true
cacerts:
enabled: true
certs:
- name: common-cacert
existingSecret: "pulsar-ci-common-cacert"
secretKeys:
- ca.crt
broker:
enabled: true
cacerts:
enabled: true
certs:
- name: common-cacert
existingSecret: "pulsar-ci-common-cacert"
secretKeys:
- ca.crt
bookie:
enabled: true
cacerts:
enabled: true
certs:
- name: common-cacert
existingSecret: "pulsar-ci-common-cacert"
secretKeys:
- ca.crt
zookeeper:
enabled: true
cacerts:
enabled: true
certs:
- name: common-cacert
existingSecret: "pulsar-ci-common-cacert"
secretKeys:
- ca.crt
toolset:
cacerts:
enabled: true
certs:
- name: common-cacert
existingSecret: "pulsar-ci-common-cacert"
secretKeys:
- ca.crt
autorecovery:
cacerts:
enabled: true
certs:
- name: common-cacert
existingSecret: "pulsar-ci-common-cacert"
secretKeys:
- ca.crt
# enable cert-manager
certs:
internal_issuer:
enabled: true
type: selfsigning
# deploy cacerts
extraDeploy:
- |
apiVersion: "{{ .Values.certs.internal_issuer.apiVersion }}"
kind: Certificate
metadata:
name: "{{ template "pulsar.fullname" . }}-common-cacert"
namespace: {{ template "pulsar.namespace" . }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
spec:
secretName: "{{ template "pulsar.fullname" . }}-common-cacert"
commonName: "common-cacert"
duration: "{{ .Values.certs.internal_issuer.duration }}"
renewBefore: "{{ .Values.certs.internal_issuer.renewBefore }}"
usages:
- server auth
- client auth
isCA: true
issuerRef:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.certs.internal_issuer.component }}"
kind: Issuer
group: cert-manager.io
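Each `cacerts.certs` entry above names an existing Kubernetes secret (`existingSecret`) and the data keys to pick from it (`secretKeys`). Secret data is stored base64-encoded, so inspecting the mounted CA by hand amounts to decoding that key, roughly what `kubectl get secret pulsar-ci-common-cacert -o jsonpath='{.data.ca\.crt}' | base64 -d` would do. A sketch of just the decoding step, with a stand-in PEM value:

```shell
#!/usr/bin/env bash
# Stand-in for what the cluster stores under data["ca.crt"] in the
# pulsar-ci-common-cacert secret referenced by the values above.
pem='-----BEGIN CERTIFICATE-----'

encoded=$(printf '%s' "$pem" | base64)   # how Kubernetes stores secret data
printf '%s' "$encoded" | base64 -d       # → -----BEGIN CERTIFICATE-----
```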


@@ -17,12 +17,13 @@
# under the License.
#
auth:
authentication:
enabled: true
provider: "jwt"
jwt:
# Enable JWT authentication
enabled: true
# If the token is generated by a secret key, set usingSecretKey to true.
# If the token is generated by a private key, set usingSecretKey to false.
usingSecretKey: false
@@ -35,3 +36,9 @@ auth:
proxy: "proxy-admin"
# pulsar-admin client to broker/proxy communication
client: "admin"
# pulsar-manager to broker communication
manager: "manager-admin"
components:
pulsar_manager: true


@@ -17,12 +17,13 @@
# under the License.
#
auth:
authentication:
enabled: true
provider: "jwt"
jwt:
# Enable JWT authentication
enabled: true
# If the token is generated by a secret key, set usingSecretKey to true.
# If the token is generated by a private key, set usingSecretKey to false.
usingSecretKey: true
@@ -35,3 +36,8 @@ auth:
proxy: "proxy-admin"
# pulsar-admin client to broker/proxy communication
client: "admin"
# pulsar manager to broker
manager: "manager-admin"
components:
pulsar_manager: true


@@ -0,0 +1,94 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# Mount credentials to each component
proxy:
configData:
# Authentication settings of the broker itself. Used when the broker connects to other brokers, or when the proxy connects to brokers, either in the same or in other clusters
brokerClientAuthenticationPlugin: "org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2"
brokerClientAuthenticationParameters: '{"privateKey":"file:///pulsar/auth/proxy/credentials_file.json","audience":"account","issuerUrl":"http://keycloak-ci-headless:8080/realms/pulsar"}'
extraVolumes:
- name: pulsar-proxy-credentials
secret:
secretName: pulsar-proxy-credentials
extraVolumeMounts:
- name: pulsar-proxy-credentials
mountPath: "/pulsar/auth/proxy"
readOnly: true
broker:
configData:
# Authentication settings of the broker itself. Used when the broker connects to other brokers, or when the proxy connects to brokers, either in the same or in other clusters
brokerClientAuthenticationPlugin: "org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2"
brokerClientAuthenticationParameters: '{"privateKey":"file:///pulsar/auth/broker/credentials_file.json","audience":"account","issuerUrl":"http://keycloak-ci-headless:8080/realms/pulsar"}'
extraVolumes:
- name: pulsar-broker-credentials
secret:
secretName: pulsar-broker-credentials
extraVolumeMounts:
- name: pulsar-broker-credentials
mountPath: "/pulsar/auth/broker"
readOnly: true
toolset:
configData:
authPlugin: "org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2"
authParams: '{"privateKey":"file:///pulsar/auth/admin/credentials_file.json","audience":"account","issuerUrl":"http://keycloak-ci-headless:8080/realms/pulsar"}'
extraVolumes:
- name: pulsar-admin-credentials
secret:
secretName: pulsar-admin-credentials
extraVolumeMounts:
- name: pulsar-admin-credentials
mountPath: "/pulsar/auth/admin"
readOnly: true
auth:
authentication:
enabled: true
openid:
# Enable openid authentication
enabled: true
# https://pulsar.apache.org/docs/next/security-openid-connect/#enable-openid-connect-authentication-in-the-broker-and-proxy
openIDAllowedTokenIssuers:
- http://keycloak-ci-headless:8080/realms/pulsar
openIDAllowedAudiences:
- account
#openIDTokenIssuerTrustCertsFilePath:
openIDRoleClaim: "sub"
openIDAcceptedTimeLeewaySeconds: "0"
openIDCacheSize: "5"
openIDCacheRefreshAfterWriteSeconds: "64800"
openIDCacheExpirationSeconds: "86400"
openIDHttpConnectionTimeoutMillis: "10000"
openIDHttpReadTimeoutMillis: "10000"
openIDKeyIdCacheMissRefreshSeconds: "300"
openIDRequireIssuersUseHttps: "false"
openIDFallbackDiscoveryMode: "DISABLED"
authorization:
enabled: true
superUsers:
# broker to broker communication
broker: "broker-admin"
# proxy to broker communication
proxy: "proxy-admin"
# pulsar-admin client to broker/proxy communication
client: "admin"
# pulsar manager to broker
manager: "manager-admin"
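The `openIDAllowedTokenIssuers` entry above is the base for OpenID Connect discovery: the broker fetches the issuer's metadata from its well-known endpoint, and the OAuth2 client plugin obtains tokens from the `token_endpoint` advertised there. A small sketch of how the discovery URL is derived (the host is the in-cluster Keycloak service assumed by this CI setup):

```shell
#!/usr/bin/env bash
# Derive the OIDC discovery URL from the configured issuer.
issuer="http://keycloak-ci-headless:8080/realms/pulsar"
echo "${issuer}/.well-known/openid-configuration"
# → http://keycloak-ci-headless:8080/realms/pulsar/.well-known/openid-configuration

# The client_credentials request the OAuth2 plugin makes would then be roughly:
# curl -s -X POST "$token_endpoint" -d grant_type=client_credentials \
#      -d client_id=pulsar-broker -d client_secret=...
```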


@@ -0,0 +1,35 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
components:
zookeeper: false
oxia: true
# disable functions for oxia tests since there's no support for Oxia in
# BookKeeperPackagesStorage which requires Zookeeper
functions: false
oxia:
initialShardCount: 3
replicationFactor: 3
server:
replicas: 3
cpuLimit: 333m
memoryLimit: 200Mi
dbCacheSizeMb: 100
storageSize: 1Gi


@@ -17,6 +17,5 @@
# under the License.
#
rbac:
enabled: true
psp: true
components:
pulsar_manager: true


@@ -17,4 +17,4 @@
# under the License.
#
defaultPulsarImageTag: 3.1.1
defaultPulsarImageTag: 3.0.12


@@ -17,6 +17,7 @@
# under the License.
#
# enable TLS
tls:
enabled: true


@@ -0,0 +1,60 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
victoria-metrics-k8s-stack:
enabled: true
victoria-metrics-operator:
enabled: true
vmsingle:
enabled: true
vmagent:
enabled: true
grafana:
enabled: true
adminPassword: pulsar-ci-admin
prometheus-node-exporter:
enabled: true
zookeeper:
podMonitor:
enabled: true
bookkeeper:
podMonitor:
enabled: true
broker:
podMonitor:
enabled: true
autorecovery:
podMonitor:
enabled: true
proxy:
podMonitor:
enabled: true
oxia:
coordinator:
podMonitor:
enabled: true
server:
podMonitor:
enabled: true


@@ -0,0 +1,41 @@
#!/bin/bash
# This script installs tools for the GitHub Actions CI runner while debugging with ssh
if [[ -z "${GITHUB_ACTIONS}" ]]; then
echo "Error: This script is intended to run only in GitHub Actions environment"
exit 1
fi
cat >> $HOME/.bashrc <<'EOF'
function use_kind_kubeconfig() {
export KUBECONFIG=$(ls $HOME/kind/pulsar-ci-*/kubeconfig.yaml)
}
function kubectl() {
# use kind environment's kubeconfig
if [ -z "$KUBECONFIG" ]; then
use_kind_kubeconfig
fi
command kubectl "$@"
}
function k9s() {
# use kind environment's kubeconfig
if [ -z "$KUBECONFIG" ]; then
use_kind_kubeconfig
fi
# install k9s on the fly
if [ ! -x /usr/local/bin/k9s ]; then
echo "Installing k9s..."
curl -L -s https://github.com/derailed/k9s/releases/download/v0.40.5/k9s_Linux_amd64.tar.gz | sudo tar xz -C /usr/local/bin k9s
fi
command k9s "$@"
}
alias k=kubectl
EOF
cat >> $HOME/.bash_profile <<'EOF'
if [ -f ~/.bashrc ]; then
source ~/.bashrc
fi
EOF

.ci/helm.sh (325 changed lines) Normal file → Executable file

@@ -81,9 +81,17 @@ function ci::install_cert_manager() {
echo "Successfully installed the cert manager."
}
function ci::helm_repo_add() {
echo "Adding the helm repo ..."
${HELM} repo add prometheus-community https://prometheus-community.github.io/helm-charts
${HELM} repo add vm https://victoriametrics.github.io/helm-charts/
${HELM} repo update
echo "Successfully added the helm repo."
}
function ci::print_pod_logs() {
echo "Logs for all pulsar containers:"
for k8sobject in $(${KUBECTL} get pods,jobs -n ${NAMESPACE} -l app=pulsar -o=name); do
echo "Logs for all containers:"
for k8sobject in $(${KUBECTL} get pods,jobs -n ${NAMESPACE} -o=name); do
${KUBECTL} logs -n ${NAMESPACE} "$k8sobject" --all-containers=true --ignore-errors=true --prefix=true --tail=100 || true
done;
}
@@ -91,7 +99,7 @@ function ci::print_pod_logs() {
function ci::collect_k8s_logs() {
mkdir -p "${K8S_LOGS_DIR}" && cd "${K8S_LOGS_DIR}"
echo "Collecting k8s logs to ${K8S_LOGS_DIR}"
for k8sobject in $(${KUBECTL} get pods,jobs -n ${NAMESPACE} -l app=pulsar -o=name); do
for k8sobject in $(${KUBECTL} get pods,jobs -n ${NAMESPACE} -o=name); do
filebase="${k8sobject//\//_}"
${KUBECTL} logs -n ${NAMESPACE} "$k8sobject" --all-containers=true --ignore-errors=true --prefix=true > "${filebase}.$$.log.txt" || true
${KUBECTL} logs -n ${NAMESPACE} "$k8sobject" --all-containers=true --ignore-errors=true --prefix=true --previous=true > "${filebase}.previous.$$.log.txt" || true
@@ -105,15 +113,29 @@ function ci::install_pulsar_chart() {
local install_type=$1
local common_value_file=$2
local value_file=$3
local extra_opts=$4
shift 3
local extra_values=()
local extra_opts=()
local values_next=false
for arg in "$@"; do
if [[ "$arg" == "--values" || "$arg" == "--set" ]]; then
extra_values+=("$arg")
values_next=true
elif [[ "$values_next" == true ]]; then
extra_values+=("$arg")
values_next=false
else
extra_opts+=("$arg")
fi
done
local install_args
if [[ "${install_type}" == "install" ]]; then
echo "Installing the pulsar chart"
${KUBECTL} create namespace ${NAMESPACE}
ci::install_cert_manager
echo ${CHARTS_HOME}/scripts/pulsar/prepare_helm_release.sh -k ${CLUSTER} -n ${NAMESPACE} ${extra_opts}
${CHARTS_HOME}/scripts/pulsar/prepare_helm_release.sh -k ${CLUSTER} -n ${NAMESPACE} ${extra_opts}
echo ${CHARTS_HOME}/scripts/pulsar/prepare_helm_release.sh -k ${CLUSTER} -n ${NAMESPACE} "${extra_opts[@]}"
${CHARTS_HOME}/scripts/pulsar/prepare_helm_release.sh -k ${CLUSTER} -n ${NAMESPACE} "${extra_opts[@]}"
sleep 10
# install metallb for loadbalancer support
@@ -123,13 +145,17 @@ function ci::install_pulsar_chart() {
${KUBECTL} wait --namespace metallb-system \
--for=condition=ready pod \
--selector=app=metallb \
--timeout=90s
--timeout=120s
# configure metallb
${KUBECTL} apply -f ${BINDIR}/metallb/metallb-config.yaml
install_args=""
# create auth resources
if [[ "x${AUTHENTICATION_PROVIDER}" == "xopenid" ]]; then
ci::create_openid_resources
fi
else
install_args="--wait --wait-for-jobs --timeout 300s --debug"
install_args="--wait --wait-for-jobs --timeout 360s --debug"
fi
CHART_ARGS=""
@@ -148,8 +174,8 @@
fi
fi
set -x
${HELM} template --values ${common_value_file} --values ${value_file} ${CLUSTER} ${CHART_ARGS}
${HELM} ${install_type} --values ${common_value_file} --values ${value_file} --namespace=${NAMESPACE} ${CLUSTER} ${CHART_ARGS} ${install_args}
${HELM} template --values ${common_value_file} --values ${value_file} "${extra_values[@]}" ${CLUSTER} ${CHART_ARGS}
${HELM} ${install_type} --values ${common_value_file} --values ${value_file} "${extra_values[@]}" --namespace=${NAMESPACE} ${CLUSTER} ${CHART_ARGS} ${install_args}
set +x
if [[ "${install_type}" == "install" ]]; then
@@ -251,9 +277,15 @@
}
function ci::test_pulsar_admin_api_access() {
echo "Test pulsar admin api access"
ci::retry ${KUBECTL} exec -n ${NAMESPACE} ${CLUSTER}-toolset-0 -- bin/pulsar-admin tenants list
}
function ci::test_create_test_namespace() {
${KUBECTL} exec -n ${NAMESPACE} ${CLUSTER}-toolset-0 -- bin/pulsar-admin tenants create pulsar-ci
${KUBECTL} exec -n ${NAMESPACE} ${CLUSTER}-toolset-0 -- bin/pulsar-admin namespaces create pulsar-ci/test
}
function ci::test_pulsar_producer_consumer() {
action="${1:-"produce-consume"}"
echo "Testing with ${action}"
@@ -264,8 +296,7 @@
fi
set -x
if [[ "${action}" == "produce" || "${action}" == "produce-consume" ]]; then
${KUBECTL} exec -n ${NAMESPACE} ${CLUSTER}-toolset-0 -- bin/pulsar-admin tenants create pulsar-ci
${KUBECTL} exec -n ${NAMESPACE} ${CLUSTER}-toolset-0 -- bin/pulsar-admin namespaces create pulsar-ci/test
ci::test_create_test_namespace
${KUBECTL} exec -n ${NAMESPACE} ${CLUSTER}-toolset-0 -- bin/pulsar-admin topics create pulsar-ci/test/test-topic
${KUBECTL} exec -n ${NAMESPACE} ${CLUSTER}-toolset-0 -- bin/pulsar-admin topics create-subscription -s test pulsar-ci/test/test-topic
${KUBECTL} exec -n ${NAMESPACE} ${CLUSTER}-toolset-0 -- bin/pulsar-client produce -m "test-message" pulsar-ci/test/test-topic
@@ -280,31 +311,277 @@
}
function ci::wait_function_running() {
num_running=$(${KUBECTL} exec -n ${NAMESPACE} ${CLUSTER}-toolset-0 -- bash -c 'bin/pulsar-admin functions status --tenant pulsar-ci --namespace test --name test-function | bin/jq .numRunning')
num_running=$(${KUBECTL} exec -n ${NAMESPACE} ${CLUSTER}-toolset-0 -- bash -c 'bin/pulsar-admin functions status --tenant pulsar-ci --namespace test --name test-function' | jq .numRunning)
counter=1
while [[ ${num_running} -lt 1 ]]; do
echo ${num_running}
((counter++))
if [[ $counter -gt 6 ]]; then
echo >&2 "Timeout waiting..."
return 1
fi
echo "Waiting 15 seconds for function to be running"
sleep 15
${KUBECTL} get pods -n ${NAMESPACE} --field-selector=status.phase=Running
${KUBECTL} get pods -n ${NAMESPACE} -l component=function || true
${KUBECTL} get events --sort-by=.lastTimestamp -A | tail -n 30 || true
num_running=$(${KUBECTL} exec -n ${NAMESPACE} ${CLUSTER}-toolset-0 -- bash -c 'bin/pulsar-admin functions status --tenant pulsar-ci --namespace test --name test-function | bin/jq .numRunning')
podname=$(${KUBECTL} get pods -l component=function -n ${NAMESPACE} --no-headers -o custom-columns=":metadata.name") || true
if [[ -n "$podname" ]]; then
echo "Function pod is $podname"
${KUBECTL} describe pod -n ${NAMESPACE} $podname
echo "Function pod logs"
${KUBECTL} logs -n ${NAMESPACE} $podname
fi
num_running=$(${KUBECTL} exec -n ${NAMESPACE} ${CLUSTER}-toolset-0 -- bash -c 'bin/pulsar-admin functions status --tenant pulsar-ci --namespace test --name test-function' | jq .numRunning)
done
}
function ci::wait_message_processed() {
num_processed=$(${KUBECTL} exec -n ${NAMESPACE} ${CLUSTER}-toolset-0 -- bash -c 'bin/pulsar-admin functions stats --tenant pulsar-ci --namespace test --name test-function | bin/jq .processedSuccessfullyTotal')
num_processed=$(${KUBECTL} exec -n ${NAMESPACE} ${CLUSTER}-toolset-0 -- bash -c 'bin/pulsar-admin functions stats --tenant pulsar-ci --namespace test --name test-function' | jq .processedSuccessfullyTotal)
podname=$(${KUBECTL} get pods -l component=function -n ${NAMESPACE} --no-headers -o custom-columns=":metadata.name")
counter=1
while [[ ${num_processed} -lt 1 ]]; do
echo ${num_processed}
((counter++))
if [[ $counter -gt 6 ]]; then
echo >&2 "Timeout waiting..."
return 1
fi
echo "Waiting 15 seconds for message to be processed"
sleep 15
echo "Function pod is $podname"
${KUBECTL} describe pod -n ${NAMESPACE} $podname
echo "Function pod logs"
${KUBECTL} logs -n ${NAMESPACE} $podname
${KUBECTL} exec -n ${NAMESPACE} ${CLUSTER}-toolset-0 -- bin/pulsar-admin functions stats --tenant pulsar-ci --namespace test --name test-function
num_processed=$(${KUBECTL} exec -n ${NAMESPACE} ${CLUSTER}-toolset-0 -- bash -c 'bin/pulsar-admin functions stats --tenant pulsar-ci --namespace test --name test-function | bin/jq .processedSuccessfullyTotal')
num_processed=$(${KUBECTL} exec -n ${NAMESPACE} ${CLUSTER}-toolset-0 -- bash -c 'bin/pulsar-admin functions stats --tenant pulsar-ci --namespace test --name test-function' | jq .processedSuccessfullyTotal)
done
}
function ci::test_pulsar_function() {
echo "Testing functions"
echo "Creating function"
${KUBECTL} exec -n ${NAMESPACE} ${CLUSTER}-toolset-0 -- bin/pulsar-admin functions create --tenant pulsar-ci --namespace test --name test-function --inputs "pulsar-ci/test/test_input" --output "pulsar-ci/test/test_output" --parallelism 1 --classname org.apache.pulsar.functions.api.examples.ExclamationFunction --jar /pulsar/examples/api-examples.jar
echo "Creating subscription for output topic"
${KUBECTL} exec -n ${NAMESPACE} ${CLUSTER}-toolset-0 -- bin/pulsar-admin topics create-subscription -s test pulsar-ci/test/test_output
echo "Waiting for function to be ready"
# wait until the function is running
# TODO: re-enable function test
# ci::wait_function_running
# ${KUBECTL} exec -n ${NAMESPACE} ${CLUSTER}-toolset-0 -- bin/pulsar-client produce -m "hello pulsar function!" pulsar-ci/test/test_input
# ci::wait_message_processed
ci::wait_function_running
echo "Sending input message"
${KUBECTL} exec -n ${NAMESPACE} ${CLUSTER}-toolset-0 -- bin/pulsar-client produce -m 'hello pulsar function!' pulsar-ci/test/test_input
echo "Waiting for message to be processed"
ci::wait_message_processed
echo "Consuming output message"
${KUBECTL} exec -n ${NAMESPACE} ${CLUSTER}-toolset-0 -- bin/pulsar-client consume -s test pulsar-ci/test/test_output
}
function ci::test_pulsar_manager() {
echo "Testing pulsar manager"
until ${KUBECTL} get jobs -n ${NAMESPACE} ${CLUSTER}-pulsar-manager-init -o json | jq -r '.status.conditions[] | select (.type | test("Complete")).status' | grep True; do sleep 3; done
${KUBECTL} describe job -n ${NAMESPACE} ${CLUSTER}-pulsar-manager-init
${KUBECTL} logs -n ${NAMESPACE} job.batch/${CLUSTER}-pulsar-manager-init
${KUBECTL} exec -n ${NAMESPACE} ${CLUSTER}-pulsar-manager-0 -- cat /pulsar-manager/pulsar-manager.log
echo "Checking Podname"
podname=$(${KUBECTL} get pods -n ${NAMESPACE} -l component=pulsar-manager --no-headers -o custom-columns=":metadata.name")
echo "Getting pulsar manager UI password"
PASSWORD=$(${KUBECTL} get secret -n ${NAMESPACE} -l component=pulsar-manager -o=jsonpath="{.items[0].data.UI_PASSWORD}" | base64 --decode)
echo "Getting CSRF_TOKEN"
CSRF_TOKEN=$(${KUBECTL} exec -n ${NAMESPACE} ${podname} -- curl http://127.0.0.1:7750/pulsar-manager/csrf-token)
echo "Performing login"
${KUBECTL} exec -n ${NAMESPACE} ${podname} -- curl -X POST http://127.0.0.1:9527/pulsar-manager/login \
-H 'Accept: application/json, text/plain, */*' \
-H 'Content-Type: application/json' \
-H "X-XSRF-TOKEN: $CSRF_TOKEN" \
-H "Cookie: XSRF-TOKEN=$CSRF_TOKEN" \
-sS -D headers.txt \
-d '{"username": "pulsar", "password": "'${PASSWORD}'"}'
LOGIN_TOKEN=$(${KUBECTL} exec -n ${NAMESPACE} ${podname} -- grep "token:" headers.txt | sed 's/^.*: //')
LOGIN_JSESSIONID=$(${KUBECTL} exec -n ${NAMESPACE} ${podname} -- grep -o "JSESSIONID=[a-zA-Z0-9_]*" headers.txt | sed 's/^.*=//')
echo "Checking environment"
envs=$(${KUBECTL} exec -n ${NAMESPACE} ${podname} -- curl -X GET http://127.0.0.1:9527/pulsar-manager/environments \
-H 'Content-Type: application/json' \
-H "token: $LOGIN_TOKEN" \
-H "X-XSRF-TOKEN: $CSRF_TOKEN" \
-H "username: pulsar" \
-H "Cookie: XSRF-TOKEN=$CSRF_TOKEN; JSESSIONID=$LOGIN_JSESSIONID;")
echo "$envs"
number_of_envs=$(echo $envs | jq '.total')
if [ "$number_of_envs" -ne 1 ]; then
echo "Error: Did not find expected environment"
exit 1
fi
# Force manager to query broker for tenant info. This will require use of the manager's JWT, if JWT authentication is enabled.
echo "Checking tenants"
pulsar_env=$(echo $envs | jq -r '.data[0].name')
tenants=$(${KUBECTL} exec -n ${NAMESPACE} ${podname} -- curl -X GET http://127.0.0.1:9527/pulsar-manager/admin/v2/tenants \
-H 'Content-Type: application/json' \
-H "token: $LOGIN_TOKEN" \
-H "X-XSRF-TOKEN: $CSRF_TOKEN" \
-H "username: pulsar" \
-H "tenant: pulsar" \
-H "environment: ${pulsar_env}" \
-H "Cookie: XSRF-TOKEN=$CSRF_TOKEN; JSESSIONID=$LOGIN_JSESSIONID;")
echo "$tenants"
number_of_tenants=$(echo $tenants | jq '.total')
if [ "$number_of_tenants" -lt 1 ]; then
echo "Error: Found no tenants!"
exit 1
fi
}
function ci::check_loadbalancers() {
(
set +e
${KUBECTL} get services -n ${NAMESPACE} | grep LoadBalancer
if [ $? -eq 0 ]; then
echo "Error: Found service with type LoadBalancer. This is not allowed for security reasons."
exit 1
fi
exit 0
)
}
function ci::validate_kustomize_yaml() {
# if kustomize is not installed, install kustomize to a temp directory
if ! command -v kustomize &> /dev/null; then
KUSTOMIZE_VERSION=5.6.0
KUSTOMIZE_DIR=$(mktemp -d)
echo "Installing kustomize ${KUSTOMIZE_VERSION} to ${KUSTOMIZE_DIR}"
curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash -s ${KUSTOMIZE_VERSION} ${KUSTOMIZE_DIR}
export PATH=${KUSTOMIZE_DIR}:$PATH
fi
# prevent regression of https://github.com/apache/pulsar-helm-chart/issues/569
local kustomize_yaml_dir=$(mktemp -d)
cp ${PULSAR_HOME}/.ci/kustomization.yaml ${kustomize_yaml_dir}
PULSAR_HOME=${PULSAR_HOME} yq -i '.helmGlobals.chartHome = env(PULSAR_HOME) + "/charts"' ${kustomize_yaml_dir}/kustomization.yaml
failures=0
# validate zookeeper init
echo "Validating kustomize yaml output with zookeeper init"
_ci::validate_kustomize_yaml ${kustomize_yaml_dir} || ((failures++))
# validate oxia init
yq -i '.helmCharts[0].valuesInline.components += {"zookeeper": false, "oxia": true}' ${kustomize_yaml_dir}/kustomization.yaml
echo "Validating kustomize yaml output with oxia init"
_ci::validate_kustomize_yaml ${kustomize_yaml_dir} || ((failures++))
if [ $failures -gt 0 ]; then
exit 1
fi
}
function _ci::validate_kustomize_yaml() {
local kustomize_yaml_dir=$1
kustomize build --enable-helm --helm-kube-version 1.23.0 --load-restrictor=LoadRestrictionsNone ${kustomize_yaml_dir} | yq 'select(.spec.template.spec.containers[0].args != null) | .spec.template.spec.containers[0].args' | \
awk '{
if (prev_line ~ /\\$/ && $0 ~ /^$/) {
print "Found issue: backslash at end of line followed by empty line. Must use pipe character for multiline strings to support kustomize due to kubernetes-sigs/kustomize#4201.";
print "Line: " prev_line;
has_issue = 1;
}
prev_line = $0;
}
END {
if (!has_issue) {
print "No issues found: no backslash followed by empty line";
exit 0;
}
exit 1;
}'
}
# Create all resources needed for openid authentication
function ci::create_openid_resources() {
echo "Creating openid resources"
cp ${PULSAR_HOME}/.ci/auth/keycloak/0-realm-pulsar-partial-export.json /tmp/realm-pulsar.json
for component in broker proxy admin manager; do
echo "Creating openid resources for ${component}"
local client_id=pulsar-${component}
# GitHub Actions hangs when reading from /dev/urandom, so use python to generate a random string
local client_secret=$(python -c "import secrets; import string; length = 32; random_string = ''.join(secrets.choice(string.ascii_letters + string.digits) for _ in range(length)); print(random_string);")
if [[ "${component}" == "admin" ]]; then
local sub_claim_value="admin"
else
local sub_claim_value="${component}-admin"
fi
# Create the client credentials file
jq -n --arg CLIENT_ID $client_id --arg CLIENT_SECRET "$client_secret" -f ${PULSAR_HOME}/.ci/auth/oauth2/credentials_file.json > /tmp/${component}-credentials_file.json
# Create the secret for the client credentials
local secret_name="pulsar-${component}-credentials"
${KUBECTL} create secret generic ${secret_name} --from-file=credentials_file.json=/tmp/${component}-credentials_file.json -n ${NAMESPACE}
# Create the keycloak client file
jq -n --arg CLIENT_ID $client_id --arg CLIENT_SECRET "$client_secret" --arg SUB_CLAIM_VALUE "$sub_claim_value" -f ${PULSAR_HOME}/.ci/auth/keycloak/1-client-template.json > /tmp/${component}-keycloak-client.json
# Merge the keycloak client file with the realm
jq '.clients += [input]' /tmp/realm-pulsar.json /tmp/${component}-keycloak-client.json > /tmp/realm-pulsar.json.tmp
mv /tmp/realm-pulsar.json.tmp /tmp/realm-pulsar.json
done
echo "Create keycloak realm configuration"
${KUBECTL} create secret generic keycloak-ci-realm-config --from-file=realm-pulsar.json=/tmp/realm-pulsar.json -n ${NAMESPACE}
echo "Installing keycloak helm chart"
${HELM} install keycloak-ci oci://registry-1.docker.io/bitnamicharts/keycloak --version 24.6.4 --values ${PULSAR_HOME}/.ci/auth/keycloak/values.yaml -n ${NAMESPACE}
echo "Wait until keycloak is running"
WC=$(${KUBECTL} get pods -n ${NAMESPACE} --field-selector=status.phase=Running | grep keycloak-ci-0 | wc -l)
counter=1
while [[ ${WC} -lt 1 ]]; do
((counter++))
echo ${WC};
sleep 15
${KUBECTL} get pods,jobs -n ${NAMESPACE}
${KUBECTL} get events --sort-by=.lastTimestamp -A | tail -n 30 || true
if [[ $((counter % 20)) -eq 0 ]]; then
ci::print_pod_logs
if [[ $counter -gt 100 ]]; then
echo >&2 "Timeout waiting..."
exit 1
fi
fi
WC=$(${KUBECTL} get pods -n ${NAMESPACE} --field-selector=status.phase=Running | grep keycloak-ci-0 | wc -l)
done
echo "Wait until keycloak is ready"
${KUBECTL} wait --for=condition=Ready pod/keycloak-ci-0 -n ${NAMESPACE} --timeout 180s
echo "Check keycloak realm pulsar issuer URL"
${KUBECTL} exec -n ${NAMESPACE} keycloak-ci-0 -c keycloak -- bash -c 'curl -sSL http://keycloak-ci-headless:8080/realms/pulsar'
}
# lists all available functions in this tool
function ci::list_functions() {
declare -F | awk '{print $NF}' | sort | grep -E '^ci::' | sed 's/^ci:://'
}
# Only run this section if the script is being executed directly (not sourced)
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
if [ -z "$1" ]; then
echo "usage: $0 [function_name]"
echo "Available functions:"
ci::list_functions
exit 1
fi
ci_function_name="ci::$1"
shift
if [[ "$(LC_ALL=C type -t "${ci_function_name}")" == "function" ]]; then
eval "$ci_function_name" "$@"
exit $?
else
echo "Invalid ci function"
echo "Available functions:"
ci::list_functions
exit 1
fi
fi
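The dispatcher above maps a CLI argument onto a `ci::`-prefixed function after confirming with `type -t` that the name really is a function. A condensed sketch of the pattern (illustrative function names; note that invoking `"$fn" "$@"` directly, as here, sidesteps the re-quoting pitfalls of `eval`):

```shell
#!/usr/bin/env bash
ci::greet() { echo "hello $1"; }

dispatch() {
  local fn="ci::$1"
  shift
  if [[ "$(LC_ALL=C type -t "$fn")" == "function" ]]; then
    "$fn" "$@"                      # invoke directly with the remaining args
  else
    echo "Invalid ci function" >&2
    return 1
  fi
}

dispatch greet world                # → hello world
```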

.ci/kustomization.yaml (32 changed lines) Normal file

@@ -0,0 +1,32 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmGlobals:
chartHome: ../charts
helmCharts:
- name: pulsar
releaseName: pulsar
valuesInline:
victoria-metrics-k8s-stack:
enabled: false
components:
pulsar_manager: true
zookeeper: true


@@ -17,15 +17,35 @@
# under the License.
#
kube-prometheus-stack:
victoria-metrics-k8s-stack:
enabled: false
prometheusOperator:
victoria-metrics-operator:
enabled: false
grafana:
vmsingle:
enabled: false
vmagent:
enabled: false
vmalert:
enabled: false
alertmanager:
enabled: false
prometheus:
grafana:
enabled: false
prometheus-node-exporter:
enabled: false
kube-state-metrics:
enabled: false
kubelet:
enabled: false
kubeApiServer:
enabled: false
kubeControllerManager:
enabled: false
coreDns:
enabled: false
kubeEtcd:
enabled: false
kubeScheduler:
enabled: false
# disabled AntiAffinity
@@ -36,6 +56,8 @@ affinity:
components:
autorecovery: false
pulsar_manager: false
# enable functions by default in CI
functions: true
zookeeper:
replicaCount: 1
@@ -53,6 +75,12 @@ bookkeeper:
diskUsageWarnThreshold: "0.999"
PULSAR_PREFIX_diskUsageThreshold: "0.999"
PULSAR_PREFIX_diskUsageWarnThreshold: "0.999"
# minimal memory use for bookkeeper
# https://bookkeeper.apache.org/docs/reference/config#db-ledger-storage-settings
dbStorage_writeCacheMaxSizeMb: "32"
dbStorage_readAheadCacheMaxSizeMb: "32"
dbStorage_rocksDB_writeBufferSizeMB: "8"
dbStorage_rocksDB_blockCacheSize: "8388608"
broker:
replicaCount: 1
@ -84,3 +112,11 @@ proxy:
toolset:
useProxy: false
oxia:
coordinator:
podMonitor:
enabled: false
server:
podMonitor:
enabled: false


@ -39,15 +39,15 @@ inputs:
version:
description: "The chart-testing version to install"
required: false
default: v3.10.1
default: v3.12.0
yamllint_version:
description: "The yamllint version to install"
required: false
default: '1.33.0'
default: '1.35.1'
yamale_version:
description: "The yamale version to install"
required: false
default: '4.0.4'
default: '6.0.0'
runs:
using: composite
steps:


@ -35,9 +35,20 @@ set -o errexit
set -o nounset
set -o pipefail
DEFAULT_CHART_TESTING_VERSION=v3.10.1
DEFAULT_YAMLLINT_VERSION=1.33.0
DEFAULT_YAMALE_VERSION=4.0.4
DEFAULT_CHART_TESTING_VERSION=v3.12.0
DEFAULT_YAMLLINT_VERSION=1.35.1
DEFAULT_YAMALE_VERSION=6.0.0
ARCH=$(uname -m)
case $ARCH in
x86) ARCH="386";;
x86_64) ARCH="amd64";;
i686) ARCH="386";;
i386) ARCH="386";;
arm64) ARCH="arm64";;
aarch64) ARCH="arm64";;
esac
OS=$(uname|tr '[:upper:]' '[:lower:]')
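The detected `OS` and `ARCH` values feed directly into the chart-testing release URL. A standalone sketch of how the URL is composed (fixed illustrative values instead of live `uname` output):

```shell
# Illustrative values; the real script derives OS/ARCH from uname
version=v3.12.0
OS=linux
ARCH=amd64
# ${version#v} strips the leading "v" to match the release asset naming
url="https://github.com/helm/chart-testing/releases/download/${version}/chart-testing_${version#v}_${OS}_${ARCH}.tar.gz"
echo "$url"
```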
show_help() {
cat << EOF
@ -109,31 +120,35 @@ install_chart_testing() {
exit 1
fi
local arch
arch=$(uname -m)
local cache_dir="$RUNNER_TOOL_CACHE/ct/$version/$arch"
local cache_dir="$RUNNER_TOOL_CACHE/ct/$version/${ARCH}"
local venv_dir="$cache_dir/venv"
if [[ ! -d "$cache_dir" ]]; then
mkdir -p "$cache_dir"
echo "Installing chart-testing..."
curl -sSLo ct.tar.gz "https://github.com/helm/chart-testing/releases/download/$version/chart-testing_${version#v}_linux_amd64.tar.gz"
curl -sSLo ct.tar.gz "https://github.com/helm/chart-testing/releases/download/$version/chart-testing_${version#v}_${OS}_${ARCH}.tar.gz"
tar -xzf ct.tar.gz -C "$cache_dir"
rm -f ct.tar.gz
# if uv (https://docs.astral.sh/uv/) is not installed, install it
if ! command -v uv &> /dev/null; then
echo 'Installing uv...'
curl -LsSf https://astral.sh/uv/install.sh | sh
fi
echo 'Creating virtual Python environment...'
python3 -m venv "$venv_dir"
uv venv "$venv_dir"
echo 'Activating virtual environment...'
# shellcheck disable=SC1090
source "$venv_dir/bin/activate"
echo 'Installing yamllint...'
pip3 install "yamllint==${yamllint_version}"
uv pip install "yamllint==${yamllint_version}"
echo 'Installing Yamale...'
pip3 install "yamale==${yamale_version}"
uv pip install "yamale==${yamale_version}"
fi
# https://github.com/helm/chart-testing-action/issues/62


@ -53,8 +53,8 @@ runs:
# tune filesystem mount options, https://www.kernel.org/doc/Documentation/filesystems/ext4.txt
# commit=999999, effectively disables automatic syncing to disk (default is every 5 seconds)
# nobarrier/barrier=0, loosen data consistency on system crash (no negative impact on ephemeral CI nodes)
sudo mount -o remount,nodiscard,commit=999999,barrier=0 /
sudo mount -o remount,nodiscard,commit=999999,barrier=0 /mnt
sudo mount -o remount,nodiscard,commit=999999,barrier=0 / || true
sudo mount -o remount,nodiscard,commit=999999,barrier=0 /mnt || true
# disable discard/trim at device level since remount with nodiscard doesn't seem to be effective
# https://www.spinics.net/lists/linux-ide/msg52562.html
for i in /sys/block/sd*/queue/discard_max_bytes; do
@ -77,12 +77,6 @@ runs:
# stop Azure Linux agent to save RAM
sudo systemctl stop walinuxagent.service || true
# enable docker experimental mode which is
# required for using "docker build --squash" / "-Ddocker.squash=true"
daemon_json="$(sudo cat /etc/docker/daemon.json | jq '.experimental = true')"
echo "$daemon_json" | sudo tee /etc/docker/daemon.json
# restart docker daemon
sudo systemctl restart docker
echo '::endgroup::'
# show memory


@ -32,9 +32,10 @@ concurrency:
cancel-in-progress: true
jobs:
preconditions:
name: Preconditions
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
if: (github.event_name != 'schedule') || (github.repository == 'apache/pulsar-helm-chart')
outputs:
docs_only: ${{ steps.check_changes.outputs.docs_only }}
@ -62,12 +63,12 @@ jobs:
license-check:
needs: preconditions
name: License Check
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
timeout-minutes: 10
if: ${{ needs.preconditions.outputs.docs_only != 'true' }}
steps:
- name: Set up Go 1.12
uses: actions/setup-go@v4
uses: actions/setup-go@v5
with:
go-version: 1.12
id: go
@ -83,7 +84,7 @@ jobs:
ct-lint:
needs: ['preconditions', 'license-check']
name: chart-testing lint
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
timeout-minutes: 45
if: ${{ needs.preconditions.outputs.docs_only != 'true' }}
steps:
@ -105,15 +106,19 @@ jobs:
- name: Set up Helm
if: ${{ steps.check_changes.outputs.docs_only != 'true' }}
uses: azure/setup-helm@v3
uses: azure/setup-helm@v4
with:
version: v3.12.3
version: v3.16.4
- name: Set up Python
if: ${{ steps.check_changes.outputs.docs_only != 'true' }}
uses: actions/setup-python@v4
uses: actions/setup-python@v5
with:
python-version: '3.9'
python-version: '3.12'
- name: Install uv, a fast modern package manager for Python
if: ${{ steps.check_changes.outputs.docs_only != 'true' }}
run: curl -LsSf https://astral.sh/uv/install.sh | sh
- name: Set up chart-testing
if: ${{ steps.check_changes.outputs.docs_only != 'true' }}
@ -127,6 +132,45 @@ jobs:
--validate-maintainers=false \
--target-branch ${{ github.event.repository.default_branch }}
- name: Run kubeconform check for helm template with every major k8s version 1.25.0-1.32.0
if: ${{ steps.check_changes.outputs.docs_only != 'true' }}
run: |
PULSAR_CHART_HOME=$(pwd)
source ${PULSAR_CHART_HOME}/hack/common.sh
source ${PULSAR_CHART_HOME}/.ci/helm.sh
hack::ensure_kubectl
hack::ensure_helm
hack::ensure_kubeconform
ci::helm_repo_add
helm dependency build charts/pulsar
validate_helm_template_with_k8s_version() {
local kube_version=$1
shift
echo -n "Validating helm template with kubeconform for k8s version $kube_version"
if [ $# -gt 0 ]; then
echo " Extra args: $*"
else
echo ""
fi
helm template charts/pulsar --set victoria-metrics-k8s-stack.enabled=false --set components.pulsar_manager=true --kube-version $kube_version "$@" | \
kubeconform -schema-location default -schema-location 'https://raw.githubusercontent.com/datreeio/CRDs-catalog/main/{{.Group}}/{{.ResourceKind}}_{{.ResourceAPIVersion}}.json' -strict -kubernetes-version $kube_version -summary
}
set -o pipefail
for k8s_version_part in {25..32}; do
k8s_version="1.${k8s_version_part}.0"
echo "Validating default values with k8s version $k8s_version"
validate_helm_template_with_k8s_version $k8s_version
for config in .ci/clusters/*.yaml; do
echo "Validating $config with k8s version $k8s_version"
validate_helm_template_with_k8s_version $k8s_version --values .ci/values-common.yaml --values $config
done
done
- name: Validate kustomize yaml for extra new lines in pulsar-init commands
if: ${{ steps.check_changes.outputs.docs_only != 'true' }}
run: |
./.ci/helm.sh validate_kustomize_yaml
- name: Wait for ssh connection when build fails
# ssh access is enabled for builds in own forks
uses: ./.github/actions/ssh-access
@ -137,27 +181,28 @@ jobs:
install-chart-tests:
name: ${{ matrix.testScenario.name }} - k8s ${{ matrix.k8sVersion.version }} - ${{ matrix.testScenario.type || 'install' }}
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
timeout-minutes: ${{ matrix.testScenario.timeout || 45 }}
needs: ['preconditions', 'ct-lint']
if: ${{ needs.preconditions.outputs.docs_only != 'true' }}
strategy:
fail-fast: false
matrix:
# see https://github.com/kubernetes-sigs/kind/releases/tag/v0.20.0 for the list of supported k8s versions for kind 0.20.0
# see https://github.com/kubernetes-sigs/kind/releases/tag/v0.27.0 for the list of supported k8s versions for kind 0.27.0
# docker images are available at https://hub.docker.com/r/kindest/node/tags
k8sVersion:
- version: "1.21.14"
kind_image_tag: v1.21.14@sha256:8a4e9bb3f415d2bb81629ce33ef9c76ba514c14d707f9797a01e3216376ba093
- version: "1.27.3"
kind_image_tag: v1.27.3@sha256:3966ac761ae0136263ffdb6cfd4db23ef8a83cba8a463690e98317add2c9ba72
- version: "1.25.16"
kind_image_tag: v1.25.16@sha256:6110314339b3b44d10da7d27881849a87e092124afab5956f2e10ecdb463b025
- version: "1.32.2"
kind_image_tag: v1.32.2@sha256:f226345927d7e348497136874b6d207e0b32cc52154ad8323129352923a3142f
testScenario:
- name: Upgrade latest released version
values_file: .ci/clusters/values-upgrade.yaml
shortname: upgrade
type: upgrade
- name: Use Pulsar Image
values_file: .ci/clusters/values-pulsar-image.yaml
shortname: pulsar-image
- name: Use previous LTS Pulsar Image
values_file: .ci/clusters/values-pulsar-previous-lts.yaml
shortname: pulsar-previous-lts
- name: JWT Asymmetric Keys
values_file: .ci/clusters/values-jwt-asymmetric.yaml
shortname: jwt-asymmetric
@ -179,29 +224,49 @@ jobs:
- name: ZK & BK TLS Only
values_file: .ci/clusters/values-zkbk-tls.yaml
shortname: zkbk-tls
- name: PSP
values_file: .ci/clusters/values-psp.yaml
shortname: psp
- name: Pulsar Manager
values_file: .ci/clusters/values-pulsar-manager.yaml
shortname: pulsar-manager
- name: Oxia
values_file: .ci/clusters/values-oxia.yaml
shortname: oxia
- name: OpenID
values_file: .ci/clusters/values-openid.yaml
shortname: openid
- name: CA certificates
values_file: .ci/clusters/values-cacerts.yaml
shortname: cacerts
include:
- k8sVersion:
version: "1.21.14"
kind_image_tag: v1.21.14@sha256:8a4e9bb3f415d2bb81629ce33ef9c76ba514c14d707f9797a01e3216376ba093
version: "1.25.16"
kind_image_tag: v1.25.16@sha256:6110314339b3b44d10da7d27881849a87e092124afab5956f2e10ecdb463b025
testScenario:
name: "Upgrade TLS"
values_file: .ci/clusters/values-tls.yaml
shortname: tls
type: upgrade
- k8sVersion:
version: "1.21.14"
kind_image_tag: v1.21.14@sha256:8a4e9bb3f415d2bb81629ce33ef9c76ba514c14d707f9797a01e3216376ba093
version: "1.25.16"
kind_image_tag: v1.25.16@sha256:6110314339b3b44d10da7d27881849a87e092124afab5956f2e10ecdb463b025
testScenario:
name: "Upgrade PSP"
values_file: .ci/clusters/values-psp.yaml
shortname: psp
name: "Upgrade victoria-metrics-k8s-stack for previous LTS"
values_file: .ci/clusters/values-victoria-metrics-grafana.yaml --values .ci/clusters/values-pulsar-previous-lts.yaml
shortname: victoria-metrics-grafana
type: upgrade
upgradeFromVersion: 3.2.0
- k8sVersion:
version: "1.25.16"
kind_image_tag: v1.25.16@sha256:6110314339b3b44d10da7d27881849a87e092124afab5956f2e10ecdb463b025
testScenario:
name: "TLS with helm 3.12.0"
values_file: .ci/clusters/values-tls.yaml
shortname: tls
type: install
helmVersion: 3.12.0
env:
k8sVersion: ${{ matrix.k8sVersion.kind_image_tag }}
KUBECTL_VERSION: ${{ matrix.k8sVersion.version }}
HELM_VERSION: ${{ matrix.helmVersion || '3.14.4' }}
steps:
- name: checkout
uses: actions/checkout@v4
@ -211,38 +276,7 @@ jobs:
- name: Setup debugging tools for ssh access
if: ${{ github.repository != 'apache/pulsar-helm-chart' && github.event_name == 'pull_request' }}
run: |
cat >> $HOME/.bashrc <<'EOF'
function use_kind_kubeconfig() {
export KUBECONFIG=$(ls $HOME/kind/pulsar-ci-*/kubeconfig.yaml)
}
function kubectl() {
# use kind environment's kubeconfig
if [ -z "$KUBECONFIG" ]; then
use_kind_kubeconfig
fi
command kubectl "$@"
}
function k9s() {
# use kind environment's kubeconfig
if [ -z "$KUBECONFIG" ]; then
use_kind_kubeconfig
fi
# install k9s on the fly
if [ ! -x /usr/local/bin/k9s ]; then
echo "Installing k9s..."
curl -L -s https://github.com/derailed/k9s/releases/download/v0.29.1/k9s_Linux_amd64.tar.gz | sudo tar xz -C /usr/local/bin k9s
fi
command k9s "$@"
}
EOF
cat >> $HOME/.bash_profile <<'EOF'
if [ -f ~/.bashrc ]; then
source ~/.bashrc
fi
EOF
run: .ci/configure_ci_runner_for_debugging.sh
- name: Setup ssh access to build runner VM
# ssh access is enabled for builds in own forks
@ -252,15 +286,22 @@ jobs:
with:
limit-access-to-actor: true
- name: Run chart-testing (${{ matrix.testScenario.type || 'install' }})
- name: Run chart-testing (${{ matrix.testScenario.type || 'install' }}) with helm ${{ env.HELM_VERSION }}
run: |
case "${{ matrix.testScenario.shortname }}" in
"jwt-symmetric")
export SYMMETRIC=true
export EXTRA_SUPERUSERS=manager-admin
;;
"jwt-asymmetric")
export EXTRA_SUPERUSERS=manager-admin
;;
"openid")
export AUTHENTICATION_PROVIDER=openid
;;
esac
if [[ "${{ matrix.testScenario.type || 'install' }}" == "upgrade" ]]; then
export UPGRADE_FROM_VERSION=latest
export UPGRADE_FROM_VERSION="${{ matrix.testScenario.upgradeFromVersion || 'latest' }}"
fi
.ci/chart_test.sh ${{ matrix.testScenario.values_file }}
@ -274,7 +315,7 @@ jobs:
ci::collect_k8s_logs
- name: Upload k8s logs on failure
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v4
if: ${{ cancelled() || failure() }}
continue-on-error: true
with:
@ -296,7 +337,7 @@ jobs:
pulsar-helm-chart-ci-checks-completed:
name: "CI checks completed"
if: ${{ always() && ((github.event_name != 'schedule') || (github.repository == 'apache/pulsar-helm-chart')) }}
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
timeout-minutes: 10
needs: [
'preconditions',

.gitignore

@ -17,5 +17,3 @@ charts/**/*.lock
PRIVATEKEY
PUBLICKEY
.vagrant/
pulsarctl-*-*.tar.gz
pulsarctl-*-*/

README.md

@ -27,6 +27,113 @@ Read [Deploying Pulsar on Kubernetes](http://pulsar.apache.org/docs/deploy-kuber
> :warning: This helm chart is updated outside of the regular Pulsar release cycle and might lag behind a bit. It currently supports only basic Kubernetes features and serves as a template and starting point for a Kubernetes deployment; in many cases it will require customization.
## Important Security Advisory for Helm Chart Usage
### Notice of Default Configuration
This Helm chart's default configuration DOES NOT meet production security requirements.
Users MUST review and customize security settings for their specific environment.
IMPORTANT: This Helm chart provides a starting point for Pulsar deployments but requires
significant security customization before use in production environments. We strongly
recommend implementing:
1. Authentication and authorization for all components
2. TLS encryption for all communication channels
3. Proper network isolation and access controls
4. Regular security updates and vulnerability assessments
As an open source project, we welcome contributions to improve security features.
Please consider submitting pull requests to address security gaps or enhance
existing security implementations.
### Pulsar Proxy Security Considerations
The [Pulsar Proxy documentation](https://pulsar.apache.org/docs/3.1.x/administration-proxy/) explicitly states that the Pulsar proxy is not designed for exposure to the public internet. The design assumes that deployments will be protected by network perimeter security measures. It is crucial to understand that relying solely on the default configuration can expose your deployment to significant security vulnerabilities.
### Upgrading
#### To 4.1.0
This version introduces `OpenID` authentication. Setting `auth.authentication.provider` is no longer supported; instead, enable the desired provider with `auth.authentication.<provider>.enabled`.
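A sketch of the new per-provider layout in `values.yaml` (key names inferred from the 4.1.0 release notes; verify against the chart's `values.yaml` before relying on them):

```yaml
auth:
  authentication:
    enabled: true
    jwt:
      enabled: true   # previously expressed as auth.authentication.provider: jwt
```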
#### To 4.0.0
The default service type for the Pulsar proxy has changed from `LoadBalancer` to `ClusterIP` for security reasons. This limits access to within the Kubernetes environment by default.
### External Access Recommendations
If you need to expose the Pulsar Proxy outside the cluster:
1. **USE INTERNAL LOAD BALANCERS ONLY**
- Set type to LoadBalancer only in secured environments with proper network controls
- Add cloud provider-specific annotations for internal load balancers:
- Kubernetes documentation about internal load balancers:
- [Internal load balancer](https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer)
- See cloud provider documentation:
- AWS / EKS: [AWS Load Balancer Controller / Service Annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/)
- Azure / AKS: [Use an internal load balancer with Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/internal-lb)
- GCP / GKE: [LoadBalancer service parameters](https://cloud.google.com/kubernetes-engine/docs/concepts/service-load-balancer-parameters)
- Examples (verify correctness for your environment):
- AWS / EKS: `service.beta.kubernetes.io/aws-load-balancer-internal: "true"`
- Azure / AKS: `service.beta.kubernetes.io/azure-load-balancer-internal: "true"`
- GCP / GKE: `networking.gke.io/load-balancer-type: "Internal"`
2. **IMPLEMENT AUTHENTICATION AND AUTHORIZATION**
- Configure all clients to authenticate properly
- Set up appropriate authorization policies
3. **USE TLS FOR ALL CONNECTIONS**
- Enable TLS for client-to-proxy connections
- Enable TLS for proxy-to-broker connections
- Enable TLS for all internal cluster communications
- Note: TLS alone is NOT sufficient as a security solution. Even with TLS enabled, clusters exposed to untrusted networks remain vulnerable to denial-of-service attacks, authentication bypass attempts, and protocol-level exploits.
4. **NETWORK SECURITY**
- Use private networks (VPCs)
- Configure firewalls, security groups, and IP restrictions
5. **CLIENT IP ADDRESS BASED ACCESS RESTRICTIONS**
- When using a LoadBalancer service type, restrict access to specific IP ranges by configuring `proxy.service.loadBalancerSourceRanges` in your values.yaml:
```yaml
proxy:
  service:
    loadBalancerSourceRanges:
      - 10.0.0.0/8 # Private network range
      - 172.16.0.0/12 # Private network range
      - 192.168.0.0/16 # Private network range
```
- This feature:
- Provides an additional defense layer by filtering traffic at the load balancer level
- Only allows connections from specified CIDR blocks
- Works only with LoadBalancer service type and when your cloud provider supports the `loadBalancerSourceRanges` parameter
- Important: This should be implemented alongside other security measures (internal load balancer, authentication, TLS, network policies) as part of a defense-in-depth strategy, not as a standalone security solution
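Putting the recommendations above together, the internal load balancer annotation from item 1 could be applied through the chart's proxy service settings; a hedged sketch using the AWS annotation (assuming `proxy.service.annotations` is supported by your chart version — verify the annotation for your cloud provider):

```yaml
proxy:
  service:
    type: LoadBalancer
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    loadBalancerSourceRanges:
      - 10.0.0.0/8
```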
### Alternative for External Access
As an alternative method for external access, Pulsar has support for [SNI proxy routing](https://pulsar.apache.org/docs/next/concepts-proxy-sni-routing/). SNI Proxy routing is supported with proxy servers such as Apache Traffic Server, HAProxy and Nginx.
Note: This option isn't currently implemented in the Apache Pulsar Helm chart.
**IMPORTANT**: Pulsar binary protocol cannot be exposed outside of the Kubernetes cluster using Kubernetes Ingress. Kubernetes Ingress works for the Admin REST API and topic lookups, but clients would be connecting to the advertised listener addresses returned by the brokers and it would only work when clients can connect directly to brokers. This is not a supported secure option for exposing Pulsar to untrusted networks.
### General Recommendations
- **Network Perimeter Security:** It is imperative to implement robust network perimeter security to safeguard your deployment. The absence of such security measures can lead to unauthorized access and potential data breaches.
- **Restricted Access:** For environments where security is less critical, such as certain development or testing scenarios, `loadBalancerSourceRanges` may be used to restrict access to specified IP addresses or ranges. This, however, should not be considered a substitute for comprehensive security measures in production environments.
### User Responsibility
The user assumes full responsibility for the security and integrity of their deployment. This includes, but is not limited to, the proper configuration of security features and adherence to best practices for securing network access. The providers of this Helm chart disclaim all warranties, whether express or implied, including any warranties of merchantability, fitness for a particular purpose, and non-infringement of third-party rights.
### No Security Guarantees
The providers of this Helm chart make no guarantees regarding the security of the chart under any circumstances. It is the user's responsibility to ensure that their deployment is secure and complies with all relevant security standards and regulations.
By using this Helm chart, the user acknowledges the risks associated with its default configuration and the necessity for proper security customization. The user further agrees that the providers of the Helm chart shall not be liable for any security breaches or incidents resulting from the use of the chart.
## Features
This Helm Chart includes all the components of Apache Pulsar for a complete experience.
@ -40,7 +147,7 @@ This Helm Chart includes all the components of Apache Pulsar for a complete expe
- [x] Management & monitoring components:
- [x] Pulsar Manager
- [x] Optional PodMonitors for each component (enabled by default)
- [x] [Kube-Prometheus-Stack](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack) (as of 3.0.0)
- [x] [victoria-metrics-k8s-stack](https://github.com/VictoriaMetrics/helm-charts/tree/master/charts/victoria-metrics-k8s-stack) (as of 4.0.0)
It includes support for:
@ -53,9 +160,10 @@ It includes support for:
- [x] Broker
- [x] Toolset
- [x] Bookie
- [x] ZooKeeper
- [x] ZooKeeper (requires the `AdditionalCertificateOutputFormats=true` feature gate to be enabled in the cert-manager deployment when using cert-manager versions below 1.15.0)
- [x] Authentication
- [x] JWT
- [x] OpenID
- [ ] Mutual TLS
- [ ] Kerberos
- [x] Authorization
@ -64,7 +172,7 @@ It includes support for:
- [x] Non-persistence storage
- [x] Persistence Volume
- [x] Local Persistent Volumes
- [ ] Tiered Storage
- [x] Tiered Storage
- [x] Functions
- [x] Kubernetes Runtime
- [x] Process Runtime
@ -76,9 +184,9 @@ It includes support for:
In order to use this chart to deploy Apache Pulsar on Kubernetes, the following are required.
1. kubectl 1.21 or higher, compatible with your cluster ([+/- 1 minor release from your cluster](https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin))
2. Helm v3 (3.0.2 or higher)
3. A Kubernetes cluster, version 1.21 or higher.
1. kubectl 1.25 or higher, compatible with your cluster ([+/- 1 minor release from your cluster](https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin))
2. Helm v3 (3.12.0 or higher)
3. A Kubernetes cluster, version 1.25 or higher.
## Environment setup
@ -93,26 +201,62 @@ Before proceeding to deploying Pulsar, you need to prepare your environment.
To add this chart to your local Helm repository:
```bash
helm repo add apache https://pulsar.apache.org/charts
helm repo add apachepulsar https://pulsar.apache.org/charts
helm repo update
```
## Kubernetes cluster preparation
You need a Kubernetes cluster whose version is 1.21 or higher in order to use this chart, due to the usage of certain Kubernetes features.
You need a Kubernetes cluster whose version is 1.25 or higher in order to use this chart, due to the usage of certain Kubernetes features.
We provide some instructions to guide you through the preparation: http://pulsar.apache.org/docs/helm-prepare/
## Deploy Pulsar to Kubernetes
1. Configure your values file. The best way to know which values are available is to read the [values.yaml](./charts/pulsar/values.yaml).
A best practice is to start with an empty values file and only set the keys that differ from the default configuration.
Anti-affinity rules for the ZooKeeper and Bookie components require at least one node per replica. For Kubernetes clusters with fewer than 3 nodes,
you must disable this feature by adding this to your initial values.yaml file:
```yaml
affinity:
  anti_affinity: false
```
2. Install the chart:
```bash
helm install <release-name> -n <namespace> -f your-values.yaml apache/pulsar
helm install -n <namespace> --create-namespace <release-name> -f your-values.yaml apachepulsar/pulsar
```
3. Access the Pulsar cluster
3. Observe the deployment progress
Watching events to view the progress of the deployment:
```shell
kubectl get -n <namespace> events -o wide --watch
```
Watching state of deployed Kubernetes objects, updated every 2 seconds:
```shell
watch kubectl get -n <namespace> all
```
Waiting until Pulsar Proxy is available:
```shell
kubectl wait --timeout=600s --for=condition=ready pod -n <namespace> -l component=proxy
```
Watching state with k9s (https://k9scli.io/topics/install/):
```shell
k9s -n <namespace>
```
4. Access the Pulsar cluster
The default values will create a `ClusterIP` service for the proxy that you can use to interact with the cluster. To find the IP address of the proxy, use:
@ -139,35 +283,102 @@ You can also check out the example values files for different deployments.
- [Deploy a Pulsar cluster with JWT authentication using symmetric key](examples/values-jwt-symmetric.yaml)
- [Deploy a Pulsar cluster with JWT authentication using asymmetric key](examples/values-jwt-asymmetric.yaml)
## Disabling Kube-Prometheus-Stack CRDs
## Disabling victoria-metrics-k8s-stack components
In order to disable the kube-prometheus-stack fully, it is necessary to add the following to your `values.yaml`:
In order to disable the victoria-metrics-k8s-stack, you can add the following to your `values.yaml`.
Victoria Metrics components can also be disabled and enabled individually if you only need specific monitoring features.
```yaml
# disable VictoriaMetrics and related components
victoria-metrics-k8s-stack:
  enabled: false
  victoria-metrics-operator:
    enabled: false
  vmsingle:
    enabled: false
  vmagent:
    enabled: false
  kube-state-metrics:
    enabled: false
  prometheus-node-exporter:
    enabled: false
  grafana:
    enabled: false
  alertmanager:
    enabled: false
```

Additionally, you'll need to set each component's `podMonitor` property to `false`.

```yaml
# disable pod monitors
autorecovery:
  podMonitor:
    enabled: false
bookkeeper:
  podMonitor:
    enabled: false
oxia:
  server:
    podMonitor:
      enabled: false
  coordinator:
    podMonitor:
      enabled: false
broker:
  podMonitor:
    enabled: false
proxy:
  podMonitor:
    enabled: false
zookeeper:
  podMonitor:
    enabled: false
```
Otherwise, the helm chart installation will attempt to install the CRDs for the kube-prometheus-stack. Additionally,
you'll need to disable each of the component's `PodMonitors`. This is shown in some [examples](./examples) and is
verified in some [tests](./.ci/clusters).
This is shown in [examples/values-disable-monitoring.yaml](examples/values-disable-monitoring.yaml).
## Pulsar Manager
The Pulsar Manager can be deployed alongside the Pulsar cluster. Depending on the given settings, it uses an existing Secret within the given namespace or creates a new one with random passwords for both the UI and the internal database.
To forward the UI use (assumes you did not change the namespace):
```
kubectl port-forward $(kubectl get pods -l component=pulsar-manager -o jsonpath='{.items[0].metadata.name}') 9527:9527
```
Then open your browser at http://localhost:9527.
The default user is `pulsar`, and you can find out the password with this command:
```
kubectl get secret -l component=pulsar-manager -o=jsonpath="{.items[0].data.UI_PASSWORD}" | base64 --decode
```
## Grafana Dashboards
The Apache Pulsar Helm Chart uses the `kube-prometheus-stack` Helm Chart to deploy Grafana. Dashboards are loaded via a Kubernetes `ConfigMap`. Please see their documentation for loading those dashboards.
The Apache Pulsar Helm Chart uses the `victoria-metrics-k8s-stack` Helm Chart to deploy Grafana.
The `apache/pulsar` GitHub repo contains some dashboards [here](https://github.com/apache/pulsar/tree/master/grafana).
There are several ways to configure Grafana dashboards. The default [`values.yaml`](charts/pulsar/values.yaml) comes with examples of Pulsar dashboards which get downloaded from the Apache-2.0 licensed [lhotari/pulsar-grafana-dashboards OSS project](https://github.com/lhotari/pulsar-grafana-dashboards) by URL.
### Third Party Dashboards
Dashboards can be configured in [`values.yaml`](charts/pulsar/values.yaml) or by adding `ConfigMap` items with the label `grafana_dashboard: "1"`.
In [`values.yaml`](charts/pulsar/values.yaml), it's possible to include dashboards by URL or by grafana.com dashboard id (`gnetId` and `revision`).
Please see the [Grafana Helm chart documentation for importing dashboards](https://github.com/grafana/helm-charts/blob/main/charts/grafana/README.md#import-dashboards).
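As an illustration, a grafana.com dashboard could be referenced by id through the Grafana chart's `dashboards` convention; a sketch (the dashboard id, revision, and datasource name below are placeholders, and the nesting under `victoria-metrics-k8s-stack` is an assumption to verify against the chart's `values.yaml`):

```yaml
victoria-metrics-k8s-stack:
  grafana:
    dashboardProviders:
      dashboardproviders.yaml:
        apiVersion: 1
        providers:
          - name: default
            orgId: 1
            type: file
            options:
              path: /var/lib/grafana/dashboards/default
    dashboards:
      default:
        node-exporter-full:
          gnetId: 1860       # placeholder grafana.com dashboard id
          revision: 37       # placeholder revision
          datasource: VictoriaMetrics
```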
You can connect to Grafana by forwarding port 3000
```
kubectl port-forward $(kubectl get pods -l app.kubernetes.io/name=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000
```
Then open your browser at http://localhost:3000. The default user is `admin`.
You can find out the password with this command:
```
kubectl get secret -l app.kubernetes.io/name=grafana -o=jsonpath="{.items[0].data.admin-password}" | base64 --decode
```
### Pulsar Grafana Dashboards
* The `apache/pulsar` GitHub repo contains some Grafana dashboards [here](https://github.com/apache/pulsar/tree/master/grafana).
* StreamNative provides Grafana Dashboards for Apache Pulsar in this [GitHub repository](https://github.com/streamnative/apache-pulsar-grafana-dashboard).
* DataStax provides Grafana Dashboards for Apache Pulsar in this [GitHub repository](https://github.com/datastax/pulsar-helm-chart/tree/master/helm-chart-sources/pulsar/grafana-dashboards).
@ -179,21 +390,58 @@ Once your Pulsar Chart is installed, configuration changes and chart
updates should be done using `helm upgrade`.
```bash
helm repo add apache https://pulsar.apache.org/charts
helm repo add apachepulsar https://pulsar.apache.org/charts
helm repo update
helm get values <pulsar-release-name> > pulsar.yaml
helm upgrade -f pulsar.yaml \
<pulsar-release-name> apache/pulsar
# If you are using the provided victoria-metrics-k8s-stack for monitoring, this installs or upgrades the required CRDs
./scripts/victoria-metrics-k8s-stack/upgrade_vm_operator_crds.sh
# get the existing values.yaml used for the most recent deployment
helm get values -n <namespace> <pulsar-release-name> > values.yaml
# upgrade the deployment
helm upgrade -n <namespace> -f values.yaml <pulsar-release-name> apachepulsar/pulsar
```
For more detailed information, see our [Upgrading](http://pulsar.apache.org/docs/helm-upgrade/) guide.
## Upgrading to Helm chart version 4.2.0 (not released yet)
### TLS configuration for ZooKeeper has changed
The TLS configuration for ZooKeeper has been changed to fix certificate and private key expiration issues.
This change impacts configurations that have `tls.enabled` and `tls.zookeeper.enabled` set in `values.yaml`.
The revised solution requires the `AdditionalCertificateOutputFormats=true` feature gate to be enabled in the `cert-manager` deployment when using cert-manager versions below 1.15.0.
If you installed `cert-manager` using `./scripts/cert-manager/install-cert-manager.sh`, you can re-run the updated script to set the feature gate. The script currently installs or upgrades cert-manager LTS version 1.12.17, where the feature gate must be explicitly enabled.
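If you manage cert-manager yourself instead of using the script, the feature gate can be set through the cert-manager Helm chart values; a sketch (value names taken from the cert-manager chart — verify against the chart version you deploy):

```yaml
# cert-manager Helm chart values
featureGates: "AdditionalCertificateOutputFormats=true"
webhook:
  extraArgs:
    - --feature-gates=AdditionalCertificateOutputFormats=true
```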
## Upgrading from Helm Chart versions before 4.0.0 to 4.0.0 version and above
### Pulsar Proxy service's default type has been changed from `LoadBalancer` to `ClusterIP`
Please check the section "External Access Recommendations" for guidance and also check the security advisory section.
You will need to configure keys under `proxy.service` in your `values.yaml` to preserve existing functionality since the default has been changed.
### kube-prometheus-stack replaced with victoria-metrics-k8s-stack
The `kube-prometheus-stack` was replaced with `victoria-metrics-k8s-stack` in Pulsar Helm chart version 4.0.0. The trigger for the change was incompatibilities discovered in testing with the most recent `kube-prometheus-stack` and Prometheus 3.2.1, which failed to scrape Pulsar metrics in certain cases without providing proper error messages or debug information at debug-level logging.
[Victoria Metrics](https://docs.victoriametrics.com/) is Apache 2.0 Licensed OSS and it's a fully compatible drop-in replacement for Prometheus which is fast and efficient.
Before upgrading to Pulsar Helm Chart version 4.0.0, it is recommended to disable kube-prometheus-stack in the original Helm chart version that
is used:
```shell
# get the existing values.yaml used for the most recent deployment
helm get values -n <namespace> <pulsar-release-name> > values.yaml
# disable kube-prometheus-stack in the currently used version before upgrading to Pulsar Helm chart 4.0.0
helm upgrade -n <namespace> -f values.yaml --version <your-current-chart-version> --set kube-prometheus-stack.enabled=false <pulsar-release-name> apachepulsar/pulsar
```
After this, you can proceed with `helm upgrade`.
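The upgrade itself then follows the usual pattern; chart version 4.0.0 is shown for illustration, and the placeholders must be substituted for your deployment:

```shell
helm repo update
helm upgrade -n <namespace> -f values.yaml --version 4.0.0 <pulsar-release-name> apachepulsar/pulsar
```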
## Upgrading to Apache Pulsar 2.10.0 and above (or Helm Chart version 3.0.0 and above)
The 2.10.0+ Apache Pulsar docker image is a non-root container, by default. That complicates an upgrade to 2.10.0
because the existing files are owned by the root user but are not writable by the root group. In order to leverage this
new security feature, the Bookkeeper and Zookeeper StatefulSet [securityContexts](https://kubernetes.io/docs/tasks/configure-pod-container/security-context)
are configurable in the [`values.yaml`](charts/pulsar/values.yaml). They default to:
```yaml
securityContext:
@ -241,6 +489,7 @@ Caused by: org.rocksdb.RocksDBException: while open a file for lock: /pulsar/dat
### Recovering from `helm upgrade` error "unable to build kubernetes objects from current release manifest"
Example of the error message:
```bash
Error: UPGRADE FAILED: unable to build kubernetes objects from current release manifest:
[resource mapping not found for name: "pulsar-bookie" namespace: "pulsar" from "":
@ -274,10 +523,10 @@ This workaround addresses the issue by updating in-place Helm release metadata t
To uninstall the Pulsar Chart, run the following command:
```bash
helm uninstall <pulsar-release-name>
```
For the purposes of continuity, these charts have some Kubernetes objects that are not removed when performing `helm uninstall`.
These affect re-deployment, so you must remove them deliberately:
* PVCs for stateful data, which you must *consciously* remove
@ -292,6 +541,36 @@ We've done our best to make these charts as seamless as possible,
occasionally troubles do surface outside of our control. We've collected
tips and tricks for troubleshooting common issues. Please examine these first before raising an [issue](https://github.com/apache/pulsar-helm-chart/issues/new/choose), and feel free to add to them by raising a [Pull Request](https://github.com/apache/pulsar-helm-chart/compare)!
### VictoriaMetrics Troubleshooting
In the example commands, the Kubernetes namespace is `pulsar`; replace it with your deployment's namespace.
#### VictoriaMetrics Web UI
Connect to the `vmsingle` pod to access the web UI:
```shell
kubectl port-forward -n pulsar $(kubectl get pods -n pulsar -l app.kubernetes.io/name=vmsingle -o jsonpath='{.items[0].metadata.name}') 8429:8429
```
Now you can access the UI at http://localhost:8429 and http://localhost:8429/vmui (a UI similar to the one in Prometheus).
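With the port-forward active, the `vmsingle` pod also serves a Prometheus-compatible HTTP query API, so you can sanity-check scraped metrics from the command line. The `up` metric is a standard scrape-health series; substitute any Pulsar metric you expect to be present.

```shell
# query the Prometheus-compatible API exposed by vmsingle (port-forward must be active)
curl -s 'http://localhost:8429/api/v1/query?query=up'
```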
#### VictoriaMetrics Scraping debugging UI - Active Targets
Connect to the `vmagent` pod to debug scrape targets:
```shell
kubectl port-forward -n pulsar $(kubectl get pods -n pulsar -l app.kubernetes.io/name=vmagent -o jsonpath='{.items[0].metadata.name}') 8429:8429
```
Now you can access the UI at http://localhost:8429
Active Targets UI
- http://localhost:8429/targets
Scraping Configuration
- http://localhost:8429/config
## Release Process
See [RELEASE.md](RELEASE.md)


@ -23,7 +23,7 @@ This document details the steps for releasing the Apache Pulsar Helm Chart.
## Prerequisites
- Helm version >= 3.12.0
- Helm gpg plugin (one option: https://github.com/technosophos/helm-gpg)
## Build Release Notes
@ -44,33 +44,42 @@ official Apache releases must not include the rcN suffix.
# Set Version
export VERSION_RC=3.0.0-candidate-1
export VERSION_WITHOUT_RC=${VERSION_RC%-candidate-*}
# set your ASF user id
export APACHE_USER=<your ASF userid>
```
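The `%-candidate-*` suffix removal above is plain POSIX parameter expansion; a quick sanity check:

```shell
# strip the "-candidate-N" suffix from a release-candidate version string
VERSION_RC=3.0.0-candidate-1
VERSION_WITHOUT_RC=${VERSION_RC%-candidate-*}
echo "$VERSION_WITHOUT_RC"   # prints 3.0.0
```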
- Clone a clean repository and set PULSAR_REPO_ROOT:
```shell
git clone https://github.com/apache/pulsar-helm-chart.git
cd pulsar-helm-chart
export PULSAR_REPO_ROOT=$(pwd)
```
- Alternatively (not recommended), go to your already checked out pulsar-helm-chart directory and ensure that it's clean:
```shell
git fetch origin
git reset --hard origin/master
# clean the checkout
git clean -fdX .
export PULSAR_REPO_ROOT=$(pwd)
```
- Update Helm Chart version in `Chart.yaml`, example: `version: 1.0.0` (without
the RC tag). Verify that the `appVersion` matches the `values.yaml` versions for Pulsar components.
```shell
yq -i '.version=strenv(VERSION_WITHOUT_RC)' charts/pulsar/Chart.yaml
```
- Add and commit the version change.
```shell
git add charts/pulsar/Chart.yaml
git commit -m "Chart: Bump version to $VERSION_WITHOUT_RC"
git push origin master
```
Note: You will tag this commit; you do not need to open a PR for it.
@ -78,7 +87,7 @@ official Apache releases must not include the rcN suffix.
- Tag your release
```shell
git tag -u $APACHE_USER@apache.org -s pulsar-${VERSION_RC} -m "Apache Pulsar Helm Chart $VERSION_RC"
```
- Tarball the repo
@ -106,7 +115,7 @@ official Apache releases must not include the rcN suffix.
http://www.apache.org/dev/openpgp.html#key-gen-generate-key)
```shell
helm gpg sign -u $APACHE_USER@apache.org pulsar-${VERSION_WITHOUT_RC}.tgz
```
Warning: you need the `helm gpg` plugin to sign the chart. It can be found at: https://github.com/technosophos/helm-gpg
@ -114,10 +123,14 @@ official Apache releases must not include the rcN suffix.
This should also generate a provenance file (Example: `pulsar-1.0.0.tgz.prov`) as described in
https://helm.sh/docs/topics/provenance/, which can be used to verify integrity of the Helm chart.
Verify the signed chart:
```shell
helm gpg verify pulsar-${VERSION_WITHOUT_RC}.tgz
```
Example output:
```
gpg: Signature made Thu Oct 20 16:36:24 2022 CDT
gpg: using RSA key BD4291E509D771B79E7BD1F5C5724B3F5588C4EB
gpg: issuer "mmarshall@apache.org"
@ -135,7 +148,6 @@ official Apache releases must not include the rcN suffix.
- Move the artifacts to ASF dev dist repo, generate convenience `index.yaml` & publish them
```shell
# Create new folder for the release
svn mkdir --username $APACHE_USER -m "Add directory for pulsar-helm-chart $VERSION_RC release" https://dist.apache.org/repos/dist/dev/pulsar/helm-chart/$VERSION_RC
# checkout the directory
@ -166,6 +178,8 @@ official Apache releases must not include the rcN suffix.
- Remove old Helm Chart versions from the dev repo
First check if this is required by viewing the versions available at https://dist.apache.org/repos/dist/dev/pulsar/helm-chart
```shell
export PREVIOUS_VERSION_RC=3.0.0-candidate-1
svn rm --username $APACHE_USER -m "Remove old Helm Chart release: ${PREVIOUS_VERSION_RC}" https://dist.apache.org/repos/dist/dev/pulsar/helm-chart/${PREVIOUS_VERSION_RC}
@ -175,9 +189,23 @@ official Apache releases must not include the rcN suffix.
```shell
cd ${PULSAR_REPO_ROOT}
git push origin tag pulsar-${VERSION_RC}
```
## Create release notes for the release candidate in GitHub UI
```shell
# open this URL and create release notes by clicking "Create release from tag"
echo https://github.com/apache/pulsar-helm-chart/releases/tag/pulsar-${VERSION_RC}
```
1. Open the above URL in a browser and create release notes by clicking "Create release from tag".
2. Find "Previous tag: auto" in the UI above the text box and choose the previous release there.
3. Click "Generate release notes".
4. Review the generated release notes.
5. Select "Set as a pre-release"
6. Click "Publish release".
## Prepare Vote email on the Apache Pulsar release candidate
@ -202,6 +230,9 @@ Hello Apache Pulsar Community,
This is a call for the vote to release the Apache Pulsar Helm Chart version ${VERSION_WITHOUT_RC}.
Release notes for $VERSION_RC:
https://github.com/apache/pulsar-helm-chart/releases/tag/pulsar-$VERSION_RC
The release candidate is available at:
https://dist.apache.org/repos/dist/dev/pulsar/helm-chart/$VERSION_RC/
@ -212,9 +243,15 @@ Public keys are available at: https://www.apache.org/dist/pulsar/KEYS
For convenience "index.yaml" has been uploaded (though excluded from voting), so you can also run the below commands.
helm repo add --force-update apache-pulsar-dist-dev \\
https://dist.apache.org/repos/dist/dev/pulsar/helm-chart/$VERSION_RC/
helm repo update
helm install pulsar apache-pulsar-dist-dev/pulsar \\
--version ${VERSION_WITHOUT_RC} --set affinity.anti_affinity=false \\
--wait --timeout 10m --debug
For observing the deployment progress, you can use the k9s tool to view the cluster state changes in a different terminal window.
The k9s tool is available at https://k9scli.io/topics/install/.
pulsar-${VERSION_WITHOUT_RC}.tgz.prov - is also uploaded for verifying Chart Integrity, though it is not strictly required for releasing the artifact based on ASF Guidelines.
@ -372,9 +409,15 @@ Contributors can run below commands to test the Helm Chart
```shell
export VERSION_RC=3.0.0-candidate-1
export VERSION_WITHOUT_RC=${VERSION_RC%-candidate-*}
```
```shell
helm repo add --force-update \
apache-pulsar-dist-dev https://dist.apache.org/repos/dist/dev/pulsar/helm-chart/$VERSION_RC/
helm repo update
helm install pulsar apache-pulsar-dist-dev/pulsar \
--version ${VERSION_WITHOUT_RC} --set affinity.anti_affinity=false
```
You can then perform any other verifications to check that it works as you expected by
@ -421,17 +464,19 @@ EOF
## Publish release to SVN
Migrate the approved RC artifacts to the release directory:
https://dist.apache.org/repos/dist/release/pulsar/helm-chart/
(The migration should include renaming the files so that they no longer have the RC number in their filenames.)
Set environment variables:
```shell
export VERSION_RC=3.0.0-candidate-1
export VERSION_WITHOUT_RC=${VERSION_RC%-candidate-*}
export APACHE_USER=<your ASF userid>
```
svn commands for handling this:
```shell
svn rm --username $APACHE_USER -m "Remove temporary index.yaml file" https://dist.apache.org/repos/dist/dev/pulsar/helm-chart/${VERSION_RC}/index.yaml
svn move --username $APACHE_USER -m "Release Pulsar Helm Chart ${VERSION_WITHOUT_RC} from ${VERSION_RC}" \
https://dist.apache.org/repos/dist/dev/pulsar/helm-chart/${VERSION_RC} \
@ -445,10 +490,8 @@ Verify that the packages appear in [Pulsar Helm Chart](https://dist.apache.org/r
Create and push the release tag:
```shell
cd "${PULSAR_REPO_ROOT}"
git tag -u $APACHE_USER@apache.org pulsar-$VERSION_WITHOUT_RC $(git rev-parse pulsar-$VERSION_RC^{}) -m "Apache Pulsar Helm Chart ${VERSION_WITHOUT_RC}"
git push origin pulsar-${VERSION_WITHOUT_RC}
```
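Before announcing, you can optionally verify the signature on the tag you just pushed; `git tag -v` checks the GPG signature, assuming the signing public key is in your keyring.

```shell
git tag -v pulsar-${VERSION_WITHOUT_RC}
```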
## Update index.yaml
@ -468,7 +511,7 @@ cd pulsar-site
# Run on a branch based on main branch
cd static/charts
# need the chart file temporarily to update the index
wget https://dist.apache.org/repos/dist/release/pulsar/helm-chart/${VERSION_WITHOUT_RC}/pulsar-${VERSION_WITHOUT_RC}.tgz
# store the license header temporarily
head -n 17 index.yaml > license_header.txt
# update the index
@ -481,14 +524,29 @@ rm license_header.txt index.yaml.new
rm pulsar-${VERSION_WITHOUT_RC}.tgz
```
Verify that the updated `index.yaml` file has the most recent version.
Wait until the file is available:
```shell
while ! curl -fIL https://downloads.apache.org/pulsar/helm-chart/${VERSION_WITHOUT_RC}/pulsar-${VERSION_WITHOUT_RC}.tgz; do
echo "Waiting for pulsar-${VERSION_WITHOUT_RC}.tgz to become available..."
sleep 10
done
```
Then run:
```shell
git add index.yaml
git commit -m "Adding Pulsar Helm Chart ${VERSION_WITHOUT_RC} to index.yaml"
```
Then push the change:
```
git push origin main
```
## Create release notes for the tag in GitHub UI


@ -18,20 +18,21 @@
#
apiVersion: v2
appVersion: "4.0.5"
description: Apache Pulsar Helm chart for Kubernetes
name: pulsar
version: 4.1.0
kubeVersion: ">=1.25.0-0"
home: https://pulsar.apache.org
sources:
  - https://github.com/apache/pulsar
  - https://github.com/apache/pulsar-helm-chart
icon: https://pulsar.apache.org/img/pulsar.svg
maintainers:
  - name: The Apache Pulsar Team
    email: dev@pulsar.apache.org
dependencies:
  - name: victoria-metrics-k8s-stack
    version: 0.38.x
    repository: https://victoriametrics.github.io/helm-charts/
    condition: victoria-metrics-k8s-stack.enabled


@ -0,0 +1,185 @@
======================================================================================
APACHE PULSAR HELM CHART
======================================================================================
======================================================================================
SECURITY ADVISORY
======================================================================================
This Helm chart's default configuration DOES NOT meet production security requirements.
Users MUST review and customize security settings for their specific environment.
IMPORTANT: This Helm chart provides a starting point for Pulsar deployments but requires
significant security customization before use in production environments. We strongly
recommend implementing:
1. Proper network isolation and access controls
2. Authentication and authorization for all components
3. TLS encryption for all communication channels
4. Regular security updates and vulnerability assessments
As an open source project, we welcome contributions to improve security features.
Please consider submitting pull requests to address security gaps or enhance
existing security implementations.
---------------------------------------------------------------------------------------
SECURITY NOTICE: The Pulsar proxy is not designed for direct public internet exposure.
It lacks security features required for untrusted networks and should only be deployed
within secured environments with proper network controls.
IMPORTANT CHANGE IN v4.0.0: Default service type changed from LoadBalancer to ClusterIP
for security reasons. This limits access to within the Kubernetes environment by default.
---------------------------------------------------------------------------------------
IF YOU NEED EXTERNAL ACCESS FOR YOUR PULSAR CLUSTER:
---------------------------------------------------------------------------------------
Note: This information might be outdated. Please go to https://github.com/apache/pulsar-helm-chart for updated information.
If you need to expose the Pulsar Proxy outside the cluster using a LoadBalancer service type:
1. USE INTERNAL LOAD BALANCERS ONLY
- Set type to LoadBalancer only in secured environments with proper network controls
- Add cloud provider-specific annotations for internal load balancers
- See cloud provider documentation:
* AWS / EKS: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/
* Azure / AKS: https://learn.microsoft.com/en-us/azure/aks/internal-lb
* GCP / GKE: https://cloud.google.com/kubernetes-engine/docs/concepts/service-load-balancer-parameters
- Examples (verify correctness for your environment):
* AWS / EKS: service.beta.kubernetes.io/aws-load-balancer-internal: "true"
* Azure / AKS: service.beta.kubernetes.io/azure-load-balancer-internal: "true"
* GCP / GKE: networking.gke.io/load-balancer-type: "Internal"
2. IMPLEMENT AUTHENTICATION AND AUTHORIZATION
- Configure all clients to authenticate properly
- Set up appropriate authorization policies
3. USE TLS FOR ALL CONNECTIONS
- Enable TLS for client-to-proxy connections
- Enable TLS for proxy-to-broker connections
- Enable TLS for all internal cluster communications (brokers, zookeepers, bookies)
- Note: TLS alone is NOT sufficient as a security solution in Pulsar. Even with TLS enabled,
clusters exposed to untrusted networks remain vulnerable to denial-of-service attacks,
authentication bypass attempts, and protocol-level exploits. Always implement defense-in-depth
security measures and limit exposure to trusted networks only.
4. NETWORK SECURITY
- Use private networks (VPCs)
- Configure firewalls, security groups, and IP restrictions appropriately
- In addition, consider using loadBalancerSourceRanges to limit access to specific IP ranges
5. CLIENT IP ADDRESS BASED ACCESS RESTRICTIONS
- When using a LoadBalancer service type, restrict access to specific IP ranges by configuring
`proxy.service.loadBalancerSourceRanges` in your values.yaml
- Important: This should be implemented alongside other security measures (internal load balancer,
authentication, TLS, network policies) as part of a defense-in-depth strategy,
not as a standalone security solution
---------------------------------------------------------------------------------------
ALTERNATIVE FOR EXTERNAL ACCESS
---------------------------------------------------------------------------------------
As an alternative method for external access, Pulsar has support for SNI proxy routing:
https://pulsar.apache.org/docs/next/concepts-proxy-sni-routing/
SNI Proxy routing is supported with proxy servers such as Apache Traffic Server, HAProxy and Nginx.
Note: This option isn't currently implemented in the Apache Pulsar Helm chart.
IMPORTANT: Pulsar binary protocol cannot be exposed outside of the Kubernetes cluster
using Kubernetes Ingress. Kubernetes Ingress works for the Admin REST API and topic lookups,
but clients would be connecting to the advertised listener addresses returned by the brokers and it
would only work when clients can connect directly to brokers. This is not a supported secure option
for exposing Pulsar to untrusted networks.
{{- if .Values.useReleaseStatus }}
======================================================================================
🚀 QUICK START 🚀
======================================================================================
Watching events to view progress of deployment:
kubectl get -n {{ .Values.namespace | default .Release.Namespace }} events -o wide --watch
Watching state of deployed Kubernetes objects, updated every 2 seconds:
watch kubectl get -n {{ .Values.namespace | default .Release.Namespace }} all
{{- if .Values.components.proxy }}
Waiting until Pulsar Proxy is available:
kubectl wait --timeout=600s --for=condition=ready pod -n {{ .Values.namespace | default .Release.Namespace }} -l component=proxy
{{- end }}
Watching state with k9s (https://k9scli.io/topics/install/):
k9s -n {{ .Values.namespace | default .Release.Namespace }}
{{- if and .Values.affinity.anti_affinity (or (gt (int .Values.bookkeeper.replicaCount) 1) (gt (int .Values.zookeeper.replicaCount) 1)) }}
======================================================================================
⚠️ NOTICE FOR DEV K8S CLUSTER USERS ⚠️
======================================================================================
Please note that anti-affinity rules for Zookeeper and Bookie components require at least
one node per replica. There are currently {{ .Values.bookkeeper.replicaCount }} bookies and {{ .Values.zookeeper.replicaCount }} zookeepers configured.
For Kubernetes clusters with fewer than 3 nodes, such as single-node Kubernetes clusters in
development environments like minikube, Docker Desktop, Rancher Desktop (k3s), or Podman
Desktop, you must disable the anti-affinity feature by either:
Adding to your values.yaml:
affinity:
anti_affinity: false
Or adding "--set affinity.anti_affinity=false" to the helm command line.
After making the changes to your values.yaml file, redeploy with "helm upgrade":
helm upgrade -n {{ .Release.Namespace }} -f your_values_file.yaml {{ .Release.Name }} apachepulsar/pulsar
These configuration instructions can be omitted for Kubernetes clusters with 3 or more nodes.
{{- end }}
{{- end }}
{{- if and (eq .Values.proxy.service.type "LoadBalancer") (not .Values.proxy.service.annotations) }}
======================================================================================
⚠️ 🚨 INSECURE CONFIGURATION DETECTED 🚨 ⚠️
======================================================================================
WARNING: You are using a LoadBalancer service type without internal load balancer
annotations. This is potentially an insecure configuration. Please carefully review
the security recommendations above and visit https://github.com/apache/pulsar-helm-chart
for more information.
======================================================================================
{{- end }}
======================================================================================
DISCLAIMER
======================================================================================
The providers of this Helm chart make no guarantees regarding the security of the chart under
any circumstances. It is the user's responsibility to ensure that their deployment is secure
and complies with all relevant security standards and regulations.
By using this Helm chart, the user acknowledges the risks associated with its default
configuration and the necessity for proper security customization. The user further
agrees that the providers of the Helm chart shall not be liable for any security breaches
or incidents resulting from the use of the chart.
The user assumes full responsibility for the security and integrity of their deployment.
This includes, but is not limited to, the proper configuration of security features and
adherence to best practices for securing network access. The providers of this Helm chart
disclaim all warranties, whether express or implied, including any warranties of
merchantability, fitness for a particular purpose, and non-infringement of third-party rights.
======================================================================================
RESOURCES
======================================================================================
- 🖥️ Install k9s terminal interface for viewing and managing k8s clusters: https://k9scli.io/topics/install/
- ❓ Usage Questions: https://github.com/apache/pulsar/discussions/categories/q-a
- 🐛 Report Issues: https://github.com/apache/pulsar-helm-chart/issues
- 🔒 Security Issues: https://pulsar.apache.org/security/
- 📚 Documentation: https://github.com/apache/pulsar-helm-chart
🌟 Please contribute to improve the Apache Pulsar Helm chart and its documentation:
- 🤝 Contribute: https://github.com/apache/pulsar-helm-chart
Thank you for installing Apache Pulsar Helm chart version {{ .Chart.Version }}.


@ -36,7 +36,7 @@ Define autorecovery zookeeper client tls settings
*/}}
{{- define "pulsar.autorecovery.zookeeper.tls.settings" -}}
{{- if and .Values.tls.enabled .Values.tls.zookeeper.enabled }}
{{- include "pulsar.component.zookeeper.tls.settings" (dict "component" "autorecovery" "isClient" true "isCacerts" .Values.tls.autorecovery.cacerts.enabled) -}}
{{- end }}
{{- end }}
@ -51,11 +51,21 @@ Define autorecovery tls certs mounts
- name: ca
mountPath: "/pulsar/certs/ca"
readOnly: true
{{- if .Values.tls.autorecovery.cacerts.enabled }}
- mountPath: "/pulsar/certs/cacerts"
name: autorecovery-cacerts
{{- range $cert := .Values.tls.autorecovery.cacerts.certs }}
- name: {{ $cert.name }}
mountPath: "/pulsar/certs/{{ $cert.name }}"
readOnly: true
{{- end }}
- name: certs-scripts
mountPath: "/pulsar/bin/certs-combine-pem.sh"
subPath: certs-combine-pem.sh
- name: certs-scripts
mountPath: "/pulsar/bin/certs-combine-pem-infinity.sh"
subPath: certs-combine-pem-infinity.sh
{{- end }}
{{- end }}
@ -72,18 +82,32 @@ Define autorecovery tls certs volumes
path: tls.crt
- key: tls.key
path: tls.key
- key: tls-combined.pem
path: tls-combined.pem
- name: ca
secret:
secretName: "{{ template "pulsar.certs.issuers.ca.secretName" . }}"
items:
- key: ca.crt
path: ca.crt
{{- if .Values.tls.autorecovery.cacerts.enabled }}
- name: autorecovery-cacerts
emptyDir: {}
{{- range $cert := .Values.tls.autorecovery.cacerts.certs }}
- name: {{ $cert.name }}
secret:
secretName: "{{ $cert.existingSecret }}"
items:
{{- range $key := $cert.secretKeys }}
- key: {{ $key }}
path: {{ $key }}
{{- end }}
{{- end }}
- name: certs-scripts
configMap:
name: "{{ template "pulsar.fullname" . }}-certs-scripts"
defaultMode: 0755
{{- end }}
{{- end }}
@ -92,8 +116,9 @@ Define autorecovery init container : verify cluster id
*/}}
{{- define "pulsar.autorecovery.init.verify_cluster_id" -}}
bin/apply-config-from-env.py conf/bookkeeper.conf;
export BOOKIE_MEM="-Xmx128M";
{{- include "pulsar.autorecovery.zookeeper.tls.settings" . }}
until timeout 15 bin/bookkeeper shell whatisinstanceid; do
sleep 3;
done;
{{- end }}


@ -37,7 +37,7 @@ Define bookie zookeeper client tls settings
*/}}
{{- define "pulsar.bookkeeper.zookeeper.tls.settings" -}}
{{- if and .Values.tls.enabled .Values.tls.zookeeper.enabled }}
{{- include "pulsar.component.zookeeper.tls.settings" (dict "component" "bookie" "isClient" true "isCacerts" .Values.tls.bookie.cacerts.enabled) -}}
{{- end }}
{{- end }}
@ -45,18 +45,30 @@ Define bookie zookeeper client tls settings
Define bookie tls certs mounts
*/}}
{{- define "pulsar.bookkeeper.certs.volumeMounts" -}}
{{- if .Values.tls.enabled }}
{{- if or .Values.tls.bookie.enabled .Values.tls.zookeeper.enabled }}
- name: bookie-certs
mountPath: "/pulsar/certs/bookie"
readOnly: true
{{- end }}
- name: ca
mountPath: "/pulsar/certs/ca"
readOnly: true
{{- if .Values.tls.bookie.cacerts.enabled }}
- mountPath: "/pulsar/certs/cacerts"
name: bookie-cacerts
{{- range $cert := .Values.tls.bookie.cacerts.certs }}
- name: {{ $cert.name }}
mountPath: "/pulsar/certs/{{ $cert.name }}"
readOnly: true
{{- end }}
- name: certs-scripts
mountPath: "/pulsar/bin/certs-combine-pem.sh"
subPath: certs-combine-pem.sh
- name: certs-scripts
mountPath: "/pulsar/bin/certs-combine-pem-infinity.sh"
subPath: certs-combine-pem-infinity.sh
{{- end }}
{{- end }}
@ -64,7 +76,8 @@ Define bookie tls certs mounts
Define bookie tls certs volumes
*/}}
{{- define "pulsar.bookkeeper.certs.volumes" -}}
{{- if .Values.tls.enabled }}
{{- if or .Values.tls.bookie.enabled .Values.tls.zookeeper.enabled }}
- name: bookie-certs
secret:
secretName: "{{ .Release.Name }}-{{ .Values.tls.bookie.cert_name }}"
@ -73,18 +86,35 @@ Define bookie tls certs volumes
path: tls.crt
- key: tls.key
path: tls.key
{{- if .Values.tls.zookeeper.enabled }}
- key: tls-combined.pem
path: tls-combined.pem
{{- end }}
{{- end }}
- name: ca
secret:
secretName: "{{ .Release.Name }}-{{ .Values.tls.ca_suffix }}"
secretName: "{{ template "pulsar.certs.issuers.ca.secretName" . }}"
items:
- key: ca.crt
path: ca.crt
{{- if .Values.tls.bookie.cacerts.enabled }}
- name: bookie-cacerts
emptyDir: {}
{{- range $cert := .Values.tls.bookie.cacerts.certs }}
- name: {{ $cert.name }}
secret:
secretName: "{{ $cert.existingSecret }}"
items:
{{- range $key := $cert.secretKeys }}
- key: {{ $key }}
path: {{ $key }}
{{- end }}
{{- end }}
- name: certs-scripts
configMap:
name: "{{ template "pulsar.fullname" . }}-certs-scripts"
defaultMode: 0755
{{- end }}
{{- end }}
@ -92,8 +122,31 @@ Define bookie tls certs volumes
Define bookie common config
*/}}
{{- define "pulsar.bookkeeper.config.common" -}}
zkServers: "{{ template "pulsar.zookeeper.connect" . }}"
zkLedgersRootPath: "{{ .Values.metadataPrefix }}/ledgers"
{{/*
Configure BookKeeper's metadata store (available since BookKeeper 4.7.0 / BP-29)
https://bookkeeper.apache.org/bps/BP-29-metadata-store-api-module/
https://bookkeeper.apache.org/docs/deployment/manual#cluster-metadata-setup
*/}}
# Set empty values for zkServers and zkLedgersRootPath since we're using the metadataServiceUri to configure BookKeeper's metadata store
zkServers: ""
zkLedgersRootPath: ""
{{- if .Values.components.zookeeper }}
{{- if (and (hasKey .Values.pulsar_metadata "bookkeeper") .Values.pulsar_metadata.bookkeeper.usePulsarMetadataBookieDriver) }}
# there's a bug when using PulsarMetadataBookieDriver since it always appends /ledgers to the metadataServiceUri
# Possibly a bug in org.apache.pulsar.metadata.bookkeeper.AbstractMetadataDriver#resolveLedgersRootPath in Pulsar code base
metadataServiceUri: "metadata-store:zk:{{ template "pulsar.zookeeper.connect" . }}{{ .Values.metadataPrefix }}"
{{- else }}
# use zk+hierarchical:// when using BookKeeper's built-in metadata driver
metadataServiceUri: "zk+hierarchical://{{ template "pulsar.zookeeper.connect" . }}{{ .Values.metadataPrefix }}/ledgers"
{{- end }}
{{- else if .Values.components.oxia }}
metadataServiceUri: "{{ template "pulsar.oxia.metadata.url.bookkeeper" . }}"
{{- end }}
{{- /* metadataStoreSessionTimeoutMillis maps to zkTimeout in bookkeeper.conf for both zookeeper and oxia metadata stores */}}
{{- if (and (hasKey .Values.pulsar_metadata "bookkeeper") (hasKey .Values.pulsar_metadata.bookkeeper "metadataStoreSessionTimeoutMillis")) }}
zkTimeout: "{{ .Values.pulsar_metadata.bookkeeper.metadataStoreSessionTimeoutMillis }}"
{{- end }}
# enable bookkeeper http server
httpServerEnabled: "true"
httpServerPort: "{{ .Values.bookkeeper.ports.http }}"
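For reference, the conditionals above read values shaped like the fragment below; the keys mirror the template lookups, while the concrete values (driver flag, timeout) are purely illustrative:

```yaml
pulsar_metadata:
  bookkeeper:
    # opt into Pulsar's metadata bookie driver (see the /ledgers caveat above)
    usePulsarMetadataBookieDriver: true
    # rendered as zkTimeout in bookkeeper.conf
    metadataStoreSessionTimeoutMillis: 30000
```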
@@ -113,7 +166,7 @@ PULSAR_PREFIX_tlsCertificatePath: /pulsar/certs/bookie/tls.crt
PULSAR_PREFIX_tlsKeyStoreType: PEM
PULSAR_PREFIX_tlsKeyStore: /pulsar/certs/bookie/tls.key
PULSAR_PREFIX_tlsTrustStoreType: PEM
PULSAR_PREFIX_tlsTrustStore: /pulsar/certs/ca/ca.crt
PULSAR_PREFIX_tlsTrustStore: {{ ternary "/pulsar/certs/cacerts/ca-combined.pem" "/pulsar/certs/ca/ca.crt" .Values.tls.bookie.cacerts.enabled | quote }}
{{- end }}
{{- end }}
@@ -123,8 +176,9 @@ Define bookie init container : verify cluster id
{{- define "pulsar.bookkeeper.init.verify_cluster_id" -}}
{{- if not (and .Values.volumes.persistence .Values.bookkeeper.volumes.persistence) }}
bin/apply-config-from-env.py conf/bookkeeper.conf;
{{- include "pulsar.bookkeeper.zookeeper.tls.settings" . -}}
until bin/bookkeeper shell whatisinstanceid; do
export BOOKIE_MEM="-Xmx128M";
{{- include "pulsar.bookkeeper.zookeeper.tls.settings" . }}
until timeout 15 bin/bookkeeper shell whatisinstanceid; do
sleep 3;
done;
bin/bookkeeper shell bookieformat -nonInteractive -force -deleteCookie || true
@@ -132,8 +186,9 @@ bin/bookkeeper shell bookieformat -nonInteractive -force -deleteCookie || true
{{- if and .Values.volumes.persistence .Values.bookkeeper.volumes.persistence }}
set -e;
bin/apply-config-from-env.py conf/bookkeeper.conf;
{{- include "pulsar.bookkeeper.zookeeper.tls.settings" . -}}
until bin/bookkeeper shell whatisinstanceid; do
export BOOKIE_MEM="-Xmx128M";
{{- include "pulsar.bookkeeper.zookeeper.tls.settings" . }}
until timeout 15 bin/bookkeeper shell whatisinstanceid; do
sleep 3;
done;
{{- end }}


@@ -43,7 +43,7 @@ Define broker zookeeper client tls settings
*/}}
{{- define "pulsar.broker.zookeeper.tls.settings" -}}
{{- if and .Values.tls.enabled .Values.tls.zookeeper.enabled }}
/pulsar/keytool/keytool.sh broker {{ template "pulsar.broker.hostname" . }} true;
{{- include "pulsar.component.zookeeper.tls.settings" (dict "component" "broker" "isClient" true "isCacerts" .Values.tls.broker.cacerts.enabled) -}}
{{- end }}
{{- end }}
@@ -51,18 +51,30 @@ Define broker zookeeper client tls settings
Define broker tls certs mounts
*/}}
{{- define "pulsar.broker.certs.volumeMounts" -}}
{{- if and .Values.tls.enabled (or .Values.tls.broker.enabled (or .Values.tls.bookie.enabled .Values.tls.zookeeper.enabled)) }}
{{- if .Values.tls.enabled }}
{{- if or .Values.tls.broker.enabled (or .Values.tls.bookie.enabled .Values.tls.zookeeper.enabled) }}
- name: broker-certs
mountPath: "/pulsar/certs/broker"
readOnly: true
{{- end }}
- name: ca
mountPath: "/pulsar/certs/ca"
readOnly: true
{{- if .Values.tls.zookeeper.enabled }}
- name: keytool
mountPath: "/pulsar/keytool/keytool.sh"
subPath: keytool.sh
{{- end }}
{{- if .Values.tls.broker.cacerts.enabled }}
- mountPath: "/pulsar/certs/cacerts"
name: broker-cacerts
{{- range $cert := .Values.tls.broker.cacerts.certs }}
- name: {{ $cert.name }}
mountPath: "/pulsar/certs/{{ $cert.name }}"
readOnly: true
{{- end }}
- name: certs-scripts
mountPath: "/pulsar/bin/certs-combine-pem.sh"
subPath: certs-combine-pem.sh
- name: certs-scripts
mountPath: "/pulsar/bin/certs-combine-pem-infinity.sh"
subPath: certs-combine-pem-infinity.sh
{{- end }}
{{- end }}
@@ -70,7 +82,8 @@ Define broker tls certs mounts
Define broker tls certs volumes
*/}}
{{- define "pulsar.broker.certs.volumes" -}}
{{- if and .Values.tls.enabled (or .Values.tls.broker.enabled (or .Values.tls.bookie.enabled .Values.tls.zookeeper.enabled)) }}
{{- if .Values.tls.enabled }}
{{- if or .Values.tls.broker.enabled (or .Values.tls.bookie.enabled .Values.tls.zookeeper.enabled) }}
- name: broker-certs
secret:
secretName: "{{ .Release.Name }}-{{ .Values.tls.broker.cert_name }}"
@@ -79,17 +92,34 @@ Define broker tls certs volumes
path: tls.crt
- key: tls.key
path: tls.key
{{- if .Values.tls.zookeeper.enabled }}
- key: tls-combined.pem
path: tls-combined.pem
{{- end }}
{{- end }}
- name: ca
secret:
secretName: "{{ .Release.Name }}-{{ .Values.tls.ca_suffix }}"
secretName: "{{ template "pulsar.certs.issuers.ca.secretName" . }}"
items:
- key: ca.crt
path: ca.crt
{{- if .Values.tls.zookeeper.enabled }}
- name: keytool
{{- end }}
{{- if .Values.tls.broker.cacerts.enabled }}
- name: broker-cacerts
emptyDir: {}
{{- range $cert := .Values.tls.broker.cacerts.certs }}
- name: {{ $cert.name }}
secret:
secretName: "{{ $cert.existingSecret }}"
items:
{{- range $key := $cert.secretKeys }}
- key: {{ $key }}
path: {{ $key }}
{{- end }}
{{- end }}
- name: certs-scripts
configMap:
name: "{{ template "pulsar.fullname" . }}-keytool-configmap"
name: "{{ template "pulsar.fullname" . }}-certs-scripts"
defaultMode: 0755
{{- end }}
{{- end }}
{{- end }}


@@ -0,0 +1,132 @@
{{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}}
{{/*
Define the pulsar certs ca issuer name
*/}}
{{- define "pulsar.certs.issuers.ca.name" -}}
{{- if .Values.certs.internal_issuer.enabled -}}
{{- if and (eq .Values.certs.internal_issuer.type "selfsigning") .Values.certs.issuers.selfsigning.name -}}
{{- .Values.certs.issuers.selfsigning.name -}}
{{- else if and (eq .Values.certs.internal_issuer.type "ca") .Values.certs.issuers.ca.name -}}
{{- .Values.certs.issuers.ca.name -}}
{{- else -}}
{{- template "pulsar.fullname" . }}-{{ .Values.certs.internal_issuer.component }}-ca-issuer
{{- end -}}
{{- else -}}
{{- if .Values.certs.issuers.ca.name -}}
{{- .Values.certs.issuers.ca.name -}}
{{- else -}}
{{- fail "certs.issuers.ca.name is required when TLS is enabled and certs.internal_issuer.enabled is false" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Define the pulsar certs ca issuer secret name
*/}}
{{- define "pulsar.certs.issuers.ca.secretName" -}}
{{- if .Values.certs.internal_issuer.enabled -}}
{{- if and (eq .Values.certs.internal_issuer.type "selfsigning") .Values.certs.issuers.selfsigning.secretName -}}
{{- .Values.certs.issuers.selfsigning.secretName -}}
{{- else if and (eq .Values.certs.internal_issuer.type "ca") .Values.certs.issuers.ca.secretName -}}
{{- .Values.certs.issuers.ca.secretName -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name .Values.tls.ca_suffix -}}
{{- end -}}
{{- else -}}
{{- if .Values.certs.issuers.ca.secretName -}}
{{- .Values.certs.issuers.ca.secretName -}}
{{- else -}}
{{- fail "certs.issuers.ca.secretName is required when TLS is enabled and certs.internal_issuer.enabled is false" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Common certificate template
Usage: {{- include "pulsar.cert.template" (dict "root" . "componentConfig" .Values.proxy "tlsConfig" .Values.tls.proxy) -}}
*/}}
{{- define "pulsar.cert.template" -}}
{{- if eq .root.Values.certs.internal_issuer.apiVersion "cert-manager.io/v1beta1" -}}
{{- fail "cert-manager.io/v1beta1 is no longer supported. Please set certs.internal_issuer.apiVersion to cert-manager.io/v1" -}}
{{- end -}}
apiVersion: "{{ .root.Values.certs.internal_issuer.apiVersion }}"
kind: Certificate
metadata:
name: "{{ template "pulsar.fullname" .root }}-{{ .tlsConfig.cert_name }}"
namespace: {{ template "pulsar.namespace" .root }}
labels:
{{- include "pulsar.standardLabels" .root | nindent 4 }}
spec:
# Secret names are always required.
secretName: "{{ .root.Release.Name }}-{{ .tlsConfig.cert_name }}"
{{- if .root.Values.tls.zookeeper.enabled }}
additionalOutputFormats:
- type: CombinedPEM
{{- end }}
duration: "{{ .root.Values.tls.common.duration }}"
renewBefore: "{{ .root.Values.tls.common.renewBefore }}"
subject:
organizations:
{{ toYaml .root.Values.tls.common.organization | indent 4 }}
# The use of the common name field has been deprecated since 2000 and is
# discouraged from being used.
commonName: "{{ template "pulsar.fullname" .root }}-{{ .componentConfig.component }}"
isCA: false
privateKey:
size: {{ .root.Values.tls.common.keySize }}
algorithm: {{ .root.Values.tls.common.keyAlgorithm }}
encoding: {{ .root.Values.tls.common.keyEncoding }}
usages:
- server auth
- client auth
# At least one of a DNS Name, URI SAN, or IP address is required.
dnsNames:
{{- if .tlsConfig.dnsNames }}
{{ toYaml .tlsConfig.dnsNames | indent 4 }}
{{- end }}
- {{ printf "*.%s-%s.%s.svc.%s" (include "pulsar.fullname" .root) .componentConfig.component (include "pulsar.namespace" .root) .root.Values.clusterDomain | quote }}
- {{ printf "%s-%s" (include "pulsar.fullname" .root) .componentConfig.component | quote }}
# Issuer references are always required.
issuerRef:
name: "{{ template "pulsar.certs.issuers.ca.name" .root }}"
# We can reference ClusterIssuers by changing the kind here.
# The default value is Issuer (i.e. a locally namespaced Issuer)
kind: Issuer
# This is optional since cert-manager will default to this value however
# if you are using an external issuer, change this to that issuer group.
group: cert-manager.io
{{- end -}}
{{/*
CA certificates template
Usage: {{ include "pulsar.certs.cacerts" (dict "certs" .Values.tls.<component>.cacerts.certs) }}
*/}}
{{- define "pulsar.certs.cacerts" -}}
{{- $certs := .certs -}}
{{- $cacerts := list -}}
{{- $cacerts = print "/pulsar/certs/ca/ca.crt" | append $cacerts -}}
{{- range $cert := $certs -}}
{{- range $key := $cert.secretKeys -}}
{{- $cacerts = print "/pulsar/certs/" $cert.name "/" $key | append $cacerts -}}
{{- end -}}
{{- end -}}
{{ join " " $cacerts }}
{{- end -}}
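The helper above just concatenates file paths: the cluster CA first, then every key of every extra CA secret, joined with spaces. A minimal Python sketch of the same logic (the secret name "corp-ca" and its key are hypothetical):

```python
def cacerts_paths(certs):
    # Mirror "pulsar.certs.cacerts": start with the cluster CA, then append
    # every mounted key of every extra CA secret, space-separated.
    paths = ["/pulsar/certs/ca/ca.crt"]
    for cert in certs:
        for key in cert["secretKeys"]:
            paths.append(f"/pulsar/certs/{cert['name']}/{key}")
    return " ".join(paths)

# Hypothetical extra CA secret named "corp-ca" exposing one key:
print(cacerts_paths([{"name": "corp-ca", "secretKeys": ["ca.crt"]}]))
# /pulsar/certs/ca/ca.crt /pulsar/certs/corp-ca/ca.crt
```

The resulting list is what the certs-combine-pem script concatenates into `/pulsar/certs/cacerts/ca-combined.pem`.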


@@ -126,5 +126,13 @@ imagePullSecrets:
Create full image name
*/}}
{{- define "pulsar.imageFullName" -}}
{{- printf "%s:%s" .image.repository (.image.tag | default .root.Values.defaultPulsarImageTag | default .root.Chart.AppVersion) -}}
{{- printf "%s:%s" (.image.repository | default .root.Values.defaultPulsarImageRepository) (.image.tag | default .root.Values.defaultPulsarImageTag | default .root.Chart.AppVersion) -}}
{{- end -}}
{{/*
Lookup pull policy, default to defaultPullPolicy
*/}}
{{- define "pulsar.imagePullPolicy" -}}
{{- printf "%s" (.image.pullPolicy | default .root.Values.defaultPullPolicy) -}}
{{- end -}}
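The fallback chains in `pulsar.imageFullName` and `pulsar.imagePullPolicy` are easy to misread; a Python sketch of the same resolution order (repository names and policies below are illustrative, and Helm's `default` treats empty strings as unset, which `or` mimics here):

```python
def image_full_name(image, values, chart_app_version):
    # repository: per-image value, else defaultPulsarImageRepository
    repo = image.get("repository") or values.get("defaultPulsarImageRepository")
    # tag: per-image value, else defaultPulsarImageTag, else the chart's appVersion
    tag = image.get("tag") or values.get("defaultPulsarImageTag") or chart_app_version
    return f"{repo}:{tag}"

def image_pull_policy(image, values):
    # per-image pullPolicy, else the chart-wide defaultPullPolicy
    return image.get("pullPolicy") or values.get("defaultPullPolicy")
```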


@@ -0,0 +1,97 @@
{{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}}
{{- define "pulsar.podMonitor" -}}
{{- $root := index . 0 }}
{{- $component := index . 1 }}
{{- $matchLabel := index . 2 }}
{{- $portName := "http" }}
{{- if gt (len .) 3 }}
{{- $portName = index . 3 }}
{{- end }}
{{/* Extract component parts for nested values */}}
{{- $componentParts := splitList "." $component }}
{{- $valuesPath := $root.Values }}
{{- range $componentParts }}
{{- $valuesPath = index $valuesPath . }}
{{- end }}
{{- if index $root.Values "victoria-metrics-k8s-stack" "enabled" }}
apiVersion: operator.victoriametrics.com/v1beta1
kind: VMPodScrape
{{- else }}
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
{{- end }}
metadata:
name: {{ template "pulsar.fullname" $root }}-{{ replace "." "-" $component }}
labels:
{{- include "pulsar.standardLabels" $root | nindent 4 }}
spec:
jobLabel: {{ replace "." "-" $component }}
podMetricsEndpoints:
- port: {{ $portName }}
path: /metrics
scheme: http
interval: {{ $valuesPath.podMonitor.interval }}
scrapeTimeout: {{ $valuesPath.podMonitor.scrapeTimeout }}
# Set honor labels to true to allow overriding namespace label with Pulsar's namespace label
honorLabels: true
{{- if index $root.Values "victoria-metrics-k8s-stack" "enabled" }}
relabelConfigs:
{{- else }}
relabelings:
{{- end }}
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- sourceLabels: [__meta_kubernetes_namespace]
action: replace
targetLabel: kubernetes_namespace
- sourceLabels: [__meta_kubernetes_pod_label_component]
action: replace
targetLabel: job
- sourceLabels: [__meta_kubernetes_pod_name]
action: replace
targetLabel: kubernetes_pod_name
{{- if or $valuesPath.podMonitor.metricRelabelings (and $valuesPath.podMonitor.dropUnderscoreCreatedMetrics (index $valuesPath.podMonitor.dropUnderscoreCreatedMetrics "enabled")) }}
{{- if index $root.Values "victoria-metrics-k8s-stack" "enabled" }}
metricRelabelConfigs:
{{- else }}
metricRelabelings:
{{- end }}
{{- if and $valuesPath.podMonitor.dropUnderscoreCreatedMetrics (index $valuesPath.podMonitor.dropUnderscoreCreatedMetrics "enabled") }}
# Drop metrics that end with _created, auto-created by metrics library to match OpenMetrics format
- sourceLabels: [__name__]
{{- if and (hasKey $valuesPath.podMonitor.dropUnderscoreCreatedMetrics "excludePatterns") $valuesPath.podMonitor.dropUnderscoreCreatedMetrics.excludePatterns }}
regex: "(?!{{ $valuesPath.podMonitor.dropUnderscoreCreatedMetrics.excludePatterns | join "|" }}).*_created$"
{{- else }}
regex: ".*_created$"
{{- end }}
action: drop
{{- end }}
{{- with $valuesPath.podMonitor.metricRelabelings }}
{{ toYaml . | indent 8 }}
{{- end }}
{{- end }}
selector:
matchLabels:
{{- include "pulsar.matchLabels" $root | nindent 6 }}
{{ $matchLabel }}
{{- end -}}
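The `splitList "." $component` / `index` walk above resolves dotted component names against nested values; sketched in Python (the values layout shown is a hypothetical stand-in):

```python
def resolve_component_values(values, component):
    # Walk nested values by the dotted component name,
    # e.g. "oxia.server" -> values["oxia"]["server"]
    node = values
    for part in component.split("."):
        node = node[part]
    return node

values = {"oxia": {"server": {"podMonitor": {"interval": "60s"}}}}
assert resolve_component_values(values, "oxia.server")["podMonitor"]["interval"] == "60s"
```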


@@ -0,0 +1,122 @@
{{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}}
{{/*
Probe
*/}}
{{- define "oxia-cluster.probe" -}}
exec:
command: ["oxia", "health", "--port={{ . }}"]
initialDelaySeconds: 10
timeoutSeconds: 10
{{- end }}
{{/*
Readiness probe
*/}}
{{- define "oxia-cluster.readiness-probe" -}}
exec:
command: ["oxia", "health", "--port={{ . }}", "--service=oxia-readiness"]
initialDelaySeconds: 10
timeoutSeconds: 10
{{- end }}
{{/*
Startup probe
*/}}
{{- define "oxia-cluster.startup-probe" -}}
exec:
command: ["oxia", "health", "--port={{ . }}"]
initialDelaySeconds: 60
timeoutSeconds: 10
{{- end }}
{{/*
Define the oxia server service name
*/}}
{{- define "pulsar.oxia.server.service" -}}
{{ template "pulsar.fullname" . }}-{{ .Values.oxia.component }}-svc
{{- end }}
{{/*
oxia url for broker metadata
*/}}
{{- define "pulsar.oxia.metadata.url.broker" -}}
{{- if .Values.components.oxia -}}
oxia://{{ template "pulsar.oxia.server.service" . }}:{{ .Values.oxia.server.ports.public }}/broker
{{- end -}}
{{- end -}}
{{/*
oxia url for bookkeeper metadata
*/}}
{{- define "pulsar.oxia.metadata.url.bookkeeper" -}}
{{- if .Values.components.oxia -}}
metadata-store:oxia://{{ template "pulsar.oxia.server.service" . }}:{{ .Values.oxia.server.ports.public }}/bookkeeper
{{- end -}}
{{- end -}}
{{/*
Define coordinator configmap
*/}}
{{- define "oxia.coordinator.config.yaml" -}}
namespaces:
- name: default
initialShardCount: {{ .Values.oxia.initialShardCount }}
replicationFactor: {{ .Values.oxia.replicationFactor }}
- name: broker
initialShardCount: {{ .Values.oxia.initialShardCount }}
replicationFactor: {{ .Values.oxia.replicationFactor }}
- name: bookkeeper
initialShardCount: {{ .Values.oxia.initialShardCount }}
replicationFactor: {{ .Values.oxia.replicationFactor }}
servers:
{{- $servicename := printf "%s-%s-svc" (include "pulsar.fullname" .) .Values.oxia.component }}
{{- $fqdnSuffix := printf "%s.svc.cluster.local" (include "pulsar.namespace" .) }}
{{- $podnamePrefix := printf "%s-%s-server-" (include "pulsar.fullname" .) .Values.oxia.component }}
{{- range until (int .Values.oxia.server.replicas) }}
{{- $podnameIndex := . }}
{{- $podname := printf "%s%d.%s" $podnamePrefix $podnameIndex $servicename }}
{{- $podnameFQDN := printf "%s.%s" $podname $fqdnSuffix }}
- public: {{ $podnameFQDN }}:{{ $.Values.oxia.server.ports.public }}
internal: {{ $podname }}:{{ $.Values.oxia.server.ports.internal }}
{{- end }}
{{- end }}
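The `range until` loop above expands the replica count into per-pod addresses for the coordinator config; the same construction in Python (release name, namespace, and port numbers are examples only):

```python
def oxia_servers(fullname, component, namespace, replicas, public_port, internal_port):
    # Mirror the template: headless service name, then one entry per StatefulSet pod.
    svc = f"{fullname}-{component}-svc"
    servers = []
    for i in range(replicas):
        pod = f"{fullname}-{component}-server-{i}.{svc}"
        servers.append({
            "public": f"{pod}.{namespace}.svc.cluster.local:{public_port}",
            "internal": f"{pod}:{internal_port}",
        })
    return servers
```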
{{/*
Define coordinator entrypoint
*/}}
{{- define "oxia.coordinator.entrypoint" -}}
- "oxia"
- "coordinator"
{{- if .Values.oxia.coordinator.customConfigMapName }}
- "--conf=configmap:{{ template "pulsar.namespace" . }}/{{ .Values.oxia.coordinator.customConfigMapName }}"
{{- else }}
- "--conf=configmap:{{ template "pulsar.namespace" . }}/{{ template "pulsar.fullname" . }}-{{ .Values.oxia.component }}-coordinator"
{{- end }}
- "--log-json"
- "--metadata=configmap"
- "--k8s-namespace={{ template "pulsar.namespace" . }}"
- "--k8s-configmap-name={{ template "pulsar.fullname" . }}-{{ .Values.oxia.component }}-coordinator-status"
{{- if .Values.oxia.pprofEnabled }}
- "--profile"
{{- end}}
{{- end}}


@@ -0,0 +1,95 @@
{{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}}
{{/*
Define proxy tls certs mounts
*/}}
{{- define "pulsar.proxy.certs.volumeMounts" -}}
{{- if .Values.tls.enabled }}
{{- if .Values.tls.proxy.enabled }}
- mountPath: "/pulsar/certs/proxy"
name: proxy-certs
readOnly: true
{{- end }}
- mountPath: "/pulsar/certs/ca"
name: ca
readOnly: true
{{- end }}
{{- if .Values.tls.proxy.cacerts.enabled }}
- mountPath: "/pulsar/certs/cacerts"
name: proxy-cacerts
{{- range $cert := .Values.tls.proxy.cacerts.certs }}
- name: {{ $cert.name }}
mountPath: "/pulsar/certs/{{ $cert.name }}"
readOnly: true
{{- end }}
- name: certs-scripts
mountPath: "/pulsar/bin/certs-combine-pem.sh"
subPath: certs-combine-pem.sh
- name: certs-scripts
mountPath: "/pulsar/bin/certs-combine-pem-infinity.sh"
subPath: certs-combine-pem-infinity.sh
{{- end }}
{{- end }}
{{/*
Define proxy tls certs volumes
*/}}
{{- define "pulsar.proxy.certs.volumes" -}}
{{- if .Values.tls.enabled }}
{{- if .Values.tls.proxy.enabled }}
- name: proxy-certs
secret:
secretName: "{{ .Release.Name }}-{{ .Values.tls.proxy.cert_name }}"
items:
- key: tls.crt
path: tls.crt
- key: tls.key
path: tls.key
{{- if .Values.tls.zookeeper.enabled }}
- key: tls-combined.pem
path: tls-combined.pem
{{- end }}
{{- end }}
- name: ca
secret:
secretName: "{{ template "pulsar.certs.issuers.ca.secretName" . }}"
items:
- key: ca.crt
path: ca.crt
{{- end }}
{{- if .Values.tls.proxy.cacerts.enabled }}
- name: proxy-cacerts
emptyDir: {}
{{- range $cert := .Values.tls.proxy.cacerts.certs }}
- name: {{ $cert.name }}
secret:
secretName: "{{ $cert.existingSecret }}"
items:
{{- range $key := $cert.secretKeys }}
- key: {{ $key }}
path: {{ $key }}
{{- end }}
{{- end }}
- name: certs-scripts
configMap:
name: "{{ template "pulsar.fullname" . }}-certs-scripts"
defaultMode: 0755
{{- end }}
{{- end }}


@@ -36,7 +36,7 @@ Define toolset zookeeper client tls settings
*/}}
{{- define "pulsar.toolset.zookeeper.tls.settings" -}}
{{- if and .Values.tls.enabled .Values.tls.zookeeper.enabled -}}
/pulsar/keytool/keytool.sh toolset {{ template "pulsar.toolset.hostname" . }} true;
{{- include "pulsar.component.zookeeper.tls.settings" (dict "component" "toolset" "isClient" true "isCacerts" .Values.tls.toolset.cacerts.enabled) -}}
{{- end -}}
{{- end }}
@@ -44,18 +44,30 @@ Define toolset zookeeper client tls settings
Define toolset tls certs mounts
*/}}
{{- define "pulsar.toolset.certs.volumeMounts" -}}
{{- if and .Values.tls.enabled .Values.tls.zookeeper.enabled }}
{{- if .Values.tls.enabled }}
{{- if .Values.tls.zookeeper.enabled }}
- name: toolset-certs
mountPath: "/pulsar/certs/toolset"
readOnly: true
{{- end }}
- name: ca
mountPath: "/pulsar/certs/ca"
readOnly: true
{{- if .Values.tls.zookeeper.enabled }}
- name: keytool
mountPath: "/pulsar/keytool/keytool.sh"
subPath: keytool.sh
{{- end }}
{{- if .Values.tls.toolset.cacerts.enabled }}
- mountPath: "/pulsar/certs/cacerts"
name: toolset-cacerts
{{- range $cert := .Values.tls.toolset.cacerts.certs }}
- name: {{ $cert.name }}
mountPath: "/pulsar/certs/{{ $cert.name }}"
readOnly: true
{{- end }}
- name: certs-scripts
mountPath: "/pulsar/bin/certs-combine-pem.sh"
subPath: certs-combine-pem.sh
- name: certs-scripts
mountPath: "/pulsar/bin/certs-combine-pem-infinity.sh"
subPath: certs-combine-pem-infinity.sh
{{- end }}
{{- end }}
@@ -63,7 +75,8 @@ Define toolset tls certs mounts
Define toolset tls certs volumes
*/}}
{{- define "pulsar.toolset.certs.volumes" -}}
{{- if and .Values.tls.enabled .Values.tls.zookeeper.enabled }}
{{- if .Values.tls.enabled }}
{{- if .Values.tls.zookeeper.enabled }}
- name: toolset-certs
secret:
secretName: "{{ .Release.Name }}-{{ .Values.tls.toolset.cert_name }}"
@@ -72,17 +85,32 @@ Define toolset tls certs volumes
path: tls.crt
- key: tls.key
path: tls.key
- key: tls-combined.pem
path: tls-combined.pem
{{- end }}
- name: ca
secret:
secretName: "{{ .Release.Name }}-{{ .Values.tls.ca_suffix }}"
secretName: "{{ template "pulsar.certs.issuers.ca.secretName" . }}"
items:
- key: ca.crt
path: ca.crt
{{- if .Values.tls.zookeeper.enabled }}
- name: keytool
{{- end }}
{{- if .Values.tls.toolset.cacerts.enabled }}
- name: toolset-cacerts
emptyDir: {}
{{- range $cert := .Values.tls.toolset.cacerts.certs }}
- name: {{ $cert.name }}
secret:
secretName: "{{ $cert.existingSecret }}"
items:
{{- range $key := $cert.secretKeys }}
- key: {{ $key }}
path: {{ $key }}
{{- end }}
{{- end }}
- name: certs-scripts
configMap:
name: "{{ template "pulsar.fullname" . }}-keytool-configmap"
name: "{{ template "pulsar.fullname" . }}-certs-scripts"
defaultMode: 0755
{{- end }}
{{- end }}
{{- end }}


@@ -0,0 +1,37 @@
{{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}}
{{/*
Renders a value that contains a template, optionally within the given scope.
Usage:
{{ include "common.tplvalues.render" ( dict "value" .Values.path.to.the.Value "context" $ ) }}
{{ include "common.tplvalues.render" ( dict "value" .Values.path.to.the.Value "context" $ "scope" $app ) }}
*/}}
{{- define "common.tplvalues.render" -}}
{{- $value := typeIs "string" .value | ternary .value (.value | toYaml) }}
{{- if contains "{{" (toJson .value) }}
{{- if .scope }}
{{- tpl (cat "{{- with $.RelativeScope -}}" $value "{{- end }}") (merge (dict "RelativeScope" .scope) .context) }}
{{- else }}
{{- tpl $value .context }}
{{- end }}
{{- else }}
{{- $value }}
{{- end }}
{{- end -}}
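A hypothetical values entry that would be rendered through this helper; the template expression inside the string is evaluated against the chart context at render time (the key name and mount path are examples, not chart defaults):

```yaml
proxy:
  extraVolumeMounts:
    - name: custom-certs
      mountPath: "/pulsar/certs/{{ .Release.Name }}"
```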


@@ -0,0 +1,25 @@
{{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}}
{{/*
Check for the deprecated setting auth.authentication.provider (no longer supported since 4.1.0)
*/}}
{{- if (and .Values.auth.authentication.enabled (not (empty .Values.auth.authentication.provider))) }}
{{- fail "ERROR: Setting auth.authentication.provider is no longer supported. For details, see the migration guide in README.md." }}
{{- end }}


@@ -53,6 +53,93 @@ Define zookeeper tls settings
*/}}
{{- define "pulsar.zookeeper.tls.settings" -}}
{{- if and .Values.tls.enabled .Values.tls.zookeeper.enabled }}
/pulsar/keytool/keytool.sh zookeeper {{ template "pulsar.zookeeper.hostname" . }} false;
{{- include "pulsar.component.zookeeper.tls.settings" (dict "component" "zookeeper" "isClient" false "isCacerts" .Values.tls.zookeeper.cacerts.enabled) -}}
{{- end }}
{{- end }}
{{- define "pulsar.component.zookeeper.tls.settings" }}
{{- $component := .component -}}
{{- $isClient := .isClient -}}
{{- $keyFile := printf "/pulsar/certs/%s/tls-combined.pem" $component -}}
{{- $caFile := ternary "/pulsar/certs/cacerts/ca-combined.pem" "/pulsar/certs/ca/ca.crt" .isCacerts -}}
{{- if $isClient }}
echo $'\n' >> conf/pulsar_env.sh
echo "PULSAR_EXTRA_OPTS=\"\${PULSAR_EXTRA_OPTS} -Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty -Dzookeeper.client.secure=true -Dzookeeper.client.certReload=true -Dzookeeper.ssl.keyStore.location={{- $keyFile }} -Dzookeeper.ssl.keyStore.type=PEM -Dzookeeper.ssl.trustStore.location={{- $caFile }} -Dzookeeper.ssl.trustStore.type=PEM\"" >> conf/pulsar_env.sh
echo $'\n' >> conf/bkenv.sh
echo "BOOKIE_EXTRA_OPTS=\"\${BOOKIE_EXTRA_OPTS} -Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty -Dzookeeper.client.secure=true -Dzookeeper.client.certReload=true -Dzookeeper.ssl.keyStore.location={{- $keyFile }} -Dzookeeper.ssl.keyStore.type=PEM -Dzookeeper.ssl.trustStore.location={{- $caFile }} -Dzookeeper.ssl.trustStore.type=PEM\"" >> conf/bkenv.sh
{{- else }}
echo $'\n' >> conf/pulsar_env.sh
echo "PULSAR_EXTRA_OPTS=\"\${PULSAR_EXTRA_OPTS} -Dzookeeper.ssl.keyStore.location={{- $keyFile }} -Dzookeeper.ssl.keyStore.type=PEM -Dzookeeper.ssl.trustStore.location={{- $caFile }} -Dzookeeper.ssl.trustStore.type=PEM\"" >> conf/pulsar_env.sh
{{- end }}
{{- end }}
{{/*
Define zookeeper tls certs mounts
*/}}
{{- define "pulsar.zookeeper.certs.volumeMounts" -}}
{{- if and .Values.tls.enabled .Values.tls.zookeeper.enabled }}
- mountPath: "/pulsar/certs/zookeeper"
name: zookeeper-certs
readOnly: true
- mountPath: "/pulsar/certs/ca"
name: ca
readOnly: true
{{- end }}
{{- if .Values.tls.zookeeper.cacerts.enabled }}
- mountPath: "/pulsar/certs/cacerts"
name: zookeeper-cacerts
{{- range $cert := .Values.tls.zookeeper.cacerts.certs }}
- name: {{ $cert.name }}
mountPath: "/pulsar/certs/{{ $cert.name }}"
readOnly: true
{{- end }}
- name: certs-scripts
mountPath: "/pulsar/bin/certs-combine-pem.sh"
subPath: certs-combine-pem.sh
- name: certs-scripts
mountPath: "/pulsar/bin/certs-combine-pem-infinity.sh"
subPath: certs-combine-pem-infinity.sh
{{- end }}
{{- end }}
{{/*
Define zookeeper tls certs volumes
*/}}
{{- define "pulsar.zookeeper.certs.volumes" -}}
{{- if and .Values.tls.enabled .Values.tls.zookeeper.enabled }}
- name: zookeeper-certs
secret:
secretName: "{{ .Release.Name }}-{{ .Values.tls.zookeeper.cert_name }}"
items:
- key: tls.crt
path: tls.crt
- key: tls.key
path: tls.key
- key: tls-combined.pem
path: tls-combined.pem
- name: ca
secret:
secretName: "{{ template "pulsar.certs.issuers.ca.secretName" . }}"
items:
- key: ca.crt
path: ca.crt
{{- end }}
{{- if .Values.tls.zookeeper.cacerts.enabled }}
- name: zookeeper-cacerts
emptyDir: {}
{{- range $cert := .Values.tls.zookeeper.cacerts.certs }}
- name: {{ $cert.name }}
secret:
secretName: "{{ $cert.existingSecret }}"
items:
{{- range $key := $cert.secretKeys }}
- key: {{ $key }}
path: {{ $key }}
{{- end }}
{{- end }}
- name: certs-scripts
configMap:
name: "{{ template "pulsar.fullname" . }}-certs-scripts"
defaultMode: 0755
{{- end }}
{{- end }}


@@ -17,7 +17,7 @@
# under the License.
#
{{- if or .Values.components.autorecovery .Values.extra.autoRecovery }}
{{- if .Values.components.autorecovery }}
apiVersion: v1
kind: ConfigMap
metadata:


@@ -17,42 +17,7 @@
# under the License.
#
# deploy broker PodMonitor only when `$.Values.broker.podMonitor.enabled` is true
# deploy autorecovery PodMonitor only when `$.Values.autorecovery.podMonitor.enabled` is true
{{- if $.Values.autorecovery.podMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: {{ template "pulsar.name" . }}-recovery
labels:
app: {{ template "pulsar.name" . }}
chart: {{ template "pulsar.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
jobLabel: recovery
podMetricsEndpoints:
- port: http
path: /metrics
scheme: http
interval: {{ $.Values.autorecovery.podMonitor.interval }}
scrapeTimeout: {{ $.Values.autorecovery.podMonitor.scrapeTimeout }}
relabelings:
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- sourceLabels: [__meta_kubernetes_namespace]
action: replace
targetLabel: kubernetes_namespace
- sourceLabels: [__meta_kubernetes_pod_label_component]
action: replace
targetLabel: job
- sourceLabels: [__meta_kubernetes_pod_name]
action: replace
targetLabel: kubernetes_pod_name
{{- if $.Values.autorecovery.podMonitor.metricRelabelings }}
metricRelabelings: {{ toYaml $.Values.autorecovery.podMonitor.metricRelabelings | nindent 8 }}
{{- end }}
selector:
matchLabels:
{{- include "pulsar.matchLabels" . | nindent 6 }}
component: {{ .Values.autorecovery.component }}
{{- include "pulsar.podMonitor" (list . "autorecovery" (printf "component: %s" .Values.autorecovery.component)) }}
{{- end }}


@@ -1,85 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
{{- if and (semverCompare "<1.25-0" .Capabilities.KubeVersion.Version) .Values.rbac.enabled .Values.rbac.psp }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.autorecovery.component }}"
namespace: {{ template "pulsar.namespace" . }}
rules:
- apiGroups:
- policy
resourceNames:
- "{{ template "pulsar.fullname" . }}-{{ .Values.autorecovery.component }}"
resources:
- podsecuritypolicies
verbs:
- use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.autorecovery.component }}"
namespace: {{ template "pulsar.namespace" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: "{{ template "pulsar.fullname" . }}-{{ .Values.autorecovery.component }}"
subjects:
- kind: ServiceAccount
name: "{{ template "pulsar.fullname" . }}-{{ .Values.autorecovery.component }}"
namespace: {{ template "pulsar.namespace" . }}
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
{{- if .Values.rbac.limit_to_namespace }}
name: "{{ template "pulsar.fullname" . }}-{{ .Values.autorecovery.component }}-{{ template "pulsar.namespace" . }}"
{{- else}}
name: "{{ template "pulsar.fullname" . }}-{{ .Values.autorecovery.component }}"
{{- end}}
spec:
readOnlyRootFilesystem: false
privileged: false
allowPrivilegeEscalation: false
runAsUser:
rule: 'RunAsAny'
supplementalGroups:
ranges:
- max: 65535
min: 1
rule: MustRunAs
fsGroup:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
seLinux:
rule: 'RunAsAny'
volumes:
- configMap
- emptyDir
- projected
- secret
- downwardAPI
- persistentVolumeClaim
{{- end }}


@ -17,7 +17,7 @@
# under the License.
#
{{- if or .Values.components.autorecovery .Values.extra.autoRecovery }}
{{- if .Values.components.autorecovery }}
apiVersion: v1
kind: ServiceAccount
metadata:
@ -26,8 +26,8 @@ metadata:
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.autorecovery.component }}
annotations:
{{- with .Values.autorecovery.service_account.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
{{- end }}
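The `with` wrapper above fixes a subtle rendering bug: previously the `annotations:` key was always emitted, producing an empty mapping when no annotations were configured. After the change it renders only when values are present, e.g. (the annotation itself is a hypothetical example):

```yaml
autorecovery:
  service_account:
    annotations:
      # hypothetical example: bind the service account to an IAM role
      eks.amazonaws.com/role-arn: "arn:aws:iam::123456789012:role/pulsar-autorecovery"
```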


@ -17,7 +17,7 @@
# under the License.
#
{{- if or .Values.components.autorecovery .Values.extra.autoRecovery }}
{{- if .Values.components.autorecovery }}
apiVersion: v1
kind: Service
metadata:
@ -26,6 +26,10 @@ metadata:
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.autorecovery.component }}
{{- with .Values.autorecovery.service.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
ports:
- name: http


@ -17,12 +17,13 @@
# under the License.
#
{{- if or .Values.components.autorecovery .Values.extra.autoRecovery }}
{{- if .Values.components.autorecovery }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.autorecovery.component }}"
namespace: {{ template "pulsar.namespace" . }}
annotations: {{ .Values.autorecovery.appAnnotations | toYaml | nindent 4 }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.autorecovery.component }}
@ -43,8 +44,10 @@ spec:
{{- include "pulsar.template.labels" . | nindent 8 }}
component: {{ .Values.autorecovery.component }}
annotations:
{{- if not .Values.autorecovery.podMonitor.enabled }}
prometheus.io/scrape: "true"
prometheus.io/port: "{{ .Values.autorecovery.ports.http }}"
{{- end }}
{{- if .Values.autorecovery.restartPodsOnConfigMapChange }}
checksum/config: {{ include (print $.Template.BasePath "/autorecovery-configmap.yaml") . | sha256sum }}
{{- end }}
@ -61,6 +64,10 @@ spec:
{{- with .Values.autorecovery.tolerations }}
{{ toYaml . | indent 8 }}
{{- end }}
{{- end }}
{{- if .Values.autorecovery.topologySpreadConstraints }}
topologySpreadConstraints:
{{- toYaml .Values.autorecovery.topologySpreadConstraints | nindent 8 }}
{{- end }}
affinity:
{{- if and .Values.affinity.anti_affinity .Values.autorecovery.affinity.anti_affinity}}
@ -106,36 +113,57 @@ spec:
terminationGracePeriodSeconds: {{ .Values.autorecovery.gracePeriod }}
serviceAccountName: "{{ template "pulsar.fullname" . }}-{{ .Values.autorecovery.component }}"
initContainers:
{{- if .Values.tls.autorecovery.cacerts.enabled }}
- name: cacerts
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.autorecovery "root" .) }}"
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.autorecovery "root" .) }}"
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["sh", "-c"]
args:
- |
bin/certs-combine-pem.sh /pulsar/certs/cacerts/ca-combined.pem {{ template "pulsar.certs.cacerts" (dict "certs" .Values.tls.autorecovery.cacerts.certs) }}
volumeMounts:
{{- include "pulsar.autorecovery.certs.volumeMounts" . | nindent 8 }}
{{- end }}
{{- if and .Values.autorecovery.waitBookkeeperTimeout (gt (.Values.autorecovery.waitBookkeeperTimeout | int) 0) }}
# This initContainer waits for bookkeeper initnewcluster to complete
# before starting the autorecovery process
- name: pulsar-bookkeeper-verify-clusterid
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.autorecovery "root" .) }}"
imagePullPolicy: {{ .Values.images.autorecovery.pullPolicy }}
resources: {{ toYaml .Values.initContainer_resources.verify_cluster_id | nindent 10 }}
command: ["sh", "-c"]
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.autorecovery "root" .) }}"
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["timeout", "{{ .Values.autorecovery.waitBookkeeperTimeout }}", "sh", "-c"]
args:
- >
- |
{{- include "pulsar.autorecovery.init.verify_cluster_id" . | nindent 10 }}
envFrom:
- configMapRef:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.autorecovery.component }}"
volumeMounts:
{{- if .Values.autorecovery.extraVolumeMounts }}
{{ toYaml .Values.autorecovery.extraVolumeMounts | indent 8 }}
{{- end }}
{{- include "pulsar.autorecovery.certs.volumeMounts" . | nindent 8 }}
{{- end }}
{{- if .Values.autorecovery.initContainers }}
{{- toYaml .Values.autorecovery.initContainers | nindent 6 }}
{{- end }}
containers:
- name: "{{ template "pulsar.fullname" . }}-{{ .Values.autorecovery.component }}"
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.autorecovery "root" .) }}"
imagePullPolicy: {{ .Values.images.autorecovery.pullPolicy }}
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.autorecovery "root" .) }}"
{{- if .Values.autorecovery.resources }}
resources:
{{ toYaml .Values.autorecovery.resources | indent 10 }}
{{- end }}
{{- if and (semverCompare "<1.25-0" .Capabilities.KubeVersion.Version) .Values.rbac.enabled .Values.rbac.psp }}
securityContext:
readOnlyRootFilesystem: false
{{- end}}
command: ["sh", "-c"]
args:
- >
- |
{{- if .Values.tls.autorecovery.cacerts.enabled }}
cd /pulsar/certs/cacerts;
nohup /pulsar/bin/certs-combine-pem-infinity.sh /pulsar/certs/cacerts/ca-combined.pem {{ template "pulsar.certs.cacerts" (dict "certs" .Values.tls.autorecovery.cacerts.certs) }} > /pulsar/certs/cacerts/certs-combine-pem-infinity.log 2>&1 &
cd /pulsar;
{{- end }}
bin/apply-config-from-env.py conf/bookkeeper.conf;
{{- include "pulsar.autorecovery.zookeeper.tls.settings" . | nindent 10 }}
OPTS="${OPTS} -Dlog4j2.formatMsgNoLookups=true" exec bin/bookkeeper autorecovery
@ -149,6 +177,9 @@ spec:
{{- include "pulsar.autorecovery.certs.volumeMounts" . | nindent 8 }}
volumes:
{{- include "pulsar.autorecovery.certs.volumes" . | nindent 6 }}
{{- if .Values.autorecovery.extraVolumes }}
{{ toYaml .Values.autorecovery.extraVolumes | indent 6 }}
{{- end }}
{{- include "pulsar.imagePullSecrets" . | nindent 6}}
{{- end }}
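Both the `cacerts` init container and the backgrounded `certs-combine-pem-infinity.sh` watcher above derive their PEM list from `tls.autorecovery.cacerts.certs` via the chart's `pulsar.certs.cacerts` helper, writing the merged bundle to `/pulsar/certs/cacerts/ca-combined.pem`. A hedged values sketch to enable the feature (the exact schema of `certs` is defined by that helper and is not shown in this diff, so it is left empty here):

```yaml
tls:
  autorecovery:
    cacerts:
      enabled: true
      certs: []   # entries follow the schema of the chart's `pulsar.certs.cacerts` helper
```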


@ -16,7 +16,7 @@
# specific language governing permissions and limitations
# under the License.
#
{{- if or .Release.IsInstall .Values.initialize }}
{{- if or (and .Values.useReleaseStatus .Release.IsInstall) .Values.initialize }}
{{- if .Values.components.bookkeeper }}
apiVersion: batch/v1
kind: Job
@ -29,32 +29,49 @@ metadata:
spec:
# This feature was previously behind a feature gate for several Kubernetes versions and will default to true in 1.23 and beyond
# https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/
{{- if .Values.job.ttl.enabled }}
ttlSecondsAfterFinished: {{ .Values.job.ttl.secondsAfterFinished }}
{{- if and .Values.job.ttl.enabled (semverCompare ">=1.23-0" .Capabilities.KubeVersion.Version) }}
ttlSecondsAfterFinished: {{ .Values.job.ttl.secondsAfterFinished | default 600 }}
{{- end }}
template:
metadata:
labels:
{{- include "pulsar.template.labels" . | nindent 8 }}
component: {{ .Values.bookkeeper.component }}-init
spec:
{{- include "pulsar.imagePullSecrets" . | nindent 6 }}
serviceAccountName: "{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}"
nodeSelector:
{{- if .Values.pulsar_metadata.nodeSelector }}
nodeSelector:
{{ toYaml .Values.pulsar_metadata.nodeSelector | indent 8 }}
{{- end }}
{{- with .Values.pulsar_metadata.tolerations }}
tolerations:
{{- if .Values.pulsar_metadata.tolerations }}
{{ toYaml .Values.pulsar_metadata.tolerations | indent 8 }}
{{- end }}
initContainers:
- name: wait-zookeeper-ready
{{- if .Values.tls.bookie.cacerts.enabled }}
- name: cacerts
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.bookie "root" .) }}"
imagePullPolicy: {{ .Values.images.bookie.pullPolicy }}
resources: {{ toYaml .Values.initContainer_resources.zookeeper_ready | nindent 10 }}
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.bookie "root" .) }}"
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["sh", "-c"]
args:
- >-
- |
bin/certs-combine-pem.sh /pulsar/certs/cacerts/ca-combined.pem {{ template "pulsar.certs.cacerts" (dict "certs" .Values.tls.bookie.cacerts.certs) }}
volumeMounts:
{{- include "pulsar.toolset.certs.volumeMounts" . | nindent 8 }}
{{- end }}
{{- if and .Values.components.zookeeper .Values.bookkeeper.metadata.waitZookeeperTimeout (gt (.Values.bookkeeper.metadata.waitZookeeperTimeout | int) 0) }}
- name: wait-zookeeper-ready
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.bookie "root" .) }}"
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.bookie "root" .) }}"
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["timeout", "{{ .Values.bookkeeper.metadata.waitZookeeperTimeout }}", "sh", "-c"]
args:
- |
{{- if $zk:=.Values.pulsar_metadata.userProvidedZookeepers }}
export PULSAR_MEM="-Xmx128M";
until bin/pulsar zookeeper-shell -server {{ $zk }} ls {{ or .Values.metadataPrefix "/" }}; do
until timeout 15 bin/pulsar zookeeper-shell -server {{ $zk }} ls {{ or .Values.metadataPrefix "/" }}; do
echo "user provided zookeepers {{ $zk }} are unreachable... check in 3 seconds ..." && sleep 3;
done;
{{ else }}
@ -62,35 +79,44 @@ spec:
sleep 3;
done;
{{- end}}
{{- end}}
{{- if and .Values.components.oxia .Values.bookkeeper.metadata.waitOxiaTimeout (gt (.Values.bookkeeper.metadata.waitOxiaTimeout | int) 0) }}
- name: wait-oxia-ready
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.bookie "root" .) }}"
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.bookie "root" .) }}"
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["timeout", "{{ .Values.bookkeeper.metadata.waitOxiaTimeout }}", "sh", "-c"]
args:
- |
until nslookup {{ template "pulsar.oxia.server.service" . }}; do
sleep 3;
done;
{{- end }}
containers:
- name: "{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}-init"
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.bookie "root" .) }}"
imagePullPolicy: {{ .Values.images.bookie.pullPolicy }}
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.bookie "root" .) }}"
{{- if .Values.bookkeeper.metadata.resources }}
resources:
{{ toYaml .Values.bookkeeper.metadata.resources | indent 10 }}
{{- end }}
command: ["sh", "-c"]
command: ["timeout", "{{ .Values.bookkeeper.metadata.initTimeout | default 60 }}", "sh", "-c"]
args:
- >
- |
bin/apply-config-from-env.py conf/bookkeeper.conf;
{{- include "pulsar.toolset.zookeeper.tls.settings" . | nindent 12 }}
export BOOKIE_MEM="-Xmx128M";
if bin/bookkeeper shell whatisinstanceid; then
if timeout 15 bin/bookkeeper shell whatisinstanceid; then
echo "bookkeeper cluster already initialized";
else
{{- if not (eq .Values.metadataPrefix "") }}
bin/bookkeeper org.apache.zookeeper.ZooKeeperMain -server {{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }} create {{ .Values.metadataPrefix }} && echo 'created for pulsar cluster "{{ template "pulsar.cluster.name" . }}"' &&
{{- if and .Values.components.zookeeper (not (eq .Values.metadataPrefix "")) }}
bin/pulsar zookeeper-shell -server {{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }} create {{ .Values.metadataPrefix }} && echo 'created for pulsar cluster "{{ template "pulsar.cluster.name" . }}"' &&
{{- end }}
bin/bookkeeper shell initnewcluster;
fi
{{- if .Values.extraInitCommand }}
{{ .Values.extraInitCommand }}
{{- end }}
{{- if and (semverCompare "<1.25-0" .Capabilities.KubeVersion.Version) .Values.rbac.enabled .Values.rbac.psp }}
securityContext:
readOnlyRootFilesystem: false
{{- end }}
envFrom:
- configMapRef:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}"
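The init job above now bounds each wait and the metadata initialization itself with `timeout`, controlled per phase from values. A values sketch (key names taken from the template conditions above; a value of 0 skips the corresponding wait container, and the durations shown are illustrative):

```yaml
bookkeeper:
  metadata:
    waitZookeeperTimeout: 600   # seconds passed to `timeout` for the ZooKeeper wait
    waitOxiaTimeout: 600        # only rendered when components.oxia is enabled
    initTimeout: 60             # bounds the metadata-init container (default 60)
```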


@ -19,40 +19,5 @@
# deploy bookkeeper PodMonitor only when `$.Values.bookkeeper.podMonitor.enabled` is true
{{- if $.Values.bookkeeper.podMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: {{ template "pulsar.fullname" . }}-bookie
labels:
app: {{ template "pulsar.name" . }}
chart: {{ template "pulsar.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
jobLabel: bookie
podMetricsEndpoints:
- port: http
path: /metrics
scheme: http
interval: {{ $.Values.bookkeeper.podMonitor.interval }}
scrapeTimeout: {{ $.Values.bookkeeper.podMonitor.scrapeTimeout }}
relabelings:
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- sourceLabels: [__meta_kubernetes_namespace]
action: replace
targetLabel: kubernetes_namespace
- sourceLabels: [__meta_kubernetes_pod_label_component]
action: replace
targetLabel: job
- sourceLabels: [__meta_kubernetes_pod_name]
action: replace
targetLabel: kubernetes_pod_name
{{- if $.Values.bookkeeper.podMonitor.metricRelabelings }}
metricRelabelings: {{ toYaml $.Values.bookkeeper.podMonitor.metricRelabelings | nindent 8 }}
{{- end }}
selector:
matchLabels:
{{- include "pulsar.matchLabels" . | nindent 6 }}
component: bookie
{{- include "pulsar.podMonitor" (list . "bookkeeper" (printf "component: %s" .Values.bookkeeper.component)) }}
{{- end }}


@ -1,85 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
{{- if and (semverCompare "<1.25-0" .Capabilities.KubeVersion.Version) .Values.rbac.enabled .Values.rbac.psp }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}"
namespace: {{ template "pulsar.namespace" . }}
rules:
- apiGroups:
- policy
resourceNames:
- "{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}"
resources:
- podsecuritypolicies
verbs:
- use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}"
namespace: {{ template "pulsar.namespace" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: "{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}"
subjects:
- kind: ServiceAccount
name: "{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}"
namespace: {{ template "pulsar.namespace" . }}
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
{{- if .Values.rbac.limit_to_namespace }}
name: "{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}-{{ template "pulsar.namespace" . }}"
{{- else}}
name: "{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}"
{{- end}}
spec:
readOnlyRootFilesystem: false
privileged: false
allowPrivilegeEscalation: false
runAsUser:
rule: 'RunAsAny'
supplementalGroups:
ranges:
- max: 65535
min: 1
rule: MustRunAs
fsGroup:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
seLinux:
rule: 'RunAsAny'
volumes:
- configMap
- emptyDir
- projected
- secret
- downwardAPI
- persistentVolumeClaim
{{- end}}


@ -26,8 +26,8 @@ metadata:
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.bookkeeper.component }}
annotations:
{{- with .Values.bookkeeper.service_account.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
{{- end }}


@ -26,9 +26,9 @@ metadata:
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.bookkeeper.component }}
{{- if .Values.bookkeeper.service.annotations }}
{{- with .Values.bookkeeper.service.annotations }}
annotations:
{{ toYaml .Values.bookkeeper.service.annotations | indent 4 }}
{{ toYaml . | indent 4 }}
{{- end }}
spec:
ports:


@ -23,6 +23,7 @@ kind: StatefulSet
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}"
namespace: {{ template "pulsar.namespace" . }}
annotations: {{ .Values.bookkeeper.appAnnotations | toYaml | nindent 4 }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.bookkeeper.component }}
@ -42,8 +43,10 @@ spec:
{{- include "pulsar.template.labels" . | nindent 8 }}
component: {{ .Values.bookkeeper.component }}
annotations:
{{- if not .Values.bookkeeper.podMonitor.enabled }}
prometheus.io/scrape: "true"
prometheus.io/port: "{{ .Values.bookkeeper.ports.http }}"
{{- end }}
{{- if .Values.bookkeeper.restartPodsOnConfigMapChange }}
checksum/config: {{ include (print $.Template.BasePath "/bookkeeper-configmap.yaml") . | sha256sum }}
{{- end }}
@ -58,11 +61,15 @@ spec:
{{- if .Values.bookkeeper.tolerations }}
tolerations:
{{ toYaml .Values.bookkeeper.tolerations | indent 8 }}
{{- end }}
{{- if .Values.bookkeeper.topologySpreadConstraints }}
topologySpreadConstraints:
{{- toYaml .Values.bookkeeper.topologySpreadConstraints | nindent 8 }}
{{- end }}
affinity:
{{- if and .Values.affinity.anti_affinity .Values.bookkeeper.affinity.anti_affinity}}
podAntiAffinity:
{{ if eq .Values.bookkeeper.affinity.type "requiredDuringSchedulingIgnoredDuringExecution"}}
{{- if eq .Values.bookkeeper.affinity.type "requiredDuringSchedulingIgnoredDuringExecution"}}
{{ .Values.bookkeeper.affinity.type }}:
- labelSelector:
matchExpressions:
@ -79,7 +86,7 @@ spec:
values:
- {{ .Values.bookkeeper.component }}
topologyKey: {{ .Values.bookkeeper.affinity.anti_affinity_topology_key }}
{{ else }}
{{- else }}
{{ .Values.bookkeeper.affinity.type }}:
- weight: 100
podAffinityTerm:
@ -106,31 +113,44 @@ spec:
securityContext:
{{ toYaml .Values.bookkeeper.securityContext | indent 8 }}
{{- end }}
{{- if and .Values.bookkeeper.waitMetadataTimeout (gt (.Values.bookkeeper.waitMetadataTimeout | int) 0) }}
initContainers:
{{- if .Values.tls.bookie.cacerts.enabled }}
- name: cacerts
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.bookie "root" .) }}"
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.bookie "root" .) }}"
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["sh", "-c"]
args:
- |
bin/certs-combine-pem.sh /pulsar/certs/cacerts/ca-combined.pem {{ template "pulsar.certs.cacerts" (dict "certs" .Values.tls.bookie.cacerts.certs) }}
volumeMounts:
{{- include "pulsar.bookkeeper.certs.volumeMounts" . | nindent 8 }}
{{- end }}
# This initContainer will wait for bookkeeper initnewcluster to complete
# before deploying the bookies
- name: pulsar-bookkeeper-verify-clusterid
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.bookie "root" .) }}"
imagePullPolicy: {{ .Values.images.bookie.pullPolicy }}
resources: {{ toYaml .Values.initContainer_resources.verify_cluster_id | nindent 10 }}
command: ["sh", "-c"]
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.bookie "root" .) }}"
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["timeout", "{{ .Values.bookkeeper.waitMetadataTimeout }}", "sh", "-c"]
args:
# only reformat bookie if bookkeeper is running without persistence
- >
- |
{{- include "pulsar.bookkeeper.init.verify_cluster_id" . | nindent 10 }}
envFrom:
- configMapRef:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}"
{{- if and (semverCompare "<1.25-0" .Capabilities.KubeVersion.Version) .Values.rbac.enabled .Values.rbac.psp }}
securityContext:
readOnlyRootFilesystem: false
{{- end}}
volumeMounts:
{{- include "pulsar.bookkeeper.certs.volumeMounts" . | nindent 8 }}
{{- end }}
{{- if .Values.bookkeeper.initContainers }}
{{- toYaml .Values.bookkeeper.initContainers | nindent 6 }}
{{- end }}
containers:
- name: "{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}"
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.bookie "root" .) }}"
imagePullPolicy: {{ .Values.images.bookie.pullPolicy }}
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.bookie "root" .) }}"
{{- if .Values.bookkeeper.probe.liveness.enabled }}
livenessProbe:
httpGet:
@ -167,17 +187,34 @@ spec:
{{- end }}
command: ["sh", "-c"]
args:
- >
- |
# set required environment variables to use rocksdb config files provided in the Pulsar image
export PULSAR_PREFIX_defaultRocksdbConf=${PULSAR_PREFIX_defaultRocksdbConf:-conf/default_rocksdb.conf}
export PULSAR_PREFIX_entryLocationRocksdbConf=${PULSAR_PREFIX_entryLocationRocksdbConf:-conf/entry_location_rocksdb.conf}
export PULSAR_PREFIX_ledgerMetadataRocksdbConf=${PULSAR_PREFIX_ledgerMetadataRocksdbConf:-conf/ledger_metadata_rocksdb.conf}
if [ -x bin/update-rocksdb-conf-from-env.py ] && [ -f "${PULSAR_PREFIX_entryLocationRocksdbConf}" ]; then
echo "Updating ${PULSAR_PREFIX_entryLocationRocksdbConf} from environment variables starting with dbStorage_rocksDB_*"
bin/update-rocksdb-conf-from-env.py "${PULSAR_PREFIX_entryLocationRocksdbConf}"
else
# Ensure that Bookkeeper will not load RocksDB config from existing files and will fall back to the default RocksDB config
# See https://github.com/apache/bookkeeper/pull/3523 as reference
export PULSAR_PREFIX_defaultRocksdbConf=conf/non_existing_default_rocksdb.conf
export PULSAR_PREFIX_entryLocationRocksdbConf=conf/non_existing_entry_location_rocksdb.conf
export PULSAR_PREFIX_ledgerMetadataRocksdbConf=conf/non_existing_ledger_metadata_rocksdb.conf
# Ensure that Bookkeeper will use RocksDB format_version 5 (this currently applies only to the entry location rocksdb due to a bug in Bookkeeper)
export PULSAR_PREFIX_dbStorage_rocksDB_format_version=${PULSAR_PREFIX_dbStorage_rocksDB_format_version:-5}
fi
{{- if .Values.bookkeeper.additionalCommand }}
{{ .Values.bookkeeper.additionalCommand }}
{{- end }}
{{- if .Values.tls.bookie.cacerts.enabled }}
cd /pulsar/certs/cacerts;
nohup /pulsar/bin/certs-combine-pem-infinity.sh /pulsar/certs/cacerts/ca-combined.pem {{ template "pulsar.certs.cacerts" (dict "certs" .Values.tls.bookie.cacerts.certs) }} > /pulsar/certs/cacerts/certs-combine-pem-infinity.log 2>&1 &
cd /pulsar;
{{- end }}
bin/apply-config-from-env.py conf/bookkeeper.conf;
{{- include "pulsar.bookkeeper.zookeeper.tls.settings" . | nindent 10 }}
OPTS="${OPTS} -Dlog4j2.formatMsgNoLookups=true" exec bin/pulsar bookie;
{{- if and (semverCompare "<1.25-0" .Capabilities.KubeVersion.Version) .Values.rbac.enabled .Values.rbac.psp }}
securityContext:
readOnlyRootFilesystem: false
{{- end}}
ports:
- name: "{{ .Values.tcpPrefix }}bookie"
containerPort: {{ .Values.bookkeeper.ports.bookie }}
@ -226,10 +263,10 @@ spec:
emptyDir: {}
{{- end }}
{{- include "pulsar.bookkeeper.certs.volumes" . | nindent 6 }}
{{- include "pulsar.imagePullSecrets" . | nindent 6}}
{{- if .Values.bookkeeper.extraVolumes }}
{{ toYaml .Values.bookkeeper.extraVolumes | indent 6 }}
{{- end }}
{{- include "pulsar.imagePullSecrets" . | nindent 6}}
{{- if and (and .Values.persistence .Values.volumes.persistence) .Values.bookkeeper.volumes.persistence}}
volumeClaimTemplates:
{{- if .Values.bookkeeper.volumes.useSingleCommonVolume }}


@ -63,12 +63,22 @@ rules:
resources:
- configmaps
verbs: ["get", "list", "watch"]
- apiGroups: ["", "extensions", "apps"]
- apiGroups: [""]
resources:
- pods
- services
- deployments
- secrets
verbs:
- list
- watch
- get
- update
- create
- delete
- patch
- apiGroups: ["apps"]
resources:
- deployments
- statefulsets
verbs:
- list


@ -28,27 +28,130 @@ metadata:
component: {{ .Values.broker.component }}
data:
# Metadata settings
zookeeperServers: "{{ template "pulsar.zookeeper.connect" . }}{{ .Values.metadataPrefix }}"
{{- if .Values.components.zookeeper }}
metadataStoreUrl: "zk:{{ template "pulsar.zookeeper.connect" . }}{{ .Values.metadataPrefix }}"
{{- $configMetadataStoreUrl := "" }}
{{- if .Values.pulsar_metadata.configurationStore }}
configurationStoreServers: "{{ template "pulsar.configurationStore.connect" . }}{{ .Values.pulsar_metadata.configurationStoreMetadataPrefix }}"
{{- $configMetadataStoreUrl = printf "zk:%s%s" (include "pulsar.configurationStore.connect" .) .Values.pulsar_metadata.configurationStoreMetadataPrefix }}
{{- else }}
{{- $configMetadataStoreUrl = printf "zk:%s%s" (include "pulsar.zookeeper.connect" .) .Values.metadataPrefix }}
{{- end }}
{{- if not .Values.pulsar_metadata.configurationStore }}
configurationStoreServers: "{{ template "pulsar.zookeeper.connect" . }}{{ .Values.metadataPrefix }}"
configurationMetadataStoreUrl: "{{ $configMetadataStoreUrl }}"
{{- if .Values.pulsar_metadata.bookkeeper.usePulsarMetadataClientDriver }}
bookkeeperMetadataServiceUri: "metadata-store:{{ $configMetadataStoreUrl }}/ledgers"
{{- else }}
bookkeeperMetadataServiceUri: "zk+hierarchical://{{ template "pulsar.zookeeper.connect" . }}{{ .Values.metadataPrefix }}/ledgers"
{{- end }}
{{- end }}
{{- if .Values.components.oxia }}
metadataStoreUrl: "{{ template "pulsar.oxia.metadata.url.broker" . }}"
configurationMetadataStoreUrl: "{{ template "pulsar.oxia.metadata.url.broker" . }}"
bookkeeperMetadataServiceUri: "{{ template "pulsar.oxia.metadata.url.bookkeeper" . }}"
{{- end }}
{{- if hasKey .Values.pulsar_metadata "metadataStoreAllowReadOnlyOperations" }}
PULSAR_PREFIX_metadataStoreAllowReadOnlyOperations: "{{ .Values.pulsar_metadata.metadataStoreAllowReadOnlyOperations }}"
{{- end }}
{{- if hasKey .Values.pulsar_metadata "metadataStoreSessionTimeoutMillis" }}
metadataStoreSessionTimeoutMillis: "{{ .Values.pulsar_metadata.metadataStoreSessionTimeoutMillis }}"
{{- end }}
{{- if hasKey .Values.pulsar_metadata "metadataStoreOperationTimeoutSeconds" }}
metadataStoreOperationTimeoutSeconds: "{{ .Values.pulsar_metadata.metadataStoreOperationTimeoutSeconds }}"
{{- end }}
{{- if hasKey .Values.pulsar_metadata "metadataStoreCacheExpirySeconds" }}
metadataStoreCacheExpirySeconds: "{{ .Values.pulsar_metadata.metadataStoreCacheExpirySeconds }}"
{{- end }}
{{- if hasKey .Values.pulsar_metadata "metadataStoreBatchingEnabled" }}
metadataStoreBatchingEnabled: "{{ .Values.pulsar_metadata.metadataStoreBatchingEnabled }}"
{{- end }}
{{- if hasKey .Values.pulsar_metadata "metadataStoreBatchingMaxDelayMillis" }}
metadataStoreBatchingMaxDelayMillis: "{{ .Values.pulsar_metadata.metadataStoreBatchingMaxDelayMillis }}"
{{- end }}
{{- if hasKey .Values.pulsar_metadata "metadataStoreBatchingMaxOperations" }}
metadataStoreBatchingMaxOperations: "{{ .Values.pulsar_metadata.metadataStoreBatchingMaxOperations }}"
{{- end }}
{{- if hasKey .Values.pulsar_metadata "metadataStoreBatchingMaxSizeKb" }}
metadataStoreBatchingMaxSizeKb: "{{ .Values.pulsar_metadata.metadataStoreBatchingMaxSizeKb }}"
{{- end }}
# Broker settings
clusterName: {{ template "pulsar.cluster.name" . }}
# Enable all metrics by default
exposeTopicLevelMetricsInPrometheus: "true"
exposeConsumerLevelMetricsInPrometheus: "true"
exposeProducerLevelMetricsInPrometheus: "true"
exposeManagedLedgerMetricsInPrometheus: "true"
exposeManagedCursorMetricsInPrometheus: "true"
exposeBundlesMetricsInPrometheus: "true"
exposePublisherStats: "true"
exposePreciseBacklogInPrometheus: "true"
replicationMetricsEnabled: "true"
splitTopicAndPartitionLabelInPrometheus: "true"
aggregatePublisherStatsByProducerName: "true"
bookkeeperClientExposeStatsToPrometheus: "true"
numHttpServerThreads: "8"
zooKeeperSessionTimeoutMillis: "30000"
statusFilePath: "{{ template "pulsar.home" . }}/status"
statusFilePath: "{{ template "pulsar.home" . }}/logs/status"
# Tiered storage settings
{{- if .Values.broker.storageOffload.driver }}
{{- if eq .Values.broker.storageOffload.driver "aws-s3" }}
managedLedgerOffloadDriver: "{{ .Values.broker.storageOffload.driver }}"
s3ManagedLedgerOffloadBucket: "{{ .Values.broker.storageOffload.bucket }}"
s3ManagedLedgerOffloadRegion: "{{ .Values.broker.storageOffload.region }}"
{{- if .Values.broker.storageOffload.managedLedgerOffloadAutoTriggerSizeThresholdBytes }}
PULSAR_PREFIX_managedLedgerOffloadThresholdInBytes: "{{ .Values.broker.storageOffload.managedLedgerOffloadAutoTriggerSizeThresholdBytes }}"
{{- end }}
{{- if .Values.broker.storageOffload.managedLedgerOffloadDeletionLagMs }}
PULSAR_PREFIX_managedLedgerOffloadDeletionLagInMillis: "{{ .Values.broker.storageOffload.managedLedgerOffloadDeletionLagMs }}"
{{- end }}
{{- if .Values.broker.storageOffload.maxBlockSizeInBytes }}
s3ManagedLedgerOffloadMaxBlockSizeInBytes: "{{ .Values.broker.storageOffload.maxBlockSizeInBytes }}"
{{- end }}
{{- if .Values.broker.storageOffload.readBufferSizeInBytes }}
s3ManagedLedgerOffloadReadBufferSizeInBytes: "{{ .Values.broker.storageOffload.readBufferSizeInBytes }}"
{{- end }}
{{- end }}
{{- if eq .Values.broker.storageOffload.driver "google-cloud-storage" }}
managedLedgerOffloadDriver: "{{ .Values.broker.storageOffload.driver }}"
gcsManagedLedgerOffloadBucket: "{{ .Values.broker.storageOffload.bucket }}"
gcsManagedLedgerOffloadRegion: "{{ .Values.broker.storageOffload.region }}"
gcsManagedLedgerOffloadServiceAccountKeyFile: "/pulsar/gcp-service-account/{{ .Values.broker.storageOffload.gcsServiceAccountJsonFile }}"
{{- if .Values.broker.storageOffload.managedLedgerOffloadAutoTriggerSizeThresholdBytes }}
PULSAR_PREFIX_managedLedgerOffloadThresholdInBytes: "{{ .Values.broker.storageOffload.managedLedgerOffloadAutoTriggerSizeThresholdBytes }}"
{{- end }}
{{- if .Values.broker.storageOffload.managedLedgerOffloadDeletionLagMs }}
PULSAR_PREFIX_managedLedgerOffloadDeletionLagInMillis: "{{ .Values.broker.storageOffload.managedLedgerOffloadDeletionLagMs }}"
{{- end }}
{{- if .Values.broker.storageOffload.maxBlockSizeInBytes }}
gcsManagedLedgerOffloadMaxBlockSizeInBytes: "{{ .Values.broker.storageOffload.maxBlockSizeInBytes }}"
{{- end }}
{{- if .Values.broker.storageOffload.readBufferSizeInBytes }}
gcsManagedLedgerOffloadReadBufferSizeInBytes: "{{ .Values.broker.storageOffload.readBufferSizeInBytes }}"
{{- end }}
{{- end }}
{{- if eq .Values.broker.storageOffload.driver "azureblob" }}
managedLedgerOffloadDriver: "{{ .Values.broker.storageOffload.driver }}"
managedLedgerOffloadBucket: "{{ .Values.broker.storageOffload.bucket }}"
{{- if .Values.broker.storageOffload.managedLedgerOffloadAutoTriggerSizeThresholdBytes }}
PULSAR_PREFIX_managedLedgerOffloadThresholdInBytes: "{{ .Values.broker.storageOffload.managedLedgerOffloadAutoTriggerSizeThresholdBytes }}"
{{- end }}
{{- if .Values.broker.storageOffload.managedLedgerOffloadDeletionLagMs }}
PULSAR_PREFIX_managedLedgerOffloadDeletionLagInMillis: "{{ .Values.broker.storageOffload.managedLedgerOffloadDeletionLagMs }}"
{{- end }}
{{- if .Values.broker.storageOffload.maxBlockSizeInBytes }}
managedLedgerOffloadMaxBlockSizeInBytes: "{{ .Values.broker.storageOffload.maxBlockSizeInBytes }}"
{{- end }}
{{- end }}
{{- end }}
# Function Worker Settings
# function worker configuration
{{- if not (or .Values.components.functions .Values.extra.functionsAsPods) }}
{{- if not .Values.components.functions }}
functionsWorkerEnabled: "false"
{{- end }}
{{- if or .Values.components.functions .Values.extra.functionsAsPods }}
{{- if .Values.components.functions }}
functionsWorkerEnabled: "true"
{{- if .Values.functions.useBookieAsStateStore }}
PF_stateStorageServiceUrl: "bk://{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}:{{ .Values.bookkeeper.ports.statestore }}"
@@ -62,36 +165,32 @@ data:
PF_functionRuntimeFactoryConfigs_pulsarRootDir: {{ template "pulsar.home" . }}
PF_kubernetesContainerFactory_pulsarRootDir: {{ template "pulsar.home" . }}
PF_functionRuntimeFactoryConfigs_pulsarDockerImageName: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.functions "root" .) }}"
PF_functionRuntimeFactoryConfigs_imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.functions "root" .) }}"
PF_functionRuntimeFactoryConfigs_submittingInsidePod: "true"
PF_functionRuntimeFactoryConfigs_installUserCodeDependencies: "true"
PF_functionRuntimeFactoryConfigs_jobNamespace: {{ template "pulsar.namespace" . }}
PF_functionRuntimeFactoryConfigs_expectedMetricsCollectionInterval: "30"
{{- if not (and .Values.tls.enabled .Values.tls.broker.enabled) }}
{{- if not (and .Values.tls.enabled .Values.tls.broker.enabled .Values.tls.function_instance.enabled) }}
PF_functionRuntimeFactoryConfigs_pulsarAdminUrl: "http://{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}:{{ .Values.broker.ports.http }}/"
PF_functionRuntimeFactoryConfigs_pulsarServiceUrl: "pulsar://{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}:{{ .Values.broker.ports.pulsar }}/"
{{- end }}
{{- if and .Values.tls.enabled .Values.tls.broker.enabled }}
{{- else }}
PF_functionRuntimeFactoryConfigs_pulsarAdminUrl: "https://{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}:{{ .Values.broker.ports.https }}/"
PF_functionRuntimeFactoryConfigs_pulsarServiceUrl: "pulsar+ssl://{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}:{{ .Values.broker.ports.pulsarssl }}/"
{{- end }}
PF_functionRuntimeFactoryConfigs_changeConfigMap: "{{ template "pulsar.fullname" . }}-{{ .Values.functions.component }}-config"
PF_functionRuntimeFactoryConfigs_changeConfigMapNamespace: {{ template "pulsar.namespace" . }}
# support version < 2.5.0
PF_kubernetesContainerFactory_pulsarDockerImageName: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.functions "root" .) }}"
PF_kubernetesContainerFactory_imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.functions "root" .) }}"
PF_kubernetesContainerFactory_submittingInsidePod: "true"
PF_kubernetesContainerFactory_installUserCodeDependencies: "true"
PF_kubernetesContainerFactory_jobNamespace: {{ template "pulsar.namespace" . }}
PF_kubernetesContainerFactory_expectedMetricsCollectionInterval: "30"
{{- if not (and .Values.tls.enabled .Values.tls.broker.enabled) }}
{{- if not (and .Values.tls.enabled .Values.tls.broker.enabled .Values.tls.function_instance.enabled) }}
PF_kubernetesContainerFactory_pulsarAdminUrl: "http://{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}:{{ .Values.broker.ports.http }}/"
PF_kubernetesContainerFactory_pulsarServiceUrl: "pulsar://{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}:{{ .Values.broker.ports.pulsar }}/"
{{- end }}
{{- if and .Values.tls.enabled .Values.tls.broker.enabled }}
{{- else }}
PF_kubernetesContainerFactory_pulsarAdminUrl: "https://{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}:{{ .Values.broker.ports.https }}/"
PF_kubernetesContainerFactory_pulsarServiceUrl: "pulsar+ssl://{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}:{{ .Values.broker.ports.pulsarssl }}/"
{{- end }}
PF_kubernetesContainerFactory_changeConfigMap: "{{ template "pulsar.fullname" . }}-{{ .Values.functions.component }}-config"
PF_kubernetesContainerFactory_changeConfigMapNamespace: {{ template "pulsar.namespace" . }}
{{- end }}
# prometheus needs to access /metrics endpoint
@@ -105,7 +204,7 @@ data:
# TLS Settings
tlsCertificateFilePath: "/pulsar/certs/broker/tls.crt"
tlsKeyFilePath: "/pulsar/certs/broker/tls.key"
tlsTrustCertsFilePath: "/pulsar/certs/ca/ca.crt"
tlsTrustCertsFilePath: {{ ternary "/pulsar/certs/cacerts/ca-combined.pem" "/pulsar/certs/ca/ca.crt" .Values.tls.broker.cacerts.enabled | quote }}
{{- end }}
# Authentication Settings
@@ -113,15 +212,19 @@ data:
authenticationEnabled: "true"
{{- if .Values.auth.authorization.enabled }}
authorizationEnabled: "true"
superUserRoles: {{ .Values.auth.superUsers | values | sortAlpha | join "," }}
superUserRoles: {{ .Values.auth.superUsers | values | compact | sortAlpha | join "," }}
{{- if .Values.auth.useProxyRoles }}
proxyRoles: {{ .Values.auth.superUsers.proxy }}
{{- end }}
{{- end }}
{{- if eq .Values.auth.authentication.provider "jwt" }}
{{- if and .Values.auth.authentication.enabled .Values.auth.authentication.jwt.enabled }}
# token authentication configuration
{{- if and .Values.auth.authentication.enabled .Values.auth.authentication.jwt.enabled .Values.auth.authentication.openid.enabled }}
authenticationProviders: "org.apache.pulsar.broker.authentication.AuthenticationProviderToken,org.apache.pulsar.broker.authentication.oidc.AuthenticationProviderOpenID"
{{- end }}
{{- if and .Values.auth.authentication.enabled .Values.auth.authentication.jwt.enabled ( not .Values.auth.authentication.openid.enabled ) }}
authenticationProviders: "org.apache.pulsar.broker.authentication.AuthenticationProviderToken"
{{- end }}
brokerClientAuthenticationParameters: "file:///pulsar/tokens/broker/token"
brokerClientAuthenticationPlugin: "org.apache.pulsar.client.impl.auth.AuthenticationToken"
{{- if .Values.auth.authentication.jwt.usingSecretKey }}
@@ -130,6 +233,25 @@ data:
tokenPublicKey: "file:///pulsar/keys/token/public.key"
{{- end }}
{{- end }}
{{- if and .Values.auth.authentication.enabled .Values.auth.authentication.openid.enabled }}
# openid authentication configuration
{{- if and .Values.auth.authentication.enabled .Values.auth.authentication.openid.enabled ( not .Values.auth.authentication.jwt.enabled ) }}
authenticationProviders: "org.apache.pulsar.broker.authentication.oidc.AuthenticationProviderOpenID"
{{- end }}
PULSAR_PREFIX_openIDAllowedTokenIssuers: {{ .Values.auth.authentication.openid.openIDAllowedTokenIssuers | uniq | compact | sortAlpha | join "," | quote }}
PULSAR_PREFIX_openIDAllowedAudiences: {{ .Values.auth.authentication.openid.openIDAllowedAudiences | uniq | compact | sortAlpha | join "," | quote }}
PULSAR_PREFIX_openIDTokenIssuerTrustCertsFilePath: {{ .Values.auth.authentication.openid.openIDTokenIssuerTrustCertsFilePath | quote }}
PULSAR_PREFIX_openIDRoleClaim: {{ .Values.auth.authentication.openid.openIDRoleClaim | quote }}
PULSAR_PREFIX_openIDAcceptedTimeLeewaySeconds: {{ .Values.auth.authentication.openid.openIDAcceptedTimeLeewaySeconds | quote }}
PULSAR_PREFIX_openIDCacheSize: {{ .Values.auth.authentication.openid.openIDCacheSize | quote }}
PULSAR_PREFIX_openIDCacheRefreshAfterWriteSeconds: {{ .Values.auth.authentication.openid.openIDCacheRefreshAfterWriteSeconds | quote }}
PULSAR_PREFIX_openIDCacheExpirationSeconds: {{ .Values.auth.authentication.openid.openIDCacheExpirationSeconds | quote }}
PULSAR_PREFIX_openIDHttpConnectionTimeoutMillis: {{ .Values.auth.authentication.openid.openIDHttpConnectionTimeoutMillis | quote }}
PULSAR_PREFIX_openIDHttpReadTimeoutMillis: {{ .Values.auth.authentication.openid.openIDHttpReadTimeoutMillis | quote }}
PULSAR_PREFIX_openIDKeyIdCacheMissRefreshSeconds: {{ .Values.auth.authentication.openid.openIDKeyIdCacheMissRefreshSeconds | quote }}
PULSAR_PREFIX_openIDRequireIssuersUseHttps: {{ .Values.auth.authentication.openid.openIDRequireIssuersUseHttps | quote }}
PULSAR_PREFIX_openIDFallbackDiscoveryMode: {{ .Values.auth.authentication.openid.openIDFallbackDiscoveryMode | quote }}
{{- end }}
{{- end }}
{{- if and .Values.tls.enabled .Values.tls.bookie.enabled }}
@@ -138,13 +260,13 @@ data:
bookkeeperTLSKeyFileType: "PEM"
bookkeeperTLSKeyFilePath: "/pulsar/certs/broker/tls.key"
bookkeeperTLSCertificateFilePath: "/pulsar/certs/broker/tls.crt"
bookkeeperTLSTrustCertsFilePath: "/pulsar/certs/ca/ca.crt"
bookkeeperTLSTrustCertsFilePath: {{ ternary "/pulsar/certs/cacerts/ca-combined.pem" "/pulsar/certs/ca/ca.crt" .Values.tls.broker.cacerts.enabled | quote }}
bookkeeperTLSTrustCertTypes: "PEM"
PULSAR_PREFIX_bookkeeperTLSClientAuthentication: "true"
PULSAR_PREFIX_bookkeeperTLSKeyFileType: "PEM"
PULSAR_PREFIX_bookkeeperTLSKeyFilePath: "/pulsar/certs/broker/tls.key"
PULSAR_PREFIX_bookkeeperTLSCertificateFilePath: "/pulsar/certs/broker/tls.crt"
PULSAR_PREFIX_bookkeeperTLSTrustCertsFilePath: "/pulsar/certs/ca/ca.crt"
PULSAR_PREFIX_bookkeeperTLSTrustCertsFilePath: {{ ternary "/pulsar/certs/cacerts/ca-combined.pem" "/pulsar/certs/ca/ca.crt" .Values.tls.broker.cacerts.enabled | quote }}
PULSAR_PREFIX_bookkeeperTLSTrustCertTypes: "PEM"
# https://github.com/apache/bookkeeper/pull/2300
bookkeeperUseV2WireProtocol: "false"

@@ -26,6 +26,7 @@ apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}"
namespace: {{ template "pulsar.namespace" . }}
spec:
maxReplicas: {{ .Values.broker.autoscaling.maxReplicas }}
{{- with .Values.broker.autoscaling.metrics }}

@@ -19,40 +19,5 @@
# deploy broker PodMonitor only when `$.Values.broker.podMonitor.enabled` is true
{{- if $.Values.broker.podMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: {{ template "pulsar.fullname" . }}-broker
labels:
app: {{ template "pulsar.name" . }}
chart: {{ template "pulsar.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
jobLabel: broker
podMetricsEndpoints:
- port: http
path: /metrics
scheme: http
interval: {{ $.Values.broker.podMonitor.interval }}
scrapeTimeout: {{ $.Values.broker.podMonitor.scrapeTimeout }}
relabelings:
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- sourceLabels: [__meta_kubernetes_namespace]
action: replace
targetLabel: kubernetes_namespace
- sourceLabels: [__meta_kubernetes_pod_label_component]
action: replace
targetLabel: job
- sourceLabels: [__meta_kubernetes_pod_name]
action: replace
targetLabel: kubernetes_pod_name
{{- if $.Values.broker.podMonitor.metricRelabelings }}
metricRelabelings: {{ toYaml $.Values.broker.podMonitor.metricRelabelings | nindent 8 }}
{{- end }}
selector:
matchLabels:
{{- include "pulsar.matchLabels" . | nindent 6 }}
component: broker
{{- include "pulsar.podMonitor" (list . "broker" (printf "component: %s" .Values.broker.component)) }}
{{- end }}

@@ -1,85 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
{{- if and (semverCompare "<1.25-0" .Capabilities.KubeVersion.Version) .Values.rbac.enabled .Values.rbac.psp }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}-psp"
namespace: {{ template "pulsar.namespace" . }}
rules:
- apiGroups:
- policy
resourceNames:
- "{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}"
resources:
- podsecuritypolicies
verbs:
- use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}-psp"
namespace: {{ template "pulsar.namespace" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: "{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}-psp"
subjects:
- kind: ServiceAccount
name: "{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}-acct"
namespace: {{ template "pulsar.namespace" . }}
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
{{- if .Values.rbac.limit_to_namespace }}
name: "{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}-{{ template "pulsar.namespace" . }}"
{{- else}}
name: "{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}"
{{- end}}
spec:
readOnlyRootFilesystem: false
privileged: false
allowPrivilegeEscalation: false
runAsUser:
rule: 'RunAsAny'
supplementalGroups:
ranges:
- max: 65535
min: 1
rule: MustRunAs
fsGroup:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
seLinux:
rule: 'RunAsAny'
volumes:
- configMap
- emptyDir
- projected
- secret
- downwardAPI
- persistentVolumeClaim
{{- end}}

@@ -17,7 +17,7 @@
# under the License.
#
{{- if or .Values.components.functions .Values.extra.functionsAsPods }}
{{- if .Values.components.functions }}
apiVersion: rbac.authorization.k8s.io/v1
{{- if .Values.functions.rbac.limit_to_namespace }}
kind: Role

@@ -26,14 +26,14 @@ metadata:
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.broker.component }}
annotations:
{{- with .Values.broker.service_account.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
---
{{- end }}
{{- if or .Values.components.functions .Values.extra.functionsAsPods }}
{{- if .Values.components.functions }}
apiVersion: v1
kind: ServiceAccount
metadata:
@@ -42,8 +42,8 @@ metadata:
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.functions.component }}
annotations:
{{- with .Values.functions.service_account.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
---

@@ -26,9 +26,12 @@ metadata:
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.broker.component }}
{{- with .Values.broker.service.annotations }}
annotations:
{{ toYaml .Values.broker.service.annotations | indent 4 }}
{{ toYaml . | indent 4 }}
{{- end }}
spec:
type: ClusterIP
ports:
# prometheus needs to access /metrics endpoint
- name: http
@@ -43,7 +46,7 @@ spec:
- name: "{{ .Values.tlsPrefix }}pulsarssl"
port: {{ .Values.broker.ports.pulsarssl }}
{{- end }}
clusterIP: None
clusterIP: "{{ .Values.broker.service.clusterIP }}"
selector:
{{- include "pulsar.matchLabels" . | nindent 4 }}
component: {{ .Values.broker.component }}

@@ -21,8 +21,11 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}"
namespace: {{ template "pulsar.namespace" . }}
{{- $stsName := printf "%s-%s" (include "pulsar.fullname" .) .Values.broker.component }}
name: {{ $stsName | quote }}
{{- $namespace := include "pulsar.namespace" . }}
namespace: {{ $namespace | quote }}
annotations: {{ .Values.broker.appAnnotations | toYaml | nindent 4 }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.broker.component }}
@@ -37,15 +40,33 @@ spec:
component: {{ .Values.broker.component }}
updateStrategy:
type: RollingUpdate
{{- /*
When functions are enabled, podManagementPolicy must be OrderedReady to ensure that other started brokers are available via DNS
for the function worker to connect to.
Since podManagementPolicy is immutable, this rule is only applied when the broker is first installed.
*/}}
{{- $stsObj := lookup "apps/v1" "StatefulSet" $namespace $stsName }}
{{- if $stsObj }}
podManagementPolicy: {{ $stsObj.spec.podManagementPolicy }}
{{- else }}
{{- if .Values.broker.podManagementPolicy }}
podManagementPolicy: {{ .Values.broker.podManagementPolicy }}
{{- else if not .Values.components.functions }}
podManagementPolicy: Parallel
{{- else }}
podManagementPolicy: OrderedReady
{{- end }}
{{- end }}
template:
metadata:
labels:
{{- include "pulsar.template.labels" . | nindent 8 }}
component: {{ .Values.broker.component }}
annotations:
{{- if not .Values.broker.podMonitor.enabled }}
prometheus.io/scrape: "true"
prometheus.io/port: "{{ .Values.broker.ports.http }}"
{{- end }}
{{- if .Values.broker.restartPodsOnConfigMapChange }}
checksum/config: {{ include (print $.Template.BasePath "/broker-configmap.yaml") . | sha256sum }}
{{- end }}
@@ -61,11 +82,15 @@ spec:
{{- if .Values.broker.tolerations }}
tolerations:
{{ toYaml .Values.broker.tolerations | indent 8 }}
{{- end }}
{{- if .Values.broker.topologySpreadConstraints }}
topologySpreadConstraints:
{{- toYaml .Values.broker.topologySpreadConstraints | nindent 8 }}
{{- end }}
affinity:
{{- if and .Values.affinity.anti_affinity .Values.broker.affinity.anti_affinity}}
podAntiAffinity:
{{ if eq .Values.broker.affinity.type "requiredDuringSchedulingIgnoredDuringExecution"}}
{{- if eq .Values.broker.affinity.type "requiredDuringSchedulingIgnoredDuringExecution"}}
{{ .Values.broker.affinity.type }}:
- labelSelector:
matchExpressions:
@@ -82,7 +107,7 @@ spec:
values:
- {{ .Values.broker.component }}
topologyKey: {{ .Values.broker.affinity.anti_affinity_topology_key }}
{{ else }}
{{- else }}
{{ .Values.broker.affinity.type }}:
- weight: 100
podAffinityTerm:
@@ -101,48 +126,71 @@ spec:
values:
- {{ .Values.broker.component }}
topologyKey: {{ .Values.broker.affinity.anti_affinity_topology_key }}
{{ end }}
{{- end }}
{{- end }}
terminationGracePeriodSeconds: {{ .Values.broker.gracePeriod }}
initContainers:
{{- if .Values.tls.broker.cacerts.enabled }}
- name: cacerts
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.broker "root" .) }}"
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.broker "root" .) }}"
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["sh", "-c"]
args:
- |
bin/certs-combine-pem.sh /pulsar/certs/cacerts/ca-combined.pem {{ template "pulsar.certs.cacerts" (dict "certs" .Values.tls.broker.cacerts.certs) }}
volumeMounts:
{{- include "pulsar.broker.certs.volumeMounts" . | nindent 8 }}
{{- end }}
{{- if and .Values.components.zookeeper .Values.broker.waitZookeeperTimeout (gt (.Values.broker.waitZookeeperTimeout | int) 0) }}
      # This init container will wait for zookeeper to be ready before
      # deploying the broker
- name: wait-zookeeper-ready
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.broker "root" .) }}"
imagePullPolicy: {{ .Values.images.broker.pullPolicy }}
resources: {{ toYaml .Values.initContainer_resources.zookeeper_ready | nindent 10 }}
command: ["sh", "-c"]
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.broker "root" .) }}"
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["timeout", "{{ .Values.broker.waitZookeeperTimeout }}", "sh", "-c"]
args:
- >-
- |
{{- include "pulsar.broker.zookeeper.tls.settings" . | nindent 12 }}
export BOOKIE_MEM="-Xmx128M";
export PULSAR_MEM="-Xmx128M";
{{- if .Values.pulsar_metadata.configurationStore }}
until bin/bookkeeper org.apache.zookeeper.ZooKeeperMain -server {{ template "pulsar.configurationStore.connect" . }} get {{ .Values.configurationStoreMetadataPrefix }}/admin/clusters/{{ template "pulsar.cluster.name" . }}; do
until timeout 15 bin/pulsar zookeeper-shell -server {{ template "pulsar.configurationStore.connect" . }} get {{ .Values.pulsar_metadata.configurationStoreMetadataPrefix }}/admin/clusters/{{ template "pulsar.cluster.name" . }}; do
{{- end }}
{{- if not .Values.pulsar_metadata.configurationStore }}
until bin/bookkeeper org.apache.zookeeper.ZooKeeperMain -server {{ template "pulsar.zookeeper.connect" . }} get {{ .Values.metadataPrefix }}/admin/clusters/{{ template "pulsar.cluster.name" . }}; do
until timeout 15 bin/pulsar zookeeper-shell -server {{ template "pulsar.zookeeper.connect" . }} get {{ .Values.metadataPrefix }}/admin/clusters/{{ template "pulsar.cluster.name" . }}; do
{{- end }}
echo "pulsar cluster {{ template "pulsar.cluster.name" . }} isn't initialized yet ... check in 3 seconds ..." && sleep 3;
done;
{{- if and (semverCompare "<1.25-0" .Capabilities.KubeVersion.Version) .Values.rbac.enabled .Values.rbac.psp }}
securityContext:
readOnlyRootFilesystem: false
{{- end }}
volumeMounts:
{{- include "pulsar.broker.certs.volumeMounts" . | nindent 8 }}
{{- end }}
{{- if and .Values.components.oxia .Values.broker.waitOxiaTimeout (gt (.Values.broker.waitOxiaTimeout | int) 0) }}
- name: wait-oxia-ready
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.broker "root" .) }}"
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.broker "root" .) }}"
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["timeout", "{{ .Values.broker.waitOxiaTimeout }}", "sh", "-c"]
args:
- |
until nslookup {{ template "pulsar.oxia.server.service" . }}; do
sleep 3;
done;
{{- end }}
{{- if and .Values.broker.waitBookkeeperTimeout (gt (.Values.broker.waitBookkeeperTimeout | int) 0) }}
# This init container will wait for bookkeeper to be ready before
# deploying the broker
- name: wait-bookkeeper-ready
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.broker "root" .) }}"
imagePullPolicy: {{ .Values.images.broker.pullPolicy }}
resources: {{ toYaml .Values.initContainer_resources.bookkeeper_ready | nindent 10 }}
command: ["sh", "-c"]
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.broker "root" .) }}"
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["timeout", "{{ .Values.broker.waitBookkeeperTimeout }}", "sh", "-c"]
args:
- >
- |
{{- include "pulsar.broker.zookeeper.tls.settings" . | nindent 12 }}
bin/apply-config-from-env.py conf/bookkeeper.conf;
export BOOKIE_MEM="-Xmx128M";
until bin/bookkeeper shell whatisinstanceid; do
until timeout 15 bin/bookkeeper shell whatisinstanceid; do
echo "bookkeeper cluster is not initialized yet. backoff for 3 seconds ...";
sleep 3;
done;
@@ -157,16 +205,16 @@ spec:
envFrom:
- configMapRef:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}"
{{- if and (semverCompare "<1.25-0" .Capabilities.KubeVersion.Version) .Values.rbac.enabled .Values.rbac.psp }}
securityContext:
readOnlyRootFilesystem: false
{{- end }}
volumeMounts:
{{- include "pulsar.broker.certs.volumeMounts" . | nindent 10 }}
{{- end }}
{{- if .Values.broker.initContainers }}
{{- toYaml .Values.broker.initContainers | nindent 6 }}
{{- end }}
containers:
- name: "{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}"
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.broker "root" .) }}"
imagePullPolicy: {{ .Values.images.broker.pullPolicy }}
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.broker "root" .) }}"
{{- if .Values.broker.probe.liveness.enabled }}
livenessProbe:
httpGet:
@@ -203,20 +251,27 @@ spec:
{{- end }}
command: ["sh", "-c"]
args:
- >
- |
{{- if .Values.broker.additionalCommand }}
{{ .Values.broker.additionalCommand }}
{{- end }}
{{- if .Values.tls.broker.cacerts.enabled }}
cd /pulsar/certs/cacerts;
nohup /pulsar/bin/certs-combine-pem-infinity.sh /pulsar/certs/cacerts/ca-combined.pem {{ template "pulsar.certs.cacerts" (dict "certs" .Values.tls.broker.cacerts.certs) }} > /pulsar/certs/cacerts/certs-combine-pem-infinity.log 2>&1 &
cd /pulsar;
{{- end }}
bin/apply-config-from-env.py conf/broker.conf;
bin/gen-yml-from-env.py conf/functions_worker.yml;
echo "OK" > status;
echo "OK" > "${statusFilePath:-status}";
{{- if .Values.components.zookeeper }}
{{- include "pulsar.broker.zookeeper.tls.settings" . | nindent 10 }}
bin/pulsar zookeeper-shell -server {{ template "pulsar.zookeeper.connect" . }} get {{ template "pulsar.broker.znode" . }};
timeout 15 bin/pulsar zookeeper-shell -server {{ template "pulsar.zookeeper.connect" . }} get {{ template "pulsar.broker.znode" . }};
while [ $? -eq 0 ]; do
echo "broker {{ template "pulsar.broker.hostname" . }} znode still exists ... check in 10 seconds ...";
sleep 10;
bin/pulsar zookeeper-shell -server {{ template "pulsar.zookeeper.connect" . }} get {{ template "pulsar.broker.znode" . }};
timeout 15 bin/pulsar zookeeper-shell -server {{ template "pulsar.zookeeper.connect" . }} get {{ template "pulsar.broker.znode" . }};
done;
{{- end }}
cat conf/pulsar_env.sh;
OPTS="${OPTS} -Dlog4j2.formatMsgNoLookups=true" exec bin/pulsar broker;
ports:
@@ -233,16 +288,12 @@ spec:
- name: "{{ .Values.tlsPrefix }}pulsarssl"
containerPort: {{ .Values.broker.ports.pulsarssl }}
{{- end }}
{{- if .Values.broker.extreEnvs }}
env:
{{ toYaml .Values.broker.extreEnvs | indent 8 }}
{{- end }}
envFrom:
- configMapRef:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}"
volumeMounts:
{{- if .Values.auth.authentication.enabled }}
{{- if eq .Values.auth.authentication.provider "jwt" }}
{{- if .Values.auth.authentication.jwt.enabled }}
- mountPath: "/pulsar/keys"
name: token-keys
readOnly: true
@@ -251,20 +302,51 @@ spec:
readOnly: true
{{- end }}
{{- end }}
{{- if .Values.broker.storageOffload.driver }}
{{- if eq .Values.broker.storageOffload.driver "google-cloud-storage" }}
- name: gcp-service-account
readOnly: true
mountPath: /pulsar/gcp-service-account
{{- end }}
{{- end }}
{{- if .Values.broker.extraVolumeMounts }}
{{ toYaml .Values.broker.extraVolumeMounts | indent 10 }}
{{- end }}
{{- include "pulsar.broker.certs.volumeMounts" . | nindent 10 }}
{{- if and (semverCompare "<1.25-0" .Capabilities.KubeVersion.Version) .Values.rbac.enabled .Values.rbac.psp }}
securityContext:
readOnlyRootFilesystem: false
env:
{{- if and (and .Values.broker.storageOffload (eq .Values.broker.storageOffload.driver "aws-s3")) .Values.broker.storageOffload.secret }}
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: {{ .Values.broker.storageOffload.secret }}
key: AWS_ACCESS_KEY_ID
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: {{ .Values.broker.storageOffload.secret }}
key: AWS_SECRET_ACCESS_KEY
{{- end }}
{{- if and .Values.broker.storageOffload (eq .Values.broker.storageOffload.driver "azureblob") }}
- name: AZURE_STORAGE_ACCOUNT
valueFrom:
secretKeyRef:
name: {{ .Values.broker.storageOffload.secret }}
key: AZURE_STORAGE_ACCOUNT
- name: AZURE_STORAGE_ACCESS_KEY
valueFrom:
secretKeyRef:
name: {{ .Values.broker.storageOffload.secret }}
key: AZURE_STORAGE_ACCESS_KEY
{{- end }}
{{- if .Values.broker.extraEnvs }}
{{- toYaml .Values.broker.extraEnvs | nindent 10 }}
{{- end }}
volumes:
{{- if .Values.broker.extraVolumes }}
{{ toYaml .Values.broker.extraVolumes | indent 6 }}
{{- end }}
{{- if .Values.auth.authentication.enabled }}
{{- if eq .Values.auth.authentication.provider "jwt" }}
{{- if .Values.auth.authentication.jwt.enabled }}
- name: token-keys
secret:
{{- if not .Values.auth.authentication.jwt.usingSecretKey }}
@@ -289,6 +371,13 @@ spec:
path: broker/token
{{- end}}
{{- end}}
{{- if .Values.broker.storageOffload.driver }}
{{- if eq .Values.broker.storageOffload.driver "google-cloud-storage" }}
- name: gcp-service-account
secret:
secretName: {{ .Values.broker.storageOffload.gcsServiceAccountSecret }}
{{- end }}
{{- end }}
{{- include "pulsar.broker.certs.volumes" . | nindent 6 }}
{{- include "pulsar.imagePullSecrets" . | nindent 6}}
{{- end }}

@@ -0,0 +1,82 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
apiVersion: v1
kind: ConfigMap
metadata:
name: "{{ template "pulsar.fullname" . }}-certs-scripts"
namespace: {{ template "pulsar.namespace" . }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: certs-scripts
data:
certs-combine-pem.sh: |
#!/bin/bash
# This script combines all certificates into a single file.
# Usage: certs-combine-pem.sh <output_file> <cert1> <cert2> ...
set -eu -o pipefail
if [ "$#" -lt 2 ]; then
echo "Usage: $0 <output_file> <cert1> <cert2> ..."
exit 1
fi
OUTPUT_FILE="$1"
shift
OUTPUT_FILE_TMP="${OUTPUT_FILE}.tmp"
rm -f "$OUTPUT_FILE_TMP"
for CERT in "$@"; do
if [ -f "$CERT" ]; then
echo "# $CERT" >> "$OUTPUT_FILE_TMP"
cat "$CERT" >> "$OUTPUT_FILE_TMP"
else
echo "Certificate file '$CERT' does not exist, skipping"
fi
done
if [ ! -f "$OUTPUT_FILE" ]; then
touch "$OUTPUT_FILE"
fi
if diff -q "$OUTPUT_FILE" "$OUTPUT_FILE_TMP" > /dev/null; then
# No changes detected, skipping update
rm -f "$OUTPUT_FILE_TMP"
else
# Update $OUTPUT_FILE with new certificates
mv "$OUTPUT_FILE_TMP" "$OUTPUT_FILE"
fi
certs-combine-pem-infinity.sh: |
#!/bin/bash
    # This script combines all certificates into a single file, once every minute.
# Usage: certs-combine-pem-infinity.sh <output_file> <cert1> <cert2> ...
set -eu -o pipefail
if [ "$#" -lt 2 ]; then
echo "Usage: $0 <output_file> <cert1> <cert2> ..."
exit 1
fi
while true; do
/pulsar/bin/certs-combine-pem.sh "$@"
sleep 60
done
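
The combine script above only replaces the output file when its content actually changed, so readers of the combined PEM see a stable file between refreshes. A minimal standalone sketch of that logic (the `combine` function name, temp directory, and certificate contents below are illustrative, not part of the chart):

```shell
#!/bin/sh
# Standalone sketch of the certs-combine-pem.sh behavior; paths are illustrative.
set -eu
demo=$(mktemp -d)

printf -- '-----BEGIN CERTIFICATE-----\nAAA\n-----END CERTIFICATE-----\n' > "$demo/ca1.pem"
printf -- '-----BEGIN CERTIFICATE-----\nBBB\n-----END CERTIFICATE-----\n' > "$demo/ca2.pem"

combine() {
  out="$1"; shift
  tmp="$out.tmp"; rm -f "$tmp"
  for cert in "$@"; do
    if [ -f "$cert" ]; then
      echo "# $cert" >> "$tmp"
      cat "$cert" >> "$tmp"
    else
      echo "Certificate file '$cert' does not exist, skipping"
    fi
  done
  [ -f "$out" ] || touch "$out"
  # Only replace the output when the content changed, keeping it stable otherwise.
  if diff -q "$out" "$tmp" > /dev/null; then
    rm -f "$tmp"
  else
    mv "$tmp" "$out"
  fi
}

# Missing inputs are skipped with a warning, as in the chart script.
combine "$demo/combined.pem" "$demo/ca1.pem" "$demo/ca2.pem" "$demo/missing.pem"
grep -c 'BEGIN CERTIFICATE' "$demo/combined.pem"
```

Running the same invocation again produces identical content, so the `diff -q` branch discards the temp file and the combined PEM is left untouched.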

@@ -0,0 +1,22 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
{{- if semverCompare "<3.12.0-0" .Capabilities.HelmVersion.Version -}}
{{- fail "Your Helm version is not supported. Please upgrade to Helm 3.12.0 or later. The recommended version is currently 3.14.4 or newer. You can find more about Helm releases and installation at https://github.com/helm/helm/releases. " -}}
{{- end -}}

@@ -1,67 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
{{- if .Values.extra.dashboard }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.dashboard.component }}"
namespace: {{ template "pulsar.namespace" . }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.dashboard.component }}
spec:
replicas: {{ .Values.dashboard.replicaCount }}
selector:
matchLabels:
{{- include "pulsar.matchLabels" . | nindent 6 }}
component: {{ .Values.dashboard.component }}
template:
metadata:
labels:
{{- include "pulsar.template.labels" . | nindent 8 }}
component: {{ .Values.dashboard.component }}
annotations:
{{ toYaml .Values.dashboard.annotations | indent 8 }}
spec:
{{- if .Values.dashboard.nodeSelector }}
nodeSelector:
{{ toYaml .Values.dashboard.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.dashboard.tolerations }}
tolerations:
{{ toYaml .Values.dashboard.tolerations | indent 8 }}
{{- end }}
terminationGracePeriodSeconds: {{ .Values.dashboard.gracePeriod }}
containers:
- name: "{{ template "pulsar.fullname" . }}-{{ .Values.dashboard.component }}"
image: "{{ .Values.dashboard.image.repository }}:{{ .Values.dashboard.image.tag }}"
imagePullPolicy: {{ .Values.dashboard.image.pullPolicy }}
{{- if .Values.dashboard.resources }}
resources:
{{ toYaml .Values.dashboard.resources | indent 10 }}
{{- end }}
ports:
- name: http
containerPort: 80
env:
- name: SERVICE_URL
value: http://{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}:8080/
{{- end }}

@@ -1,68 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
{{- if .Values.extra.dashboard }}
{{- if .Values.dashboard.ingress.enabled }}
{{- if semverCompare "<1.19-0" .Capabilities.KubeVersion.Version }}
apiVersion: extensions/v1beta1
{{- else }}
apiVersion: networking.k8s.io/v1
{{- end }}
kind: Ingress
metadata:
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.dashboard.component }}
annotations:
{{- with .Values.dashboard.ingress.annotations }}
{{ toYaml . | indent 4 }}
{{- end }}
name: "{{ template "pulsar.fullname" . }}-{{ .Values.dashboard.component }}"
namespace: {{ template "pulsar.namespace" . }}
spec:
{{- with .Values.dashboard.ingress.ingressClassName }}
ingressClassName: {{ . }}
{{- end }}
{{- if .Values.dashboard.ingress.tls.enabled }}
tls:
- hosts:
- {{ .Values.dashboard.ingress.hostname }}
{{- with .Values.dashboard.ingress.tls.secretName }}
secretName: {{ . }}
{{- end }}
{{- end }}
rules:
- host: {{ required "Dashboard ingress hostname not provided" .Values.dashboard.ingress.hostname }}
http:
paths:
- path: {{ .Values.dashboard.ingress.path }}
{{- if semverCompare "<1.19-0" .Capabilities.KubeVersion.Version }}
backend:
serviceName: "{{ template "pulsar.fullname" . }}-{{ .Values.dashboard.component }}"
servicePort: {{ .Values.dashboard.ingress.port }}
{{- else }}
pathType: ImplementationSpecific
backend:
service:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.dashboard.component }}"
port:
number: {{ .Values.dashboard.ingress.port }}
{{- end }}
{{- end }}
{{- end }}

@@ -0,0 +1,23 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
{{- range .Values.extraDeploy }}
---
{{ include "common.tplvalues.render" (dict "value" . "context" $) }}
{{- end }}
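The `extraDeploy` loop above renders arbitrary extra manifests from values, passing each entry through `common.tplvalues.render` so template expressions inside values are resolved against the chart context. A minimal, hypothetical values fragment (the ConfigMap name and data are illustrative):

```yaml
extraDeploy:
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      # template expressions in values are rendered by common.tplvalues.render
      name: '{{ template "pulsar.fullname" . }}-extra'
    data:
      note: "deployed by the extraDeploy loop"
```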

@@ -1,105 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# script to process key/cert to keystore and truststore
{{- if .Values.tls.zookeeper.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
name: "{{ template "pulsar.fullname" . }}-keytool-configmap"
namespace: {{ template "pulsar.namespace" . }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: keytool
data:
keytool.sh: |
#!/bin/bash
component=$1
name=$2
isClient=$3
crtFile=/pulsar/certs/${component}/tls.crt
keyFile=/pulsar/certs/${component}/tls.key
caFile=/pulsar/certs/ca/ca.crt
p12File=/pulsar/${component}.p12
keyStoreFile=/pulsar/${component}.keystore.jks
trustStoreFile=/pulsar/${component}.truststore.jks
function checkFile() {
local file=$1
local len=$(wc -c ${file} | awk '{print $1}')
echo "processing ${file} : len = ${len}"
if [ ! -f ${file} ]; then
echo "${file} is not found"
return -1
fi
if [ $len -le 0 ]; then
echo "${file} is empty"
return -1
fi
}
function ensureFileNotEmpty() {
local file=$1
until checkFile ${file}; do
echo "file isn't initialized yet ... check in 3 seconds ..." && sleep 3;
done;
}
ensureFileNotEmpty ${crtFile}
ensureFileNotEmpty ${keyFile}
ensureFileNotEmpty ${caFile}
PASSWORD=$(head /dev/urandom | base64 | head -c 24)
openssl pkcs12 \
-export \
-in ${crtFile} \
-inkey ${keyFile} \
-out ${p12File} \
-name ${name} \
-passout "pass:${PASSWORD}"
keytool -importkeystore \
-srckeystore ${p12File} \
-srcstoretype PKCS12 -srcstorepass "${PASSWORD}" \
-alias ${name} \
-destkeystore ${keyStoreFile} \
-deststorepass "${PASSWORD}"
keytool -import \
-file ${caFile} \
-storetype JKS \
-alias ${name} \
-keystore ${trustStoreFile} \
-storepass "${PASSWORD}" \
-trustcacerts -noprompt
ensureFileNotEmpty ${keyStoreFile}
ensureFileNotEmpty ${trustStoreFile}
if [[ "x${isClient}" == "xtrue" ]]; then
echo $'\n' >> conf/pulsar_env.sh
echo "PULSAR_EXTRA_OPTS=\"\${PULSAR_EXTRA_OPTS} -Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty -Dzookeeper.client.secure=true -Dzookeeper.ssl.keyStore.location=${keyStoreFile} -Dzookeeper.ssl.keyStore.password=${PASSWORD} -Dzookeeper.ssl.trustStore.location=${trustStoreFile} -Dzookeeper.ssl.trustStore.password=${PASSWORD}\"" >> conf/pulsar_env.sh
echo $'\n' >> conf/bkenv.sh
echo "BOOKIE_EXTRA_OPTS=\"\${BOOKIE_EXTRA_OPTS} -Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty -Dzookeeper.client.secure=true -Dzookeeper.ssl.keyStore.location=${keyStoreFile} -Dzookeeper.ssl.keyStore.password=${PASSWORD} -Dzookeeper.ssl.trustStore.location=${trustStoreFile} -Dzookeeper.ssl.trustStore.password=${PASSWORD}\"" >> conf/bkenv.sh
else
echo $'\n' >> conf/pulsar_env.sh
echo "PULSAR_EXTRA_OPTS=\"\${PULSAR_EXTRA_OPTS} -Dzookeeper.ssl.keyStore.location=${keyStoreFile} -Dzookeeper.ssl.keyStore.password=${PASSWORD} -Dzookeeper.ssl.trustStore.location=${trustStoreFile} -Dzookeeper.ssl.trustStore.password=${PASSWORD}\"" >> conf/pulsar_env.sh
fi
{{- end }}

@@ -16,17 +16,17 @@
# specific language governing permissions and limitations
# under the License.
#
{{- if .Values.components.functions }}
## function config map
{{- if and .Values.components.oxia (not .Values.oxia.coordinator.customConfigMapName) }}
apiVersion: v1
kind: ConfigMap
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.functions.component }}-config"
name: {{ template "pulsar.fullname" . }}-{{ .Values.oxia.component }}-coordinator
namespace: {{ template "pulsar.namespace" . }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.functions.component }}
component: {{ .Values.oxia.component }}-coordinator
data:
pulsarDockerImageName: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.functions "root" .) }}"
config.yaml: |
{{- include "oxia.coordinator.config.yaml" . | nindent 4 }}
{{- end }}

@@ -0,0 +1,95 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
{{- if .Values.components.oxia }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ template "pulsar.fullname" . }}-{{ .Values.oxia.component }}-coordinator
namespace: {{ template "pulsar.namespace" . }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.oxia.component }}-coordinator
annotations: {{ .Values.oxia.coordinator.appAnnotations | toYaml | nindent 4 }}
spec:
replicas: 1
selector:
matchLabels:
{{- include "pulsar.matchLabels" . | nindent 6 }}
component: {{ .Values.oxia.component }}-coordinator
strategy:
type: Recreate
template:
metadata:
labels:
{{- include "pulsar.template.labels" . | nindent 8 }}
component: {{ .Values.oxia.component }}-coordinator
annotations:
{{- if not .Values.oxia.coordinator.podMonitor.enabled }}
prometheus.io/scrape: "true"
prometheus.io/port: "{{ .Values.oxia.coordinator.ports.metrics }}"
{{- end }}
{{- with .Values.oxia.coordinator.annotations }}
{{ toYaml . | indent 8 }}
{{- end }}
spec:
{{- if .Values.oxia.coordinator.nodeSelector }}
nodeSelector:
{{ toYaml .Values.oxia.coordinator.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.oxia.coordinator.tolerations }}
tolerations:
{{ toYaml .Values.oxia.coordinator.tolerations | indent 8 }}
{{- end }}
serviceAccountName: {{ template "pulsar.fullname" . }}-{{ .Values.oxia.component }}-coordinator
containers:
- command:
{{- if .Values.oxia.coordinator.entrypoint }}
{{ toYaml .Values.oxia.coordinator.entrypoint | indent 12 }}
{{- else }}
{{- include "oxia.coordinator.entrypoint" . | nindent 12 }}
{{- end }}
image: "{{ .Values.images.oxia.repository }}:{{ .Values.images.oxia.tag }}"
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.oxia "root" .) }}"
name: coordinator
ports:
{{- range $key, $value := .Values.oxia.coordinator.ports }}
- containerPort: {{ $value | int }}
name: {{ $key }}
{{- end}}
resources:
limits:
cpu: {{ .Values.oxia.coordinator.cpuLimit }}
memory: {{ .Values.oxia.coordinator.memoryLimit }}
{{- if .Values.oxia.coordinator.extraVolumeMounts }}
volumeMounts:
{{- toYaml .Values.oxia.coordinator.extraVolumeMounts | nindent 12 }}
{{- end }}
livenessProbe:
{{- include "oxia-cluster.probe" .Values.oxia.coordinator.ports.internal | nindent 12 }}
readinessProbe:
{{- include "oxia-cluster.probe" .Values.oxia.coordinator.ports.internal | nindent 12 }}
{{- if .Values.oxia.coordinator.extraContainers }}
{{- toYaml .Values.oxia.coordinator.extraContainers | nindent 8 }}
{{- end }}
{{- if .Values.oxia.coordinator.extraVolumes }}
volumes:
{{- toYaml .Values.oxia.coordinator.extraVolumes | nindent 8 }}
{{- end }}
{{- end }}
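The `extraVolumeMounts`/`extraVolumes` hooks above (added to the coordinator in #618) are pass-through lists, so any standard volume definition can be attached. A sketch of a matching values fragment; the secret and mount path names are illustrative:

```yaml
oxia:
  coordinator:
    extraVolumes:
      - name: custom-ca            # illustrative volume name
        secret:
          secretName: my-ca-secret # illustrative, must exist in the namespace
    extraVolumeMounts:
      - name: custom-ca
        mountPath: /oxia/certs
        readOnly: true
```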

@@ -0,0 +1,23 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# deploy oxia-coordinator PodMonitor only when `$.Values.oxia.coordinator.podMonitor.enabled` is true
{{- if and $.Values.components.oxia $.Values.oxia.coordinator.podMonitor.enabled }}
{{- include "pulsar.podMonitor" (list . "oxia.coordinator" (printf "component: %s-coordinator" .Values.oxia.component) "metrics") }}
{{- end }}
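The guard above gates the PodMonitor on both the oxia component and its own flag; when the flag is off, the Deployment template instead emits `prometheus.io/scrape` annotations. Toggling to PodMonitor-based scraping is a values change:

```yaml
oxia:
  coordinator:
    podMonitor:
      enabled: true   # emit a PodMonitor and drop the scrape annotations
```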

@@ -0,0 +1,33 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
{{- if .Values.components.oxia }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ template "pulsar.fullname" . }}-{{ .Values.oxia.component }}-coordinator
namespace: {{ template "pulsar.namespace" . }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.oxia.component }}-coordinator
rules:
- apiGroups: [ "" ]
resources: [ "configmaps" ]
verbs: [ "*" ]
{{- end }}

@@ -0,0 +1,37 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
{{- if .Values.components.oxia }}
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ template "pulsar.fullname" . }}-{{ .Values.oxia.component }}-coordinator
namespace: {{ template "pulsar.namespace" . }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.oxia.component }}-coordinator
subjects:
- kind: ServiceAccount
name: {{ template "pulsar.fullname" . }}-{{ .Values.oxia.component }}-coordinator
namespace: {{ template "pulsar.namespace" . }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ template "pulsar.fullname" . }}-{{ .Values.oxia.component }}-coordinator
{{- end }}

@@ -0,0 +1,43 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
{{- if .Values.components.oxia }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "pulsar.fullname" . }}-{{ .Values.oxia.component }}-coordinator
namespace: {{ template "pulsar.namespace" . }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.oxia.component }}-coordinator
{{- with .Values.oxia.coordinator.service.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
ports:
{{- range $key, $value := .Values.oxia.coordinator.ports }}
- name: {{ $key }}
port: {{ $value }}
targetPort: {{ $key }}
{{- end}}
selector:
{{- include "pulsar.matchLabels" . | nindent 4 }}
component: {{ .Values.oxia.component }}-coordinator
{{- end }}

@@ -0,0 +1,36 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
{{- if .Values.components.oxia }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "pulsar.fullname" . }}-{{ .Values.oxia.component }}-coordinator
namespace: {{ template "pulsar.namespace" . }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.oxia.component }}-coordinator
{{- with .Values.oxia.coordinator.service_account.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
{{- if .Values.images.imagePullSecrets }}
imagePullSecrets:
- name: {{ .Values.images.imagePullSecrets.secretName }}
{{- end}}
{{- end}}

@@ -0,0 +1,23 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# deploy oxia-server PodMonitor only when `$.Values.oxia.server.podMonitor.enabled` is true
{{- if and $.Values.components.oxia $.Values.oxia.server.podMonitor.enabled }}
{{- include "pulsar.podMonitor" (list . "oxia.server" (printf "component: %s-server" .Values.oxia.component) "metrics") }}
{{- end }}

@@ -17,22 +17,27 @@
# under the License.
#
{{- if .Values.extra.dashboard }}
{{- if .Values.components.oxia }}
apiVersion: v1
kind: Service
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.dashboard.component }}"
name: {{ template "pulsar.fullname" . }}-{{ .Values.oxia.component }}
namespace: {{ template "pulsar.namespace" . }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.dashboard.component }}
component: {{ .Values.oxia.component }}-server
{{- with .Values.oxia.server.service.public.annotations }}
annotations:
{{ toYaml .Values.dashboard.service.annotations | indent 4 }}
{{ toYaml . | indent 4 }}
{{- end }}
spec:
ports:
{{ toYaml .Values.dashboard.service.ports | indent 2 }}
clusterIP: None
{{- range $key, $value := .Values.oxia.server.ports }}
- name: {{ $key }}
port: {{ $value }}
targetPort: {{ $key }}
{{- end}}
selector:
{{- include "pulsar.matchLabels" . | nindent 4 }}
component: {{ .Values.dashboard.component }}
{{- end }}
component: {{ .Values.oxia.component }}-server
{{- end}}

@@ -0,0 +1,45 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
{{- if .Values.components.oxia }}
apiVersion: v1
kind: Service
metadata:
name: {{ template "pulsar.fullname" . }}-{{ .Values.oxia.component }}-svc
namespace: {{ template "pulsar.namespace" . }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.oxia.component }}-server
{{- with .Values.oxia.server.service.internal.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
clusterIP: None
publishNotReadyAddresses: true
ports:
{{- range $key, $value := .Values.oxia.server.ports }}
- name: {{ $key }}
port: {{ $value }}
targetPort: {{ $key }}
{{- end}}
selector:
{{- include "pulsar.matchLabels" . | nindent 4 }}
component: {{ .Values.oxia.component }}-server
{{- end}}

@@ -0,0 +1,36 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
{{- if .Values.components.oxia }}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ template "pulsar.fullname" . }}-{{ .Values.oxia.component }}
namespace: {{ template "pulsar.namespace" . }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.oxia.component }}-server
{{- with .Values.oxia.server.service_account.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
{{- if .Values.images.imagePullSecrets }}
imagePullSecrets:
- name: {{ .Values.images.imagePullSecrets.secretName }}
{{- end}}
{{- end}}

@@ -0,0 +1,153 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
{{- if .Values.components.oxia }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ template "pulsar.fullname" . }}-{{ .Values.oxia.component }}-server
namespace: {{ template "pulsar.namespace" . }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.oxia.component }}-server
annotations: {{ .Values.oxia.server.appAnnotations | toYaml | nindent 4 }}
spec:
replicas: {{ .Values.oxia.server.replicas }}
selector:
matchLabels:
{{- include "pulsar.matchLabels" . | nindent 6 }}
component: {{ .Values.oxia.component }}-server
serviceName: {{ template "pulsar.fullname" . }}-{{ .Values.oxia.component }}-svc
podManagementPolicy: Parallel
template:
metadata:
labels:
{{- include "pulsar.template.labels" . | nindent 8 }}
component: {{ .Values.oxia.component }}-server
annotations:
{{- if not .Values.oxia.server.podMonitor.enabled }}
prometheus.io/scrape: "true"
prometheus.io/port: "{{ .Values.oxia.server.ports.metrics }}"
{{- end }}
{{- with .Values.oxia.server.annotations }}
{{ toYaml . | indent 8 }}
{{- end }}
spec:
{{- if .Values.oxia.server.nodeSelector }}
nodeSelector:
{{ toYaml .Values.oxia.server.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.oxia.server.tolerations }}
tolerations:
{{ toYaml .Values.oxia.server.tolerations | indent 8 }}
{{- end }}
{{- if .Values.oxia.server.topologySpreadConstraints }}
topologySpreadConstraints:
{{- toYaml .Values.oxia.server.topologySpreadConstraints | nindent 8 }}
{{- end }}
affinity:
{{- if and .Values.affinity.anti_affinity .Values.oxia.server.affinity.anti_affinity}}
podAntiAffinity:
{{ if eq .Values.oxia.server.affinity.type "requiredDuringSchedulingIgnoredDuringExecution"}}
{{ .Values.oxia.server.affinity.type }}:
- labelSelector:
matchExpressions:
- key: "app"
operator: In
values:
- "{{ template "pulsar.name" . }}"
- key: "release"
operator: In
values:
- {{ .Release.Name }}
- key: "component"
operator: In
values:
- {{ .Values.oxia.component }}-server
topologyKey: {{ .Values.oxia.server.affinity.anti_affinity_topology_key }}
{{ else }}
{{ .Values.oxia.server.affinity.type }}:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: "app"
operator: In
values:
- "{{ template "pulsar.name" . }}"
- key: "release"
operator: In
values:
- {{ .Release.Name }}
- key: "component"
operator: In
values:
- {{ .Values.oxia.component }}-server
topologyKey: {{ .Values.oxia.server.affinity.anti_affinity_topology_key }}
{{ end }}
{{- end }}
serviceAccountName: {{ template "pulsar.fullname" . }}-{{ .Values.oxia.component }}
{{- if .Values.oxia.server.securityContext }}
securityContext:
{{ toYaml .Values.oxia.server.securityContext | indent 8 }}
{{- end }}
containers:
- command:
- "oxia"
- "server"
- "--log-json"
- "--data-dir=/data/db"
- "--wal-dir=/data/wal"
- "--db-cache-size-mb={{ .Values.oxia.server.dbCacheSizeMb }}"
{{- if .Values.oxia.pprofEnabled }}
- "--profile"
{{- end}}
image: "{{ .Values.images.oxia.repository }}:{{ .Values.images.oxia.tag }}"
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.oxia "root" .) }}"
name: server
ports:
{{- range $key, $value := .Values.oxia.server.ports }}
- containerPort: {{ $value | int }}
name: {{ $key }}
{{- end}}
resources:
limits:
cpu: {{ .Values.oxia.server.cpuLimit }}
memory: {{ .Values.oxia.server.memoryLimit }}
volumeMounts:
- name: {{ template "pulsar.fullname" . }}-{{ .Values.oxia.component }}-data
mountPath: /data
livenessProbe:
{{- include "oxia-cluster.probe" .Values.oxia.server.ports.internal | nindent 12 }}
readinessProbe:
{{- include "oxia-cluster.readiness-probe" .Values.oxia.server.ports.internal | nindent 12 }}
startupProbe:
{{- include "oxia-cluster.startup-probe" .Values.oxia.server.ports.internal | nindent 12 }}
volumeClaimTemplates:
- metadata:
name: {{ template "pulsar.fullname" . }}-{{ .Values.oxia.component }}-data
spec:
accessModes: [ "ReadWriteOnce" ]
{{- if .Values.oxia.server.storageClassName }}
storageClassName: {{ .Values.oxia.server.storageClassName }}
{{- end}}
resources:
requests:
storage: {{ .Values.oxia.server.storageSize }}
{{- end}}

@@ -17,7 +17,7 @@
# under the License.
#
{{- if or .Values.components.proxy .Values.extra.proxy }}
{{- if .Values.components.proxy }}
apiVersion: v1
kind: ConfigMap
metadata:
@@ -28,7 +28,7 @@ metadata:
component: {{ .Values.proxy.component }}
data:
clusterName: {{ template "pulsar.cluster.name" . }}
statusFilePath: "{{ template "pulsar.home" . }}/status"
statusFilePath: "{{ template "pulsar.home" . }}/logs/status"
# prometheus needs to access /metrics endpoint
webServicePort: "{{ .Values.proxy.ports.containerPorts.http }}"
{{- if or (not .Values.tls.enabled) (not .Values.tls.proxy.enabled) }}
@@ -42,14 +42,14 @@ data:
webServicePortTls: "{{ .Values.proxy.ports.containerPorts.https }}"
tlsCertificateFilePath: "/pulsar/certs/proxy/tls.crt"
tlsKeyFilePath: "/pulsar/certs/proxy/tls.key"
tlsTrustCertsFilePath: "/pulsar/certs/ca/ca.crt"
tlsTrustCertsFilePath: {{ ternary "/pulsar/certs/cacerts/ca-combined.pem" "/pulsar/certs/ca/ca.crt" .Values.tls.proxy.cacerts.enabled | quote }}
{{- if and .Values.tls.enabled .Values.tls.broker.enabled }}
# if broker enables TLS, configure proxy to talk to broker using TLS
brokerServiceURLTLS: pulsar+ssl://{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}:{{ .Values.broker.ports.pulsarssl }}
brokerWebServiceURLTLS: https://{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}:{{ .Values.broker.ports.https }}
tlsEnabledWithBroker: "true"
tlsCertRefreshCheckDurationSec: "300"
brokerClientTrustCertsFilePath: "/pulsar/certs/ca/ca.crt"
brokerClientTrustCertsFilePath: {{ ternary "/pulsar/certs/cacerts/ca-combined.pem" "/pulsar/certs/ca/ca.crt" .Values.tls.proxy.cacerts.enabled | quote }}
{{- end }}
{{- if not (and .Values.tls.enabled .Values.tls.broker.enabled) }}
brokerServiceURL: pulsar://{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}:{{ .Values.broker.ports.pulsar }}
@@ -65,14 +65,19 @@ data:
authorizationEnabled: "false"
forwardAuthorizationCredentials: "true"
{{- if .Values.auth.useProxyRoles }}
superUserRoles: {{ omit .Values.auth.superUsers "proxy" | values | sortAlpha | join "," }}
superUserRoles: {{ omit .Values.auth.superUsers "proxy" | values | compact | sortAlpha | join "," }}
{{- else }}
superUserRoles: {{ .Values.auth.superUsers | values | sortAlpha | join "," }}
superUserRoles: {{ .Values.auth.superUsers | values | compact | sortAlpha | join "," }}
{{- end }}
{{- end }}
{{- if eq .Values.auth.authentication.provider "jwt" }}
{{- if and .Values.auth.authentication.enabled .Values.auth.authentication.jwt.enabled }}
# token authentication configuration
{{- if and .Values.auth.authentication.enabled .Values.auth.authentication.jwt.enabled .Values.auth.authentication.openid.enabled }}
authenticationProviders: "org.apache.pulsar.broker.authentication.AuthenticationProviderToken,org.apache.pulsar.broker.authentication.oidc.AuthenticationProviderOpenID"
{{- end }}
{{- if and .Values.auth.authentication.enabled .Values.auth.authentication.jwt.enabled ( not .Values.auth.authentication.openid.enabled ) }}
authenticationProviders: "org.apache.pulsar.broker.authentication.AuthenticationProviderToken"
{{- end }}
brokerClientAuthenticationParameters: "file:///pulsar/tokens/proxy/token"
brokerClientAuthenticationPlugin: "org.apache.pulsar.client.impl.auth.AuthenticationToken"
{{- if .Values.auth.authentication.jwt.usingSecretKey }}
@@ -81,6 +86,25 @@ data:
tokenPublicKey: "file:///pulsar/keys/token/public.key"
{{- end }}
{{- end }}
{{- if and .Values.auth.authentication.enabled .Values.auth.authentication.openid.enabled }}
# openid authentication configuration
{{- if and .Values.auth.authentication.enabled .Values.auth.authentication.openid.enabled ( not .Values.auth.authentication.jwt.enabled ) }}
authenticationProviders: "org.apache.pulsar.broker.authentication.oidc.AuthenticationProviderOpenID"
{{- end }}
PULSAR_PREFIX_openIDAllowedTokenIssuers: {{ .Values.auth.authentication.openid.openIDAllowedTokenIssuers | uniq | compact | sortAlpha | join "," | quote }}
PULSAR_PREFIX_openIDAllowedAudiences: {{ .Values.auth.authentication.openid.openIDAllowedAudiences | uniq | compact | sortAlpha | join "," | quote }}
PULSAR_PREFIX_openIDTokenIssuerTrustCertsFilePath: {{ .Values.auth.authentication.openid.openIDTokenIssuerTrustCertsFilePath | quote }}
PULSAR_PREFIX_openIDRoleClaim: {{ .Values.auth.authentication.openid.openIDRoleClaim | quote }}
PULSAR_PREFIX_openIDAcceptedTimeLeewaySeconds: {{ .Values.auth.authentication.openid.openIDAcceptedTimeLeewaySeconds | quote }}
PULSAR_PREFIX_openIDCacheSize: {{ .Values.auth.authentication.openid.openIDCacheSize | quote }}
PULSAR_PREFIX_openIDCacheRefreshAfterWriteSeconds: {{ .Values.auth.authentication.openid.openIDCacheRefreshAfterWriteSeconds | quote }}
PULSAR_PREFIX_openIDCacheExpirationSeconds: {{ .Values.auth.authentication.openid.openIDCacheExpirationSeconds | quote }}
PULSAR_PREFIX_openIDHttpConnectionTimeoutMillis: {{ .Values.auth.authentication.openid.openIDHttpConnectionTimeoutMillis | quote }}
PULSAR_PREFIX_openIDHttpReadTimeoutMillis: {{ .Values.auth.authentication.openid.openIDHttpReadTimeoutMillis | quote }}
PULSAR_PREFIX_openIDKeyIdCacheMissRefreshSeconds: {{ .Values.auth.authentication.openid.openIDKeyIdCacheMissRefreshSeconds | quote }}
PULSAR_PREFIX_openIDRequireIssuersUseHttps: {{ .Values.auth.authentication.openid.openIDRequireIssuersUseHttps | quote }}
PULSAR_PREFIX_openIDFallbackDiscoveryMode: {{ .Values.auth.authentication.openid.openIDFallbackDiscoveryMode | quote }}
{{- end }}
{{- end }}
{{ toYaml .Values.proxy.configData | indent 2 }}
{{- end }}
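The `ternary` calls in the configmap above switch `tlsTrustCertsFilePath` and `brokerClientTrustCertsFilePath` to `/pulsar/certs/cacerts/ca-combined.pem` when `tls.proxy.cacerts.enabled` is set; that bundle is produced by concatenating the individual CA PEM files. A minimal sketch of the concatenation (the actual `certs-combine-pem.sh` is not shown in this diff; the paths and certificate contents below are illustrative stand-ins):

```shell
# Combine several CA certificates into one PEM bundle usable as a single
# TLS trust store. Contents are dummy placeholders, not real certificates.
workdir="$(mktemp -d)"
printf -- '-----BEGIN CERTIFICATE-----\nAAAA\n-----END CERTIFICATE-----\n' > "$workdir/ca1.pem"
printf -- '-----BEGIN CERTIFICATE-----\nBBBB\n-----END CERTIFICATE-----\n' > "$workdir/ca2.pem"
# PEM is concatenation-friendly: appending files yields a valid multi-cert bundle.
cat "$workdir/ca1.pem" "$workdir/ca2.pem" > "$workdir/ca-combined.pem"
grep -c 'BEGIN CERTIFICATE' "$workdir/ca-combined.pem"
```

Because PEM bundles are plain concatenations, the same combined file can serve every trust-store setting that previously pointed at a single `ca.crt`.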


@@ -26,6 +26,9 @@ apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.proxy.component }}"
namespace: {{ template "pulsar.namespace" . }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
spec:
maxReplicas: {{ .Values.proxy.autoscaling.maxReplicas }}
{{- with .Values.proxy.autoscaling.metrics }}


@@ -59,7 +59,7 @@ spec:
servicePort: {{ .Values.proxy.ports.http }}
{{- end }}
{{- else }}
pathType: ImplementationSpecific
pathType: {{ .Values.proxy.ingress.pathType }}
backend:
service:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.proxy.component }}"


@@ -17,7 +17,7 @@
# under the License.
#
{{- if or .Values.components.proxy .Values.extra.proxy }}
{{- if .Values.components.proxy }}
{{- if .Values.proxy.pdb.usePolicy }}
# pdb version detection
{{- if semverCompare "<1.21-0" .Capabilities.KubeVersion.Version }}


@@ -19,40 +19,5 @@
# deploy proxy PodMonitor only when `$.Values.proxy.podMonitor.enabled` is true
{{- if $.Values.proxy.podMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: {{ template "pulsar.fullname" . }}-proxy
labels:
app: {{ template "pulsar.name" . }}
chart: {{ template "pulsar.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
jobLabel: proxy
podMetricsEndpoints:
- port: http
path: /metrics
scheme: http
interval: {{ $.Values.proxy.podMonitor.interval }}
scrapeTimeout: {{ $.Values.proxy.podMonitor.scrapeTimeout }}
relabelings:
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- sourceLabels: [__meta_kubernetes_namespace]
action: replace
targetLabel: kubernetes_namespace
- sourceLabels: [__meta_kubernetes_pod_label_component]
action: replace
targetLabel: job
- sourceLabels: [__meta_kubernetes_pod_name]
action: replace
targetLabel: kubernetes_pod_name
{{- if $.Values.proxy.podMonitor.metricRelabelings }}
metricRelabelings: {{ toYaml $.Values.proxy.podMonitor.metricRelabelings | nindent 8 }}
{{- end }}
selector:
matchLabels:
{{- include "pulsar.matchLabels" . | nindent 6 }}
component: proxy
{{- include "pulsar.podMonitor" (list . "proxy" (printf "component: %s" .Values.proxy.component) "sts-http") }}
{{- end }}


@@ -1,85 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
{{- if and (semverCompare "<1.25-0" .Capabilities.KubeVersion.Version) .Values.rbac.enabled .Values.rbac.psp }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.proxy.component }}"
namespace: {{ template "pulsar.namespace" . }}
rules:
- apiGroups:
- policy
resourceNames:
- "{{ template "pulsar.fullname" . }}-{{ .Values.proxy.component }}"
resources:
- podsecuritypolicies
verbs:
- use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.proxy.component }}"
namespace: {{ template "pulsar.namespace" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: "{{ template "pulsar.fullname" . }}-{{ .Values.proxy.component }}"
subjects:
- kind: ServiceAccount
name: "{{ template "pulsar.fullname" . }}-{{ .Values.proxy.component }}"
namespace: {{ template "pulsar.namespace" . }}
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
{{- if .Values.rbac.limit_to_namespace }}
name: "{{ template "pulsar.fullname" . }}-{{ .Values.proxy.component }}-{{ template "pulsar.namespace" . }}"
{{- else}}
name: "{{ template "pulsar.fullname" . }}-{{ .Values.proxy.component }}"
{{- end}}
spec:
readOnlyRootFilesystem: false
privileged: false
allowPrivilegeEscalation: false
runAsUser:
rule: 'RunAsAny'
supplementalGroups:
ranges:
- max: 65535
min: 1
rule: MustRunAs
fsGroup:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
seLinux:
rule: 'RunAsAny'
volumes:
- configMap
- emptyDir
- projected
- secret
- downwardAPI
- persistentVolumeClaim
{{- end}}


@@ -17,7 +17,7 @@
# under the License.
#
{{- if or .Values.components.proxy .Values.extra.proxy }}
{{- if .Values.components.proxy }}
apiVersion: v1
kind: Service
metadata:
@@ -35,6 +35,9 @@ spec:
{{- with .Values.proxy.service.loadBalancerIP }}
loadBalancerIP: {{ . }}
{{- end }}
{{- with .Values.proxy.service.loadBalancerClass }}
loadBalancerClass: {{ . }}
{{- end }}
{{- if .Values.proxy.service.externalTrafficPolicy }}
externalTrafficPolicy: {{ .Values.proxy.service.externalTrafficPolicy }}
{{- end }}
@@ -47,20 +50,32 @@ spec:
port: {{ .Values.proxy.ports.http }}
protocol: TCP
targetPort: sts-http
{{- if and (eq .Values.proxy.service.type "NodePort") (ne .Values.proxy.service.nodePorts.http "") }}
nodePort: {{ .Values.proxy.service.nodePorts.http }}
{{- end}}
- name: "{{ .Values.tcpPrefix }}pulsar"
port: {{ .Values.proxy.ports.pulsar }}
protocol: TCP
targetPort: "sts-{{ .Values.tcpPrefix }}pulsar"
{{- if and (eq .Values.proxy.service.type "NodePort") (ne .Values.proxy.service.nodePorts.pulsar "") }}
nodePort: {{ .Values.proxy.service.nodePorts.pulsar }}
{{- end}}
{{- end }}
{{- if and .Values.tls.enabled .Values.tls.proxy.enabled }}
- name: https
port: {{ .Values.proxy.ports.https }}
protocol: TCP
targetPort: sts-https
{{- if and (eq .Values.proxy.service.type "NodePort") (ne .Values.proxy.service.nodePorts.https "") }}
nodePort: {{ .Values.proxy.service.nodePorts.https }}
{{- end}}
- name: "{{ .Values.tlsPrefix }}pulsarssl"
port: {{ .Values.proxy.ports.pulsarssl }}
protocol: TCP
targetPort: "sts-{{ .Values.tlsPrefix }}pulsarssl"
{{- if and (eq .Values.proxy.service.type "NodePort") (ne .Values.proxy.service.nodePorts.pulsarssl "") }}
nodePort: {{ .Values.proxy.service.nodePorts.pulsarssl }}
{{- end}}
{{- end }}
selector:
{{- include "pulsar.matchLabels" . | nindent 4 }}


@@ -17,12 +17,13 @@
# under the License.
#
{{- if or .Values.components.proxy .Values.extra.proxy }}
{{- if .Values.components.proxy }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.proxy.component }}"
namespace: {{ template "pulsar.namespace" . }}
annotations: {{ .Values.proxy.appAnnotations | toYaml | nindent 4 }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.proxy.component }}
@@ -44,8 +45,10 @@ spec:
{{- include "pulsar.template.labels" . | nindent 8 }}
component: {{ .Values.proxy.component }}
annotations:
{{- if not .Values.proxy.podMonitor.enabled }}
prometheus.io/scrape: "true"
prometheus.io/port: "{{ .Values.proxy.ports.http }}"
prometheus.io/port: "{{ .Values.proxy.ports.containerPorts.http }}"
{{- end }}
{{- if .Values.proxy.restartPodsOnConfigMapChange }}
checksum/config: {{ include (print $.Template.BasePath "/proxy-configmap.yaml") . | sha256sum }}
{{- end }}
@@ -60,6 +63,10 @@ spec:
{{- if .Values.proxy.tolerations }}
tolerations:
{{ toYaml .Values.proxy.tolerations | indent 8 }}
{{- end }}
{{- if .Values.proxy.topologySpreadConstraints }}
topologySpreadConstraints:
{{- toYaml .Values.proxy.topologySpreadConstraints | nindent 8 }}
{{- end }}
affinity:
{{- if and .Values.affinity.anti_affinity .Values.proxy.affinity.anti_affinity}}
@@ -105,34 +112,65 @@ spec:
terminationGracePeriodSeconds: {{ .Values.proxy.gracePeriod }}
serviceAccountName: "{{ template "pulsar.fullname" . }}-{{ .Values.proxy.component }}"
initContainers:
{{- if .Values.tls.proxy.cacerts.enabled }}
- name: combine-certs
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.proxy "root" .) }}"
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.proxy "root" .) }}"
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["sh", "-c"]
args:
- |
bin/certs-combine-pem.sh /pulsar/certs/cacerts/ca-combined.pem {{ template "pulsar.certs.cacerts" (dict "certs" .Values.tls.proxy.cacerts.certs) }}
volumeMounts:
{{- include "pulsar.proxy.certs.volumeMounts" . | nindent 8 }}
{{- end }}
{{- if and .Values.components.zookeeper .Values.proxy.waitZookeeperTimeout (gt (.Values.proxy.waitZookeeperTimeout | int) 0) }}
# This init container will wait for zookeeper to be ready before
# deploying the proxy
- name: wait-zookeeper-ready
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.proxy "root" .) }}"
imagePullPolicy: {{ .Values.images.proxy.pullPolicy }}
resources: {{ toYaml .Values.initContainer_resources.zookeeper_ready | nindent 10 }}
command: ["sh", "-c"]
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.proxy "root" .) }}"
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["timeout", "{{ .Values.proxy.waitZookeeperTimeout }}", "sh", "-c"]
args:
- >-
- |
export PULSAR_MEM="-Xmx128M";
{{- if $zk:=.Values.pulsar_metadata.userProvidedZookeepers }}
until bin/pulsar zookeeper-shell -server {{ $zk }} ls {{ or .Values.metadataPrefix "/" }}; do
until timeout 15 bin/pulsar zookeeper-shell -server {{ $zk }} ls {{ or .Values.metadataPrefix "/" }}; do
echo "user provided zookeepers {{ $zk }} are unreachable... check in 3 seconds ..." && sleep 3;
done;
{{ else }}
until bin/pulsar zookeeper-shell -server {{ template "pulsar.configurationStore.service" . }} get {{ .Values.metadataPrefix }}/admin/clusters/{{ template "pulsar.cluster.name" . }}; do
sleep 3;
{{- else if .Values.pulsar_metadata.configurationStore }}
until timeout 15 bin/pulsar zookeeper-shell -server {{ template "pulsar.configurationStore.service" . }} get {{ .Values.pulsar_metadata.configurationStoreMetadataPrefix }}/admin/clusters/{{ template "pulsar.cluster.name" . }}; do
echo "pulsar cluster {{ template "pulsar.cluster.name" . }} isn't initialized yet ... check in 3 seconds ..." && sleep 3;
done;
{{- else }}
until timeout 15 bin/pulsar zookeeper-shell -server {{ template "pulsar.zookeeper.service" . }} get {{ .Values.metadataPrefix }}/admin/clusters/{{ template "pulsar.cluster.name" . }}; do
echo "pulsar cluster {{ template "pulsar.cluster.name" . }} isn't initialized yet ... check in 3 seconds ..." && sleep 3;
done;
{{- end}}
{{- end}}
{{- if and .Values.components.oxia .Values.proxy.waitOxiaTimeout (gt (.Values.proxy.waitOxiaTimeout | int) 0) }}
- name: wait-oxia-ready
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.proxy "root" .) }}"
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.proxy "root" .) }}"
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["timeout", "{{ .Values.proxy.waitOxiaTimeout }}", "sh", "-c"]
args:
- |
until nslookup {{ template "pulsar.oxia.server.service" . }}; do
sleep 3;
done;
{{- end }}
{{- if and .Values.proxy.waitBrokerTimeout (gt (.Values.proxy.waitBrokerTimeout | int) 0) }}
# This init container will wait for at least one broker to be ready before
# deploying the proxy
- name: wait-broker-ready
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.proxy "root" .) }}"
imagePullPolicy: {{ .Values.images.proxy.pullPolicy }}
resources: {{ toYaml .Values.initContainer_resources.broker_ready | nindent 10 }}
command: ["sh", "-c"]
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.proxy "root" .) }}"
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["timeout", "{{ .Values.proxy.waitBrokerTimeout }}", "sh", "-c"]
args:
- >-
- |
set -e;
brokerServiceNumber="$(nslookup -timeout=10 {{ template "pulsar.fullname" . }}-{{ .Values.broker.component }} | grep Name | wc -l)";
until [ ${brokerServiceNumber} -ge 1 ]; do
@@ -140,10 +178,14 @@ spec:
sleep 10;
brokerServiceNumber="$(nslookup -timeout=10 {{ template "pulsar.fullname" . }}-{{ .Values.broker.component }} | grep Name | wc -l)";
done;
{{- end}}
{{- if .Values.proxy.initContainers }}
{{- toYaml .Values.proxy.initContainers | nindent 6 }}
{{- end }}
containers:
- name: "{{ template "pulsar.fullname" . }}-{{ .Values.proxy.component }}"
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.proxy "root" .) }}"
imagePullPolicy: {{ .Values.images.proxy.pullPolicy }}
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.proxy "root" .) }}"
{{- if .Values.proxy.probe.liveness.enabled }}
livenessProbe:
httpGet:
@@ -180,12 +222,17 @@ spec:
{{- end }}
command: ["sh", "-c"]
args:
- >
- |
{{- if .Values.proxy.additionalCommand }}
{{ .Values.proxy.additionalCommand }}
{{- end }}
{{- if .Values.tls.proxy.cacerts.enabled }}
cd /pulsar/certs/cacerts;
nohup /pulsar/bin/certs-combine-pem-infinity.sh /pulsar/certs/cacerts/ca-combined.pem {{ template "pulsar.certs.cacerts" (dict "certs" .Values.tls.proxy.cacerts.certs) }} > /pulsar/certs/cacerts/certs-combine-pem-infinity.log 2>&1 &
cd /pulsar;
{{- end }}
bin/apply-config-from-env.py conf/proxy.conf &&
echo "OK" > status &&
echo "OK" > "${statusFilePath:-status}" &&
OPTS="${OPTS} -Dlog4j2.formatMsgNoLookups=true" exec bin/pulsar proxy
ports:
# prometheus needs to access /metrics endpoint
@@ -201,13 +248,9 @@ spec:
- name: "sts-{{ .Values.tlsPrefix }}pulsarssl"
containerPort: {{ .Values.proxy.ports.pulsarssl }}
{{- end }}
{{- if and (semverCompare "<1.25-0" .Capabilities.KubeVersion.Version) .Values.rbac.enabled .Values.rbac.psp }}
securityContext:
readOnlyRootFilesystem: false
{{- end }}
{{- if .Values.proxy.extreEnvs }}
{{- if .Values.proxy.extraEnvs }}
env:
{{ toYaml .Values.proxy.extreEnvs | indent 8 }}
{{ toYaml .Values.proxy.extraEnvs | indent 8 }}
{{- end }}
envFrom:
- configMapRef:
@@ -215,7 +258,7 @@ spec:
{{- if or .Values.proxy.extraVolumeMounts .Values.auth.authentication.enabled (and .Values.tls.enabled (or .Values.tls.proxy.enabled .Values.tls.broker.enabled)) }}
volumeMounts:
{{- if .Values.auth.authentication.enabled }}
{{- if eq .Values.auth.authentication.provider "jwt" }}
{{- if .Values.auth.authentication.jwt.enabled }}
- mountPath: "/pulsar/keys"
name: token-keys
readOnly: true
@@ -224,16 +267,7 @@
readOnly: true
{{- end }}
{{- end }}
{{- if .Values.tls.proxy.enabled }}
- mountPath: "/pulsar/certs/proxy"
name: proxy-certs
readOnly: true
{{- end}}
{{- if .Values.tls.enabled }}
- mountPath: "/pulsar/certs/ca"
name: ca
readOnly: true
{{- end}}
{{- include "pulsar.proxy.certs.volumeMounts" . | nindent 10 }}
{{- if .Values.proxy.extraVolumeMounts }}
{{ toYaml .Values.proxy.extraVolumeMounts | indent 10 }}
{{- end }}
@@ -245,7 +279,7 @@ spec:
{{ toYaml .Values.proxy.extraVolumes | indent 8 }}
{{- end }}
{{- if .Values.auth.authentication.enabled }}
{{- if eq .Values.auth.authentication.provider "jwt" }}
{{- if .Values.auth.authentication.jwt.enabled }}
- name: token-keys
secret:
{{- if not .Values.auth.authentication.jwt.usingSecretKey }}
@@ -270,21 +304,6 @@ spec:
path: proxy/token
{{- end}}
{{- end}}
{{- if .Values.tls.proxy.enabled }}
- name: ca
secret:
secretName: "{{ .Release.Name }}-{{ .Values.tls.ca_suffix }}"
items:
- key: ca.crt
path: ca.crt
- name: proxy-certs
secret:
secretName: "{{ .Release.Name }}-{{ .Values.tls.proxy.cert_name }}"
items:
- key: tls.crt
path: tls.crt
- key: tls.key
path: tls.key
{{- end}}
{{- include "pulsar.proxy.certs.volumes" . | nindent 8 }}
{{- end}}
{{- end }}
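The init containers in the statefulset above all follow one wait pattern: an outer `timeout` (via `command: ["timeout", "<seconds>", "sh", "-c"]`) bounds the whole command, while an inner `until` loop polls every few seconds. The broker check counts resolved `Name` lines from `nslookup`. A self-contained sketch of that loop, with a stub function standing in for `nslookup` so it can run anywhere:

```shell
# Bounded readiness wait: the chart wraps this whole script in an outer
# `timeout`, so a stuck loop fails the init container instead of hanging.
# lookup_stub stands in for: nslookup -timeout=10 <broker-service>
lookup_stub() { echo "Name: pulsar-broker"; }

brokerServiceNumber="$(lookup_stub | grep Name | wc -l)"
until [ "${brokerServiceNumber}" -ge 1 ]; do
  echo "broker service not found, retrying..."
  sleep 1
  brokerServiceNumber="$(lookup_stub | grep Name | wc -l)"
done
echo "broker ready"
```

The inner `timeout 15` wrappers added around the `zookeeper-shell` calls serve the same purpose one level down: a single hung probe can no longer consume the whole outer budget.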

charts/pulsar/templates/pulsar-cluster-initialize.yaml (116 changed lines, Normal file → Executable file)

@@ -17,12 +17,12 @@
# under the License.
#
{{- if or .Release.IsInstall .Values.initialize }}
{{- if or (and .Values.useReleaseStatus .Release.IsInstall) .Values.initialize }}
{{- if .Values.components.broker }}
apiVersion: batch/v1
kind: Job
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.pulsar_metadata.component }}"
name: {{ template "pulsar.fullname" . }}-{{ .Values.pulsar_metadata.component }}
namespace: {{ template "pulsar.namespace" . }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
@@ -30,10 +30,14 @@ metadata:
spec:
# This feature was previously behind a feature gate for several Kubernetes versions and will default to true in 1.23 and beyond
# https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/
{{- if .Values.job.ttl.enabled }}
ttlSecondsAfterFinished: {{ .Values.job.ttl.secondsAfterFinished }}
{{- if and .Values.job.ttl.enabled (semverCompare ">=1.23-0" .Capabilities.KubeVersion.Version) }}
ttlSecondsAfterFinished: {{ .Values.job.ttl.secondsAfterFinished | default 600 }}
{{- end }}
template:
metadata:
labels:
{{- include "pulsar.template.labels" . | nindent 8 }}
component: {{ .Values.pulsar_metadata.component }}
spec:
{{- include "pulsar.imagePullSecrets" . | nindent 6 }}
{{- if .Values.pulsar_metadata.nodeSelector }}
@@ -41,68 +45,97 @@ spec:
{{ toYaml .Values.pulsar_metadata.nodeSelector | indent 8 }}
{{- end }}
initContainers:
{{- if .Values.pulsar_metadata.configurationStore }}
- name: wait-cs-ready
{{- if .Values.tls.toolset.cacerts.enabled }}
- name: cacerts
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.pulsar_metadata.image "root" .) }}"
imagePullPolicy: {{ .Values.pulsar_metadata.image.pullPolicy }}
resources: {{ toYaml .Values.initContainer_resources.cs_ready | nindent 10 }}
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.pulsar_metadata.image "root" .) }}"
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["sh", "-c"]
args:
- >-
- |
bin/certs-combine-pem.sh /pulsar/certs/cacerts/ca-combined.pem {{ template "pulsar.certs.cacerts" (dict "certs" .Values.tls.toolset.cacerts.certs) }}
volumeMounts:
{{- include "pulsar.toolset.certs.volumeMounts" . | nindent 8 }}
{{- end }}
{{- if and .Values.components.zookeeper .Values.pulsar_metadata.waitZookeeperTimeout (gt (.Values.pulsar_metadata.waitZookeeperTimeout | int) 0) }}
{{- if .Values.pulsar_metadata.configurationStore }}
- name: wait-zk-cs-ready
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.pulsar_metadata.image "root" .) }}"
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.pulsar_metadata.image "root" .) }}"
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["timeout", "{{ .Values.pulsar_metadata.waitZookeeperTimeout }}", "sh", "-c"]
args:
- |
until nslookup {{ .Values.pulsar_metadata.configurationStore}}; do
sleep 3;
done;
{{- end }}
- name: wait-zookeeper-ready
- name: wait-zk-metastore-ready
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.pulsar_metadata.image "root" .) }}"
imagePullPolicy: {{ .Values.pulsar_metadata.image.pullPolicy }}
resources: {{ toYaml .Values.initContainer_resources.zookeeper_ready | nindent 10 }}
command: ["sh", "-c"]
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.pulsar_metadata.image "root" .) }}"
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["timeout", "{{ .Values.pulsar_metadata.waitZookeeperTimeout }}", "sh", "-c"]
args:
- >-
{{- if $zk:=.Values.pulsar_metadata.userProvidedZookeepers }}
- |
{{- if $zk := .Values.pulsar_metadata.userProvidedZookeepers }}
export PULSAR_MEM="-Xmx128M";
until bin/pulsar zookeeper-shell -server {{ $zk }} ls {{ or .Values.metadataPrefix "/" }}; do
until timeout 15 bin/pulsar zookeeper-shell -server {{ $zk }} ls {{ or .Values.metadataPrefix "/" }}; do
echo "user provided zookeepers {{ $zk }} are unreachable... check in 3 seconds ..." && sleep 3;
done;
{{ else }}
{{ else if .Values.components.zookeeper }}
until nslookup {{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }}-{{ add (.Values.zookeeper.replicaCount | int) -1 }}.{{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }}.{{ template "pulsar.namespace" . }}; do
sleep 3;
done;
{{- end}}
{{- end }}
{{- end }}
{{- if and .Values.components.oxia .Values.pulsar_metadata.waitOxiaTimeout (gt (.Values.pulsar_metadata.waitOxiaTimeout | int) 0) }}
- name: wait-oxia-ready
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.pulsar_metadata.image "root" .) }}"
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.pulsar_metadata.image "root" .) }}"
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["timeout", "{{ .Values.pulsar_metadata.waitOxiaTimeout }}", "sh", "-c"]
args:
- |
until nslookup {{ template "pulsar.oxia.server.service" . }}; do
sleep 3;
done;
{{- end }}
{{- if and .Values.pulsar_metadata.waitBookkeeperTimeout (gt (.Values.pulsar_metadata.waitBookkeeperTimeout | int) 0) }}
# This initContainer will wait for bookkeeper initnewcluster to complete
# before initializing pulsar metadata
- name: pulsar-bookkeeper-verify-clusterid
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.pulsar_metadata.image "root" .) }}"
imagePullPolicy: {{ .Values.pulsar_metadata.image.pullPolicy }}
resources: {{ toYaml .Values.initContainer_resources.verify_cluster_id | nindent 10 }}
command: ["sh", "-c"]
image: {{ template "pulsar.imageFullName" (dict "image" .Values.pulsar_metadata.image "root" .) }}
imagePullPolicy: {{ template "pulsar.imagePullPolicy" (dict "image" .Values.pulsar_metadata.image "root" .) }}
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["timeout", "{{ .Values.pulsar_metadata.waitBookkeeperTimeout }}", "sh", "-c"]
args:
- >
- |
bin/apply-config-from-env.py conf/bookkeeper.conf;
echo Default BOOKIE_MEM settings are set very high, which can cause the init container to fail.;
echo Setting the memory to a lower value to avoid OOM as operations below are not memory intensive.;
export BOOKIE_MEM="-Xmx128M";
{{- include "pulsar.toolset.zookeeper.tls.settings" . | nindent 10 }}
until bin/bookkeeper shell whatisinstanceid; do
until timeout 15 bin/bookkeeper shell whatisinstanceid; do
sleep 3;
done;
envFrom:
- configMapRef:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}"
name: {{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}
volumeMounts:
{{- include "pulsar.toolset.certs.volumeMounts" . | nindent 8 }}
{{- end }}
containers:
- name: "{{ template "pulsar.fullname" . }}-{{ .Values.pulsar_metadata.component }}"
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.pulsar_metadata.image "root" .) }}"
imagePullPolicy: {{ .Values.pulsar_metadata.image.pullPolicy }}
- name: {{ template "pulsar.fullname" . }}-{{ .Values.pulsar_metadata.component }}
image: {{ template "pulsar.imageFullName" (dict "image" .Values.pulsar_metadata.image "root" .) }}
imagePullPolicy: {{ template "pulsar.imagePullPolicy" (dict "image" .Values.pulsar_metadata.image "root" .) }}
{{- if .Values.pulsar_metadata.resources }}
resources:
{{ toYaml .Values.pulsar_metadata.resources | indent 10 }}
{{- end }}
command: ["sh", "-c"]
command: ["timeout", "{{ .Values.pulsar_metadata.initTimeout | default 60 }}", "sh", "-c"]
{{- if .Values.components.zookeeper }}
args:
- |
- | # Use the pipe character for the YAML multiline string. Workaround for kubernetes-sigs/kustomize#4201
{{- include "pulsar.toolset.zookeeper.tls.settings" . | nindent 12 }}
export PULSAR_MEM="-Xmx128M";
bin/pulsar initialize-cluster-metadata \
@@ -110,8 +143,7 @@ spec:
--zookeeper {{ template "pulsar.zookeeper.connect" . }}{{ .Values.metadataPrefix }} \
{{- if .Values.pulsar_metadata.configurationStore }}
--configuration-store {{ template "pulsar.configurationStore.connect" . }}{{ .Values.pulsar_metadata.configurationStoreMetadataPrefix }} \
{{- end }}
{{- if not .Values.pulsar_metadata.configurationStore }}
{{- else }}
--configuration-store {{ template "pulsar.zookeeper.connect" . }}{{ .Values.metadataPrefix }} \
{{- end }}
--web-service-url http://{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}.{{ template "pulsar.namespace" . }}.svc.{{ .Values.clusterDomain }}:{{ .Values.broker.ports.http }}/ \
@@ -121,10 +153,26 @@ spec:
{{- if .Values.extraInitCommand }}
{{ .Values.extraInitCommand }}
{{- end }}
{{- else if .Values.components.oxia }}
args:
- | # Use the pipe character for the YAML multiline string. Workaround for kubernetes-sigs/kustomize#4201
export PULSAR_MEM="-Xmx128M";
bin/pulsar initialize-cluster-metadata \
--cluster {{ template "pulsar.cluster.name" . }} \
--metadata-store "{{ template "pulsar.oxia.metadata.url.broker" . }}" \
--configuration-store "{{ template "pulsar.oxia.metadata.url.broker" . }}" \
--web-service-url http://{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}.{{ template "pulsar.namespace" . }}.svc.{{ .Values.clusterDomain }}:{{ .Values.broker.ports.http }}/ \
--web-service-url-tls https://{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}.{{ template "pulsar.namespace" . }}.svc.{{ .Values.clusterDomain }}:{{ .Values.broker.ports.https }}/ \
--broker-service-url pulsar://{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}.{{ template "pulsar.namespace" . }}.svc.{{ .Values.clusterDomain }}:{{ .Values.broker.ports.pulsar }}/ \
--broker-service-url-tls pulsar+ssl://{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}.{{ template "pulsar.namespace" . }}.svc.{{ .Values.clusterDomain }}:{{ .Values.broker.ports.pulsarssl }}/ ;
{{- if .Values.extraInitCommand }}
{{ .Values.extraInitCommand }}
{{- end }}
{{- end }}
volumeMounts:
{{- include "pulsar.toolset.certs.volumeMounts" . | nindent 8 }}
{{- include "pulsar.toolset.certs.volumeMounts" . | nindent 10 }}
volumes:
{{- include "pulsar.toolset.certs.volumes" . | nindent 6 }}
{{- include "pulsar.toolset.certs.volumes" . | nindent 8 }}
restartPolicy: OnFailure
{{- if .Values.pulsar_metadata.nodeSelector }}
nodeSelector:


@@ -17,23 +17,36 @@
# under the License.
#
{{- if and (or .Values.components.pulsar_manager .Values.extra.pulsar_manager) (not .Values.pulsar_manager.existingSecretName) }}
{{- if and .Values.components.pulsar_manager ( not .Values.pulsar_manager.admin.existingSecret ) }}
apiVersion: v1
kind: Secret
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.pulsar_manager.component }}-secret"
namespace: {{ template "pulsar.namespace" . }}
labels:
app: {{ template "pulsar.name" . }}
chart: {{ template "pulsar.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.pulsar_manager.component }}
cluster: {{ template "pulsar.fullname" . }}
"helm.sh/resource-policy": "keep" # do not remove when uninstalling to keep it for next install
type: Opaque
data:
{{- if .Values.pulsar_manager.admin}}
PULSAR_MANAGER_ADMIN_PASSWORD: {{ .Values.pulsar_manager.admin.password | default "pulsar" | b64enc }}
PULSAR_MANAGER_ADMIN_USER: {{ .Values.pulsar_manager.admin.user | default "pulsar" | b64enc }}
{{- end }}
{{/* https://itnext.io/manage-auto-generated-secrets-in-your-helm-charts-5aee48ba6918 */}}
{{- $namespace := include "pulsar.namespace" . -}}
{{- $fullname := include "pulsar.fullname" . -}}
{{- $secretName := printf "%s-%s-secret" $fullname .Values.pulsar_manager.component -}}
{{- $secretObj := lookup "v1" "Secret" $namespace $secretName | default dict }}
{{- $secretData := (get $secretObj "data") | default dict }}
{{- $ui_user := ((get $secretData "UI_USERNAME") | b64dec) | default (.Values.pulsar_manager.admin.ui_username) | default ("pulsar") | b64enc }}
{{- $ui_password := ((get $secretData "UI_PASSWORD") | b64dec) | default (.Values.pulsar_manager.admin.ui_password) | default (randAlphaNum 32) | b64enc }}
UI_USERNAME: {{ $ui_user | quote }}
UI_PASSWORD: {{ $ui_password | quote }}
{{- $db_user := ((get $secretData "DB_USERNAME") | b64dec) | default (.Values.pulsar_manager.admin.db_username) | default ("pulsar") | b64enc }}
{{- $db_password := ((get $secretData "DB_PASSWORD") | b64dec) | default (.Values.pulsar_manager.admin.db_password) | default (randAlphaNum 32) | b64enc }}
DB_USERNAME: {{ $db_user | quote }}
DB_PASSWORD: {{ $db_password | quote }}
{{- end }}
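The `lookup` pattern above keeps generated credentials stable across upgrades: a value already stored in the live Secret wins, then an explicit values.yaml override, then a random default. A rough shell sketch of that precedence (variable names are illustrative, not part of the chart):

```shell
# Precedence sketch: existing Secret value -> values.yaml override -> random default.
existing=""    # would come from the live Secret's decoded UI_PASSWORD, if any
override=""    # would come from .Values.pulsar_manager.admin.ui_password
random=$(head -c 64 /dev/urandom | base64 | tr -dc 'A-Za-z0-9' | cut -c1-32)
password="${existing:-${override:-$random}}"
# Like the template, the chosen value is stored base64-encoded in the Secret.
encoded=$(printf '%s' "$password" | base64)
printf '%s\n' "$encoded"
```

Because the existing value is consulted first, re-running `helm upgrade` never rotates a password that clients already use.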


@ -0,0 +1,188 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
{{- if or (and .Values.useReleaseStatus .Release.IsInstall) .Values.initialize }}
{{- if .Values.components.pulsar_manager }}
apiVersion: batch/v1
kind: Job
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.pulsar_manager.component }}-init"
namespace: {{ template "pulsar.namespace" . }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.pulsar_manager.component }}-init
spec:
{{- if and .Values.job.ttl.enabled (semverCompare ">=1.23-0" .Capabilities.KubeVersion.Version) }}
ttlSecondsAfterFinished: {{ .Values.job.ttl.secondsAfterFinished | default 600 }}
{{- end }}
template:
metadata:
labels:
{{- include "pulsar.template.labels" . | nindent 8 }}
component: {{ .Values.pulsar_manager.component }}-init
spec:
{{- include "pulsar.imagePullSecrets" . | nindent 6 }}
nodeSelector:
{{- if .Values.pulsar_metadata.nodeSelector }}
{{ toYaml .Values.pulsar_metadata.nodeSelector | indent 8 }}
{{- end }}
tolerations:
{{- if .Values.pulsar_metadata.tolerations }}
{{ toYaml .Values.pulsar_metadata.tolerations | indent 8 }}
{{- end }}
restartPolicy: OnFailure
initContainers:
- name: wait-pulsar-manager-ready
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.pulsar_metadata.image "root" .) }}"
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.pulsar_metadata.image "root" .) }}"
resources: {{ toYaml .Values.initContainer.resources | nindent 12 }}
command: [ "sh", "-c" ]
args:
- |
ADMIN_URL={{ template "pulsar.fullname" . }}-{{ .Values.pulsar_manager.component }}-admin:{{ .Values.pulsar_manager.adminService.port }}
until $(curl -sS --fail -X GET http://${ADMIN_URL} > /dev/null 2>&1); do
sleep 3;
done;
# This init container will wait for at least one broker to be ready before
# initializing the pulsar-manager
{{- if .Values.components.broker }}
- name: wait-broker-ready
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.proxy "root" .) }}"
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.pulsar_metadata.image "root" .) }}"
resources: {{ toYaml .Values.initContainer.resources | nindent 12 }}
command: [ "sh", "-c" ]
args:
- |
set -e;
brokerServiceNumber="$(nslookup -timeout=10 {{ template "pulsar.fullname" . }}-{{ .Values.broker.component }} | grep Name | wc -l)";
until [ ${brokerServiceNumber} -ge 1 ]; do
echo "pulsar cluster {{ template "pulsar.cluster.name" . }} isn't initialized yet ... check in 10 seconds ...";
sleep 10;
brokerServiceNumber="$(nslookup -timeout=10 {{ template "pulsar.fullname" . }}-{{ .Values.broker.component }} | grep Name | wc -l)";
done;
{{- end }}
containers:
- name: "{{ template "pulsar.fullname" . }}-{{ .Values.pulsar_manager.component }}-init"
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.pulsar_metadata.image "root" .) }}"
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.pulsar_metadata.image "root" .) }}"
{{- if .Values.pulsar_metadata.resources }}
resources: {{ toYaml .Values.pulsar_metadata.resources | nindent 12 }}
{{- end }}
command: [ "sh", "-c" ]
args:
- |
cd /tmp
ADMIN_URL={{ template "pulsar.fullname" . }}-{{ .Values.pulsar_manager.component }}-admin:{{ .Values.pulsar_manager.adminService.port }}
CSRF_TOKEN=$(curl http://${ADMIN_URL}/pulsar-manager/csrf-token)
UI_URL={{ template "pulsar.fullname" . }}-{{ .Values.pulsar_manager.component }}:{{ .Values.pulsar_manager.service.port }}
{{/* check whether the account already exists */}}
LOGIN_REPLY=$(curl -v \
-X POST http://${UI_URL}/pulsar-manager/login \
-H 'Accept: application/json, text/plain, */*' \
-H 'Content-Type: application/json' \
-H "X-XSRF-TOKEN: $CSRF_TOKEN" \
-H "Cookie: XSRF-TOKEN=$CSRF_TOKEN" \
-sS -D headers.txt \
-d '{"username": "'${USERNAME}'", "password": "'${PASSWORD}'"}')
echo "$LOGIN_REPLY"
if [ -n "$(echo "$LOGIN_REPLY" | grep 'success')" ]; then
echo "account already exists"
else
echo "creating account"
{{/* set admin credentials */}}
curl -v \
-X PUT http://${ADMIN_URL}/pulsar-manager/users/superuser \
-H "X-XSRF-TOKEN: $CSRF_TOKEN" \
-H "Cookie: XSRF-TOKEN=$CSRF_TOKEN;" \
-H 'Content-Type: application/json' \
-d '{"name": "'"${USERNAME}"'", "password": "'"${PASSWORD}"'", "description": "Helm-managed Admin Account", "email": "'"${USERNAME}"'@pulsar.org"}'
{{/* login as admin */}}
LOGIN_REPLY=$(curl -v \
-X POST http://${UI_URL}/pulsar-manager/login \
-H 'Accept: application/json, text/plain, */*' \
-H 'Content-Type: application/json' \
-H "X-XSRF-TOKEN: $CSRF_TOKEN" \
-H "Cookie: XSRF-TOKEN=$CSRF_TOKEN" \
-sS -D headers.txt \
-d '{"username": "'${USERNAME}'", "password": "'${PASSWORD}'"}')
echo "$LOGIN_REPLY"
fi
{{- if .Values.components.broker }}
LOGIN_TOKEN=$(grep "token:" headers.txt | sed 's/^.*: //')
LOGIN_JSESSSIONID=$(grep -o "JSESSIONID=[a-zA-Z0-9_]*" headers.txt | sed 's/^.*=//')
{{/* create environment */}}
{{- if or (not .Values.tls.enabled) (not .Values.tls.broker.enabled) }}
BROKER_URL="http://{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}:{{ .Values.broker.ports.http }}"
{{- else }}
BROKER_URL="https://{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}:{{ .Values.broker.ports.https }}"
{{- end }}
BOOKIE_URL="http://{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}:{{ .Values.bookkeeper.ports.http }}"
echo '{ "name": "{{ template "pulsar.fullname" . }}", "broker": "'$BROKER_URL'", "bookie": "'$BOOKIE_URL'"}'
ENVIRONMENT_REPLY=$(curl -v \
-X PUT http://${UI_URL}/pulsar-manager/environments/environment \
-H 'Content-Type: application/json' \
-H "token: $LOGIN_TOKEN" \
-H "X-XSRF-TOKEN: $CSRF_TOKEN" \
-H "username: $USERNAME" \
-H "Cookie: XSRF-TOKEN=$CSRF_TOKEN; JSESSIONID=$LOGIN_JSESSSIONID;" \
-d '{ "name": "{{ template "pulsar.fullname" . }}", "broker": "'$BROKER_URL'", "bookie": "'$BOOKIE_URL'"}')
echo "$ENVIRONMENT_REPLY"
if [ -n "$(echo "$ENVIRONMENT_REPLY" | grep -e 'success' -e 'exist')" ]; then
echo "Successfully created / found existing environment"
exit 0
else
echo "Error creating environment"
exit 1
fi
{{- else }}
if [ -n "$(echo "$LOGIN_REPLY" | grep 'success')" ]; then
echo "Successfully created / found existing account"
exit 0
else
echo "Error creating account"
exit 1
fi
{{- end }}
env:
- name: USERNAME
valueFrom:
secretKeyRef:
{{- if .Values.pulsar_manager.admin.existingSecret }}
name: {{ .Values.pulsar_manager.admin.existingSecret | quote }}
{{- else }}
name: "{{ template "pulsar.fullname" . }}-{{ .Values.pulsar_manager.component }}-secret"
{{- end }}
key: UI_USERNAME
- name: PASSWORD
valueFrom:
secretKeyRef:
{{- if .Values.pulsar_manager.admin.existingSecret }}
name: {{ .Values.pulsar_manager.admin.existingSecret | quote }}
{{- else }}
name: "{{ template "pulsar.fullname" . }}-{{ .Values.pulsar_manager.component }}-secret"
{{- end }}
key: UI_PASSWORD
{{- end }}
{{- end }}
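The job above extracts its session credentials from the response headers captured by `curl -sS -D headers.txt`. With a hypothetical headers file shaped like pulsar-manager's login response, the two `grep`/`sed` lines behave like this:

```shell
# Hypothetical headers.txt, shaped like the login response the job captures
cat > headers.txt <<'EOF'
HTTP/1.1 200 OK
token: 3fa85f64
Set-Cookie: JSESSIONID=node0abc_1; Path=/
EOF
# Same extraction commands as in the init job
LOGIN_TOKEN=$(grep "token:" headers.txt | sed 's/^.*: //')
LOGIN_JSESSSIONID=$(grep -o "JSESSIONID=[a-zA-Z0-9_]*" headers.txt | sed 's/^.*=//')
echo "$LOGIN_TOKEN"
echo "$LOGIN_JSESSSIONID"
```

Both values are then replayed as the `token:` header and `JSESSIONID` cookie on the environment-creation request.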


@ -17,7 +17,7 @@
# under the License.
#
{{- if .Values.components.pulsar_manager }}
apiVersion: v1
kind: ConfigMap
metadata:
@ -27,5 +27,18 @@ metadata:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.pulsar_manager.component }}
data:
PULSAR_CLUSTER: {{ template "pulsar.fullname" . }}
PULSAR_MANAGER_OPTS: "-Dlog4j2.formatMsgNoLookups=true"
{{- if .Values.auth.authentication.enabled }}
# auth
{{- if .Values.auth.authentication.jwt.enabled }}
{{- if .Values.auth.authentication.jwt.usingSecretKey }}
SECRET_KEY: "file:///pulsar-manager/keys/token/secret.key"
{{- else }}
PRIVATE_KEY: "file:///pulsar-manager/keys/token/private.key"
PUBLIC_KEY: "file:///pulsar-manager/keys/token/public.key"
{{- end }}
{{- end }}
{{- end }}
{{ toYaml .Values.pulsar_manager.configData | indent 2}}
{{- end }}


@ -1,101 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
{{- if or .Values.components.pulsar_manager .Values.extra.pulsar_manager }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.pulsar_manager.component }}"
namespace: {{ template "pulsar.namespace" . }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.pulsar_manager.component }}
spec:
replicas: 1
selector:
matchLabels:
{{- include "pulsar.matchLabels" . | nindent 6 }}
component: {{ .Values.pulsar_manager.component }}
template:
metadata:
labels:
{{- include "pulsar.template.labels" . | nindent 8 }}
component: {{ .Values.pulsar_manager.component }}
annotations:
{{- if .Values.pulsar_manager.restartPodsOnConfigMapChange }}
checksum/config: {{ include (print $.Template.BasePath "/pulsar-manager-configmap.yaml") . | sha256sum }}
{{- end }}
{{- with .Values.pulsar_manager.annotations }}
{{ toYaml . | indent 8 }}
{{- end }}
spec:
{{- if .Values.pulsar_manager.nodeSelector }}
nodeSelector:
{{ toYaml .Values.pulsar_manager.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.pulsar_manager.tolerations }}
tolerations:
{{ toYaml .Values.pulsar_manager.tolerations | indent 8 }}
{{- end }}
terminationGracePeriodSeconds: {{ .Values.pulsar_manager.gracePeriod }}
containers:
- name: "{{ template "pulsar.fullname" . }}-{{ .Values.pulsar_manager.component }}"
image: "{{ .Values.images.pulsar_manager.repository }}:{{ .Values.images.pulsar_manager.tag }}"
imagePullPolicy: {{ .Values.images.pulsar_manager.pullPolicy }}
{{- if .Values.pulsar_manager.resources }}
resources:
{{ toYaml .Values.pulsar_manager.resources | indent 12 }}
{{- end }}
ports:
- containerPort: {{ .Values.pulsar_manager.service.targetPort }}
volumeMounts:
- name: pulsar-manager-data
mountPath: /data
envFrom:
- configMapRef:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.pulsar_manager.component }}"
env:
- name: PULSAR_CLUSTER
value: {{ template "pulsar.fullname" . }}
- name: USERNAME
valueFrom:
secretKeyRef:
key: PULSAR_MANAGER_ADMIN_USER
{{- if .Values.pulsar_manager.existingSecretName }}
name: "{{ .Values.pulsar_manager.existingSecretName }}"
{{- else }}
name: "{{ template "pulsar.fullname" . }}-{{ .Values.pulsar_manager.component }}-secret"
{{- end }}
- name: PASSWORD
valueFrom:
secretKeyRef:
key: PULSAR_MANAGER_ADMIN_PASSWORD
{{- if .Values.pulsar_manager.existingSecretName }}
name: "{{ .Values.pulsar_manager.existingSecretName }}"
{{- else }}
name: "{{ template "pulsar.fullname" . }}-{{ .Values.pulsar_manager.component }}-secret"
{{- end }}
- name: PULSAR_MANAGER_OPTS
value: "$(PULSAR_MANAGER_OPTS) -Dlog4j2.formatMsgNoLookups=true"
{{- include "pulsar.imagePullSecrets" . | nindent 6}}
volumes:
- name: pulsar-manager-data
emptyDir: {}
{{- end }}


@ -55,7 +55,7 @@ spec:
serviceName: "{{ template "pulsar.fullname" . }}-{{ .Values.pulsar_manager.component }}"
servicePort: {{ .Values.pulsar_manager.service.targetPort }}
{{- else }}
pathType: {{ .Values.pulsar_manager.ingress.pathType }}
backend:
service:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.pulsar_manager.component }}"


@ -17,7 +17,7 @@
# under the License.
#
{{- if .Values.components.pulsar_manager }}
apiVersion: v1
kind: Service
metadata:
@ -26,13 +26,18 @@ metadata:
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.pulsar_manager.component }}
{{- with .Values.pulsar_manager.service.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
type: {{ .Values.pulsar_manager.service.type }}
{{- if .Values.pulsar_manager.service.externalTrafficPolicy }}
externalTrafficPolicy: {{ .Values.pulsar_manager.service.externalTrafficPolicy }}
{{- end }}
{{- with .Values.pulsar_manager.service.loadBalancerClass }}
loadBalancerClass: {{ . }}
{{- end }}
{{- if .Values.pulsar_manager.service.loadBalancerSourceRanges }}
loadBalancerSourceRanges: {{ toYaml .Values.pulsar_manager.service.loadBalancerSourceRanges | nindent 4 }}
{{- end }}
@ -44,8 +49,30 @@ spec:
selector:
{{- include "pulsar.matchLabels" . | nindent 4 }}
component: {{ .Values.pulsar_manager.component }}
---
apiVersion: v1
kind: Service
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.pulsar_manager.component }}-admin"
namespace: {{ template "pulsar.namespace" . }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.pulsar_manager.component }}
{{- with .Values.pulsar_manager.adminService.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
type: {{ .Values.pulsar_manager.adminService.type }}
ports:
- port: {{ .Values.pulsar_manager.adminService.port }}
targetPort: {{ .Values.pulsar_manager.adminService.targetPort }}
protocol: TCP
selector:
{{- include "pulsar.matchLabels" . | nindent 4 }}
component: {{ .Values.pulsar_manager.component }}
{{- end }}


@ -0,0 +1,174 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
{{- if .Values.components.pulsar_manager }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.pulsar_manager.component }}"
namespace: {{ template "pulsar.namespace" . }}
annotations: {{ .Values.pulsar_manager.appAnnotations | toYaml | nindent 4 }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.pulsar_manager.component }}
spec:
serviceName: "{{ template "pulsar.fullname" . }}-{{ .Values.pulsar_manager.component }}"
replicas: 1
selector:
matchLabels:
{{- include "pulsar.matchLabels" . | nindent 6 }}
component: {{ .Values.pulsar_manager.component }}
template:
metadata:
labels:
{{- include "pulsar.template.labels" . | nindent 8 }}
component: {{ .Values.pulsar_manager.component }}
annotations:
{{- if .Values.pulsar_manager.restartPodsOnConfigMapChange }}
checksum/config: {{ include (print $.Template.BasePath "/pulsar-manager-configmap.yaml") . | sha256sum }}
{{- end }}
{{- with .Values.pulsar_manager.annotations }}
{{ toYaml . | indent 8 }}
{{- end }}
spec:
{{- if .Values.pulsar_manager.nodeSelector }}
nodeSelector:
{{ toYaml .Values.pulsar_manager.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.pulsar_manager.tolerations }}
tolerations:
{{ toYaml .Values.pulsar_manager.tolerations | indent 8 }}
{{- end }}
{{- if .Values.pulsar_manager.topologySpreadConstraints }}
topologySpreadConstraints:
{{- toYaml .Values.pulsar_manager.topologySpreadConstraints | nindent 8 }}
{{- end }}
terminationGracePeriodSeconds: {{ .Values.pulsar_manager.gracePeriod }}
{{- if .Values.pulsar_manager.initContainers }}
initContainers:
{{- toYaml .Values.pulsar_manager.initContainers | nindent 6 }}
{{- end }}
containers:
- name: "{{ template "pulsar.fullname" . }}-{{ .Values.pulsar_manager.component }}"
image: "{{ .Values.images.pulsar_manager.repository }}:{{ .Values.images.pulsar_manager.tag }}"
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.pulsar_manager "root" .) }}"
{{- if .Values.pulsar_manager.resources }}
resources:
{{ toYaml .Values.pulsar_manager.resources | indent 12 }}
{{- end }}
ports:
- containerPort: {{ .Values.pulsar_manager.service.targetPort }}
- containerPort: {{ .Values.pulsar_manager.adminService.targetPort }}
volumeMounts:
- name: "{{ template "pulsar.fullname" . }}-{{ .Values.pulsar_manager.component }}-{{ .Values.pulsar_manager.volumes.data.name }}"
mountPath: /data
{{- if .Values.pulsar_manager.extraVolumeMounts }}
{{ toYaml .Values.pulsar_manager.extraVolumeMounts | indent 10 }}
{{- end }}
{{- if .Values.auth.authentication.enabled }}
{{- if .Values.auth.authentication.jwt.enabled }}
- name: pulsar-manager-keys
mountPath: /pulsar-manager/keys
{{- end }}
{{- end }}
envFrom:
- configMapRef:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.pulsar_manager.component }}"
env:
- name: USERNAME
valueFrom:
secretKeyRef:
{{- if .Values.pulsar_manager.admin.existingSecret }}
name: {{ .Values.pulsar_manager.admin.existingSecret | quote }}
{{- else }}
name: "{{ template "pulsar.fullname" . }}-{{ .Values.pulsar_manager.component }}-secret"
{{- end }}
key: DB_USERNAME
- name: PASSWORD
valueFrom:
secretKeyRef:
{{- if .Values.pulsar_manager.admin.existingSecret }}
name: {{ .Values.pulsar_manager.admin.existingSecret | quote }}
{{- else }}
name: "{{ template "pulsar.fullname" . }}-{{ .Values.pulsar_manager.component }}-secret"
{{- end }}
key: DB_PASSWORD
{{- if .Values.auth.authentication.enabled }}
{{- if .Values.auth.authentication.jwt.enabled }}
{{- if .Values.auth.superUsers.manager }}
- name: JWT_TOKEN
valueFrom:
secretKeyRef:
key: TOKEN
name: "{{ .Release.Name }}-token-{{ .Values.auth.superUsers.manager }}"
{{- end }}
{{- end }}
{{- end }}
{{- include "pulsar.imagePullSecrets" . | nindent 6}}
volumes:
{{- if .Values.pulsar_manager.extraVolumes }}
{{ toYaml .Values.pulsar_manager.extraVolumes | indent 8 }}
{{- end }}
{{- if .Values.auth.authentication.enabled }}
{{- if .Values.auth.authentication.jwt.enabled }}
- name: pulsar-manager-keys
secret:
defaultMode: 420
{{- if .Values.auth.authentication.jwt.usingSecretKey }}
secretName: "{{ .Release.Name }}-token-symmetric-key"
{{- else }}
secretName: "{{ .Release.Name }}-token-asymmetric-key"
{{- end }}
items:
{{- if .Values.auth.authentication.jwt.usingSecretKey }}
- key: SECRETKEY
path: token/secret.key
{{- else }}
- key: PRIVATEKEY
path: token/private.key
- key: PUBLICKEY
path: token/public.key
{{- end }}
{{- end }}
{{- end }}
{{- if not (and (and .Values.persistence .Values.volumes.persistence) .Values.pulsar_manager.volumes.persistence) }}
- name: "{{ template "pulsar.fullname" . }}-{{ .Values.pulsar_manager.component }}-{{ .Values.pulsar_manager.volumes.data.name }}"
emptyDir: {}
{{- end }}
{{- if and (and .Values.persistence .Values.volumes.persistence) .Values.pulsar_manager.volumes.persistence }}
volumeClaimTemplates:
- metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.pulsar_manager.component }}-{{ .Values.pulsar_manager.volumes.data.name }}"
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: {{ .Values.pulsar_manager.volumes.data.size }}
{{- if .Values.pulsar_manager.volumes.data.storageClassName }}
storageClassName: "{{ .Values.pulsar_manager.volumes.data.storageClassName }}"
{{- else if and .Values.volumes.local_storage .Values.pulsar_manager.volumes.data.local_storage }}
storageClassName: "local-storage"
{{- end }}
{{- with .Values.pulsar_manager.volumes.data.selector }}
selector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
{{- end }}


@ -24,17 +24,20 @@ kind: Issuer
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.certs.internal_issuer.component }}"
namespace: {{ template "pulsar.namespace" . }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
spec:
selfSigned: {}
---
apiVersion: "{{ .Values.certs.internal_issuer.apiVersion }}"
kind: Certificate
metadata:
name: "{{ template "pulsar.fullname" . }}-ca"
namespace: {{ template "pulsar.namespace" . }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
spec:
secretName: "{{ template "pulsar.certs.issuers.ca.secretName" . }}"
commonName: "{{ template "pulsar.namespace" . }}.svc.{{ .Values.clusterDomain }}"
duration: "{{ .Values.certs.internal_issuer.duration }}"
renewBefore: "{{ .Values.certs.internal_issuer.renewBefore }}"
@ -51,14 +54,15 @@ spec:
# if you are using an external issuer, change this to that issuer group.
group: cert-manager.io
---
{{- end }}
apiVersion: "{{ .Values.certs.internal_issuer.apiVersion }}"
kind: Issuer
metadata:
name: "{{ template "pulsar.certs.issuers.ca.name" . }}"
namespace: {{ template "pulsar.namespace" . }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
spec:
ca:
secretName: "{{ template "pulsar.certs.issuers.ca.secretName" . }}"
{{- end }}

View File

@ -18,260 +18,30 @@
#
{{- if .Values.tls.enabled }}
{{- if .Values.certs.internal_issuer.enabled }}
{{- if .Values.tls.proxy.createCert }}
{{ include "pulsar.cert.template" (dict "root" . "componentConfig" .Values.proxy "tlsConfig" .Values.tls.proxy) }}
---
{{- end }}
{{- if or .Values.tls.broker.enabled (or .Values.tls.bookie.enabled .Values.tls.zookeeper.enabled) }}
{{ include "pulsar.cert.template" (dict "root" . "componentConfig" .Values.broker "tlsConfig" .Values.tls.broker) }}
---
{{- end }}
{{- if or .Values.tls.bookie.enabled .Values.tls.zookeeper.enabled }}
{{ include "pulsar.cert.template" (dict "root" . "componentConfig" .Values.bookkeeper "tlsConfig" .Values.tls.bookie) }}
---
{{- end }}
{{- if .Values.tls.zookeeper.enabled }}
{{ include "pulsar.cert.template" (dict "root" . "componentConfig" .Values.autorecovery "tlsConfig" .Values.tls.autorecovery) }}
---
{{ include "pulsar.cert.template" (dict "root" . "componentConfig" .Values.toolset "tlsConfig" .Values.tls.toolset) }}
---
apiVersion: "{{ .Values.certs.internal_issuer.apiVersion }}"
kind: Certificate
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.tls.zookeeper.cert_name }}"
namespace: {{ template "pulsar.namespace" . }}
spec:
# Secret names are always required.
secretName: "{{ .Release.Name }}-{{ .Values.tls.zookeeper.cert_name }}"
duration: "{{ .Values.tls.common.duration }}"
renewBefore: "{{ .Values.tls.common.renewBefore }}"
subject:
organizations:
{{ toYaml .Values.tls.common.organization | indent 4 }}
# The use of the common name field has been deprecated since 2000 and is
# discouraged from being used.
commonName: "{{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }}"
isCA: false
privateKey:
size: {{ .Values.tls.common.keySize }}
algorithm: {{ .Values.tls.common.keyAlgorithm }}
encoding: {{ .Values.tls.common.keyEncoding }}
usages:
- server auth
- client auth
dnsNames:
{{- if .Values.tls.zookeeper.dnsNames }}
{{ toYaml .Values.tls.zookeeper.dnsNames | indent 4 }}
{{- end }}
- "*.{{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }}.{{ template "pulsar.namespace" . }}.svc.{{ .Values.clusterDomain }}"
- "{{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }}"
# Issuer references are always required.
issuerRef:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.certs.internal_issuer.component }}-ca-issuer"
# We can reference ClusterIssuers by changing the kind here.
# The default value is Issuer (i.e. a locally namespaced Issuer)
kind: Issuer
# This is optional since cert-manager will default to this value however
# if you are using an external issuer, change this to that issuer group.
group: cert-manager.io
{{ include "pulsar.cert.template" (dict "root" . "componentConfig" .Values.zookeeper "tlsConfig" .Values.tls.zookeeper) }}
{{- end }}
{{- end }}
{{- end }}
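The Certificate templates above read all of their knobs from values.yaml. A minimal sketch of the keys they reference (key paths mirror the `.Values` expressions in the templates; the concrete defaults shown are illustrative assumptions, not the chart's shipped values):

```yaml
# Hypothetical values fragment -- paths taken from the template
# references above; the literal values are examples only.
certs:
  internal_issuer:
    apiVersion: cert-manager.io/v1
    component: ca

tls:
  common:
    duration: 2160h        # total certificate lifetime
    renewBefore: 360h      # renew this long before expiry
    organization:
      - pulsar
    keySize: 4096
    keyAlgorithm: RSA      # cert-manager also supports ECDSA
    keyEncoding: PKCS8
  toolset:
    cert_name: tls-toolset
    dnsNames: []           # extra SANs, prepended to the generated ones
  zookeeper:
    cert_name: tls-zookeeper
    dnsNames: []
```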


@@ -36,7 +36,7 @@ data:
brokerServiceUrl: "pulsar+ssl://{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}:{{ .Values.broker.ports.pulsarssl }}/"
useTls: "true"
tlsAllowInsecureConnection: "false"
-  tlsTrustCertsFilePath: "/pulsar/certs/proxy-ca/ca.crt"
+  tlsTrustCertsFilePath: {{ ternary "/pulsar/certs/cacerts/ca-combined.pem" "/pulsar/certs/ca/ca.crt" .Values.tls.toolset.cacerts.enabled | quote }}
tlsEnableHostnameVerification: "false"
{{- end }}
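The replacement line leans on Sprig's `ternary`, which returns its first argument when the trailing condition is true and its second otherwise. Expanded into plain template logic, the new line is equivalent to this sketch:

```yaml
{{- if .Values.tls.toolset.cacerts.enabled }}
# combined PEM bundle of trusted CAs mounted by the cacerts feature
tlsTrustCertsFilePath: "/pulsar/certs/cacerts/ca-combined.pem"
{{- else }}
tlsTrustCertsFilePath: "/pulsar/certs/ca/ca.crt"
{{- end }}
```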
{{- if not (and .Values.tls.enabled .Values.tls.broker.enabled) }}
@@ -51,7 +51,7 @@ data:
brokerServiceUrl: "pulsar+ssl://{{ template "pulsar.fullname" . }}-{{ .Values.proxy.component }}:{{ .Values.proxy.ports.pulsarssl }}/"
useTls: "true"
tlsAllowInsecureConnection: "false"
-  tlsTrustCertsFilePath: "/pulsar/certs/proxy-ca/ca.crt"
+  tlsTrustCertsFilePath: {{ ternary "/pulsar/certs/cacerts/ca-combined.pem" "/pulsar/certs/ca/ca.crt" .Values.tls.toolset.cacerts.enabled | quote }}
tlsEnableHostnameVerification: "false"
{{- end }}
{{- if not (and .Values.tls.enabled .Values.tls.proxy.enabled) }}
@@ -61,7 +61,7 @@ data:
{{- end }}
# Authentication Settings
{{- if .Values.auth.authentication.enabled }}
-  {{- if eq .Values.auth.authentication.provider "jwt" }}
+  {{- if .Values.auth.authentication.jwt.enabled }}
authParams: "file:///pulsar/tokens/client/token"
authPlugin: "org.apache.pulsar.client.impl.auth.AuthenticationToken"
{{- end }}
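This hunk reflects the breaking change in the OpenID work: the JWT provider toggle moved from `auth.authentication.provider` to `auth.authentication.jwt.enabled`. A before/after values sketch:

```yaml
# Chart < 4.1.0
auth:
  authentication:
    enabled: true
    provider: jwt

# Chart >= 4.1.0
auth:
  authentication:
    enabled: true
    jwt:
      enabled: true
```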


@@ -1,85 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
{{- if and (semverCompare "<1.25-0" .Capabilities.KubeVersion.Version) .Values.rbac.enabled .Values.rbac.psp }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.toolset.component }}"
namespace: {{ template "pulsar.namespace" . }}
rules:
- apiGroups:
- policy
resourceNames:
- "{{ template "pulsar.fullname" . }}-{{ .Values.toolset.component }}"
resources:
- podsecuritypolicies
verbs:
- use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.toolset.component }}"
namespace: {{ template "pulsar.namespace" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: "{{ template "pulsar.fullname" . }}-{{ .Values.toolset.component }}"
subjects:
- kind: ServiceAccount
name: "{{ template "pulsar.fullname" . }}-{{ .Values.toolset.component }}"
namespace: {{ template "pulsar.namespace" . }}
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
{{- if .Values.rbac.limit_to_namespace }}
name: "{{ template "pulsar.fullname" . }}-{{ .Values.toolset.component }}-{{ template "pulsar.namespace" . }}"
{{- else}}
name: "{{ template "pulsar.fullname" . }}-{{ .Values.toolset.component }}"
{{- end}}
spec:
readOnlyRootFilesystem: false
privileged: false
allowPrivilegeEscalation: false
runAsUser:
rule: 'RunAsAny'
supplementalGroups:
ranges:
- max: 65535
min: 1
rule: MustRunAs
fsGroup:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
seLinux:
rule: 'RunAsAny'
volumes:
- configMap
- emptyDir
- projected
- secret
- downwardAPI
- persistentVolumeClaim
{{- end}}


@@ -26,8 +26,8 @@ metadata:
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.toolset.component }}
-  annotations:
{{- with .Values.toolset.service_account.annotations }}
+  annotations:
{{ toYaml . | indent 4 }}
{{- end }}
{{- end }}
