Compare commits


52 Commits

Author SHA1 Message Date
gulecroc
e8ab0c6ded
Feat/cacerts (#619) 2025-06-21 23:13:35 +03:00
Artem Nosulchyk
3e5c82c229
extra volume mounts for oxia coordinator (#618)
* extra volume mounts for oxia coordinator

* .

* .
2025-06-13 10:55:02 -07:00
Lari Hotari
7cd7078695
Add labels to all k8s objects (#617)
* Add labels to all k8s objects

* Add labels to initialization job pods
2025-06-09 21:27:23 +03:00
Lari Hotari
2d16ffefd4
Use PEM files directly as ZooKeeper keystore and truststore (#613) 2025-05-30 18:16:04 +03:00
Lari Hotari
fdcfe60fe9 Chart: Bump version to 4.1.0 2025-05-23 16:52:39 +03:00
gulecroc
1180db46cd
add template for ca issuer name and secret name (#565)
* set template for ca issuer name and secret name + geo-replication installation example

* remove geo-replication from this PR

* use certs template to define ca name and secret name

* Handle proxy, toolset and zookeeper in the same way as others

* Make the logic more consistent by separating the selfsigning issuer configuration

---------

Co-authored-by: GLECROC <guillaume.lecroc@cnp.fr>
Co-authored-by: Lari Hotari <lhotari@users.noreply.github.com>
Co-authored-by: Lari Hotari <lhotari@apache.org>
2025-05-23 16:22:17 +03:00
Lari Hotari
51a535d83d
Upgrade to Pulsar 4.0.5 (#612) 2025-05-23 15:28:31 +03:00
trynocoding
352ed0846b
Fix broker initialization error when using global Zookeeper (#602) (#603) 2025-05-21 12:20:41 +03:00
Bruno Domenici
a9f2ba76ae
OpenID: introducing support for OpenID configuration (#509)
* feat!(openid): introducing support for openid configuration

BREAKING CHANGE: provider configuration changed from auth.authentication.provider to auth.authentication.jwt.enabled

* add upgrading to 4.1.0

* add validation for deprecated values

* add openid CI with keycloak

* fix chart-testing lint new-line-at-end-of-file

* fix keycloak dependency repository

* fix keycloak repository

* fix yaml to json convert error

* disable keycloak to validate github actions before re-enable it

* disable openid test scenario

* disable keycloak in values

* enable keycloak without authentication and authorization

* add openid test scenario

* disable test scenario other than openid

* enable all test scenario

* disable functions component

* create openid resources

* test truncate command

* test truncate command

* change client_secret generator

* change client_secret generator

* test python

* fix script

* fix script

* print python result

* test python

* test python

* fix client_secret generation

* fix create openid resources

* fix secret name

* fix mount keycloak config

* fix keycloak service

* exclude keycloak from chart

* add license

* add license

* wait keycloak is alive

* fix keycloak chart install namespace

* add test pulsar real openid config

* fix keycloak issuer url

* fix pod name

* remove check keycloak alive

* check realm pulsar openid configuration

* change keycloak service

* remove test keyclock service

* remove selector to get all pod log

* wait keycloak is alive

* check keycloak realm pulsar urls

* wait until keycloak is ready

* add wait timeout

* fix realm pulsar name

* add log to debug

* add openid for toolset

* set authorization

* set authorization

* fix client template filename

* fix install keycloak

* disable authorization

* debug sub claim value

* fix sub claim value

* cleanup

* enable all build

---------

Co-authored-by: glecroc <guillaume.lecroc@cnp.fr>
2025-05-20 14:09:12 +03:00
Lari Hotari
52d3164b8d
Upgrade oxia image to 0.12.0 in default values.yaml (#611) 2025-05-20 03:29:49 -07:00
Artem Nosulchyk
9ddbf4bc86
extra containers and volumes for oxia coordinator (#609) 2025-05-20 13:13:07 +03:00
Artem Nosulchyk
fa1456ea4d
configurable oxia coordinator configmap and entrypoint (#606) 2025-05-19 16:16:40 +03:00
Artem Nosulchyk
8382906775
annotations (#610) 2025-05-13 16:35:44 -07:00
Austin Poole
57fa527b04
update nodeSelector for bookkeeper cluster initializer (#608) 2025-05-10 11:57:16 +03:00
Haim Kortovich
77ec4cedfb
Add appAnnotations for all statefulsets (#604) 2025-05-07 09:05:19 +03:00
Artem Nosulchyk
cd701ecedd
add support of extra volumes and mounts for autorecovery (#607) 2025-05-07 08:44:11 +03:00
Artem Nosulchyk
d4afc985d2
oxia components podmonitor match labels (#605) 2025-05-06 22:27:27 +03:00
Lari Hotari
7833e51c28 Chart: Bump version to 4.0.1 2025-04-15 11:05:33 +03:00
gulecroc
6e824f0c4e
Fix bookkeeper.extraVolumes (#596) 2025-04-15 01:04:10 -07:00
Lari Hotari
b703761a52
Upgrade Oxia to 0.11.15 (#600) 2025-04-15 00:50:32 -07:00
Lari Hotari
8d889eb971
Upgrade to Pulsar 4.0.4 (#599) 2025-04-15 00:24:48 -07:00
Lari Hotari
6ff77e8c65
Update RELEASE.md 2025-03-14 00:51:58 -07:00
Lari Hotari
e7b08065a1
Update RELEASE.md 2025-03-14 00:46:19 -07:00
Lari Hotari
3f75320f18 Update RELEASE.md 2025-03-11 02:44:10 +02:00
Lari Hotari
a30291e7df
Update RELEASE.md 2025-03-10 17:22:39 -07:00
Lari Hotari
20f7fc8d79 Update README 2025-03-11 02:19:27 +02:00
Lari Hotari
637cf11d1a
Fix Grafana dashboards for Broker with honorLabels, remove unnecessary *_created metrics and improve docs (#593)
* Drop _created metrics for broker and proxy

* Enable all metrics by default for broker

* change default dashboard

* Remove messy dashboards

* Enable default dashboards in Grafana

* Add testing values with more aggressive disk cleanup

* Add VictoriaMetrics debugging instructions

* Set honorLabels to true

* Document disabling monitoring

* Set password in testing values

* Fix linting issue detected by kubeconform
2025-03-10 16:46:28 -07:00
Lari Hotari
e6f05809bd
Migrate from kube-prometheus-metrics to victoria-metrics-k8s-stack (#592) 2025-03-08 16:36:41 -08:00
Lari Hotari
302db43e91
Remove PSP support (#591) 2025-03-08 12:00:35 -08:00
Lari Hotari
75119dd6d7
Remove Prometheus scrape annotations when podmonitors are enabled (#590) 2025-03-07 09:51:06 -08:00
Lari Hotari
6fe37a373f
Use bookkeeperMetadataServiceUri in broker and make PulsarMetadataClientDriver configurable (#589) 2025-03-07 09:24:03 -08:00
Lari Hotari
dd1325216f
Change Pulsar Proxy service load balancer type to ClusterIP (#588) 2025-03-06 05:03:42 -08:00
Lari Hotari
976ba92e3b
Test with k8s 1.32.2 and upgrade tool versions used in CI (#587)
- kind 0.22.0 -> 0.27.0
- test with k8s 1.32.2 instead of 1.29.2 to ensure compatibility with latest k8s release
- default helm version 3.14.4 -> 3.16.4
- chart releaser 1.6.0 -> 1.7.0
- ubuntu 22.04 -> 24.04
- chart testing 3.11.0 -> 3.12.0
- yamllint 1.33.0 -> 1.35.1
- yamale 4.0.4 -> 6.0.0
2025-03-05 23:50:44 -08:00
Lari Hotari
18c4cc5440 Add comment warning about enabling PulsarMetadataBookieDriver
- upgrade compatibility tests didn't pass with this setting, so more testing is needed
2025-03-06 09:49:56 +02:00
Lari Hotari
601e78d8a5
Add Broker Cache and Sockets dashboards (#586) 2025-03-05 23:24:19 -08:00
Lari Hotari
80999ff1d8
Use BookKeeper BP-29 metadataServiceUri to configure bookie metadata store, also when using Zookeeper (#585) 2025-03-05 23:24:07 -08:00
Lari Hotari
87b48d0610
Update RELEASE.md 2025-03-04 13:16:33 -08:00
Lari Hotari
9f61859d19
Use PIP-45 metadata store config to replace deprecated ZK config and make PulsarMetadataBookieDriver configurable in BK (#576) 2025-03-04 20:23:35 +02:00
Lari Hotari
a55b1bb560
Remove the dependency to pulsarctl when generating JWT tokens (#584) 2025-03-04 20:18:10 +02:00
Lari Hotari
43f8dfa04e
Revisit solution to configure Bookkeeper RocksDB settings - default to individual config files (#583) 2025-03-04 04:04:38 -08:00
Lari Hotari
f98ee7d69c
Replace ">" with "|" to avoid Go Yaml issue go-yaml/yaml#789 (#582) 2025-03-04 02:21:39 -08:00
Lari Hotari
589b0b1b24
Upgrade default cert-manager version to 1.12.16 (#581) 2025-03-04 01:02:25 -08:00
Lari Hotari
5c1b7a9288
Restore support for dbStorage_rocksDB_* settings defined in bookkeeper.configData (#580) 2025-03-03 22:05:59 -08:00
Lari Hotari
4bdf6d51eb
Improve kube-prometheus-stack config in values.yaml by adding missing key and some basic comments (#579)
* Enable prometheusOperator in CI test

* Add comments and add offloader dashboard
2025-03-03 11:09:25 -08:00
Lari Hotari
4de387e726
Workaround issue with Prometheus 3.0 and metrics (#577)
* Add "fallbackScrapeProtocol: PrometheusText0.0.4" to all pod monitors
2025-03-03 06:26:04 -08:00
Lari Hotari
492e273d82
Upgrade to kube-prometheus-stack 69.x including prometheus-operator 0.80.0 defaulting to Prometheus 3.x (#578)
* Upgrade to kube-prometheus-stack 67.x
  * Prometheus operator is upgraded to 0.80.0
  * Prometheus is upgraded from 2.55.0 to 3.2.1

* Enable pod monitors to test them

* Run linting with kube-prometheus-stack enabled

* Validate all CI configs
2025-03-03 05:49:03 -08:00
Lari Hotari
afca5aaf08
Upgrade to Pulsar 4.0.3 (#575) 2025-02-28 09:18:10 -08:00
Lari Hotari
4386eacba8
[fix] Fix broker service annotations issue and other annotations issues (#574)
* Fix broker services annotations issues

* Add annotations support to autorecovery.service

* Consistently use similar way to handle annotations

* Add autorecovery service annotations key to values.yaml
2025-02-28 09:17:54 -08:00
Lari Hotari
f928380124
Fix pulsar-cluster-initialize / pulsar-init rendering with kustomize (#572)
* Fix pulsar-cluster-initialize / pulsar-init rendering with kustomize

- reapply #166 changes that were reverted by #544 changes

* Add validation for kustomize output in CI
2025-02-19 00:46:24 -08:00
Philipp Dolif
ab46d2165e
Increase defaults for ensemble size, write quorum, and ack quorum to 2 (#570) 2025-02-18 22:27:34 -08:00
Alejandro Ramirez
0b6b03002c
Fix OOM issue on broker wait-zookeeper-ready initContainer (#568) 2025-02-18 22:26:39 -08:00
Lari Hotari
e55405cbe2 Improve RELEASE.md
- address word wrap issue in validation instructions
2025-01-20 19:22:51 +02:00
93 changed files with 4724 additions and 1822 deletions

File diff suppressed because it is too large.

@ -0,0 +1,73 @@
{
"clientId": $ARGS.named.CLIENT_ID,
"enabled": true,
"clientAuthenticatorType": "client-secret",
"secret": $ARGS.named.CLIENT_SECRET,
"standardFlowEnabled" : false,
"implicitFlowEnabled" : false,
"serviceAccountsEnabled": true,
"protocol": "openid-connect",
"attributes": {
"realm_client": "false",
"oidc.ciba.grant.enabled": "false",
"client.secret.creation.time": "1735689600",
"backchannel.logout.session.required": "true",
"standard.token.exchange.enabled": "false",
"frontchannel.logout.session.required": "true",
"oauth2.device.authorization.grant.enabled": "false",
"display.on.consent.screen": "false",
"backchannel.logout.revoke.offline.tokens": "false"
},
"protocolMappers": [
{
"name": "sub",
"protocol": "openid-connect",
"protocolMapper": "oidc-hardcoded-claim-mapper",
"consentRequired": false,
"config": {
"introspection.token.claim": "true",
"claim.value": $ARGS.named.SUB_CLAIM_VALUE,
"userinfo.token.claim": "true",
"id.token.claim": "true",
"lightweight.claim": "false",
"access.token.claim": "true",
"claim.name": "sub",
"jsonType.label": "String",
"access.tokenResponse.claim": "false"
}
},
{
"name": "nbf",
"protocol": "openid-connect",
"protocolMapper": "oidc-hardcoded-claim-mapper",
"consentRequired": false,
"config": {
"introspection.token.claim": "true",
"claim.value": "1735689600",
"userinfo.token.claim": "true",
"id.token.claim": "true",
"lightweight.claim": "false",
"access.token.claim": "true",
"claim.name": "nbf",
"jsonType.label": "long",
"access.tokenResponse.claim": "false"
}
}
],
"defaultClientScopes": [
"web-origins",
"service_account",
"acr",
"profile",
"roles",
"basic",
"email"
],
"optionalClientScopes": [
"address",
"phone",
"organization",
"offline_access",
"microprofile-jwt"
]
}


@ -0,0 +1,26 @@
# Keycloak
Keycloak is used to validate the OIDC configuration.
To create the pulsar realm configuration, we use:
* `0-realm-pulsar-partial-export.json`: after creating the pulsar realm in the Keycloak UI, this file is the result of a partial export (without options) from the Keycloak UI.
* `1-client-template.json`: the template used to create pulsar clients.
To create the final `realm-pulsar.json`, merge the files with the `jq` command:
* create a client with `CLIENT_ID`, `CLIENT_SECRET` and `SUB_CLAIM_VALUE`:
```
CLIENT_ID=xx
CLIENT_SECRET=yy
SUB_CLAIM_VALUE=zz
jq -n --arg CLIENT_ID "$CLIENT_ID" --arg CLIENT_SECRET "$CLIENT_SECRET" --arg SUB_CLAIM_VALUE "$SUB_CLAIM_VALUE" -f 1-client-template.json > client.json
```
* then merge the realm and the client:
```
jq '.clients += [input]' 0-realm-pulsar-partial-export.json client.json > realm-pulsar.json
```


@ -0,0 +1,34 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
tls:
enabled: false
# This block sets up an example Pulsar Realm
# https://www.keycloak.org/server/importExport#_importing_a_realm_from_a_directory
extraEnvVars:
- name: KEYCLOAK_EXTRA_ARGS
value: "--import-realm"
extraVolumes:
- name: realm-config
secret:
secretName: keycloak-ci-realm-config
extraVolumeMounts:
- name: realm-config
mountPath: "/opt/bitnami/keycloak/data/import"
readOnly: true


@ -0,0 +1,5 @@
{
"type": "client_credentials",
"client_id": $ARGS.named.CLIENT_ID,
"client_secret": $ARGS.named.CLIENT_SECRET
}


@ -28,6 +28,7 @@ TLS=${TLS:-"false"}
SYMMETRIC=${SYMMETRIC:-"false"}
FUNCTION=${FUNCTION:-"false"}
MANAGER=${MANAGER:-"false"}
ALLOW_LOADBALANCERS=${ALLOW_LOADBALANCERS:-"false"}
source ${PULSAR_HOME}/.ci/helm.sh
@ -56,21 +57,28 @@ fi
install_type="install"
test_action="produce-consume"
if [[ "$UPGRADE_FROM_VERSION" != "" ]]; then
ALLOW_LOADBALANCERS="true"
# install older version of pulsar chart
PULSAR_CHART_VERSION="$UPGRADE_FROM_VERSION"
ci::install_pulsar_chart install ${PULSAR_HOME}/.ci/values-common.yaml ${PULSAR_HOME}/${VALUES_FILE} "${extra_opts[@]}"
# Install Prometheus Operator CRDs using the upgrade script since kube-prometheus-stack is now disabled before the upgrade
${PULSAR_HOME}/scripts/kube-prometheus-stack/upgrade_prometheus_operator_crds.sh
ci::install_pulsar_chart install ${PULSAR_HOME}/.ci/values-common.yaml ${PULSAR_HOME}/${VALUES_FILE} --set kube-prometheus-stack.enabled=false "${extra_opts[@]}"
install_type="upgrade"
echo "Wait 10 seconds"
sleep 10
# check pulsar environment
ci::check_pulsar_environment
# test that we can access the admin api
ci::test_pulsar_admin_api_access
# produce messages with old version of pulsar and consume with new version
ci::test_pulsar_producer_consumer "produce"
test_action="consume"
if [[ "$(ci::helm_values_for_deployment | yq .kube-prometheus-stack.enabled)" == "true" ]]; then
echo "Upgrade Prometheus Operator CRDs before upgrading the deployment"
${PULSAR_HOME}/scripts/kube-prometheus-stack/upgrade_prometheus_operator_crds.sh
if [[ "$(ci::helm_values_for_deployment | yq .victoria-metrics-k8s-stack.enabled)" == "true" ]]; then
echo "Upgrade Victoria Metrics Operator CRDs before upgrading the deployment"
${PULSAR_HOME}/scripts/victoria-metrics-k8s-stack/upgrade_vm_operator_crds.sh
fi
fi
@ -81,6 +89,11 @@ ci::install_pulsar_chart ${install_type} ${PULSAR_HOME}/.ci/values-common.yaml $
echo "Wait 10 seconds"
sleep 10
# check that there aren't any loadbalancers if ALLOW_LOADBALANCERS is false
if [[ "${ALLOW_LOADBALANCERS}" == "false" ]]; then
ci::check_loadbalancers
fi
# check pulsar environment
ci::check_pulsar_environment


@ -0,0 +1,105 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# enable TLS with cacerts
tls:
enabled: true
proxy:
enabled: true
cacerts:
enabled: true
certs:
- name: common-cacert
existingSecret: "pulsar-ci-common-cacert"
secretKeys:
- ca.crt
broker:
enabled: true
cacerts:
enabled: true
certs:
- name: common-cacert
existingSecret: "pulsar-ci-common-cacert"
secretKeys:
- ca.crt
bookie:
enabled: true
cacerts:
enabled: true
certs:
- name: common-cacert
existingSecret: "pulsar-ci-common-cacert"
secretKeys:
- ca.crt
zookeeper:
enabled: true
cacerts:
enabled: true
certs:
- name: common-cacert
existingSecret: "pulsar-ci-common-cacert"
secretKeys:
- ca.crt
toolset:
cacerts:
enabled: true
certs:
- name: common-cacert
existingSecret: "pulsar-ci-common-cacert"
secretKeys:
- ca.crt
autorecovery:
cacerts:
enabled: true
certs:
- name: common-cacert
existingSecret: "pulsar-ci-common-cacert"
secretKeys:
- ca.crt
# enable cert-manager
certs:
internal_issuer:
enabled: true
type: selfsigning
# deploy cacerts
extraDeploy:
- |
apiVersion: "{{ .Values.certs.internal_issuer.apiVersion }}"
kind: Certificate
metadata:
name: "{{ template "pulsar.fullname" . }}-common-cacert"
namespace: {{ template "pulsar.namespace" . }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
spec:
secretName: "{{ template "pulsar.fullname" . }}-common-cacert"
commonName: "common-cacert"
duration: "{{ .Values.certs.internal_issuer.duration }}"
renewBefore: "{{ .Values.certs.internal_issuer.renewBefore }}"
usages:
- server auth
- client auth
isCA: true
issuerRef:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.certs.internal_issuer.component }}"
kind: Issuer
group: cert-manager.io


@ -21,9 +21,9 @@
auth:
authentication:
enabled: true
provider: "jwt"
jwt:
# Enable JWT authentication
enabled: true
# If the token is generated by a secret key, set the usingSecretKey as true.
# If the token is generated by a private key, set the usingSecretKey as false.
usingSecretKey: false


@ -21,9 +21,9 @@
auth:
authentication:
enabled: true
provider: "jwt"
jwt:
# Enable JWT authentication
enabled: true
# If the token is generated by a secret key, set the usingSecretKey as true.
# If the token is generated by a private key, set the usingSecretKey as false.
usingSecretKey: true


@ -0,0 +1,94 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# Mount credentials to each component
proxy:
configData:
# Authentication settings of the broker itself. Used when the broker connects to other brokers, or when the proxy connects to brokers, either in the same or other clusters
brokerClientAuthenticationPlugin: "org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2"
brokerClientAuthenticationParameters: '{"privateKey":"file:///pulsar/auth/proxy/credentials_file.json","audience":"account","issuerUrl":"http://keycloak-ci-headless:8080/realms/pulsar"}'
extraVolumes:
- name: pulsar-proxy-credentials
secret:
secretName: pulsar-proxy-credentials
extraVolumeMounts:
- name: pulsar-proxy-credentials
mountPath: "/pulsar/auth/proxy"
readOnly: true
broker:
configData:
# Authentication settings of the broker itself. Used when the broker connects to other brokers, or when the proxy connects to brokers, either in the same or other clusters
brokerClientAuthenticationPlugin: "org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2"
brokerClientAuthenticationParameters: '{"privateKey":"file:///pulsar/auth/broker/credentials_file.json","audience":"account","issuerUrl":"http://keycloak-ci-headless:8080/realms/pulsar"}'
extraVolumes:
- name: pulsar-broker-credentials
secret:
secretName: pulsar-broker-credentials
extraVolumeMounts:
- name: pulsar-broker-credentials
mountPath: "/pulsar/auth/broker"
readOnly: true
toolset:
configData:
authPlugin: "org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2"
authParams: '{"privateKey":"file:///pulsar/auth/admin/credentials_file.json","audience":"account","issuerUrl":"http://keycloak-ci-headless:8080/realms/pulsar"}'
extraVolumes:
- name: pulsar-admin-credentials
secret:
secretName: pulsar-admin-credentials
extraVolumeMounts:
- name: pulsar-admin-credentials
mountPath: "/pulsar/auth/admin"
readOnly: true
auth:
authentication:
enabled: true
openid:
# Enable openid authentication
enabled: true
# https://pulsar.apache.org/docs/next/security-openid-connect/#enable-openid-connect-authentication-in-the-broker-and-proxy
openIDAllowedTokenIssuers:
- http://keycloak-ci-headless:8080/realms/pulsar
openIDAllowedAudiences:
- account
#openIDTokenIssuerTrustCertsFilePath:
openIDRoleClaim: "sub"
openIDAcceptedTimeLeewaySeconds: "0"
openIDCacheSize: "5"
openIDCacheRefreshAfterWriteSeconds: "64800"
openIDCacheExpirationSeconds: "86400"
openIDHttpConnectionTimeoutMillis: "10000"
openIDHttpReadTimeoutMillis: "10000"
openIDKeyIdCacheMissRefreshSeconds: "300"
openIDRequireIssuersUseHttps: "false"
openIDFallbackDiscoveryMode: "DISABLED"
authorization:
enabled: true
superUsers:
# broker to broker communication
broker: "broker-admin"
# proxy to broker communication
proxy: "proxy-admin"
# pulsar-admin client to broker/proxy communication
client: "admin"
# pulsar manager to broker
manager: "manager-admin"


@ -17,4 +17,4 @@
# under the License.
#
defaultPulsarImageTag: 3.0.9
defaultPulsarImageTag: 3.0.12


@ -0,0 +1,60 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
victoria-metrics-k8s-stack:
enabled: true
victoria-metrics-operator:
enabled: true
vmsingle:
enabled: true
vmagent:
enabled: true
grafana:
enabled: true
adminPassword: pulsar-ci-admin
prometheus-node-exporter:
enabled: true
zookeeper:
podMonitor:
enabled: true
bookkeeper:
podMonitor:
enabled: true
broker:
podMonitor:
enabled: true
autorecovery:
podMonitor:
enabled: true
proxy:
podMonitor:
enabled: true
oxia:
coordinator:
podMonitor:
enabled: true
server:
podMonitor:
enabled: true


@ -27,7 +27,7 @@ function k9s() {
# install k9s on the fly
if [ ! -x /usr/local/bin/k9s ]; then
echo "Installing k9s..."
curl -L -s https://github.com/derailed/k9s/releases/download/v0.32.5/k9s_Linux_amd64.tar.gz | sudo tar xz -C /usr/local/bin k9s
curl -L -s https://github.com/derailed/k9s/releases/download/v0.40.5/k9s_Linux_amd64.tar.gz | sudo tar xz -C /usr/local/bin k9s
fi
command k9s "$@"
}

.ci/helm.sh (170 changes, file mode changed: normal file → executable file)

@ -84,13 +84,14 @@ function ci::install_cert_manager() {
function ci::helm_repo_add() {
echo "Adding the helm repo ..."
${HELM} repo add prometheus-community https://prometheus-community.github.io/helm-charts
${HELM} repo add vm https://victoriametrics.github.io/helm-charts/
${HELM} repo update
echo "Successfully added the helm repo."
}
function ci::print_pod_logs() {
echo "Logs for all pulsar containers:"
for k8sobject in $(${KUBECTL} get pods,jobs -n ${NAMESPACE} -l app=pulsar -o=name); do
echo "Logs for all containers:"
for k8sobject in $(${KUBECTL} get pods,jobs -n ${NAMESPACE} -o=name); do
${KUBECTL} logs -n ${NAMESPACE} "$k8sobject" --all-containers=true --ignore-errors=true --prefix=true --tail=100 || true
done;
}
@ -98,7 +99,7 @@ function ci::print_pod_logs() {
function ci::collect_k8s_logs() {
mkdir -p "${K8S_LOGS_DIR}" && cd "${K8S_LOGS_DIR}"
echo "Collecting k8s logs to ${K8S_LOGS_DIR}"
for k8sobject in $(${KUBECTL} get pods,jobs -n ${NAMESPACE} -l app=pulsar -o=name); do
for k8sobject in $(${KUBECTL} get pods,jobs -n ${NAMESPACE} -o=name); do
filebase="${k8sobject//\//_}"
${KUBECTL} logs -n ${NAMESPACE} "$k8sobject" --all-containers=true --ignore-errors=true --prefix=true > "${filebase}.$$.log.txt" || true
${KUBECTL} logs -n ${NAMESPACE} "$k8sobject" --all-containers=true --ignore-errors=true --prefix=true --previous=true > "${filebase}.previous.$$.log.txt" || true
@ -117,7 +118,7 @@ function ci::install_pulsar_chart() {
local extra_opts=()
local values_next=false
for arg in "$@"; do
if [[ "$arg" == "--values" ]]; then
if [[ "$arg" == "--values" || "$arg" == "--set" ]]; then
extra_values+=("$arg")
values_next=true
elif [[ "$values_next" == true ]]; then
@ -148,6 +149,11 @@ function ci::install_pulsar_chart() {
# configure metallb
${KUBECTL} apply -f ${BINDIR}/metallb/metallb-config.yaml
install_args=""
# create auth resources
if [[ "x${AUTHENTICATION_PROVIDER}" == "xopenid" ]]; then
ci::create_openid_resources
fi
else
install_args="--wait --wait-for-jobs --timeout 360s --debug"
fi
@ -271,6 +277,7 @@ function ci::retry() {
}
function ci::test_pulsar_admin_api_access() {
echo "Test pulsar admin api access"
ci::retry ${KUBECTL} exec -n ${NAMESPACE} ${CLUSTER}-toolset-0 -- bin/pulsar-admin tenants list
}
@ -423,3 +430,158 @@ function ci::test_pulsar_manager() {
exit 1
fi
}
function ci::check_loadbalancers() {
(
set +e
${KUBECTL} get services -n ${NAMESPACE} | grep LoadBalancer
if [ $? -eq 0 ]; then
echo "Error: Found service with type LoadBalancer. This is not allowed because of security reasons."
exit 1
fi
exit 0
)
}
function ci::validate_kustomize_yaml() {
# if kustomize is not installed, install kustomize to a temp directory
if ! command -v kustomize &> /dev/null; then
KUSTOMIZE_VERSION=5.6.0
KUSTOMIZE_DIR=$(mktemp -d)
echo "Installing kustomize ${KUSTOMIZE_VERSION} to ${KUSTOMIZE_DIR}"
curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash -s ${KUSTOMIZE_VERSION} ${KUSTOMIZE_DIR}
export PATH=${KUSTOMIZE_DIR}:$PATH
fi
# prevent regression of https://github.com/apache/pulsar-helm-chart/issues/569
local kustomize_yaml_dir=$(mktemp -d)
cp ${PULSAR_HOME}/.ci/kustomization.yaml ${kustomize_yaml_dir}
PULSAR_HOME=${PULSAR_HOME} yq -i '.helmGlobals.chartHome = env(PULSAR_HOME) + "/charts"' ${kustomize_yaml_dir}/kustomization.yaml
failures=0
# validate zookeeper init
echo "Validating kustomize yaml output with zookeeper init"
_ci::validate_kustomize_yaml ${kustomize_yaml_dir} || ((failures++))
# validate oxia init
yq -i '.helmCharts[0].valuesInline.components += {"zookeeper": false, "oxia": true}' ${kustomize_yaml_dir}/kustomization.yaml
echo "Validating kustomize yaml output with oxia init"
_ci::validate_kustomize_yaml ${kustomize_yaml_dir} || ((failures++))
if [ $failures -gt 0 ]; then
exit 1
fi
}
function _ci::validate_kustomize_yaml() {
local kustomize_yaml_dir=$1
kustomize build --enable-helm --helm-kube-version 1.23.0 --load-restrictor=LoadRestrictionsNone ${kustomize_yaml_dir} | yq 'select(.spec.template.spec.containers[0].args != null) | .spec.template.spec.containers[0].args' | \
awk '{
if (prev_line ~ /\\$/ && $0 ~ /^$/) {
print "Found issue: backslash at end of line followed by empty line. Must use pipe character for multiline strings to support kustomize due to kubernetes-sigs/kustomize#4201.";
print "Line: " prev_line;
has_issue = 1;
}
prev_line = $0;
}
END {
if (!has_issue) {
print "No issues found: no backslash followed by empty line";
exit 0;
}
exit 1;
}'
}
# Create all resources needed for openid authentication
function ci::create_openid_resources() {
echo "Creating openid resources"
cp ${PULSAR_HOME}/.ci/auth/keycloak/0-realm-pulsar-partial-export.json /tmp/realm-pulsar.json
for component in broker proxy admin manager; do
echo "Creating openid resources for ${component}"
local client_id=pulsar-${component}
# GitHub Actions hangs when reading a string from /dev/urandom, so use Python to generate a random string
local client_secret=$(python -c "import secrets; import string; length = 32; random_string = ''.join(secrets.choice(string.ascii_letters + string.digits) for _ in range(length)); print(random_string);")
if [[ "${component}" == "admin" ]]; then
local sub_claim_value="admin"
else
local sub_claim_value="${component}-admin"
fi
# Create the client credentials file
jq -n --arg CLIENT_ID $client_id --arg CLIENT_SECRET "$client_secret" -f ${PULSAR_HOME}/.ci/auth/oauth2/credentials_file.json > /tmp/${component}-credentials_file.json
# Create the secret for the client credentials
local secret_name="pulsar-${component}-credentials"
${KUBECTL} create secret generic ${secret_name} --from-file=credentials_file.json=/tmp/${component}-credentials_file.json -n ${NAMESPACE}
# Create the keycloak client file
jq -n --arg CLIENT_ID $client_id --arg CLIENT_SECRET "$client_secret" --arg SUB_CLAIM_VALUE "$sub_claim_value" -f ${PULSAR_HOME}/.ci/auth/keycloak/1-client-template.json > /tmp/${component}-keycloak-client.json
# Merge the keycloak client file with the realm
jq '.clients += [input]' /tmp/realm-pulsar.json /tmp/${component}-keycloak-client.json > /tmp/realm-pulsar.json.tmp
mv /tmp/realm-pulsar.json.tmp /tmp/realm-pulsar.json
done
echo "Create keycloak realm configuration"
${KUBECTL} create secret generic keycloak-ci-realm-config --from-file=realm-pulsar.json=/tmp/realm-pulsar.json -n ${NAMESPACE}
echo "Installing keycloak helm chart"
${HELM} install keycloak-ci oci://registry-1.docker.io/bitnamicharts/keycloak --version 24.6.4 --values ${PULSAR_HOME}/.ci/auth/keycloak/values.yaml -n ${NAMESPACE}
echo "Wait until keycloak is running"
WC=$(${KUBECTL} get pods -n ${NAMESPACE} --field-selector=status.phase=Running | grep keycloak-ci-0 | wc -l)
counter=1
while [[ ${WC} -lt 1 ]]; do
((counter++))
echo ${WC};
sleep 15
${KUBECTL} get pods,jobs -n ${NAMESPACE}
${KUBECTL} get events --sort-by=.lastTimestamp -A | tail -n 30 || true
if [[ $((counter % 20)) -eq 0 ]]; then
ci::print_pod_logs
if [[ $counter -gt 100 ]]; then
echo >&2 "Timeout waiting..."
exit 1
fi
fi
WC=$(${KUBECTL} get pods -n ${NAMESPACE} --field-selector=status.phase=Running | grep keycloak-ci-0 | wc -l)
done
echo "Wait until keycloak is ready"
${KUBECTL} wait --for=condition=Ready pod/keycloak-ci-0 -n ${NAMESPACE} --timeout 180s
echo "Check keycloack realm pulsar issuer url"
${KUBECTL} exec -n ${NAMESPACE} keycloak-ci-0 -c keycloak -- bash -c 'curl -sSL http://keycloak-ci-headless:8080/realms/pulsar'
}
# lists all available functions in this tool
function ci::list_functions() {
declare -F | awk '{print $NF}' | sort | grep -E '^ci::' | sed 's/^ci:://'
}
# Only run this section if the script is being executed directly (not sourced)
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
if [ -z "$1" ]; then
echo "usage: $0 [function_name]"
echo "Available functions:"
ci::list_functions
exit 1
fi
ci_function_name="ci::$1"
shift
if [[ "$(LC_ALL=C type -t "${ci_function_name}")" == "function" ]]; then
eval "$ci_function_name" "$@"
exit $?
else
echo "Invalid ci function"
echo "Available functions:"
ci::list_functions
exit 1
fi
fi


@ -17,14 +17,16 @@
# under the License.
#
kube-prometheus-stack:
enabled: true
prometheus:
enabled: true
grafana:
enabled: true
adminPassword: pulsar-ci-admin
alertmanager:
enabled: false
prometheus-node-exporter:
enabled: true
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmGlobals:
chartHome: ../charts
helmCharts:
- name: pulsar
releaseName: pulsar
valuesInline:
victoria-metrics-k8s-stack:
enabled: false
components:
pulsar_manager: true
zookeeper: true


@ -17,15 +17,35 @@
# under the License.
#
kube-prometheus-stack:
victoria-metrics-k8s-stack:
enabled: false
prometheusOperator:
victoria-metrics-operator:
enabled: false
grafana:
vmsingle:
enabled: false
vmagent:
enabled: false
vmalert:
enabled: false
alertmanager:
enabled: false
prometheus:
grafana:
enabled: false
prometheus-node-exporter:
enabled: false
kube-state-metrics:
enabled: false
kubelet:
enabled: false
kubeApiServer:
enabled: false
kubeControllerManager:
enabled: false
coreDns:
enabled: false
kubeEtcd:
enabled: false
kubeScheduler:
enabled: false
# disabled AntiAffinity
@ -55,6 +75,12 @@ bookkeeper:
diskUsageWarnThreshold: "0.999"
PULSAR_PREFIX_diskUsageThreshold: "0.999"
PULSAR_PREFIX_diskUsageWarnThreshold: "0.999"
# minimal memory use for bookkeeper
# https://bookkeeper.apache.org/docs/reference/config#db-ledger-storage-settings
dbStorage_writeCacheMaxSizeMb: "32"
dbStorage_readAheadCacheMaxSizeMb: "32"
dbStorage_rocksDB_writeBufferSizeMB: "8"
dbStorage_rocksDB_blockCacheSize: "8388608"
broker:
replicaCount: 1


@ -39,15 +39,15 @@ inputs:
version:
description: "The chart-testing version to install"
required: false
default: v3.11.0
default: v3.12.0
yamllint_version:
description: "The yamllint version to install"
required: false
default: '1.33.0'
default: '1.35.1'
yamale_version:
description: "The yamale version to install"
required: false
default: '4.0.4'
default: '6.0.0'
runs:
using: composite
steps:


@ -35,9 +35,9 @@ set -o errexit
set -o nounset
set -o pipefail
DEFAULT_CHART_TESTING_VERSION=v3.11.0
DEFAULT_YAMLLINT_VERSION=1.33.0
DEFAULT_YAMALE_VERSION=4.0.4
DEFAULT_CHART_TESTING_VERSION=v3.12.0
DEFAULT_YAMLLINT_VERSION=1.35.1
DEFAULT_YAMALE_VERSION=6.0.0
ARCH=$(uname -m)
case $ARCH in
@ -131,18 +131,24 @@ install_chart_testing() {
tar -xzf ct.tar.gz -C "$cache_dir"
rm -f ct.tar.gz
# if uv (https://docs.astral.sh/uv/) is not installed, install it
if ! command -v uv &> /dev/null; then
echo 'Installing uv...'
curl -LsSf https://astral.sh/uv/install.sh | sh
fi
echo 'Creating virtual Python environment...'
python3 -m venv "$venv_dir"
uv venv "$venv_dir"
echo 'Activating virtual environment...'
# shellcheck disable=SC1090
source "$venv_dir/bin/activate"
echo 'Installing yamllint...'
pip3 install "yamllint==${yamllint_version}"
uv pip install "yamllint==${yamllint_version}"
echo 'Installing Yamale...'
pip3 install "yamale==${yamale_version}"
uv pip install "yamale==${yamale_version}"
fi
# https://github.com/helm/chart-testing-action/issues/62


@ -53,8 +53,8 @@ runs:
# tune filesystem mount options, https://www.kernel.org/doc/Documentation/filesystems/ext4.txt
# commit=999999, effectively disables automatic syncing to disk (default is every 5 seconds)
# nobarrier/barrier=0, loosen data consistency on system crash (no negative impact on ephemeral CI nodes)
sudo mount -o remount,nodiscard,commit=999999,barrier=0 /
sudo mount -o remount,nodiscard,commit=999999,barrier=0 /mnt
sudo mount -o remount,nodiscard,commit=999999,barrier=0 / || true
sudo mount -o remount,nodiscard,commit=999999,barrier=0 /mnt || true
# disable discard/trim at device level since remount with nodiscard doesn't seem to be effective
# https://www.spinics.net/lists/linux-ide/msg52562.html
for i in /sys/block/sd*/queue/discard_max_bytes; do
@ -77,12 +77,6 @@ runs:
# stop Azure Linux agent to save RAM
sudo systemctl stop walinuxagent.service || true
# enable docker experimental mode which is
# required for using "docker build --squash" / "-Ddocker.squash=true"
daemon_json="$(sudo cat /etc/docker/daemon.json | jq '.experimental = true')"
echo "$daemon_json" | sudo tee /etc/docker/daemon.json
# restart docker daemon
sudo systemctl restart docker
echo '::endgroup::'
# show memory


@ -32,9 +32,10 @@ concurrency:
cancel-in-progress: true
jobs:
preconditions:
name: Preconditions
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
if: (github.event_name != 'schedule') || (github.repository == 'apache/pulsar-helm-chart')
outputs:
docs_only: ${{ steps.check_changes.outputs.docs_only }}
@ -62,7 +63,7 @@ jobs:
license-check:
needs: preconditions
name: License Check
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
timeout-minutes: 10
if: ${{ needs.preconditions.outputs.docs_only != 'true' }}
steps:
@ -83,7 +84,7 @@ jobs:
ct-lint:
needs: ['preconditions', 'license-check']
name: chart-testing lint
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
timeout-minutes: 45
if: ${{ needs.preconditions.outputs.docs_only != 'true' }}
steps:
@ -107,13 +108,17 @@ jobs:
if: ${{ steps.check_changes.outputs.docs_only != 'true' }}
uses: azure/setup-helm@v4
with:
version: v3.14.4
version: v3.16.4
- name: Set up Python
if: ${{ steps.check_changes.outputs.docs_only != 'true' }}
uses: actions/setup-python@v5
with:
python-version: '3.9'
python-version: '3.12'
- name: Install uv, a fast modern package manager for Python
if: ${{ steps.check_changes.outputs.docs_only != 'true' }}
run: curl -LsSf https://astral.sh/uv/install.sh | sh
- name: Set up chart-testing
if: ${{ steps.check_changes.outputs.docs_only != 'true' }}
@ -127,7 +132,7 @@ jobs:
--validate-maintainers=false \
--target-branch ${{ github.event.repository.default_branch }}
- name: Run kubeconform check for helm template with every major k8s version 1.23.0-1.30.0
- name: Run kubeconform check for helm template with every major k8s version 1.25.0-1.32.0
if: ${{ steps.check_changes.outputs.docs_only != 'true' }}
run: |
PULSAR_CHART_HOME=$(pwd)
@ -147,16 +152,25 @@ jobs:
else
echo ""
fi
helm template charts/pulsar --set kube-prometheus-stack.enabled=false --set components.pulsar_manager=true --kube-version $kube_version "$@" | \
helm template charts/pulsar --set victoria-metrics-k8s-stack.enabled=false --set components.pulsar_manager=true --kube-version $kube_version "$@" | \
kubeconform -schema-location default -schema-location 'https://raw.githubusercontent.com/datreeio/CRDs-catalog/main/{{.Group}}/{{.ResourceKind}}_{{.ResourceAPIVersion}}.json' -strict -kubernetes-version $kube_version -summary
}
set -o pipefail
for k8s_version_part in {23..30}; do
for k8s_version_part in {25..32}; do
k8s_version="1.${k8s_version_part}.0"
echo "Validating default values with k8s version $k8s_version"
validate_helm_template_with_k8s_version $k8s_version
echo "Validating with Oxia enabled"
validate_helm_template_with_k8s_version $k8s_version --set components.zookeeper=false --set components.oxia=true
for config in .ci/clusters/*.yaml; do
echo "Validating $config with k8s version $k8s_version"
validate_helm_template_with_k8s_version $k8s_version --values .ci/values-common.yaml --values $config
done
done
- name: Validate kustomize yaml for extra new lines in pulsar-init commands
if: ${{ steps.check_changes.outputs.docs_only != 'true' }}
run: |
./.ci/helm.sh validate_kustomize_yaml
- name: Wait for ssh connection when build fails
# ssh access is enabled for builds in own forks
uses: ./.github/actions/ssh-access
@ -167,19 +181,20 @@ jobs:
install-chart-tests:
name: ${{ matrix.testScenario.name }} - k8s ${{ matrix.k8sVersion.version }} - ${{ matrix.testScenario.type || 'install' }}
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
timeout-minutes: ${{ matrix.testScenario.timeout || 45 }}
needs: ['preconditions', 'ct-lint']
if: ${{ needs.preconditions.outputs.docs_only != 'true' }}
strategy:
fail-fast: false
matrix:
# see https://github.com/kubernetes-sigs/kind/releases/tag/v0.22.0 for the list of supported k8s versions for kind 0.22.0
# see https://github.com/kubernetes-sigs/kind/releases/tag/v0.27.0 for the list of supported k8s versions for kind 0.27.0
# docker images are available at https://hub.docker.com/r/kindest/node/tags
k8sVersion:
- version: "1.23.17"
kind_image_tag: v1.23.17@sha256:14d0a9a892b943866d7e6be119a06871291c517d279aedb816a4b4bc0ec0a5b3
- version: "1.29.2"
kind_image_tag: v1.29.2@sha256:51a1434a5397193442f0be2a297b488b6c919ce8a3931be0ce822606ea5ca245
- version: "1.25.16"
kind_image_tag: v1.25.16@sha256:6110314339b3b44d10da7d27881849a87e092124afab5956f2e10ecdb463b025
- version: "1.32.2"
kind_image_tag: v1.32.2@sha256:f226345927d7e348497136874b6d207e0b32cc52154ad8323129352923a3142f
testScenario:
- name: Upgrade latest released version
values_file: .ci/clusters/values-upgrade.yaml
@ -209,44 +224,39 @@ jobs:
- name: ZK & BK TLS Only
values_file: .ci/clusters/values-zkbk-tls.yaml
shortname: zkbk-tls
- name: PSP
values_file: .ci/clusters/values-psp.yaml
shortname: psp
- name: Pulsar Manager
values_file: .ci/clusters/values-pulsar-manager.yaml
shortname: pulsar-manager
- name: Oxia
values_file: .ci/clusters/values-oxia.yaml
shortname: oxia
- name: OpenID
values_file: .ci/clusters/values-openid.yaml
shortname: openid
- name: CA certificates
values_file: .ci/clusters/values-cacerts.yaml
shortname: cacerts
include:
- k8sVersion:
version: "1.23.17"
kind_image_tag: v1.23.17@sha256:14d0a9a892b943866d7e6be119a06871291c517d279aedb816a4b4bc0ec0a5b3
version: "1.25.16"
kind_image_tag: v1.25.16@sha256:6110314339b3b44d10da7d27881849a87e092124afab5956f2e10ecdb463b025
testScenario:
name: "Upgrade TLS"
values_file: .ci/clusters/values-tls.yaml
shortname: tls
type: upgrade
- k8sVersion:
version: "1.23.17"
kind_image_tag: v1.23.17@sha256:14d0a9a892b943866d7e6be119a06871291c517d279aedb816a4b4bc0ec0a5b3
version: "1.25.16"
kind_image_tag: v1.25.16@sha256:6110314339b3b44d10da7d27881849a87e092124afab5956f2e10ecdb463b025
testScenario:
name: "Upgrade PSP"
values_file: .ci/clusters/values-psp.yaml
shortname: psp
type: upgrade
- k8sVersion:
version: "1.23.17"
kind_image_tag: v1.23.17@sha256:14d0a9a892b943866d7e6be119a06871291c517d279aedb816a4b4bc0ec0a5b3
testScenario:
name: "Upgrade kube-prometheus-stack for previous LTS"
values_file: .ci/clusters/values-prometheus-grafana.yaml --values .ci/clusters/values-pulsar-previous-lts.yaml
shortname: prometheus-grafana
name: "Upgrade victoria-metrics-k8s-stack for previous LTS"
values_file: .ci/clusters/values-victoria-metrics-grafana.yaml --values .ci/clusters/values-pulsar-previous-lts.yaml
shortname: victoria-metrics-grafana
type: upgrade
upgradeFromVersion: 3.2.0
- k8sVersion:
version: "1.23.17"
kind_image_tag: v1.23.17@sha256:14d0a9a892b943866d7e6be119a06871291c517d279aedb816a4b4bc0ec0a5b3
version: "1.25.16"
kind_image_tag: v1.25.16@sha256:6110314339b3b44d10da7d27881849a87e092124afab5956f2e10ecdb463b025
testScenario:
name: "TLS with helm 3.12.0"
values_file: .ci/clusters/values-tls.yaml
@ -286,6 +296,9 @@ jobs:
"jwt-asymmetric")
export EXTRA_SUPERUSERS=manager-admin
;;
"openid")
export AUTHENTICATION_PROVIDER=openid
;;
esac
if [[ "${{ matrix.testScenario.type || 'install' }}" == "upgrade" ]]; then
export UPGRADE_FROM_VERSION="${{ matrix.testScenario.upgradeFromVersion || 'latest' }}"
@ -324,7 +337,7 @@ jobs:
pulsar-helm-chart-ci-checks-completed:
name: "CI checks completed"
if: ${{ always() && ((github.event_name != 'schedule') || (github.repository == 'apache/pulsar-helm-chart')) }}
runs-on: ubuntu-22.04
runs-on: ubuntu-24.04
timeout-minutes: 10
needs: [
'preconditions',

.gitignore (2 changes, vendored)

@ -17,5 +17,3 @@ charts/**/*.lock
PRIVATEKEY
PUBLICKEY
.vagrant/
pulsarctl-*-*.tar.gz
pulsarctl-*-*/

README.md (312 changes)

@ -27,27 +27,113 @@ Read [Deploying Pulsar on Kubernetes](http://pulsar.apache.org/docs/deploy-kuber
> :warning: This helm chart is updated outside of the regular Pulsar release cycle and might lag behind a bit. It only supports basic Kubernetes features now. Currently, it can be used as no more than a template and starting point for a Kubernetes deployment. In many cases, it would require some customizations.
## Important Security Disclaimer for Helm Chart Usage
## Important Security Advisory for Helm Chart Usage
### Notice of Default Configuration
This Helm chart is provided with a default configuration that does not meet the security requirements for production environments or sensitive data handling. Users are strongly advised to thoroughly review and customize the security settings to ensure a secure deployment that aligns with their specific operational and security policies.
This Helm chart's default configuration DOES NOT meet production security requirements.
Users MUST review and customize security settings for their specific environment.
IMPORTANT: This Helm chart provides a starting point for Pulsar deployments but requires
significant security customization before use in production environments. We strongly
recommend implementing:
1. Authentication and authorization for all components
2. TLS encryption for all communication channels
3. Proper network isolation and access controls
4. Regular security updates and vulnerability assessments
As an open source project, we welcome contributions to improve security features.
Please consider submitting pull requests to address security gaps or enhance
existing security implementations.
### Pulsar Proxy Security Considerations
As explicitly stated in the [Pulsar Proxy documentation](https://pulsar.apache.org/docs/3.1.x/administration-proxy/), the Pulsar proxy is not designed for exposure to the public internet. The design assumes that deployments will be protected by network perimeter security measures. It is crucial to understand that relying solely on the default configuration can expose your deployment to significant security vulnerabilities.
#### Recommendations:
### Upgrading
#### To 4.1.0
This version introduces `OpenID` authentication. Setting `auth.authentication.provider` is no longer supported; enable the desired provider with `auth.authentication.<provider>.enabled` instead.
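As a sketch of the migration for JWT (based on the CI values files changed in this comparison):
```yaml
auth:
  authentication:
    enabled: true
    # Before 4.1.0 (no longer supported):
    # provider: "jwt"
    # From 4.1.0 on, enable the provider explicitly:
    jwt:
      enabled: true
```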
#### To 4.0.0
The default service type for the Pulsar proxy has changed from `LoadBalancer` to `ClusterIP` for security reasons. This limits access to within the Kubernetes environment by default.
### External Access Recommendations
If you need to expose the Pulsar Proxy outside the cluster:
1. **USE INTERNAL LOAD BALANCERS ONLY**
- Set type to LoadBalancer only in secured environments with proper network controls
- Add cloud provider-specific annotations for internal load balancers:
- Kubernetes documentation about internal load balancers:
- [Internal load balancer](https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer)
- See cloud provider documentation:
- AWS / EKS: [AWS Load Balancer Controller / Service Annotations](https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/)
- Azure / AKS: [Use an internal load balancer with Azure Kubernetes Service (AKS)](https://learn.microsoft.com/en-us/azure/aks/internal-lb)
- GCP / GKE: [LoadBalancer service parameters](https://cloud.google.com/kubernetes-engine/docs/concepts/service-load-balancer-parameters)
- Examples (verify correctness for your environment):
- AWS / EKS: `service.beta.kubernetes.io/aws-load-balancer-internal: "true"`
- Azure / AKS: `service.beta.kubernetes.io/azure-load-balancer-internal: "true"`
- GCP / GKE: `networking.gke.io/load-balancer-type: "Internal"`
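A minimal values sketch combining these (the annotation shown is the AWS example from the list above; verify the correct annotation for your cloud provider, and confirm that `proxy.service.annotations` is the appropriate key in your chart version):
```yaml
proxy:
  service:
    type: LoadBalancer
    annotations:
      # AWS internal load balancer example; substitute your provider's annotation
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
```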
2. **IMPLEMENT AUTHENTICATION AND AUTHORIZATION**
- Configure all clients to authenticate properly
- Set up appropriate authorization policies
3. **USE TLS FOR ALL CONNECTIONS**
- Enable TLS for client-to-proxy connections
- Enable TLS for proxy-to-broker connections
- Enable TLS for all internal cluster communications
- Note: TLS alone is NOT sufficient as a security solution. Even with TLS enabled, clusters exposed to untrusted networks remain vulnerable to denial-of-service attacks, authentication bypass attempts, and protocol-level exploits.
4. **NETWORK SECURITY**
- Use private networks (VPCs)
- Configure firewalls, security groups, and IP restrictions
5. **CLIENT IP ADDRESS BASED ACCESS RESTRICTIONS**
- When using a LoadBalancer service type, restrict access to specific IP ranges by configuring `proxy.service.loadBalancerSourceRanges` in your values.yaml:
```yaml
proxy:
service:
loadBalancerSourceRanges:
- 10.0.0.0/8 # Private network range
- 172.16.0.0/12 # Private network range
- 192.168.0.0/16 # Private network range
```
- This feature:
- Provides an additional defense layer by filtering traffic at the load balancer level
- Only allows connections from specified CIDR blocks
- Works only with LoadBalancer service type and when your cloud provider supports the `loadBalancerSourceRanges` parameter
- Important: This should be implemented alongside other security measures (internal load balancer, authentication, TLS, network policies) as part of a defense-in-depth strategy, not as a standalone security solution.
### Alternative for External Access
As an alternative method for external access, Pulsar supports [SNI proxy routing](https://pulsar.apache.org/docs/next/concepts-proxy-sni-routing/), which works with proxy servers such as Apache Traffic Server, HAProxy, and Nginx.
Note: This option isn't currently implemented in the Apache Pulsar Helm chart.
**IMPORTANT**: The Pulsar binary protocol cannot be exposed outside of the Kubernetes cluster using Kubernetes Ingress. Kubernetes Ingress works for the Admin REST API and topic lookups, but clients connect to the advertised listener addresses returned by the brokers, so this only works when clients can connect directly to the brokers. It is not a supported secure option for exposing Pulsar to untrusted networks.
### General Recommendations
- **Network Perimeter Security:** It is imperative to implement robust network perimeter security to safeguard your deployment. The absence of such security measures can lead to unauthorized access and potential data breaches.
- **Restricted Access:** For environments where security is less critical, such as certain development or testing scenarios, the use of `loadBalancerSourceRanges` may be employed to restrict access to specified IP addresses or ranges. This, however, should not be considered a substitute for comprehensive security measures in production environments.
### User Responsibility
The user assumes full responsibility for the security and integrity of their deployment. This includes, but is not limited to, the proper configuration of security features and adherence to best practices for securing network access. The providers of this Helm chart disclaim all warranties, whether express or implied, including any warranties of merchantability, fitness for a particular purpose, and non-infringement of third-party rights.
### No Security Guarantees
The providers of this Helm chart make no guarantees regarding the security of the chart under any circumstances. It is the user's responsibility to ensure that their deployment is secure and complies with all relevant security standards and regulations.
By using this Helm chart, the user acknowledges the risks associated with its default configuration and the necessity for proper security customization. The user further agrees that the providers of the Helm chart shall not be liable for any security breaches or incidents resulting from the use of the chart.
## Features
This Helm Chart includes all the components of Apache Pulsar for a complete experience.
@ -61,7 +147,7 @@ This Helm Chart includes all the components of Apache Pulsar for a complete expe
- [x] Management & monitoring components:
- [x] Pulsar Manager
- [x] Optional PodMonitors for each component (enabled by default)
- [x] [Kube-Prometheus-Stack](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack) (as of 3.0.0)
- [x] [victoria-metrics-k8s-stack](https://github.com/VictoriaMetrics/helm-charts/tree/master/charts/victoria-metrics-k8s-stack) (as of 4.0.0)
It includes support for:
@ -74,9 +160,10 @@ It includes support for:
- [x] Broker
- [x] Toolset
- [x] Bookie
- [x] ZooKeeper
- [x] ZooKeeper (requires the `AdditionalCertificateOutputFormats=true` feature gate to be enabled in the cert-manager deployment when using cert-manager versions below 1.15.0)
- [x] Authentication
- [x] JWT
- [x] OpenID
- [ ] Mutual TLS
- [ ] Kerberos
- [x] Authorization
@ -97,9 +184,9 @@ It includes support for:
In order to use this chart to deploy Apache Pulsar on Kubernetes, the followings are required.
1. kubectl 1.23 or higher, compatible with your cluster ([+/- 1 minor release from your cluster](https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin))
1. kubectl 1.25 or higher, compatible with your cluster ([+/- 1 minor release from your cluster](https://kubernetes.io/docs/tasks/tools/install-kubectl/#before-you-begin))
2. Helm v3 (3.12.0 or higher)
3. A Kubernetes cluster, version 1.23 or higher.
3. A Kubernetes cluster, version 1.25 or higher.
## Environment setup
@ -114,26 +201,62 @@ Before proceeding to deploying Pulsar, you need to prepare your environment.
To add this chart to your local Helm repository:
```bash
helm repo add apache https://pulsar.apache.org/charts
helm repo add apachepulsar https://pulsar.apache.org/charts
helm repo update
```
## Kubernetes cluster preparation
You need a Kubernetes cluster whose version is 1.23 or higher in order to use this chart, due to the usage of certain Kubernetes features.
You need a Kubernetes cluster whose version is 1.25 or higher in order to use this chart, due to the usage of certain Kubernetes features.
We provide some instructions to guide you through the preparation: http://pulsar.apache.org/docs/helm-prepare/
## Deploy Pulsar to Kubernetes
1. Configure your values file. The best way to know which values are available is to read the [values.yaml](./charts/pulsar/values.yaml).
A best practice is to start with an empty values file and only set the keys that differ from the default configuration.
Anti-affinity rules for the Zookeeper and Bookie components require at least one node per replica. For Kubernetes clusters with fewer than 3 nodes,
you must disable this feature by adding the following to your initial values.yaml file:
```yaml
affinity:
anti_affinity: false
```
2. Install the chart:
```bash
helm install <release-name> -n <namespace> -f your-values.yaml apache/pulsar
helm install -n <namespace> --create-namespace <release-name> -f your-values.yaml apachepulsar/pulsar
```
3. Access the Pulsar cluster
3. Observe the deployment progress
Watching events to view progress of deployment:
```shell
kubectl get -n <namespace> events -o wide --watch
```
Watching state of deployed Kubernetes objects, updated every 2 seconds:
```shell
watch kubectl get -n <namespace> all
```
Waiting until Pulsar Proxy is available:
```shell
kubectl wait --timeout=600s --for=condition=ready pod -n <namespace> -l component=proxy
```
Watching state with k9s (https://k9scli.io/topics/install/):
```shell
k9s -n <namespace>
```
4. Access the Pulsar cluster
The default values will create a `ClusterIP` service for the proxy that you can use to interact with the cluster. To find the IP address of the proxy, use:
@ -144,7 +267,7 @@ We provide some instructions to guide you through the preparation: http://pulsar
For more information, please follow our detailed
[quick start guide](https://pulsar.apache.org/docs/getting-started-helm/).
## Customize the deployment
We provide a [detailed guideline](https://pulsar.apache.org/docs/helm-deploy/) for you to customize
the Helm Chart for a production-ready deployment.
@ -160,26 +283,57 @@ You can also checkout out the example values file for different deployments.
- [Deploy a Pulsar cluster with JWT authentication using symmetric key](examples/values-jwt-symmetric.yaml)
- [Deploy a Pulsar cluster with JWT authentication using asymmetric key](examples/values-jwt-asymmetric.yaml)
## Disabling Kube-Prometheus-Stack CRDs
## Disabling victoria-metrics-k8s-stack components
In order to disable the kube-prometheus-stack fully, it is necessary to add the following to your `values.yaml`:
In order to disable the victoria-metrics-k8s-stack, you can add the following to your `values.yaml`.
Victoria Metrics components can also be disabled and enabled individually if you only need specific monitoring features.
```yaml
# disable VictoriaMetrics and related components
victoria-metrics-k8s-stack:
  enabled: false
  victoria-metrics-operator:
    enabled: false
  vmsingle:
    enabled: false
  vmagent:
    enabled: false
  kube-state-metrics:
    enabled: false
  prometheus-node-exporter:
    enabled: false
  grafana:
    enabled: false
  alertmanager:
    enabled: false
```
Additionally, you'll need to set each component's `podMonitor` property to `false`.
```yaml
# disable pod monitors
autorecovery:
  podMonitor:
    enabled: false
bookkeeper:
  podMonitor:
    enabled: false
oxia:
  server:
    podMonitor:
      enabled: false
  coordinator:
    podMonitor:
      enabled: false
broker:
  podMonitor:
    enabled: false
proxy:
  podMonitor:
    enabled: false
zookeeper:
  podMonitor:
    enabled: false
```
This is shown in [examples/values-disable-monitoring.yaml](examples/values-disable-monitoring.yaml).
## Pulsar Manager
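One way to retrieve the auto-generated Pulsar Manager UI password is sketched below; it assumes the secret is labeled `component=pulsar-manager` and stores the password base64-encoded under a `UI_PASSWORD` key (the key name is an assumption, so verify it in your deployment):
```bash
# key name assumed; inspect the secret's data keys first if unsure
kubectl get secret -n <namespace> -l component=pulsar-manager \
  -o=jsonpath="{.items[0].data.UI_PASSWORD}" | base64 --decode
```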
## Grafana Dashboards
The Apache Pulsar Helm Chart uses the `victoria-metrics-k8s-stack` Helm Chart to deploy Grafana.
There are several ways to configure Grafana dashboards. The default [`values.yaml`](charts/pulsar/values.yaml) comes with examples of Pulsar dashboards which get downloaded from the Apache-2.0 licensed [lhotari/pulsar-grafana-dashboards OSS project](https://github.com/lhotari/pulsar-grafana-dashboards) by URL.
Dashboards can be configured in [`values.yaml`](charts/pulsar/values.yaml) or by adding `ConfigMap` items with the label `grafana_dashboard: "1"`.
In [`values.yaml`](charts/pulsar/values.yaml), it's possible to include dashboards by URL or by grafana.com dashboard id (`gnetId` and `revision`).
Please see the [Grafana Helm chart documentation for importing dashboards](https://github.com/grafana/helm-charts/blob/main/charts/grafana/README.md#import-dashboards).
You can connect to Grafana by forwarding port 3000
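For example (a sketch; the Grafana service name and port come from the bundled Grafana chart's defaults and may differ in your release, so verify with `kubectl get svc`):
```bash
# forward local port 3000 to the assumed Grafana service
kubectl port-forward -n <namespace> svc/<release-name>-grafana 3000:80
```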
Once your Pulsar Chart is installed, configuration changes and chart updates should be done using `helm upgrade`.
```bash
helm repo add apachepulsar https://pulsar.apache.org/charts
helm repo update
# If you are using the provided victoria-metrics-k8s-stack for monitoring, this installs or upgrades the required CRDs
./scripts/victoria-metrics-k8s-stack/upgrade_vm_operator_crds.sh
# get the existing values.yaml used for the most recent deployment
helm get values -n <namespace> <pulsar-release-name> > values.yaml
# upgrade the deployment
helm upgrade -n <namespace> -f values.yaml <pulsar-release-name> apachepulsar/pulsar
```
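After the script completes, you can sanity-check that the VictoriaMetrics operator CRDs are installed (a quick verification, not part of the documented procedure):
```bash
# CRD names end with the operator.victoriametrics.com group suffix
kubectl get crds | grep victoriametrics.com
```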
For more detailed information, see our [Upgrading](http://pulsar.apache.org/docs/helm-upgrade/) guide.
## Upgrading to Helm chart version 4.2.0 (not released yet)

### TLS configuration for ZooKeeper has changed

The TLS configuration for ZooKeeper has been changed to fix certificate and private key expiration issues.
This change impacts configurations that have `tls.enabled` and `tls.zookeeper.enabled` set in `values.yaml`.
The revised solution requires the `AdditionalCertificateOutputFormats=true` feature gate to be enabled in the `cert-manager` deployment when using cert-manager versions below 1.15.0.
If you installed `cert-manager` using `./scripts/cert-manager/install-cert-manager.sh`, you can re-run the updated script to set the feature gate. The script currently installs or upgrades cert-manager LTS version 1.12.17, where the feature gate must be explicitly enabled.
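If you manage cert-manager yourself with Helm instead, the feature gate can be enabled roughly as follows (a sketch based on the cert-manager Helm chart's `featureGates` and `webhook.extraArgs` values; verify the flags against the cert-manager documentation for your version):
```shell
# flags assumed from cert-manager's Helm chart; check your version's docs
helm upgrade --install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --set featureGates=AdditionalCertificateOutputFormats=true \
  --set "webhook.extraArgs={--feature-gates=AdditionalCertificateOutputFormats=true}"
```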
## Upgrading from Helm Chart versions before 4.0.0 to 4.0.0 version and above

### Pulsar Proxy service's default type has been changed from `LoadBalancer` to `ClusterIP`

Please check the section "External Access Recommendations" for guidance and also check the security advisory section.
You will need to configure keys under `proxy.service` in your `values.yaml` to preserve existing functionality since the default has been changed.

### kube-prometheus-stack replaced with victoria-metrics-k8s-stack

The `kube-prometheus-stack` was replaced with `victoria-metrics-k8s-stack` in Pulsar Helm chart version 4.0.0. The trigger for the change was incompatibilities discovered in testing with the most recent `kube-prometheus-stack` and Prometheus 3.2.1, which failed to scrape Pulsar metrics in certain cases without providing proper error messages or debug information at debug level logging.
[Victoria Metrics](https://docs.victoriametrics.com/) is Apache 2.0 licensed OSS and a fully compatible, fast, and efficient drop-in replacement for Prometheus.
Before upgrading to Pulsar Helm Chart version 4.0.0, it is recommended to disable kube-prometheus-stack in the original Helm chart version that is used:

```shell
# get the existing values.yaml used for the most recent deployment
helm get values -n <namespace> <pulsar-release-name> > values.yaml
# disable kube-prometheus-stack in the currently used version before upgrading to Pulsar Helm chart 4.0.0
helm upgrade -n <namespace> -f values.yaml --version <your-current-chart-version> --set kube-prometheus-stack.enabled=false <pulsar-release-name> apachepulsar/pulsar
```

After this, you can proceed with `helm upgrade`.
The 2.10.0+ Apache Pulsar docker image is a non-root container, by default. That complicates an upgrade to 2.10.0
because the existing files are owned by the root user but are not writable by the root group. In order to leverage this
new security feature, the Bookkeeper and Zookeeper StatefulSet [securityContexts](https://kubernetes.io/docs/tasks/configure-pod-container/security-context)
are configurable in the [`values.yaml`](charts/pulsar/values.yaml). They default to:
```yaml
securityContext:
  fsGroup: 0
  fsGroupChangePolicy: "OnRootMismatch"
```
### Recovering from `helm upgrade` error "unable to build kubernetes objects from current release manifest"
Example of the error message:
```bash
Error: UPGRADE FAILED: unable to build kubernetes objects from current release manifest:
[resource mapping not found for name: "pulsar-bookie" namespace: "pulsar" from "":
```
## Troubleshooting

We've done our best to make these charts as seamless as possible, but occasionally troubles do surface outside of our control. We've collected tips and tricks for troubleshooting common issues. Please examine these first before raising an [issue](https://github.com/apache/pulsar-helm-chart/issues/new/choose), and feel free to add to them by raising a [Pull Request](https://github.com/apache/pulsar-helm-chart/compare)!
### VictoriaMetrics Troubleshooting
The example commands below assume the Kubernetes namespace `pulsar`; replace it with your deployment's namespace.
#### VictoriaMetrics Web UI
Connect to the `vmsingle` pod for the web UI:
```shell
kubectl port-forward -n pulsar $(kubectl get pods -n pulsar -l app.kubernetes.io/name=vmsingle -o jsonpath='{.items[0].metadata.name}') 8429:8429
```
Now you can access the UI at http://localhost:8429 and http://localhost:8429/vmui (for a UI similar to Prometheus's).
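With the port-forward active, the Prometheus-compatible query API can also be used directly; for example, to check which scrape targets are up (an illustrative query, not from the chart docs):
```shell
# instant query against vmsingle's Prometheus-compatible API
curl 'http://localhost:8429/api/v1/query?query=up'
```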
#### VictoriaMetrics Scraping debugging UI - Active Targets
Connect to the `vmagent` pod to debug scrape targets:
```shell
kubectl port-forward -n pulsar $(kubectl get pods -n pulsar -l app.kubernetes.io/name=vmagent -o jsonpath='{.items[0].metadata.name}') 8429:8429
```
Now you can access the UI at http://localhost:8429
Active Targets UI
- http://localhost:8429/targets
Scraping Configuration
- http://localhost:8429/config
## Release Process
See [RELEASE.md](RELEASE.md)
- Tag your release
```shell
git tag -u $APACHE_USER@apache.org -s pulsar-${VERSION_RC} -m "Apache Pulsar Helm Chart $VERSION_RC"
```
- Tarball the repo
Public keys are available at: https://www.apache.org/dist/pulsar/KEYS
For convenience "index.yaml" has been uploaded (though excluded from voting), so you can also run the below commands.
helm repo add --force-update apache-pulsar-dist-dev \\
https://dist.apache.org/repos/dist/dev/pulsar/helm-chart/$VERSION_RC/
helm repo update
helm install pulsar apache-pulsar-dist-dev/pulsar \\
--version ${VERSION_WITHOUT_RC} --set affinity.anti_affinity=false \\
--wait --timeout 10m --debug
For observing the deployment progress, you can use the k9s tool to view the cluster state changes in a different terminal window.
The k9s tool is available at https://k9scli.io/topics/install/.
pulsar-${VERSION_WITHOUT_RC}.tgz.prov is also uploaded for verifying Chart Integrity, though it is not strictly required for releasing the artifact based on ASF Guidelines.
You can optionally verify this file using this helm plugin https://github.com/technosophos/helm-gpg, or by using helm --verify (https://helm.sh/docs/helm/helm_verify/).
Contributors can run the below commands to test the Helm Chart:
```shell
export VERSION_RC=3.0.0-candidate-1
export VERSION_WITHOUT_RC=${VERSION_RC%-candidate-*}
```
```shell
helm repo add --force-update \
apache-pulsar-dist-dev https://dist.apache.org/repos/dist/dev/pulsar/helm-chart/$VERSION_RC/
helm repo update
helm install pulsar apache-pulsar-dist-dev/pulsar \
--version ${VERSION_WITHOUT_RC} --set affinity.anti_affinity=false
```
You can then perform any other verifications to check that it works as you expected by
Create and push the release tag:
```shell
cd "${PULSAR_REPO_ROOT}"
git tag -u $APACHE_USER@apache.org pulsar-$VERSION_WITHOUT_RC $(git rev-parse pulsar-$VERSION_RC^{}) -m "Apache Pulsar Helm Chart ${VERSION_WITHOUT_RC}"
git push origin pulsar-${VERSION_WITHOUT_RC}
```
```shell
cd pulsar-site
# Run on a branch based on main branch
cd static/charts
# need the chart file temporarily to update the index
wget https://dist.apache.org/repos/dist/release/pulsar/helm-chart/${VERSION_WITHOUT_RC}/pulsar-${VERSION_WITHOUT_RC}.tgz
# store the license header temporarily
head -n 17 index.yaml > license_header.txt
# update the index
rm license_header.txt index.yaml.new
rm pulsar-${VERSION_WITHOUT_RC}.tgz
```
Verify that the updated `index.yaml` file has the most recent version.
Wait until the file is available:
```shell
while ! curl -fIL https://downloads.apache.org/pulsar/helm-chart/${VERSION_WITHOUT_RC}/pulsar-${VERSION_WITHOUT_RC}.tgz; do
echo "Waiting for pulsar-${VERSION_WITHOUT_RC}.tgz to become available..."
sleep 10
done
```
Then run:
```shell
git add index.yaml
git commit -m "Adding Pulsar Helm Chart ${VERSION_WITHOUT_RC} to index.yaml"
```
Then commit the change.
```shell
git push origin main
```
## Create release notes for the tag in GitHub UI
#
apiVersion: v2
appVersion: "4.0.2"
appVersion: "4.0.5"
description: Apache Pulsar Helm chart for Kubernetes
name: pulsar
version: 4.1.0
kubeVersion: ">=1.25.0-0"
home: https://pulsar.apache.org
sources:
- https://github.com/apache/pulsar
maintainers:
- name: The Apache Pulsar Team
  email: dev@pulsar.apache.org
dependencies:
- name: victoria-metrics-k8s-stack
  version: 0.38.x
  repository: https://victoriametrics.github.io/helm-charts/
  condition: victoria-metrics-k8s-stack.enabled
======================================================================================
APACHE PULSAR HELM CHART
======================================================================================
======================================================================================
SECURITY ADVISORY
======================================================================================
This Helm chart's default configuration DOES NOT meet production security requirements.
Users MUST review and customize security settings for their specific environment.
IMPORTANT: This Helm chart provides a starting point for Pulsar deployments but requires
significant security customization before use in production environments. We strongly
recommend implementing:
1. Proper network isolation and access controls
2. Authentication and authorization for all components
3. TLS encryption for all communication channels
4. Regular security updates and vulnerability assessments
As an open source project, we welcome contributions to improve security features.
Please consider submitting pull requests to address security gaps or enhance
existing security implementations.
---------------------------------------------------------------------------------------
SECURITY NOTICE: The Pulsar proxy is not designed for direct public internet exposure.
It lacks security features required for untrusted networks and should only be deployed
within secured environments with proper network controls.
IMPORTANT CHANGE IN v4.0.0: Default service type changed from LoadBalancer to ClusterIP
for security reasons. This limits access to within the Kubernetes environment by default.
---------------------------------------------------------------------------------------
IF YOU NEED EXTERNAL ACCESS FOR YOUR PULSAR CLUSTER:
---------------------------------------------------------------------------------------
Note: This information might be outdated. Please go to https://github.com/apache/pulsar-helm-chart for updated information.
If you need to expose the Pulsar Proxy outside the cluster using a LoadBalancer service type:
1. USE INTERNAL LOAD BALANCERS ONLY
- Set type to LoadBalancer only in secured environments with proper network controls
- Add cloud provider-specific annotations for internal load balancers
- See cloud provider documentation:
* AWS / EKS: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/
* Azure / AKS: https://learn.microsoft.com/en-us/azure/aks/internal-lb
* GCP / GKE: https://cloud.google.com/kubernetes-engine/docs/concepts/service-load-balancer-parameters
- Examples (verify correctness for your environment):
* AWS / EKS: service.beta.kubernetes.io/aws-load-balancer-internal: "true"
* Azure / AKS: service.beta.kubernetes.io/azure-load-balancer-internal: "true"
* GCP / GKE: networking.gke.io/load-balancer-type: "Internal"
2. IMPLEMENT AUTHENTICATION AND AUTHORIZATION
- Configure all clients to authenticate properly
- Set up appropriate authorization policies
3. USE TLS FOR ALL CONNECTIONS
- Enable TLS for client-to-proxy connections
- Enable TLS for proxy-to-broker connections
- Enable TLS for all internal cluster communications (brokers, zookeepers, bookies)
- Note: TLS alone is NOT sufficient as a security solution in Pulsar. Even with TLS enabled,
clusters exposed to untrusted networks remain vulnerable to denial-of-service attacks,
authentication bypass attempts, and protocol-level exploits. Always implement defense-in-depth
security measures and limit exposure to trusted networks only.
4. NETWORK SECURITY
- Use private networks (VPCs)
- Configure firewalls, security groups, and IP restrictions appropriately
- In addition, consider using loadBalancerSourceRanges to limit access to specific IP ranges
5. CLIENT IP ADDRESS BASED ACCESS RESTRICTIONS
- When using a LoadBalancer service type, restrict access to specific IP ranges by configuring
`proxy.service.loadBalancerSourceRanges` in your values.yaml
- Important: This should be implemented alongside other security measures (internal load balancer,
  authentication, TLS, network policies) as part of a defense-in-depth strategy,
  not as a standalone security solution (see the example below)
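For example, a values.yaml sketch that combines an internal load balancer with
client IP restrictions (a sketch only; the AWS annotation and IP range are
illustrative, so adjust them for your cloud provider and environment):

proxy:
  service:
    type: LoadBalancer
    annotations:
      # illustrative AWS annotation; use your provider's internal-LB annotation
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
    # illustrative range; restrict to your trusted networks
    loadBalancerSourceRanges:
      - 10.0.0.0/8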
---------------------------------------------------------------------------------------
ALTERNATIVE FOR EXTERNAL ACCESS
---------------------------------------------------------------------------------------
As an alternative method for external access, Pulsar has support for SNI proxy routing:
https://pulsar.apache.org/docs/next/concepts-proxy-sni-routing/
SNI Proxy routing is supported with proxy servers such as Apache Traffic Server, HAProxy and Nginx.
Note: This option isn't currently implemented in the Apache Pulsar Helm chart.
IMPORTANT: Pulsar binary protocol cannot be exposed outside of the Kubernetes cluster
using Kubernetes Ingress. Kubernetes Ingress works for the Admin REST API and topic lookups,
but clients would be connecting to the advertised listener addresses returned by the brokers and it
would only work when clients can connect directly to brokers. This is not a supported secure option
for exposing Pulsar to untrusted networks.
{{- if .Values.useReleaseStatus }}
======================================================================================
🚀 QUICK START 🚀
======================================================================================
Watching events to view progress of deployment:
kubectl get -n {{ .Values.namespace | default .Release.Namespace }} events -o wide --watch
Watching state of deployed Kubernetes objects, updated every 2 seconds:
watch kubectl get -n {{ .Values.namespace | default .Release.Namespace }} all
{{- if .Values.components.proxy }}
Waiting until Pulsar Proxy is available:
kubectl wait --timeout=600s --for=condition=ready pod -n {{ .Values.namespace | default .Release.Namespace }} -l component=proxy
{{- end }}
Watching state with k9s (https://k9scli.io/topics/install/):
k9s -n {{ .Values.namespace | default .Release.Namespace }}
{{- if and .Values.affinity.anti_affinity (or (gt (int .Values.bookkeeper.replicaCount) 1) (gt (int .Values.zookeeper.replicaCount) 1)) }}
======================================================================================
⚠️ NOTICE FOR DEV K8S CLUSTER USERS ⚠️
======================================================================================
Please note that anti-affinity rules for Zookeeper and Bookie components require at least
one node per replica. There are currently {{ .Values.bookkeeper.replicaCount }} bookies and {{ .Values.zookeeper.replicaCount }} zookeepers configured.
For Kubernetes clusters with fewer than 3 nodes, such as single-node Kubernetes clusters in
development environments like minikube, Docker Desktop, Rancher Desktop (k3s), or Podman
Desktop, you must disable the anti-affinity feature by either:
Adding to your values.yaml:
affinity:
  anti_affinity: false
Or adding "--set affinity.anti_affinity=false" to the helm command line.
After making the changes to your values.yaml file, redeploy with "helm upgrade":
helm upgrade -n {{ .Release.Namespace }} -f your_values_file.yaml {{ .Release.Name }} apachepulsar/pulsar
These configuration instructions can be omitted for Kubernetes clusters with 3 or more nodes.
{{- end }}
{{- end }}
{{- if and (eq .Values.proxy.service.type "LoadBalancer") (not .Values.proxy.service.annotations) }}
======================================================================================
⚠️ 🚨 INSECURE CONFIGURATION DETECTED 🚨 ⚠️
======================================================================================
WARNING: You are using a LoadBalancer service type without internal load balancer
annotations. This is potentially an insecure configuration. Please carefully review
the security recommendations above and visit https://github.com/apache/pulsar-helm-chart
for more information.
======================================================================================
{{- end }}
======================================================================================
DISCLAIMER
======================================================================================
The providers of this Helm chart make no guarantees regarding the security of the chart under
any circumstances. It is the user's responsibility to ensure that their deployment is secure
and complies with all relevant security standards and regulations.
By using this Helm chart, the user acknowledges the risks associated with its default
configuration and the necessity for proper security customization. The user further
agrees that the providers of the Helm chart shall not be liable for any security breaches
or incidents resulting from the use of the chart.
The user assumes full responsibility for the security and integrity of their deployment.
This includes, but is not limited to, the proper configuration of security features and
adherence to best practices for securing network access. The providers of this Helm chart
disclaim all warranties, whether express or implied, including any warranties of
merchantability, fitness for a particular purpose, and non-infringement of third-party rights.
======================================================================================
RESOURCES
======================================================================================
- 🖥️ Install k9s terminal interface for viewing and managing k8s clusters: https://k9scli.io/topics/install/
- ❓ Usage Questions: https://github.com/apache/pulsar/discussions/categories/q-a
- 🐛 Report Issues: https://github.com/apache/pulsar-helm-chart/issues
- 🔒 Security Issues: https://pulsar.apache.org/security/
- 📚 Documentation: https://github.com/apache/pulsar-helm-chart
🌟 Please contribute to improve the Apache Pulsar Helm chart and its documentation:
- 🤝 Contribute: https://github.com/apache/pulsar-helm-chart
Thank you for installing Apache Pulsar Helm chart version {{ .Chart.Version }}.
{{/*
Define autorecovery zookeeper client tls settings
*/}}
{{- define "pulsar.autorecovery.zookeeper.tls.settings" -}}
{{- if and .Values.tls.enabled .Values.tls.zookeeper.enabled }}
{{- include "pulsar.component.zookeeper.tls.settings" (dict "component" "autorecovery" "isClient" true "isCacerts" .Values.tls.autorecovery.cacerts.enabled) -}}
{{- end }}
{{- end }}
{{/* Define autorecovery tls certs mounts */}}
- name: ca
mountPath: "/pulsar/certs/ca"
readOnly: true
{{- if .Values.tls.autorecovery.cacerts.enabled }}
- mountPath: "/pulsar/certs/cacerts"
name: autorecovery-cacerts
{{- range $cert := .Values.tls.autorecovery.cacerts.certs }}
- name: {{ $cert.name }}
mountPath: "/pulsar/certs/{{ $cert.name }}"
readOnly: true
{{- end }}
- name: certs-scripts
mountPath: "/pulsar/bin/certs-combine-pem.sh"
subPath: certs-combine-pem.sh
- name: certs-scripts
mountPath: "/pulsar/bin/certs-combine-pem-infinity.sh"
subPath: certs-combine-pem-infinity.sh
{{- end }}
{{- end }}
{{/* Define autorecovery tls certs volumes */}}
path: tls.crt
- key: tls.key
path: tls.key
- key: tls-combined.pem
path: tls-combined.pem
- name: ca
secret:
{{- if eq .Values.certs.internal_issuer.type "selfsigning" }}
secretName: "{{ .Release.Name }}-{{ .Values.tls.ca_suffix }}"
{{- end }}
{{- if eq .Values.certs.internal_issuer.type "ca" }}
secretName: "{{ .Values.certs.issuers.ca.secretName }}"
{{- end }}
secretName: "{{ template "pulsar.certs.issuers.ca.secretName" . }}"
items:
- key: ca.crt
path: ca.crt
{{- if .Values.tls.autorecovery.cacerts.enabled }}
- name: autorecovery-cacerts
emptyDir: {}
{{- range $cert := .Values.tls.autorecovery.cacerts.certs }}
- name: {{ $cert.name }}
secret:
secretName: "{{ $cert.existingSecret }}"
items:
{{- range $key := $cert.secretKeys }}
- key: {{ $key }}
path: {{ $key }}
{{- end }}
{{- end }}
- name: certs-scripts
configMap:
name: "{{ template "pulsar.fullname" . }}-certs-scripts"
defaultMode: 0755
{{- end }}
{{- end }}
{{/* Define autorecovery init container : verify cluster id */}}
{{- define "pulsar.autorecovery.init.verify_cluster_id" -}}
bin/apply-config-from-env.py conf/bookkeeper.conf;
export BOOKIE_MEM="-Xmx128M";
{{- include "pulsar.autorecovery.zookeeper.tls.settings" . -}}
{{- include "pulsar.autorecovery.zookeeper.tls.settings" . }}
until timeout 15 bin/bookkeeper shell whatisinstanceid; do
sleep 3;
done;
{{/*
Define bookie zookeeper client tls settings
*/}}
{{- define "pulsar.bookkeeper.zookeeper.tls.settings" -}}
{{- if and .Values.tls.enabled .Values.tls.zookeeper.enabled }}
{{- include "pulsar.component.zookeeper.tls.settings" (dict "component" "bookie" "isClient" true "isCacerts" .Values.tls.bookie.cacerts.enabled) -}}
{{- end }}
{{- end }}
Define bookie tls certs mounts
*/}}
{{- define "pulsar.bookkeeper.certs.volumeMounts" -}}
{{- if .Values.tls.enabled }}
{{- if or .Values.tls.bookie.enabled .Values.tls.zookeeper.enabled }}
- name: bookie-certs
mountPath: "/pulsar/certs/bookie"
readOnly: true
{{- end }}
- name: ca
mountPath: "/pulsar/certs/ca"
readOnly: true
{{- if .Values.tls.bookie.cacerts.enabled }}
- mountPath: "/pulsar/certs/cacerts"
name: bookie-cacerts
{{- range $cert := .Values.tls.bookie.cacerts.certs }}
- name: {{ $cert.name }}
mountPath: "/pulsar/certs/{{ $cert.name }}"
readOnly: true
{{- end }}
- name: certs-scripts
mountPath: "/pulsar/bin/certs-combine-pem.sh"
subPath: certs-combine-pem.sh
- name: certs-scripts
mountPath: "/pulsar/bin/certs-combine-pem-infinity.sh"
subPath: certs-combine-pem-infinity.sh
{{- end }}
{{- end }}
Define bookie tls certs volumes
*/}}
{{- define "pulsar.bookkeeper.certs.volumes" -}}
{{- if .Values.tls.enabled }}
{{- if or .Values.tls.bookie.enabled .Values.tls.zookeeper.enabled }}
- name: bookie-certs
secret:
secretName: "{{ .Release.Name }}-{{ .Values.tls.bookie.cert_name }}"
@ -73,23 +86,35 @@ Define bookie tls certs volumes
path: tls.crt
- key: tls.key
path: tls.key
{{- if .Values.tls.zookeeper.enabled }}
- key: tls-combined.pem
path: tls-combined.pem
{{- end }}
{{- end }}
- name: ca
secret:
{{- if eq .Values.certs.internal_issuer.type "selfsigning" }}
secretName: "{{ .Release.Name }}-{{ .Values.tls.ca_suffix }}"
{{- end }}
{{- if eq .Values.certs.internal_issuer.type "ca" }}
secretName: "{{ .Values.certs.issuers.ca.secretName }}"
{{- end }}
secretName: "{{ template "pulsar.certs.issuers.ca.secretName" . }}"
items:
- key: ca.crt
path: ca.crt
{{- if .Values.tls.bookie.cacerts.enabled }}
- name: bookie-cacerts
emptyDir: {}
{{- range $cert := .Values.tls.bookie.cacerts.certs }}
- name: {{ $cert.name }}
secret:
secretName: "{{ $cert.existingSecret }}"
items:
{{- range $key := $cert.secretKeys }}
- key: {{ $key }}
path: {{ $key }}
{{- end }}
{{- end }}
- name: certs-scripts
configMap:
name: "{{ template "pulsar.fullname" . }}-certs-scripts"
defaultMode: 0755
{{- end }}
{{- end }}
Define bookie common config
*/}}
{{- define "pulsar.bookkeeper.config.common" -}}
{{/*
Configure BookKeeper's metadata store (available since BookKeeper 4.7.0 / BP-29)
https://bookkeeper.apache.org/bps/BP-29-metadata-store-api-module/
https://bookkeeper.apache.org/docs/deployment/manual#cluster-metadata-setup
*/}}
# Set empty values for zkServers and zkLedgersRootPath since we're using the metadataServiceUri to configure BookKeeper's metadata store
zkServers: ""
zkLedgersRootPath: ""
{{- if .Values.components.zookeeper }}
zkServers: "{{ template "pulsar.zookeeper.connect" . }}"
zkLedgersRootPath: "{{ .Values.metadataPrefix }}/ledgers"
{{- if (and (hasKey .Values.pulsar_metadata "bookkeeper") .Values.pulsar_metadata.bookkeeper.usePulsarMetadataBookieDriver) }}
# there's a bug when using PulsarMetadataBookieDriver since it always appends /ledgers to the metadataServiceUri
# Possibly a bug in org.apache.pulsar.metadata.bookkeeper.AbstractMetadataDriver#resolveLedgersRootPath in Pulsar code base
metadataServiceUri: "metadata-store:zk:{{ template "pulsar.zookeeper.connect" . }}{{ .Values.metadataPrefix }}"
{{- else }}
# use zk+hierarchical:// when using BookKeeper's built-in metadata driver
metadataServiceUri: "zk+hierarchical://{{ template "pulsar.zookeeper.connect" . }}{{ .Values.metadataPrefix }}/ledgers"
{{- end }}
{{- else if .Values.components.oxia }}
metadataServiceUri: "{{ template "pulsar.oxia.metadata.url.bookkeeper" . }}"
{{- end }}
{{- /* metadataStoreSessionTimeoutMillis maps to zkTimeout in bookkeeper.conf for both zookeeper and oxia metadata stores */}}
{{- if (and (hasKey .Values.pulsar_metadata "bookkeeper") (hasKey .Values.pulsar_metadata.bookkeeper "metadataStoreSessionTimeoutMillis")) }}
zkTimeout: "{{ .Values.pulsar_metadata.bookkeeper.metadataStoreSessionTimeoutMillis }}"
{{- end }}
# enable bookkeeper http server
httpServerEnabled: "true"
httpServerPort: "{{ .Values.bookkeeper.ports.http }}"
PULSAR_PREFIX_tlsCertificatePath: /pulsar/certs/bookie/tls.crt
PULSAR_PREFIX_tlsKeyStoreType: PEM
PULSAR_PREFIX_tlsKeyStore: /pulsar/certs/bookie/tls.key
PULSAR_PREFIX_tlsTrustStoreType: PEM
PULSAR_PREFIX_tlsTrustStore: {{ ternary "/pulsar/certs/cacerts/ca-combined.pem" "/pulsar/certs/ca/ca.crt" .Values.tls.bookie.cacerts.enabled | quote }}
{{- end }}
{{- end }}
{{/* Define bookie init container : verify cluster id */}}
{{- if not (and .Values.volumes.persistence .Values.bookkeeper.volumes.persistence) }}
bin/apply-config-from-env.py conf/bookkeeper.conf;
export BOOKIE_MEM="-Xmx128M";
{{- include "pulsar.bookkeeper.zookeeper.tls.settings" . -}}
{{- include "pulsar.bookkeeper.zookeeper.tls.settings" . }}
until timeout 15 bin/bookkeeper shell whatisinstanceid; do
sleep 3;
done;
bin/bookkeeper shell bookieformat -nonInteractive -force -deleteCookie || true
set -e;
bin/apply-config-from-env.py conf/bookkeeper.conf;
export BOOKIE_MEM="-Xmx128M";
{{- include "pulsar.bookkeeper.zookeeper.tls.settings" . -}}
{{- include "pulsar.bookkeeper.zookeeper.tls.settings" . }}
until timeout 15 bin/bookkeeper shell whatisinstanceid; do
sleep 3;
done;
{{/*
Define broker zookeeper client tls settings
*/}}
{{- define "pulsar.broker.zookeeper.tls.settings" -}}
{{- if and .Values.tls.enabled .Values.tls.zookeeper.enabled }}
{{- include "pulsar.component.zookeeper.tls.settings" (dict "component" "broker" "isClient" true "isCacerts" .Values.tls.broker.cacerts.enabled) -}}
{{- end }}
{{- end }}
Define broker tls certs mounts
*/}}
{{- define "pulsar.broker.certs.volumeMounts" -}}
{{- if .Values.tls.enabled }}
{{- if or .Values.tls.broker.enabled (or .Values.tls.bookie.enabled .Values.tls.zookeeper.enabled) }}
- name: broker-certs
mountPath: "/pulsar/certs/broker"
readOnly: true
{{- end }}
- name: ca
mountPath: "/pulsar/certs/ca"
readOnly: true
{{- if .Values.tls.broker.cacerts.enabled }}
- mountPath: "/pulsar/certs/cacerts"
name: broker-cacerts
{{- range $cert := .Values.tls.broker.cacerts.certs }}
- name: {{ $cert.name }}
mountPath: "/pulsar/certs/{{ $cert.name }}"
readOnly: true
{{- end }}
- name: certs-scripts
mountPath: "/pulsar/bin/certs-combine-pem.sh"
subPath: certs-combine-pem.sh
- name: certs-scripts
mountPath: "/pulsar/bin/certs-combine-pem-infinity.sh"
subPath: certs-combine-pem-infinity.sh
{{- end }}
{{- end }}
Define broker tls certs volumes
*/}}
{{- define "pulsar.broker.certs.volumes" -}}
{{- if .Values.tls.enabled }}
{{- if or .Values.tls.broker.enabled (or .Values.tls.bookie.enabled .Values.tls.zookeeper.enabled) }}
- name: broker-certs
secret:
secretName: "{{ .Release.Name }}-{{ .Values.tls.broker.cert_name }}"
@ -79,22 +92,34 @@ Define broker tls certs volumes
path: tls.crt
- key: tls.key
path: tls.key
{{- if .Values.tls.zookeeper.enabled }}
- key: tls-combined.pem
path: tls-combined.pem
{{- end }}
{{- end }}
- name: ca
secret:
{{- if eq .Values.certs.internal_issuer.type "selfsigning" }}
secretName: "{{ .Release.Name }}-{{ .Values.tls.ca_suffix }}"
{{- end }}
{{- if eq .Values.certs.internal_issuer.type "ca" }}
secretName: "{{ .Values.certs.issuers.ca.secretName }}"
{{- end }}
secretName: "{{ template "pulsar.certs.issuers.ca.secretName" . }}"
items:
- key: ca.crt
path: ca.crt
{{- end }}
{{- if .Values.tls.broker.cacerts.enabled }}
- name: broker-cacerts
emptyDir: {}
{{- range $cert := .Values.tls.broker.cacerts.certs }}
- name: {{ $cert.name }}
secret:
secretName: "{{ $cert.existingSecret }}"
items:
{{- range $key := $cert.secretKeys }}
- key: {{ $key }}
path: {{ $key }}
{{- end }}
{{- end }}
- name: certs-scripts
configMap:
name: "{{ template "pulsar.fullname" . }}-keytool-configmap"
name: "{{ template "pulsar.fullname" . }}-certs-scripts"
defaultMode: 0755
{{- end }}
{{- end }}
{{- end }}
{{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}}
{{/*
Define the pulsar certs ca issuer name
*/}}
{{- define "pulsar.certs.issuers.ca.name" -}}
{{- if .Values.certs.internal_issuer.enabled -}}
{{- if and (eq .Values.certs.internal_issuer.type "selfsigning") .Values.certs.issuers.selfsigning.name -}}
{{- .Values.certs.issuers.selfsigning.name -}}
{{- else if and (eq .Values.certs.internal_issuer.type "ca") .Values.certs.issuers.ca.name -}}
{{- .Values.certs.issuers.ca.name -}}
{{- else -}}
{{- template "pulsar.fullname" . }}-{{ .Values.certs.internal_issuer.component }}-ca-issuer
{{- end -}}
{{- else -}}
{{- if .Values.certs.issuers.ca.name -}}
{{- .Values.certs.issuers.ca.name -}}
{{- else -}}
{{- fail "certs.issuers.ca.name is required when TLS is enabled and certs.internal_issuer.enabled is false" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Define the pulsar certs ca issuer secret name
*/}}
{{- define "pulsar.certs.issuers.ca.secretName" -}}
{{- if .Values.certs.internal_issuer.enabled -}}
{{- if and (eq .Values.certs.internal_issuer.type "selfsigning") .Values.certs.issuers.selfsigning.secretName -}}
{{- .Values.certs.issuers.selfsigning.secretName -}}
{{- else if and (eq .Values.certs.internal_issuer.type "ca") .Values.certs.issuers.ca.secretName -}}
{{- .Values.certs.issuers.ca.secretName -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name .Values.tls.ca_suffix -}}
{{- end -}}
{{- else -}}
{{- if .Values.certs.issuers.ca.secretName -}}
{{- .Values.certs.issuers.ca.secretName -}}
{{- else -}}
{{- fail "certs.issuers.ca.secretName is required when TLS is enabled and certs.internal_issuer.enabled is false" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Common certificate template
Usage: {{- include "pulsar.cert.template" (dict "root" . "componentConfig" .Values.proxy "tlsConfig" .Values.tls.proxy) -}}
*/}}
{{- define "pulsar.cert.template" -}}
{{- if eq .root.Values.certs.internal_issuer.apiVersion "cert-manager.io/v1beta1" -}}
{{- fail "cert-manager.io/v1beta1 is no longer supported. Please set certs.internal_issuer.apiVersion to cert-manager.io/v1" -}}
{{- end -}}
apiVersion: "{{ .root.Values.certs.internal_issuer.apiVersion }}"
kind: Certificate
metadata:
name: "{{ template "pulsar.fullname" .root }}-{{ .tlsConfig.cert_name }}"
namespace: {{ template "pulsar.namespace" .root }}
labels:
{{- include "pulsar.standardLabels" .root | nindent 4 }}
spec:
# Secret names are always required.
secretName: "{{ .root.Release.Name }}-{{ .tlsConfig.cert_name }}"
{{- if .root.Values.tls.zookeeper.enabled }}
additionalOutputFormats:
- type: CombinedPEM
{{- end }}
duration: "{{ .root.Values.tls.common.duration }}"
renewBefore: "{{ .root.Values.tls.common.renewBefore }}"
subject:
organizations:
{{ toYaml .root.Values.tls.common.organization | indent 4 }}
# The use of the common name field has been deprecated since 2000 and is
# discouraged from being used.
commonName: "{{ template "pulsar.fullname" .root }}-{{ .componentConfig.component }}"
isCA: false
privateKey:
size: {{ .root.Values.tls.common.keySize }}
algorithm: {{ .root.Values.tls.common.keyAlgorithm }}
encoding: {{ .root.Values.tls.common.keyEncoding }}
usages:
- server auth
- client auth
# At least one of a DNS Name, URI SAN, or IP address is required.
dnsNames:
{{- if .tlsConfig.dnsNames }}
{{ toYaml .tlsConfig.dnsNames | indent 4 }}
{{- end }}
- {{ printf "*.%s-%s.%s.svc.%s" (include "pulsar.fullname" .root) .componentConfig.component (include "pulsar.namespace" .root) .root.Values.clusterDomain | quote }}
- {{ printf "%s-%s" (include "pulsar.fullname" .root) .componentConfig.component | quote }}
# Issuer references are always required.
issuerRef:
name: "{{ template "pulsar.certs.issuers.ca.name" .root }}"
# We can reference ClusterIssuers by changing the kind here.
# The default value is Issuer (i.e. a locally namespaced Issuer)
kind: Issuer
# This is optional since cert-manager will default to this value however
# if you are using an external issuer, change this to that issuer group.
group: cert-manager.io
{{- end -}}
{{/*
CA certificates template
Usage: {{ include "pulsar.certs.cacerts" (dict "certs" .Values.tls.<component>.cacerts.certs) }}
*/}}
{{- define "pulsar.certs.cacerts" -}}
{{- $certs := .certs -}}
{{- $cacerts := list -}}
{{- $cacerts = print "/pulsar/certs/ca/ca.crt" | append $cacerts -}}
{{- range $cert := $certs -}}
{{- range $key := $cert.secretKeys -}}
{{- $cacerts = print "/pulsar/certs/" $cert.name "/" $key | append $cacerts -}}
{{- end -}}
{{- end -}}
{{ join " " $cacerts }}
{{- end -}}
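As an illustration of what this helper renders: with a hypothetical values.yaml entry like the one below (the names are invented for the example), the joined result would be `/pulsar/certs/ca/ca.crt /pulsar/certs/custom-ca/root.crt`.
```yaml
tls:
  broker:
    cacerts:
      enabled: true
      certs:
        # name and secret are illustrative placeholders
        - name: custom-ca
          existingSecret: my-ca-secret
          secretKeys:
            - root.crt
```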
{{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}}
{{- define "pulsar.podMonitor" -}}
{{- $root := index . 0 }}
{{- $component := index . 1 }}
{{- $matchLabel := index . 2 }}
{{- $portName := "http" }}
{{- if gt (len .) 3 }}
{{- $portName = index . 3 }}
{{- end }}
{{/* Extract component parts for nested values */}}
{{- $componentParts := splitList "." $component }}
{{- $valuesPath := $root.Values }}
{{- range $componentParts }}
{{- $valuesPath = index $valuesPath . }}
{{- end }}
{{- if index $root.Values "victoria-metrics-k8s-stack" "enabled" }}
apiVersion: operator.victoriametrics.com/v1beta1
kind: VMPodScrape
{{- else }}
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
{{- end }}
metadata:
name: {{ template "pulsar.fullname" $root }}-{{ replace "." "-" $component }}
labels:
{{- include "pulsar.standardLabels" $root | nindent 4 }}
spec:
jobLabel: {{ replace "." "-" $component }}
podMetricsEndpoints:
- port: {{ $portName }}
path: /metrics
scheme: http
interval: {{ $valuesPath.podMonitor.interval }}
scrapeTimeout: {{ $valuesPath.podMonitor.scrapeTimeout }}
# Set honor labels to true to allow overriding namespace label with Pulsar's namespace label
honorLabels: true
{{- if index $root.Values "victoria-metrics-k8s-stack" "enabled" }}
relabelConfigs:
{{- else }}
relabelings:
{{- end }}
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- sourceLabels: [__meta_kubernetes_namespace]
action: replace
targetLabel: kubernetes_namespace
- sourceLabels: [__meta_kubernetes_pod_label_component]
action: replace
targetLabel: job
- sourceLabels: [__meta_kubernetes_pod_name]
action: replace
targetLabel: kubernetes_pod_name
{{- if or $valuesPath.podMonitor.metricRelabelings (and $valuesPath.podMonitor.dropUnderscoreCreatedMetrics (index $valuesPath.podMonitor.dropUnderscoreCreatedMetrics "enabled")) }}
{{- if index $root.Values "victoria-metrics-k8s-stack" "enabled" }}
metricRelabelConfigs:
{{- else }}
metricRelabelings:
{{- end }}
{{- if and $valuesPath.podMonitor.dropUnderscoreCreatedMetrics (index $valuesPath.podMonitor.dropUnderscoreCreatedMetrics "enabled") }}
# Drop metrics that end with _created, auto-created by metrics library to match OpenMetrics format
- sourceLabels: [__name__]
{{- if and (hasKey $valuesPath.podMonitor.dropUnderscoreCreatedMetrics "excludePatterns") $valuesPath.podMonitor.dropUnderscoreCreatedMetrics.excludePatterns }}
regex: "(?!{{ $valuesPath.podMonitor.dropUnderscoreCreatedMetrics.excludePatterns | join "|" }}).*_created$"
{{- else }}
regex: ".*_created$"
{{- end }}
action: drop
{{- end }}
{{- with $valuesPath.podMonitor.metricRelabelings }}
{{ toYaml . | indent 8 }}
{{- end }}
{{- end }}
selector:
matchLabels:
{{- include "pulsar.matchLabels" $root | nindent 6 }}
{{ $matchLabel }}
{{- end -}}
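A hypothetical call site for this helper (the chart's actual per-component templates may pass different arguments):
```
{{- if .Values.broker.podMonitor.enabled }}
{{- include "pulsar.podMonitor" (list . "broker" "component: broker") }}
{{- end }}
```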
{{/* Define coordinator entrypoint */}}
{{- define "oxia.coordinator.entrypoint" -}}
- "oxia"
- "coordinator"
{{- if .Values.oxia.coordinator.customConfigMapName }}
- "--conf=configmap:{{ template "pulsar.namespace" . }}/{{ .Values.oxia.coordinator.customConfigMapName }}"
{{- else }}
- "--conf=configmap:{{ template "pulsar.namespace" . }}/{{ template "pulsar.fullname" . }}-{{ .Values.oxia.component }}-coordinator"
{{- end }}
- "--log-json"
- "--metadata=configmap"
- "--k8s-namespace={{ template "pulsar.namespace" . }}"
{{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}}
{{/*
Define proxy tls certs mounts
*/}}
{{- define "pulsar.proxy.certs.volumeMounts" -}}
{{- if .Values.tls.enabled }}
{{- if .Values.tls.proxy.enabled }}
- mountPath: "/pulsar/certs/proxy"
name: proxy-certs
readOnly: true
{{- end }}
- mountPath: "/pulsar/certs/ca"
name: ca
readOnly: true
{{- end }}
{{- if .Values.tls.proxy.cacerts.enabled }}
- mountPath: "/pulsar/certs/cacerts"
name: proxy-cacerts
{{- range $cert := .Values.tls.proxy.cacerts.certs }}
- name: {{ $cert.name }}
mountPath: "/pulsar/certs/{{ $cert.name }}"
readOnly: true
{{- end }}
- name: certs-scripts
mountPath: "/pulsar/bin/certs-combine-pem.sh"
subPath: certs-combine-pem.sh
- name: certs-scripts
mountPath: "/pulsar/bin/certs-combine-pem-infinity.sh"
subPath: certs-combine-pem-infinity.sh
{{- end }}
{{- end }}
{{/*
Define proxy tls certs volumes
*/}}
{{- define "pulsar.proxy.certs.volumes" -}}
{{- if .Values.tls.enabled }}
{{- if .Values.tls.proxy.enabled }}
- name: proxy-certs
secret:
secretName: "{{ .Release.Name }}-{{ .Values.tls.proxy.cert_name }}"
items:
- key: tls.crt
path: tls.crt
- key: tls.key
path: tls.key
{{- if .Values.tls.zookeeper.enabled }}
- key: tls-combined.pem
path: tls-combined.pem
{{- end }}
{{- end }}
- name: ca
secret:
secretName: "{{ template "pulsar.certs.issuers.ca.secretName" . }}"
items:
- key: ca.crt
path: ca.crt
{{- end }}
{{- if .Values.tls.proxy.cacerts.enabled }}
- name: proxy-cacerts
emptyDir: {}
{{- range $cert := .Values.tls.proxy.cacerts.certs }}
- name: {{ $cert.name }}
secret:
secretName: "{{ $cert.existingSecret }}"
items:
{{- range $key := $cert.secretKeys }}
- key: {{ $key }}
path: {{ $key }}
{{- end }}
{{- end }}
- name: certs-scripts
configMap:
name: "{{ template "pulsar.fullname" . }}-certs-scripts"
defaultMode: 0755
{{- end }}
{{- end }}
{{/*
Define toolset zookeeper client tls settings
*/}}
{{- define "pulsar.toolset.zookeeper.tls.settings" -}}
{{- if and .Values.tls.enabled .Values.tls.zookeeper.enabled -}}
{{- include "pulsar.component.zookeeper.tls.settings" (dict "component" "toolset" "isClient" true "isCacerts" .Values.tls.toolset.cacerts.enabled) -}}
{{- end -}}
{{- end }}
Define toolset tls certs mounts
*/}}
{{- define "pulsar.toolset.certs.volumeMounts" -}}
{{- if .Values.tls.enabled }}
{{- if .Values.tls.zookeeper.enabled }}
- name: toolset-certs
mountPath: "/pulsar/certs/toolset"
readOnly: true
{{- end }}
- name: ca
mountPath: "/pulsar/certs/ca"
readOnly: true
{{- if .Values.tls.toolset.cacerts.enabled }}
- mountPath: "/pulsar/certs/cacerts"
name: toolset-cacerts
{{- range $cert := .Values.tls.toolset.cacerts.certs }}
- name: {{ $cert.name }}
mountPath: "/pulsar/certs/{{ $cert.name }}"
readOnly: true
{{- end }}
- name: certs-scripts
mountPath: "/pulsar/bin/certs-combine-pem.sh"
subPath: certs-combine-pem.sh
- name: certs-scripts
mountPath: "/pulsar/bin/certs-combine-pem-infinity.sh"
subPath: certs-combine-pem-infinity.sh
{{- end }}
{{- end }}
Define toolset tls certs volumes
*/}}
{{- define "pulsar.toolset.certs.volumes" -}}
{{- if .Values.tls.enabled }}
{{- if .Values.tls.zookeeper.enabled }}
- name: toolset-certs
secret:
secretName: "{{ .Release.Name }}-{{ .Values.tls.toolset.cert_name }}"
path: tls.crt
- key: tls.key
path: tls.key
- key: tls-combined.pem
path: tls-combined.pem
{{- end }}
- name: ca
secret:
{{- if eq .Values.certs.internal_issuer.type "selfsigning" }}
secretName: "{{ .Release.Name }}-{{ .Values.tls.ca_suffix }}"
{{- end }}
{{- if eq .Values.certs.internal_issuer.type "ca" }}
secretName: "{{ .Values.certs.issuers.ca.secretName }}"
{{- end }}
secretName: "{{ template "pulsar.certs.issuers.ca.secretName" . }}"
items:
- key: ca.crt
path: ca.crt
{{- end }}
{{- if .Values.tls.toolset.cacerts.enabled }}
- name: toolset-cacerts
emptyDir: {}
{{- range $cert := .Values.tls.toolset.cacerts.certs }}
- name: {{ $cert.name }}
secret:
secretName: "{{ $cert.existingSecret }}"
items:
{{- range $key := $cert.secretKeys }}
- key: {{ $key }}
path: {{ $key }}
{{- end }}
{{- end }}
- name: certs-scripts
configMap:
name: "{{ template "pulsar.fullname" . }}-keytool-configmap"
name: "{{ template "pulsar.fullname" . }}-certs-scripts"
defaultMode: 0755
{{- end }}
{{- end }}
{{- end }}
{{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}}
{{/*
Renders a value that contains template perhaps with scope if the scope is present.
Usage:
{{ include "common.tplvalues.render" ( dict "value" .Values.path.to.the.Value "context" $ ) }}
{{ include "common.tplvalues.render" ( dict "value" .Values.path.to.the.Value "context" $ "scope" $app ) }}
*/}}
{{- define "common.tplvalues.render" -}}
{{- $value := typeIs "string" .value | ternary .value (.value | toYaml) }}
{{- if contains "{{" (toJson .value) }}
{{- if .scope }}
{{- tpl (cat "{{- with $.RelativeScope -}}" $value "{{- end }}") (merge (dict "RelativeScope" .scope) .context) }}
{{- else }}
{{- tpl $value .context }}
{{- end }}
{{- else }}
{{- $value }}
{{- end }}
{{- end -}}
{{/*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/}}
{{/*
Check deprecated setting auth.authentication.provider since 4.1.0
*/}}
{{- if (and .Values.auth.authentication.enabled (not (empty .Values.auth.authentication.provider))) }}
{{- fail "ERROR: Setting auth.authentication.provider is no longer supported. For details, see the migration guide in README.md." }}
{{- end }}
{{/*
Define zookeeper tls settings
*/}}
{{- define "pulsar.zookeeper.tls.settings" -}}
{{- if and .Values.tls.enabled .Values.tls.zookeeper.enabled }}
{{- include "pulsar.component.zookeeper.tls.settings" (dict "component" "zookeeper" "isClient" false "isCacerts" .Values.tls.zookeeper.cacerts.enabled) -}}
{{- end }}
{{- end }}
{{- define "pulsar.component.zookeeper.tls.settings" }}
{{- $component := .component -}}
{{- $isClient := .isClient -}}
{{- $keyFile := printf "/pulsar/certs/%s/tls-combined.pem" $component -}}
{{- $caFile := ternary "/pulsar/certs/cacerts/ca-combined.pem" "/pulsar/certs/ca/ca.crt" .isCacerts -}}
{{- if $isClient }}
echo $'\n' >> conf/pulsar_env.sh
echo "PULSAR_EXTRA_OPTS=\"\${PULSAR_EXTRA_OPTS} -Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty -Dzookeeper.client.secure=true -Dzookeeper.client.certReload=true -Dzookeeper.ssl.keyStore.location={{- $keyFile }} -Dzookeeper.ssl.keyStore.type=PEM -Dzookeeper.ssl.trustStore.location={{- $caFile }} -Dzookeeper.ssl.trustStore.type=PEM\"" >> conf/pulsar_env.sh
echo $'\n' >> conf/bkenv.sh
echo "BOOKIE_EXTRA_OPTS=\"\${BOOKIE_EXTRA_OPTS} -Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty -Dzookeeper.client.secure=true -Dzookeeper.client.certReload=true -Dzookeeper.ssl.keyStore.location={{- $keyFile }} -Dzookeeper.ssl.keyStore.type=PEM -Dzookeeper.ssl.trustStore.location={{- $caFile }} -Dzookeeper.ssl.trustStore.type=PEM\"" >> conf/bkenv.sh
{{- else }}
echo $'\n' >> conf/pulsar_env.sh
echo "PULSAR_EXTRA_OPTS=\"\${PULSAR_EXTRA_OPTS} -Dzookeeper.ssl.keyStore.location={{- $keyFile }} -Dzookeeper.ssl.keyStore.type=PEM -Dzookeeper.ssl.trustStore.location={{- $caFile }} -Dzookeeper.ssl.trustStore.type=PEM\"" >> conf/pulsar_env.sh
{{- end }}
{{- end }}
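For the client branch (isClient true) with cacerts disabled, the line appended to conf/pulsar_env.sh renders to roughly the following, assuming component "zookeeper":

  PULSAR_EXTRA_OPTS="${PULSAR_EXTRA_OPTS} -Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty -Dzookeeper.client.secure=true -Dzookeeper.client.certReload=true -Dzookeeper.ssl.keyStore.location=/pulsar/certs/zookeeper/tls-combined.pem -Dzookeeper.ssl.keyStore.type=PEM -Dzookeeper.ssl.trustStore.location=/pulsar/certs/ca/ca.crt -Dzookeeper.ssl.trustStore.type=PEM"

With cacerts enabled, the trustStore location becomes /pulsar/certs/cacerts/ca-combined.pem instead.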
{{/*
Define zookeeper tls certs mounts
*/}}
{{- define "pulsar.zookeeper.certs.volumeMounts" -}}
{{- if and .Values.tls.enabled .Values.tls.zookeeper.enabled }}
- mountPath: "/pulsar/certs/zookeeper"
name: zookeeper-certs
readOnly: true
- mountPath: "/pulsar/certs/ca"
name: ca
readOnly: true
{{- end }}
{{- if .Values.tls.zookeeper.cacerts.enabled }}
- mountPath: "/pulsar/certs/cacerts"
name: zookeeper-cacerts
{{- range $cert := .Values.tls.zookeeper.cacerts.certs }}
- name: {{ $cert.name }}
mountPath: "/pulsar/certs/{{ $cert.name }}"
readOnly: true
{{- end }}
- name: certs-scripts
mountPath: "/pulsar/bin/certs-combine-pem.sh"
subPath: certs-combine-pem.sh
- name: certs-scripts
mountPath: "/pulsar/bin/certs-combine-pem-infinity.sh"
subPath: certs-combine-pem-infinity.sh
{{- end }}
{{- end }}
{{/*
Define zookeeper tls certs volumes
*/}}
{{- define "pulsar.zookeeper.certs.volumes" -}}
{{- if and .Values.tls.enabled .Values.tls.zookeeper.enabled }}
- name: zookeeper-certs
secret:
secretName: "{{ .Release.Name }}-{{ .Values.tls.zookeeper.cert_name }}"
items:
- key: tls.crt
path: tls.crt
- key: tls.key
path: tls.key
- key: tls-combined.pem
path: tls-combined.pem
- name: ca
secret:
secretName: "{{ template "pulsar.certs.issuers.ca.secretName" . }}"
items:
- key: ca.crt
path: ca.crt
{{- end }}
{{- if .Values.tls.zookeeper.cacerts.enabled }}
- name: zookeeper-cacerts
emptyDir: {}
{{- range $cert := .Values.tls.zookeeper.cacerts.certs }}
- name: {{ $cert.name }}
secret:
secretName: "{{ $cert.existingSecret }}"
items:
{{- range $key := $cert.secretKeys }}
- key: {{ $key }}
path: {{ $key }}
{{- end }}
{{- end }}
- name: certs-scripts
configMap:
name: "{{ template "pulsar.fullname" . }}-certs-scripts"
defaultMode: 0755
{{- end }}
{{- end }}
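The cacerts fields consumed above (name, existingSecret, secretKeys) imply a values layout along these lines; the names and secret are hypothetical:

  tls:
    zookeeper:
      enabled: true
      cacerts:
        enabled: true
        certs:
          - name: corporate-ca                 # hypothetical
            existingSecret: corporate-ca-secret
            secretKeys:
              - ca.crt

  # each entry is mounted at /pulsar/certs/<name>/ and the combine scripts
  # merge the listed keys into /pulsar/certs/cacerts/ca-combined.pem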

View File

@ -17,42 +17,7 @@
# under the License.
#
# deploy broker PodMonitor only when `$.Values.broker.podMonitor.enabled` is true
# deploy autorecovery PodMonitor only when `$.Values.autorecovery.podMonitor.enabled` is true
{{- if $.Values.autorecovery.podMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: {{ template "pulsar.name" . }}-recovery
labels:
app: {{ template "pulsar.name" . }}
chart: {{ template "pulsar.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
jobLabel: recovery
podMetricsEndpoints:
- port: http
path: /metrics
scheme: http
interval: {{ $.Values.autorecovery.podMonitor.interval }}
scrapeTimeout: {{ $.Values.autorecovery.podMonitor.scrapeTimeout }}
relabelings:
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- sourceLabels: [__meta_kubernetes_namespace]
action: replace
targetLabel: kubernetes_namespace
- sourceLabels: [__meta_kubernetes_pod_label_component]
action: replace
targetLabel: job
- sourceLabels: [__meta_kubernetes_pod_name]
action: replace
targetLabel: kubernetes_pod_name
{{- if $.Values.autorecovery.podMonitor.metricRelabelings }}
metricRelabelings: {{ toYaml $.Values.autorecovery.podMonitor.metricRelabelings | nindent 8 }}
{{- end }}
selector:
matchLabels:
{{- include "pulsar.matchLabels" . | nindent 6 }}
component: {{ .Values.autorecovery.component }}
{{- include "pulsar.podMonitor" (list . "autorecovery" (printf "component: %s" .Values.autorecovery.component)) }}
{{- end }}
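The inline PodMonitor manifests are replaced throughout by a shared named template invoked with a list argument: the root context, the values path of the component, and a selector label line. Later calls in this changeset pass a fourth element (a metrics port name), which presumably overrides a default port when present:

  {{- include "pulsar.podMonitor" (list . "autorecovery" (printf "component: %s" .Values.autorecovery.component)) }}
  {{- include "pulsar.podMonitor" (list . "oxia.coordinator" (printf "component: %s-coordinator" .Values.oxia.component) "metrics") }}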

View File

@ -1,85 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
{{- if and (semverCompare "<1.25-0" .Capabilities.KubeVersion.Version) .Values.rbac.enabled .Values.rbac.psp }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.autorecovery.component }}"
namespace: {{ template "pulsar.namespace" . }}
rules:
- apiGroups:
- policy
resourceNames:
- "{{ template "pulsar.fullname" . }}-{{ .Values.autorecovery.component }}"
resources:
- podsecuritypolicies
verbs:
- use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.autorecovery.component }}"
namespace: {{ template "pulsar.namespace" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: "{{ template "pulsar.fullname" . }}-{{ .Values.autorecovery.component }}"
subjects:
- kind: ServiceAccount
name: "{{ template "pulsar.fullname" . }}-{{ .Values.autorecovery.component }}"
namespace: {{ template "pulsar.namespace" . }}
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
{{- if .Values.rbac.limit_to_namespace }}
name: "{{ template "pulsar.fullname" . }}-{{ .Values.autorecovery.component }}-{{ template "pulsar.namespace" . }}"
{{- else}}
name: "{{ template "pulsar.fullname" . }}-{{ .Values.autorecovery.component }}"
{{- end}}
spec:
readOnlyRootFilesystem: false
privileged: false
allowPrivilegeEscalation: false
runAsUser:
rule: 'RunAsAny'
supplementalGroups:
ranges:
- max: 65535
min: 1
rule: MustRunAs
fsGroup:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
seLinux:
rule: 'RunAsAny'
volumes:
- configMap
- emptyDir
- projected
- secret
- downwardAPI
- persistentVolumeClaim
{{- end }}

View File

@ -26,6 +26,10 @@ metadata:
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.autorecovery.component }}
{{- with .Values.autorecovery.service.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
ports:
- name: http

View File

@ -23,6 +23,7 @@ kind: StatefulSet
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.autorecovery.component }}"
namespace: {{ template "pulsar.namespace" . }}
annotations: {{ .Values.autorecovery.appAnnotations | toYaml | nindent 4 }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.autorecovery.component }}
@ -43,8 +44,10 @@ spec:
{{- include "pulsar.template.labels" . | nindent 8 }}
component: {{ .Values.autorecovery.component }}
annotations:
{{- if not .Values.autorecovery.podMonitor.enabled }}
prometheus.io/scrape: "true"
prometheus.io/port: "{{ .Values.autorecovery.ports.http }}"
{{- end }}
{{- if .Values.autorecovery.restartPodsOnConfigMapChange }}
checksum/config: {{ include (print $.Template.BasePath "/autorecovery-configmap.yaml") . | sha256sum }}
{{- end }}
@ -110,6 +113,18 @@ spec:
terminationGracePeriodSeconds: {{ .Values.autorecovery.gracePeriod }}
serviceAccountName: "{{ template "pulsar.fullname" . }}-{{ .Values.autorecovery.component }}"
initContainers:
{{- if .Values.tls.autorecovery.cacerts.enabled }}
- name: cacerts
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.autorecovery "root" .) }}"
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.autorecovery "root" .) }}"
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["sh", "-c"]
args:
- |
bin/certs-combine-pem.sh /pulsar/certs/cacerts/ca-combined.pem {{ template "pulsar.certs.cacerts" (dict "certs" .Values.tls.autorecovery.cacerts.certs) }}
volumeMounts:
{{- include "pulsar.autorecovery.certs.volumeMounts" . | nindent 8 }}
{{- end }}
{{- if and .Values.autorecovery.waitBookkeeperTimeout (gt (.Values.autorecovery.waitBookkeeperTimeout | int) 0) }}
# This initContainer will wait for bookkeeper initnewcluster to complete
# before starting the autorecovery daemon
@ -119,12 +134,15 @@ spec:
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["timeout", "{{ .Values.autorecovery.waitBookkeeperTimeout }}", "sh", "-c"]
args:
- >
- |
{{- include "pulsar.autorecovery.init.verify_cluster_id" . | nindent 10 }}
envFrom:
- configMapRef:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.autorecovery.component }}"
volumeMounts:
{{- if .Values.autorecovery.extraVolumeMounts }}
{{ toYaml .Values.autorecovery.extraVolumeMounts | indent 8 }}
{{- end }}
{{- include "pulsar.autorecovery.certs.volumeMounts" . | nindent 8 }}
{{- end }}
{{- if .Values.autorecovery.initContainers }}
@ -138,13 +156,14 @@ spec:
resources:
{{ toYaml .Values.autorecovery.resources | indent 10 }}
{{- end }}
{{- if and (semverCompare "<1.25-0" .Capabilities.KubeVersion.Version) .Values.rbac.enabled .Values.rbac.psp }}
securityContext:
readOnlyRootFilesystem: false
{{- end}}
command: ["sh", "-c"]
args:
- >
- |
{{- if .Values.tls.autorecovery.cacerts.enabled }}
cd /pulsar/certs/cacerts;
nohup /pulsar/bin/certs-combine-pem-infinity.sh /pulsar/certs/cacerts/ca-combined.pem {{ template "pulsar.certs.cacerts" (dict "certs" .Values.tls.autorecovery.cacerts.certs) }} > /pulsar/certs/cacerts/certs-combine-pem-infinity.log 2>&1 &
cd /pulsar;
{{- end }}
bin/apply-config-from-env.py conf/bookkeeper.conf;
{{- include "pulsar.autorecovery.zookeeper.tls.settings" . | nindent 10 }}
OPTS="${OPTS} -Dlog4j2.formatMsgNoLookups=true" exec bin/bookkeeper autorecovery
@ -158,6 +177,9 @@ spec:
{{- include "pulsar.autorecovery.certs.volumeMounts" . | nindent 8 }}
volumes:
{{- include "pulsar.autorecovery.certs.volumes" . | nindent 6 }}
{{- if .Values.autorecovery.extraVolumes }}
{{ toYaml .Values.autorecovery.extraVolumes | indent 6 }}
{{- end }}
{{- include "pulsar.imagePullSecrets" . | nindent 6}}
{{- end }}
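The recurring change from "- >" to "- |" in args matters for the multi-line shell emitted by the template includes: a folded scalar (">") joins lines with spaces, while a literal scalar ("|") preserves newlines. A minimal YAML sketch:

  args:
    - >
      echo a;
      echo b;
  # folds to one line: "echo a; echo b;\n"

  args:
    - |
      echo a;
      echo b;
  # preserves newlines: "echo a;\necho b;\n"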

View File

@ -33,10 +33,14 @@ spec:
ttlSecondsAfterFinished: {{ .Values.job.ttl.secondsAfterFinished | default 600 }}
{{- end }}
template:
metadata:
labels:
{{- include "pulsar.template.labels" . | nindent 8 }}
component: {{ .Values.bookkeeper.component }}-init
spec:
{{- include "pulsar.imagePullSecrets" . | nindent 6 }}
serviceAccountName: "{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}"
{{- with .Values.pulsar_metadata.nodeSelector }}
{{- if .Values.pulsar_metadata.nodeSelector }}
nodeSelector:
{{ toYaml .Values.pulsar_metadata.nodeSelector | indent 8 }}
{{- end }}
@ -45,6 +49,18 @@ spec:
{{ toYaml .Values.pulsar_metadata.tolerations | indent 8 }}
{{- end }}
initContainers:
{{- if .Values.tls.bookie.cacerts.enabled }}
- name: cacerts
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.bookie "root" .) }}"
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.bookie "root" .) }}"
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["sh", "-c"]
args:
- |
bin/certs-combine-pem.sh /pulsar/certs/cacerts/ca-combined.pem {{ template "pulsar.certs.cacerts" (dict "certs" .Values.tls.bookie.cacerts.certs) }}
volumeMounts:
{{- include "pulsar.toolset.certs.volumeMounts" . | nindent 8 }}
{{- end }}
{{- if and .Values.components.zookeeper .Values.bookkeeper.metadata.waitZookeeperTimeout (gt (.Values.bookkeeper.metadata.waitZookeeperTimeout | int) 0) }}
- name: wait-zookeeper-ready
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.bookie "root" .) }}"
@ -52,7 +68,7 @@ spec:
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["timeout", "{{ .Values.bookkeeper.metadata.waitZookeeperTimeout }}", "sh", "-c"]
args:
- >-
- |
{{- if $zk:=.Values.pulsar_metadata.userProvidedZookeepers }}
export PULSAR_MEM="-Xmx128M";
until timeout 15 bin/pulsar zookeeper-shell -server {{ $zk }} ls {{ or .Values.metadataPrefix "/" }}; do
@ -71,7 +87,7 @@ spec:
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["timeout", "{{ .Values.bookkeeper.metadata.waitOxiaTimeout }}", "sh", "-c"]
args:
- >-
- |
until nslookup {{ template "pulsar.oxia.server.service" . }}; do
sleep 3;
done;
@ -86,7 +102,7 @@ spec:
{{- end }}
command: ["timeout", "{{ .Values.bookkeeper.metadata.initTimeout | default 60 }}", "sh", "-c"]
args:
- >
- |
bin/apply-config-from-env.py conf/bookkeeper.conf;
{{- include "pulsar.toolset.zookeeper.tls.settings" . | nindent 12 }}
export BOOKIE_MEM="-Xmx128M";
@ -101,10 +117,6 @@ spec:
{{- if .Values.extraInitCommand }}
{{ .Values.extraInitCommand }}
{{- end }}
{{- if and (semverCompare "<1.25-0" .Capabilities.KubeVersion.Version) .Values.rbac.enabled .Values.rbac.psp }}
securityContext:
readOnlyRootFilesystem: false
{{- end }}
envFrom:
- configMapRef:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}"

View File

@ -19,40 +19,5 @@
# deploy bookkeeper PodMonitor only when `$.Values.bookkeeper.podMonitor.enabled` is true
{{- if $.Values.bookkeeper.podMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: {{ template "pulsar.fullname" . }}-bookie
labels:
app: {{ template "pulsar.name" . }}
chart: {{ template "pulsar.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
jobLabel: bookie
podMetricsEndpoints:
- port: http
path: /metrics
scheme: http
interval: {{ $.Values.bookkeeper.podMonitor.interval }}
scrapeTimeout: {{ $.Values.bookkeeper.podMonitor.scrapeTimeout }}
relabelings:
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- sourceLabels: [__meta_kubernetes_namespace]
action: replace
targetLabel: kubernetes_namespace
- sourceLabels: [__meta_kubernetes_pod_label_component]
action: replace
targetLabel: job
- sourceLabels: [__meta_kubernetes_pod_name]
action: replace
targetLabel: kubernetes_pod_name
{{- if $.Values.bookkeeper.podMonitor.metricRelabelings }}
metricRelabelings: {{ toYaml $.Values.bookkeeper.podMonitor.metricRelabelings | nindent 8 }}
{{- end }}
selector:
matchLabels:
{{- include "pulsar.matchLabels" . | nindent 6 }}
component: bookie
{{- end }}
{{- include "pulsar.podMonitor" (list . "bookkeeper" (printf "component: %s" .Values.bookkeeper.component)) }}
{{- end }}

View File

@ -1,85 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
{{- if and (semverCompare "<1.25-0" .Capabilities.KubeVersion.Version) .Values.rbac.enabled .Values.rbac.psp }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}"
namespace: {{ template "pulsar.namespace" . }}
rules:
- apiGroups:
- policy
resourceNames:
- "{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}"
resources:
- podsecuritypolicies
verbs:
- use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}"
namespace: {{ template "pulsar.namespace" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: "{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}"
subjects:
- kind: ServiceAccount
name: "{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}"
namespace: {{ template "pulsar.namespace" . }}
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
{{- if .Values.rbac.limit_to_namespace }}
name: "{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}-{{ template "pulsar.namespace" . }}"
{{- else}}
name: "{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}"
{{- end}}
spec:
readOnlyRootFilesystem: false
privileged: false
allowPrivilegeEscalation: false
runAsUser:
rule: 'RunAsAny'
supplementalGroups:
ranges:
- max: 65535
min: 1
rule: MustRunAs
fsGroup:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
seLinux:
rule: 'RunAsAny'
volumes:
- configMap
- emptyDir
- projected
- secret
- downwardAPI
- persistentVolumeClaim
{{- end}}

View File

@ -26,9 +26,9 @@ metadata:
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.bookkeeper.component }}
{{- if .Values.bookkeeper.service.annotations }}
{{- with .Values.bookkeeper.service.annotations }}
annotations:
{{ toYaml .Values.bookkeeper.service.annotations | indent 4 }}
{{ toYaml . | indent 4 }}
{{- end }}
spec:
ports:

View File

@ -23,6 +23,7 @@ kind: StatefulSet
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}"
namespace: {{ template "pulsar.namespace" . }}
annotations: {{ .Values.bookkeeper.appAnnotations | toYaml | nindent 4 }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.bookkeeper.component }}
@ -42,8 +43,10 @@ spec:
{{- include "pulsar.template.labels" . | nindent 8 }}
component: {{ .Values.bookkeeper.component }}
annotations:
{{- if not .Values.bookkeeper.podMonitor.enabled }}
prometheus.io/scrape: "true"
prometheus.io/port: "{{ .Values.bookkeeper.ports.http }}"
{{- end }}
{{- if .Values.bookkeeper.restartPodsOnConfigMapChange }}
checksum/config: {{ include (print $.Template.BasePath "/bookkeeper-configmap.yaml") . | sha256sum }}
{{- end }}
@ -112,6 +115,18 @@ spec:
{{- end }}
{{- if and .Values.bookkeeper.waitMetadataTimeout (gt (.Values.bookkeeper.waitMetadataTimeout | int) 0) }}
initContainers:
{{- if .Values.tls.bookie.cacerts.enabled }}
- name: cacerts
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.bookie "root" .) }}"
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.bookie "root" .) }}"
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["sh", "-c"]
args:
- |
bin/certs-combine-pem.sh /pulsar/certs/cacerts/ca-combined.pem {{ template "pulsar.certs.cacerts" (dict "certs" .Values.tls.bookie.cacerts.certs) }}
volumeMounts:
{{- include "pulsar.bookkeeper.certs.volumeMounts" . | nindent 8 }}
{{- end }}
# This initContainer will wait for bookkeeper initnewcluster to complete
# before deploying the bookies
- name: pulsar-bookkeeper-verify-clusterid
@ -121,15 +136,11 @@ spec:
command: ["timeout", "{{ .Values.bookkeeper.waitMetadataTimeout }}", "sh", "-c"]
args:
# only reformat bookie if bookkeeper is running without persistence
- >
- |
{{- include "pulsar.bookkeeper.init.verify_cluster_id" . | nindent 10 }}
envFrom:
- configMapRef:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}"
{{- if and (semverCompare "<1.25-0" .Capabilities.KubeVersion.Version) .Values.rbac.enabled .Values.rbac.psp }}
securityContext:
readOnlyRootFilesystem: false
{{- end}}
volumeMounts:
{{- include "pulsar.bookkeeper.certs.volumeMounts" . | nindent 8 }}
{{- end }}
@ -176,17 +187,34 @@ spec:
{{- end }}
command: ["sh", "-c"]
args:
- >
{{- if .Values.bookkeeper.additionalCommand }}
- |
# set required environment variables to use rocksdb config files provided in the Pulsar image
export PULSAR_PREFIX_defaultRocksdbConf=${PULSAR_PREFIX_defaultRocksdbConf:-conf/default_rocksdb.conf}
export PULSAR_PREFIX_entryLocationRocksdbConf=${PULSAR_PREFIX_entryLocationRocksdbConf:-conf/entry_location_rocksdb.conf}
export PULSAR_PREFIX_ledgerMetadataRocksdbConf=${PULSAR_PREFIX_ledgerMetadataRocksdbConf:-conf/ledger_metadata_rocksdb.conf}
if [ -x bin/update-rocksdb-conf-from-env.py ] && [ -f "${PULSAR_PREFIX_entryLocationRocksdbConf}" ]; then
echo "Updating ${PULSAR_PREFIX_entryLocationRocksdbConf} from environment variables starting with dbStorage_rocksDB_*"
bin/update-rocksdb-conf-from-env.py "${PULSAR_PREFIX_entryLocationRocksdbConf}"
else
# Ensure that Bookkeeper will not load RocksDB config from existing files and will fall back to the default RocksDB config
# See https://github.com/apache/bookkeeper/pull/3523 as reference
export PULSAR_PREFIX_defaultRocksdbConf=conf/non_existing_default_rocksdb.conf
export PULSAR_PREFIX_entryLocationRocksdbConf=conf/non_existing_entry_location_rocksdb.conf
export PULSAR_PREFIX_ledgerMetadataRocksdbConf=conf/non_existing_ledger_metadata_rocksdb.conf
# Ensure that Bookkeeper will use RocksDB format_version 5 (this currently applies only to the entry location rocksdb due to a bug in Bookkeeper)
export PULSAR_PREFIX_dbStorage_rocksDB_format_version=${PULSAR_PREFIX_dbStorage_rocksDB_format_version:-5}
fi
{{- if .Values.bookkeeper.additionalCommand }}
{{ .Values.bookkeeper.additionalCommand }}
{{- end }}
{{- end }}
{{- if .Values.tls.bookie.cacerts.enabled }}
cd /pulsar/certs/cacerts;
nohup /pulsar/bin/certs-combine-pem-infinity.sh /pulsar/certs/cacerts/ca-combined.pem {{ template "pulsar.certs.cacerts" (dict "certs" .Values.tls.bookie.cacerts.certs) }} > /pulsar/certs/cacerts/certs-combine-pem-infinity.log 2>&1 &
cd /pulsar;
{{- end }}
bin/apply-config-from-env.py conf/bookkeeper.conf;
{{- include "pulsar.bookkeeper.zookeeper.tls.settings" . | nindent 10 }}
OPTS="${OPTS} -Dlog4j2.formatMsgNoLookups=true" exec bin/pulsar bookie;
{{- if and (semverCompare "<1.25-0" .Capabilities.KubeVersion.Version) .Values.rbac.enabled .Values.rbac.psp }}
securityContext:
readOnlyRootFilesystem: false
{{- end}}
ports:
- name: "{{ .Values.tcpPrefix }}bookie"
containerPort: {{ .Values.bookkeeper.ports.bookie }}
@ -235,10 +263,10 @@ spec:
emptyDir: {}
{{- end }}
{{- include "pulsar.bookkeeper.certs.volumes" . | nindent 6 }}
{{- include "pulsar.imagePullSecrets" . | nindent 6}}
{{- if .Values.bookkeeper.extraVolumes }}
{{ toYaml .Values.bookkeeper.extraVolumes | indent 6 }}
{{- end }}
{{- include "pulsar.imagePullSecrets" . | nindent 6}}
{{- if and (and .Values.persistence .Values.volumes.persistence) .Values.bookkeeper.volumes.persistence}}
volumeClaimTemplates:
{{- if .Values.bookkeeper.volumes.useSingleCommonVolume }}

View File

@ -29,12 +29,18 @@ metadata:
data:
# Metadata settings
{{- if .Values.components.zookeeper }}
zookeeperServers: "{{ template "pulsar.zookeeper.connect" . }}{{ .Values.metadataPrefix }}"
metadataStoreUrl: "zk:{{ template "pulsar.zookeeper.connect" . }}{{ .Values.metadataPrefix }}"
{{- $configMetadataStoreUrl := "" }}
{{- if .Values.pulsar_metadata.configurationStore }}
configurationStoreServers: "{{ template "pulsar.configurationStore.connect" . }}{{ .Values.pulsar_metadata.configurationStoreMetadataPrefix }}"
{{- $configMetadataStoreUrl = printf "zk:%s%s" (include "pulsar.configurationStore.connect" .) .Values.pulsar_metadata.configurationStoreMetadataPrefix }}
{{- else }}
{{- $configMetadataStoreUrl = printf "zk:%s%s" (include "pulsar.zookeeper.connect" .) .Values.metadataPrefix }}
{{- end }}
{{- if not .Values.pulsar_metadata.configurationStore }}
configurationStoreServers: "{{ template "pulsar.zookeeper.connect" . }}{{ .Values.metadataPrefix }}"
configurationMetadataStoreUrl: "{{ $configMetadataStoreUrl }}"
{{- if .Values.pulsar_metadata.bookkeeper.usePulsarMetadataClientDriver }}
bookkeeperMetadataServiceUri: "metadata-store:{{ $configMetadataStoreUrl }}/ledgers"
{{- else }}
bookkeeperMetadataServiceUri: "zk+hierarchical://{{ template "pulsar.zookeeper.connect" . }}{{ .Values.metadataPrefix }}/ledgers"
{{- end }}
{{- end }}
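With zookeeper enabled and no separate configuration store, the computed entries render along these lines (service name and prefix are illustrative):

  metadataStoreUrl: "zk:pulsar-zookeeper:2181/pulsar"
  configurationMetadataStoreUrl: "zk:pulsar-zookeeper:2181/pulsar"
  # default bookkeeper metadata driver:
  bookkeeperMetadataServiceUri: "zk+hierarchical://pulsar-zookeeper:2181/pulsar/ledgers"
  # with pulsar_metadata.bookkeeper.usePulsarMetadataClientDriver set to true:
  bookkeeperMetadataServiceUri: "metadata-store:zk:pulsar-zookeeper:2181/pulsar/ledgers"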
{{- if .Values.components.oxia }}
@ -43,11 +49,49 @@ data:
bookkeeperMetadataServiceUri: "{{ template "pulsar.oxia.metadata.url.bookkeeper" . }}"
{{- end }}
{{- if hasKey .Values.pulsar_metadata "metadataStoreAllowReadOnlyOperations" }}
PULSAR_PREFIX_metadataStoreAllowReadOnlyOperations: "{{ .Values.pulsar_metadata.metadataStoreAllowReadOnlyOperations }}"
{{- end }}
{{- if hasKey .Values.pulsar_metadata "metadataStoreSessionTimeoutMillis" }}
metadataStoreSessionTimeoutMillis: "{{ .Values.pulsar_metadata.metadataStoreSessionTimeoutMillis }}"
{{- end }}
{{- if hasKey .Values.pulsar_metadata "metadataStoreOperationTimeoutSeconds" }}
metadataStoreOperationTimeoutSeconds: "{{ .Values.pulsar_metadata.metadataStoreOperationTimeoutSeconds }}"
{{- end }}
{{- if hasKey .Values.pulsar_metadata "metadataStoreCacheExpirySeconds" }}
metadataStoreCacheExpirySeconds: "{{ .Values.pulsar_metadata.metadataStoreCacheExpirySeconds }}"
{{- end }}
{{- if hasKey .Values.pulsar_metadata "metadataStoreBatchingEnabled" }}
metadataStoreBatchingEnabled: "{{ .Values.pulsar_metadata.metadataStoreBatchingEnabled }}"
{{- end }}
{{- if hasKey .Values.pulsar_metadata "metadataStoreBatchingMaxDelayMillis" }}
metadataStoreBatchingMaxDelayMillis: "{{ .Values.pulsar_metadata.metadataStoreBatchingMaxDelayMillis }}"
{{- end }}
{{- if hasKey .Values.pulsar_metadata "metadataStoreBatchingMaxOperations" }}
metadataStoreBatchingMaxOperations: "{{ .Values.pulsar_metadata.metadataStoreBatchingMaxOperations }}"
{{- end }}
{{- if hasKey .Values.pulsar_metadata "metadataStoreBatchingMaxSizeKb" }}
metadataStoreBatchingMaxSizeKb: "{{ .Values.pulsar_metadata.metadataStoreBatchingMaxSizeKb }}"
{{- end }}
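Since each of these keys is emitted only when present in values (hasKey), unset keys fall back to Pulsar's built-in defaults. An illustrative tuning block; the numbers are examples, not recommendations:

  pulsar_metadata:
    metadataStoreSessionTimeoutMillis: 30000
    metadataStoreOperationTimeoutSeconds: 30
    metadataStoreBatchingEnabled: true
    metadataStoreBatchingMaxDelayMillis: 5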
# Broker settings
clusterName: {{ template "pulsar.cluster.name" . }}
# Enable all metrics by default
exposeTopicLevelMetricsInPrometheus: "true"
exposeConsumerLevelMetricsInPrometheus: "true"
exposeProducerLevelMetricsInPrometheus: "true"
exposeManagedLedgerMetricsInPrometheus: "true"
exposeManagedCursorMetricsInPrometheus: "true"
exposeBundlesMetricsInPrometheus: "true"
exposePublisherStats: "true"
exposePreciseBacklogInPrometheus: "true"
replicationMetricsEnabled: "true"
splitTopicAndPartitionLabelInPrometheus: "true"
aggregatePublisherStatsByProducerName: "true"
bookkeeperClientExposeStatsToPrometheus: "true"
numHttpServerThreads: "8"
zooKeeperSessionTimeoutMillis: "30000"
statusFilePath: "{{ template "pulsar.home" . }}/logs/status"
# Tiered storage settings
@ -160,7 +204,7 @@ data:
# TLS Settings
tlsCertificateFilePath: "/pulsar/certs/broker/tls.crt"
tlsKeyFilePath: "/pulsar/certs/broker/tls.key"
tlsTrustCertsFilePath: "/pulsar/certs/ca/ca.crt"
tlsTrustCertsFilePath: {{ ternary "/pulsar/certs/cacerts/ca-combined.pem" "/pulsar/certs/ca/ca.crt" .Values.tls.broker.cacerts.enabled | quote }}
{{- end }}
# Authentication Settings
@ -173,9 +217,14 @@ data:
proxyRoles: {{ .Values.auth.superUsers.proxy }}
{{- end }}
{{- end }}
{{- if eq .Values.auth.authentication.provider "jwt" }}
{{- if and .Values.auth.authentication.enabled .Values.auth.authentication.jwt.enabled }}
# token authentication configuration
{{- if and .Values.auth.authentication.enabled .Values.auth.authentication.jwt.enabled .Values.auth.authentication.openid.enabled }}
authenticationProviders: "org.apache.pulsar.broker.authentication.AuthenticationProviderToken,org.apache.pulsar.broker.authentication.oidc.AuthenticationProviderOpenID"
{{- end }}
{{- if and .Values.auth.authentication.enabled .Values.auth.authentication.jwt.enabled ( not .Values.auth.authentication.openid.enabled ) }}
authenticationProviders: "org.apache.pulsar.broker.authentication.AuthenticationProviderToken"
{{- end }}
brokerClientAuthenticationParameters: "file:///pulsar/tokens/broker/token"
brokerClientAuthenticationPlugin: "org.apache.pulsar.client.impl.auth.AuthenticationToken"
{{- if .Values.auth.authentication.jwt.usingSecretKey }}
@ -184,6 +233,25 @@ data:
tokenPublicKey: "file:///pulsar/keys/token/public.key"
{{- end }}
{{- end }}
{{- if and .Values.auth.authentication.enabled .Values.auth.authentication.openid.enabled }}
# openid authentication configuration
{{- if and .Values.auth.authentication.enabled .Values.auth.authentication.openid.enabled ( not .Values.auth.authentication.jwt.enabled ) }}
authenticationProviders: "org.apache.pulsar.broker.authentication.oidc.AuthenticationProviderOpenID"
{{- end }}
PULSAR_PREFIX_openIDAllowedTokenIssuers: {{ .Values.auth.authentication.openid.openIDAllowedTokenIssuers | uniq | compact | sortAlpha | join "," | quote }}
PULSAR_PREFIX_openIDAllowedAudiences: {{ .Values.auth.authentication.openid.openIDAllowedAudiences | uniq | compact | sortAlpha | join "," | quote }}
PULSAR_PREFIX_openIDTokenIssuerTrustCertsFilePath: {{ .Values.auth.authentication.openid.openIDTokenIssuerTrustCertsFilePath | quote }}
PULSAR_PREFIX_openIDRoleClaim: {{ .Values.auth.authentication.openid.openIDRoleClaim | quote }}
PULSAR_PREFIX_openIDAcceptedTimeLeewaySeconds: {{ .Values.auth.authentication.openid.openIDAcceptedTimeLeewaySeconds | quote }}
PULSAR_PREFIX_openIDCacheSize: {{ .Values.auth.authentication.openid.openIDCacheSize | quote }}
PULSAR_PREFIX_openIDCacheRefreshAfterWriteSeconds: {{ .Values.auth.authentication.openid.openIDCacheRefreshAfterWriteSeconds | quote }}
PULSAR_PREFIX_openIDCacheExpirationSeconds: {{ .Values.auth.authentication.openid.openIDCacheExpirationSeconds | quote }}
PULSAR_PREFIX_openIDHttpConnectionTimeoutMillis: {{ .Values.auth.authentication.openid.openIDHttpConnectionTimeoutMillis | quote }}
PULSAR_PREFIX_openIDHttpReadTimeoutMillis: {{ .Values.auth.authentication.openid.openIDHttpReadTimeoutMillis | quote }}
PULSAR_PREFIX_openIDKeyIdCacheMissRefreshSeconds: {{ .Values.auth.authentication.openid.openIDKeyIdCacheMissRefreshSeconds | quote }}
PULSAR_PREFIX_openIDRequireIssuersUseHttps: {{ .Values.auth.authentication.openid.openIDRequireIssuersUseHttps | quote }}
PULSAR_PREFIX_openIDFallbackDiscoveryMode: {{ .Values.auth.authentication.openid.openIDFallbackDiscoveryMode | quote }}
{{- end }}
{{- end }}
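A sketch of the values that drive this block; the issuer and audience are hypothetical. Per the conditions above, enabling both jwt.enabled and openid.enabled yields the combined two-provider authenticationProviders list:

  auth:
    authentication:
      enabled: true
      openid:
        enabled: true
        openIDAllowedTokenIssuers:
          - "https://keycloak.example.com/realms/pulsar"   # hypothetical issuer
        openIDAllowedAudiences:
          - "pulsar-client"                                # hypothetical audience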
{{- if and .Values.tls.enabled .Values.tls.bookie.enabled }}
@ -192,13 +260,13 @@ data:
bookkeeperTLSKeyFileType: "PEM"
bookkeeperTLSKeyFilePath: "/pulsar/certs/broker/tls.key"
bookkeeperTLSCertificateFilePath: "/pulsar/certs/broker/tls.crt"
bookkeeperTLSTrustCertsFilePath: "/pulsar/certs/ca/ca.crt"
bookkeeperTLSTrustCertsFilePath: {{ ternary "/pulsar/certs/cacerts/ca-combined.pem" "/pulsar/certs/ca/ca.crt" .Values.tls.broker.cacerts.enabled | quote }}
bookkeeperTLSTrustCertTypes: "PEM"
PULSAR_PREFIX_bookkeeperTLSClientAuthentication: "true"
PULSAR_PREFIX_bookkeeperTLSKeyFileType: "PEM"
PULSAR_PREFIX_bookkeeperTLSKeyFilePath: "/pulsar/certs/broker/tls.key"
PULSAR_PREFIX_bookkeeperTLSCertificateFilePath: "/pulsar/certs/broker/tls.crt"
PULSAR_PREFIX_bookkeeperTLSTrustCertsFilePath: "/pulsar/certs/ca/ca.crt"
PULSAR_PREFIX_bookkeeperTLSTrustCertsFilePath: {{ ternary "/pulsar/certs/cacerts/ca-combined.pem" "/pulsar/certs/ca/ca.crt" .Values.tls.broker.cacerts.enabled | quote }}
PULSAR_PREFIX_bookkeeperTLSTrustCertTypes: "PEM"
# https://github.com/apache/bookkeeper/pull/2300
bookkeeperUseV2WireProtocol: "false"

View File

@ -19,40 +19,5 @@
# deploy broker PodMonitor only when `$.Values.broker.podMonitor.enabled` is true
{{- if $.Values.broker.podMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: {{ template "pulsar.fullname" . }}-broker
labels:
app: {{ template "pulsar.name" . }}
chart: {{ template "pulsar.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
jobLabel: broker
podMetricsEndpoints:
- port: http
path: /metrics
scheme: http
interval: {{ $.Values.broker.podMonitor.interval }}
scrapeTimeout: {{ $.Values.broker.podMonitor.scrapeTimeout }}
relabelings:
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- sourceLabels: [__meta_kubernetes_namespace]
action: replace
targetLabel: kubernetes_namespace
- sourceLabels: [__meta_kubernetes_pod_label_component]
action: replace
targetLabel: job
- sourceLabels: [__meta_kubernetes_pod_name]
action: replace
targetLabel: kubernetes_pod_name
{{- if $.Values.broker.podMonitor.metricRelabelings }}
metricRelabelings: {{ toYaml $.Values.broker.podMonitor.metricRelabelings | nindent 8 }}
{{- end }}
selector:
matchLabels:
{{- include "pulsar.matchLabels" . | nindent 6 }}
component: broker
{{- end }}
{{- include "pulsar.podMonitor" (list . "broker" (printf "component: %s" .Values.broker.component)) }}
{{- end }}

View File

@ -1,85 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
{{- if and (semverCompare "<1.25-0" .Capabilities.KubeVersion.Version) .Values.rbac.enabled .Values.rbac.psp }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}-psp"
namespace: {{ template "pulsar.namespace" . }}
rules:
- apiGroups:
- policy
resourceNames:
- "{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}"
resources:
- podsecuritypolicies
verbs:
- use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}-psp"
namespace: {{ template "pulsar.namespace" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: "{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}-psp"
subjects:
- kind: ServiceAccount
name: "{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}-acct"
namespace: {{ template "pulsar.namespace" . }}
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
{{- if .Values.rbac.limit_to_namespace }}
name: "{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}-{{ template "pulsar.namespace" . }}"
{{- else}}
name: "{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}"
{{- end}}
spec:
readOnlyRootFilesystem: false
privileged: false
allowPrivilegeEscalation: false
runAsUser:
rule: 'RunAsAny'
supplementalGroups:
ranges:
- max: 65535
min: 1
rule: MustRunAs
fsGroup:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
seLinux:
rule: 'RunAsAny'
volumes:
- configMap
- emptyDir
- projected
- secret
- downwardAPI
- persistentVolumeClaim
{{- end}}

View File

@ -26,7 +26,7 @@ metadata:
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.broker.component }}
{{- with .Values.broker.service_account.annotations }}
{{- with .Values.broker.service.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}

View File

@ -25,6 +25,7 @@ metadata:
name: {{ $stsName | quote }}
{{- $namespace := include "pulsar.namespace" . }}
namespace: {{ $namespace | quote }}
annotations: {{ .Values.broker.appAnnotations | toYaml | nindent 4 }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.broker.component }}
@ -62,8 +63,10 @@ spec:
{{- include "pulsar.template.labels" . | nindent 8 }}
component: {{ .Values.broker.component }}
annotations:
{{- if not .Values.broker.podMonitor.enabled }}
prometheus.io/scrape: "true"
prometheus.io/port: "{{ .Values.broker.ports.http }}"
{{- end }}
{{- if .Values.broker.restartPodsOnConfigMapChange }}
checksum/config: {{ include (print $.Template.BasePath "/broker-configmap.yaml") . | sha256sum }}
{{- end }}
@ -127,6 +130,18 @@ spec:
{{- end }}
terminationGracePeriodSeconds: {{ .Values.broker.gracePeriod }}
initContainers:
{{- if .Values.tls.broker.cacerts.enabled }}
- name: cacerts
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.broker "root" .) }}"
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.broker "root" .) }}"
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["sh", "-c"]
args:
- |
bin/certs-combine-pem.sh /pulsar/certs/cacerts/ca-combined.pem {{ template "pulsar.certs.cacerts" (dict "certs" .Values.tls.broker.cacerts.certs) }}
volumeMounts:
{{- include "pulsar.broker.certs.volumeMounts" . | nindent 8 }}
{{- end }}
{{- if and .Values.components.zookeeper .Values.broker.waitZookeeperTimeout (gt (.Values.broker.waitZookeeperTimeout | int) 0) }}
# This init container will wait for zookeeper to be ready before
# deploying the broker
@ -136,21 +151,17 @@ spec:
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["timeout", "{{ .Values.broker.waitZookeeperTimeout }}", "sh", "-c"]
args:
- >-
- |
{{- include "pulsar.broker.zookeeper.tls.settings" . | nindent 12 }}
export BOOKIE_MEM="-Xmx128M";
export PULSAR_MEM="-Xmx128M";
{{- if .Values.pulsar_metadata.configurationStore }}
until timeout 15 bin/pulsar zookeeper-shell -server {{ template "pulsar.configurationStore.connect" . }} get {{ .Values.configurationStoreMetadataPrefix }}/admin/clusters/{{ template "pulsar.cluster.name" . }}; do
until timeout 15 bin/pulsar zookeeper-shell -server {{ template "pulsar.configurationStore.connect" . }} get {{ .Values.pulsar_metadata.configurationStoreMetadataPrefix }}/admin/clusters/{{ template "pulsar.cluster.name" . }}; do
{{- end }}
{{- if not .Values.pulsar_metadata.configurationStore }}
until timeout 15 bin/pulsar zookeeper-shell -server {{ template "pulsar.zookeeper.connect" . }} get {{ .Values.metadataPrefix }}/admin/clusters/{{ template "pulsar.cluster.name" . }}; do
{{- end }}
echo "pulsar cluster {{ template "pulsar.cluster.name" . }} isn't initialized yet ... check in 3 seconds ..." && sleep 3;
done;
{{- if and (semverCompare "<1.25-0" .Capabilities.KubeVersion.Version) .Values.rbac.enabled .Values.rbac.psp }}
securityContext:
readOnlyRootFilesystem: false
{{- end }}
volumeMounts:
{{- include "pulsar.broker.certs.volumeMounts" . | nindent 8 }}
{{- end }}
@ -161,7 +172,7 @@ spec:
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["timeout", "{{ .Values.broker.waitOxiaTimeout }}", "sh", "-c"]
args:
- >-
- |
until nslookup {{ template "pulsar.oxia.server.service" . }}; do
sleep 3;
done;
@ -175,7 +186,7 @@ spec:
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["timeout", "{{ .Values.broker.waitBookkeeperTimeout }}", "sh", "-c"]
args:
- >
- |
{{- include "pulsar.broker.zookeeper.tls.settings" . | nindent 12 }}
bin/apply-config-from-env.py conf/bookkeeper.conf;
export BOOKIE_MEM="-Xmx128M";
@ -194,10 +205,6 @@ spec:
envFrom:
- configMapRef:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}"
{{- if and (semverCompare "<1.25-0" .Capabilities.KubeVersion.Version) .Values.rbac.enabled .Values.rbac.psp }}
securityContext:
readOnlyRootFilesystem: false
{{- end }}
volumeMounts:
{{- include "pulsar.broker.certs.volumeMounts" . | nindent 10 }}
{{- end }}
@ -244,10 +251,15 @@ spec:
{{- end }}
command: ["sh", "-c"]
args:
- >
- |
{{- if .Values.broker.additionalCommand }}
{{ .Values.broker.additionalCommand }}
{{- end }}
{{- if .Values.tls.broker.cacerts.enabled }}
cd /pulsar/certs/cacerts;
nohup /pulsar/bin/certs-combine-pem-infinity.sh /pulsar/certs/cacerts/ca-combined.pem {{ template "pulsar.certs.cacerts" (dict "certs" .Values.tls.broker.cacerts.certs) }} > /pulsar/certs/cacerts/certs-combine-pem-infinity.log 2>&1 &
cd /pulsar;
{{- end }}
bin/apply-config-from-env.py conf/broker.conf;
bin/gen-yml-from-env.py conf/functions_worker.yml;
echo "OK" > "${statusFilePath:-status}";
@ -281,7 +293,7 @@ spec:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}"
volumeMounts:
{{- if .Values.auth.authentication.enabled }}
{{- if eq .Values.auth.authentication.provider "jwt" }}
{{- if .Values.auth.authentication.jwt.enabled }}
- mountPath: "/pulsar/keys"
name: token-keys
readOnly: true
@ -301,10 +313,6 @@ spec:
{{ toYaml .Values.broker.extraVolumeMounts | indent 10 }}
{{- end }}
{{- include "pulsar.broker.certs.volumeMounts" . | nindent 10 }}
{{- if and (semverCompare "<1.25-0" .Capabilities.KubeVersion.Version) .Values.rbac.enabled .Values.rbac.psp }}
securityContext:
readOnlyRootFilesystem: false
{{- end }}
env:
{{- if and (and .Values.broker.storageOffload (eq .Values.broker.storageOffload.driver "aws-s3")) .Values.broker.storageOffload.secret }}
- name: AWS_ACCESS_KEY_ID
@ -338,7 +346,7 @@ spec:
{{ toYaml .Values.broker.extraVolumes | indent 6 }}
{{- end }}
{{- if .Values.auth.authentication.enabled }}
{{- if eq .Values.auth.authentication.provider "jwt" }}
{{- if .Values.auth.authentication.jwt.enabled }}
- name: token-keys
secret:
{{- if not .Values.auth.authentication.jwt.usingSecretKey }}

View File

@ -0,0 +1,82 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
apiVersion: v1
kind: ConfigMap
metadata:
name: "{{ template "pulsar.fullname" . }}-certs-scripts"
namespace: {{ template "pulsar.namespace" . }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: certs-scripts
data:
certs-combine-pem.sh: |
#!/bin/bash
# This script combines all certificates into a single file.
# Usage: certs-combine-pem.sh <output_file> <cert1> <cert2> ...
set -eu -o pipefail
if [ "$#" -lt 2 ]; then
echo "Usage: $0 <output_file> <cert1> <cert2> ..."
exit 1
fi
OUTPUT_FILE="$1"
shift
OUTPUT_FILE_TMP="${OUTPUT_FILE}.tmp"
rm -f "$OUTPUT_FILE_TMP"
for CERT in "$@"; do
if [ -f "$CERT" ]; then
echo "# $CERT" >> "$OUTPUT_FILE_TMP"
cat "$CERT" >> "$OUTPUT_FILE_TMP"
else
echo "Certificate file '$CERT' does not exist, skipping"
fi
done
if [ ! -f "$OUTPUT_FILE" ]; then
touch "$OUTPUT_FILE"
fi
if diff -q "$OUTPUT_FILE" "$OUTPUT_FILE_TMP" > /dev/null; then
# No changes detected, skipping update
rm -f "$OUTPUT_FILE_TMP"
else
# Update $OUTPUT_FILE with new certificates
mv "$OUTPUT_FILE_TMP" "$OUTPUT_FILE"
fi
certs-combine-pem-infinity.sh: |
#!/bin/bash
# This script combines all certificates into a single file once every minute.
# Usage: certs-combine-pem-infinity.sh <output_file> <cert1> <cert2> ...
set -eu -o pipefail
if [ "$#" -lt 2 ]; then
echo "Usage: $0 <output_file> <cert1> <cert2> ..."
exit 1
fi
while true; do
/pulsar/bin/certs-combine-pem.sh "$@"
sleep 60
done
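As wired up in the statefulsets above, init containers run the one-shot script while the main containers keep the infinity variant running in the background under nohup to pick up rotated certificates. A manual invocation matching the usage string would look like this (the second input path is hypothetical):

  /pulsar/bin/certs-combine-pem.sh /pulsar/certs/cacerts/ca-combined.pem \
      /pulsar/certs/ca/ca.crt /pulsar/certs/corporate-ca/ca.crt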

View File

@ -17,6 +17,7 @@
# under the License.
#
rbac:
enabled: true
psp: true
{{- range .Values.extraDeploy }}
---
{{ include "common.tplvalues.render" (dict "value" . "context" $) }}
{{- end }}
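Each extraDeploy entry is passed through common.tplvalues.render, so templates embedded in the manifest string are evaluated against the chart context. A hypothetical entry:

  extraDeploy:
    - |
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: '{{ template "pulsar.fullname" . }}-extra'   # hypothetical name
      data:
        note: "rendered via common.tplvalues.render"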

View File

@ -1,110 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# script to process key/cert to keystore and truststore
{{- if .Values.tls.zookeeper.enabled }}
apiVersion: v1
kind: ConfigMap
metadata:
name: "{{ template "pulsar.fullname" . }}-keytool-configmap"
namespace: {{ template "pulsar.namespace" . }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: keytool
data:
keytool.sh: |
#!/bin/bash
component=$1
name=$2
isClient=$3
crtFile=/pulsar/certs/${component}/tls.crt
keyFile=/pulsar/certs/${component}/tls.key
caFile=/pulsar/certs/ca/ca.crt
tlsDir=/tmp/pulsar-tls$$
p12File=${tlsDir}/${component}.p12
keyStoreFile=${tlsDir}/${component}.keystore.jks
trustStoreFile=${tlsDir}/${component}.truststore.jks
# create tmp dir for keystore and truststore files
mkdir ${tlsDir}
chmod 0700 ${tlsDir}
function checkFile() {
local file=$1
local len=$(wc -c ${file} | awk '{print $1}')
echo "processing ${file} : len = ${len}"
if [ ! -f ${file} ]; then
echo "${file} is not found"
return -1
fi
if [ $len -le 0 ]; then
echo "${file} is empty"
return -1
fi
}
function ensureFileNotEmpty() {
local file=$1
until checkFile ${file}; do
echo "file isn't initialized yet ... check in 3 seconds ..." && sleep 3;
done;
}
ensureFileNotEmpty ${crtFile}
ensureFileNotEmpty ${keyFile}
ensureFileNotEmpty ${caFile}
PASSWORD=$(head /dev/urandom | base64 | head -c 24)
openssl pkcs12 \
-export \
-in ${crtFile} \
-inkey ${keyFile} \
-out ${p12File} \
-name ${name} \
-passout "pass:${PASSWORD}"
keytool -importkeystore \
-srckeystore ${p12File} \
-srcstoretype PKCS12 -srcstorepass "${PASSWORD}" \
-alias ${name} \
-destkeystore ${keyStoreFile} \
-deststorepass "${PASSWORD}"
keytool -import \
-file ${caFile} \
-storetype JKS \
-alias ${name} \
-keystore ${trustStoreFile} \
-storepass "${PASSWORD}" \
-trustcacerts -noprompt
ensureFileNotEmpty ${keyStoreFile}
ensureFileNotEmpty ${trustStoreFile}
if [[ "x${isClient}" == "xtrue" ]]; then
echo $'\n' >> conf/pulsar_env.sh
echo "PULSAR_EXTRA_OPTS=\"\${PULSAR_EXTRA_OPTS} -Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty -Dzookeeper.client.secure=true -Dzookeeper.ssl.keyStore.location=${keyStoreFile} -Dzookeeper.ssl.keyStore.password=${PASSWORD} -Dzookeeper.ssl.trustStore.location=${trustStoreFile} -Dzookeeper.ssl.trustStore.password=${PASSWORD}\"" >> conf/pulsar_env.sh
echo $'\n' >> conf/bkenv.sh
echo "BOOKIE_EXTRA_OPTS=\"\${BOOKIE_EXTRA_OPTS} -Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty -Dzookeeper.client.secure=true -Dzookeeper.ssl.keyStore.location=${keyStoreFile} -Dzookeeper.ssl.keyStore.password=${PASSWORD} -Dzookeeper.ssl.trustStore.location=${trustStoreFile} -Dzookeeper.ssl.trustStore.password=${PASSWORD}\"" >> conf/bkenv.sh
else
echo $'\n' >> conf/pulsar_env.sh
echo "PULSAR_EXTRA_OPTS=\"\${PULSAR_EXTRA_OPTS} -Dzookeeper.ssl.keyStore.location=${keyStoreFile} -Dzookeeper.ssl.keyStore.password=${PASSWORD} -Dzookeeper.ssl.trustStore.location=${trustStoreFile} -Dzookeeper.ssl.trustStore.password=${PASSWORD}\"" >> conf/pulsar_env.sh
fi
{{- end }}

View File

@ -16,7 +16,7 @@
# specific language governing permissions and limitations
# under the License.
#
{{- if .Values.components.oxia }}
{{- if and .Values.components.oxia (not .Values.oxia.coordinator.customConfigMapName) }}
apiVersion: v1
kind: ConfigMap
metadata:
@ -29,4 +29,4 @@ data:
config.yaml: |
{{- include "oxia.coordinator.config.yaml" . | nindent 4 }}
{{- end }}
{{- end }}

View File

@ -26,6 +26,7 @@ metadata:
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.oxia.component }}-coordinator
annotations: {{ .Values.oxia.coordinator.appAnnotations | toYaml | nindent 4 }}
spec:
replicas: 1
selector:
@ -40,23 +41,32 @@ spec:
{{- include "pulsar.template.labels" . | nindent 8 }}
component: {{ .Values.oxia.component }}-coordinator
annotations:
{{- if not .Values.oxia.coordinator.podMonitor.enabled }}
prometheus.io/scrape: "true"
prometheus.io/port: "{{ .Values.oxia.coordinator.ports.metrics }}"
{{- end }}
{{- with .Values.oxia.coordinator.annotations }}
{{ toYaml . | indent 8 }}
{{- end }}
spec:
{{- if .Values.oxia.server.nodeSelector }}
{{- if .Values.oxia.coordinator.nodeSelector }}
nodeSelector:
{{ toYaml .Values.oxia.server.nodeSelector | indent 8 }}
{{ toYaml .Values.oxia.coordinator.nodeSelector | indent 8 }}
{{- end }}
{{- if .Values.oxia.server.tolerations }}
{{- if .Values.oxia.coordinator.tolerations }}
tolerations:
{{ toYaml .Values.oxia.server.tolerations | indent 8 }}
{{ toYaml .Values.oxia.coordinator.tolerations | indent 8 }}
{{- end }}
serviceAccountName: {{ template "pulsar.fullname" . }}-{{ .Values.oxia.component }}-coordinator
containers:
- command:
{{- if .Values.oxia.coordinator.entrypoint }}
{{ toYaml .Values.oxia.coordinator.entrypoint | indent 12 }}
{{- else }}
{{- include "oxia.coordinator.entrypoint" . | nindent 12 }}
{{- end }}
image: "{{ .Values.images.oxia.repository }}:{{ .Values.images.oxia.tag }}"
imagePullPolicy: {{ .Values.images.oxia.pullPolicy }}
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.oxia "root" .) }}"
name: coordinator
ports:
{{- range $key, $value := .Values.oxia.coordinator.ports }}
@ -67,8 +77,19 @@ spec:
limits:
cpu: {{ .Values.oxia.coordinator.cpuLimit }}
memory: {{ .Values.oxia.coordinator.memoryLimit }}
{{- if .Values.oxia.coordinator.extraVolumeMounts }}
volumeMounts:
{{- toYaml .Values.oxia.coordinator.extraVolumeMounts | nindent 12 }}
{{- end }}
livenessProbe:
{{- include "oxia-cluster.probe" .Values.oxia.coordinator.ports.internal | nindent 12 }}
readinessProbe:
{{- include "oxia-cluster.probe" .Values.oxia.coordinator.ports.internal | nindent 12 }}
{{- end }}
{{- if .Values.oxia.coordinator.extraContainers }}
{{- toYaml .Values.oxia.coordinator.extraContainers | nindent 8 }}
{{- end }}
{{- if .Values.oxia.coordinator.extraVolumes }}
volumes:
{{- toYaml .Values.oxia.coordinator.extraVolumes | nindent 8 }}
{{- end }}
{{- end }}
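The new extraVolumeMounts/extraVolumes/extraContainers hooks follow the shape used by the other components; an illustrative pairing with hypothetical names:

  oxia:
    coordinator:
      extraVolumeMounts:
        - name: coordinator-extra-config       # hypothetical
          mountPath: /oxia/conf
          readOnly: true
      extraVolumes:
        - name: coordinator-extra-config
          configMap:
            name: my-coordinator-config        # hypothetical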

View File

@ -17,42 +17,7 @@
# under the License.
#
# deploy oxia-coordinator PodMonitor only when `$.Values.oxia.podMonitor.enabled` is true
# deploy oxia-coordinator PodMonitor only when `$.Values.oxia.coordinator.podMonitor.enabled` is true
{{- if and $.Values.components.oxia $.Values.oxia.coordinator.podMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: {{ template "pulsar.fullname" . }}-oxia-coordinator
labels:
app: {{ template "pulsar.name" . }}
chart: {{ template "pulsar.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
jobLabel: oxia-coordinator
podMetricsEndpoints:
- port: metrics
path: /metrics
scheme: http
interval: {{ $.Values.oxia.coordinator.podMonitor.interval }}
scrapeTimeout: {{ $.Values.oxia.coordinator.podMonitor.scrapeTimeout }}
relabelings:
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- sourceLabels: [__meta_kubernetes_namespace]
action: replace
targetLabel: kubernetes_namespace
- sourceLabels: [__meta_kubernetes_pod_label_component]
action: replace
targetLabel: job
- sourceLabels: [__meta_kubernetes_pod_name]
action: replace
targetLabel: kubernetes_pod_name
{{- if $.Values.oxia.coordinator.podMonitor.metricRelabelings }}
metricRelabelings: {{ toYaml $.Values.oxia.coordinator.podMonitor.metricRelabelings | nindent 8 }}
{{- end }}
selector:
matchLabels:
{{- include "pulsar.matchLabels" . | nindent 6 }}
app.kubernetes.io/component: oxia-coordinator
{{- include "pulsar.podMonitor" (list . "oxia.coordinator" (printf "component: %s-coordinator" .Values.oxia.component) "metrics") }}
{{- end }}

View File

@ -17,42 +17,7 @@
# under the License.
#
# deploy oxia-server PodMonitor only when `$.Values.oxia.podMonitor.enabled` is true
# deploy oxia-server PodMonitor only when `$.Values.oxia.server.podMonitor.enabled` is true
{{- if and $.Values.components.oxia $.Values.oxia.server.podMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: {{ template "pulsar.fullname" . }}-oxia-server
labels:
app: {{ template "pulsar.name" . }}
chart: {{ template "pulsar.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
jobLabel: oxia-server
podMetricsEndpoints:
- port: metrics
path: /metrics
scheme: http
interval: {{ $.Values.oxia.server.podMonitor.interval }}
scrapeTimeout: {{ $.Values.oxia.server.podMonitor.scrapeTimeout }}
relabelings:
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- sourceLabels: [__meta_kubernetes_namespace]
action: replace
targetLabel: kubernetes_namespace
- sourceLabels: [__meta_kubernetes_pod_label_component]
action: replace
targetLabel: job
- sourceLabels: [__meta_kubernetes_pod_name]
action: replace
targetLabel: kubernetes_pod_name
{{- if $.Values.oxia.server.podMonitor.metricRelabelings }}
metricRelabelings: {{ toYaml $.Values.oxia.server.podMonitor.metricRelabelings | nindent 8 }}
{{- end }}
selector:
matchLabels:
{{- include "pulsar.matchLabels" . | nindent 6 }}
app.kubernetes.io/component: oxia-server
{{- include "pulsar.podMonitor" (list . "oxia.server" (printf "component: %s-server" .Values.oxia.component) "metrics") }}
{{- end }}

View File

@ -26,6 +26,7 @@ metadata:
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.oxia.component }}-server
annotations: {{ .Values.oxia.server.appAnnotations | toYaml | nindent 4 }}
spec:
replicas: {{ .Values.oxia.server.replicas }}
selector:
@ -40,8 +41,13 @@ spec:
{{- include "pulsar.template.labels" . | nindent 8 }}
component: {{ .Values.oxia.component }}-server
annotations:
{{- if not .Values.oxia.server.podMonitor.enabled }}
prometheus.io/scrape: "true"
prometheus.io/port: "{{ .Values.oxia.server.ports.metrics }}"
{{- end }}
{{- with .Values.oxia.server.annotations }}
{{ toYaml . | indent 8 }}
{{- end }}
spec:
{{- if .Values.oxia.server.nodeSelector }}
nodeSelector:
@ -112,8 +118,8 @@ spec:
{{- if .Values.oxia.pprofEnabled }}
- "--profile"
{{- end}}
image: "{{ .Values.images.oxia.repository }}:{{ .Values.images.oxia.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.images.oxia.pullPolicy }}
image: "{{ .Values.images.oxia.repository }}:{{ .Values.images.oxia.tag }}"
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.oxia "root" .) }}"
name: server
ports:
{{- range $key, $value := .Values.oxia.server.ports }}
@ -144,4 +150,4 @@ spec:
resources:
requests:
storage: {{ .Values.oxia.server.storageSize }}
{{- end}}
{{- end}}

View File

@ -42,14 +42,14 @@ data:
webServicePortTls: "{{ .Values.proxy.ports.containerPorts.https }}"
tlsCertificateFilePath: "/pulsar/certs/proxy/tls.crt"
tlsKeyFilePath: "/pulsar/certs/proxy/tls.key"
tlsTrustCertsFilePath: "/pulsar/certs/ca/ca.crt"
tlsTrustCertsFilePath: {{ ternary "/pulsar/certs/cacerts/ca-combined.pem" "/pulsar/certs/ca/ca.crt" .Values.tls.proxy.cacerts.enabled | quote }}
{{- if and .Values.tls.enabled .Values.tls.broker.enabled }}
# if broker enables TLS, configure proxy to talk to broker using TLS
brokerServiceURLTLS: pulsar+ssl://{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}:{{ .Values.broker.ports.pulsarssl }}
brokerWebServiceURLTLS: https://{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}:{{ .Values.broker.ports.https }}
tlsEnabledWithBroker: "true"
tlsCertRefreshCheckDurationSec: "300"
brokerClientTrustCertsFilePath: "/pulsar/certs/ca/ca.crt"
brokerClientTrustCertsFilePath: {{ ternary "/pulsar/certs/cacerts/ca-combined.pem" "/pulsar/certs/ca/ca.crt" .Values.tls.proxy.cacerts.enabled | quote }}
{{- end }}
{{- if not (and .Values.tls.enabled .Values.tls.broker.enabled) }}
brokerServiceURL: pulsar://{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}:{{ .Values.broker.ports.pulsar }}
@ -70,9 +70,14 @@ data:
superUserRoles: {{ .Values.auth.superUsers | values | compact | sortAlpha | join "," }}
{{- end }}
{{- end }}
{{- if eq .Values.auth.authentication.provider "jwt" }}
{{- if and .Values.auth.authentication.enabled .Values.auth.authentication.jwt.enabled }}
# token authentication configuration
{{- if and .Values.auth.authentication.enabled .Values.auth.authentication.jwt.enabled .Values.auth.authentication.openid.enabled }}
authenticationProviders: "org.apache.pulsar.broker.authentication.AuthenticationProviderToken,org.apache.pulsar.broker.authentication.oidc.AuthenticationProviderOpenID"
{{- end }}
{{- if and .Values.auth.authentication.enabled .Values.auth.authentication.jwt.enabled ( not .Values.auth.authentication.openid.enabled ) }}
authenticationProviders: "org.apache.pulsar.broker.authentication.AuthenticationProviderToken"
{{- end }}
brokerClientAuthenticationParameters: "file:///pulsar/tokens/proxy/token"
brokerClientAuthenticationPlugin: "org.apache.pulsar.client.impl.auth.AuthenticationToken"
{{- if .Values.auth.authentication.jwt.usingSecretKey }}
@ -81,6 +86,25 @@ data:
tokenPublicKey: "file:///pulsar/keys/token/public.key"
{{- end }}
{{- end }}
{{- if and .Values.auth.authentication.enabled .Values.auth.authentication.openid.enabled }}
# openid authentication configuration
{{- if and .Values.auth.authentication.enabled .Values.auth.authentication.openid.enabled ( not .Values.auth.authentication.jwt.enabled ) }}
authenticationProviders: "org.apache.pulsar.broker.authentication.oidc.AuthenticationProviderOpenID"
{{- end }}
PULSAR_PREFIX_openIDAllowedTokenIssuers: {{ .Values.auth.authentication.openid.openIDAllowedTokenIssuers | uniq | compact | sortAlpha | join "," | quote }}
PULSAR_PREFIX_openIDAllowedAudiences: {{ .Values.auth.authentication.openid.openIDAllowedAudiences | uniq | compact | sortAlpha | join "," | quote }}
PULSAR_PREFIX_openIDTokenIssuerTrustCertsFilePath: {{ .Values.auth.authentication.openid.openIDTokenIssuerTrustCertsFilePath | quote }}
PULSAR_PREFIX_openIDRoleClaim: {{ .Values.auth.authentication.openid.openIDRoleClaim | quote }}
PULSAR_PREFIX_openIDAcceptedTimeLeewaySeconds: {{ .Values.auth.authentication.openid.openIDAcceptedTimeLeewaySeconds | quote }}
PULSAR_PREFIX_openIDCacheSize: {{ .Values.auth.authentication.openid.openIDCacheSize | quote }}
PULSAR_PREFIX_openIDCacheRefreshAfterWriteSeconds: {{ .Values.auth.authentication.openid.openIDCacheRefreshAfterWriteSeconds | quote }}
PULSAR_PREFIX_openIDCacheExpirationSeconds: {{ .Values.auth.authentication.openid.openIDCacheExpirationSeconds | quote }}
PULSAR_PREFIX_openIDHttpConnectionTimeoutMillis: {{ .Values.auth.authentication.openid.openIDHttpConnectionTimeoutMillis | quote }}
PULSAR_PREFIX_openIDHttpReadTimeoutMillis: {{ .Values.auth.authentication.openid.openIDHttpReadTimeoutMillis | quote }}
PULSAR_PREFIX_openIDKeyIdCacheMissRefreshSeconds: {{ .Values.auth.authentication.openid.openIDKeyIdCacheMissRefreshSeconds | quote }}
PULSAR_PREFIX_openIDRequireIssuersUseHttps: {{ .Values.auth.authentication.openid.openIDRequireIssuersUseHttps | quote }}
PULSAR_PREFIX_openIDFallbackDiscoveryMode: {{ .Values.auth.authentication.openid.openIDFallbackDiscoveryMode | quote }}
{{- end }}
{{- end }}
{{ toYaml .Values.proxy.configData | indent 2 }}
{{- end }}
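Note: both `ternary` calls switch the proxy's trust store to the combined PEM bundle whenever `tls.proxy.cacerts.enabled` is true. With values like the following (secret names are illustrative), both settings render as /pulsar/certs/cacerts/ca-combined.pem:

tls:
  enabled: true
  proxy:
    enabled: true
    cacerts:
      enabled: true
      certs:
      - name: proxy-cacert            # illustrative
        existingSecret: proxy-cacert  # illustrative secret containing the extra CA
        secretKeys:
        - ca.crt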

View File

@ -27,6 +27,8 @@ kind: HorizontalPodAutoscaler
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.proxy.component }}"
namespace: {{ template "pulsar.namespace" . }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
spec:
maxReplicas: {{ .Values.proxy.autoscaling.maxReplicas }}
{{- with .Values.proxy.autoscaling.metrics }}

View File

@ -19,40 +19,5 @@
# deploy proxy PodMonitor only when `$.Values.proxy.podMonitor.enabled` is true
{{- if $.Values.proxy.podMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: {{ template "pulsar.fullname" . }}-proxy
labels:
app: {{ template "pulsar.name" . }}
chart: {{ template "pulsar.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
jobLabel: proxy
podMetricsEndpoints:
- port: http
path: /metrics
scheme: http
interval: {{ $.Values.proxy.podMonitor.interval }}
scrapeTimeout: {{ $.Values.proxy.podMonitor.scrapeTimeout }}
relabelings:
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- sourceLabels: [__meta_kubernetes_namespace]
action: replace
targetLabel: kubernetes_namespace
- sourceLabels: [__meta_kubernetes_pod_label_component]
action: replace
targetLabel: job
- sourceLabels: [__meta_kubernetes_pod_name]
action: replace
targetLabel: kubernetes_pod_name
{{- if $.Values.proxy.podMonitor.metricRelabelings }}
metricRelabelings: {{ toYaml $.Values.proxy.podMonitor.metricRelabelings | nindent 8 }}
{{- end }}
selector:
matchLabels:
{{- include "pulsar.matchLabels" . | nindent 6 }}
component: proxy
{{- end }}
{{- include "pulsar.podMonitor" (list . "proxy" (printf "component: %s" .Values.proxy.component) "sts-http") }}
{{- end }}

View File

@ -1,85 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
{{- if and (semverCompare "<1.25-0" .Capabilities.KubeVersion.Version) .Values.rbac.enabled .Values.rbac.psp }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.proxy.component }}"
namespace: {{ template "pulsar.namespace" . }}
rules:
- apiGroups:
- policy
resourceNames:
- "{{ template "pulsar.fullname" . }}-{{ .Values.proxy.component }}"
resources:
- podsecuritypolicies
verbs:
- use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.proxy.component }}"
namespace: {{ template "pulsar.namespace" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: "{{ template "pulsar.fullname" . }}-{{ .Values.proxy.component }}"
subjects:
- kind: ServiceAccount
name: "{{ template "pulsar.fullname" . }}-{{ .Values.proxy.component }}"
namespace: {{ template "pulsar.namespace" . }}
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
{{- if .Values.rbac.limit_to_namespace }}
name: "{{ template "pulsar.fullname" . }}-{{ .Values.proxy.component }}-{{ template "pulsar.namespace" . }}"
{{- else}}
name: "{{ template "pulsar.fullname" . }}-{{ .Values.proxy.component }}"
{{- end}}
spec:
readOnlyRootFilesystem: false
privileged: false
allowPrivilegeEscalation: false
runAsUser:
rule: 'RunAsAny'
supplementalGroups:
ranges:
- max: 65535
min: 1
rule: MustRunAs
fsGroup:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
seLinux:
rule: 'RunAsAny'
volumes:
- configMap
- emptyDir
- projected
- secret
- downwardAPI
- persistentVolumeClaim
{{- end}}

View File

@ -23,6 +23,7 @@ kind: StatefulSet
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.proxy.component }}"
namespace: {{ template "pulsar.namespace" . }}
annotations: {{ .Values.proxy.appAnnotations | toYaml | nindent 4 }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.proxy.component }}
@ -44,8 +45,10 @@ spec:
{{- include "pulsar.template.labels" . | nindent 8 }}
component: {{ .Values.proxy.component }}
annotations:
{{- if not .Values.proxy.podMonitor.enabled }}
prometheus.io/scrape: "true"
prometheus.io/port: "{{ .Values.proxy.ports.containerPorts.http }}"
{{- end }}
{{- if .Values.proxy.restartPodsOnConfigMapChange }}
checksum/config: {{ include (print $.Template.BasePath "/proxy-configmap.yaml") . | sha256sum }}
{{- end }}
@ -109,6 +112,18 @@ spec:
terminationGracePeriodSeconds: {{ .Values.proxy.gracePeriod }}
serviceAccountName: "{{ template "pulsar.fullname" . }}-{{ .Values.proxy.component }}"
initContainers:
{{- if .Values.tls.proxy.cacerts.enabled }}
- name: combine-certs
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.proxy "root" .) }}"
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.proxy "root" .) }}"
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["sh", "-c"]
args:
- |
bin/certs-combine-pem.sh /pulsar/certs/cacerts/ca-combined.pem {{ template "pulsar.certs.cacerts" (dict "certs" .Values.tls.proxy.cacerts.certs) }}
volumeMounts:
{{- include "pulsar.proxy.certs.volumeMounts" . | nindent 8 }}
{{- end }}
{{- if and .Values.components.zookeeper .Values.proxy.waitZookeeperTimeout (gt (.Values.proxy.waitZookeeperTimeout | int) 0) }}
# This init container will wait for zookeeper to be ready before
# deploying the proxy
@ -118,15 +133,19 @@ spec:
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["timeout", "{{ .Values.proxy.waitZookeeperTimeout }}", "sh", "-c"]
args:
- >-
- |
export PULSAR_MEM="-Xmx128M";
{{- if $zk:=.Values.pulsar_metadata.userProvidedZookeepers }}
until timeout 15 bin/pulsar zookeeper-shell -server {{ $zk }} ls {{ or .Values.metadataPrefix "/" }}; do
echo "user provided zookeepers {{ $zk }} are unreachable... check in 3 seconds ..." && sleep 3;
done;
{{- else if .Values.pulsar_metadata.configurationStore }}
until timeout 15 bin/pulsar zookeeper-shell -server {{ template "pulsar.configurationStore.service" . }} get {{ .Values.pulsar_metadata.configurationStoreMetadataPrefix }}/admin/clusters/{{ template "pulsar.cluster.name" . }}; do
echo "pulsar cluster {{ template "pulsar.cluster.name" . }} isn't initialized yet ... check in 3 seconds ..." && sleep 3;
done;
{{ else }}
until timeout 15 bin/pulsar zookeeper-shell -server {{ template "pulsar.configurationStore.service" . }} get {{ .Values.metadataPrefix }}/admin/clusters/{{ template "pulsar.cluster.name" . }}; do
sleep 3;
{{- else }}
until timeout 15 bin/pulsar zookeeper-shell -server {{ template "pulsar.zookeeper.service" . }} get {{ .Values.metadataPrefix }}/admin/clusters/{{ template "pulsar.cluster.name" . }}; do
echo "pulsar cluster {{ template "pulsar.cluster.name" . }} isn't initialized yet ... check in 3 seconds ..." && sleep 3;
done;
{{- end}}
{{- end}}
@ -137,7 +156,7 @@ spec:
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["timeout", "{{ .Values.proxy.waitOxiaTimeout }}", "sh", "-c"]
args:
- >-
- |
until nslookup {{ template "pulsar.oxia.server.service" . }}; do
sleep 3;
done;
@ -151,7 +170,7 @@ spec:
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["timeout", "{{ .Values.proxy.waitBrokerTimeout }}", "sh", "-c"]
args:
- >-
- |
set -e;
brokerServiceNumber="$(nslookup -timeout=10 {{ template "pulsar.fullname" . }}-{{ .Values.broker.component }} | grep Name | wc -l)";
until [ ${brokerServiceNumber} -ge 1 ]; do
@ -203,10 +222,15 @@ spec:
{{- end }}
command: ["sh", "-c"]
args:
- >
- |
{{- if .Values.proxy.additionalCommand }}
{{ .Values.proxy.additionalCommand }}
{{- end }}
{{- if .Values.tls.proxy.cacerts.enabled }}
cd /pulsar/certs/cacerts;
nohup /pulsar/bin/certs-combine-pem-infinity.sh /pulsar/certs/cacerts/ca-combined.pem {{ template "pulsar.certs.cacerts" (dict "certs" .Values.tls.proxy.cacerts.certs) }} > /pulsar/certs/cacerts/certs-combine-pem-infinity.log 2>&1 &
cd /pulsar;
{{- end }}
bin/apply-config-from-env.py conf/proxy.conf &&
echo "OK" > "${statusFilePath:-status}" &&
OPTS="${OPTS} -Dlog4j2.formatMsgNoLookups=true" exec bin/pulsar proxy
@ -224,10 +248,6 @@ spec:
- name: "sts-{{ .Values.tlsPrefix }}pulsarssl"
containerPort: {{ .Values.proxy.ports.pulsarssl }}
{{- end }}
{{- if and (semverCompare "<1.25-0" .Capabilities.KubeVersion.Version) .Values.rbac.enabled .Values.rbac.psp }}
securityContext:
readOnlyRootFilesystem: false
{{- end }}
{{- if .Values.proxy.extraEnvs }}
env:
{{ toYaml .Values.proxy.extraEnvs | indent 8 }}
@ -238,7 +258,7 @@ spec:
{{- if or .Values.proxy.extraVolumeMounts .Values.auth.authentication.enabled (and .Values.tls.enabled (or .Values.tls.proxy.enabled .Values.tls.broker.enabled)) }}
volumeMounts:
{{- if .Values.auth.authentication.enabled }}
{{- if eq .Values.auth.authentication.provider "jwt" }}
{{- if .Values.auth.authentication.jwt.enabled }}
- mountPath: "/pulsar/keys"
name: token-keys
readOnly: true
@ -247,16 +267,7 @@ spec:
readOnly: true
{{- end }}
{{- end }}
{{- if .Values.tls.proxy.enabled }}
- mountPath: "/pulsar/certs/proxy"
name: proxy-certs
readOnly: true
{{- end}}
{{- if .Values.tls.enabled }}
- mountPath: "/pulsar/certs/ca"
name: ca
readOnly: true
{{- end}}
{{- include "pulsar.proxy.certs.volumeMounts" . | nindent 10 }}
{{- if .Values.proxy.extraVolumeMounts }}
{{ toYaml .Values.proxy.extraVolumeMounts | indent 10 }}
{{- end }}
@ -268,7 +279,7 @@ spec:
{{ toYaml .Values.proxy.extraVolumes | indent 8 }}
{{- end }}
{{- if .Values.auth.authentication.enabled }}
{{- if eq .Values.auth.authentication.provider "jwt" }}
{{- if .Values.auth.authentication.jwt.enabled }}
- name: token-keys
secret:
{{- if not .Values.auth.authentication.jwt.usingSecretKey }}
@ -293,26 +304,6 @@ spec:
path: proxy/token
{{- end}}
{{- end}}
{{- if .Values.tls.proxy.enabled }}
- name: ca
secret:
{{- if eq .Values.certs.internal_issuer.type "selfsigning" }}
secretName: "{{ .Release.Name }}-{{ .Values.tls.ca_suffix }}"
{{- end }}
{{- if eq .Values.certs.internal_issuer.type "ca" }}
secretName: "{{ .Values.certs.issuers.ca.secretName }}"
{{- end }}
items:
- key: ca.crt
path: ca.crt
- name: proxy-certs
secret:
secretName: "{{ .Release.Name }}-{{ .Values.tls.proxy.cert_name }}"
items:
- key: tls.crt
path: tls.crt
- key: tls.key
path: tls.key
{{- end}}
{{- include "pulsar.proxy.certs.volumes" . | nindent 8 }}
{{- end}}
{{- end }}

View File

@ -34,6 +34,10 @@ spec:
ttlSecondsAfterFinished: {{ .Values.job.ttl.secondsAfterFinished | default 600 }}
{{- end }}
template:
metadata:
labels:
{{- include "pulsar.template.labels" . | nindent 8 }}
component: {{ .Values.pulsar_metadata.component }}
spec:
{{- include "pulsar.imagePullSecrets" . | nindent 6 }}
{{- if .Values.pulsar_metadata.nodeSelector }}
@ -41,6 +45,18 @@ spec:
{{ toYaml .Values.pulsar_metadata.nodeSelector | indent 8 }}
{{- end }}
initContainers:
{{- if .Values.tls.toolset.cacerts.enabled }}
- name: cacerts
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.pulsar_metadata.image "root" .) }}"
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.pulsar_metadata.image "root" .) }}"
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["sh", "-c"]
args:
- |
bin/certs-combine-pem.sh /pulsar/certs/cacerts/ca-combined.pem {{ template "pulsar.certs.cacerts" (dict "certs" .Values.tls.toolset.cacerts.certs) }}
volumeMounts:
{{- include "pulsar.toolset.certs.volumeMounts" . | nindent 8 }}
{{- end }}
{{- if and .Values.components.zookeeper .Values.pulsar_metadata.waitZookeeperTimeout (gt (.Values.pulsar_metadata.waitZookeeperTimeout | int) 0) }}
{{- if .Values.pulsar_metadata.configurationStore }}
- name: wait-zk-cs-ready
@ -49,7 +65,7 @@ spec:
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["timeout", "{{ .Values.pulsar_metadata.waitZookeeperTimeout }}", "sh", "-c"]
args:
- >-
- |
until nslookup {{ .Values.pulsar_metadata.configurationStore}}; do
sleep 3;
done;
@ -60,7 +76,7 @@ spec:
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["timeout", "{{ .Values.pulsar_metadata.waitZookeeperTimeout }}", "sh", "-c"]
args:
- >-
- |
{{- if $zk := .Values.pulsar_metadata.userProvidedZookeepers }}
export PULSAR_MEM="-Xmx128M";
until timeout 15 bin/pulsar zookeeper-shell -server {{ $zk }} ls {{ or .Values.metadataPrefix "/" }}; do
@ -79,7 +95,7 @@ spec:
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["timeout", "{{ .Values.pulsar_metadata.waitOxiaTimeout }}", "sh", "-c"]
args:
- >-
- |
until nslookup {{ template "pulsar.oxia.server.service" . }}; do
sleep 3;
done;
@ -93,7 +109,7 @@ spec:
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["timeout", "{{ .Values.pulsar_metadata.waitBookkeeperTimeout }}", "sh", "-c"]
args:
- >-
- |
bin/apply-config-from-env.py conf/bookkeeper.conf;
echo Default BOOKIE_MEM settings are set very high, which can cause the init container to fail.;
echo Setting the memory to a lower value to avoid OOM as operations below are not memory intensive.;
@ -119,7 +135,7 @@ spec:
command: ["timeout", "{{ .Values.pulsar_metadata.initTimeout | default 60 }}", "sh", "-c"]
{{- if .Values.components.zookeeper }}
args:
- >-
- | # Use the pipe character for the YAML multiline string. Workaround for kubernetes-sigs/kustomize#4201
{{- include "pulsar.toolset.zookeeper.tls.settings" . | nindent 12 }}
export PULSAR_MEM="-Xmx128M";
bin/pulsar initialize-cluster-metadata \
@ -139,7 +155,7 @@ spec:
{{- end }}
{{- else if .Values.components.oxia }}
args:
- >-
- | # Use the pipe character for the YAML multiline string. Workaround for kubernetes-sigs/kustomize#4201
export PULSAR_MEM="-Xmx128M";
bin/pulsar initialize-cluster-metadata \
--cluster {{ template "pulsar.cluster.name" . }} \

View File

@ -24,12 +24,8 @@ metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.pulsar_manager.component }}-secret"
namespace: {{ template "pulsar.namespace" . }}
labels:
app: {{ template "pulsar.name" . }}
chart: {{ template "pulsar.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.pulsar_manager.component }}
cluster: {{ template "pulsar.fullname" . }}
"helm.sh/resource-policy": "keep" # do not remove when uninstalling to keep it for next install
type: Opaque
data:

View File

@ -32,6 +32,10 @@ spec:
ttlSecondsAfterFinished: {{ .Values.job.ttl.secondsAfterFinished | default 600 }}
{{- end }}
template:
metadata:
labels:
{{- include "pulsar.template.labels" . | nindent 8 }}
component: {{ .Values.pulsar_manager.component }}-init
spec:
{{- include "pulsar.imagePullSecrets" . | nindent 6 }}
nodeSelector:
@ -64,7 +68,7 @@ spec:
resources: {{ toYaml .Values.initContainer.resources | nindent 12 }}
command: [ "sh", "-c" ]
args:
- >-
- |
set -e;
brokerServiceNumber="$(nslookup -timeout=10 {{ template "pulsar.fullname" . }}-{{ .Values.broker.component }} | grep Name | wc -l)";
until [ ${brokerServiceNumber} -ge 1 ]; do

View File

@ -31,7 +31,7 @@ data:
PULSAR_MANAGER_OPTS: "-Dlog4j2.formatMsgNoLookups=true"
{{- if .Values.auth.authentication.enabled }}
# auth
{{- if eq .Values.auth.authentication.provider "jwt" }}
{{- if .Values.auth.authentication.jwt.enabled }}
{{- if .Values.auth.authentication.jwt.usingSecretKey }}
SECRET_KEY: "file:///pulsar-manager/keys/token/secret.key"
{{- else }}

View File

@ -26,8 +26,10 @@ metadata:
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.pulsar_manager.component }}
{{- with .Values.pulsar_manager.service.annotations }}
annotations:
{{ toYaml .Values.pulsar_manager.service.annotations | indent 4 }}
{{ toYaml . | indent 4 }}
{{- end }}
spec:
type: {{ .Values.pulsar_manager.service.type }}
{{- if .Values.pulsar_manager.service.externalTrafficPolicy }}
@ -58,8 +60,10 @@ metadata:
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.pulsar_manager.component }}
{{- with .Values.pulsar_manager.adminService.annotations }}
annotations:
{{ toYaml .Values.pulsar_manager.adminService.annotations | indent 4 }}
{{ toYaml . | indent 4 }}
{{- end }}
spec:
type: {{ .Values.pulsar_manager.adminService.type }}
ports:

View File

@ -23,6 +23,7 @@ kind: StatefulSet
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.pulsar_manager.component }}"
namespace: {{ template "pulsar.namespace" . }}
annotations: {{ .Values.pulsar_manager.appAnnotations | toYaml | nindent 4 }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.pulsar_manager.component }}
@ -81,7 +82,7 @@ spec:
{{ toYaml .Values.pulsar_manager.extraVolumeMounts | indent 10 }}
{{- end }}
{{- if .Values.auth.authentication.enabled }}
{{- if eq .Values.auth.authentication.provider "jwt" }}
{{- if .Values.auth.authentication.jwt.enabled }}
- name: pulsar-manager-keys
mountPath: /pulsar-manager/keys
{{- end }}
@ -109,7 +110,7 @@ spec:
{{- end }}
key: DB_PASSWORD
{{- if .Values.auth.authentication.enabled }}
{{- if eq .Values.auth.authentication.provider "jwt" }}
{{- if .Values.auth.authentication.jwt.enabled }}
{{- if .Values.auth.superUsers.manager }}
- name: JWT_TOKEN
valueFrom:
@ -125,7 +126,7 @@ spec:
{{ toYaml .Values.pulsar_manager.extraVolumes | indent 8 }}
{{- end }}
{{- if .Values.auth.authentication.enabled }}
{{- if eq .Values.auth.authentication.provider "jwt" }}
{{- if .Values.auth.authentication.jwt.enabled }}
- name: pulsar-manager-keys
secret:
defaultMode: 420

View File

@ -24,6 +24,8 @@ kind: Issuer
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.certs.internal_issuer.component }}"
namespace: {{ template "pulsar.namespace" . }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
spec:
selfSigned: {}
---
@ -32,8 +34,10 @@ kind: Certificate
metadata:
name: "{{ template "pulsar.fullname" . }}-ca"
namespace: {{ template "pulsar.namespace" . }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
spec:
secretName: "{{ .Release.Name }}-{{ .Values.tls.ca_suffix }}"
secretName: "{{ template "pulsar.certs.issuers.ca.secretName" . }}"
commonName: "{{ template "pulsar.namespace" . }}.svc.{{ .Values.clusterDomain }}"
duration: "{{ .Values.certs.internal_issuer.duration }}"
renewBefore: "{{ .Values.certs.internal_issuer.renewBefore }}"
@ -50,23 +54,15 @@ spec:
# if you are using an external issuer, change this to that issuer group.
group: cert-manager.io
---
{{- end }}
apiVersion: "{{ .Values.certs.internal_issuer.apiVersion }}"
kind: Issuer
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.certs.internal_issuer.component }}-ca-issuer"
name: "{{ template "pulsar.certs.issuers.ca.name" . }}"
namespace: {{ template "pulsar.namespace" . }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
spec:
ca:
secretName: "{{ .Release.Name }}-{{ .Values.tls.ca_suffix }}"
{{- end }}
{{- if eq .Values.certs.internal_issuer.type "ca" }}
apiVersion: "{{ .Values.certs.internal_issuer.apiVersion }}"
kind: Issuer
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.certs.internal_issuer.component }}-ca-issuer"
namespace: {{ template "pulsar.namespace" . }}
spec:
ca:
secretName: "{{ .Values.certs.issuers.ca.secretName }}"
{{- end }}
secretName: "{{ template "pulsar.certs.issuers.ca.secretName" . }}"
{{- end }}
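Note: the new `pulsar.certs.issuers.ca.name` and `pulsar.certs.issuers.ca.secretName` helpers replace the two hard-coded Issuer variants; presumably they fall back to the removed defaults when `certs.issuers.*` is not set. An approximate rendering for the self-signing case (release name `pulsar` assumed):

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  # assumed default: "<pulsar.fullname>-internal-cert-issuer-ca-issuer"
  # unless certs.issuers.selfsigning.name is set
  name: pulsar-internal-cert-issuer-ca-issuer
spec:
  ca:
    # assumed default: "<release>-<tls.ca_suffix>" unless certs.issuers.selfsigning.secretName is set
    secretName: pulsar-ca-tls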

View File

@ -18,328 +18,30 @@
#
{{- if .Values.tls.enabled }}
{{- if .Values.certs.internal_issuer.enabled }}
{{- if .Values.tls.proxy.enabled }}
{{- if .Values.tls.proxy.createCert }}
apiVersion: "{{ .Values.certs.internal_issuer.apiVersion }}"
kind: Certificate
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.tls.proxy.cert_name }}"
namespace: {{ template "pulsar.namespace" . }}
spec:
# Secret names are always required.
secretName: "{{ .Release.Name }}-{{ .Values.tls.proxy.cert_name }}"
duration: "{{ .Values.tls.common.duration }}"
renewBefore: "{{ .Values.tls.common.renewBefore }}"
{{- if eq .Values.certs.internal_issuer.apiVersion "cert-manager.io/v1" }}
subject:
organizations:
{{ toYaml .Values.tls.common.organization | indent 4 }}
{{- else }}
organization:
{{ toYaml .Values.tls.common.organization | indent 2 }}
{{- end }}
# The use of the common name field has been deprecated since 2000 and is
# discouraged from being used.
commonName: "{{ template "pulsar.fullname" . }}-{{ .Values.proxy.component }}"
isCA: false
{{- if eq .Values.certs.internal_issuer.apiVersion "cert-manager.io/v1" }}
privateKey:
size: {{ .Values.tls.common.keySize }}
algorithm: {{ .Values.tls.common.keyAlgorithm }}
encoding: {{ .Values.tls.common.keyEncoding }}
{{- else }}
keySize: {{ .Values.tls.common.keySize }}
keyAlgorithm: {{ .Values.tls.common.keyAlgorithm }}
keyEncoding: {{ .Values.tls.common.keyEncoding }}
{{- end }}
usages:
- server auth
- client auth
# At least one of a DNS Name, URI SAN, or IP address is required.
dnsNames:
{{- if .Values.tls.proxy.dnsNames }}
{{ toYaml .Values.tls.proxy.dnsNames | indent 4 }}
{{- end }}
- "*.{{ template "pulsar.fullname" . }}-{{ .Values.proxy.component }}.{{ template "pulsar.namespace" . }}.svc.{{ .Values.clusterDomain }}"
- "{{ template "pulsar.fullname" . }}-{{ .Values.proxy.component }}"
# Issuer references are always required.
issuerRef:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.certs.internal_issuer.component }}-ca-issuer"
# We can reference ClusterIssuers by changing the kind here.
# The default value is Issuer (i.e. a locally namespaced Issuer)
kind: Issuer
# This is optional since cert-manager will default to this value however
# if you are using an external issuer, change this to that issuer group.
group: cert-manager.io
{{ include "pulsar.cert.template" (dict "root" . "componentConfig" .Values.proxy "tlsConfig" .Values.tls.proxy) }}
---
{{- end }}
{{- end }}
{{- if or .Values.tls.broker.enabled (or .Values.tls.bookie.enabled .Values.tls.zookeeper.enabled) }}
apiVersion: "{{ .Values.certs.internal_issuer.apiVersion }}"
kind: Certificate
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.tls.broker.cert_name }}"
namespace: {{ template "pulsar.namespace" . }}
spec:
# Secret names are always required.
secretName: "{{ .Release.Name }}-{{ .Values.tls.broker.cert_name }}"
duration: "{{ .Values.tls.common.duration }}"
renewBefore: "{{ .Values.tls.common.renewBefore }}"
{{- if eq .Values.certs.internal_issuer.apiVersion "cert-manager.io/v1" }}
subject:
organizations:
{{ toYaml .Values.tls.common.organization | indent 4 }}
{{- else }}
organization:
{{ toYaml .Values.tls.common.organization | indent 2 }}
{{- end }}
# The use of the common name field has been deprecated since 2000 and is
# discouraged from being used.
commonName: "{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}"
isCA: false
{{- if eq .Values.certs.internal_issuer.apiVersion "cert-manager.io/v1" }}
privateKey:
size: {{ .Values.tls.common.keySize }}
algorithm: {{ .Values.tls.common.keyAlgorithm }}
encoding: {{ .Values.tls.common.keyEncoding }}
{{- else }}
keySize: {{ .Values.tls.common.keySize }}
keyAlgorithm: {{ .Values.tls.common.keyAlgorithm }}
keyEncoding: {{ .Values.tls.common.keyEncoding }}
{{- end }}
usages:
- server auth
- client auth
# At least one of a DNS Name, URI SAN, or IP address is required.
dnsNames:
{{- if .Values.tls.broker.dnsNames }}
{{ toYaml .Values.tls.broker.dnsNames | indent 4 }}
{{- end}}
- "*.{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}.{{ template "pulsar.namespace" . }}.svc.{{ .Values.clusterDomain }}"
- "{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}"
# Issuer references are always required.
issuerRef:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.certs.internal_issuer.component }}-ca-issuer"
# We can reference ClusterIssuers by changing the kind here.
# The default value is Issuer (i.e. a locally namespaced Issuer)
kind: Issuer
# This is optional since cert-manager will default to this value however
# if you are using an external issuer, change this to that issuer group.
group: cert-manager.io
{{ include "pulsar.cert.template" (dict "root" . "componentConfig" .Values.broker "tlsConfig" .Values.tls.broker) }}
---
{{- end }}
{{- if or .Values.tls.bookie.enabled .Values.tls.zookeeper.enabled }}
apiVersion: "{{ .Values.certs.internal_issuer.apiVersion }}"
kind: Certificate
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.tls.bookie.cert_name }}"
namespace: {{ template "pulsar.namespace" . }}
spec:
# Secret names are always required.
secretName: "{{ .Release.Name }}-{{ .Values.tls.bookie.cert_name }}"
duration: "{{ .Values.tls.common.duration }}"
renewBefore: "{{ .Values.tls.common.renewBefore }}"
{{- if eq .Values.certs.internal_issuer.apiVersion "cert-manager.io/v1" }}
subject:
organizations:
{{ toYaml .Values.tls.common.organization | indent 4 }}
{{- else }}
organization:
{{ toYaml .Values.tls.common.organization | indent 2 }}
{{- end }}
# The use of the common name field has been deprecated since 2000 and is
# discouraged from being used.
commonName: "{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}"
isCA: false
{{- if eq .Values.certs.internal_issuer.apiVersion "cert-manager.io/v1" }}
privateKey:
size: {{ .Values.tls.common.keySize }}
algorithm: {{ .Values.tls.common.keyAlgorithm }}
encoding: {{ .Values.tls.common.keyEncoding }}
{{- else }}
keySize: {{ .Values.tls.common.keySize }}
keyAlgorithm: {{ .Values.tls.common.keyAlgorithm }}
keyEncoding: {{ .Values.tls.common.keyEncoding }}
{{- end }}
usages:
- server auth
- client auth
dnsNames:
{{- if .Values.tls.bookie.dnsNames }}
{{ toYaml .Values.tls.bookie.dnsNames | indent 4 }}
{{- end }}
- "*.{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}.{{ template "pulsar.namespace" . }}.svc.{{ .Values.clusterDomain }}"
- "{{ template "pulsar.fullname" . }}-{{ .Values.bookkeeper.component }}"
# Issuer references are always required.
issuerRef:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.certs.internal_issuer.component }}-ca-issuer"
# We can reference ClusterIssuers by changing the kind here.
# The default value is Issuer (i.e. a locally namespaced Issuer)
kind: Issuer
# This is optional since cert-manager will default to this value however
# if you are using an external issuer, change this to that issuer group.
group: cert-manager.io
{{ include "pulsar.cert.template" (dict "root" . "componentConfig" .Values.bookkeeper "tlsConfig" .Values.tls.bookie) }}
---
{{- end }}
{{- if .Values.tls.zookeeper.enabled }}
apiVersion: "{{ .Values.certs.internal_issuer.apiVersion }}"
kind: Certificate
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.tls.autorecovery.cert_name }}"
namespace: {{ template "pulsar.namespace" . }}
spec:
# Secret names are always required.
secretName: "{{ .Release.Name }}-{{ .Values.tls.autorecovery.cert_name }}"
duration: "{{ .Values.tls.common.duration }}"
renewBefore: "{{ .Values.tls.common.renewBefore }}"
{{- if eq .Values.certs.internal_issuer.apiVersion "cert-manager.io/v1" }}
subject:
organizations:
{{ toYaml .Values.tls.common.organization | indent 4 }}
{{- else }}
organization:
{{ toYaml .Values.tls.common.organization | indent 2 }}
{{- end }}
# The use of the common name field has been deprecated since 2000 and is
# discouraged from being used.
commonName: "{{ template "pulsar.fullname" . }}-{{ .Values.autorecovery.component }}"
isCA: false
{{- if eq .Values.certs.internal_issuer.apiVersion "cert-manager.io/v1" }}
privateKey:
size: {{ .Values.tls.common.keySize }}
algorithm: {{ .Values.tls.common.keyAlgorithm }}
encoding: {{ .Values.tls.common.keyEncoding }}
{{- else }}
keySize: {{ .Values.tls.common.keySize }}
keyAlgorithm: {{ .Values.tls.common.keyAlgorithm }}
keyEncoding: {{ .Values.tls.common.keyEncoding }}
{{- end }}
usages:
- server auth
- client auth
dnsNames:
{{- if .Values.tls.autorecovery.dnsNames }}
{{ toYaml .Values.tls.autorecovery.dnsNames | indent 4 }}
{{- end }}
- "*.{{ template "pulsar.fullname" . }}-{{ .Values.autorecovery.component }}.{{ template "pulsar.namespace" . }}.svc.{{ .Values.clusterDomain }}"
- "{{ template "pulsar.fullname" . }}-{{ .Values.autorecovery.component }}"
# Issuer references are always required.
issuerRef:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.certs.internal_issuer.component }}-ca-issuer"
# We can reference ClusterIssuers by changing the kind here.
# The default value is Issuer (i.e. a locally namespaced Issuer)
kind: Issuer
# This is optional since cert-manager will default to this value however
# if you are using an external issuer, change this to that issuer group.
group: cert-manager.io
{{ include "pulsar.cert.template" (dict "root" . "componentConfig" .Values.autorecovery "tlsConfig" .Values.tls.autorecovery) }}
---
apiVersion: "{{ .Values.certs.internal_issuer.apiVersion }}"
kind: Certificate
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.tls.toolset.cert_name }}"
namespace: {{ template "pulsar.namespace" . }}
spec:
# Secret names are always required.
secretName: "{{ .Release.Name }}-{{ .Values.tls.toolset.cert_name }}"
duration: "{{ .Values.tls.common.duration }}"
renewBefore: "{{ .Values.tls.common.renewBefore }}"
{{- if eq .Values.certs.internal_issuer.apiVersion "cert-manager.io/v1" }}
subject:
organizations:
{{ toYaml .Values.tls.common.organization | indent 4 }}
{{- else }}
organization:
{{ toYaml .Values.tls.common.organization | indent 2 }}
{{- end }}
# The use of the common name field has been deprecated since 2000 and is
# discouraged from being used.
commonName: "{{ template "pulsar.fullname" . }}-{{ .Values.toolset.component }}"
isCA: false
{{- if eq .Values.certs.internal_issuer.apiVersion "cert-manager.io/v1" }}
privateKey:
size: {{ .Values.tls.common.keySize }}
algorithm: {{ .Values.tls.common.keyAlgorithm }}
encoding: {{ .Values.tls.common.keyEncoding }}
{{- else }}
keySize: {{ .Values.tls.common.keySize }}
keyAlgorithm: {{ .Values.tls.common.keyAlgorithm }}
keyEncoding: {{ .Values.tls.common.keyEncoding }}
{{- end }}
usages:
- server auth
- client auth
dnsNames:
{{- if .Values.tls.toolset.dnsNames }}
{{ toYaml .Values.tls.toolset.dnsNames | indent 4 }}
{{- end }}
- "*.{{ template "pulsar.fullname" . }}-{{ .Values.toolset.component }}.{{ template "pulsar.namespace" . }}.svc.{{ .Values.clusterDomain }}"
- "{{ template "pulsar.fullname" . }}-{{ .Values.toolset.component }}"
# Issuer references are always required.
issuerRef:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.certs.internal_issuer.component }}-ca-issuer"
# We can reference ClusterIssuers by changing the kind here.
# The default value is Issuer (i.e. a locally namespaced Issuer)
kind: Issuer
# This is optional since cert-manager will default to this value however
# if you are using an external issuer, change this to that issuer group.
group: cert-manager.io
{{ include "pulsar.cert.template" (dict "root" . "componentConfig" .Values.toolset "tlsConfig" .Values.tls.toolset) }}
---
apiVersion: "{{ .Values.certs.internal_issuer.apiVersion }}"
kind: Certificate
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.tls.zookeeper.cert_name }}"
namespace: {{ template "pulsar.namespace" . }}
spec:
# Secret names are always required.
secretName: "{{ .Release.Name }}-{{ .Values.tls.zookeeper.cert_name }}"
duration: "{{ .Values.tls.common.duration }}"
renewBefore: "{{ .Values.tls.common.renewBefore }}"
{{- if eq .Values.certs.internal_issuer.apiVersion "cert-manager.io/v1" }}
subject:
organizations:
{{ toYaml .Values.tls.common.organization | indent 4 }}
{{- else }}
organization:
{{ toYaml .Values.tls.common.organization | indent 2 }}
{{- end }}
# The use of the common name field has been deprecated since 2000 and is
# discouraged from being used.
commonName: "{{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }}"
isCA: false
{{- if eq .Values.certs.internal_issuer.apiVersion "cert-manager.io/v1" }}
privateKey:
size: {{ .Values.tls.common.keySize }}
algorithm: {{ .Values.tls.common.keyAlgorithm }}
encoding: {{ .Values.tls.common.keyEncoding }}
{{- else }}
keySize: {{ .Values.tls.common.keySize }}
keyAlgorithm: {{ .Values.tls.common.keyAlgorithm }}
keyEncoding: {{ .Values.tls.common.keyEncoding }}
{{- end }}
usages:
- server auth
- client auth
dnsNames:
{{- if .Values.tls.zookeeper.dnsNames }}
{{ toYaml .Values.tls.zookeeper.dnsNames | indent 4 }}
{{- end }}
- "*.{{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }}.{{ template "pulsar.namespace" . }}.svc.{{ .Values.clusterDomain }}"
- "{{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }}"
# Issuer references are always required.
issuerRef:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.certs.internal_issuer.component }}-ca-issuer"
# We can reference ClusterIssuers by changing the kind here.
# The default value is Issuer (i.e. a locally namespaced Issuer)
kind: Issuer
# This is optional since cert-manager will default to this value however
# if you are using an external issuer, change this to that issuer group.
group: cert-manager.io
{{ include "pulsar.cert.template" (dict "root" . "componentConfig" .Values.zookeeper "tlsConfig" .Values.tls.zookeeper) }}
{{- end }}
{{- end }}
{{- end }}
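Note: each ~60-line Certificate block above collapses into a single `pulsar.cert.template` include that receives the root context plus the component and TLS config. A minimal sketch of such a template, reconstructed from the removed blocks (names assumed; the chart's real helper may differ):

{{- define "pulsar.cert.template.sketch" -}}
{{- $root := .root -}}
apiVersion: "{{ $root.Values.certs.internal_issuer.apiVersion }}"
kind: Certificate
metadata:
  name: "{{ template "pulsar.fullname" $root }}-{{ .tlsConfig.cert_name }}"
  namespace: {{ template "pulsar.namespace" $root }}
spec:
  secretName: "{{ $root.Release.Name }}-{{ .tlsConfig.cert_name }}"
  duration: "{{ $root.Values.tls.common.duration }}"
  renewBefore: "{{ $root.Values.tls.common.renewBefore }}"
  commonName: "{{ template "pulsar.fullname" $root }}-{{ .componentConfig.component }}"
  usages:
  - server auth
  - client auth
  dnsNames:
  - "*.{{ template "pulsar.fullname" $root }}-{{ .componentConfig.component }}.{{ template "pulsar.namespace" $root }}.svc.{{ $root.Values.clusterDomain }}"
  - "{{ template "pulsar.fullname" $root }}-{{ .componentConfig.component }}"
  issuerRef:
    name: "{{ template "pulsar.certs.issuers.ca.name" $root }}"
    kind: Issuer
    group: cert-manager.io
{{- end -}}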

View File

@ -36,7 +36,7 @@ data:
brokerServiceUrl: "pulsar+ssl://{{ template "pulsar.fullname" . }}-{{ .Values.broker.component }}:{{ .Values.broker.ports.pulsarssl }}/"
useTls: "true"
tlsAllowInsecureConnection: "false"
tlsTrustCertsFilePath: "/pulsar/certs/proxy-ca/ca.crt"
tlsTrustCertsFilePath: {{ ternary "/pulsar/certs/cacerts/ca-combined.pem" "/pulsar/certs/ca/ca.crt" .Values.tls.toolset.cacerts.enabled | quote }}
tlsEnableHostnameVerification: "false"
{{- end }}
{{- if not (and .Values.tls.enabled .Values.tls.broker.enabled) }}
@ -51,7 +51,7 @@ data:
brokerServiceUrl: "pulsar+ssl://{{ template "pulsar.fullname" . }}-{{ .Values.proxy.component }}:{{ .Values.proxy.ports.pulsarssl }}/"
useTls: "true"
tlsAllowInsecureConnection: "false"
tlsTrustCertsFilePath: "/pulsar/certs/proxy-ca/ca.crt"
tlsTrustCertsFilePath: {{ ternary "/pulsar/certs/cacerts/ca-combined.pem" "/pulsar/certs/ca/ca.crt" .Values.tls.toolset.cacerts.enabled | quote }}
tlsEnableHostnameVerification: "false"
{{- end }}
{{- if not (and .Values.tls.enabled .Values.tls.proxy.enabled) }}
@ -61,7 +61,7 @@ data:
{{- end }}
# Authentication Settings
{{- if .Values.auth.authentication.enabled }}
{{- if eq .Values.auth.authentication.provider "jwt" }}
{{- if .Values.auth.authentication.jwt.enabled }}
authParams: "file:///pulsar/tokens/client/token"
authPlugin: "org.apache.pulsar.client.impl.auth.AuthenticationToken"
{{- end }}
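Note: the `eq .Values.auth.authentication.provider "jwt"` checks throughout this diff are replaced by the per-provider boolean, so enabling token authentication now reads:

auth:
  authentication:
    enabled: true
    jwt:
      enabled: true
      # true when tokens are signed with a symmetric secret key,
      # false when signed with an asymmetric private key
      usingSecretKey: false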

View File

@ -1,85 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
{{- if and (semverCompare "<1.25-0" .Capabilities.KubeVersion.Version) .Values.rbac.enabled .Values.rbac.psp }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.toolset.component }}"
namespace: {{ template "pulsar.namespace" . }}
rules:
- apiGroups:
- policy
resourceNames:
- "{{ template "pulsar.fullname" . }}-{{ .Values.toolset.component }}"
resources:
- podsecuritypolicies
verbs:
- use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.toolset.component }}"
namespace: {{ template "pulsar.namespace" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: "{{ template "pulsar.fullname" . }}-{{ .Values.toolset.component }}"
subjects:
- kind: ServiceAccount
name: "{{ template "pulsar.fullname" . }}-{{ .Values.toolset.component }}"
namespace: {{ template "pulsar.namespace" . }}
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
{{- if .Values.rbac.limit_to_namespace }}
name: "{{ template "pulsar.fullname" . }}-{{ .Values.toolset.component }}-{{ template "pulsar.namespace" . }}"
{{- else}}
name: "{{ template "pulsar.fullname" . }}-{{ .Values.toolset.component }}"
{{- end}}
spec:
readOnlyRootFilesystem: false
privileged: false
allowPrivilegeEscalation: false
runAsUser:
rule: 'RunAsAny'
supplementalGroups:
ranges:
- max: 65535
min: 1
rule: MustRunAs
fsGroup:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
seLinux:
rule: 'RunAsAny'
volumes:
- configMap
- emptyDir
- projected
- secret
- downwardAPI
- persistentVolumeClaim
{{- end}}

View File

@ -23,6 +23,7 @@ kind: StatefulSet
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.toolset.component }}"
namespace: {{ template "pulsar.namespace" . }}
annotations: {{ .Values.toolset.appAnnotations | toYaml | nindent 4 }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.toolset.component }}
@ -63,8 +64,20 @@ spec:
{{- end }}
terminationGracePeriodSeconds: {{ .Values.toolset.gracePeriod }}
serviceAccountName: "{{ template "pulsar.fullname" . }}-{{ .Values.toolset.component }}"
{{- if .Values.toolset.initContainers }}
initContainers:
{{- if .Values.tls.toolset.cacerts.enabled }}
- name: cacerts
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.toolset "root" .) }}"
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.toolset "root" .) }}"
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["sh", "-c"]
args:
- |
bin/certs-combine-pem.sh /pulsar/certs/cacerts/ca-combined.pem {{ template "pulsar.certs.cacerts" (dict "certs" .Values.tls.toolset.cacerts.certs) }}
volumeMounts:
{{- include "pulsar.toolset.certs.volumeMounts" . | nindent 8 }}
{{- end }}
{{- if .Values.toolset.initContainers }}
{{- toYaml .Values.toolset.initContainers | nindent 6 }}
{{- end }}
containers:
@ -82,41 +95,37 @@ spec:
{{- end }}
command: ["sh", "-c"]
args:
- >
- |
{{- if .Values.toolset.additionalCommand }}
{{ .Values.toolset.additionalCommand }}
{{- end }}
{{- if .Values.tls.toolset.cacerts.enabled }}
cd /pulsar/certs/cacerts;
nohup /pulsar/bin/certs-combine-pem-infinity.sh /pulsar/certs/cacerts/ca-combined.pem {{ template "pulsar.certs.cacerts" (dict "certs" .Values.tls.toolset.cacerts.certs) }} > /pulsar/certs/cacerts/certs-combine-pem-infinity.log 2>&1 &
cd /pulsar;
{{- end }}
bin/apply-config-from-env.py conf/client.conf;
bin/apply-config-from-env.py conf/bookkeeper.conf;
{{- include "pulsar.toolset.zookeeper.tls.settings" . | nindent 10 }}
sleep 10000000000
{{- if and (semverCompare "<1.25-0" .Capabilities.KubeVersion.Version) .Values.rbac.enabled .Values.rbac.psp }}
securityContext:
readOnlyRootFilesystem: false
{{- end }}
envFrom:
- configMapRef:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.toolset.component }}"
volumeMounts:
{{- if .Values.auth.authentication.enabled }}
{{- if eq .Values.auth.authentication.provider "jwt" }}
{{- if .Values.auth.authentication.jwt.enabled }}
- mountPath: "/pulsar/tokens"
name: client-token
readOnly: true
{{- end }}
{{- end }}
{{- if and .Values.tls.enabled (or .Values.tls.broker.enabled .Values.tls.proxy.enabled) }}
- mountPath: "/pulsar/certs/proxy-ca"
name: proxy-ca
readOnly: true
{{- end}}
{{- if .Values.toolset.extraVolumeMounts }}
{{ toYaml .Values.toolset.extraVolumeMounts | indent 8 }}
{{- end }}
{{- include "pulsar.toolset.certs.volumeMounts" . | nindent 8 }}
volumes:
{{- if .Values.auth.authentication.enabled }}
{{- if eq .Values.auth.authentication.provider "jwt" }}
{{- if .Values.auth.authentication.jwt.enabled }}
- name: client-token
secret:
secretName: "{{ .Release.Name }}-token-{{ .Values.auth.superUsers.client }}"
@ -125,19 +134,6 @@ spec:
path: client/token
{{- end}}
{{- end}}
{{- if and .Values.tls.enabled (or .Values.tls.broker.enabled .Values.tls.proxy.enabled) }}
- name: proxy-ca
secret:
{{- if eq .Values.certs.internal_issuer.type "selfsigning" }}
secretName: "{{ .Release.Name }}-{{ .Values.tls.ca_suffix }}"
{{- end }}
{{- if eq .Values.certs.internal_issuer.type "ca" }}
secretName: "{{ .Values.certs.issuers.ca.secretName }}"
{{- end }}
items:
- key: ca.crt
path: ca.crt
{{- end}}
{{- if .Values.toolset.extraVolumes }}
{{ toYaml .Values.toolset.extraVolumes | indent 6 }}
{{- end }}

View File

@ -20,41 +20,6 @@
# deploy zookeeper PodMonitor only when `$.Values.zookeeper.podMonitor.enabled` is true
{{- if .Values.components.zookeeper }}
{{- if $.Values.zookeeper.podMonitor.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
name: {{ template "pulsar.fullname" . }}-zookeeper
labels:
app: {{ template "pulsar.name" . }}
chart: {{ template "pulsar.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
jobLabel: zookeeper
podMetricsEndpoints:
- port: http
path: /metrics
scheme: http
interval: {{ $.Values.zookeeper.podMonitor.interval }}
scrapeTimeout: {{ $.Values.zookeeper.podMonitor.scrapeTimeout }}
relabelings:
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
- sourceLabels: [__meta_kubernetes_namespace]
action: replace
targetLabel: kubernetes_namespace
- sourceLabels: [__meta_kubernetes_pod_label_component]
action: replace
targetLabel: job
- sourceLabels: [__meta_kubernetes_pod_name]
action: replace
targetLabel: kubernetes_pod_name
{{- if $.Values.zookeeper.podMonitor.metricRelabelings }}
metricRelabelings: {{ toYaml $.Values.zookeeper.podMonitor.metricRelabelings | nindent 8 }}
{{- end }}
selector:
matchLabels:
{{- include "pulsar.matchLabels" . | nindent 6 }}
component: zookeeper
{{- include "pulsar.podMonitor" (list . "zookeeper" (printf "component: %s" .Values.zookeeper.component)) }}
{{- end }}
{{- end }}

View File

@ -1,85 +0,0 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
{{- if and (semverCompare "<1.25-0" .Capabilities.KubeVersion.Version) .Values.rbac.enabled .Values.rbac.psp }}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }}"
namespace: {{ template "pulsar.namespace" . }}
rules:
- apiGroups:
- policy
resourceNames:
- "{{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }}"
resources:
- podsecuritypolicies
verbs:
- use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }}"
namespace: {{ template "pulsar.namespace" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: "{{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }}"
subjects:
- kind: ServiceAccount
name: "{{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }}"
namespace: {{ template "pulsar.namespace" . }}
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
{{- if .Values.rbac.limit_to_namespace }}
name: "{{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }}-{{ template "pulsar.namespace" . }}"
{{- else}}
name: "{{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }}"
{{- end}}
spec:
readOnlyRootFilesystem: false
privileged: false
allowPrivilegeEscalation: false
runAsUser:
rule: 'RunAsAny'
supplementalGroups:
ranges:
- max: 65535
min: 1
rule: MustRunAs
fsGroup:
rule: 'MustRunAs'
ranges:
- min: 1
max: 65535
seLinux:
rule: 'RunAsAny'
volumes:
- configMap
- emptyDir
- projected
- secret
- downwardAPI
- persistentVolumeClaim
{{- end}}

View File

@ -28,7 +28,10 @@ metadata:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.zookeeper.component }}
annotations:
{{ toYaml .Values.zookeeper.service.annotations | indent 4 }}
{{- with .Values.zookeeper.service.annotations }}
annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
ports:
# prometheus needs to access /metrics endpoint

View File

@ -24,6 +24,7 @@ kind: StatefulSet
metadata:
name: "{{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }}"
namespace: {{ template "pulsar.namespace" . }}
annotations: {{ .Values.zookeeper.appAnnotations | toYaml | nindent 4 }}
labels:
{{- include "pulsar.standardLabels" . | nindent 4 }}
component: {{ .Values.zookeeper.component }}
@ -43,6 +44,10 @@ spec:
{{- include "pulsar.template.labels" . | nindent 8 }}
component: {{ .Values.zookeeper.component }}
annotations:
{{- if not .Values.zookeeper.podMonitor.enabled }}
prometheus.io/scrape: "true"
prometheus.io/port: "{{ .Values.zookeeper.ports.http }}"
{{- end }}
{{- if .Values.zookeeper.restartPodsOnConfigMapChange }}
checksum/config: {{ include (print $.Template.BasePath "/zookeeper-configmap.yaml") . | sha256sum }}
{{- end }}
@ -109,8 +114,20 @@ spec:
securityContext:
{{ toYaml .Values.zookeeper.securityContext | indent 8 }}
{{- end }}
{{- if .Values.zookeeper.initContainers }}
initContainers:
{{- if .Values.tls.zookeeper.cacerts.enabled }}
- name: cacerts
image: "{{ template "pulsar.imageFullName" (dict "image" .Values.images.zookeeper "root" .) }}"
imagePullPolicy: "{{ template "pulsar.imagePullPolicy" (dict "image" .Values.images.zookeeper "root" .) }}"
resources: {{ toYaml .Values.initContainer.resources | nindent 10 }}
command: ["sh", "-c"]
args:
- |
bin/certs-combine-pem.sh /pulsar/certs/cacerts/ca-combined.pem {{ template "pulsar.certs.cacerts" (dict "certs" .Values.tls.zookeeper.cacerts.certs) }}
volumeMounts:
{{- include "pulsar.zookeeper.certs.volumeMounts" . | nindent 8 }}
{{- end }}
{{- if .Values.zookeeper.initContainers }}
{{- toYaml .Values.zookeeper.initContainers | nindent 6 }}
{{- end }}
containers:
@ -123,10 +140,15 @@ spec:
{{- end }}
command: ["sh", "-c"]
args:
- >
- |
{{- if .Values.zookeeper.additionalCommand }}
{{ .Values.zookeeper.additionalCommand }}
{{- end }}
{{- if .Values.tls.zookeeper.cacerts.enabled }}
cd /pulsar/certs/cacerts;
nohup /pulsar/bin/certs-combine-pem-infinity.sh /pulsar/certs/cacerts/ca-combined.pem {{ template "pulsar.certs.cacerts" (dict "certs" .Values.tls.zookeeper.cacerts.certs) }} > /pulsar/certs/cacerts/certs-combine-pem-infinity.log 2>&1 &
cd /pulsar;
{{- end }}
bin/apply-config-from-env.py conf/zookeeper.conf;
{{- include "pulsar.zookeeper.tls.settings" . | nindent 10 }}
bin/generate-zookeeper-config.sh conf/zookeeper.conf;
@ -173,10 +195,6 @@ spec:
{{- $zkConnectCommand = print "nc 127.0.0.1 " .Values.zookeeper.ports.client -}}
{{- end }}
{{- if .Values.zookeeper.probe.readiness.enabled }}
{{- if and (semverCompare "<1.25-0" .Capabilities.KubeVersion.Version) .Values.rbac.enabled .Values.rbac.psp }}
securityContext:
readOnlyRootFilesystem: false
{{- end}}
readinessProbe:
exec:
command:
@ -219,17 +237,7 @@ spec:
- name: "{{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }}-{{ .Values.zookeeper.volumes.datalog.name }}"
mountPath: /pulsar/data-log
{{- end }}
{{- if and .Values.tls.enabled .Values.tls.zookeeper.enabled }}
- mountPath: "/pulsar/certs/zookeeper"
name: zookeeper-certs
readOnly: true
- mountPath: "/pulsar/certs/ca"
name: ca
readOnly: true
- name: keytool
mountPath: "/pulsar/keytool/keytool.sh"
subPath: keytool.sh
{{- end }}
{{- include "pulsar.zookeeper.certs.volumeMounts" . | nindent 8 }}
{{- if .Values.zookeeper.extraVolumeMounts }}
{{ toYaml .Values.zookeeper.extraVolumeMounts | indent 8 }}
{{- end }}
@ -238,34 +246,10 @@ spec:
- name: "{{ template "pulsar.fullname" . }}-{{ .Values.zookeeper.component }}-{{ .Values.zookeeper.volumes.data.name }}"
emptyDir: {}
{{- end }}
{{- include "pulsar.zookeeper.certs.volumes" . | nindent 6 }}
{{- if .Values.zookeeper.extraVolumes }}
{{ toYaml .Values.zookeeper.extraVolumes | indent 6 }}
{{- end }}
{{- if and .Values.tls.enabled .Values.tls.zookeeper.enabled }}
- name: zookeeper-certs
secret:
secretName: "{{ .Release.Name }}-{{ .Values.tls.zookeeper.cert_name }}"
items:
- key: tls.crt
path: tls.crt
- key: tls.key
path: tls.key
- name: ca
secret:
{{- if eq .Values.certs.internal_issuer.type "selfsigning" }}
secretName: "{{ .Release.Name }}-{{ .Values.tls.ca_suffix }}"
{{- end }}
{{- if eq .Values.certs.internal_issuer.type "ca" }}
secretName: "{{ .Values.certs.issuers.ca.secretName }}"
{{- end }}
items:
- key: ca.crt
path: ca.crt
- name: keytool
configMap:
name: "{{ template "pulsar.fullname" . }}-keytool-configmap"
defaultMode: 0755
{{- end}}
{{- include "pulsar.imagePullSecrets" . | nindent 6}}
{{- if and (and .Values.persistence .Values.volumes.persistence) .Values.zookeeper.volumes.persistence }}
volumeClaimTemplates:

View File

@ -21,9 +21,12 @@
### K8S Settings
###
### Namespace to deploy pulsar
# The namespace to use to deploy the pulsar components, if left empty
# will default to .Release.Namespace (aka helm --namespace).
### Namespace to deploy Pulsar
### Note: Prefer using helm's --namespace flag with --create-namespace instead
## The namespace to use to deploy the Pulsar components. If left empty,
## it will default to .Release.Namespace (aka helm --namespace).
## Please note that victoria-metrics-k8s-stack might not be able to scrape Pulsar component metrics by default unless
## it is deployed in the same namespace as Pulsar.
namespace: ""
namespaceCreate: false
@ -35,6 +38,7 @@ clusterDomain: cluster.local
###
## Set to true on install
## There's no need to set this value unless you're using a system that doesn't track .Release.IsInstall or .Release.IsUpgrade (like argocd)
initialize: false
## Set useReleaseStatus to false if you're deploying this chart using a system that doesn't track .Release.IsInstall or .Release.IsUpgrade (like argocd)
useReleaseStatus: true
@ -90,10 +94,8 @@ volumes:
rbac:
enabled: false
psp: false
limit_to_namespace: true
## AntiAffinity
##
## Flag to enable and disable `AntiAffinity` for all components.
@ -101,6 +103,8 @@ rbac:
## If you need to disable AntiAffinity for a component, you can set
## the `affinity.anti_affinity` settings to `false` for that component.
affinity:
## When set to true, the scheduler will try to spread pods across different nodes.
## It is necessary to set this to false if you're using a Kubernetes cluster with fewer than 3 nodes, such as local development environments.
anti_affinity: true
# Set the anti affinity type. Valid values:
# requiredDuringSchedulingIgnoredDuringExecution - rules must be met for pod to be scheduled (hard) requires at least one node per replica
@ -206,8 +210,8 @@ images:
hasCommand: false
oxia:
repository: streamnative/oxia
tag: 0.11.9
pullPolicy: Always
tag: 0.12.0
pullPolicy:
## TLS
## templates/tls-certs.yaml
@ -237,6 +241,13 @@ tls:
# The dnsNames field specifies a list of Subject Alternative Names to be associated with the certificate.
dnsNames:
# - example.com
cacerts:
enabled: false
certs:
# - name: proxy-cacert
# existingSecret: proxy-cacert
# secretKeys:
# - ca.crt
# settings for generating certs for broker
broker:
enabled: false
@ -244,37 +255,96 @@ tls:
# The dnsNames field specifies a list of Subject Alternative Names to be associated with the certificate.
dnsNames:
# - example.com
cacerts:
enabled: false
certs:
# - name: broker-cacert
# existingSecret: broker-cacert
# secretKeys:
# - ca.crt
# settings for generating certs for bookies
bookie:
enabled: false
cert_name: tls-bookie
cacerts:
enabled: false
certs:
# - name: bookie-cacert
# existingSecret: bookie-cacert
# secretKeys:
# - ca.crt
# settings for generating certs for zookeeper
zookeeper:
enabled: false
cert_name: tls-zookeeper
cacerts:
enabled: false
certs:
# - name: zookeeper-cacert
# existingSecret: zookeeper-cacert
# secretKeys:
# - ca.crt
# settings for generating certs for recovery
autorecovery:
cert_name: tls-recovery
cacerts:
enabled: false
certs:
# - name: autorecovery-cacert
# existingSecret: autorecovery-cacert
# secretKeys:
# - ca.crt
# settings for generating certs for toolset
toolset:
cert_name: tls-toolset
cacerts:
enabled: false
certs:
# - name: toolset-cacert
# existingSecret: toolset-cacert
# secretKeys:
# - ca.crt
# TLS setting for function runtime instance
function_instance:
# controls the use of TLS for function runtime connections towards brokers
enabled: false
oxia:
enabled: false
pulsar_metadata:
cacerts:
enabled: false
certs:
# - name: pulsar-metadata-cacert
# existingSecret: pulsar-metadata-cacert
# secretKeys:
# - ca.crt
# Enable or disable broker authentication and authorization.
auth:
authentication:
enabled: false
provider: "jwt"
jwt:
enabled: false
# Enable JWT authentication
# If the token is generated by a secret key, set usingSecretKey to true.
# If the token is generated by a private key, set usingSecretKey to false.
usingSecretKey: false
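## The JWT token secrets are typically created beforehand with the token-generation
## scripts under scripts/pulsar/ in this repository (updated later in this diff to use
## "bin/pulsar tokens" via docker instead of pulsarctl).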
openid:
enabled: false
# # https://pulsar.apache.org/docs/next/security-openid-connect/#enable-openid-connect-authentication-in-the-broker-and-proxy
openIDAllowedTokenIssuers: []
openIDAllowedAudiences: []
openIDTokenIssuerTrustCertsFilePath:
openIDRoleClaim:
openIDAcceptedTimeLeewaySeconds: "0"
openIDCacheSize: "5"
openIDCacheRefreshAfterWriteSeconds: "64800"
openIDCacheExpirationSeconds: "86400"
openIDHttpConnectionTimeoutMillis: "10000"
openIDHttpReadTimeoutMillis: "10000"
openIDKeyIdCacheMissRefreshSeconds: "300"
openIDRequireIssuersUseHttps: "true"
openIDFallbackDiscoveryMode: "DISABLED"
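## A minimal illustrative OpenID setup (the issuer URL and audience are placeholders
## for your identity provider; a sketch only, not a tested configuration):
# auth:
#   authentication:
#     enabled: true
#     openid:
#       enabled: true
#       openIDAllowedTokenIssuers:
#         - "https://keycloak.example.com/realms/pulsar"
#       openIDAllowedAudiences:
#         - "pulsar"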
authorization:
enabled: false
superUsers:
@ -295,13 +365,15 @@ auth:
######################################################################
## cert-manager
## templates/tls-cert-issuer.yaml
## templates/tls-cert-internal-issuer.yaml
##
## Cert manager is used for automatically provisioning TLS certificates
## for components within a Pulsar cluster
certs:
internal_issuer:
apiVersion: cert-manager.io/v1
# To enable internal issuer for TLS certificates, set this to true
# It is necessary to have cert-manager installed in the cluster
enabled: false
component: internal-cert-issuer
# The type of issuer, supports selfsigning and ca
@ -311,10 +383,19 @@ certs:
# 15d
renewBefore: 360h
issuers:
# Used for certs.type as selfsigning, the selfsigned issuer has no dependency on any other resource.
# Used for certs.internal_issuer.type as selfsigning
selfsigning:
# used for certs.type as ca, the CA issuer needs to reference a Secret which contains your CA certificate and signing private key.
# The name of the issuer. If not specified, the default value is used.
name:
# The secret name of the selfsigned CA certificate. If not specified, the default value is used.
secretName:
# used for certs.internal_issuer.type as ca or when internal_issuer is disabled
ca:
# The name of the issuer. It is mandatory to specify this value if TLS is enabled
# and selfsigning is not used.
name:
# The secret name of the CA certificate. It is mandatory to specify this value if TLS is enabled
# and selfsigning is not used.
secretName:
######################################################################
@ -334,7 +415,7 @@ zookeeper:
type: RollingUpdate
podManagementPolicy: Parallel
initContainers: []
# This is how prometheus discovers this component
# This is how Victoria Metrics or Prometheus discovers this component
podMonitor:
enabled: true
interval: 60s
@ -381,6 +462,8 @@ zookeeper:
type: requiredDuringSchedulingIgnoredDuringExecution
# set topologySpreadConstraint to deploy pods across different zones
topologySpreadConstraints: []
# annotations for the app (statefulset/deployment)
appAnnotations: {}
annotations: {}
tolerations: []
gracePeriod: 30
@ -495,7 +578,11 @@ oxia:
replicationFactor: 3
## templates/coordinator-deployment.yaml
coordinator:
# This is how prometheus discovers this component
# annotations for the app (statefulset/deployment)
appAnnotations: {}
# pods annotations
annotations: {}
# This is how Victoria Metrics or Prometheus discovers this component
podMonitor:
enabled: true
interval: 60s
@ -515,9 +602,18 @@ oxia:
tolerations: []
# nodeSelector:
# cloud.google.com/gke-nodepool: default-pool
extraContainers: []
extraVolumes: []
extraVolumeMounts: []
# customConfigMapName: ""
# entrypoint: []
## templates/server-statefulset.yaml
server:
# This is how prometheus discovers this component
# annotations for the app (statefulset/deployment)
appAnnotations: {}
# pods annotations
annotations: {}
# This is how Victoria Metrics or Prometheus discovers this component
podMonitor:
enabled: true
interval: 60s
@ -590,7 +686,7 @@ bookkeeper:
type: RollingUpdate
podManagementPolicy: Parallel
initContainers: []
# This is how prometheus discovers this component
# This is how Victoria Metrics or Prometheus discovers this component
podMonitor:
enabled: true
interval: 60s
@ -634,6 +730,8 @@ bookkeeper:
type: requiredDuringSchedulingIgnoredDuringExecution
# set topologySpreadConstraint to deploy pods across different zones
topologySpreadConstraints: []
# annotations for the app (statefulset/deployment)
appAnnotations: {}
annotations: {}
tolerations: []
gracePeriod: 30
@ -801,7 +899,7 @@ autorecovery:
component: recovery
replicaCount: 1
initContainers: []
# This is how prometheus discovers this component
# This is how Victoria Metrics or Prometheus discovers this component
podMonitor:
enabled: true
interval: 60s
@ -824,6 +922,8 @@ autorecovery:
type: requiredDuringSchedulingIgnoredDuringExecution
# set topologySpreadConstraint to deploy pods across different zones
topologySpreadConstraints: []
# annotations for the app (statefulset/deployment)
appAnnotations: {}
annotations: {}
# tolerations: []
gracePeriod: 30
@ -833,6 +933,10 @@ autorecovery:
requests:
memory: 64Mi
cpu: 0.05
## Bookkeeper auto-recovery service
## templates/autorecovery-service.yaml
service:
annotations: {}
## Bookkeeper auto-recovery service account
## templates/autorecovery-service-account.yaml
service_account:
@ -844,6 +948,8 @@ autorecovery:
BOOKIE_MEM: >
-Xms64m -Xmx64m
PULSAR_PREFIX_useV2WireProtocol: "true"
extraVolumes: []
extraVolumeMounts: []
## Pulsar Zookeeper metadata. The metadata will be deployed as
## soon as the last zookeeper node is reachable. The deployment
@ -876,6 +982,52 @@ pulsar_metadata:
## Timeout for running metadata initialization
initTimeout: 60
## Allow read-only operations on the metadata store when the metadata store is not available.
## This is useful when you want to continue serving requests even if the metadata store has lost quorum and is not fully available.
metadataStoreAllowReadOnlyOperations: false
## The session timeout for the metadata store in milliseconds.
metadataStoreSessionTimeoutMillis: 30000
## Metadata store operation timeout in seconds.
metadataStoreOperationTimeoutSeconds: 30
## The expiry time for the metadata store cache in seconds.
metadataStoreCacheExpirySeconds: 300
## Whether we should enable metadata operations batching
metadataStoreBatchingEnabled: true
## Maximum delay to impose on batching grouping (in milliseconds)
metadataStoreBatchingMaxDelayMillis: 5
## Maximum number of operations to include in a singular batch
metadataStoreBatchingMaxOperations: 1000
## Maximum size of a batch (in KB)
metadataStoreBatchingMaxSizeKb: 128
## BookKeeper client and BookKeeper metadata configuration settings with Pulsar Helm Chart deployments
bookkeeper:
## Controls whether to use the PIP-45 metadata driver (PulsarMetadataClientDriver) for BookKeeper client
## in the Pulsar Broker when using ZooKeeper as a metadata store.
## This setting applies to the Pulsar Broker's BookKeeper client.
## When set to true, the Pulsar Broker's BookKeeper client will use the PIP-45 metadata driver (PulsarMetadataClientDriver).
## When set to false, Pulsar Broker's BookKeeper client will use BookKeeper's default ZooKeeper connection implementation.
usePulsarMetadataClientDriver: false
## Controls whether to use the PIP-45 metadata driver (PulsarMetadataBookieDriver) for BookKeeper components
## when using ZooKeeper as a metadata store.
## This is a global setting that applies to all BookKeeper components.
## When set to true, BookKeeper components will use the PIP-45 metadata driver (PulsarMetadataBookieDriver).
## When set to false, BookKeeper components will use BookKeeper's default ZooKeeper connection implementation.
## Warning: Do not enable this feature unless you are aware of the risks and have tested it in non-production environments.
usePulsarMetadataBookieDriver: false
## The session timeout for the metadata store in milliseconds. This setting is mapped to `zkTimeout` in `bookkeeper.conf`.
## Due to implementation details in the PulsarMetadataBookieDriver, it also applies when the Oxia metadata store is enabled.
metadataStoreSessionTimeoutMillis: 30000
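## An illustrative override that opts in to the PIP-45 drivers (a sketch only; test it in
## non-production environments first, as warned above):
# pulsar_metadata:
#   bookkeeper:
#     usePulsarMetadataClientDriver: true
#     usePulsarMetadataBookieDriver: true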
# resources for bin/pulsar initialize-cluster-metadata
resources:
# requests:
@ -916,11 +1068,21 @@ broker:
# The podManagementPolicy cannot be modified for an existing deployment. If you need to change this value, you will need to manually delete the existing broker StatefulSet and then redeploy the chart.
podManagementPolicy:
initContainers: []
# This is how prometheus discovers this component
# This is how Victoria Metrics or Prometheus discovers this component
podMonitor:
enabled: true
interval: 60s
scrapeTimeout: 60s
# Removes metrics that end with _created suffix
# These metrics are automatically generated by the Prometheus client library to comply with OpenMetrics format
# and aren't currently used. Disable this if you need these metrics, or add an exclusion
# pattern if only a specific metric is needed.
dropUnderscoreCreatedMetrics:
enabled: true
# Optional regex pattern to exclude specific metrics from being dropped
# excludePatterns:
# - pulsar_topic_load_times_created
# Custom metric relabelings to apply to all metrics
metricRelabelings:
# - action: labeldrop
# regex: cluster
@ -961,6 +1123,8 @@ broker:
type: preferredDuringSchedulingIgnoredDuringExecution
# set topologySpreadConstraint to deploy pods across different zones
topologySpreadConstraints: []
# annotations for the app (statefulset/deployment)
appAnnotations: {}
annotations: {}
tolerations: []
gracePeriod: 30
@ -1016,9 +1180,9 @@ broker:
-XX:-ResizePLAB
-XX:+ExitOnOutOfMemoryError
-XX:+PerfDisableSharedMem
managedLedgerDefaultEnsembleSize: "1"
managedLedgerDefaultWriteQuorum: "1"
managedLedgerDefaultAckQuorum: "1"
managedLedgerDefaultEnsembleSize: "2"
managedLedgerDefaultWriteQuorum: "2"
managedLedgerDefaultAckQuorum: "2"
## Add a custom command to the start up process of the broker pods (e.g. update-ca-certificates, jvm commands, etc)
additionalCommand:
@ -1163,11 +1327,21 @@ proxy:
metrics: ~
behavior: ~
initContainers: []
# This is how prometheus discovers this component
# This is how Victoria Metrics or Prometheus discovers this component
podMonitor:
enabled: true
interval: 60s
scrapeTimeout: 60s
# Removes metrics that end with _created suffix
# These metrics are automatically generated by the Prometheus client library to comply with OpenMetrics format
# and aren't currently used. Disable this if you need these metrics, or add an exclusion
# pattern if only a specific metric is needed.
dropUnderscoreCreatedMetrics:
enabled: true
# Optional regex pattern to exclude specific metrics from being dropped
# excludePatterns:
# - pulsar_proxy_new_connections_created
# Custom metric relabelings to apply to all metrics
metricRelabelings:
# - action: labeldrop
# regex: cluster
@ -1203,6 +1377,8 @@ proxy:
type: requiredDuringSchedulingIgnoredDuringExecution
# set topologySpreadConstraint to deploy pods across different zones
topologySpreadConstraints: []
# annotations for the app (statefulset/deployment)
appAnnotations: {}
annotations: {}
tolerations: []
gracePeriod: 30
@ -1275,8 +1451,48 @@ proxy:
http: 8080
https: 8443
service:
annotations: {}
type: LoadBalancer
# Service type defaults to ClusterIP for security reasons.
#
# SECURITY NOTICE: The Pulsar proxy is not designed for direct public internet exposure
# (see https://pulsar.apache.org/docs/4.0.x/administration-proxy/).
#
# If you need to expose the proxy outside of the cluster using a LoadBalancer service type:
# 1. Set type to LoadBalancer only in secured environments with proper network controls.
# In cloud managed Kubernetes clusters, add annotations to the service to create an internal
# load balancer, and verify that the resulting configuration does not expose the load
# balancer to the public internet.
# 2. Configure authentication and authorization
# 3. Use TLS for all connections
# 4. If you are exposing the proxy to insecure networks, implement additional security measures
# such as IP restrictions (loadBalancerSourceRanges)
#
# Please note that the Apache Pulsar project takes no responsibility for any security issues
# in your deployment. Exposing the cluster to insecure networks through the Pulsar proxy is not supported.
#
# Previous chart versions defaulted to LoadBalancer which could create security risks.
type: ClusterIP
# When using a LoadBalancer service type, add the internal load balancer annotations shown below to the service.
annotations: {
## For security reasons, set internal load balancer annotations when using a LoadBalancer service type,
## and verify the configuration so that the load balancer is not exposed to the public internet.
## This information below is for reference only and may not be applicable to your cloud provider.
## Please refer to the cloud provider's documentation for the correct annotations.
## Kubernetes documentation about internal load balancers
## https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
## AWS / EKS
## Ensure that you have recent AWS Load Balancer Controller installed.
## Docs: https://kubernetes-sigs.github.io/aws-load-balancer-controller/latest/guide/service/annotations/
# service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
## Azure / AKS
## Docs: https://learn.microsoft.com/en-us/azure/aks/internal-lb
# service.beta.kubernetes.io/azure-load-balancer-internal: "true"
## GCP / GKE
## Docs: https://cloud.google.com/kubernetes-engine/docs/concepts/service-load-balancer-parameters
# networking.gke.io/load-balancer-type: "Internal"
## Allow global access to the internal load balancer when needed.
# networking.gke.io/internal-load-balancer-allow-global-access: "true"
}
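## An illustrative LoadBalancer override for a secured, internal-only environment
## (the CIDR range and annotation are placeholders; verify support with your provider):
# proxy:
#   service:
#     type: LoadBalancer
#     annotations:
#       service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
#     loadBalancerSourceRanges:
#       - "10.0.0.0/8"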
## Optional. Leave it blank to get next available random IP.
loadBalancerIP: ""
## Set external traffic policy to: "Local" to preserve source IP on providers supporting it.
@ -1331,6 +1547,8 @@ toolset:
# cloud.google.com/gke-nodepool: default-pool
# set topologySpreadConstraint to deploy pods across different zones
topologySpreadConstraints: []
# annotations for the app (statefulset/deployment)
appAnnotations: {}
annotations: {}
tolerations: []
gracePeriod: 30
@ -1367,92 +1585,239 @@ toolset:
additionalCommand:
#############################################################
### Monitoring Stack : kube-prometheus-stack chart
### Monitoring Stack : victoria-metrics-k8s-stack chart
#############################################################
## Prometheus, Grafana, and the rest of the kube-prometheus-stack are managed by the dependent chart here:
## https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
## For sample values, please see their documentation.
kube-prometheus-stack:
## Victoria Metrics, Grafana, and the rest of the monitoring stack are managed by the dependent chart here:
## https://github.com/VictoriaMetrics/helm-charts/blob/master/charts/victoria-metrics-k8s-stack
## For sample values, please see: https://github.com/VictoriaMetrics/helm-charts/blob/master/charts/victoria-metrics-k8s-stack/values.yaml
victoria-metrics-k8s-stack:
## Enable the victoria-metrics-k8s-stack chart
enabled: true
prometheus:
## VictoriaMetrics Operator dependency chart configuration
victoria-metrics-operator:
enabled: true
# Install CRDs for VictoriaMetrics Operator
crds:
plain: true
operator:
## By default, the operator is configured not to convert Prometheus Operator monitoring.coreos.com/v1 objects
## to VictoriaMetrics operator operator.victoriametrics.com/v1beta1 objects.
# Keep this enabled if you use Prometheus Operator objects for other purposes.
disable_prometheus_converter: true
## Single-node VM instance
vmsingle:
enabled: true
## -- Full spec for the VMSingle CRD. Allowed values are described at https://docs.victoriametrics.com/operator/api#vmsinglespec
spec:
retentionPeriod: "10d"
storage:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 50Gi
## VM Agent for scraping metrics
vmagent:
enabled: true
## Minikube specific settings - uncomment when using minikube
# spec:
# volumes:
# - hostPath:
# path: /var/lib/minikube/certs/etcd
# type: DirectoryOrCreate
# name: etcd-certs
# volumeMounts:
# - mountPath: /var/lib/minikube/certs/etcd
# name: etcd-certs
## VM Alert for alerting rules - disabled by default
vmalert:
enabled: false
## Alertmanager component - disabled by default
alertmanager:
enabled: false
## Grafana component
## Refer to https://github.com/grafana/helm-charts/blob/main/charts/grafana/values.yaml
grafana:
enabled: true
# By default, a random password is generated for Grafana at installation time when `adminPassword` is left empty.
# You can find out the actual password by running the following command:
# kubectl get secret -l app.kubernetes.io/name=grafana -o=jsonpath="{.items[0].data.admin-password}" | base64 --decode
adminPassword:
# Configure Pulsar dashboards for Grafana
persistence:
enabled: true
size: 5Gi
## Disable Grafana sidecar dashboards,
## since they cannot be enabled at the same time as the dashboards configured below
sidecar:
dashboards:
enabled: false
# grafana.ini settings
grafana.ini:
analytics:
check_for_updates: false
dashboards:
default_home_dashboard_path: /var/lib/grafana/dashboards/pulsar/overview.json
## Configure Pulsar dashboards for Grafana
dashboardProviders:
dashboardproviders.yaml:
apiVersion: 1
providers:
- name: 'pulsar'
- name: 'default'
orgId: 1
folder: 'Pulsar'
folder: ''
type: file
disableDeletion: true
editable: true
allowUiUpdates: true
options:
path: /var/lib/grafana/dashboards/default
- name: oxia
orgId: 1
folder: Oxia
type: file
disableDeletion: true
editable: true
allowUiUpdates: true
options:
path: /var/lib/grafana/dashboards/oxia
- name: pulsar
orgId: 1
folder: Pulsar
type: file
disableDeletion: true
editable: true
allowUiUpdates: true
options:
path: /var/lib/grafana/dashboards/pulsar
dashboards:
default:
victoriametrics:
gnetId: 10229
revision: 38
datasource: VictoriaMetrics
kubernetes:
gnetId: 14205
datasource: VictoriaMetrics
oxia:
oxia-containers:
url: https://raw.githubusercontent.com/lhotari/pulsar-grafana-dashboards/master/oxia/oxia-containers.json
oxia-coordinator:
url: https://raw.githubusercontent.com/lhotari/pulsar-grafana-dashboards/master/oxia/oxia-coordinator.json
oxia-golang:
url: https://raw.githubusercontent.com/lhotari/pulsar-grafana-dashboards/master/oxia/oxia-golang.json
oxia-grpc:
url: https://raw.githubusercontent.com/lhotari/pulsar-grafana-dashboards/master/oxia/oxia-grpc.json
oxia-nodes:
url: https://raw.githubusercontent.com/lhotari/pulsar-grafana-dashboards/master/oxia/oxia-nodes.json
oxia-overview:
url: https://raw.githubusercontent.com/lhotari/pulsar-grafana-dashboards/master/oxia/oxia-overview.json
oxia-shards:
url: https://raw.githubusercontent.com/lhotari/pulsar-grafana-dashboards/master/oxia/oxia-shards.json
pulsar:
# Download the maintained dashboards from AL 2.0 licenced repo https://github.com/streamnative/apache-pulsar-grafana-dashboard
bookkeeper-compaction:
url: https://raw.githubusercontent.com/lhotari/pulsar-grafana-dashboards/master/pulsar/bookkeeper-compaction.json
bookkeeper:
url: https://raw.githubusercontent.com/streamnative/apache-pulsar-grafana-dashboard/master/dashboards.kubernetes/bookkeeper.json
datasource: Prometheus
broker:
url: https://raw.githubusercontent.com/streamnative/apache-pulsar-grafana-dashboard/master/dashboards.kubernetes/broker.json
datasource: Prometheus
connector_sink:
url: https://raw.githubusercontent.com/streamnative/apache-pulsar-grafana-dashboard/master/dashboards.kubernetes/connector_sink.json
datasource: Prometheus
connector_source:
url: https://raw.githubusercontent.com/streamnative/apache-pulsar-grafana-dashboard/master/dashboards.kubernetes/connector_source.json
datasource: Prometheus
container:
url: https://raw.githubusercontent.com/streamnative/apache-pulsar-grafana-dashboard/master/dashboards.kubernetes/container.json
datasource: Prometheus
url: https://raw.githubusercontent.com/lhotari/pulsar-grafana-dashboards/master/pulsar/bookkeeper.json
broker-cache-by-broker:
url: https://raw.githubusercontent.com/lhotari/pulsar-grafana-dashboards/master/pulsar/broker-cache-by-broker.json
broker-cache:
url: https://raw.githubusercontent.com/lhotari/pulsar-grafana-dashboards/master/pulsar/broker-cache.json
connector-sink:
url: https://raw.githubusercontent.com/lhotari/pulsar-grafana-dashboards/master/pulsar/connector-sink.json
connector-source:
url: https://raw.githubusercontent.com/lhotari/pulsar-grafana-dashboards/master/pulsar/connector-source.json
functions:
url: https://raw.githubusercontent.com/streamnative/apache-pulsar-grafana-dashboard/master/dashboards.kubernetes/functions.json
datasource: Prometheus
url: https://raw.githubusercontent.com/lhotari/pulsar-grafana-dashboards/master/pulsar/functions.json
jvm:
url: https://raw.githubusercontent.com/streamnative/apache-pulsar-grafana-dashboard/master/dashboards.kubernetes/jvm.json
datasource: Prometheus
loadbalance:
url: https://raw.githubusercontent.com/streamnative/apache-pulsar-grafana-dashboard/master/dashboards.kubernetes/loadbalance.json
datasource: Prometheus
url: https://raw.githubusercontent.com/lhotari/pulsar-grafana-dashboards/master/pulsar/jvm.json
load-balancing:
url: https://raw.githubusercontent.com/lhotari/pulsar-grafana-dashboards/master/pulsar/load-balancing.json
messaging:
url: https://raw.githubusercontent.com/streamnative/apache-pulsar-grafana-dashboard/master/dashboards.kubernetes/messaging.json
datasource: Prometheus
url: https://raw.githubusercontent.com/lhotari/pulsar-grafana-dashboards/master/pulsar/messaging.json
namespace:
url: https://raw.githubusercontent.com/lhotari/pulsar-grafana-dashboards/master/pulsar/namespace.json
node:
url: https://raw.githubusercontent.com/streamnative/apache-pulsar-grafana-dashboard/master/dashboards.kubernetes/node.json
datasource: Prometheus
url: https://raw.githubusercontent.com/lhotari/pulsar-grafana-dashboards/master/pulsar/node.json
offloader:
url: https://raw.githubusercontent.com/lhotari/pulsar-grafana-dashboards/master/pulsar/offloader.json
overview-by-broker:
url: https://raw.githubusercontent.com/lhotari/pulsar-grafana-dashboards/master/pulsar/overview-by-broker.json
overview:
url: https://raw.githubusercontent.com/streamnative/apache-pulsar-grafana-dashboard/master/dashboards.kubernetes/overview.json
datasource: Prometheus
url: https://raw.githubusercontent.com/lhotari/pulsar-grafana-dashboards/master/pulsar/overview.json
proxy:
url: https://raw.githubusercontent.com/streamnative/apache-pulsar-grafana-dashboard/master/dashboards.kubernetes/proxy.json
datasource: Prometheus
recovery:
url: https://raw.githubusercontent.com/streamnative/apache-pulsar-grafana-dashboard/master/dashboards.kubernetes/recovery.json
datasource: Prometheus
url: https://raw.githubusercontent.com/lhotari/pulsar-grafana-dashboards/master/pulsar/proxy.json
sockets:
url: https://raw.githubusercontent.com/lhotari/pulsar-grafana-dashboards/master/pulsar/sockets.json
topic:
url: https://raw.githubusercontent.com/streamnative/apache-pulsar-grafana-dashboard/master/dashboards.kubernetes/topic.json
datasource: Prometheus
transaction:
url: https://raw.githubusercontent.com/streamnative/apache-pulsar-grafana-dashboard/master/dashboards.kubernetes/transaction.json
datasource: Prometheus
url: https://raw.githubusercontent.com/lhotari/pulsar-grafana-dashboards/master/pulsar/topic.json
zookeeper:
url: https://raw.githubusercontent.com/streamnative/apache-pulsar-grafana-dashboard/master/dashboards.kubernetes/zookeeper-3.6.json
datasource: Prometheus
url: https://raw.githubusercontent.com/lhotari/pulsar-grafana-dashboards/master/pulsar/zookeeper.json
## Node exporter component
prometheus-node-exporter:
enabled: true
hostRootFsMount:
enabled: false
alertmanager:
enabled: false
## Kube state metrics component
kube-state-metrics:
enabled: true
## Components scraping Kubernetes services
kubelet:
enabled: true
kubeApiServer:
enabled: true
kubeControllerManager:
enabled: true
## Additional settings for minikube environments
vmScrape:
spec:
endpoints:
- bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
port: http-metrics
scheme: https
tlsConfig:
caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecureSkipVerify: true # For development environments like minikube
coreDns:
enabled: true
kubeEtcd:
enabled: true
## Minikube specific settings - uncomment or adjust when using minikube
# service:
# port: 2381
# targetPort: 2381
# vmScrape:
# spec:
# endpoints:
# - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
# port: http-metrics
# scheme: http # Minikube often uses http instead of https for etcd
kubeScheduler:
enabled: true
## Additional settings for minikube environments
vmScrape:
spec:
endpoints:
- bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
port: http-metrics
scheme: https
tlsConfig:
caFile: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
insecureSkipVerify: true # For development environments like minikube
## Components Stack: pulsar_manager
## templates/pulsar-manager.yaml
@ -1467,6 +1832,8 @@ pulsar_manager:
# cloud.google.com/gke-nodepool: default-pool
# set topologySpreadConstraint to deploy pods across different zones
topologySpreadConstraints: []
# annotations for the app (statefulset/deployment)
appAnnotations: {}
annotations: {}
tolerations: []
extraVolumes: []
@ -1572,3 +1939,7 @@ initContainer:
requests:
memory: 256Mi
cpu: 0.1
## Array of extra objects to deploy with the release (evaluated as a template)
##
extraDeploy: []
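## An illustrative sketch (any valid Kubernetes manifest can be listed here, and helm
## template expressions are evaluated; the name and data are placeholders):
# extraDeploy:
#   - apiVersion: v1
#     kind: ConfigMap
#     metadata:
#       name: '{{ .Release.Name }}-extra'
#     data:
#       foo: bar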

View File

@ -37,7 +37,7 @@ components:
pulsar_manager: false
## disable monitoring stack
kube-prometheus-stack:
victoria-metrics-k8s-stack:
enabled: false
prometheusOperator:
enabled: false

View File

@ -37,7 +37,7 @@ components:
pulsar_manager: false
## disable monitoring stack
kube-prometheus-stack:
victoria-metrics-k8s-stack:
enabled: false
prometheusOperator:
enabled: false

View File

@ -0,0 +1,58 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# disable monitoring
victoria-metrics-k8s-stack:
enabled: false
victoria-metrics-operator:
enabled: false
vmsingle:
enabled: false
vmagent:
enabled: false
kube-state-metrics:
enabled: false
prometheus-node-exporter:
enabled: false
grafana:
enabled: false
# disable pod monitors
autorecovery:
podMonitor:
enabled: false
bookkeeper:
podMonitor:
enabled: false
oxia:
server:
podMonitor:
enabled: false
coordinator:
podMonitor:
enabled: false
broker:
podMonitor:
enabled: false
proxy:
podMonitor:
enabled: false
zookeeper:
podMonitor:
enabled: false

View File

@ -28,7 +28,7 @@ components:
pulsar_manager: true
## disable monitoring stack
kube-prometheus-stack:
victoria-metrics-k8s-stack:
enabled: false
prometheusOperator:
enabled: false

View File

@ -0,0 +1,46 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# disable AntiAffinity
affinity:
anti_affinity: false
victoria-metrics-k8s-stack:
grafana:
adminPassword: verysecureword123
bookkeeper:
configData:
# more aggressive disk cleanup
journalMaxSizeMB: "256"
majorCompactionInterval: "600"
minorCompactionInterval: "300"
compactionRateByEntries: "5000"
gcWaitTime: "60000"
broker:
configData:
# more aggressive disk cleanup
managedLedgerMinLedgerRolloverTimeMinutes: "1"
managedLedgerMaxLedgerRolloverTimeMinutes: "5"
# configure deletion of inactive topics
brokerDeleteInactiveTopicsMaxInactiveDurationSeconds: "86400"
proxy:
replicaCount: 1

View File

@ -37,7 +37,7 @@ components:
pulsar_manager: false
## disable monitoring stack
kube-prometheus-stack:
victoria-metrics-k8s-stack:
enabled: false
prometheusOperator:
enabled: false

View File

@ -25,14 +25,14 @@ fi
OUTPUT=${PULSAR_CHART_HOME}/output
OUTPUT_BIN=${OUTPUT}/bin
: "${KUBECTL_VERSION:=1.23.17}"
: "${KUBECTL_VERSION:=1.28.15}"
KUBECTL_BIN=$OUTPUT_BIN/kubectl
HELM_BIN=$OUTPUT_BIN/helm
: "${HELM_VERSION:=3.14.4}"
: "${KIND_VERSION:=0.22.0}"
: "${HELM_VERSION:=3.16.4}"
: "${KIND_VERSION:=0.27.0}"
KIND_BIN=$OUTPUT_BIN/kind
CR_BIN=$OUTPUT_BIN/cr
: "${CR_VERSION:=1.6.0}"
: "${CR_VERSION:=1.7.0}"
KUBECONFORM_BIN=$OUTPUT_BIN/kubeconform
: "${KUBECONFORM_VERSION:=0.6.7}"
export PATH="$OUTPUT_BIN:$PATH"

View File

@ -25,7 +25,7 @@ set -e
NAMESPACE=cert-manager
NAME=cert-manager
# check compatibility with k8s versions from https://cert-manager.io/docs/installation/supported-releases/
VERSION=v1.12.13
VERSION=v1.12.17
# Install cert-manager CustomResourceDefinition resources
echo "Installing cert-manager CRD resources ..."
@ -41,10 +41,12 @@ echo "Updating local helm chart repository cache ..."
helm repo update
echo "Installing cert-manager ${VERSION} to namespace ${NAMESPACE} as '${NAME}' ..."
helm install \
helm upgrade \
--install \
--namespace ${NAMESPACE} \
--create-namespace \
--version ${VERSION} \
--set featureGates=AdditionalCertificateOutputFormats=true \
${NAME} \
jetstack/cert-manager
echo "Successfully installed cert-manager ${VERSION}."

View File

@ -21,7 +21,7 @@
# This script is used to upgrade the Prometheus Operator CRDs before running "helm upgrade"
# source: https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack#upgrading-an-existing-release-to-a-new-major-version
# "Run these commands to update the CRDs before applying the upgrade."
PROMETHEUS_OPERATOR_VERSION="${1:-"0.77.1"}"
PROMETHEUS_OPERATOR_VERSION="${1:-"0.80.0"}"
PREFIX_URL="https://raw.githubusercontent.com/prometheus-operator/prometheus-operator/v${PROMETHEUS_OPERATOR_VERSION}/example/prometheus-operator-crd"
for crd in alertmanagerconfigs alertmanagers podmonitors probes prometheusagents prometheuses prometheusrules scrapeconfigs servicemonitors thanosrulers; do
# "--force-conflicts" is required to upgrade the CRDs. Following instructions from https://github.com/prometheus-community/helm-charts/issues/2489

View File

@ -18,34 +18,13 @@
# under the License.
#
if [ -z "$CHART_HOME" ]; then
echo "error: CHART_HOME should be initialized"
exit 1
if [ -z "$PULSAR_VERSION" ]; then
if command -v yq &> /dev/null; then
# use yq to get the appVersion from the Chart.yaml file
PULSAR_VERSION=$(yq .appVersion charts/pulsar/Chart.yaml)
else
# use a default version if yq is not installed
PULSAR_VERSION="4.0.3"
fi
fi
OUTPUT=${CHART_HOME}/output
OUTPUT_BIN=${OUTPUT}/bin
PULSARCTL_VERSION=v3.0.2.6
PULSARCTL_BIN=${HOME}/.pulsarctl/pulsarctl
export PATH=${HOME}/.pulsarctl/plugins:${PATH}
test -d "$OUTPUT_BIN" || mkdir -p "$OUTPUT_BIN"
function pulsar::verify_pulsarctl() {
if test -x "$PULSARCTL_BIN"; then
return
fi
return 1
}
function pulsar::ensure_pulsarctl() {
if pulsar::verify_pulsarctl; then
return 0
fi
echo "Get pulsarctl install.sh script ..."
install_script=$(mktemp)
trap "test -f $install_script && rm $install_script" RETURN
curl --retry 10 -L -o $install_script https://raw.githubusercontent.com/streamnative/pulsarctl/master/install.sh
chmod +x $install_script
$install_script --user --version ${PULSARCTL_VERSION}
}
PULSAR_TOKENS_CONTAINER_IMAGE="apachepulsar/pulsar:${PULSAR_VERSION}"

View File

@ -20,9 +20,12 @@
set -e
CHART_HOME=$(unset CDPATH && cd $(dirname "${BASH_SOURCE[0]}")/../.. && pwd)
SCRIPT_DIR="$(unset CDPATH && cd "$(dirname "${BASH_SOURCE[0]}")" &>/dev/null && pwd)"
CHART_HOME=$(unset CDPATH && cd "$SCRIPT_DIR/../.." && pwd)
cd ${CHART_HOME}
source "${SCRIPT_DIR}/common_auth.sh"
usage() {
cat <<EOF
This script is used to generate a token for a given pulsar role.
@ -86,10 +89,6 @@ if [[ "x${role}" == "x" ]]; then
exit 1
fi
source ${CHART_HOME}/scripts/pulsar/common_auth.sh
pulsar::ensure_pulsarctl
namespace=${namespace:-pulsar}
release=${release:-pulsar-dev}
@ -101,7 +100,6 @@ function pulsar::jwt::get_secret() {
if [[ "${local}" == "true" ]]; then
cp ${type} ${tmpfile}
else
echo "kubectl get -n ${namespace} secrets ${secret_name} -o jsonpath="{.data.${type}}" | base64 --decode > ${tmpfile}"
kubectl get -n ${namespace} secrets ${secret_name} -o jsonpath="{.data['${type}']}" | base64 --decode > ${tmpfile}
fi
}
@ -110,31 +108,41 @@ function pulsar::jwt::generate_symmetric_token() {
local token_name="${release}-token-${role}"
local secret_name="${release}-token-symmetric-key"
tmpfile=$(mktemp)
trap "test -f $tmpfile && rm $tmpfile" RETURN
tokentmpfile=$(mktemp)
trap "test -f $tokentmpfile && rm $tokentmpfile" RETURN
pulsar::jwt::get_secret SECRETKEY ${tmpfile} ${secret_name}
${PULSARCTL_BIN} token create -a HS256 --secret-key-file ${tmpfile} --subject ${role} 2&> ${tokentmpfile}
newtokentmpfile=$(mktemp)
local tmpdir=$(mktemp -d)
trap "test -d $tmpdir && rm -rf $tmpdir" RETURN
secretkeytmpfile=${tmpdir}/secret.key
tokentmpfile=${tmpdir}/token.jwt
pulsar::jwt::get_secret SECRETKEY ${secretkeytmpfile} ${secret_name}
docker run --user 0 --rm -t -v ${tmpdir}:/keydir ${PULSAR_TOKENS_CONTAINER_IMAGE} bin/pulsar tokens create -a HS256 --subject "${role}" --secret-key=file:/keydir/secret.key > ${tokentmpfile}
newtokentmpfile=${tmpdir}/token.jwt.new
tr -d '\n' < ${tokentmpfile} > ${newtokentmpfile}
echo "kubectl create secret generic ${token_name} -n ${namespace} --from-file="TOKEN=${newtokentmpfile}" --from-literal="TYPE=symmetric" ${local:+ -o yaml --dry-run=client}"
kubectl create secret generic ${token_name} -n ${namespace} --from-file="TOKEN=${newtokentmpfile}" --from-literal="TYPE=symmetric" ${local:+ -o yaml --dry-run=client}
rm -rf $tmpdir
}
function pulsar::jwt::generate_asymmetric_token() {
local token_name="${release}-token-${role}"
local secret_name="${release}-token-asymmetric-key"
privatekeytmpfile=$(mktemp)
trap "test -f $privatekeytmpfile && rm $privatekeytmpfile" RETURN
tokentmpfile=$(mktemp)
trap "test -f $tokentmpfile && rm $tokentmpfile" RETURN
local tmpdir=$(mktemp -d)
trap "test -d $tmpdir && rm -rf $tmpdir" RETURN
privatekeytmpfile=${tmpdir}/privatekey.der
tokentmpfile=${tmpdir}/token.jwt
pulsar::jwt::get_secret PRIVATEKEY ${privatekeytmpfile} ${secret_name}
${PULSARCTL_BIN} token create -a RS256 --private-key-file ${privatekeytmpfile} --subject ${role} 2&> ${tokentmpfile}
newtokentmpfile=$(mktemp)
# Generate token
docker run --user 0 --rm -t -v ${tmpdir}:/keydir ${PULSAR_TOKENS_CONTAINER_IMAGE} bin/pulsar tokens create -a RS256 --subject "${role}" --private-key=file:/keydir/privatekey.der > ${tokentmpfile}
newtokentmpfile=${tmpdir}/token.jwt.new
tr -d '\n' < ${tokentmpfile} > ${newtokentmpfile}
kubectl create secret generic ${token_name} -n ${namespace} --from-file="TOKEN=${newtokentmpfile}" --from-literal="TYPE=asymmetric" ${local:+ -o yaml --dry-run=client}
rm -rf $tmpdir
}
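# Note: the docker-based "bin/pulsar tokens" invocations above replace the previous
# pulsarctl workflow; the key material is shared with the container through the ${tmpdir}
# bind mount and removed by the RETURN trap and the explicit cleanup.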
if [[ "${symmetric}" == "true" ]]; then

View File

@ -20,9 +20,12 @@
set -e
CHART_HOME=$(unset CDPATH && cd $(dirname "${BASH_SOURCE[0]}")/../.. && pwd)
SCRIPT_DIR="$(unset CDPATH && cd "$(dirname "${BASH_SOURCE[0]}")" &>/dev/null && pwd)"
CHART_HOME=$(unset CDPATH && cd "$SCRIPT_DIR/../.." && pwd)
cd ${CHART_HOME}
source "${SCRIPT_DIR}/common_auth.sh"
usage() {
cat <<EOF
This script is used to generate a token secret key for a given pulsar helm release.
@ -74,10 +77,6 @@ case $key in
esac
done
source ${CHART_HOME}/scripts/pulsar/common_auth.sh
pulsar::ensure_pulsarctl
namespace=${namespace:-pulsar}
release=${release:-pulsar-dev}
local_cmd=${file:+-o yaml --dry-run=client >secret.yaml}
@ -85,31 +84,38 @@ local_cmd=${file:+-o yaml --dry-run=client >secret.yaml}
function pulsar::jwt::generate_symmetric_key() {
local secret_name="${release}-token-symmetric-key"
tmpfile=$(mktemp)
trap "test -f $tmpfile && rm $tmpfile" RETURN
${PULSARCTL_BIN} token create-secret-key --output-file ${tmpfile}
mv $tmpfile SECRETKEY
kubectl create secret generic ${secret_name} -n ${namespace} --from-file=SECRETKEY ${local:+ -o yaml --dry-run=client}
if [[ "${local}" != "true" ]]; then
rm SECRETKEY
local tmpdir=$(mktemp -d)
trap "test -d $tmpdir && rm -rf $tmpdir" RETURN
local tmpfile=${tmpdir}/SECRETKEY
docker run --rm -t ${PULSAR_TOKENS_CONTAINER_IMAGE} bin/pulsar tokens create-secret-key > "${tmpfile}"
kubectl create secret generic ${secret_name} -n ${namespace} --from-file=$tmpfile ${local:+ -o yaml --dry-run=client}
# if local is true, keep the file available for debugging purposes
if [[ "${local}" == "true" ]]; then
mv $tmpfile SECRETKEY
fi
rm -rf $tmpdir
}
function pulsar::jwt::generate_asymmetric_key() {
local secret_name="${release}-token-asymmetric-key"
privatekeytmpfile=$(mktemp)
trap "test -f $privatekeytmpfile && rm $privatekeytmpfile" RETURN
publickeytmpfile=$(mktemp)
trap "test -f $publickeytmpfile && rm $publickeytmpfile" RETURN
${PULSARCTL_BIN} token create-key-pair -a RS256 --output-private-key ${privatekeytmpfile} --output-public-key ${publickeytmpfile}
mv $privatekeytmpfile PRIVATEKEY
mv $publickeytmpfile PUBLICKEY
kubectl create secret generic ${secret_name} -n ${namespace} --from-file=PRIVATEKEY --from-file=PUBLICKEY ${local:+ -o yaml --dry-run=client}
if [[ "${local}" != "true" ]]; then
rm PRIVATEKEY
rm PUBLICKEY
local tmpdir=$(mktemp -d)
trap "test -d $tmpdir && rm -rf $tmpdir" RETURN
privatekeytmpfile=${tmpdir}/PRIVATEKEY
publickeytmpfile=${tmpdir}/PUBLICKEY
# Generate key pair
docker run --user 0 --rm -t -v ${tmpdir}:/keydir ${PULSAR_TOKENS_CONTAINER_IMAGE} bin/pulsar tokens create-key-pair --output-private-key=/keydir/PRIVATEKEY --output-public-key=/keydir/PUBLICKEY
kubectl create secret generic ${secret_name} -n ${namespace} --from-file=$privatekeytmpfile --from-file=$publickeytmpfile ${local:+ -o yaml --dry-run=client}
# if local is true, keep the files available for debugging purposes
if [[ "${local}" == "true" ]]; then
mv $privatekeytmpfile PRIVATEKEY
mv $publickeytmpfile PUBLICKEY
fi
rm -rf $tmpdir
}
if [[ "${symmetric}" == "true" ]]; then

View File

@ -74,10 +74,6 @@ if [[ "x${role}" == "x" ]]; then
exit 1
fi
source ${CHART_HOME}/scripts/pulsar/common_auth.sh
pulsar::ensure_pulsarctl
namespace=${namespace:-pulsar}
release=${release:-pulsar-dev}

View File

@ -0,0 +1,23 @@
#!/usr/bin/env bash
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#
# This script is used to upgrade the Victoria Metrics Operator CRDs before running "helm upgrade"
VM_OPERATOR_VERSION="${1:-"0.42.4"}"
kubectl apply --server-side --force-conflicts -f "https://github.com/VictoriaMetrics/operator/releases/download/v${VM_OPERATOR_VERSION}/crd.yaml"
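# Example invocation (the script path is an assumption based on this repository's layout):
#   ./scripts/pulsar/upgrade_vm_operator_crds.sh 0.42.4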