46 Commits

Author SHA1 Message Date
Lari Hotari
63cbdfe687
Increase default initialDelaySeconds for Zookeeper probes to workaround ZOOKEEPER-3988 (#202)
- When TLS is enabled for Zookeeper, NettyServerCnxnFactory is used.
  It is affected by https://github.com/apache/pulsar/issues/11070 /
  https://issues.apache.org/jira/browse/ZOOKEEPER-3988
  - as a workaround, increase initialDelaySeconds from 10 to 20 to
    reduce the likelihood of hitting ZOOKEEPER-3988
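A minimal values sketch of the changed default, assuming the chart nests probe settings under each component (the key layout is an assumption):
```
zookeeper:
  probe:
    readiness:
      initialDelaySeconds: 20   # was 10; gives NettyServerCnxnFactory time to start
    liveness:
      initialDelaySeconds: 20
```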
2022-01-18 18:38:29 +02:00
Lari Hotari
a27ec0aebf
Change default podManagementPolicy to Parallel for Zookeeper (#203) 2022-01-18 18:38:22 +02:00
Hang Chen
aea6a4f367
useV2WireProtocol for bookkeeper autorecovery (#165) 2022-01-18 09:06:26 +02:00
cogito-kyle
adbc6b7fcf
Add custom labels to all k8s objects in chart (#201) 2022-01-18 08:47:49 +02:00
Aaron Johnson
cee3b5c5e6
added additionalCommand parameter (#150)
Co-authored-by: Aaron Johnson <aaron.johnson@crowdstrike.com>
2022-01-05 10:26:55 -06:00
Frank Kelly
a919f309c6
Add ability to run extra commands in the initialization jobs e.g. to quit istio sidecars (#181) 2022-01-05 16:24:19 +02:00
Jiwei Guo
0f6dea8022
Bump to Pulsar 2.7.4 (#189)
* Bump to Pulsar 2.7.4
2021-12-30 08:55:57 +02:00
Lari Hotari
a16c6bbf19
Make k8s probe timeoutSeconds configurable and set default to 5s for k8s 1.20+ compatibility (#179)
- set to 5 seconds by default

- address compatibility with Kubernetes 1.20+. This impacts "bin/pulsar-zookeeper-ruok.sh" exec probe used in ZK.
  "Before Kubernetes 1.20, the field timeoutSeconds was not respected for exec probes: probes continued running indefinitely, even past their configured deadline, until a result was returned."
   https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes
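An illustrative per-component override for the new knob (nesting assumed, as in the probe examples above):
```
zookeeper:
  probe:
    liveness:
      timeoutSeconds: 5   # new default; k8s 1.20+ now enforces this for exec probes
```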
2021-11-25 08:46:42 +01:00
Frank Kelly
5b10f48f5b
Fix #152 Add Helm chart support for Istio port naming (attempt 2) (#162)
Fixes #152 

### Motivation

Support prefix in front of port names to abide by Istio protocol rules
https://istio.io/latest/docs/ops/configuration/traffic-management/protocol-selection/#explicit-protocol-selection

### Modifications

Support adding a prefix:
- pulsar -> tcp-pulsar
- pulsarssl -> tls-pulsarssl, etc.
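A sketch of the resulting Service port names (port numbers are Pulsar's defaults; whether the prefix is toggled by a value is an assumption):
```
ports:
  - name: tcp-pulsar     # was "pulsar"
    port: 6650
  - name: tls-pulsarssl  # was "pulsarssl"
    port: 6651
```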
2021-09-10 08:56:16 +08:00
Peter Tinti
f307cc32af
updates pulsar ca name generation to use suffix making cert swappable (#141)
Updates CA name generation to be configurable allowing the swapping in of a CA.

### Motivation

We recently swapped out cert issuers and found that with the current helm chart we were unable to do a hot swap without downtime (via helm) because the CA cert name is not configurable. Being able to change the name of the CA allows us to create the new CA first, validate it, then swap over in a follow-up apply/release.

### Modifications

Adds the ability to specify the suffix used to generate the CA name (not the whole name, in order to preserve backward compatibility regardless of the release name).
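A hedged values override illustrating the swap flow (the `tls.ca_suffix` key name is an assumption based on the PR description):
```
tls:
  ca_suffix: ca-tls-v2   # generated CA cert name becomes <release>-ca-tls-v2
```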
2021-08-25 23:14:03 -07:00
Aaron Johnson
c45813ffe5
added extraVolumes and extraVolumeMounts (#149)
Fixes #147

### Motivation
This gives the helm chart user the ability to specify a secret or other type of volume to be mounted into any of the statefulset pods

### Modifications
* Added conditionals to `bookkeeper`, `broker`, `proxy`, `toolset`, and `zookeeper` statefulsets which allow the chart user to specify extraVolumes and extraVolumeMounts for deployed pods.
* Added `extraVolumes` and `extraVolumeMounts` parameters to values.yaml
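For example, mounting an extra secret into the broker pods (volume name and paths are illustrative):
```
broker:
  extraVolumes:
    - name: extra-certs
      secret:
        secretName: extra-certs
  extraVolumeMounts:
    - name: extra-certs
      mountPath: /pulsar/certs/extra
      readOnly: true
```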
2021-08-25 23:13:27 -07:00
TC-robV
75169707fb
add enableAdminApi for prometheus (#121)

### Motivation

It would be nice to have this option so people can run admin commands against Prometheus.

### Modifications

Added a new value and modified the deployment, following the official Prometheus Helm chart.
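A sketch of the new value, assuming it maps to Prometheus' `--web.enable-admin-api` flag as in the official Prometheus chart:
```
prometheus:
  enableAdminApi: true   # adds --web.enable-admin-api to the Prometheus args
```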

### Verifying this change

- [ ] Make sure that the change passes the CI checks.
2021-06-23 21:12:20 -07:00
Peter Tinti
d6d240a123
Updates internal issuer cert to include duration and renew configs (#131)
### Motivation
* While component certs can be configured with a custom duration, the CA cert for the self-signed configuration uses default values. It can be convenient to have this certificate expire more than a month out.

### Modifications
* Updates the internal issuer `{{ .Release.Name }}-ca-tls` certificate to make `duration` and `renewBefore` configurable. Does not use `common` so that the CA can be configured to last much longer than individual component certs if desired.
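An illustrative override using cert-manager's duration syntax (values are examples, not the chart's defaults):
```
certs:
  internal_issuer:
    duration: 8760h     # 1 year for the CA cert
    renewBefore: 720h   # renew 30 days before expiry
```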

### Verifying this change
- [x] Make sure that the change passes the CI checks.
2021-06-23 21:00:17 -07:00
Enrico Olivelli
6d0db35216
Update to Pulsar 2.7.2 (#119)
Co-authored-by: Enrico Olivelli <eolivelli@datastax.com>
2021-06-03 11:31:47 +03:00
Jean Helou
ba356e5df7
makes cert-manager apiVersion configurable (#107)
This commit lets users override the apiVersion referenced in this
chart so that the chart can be used with newer cert-manager releases.
(script/cert-manager/install-cert-manager.sh installs 0.13.0, while the
current version is 1.2.0...)

Fixes #68

### Motivation

cert-manager's apiVersion changed after cert-manager 1.0.0 was released, which prevents the chart from provisioning certificates on newer cert-manager installations because of an incompatible apiVersion.

I have a cluster with cert-manager >1.0.0 installed, so making `apiVersion` overridable makes it easy for me to install Pulsar on that cluster.

### Modifications

I introduced the value `certs.internal_issuer.apiVersion`, which defaults to the previously hardcoded apiVersion (`cert-manager.io/v1alpha2`).
I replaced all occurrences of that apiVersion with a reference to the value so that users can override it to `cert-manager.io/v1` if they have a newer version of cert-manager installed.
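For a cluster running cert-manager >= 1.0:
```
certs:
  internal_issuer:
    apiVersion: cert-manager.io/v1   # default remains cert-manager.io/v1alpha2
```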

### Verifying this change

- [x] Make sure that the change passes the CI checks.
2021-03-16 00:44:38 -07:00
Yong Zhang
e0903c633c
Bump pulsar version to 2.7.1 (#109)
### Motivation

Release with pulsar 2.7.1

### Modifications

- update pulsar version from 2.7.0 to 2.7.1
- add a script for updating the pulsar version
2021-03-16 00:43:30 -07:00
wuYin
67818a48cb
Support common volume for journal and ledgers (#93)
### Motivation

In some cases a k8s node has only one large-capacity SSD; to deploy one bookie there, I need to either:

- Partition the SSD into 2 disks and create 2 PVs over them.
- Create just 1 PV over it, with journal & ledgers under the same mount path (what this PR does).

Neither approach can isolate IO between journal and ledgers, so I prefer the second one for reusability.


### Modifications

values.yaml
  - add `useSingleCommonVolume` option, default false

bookkeeper-statefulset.yaml
   - mount the only PV to path `/pulsar/data/bookkeeper`
   - use configured common storageClassName

bookkeeper-storageclass.yaml
  - use configured provisioner for the common storageClass 
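A hedged values sketch of the single-volume mode (nesting under `bookkeeper.volumes` is an assumption):
```
bookkeeper:
  volumes:
    useSingleCommonVolume: true    # default: false
    common:
      size: 300Gi                  # journal + ledgers share this PV
      storageClassName: local-ssd
```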

### Others
This may not be an issue for everyone; if it's not necessary to merge, I'll just use it locally.

### Verifying this change

- [x] Make sure that the change passes the CI checks.
2021-01-30 09:28:45 -08:00
Miecio
025b263206
Extend podmonitor and add relabels (#100)
### Motivation

I wanted to use [streamnative/apache-pulsar-grafana-dashboard](https://github.com/streamnative/apache-pulsar-grafana-dashboard) with this helm chart and my own cluster-wide Prometheus stack, so I decided the PodMonitor CRD was a good fit. Unfortunately, the bundled Prometheus config contains some metric relabelings that the Grafana dashboard requires, so I decided to port them directly into the PodMonitor definitions.

### Modifications

* Added missing PodMonitor for autorecovery
* Ported relabelings from `prometheus-configmap.yaml` to each PodMonitor
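An illustrative PodMonitor endpoint with one relabeling (the selector labels and the rule shown are assumptions, not the exact rules from `prometheus-configmap.yaml`):
```
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: pulsar-broker
spec:
  selector:
    matchLabels:
      component: broker
  podMetricsEndpoints:
    - port: http
      relabelings:
        - sourceLabels: [__meta_kubernetes_pod_name]
          targetLabel: kubernetes_pod_name
```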

### Verifying this change

- [x] Make sure that the change passes the CI checks.
2021-01-30 09:24:21 -08:00
Miloš Matijašević
c2f672881e
Updating pods on configmap change (#73)
Fixes #71 

### Motivation

Pods do not restart when ConfigMaps change after the values.yaml file is modified, so they need to be restarted manually in order to pick up the new values from the ConfigMap.

### Modifications

A `restartPodsOnConfigMapChange` flag is added in the values.yaml file for each component, controlling whether to restart its pods on ConfigMap change; the default is `false`.
When the flag is `true`, the statefulset template for that component adds an annotation containing the hash of the corresponding ConfigMap, which causes the pods to restart whenever that ConfigMap changes (https://helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments).
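The underlying pattern is the one from the linked Helm tips page; applied to a component statefulset it would look roughly like this (template path and value nesting are assumptions):
```
spec:
  template:
    metadata:
      annotations:
        {{- if .Values.broker.restartPodsOnConfigMapChange }}
        checksum/config: {{ include (print $.Template.BasePath "/broker-configmap.yaml") . | sha256sum }}
        {{- end }}
```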

### Verifying this change

- [ ] Make sure that the change passes the CI checks.
2021-01-07 21:28:11 -08:00
Miecio
667e634af0
Add basic PSP and RBAC for core components (#87)
Add PSP and add/modify RBAC. I'm open to discussion.

### Motivation

On clusters that use PSP with a restrictive default policy, Pulsar cannot be installed, because it runs as the root user and requires a writable container root filesystem. Additionally, the default RBAC for the broker is, in my opinion, too permissive (it uses a ClusterRoleBinding).

### Modifications

Add PSP and RBAC for bookkeeper and autorecovery to grant an
exception that allows startup even in secure environments
where containers cannot write to the root filesystem by default.

Add an option for limiting the broker's ClusterRoleBinding
to a single namespace by replacing it with a RoleBinding.
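A hedged values sketch of the new toggles (key names are assumptions):
```
rbac:
  enabled: true
  psp: true                  # create the PodSecurityPolicy exceptions
  limit_to_namespace: true   # broker gets a RoleBinding instead of a ClusterRoleBinding
```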

### Verifying this change

- [x] Make sure that the change passes the CI checks.
2021-01-07 21:26:44 -08:00
Jiří Pinkava
8d5339f9ff
Allow use of existing secret for pulsar manager credentials (#69)
Signed-off-by: Jiří Pinkava <jiri.pinkava@rossum.ai>

Co-authored-by: Jiri Pinkava <jiri.pinkava@rossum.ai>
2021-01-07 21:24:52 -08:00
lipenghui
f6705f0aec
Bump Pulsar 2.7.0 (#88)
Co-authored-by: Sijie Guo <sijie@apache.org>
2020-12-03 20:14:05 -08:00
Jean Helou
6c9856a1af
Use .Release.Namespace by default to handle namespaces (#80)
It remains possible to override the current release namespace by setting
the `namespace` value, though this may lead to having the helm metadata
and the pulsar components in different namespaces.

Fixes #66

### Motivation

Trying to deploy the chart into a namespace using the usual helm pattern fails, for example:
```
kubectl create ns pulsartest
helm upgrade --install pulsar -n pulsartest apache/pulsar
Error: namespaces "pulsar" not found
```
Fixing that while keeping the helm metadata and the deployed objects in the same namespace requires declaring the namespace twice:
```
kubectl create ns pulsartest
helm upgrade --install pulsar -n pulsartest apache/pulsar --set namespace=pulsartest
```
This is needlessly confusing for newcomers who follow the helm documentation and is contrary to helm best practices.

### Modifications

I changed the chart to use the context namespace `.Release.Namespace` by default while preserving the ability to override it by explicitly providing a namespace on the command line. With this modification both examples behave as expected.
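A minimal sketch of the template pattern this implies (illustrative, not the chart's exact helper):
```
metadata:
  namespace: {{ .Values.namespace | default .Release.Namespace }}
```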
 
### Verifying this change

- [x] Make sure that the change passes the CI checks.
2020-12-03 19:32:05 -08:00
xiaolong ran
ebc40c3382
Bump the image version to 2.6.2 (#81)
Signed-off-by: xiaolong.ran <rxl@apache.org>

### Motivation

Bump the image version to 2.6.2

### Verifying this change

- [x] Make sure that the change passes the CI checks.
2020-11-12 20:31:41 -07:00
Naveen Ramanathan
fb4c44f449
changed publishNotReadyAddresses to a service spec field (#64)
### Motivation

* ```publishNotReadyAddresses``` is a service spec field and not a service annotation. This is mentioned in the K8s API docs at https://v1-17.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.15/#servicespec-v1-core

### Modifications

* Moved ```publishNotReadyAddresses``` from an annotation to a service spec field
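The field as it now appears in the Service spec (per the K8s API):
```
apiVersion: v1
kind: Service
spec:
  clusterIP: None
  publishNotReadyAddresses: true   # spec field, not metadata.annotations
```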

### Verifying this change

- [x] Make sure that the change passes the CI checks.
2020-10-15 18:42:13 +08:00
Naveen Ramanathan
bf5db574d1
Make forceSync default to "yes" (#63)
### Motivation

* It's not recommended to run a production ZooKeeper cluster with forceSync set to "no". This is also mentioned in the forceSync section of https://pulsar.apache.org/docs/en/next/reference-configuration/#zookeeper

### Modifications

* Removed ```-Dzookeeper.forceSync=no``` from ```values.yaml```, since the default ```forceSync``` is ```yes```.
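For anyone who still wants the old behavior (e.g. on slow dev disks), a hypothetical opt-back-in via the ZooKeeper JVM options (assuming `zookeeper.configData` entries are exported as environment variables and `PULSAR_EXTRA_OPTS` is honored by the launch scripts):
```
zookeeper:
  configData:
    PULSAR_EXTRA_OPTS: "-Dzookeeper.forceSync=no"   # not recommended in production
```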
2020-09-22 09:47:41 -05:00
Thomas O'Neill
bf349a8c05
Ingress optional hostname (#54)
Fixes #50 

### Motivation
The host option is not required to set up an ingress, so I made it an optional value.
### Modifications

Made setting the host optional.
2020-09-21 13:16:20 -05:00
Elad Dolev
5049d3564a
add support for multiple clusters (#60)
Co-authored-by: Elad Dolev <elad@firebolt.io>

### Motivation

Give the ability to deploy a multi-cluster Pulsar instance on K8s clusters with a non-default `clusterDomain`, and to connect to an external configuration store.

### Modifications

- give the ability to change the cluster's name
- give the ability to change `clusterDomain`
- fix external configuration store functionality
- use broker port variables
- use label templates, and add the `component` label in several places
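A hedged multi-cluster values sketch (key names partly assumed from the description):
```
clusterName: pulsar-us-east                        # the cluster name registered in Pulsar
clusterDomain: cluster.local                       # override for non-default K8s DNS domains
pulsar_metadata:
  configurationStore: zk-global.example.com:2181   # external configuration store
```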

### Verifying this change

- [x] Make sure that the change passes the CI checks.
2020-09-08 10:06:30 +08:00
冉小龙
4178c70d90
Bump the image version to 2.6.1 (#57)
Signed-off-by: xiaolong.ran <rxl@apache.org>

Motivation
Follow release process and bump the image version to 2.6.1
2020-08-21 22:50:27 +08:00
Thomas O'Neill
b44b523c8a
Allow initialization to be set (#53)
Fixes #47 

### Motivation
Only create the initialize job on install. 

### Modifications

- Added an initialize value that can be set to true on install, matching the documentation in the README.md
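Per the README, the value is set on the first install only, e.g.:
```
initialize: true   # create the cluster-initialize job; omit on upgrades
```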
2020-08-13 10:20:01 -07:00
Thomas O'Neill
207d697bed
Fix zookeeper antiaffinity (#52)
Fixes #39 

### Motivation

The match expression for the "app" label was incorrect, breaking the anti-affinity since the expressions would never match. Fixing this makes the podAntiAffinity work, but it now requires at least N nodes in the cluster, where N is the size of the largest replica set with affinity. Added the option to set the affinity type to preferredDuringSchedulingIgnoredDuringExecution, where the scheduler tries to honor the affinity but will still deploy a pod if it has to break it.

### Modifications

- Fixed the app matchExpression
- Added an option to set the affinity type
- Bumped the chart version
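A hedged sketch of the new option (value nesting assumed):
```
affinity:
  anti_affinity: true
  # requiredDuringSchedulingIgnoredDuringExecution (strict, default) or
  # preferredDuringSchedulingIgnoredDuringExecution (best-effort)
  type: preferredDuringSchedulingIgnoredDuringExecution
```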

### Verifying this change

- [X] Make sure that the change passes the CI checks.
2020-08-13 10:19:01 -07:00
Thomas O'Neill
a41b6c5063
Allow Grafana to work with a reverse proxy (#48)
### Motivation

Allow Grafana to be served from a sub path.  

### Modifications

- Added a config map to add extra environment variables to the grafana deployment. As the grafana image adds new features that require environment variables, this can be used to set them.
- Bumped the grafana image to allow a reverse proxy
- Removed ingress annotations, as they are specific to nginx, and to match all the other ingresses
- Bumped the chart version as per the README


Example values:
```
grafana:
  configData:
    GRAFANA_ROOT_URL: /pulsar/grafana
    GRAFANA_SERVE_FROM_SUB_PATH: "true"
  ingress:
      enabled: true
      port: 3000
      path: "/pulsar/grafana/?(.*)"
      annotations:
        nginx.ingress.kubernetes.io/rewrite-target: /$1
```
2020-08-12 00:31:23 -07:00
John Harris
6b92881149
Add zookeeper metrics port and PodMonitors (#44)
* Add 'http' port specification to the zookeeper statefulset

This brings the zookeeper spec in line with the other statefulset specs
in this chart and provides a port target for custom podMonitors

* Added PodMonitors for bookie, broker, proxy, and zookeeper

New PodMonitors are needed for prometheus-operator to pick up scrape
targets.
They default to disabled, so users need to opt in to deploy them

* Added Apache license info to podmonitor yamls
2020-07-23 10:34:43 +08:00
冉小龙
682dfcee69
Update grafana dashboard images version to 0.0.9 (#45)
Signed-off-by: xiaolong.ran <rxl@apache.org>

### Modifications

- Update grafana dashboard images version to 0.0.9
- Add `.gitignore` file
2020-07-23 10:34:12 +08:00
Niklas Wagner
2fbec08b02
Add Ingress to Pulsar Proxy and Pulsar Manager (#42) 2020-07-19 23:04:32 -07:00
wuYin
135868c66c
Add optional user provided zookeeper as metadata store for other components (#38)
## Motivation
### Case
I have a physical zk cluster and want to configure bookkeeper, broker & proxy to use it.
So I set components.zookeeper to false, and found that pulsar.zookeeper.connect was the only place to set my physical zk address.
But the deploy stage got stuck in the bookkeeper wait-zookeeper-ready container.

### Issue
The wait-zookeeper-ready initContainer in the bookkeeper-cluster-initialize Job uses the chart's generated zk Service hosts to detect whether zk is ready, and the init Job initContainers of the other components do the same thing. Those zk Services are unreachable because I disabled the zk component.

## Modifications
- Add an optional pulsar_metadata.userProvidedZookeepers config for this case, and make each component's init Job use the user-provided zk to detect liveness, instead of the generated Service hosts.

- Delete redundant image reference in bookkeeper init Job.
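An illustrative values.yaml for this setup (hostnames hypothetical):
```
components:
  zookeeper: false   # don't deploy the chart's own zk
pulsar_metadata:
  userProvidedZookeepers: "zk-0.example.com:2181,zk-1.example.com:2181"
```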
2020-07-15 13:19:06 +08:00
Rahul Vashishth
714ff4131e
add targetPort for grafana and manager service (#37)
Co-authored-by: rahul.name <rahul@mail.com>
2020-07-14 22:14:11 -07:00
Prashanth Tirupachur Vasanthakrishnan
bf152134b2
Issue-29: Bump missed out pulsar-image tags to 2.6.0 (#30)
Fixes #29 

### Motivation

Bumped missed out pulsar-image tags to 2.6.0

### Modifications

Modified the following files:
1. .ci/clusters/values-pulsar-image.yaml
2. charts/pulsar/values.yaml
3. examples/values-one-node.yaml
4. examples/values-pulsar.yaml
2020-07-01 23:01:39 -07:00
Sijie Guo
9778ce2fe1
Remove double quotes from the environment variables (#24)
*Motivation*

Some of the environment variables still use double quotes, which results in the following error:

```bash
Could not find or load main class "
```
2020-06-23 10:14:23 -07:00
Julien Berard
6cddb81da1
Allow to change broker service account annotations (#22)
### Motivation

We need to be able to change annotations to inject an AWS IAM role (EKS-based deployment).
https://docs.aws.amazon.com/eks/latest/userguide/specify-service-account-role.html

With 2.6.0 and this annotation change we were able to use Tiered Storage with S3 and EKS/IAM(OIDC).

e.g : 
```
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::66666:role/my-iam-role-with-s3-access
```
values.yaml
```
broker:
  service_account:
    annotations:
      eks.amazonaws.com/role-arn: arn:aws:iam::66666:role/my-iam-role-with-s3-access
```
### Modifications

Added a value to allow changing annotations for the broker service account.
I've tried to follow the style from other parts of the code.

### Verifying this change

- [ ] Make sure that the change passes the CI checks.
2020-06-22 18:11:28 -07:00
Sijie Guo
d5a788e617
Update pulsar image to 2.6.0 (#20)
* Update pulsar image to 2.6.0

* Update the image to the official release image
2020-06-19 23:17:41 -07:00
Luke Stephenson
5914996e89
Removing reference to bastion pod (#14)
Has otherwise been cleaned up in f64c396906e9f99999ec14bd3ac7336e6609a86a
2020-05-29 17:33:54 -07:00
Matteo Merli
6e9ad25ba3
Use regular 2-2-2 BK client settings by default (#13)
Using write=3 and ack=2 leads to unbounded memory usage in the BK client when one bookie is slow or failing, so we should avoid it by default.
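The 2-2-2 defaults correspond to these broker.conf settings; a sketch of how they would surface in values.yaml (the exact values block is assumed):
```
broker:
  configData:
    managedLedgerDefaultEnsembleSize: "2"
    managedLedgerDefaultWriteQuorum: "2"
    managedLedgerDefaultAckQuorum: "2"
```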
2020-05-21 21:52:53 -07:00
Oscar Espitia
06652d7e8b
Decouple credentials from key secrets generation (#7)
Fixes #6 

### Motivation

As suggested here: https://pulsar.apache.org/docs/en/helm-deploy/#prepare-the-helm-release. The ```prepare_helm_release.sh``` script provided with this Helm chart can create a secret credentials resource and
> The username and password are used for logging into Grafana dashboard and Pulsar Manager.

However, I haven't been able to make use of such a feature for a number of reasons:

1. This secret doesn't seem to affect the ```pulsar-manager-deployment.yaml``` definition. Instead, ```./templates/pulsar-manager-admin-secret.yaml``` seems to be the one providing the credentials for the pulsar manager (UI) (with the added possibility to override via values.yaml at ```pulsar_manager.admin.user/password```).

2. Using the Pulsar chart as a dependency of an umbrella chart (currently my use case) brings extra hassle that makes it very hard to have all resources follow the same naming structure, causing some resources to never deploy successfully, e.g. ```./templates/grafana-deployment.yaml``` will complain that it couldn't find the secret created by the bash script. Attempting to fix this issue via the ```-k``` flag passed to the script will cause the JWT secret tokens to have names unexpected by the broker, etc.

### Modifications

Decouple grafana credentials from pulsar manager via a new secret resource named ```./charts/pulsar/templates/grafana-admin-secret.yaml```.

Add credential overrides via values.yaml in the same way as pulsar_manager (grafana.admin.user/password), and remove secret resource manipulation from the bash scripts (cleanup_helm_release.sh & prepare_helm_release.sh).
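The override keys named above, in values.yaml form (credentials shown are placeholders):
```
grafana:
  admin:
    user: pulsar
    password: change-me   # placeholder; set your own
```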

### Verifying this change

- [x] Make sure that the change passes the CI checks.
2020-04-29 01:27:16 -07:00
Oscar Espitia
4009c04811
Update grafana & prometheus docker images (#8)
### Motivation

As seen below, there is a fix for one of the Grafana dashboards that is currently broken in this project (the fix is available since version 0.0.5):
- [The Pulsar-topics metrics can't load in Grafana](https://github.com/streamnative/charts/issues/49)

Additionally, upgrading Prometheus to the latest version improves performance as seen here: https://prometheus.io/blog/2017/11/08/announcing-prometheus-2-0

### Modifications

Bring the Docker images to their most up-to-date versions (streamnative/apache-pulsar-grafana-dashboard-k8s:0.0.6, prom/prometheus:v2.17.2) to fix the following issues:
- https://github.com/streamnative/charts/issues/49 <- fixes the Pulsar-topics metrics failure to load
- https://github.com/prometheus/prometheus/pull/2859 <- prevents privilege-escalation vulnerabilities by defaulting to the ```nobody``` user

**Note**: upgrading to the latest version of Prometheus (currently v2.17.2) caused the pod to fail with the following error: ```open /prometheus/queries.active: permission denied```. In order to fix this issue I followed the instructions from these 2 comments:

- [Permission denied UID/GID solution](https://github.com/prometheus/prometheus/issues/5976#issuecomment-532942295)
- [Unable to create mmap-ed active query log securityContext fix](https://github.com/aws/eks-charts/issues/21#issuecomment-607031756)

### Verifying this change

- [x] Make sure that the change passes the CI checks.
2020-04-29 01:25:32 -07:00
Sijie Guo
0338d17b89
Publish chart index to gh-pages branch (#3)
*Motivation*

Release helm chart when new tags are created
2020-04-21 02:44:58 -07:00