### Motivation
* ```publishNotReadyAddresses``` is a field in the Service spec, not a Service annotation. This is documented in the K8s API reference at https://v1-17.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.15/#servicespec-v1-core
### Modifications
* Moved ```publishNotReadyAddresses``` from a Service annotation to a field in the Service spec
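For reference, a minimal sketch of a headless Service using the spec field (the name, selector, and port here are illustrative, not taken from this chart):
```
apiVersion: v1
kind: Service
metadata:
  name: pulsar-zookeeper   # illustrative name
spec:
  clusterIP: None
  # previously toggled through a Service annotation in this chart
  publishNotReadyAddresses: true
  selector:
    app: pulsar            # illustrative selector
    component: zookeeper
  ports:
    - name: client
      port: 2181
```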
### Verifying this change
- [x] Make sure that the change passes the CI checks.
### Motivation
* It's not recommended to run a production ZooKeeper cluster with ```forceSync``` set to "no". This is also mentioned in the forceSync section of https://pulsar.apache.org/docs/en/next/reference-configuration/#zookeeper
### Modifications
* Removed ```-Dzookeeper.forceSync=no``` from ```values.yaml``` as default ```forceSync``` is ```yes```.
Fixes #50
### Motivation
The host option is not required to set up an Ingress, so I made it an optional value.
### Modifications
Made setting the host optional.
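A hypothetical values snippet with the host left unset (the exact key layout depends on the chart's ingress values):
```
ingress:
  enabled: true
  # host is now optional; omit it so the Ingress matches any host
  # host: pulsar.example.com
```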
Co-authored-by: Elad Dolev <elad@firebolt.io>
### Motivation
Give the ability to deploy multi-cluster instance on K8s clusters with non-default `clusterDomain`, and connect to external configuration-store
### Modifications
- give the ability to change the cluster's name
- give the ability to change the `clusterDomain` (a hypothetical values snippet is shown below)
- fix the external configuration store functionality
- use the broker ports variables
- use label templates, and add a `component` label in several places
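A hypothetical values snippet illustrating these knobs; apart from `clusterDomain`, the key names and addresses below are illustrative, not the chart's actual keys:
```
# hypothetical keys shown for illustration
clusterName: pulsar-us-east                       # the cluster's name
clusterDomain: cluster.local                      # set to the K8s cluster's non-default domain
configurationStore: zk-global.example.com:2181    # external configuration store address (example)
```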
### Verifying this change
- [x] Make sure that the change passes the CI checks.
Fixes #47
### Motivation
Only create the initialize job on install.
### Modifications
- Added an `initialize` value that can be set to `true` on install, matching the documentation in the README.md (see the sketch below)
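A minimal sketch of how this could look in values.yaml, assuming `initialize` is a top-level value:
```
# run the cluster metadata initialize job; set to true only on the first install
initialize: true
```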
Fixes #39
### Motivation
The match expression for the "app" label was incorrect, breaking the anti-affinity rules since they would never match. Fixing this makes the podAntiAffinity work, but it now requires at least N nodes in the cluster, where N is the size of the largest replica set with affinity. Added the option to set the affinity type to `preferredDuringSchedulingIgnoredDuringExecution`, which tries to honor the affinity but will still schedule a pod if it has to break it.
### Modifications
- Fixed app matchExpression
- Added option to set the affinity type (see the sketch below)
- bumped chart version
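A sketch of what the preferred variant roughly renders to in the pod spec; the weight and label value are illustrative, not the chart's exact output:
```
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - "pulsar"        # illustrative app label value
          topologyKey: kubernetes.io/hostname
```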
### Verifying this change
- [X] Make sure that the change passes the CI checks.
Fixes #46
### Motivation
There were some templates that relied on extra values that are deprecated.
### Modifications
Modified the checks to accept either the non-deprecated values or the deprecated ones.
### Verifying this change
- [X] Make sure that the change passes the CI checks.
### Motivation
Allow Grafana to be served from a sub path.
### Modifications
- Added a config map to add extra environment variables to the Grafana deployment. As the Grafana image adds new features that require environment variables, this can be used to set them.
- Bumped the Grafana image to allow running behind a reverse proxy
- Removed the ingress annotations, as they are specific to nginx, and to match all the other ingresses
- Bumped the chart version as per the README
Example values:
```
grafana:
  configData:
    GRAFANA_ROOT_URL: /pulsar/grafana
    GRAFANA_SERVE_FROM_SUB_PATH: "true"
  ingress:
    enabled: true
    port: 3000
    path: "/pulsar/grafana/?(.*)"
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /$1
```
* Add 'http' port specification to zookeeper statefulset
This brings the zookeeper spec in line with the other statefulset specs
in this chart and provides a port target for custom PodMonitors
* Added PodMonitors for bookie, broker, proxy, and zookeeper
New PodMonitors are needed for prometheus-operator to pick up scrape
targets (a minimal sketch is shown below).
Defaults to disabled, so users need to opt in to deploy them
* Added Apache license info to the PodMonitor YAMLs
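A minimal sketch of the kind of PodMonitor these templates produce; the name, labels, and metrics path are illustrative, not the exact rendered output:
```
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: pulsar-zookeeper          # illustrative
spec:
  selector:
    matchLabels:
      component: zookeeper        # illustrative label
  podMetricsEndpoints:
    - port: http                  # the new 'http' port on the zookeeper statefulset
      path: /metrics
```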
### Motivation
PR #37 updated the location of the ports in the default values.yaml. This caused a null pointer exception when rendering this Helm chart.
### Modifications
Fix variable reference
## Motivation
### Case
I have a physical ZooKeeper cluster and want to configure bookkeeper, broker & proxy to use it.
So I set components.zookeeper to false, and the only setting I found for my physical zk address was pulsar.zookeeper.connect.
But the deploy stage got stuck in the bookkeeper wait-zookeeper-ready container.
### Issue
The wait-zookeeper-ready initContainer in the bookkeeper-cluster-initialize Job uses the spliced zk Service hosts to detect whether zk is ready, and the init Job initContainers of the other components do the same thing. However, the zk Service is unreachable because I disabled the zk component.
## Modifications
- Add an optional pulsar_metadata.userProvidedZookeepers config for this case, and make the components' init Jobs use the user-provided zk to detect liveness instead of the spliced Service hosts (see the example below).
- Delete a redundant image reference in the bookkeeper init Job.
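A hypothetical values snippet for this scenario; the zk addresses are examples:
```
components:
  zookeeper: false
pulsar_metadata:
  userProvidedZookeepers: "zk-1.example.com:2181,zk-2.example.com:2181"
```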
*Motivation*
based on [helm documentation](https://helm.sh/docs/topics/charts/),
the `appVersion` is the version of the app that the chart contains. Since the repo
is using the 2.6.0 image, update `appVersion` to 2.6.0
### Motivation
We need to be able to change annotations to inject an AWS IAM role (EKS-based deployment).
https://docs.aws.amazon.com/eks/latest/userguide/specify-service-account-role.html
With 2.6.0 and this annotation change, we were able to use Tiered Storage with S3 and EKS/IAM (OIDC).
e.g.:
```
annotations:
  eks.amazonaws.com/role-arn: arn:aws:iam::66666:role/my-iam-role-with-s3-access
```
values.yaml
```
broker:
  service_account:
    annotations:
      eks.amazonaws.com/role-arn: arn:aws:iam::66666:role/my-iam-role-with-s3-access
```
### Modifications
Added a value that allows changing the annotations for the broker service account.
I've tried to follow the style used in other parts of the chart.
### Verifying this change
- [ ] Make sure that the change passes the CI checks.
### Motivation
The secret resource generation was appending a newline (```\n```) at the end of the JWT token strings. From my understanding, this is not an issue inside Pulsar, likely because it trims the contents of the JWT programmatically. However, when setting Pulsar as a sink destination for [Vector](https://vector.dev/) (Vector produces messages into Pulsar), I noticed the token was always invalid due to this extra newline.
### Modifications
Remove the newline from secret token generation by using the `tr` utility. Granted, this is not the nicest way to go about it, but given that the contents are JWT strings, it appears to do the job just fine while keeping everything else working (e.g. producing/consuming as well as other components like Prometheus). Please advise if you have any concerns or suggestions.
Fixes #6
### Motivation
As suggested here: https://pulsar.apache.org/docs/en/helm-deploy/#prepare-the-helm-release, the ```prepare_helm_release.sh``` script provided with this Helm chart can create a credentials secret resource, and
> The username and password are used for logging into Grafana dashboard and Pulsar Manager.
However, I haven't been able to make use of such a feature for a number of reasons:
1. This secret doesn't seem to affect the ```pulsar-manager-deployment.yaml``` definition. Instead, the ```./templates/pulsar-manager-admin-secret.yaml``` seems to be the one providing the credentials for the pulsar manager (UI) (with the added possibility to overwrite via values.yaml at ```pulsar_manager.admin.user/password```).
2. Using the Pulsar chart as a dependency of an umbrella chart (currently my use case) brings extra hassle that makes it very hard to have all resources follow the same naming structure, causing some resources to never be deployed successfully, e.g. ```./templates/grafana-deployment.yaml``` will complain that it couldn't find the secret created by the bash script. Attempting to fix this via the ```-k``` flag passed to the script causes the JWT secret tokens to have names that the broker does not expect, etc.
### Modifications
Decouple grafana credentials from pulsar manager via a new secret resource named ```./charts/pulsar/templates/grafana-admin-secret.yaml```.
Add credentials overriding via values.yaml in the same way as pulsar_manager (grafana.admin.user/password) and delete the secret resource manipulation from the bash scripts (cleanup_helm_release.sh & prepare_helm_release.sh). A hypothetical override is shown below.
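A hypothetical override in values.yaml using the new keys; the credentials shown are placeholders:
```
grafana:
  admin:
    user: pulsar          # placeholder
    password: changeme    # placeholder; use your own value
```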
### Verifying this change
- [x] Make sure that the change passes the CI checks.
### Motivation
As seen below, there is a fix for one of the Grafana dashboards that are currently broken in this project (available since version 0.0.5):
- [The Pulsar-topics metrics can't load in Grafana](https://github.com/streamnative/charts/issues/49)
Additionally, upgrading Prometheus to the latest version improves performance as seen here: https://prometheus.io/blog/2017/11/08/announcing-prometheus-2-0
### Modifications
Bring Docker images to their most up-to-date version (streamnative/apache-pulsar-grafana-dashboard-k8s:0.0.6, prom/prometheus:v2.17.2) to fix the following issues:
- https://github.com/streamnative/charts/issues/49 <- fixes Pulsar-topics metrics failure to load
- https://github.com/prometheus/prometheus/pull/2859 <- prevent escalation vulnerabilities by defaulting to the ```nobody``` user
**Note**: upgrading to the latest version of Prometheus (currently v2.17.2) caused the pod to fail with the following error: ```open /prometheus/queries.active: permission denied```. In order to fix this issue I followed the instructions from these 2 comments (a sketch of the resulting securityContext is shown after the list):
- [Permission denied UID/GID solution](https://github.com/prometheus/prometheus/issues/5976#issuecomment-532942295)
- [Unable to create mmap-ed active query log securityContext fix](https://github.com/aws/eks-charts/issues/21#issuecomment-607031756)
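The fix those comments suggest is roughly the following pod securityContext (a sketch; 65534 is the `nobody` UID/GID on most images):
```
securityContext:
  runAsUser: 65534    # nobody
  runAsGroup: 65534
  fsGroup: 65534      # lets Prometheus write to the mounted /prometheus volume
```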
### Verifying this change
- [x] Make sure that the change passes the CI checks.
### Motivation
While making use of the scripts provided in this repo to prepare helm releases, I noticed that providing the ```-d``` flag (delete namespace) for the ```./scripts/pulsar/cleanup_helm_release.sh``` would always fail claiming that the **namespace already exists**. Upon closer examination, I noticed that the kubectl command to delete the provided namespace is actually attempting to create it instead.
### Modifications
I've gone ahead and made the corresponding modification on the script to delete the namespace (went from ```kubectl create namespace ${namespace}``` to ```kubectl delete namespace ${namespace}```).
### Verifying this change
I'm not sure what possible verifications I can provide for this PR. Please advise.
* Make secret name consistent
---
*Motivation*
Make the secret name consistent, and all secret names should
use the release name as the prefix.
* Update ci script
* Fix the file path
* Fix path
* Fix env
Co-authored-by: Sijie Guo <sijie@apache.org>
*Motivation*
The current helm chart is lacking documentation. This pull request aims to add documentation.
*Changes*
- Update Helm chart documentation
- Add a get-started section for the Helm chart
- Remove the documentation for using yaml files.
*Motivation*
In versions older than 2.5.0, PULSAR_PREFIX is used for appending settings
that don't exist in existing configuration files.
*Modifications*
Remove `PULSAR_PREFIX` for backward compatibility
Fixes #6338
### Motivation
This commit started while I was using Helm in my local minikube, where I noticed that there's a mismatch between `values-mini.yaml` and `values.yaml`. At first I thought it was a copy/paste error, so I created #6338.
Then I looked into the details of how these env vars [were used](28875d5abc/conf/bkenv.sh (L36)) and found out it's OK to use `PULSAR_MEM` as an alternative. But it introduces problems:
1. Since `BOOKIE_GC` was not defined, the default [BOOKIE_EXTRA_OPTS](28875d5abc/conf/bkenv.sh (L39)) will end up using the default value of `BOOKIE_GC`, which overrides the same JVM parameters defined earlier in `PULSAR_MEM`.
2. It may cause problems when the bootstrap scripts change in later development; it's better to make the settings explicit.
So I created this PR to solve the above problems (hidden trouble).
### Modifications
As mentioned above, I've made such modifications below:
1. Make `BOOKIE_MEM` and `BOOKIE_GC` explicit in the `values-mini.yaml` file, keeping the format consistent with the `values.yaml` file (see the sketch below).
2. Remove all GC-log-printing args, considering the resource constraints of the minikube environment. The removed content is `-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCApplicationStoppedTime -XX:+PrintHeapAtGC -verbosegc -XX:G1LogLevel=finest`
3. Leave `PULSAR_PREFIX_dbStorage_rocksDB_blockCacheSize` empty as usual, as [conf/standalone.conf#L576](df15210941/conf/standalone.conf (L576)) says it uses 10% of the direct memory size by default.
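A sketch of the resulting `values-mini.yaml` entries; the JVM sizing and GC flags shown are illustrative, not the exact values from this PR:
```
bookkeeper:
  configData:
    # illustrative JVM sizing and GC flags, not the exact values from this PR
    BOOKIE_MEM: "-Xms128m -Xmx256m -XX:MaxDirectMemorySize=256m"
    BOOKIE_GC: "-XX:+UseG1GC -XX:MaxGCPauseMillis=10"
```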
### Motivation
Fixes feature enhancement request #6143:
Currently, there are quite a few undocumented steps that need to be performed manually to make sure that functions can be submitted as pods in the K8s runtime environment. It would be much better if this process were automated.
#### Proposed solution:
Automate this process via helm install and update the helm charts with templates.
### Modifications
I've added an additional `functionsAsPods` field in the extra components inside the values file. If the setting is set to `yes`, it adds a `serviceAccount` to the broker deployment. It also adds the RBAC policy giving the brokers permission to deploy functions; the policies can be found in the new `broker-rbac.yaml` template file. Moreover, it changes the `functions_worker` settings and sets the function runtime factory setting found inside `broker-configmap.yaml`.
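A hypothetical values snippet with the new flag enabled; the exact location of the flag in the values file may differ:
```
extra:
  # hypothetical placement; enables running functions as pods
  functionsAsPods: yes
```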
### Verifying this change
1) Set `functionsAsPods: yes` inside helm values yaml file.
2) Follow the instructions on how to deploy the Helm chart and run:
`helm install pulsar --values pulsar/values-mini.yaml ./pulsar/`.
3) Wait until all the services are up and running.
4) Set up tenant, namespace.
5) Create a function, a sink, and a source, and submit them using the CLI to make sure the pods are running alongside the Pulsar cluster. In addition, set up a flow where data flows from the source to topics, is then processed by a function, and the sink outputs the data.
6) Push data into the cluster through the source and make sure it comes out of the sink into the destination. There shouldn't be any errors in the logs of the brokers, bookies, sources, sinks, and functions.
#### Modules affected:
The changes in this PR affect deployment using the Helm charts. Now, if the flag `functionsAsPods` is set to `yes` inside the `values.yaml` file, functions run as pods.
### Documentation
Currently, the documentation explaining the Helm chart deployment process is lacking and should be updated.