*Motivation*
In versions older than 2.5.0, `PULSAR_PREFIX` was used to append settings
that do not exist in the existing configuration files.
*Modifications*
Remove `PULSAR_PREFIX` for backward compatibility
Fixes #6338
### Motivation
This commit started while I was using helm in my local minikube, where I noticed a mismatch between the `values-mini.yaml` and `values.yaml` files. At first I thought it was a copy/paste error, so I created #6338.
Then I looked into the details of how these env vars [were used](28875d5abc/conf/bkenv.sh (L36)) and found that it is OK to use `PULSAR_MEM` as an alternative. But it introduces problems:
1. Since `BOOKIE_GC` was not defined, [BOOKIE_EXTRA_OPTS](28875d5abc/conf/bkenv.sh (L39)) falls back to the default value of `BOOKIE_GC`, which overrides the JVM parameters defined earlier in `PULSAR_MEM`.
2. Relying on this fallback may cause problems when the bootstrap scripts change in later development; it is better to make the settings explicit.
So I created this PR to solve the problems (hidden trouble) described above.
### Modifications
As mentioned above, I've made the following modifications:
1. Make `BOOKIE_MEM` and `BOOKIE_GC` explicit in the `values-mini.yaml` file, keeping the same format as the `values.yaml` file.
2. Remove all GC-log-printing args, considering the resource constraints of the minikube environment. The removed content is `-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCApplicationStoppedTime -XX:+PrintHeapAtGC -verbosegc -XX:G1LogLevel=finest`.
3. Leave `PULSAR_PREFIX_dbStorage_rocksDB_blockCacheSize` empty as usual; as [conf/standalone.conf#L576](df15210941/conf/standalone.conf (L576)) says, it uses 10% of the direct memory size by default.
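For illustration, the explicit entries in `values-mini.yaml` might look roughly like this (the key layout and the memory values below are placeholders for this sketch, not the exact committed contents):

```yaml
bookkeeper:
  configData:
    # Explicit heap/direct-memory settings instead of relying on PULSAR_MEM.
    BOOKIE_MEM: >
      -Xms128m -Xmx256m -XX:MaxDirectMemorySize=128m
    # Explicit GC settings, with the verbose GC-log args removed for minikube.
    BOOKIE_GC: >
      -XX:+UseG1GC -XX:MaxGCPauseMillis=10
```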
### Motivation
Fixes feature enhancement request #6143:
Currently, quite a few undocumented steps need to be performed manually to make sure that functions can be submitted as pods in a Kubernetes runtime environment. It would be much better if this process were automated.
#### Proposed solution:
Automate this process via helm install and update the helm charts with templates.
### Modifications
I've added an additional `functionsAsPods` field in the extra components inside the values file. If the setting is set to `yes`, it adds a `serviceAccount` to the broker deployment. It also adds the RBAC policy that gives the brokers permission to deploy functions; the policies can be found in the new `broker-rbac.yaml` template file. Moreover, it changes the `functions_worker` settings and sets the function runtime factory setting, which can be found inside `broker-configmap.yaml`.
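As a sketch, enabling the feature in the values file could look like this (the exact nesting under the extra components section is an assumption):

```yaml
extra:
  # Run Pulsar functions as pods; also creates the broker serviceAccount
  # and the RBAC policy from broker-rbac.yaml.
  functionsAsPods: yes
```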
### Verifying this change
1) Set `functionsAsPods: yes` inside helm values yaml file.
2) Follow the instructions on deploying with helm and run:
`helm install pulsar --values pulsar/values-mini.yaml ./pulsar/`.
3) Wait until all the services are up and running.
4) Set up tenant, namespace.
5) Create a function, a sink, and a source and submit them using the CLI to make sure the pods are running alongside the Pulsar cluster. In addition, set up a flow where data flows from the source to topics, is then processed by a function, and the sink outputs the data.
6) Push data into the cluster through the source and make sure it comes out of the sink into the destination. There shouldn't be any errors in the logs of the brokers, bookies, sources, sinks, and functions.
#### Modules affected:
The changes in this PR affect deployment using the helm charts. Now, if the flag `functionsAsPods` is set to `yes` inside the `values.yaml` file, the functions run as pods.
### Documentation
Currently, the documentation explaining the helm chart deployment process is lacking and should be updated.
Fixes #6314
### Motivation
Pulsar Manager does not work if Pulsar authentication is enabled.
### Modifications
`pulsar-manager-configmap.yaml` was created to allow configuration of the environment properties in `values.yaml`.
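A minimal sketch of what such a configmap template might look like (the template helper and the values keys shown here are assumptions for illustration, not the committed file):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: "{{ template "pulsar.fullname" . }}-pulsar-manager"
data:
  # Environment properties taken from values.yaml, e.g. auth-related settings.
{{ toYaml .Values.pulsar_manager.configData | indent 2 }}
```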
### Motivation
Expose Grafana via a software ingress controller so that it can be reached through a load balancer.
#### Proposed solution:
Create an ingress template for Grafana so that it can be automatically picked up if an ingress controller instance is running in the cluster. Alternative solutions are to expose Grafana as a NodePort or to set it up as a LoadBalancer.
### Modifications
Added `grafana-ingress.yaml` template in the templates and an `ingress` section for Grafana in the values file.
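For example, the new `ingress` section in the values file might look roughly like this (the key names and hostname are assumptions for this sketch):

```yaml
grafana:
  ingress:
    enabled: true
    host: grafana.example.com
    path: /grafana
```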
### Verifying this change
1) Set ingress to `true` for Grafana in the values file and provide a hostname. Currently tested with NGINX; another ingress controller can be used, but the ingress controller class in the template will need to be changed accordingly.
2) Add the NGINX Helm repository:
```bash
helm repo add nginx-stable https://helm.nginx.com/stable
helm repo update
```
3) Install with Helm 3:
```bash
helm install nginix-ingress-crl nginx-stable/nginx-ingress
```
4) Follow the instructions on deploying with helm and run:
`helm install pulsar --values pulsar/values-mini.yaml ./pulsar/`.
5) Wait until all the services are up and running.
6) Verify that Grafana is accessible via the URL.
**Path settings**
Currently, the path setting defaults to `/grafana`. For that to work, the NGINX configuration file `nginx.conf` should have the `grafana` sub-path enabled; see https://grafana.com/docs/grafana/latest/installation/behind_proxy/.
To avoid having to mess with NGINX configuration files, `path` can be changed to `/`, but this path might conflict with other services being proxied in the cluster.
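A sketch of the sub-path setup, following the Grafana behind-proxy guide linked above (the upstream service name and port are assumptions):

```nginx
# nginx.conf: proxy the /grafana sub-path to the Grafana service.
location /grafana/ {
    proxy_set_header Host $host;
    proxy_pass http://grafana:3000/;  # trailing slash strips the /grafana prefix
}
```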
#### Modules affected:
The changes in this PR affect deployment using the helm charts. Now, if ingress is enabled for Grafana in the values file, Grafana is exposed through the ingress controller.
### Documentation
This PR adds ingress capability for Grafana, and this should be documented.
Signed-off-by: xiaolong.ran <rxl@apache.org>
### Modifications
- Add [pulsar-manager](https://github.com/apache/pulsar-manager) to helm chart
- Replace pulsar-dashboard with pulsar-manager
- For now, we deprecate pulsar-dashboard; in later versions, `pulsar-manager` can replace `pulsar-dashboard`.
Signed-off-by: xiaolong.ran <rxl@apache.org>
Fixes #5687
### Motivation
When the user wants to add new keys via env vars, the addition fails if no prefix is added.
Currently, new keys from env vars are applied through the [apply-config-from-env.py](https://github.com/apache/pulsar/commits/master/docker/pulsar/scripts/apply-config-from-env.py) script; to ensure that the env vars set by the user take effect, add the prefix (**PULSAR_PREFIX_**) to all new keys.
### Modifications
- Add prefix for new keys from Env
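To illustrate the convention, here is a minimal sketch (not the real `apply-config-from-env.py` script) of the behavior described above: env vars carrying the `PULSAR_PREFIX_` prefix may introduce keys that are absent from the existing configuration, while unprefixed vars only override keys that already exist.

```python
# Minimal sketch of the PULSAR_PREFIX_ convention; dicts stand in for
# the real configuration files and process environment.
PREFIX = "PULSAR_PREFIX_"

def apply_env(conf, env):
    """Return a new config dict with env vars applied."""
    updated = dict(conf)
    for name, value in env.items():
        if name.startswith(PREFIX):
            # Prefixed vars are appended even if the key is new.
            updated[name[len(PREFIX):]] = value
        elif name in updated:
            # Unprefixed vars only take effect for keys that already exist.
            updated[name] = value
    return updated
```

For example, `PULSAR_PREFIX_newSetting=on` adds `newSetting` to the config, while an unprefixed var for a missing key is silently ignored.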
Allows opting in to an ingress on top of the dashboard service.
This is very important in production-grade deployments where
you want to expose the Pulsar dashboard through an easy-to-remember URL.
* [documentation][deploy] Update deployment instructions for deploying to Minikube
* Enable functions workers
* [documentation][deploy] Improve helm deployment script to deploy Pulsar to minikube
### Changes
- update the helm scripts: bookie/autorecovery/broker pods should wait until metadata is initialized
- disable `autoRecovery` on bookies since we start `AutoRecovery` in separate pods
- enable function worker on brokers
- provide a values file for minikube
- update documentation for using helm chart to deploy a cluster to minikube
* move the service type definition to values file