Motivation:
Fixes #5958:
The following error appears when trying to deploy Pulsar using helm and values-mini.yaml:
```unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Deployment.spec.template.spec.containers[0]): unknown field "requests" in io.k8s.api.core.v1.Container```
Cause:
The mistake is in the `pulsar-manager-deployment.yaml` deployment file:
Line **63** should be `{{- if .Values.pulsar_manager.resources }}`, but it is currently `{{- if .Values.grafana.resources }}`.
There is also a mistake at line **65**:
`{{ toYaml .Values.grafana.resources | indent 10 }}` should be `{{ toYaml .Values.pulsar_manager.resources | indent 12 }}`
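For reference, a minimal sketch of what the corrected block in `pulsar-manager-deployment.yaml` looks like after the fix (the surrounding container fields are illustrative, not copied from the chart):
```yaml
      containers:
        - name: pulsar-manager          # illustrative; the real template derives this name
          # ... image, ports, env omitted ...
          {{- if .Values.pulsar_manager.resources }}
          resources:
{{ toYaml .Values.pulsar_manager.resources | indent 12 }}
          {{- end }}
```
With `resources:` at an indentation of 10 spaces, `indent 12` places the requests/limits one level deeper, which is why the previous `indent 10` made `requests` appear as an unknown field of the container.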
Modifications:
Changed values in `values.yaml` and `templates/pulsar-manager-deployment.yaml`
Test:
Deploy the application into a local Kubernetes cluster with
`helm install pulsar-cluster --values pulsar/values-mini.yaml pulsar`
AND
`helm install pulsar-cluster --values pulsar/values.yaml pulsar`
Documentation:
Does this pull request introduce a new feature? - **No**
### Motivation
Fixes #5994:
If the proxy service comes up before the brokers are up and reachable, there will be HTTP 403 errors when running `bin/pulsar-admin` commands from inside the proxy pod.
The proxy will also not be able to connect to the brokers when data is pushed through the binary port, with the following error:
```bash
Caused by: org.apache.pulsar.broker.service.BrokerServiceException$PersistenceException: org.apache.bookkeeper.mledger.ManagedLedgerException: Not enough non-faulty bookies available
... 14 more
Caused by: org.apache.bookkeeper.mledger.ManagedLedgerException: Not enough non-faulty bookies available
22:11:07.633 [pulsar-web-32-6] INFO org.eclipse.jetty.server.RequestLog - 172.17.0.6 - - [24/Jan/2020:22:11:07 +0000] "PUT /admin/v2/persistent/public/functions/assignments HTTP/1.1" 500 2528 "-" "Pulsar-Java-v2.5.0" 280
```
#### Workaround:
Restart the proxy pods once the broker pods are running
#### Proposed solution:
Hold off starting the proxies until at least one broker is reachable in the cluster.
### Modifications
Changes are inside the `proxy-deployment.yaml` Helm template file, which now defines a new init container that runs before the proxy is started. The init container waits until a broker is reachable by running nslookup against the broker service, with a sleep of 30 seconds between retries, for at most as many retries as there are brokers.
An alternative solution that doesn't always work was `until nslookup broker-service; do sleep 2; done;`, but a 403 would still sometimes occur (it could have been a fluke, but I saw it happen once).
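A sketch of the init container described above, assuming a busybox image and hypothetical value keys and helper names for the broker service and replica count (the actual template derives these from the chart's values):
```yaml
      initContainers:
        - name: wait-broker-ready
          image: busybox                  # illustrative; any image providing nslookup would do
          command: ["sh", "-c"]
          args:
            - |
              # Retry up to the number of brokers, sleeping 30s between attempts.
              for i in $(seq 1 {{ .Values.broker.replicaCount }}); do
                nslookup {{ template "pulsar.fullname" . }}-broker && exit 0
                sleep 30
              done
              exit 1
```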
### Verifying this change
1) Follow the instructions on how to deploy with Helm and run:
`helm install pulsar --values pulsar/values-mini.yaml ./pulsar/`.
2) Wait until all the services are up and running.
3) Connect to the proxy pod and run `bin/pulsar-admin broker-stats monitoring-metrics` - no 403 or permission errors should occur
4) Set up a tenant and a namespace
5) Push data into a topic - there should be no errors in the proxy logs, and the client should be able to push data into the cluster through the proxies
Fixes #5857
### Motivation
With the current approach for specifying the storage class in a persistent volume claim, it is not possible to customize the provisioner parameters. If the property `storageClass` is declared, the chart always creates a new storage class with hardcoded parameters.
### Modifications
A property `storageClassName` was added to support using an existing storage class.
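A minimal sketch of how a claim template could pick up an existing class through the new property, assuming a value path like `.Values.bookkeeper.volumes.journal.storageClassName` (the actual key layout in the chart may differ):
```yaml
  volumeClaimTemplates:
    - metadata:
        name: journal                     # illustrative claim name
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi                 # illustrative size
        {{- if .Values.bookkeeper.volumes.journal.storageClassName }}
        storageClassName: "{{ .Values.bookkeeper.volumes.journal.storageClassName }}"
        {{- end }}
```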
### Verifying this change
This change is a trivial rework / code cleanup without any test coverage.
This patch allows TLS to be enabled with an empty secretName, so that ingress controllers that can provide a default certificate are able to do so.
Fixes #5858, provides better defaults for the Ingress object, and allows TLS to be enabled with an empty secretName.
### Motivation
The current helm chart can create an Ingress with TLS, but it requires a secretName to be added. This is not an Ingress requirement and, in some cases, the ingress controller can provide a default certificate when the Ingress object does not declare one.
### Modifications
Modifications include `values.yaml` and `dashboard-ingress.yaml` to address the issue.
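A sketch of what the TLS section of `dashboard-ingress.yaml` could look like so that `secretName` is only emitted when it is non-empty, letting the ingress controller fall back to its default certificate (the value keys shown are illustrative):
```yaml
  {{- if .Values.dashboard.ingress.tls.enabled }}
  tls:
    - hosts:
        - "{{ .Values.dashboard.ingress.host }}"
      {{- if .Values.dashboard.ingress.tls.secretName }}
      secretName: "{{ .Values.dashboard.ingress.tls.secretName }}"
      {{- end }}
  {{- end }}
```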
Signed-off-by: xiaolong.ran <rxl@apache.org>
### Modifications
- Add [pulsar-manager](https://github.com/apache/pulsar-manager) to helm chart
- Replace pulsar-dashboard with pulsar-manager
- Currently, we can deprecate pulsar-dashboard; in later versions, we can use `pulsar-manager` to replace `pulsar-dashboard`.
Signed-off-by: xiaolong.ran <rxl@apache.org>
Fixes #5687
### Motivation
When the user wants to add new keys through Env, adding them fails if no prefix is used.
Currently, adding new keys through Env uses the [apply-config-from-env.py](https://github.com/apache/pulsar/commits/master/docker/pulsar/scripts/apply-config-from-env.py) script; to ensure that the values set by the user take effect, the **PULSAR_PREFIX_** prefix has to be added to all new keys.
### Modifications
- Add prefix for new keys from Env
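As a hedged illustration of the convention (the key names and the ConfigMap framing are made up for this example), a setting that is not already present in `broker.conf` needs the prefix to take effect, while an existing key can be set directly:
```yaml
data:
  # Key already present in broker.conf: can be set as-is.
  managedLedgerDefaultEnsembleSize: "2"
  # Key that is new to broker.conf: it is only picked up when it carries the
  # PULSAR_PREFIX_ prefix, which is stripped before the key is written to the config.
  PULSAR_PREFIX_allowAutoTopicCreation: "true"
```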
### Motivation
An incorrect value is being used in the Pulsar Helm template `autorecovery-deployment.yaml`.
### Modifications
Set the proper variable name.
### Verifying this change
The corrected variable name is already set in `values.yaml` and `values-mini.yaml`.
This change is a trivial rework / code cleanup without any test coverage.
### Documentation
None needed.
Allows opting in to an ingress on top of the dashboard service.
This is very important in production-grade deployments where
you want to expose the Pulsar dashboard through an easy-to-remember URL.
### Motivation
The following error occurs when running `helm lint pulsar/`:
==> Linting pulsar/
[INFO] Chart.yaml: icon is recommended
[ERROR] templates/: render error in "pulsar/templates/zookeeper-metadata.yaml": template: pulsar/templates/zookeeper-metadata.yaml:49:20: executing "pulsar/templates/zookeeper-metadata.yaml" at <.Values.zookeeper_me...>: can't evaluate field resources in type interface {}
### Modifications
Change `zookeeper_metadata` to `zookeeperMetadata` in `deployment/kubernetes/helm/pulsar/templates/zookeeper-metadata.yaml`.
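For reference, a sketch of the corrected reference (indentation is illustrative), assuming the resources are exposed under `zookeeperMetadata` in `values.yaml`:
```yaml
          {{- if .Values.zookeeperMetadata.resources }}
          resources:
{{ toYaml .Values.zookeeperMetadata.resources | indent 12 }}
          {{- end }}
```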
### Result
helm lint pulsar/
==> Linting pulsar/
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, no failures
### Motivation
Zookeeper failed to start because a wrong ZOOKEEPER_SERVERS value was set.
### Modifications
Changed the references to the zookeeper names to match how they were created.
### Result
Zookeeper started successfully and the broker worked as expected.
* [documentation][deploy] Update deployment instructions for deploying to Minikube
* Enable functions workers
* [documentation][deploy] Improve helm deployment script to deploy Pulsar to minikube
### Changes
- update the helm scripts: bookie/autorecovery/broker pods should wait until metadata is initialized
- disable `autoRecovery` on bookies since we start `AutoRecovery` in separate pods
- enable function worker on brokers
- provide a values file for minikube
- update documentation for using helm chart to deploy a cluster to minikube
* move the service type definition to values file
* Helm charts for deployment on GKE
* Repackaging helm charts under deployment/kubernetes/helm
* Formatting licences
* Removing cloud specific values to enable more generic deployments