346 Commits

Author SHA1 Message Date
Oscar Espitia
552e86c663
Remove newline from secret tokens generation (#18)
### Motivation

The secret resource generation was appending a newline (`\n`) at the end of the JWT token strings. From my understanding, this is not an issue inside Pulsar, likely because it trims the JWT contents programmatically. However, when setting Pulsar as a sink destination for [Vector](https://vector.dev/) (Vector produces messages into Pulsar), I noticed the token was always invalid due to this extra newline.

### Modifications

Remove the newline from secret token generation by piping through the `tr` utility. Granted, this is not the nicest way to go about it, but given that the contents are JWT strings it appears to do the job just fine while keeping everything else working (e.g. producing/consuming, as well as other components like Prometheus). Please advise if you have any concerns or suggestions.
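A minimal sketch of the approach, assuming hypothetical variable and secret names (the exact commands in `prepare_helm_release.sh` differ):

```bash
# Generate a JWT and strip the trailing newline before storing it in the secret.
TOKEN=$(bin/pulsar tokens create --secret-key "file://${SECRET_KEY_FILE}" --subject "${ROLE}" | tr -d '\n')

kubectl create secret generic "${RELEASE}-token-${ROLE}" \
  --namespace "${NAMESPACE}" \
  --from-literal="TOKEN=${TOKEN}"
```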
2020-06-09 22:40:27 -07:00
Luke Stephenson
5914996e89
Removing reference to bastion pod (#14)
Has otherwise been cleaned up in f64c396906e9f99999ec14bd3ac7336e6609a86a
2020-05-29 17:33:54 -07:00
Matteo Merli
6e9ad25ba3
Use regular 2-2-2 BK client settings by default (#13)
Using write=3 and ack=2 leads to unbounded memory usage in the BK client when one bookie is slow or failing, so we should avoid it by default.
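For reference, a hedged sketch of what the 2-2-2 defaults look like as broker settings (these are the standard `broker.conf` keys; the chart's values-file key names may differ):

```yaml
# Ensemble of 2, write quorum of 2, ack quorum of 2.
managedLedgerDefaultEnsembleSize: "2"
managedLedgerDefaultWriteQuorum: "2"
managedLedgerDefaultAckQuorum: "2"
```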
2020-05-21 21:52:53 -07:00
Luke Stephenson
96dbab924f
Support load balance source ip range (#12)
Grafana and Pulsar Manager now support restricting the source IP ranges allowed to reach them.
2020-05-18 01:24:58 -07:00
Luke Stephenson
45fd2c6878
symmetric / create_namespace flags were only working if last argument (#11)
Move the defaults outside the while loop so they are not reset on every iteration.

Fixes #10
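A hypothetical sketch of the pattern behind the fix (the flag names are illustrative, not the script's actual options): defaults are set once before the parsing loop instead of being reassigned on every iteration.

```bash
# Defaults belong here, before the argument-parsing loop.
symmetric=false
create_namespace=false

while [[ $# -gt 0 ]]; do
  case "$1" in
    -s|--symmetric)        symmetric=true;        shift ;;
    -c|--create-namespace) create_namespace=true; shift ;;
    *)                     shift ;;
  esac
done
```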
2020-05-14 00:35:48 -07:00
Oscar Espitia
06652d7e8b
Decouple credentials from key secrets generation (#7)
Fixes #6 

### Motivation

As suggested at https://pulsar.apache.org/docs/en/helm-deploy/#prepare-the-helm-release, the `prepare_helm_release.sh` script provided with this Helm chart can create a credentials secret resource, and:
> The username and password are used for logging into Grafana dashboard and Pulsar Manager.

However, I haven't been able to make use of such a feature for a number of reasons:

1. This secret doesn't seem to affect the `pulsar-manager-deployment.yaml` definition. Instead, `./templates/pulsar-manager-admin-secret.yaml` seems to be the one providing the credentials for the Pulsar Manager UI (with the added possibility to override them via values.yaml at `pulsar_manager.admin.user/password`).

2. Using the Pulsar chart as a dependency of an umbrella chart (my current use case) adds extra hassle that makes it very hard to have all resources follow the same naming structure, so some resources are never deployed successfully; e.g. `./templates/grafana-deployment.yaml` will complain that it couldn't find the secret created by the bash script. Attempting to fix this via the `-k` flag passed to the script will give the JWT secret tokens a name the broker doesn't expect, and so on.

### Modifications

Decouple the Grafana credentials from Pulsar Manager via a new secret template at `./charts/pulsar/templates/grafana-admin-secret.yaml`.

Add credential overrides via values.yaml in the same way as pulsar_manager (`grafana.admin.user/password`), and remove the secret-resource manipulation from the bash scripts (`cleanup_helm_release.sh` and `prepare_helm_release.sh`). A sketch of the override follows.
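A hedged sketch of the values-level override, mirroring the existing `pulsar_manager.admin` block (the default values shown are illustrative):

```yaml
grafana:
  admin:
    user: pulsar
    password: pulsar
```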

### Verifying this change

- [x] Make sure that the change passes the CI checks.
2020-04-29 01:27:16 -07:00
Oscar Espitia
4009c04811
Update grafana & prometheus docker images (#8)
### Motivation

As seen below, there is a fix for one of the Grafana dashboards that are currently broken in this project (available since version 0.0.5):
- [The Pulsar-topics metrics can't load in Grafana](https://github.com/streamnative/charts/issues/49)

Additionally, upgrading Prometheus to the latest version improves performance as seen here: https://prometheus.io/blog/2017/11/08/announcing-prometheus-2-0

### Modifications

Bring Docker images to their most up-to-date version (streamnative/apache-pulsar-grafana-dashboard-k8s:0.0.6, prom/prometheus:v2.17.2) to fix the following issues:
- https://github.com/streamnative/charts/issues/49 <- fixes Pulsar-topics metrics failure to load
- https://github.com/prometheus/prometheus/pull/2859 <- prevents privilege-escalation vulnerabilities by defaulting to the `nobody` user

**Note**: upgrading to the latest version of Prometheus (currently v2.17.2) caused the pod to fail with the following error: `open /prometheus/queries.active: permission denied`. To fix this issue I followed the instructions from these two comments (a sketch follows the list):

- [Permission denied UID/GID solution](https://github.com/prometheus/prometheus/issues/5976#issuecomment-532942295)
- [Unable to create mmap-ed active query log securityContext fix](https://github.com/aws/eks-charts/issues/21#issuecomment-607031756)
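A sketch of the pod `securityContext` suggested by those comments (UID/GID 65534 corresponds to the `nobody` user; the exact values applied in the chart may differ):

```yaml
securityContext:
  runAsUser: 65534
  runAsNonRoot: true
  fsGroup: 65534
```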

### Verifying this change

- [x] Make sure that the change passes the CI checks.
2020-04-29 01:25:32 -07:00
Oscar Espitia
3e451fecb3
Fix namespace delete command in cleanup-helm-release.sh script (#5)
### Motivation

While making use of the scripts provided in this repo to prepare helm releases, I noticed that providing the ```-d``` flag (delete namespace) for the ```./scripts/pulsar/cleanup_helm_release.sh``` would always fail claiming that the **namespace already exists**. Upon closer examination, I noticed that the kubectl command to delete the provided namespace is actually attempting to create it instead.

### Modifications

I've updated the script to delete the namespace (changing `kubectl create namespace ${namespace}` to `kubectl delete namespace ${namespace}`).

### Verifying this change

I'm not sure what possible verifications I can provide for this PR. Please advise.
2020-04-27 00:11:45 -07:00
Sijie Guo
0338d17b89
Publish chart index to gh-pages branch (#3)
*Motivation*

Release helm chart when new tags are created
2020-04-21 02:44:58 -07:00
Sijie Guo
47f05b7650
Add github action to check license header (#2) v2.5.0 2020-04-21 00:23:01 -07:00
Sijie Guo
7dcf1c7aca
Enable CI for pulsar chart (#1) 2020-04-21 14:14:14 +08:00
Sijie Guo
f38711d581
Merge branch 'master' of https://github.com/apache/pulsar 2020-04-20 22:55:34 -07:00
Sijie Guo
8410c0d4c4
Initialize the Pulsar Helm chart 2020-04-20 22:31:15 -07:00
Yong Zhang
977999f9a0 Make secret name consistent (#6739)
* Make secret name consistent
---

*Motivation*

Make the secret names consistent: all secret names should use the release name as the prefix.

* Update ci script

* Fix the file path

* Fix path

* Fix env

Co-authored-by: Sijie Guo <sijie@apache.org>
2020-04-16 23:59:26 +08:00
Sijie Guo
9e540ab791 Update Helm Chart Documentation (#6725)
*Motivation*

The current helm chart is lacking documentation. This pull request aims to add documentation.

*Changes*

- Update Helm chart documentation
- Add a get-started section with Helm chart
- Remove the documentation for deploying with raw YAML files.
2020-04-13 10:17:41 -07:00
Sijie Guo
f64c396906 Improve Helm chart (#6673)
* Improve Helm chart

- Support TLS for all components
- Support Authentication & Authorization (TLS)
- Add CI for different cluster settings
2020-04-08 11:20:01 -07:00
Sijie Guo
19ed28a330 Remove deprecated -XX:+AggressiveOpts (#6689)
*Motivation*

-XX:+AggressiveOpts is deprecated in JDK11
2020-04-07 17:55:04 -07:00
Sijie Guo
cbc1c68e91 Remove PULSAR_PREFIX for k8s yaml and helm values file (#6671)
*Motivation*

In versions older than 2.5.0, PULSAR_PREFIX is used for appending settings
that don't exist in existing configuration files.

*Modifications*

Remove `PULSAR_PREFIX` for backward compatibility
2020-04-06 10:43:51 -07:00
Kévin Dunglas
6a2d9a1091 Fix an error in the Helm chart (#6665) 2020-04-03 10:14:24 -07:00
John Harris
4efddf92c5 [Issue 6355][HELM] autorecovery - could not find or load main class (#6373)
This applies the recommended fix from
https://github.com/apache/pulsar/issues/6355#issuecomment-587756717

Fixes #6355

### Motivation

This PR corrects the configmap data which was causing the autorecovery pod to crashloop
with `could not find or load main class`

### Modifications

Updated the configmap var data per [this comment](https://github.com/apache/pulsar/issues/6355#issuecomment-587756717) from @sijie
2020-02-21 22:07:10 -08:00
liyuntao
2ee5fb61df Explicitly set env 'BOOKIE_MEM' and 'BOOKIE_GC' in values-mini.yaml (#6340)
Fixes #6338

### Motivation
This started while I was using Helm on my local Minikube and noticed that there's a mismatch between the `values-mini.yaml` and `values.yaml` files. At first I thought it was a copy/paste error, so I created #6338.

Then I looked into the details of how these env vars [were used](28875d5abc/conf/bkenv.sh (L36)) and found that it's OK to use `PULSAR_MEM` as an alternative. But it introduces problems:
1. Since `BOOKIE_GC` was not defined, the default [BOOKIE_EXTRA_OPTS](28875d5abc/conf/bkenv.sh (L39)) ends up using the default value of `BOOKIE_GC`, which overrides the JVM parameters defined earlier in `PULSAR_MEM`.
2. It may cause problems when the bootstrap scripts change in later development; it is better to make these settings explicit.

So I created this PR to solve these latent problems.

### Modifications

As mentioned above, I've made the following modifications (a sketch of the resulting values follows the list):
1. Make `BOOKIE_MEM` and `BOOKIE_GC` explicit in the `values-mini.yaml` file, matching the format used in the `values.yaml` file.
2. Remove all GC-log-printing args, given the resource constraints of the Minikube environment. The removed flags are `-XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCApplicationStoppedTime -XX:+PrintHeapAtGC -verbosegc -XX:G1LogLevel=finest`.
3. Leave `PULSAR_PREFIX_dbStorage_rocksDB_blockCacheSize` empty as before, since [conf/standalone.conf#L576](df15210941/conf/standalone.conf (L576)) says it uses 10% of the direct memory size by default.
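A hedged sketch of the resulting `values-mini.yaml` shape (key placement and the concrete JVM flags are illustrative; the GC-log flags listed above are intentionally omitted):

```yaml
bookkeeper:
  configData:
    BOOKIE_MEM: >
      -Xms128m -Xmx256m -XX:MaxDirectMemorySize=256m
    BOOKIE_GC: >
      -XX:+UseG1GC -XX:MaxGCPauseMillis=10
```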
2020-02-17 18:41:06 +08:00
roman-popenov
a6d1f86974 [Issue-6143][helm]: Add the rbac policy to give the brokers permissions to deploy functions (#6191)
### Motivation
Fixes feature enhancement request #6143:
Currently, quite a few undocumented steps need to be performed manually to make sure that functions can be submitted as pods in the K8s runtime environment. It would be much better if this process were automated.

#### Proposed solution:
Automate this process via `helm install` and update the Helm charts with the required templates.

### Modifications

I've added an additional `functionsAsPods` field under the extra components in the values file (a sketch follows below). If it is set to `yes`, a `serviceAccount` is added to the broker deployment, along with the RBAC policy that gives the brokers permission to deploy functions; the policies can be found in the new `broker-rbac.yaml` template file. It also changes the `functions_worker` settings and sets the function runtime factory setting found inside `broker-configmap.yaml`.
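A hedged sketch of the new switch (key placement is illustrative):

```yaml
extra:
  functionsAsPods: yes
```

When the switch is on, the broker deployment template attaches a service account and the new `broker-rbac.yaml` renders the RBAC policy that lets brokers create function pods.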
### Verifying this change
1) Set `functionsAsPods: yes` inside the Helm values yaml file.
2) Follow the instructions for deploying with Helm and run:
`helm install pulsar --values pulsar/values-mini.yaml ./pulsar/`.
3) Wait until all the services are up and running.
4) Set up a tenant and namespace.
5) Create a function, a sink, and a source, and submit them using the CLI to make sure the pods run alongside the Pulsar cluster. In addition, set up a flow where data goes from the source into topics, is processed by a function, and the sink outputs the data.
6) Push data into the cluster through the source and make sure it comes out of the sink into the destination. There shouldn't be any errors in the logs of the brokers, bookies, sources, sinks, or functions.

#### Modules affected:
The changes in this PR affect deployment using the Helm charts. Now, if the flag `functionsAsPods` is set to `yes` in the `values.yaml` file, functions run as pods.

### Documentation
Currently, the documentation explaining the Helm chart deployment process is lacking and should be updated.
2020-02-13 13:45:31 -08:00
SakaSun
7abb297a6b [Helm] Pulsar Manager does not work if Pulsar authentication is enabled (#6315)
Fixes #6314

### Motivation

Pulsar Manager does not work if Pulsar authentication is enabled.

### Modifications

`pulsar-manager-configmap.yaml` was created to allow configuring the environment properties in values.yaml.
2020-02-13 13:39:32 -08:00
roman-popenov
4d00b385ac [deployment][helm] Add Grafana ingress template (#6280)
### Motivation
Expose Grafana via an ingress controller so that it can be reached through a load balancer.

#### Proposed solution:
Create an ingress template for Grafana so that it is automatically picked up if an ingress controller instance is running in the cluster. The alternatives are to expose Grafana as a NodePort or as a LoadBalancer service.

### Modifications
Added a `grafana-ingress.yaml` template and an `ingress` section for Grafana in the values file (see the sketch below).
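A hedged sketch of the new values section (field names and defaults are illustrative):

```yaml
grafana:
  ingress:
    enabled: true
    host: grafana.example.com
    path: /grafana
```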

### Verifying this change
1) Set ingress to `true` for Grafana in the values file and provide a hostname. Currently tested with NGINX; another ingress controller can be used, but the ingress controller class in the template will need to be changed accordingly.

2) Add the NGINX Helm repository:

```bash
helm repo add nginx-stable https://helm.nginx.com/stable
helm repo update
```
3) Install with Helm 3:

```bash
helm install nginix-ingress-crl nginx-stable/nginx-ingress
```

4) Follow the instructions for deploying with Helm and run:
`helm install pulsar --values pulsar/values-mini.yaml ./pulsar/`.

5) Wait until all the services are up and running.

6) Verify that Grafana is accessible via the URL.


**Path settings**

Currently, the path setting defaults to `/grafana`. For that to work, the NGINX configuration (`nginx.conf`) should have the `grafana` sub-path enabled; see https://grafana.com/docs/grafana/latest/installation/behind_proxy/.

To avoid having to mess with the NGINX configuration files, `path` can be changed to `/`, but this path might conflict with other services that are being proxied in the cluster.

#### Modules affected:
The changes in this PR affect deployment using the Helm charts: when ingress is enabled for Grafana in the values file, an ingress resource is created for it.

### Documentation
This PR adds ingress capability for Grafana, and this should be documented.
2020-02-10 00:09:56 -08:00
roman-popenov
ef099c96d2 [ISSUE-6131]: Ensure JVM memory and GC options are set for bookie (#6201)
### Motivation
Fixes #6131 (caused by #5675):

When upgrading an existing 2.4.1 bookie cluster to 2.5.0 on Kubernetes, the bookie fails to start with the following exception during initialization: `io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 2147483648, max: 2147483648)`. This is caused by the fact that the bookie environment variables `BOOKIE_MEM` and `BOOKIE_GC` defined in conf/bkenv.sh have no effect, and the default values are always used.

#### Proposed solution:
Set `BOOKIE_MEM` and `BOOKIE_GC` in the Helm deployment charts, default to `PULSAR_MEM` if the `BOOKIE` settings are not set, and fall back to the built-in defaults if none of those environment variables are set.
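A sketch of that fallback chain as it could appear in `conf/bkenv.sh` (the concrete default flags are illustrative):

```bash
# Prefer BOOKIE_* if set, then fall back to the PULSAR_* values, then to built-in defaults.
BOOKIE_MEM=${BOOKIE_MEM:-${PULSAR_MEM:-"-Xms2g -Xmx2g -XX:MaxDirectMemorySize=2g"}}
BOOKIE_GC=${BOOKIE_GC:-${PULSAR_GC:-"-XX:+UseG1GC -XX:MaxGCPauseMillis=10"}}
```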

#### Changes made
Updated the Helm chart `values.yaml` and `values-mini.yaml`, along with the `bkenv.sh` configuration script.

### Documentation
Currently, the documentation explaining the deployment process and how to change settings is lacking and needs to be updated.
2020-02-07 17:15:53 -08:00
ericpsimon
e760ae3118 Fix misspelling of tolarations. Correctly spelled as tolerations. (#6265) 2020-02-07 09:45:56 -08:00
Thomas Memenga
13dabe6edf add missing check to dashboard-ingress (helm chart) (#6160)
### Motivation

If you deploy Pulsar using the Helm chart and disable the dashboard with

```
extras:
  dashboard: no

```

but have the dashboard ingress set to true

```
dashboard:
  ingress:
    enabled: true
```

the Helm chart will create an ingress that points to a non-existent service, because the dashboard itself was not deployed.


### Modifications

I've added the same check that is already in place in dashboard-service and dashboard-deployment (sketched below).
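A hedged sketch of the guard (the template keeps its existing ingress-enabled condition; only the wrapper shown here is new, and the exact value path may differ):

```yaml
{{- if .Values.extras.dashboard }}
# ... existing dashboard Ingress manifest, unchanged ...
{{- end }}
```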

### Verifying this change

I don't know of any automated tests, so I tested it manually. In the end it's the same `if` that is already in place in dashboard-service and dashboard-deployment.


### Does this pull request potentially affect one of the following parts:

Affects deployment via the Helm chart. An unwanted ingress object is suppressed.

### Documentation

No documentation needed.
2020-02-01 00:07:42 -08:00
roman-popenov
97ed16d2c6 [Issue-5958][helm]: Fixing templates for helm deployment (#6148)
Motivation:
Fixes #5958: 

The following error appears when trying to deploy Pulsar using helm and values-mini.yaml: 

```unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Deployment.spec.template.spec.containers[0]): unknown field "requests" in io.k8s.api.core.v1.Container``` 

Cause:
Mistake in the `pulsar-manager-deployment.yaml` deployment file:

First, line **63** should be:
`{{- if .Values.pulsar_manager.resources }}` and it is currently `{{- if .Values.grafana.resources }}`

There is also a mistake at line **65**:
`{{ toYaml .Values.grafana.resources | indent 10 }}` should be `{{ toYaml .Values.pulsar_manager.resources | indent 12 }}`
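Putting the two corrections together, the resources block in `pulsar-manager-deployment.yaml` ends up looking roughly like this (the surrounding lines are reconstructed from the description above):

```yaml
{{- if .Values.pulsar_manager.resources }}
          resources:
{{ toYaml .Values.pulsar_manager.resources | indent 12 }}
{{- end }}
```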

Modifications:
Changed values in `values.yaml` and `templates/pulsar-manager-deployment.yaml`

Test:
Deploy the application into a kubernetes local cluster with
`helm install pulsar-cluster --values pulsar/values-mini.yaml pulsar`

AND

`helm install pulsar-cluster --values pulsar/values.yaml pulsar`

Documentation:
Does this pull request introduce a new feature? - **No**
2020-01-31 23:56:09 -08:00
roman-popenov
56e0d05e25 [Issue-5994]: Start proxy pods when at least one broker pod is running (#6158)
### Motivation
Fixes #5994:
If the proxy service comes up before the brokers are up and reachable, there will be HTTP 403 errors when running `bin/pulsar-admin` commands from inside the proxy pod.
 
The proxy will also not be able to connect to the brokers when data is pushed through the binary port, with the following error:
```bash
Caused by: org.apache.pulsar.broker.service.BrokerServiceException$PersistenceException: org.apache.bookkeeper.mledger.ManagedLedgerException: Not enough non-faulty bookies available
	... 14 more
Caused by: org.apache.bookkeeper.mledger.ManagedLedgerException: Not enough non-faulty bookies available
22:11:07.633 [pulsar-web-32-6] INFO  org.eclipse.jetty.server.RequestLog - 172.17.0.6 - - [24/Jan/2020:22:11:07 +0000] "PUT /admin/v2/persistent/public/functions/assignments HTTP/1.1" 500 2528 "-" "Pulsar-Java-v2.5.0" 280
```

#### Workaround:
Restart the proxy pods once the broker pods are running.

#### Proposed solution:
Hold off starting the proxies until at least one broker is reachable in the cluster.

### Modifications

The changes are in the `proxy-deployment.yaml` Helm template file, which defines a new init container that runs before the proxy is started. The init container waits until a broker is reachable, using `nslookup` on the broker service with a 30-second sleep between retries, for up to as many retries as there are brokers.

An alternative solution that doesn't always work was `until nslookup broker-service; do sleep 2; done;`, but 403s would still sometimes occur (could have been a fluke, but I saw it happen once). A sketch of the init container follows.
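A hedged sketch of such an init container (the image, broker service name, and replica-count lookup are illustrative; the chart derives them from its templates and values):

```yaml
initContainers:
  - name: wait-broker-ready
    image: busybox
    command: ["sh", "-c"]
    args:
      - >-
        for i in $(seq 1 {{ .Values.broker.replicaCount }}); do
          nslookup {{ .Release.Name }}-broker && exit 0;
          sleep 30;
        done;
        exit 1
```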

### Verifying this change
1) Follow the instructions for deploying with Helm and run:
`helm install pulsar --values pulsar/values-mini.yaml ./pulsar/`.
2) Wait until all the services are up and running.
3) Connect to a proxy pod and run `bin/pulsar-admin broker-stats monitoring-metrics` - no 403 or permission errors should arise.
4) Set up a tenant and namespace.
5) Push data into a topic - there should be no errors in the proxy logs, and the client should be able to push data into the cluster through the proxies.
2020-01-31 23:52:11 -08:00
Kévin Dunglas
cad175d8e6 Fix typos in Helm chart and sync values-mini with values (#6009)
### Motivation

Fix typos and sync values-mini with values

### Modifications

Comments only.
2020-01-11 21:48:44 +08:00
冉小龙
f76253d699 [Issue:5818]Set the startup order of broker and bookie (#5957)
Signed-off-by: xiaolong.ran <rxl@apache.org>

Set the startup order of broker and bookie
2020-01-02 15:52:39 +08:00
SakaSun
cea744e9a7 [Issue 5857][Helm Chart] - Support an existing Storage Class with storageClassName (#5860)
Fixes #5857 

### Motivation

With the current approach for specifying a storage class in the persistent volume claim, it's not possible to customize the provisioner parameters. If the property 'storageClass' is declared, the chart always creates a new storage class with hardcoded parameters.

### Modifications

A property 'storageClassName' was added to support using an existing storage class (sketched below).
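A hedged sketch of how the new property might be used in the values file (the key path is illustrative):

```yaml
bookkeeper:
  volumes:
    journal:
      # Reference an existing StorageClass instead of letting the chart create one.
      storageClassName: existing-ssd
```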

### Verifying this change

This change is a trivial rework / code cleanup without any test coverage.
2019-12-19 21:12:15 -08:00
Julio H Morimoto
1b680d3a54 Provide better defaults for ingress tls and secretName configuration. (#5859)
This patch allows TLS to be enabled with an empty secretName, for ingress controllers that can provide a default certificate.

Fixes #5858, provides better defaults for the Ingress object and allows TLS to be enabled with an empty secretName.

### Motivation

The current helm chart can create an Ingress with TLS, but it requires a secretName to be added. This is not an Ingress requirement and, in some cases, the ingress controller can provide a default certificate when the Ingress object does not declare one.

### Modifications

Modifications to `values.yaml` and `dashboard-ingress.yaml` address the issue (see the sketch below).
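A hedged sketch of the resulting template logic in `dashboard-ingress.yaml` (the value paths are illustrative):

```yaml
{{- if .Values.dashboard.ingress.tls.enabled }}
  tls:
    - hosts:
        - {{ .Values.dashboard.ingress.host }}
      {{- if .Values.dashboard.ingress.tls.secretName }}
      secretName: {{ .Values.dashboard.ingress.tls.secretName }}
      {{- end }}
{{- end }}
```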
2019-12-17 19:40:25 +08:00
冉小龙
44ce326879 Add pulsar-manager to helm chart (#5810)
Signed-off-by: xiaolong.ran <rxl@apache.org>

### Modifications

- Add [pulsar-manager](https://github.com/apache/pulsar-manager) to helm chart
- Replace pulsar-dashboard with pulsar-manager
  - Currently, we can deprecate pulsar-dashboard; in later versions, `pulsar-manager` can replace `pulsar-dashboard`.
2019-12-08 19:58:49 +08:00
冉小龙
76b45b46a2 [Issue:5687] Add prefix for new keys from Env (#5790)
Signed-off-by: xiaolong.ran <rxl@apache.org>

Fixes #5687 

### Motivation

When the user wants to add new keys via env vars, adding them fails if no prefix is added.

Currently, adding new keys from env vars goes through the [apply-config-from-env.py](https://github.com/apache/pulsar/commits/master/docker/pulsar/scripts/apply-config-from-env.py) script; to ensure the env vars set by the user take effect, the prefix (**PULSAR_PREFIX_**) is added to all new keys.

### Modifications

- Add the prefix for new keys from env vars (illustrated below)
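An illustrative example of the mechanism (the key shown is just an example): `apply-config-from-env.py` strips the prefix and appends the setting to the target configuration file, so even keys that don't already exist in the file can be set from the environment.

```bash
export PULSAR_PREFIX_allowAutoTopicCreation=false
bin/apply-config-from-env.py conf/broker.conf
# broker.conf now contains: allowAutoTopicCreation=false
```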
2019-12-06 18:07:08 +08:00
冉小龙
298f63483c [Issue:5787] Fix docs where creating a K8S cluster on Minikube fails (#5805)
Signed-off-by: xiaolong.ran <rxl@apache.org>

Fixes #5787 

### Motivation

When creating a K8S cluster on Minikube, depending on the local Minikube version, the installation fails with `--kubernetes-version=v1.10.5`.

### Modifications

- Remove `--kubernetes-version=v1.10.5` from the docs.
2019-12-06 16:19:37 +08:00
Chris Bartholomew
b5a7f0a2ac Fixing pod anti-affinity rules in Kubernetes files including the Helm chart (#5381) 2019-10-15 10:41:53 -07:00
Robert Moucha
13acfa4690 Fix typo in helm chart (#4875)
### Motivation
An incorrect value was being used in the Pulsar Helm template `autorecovery-deployment.yaml`.

### Modifications
Proper variable name set.

### Verifying this change
The fixed variable name is already set in `values.yaml` and `values-mini.yaml`.
This change is a trivial rework / code cleanup without any test coverage.

### Documentation
None needed.
2019-08-05 14:39:24 +08:00
Edward Xie
515c745648 Update ingress port from server to 80 (#4204) 2019-05-06 19:17:27 -07:00
Cristian
5a729d812f [Kubernetes] Added ingress resource to dashboard (#3996)
Allows opting in to an ingress on top of the dashboard service.

This is very important in production-grade deployments where
you want to expose the Pulsar dashboard through an easy-to-remember URL.
2019-04-08 20:09:31 +08:00
Cristian
b59167352c Fix typos (#3893) 2019-03-24 09:18:44 -07:00
Yifan Zhang
53ce119519 Option to not use rbac in helm deployment (#3082)
* option to not use rbac

* default value to match previous settings
2018-11-29 20:39:47 -08:00
Benjamin Huo
4cd61bfca8 Fix helm lint error for zookeeper-metadata.yaml (#2878)
### Motivation

The following error occurs when running:

helm lint pulsar/
==> Linting pulsar/
[INFO] Chart.yaml: icon is recommended
[ERROR] templates/: render error in "pulsar/templates/zookeeper-metadata.yaml": template: pulsar/templates/zookeeper-metadata.yaml:49:20: executing "pulsar/templates/zookeeper-metadata.yaml" at <.Values.zookeeper_me...>: can't evaluate field resources in type interface {}

### Modifications

Change `zookeeper_metadata` in deployment/kubernetes/helm/pulsar/templates/zookeeper-metadata.yaml to `zookeeperMetadata`.

### Result

helm lint pulsar/
==> Linting pulsar/
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, no failures
2018-10-29 13:00:00 -07:00
Victor
a59e7a27dd fixed zookeeper name references in helm charts. (#2525)
### Motivation

ZooKeeper failed to start because a wrong ZOOKEEPER_SERVERS value was set.

### Modifications

Changed the ZooKeeper name references to match how they were created.

### Result

ZooKeeper started successfully and the broker worked as expected.
2018-09-07 12:25:23 -07:00
Sijie Guo
483107e2b9 [documentation][deploy] Improve helm deployment script to deploy Pulsar to minikube (#2363)
* [documentation][deploy] Update deployment instructions for deploying to Minikube

* Enable functions workers

* [documentation][deploy] Improve helm deployment script to deploy Pulsar to minikube

### Changes

- update the helm scripts: bookie/autorecovery/broker pods should wait until metadata is initialized
- disable `autoRecovery` on bookies since we start `AutoRecovery` in separate pods
- enable function worker on brokers
- provide a values file for minikube
- update documentation for using helm chart to deploy a cluster to minikube

* move the service type definition to values file
2018-08-16 00:25:49 -07:00
Daniel Jorge
7cfbe4a415 Helm charts for deployment on GKE (#1993)
* Helm charts for deployment on GKE

* Repackaging helm charts under deployment/kubernetes/helm

* Formatting licences

* Removing cloud specific values to enable more generic deployments
2018-06-25 10:47:19 -07:00