doc: improve documentation following the SMB driver repo
parent bef4ee7dbb
commit 1fed4f3cd8

Makefile (10 lines changed)
@@ -13,7 +13,7 @@
# limitations under the License.

CMDS=nfsplugin
-DEPLOY_FOLDER = ./deploy/kubernetes
+DEPLOY_FOLDER = ./deploy
CMDS=nfsplugin
PKG = github.com/kubernetes-csi/csi-driver-nfs
GINKGO_FLAGS = -ginkgo.v

@@ -93,7 +93,7 @@ push:

.PHONY: install-nfs-server
install-nfs-server:
-kubectl apply -f ./examples/nfs-server.yaml
+kubectl apply -f ./deploy/example/nfs-provisioner/nfs-server.yaml

.PHONY: install-helm
install-helm:

@@ -117,6 +117,6 @@ e2e-test:

.PHONY: create-example-deployment
create-example-deployment:
-kubectl apply -f ./examples/storageclass-nfs.yaml
-kubectl apply -f ./examples/deployment.yaml
-kubectl apply -f ./examples/statefulset.yaml
+kubectl apply -f ./deploy/example/storageclass-nfs.yaml
+kubectl apply -f ./deploy/example/deployment.yaml
+kubectl apply -f ./deploy/example/statefulset.yaml

README.md (76 lines changed)
@@ -1,6 +1,6 @@
# CSI NFS driver

-## Overview
+### Overview

This is a repository for [NFS](https://en.wikipedia.org/wiki/Network_File_System) [CSI](https://kubernetes-csi.github.io/docs/) Driver.
Currently it implements bare minimum of the [CSI spec](https://github.com/container-storage-interface/spec) and is in the alpha state

@@ -10,79 +10,34 @@ of the development.

| **nfs.csi.k8s.io** | K8s version compatibility | CSI versions compatibility | Dynamic Provisioning | Resize | Snapshots | Raw Block | AccessModes | Status |
|--------------------|---------------------------|----------------------------|----------------------|--------|-----------|-----------|--------------------------|--------|
-|master | 1.14 + | v1.0 + | no | no | no | no | Read/Write Multiple Pods | Alpha |
+|master | 1.14 + | v1.0 + | yes | no | no | no | Read/Write Multiple Pods | Alpha |
|v2.0.0 | 1.14 + | v1.0 + | no | no | no | no | Read/Write Multiple Pods | Alpha |
|v1.0.0 | 1.9 - 1.15 | v1.0 | no | no | no | no | Read/Write Multiple Pods | [deprecated](https://github.com/kubernetes-csi/drivers/tree/master/pkg/nfs) |

-## Requirements
+### Requirements

The CSI NFS driver requires Kubernetes cluster of version 1.14 or newer and
preexisting NFS server, whether it is deployed on cluster or provisioned
independently. The plugin itself provides only a communication layer between
resources in the cluster and the NFS server.

-## Install NFS CSI driver on a kubernetes cluster
-Please refer to [install NFS CSI driver](https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/docs/install-csi-driver.md).
+### Install NFS CSI driver on a kubernetes cluster
+Please refer to [install NFS CSI driver](./docs/install-csi-driver.md).

-## Example
+### Driver parameters
+Please refer to [`nfs.csi.k8s.io` driver parameters](./docs/driver-parameters.md)

-There are multiple ways to create a kubernetes cluster, the NFS CSI plugin
-should work invariantly of your cluster setup. Very simple way of getting
-a local environment for testing can be achieved using for example
-[kind](https://github.com/kubernetes-sigs/kind).
+### Examples
+- [Set up an NFS Server on a Kubernetes cluster](./deploy/example/nfs-provisioner/README.md)
+- [Basic usage](./deploy/example/README.md)

-There are also multiple different NFS servers you can use for testing of
-the plugin, the major versions of the protocol v2, v3 and v4 should be supported
-by the current implementation.
+### Troubleshooting
+- [CSI driver troubleshooting guide](./docs/csi-debug.md)

-The example assumes you have your cluster created (e.g. `kind create cluster`)
-and working NFS server (e.g. https://github.com/rootfs/nfs-ganesha-docker)
+## Kubernetes Development
+Please refer to [development guide](./docs/csi-dev.md)

-#### Deploy
-
-Deploy the NFS plugin along with the `CSIDriver` info.
-```console
-kubectl create -f deploy/kubernetes
-```
-
-#### Example Nginx application
-
-The [/examples/kubernetes/nginx.yaml](/examples/kubernetes/nginx.yaml) contains a `PersistentVolume`,
-`PersistentVolumeClaim` and an nginx `Pod` mounting the NFS volume under `/var/www`.
-
-You will need to update the NFS Server IP and the share information under
-`volumeAttributes` inside `PersistentVolume` in `nginx.yaml` file to match your
-NFS server public end point and configuration. You can also provide additional
-`mountOptions`, such as protocol version, in the `PersistentVolume` `spec`
-relevant for your NFS Server.
-
-```console
-kubectl create -f examples/kubernetes/nginx.yaml
-```
-
-## Running Kubernetes End To End tests on an NFS Driver
-
-First, stand up a local cluster `ALLOW_PRIVILEGED=1 hack/local-up-cluster.sh` (from your Kubernetes repo)
-For Fedora/RHEL clusters, the following might be required:
-```console
-sudo chown -R $USER:$USER /var/run/kubernetes/
-sudo chown -R $USER:$USER /var/lib/kubelet
-sudo chcon -R -t svirt_sandbox_file_t /var/lib/kubelet
-```
-If you are plannig to test using your own private image, you could either install your nfs driver using your own set of YAML files, or edit the existing YAML files to use that private image.
-
-When using the [existing set of YAML files](https://github.com/kubernetes-csi/csi-driver-nfs/tree/master/deploy/kubernetes), you would edit [csi-nfs-node.yaml](https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/deploy/kubernetes/csi-nfs-node.yaml#L45) files to include your private image instead of the default one. After editing these files, skip to step 3 of the following steps.
-
-If you already have a driver installed, skip to step 4 of the following steps.
-
-1) Build the nfs driver by running `make`
-2) Create NFS Driver Image, where the image tag would be whatever that is required by your YAML deployment files `docker build -t quay.io/k8scsi/nfsplugin:v2.0.0 .`
-3) Install the Driver: `kubectl create -f deploy/kubernetes`
-4) Build E2E test binary: `make build-tests`
-5) Run E2E Tests using the following command: `./bin/tests --ginkgo.v --ginkgo.progress --kubeconfig=/var/run/kubernetes/admin.kubeconfig`
-
-## Community, discussion, contribution, and support
+### Community, discussion, contribution, and support

Learn how to engage with the Kubernetes community on the [community page](http://kubernetes.io/community/).

@@ -91,7 +46,6 @@ You can reach the maintainers of this project at:
- [Slack channel](https://kubernetes.slack.com/messages/sig-storage)
- [Mailing list](https://groups.google.com/forum/#!forum/kubernetes-sig-storage)


### Code of conduct

Participation in the Kubernetes community is governed by the [Kubernetes Code of Conduct](code-of-conduct.md).

@@ -37,21 +37,21 @@ The following table lists the configurable parameters of the latest NFS CSI Driver

| Parameter | Description | Default |
|---------------------------------------------------|------------------------------------------------------------|-------------------------------------------------------------------|
-| `image.nfs.repository` | csi-driver-nfs docker image | mcr.microsoft.com/k8s/csi/nfs-csi |
-| `image.nfs.tag` | csi-driver-nfs docker image tag | latest |
+| `image.nfs.repository` | csi-driver-nfs docker image | gcr.io/k8s-staging-sig-storage/nfsplugin |
+| `image.nfs.tag` | csi-driver-nfs docker image tag | amd64-linux-canary |
| `image.nfs.pullPolicy` | csi-driver-nfs image pull policy | IfNotPresent |
-| `image.csiProvisioner.repository` | csi-provisioner docker image | mcr.microsoft.com/oss/kubernetes-csi/csi-provisioner |
-| `image.csiProvisioner.tag` | csi-provisioner docker image tag | v1.4.0 |
+| `image.csiProvisioner.repository` | csi-provisioner docker image | k8s.gcr.io/sig-storage/csi-provisioner |
+| `image.csiProvisioner.tag` | csi-provisioner docker image tag | v2.0.4 |
| `image.csiProvisioner.pullPolicy` | csi-provisioner image pull policy | IfNotPresent |
-| `image.livenessProbe.repository` | liveness-probe docker image | mcr.microsoft.com/oss/kubernetes-csi/livenessprobe |
-| `image.livenessProbe.tag` | liveness-probe docker image tag | v1.1.0 |
+| `image.livenessProbe.repository` | liveness-probe docker image | k8s.gcr.io/sig-storage/livenessprobe |
+| `image.livenessProbe.tag` | liveness-probe docker image tag | v2.1.0 |
| `image.livenessProbe.pullPolicy` | liveness-probe image pull policy | IfNotPresent |
-| `image.nodeDriverRegistrar.repository` | csi-node-driver-registrar docker image | mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar |
-| `image.nodeDriverRegistrar.tag` | csi-node-driver-registrar docker image tag | v1.2.0 |
+| `image.nodeDriverRegistrar.repository` | csi-node-driver-registrar docker image | k8s.gcr.io/sig-storage/csi-node-driver-registrar |
+| `image.nodeDriverRegistrar.tag` | csi-node-driver-registrar docker image tag | v2.0.1 |
| `image.nodeDriverRegistrar.pullPolicy` | csi-node-driver-registrar image pull policy | IfNotPresent |
| `serviceAccount.create` | whether to create the service account of csi-nfs-controller | true |
| `rbac.create` | whether to create the rbac of csi-nfs-controller | true |
| `controller.replicas` | the replicas of csi-nfs-controller | 2 |
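
As a quick illustration of how these parameters map onto a Helm values override, the sketch below pins the image-related settings to the defaults listed above. The file name `my-values.yaml` and the key nesting are assumptions based on the usual Helm dotted-key convention, not taken from the chart itself:

```yaml
# my-values.yaml -- illustrative override sketch; key nesting assumes the usual Helm convention
image:
  nfs:
    repository: gcr.io/k8s-staging-sig-storage/nfsplugin
    tag: amd64-linux-canary
    pullPolicy: IfNotPresent
  csiProvisioner:
    repository: k8s.gcr.io/sig-storage/csi-provisioner
    tag: v2.0.4
controller:
  replicas: 2
```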

## Troubleshooting

deploy/example/README.md (new file, 36 lines)
@@ -0,0 +1,36 @@
# CSI driver example

After the NFS CSI Driver is deployed in your cluster, you can follow this documentation to quickly deploy some examples.

You can use the NFS CSI Driver to provision Persistent Volumes statically or dynamically. Please read the [Kubernetes Persistent Volumes documentation](https://kubernetes.io/docs/concepts/storage/persistent-volumes/) for more information about static and dynamic provisioning.

Please refer to [driver parameters](../../docs/driver-parameters.md) for more detailed usage.

## Prerequisite

- [Set up an NFS Server on a Kubernetes cluster](./nfs-provisioner/README.md)
- [Install NFS CSI Driver](../../docs/install-csi-driver.md)

## Storage Class Usage (Dynamic Provisioning)

- Run the following commands to create a `StorageClass`, and then a `PersistentVolume` and a `PersistentVolumeClaim` dynamically.

```bash
# create StorageClass
kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/deploy/example/storageclass-nfs.yaml

# create PVC
kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/deploy/example/pvc-nfs-csi-dynamic.yaml
```
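
For reference, a dynamically provisioned claim of the kind created above might look like the following sketch. The claim name and size are illustrative, and the `nfs-csi` storage class name is an assumption (it matches the class name used by the nginx example elsewhere in this repository):

```yaml
# Illustrative PVC sketch; assumes the StorageClass created above is named "nfs-csi"
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-dynamic
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: nfs-csi
```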

## PV/PVC Usage (Static Provisioning)

- Run the following commands to create a `PersistentVolume` and a `PersistentVolumeClaim` statically.

```bash
# create PV
kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/deploy/example/pv-nfs-csi.yaml

# create PVC
kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/deploy/example/pvc-nfs-csi-static.yaml
```

@@ -14,21 +14,21 @@ spec:
apiVersion: apps/v1
kind: Deployment
metadata:
-  name: deployment-nfs-rwm
+  name: deployment-nfs
spec:
  replicas: 3
  selector:
    matchLabels:
-      name: deployment-nfs-rwm
+      name: deployment-nfs
  template:
    metadata:
-      name: deployment-nfs-rwm
+      name: deployment-nfs
      labels:
-        name: deployment-nfs-rwm
+        name: deployment-nfs
    spec:
      containers:
-        - name: deployment-nfs-rwm
+        - name: deployment-nfs
          image: mcr.microsoft.com/oss/nginx/nginx:1.17.3-alpine
          command:
            - "/bin/sh"

deploy/example/nfs-provisioner/README.md (new file, 36 lines)
@@ -0,0 +1,36 @@
# Set up an NFS Server on a Kubernetes cluster

After the NFS CSI Driver is deployed in your cluster, you can follow this documentation to quickly deploy some example applications. You can use the NFS CSI Driver to provision Persistent Volumes statically or dynamically. Please read Kubernetes Persistent Volumes for more information about static and dynamic provisioning.

There are multiple NFS servers you can use for testing the plugin; the major protocol versions (v2, v3, and v4) should be supported by the current implementation. This page shows how to set up an NFS server deployment on a Kubernetes cluster.

- To create an NFS provisioner on your Kubernetes cluster, run the following command:

```bash
kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/deploy/example/nfs-provisioner/nfs-server.yaml
```

- During the deployment, a new service `nfs-server` is created that exposes the NFS server endpoint `nfs-server.default.svc.cluster.local` and the share path `/`. You can point a `PersistentVolume` or a `StorageClass` at this endpoint and share.

- Deploy the NFS CSI driver; refer to [install NFS CSI driver](../../../docs/install-csi-driver.md).

- To check if the NFS server is working, we can statically create a PersistentVolume and a PersistentVolumeClaim, and mount them onto a sample pod:

```bash
kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/deploy/example/nfs-provisioner/nginx-pod.yaml
```

- To verify that the NFS server is functional, check the mount point from the example pod.

```bash
kubectl exec nginx-nfs-example -- bash -c "findmnt /var/www -o TARGET,SOURCE,FSTYPE"
```

- The output should look like the following:

```bash
TARGET    SOURCE                                   FSTYPE
/var/www  nfs-server.default.svc.cluster.local:/   nfs4
```

deploy/example/nfs-provisioner/nginx-pod.yaml (new file, 53 lines)
@@ -0,0 +1,53 @@
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nginx
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  mountOptions:
    - hard
    - nfsvers=4.1
  csi:
    driver: nfs.csi.k8s.io
    readOnly: false
    volumeHandle: unique-volumeid  # make sure it's a unique id in the cluster
    volumeAttributes:
      server: nfs-server.default.svc.cluster.local
      share: /
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-nginx
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  volumeName: pv-nginx
  storageClassName: ""
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-nfs-example
spec:
  containers:
    - image: nginx
      name: nginx
      ports:
        - containerPort: 80
          protocol: TCP
      volumeMounts:
        - mountPath: /var/www
          name: pvc-nginx
  volumes:
    - name: pvc-nginx
      persistentVolumeClaim:
        claimName: pvc-nginx

docs/csi-debug.md (new file, 30 lines)
@@ -0,0 +1,30 @@
## CSI driver debug tips

### Case#1: volume create/delete failed
- locate csi driver pod
```console
$ kubectl get pod -o wide -n kube-system | grep csi-nfs-controller
NAME                                     READY   STATUS    RESTARTS   AGE     IP            NODE
csi-nfs-controller-56bfddd689-dh5tk      5/5     Running   0          35s     10.240.0.19   k8s-agentpool-22533604-0
csi-nfs-controller-56bfddd689-sl4ll      5/5     Running   0          35s     10.240.0.23   k8s-agentpool-22533604-1
```
- get csi driver logs
```console
$ kubectl logs csi-nfs-controller-56bfddd689-dh5tk -c nfs -n kube-system > csi-nfs-controller.log
```
> Note: there could be multiple controller pods; if there are no helpful logs, try getting logs from the other controller pods.

### Case#2: volume mount/unmount failed
- locate the csi driver pod and figure out which pod does the actual volume mount/unmount

```console
$ kubectl get pod -o wide -n kube-system | grep csi-nfs-node
NAME                                     READY   STATUS    RESTARTS   AGE     IP            NODE
csi-nfs-node-cvgbs                       3/3     Running   0          7m4s    10.240.0.35   k8s-agentpool-22533604-1
csi-nfs-node-dr4s4                       3/3     Running   0          7m4s    10.240.0.4    k8s-agentpool-22533604-0
```

- get csi driver logs
```console
$ kubectl logs csi-nfs-node-cvgbs -c nfs -n kube-system > csi-nfs-node.log
```

docs/csi-dev.md (139 lines changed)
@@ -1,49 +1,114 @@
-# CSI Driver Development Guide
+# NFS CSI driver development guide

-## Build this project
-
-- Clone this repo
-
-```bash
-git clone https://github.com/kubernetes-csi/csi-driver-nfs
+## How to build this project
+- Clone repo
+```console
+$ mkdir -p $GOPATH/src/sigs.k8s.io/
+$ git clone https://github.com/kubernetes-csi/csi-driver-nfs $GOPATH/src/github.com/kubernetes-csi/csi-driver-nfs
```

-- Build CSI Driver
-
-```bash
-$ cd csi-driver-nfs
+- Build CSI driver
+```console
+$ cd $GOPATH/src/github.com/kubernetes-csi/csi-driver-nfs
$ make
```

-- Verify code before submitting PRs
-
-```bash
-make verify
-```
-## Test CSI Driver locally
-
-> WIP
-
-## Test CSI Driver in a Kubernetes Cluster
-
-- Build container image and push to DockerHub
-
-```bash
-# Run `docker login` first
-$ export LOCAL_USER=<DockerHub Username>
-$ make local-build-push
+- Run verification test before submitting code
+```console
+$ make verify
```

-- Replace `quay.io/k8scsi/nfsplugin:v2.0.0` in `deploy/kubernetes/csi-nfs-controller.yaml` and `deploy/kubernetes/csi-nfs-node.yaml` with `<YOUR DOCKERHUB ID>/nfsplugin:latest`
+## How to test CSI driver in local environment

-- Install driver locally
-
-```bash
-make local-k8s-install
+Install `csc` tool according to https://github.com/rexray/gocsi/tree/master/csc
+```console
+$ mkdir -p $GOPATH/src/github.com
+$ cd $GOPATH/src/github.com
+$ git clone https://github.com/rexray/gocsi.git
+$ cd rexray/gocsi/csc
+$ make build
```

-- Uninstall driver
-
-```bash
-make local-k8s-uninstall
+#### Start CSI driver locally
+```console
+$ cd $GOPATH/src/github.com/kubernetes-csi/csi-driver-nfs
+$ ./_output/nfsplugin --endpoint tcp://127.0.0.1:10000 --nodeid CSINode -v=5 &
```

+#### 0. Set environment variables
+```console
+$ cap="1,mount,"
+$ volname="test-$(date +%s)"
+$ volsize="2147483648"
+$ endpoint="unix:///tmp/csi.sock"
+$ target_path="/tmp/targetpath"
+$ params="server=127.0.0.1,share=/"
+```

+#### 1. Get plugin info
+```console
+$ csc identity plugin-info --endpoint "$endpoint"
+"nfs.csi.k8s.io"  "v2.0.0"
+```

+#### 2. Create a new nfs volume
+```console
+$ value="$(csc controller new --endpoint "$endpoint" --cap "$cap" "$volname" --req-bytes "$volsize" --params "$params")"
+$ sleep 15
+$ volumeid="$(echo "$value" | awk '{print $1}' | sed 's/"//g')"
+$ echo "Got volume id: $volumeid"
+```

+#### 3. Publish an nfs volume
+```
+$ csc node publish --endpoint "$endpoint" --cap "$cap" --vol-context "$params" --target-path "$target_path" "$volumeid"
+```

+#### 4. Unpublish an nfs volume
+```
+$ csc node unpublish --endpoint "$endpoint" --target-path "$target_path" "$volumeid"
+```

+#### 6. Validate volume capabilities
+```console
+$ csc controller validate-volume-capabilities --endpoint "$endpoint" --cap "$cap" "$volumeid"
+```

+#### 7. Delete the nfs volume
+```console
+$ csc controller del --endpoint "$endpoint" "$volumeid" --timeout 10m
+```

+#### 8. Get NodeID
+```console
+$ csc node get-info --endpoint "$endpoint"
+CSINode
+```

+## How to test CSI driver in a Kubernetes cluster
+- Set environment variables
+```console
+export REGISTRY=<dockerhub-alias>
+export IMAGE_VERSION=latest
+```

+- Build container image and push image to dockerhub
+```console
+# run `docker login` first
+# build docker image
+make container
+# push the docker image
+make push
+```

+- Deploy a Kubernetes cluster and make sure `kubectl get nodes` works on your dev box.

+- Run E2E test on the Kubernetes cluster.

+```console
+# install NFS CSI Driver on the Kubernetes cluster
+make e2e-bootstrap

+# run the E2E test
+make e2e-test
+```

docs/driver-parameters.md (new file, 18 lines)
@@ -0,0 +1,18 @@
## Driver Parameters
> This plugin itself only provides a communication layer between resources in the cluster and the NFS server; you need to bring your own NFS server before using this driver.

### Storage Class Usage (Dynamic Provisioning)
> [`StorageClass` example](../deploy/example/storageclass-nfs.yaml)

Name | Meaning | Example Value | Mandatory | Default value
--- | --- | --- | --- | ---
server | NFS Server endpoint | Domain name `nfs-server.default.svc.cluster.local` <br>Or IP address `127.0.0.1` | Yes |
share | NFS share path | `/` | Yes |
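
Putting these parameters together, a `StorageClass` for this driver might look like the following sketch. The server and share values assume the in-cluster NFS server example, and the mount options mirror the `nginx-pod.yaml` example above; adjust both for your environment:

```yaml
# Illustrative StorageClass sketch; server/share values assume the in-cluster NFS server example
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: nfs-server.default.svc.cluster.local
  share: /
mountOptions:
  - hard
  - nfsvers=4.1
```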

### PV/PVC Usage (Static Provisioning)
> [`PersistentVolume` example](../deploy/example/pv-nfs-csi.yaml)

Name | Meaning | Example Value | Mandatory | Default value
--- | --- | --- | --- | ---
volumeAttributes.source | NFS Server endpoint | Domain name `nfs-server.default.svc.cluster.local` <br>Or IP address `127.0.0.1` | Yes |
volumeAttributes.share | NFS share path | `/` | Yes |
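
These attributes go under `csi.volumeAttributes` in the `PersistentVolume`. The sketch below is adapted from the `nginx-pod.yaml` example above, which uses `server` and `share` as the attribute keys; the volume name, size, and access mode are illustrative:

```yaml
# Illustrative static PV sketch; follows the nginx-pod.yaml example (server/share attribute keys)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - hard
    - nfsvers=4.1
  csi:
    driver: nfs.csi.k8s.io
    readOnly: false
    volumeHandle: unique-volumeid  # must be unique within the cluster
    volumeAttributes:
      server: nfs-server.default.svc.cluster.local
      share: /
```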

@@ -5,14 +5,14 @@ If you have already installed Helm, you can also use it to install NFS CSI driver

## Install with kubectl
- remote install
```console
-curl -skSL https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/deploy/kubernetes/install-driver.sh | bash -s master --
+curl -skSL https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/deploy/install-driver.sh | bash -s master --
```

- local install
```console
git clone https://github.com/kubernetes-csi/csi-driver-nfs.git
cd csi-driver-nfs
-./deploy/kubernetes/install-driver.sh master local
+./deploy/install-driver.sh master local
```

- check pods status:

@@ -33,5 +33,5 @@ csi-nfs-node-dr4s4   3/3   Running   0   35s   1

- clean up NFS CSI driver
```console
-curl -skSL https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/deploy/kubernetes/uninstall-driver.sh | bash -s master --
+curl -skSL https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/deploy/uninstall-driver.sh | bash -s master --
```

@@ -1,30 +0,0 @@
# Set up a NFS Server on a Kubernetes cluster

> Note: This example is for development only. Because the NFS server is sticky to the node it is scheduled on, data shall be lost if the pod is rescheduled on another node.

- To create a NFS provisioner on your Kubernetes cluster, run the following command

```bash
kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/examples/kubernetes/nfs-provisioner/nfs-server.yaml
```

- After deploying, a new service `nfs-server` is created, nfs share path is `nfs-server.default.svc.cluster.local:/`.

- To check if the server is working, we can statically create a `PersistentVolume` and a `PersistentVolumeClaim`, and mount it onto a sample pod:

```bash
kubectl create -f https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/examples/kubernetes/nfs-provisioner/app.yaml
```

Verify if the newly create deployment is Running:

```bash
# kubectl exec -it nfs-busybox-8cd8d9c5b-sf8mx sh
/ # df -h
Filesystem                Size      Used Available Use% Mounted on
...
nfs-server.default.svc.cluster.local:/
                        123.9G     15.2G    108.6G  12% /mnt
...
```

@@ -1,31 +0,0 @@
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nginx
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  storageClassName: nfs-csi
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - image: nginx
      name: nginx
      ports:
        - containerPort: 80
          protocol: TCP
      volumeMounts:
        - mountPath: /var/www
          name: pvc-nginx
  volumes:
    - name: pvc-nginx
      persistentVolumeClaim:
        claimName: pvc-nginx