Merge pull request #27 from wozniakjan/issue3/update_readme
Add basic info to README about the plugin compatibility and features
commit c197f2ae07

README.md
@@ -1,70 +1,62 @@
# CSI NFS driver

## Kubernetes

### Requirements

## Overview

The following feature gates and runtime config have to be enabled to deploy the driver

This is a repository for the [NFS](https://en.wikipedia.org/wiki/Network_File_System) [CSI](https://kubernetes-csi.github.io/docs/) Driver.
Currently it implements the bare minimum of the [CSI spec](https://github.com/container-storage-interface/spec) and is in the alpha stage
of development.
#### CSI Feature matrix

| **nfs.csi.k8s.io** | K8s version compatibility | CSI versions compatibility | Dynamic Provisioning | Resize | Snapshots | Raw Block | AccessModes              | Status |
|--------------------|---------------------------|----------------------------|----------------------|--------|-----------|-----------|--------------------------|--------|
| master             | 1.14+                     | v1.0+                      | no                   | no     | no        | no        | Read/Write Multiple Pods | Alpha  |
| v2.0.0             | 1.14+                     | v1.0+                      | no                   | no     | no        | no        | Read/Write Multiple Pods | Alpha  |
| v1.0.0             | 1.9 - 1.15                | v1.0                       | no                   | no     | no        | no        | Read/Write Multiple Pods | [deprecated](https://github.com/kubernetes-csi/drivers/tree/master/pkg/nfs) |

## Requirements

The CSI NFS driver requires a Kubernetes cluster of version 1.14 or newer and
a preexisting NFS server, whether it is deployed on the cluster or provisioned
independently. The plugin itself provides only a communication layer between
resources in the cluster and the NFS server.

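A quick way to confirm the cluster side of this requirement (a suggested check, not something the repository itself prescribes) is to ask the API server for its version before deploying the driver:

```
# The reported server version should be v1.14 or newer for the current driver.
kubectl version --short
```
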
## Example

There are multiple ways to create a Kubernetes cluster; the NFS CSI plugin
should work regardless of your cluster setup. A very simple way to get
a local environment for testing is to use, for example,
[kind](https://github.com/kubernetes-sigs/kind).

There are also multiple NFS servers you can use for testing the plugin;
the major protocol versions v2, v3 and v4 should be supported
by the current implementation.

The example assumes you have your cluster created (e.g. `kind create cluster`)
and a working NFS server (e.g. https://github.com/rootfs/nfs-ganesha-docker);
a possible local setup is sketched below.

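Purely as an illustration of such a setup, the following sketch creates a kind cluster and builds and runs an NFS server container from the repository linked above; the image name, port mapping and flags are assumptions and depend on the NFS server you actually pick.

```
# Create a single-node test cluster.
kind create cluster

# Build an NFS server image straight from the example repository and run it.
# (Image name, --privileged and the exported port are assumptions; adjust them
# to match whichever NFS server you actually use.)
docker build -t nfs-ganesha https://github.com/rootfs/nfs-ganesha-docker.git
docker run -d --name nfs-server --privileged -p 2049:2049 nfs-ganesha
```
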
#### Deploy

Deploy the NFS plugin along with the `CSIDriver` info.
```
FEATURE_GATES=CSIPersistentVolume=true,MountPropagation=true
RUNTIME_CONFIG="storage.k8s.io/v1alpha1=true"
kubectl create -f deploy/kubernetes
```

Mount propagation requires support for privileged containers, so make sure privileged containers are enabled in the cluster.

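As an optional sanity check after deploying (not part of the repository's own instructions), you can confirm that the plugin pods came up and that the `CSIDriver` object is registered; exact pod names depend on the manifests in `deploy/kubernetes`.

```
# List the plugin pods created by the deployment manifests.
kubectl get pods --all-namespaces | grep -i nfs

# The CSIDriver object should also be listed (API group/version depends on your cluster).
kubectl get csidriver
```
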
#### Example Nginx application

### Example local-up-cluster.sh

The [/examples/kubernetes/nginx.yaml](/examples/kubernetes/nginx.yaml) contains a `PersistentVolume`,
`PersistentVolumeClaim` and an nginx `Pod` mounting the NFS volume under `/var/www`.

```ALLOW_PRIVILEGED=true FEATURE_GATES=CSIPersistentVolume=true,MountPropagation=true RUNTIME_CONFIG="storage.k8s.io/v1alpha1=true" LOG_LEVEL=5 hack/local-up-cluster.sh```

You will need to update the NFS server IP and the share information under
`volumeAttributes` inside the `PersistentVolume` in the `nginx.yaml` file to match your
NFS server's public endpoint and configuration. You can also provide additional
`mountOptions`, such as the protocol version, in the `PersistentVolume` `spec`
relevant for your NFS server.

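To make the shape of those fields concrete, here is a minimal sketch of such a `PersistentVolume` applied inline; the name, capacity, server address, share path and mount options are placeholders rather than values shipped with this repository. The `nginx.yaml` example uses the same fields, so editing that file amounts to adjusting the values shown here.

```
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-example-pv            # hypothetical name
spec:
  capacity:
    storage: 1Gi                  # placeholder size
  accessModes:
    - ReadWriteMany
  mountOptions:
    - nfsvers=4.1                 # example protocol version, adjust for your server
  csi:
    driver: nfs.csi.k8s.io
    volumeHandle: nfs-example-pv  # hypothetical handle
    volumeAttributes:
      server: 10.10.10.10         # your NFS server endpoint
      share: /export              # your exported share
EOF
```
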
### Deploy

```kubectl create -f deploy/kubernetes```

### Example Nginx application

Please update the NFS server and share information in the nginx.yaml file.

```kubectl create -f examples/kubernetes/nginx.yaml```

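Once the pod is running, a quick way to confirm that the NFS share is mounted under `/var/www` is to look inside the pod; the pod name `nginx` is an assumption about what the example manifest calls it.

```
# Wait for the example pod and inspect the mount inside it (pod name assumed).
kubectl wait --for=condition=Ready pod/nginx
kubectl exec nginx -- df -h /var/www
```
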
## Using CSC tool

### Build nfsplugin
```
$ make nfs
```

### Start NFS driver
```
$ sudo ./_output/nfsplugin --endpoint tcp://127.0.0.1:10000 --nodeid CSINode -v=5
```

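Before driving the plugin with `csc`, it can be worth confirming that the gRPC endpoint is actually listening on the address passed via `--endpoint`; this check is a suggestion rather than part of the plugin's documentation.

```
# The plugin should be listening on TCP port 10000 on localhost.
ss -tlnp | grep 10000
```
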

## Test

Get the ```csc``` tool from https://github.com/rexray/gocsi/tree/master/csc

#### Get plugin info
```
$ csc identity plugin-info --endpoint tcp://127.0.0.1:10000
"NFS"	"0.1.0"
```

#### NodePublish a volume
```
$ export NFS_SERVER="Your Server IP (Ex: 10.10.10.10)"
$ export NFS_SHARE="Your NFS share"
$ csc node publish --endpoint tcp://127.0.0.1:10000 --target-path /mnt/nfs --attrib server=$NFS_SERVER --attrib share=$NFS_SHARE nfstestvol
nfstestvol
```

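If the call succeeds, the share should now be mounted at the target path. As an optional check on the node (not part of the original steps):

```
# The target path given to csc should show up as an NFS mount.
findmnt /mnt/nfs
```
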
#### NodeUnpublish a volume
```
$ csc node unpublish --endpoint tcp://127.0.0.1:10000 --target-path /mnt/nfs nfstestvol
nfstestvol
```

#### Get NodeID
```
$ csc node get-id --endpoint tcp://127.0.0.1:10000
CSINode
```

## Running Kubernetes End To End tests on an NFS Driver

First, stand up a local cluster with `ALLOW_PRIVILEGED=1 hack/local-up-cluster.sh` (from your Kubernetes repo)

@@ -81,7 +73,7 @@ When using the [existing set of YAML files](https://github.com/kubernetes-csi/cs

If you already have a driver installed, skip to step 4 of the following steps.

1) Build the nfs driver by running `make`
-2) Create NFS Driver Image, where the image tag would be whatever that is required by your YAML deployment files `docker build -t quay.io/k8scsi/nfsplugin:v1.0.0 .`
+2) Create NFS Driver Image, where the image tag would be whatever that is required by your YAML deployment files `docker build -t quay.io/k8scsi/nfsplugin:v2.0.0 .`
3) Install the Driver: `kubectl create -f deploy/kubernetes`
4) Build E2E test binary: `make build-tests`
5) Run E2E Tests using the following command: `./bin/tests --ginkgo.v --ginkgo.progress --kubeconfig=/var/run/kubernetes/admin.kubeconfig`

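The test binary is a standard Ginkgo binary, so if you only want a subset of the suite you can, as a local convenience rather than a documented workflow, narrow the run with a focus pattern:

```
# Run only the specs whose descriptions match the focus expression.
./bin/tests --ginkgo.v --ginkgo.focus="NFS" --kubeconfig=/var/run/kubernetes/admin.kubeconfig
```
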
@@ -42,7 +42,7 @@ spec:
 capabilities:
   add: ["SYS_ADMIN"]
 allowPrivilegeEscalation: true
-image: quay.io/k8scsi/nfsplugin:v1.0.0
+image: quay.io/k8scsi/nfsplugin:v2.0.0
 args :
   - "--nodeid=$(NODE_ID)"
   - "--endpoint=$(CSI_ENDPOINT)"

@@ -40,7 +40,7 @@ const (
 )

 var (
-	version = "1.0.0"
+	version = "2.0.0"
 )

 func NewNFSdriver(nodeID, endpoint string) *nfsDriver {