# Start ArangoDB cluster on Amazon EKS

## Requirements:

* [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) (**version >= 1.10**)
* [helm](https://www.helm.sh/)
* [AWS IAM authenticator](https://github.com/kubernetes-sigs/aws-iam-authenticator)
* [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/installing.html) (**version >= 1.16**)

```
$ aws --version
  aws-cli/1.16.43 Python/2.7.15rc1 Linux/4.15.0-36-generic botocore/1.12.33
```

## Create a Kubernetes cluster
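
If you prefer the command line over the AWS console for this step, an EKS cluster can also be created with the AWS CLI. This is only a minimal sketch; the role ARN, subnet IDs, and security group below are placeholders that you need to replace with your own values:

```
$ aws eks create-cluster --name ArangoDB \
    --role-arn arn:aws:iam::XXXXXXXXXXX:role/eks-service-role \
    --resources-vpc-config subnetIds=subnet-XXXX,subnet-YYYY,securityGroupIds=sg-XXXX
```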

## Wait for cluster to be `ACTIVE`
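
Recent versions of the AWS CLI provide a waiter for this, so you do not have to poll manually (assuming your CLI version already ships the `eks wait` subcommand):

```
$ aws eks wait cluster-active --name ArangoDB
```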

## Continue with the AWS client

### Configure AWS client

Refer to the [documentation](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html) and fill in the prompts below with your credentials.
Pay special attention to the region, as you will need it to find your cluster in the next steps.

```
$ aws configure
  AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
  AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
  Default region name [None]: us-west-2
  Default output format [None]: json
```

Once authenticated, verify that your cluster is listed:
```
$ aws eks list-clusters
{
    "clusters": [
        "ArangoDB"
    ]
}
```

You should also be able to verify the `ACTIVE` state of your cluster:
```
$ aws eks describe-cluster --name ArangoDB --query cluster.status
  "ACTIVE"
```

### Integrate the Kubernetes configuration locally

It's time to integrate the cluster into your local Kubernetes configuration:

```
$ aws eks update-kubeconfig --name ArangoDB
  Added new context arn:aws:eks:us-west-2:XXXXXXXXXXX:cluster/ArangoDB to ...
```

At this point, we are ready to use `kubectl` to communicate with the cluster:
```
$ kubectl get service
  NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
  kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   23h
```

There are no worker nodes yet:
```
$ kubectl get nodes
  No resources found.
```

### Create worker stack

On Amazon EKS, we need to launch worker nodes ourselves, as the cluster has none yet.
Open Amazon's [cloud formation console](https://console.aws.amazon.com/cloudformation/) and choose `Create Stack`, specifying this S3 template URL:

```
https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2018-08-30/amazon-eks-nodegroup.yaml
```

### Worker stack details

Pay close attention to the details here. If your input is incomplete, your worker nodes will either not be spawned or you won't be able to integrate them into your Kubernetes cluster. If you prefer to script this step, see the CLI sketch after the parameter list.

**Stack name**: Choose a name for your stack, for example `ArangoDB-stack`.

**ClusterName**: **Important!** Use the same cluster name as above, see `aws eks list-clusters`.

**ClusterControlPlaneSecurityGroup**: Choose the same SecurityGroups value that you used when you created your EKS cluster.

**NodeGroupName**: Enter a name for your node group, for example `ArangoDB-node-group`.

**NodeAutoScalingGroupMinSize**: Minimum number of nodes to which you may scale your workers.

**NodeAutoScalingGroupMaxSize**: Maximum number of nodes to which you may scale your workers.

**NodeInstanceType**: Choose an instance type for your worker nodes. For this test we went with the default `t2.medium` instances.

**NodeImageId**: Depending on the region, there are two image IDs, for machines without and with GPU support:

| Region    | without GPU           | with GPU              |
|-----------|-----------------------|-----------------------|
| us-west-2 | ami-0a54c984b9f908c81 | ami-0440e4f6b9713faf6 |
| us-east-1 | ami-0440e4f6b9713faf6 | ami-058bfb8c236caae89 |
| eu-west-1 | ami-0c7a4976cb6fafd3a | ami-0706dc8a5eed2eed9 |

**KeyName**: SSH key pair that can be used to SSH into the nodes. This input is required.

**VpcId**: The same VPC ID that you get from `aws eks describe-cluster --name <your-cluster-name> --query cluster.resourcesVpcConfig.vpcId`

**Subnets**: Choose the subnets that you created in Create your Amazon EKS Cluster VPC.
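
If you prefer to script this step, the stack can also be created with the AWS CLI instead of the CloudFormation console. This is only a rough sketch with placeholder values; pass every parameter described above with your own values:

```
$ aws cloudformation create-stack --stack-name ArangoDB-stack \
    --template-url https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2018-08-30/amazon-eks-nodegroup.yaml \
    --capabilities CAPABILITY_IAM \
    --parameters ParameterKey=ClusterName,ParameterValue=ArangoDB \
                 ParameterKey=NodeGroupName,ParameterValue=ArangoDB-node-group \
                 ParameterKey=KeyName,ParameterValue=<your-key-pair> \
                 ...
```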

### Review your stack and submit

### Wait for stack to get ready

### Note down `NodeInstanceRole`

Once the stack is ready, navigate to the Outputs pane at the bottom and note down the `NodeInstanceRole` value.
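
The same value can also be read with the AWS CLI (assuming the stack name `ArangoDB-stack` used above):

```
$ aws cloudformation describe-stacks --stack-name ArangoDB-stack \
    --query "Stacks[0].Outputs[?OutputKey=='NodeInstanceRole'].OutputValue" --output text
```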

### Integrate worker stack as Kubernetes nodes

* Download the configuration map here:
```
$ curl -O https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2018-08-30/aws-auth-cm.yaml
```
* Modify `data|mapRoles|rolearn` to match the `NodeInstanceRole` you acquired after your node stack was finished, then apply the configuration map as sketched below.
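
As a sketch, the edited `aws-auth-cm.yaml` should look roughly like this (the role ARN is a placeholder for your `NodeInstanceRole`):

```
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <your NodeInstanceRole ARN>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```

It is then applied with `kubectl`:

```
$ kubectl apply -f aws-auth-cm.yaml
```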

### Wait for nodes to join the cluster and get ready

Monitor `kubectl get nodes` and wait for your nodes to become ready:
```
$ kubectl get nodes
  NAME                                          STATUS    ROLES     AGE       VERSION
  ip-172-31-20-103.us-west-2.compute.internal   Ready     <none>    1d        v1.10.3
  ip-172-31-38-160.us-west-2.compute.internal   Ready     <none>    1d        v1.10.3
  ip-172-31-45-199.us-west-2.compute.internal   Ready     <none>    1d        v1.10.3
```

### Set up `helm`

* Create a service account for `tiller`:
```
$ kubectl create serviceaccount --namespace kube-system tiller
  serviceaccount/tiller created
```
* Allow `tiller` to modify the cluster:
```
$ kubectl create clusterrolebinding tiller-cluster-rule \
    --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
  clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created
```
* Initialise `helm`:
```
$ helm init --service-account tiller
  $HELM_HOME has been configured at ~/.helm.
  ...
  Happy Helming!
```

### Deploy ArangoDB cluster
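
The example below assumes that the ArangoDB Kubernetes operator (`kube-arangodb`) is already installed in the cluster. If it is not, one way to install it with the `helm` setup from above is roughly the following sketch, where `<version>` is a placeholder for a current release (check the kube-arangodb documentation for the exact chart location):

```
$ helm install https://github.com/arangodb/kube-arangodb/releases/download/<version>/kube-arangodb.tgz
```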

```
$ kubectl apply -f https://raw.githubusercontent.com/arangodb/kube-arangodb/master/examples/simple-cluster.yaml
```

### Wait for cluster to become ready

Get the `LoadBalancer` address from the command below to access your coordinator:
```
$ kubectl get svc
```
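
Assuming the deployment from `simple-cluster.yaml` is named `example-simple-cluster` and exposes an external access service with the `-ea` suffix (an assumption; check `kubectl get svc` for the actual name), the endpoint can be extracted like this:

```
$ kubectl get svc example-simple-cluster-ea \
    -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```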

### Secure ArangoDB cluster

Do not forget to immediately assign a secure `root` password for the database once you can reach a coordinator.
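
A minimal sketch of doing this with `arangosh`, assuming the load balancer address from above, a TLS-enabled deployment (use `tcp://` instead of `ssl://` otherwise), and the default empty initial `root` password:

```
$ arangosh --server.endpoint ssl://<your-loadbalancer-address>:8529 \
    --server.username root --server.password ""
arangosh> require("@arangodb/users").update("root", "<your-new-secure-password>");
```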