The ElasticSearch operator is designed to manage one or more Elasticsearch clusters. Included in the project (initially) is the ability to create the Elastic cluster, deploy the `data nodes` across zones in your Kubernetes cluster, and snapshot indexes to AWS S3.
# Requirements
## Kubernetes
The operator was built and tested on a 1.7.x Kubernetes cluster, which is the minimum version required due to the operator's use of Custom Resource Definitions.
_NOTE: If using an older cluster, please make sure to use version [v0.0.7](https://github.com/upmc-enterprises/elasticsearch-operator/releases/tag/v0.0.7), which still utilizes Third Party Resources._
## Cloud
The operator is _currently_ designed to leverage [Amazon AWS S3](https://aws.amazon.com/s3/) for snapshot / restore of the elastic cluster. The goal of this project is to extend it to support additional clouds and scenarios to make it fully featured.
By swapping out the storage types, this can be used in GKE, but snapshots won't work at the moment.
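As a rough sketch, swapping the storage type for GKE might look like the following in the cluster spec. The `storage` field names below are assumptions for illustration rather than verbatim from this document; `kubernetes.io/gce-pd` is the standard GCE persistent disk provisioner.

```yaml
spec:
  storage:
    type: pd-ssd                                      # GCE disk type (assumed field name)
    storage-class-provisioner: kubernetes.io/gce-pd   # assumed field name
```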
Following parameters are available to customize the elastic cluster:

- scheduler-enabled: If the cron scheduler should be running to enable snapshotting
- bucket-name: Name of S3 bucket to dump snapshots
- cerebro: Deploy [cerebro](https://github.com/lmenezes/cerebro) to the cluster and automatically reference certs from the secret
  - image: Image to use (Note: Using a [custom image](https://github.com/upmc-enterprises/cerebro-docker) since upstream has no docker images available)
- nodeSelector: list of k8s NodeSelectors which are applied to the Master Nodes and Data Nodes
  - `key: value`
- tolerations: list of k8s Tolerations which are applied to the Master Nodes and Data Nodes (see the manifest sketch after the affinity example below)
  - `- effect:` eg: NoSchedule, NoExecute
  - `key:` eg: somekey
  - `operator:` eg: Exists
- affinity: affinity rules to put on the client node deployments
  - example:

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchExpressions:
        - key: role
          operator: In
          values:
          - client
      topologyKey: kubernetes.io/hostname
```
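A sketch of how the `nodeSelector` and `tolerations` parameters above might appear in a cluster manifest. The nesting under `spec` is an assumption, and the label, key, and value names are placeholders.

```yaml
spec:
  nodeSelector:
    somelabel: somevalue    # placeholder node label
  tolerations:
  - effect: NoSchedule      # e.g. NoSchedule or NoExecute
    key: somekey            # placeholder taint key
    operator: Exists
```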
## Certs secret
The default image used adds TLS to the Elastic cluster. If they do not already exist, the required secrets are generated dynamically by the operator.
If supplying your own certs, first generate them and add them to a secret. The secret should contain `truststore.jks` and `node-keystore.jks`. The name of the secret should follow the pattern `es-certs-[ClusterName]`. So, for example, if your cluster is named `example-es-cluster`, the secret should be `es-certs-example-es-cluster`.
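A minimal sketch of creating such a secret with `kubectl`; the file paths and namespace are assumptions, so adjust them to your setup.

```sh
# Hypothetical paths to the generated keystores; the secret name follows the
# es-certs-[ClusterName] pattern described above.
kubectl create secret generic es-certs-example-es-cluster \
  --from-file=truststore.jks=./certs/truststore.jks \
  --from-file=node-keystore.jks=./certs/node-keystore.jks \
  --namespace default
```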
The base image used is `upmcenterprises/docker-elasticsearch-kubernetes:6.1.3_0`.
_NOTE: If no image is specified, the default noted previously is used._
## Image pull secret
If you are using a private repository, you can add a pull secret under `spec` in your ElasticsearchCluster manifest:

```yaml
spec:
  client-node-replicas: 3
  data-node-replicas: 3
```
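The pull secret itself can be created with `kubectl`; a sketch follows, in which the secret name `pull-secret-name` and the registry details are placeholders.

```sh
# Replace the placeholder registry coordinates with your private registry details.
kubectl create secret docker-registry pull-secret-name \
  --docker-server=registry.example.com \
  --docker-username=<user> \
  --docker-password=<password> \
  --docker-email=<email>
```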
To deploy the operator, simply deploy it to your cluster:
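A sketch of that step, assuming the operator's controller manifest is at `example/controller.yaml` in this repo; the path and the `operator` namespace are assumptions.

```sh
# Create a namespace for the operator, then deploy its controller manifest.
kubectl create namespace operator
kubectl create -f example/controller.yaml --namespace operator
```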
_NOTE: In the example we're putting the operator into the namespace `operator`._

# Create Example ElasticSearch Cluster
Run the following command to create a [sample cluster](example/example-es-cluster.yaml) on AWS. You will most likely have to update the [zones](example/example-es-cluster.yaml#L16) to match your AWS account; other examples are available as well if you are not running on AWS.
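A sketch of that command; deploying into the `operator` namespace is an assumption, and the manifest is the sample linked above.

```sh
kubectl create -f example/example-es-cluster.yaml --namespace operator
```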
_NOTE: Creating a custom cluster requires the creation of a CustomResourceDefinition. This happens automatically after the controller is created._
# Create Example ElasticSearch Cluster (Minikube)
To run the operator on Minikube, a sample manifest is set up to do that. It sets lower Java memory constraints and uses the default storage class in Minikube, which writes to hostPath.
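A sketch of creating that cluster, assuming the Minikube sample is at `example/example-es-cluster-minikube.yaml` (a hypothetical path; substitute the sample file referenced above).

```sh
kubectl create -f example/example-es-cluster-minikube.yaml
```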
[Kibana](https://www.elastic.co/products/kibana) and [Cerebro](https://github.com/lmenezes/cerebro) can be automatically deployed by adding the `kibana` and `cerebro` pieces to the manifest:

```yaml
spec:
  kibana:
    image: docker.elastic.co/kibana/kibana-oss:6.1.3
  cerebro:
    image: upmcenterprises/cerebro:0.6.8
```

Once added, the operator will create certs for Kibana or Cerebro and automatically reference the certs from the secret.
To access, just port-forward to the pod:
```sh
Kibana:
$ kubectl port-forward <podName> 5601:5601
$ curl https://localhost:5601
```
```sh
Cerebro:
$ kubectl port-forward <podName> 9000:9000
$ curl https://localhost:9000
```

Elasticsearch can snapshot its indexes for easy backup / recovery of the cluster.
Snapshots can be scheduled via cron syntax by defining the cron schedule in your elastic cluster. See: [https://godoc.org/github.com/robfig/cron](https://godoc.org/github.com/robfig/cron)
_NOTE: Be sure to enable the scheduler as well by setting `scheduler-enabled=true`_
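A sketch of the snapshot section in a cluster manifest. The nesting under `snapshot` and the `cron-schedule` field name are assumptions for illustration; `scheduler-enabled` and `bucket-name` are the parameters described earlier.

```yaml
spec:
  snapshot:
    scheduler-enabled: true
    bucket-name: my-es-snapshots   # hypothetical bucket name
    cron-schedule: "@every 2m"     # assumed field name; any robfig/cron expression works
```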
## AWS Setup
To enable snapshots, create a bucket in S3, then apply IAM permissions such as the following to your EC2 instances, replacing `{!YOUR_BUCKET!}` with the correct bucket name.
```json
{
  "Statement": [
    {
      "Action": [
        "s3:*"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::{!YOUR_BUCKET!}",
        "arn:aws:s3:::{!YOUR_BUCKET!}/*"
      ]
    }
  ],
  "Version": "2012-10-17"
}
```
To enable snapshots with GCS on GKE, create a bucket in GCS and bind the `storage.admin` role to the cluster service account, replacing `${BUCKET}` with your bucket name:
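A sketch of those steps with `gsutil`; the `${SA_EMAIL}` placeholder for the node service account is an assumption, so use the service account your cluster nodes actually run as.

```sh
# Create the bucket and grant the cluster's service account storage.admin on it.
gsutil mb gs://${BUCKET}
gsutil iam ch serviceAccount:${SA_EMAIL}:roles/storage.admin gs://${BUCKET}
```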