
Commit 1aeeb9e

Merge pull request #310 from komljen/custom_annotations
Add support for custom annotations
2 parents 4afe59d + 24b4b47 commit 1aeeb9e

7 files changed: +61 additions, -40 deletions


README.md

Lines changed: 36 additions & 28 deletions
@@ -1,24 +1,25 @@
-# elasticsearch operator
+# Elasticsearch operator
 
 [![Build Status](https://travis-ci.org/upmc-enterprises/elasticsearch-operator.svg?branch=master)](https://travis-ci.org/upmc-enterprises/elasticsearch-operator)
 
-The ElasticSearch operator is designed to manage one or more elastic search clusters. Included in the project (initially) is the ability to create the Elastic cluster, deploy the `data nodes` across zones in your Kubernetes cluster, and snapshot indexes to AWS S3.
+The ElasticSearch operator is designed to manage one or more elastic search clusters. Included in the project (initially) is the ability to create the Elastic cluster, deploy the `data nodes` across zones in your Kubernetes cluster, and snapshot indexes to AWS S3.
 
 # Requirements
 
 ## Kubernetes
 
-The operator was built and tested on a 1.7.X Kubernetes cluster and is the minimum version required due to the operators use of Custom Resource Definitions.
+The operator was built and tested on a 1.7.X Kubernetes cluster and is the minimum version required due to the operators use of Custom Resource Definitions.
 
 _NOTE: If using on an older cluster, please make sure to use version [v0.0.7](https://github.com/upmc-enterprises/elasticsearch-operator/releases/tag/v0.0.7) which still utilize third party resources._
 
 ## Cloud
 
 The operator was also _currently_ designed to leverage [Amazon AWS S3](https://aws.amazon.com/s3/) for snapshot / restore to the elastic cluster. The goal of this project is to extend to support additional clouds and scenarios to make it fully featured.
 
-By swapping out the storage types, this can be used in GKE, but snapshots won't work at the moment.
+By swapping out the storage types, this can be used in GKE, but snapshots won't work at the moment.
 
 # Demo
+
 Watch a demo here:<br>
 [![Elasticsearch Operator Demo](http://img.youtube.com/vi/3HnV7NfgP6A/0.jpg)](http://www.youtube.com/watch?v=3HnV7NfgP6A)<br>
 [https://www.youtube.com/watch?v=3HnV7NfgP6A](https://www.youtube.com/watch?v=3HnV7NfgP6A)
@@ -44,7 +45,8 @@ Following parameters are available to customize the elastic cluster:
 - master-java-options: sets java-options for Master nodes (overrides java-options)
 - client-java-options: sets java-options for Client nodes (overrides java-options)
 - data-java-options: sets java-options for Data nodes (overrides java-options)
-
+- annotations: list of custom annotations which are applied to the master, data and client nodes
+  - `key: value`
 - [snapshot](https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html)
   - scheduler-enabled: If the cron scheduler should be running to enable snapshotting
   - bucket-name: Name of S3 bucket to dump snapshots
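The `annotations` entry added in the hunk above takes plain `key: value` string pairs that are applied to the master, data and client pods alike. As a minimal, hypothetical illustration (the keys below are examples only, not anything the operator requires), the parsed form is just a Go string map:

```go
package main

import "fmt"

func main() {
	// Hypothetical annotation pairs as the operator would see them once the
	// cluster manifest is parsed; any string key/value pairs are accepted.
	annotations := map[string]string{
		"prometheus.io/scrape": "true",
		"prometheus.io/port":   "9108",
	}
	for k, v := range annotations {
		fmt.Printf("%s: %s\n", k, v)
	}
}
```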
@@ -70,14 +72,15 @@ Following parameters are available to customize the elastic cluster:
 - cerebro: Deploy [cerebro](https://github.com/lmenezes/cerebro) to cluster and automatically reference certs from secret
   - image: Image to use (Note: Using [custom image](https://github.com/upmc-enterprises/cerebro-docker) since upstream has no docker images available)
 - nodeSelector: list of k8s NodeSelectors which are applied to the Master Nodes and Data Nodes
-  - `key: "value`
+  - `key: value`
 - tolerations: list of k8s Tolerations which are applied to the Master Nodes and Data Nodes
   - `- effect:` eg: NoSchedule, NoExecute
     `key:` eg: somekey
     `operator:` eg: exists
 - affinity: affinity rules to put on the client node deployments
-  - example:
-  ```
+  - example:
+
+  ```sh
   affinity:
     podAntiAffinity:
       requiredDuringSchedulingIgnoredDuringExecution:
@@ -89,9 +92,10 @@ Following parameters are available to customize the elastic cluster:
           - client
         topologyKey: kubernetes.io/hostname
   ```
+
 ## Certs secret
 
-The default image used adds TLS to the Elastic cluster. If not existing, secrets are automatically generated by the operator dynamically.
+The default image used adds TLS to the Elastic cluster. If not existing, secrets are automatically generated by the operator dynamically.
 
 If supplying your own certs, first generate them and add to a secret. Secret should contain `truststore.jks` and `node-keystore.jks`. The name of the secret should follow the pattern: `es-certs-[ClusterName]`. So for example if your cluster is named `example-es-cluster` then the secret should be `es-certs-example-es-cluster`.
 
@@ -102,8 +106,10 @@ The base image used is `upmcenterprises/docker-elasticsearch-kubernetes:6.1.3_0`
 _NOTE: If no image is specified, the default noted previously is used._
 
 ## Image pull secret
+
 If you are using a private repository you can add a pull secret under spec in your ElasticsearchCluster manifest
-```
+
+```sh
 spec:
   client-node-replicas: 3
   data-node-replicas: 3
@@ -130,7 +136,7 @@ spec:
 
 To deploy the operator simply deploy to your cluster:
 
-```bash
+```sh
 $ kubectl create ns operator
 $ kubectl create -f https://raw.githubusercontent.com/upmc-enterprises/elasticsearch-operator/master/example/controller.yaml -n operator
 ```
@@ -140,32 +146,32 @@ _NOTE: In the example we're putting the operator into the namespace `operator`.
 
 # Create Example ElasticSearch Cluster
 
-Run the following command to create a [sample cluster](example/example-es-cluster.yaml) on AWS and you most likely will have to update the [zones](example/example-es-cluster.yaml#L16) to match your AWS Account, other examples are available as well if not running on AWS:
+Run the following command to create a [sample cluster](example/example-es-cluster.yaml) on AWS and you most likely will have to update the [zones](example/example-es-cluster.yaml#L16) to match your AWS Account, other examples are available as well if not running on AWS:
 
-```bash
+```sh
 $ kubectl create -n operator -f https://raw.githubusercontent.com/upmc-enterprises/elasticsearch-operator/master/example/example-es-cluster.yaml
 ```
+
 _NOTE: Creating a custom cluster requires the creation of a CustomResourceDefinition. This happens automatically after the controller is created._
 
 # Create Example ElasticSearch Cluster (Minikube)
 
 To run the operator on minikube, this sample file is setup to do that. It sets lower Java memory constraints as well as uses the default storage class in Minikube which writes to hostPath.
 
-```bash
+```sh
 $ kubectl create -f https://raw.githubusercontent.com/upmc-enterprises/elasticsearch-operator/master/example/example-es-cluster-minikube.yaml
 ```
+
 _NOTE: Creating a custom cluster requires the creation of a CustomResourceDefinition. This happens automatically after the controller is created._
 
 # Helm
 
 Both operator and cluster can be deployed using Helm charts:
 
-```
+```sh
 $ helm repo add es-operator https://raw.githubusercontent.com/upmc-enterprises/elasticsearch-operator/master/charts/
-$ helm install --name elasticsearch-operator es-operator/elasticsearch-operator --set rbac.enabled=True --namespace logging
+$ helm install --name elasticsearch-operator es-operator/elasticsearch-operator --set rbac.enabled=True --namespace logging
 $ helm install --name=elasticsearch es-operator/elasticsearch --set kibana.enabled=True --set cerebro.enabled=True --set zones="{eu-west-1a,eu-west-1b}" --namespace logging
-```
-```
 $helm list
 NAME REVISION UPDATED STATUS CHART NAMESPACE
 elasticsearch 1 Thu Dec 7 11:53:45 2017 DEPLOYED elasticsearch-0.1.0 default
@@ -176,9 +182,9 @@ elasticsearch-operator 1 Thu Dec 7 11:49:13 2017 DEPLOYED elasticsearc
 
 [Kibana](https://www.elastic.co/products/kibana) and [Cerebro](https://github.com/lmenezes/cerebro) can be automatically deployed by adding the cerebro piece to the manifest:
 
-```
+```sh
 spec:
-  kibana:
+  kibana:
     image: docker.elastic.co/kibana/kibana-oss:6.1.3
   cerebro:
     image: upmcenterprises/cerebro:0.6.8
@@ -188,13 +194,13 @@ Once added the operator will create certs for Kibana or Cerebro and automaticall
 
 To access, just port-forward to the pod:
 
-```
+```sh
 Kibana:
 $ kubectl port-forward <podName> 5601:5601
 $ curl https://localhost:5601
 ````
 
-```
+```sh
 Cerebro:
 $ kubectl port-forward <podName> 9000:9000
 $ curl https://localhost:9000
@@ -214,13 +220,13 @@ Elasticsearch can snapshot it's indexes for easy backup / recovery of the cluste
 
 Snapshots can be scheduled via a Cron syntax by defining the cron schedule in your elastic cluster. See: [https://godoc.org/github.com/robfig/cron](https://godoc.org/github.com/robfig/cron)
 
-_NOTE: Be sure to enable the scheduler as well by setting `scheduler-enabled=true`_
+_NOTE: Be sure to enable the scheduler as well by setting `scheduler-enabled=true`_
 
 ## AWS Setup
 
-To enable the snapshots create a bucket in S3, then apply the following IAM permissions to your EC2 instances replacing `{!YOUR_BUCKET!}` with the correct bucket name.
+To enable the snapshots create a bucket in S3, then apply the following IAM permissions to your EC2 instances replacing `{!YOUR_BUCKET!}` with the correct bucket name.
 
-```
+```json
 {
   "Statement": [
     {
@@ -257,7 +263,7 @@ To enable the snapshots create a bucket in S3, then apply the following IAM perm
 
 To enable snapshots with GCS on GKE, create a bucket in GCS and bind the `storage.admin` role to the cluster service account replacing `${BUCKET}` with your bucket name:
 
-```
+```sh
 gsutil mb gs://${BUCKET}
 
 SA_EMAIL=$(kubectl run shell --rm --restart=Never -it --image google/cloud-sdk --command /usr/bin/curl -- -s -H "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email)
@@ -269,9 +275,10 @@ gcloud projects add-iam-policy-binding ${PROJECT} \
 ```
 
 ## Snapshot Authentication
+
 If you are using an elasticsearch image that requires authentication for the snapshot url, you can specify basic auth credentials.
 
-```
+```sh
 spec:
   client-node-replicas: 3
   data-node-replicas: 3
@@ -305,12 +312,13 @@ Once deployed and all pods are running, the cluster can be accessed internally v
 
 To run the Operator locally:
 
-```
+```sh
 $ mkdir -p /tmp/certs/config && mkdir -p /tmp/certs/certs
 $ go get -u github.com/cloudflare/cfssl/cmd/cfssl
 $ go get -u github.com/cloudflare/cfssl/cmd/cfssljson
 $ go run cmd/operator/main.go --kubecfg-file=${HOME}/.kube/config
 ```
 
 # About
+
 Built by UPMC Enterprises in Pittsburgh, PA. http://enterprises.upmc.com/

pkg/apis/elasticsearchoperator/v1/cluster.go

Lines changed: 3 additions & 0 deletions
@@ -79,6 +79,9 @@ type ClusterSpec struct {
 	// Affinity (podAffinity, podAntiAffinity, nodeAffinity) will be applied to the Client nodes
 	Affinity v1.Affinity `json:"affinity,omitempty"`
 
+	// Annotations specifies a map of key-value pairs
+	Annotations map[string]string `json:"annotations,omitempty"`
+
 	// Zones specifies a map of key-value pairs. Defines which zones
 	// to deploy persistent volumes for data nodes
 	Zones []string `json:"zones,omitempty"`
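To see how the new field is populated, here is a small, self-contained sketch (a trimmed-down stand-in for `ClusterSpec`, not the operator's actual decoding path) showing how an `annotations:` block in the manifest binds to the map through the `json:"annotations,omitempty"` tag once the YAML spec has been converted to JSON:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// spec reproduces only the field added by this commit, for illustration.
type spec struct {
	Annotations map[string]string `json:"annotations,omitempty"`
}

func main() {
	// Hypothetical manifest fragment; the annotation keys are examples only.
	raw := []byte(`{"annotations": {"example.com/team": "search", "prometheus.io/scrape": "true"}}`)

	var s spec
	if err := json.Unmarshal(raw, &s); err != nil {
		panic(err)
	}
	fmt.Println(s.Annotations["example.com/team"]) // prints "search"
}
```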

pkg/apis/elasticsearchoperator/v1/zz_generated.deepcopy.go

Lines changed: 7 additions & 0 deletions
Some generated files are not rendered by default.

pkg/k8sutil/deployments.go

Lines changed: 2 additions & 1 deletion
@@ -96,7 +96,7 @@ func (k *K8sutil) DeleteDeployment(clusterName, namespace, deploymentType string
 
 // CreateClientDeployment creates the client deployment
 func (k *K8sutil) CreateClientDeployment(baseImage string, replicas *int32, javaOptions, clientJavaOptions string,
-	resources myspec.Resources, imagePullSecrets []myspec.ImagePullSecrets, imagePullPolicy, serviceAccountName, clusterName, statsdEndpoint, networkHost, namespace string, useSSL *bool, affinity v1.Affinity) error {
+	resources myspec.Resources, imagePullSecrets []myspec.ImagePullSecrets, imagePullPolicy, serviceAccountName, clusterName, statsdEndpoint, networkHost, namespace string, useSSL *bool, affinity v1.Affinity, annotations map[string]string) error {
 
 	component := fmt.Sprintf("elasticsearch-%s", clusterName)
 	discoveryServiceNameCluster := fmt.Sprintf("%s-%s", discoveryServiceName, clusterName)
@@ -168,6 +168,7 @@ func (k *K8sutil) CreateClientDeployment(baseImage string, replicas *int32, java
 					"name":    deploymentName,
 					"cluster": clusterName,
 				},
+				Annotations: annotations,
 			},
 			Spec: v1.PodSpec{
 				Affinity: &affinity,
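A simplified sketch of what the `CreateClientDeployment` change amounts to (not the operator's exact code; the helper name and label set here are illustrative): the user-supplied annotations map is attached verbatim to the pod template metadata, next to the labels, so every client pod created from the deployment carries those annotations.

```go
package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// clientPodTemplate is a hypothetical helper showing where the annotations
// parameter ends up: on the pod template's ObjectMeta, alongside the labels.
func clientPodTemplate(deploymentName, clusterName, component string, annotations map[string]string) v1.PodTemplateSpec {
	return v1.PodTemplateSpec{
		ObjectMeta: metav1.ObjectMeta{
			Labels: map[string]string{
				"component": component,
				"role":      "client",
				"name":      deploymentName,
				"cluster":   clusterName,
			},
			Annotations: annotations, // a nil map simply means no annotations are added
		},
		Spec: v1.PodSpec{
			// containers, affinity, volumes, etc. omitted for brevity
		},
	}
}
```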

pkg/k8sutil/k8sutil.go

Lines changed: 4 additions & 3 deletions
@@ -396,7 +396,7 @@ func processDeploymentType(deploymentType string, clusterName string) (string, s
 }
 
 func buildStatefulSet(statefulSetName, clusterName, deploymentType, baseImage, storageClass, dataDiskSize, javaOptions, masterJavaOptions, dataJavaOptions, serviceAccountName,
-	statsdEndpoint, networkHost string, replicas *int32, useSSL *bool, resources myspec.Resources, imagePullSecrets []myspec.ImagePullSecrets, imagePullPolicy string, nodeSelector map[string]string, tolerations []v1.Toleration) *apps.StatefulSet {
+	statsdEndpoint, networkHost string, replicas *int32, useSSL *bool, resources myspec.Resources, imagePullSecrets []myspec.ImagePullSecrets, imagePullPolicy string, nodeSelector map[string]string, tolerations []v1.Toleration, annotations map[string]string) *apps.StatefulSet {
 
 	_, role, isNodeMaster, isNodeData := processDeploymentType(deploymentType, clusterName)
 
@@ -483,6 +483,7 @@ func buildStatefulSet(statefulSetName, clusterName, deploymentType, baseImage, s
 					"name":    statefulSetName,
 					"cluster": clusterName,
 				},
+				Annotations: annotations,
 			},
 			Spec: v1.PodSpec{
 				Tolerations: tolerations,
@@ -667,7 +668,7 @@ func buildStatefulSet(statefulSetName, clusterName, deploymentType, baseImage, s
 
 // CreateDataNodeDeployment creates the data node deployment
 func (k *K8sutil) CreateDataNodeDeployment(deploymentType string, replicas *int32, baseImage, storageClass string, dataDiskSize string, resources myspec.Resources,
-	imagePullSecrets []myspec.ImagePullSecrets, imagePullPolicy, serviceAccountName, clusterName, statsdEndpoint, networkHost, namespace, javaOptions, masterJavaOptions, dataJavaOptions string, useSSL *bool, esUrl string, nodeSelector map[string]string, tolerations []v1.Toleration) error {
+	imagePullSecrets []myspec.ImagePullSecrets, imagePullPolicy, serviceAccountName, clusterName, statsdEndpoint, networkHost, namespace, javaOptions, masterJavaOptions, dataJavaOptions string, useSSL *bool, esUrl string, nodeSelector map[string]string, tolerations []v1.Toleration, annotations map[string]string) error {
 
 	deploymentName, _, _, _ := processDeploymentType(deploymentType, clusterName)
 
@@ -681,7 +682,7 @@ func (k *K8sutil) CreateDataNodeDeployment(deploymentType string, replicas *int3
 	logrus.Infof("StatefulSet %s not found, creating...", statefulSetName)
 
 	statefulSet := buildStatefulSet(statefulSetName, clusterName, deploymentType, baseImage, storageClass, dataDiskSize, javaOptions, masterJavaOptions, dataJavaOptions, serviceAccountName,
-		statsdEndpoint, networkHost, replicas, useSSL, resources, imagePullSecrets, imagePullPolicy, nodeSelector, tolerations)
+		statsdEndpoint, networkHost, replicas, useSSL, resources, imagePullSecrets, imagePullPolicy, nodeSelector, tolerations, annotations)
 
 	if _, err := k.Kclient.AppsV1beta2().StatefulSets(namespace).Create(statefulSet); err != nil {
 		logrus.Error("Could not create stateful set: ", err)

pkg/k8sutil/k8sutil_test.go

Lines changed: 3 additions & 2 deletions
@@ -42,8 +42,9 @@ func TestSSLCertConfig(t *testing.T) {
 	useSSL := false
 	nodeSelector := make(map[string]string)
 	tolerations := []corev1.Toleration{}
+	annotations := make(map[string]string)
 	statefulSet := buildStatefulSet("test", clusterName, "master", "foo/image", "test", "1G", "",
-		"", "", "", "", "", nil, &useSSL, resources, nil, "", nodeSelector, tolerations)
+		"", "", "", "", "", nil, &useSSL, resources, nil, "", nodeSelector, tolerations, annotations)
 
 	for _, volume := range statefulSet.Spec.Template.Spec.Volumes {
 		if volume.Name == fmt.Sprintf("%s-%s", secretName, clusterName) {
@@ -53,7 +54,7 @@ func TestSSLCertConfig(t *testing.T) {
 
 	useSSL = true
 	statefulSet = buildStatefulSet("test", clusterName, "master", "foo/image", "test", "1G", "",
-		"", "", "", "", "", nil, &useSSL, resources, nil, "", nodeSelector, tolerations)
+		"", "", "", "", "", nil, &useSSL, resources, nil, "", nodeSelector, tolerations, annotations)
 
 	found := false
 	for _, volume := range statefulSet.Spec.Template.Spec.Volumes {
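A natural follow-up check, sketched below but not part of this commit, would reuse the variables already declared in `TestSSLCertConfig` above to assert that the annotations actually reach the pod template built by `buildStatefulSet`:

```go
	// Hypothetical extra assertion, reusing annotations, clusterName, useSSL,
	// resources, nodeSelector and tolerations from the test above.
	annotations["example.com/owner"] = "search-team"
	statefulSet = buildStatefulSet("test", clusterName, "master", "foo/image", "test", "1G", "",
		"", "", "", "", "", nil, &useSSL, resources, nil, "", nodeSelector, tolerations, annotations)
	if statefulSet.Spec.Template.ObjectMeta.Annotations["example.com/owner"] != "search-team" {
		t.Error("expected the custom annotation to be applied to the pod template")
	}
```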
