Adding S3 Keys to Elasticsearch Keystore
Dec 29, 2018
Aldrin Navarro
2 minute read

When using the S3 repository plugin for Elasticsearch, the sensitive repository settings aren’t configured directly in elasticsearch.yml. Instead, they are stored in the Elasticsearch keystore using the elasticsearch-keystore bin commands.

Before starting the node, run the following.

bin/elasticsearch-keystore add s3.client.default.access_key
bin/elasticsearch-keystore add s3.client.default.secret_key
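To sanity-check that both entries made it into the keystore, you can list its contents:

bin/elasticsearch-keystore list

This should print s3.client.default.access_key and s3.client.default.secret_key among the stored settings.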

See the S3 Repository Plugin - Client Settings documentation for more details.

On Kubernetes

For Elasticsearch (ES) nodes running in Kubernetes, the commands above can be issued inside each Elasticsearch pod. The “reloadable” client secure settings still apply.

kubectl exec -t efk-elasticsearch-pod-name -- /bin/bash -c "echo $S3_ACCESS_KEY | bin/elasticsearch-keystore add --stdin --force s3.client.default.access_key && echo $S3_SECRET_KEY | bin/elasticsearch-keystore add --stdin --force s3.client.default.secret_key"
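If the keys live in a Kubernetes Secret rather than your shell environment, you could populate the variables first. A minimal sketch, assuming a hypothetical Secret named s3-credentials with data keys access_key and secret_key:

S3_ACCESS_KEY=$(kubectl get secret s3-credentials -o jsonpath='{.data.access_key}' | base64 --decode)
S3_SECRET_KEY=$(kubectl get secret s3-credentials -o jsonpath='{.data.secret_key}' | base64 --decode)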

To pick up the new values without restarting the nodes, reload the secure settings by calling the following API:

POST _nodes/reload_secure_settings

Reference: Reloadable secure settings

The command will decrypt and re-read the entire keystore on every node in the cluster. The S3 client access key and secret key are among the reloadable secure settings that will be applied.
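From outside the cluster, one minimal way to issue that reload is through a pod, assuming curl is available in the container and Elasticsearch listens on the default HTTP port 9200:

kubectl exec -t efk-elasticsearch-pod-name -- curl -s -X POST "localhost:9200/_nodes/reload_secure_settings"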

I made a simple script to take care of issuing the keystore commands on each of the Kubernetes Elasticsearch pods.

#!/bin/bash
# Usage: es_s3_kube_keystore_add {ACCESS_KEY} {SECRET_KEY}
# Strip the leading "pod/" from each name returned by `kubectl get pods -o=name`.
for pod in $(kubectl get pods -o=name | grep efk-elasticsearch | sed "s/^pod\///")
do
    # $1 and $2 expand locally before the command string is sent to the pod.
    kubectl exec -t "$pod" \
    -- /bin/bash -c "echo $1 | bin/elasticsearch-keystore add --stdin --force s3.client.default.access_key && echo $2 | bin/elasticsearch-keystore add --stdin --force s3.client.default.secret_key"
    echo "$pod"
done

You can then improve on the bash script above to handle the case where no Elasticsearch pod is available, or to remind the operator to issue a secure settings reload via POST _nodes/reload_secure_settings. A sketch of such an improvement follows.
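As an illustration, here is a minimal sketch of that improvement, assuming the same efk-elasticsearch pod naming as above:

#!/bin/bash
# Usage: es_s3_kube_keystore_add {ACCESS_KEY} {SECRET_KEY}
pods=$(kubectl get pods -o=name | grep efk-elasticsearch | sed "s/^pod\///")

# Bail out early when no Elasticsearch pod is running.
if [ -z "$pods" ]; then
    echo "No efk-elasticsearch pods found." >&2
    exit 1
fi

for pod in $pods
do
    kubectl exec -t "$pod" \
    -- /bin/bash -c "echo $1 | bin/elasticsearch-keystore add --stdin --force s3.client.default.access_key && echo $2 | bin/elasticsearch-keystore add --stdin --force s3.client.default.secret_key"
    echo "$pod"
done

echo "Done. Remember to apply the new keys with: POST _nodes/reload_secure_settings"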

An alternative to the solution above is to build your own custom Elasticsearch Docker image and bake the keys in with a line like RUN echo ************ | bin/elasticsearch-keystore add --stdin --force s3.client.default.access_key && ... (and the same line for secret_key). Be aware that these secure S3 keys may be compromised by exposing them in a Dockerfile.
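For illustration, a minimal Dockerfile sketch of that approach. The base image tag is an assumption, and the keys shown are placeholders; anyone with access to the Dockerfile or the image layers can read them:

FROM docker.elastic.co/elasticsearch/elasticsearch:6.5.4
# Assumes the repository-s3 plugin is installed (or install it here with
# bin/elasticsearch-plugin install --batch repository-s3).
# Create the keystore first if the base image does not already ship one.
RUN bin/elasticsearch-keystore create
RUN echo ************ | bin/elasticsearch-keystore add --stdin --force s3.client.default.access_key && \
    echo ************ | bin/elasticsearch-keystore add --stdin --force s3.client.default.secret_key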

