Amazon Elasticsearch Service is a managed service intended to make it easy to deploy, operate and scale Elasticsearch clusters in the AWS Cloud. When we first looked at Amazon Elasticsearch Service shortly after it was released in October 2015, we weren’t very impressed. Several aspects and features of Amazon Elasticsearch Service didn’t meet our needs or our clients’ needs at the time, chief among them the fairly dated version offered and limited access controls.
I recently took another look, and many of my observations surprised me (they are based on clusters running Elasticsearch 5.1.1). Below, I’ll highlight some of these observations, including supported versions, access controls and more dedicated master choices, along with a few additional features.
Overall, the Elasticsearch offering is much more current now than it was when the service first launched. The biggest change I’ve noticed is that AWS now offers three versions of Elasticsearch: Amazon Elasticsearch Service currently supports versions 1.5, 2.3 and 5.1. Version 1.5 is available for those needing a 1.x release. It would be nice if the latest 1.x release (1.7) were supported, but at least it’s something for users on the legacy series. That said, let’s face it – even Elastic doesn’t support that version anymore, so an upgrade would be well advised.
Version 2.3 is the supported release of the 2.x series. Again, it’s not the latest (2.4 is the latest as of this writing), but it’s much closer to current than in the 1.x case. I doubt Amazon will move this version up to 2.4, which was originally released in August 2016, with its most recent patch from November of that year.
Odds are that most effort is focused on release 5.1 (December 2016). Version 5.2 has since been released (January 2017), and I would assume it won’t be long before 5.2 becomes available on Amazon.
For many use cases, these versions will likely be sufficient.
On the access control front, not much has changed. You have three options: IAM users or accounts, IP address whitelists, and signed requests.
The documentation on users is written cleverly. It clearly states that you need an AWS account (i.e., root) or an IAM user. I tried an IAM role that we use for instances, and while I was able to specify the role, access didn’t work. The best way to deal with this limitation is likely to create dedicated IAM users for specific purposes and authenticate with those users’ credentials.
Access using whitelists of IP addresses works quite well, and it’s the easiest way to get things running. One thing to be aware of is that the Elasticsearch instances don’t live in your VPC, so traffic will always hit the external address. That feels a bit ugly, but it’s functional. Just heed the warning not to open up the whitelist to everywhere. If your instances are private inside a VPC, whitelisting the IP addresses of your NAT gateways works nicely.
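As a sketch of what an IP-whitelist policy looks like (the region, account ID, domain name, and addresses below are placeholders, not real values), the domain’s resource-based access policy might be built like this:

```python
import json

# Hypothetical values -- substitute your own region, account ID,
# domain name, and NAT gateway addresses.
DOMAIN_ARN = "arn:aws:es:us-east-1:123456789012:domain/my-domain/*"
NAT_GATEWAY_IPS = ["203.0.113.10/32", "203.0.113.11/32"]

# Resource-based policy: allow any principal, but only from the
# whitelisted source IPs.
access_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "*"},
            "Action": "es:*",
            "Resource": DOMAIN_ARN,
            "Condition": {
                "IpAddress": {"aws:SourceIp": NAT_GATEWAY_IPS}
            },
        }
    ],
}

print(json.dumps(access_policy, indent=2))
```

The resulting JSON is what you paste into the domain’s access policy in the console (or pass to the API). Keeping the whitelist down to your NAT gateway addresses, rather than broad ranges, is the point of the warning above.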
Signed requests are likely the most powerful option but also the most difficult to implement. I won’t go into detail here, except to say that for our use cases it would mean heavily modifying our clients’ code, which is something we generally don’t do. Also, plain curl won’t work, since every request must carry a signature.
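To give a flavor of why this requires code changes: every request must be signed with AWS Signature Version 4. The signing-key derivation step of that scheme, as AWS documents it, can be sketched with just the standard library (the secret key and date below are made up):

```python
import hmac
import hashlib

def _hmac_sha256(key: bytes, msg: str) -> bytes:
    """One HMAC-SHA256 step in the SigV4 key-derivation chain."""
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def derive_signing_key(secret_key: str, date_stamp: str,
                       region: str, service: str) -> bytes:
    """Derive the per-day SigV4 signing key.

    date_stamp is YYYYMMDD; for Amazon Elasticsearch Service the
    service name is "es".
    """
    k_date = _hmac_sha256(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = _hmac_sha256(k_date, region)
    k_service = _hmac_sha256(k_region, service)
    return _hmac_sha256(k_service, "aws4_request")

# Placeholder credentials for illustration only.
key = derive_signing_key("EXAMPLE-SECRET-KEY", "20170215", "us-east-1", "es")
print(key.hex())
```

The derived key then signs a canonical form of each HTTP request, which is exactly the machinery a plain curl call lacks; in practice you’d lean on an AWS SDK or a signing library rather than hand-rolling this.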
Once you restrict who has access, there’s a lot more you can do about what they can access. Amazon allows you to restrict by URI path, HTTP method, and so on. All that power is most useful with user access or signed requests.
Ultimately the signed requests will get you the most control, but they will require some effort.
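As a sketch of that finer-grained control (again with placeholder account, user, and domain names), a statement can allow a single IAM user read-only HTTP GETs against one index path:

```python
import json

# Hypothetical principal and ARN: grant one IAM user read-only
# (HTTP GET) access to a single index path within the domain.
read_only_statement = {
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::123456789012:user/log-reader"},
    "Action": "es:ESHttpGet",
    "Resource": "arn:aws:es:us-east-1:123456789012:domain/my-domain/logs-*",
}

policy = {"Version": "2012-10-17", "Statement": [read_only_statement]}
print(json.dumps(policy, indent=2))
```

Swapping `es:ESHttpGet` for actions like `es:ESHttpPut` or `es:ESHttpDelete`, or narrowing the `Resource` path further, is how the per-method and per-path restrictions mentioned above are expressed.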
Amazon also enhanced the settings for dedicated master nodes, which are used to increase cluster stability. It’s now possible to specify both the instance count and the instance type – a nice feature. Having three t2.small.elasticsearch instances as masters keeps costs down and raises confidence in avoiding a split brain.
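The reason three masters is the comfortable choice comes down to quorum arithmetic (this is general Elasticsearch behavior, not anything AWS-specific – the managed service handles the setting for you):

```python
def quorum(master_eligible_nodes: int) -> int:
    """Minimum master-eligible nodes needed to elect a master:
    floor(n/2) + 1. In self-managed Elasticsearch this is what
    discovery.zen.minimum_master_nodes should be set to."""
    return master_eligible_nodes // 2 + 1

# With three masters, a network partition can leave at most one side
# with the quorum of two, so two halves can never both elect a master.
print(quorum(3))  # -> 2
```

With two masters the quorum is also two, so losing either node halts elections; three is the smallest count that tolerates a failure while still ruling out a split brain.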
The AWS console for Elasticsearch is pretty opaque and doesn’t let you see everything under the hood. It appears that when you update a cluster, even with unchanged settings, it still updates things – which seems to consist of replacing all the instances. That’s a little heavy-handed, but I haven’t observed issues with the actual updates in my tests.
Making changes to the cluster appears to take a while – at a minimum 10 minutes, which also applies to access changes. However, the changes appear to take without issue. I’ve tested increasing and decreasing instance counts, using different instance types (masters and nodes) and adding and removing EBS volumes. All of these actions happened cleanly (though, again, not rapidly). One nice feature is that if you’re decreasing instance counts or storage sizes, AWS will check to make sure you’ll still have enough space when all is said and done. While I didn’t test this feature extensively, it did prevent me from trying to wedge too much data into too little space during a couple of tests.
That said, an odd thing occurred when I altered the IP whitelist: for a period of time, I alternated between permission-denied errors and successes, which suggests that Amazon takes a conservative approach to replacing instances. In other words, new instances are added and brought fully into service before existing ones are removed. I’m guessing that data is properly vacated from the existing instances before they’re dropped. That’s a nice touch.
Overall, Amazon Elasticsearch Service has come a long way. It still lacks several features that would be nice to see, including the ability to run the Elasticsearch domain inside a VPC and full IAM integration. Nonetheless, it’s a pretty solid system and likely very usable for many purposes.