Amazon have launched a neat new Route 53 feature: latency-based routing. The idea is that when someone hits www.yoursite.com, the name resolves to the server closest to them, cutting latency.
This DNS cleverness has been used by the big boys for some time, but hasn't been available to us mortals without shelling out big bucks to someone like Neustar/UltraDNS (shudder).
The ‘location’ of your server is determined by creating multiple DNS records for a given lookup, each with an EC2 region attached (us-east-1, eu-west-1, etc.), so the service ties in naturally with hosting your site on EC2. It isn't exclusive to EC2, though: there would certainly still be benefit in using it in combination with multiple locations, or even multiple providers (cloud of clouds, anyone?).
Anyway, to cut to the chase I’ve added this functionality to cli53 v0.3.1.
First you’ll need the latest boto develop branch and an updated cli53:
$ pip install --upgrade https://github.com/boto/boto/tarball/develop
$ pip install --upgrade cli53
You can now create multiple records:
$ cli53 rrcreate example.com www CNAME ec2-elastic-name1 --region eu-west-1 --identifier web
$ cli53 rrcreate example.com www CNAME ec2-elastic-name2 --region us-east-1 --identifier web
Depending on where the resolver is, clients will hit the closer of ec2-elastic-name1 or ec2-elastic-name2.
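You can sanity-check which answer a given location gets by querying one of your zone's authoritative Route 53 name servers directly with dig (the name server below is a made-up placeholder; use one from your own zone's NS records):

$ dig +short NS example.com
ns-123.awsdns-15.com.
$ dig +short www.example.com @ns-123.awsdns-15.com

The CNAME returned depends on where the query originates, so expect different answers when you run this from hosts in different regions.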
Latency-based routing can also be handy for maintenance. For example, when you need to perform maintenance on your servers in eu-west-1, you just drop those records (and wait for the TTL to expire), and your European customers will be routed to the closest alternate data centre, perhaps us-east-1 or ap-southeast-1, depending on how far east or west they are.
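As a sketch, the maintenance dance might look like this. I'm assuming here that rrdelete accepts the same --identifier/--region flags as rrcreate to pick out one record from the latency set; check cli53's help for the exact syntax:

$ cli53 rrdelete example.com www CNAME --region eu-west-1 --identifier web
(perform maintenance, then recreate the record as before)
$ cli53 rrcreate example.com www CNAME ec2-elastic-name1 --region eu-west-1 --identifier web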
Another trick is using it internally in your application to connect your application servers to other local services (say, your backend search server). If you CNAME to the external hostname inside each region, a local lookup benefits from this too and resolves to the internal private IP address, avoiding any region-specific configuration or unnecessary cross-region traffic.
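A quick sketch of the internal-services trick, with hypothetical search-server hostnames standing in for your real EC2 public DNS names:

$ cli53 rrcreate example.com search CNAME ec2-search-eu.eu-west-1.compute.amazonaws.com --region eu-west-1 --identifier search
$ cli53 rrcreate example.com search CNAME ec2-search-us.compute-1.amazonaws.com --region us-east-1 --identifier search

An app server in eu-west-1 looking up search.example.com gets the eu-west-1 record, and because EC2 resolves a region's own public hostnames to private IPs from inside that region, the traffic stays on the internal network.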
My only criticism is that it's very closely tied to EC2 locations, so it's not quite as granular as CloudFront, which has multiple endpoints in (for example) Europe. But given you can already serve your website assets from CloudFront and benefit from its ‘real closeness’, it's only your dynamic content you'd be serving from slightly further away (and still closer than before).