Here at Box UK, many of our projects run on Amazon Web Services (AWS).
We’re always interested in how other people are using it, so we can improve our own usage.
Two blog posts popped up
on Hacker News recently which shared some ‘tips’ on getting the best out of AWS. The comments revealed even more invaluable information and are well worth a read on their own.
We were aware of AWS’s ability to pre-sign URLs so they can be used by unauthenticated clients,
but what we weren’t aware of was the ability to use pre-signed URLs to upload objects
straight to S3.
This is a great feature: it lets you grant users temporary access to upload an object straight to an S3 bucket. The advantages are obvious; your application doesn’t have to handle file uploads any more. Why upload to your app servers and then have your application upload to S3, when you can cut out the middle man?
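To make this concrete, here is a minimal standard-library sketch of what SigV4 query-string pre-signing does under the hood (in practice you’d just call your AWS SDK, e.g. boto3’s `generate_presigned_url`). The bucket, key, and credentials below are placeholders:

```python
import datetime
import hashlib
import hmac
from urllib.parse import quote

def presigned_put_url(bucket, key, access_key, secret_key,
                      region="eu-west-1", expires=3600):
    """Sketch of a SigV4 query-string pre-signed PUT URL for S3."""
    host = f"{bucket}.s3.amazonaws.com"
    now = datetime.datetime.utcnow()
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    scope = f"{datestamp}/{region}/s3/aws4_request"

    # Query parameters describing the grant: who signed it and for how long.
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    canonical_query = "&".join(
        f"{quote(k, safe='')}={quote(v, safe='')}"
        for k, v in sorted(params.items())
    )

    # Canonical request: method, path, query, headers, signed headers, payload.
    canonical_request = "\n".join([
        "PUT",
        "/" + quote(key),
        canonical_query,
        f"host:{host}\n",
        "host",
        "UNSIGNED-PAYLOAD",
    ])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256",
        amz_date,
        scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])

    # Derive the signing key by chaining HMACs, then sign the string.
    def _hmac(key_bytes, msg):
        return hmac.new(key_bytes, msg.encode(), hashlib.sha256).digest()

    signing_key = _hmac(_hmac(_hmac(_hmac(
        ("AWS4" + secret_key).encode(), datestamp), region), "s3"),
        "aws4_request")
    signature = hmac.new(signing_key, string_to_sign.encode(),
                         hashlib.sha256).hexdigest()

    return f"https://{host}/{quote(key)}?{canonical_query}&X-Amz-Signature={signature}"

url = presigned_put_url("my-bucket", "uploads/report.pdf",
                        "AKIDEXAMPLE", "examplesecretkey")
print(url)
```

Anyone holding that URL can PUT the object until it expires; no other credentials are needed.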
We recently came across a problem in one of our applications where this feature would have helped.
The application in question allowed users to upload files. In this instance, the application was load balanced behind n (where n is more than 1) Varnish
servers, which were in turn load balanced behind ELB.
The bug manifested as uploads failing, seemingly at random. Some quick headbashing revealed the problem to be down to our uploads being multi-part; more specifically, our Varnish instances were configured to use a round-robin director
, which meant that different parts of the uploaded file were received by different application servers (‘backends’ in Varnish speak). These would then error when they realised they hadn’t received the whole of the multi-part upload.
We made a quick fix in our Varnish setup, switching to the hash director
for the backends that handle uploads. This ensured that every part of a given multi-part upload was routed to the same backend.
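For reference, the fix looks roughly like this in VCL 4.0 syntax (backend names, addresses, and the `/upload` path are invented for the sketch); hashing on the client’s identity pins all of one client’s upload requests to a single backend:

```vcl
vcl 4.0;
import directors;

backend app1 { .host = "10.0.0.1"; .port = "8080"; }
backend app2 { .host = "10.0.0.2"; .port = "8080"; }

sub vcl_init {
    new upload_director = directors.hash();
    upload_director.add_backend(app1, 1.0);
    upload_director.add_backend(app2, 1.0);
}

sub vcl_recv {
    if (req.url ~ "^/upload") {
        # Hash on client identity so every part of one client's
        # multi-part upload reaches the same backend.
        set req.backend_hint = upload_director.backend(client.identity);
    }
}
```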
Going back to the original ‘tip’, we can now fix this issue in a nicer way, by bypassing ELB, Varnish, and our application servers entirely, and granting the user temporary access to upload their files straight to S3.
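The client side then becomes trivial: our application hands out a pre-signed URL, and the user’s client PUTs the file body directly to S3. A sketch with the standard library (the URL below is a hypothetical example, not a real signature):

```python
import urllib.request
from urllib.parse import urlsplit

# Hypothetical pre-signed URL handed to the client by our application.
presigned_url = ("https://my-bucket.s3.amazonaws.com/uploads/report.pdf"
                 "?X-Amz-Signature=...")

# The client PUTs the file body straight to S3 -- no ELB, Varnish,
# or app servers in the path.
req = urllib.request.Request(presigned_url,
                             data=b"file contents",
                             method="PUT")
# urllib.request.urlopen(req)  # uncomment to actually perform the upload
print(req.get_method(), urlsplit(req.full_url).netloc)
```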