I love Amazon Web Services (AWS), but I'm always apprehensive about the pricing. To be fair, I probably overanalyze my needs and end up not pulling the trigger on using them. One thing I do rely on is Simple Storage Service (S3) for storing image uploads from my users on SceneKids.
My first rule for using S3 and keeping costs low is not to serve images directly from S3. It may seem counterintuitive to some of you, but since Amazon charges for the bandwidth used to serve the images, it has always made sense to me to store my images on S3 and serve them locally. I always have a ton of free bandwidth available on my servers.
S3 for me is more of a safety net for user-uploaded images than a viable way to serve them. Once an image is out of S3, I generate scaled versions of it and serve them directly from my server. One caveat I have right now is that I have to pull the image down from S3 for every new scaled version I generate locally.
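The pull-scale-cache flow boils down to a few lines. This is just a simplified sketch, not my production code: `fetch_original` stands in for the actual S3 download (e.g. a boto3 GET) and `scale` for a real image resizer (e.g. Pillow), both of which are placeholders here.

```python
from pathlib import Path

def cached_scaled_image(key, width, cache_dir, fetch_original, scale):
    """Serve a scaled image from the local cache, pulling from S3 on a miss.

    fetch_original(key) -> bytes   # placeholder for the real S3 download
    scale(data, width) -> bytes    # placeholder for the real image resize
    """
    cache_path = Path(cache_dir) / str(width) / key
    if cache_path.exists():
        # Cache hit: served from local disk, no S3 bandwidth used.
        return cache_path.read_bytes()
    # Cache miss: one download from S3, scaled once, cached locally.
    original = fetch_original(key)
    cache_path.parent.mkdir(parents=True, exist_ok=True)
    cache_path.write_bytes(scale(original, width))
    return cache_path.read_bytes()
```

After the first request for a given size, every subsequent request is served straight from disk, which is what keeps the S3 bandwidth bill down.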
I could improve upon this by generating all of the potential sizes either at upload time or the first time the image is scaled. Even though I'm paying for a bit more bandwidth to generate the myriad of sizes I need, I have the flexibility to clear the local cache and re-scale the images easily. The pricing has proven fairly negligible once the local image cache has been primed.
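Priming every size from a single download might look something like this. Again, just a sketch under the same assumptions as before: `fetch_original` and `scale` are stand-ins for the real S3 client and resizer.

```python
from pathlib import Path

def prime_all_sizes(key, widths, cache_dir, fetch_original, scale):
    """Generate every scaled variant of an upload from one S3 download.

    fetch_original(key) -> bytes   # placeholder for the real S3 download
    scale(data, width) -> bytes    # placeholder for the real image resize
    """
    original = fetch_original(key)  # one download covers all sizes
    paths = []
    for width in widths:
        cache_path = Path(cache_dir) / str(width) / key
        cache_path.parent.mkdir(parents=True, exist_ok=True)
        cache_path.write_bytes(scale(original, width))
        paths.append(cache_path)
    return paths
```

Run at upload time (or on the first scale request), this trades a single S3 download for a fully primed cache, instead of paying for one download per size.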
This method has served me very well for a while now, but I'm not a big fan of having a single point of failure. To combat that, I have since added CloudFlare in front of my server. Doing so has allowed me to cut my bandwidth usage by 55%. That's 325GB of bandwidth I'm not paying Amazon for and that isn't being used by my hosting plan.
The cost has been fairly negligible (less than ten bucks to store nearly 100GB of images). It doesn't always make sense to store your cached files locally, but if you have enough disk space, it's an easy way to avoid paying Amazon more money to serve directly from S3, or incurring additional costs by way of CloudFront.