I’ve been researching backing up my S3 buckets (images etc.) and there are quite a few ways:
1 - S3 versioning
2 - AWS Glacier
3 - creating copies of buckets
Ideally I’d like to perform scheduled and ad hoc backups of my assets, perhaps in sync with my DB backups.
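For reference, option 1 is a one-time setting on the bucket. A minimal sketch using the official aws-sdk-s3 gem (the bucket name is a placeholder, and this assumes AWS credentials are already configured in the environment):

```ruby
require 'aws-sdk-s3'

client = Aws::S3::Client.new

# Turn on versioning for the bucket; from then on, overwrites and
# deletes keep the previous versions around instead of destroying them.
client.put_bucket_versioning(
  bucket: 'my-app-assets', # placeholder bucket name
  versioning_configuration: { status: 'Enabled' }
)
```

Once enabled, an accidental delete just adds a delete marker, and the older version of the object can still be retrieved or restored.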
How does everyone (Thoughtbot or Upcase members) approach this? I know Thoughtbot use Heroku and S3 extensively so would be interested to hear your workflow.
It depends on what you’re trying to mitigate.
Amazon has its own redundancy within S3, so even if a hard drive dies, you probably won’t even know about it.
Glacier is cheaper than S3, but unless you’re backing up terabytes it probably won’t make a huge difference in terms of cost.
At our company we back up external data to S3 that we can’t recreate in the event of data loss on our own server. There’s probably a point at which that becomes cost-prohibitive, but I would start to mitigate that by archiving very old data off to Glacier over time if AWS fees creep up.
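That kind of Glacier archiving can be automated with a bucket lifecycle rule rather than done by hand. A minimal sketch with the aws-sdk-s3 gem (the bucket name, prefix, age threshold, and rule id are all assumptions to adjust):

```ruby
require 'aws-sdk-s3'

client = Aws::S3::Client.new

# Automatically move objects older than a year to Glacier storage.
client.put_bucket_lifecycle_configuration(
  bucket: 'my-app-assets', # placeholder bucket name
  lifecycle_configuration: {
    rules: [
      {
        id:     'archive-old-assets',   # arbitrary rule name
        status: 'Enabled',
        filter: { prefix: '' },         # '' = apply to the whole bucket
        transitions: [
          { days: 365, storage_class: 'GLACIER' }
        ]
      }
    ]
  }
)
```

After that, AWS handles the transition itself, so the archiving happens continuously instead of as a manual cleanup job.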
One of the awesome things about working with Heroku within Amazon’s ecosystem is that you’re saving money on transfers between your app and S3 (but not between the user’s browser and S3!).
We use Fog to do S3 storage (we’re also using it to interface between Paperclip and S3), and I’ve liked using it. It’s easy to set up local storage on your development machine so you can do network-independent testing without slowing down your machine or incurring needless AWS charges.
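A minimal sketch of that kind of Fog setup, switching providers by environment (the credentials, env check, and local path are placeholders, not our exact config):

```ruby
require 'fog'

# In production, talk to real S3; everywhere else, Fog's Local
# provider writes "buckets" as plain directories on disk, so
# development and tests never touch the network or incur AWS fees.
storage =
  if ENV['RACK_ENV'] == 'production'
    Fog::Storage.new(
      provider:              'AWS',
      aws_access_key_id:     ENV.fetch('AWS_ACCESS_KEY_ID'),
      aws_secret_access_key: ENV.fetch('AWS_SECRET_ACCESS_KEY')
    )
  else
    Fog::Storage.new(provider: 'Local', local_root: 'tmp/fog')
  end
```

The nice part is that the rest of the app code talks to the same Fog interface either way, so nothing else has to know which backend is in play.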
Sorry for the late response, got swept up in work. I’m basically trying to mitigate any human error, such as accidentally deleting images or messing up a migration of data. It would be handy to keep a daily and weekly snapshot.
Geoff, do you use S3 as the primary data source? And if you do, how do you automate backing up your buckets?
I’m using S3 with CarrierWave to store my site’s images and was hoping there was some simple way to automatically back up that bucket on a daily and perhaps weekly basis to mitigate against dev errors.
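One simple way to get those snapshots is a scheduled task (cron, Heroku Scheduler, etc.) that copies the bucket into a dated prefix of a second bucket. A rough sketch with the aws-sdk-s3 gem; the bucket names are placeholders, and for brevity it ignores pagination past the first 1,000 objects:

```ruby
require 'aws-sdk-s3'
require 'date'

SOURCE_BUCKET = 'my-app-assets'        # placeholder names --
BACKUP_BUCKET = 'my-app-assets-backup' # swap in your own buckets

# Each run lands in its own dated "folder", e.g. snapshots/2015-03-01/
def snapshot_prefix(date = Date.today)
  "snapshots/#{date.strftime('%Y-%m-%d')}/"
end

def backup_bucket!(client = Aws::S3::Client.new)
  prefix = snapshot_prefix
  # NOTE: list_objects_v2 returns at most 1,000 keys per call; a real
  # job would loop over continuation tokens for larger buckets.
  client.list_objects_v2(bucket: SOURCE_BUCKET).contents.each do |object|
    client.copy_object(
      bucket:      BACKUP_BUCKET,
      copy_source: "#{SOURCE_BUCKET}/#{object.key}",
      key:         "#{prefix}#{object.key}"
    )
  end
end
```

You could get the same effect by shelling out to `aws s3 sync` from a rake task. Dated prefixes give you the daily/weekly snapshots you mentioned, and old snapshots can be pruned with a lifecycle expiration rule so the backup bucket doesn’t grow forever.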