April 21, 2020

Hi all,

I have an idea to establish progressive backups on an AWS S3 bucket. The client has a 3GB database (on an AWS machine), and we already run a progressive backup every 5 minutes. But in case the main server crashes, we have another, backup server, also an AWS machine. Now the client wants to save as much data as possible in a failure situation, and came up with this idea: progressive backups on AWS S3. Is this possible? Has anyone tried this?

Thanks in advance,
Ve
April 21, 2020

https://www.soliantconsulting.com/blog/backups-to-the-cloud-with-aws/

Copying a backup to S3 is trivial, especially from an AWS machine itself. But copying a progressive backup is tricky because the exact run time of a progressive backup is not fixed; it depends on when the machine was last restarted. And you do not want to touch the progressive backups folder while a backup is in progress. So your OS-level script that tries to grab the most recent progressive backup needs solid coding to check whether a backup is in progress.
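For illustration, here is a minimal sketch of such an OS-level script in Python. It assumes the AWS CLI is installed and configured on the server; the folder path, bucket name, and the quiet-period heuristic for detecting an in-progress backup are all hypothetical and would need to be adapted and hardened for a real deployment.

```python
import subprocess
import time
from pathlib import Path

# Hypothetical locations -- adjust to your environment.
PROGRESSIVE_DIR = Path(r"D:\FMS\Progressive_Backups")
S3_TARGET = "s3://my-fms-backups/progressive/"
QUIET_SECONDS = 60  # how long the folder must be unchanged before copying


def newest_mtime(folder: Path) -> float:
    """Return the most recent modification time of any file in the folder."""
    times = [p.stat().st_mtime for p in folder.rglob("*") if p.is_file()]
    return max(times, default=0.0)


def backup_is_quiet(folder: Path) -> bool:
    """Heuristic: treat the progressive backup as finished if nothing
    has been written for QUIET_SECONDS."""
    return (time.time() - newest_mtime(folder)) > QUIET_SECONDS


def sync_to_s3() -> None:
    """Copy the progressive backup folder to S3, but only when the
    folder looks idle, so we never grab a backup mid-write."""
    if not backup_is_quiet(PROGRESSIVE_DIR):
        print("Backup appears to be in progress; skipping this run.")
        return
    # 'aws s3 sync' only uploads files that changed since the last run,
    # so repeated runs do not re-copy the whole folder.
    subprocess.run(["aws", "s3", "sync", str(PROGRESSIVE_DIR), S3_TARGET],
                   check=True)


if __name__ == "__main__":
    sync_to_s3()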
April 21, 2020 (Author)

Thank you, Wim, for the fast answer! So I cannot set the path of the progressive backup folder to S3 directly? I "only" need a good OS-level script that will transfer the progressive backup folder to S3?

Ve
April 21, 2020

28 minutes ago, Veselko said: So I cannot set the path of the progressive backup folder to S3 directly?

No.

29 minutes ago, Veselko said: I "only" need a good OS-level script that will transfer the progressive backup folder to S3?

Yes. Obviously you have to factor in what frequency you want to use. If this is a single file that is 3GB and you want to copy it to S3 every 5 minutes, that could potentially affect the overall performance of your server.
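On the frequency point: since `aws s3 sync` is incremental, a 5-minute cadence does not necessarily mean re-uploading the full 3GB each time, only what changed. As a minimal sketch of the scheduling side (assuming the earlier script is saved as a module named `sync_progressive_to_s3`, a hypothetical name), a simple loop could look like the following; in production a Windows scheduled task would be more typical.

```python
import time

# Hypothetical module name for the sync sketch shown earlier.
from sync_progressive_to_s3 import sync_to_s3

INTERVAL_SECONDS = 5 * 60  # match the client's 5-minute backup cadence

while True:
    try:
        sync_to_s3()
    except Exception as exc:
        # Keep the loop alive through transient failures (network blips, etc.).
        print(f"Sync failed: {exc}")
    time.sleep(INTERVAL_SECONDS)
```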
April 21, 2020 (Author)

Thanks! One more question, please.

13 minutes ago, Wim Decorte said: that could potentially affect the overall performance of your server.

You mean the performance of the Windows server machine, not FMS. Is that correct?

Ve
April 21, 2020

The two cannot be separated. Any server has four broad potential bottlenecks:

1. processing power
2. disk I/O
3. memory
4. network throughput

FMS, as a database server that reads, writes, and processes data, is very sensitive to #1 and #2. If you are going to have a separate process that reads 3GB of data every 5 minutes and pushes it across the network, you are going to expend quite a bit of resources on #1, #2, and #4. That will affect FMS. To mitigate that, you need to build in extra resources in those areas so that the effect is minimal. Things like: make sure the AWS instance does not have burstable processing or burstable disk I/O, go for higher-than-normal IOPS, pick an instance with more cores than you'd think you need for just FMS, ...