
Use MirrorSync for recovery?


This topic is 4265 days old. Please don't post here. Open a new topic instead.

Recommended Posts

Hi,

I was starting to write scripts to export and import all of my information so that, if we crashed, I could load everything back into an undamaged copy of the program. I realized it is a lot of work, and I might forget to add a newly created field to the export/import scripts. It just does not feel very dependable, and it seems incredibly time-consuming.

My question: can MirrorSync be used instead? I see it can be used between two FMPro files, but has anyone used this for recovery before? Also, what I read about recovery says to export - does MirrorSync clean the data like an export would? If not, an export would still be important, right?

My thought is that a good backup could be opened and synced with the crashed but still usable file, pulling its data into the backup copy. Bad idea?


You should always save clones of your currently working (good) files. If your DB crashes, you can import the data into the clone. Can you use MirrorSync? Possibly. Ask 360Works; they will tell you.

You should never move forward with a file that has crashed. You can recover a file, find the bad elements, manually clean them out, and then use the file, especially if only a few fields and layouts are affected. But if the file crashed and is unusable, you have no idea what type of damage you could be facing.


Thank you, Agnes; that is why I posted in the 360Works section. Was I wrong to ask here? If so, I apologize.

You confirm what I have been reading, and I would never use a file which is recovered or damaged. I keep a clone for just such a reason, but if the backup is good, why import into an empty clone instead of updating the backup file? I think it would be faster, and it certainly would be easier to do: let MirrorSync keep track of added fields and deleted records.

I just don't know if others use it this way or whether it would work. I can only justify so many dollars for this project this year, and MirrorSync could then serve two purposes for us: syncing the iPad file and acting as a disaster recovery tool. That would add weight to my case, and our money is scarce. It might even work by syncing into an empty clone rather than a backup file.

There are many good programs out there, particularly from 360Works and Draconventions, and I am trying to pin down our needs for the best cost; multipurpose programs are especially important. Thank you very much for helping me.


Hi David - it should work fine to have a client copy of FileMaker running at all times, syncing every minute or so with the server. That way, you have a copy of your FileMaker files that is never more than 1 minute old. If the server were to die, you can just take the database from the client and place it onto the server.

As to your other question, about taking a backup and using it to sync with the server - I think that would work, but the critical question is whether or not the file is accessible to the XML Web Publishing Engine. As long as it is, MirrorSync can pull from it. The transfer is plain ASCII data, so it is not subject to corruption.
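Jesse's point about the XML Web Publishing Engine can be checked directly, since FileMaker Server exposes hosted files through documented fmresultset URLs. The sketch below just builds such a URL in Python; the host name, database, and layout names are placeholders for your own setup, and this is only a quick way to probe accessibility, not part of MirrorSync itself.

```python
import urllib.parse

def fmresultset_url(host, database, layout, query="-findall"):
    """Build a FileMaker XML Web Publishing Engine request URL.

    The path and parameter names (-db, -lay, -findall) follow
    FileMaker's fmresultset grammar; the host, database, and layout
    here are placeholders for your own server and file.
    """
    params = urllib.parse.urlencode({"-db": database, "-lay": layout})
    return f"http://{host}/fmi/xml/fmresultset.xml?{params}&{query}"

url = fmresultset_url("fms.example.com", "Contacts", "Sync_Contacts")
# If fetching this URL (e.g. with urllib.request) returns an XML
# <fmresultset> document, the file is reachable by the Web Publishing
# Engine and MirrorSync should be able to pull from it.
```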


Hi David - it should work fine to have a client copy of FileMaker running at all times, syncing every minute or so with the server. That way, you have a copy of your FileMaker files that is never more than 1 minute old. If the server were to die, you can just take the database from the client and place it onto the server.

This is wonderful to know. Doesn't your program automatically add new fields to the sync? It seems that, once it is connected, it would take care of itself and include new fields. Is that right? Also, is it faster if I use the new UUIDs that you demonstrate instead of the existing auto-enter serial numbers? This is a new program I have written, and it does not hold real information yet, so I could switch the IDs if that will help.

As to your other question, about taking a backup and using it to sync with the server - I think that would work, but the critical question is whether or not the file is accessible to the XML Web Publishing Engine. As long as it is, MirrorSync can pull from it. The transfer is plain ASCII data, so it is not subject to corruption.

So if you were me and had access to a computer with the client installed that could run constantly (is that called a robot?), would you use this robot to keep a current data file available to instantly replace a served data file? It would be quicker than exporting and importing all the data, plus it would eliminate the problem of keeping the import/export maps up to date. Our company is concerned that if we have a crash, we need to be back up within minutes, and none of us knows server administration, so we want something simple and fast. Even I can easily take the file from the robot and place it on the server.

Thank you very much for helping me understand this, Jesse.


Yes, David:

Jesse's product and his responses are good news for syncing your data, but note that MirrorSync does NOT synchronize the structure of your database, such as the fields in each table, relationships (which is why using UUIDs is a plus with MirrorSync), layouts, or the objects placed on layouts. This is why you must add things such as script steps, scripts, the MirrorSync table, and a special "Sync" layout in the data source for each table, holding every field within that table that you wish to sync. It does NOT take care of itself and include new fields as you change your database schema.

However, I employ a data separation model, where the interface is separate from the actual data. This "front-end" contains most, if not all, of your logic, such as relationships to the external data source. Though you can't edit those tables from within the interface file, you can create relationships in the "Graph" (the Entity Relationship Graph in "Manage Database…"). I put all of the MirrorSync pieces into my data file hosted by FileMaker Server, then scripted an interface element to trigger the sync. I can change the interface almost as much as I want (nothing is absolute), and it has no bearing on the integration of MirrorSync. Depending on the complexity of your solution, you may do yourself a favor by documenting schema changes as you make them to your data file, and verifying whether each change needs to be included in or excluded from synchronization.

It is faster to use auto-calculated UUIDs (which are hidden from users) than auto-enter serial numbers (like invoice numbers), and if you also disable "unique value" validation (used to prevent duplicates) after switching to UUIDs, you will see a definite, noticeable speedup with a large amount of data (thanks to Matt Navarre, the "Search King", for that tip!). For me, one of the most important reasons for using UUIDs with MirrorSync is that it makes initial setup, and future changes, very quick. I'd recommend that you switch the IDs; it will help.
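Scott's point about UUIDs can be illustrated outside FileMaker. With auto-enter serials, two disconnected copies of a file will hand out the same "next number" while offline, so keys collide at sync time; random UUIDs need no coordination at all. A small Python sketch, with `uuid.uuid4()` standing in for FileMaker's Get(UUID) and the serial values made up for illustration:

```python
import uuid

# Two devices each assign serial primary keys, continuing from the same
# last-known value while offline: the keys collide at sync time.
server_ids = [101, 102, 103]
ipad_ids = [101, 102]  # same "next serial" logic produces the same numbers
assert set(server_ids) & set(ipad_ids)  # collision: {101, 102}

# With UUIDs, each device generates keys independently; a clash is
# astronomically unlikely, so no coordination is needed during sync.
server_uuids = {str(uuid.uuid4()) for _ in range(1000)}
ipad_uuids = {str(uuid.uuid4()) for _ in range(1000)}
assert not (server_uuids & ipad_uuids)  # no collisions
```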

I am intrigued that you thought about creating a perpetual sync "robot to keep a current data file available to instantly replace a served data file." It's an application I've thought about, but it requires more thought on my part to explore. I also like your idea of importing into the local copy (very fast), then syncing the data to your hosted data file: what an opportunity for someone to do a serious speed test ("Don't get me started!").

Best regards,

- - Scott


Sorry I have been slow responding; I am trying to chew on too much at one time. Your input is extremely valuable and I appreciate it. I have separation in place. In the data file I have one entity relationship, which is needed for auto-enters and calculations. I cannot test yet because we realized we must replace our server. This is a new concept for our company.

The problem I see with a straight import-update is that records deleted in the original will remain, whereas MirrorSync will remove the deleted records, offering a true sync. Let's say the server crashes. It is easy to just replace the interface file. The data file needs to be replaced, because a crashed file should never be used. Since the latest backup is fine, do you think it is quickest to use the backup and sync with the crashed file (assuming MirrorSync scrubs and cleans the data the same way an export-import would), or to use the crashed file and sync into an empty clone? Import mapping is tedious and prone to error, and unless it is run against an empty file, duplicates come back in. MirrorSync seems much more sound.
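The distinction David is drawing can be made concrete. An import set to "update matching records; add remaining" only adds and overwrites; it never deletes, so a record removed from the source survives in the target. A sync that tracks deletions removes it too. A minimal Python sketch, with dictionaries standing in for tables and the record values made up:

```python
backup = {1: "Alice", 2: "Bob", 3: "Carol"}       # last good backup
current = {1: "Alice", 3: "Carol v2", 4: "Dave"}  # record 2 was deleted

# Import-update: add and overwrite matching records, but never delete.
import_result = {**backup, **current}
# Bob (deleted in the current file) has come back: a "ghost" record.
assert import_result == {1: "Alice", 2: "Bob", 3: "Carol v2", 4: "Dave"}

# True sync: deletions recorded in the source are replayed as well.
deleted = backup.keys() - current.keys()
sync_result = {k: v for k, v in import_result.items() if k not in deleted}
assert sync_result == {1: "Alice", 3: "Carol v2", 4: "Dave"}
```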

I just want to consider these other options as well, since my boss gave me a dirty look when I said I wanted a computer to just sit there and not be used by a person. Someone said, or I read, that RAID does this, but from what I understand RAID mirroring is instantaneous, so it would mirror the damaged file, and that would do no good.


The problem I see with a straight import-update is that records deleted in the original will remain, whereas MirrorSync will remove the deleted records, offering a true sync. Let's say the server crashes. It is easy to just replace the interface file. The data file needs to be replaced, because a crashed file should never be used. Since the latest backup is fine, do you think it is quickest to use the backup and sync with the crashed file (assuming MirrorSync scrubs and cleans the data the same way an export-import would), or to use the crashed file and sync into an empty clone? Import mapping is tedious and prone to error, and unless it is run against an empty file, duplicates come back in. MirrorSync seems much more sound.

David, if your objective is just to have a clean copy of the server file after a crash, I don't think MirrorSync is necessary - the new progressive backup option in FileMaker Server 12 effectively gives you the same protection. It is designed to run very often, i.e. every minute or so, and to restore quickly if the server goes down. For more information, see this document:

http://help.filemaker.com/app/answers/detail/a_id/10243/


  • 4 weeks later...

Hello Jesse, thank you very much for responding.

I did not know about one-minute backups in 12! But it seems I hit on something good that Mr. Scott mentions, although I am not sure how to use the information:

I am intrigued that you thought about creating a perpetual sync "robot to keep a current data file available to instantly replace a served data file." It's an application I've thought about, but it requires more thought on my part to explore. I also like your idea of importing into the local copy (very fast), then syncing the data to your hosted data file: what an opportunity for someone to do a serious speed test ("Don't get me started!").

Best regards,

- - Scott

If I can (in 12) back up every minute incrementally, then I do not need to create the intensive export/import scripting to replace a damaged file. I can always just use the backup, right? Maybe Mr. Scott means 'perpetual' as in capturing every change, unlike 'incremental' which runs every minute? I am still developing this file and make changes to it every day. I am hoping to provide a disaster recovery process that does not require constantly updating the import/export maps, used to export from the damaged file and import into a clean clone, every time I add a field. All it will take is forgetting once, and the solution could lack needed data because I failed to update the maps. It bothers me a great deal.

We will be using FileMaker Go also, but I can't even make my tests work there. I use Dropbox, but FileMaker Go can't even see my file. I have a long way to go yet.

