DataCruncher

Speed up sync - different replication frequency for tables?



Hello all - 

 

I'm considering MirrorSync for FileMaker Server to FileMaker Server replication.

The old problem: one office in CA, one in NY. Latency kills the FileMaker Pro clients whenever they connect to the FMS instance on the opposite coast.

 

Think of our database as a medical records system for patients in a clinic that also manages patient handling. 

 

Currently, we have one main table, PATIENTS. 

One of the fields in this table, PATIENT_STATUS, is a numerical value ranging from 0 to 6 (0 = patient just showed up, 1 = patient in waiting room, 2 = patient seeing the nurse, 3 = patient seeing the doctor, 4 = report transcribed, 5 = prescription issued, 6 = patient discharged).

The workstations display lists showing the patients in each status. Obviously, as patients come and go, statuses change, and we need these lists updated as close to real time as we can get.
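As an illustration only (the dict and helper below are my own naming, not part of the actual schema), the status lifecycle amounts to a simple lookup:

```python
# Illustrative mapping of the PATIENT_STATUS codes described above; the
# names here are hypothetical, not part of the real database.
PATIENT_STATUS = {
    0: "patient just showed up",
    1: "patient in waiting room",
    2: "patient seeing the nurse",
    3: "patient seeing the doctor",
    4: "report transcribed",
    5: "prescription issued",
    6: "patient discharged",
}

def is_active(status):
    """Statuses 0-5 are patients currently in the clinic; 6 is discharged."""
    return 0 <= status < 6
```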

 

Now, syncing a CA server with a NY server is easy enough, and I don't mind if a sync takes a minute, since each office gets fast local performance in the meantime.

 

I'm not sure what best practice would be to improve sync performance. In our current setup, the main table, PATIENTS, has about 150,000 records: the 20 patients sitting in the clinic right now, with PATIENT_STATUS from 0-5, plus 149,980 records for discharged patients with PATIENT_STATUS = 6.

I need sync to be as fast as possible for the 20 patients in the office now, but I could absolutely do with less frequent sync intervals for the remaining 149,980 records.

Also, something tells me that syncing 20 records would take a lot less time than syncing 150,000 records - especially when trying to get sync intervals down to a few seconds.

 

My options, as I understand them, are:

1) Leave the database structure as is and hope it will be fast enough for our purposes (let FileMaker handle the load).

2) Split the PATIENTS table into two tables: CURRENT_PATIENTS and ARCHIVE_PATIENTS. I would then run MirrorSync every five seconds on CURRENT_PATIENTS, but only once an hour on ARCHIVE_PATIENTS.

This may come with its own set of problems: we also search for patients by name, so I'd have to script two consecutive searches, one in CURRENT_PATIENTS and one in ARCHIVE_PATIENTS, and then merge the results somehow.

I also imagine that splitting my tables would be quite a bit of work: adjusting layouts, scripts, and relationships.

Lastly, upon discharge, each record would have to be moved from CURRENT_PATIENTS to ARCHIVE_PATIENTS.

3) Do the same as described in 2, but also create a new join table that serves as my new main table: GLOBAL_PATIENT, with a PATIENT_ID primary key related to the PATIENT_ID key in CURRENT_PATIENTS and also to PATIENT_ID in ARCHIVE_PATIENTS.

My understanding is that with this approach, searches would only need to target GLOBAL_PATIENT instead of both the ARCHIVE_ and CURRENT_ tables.

Options 2 and 3 may well amount to the same thing, but to me they seem different.
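To make option 2's search concern concrete, here is a rough sketch of the "search both tables, then merge" step, assuming each table search returns a list of dicts; the helper names are illustrative, not FileMaker API:

```python
# Sketch of option 2's two-search merge. find_by_name and merged_search
# are hypothetical helpers standing in for scripted FileMaker finds.
def find_by_name(records, name):
    return [r for r in records if name.lower() in r["NAME"].lower()]

def merged_search(current_patients, archive_patients, name):
    hits = find_by_name(current_patients, name) + find_by_name(archive_patients, name)
    # Deduplicate on PATIENT_ID in case a record briefly exists in both
    # tables mid-transfer (e.g. during the discharge move in option 2).
    seen, merged = set(), []
    for r in hits:
        if r["PATIENT_ID"] not in seen:
            seen.add(r["PATIENT_ID"])
            merged.append(r)
    return merged
```

The deduplication step is the part that would need care in a real script: without it, a patient being moved between tables at search time could show up twice.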

I am not sure whether I am missing other potential ways to make this work.

I would appreciate any advice.

 

Have a great day!

 

 

Edited by DataCruncher


How long do the syncs currently take?

Also, is the disk where MirrorSync is installed an SSD drive?


I haven't run a full sync yet - partially because I want to have my database structure optimized before I do. 

 

I am not sure whether an SSD on the MirrorSync machine would make that much of a difference; we have set aside a machine with about 50 GB of RAM.

 

I will report current sync times as soon as I have them. 


I would be surprised if it takes longer than 5 seconds to run an incremental sync with 20 records changing.

An SSD drive will have a noticeable impact on sync speed, although it may be 'fast enough' without the SSD.

Try setting up the sync exactly as you have the current structure; I don't think the number of records you're talking about will have much effect on the sync speed.

By the way, MirrorSync probably won't benefit from more than 1-2 GB of RAM, so hopefully you're using the rest of that RAM for other things. It has a default 1 GB cap on allocation; you can raise that following the instructions in the advanced docs, but I doubt you'll see any benefit beyond 2 GB.


OK - some news here. I have limited the sync to just one of my 72 tables. That table has 62,000 records, text only, no containers.

I have two instances of FMS 12 set up on a local subnet, and MirrorSync running on a dedicated third machine with 8 GB of RAM and SSD disks. They are all connected through a gigabit switch.

On the initial sync, it seems to start fine but then fails after about 8 minutes with:

 

 java.io.IOException: Server returned a 500 (Internal server) error. This could be caused by many things, but it usually means that the Web Publishing Engine ran out of memory. The URL that generated the error is http://domain.com/fmi/xml/FMPXMLRESULT.xml?-db=TEST&-lay=MirrorSync&-script=MirrorSync+(TEST)&-script.param=doServerAction 1 Let([%24command%3D"getModificationsSince"%3B%24table%3D"sync_CASES"%3B%24%24sessionId%3D"67ab1e2d-dcf1-4505-91c0-8d16aebb9634"%3B%24dataMode%3D1%3B%24timestamp%3D""%3B%24%24MIRRORSYNC_USERTOKEN%3D""]%3BTrue)&-max=all&-findany

 

So I raised the WPE timeout to 2000, and I also gradually increased the host FMS memory allocation from the default 500 MB to 2000 MB and now 8000 MB - still the same thing.

I'll allocate 16000 MB next. My concern is simply that both databases are already in place; I'm merely trying to sync one new record. Hopefully, once I get past this initial sync, incremental sync will be more responsive; otherwise I may really have to go CWP...

I'm running out of ideas - it fails again with error 500 after 7 minutes, with 16 GB of memory allocated to the WPE on FMS 12. My entire database file is only 5 GB. Perhaps I'm doing it wrong...


Did increasing the timeout to 2000 have any effect on how long it took for that error to come up?

If you have the WPE at 8,000 megs of RAM, I doubt adding more will help.

What version of MirrorSync are you running?

After the next time it fails, please send us a problem report by clicking the link on the MirrorSync launch page.


So increasing the timeout didn't work. 

 

Feeding the initial sync spoon by spoon, however, did. 

 

I first synced an empty clone database - which worked fine.

Next, I imported 3,000 of my 64,000 records - which worked fine, too. 

I then imported another 5,000 records - you get the idea. 

 

Once all 64,000 records were imported, incremental sync works fine and takes about 4 seconds. 
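For anyone else seeding an initial sync this way: the spoon-by-spoon approach is just chunked loading. A generic sketch (batch size and helper names are my own, not a MirrorSync feature):

```python
def batches(records, size):
    """Yield successive chunks of `records`, mirroring the 3,000-then-5,000
    manual imports used above to seed the initial sync in pieces."""
    for start in range(0, len(records), size):
        yield records[start:start + size]

# Hypothetical usage: import each chunk, let the sync catch up, repeat.
# for chunk in batches(all_records, 3000):
#     import_into_spoke(chunk)   # placeholder for the manual FM import
#     run_incremental_sync()     # placeholder for triggering MirrorSync
```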

 

Next step will be to take the spoke machine off-site and see how latencies change these numbers. 

 

Thank you for your assistance.


Glad it worked. If you can answer my questions, I might be able to see why it didn't work originally.


Jesse - 

Thank you. I still need you to look into this; the initial sync never works unless I delete all data and manually import it piece by piece. It's impractical to keep doing this for every table. 

Once the initial sync is done, all incremental syncs work flawlessly. 

 

To answer your questions:

 

Did increasing the timeout to 2000 have any effect on how long it took for that error to come up? - No. I even increased the timeout for XML web publishing in catalina.sh to 20,000 - still got the internal server error. 

If you have the WPE at 8,000 megs of RAM, I doubt adding more will help. - I even took it to 49,000 once just to try - still the same thing. 

What version of MirrorSync are you running? - 360Works MirrorSync 3.17, build 9239 (3/24/17)

 

Now, this is my system setup: 

HUB: FMS 12.0.6 on Mac OS

Spoke: FMS 12.0.2 on Mac OS

MirrorSync is running on Ubuntu 17.04 (GNU/Linux 4.10.0-20-generic x86_64).

 

We followed the install instructions by the book; the only thing I can think of now is that we somehow corrupted the installation of the MirrorSync scripts.

During the initial MirrorSync configuration, there is a page that required us to copy a table, some scripts, and a layout into our FM database file. The table and the layout copied fine, but the copy-scripts button asked which version of FM we are running (FM 13+) and then gave us an error: 

Could not copy script onto clipboard: java.lang.NullPointerException

Perhaps I'm not supposed to have spaces in my layout names? 

Either way, we have tried the configuration on 8 different client machines running different versions of Java - all to no avail. 

As a workaround, 360Works support emailed us a .fmp12 file that contained nothing but the MirrorSync scripts, which we manually copied into our database, and we went through which fields should be referenced over the phone. I am attaching the empty .fmp12 with the scripts that I got from MirrorSync.  

 

Perhaps the most prudent way to untangle all of this would be to entirely reverse the process where we manually inserted the MirrorSync scripts, and instead get the copy-scripts function to run during the initial MirrorSync configuration. 

 

We did send a ticket to 360Works, but I doubt it logged any relevant information, as the NullPointerException occurs on the local machine, I believe. 

 

Thank you!

 

Robert

MirrorSyncScripts-2.fmp12


Hi Robert - if you have the time and the inclination, I'd love to do a screen sharing session to see the process from beginning to end and try to troubleshoot this issue. Since you've got it working now, I understand if you just want to leave well enough alone, but if you'd like to help get to the bottom of the original issue, please send me an email to support@360works.com letting me know when a good time is for you and I'll try to figure it out with you.


Thank you!

Sending my email now...

Now, I did get it to work, but on a sample database. I can't apply the approach I used on one or two tables to my live database with 71 tables...

 

So I'd love to get to the bottom of this...

 

 



ALL FIXED! Thank you Jesse for helping me through this. 

 

If anybody experiences similar issues, these are the steps we took to fix our MirrorSync FMS15-to-FMS15 issue: 

 

1) Upgrade to FMS 15. That makes JDBC an option. 

2) Don't use JDBC if any of your records has a field with more than 35,000 characters in it; it will not work. Use XML instead. 

3) Make sure you increase the memory allocation, if required, in three different spots: the WPE on the hub server, the WPE on the spoke server, and also the MirrorSync Tomcat. We used 2048M and fared fine in the end. 

4) If you hit a timeout, increase the timeout. 

5) Make sure you don't have calculation fields on your sync layouts. These recalculate locally and will likely slow down your sync. 

6) Make sure you don't sync external containers; they are not supported by MirrorSync. 

7) Make sure your MirrorSync table has no client record under 'internal' before your first sync. 
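For point 2, a quick pre-flight check can flag records that would break JDBC sync. This is a sketch only: it assumes your records can be exported as dicts of field name to value, and the function name is my own; the 35,000-character limit is the one we observed.

```python
# Pre-flight check for the JDBC field-size limit mentioned in point 2.
# Record format and function name are hypothetical illustrations.
JDBC_FIELD_LIMIT = 35_000

def oversized_fields(records):
    """Return (record_index, field_name, length) for every text field
    whose contents exceed the limit observed with JDBC sync."""
    problems = []
    for i, record in enumerate(records):
        for field, value in record.items():
            if isinstance(value, str) and len(value) > JDBC_FIELD_LIMIT:
                problems.append((i, field, len(value)))
    return problems
```

If this returns anything, pick XML sync for that table instead of JDBC.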

 

That's pretty much all I remember, but our installation works really well now. 

 

Thanks again, 360Works, for all your assistance in getting this going!

