
Things I learned about syncing




Mostly posting this to hear myself talk but maybe someone will find it useful or a helpful discussion will ensue...

My company has deployed a FileMaker solution on iPads for our sales staff. Because they are frequently in locations with poor or no reception, their solution is offline. They sync on demand when they have a connection and can select what they want to sync.

My company is also one of those that doesn't like to buy anything, so I had to build my own sync solution. The trouble is, our sales staff isn't always the best about syncing, and sometimes, if they select all options, it can take up to two hours to complete a sync!

I went to DevCon this year with a main goal of improving the sync speed. I came back with several ideas and these are the results.

Currently, my solution is set up with a separate connector file. I did this because in the past I kept having issues with corruption of the main file. It would happen regularly, but ever since I switched to using a connector file it has nearly disappeared. Hurray for that! The connector file has table occurrences for the server files and the local files and uses those relationships to find the records to update. I then do a simple Import Records: Update Matching to update the data (in most cases; sometimes I loop through and Set Field if the data is especially important).
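For anyone who hasn't built one of these, the import pass is roughly the following shape. This is just a sketch, not my actual script: the layout names, the host path, and the match field are placeholders, and limiting the sync to changed records happens elsewhere.

    # Work in the local table that will receive the data
    Go to Layout [ "Local_Inventory" ]
    Show All Records
    # Pull from the hosted file; 'Update matching' uses the key mapped in the import (e.g. item ID)
    Import Records [ No dialog ; "fmnet:/your.host.example/SalesData" ; Update matching records in found set ; Add remaining data as new records ]
    Commit Records/Requests [ No dialog ]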

I talked to a bunch of people and attended several sessions about FMGo and the overwhelming feeling seemed to be "NO, BAD DAN!" and I left with my head swimming with ideas of PerformScriptOnServer and looping through records and payload files. Fun.

First, I did some benchmarking. With my original script I got the following sync times for five different segments (some are just one table, some are multiple):

Customers: 6 seconds
Inventory: 3 minutes 32 seconds
Orders: 2 minutes 59 seconds
Inventory Master: 3 minutes 58 seconds
Pricing: 27 seconds

I then went in and changed all of the syncs to loop through the records and Set Field. I had it set every field unconditionally, because the sync took even longer when I first evaluated whether each field actually needed to be set.
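In case it helps to picture it, each table's loop was basically this shape. Field names are made up, and I'm assuming a relationship from the server occurrence to the local occurrence keyed on the item ID with "allow creation" turned on so missing local records get created:

    Go to Layout [ "Server_Inventory" ]
    Perform Find [ Restore ]
    Go to Record/Request/Page [ First ]
    Loop
        # Push each field across the relationship to the matching local record
        Set Field [ Local_Inventory::Description ; Server_Inventory::Description ]
        Set Field [ Local_Inventory::Price ; Server_Inventory::Price ]
        Go to Record/Request/Page [ Next ; Exit after last ]
    End Loop
    Commit Records/Requests [ No dialog ]

With that version, the times came out like this: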

Customers: 2 seconds
Inventory: 23 minutes 58 seconds
Orders: 2 minutes 10 seconds
Inventory Master: 43 seconds
Pricing: 3 seconds

So I had some luck in a few places, but the Inventory sync was absolutely atrocious. A couple of things to note: for a couple of tables I was syncing everything instead of just what was new; I believe that was the case for the Customers sync in particular. Also, the big thing for the Inventory Master is that it has product pictures in it. I learned about Base64 encoding and decoding at DevCon and I'm sure that was the winning ticket for making that sync go faster. I'm guessing even the import version would go quicker if I were to use that instead of importing the large picture files.
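The picture part comes down to two functions added in FileMaker 13. This is just the idea, with placeholder field names and a made-up file name:

    # Server side: turn the container into plain text that travels well in the sync
    Set Field [ Server_InventoryMaster::PhotoBase64 ; Base64Encode ( Server_InventoryMaster::Photo ) ]

    # iPad side: turn the text back into a file in the local container field
    Set Field [ Local_InventoryMaster::Photo ; Base64Decode ( Server_InventoryMaster::PhotoBase64 ; "photo.jpg" ) ]

You can keep the original file name by grabbing it with GetContainerAttribute ( Server_InventoryMaster::Photo ; "filename" ) before encoding.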

Since the data changes regularly, I wanted to be sure that I had good numbers to work with. I was telling the script to sync 10 days' worth of changes. I reran the benchmarking today with different results (this time it was over wireless instead of cellular because I was running out of data on my plan):

Old Script:

Customers: 2 seconds
Inventory: 12 seconds
Orders: 31 seconds
Inventory Master: 8 minutes 59 seconds
Pricing: 15 seconds

New Script:

Customers: 4 seconds
Inventory: 2 minutes 35 seconds
Orders: 1 minute 24 seconds
Inventory Master: 13 seconds
Pricing: 6 seconds

On the whole, it would seem that looping with Set Field is not a good method if you want speed. Of course, you can argue it is more reliable, so there's a trade-off.

I wanted to experiment with running the query on the server and then sending over just the data for the Inventory sync (about 1700 records). I used PerformScriptOnServer to pass the date to be synced. I had the server find the records I needed, then loop through them and capture the data into a variable. I then set that into a field in a data table. The iPad copied over the data and I had it. But now what to do with it??? First I tried unpacking it into a temp table and then using Import to update the data: 5m 10s. Ok, that's twice as long. I then unpacked it directly into the target table by querying the item ID, updating the data if the record was found and adding it if it wasn't (I wasn't even concerned with deletions at this point): 9m 1s. That's going the wrong way!!
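For anyone wanting to try the same experiment, the server-side script was roughly this shape. The field names and the tab/return delimiters are just for illustration, and error handling is stripped out:

    # Runs on the server via Perform Script on Server; the parameter is the cutoff date
    Set Variable [ $cutoff ; Value: Get ( ScriptParameter ) ]
    Go to Layout [ "Inventory" ]
    Enter Find Mode [ ]
    Set Field [ Inventory::ModifiedTimestamp ; "≥" & $cutoff ]
    Perform Find [ ]
    Loop
        # One line per record, fields separated by tabs
        Set Variable [ $payload ; Value: $payload & Inventory::ItemID & Char ( 9 ) & Inventory::Description & Char ( 9 ) & Inventory::Price & ¶ ]
        Go to Record/Request/Page [ Next ; Exit after last ]
    End Loop
    # Park the payload in a field the iPad can grab (this is the "data table" field I mentioned)
    Set Field [ SyncPayload::Data ; $payload ]
    Commit Records/Requests [ No dialog ]

The iPad side then walks the payload with GetValue ( ) and either writes it into a temp table for the import or sets it straight into the target table.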

So, with the exception of making sure I sync only the records that were changed and using Base64 encode/decode for pics, it looks like I'll be sticking with the import function. One good thing is that the two users I've switched over to FMGo 14 are saying syncing is going a little quicker than it did on FMGo 13.


  • 2 weeks later...

Hey Cable (Dan),

    A very interesting post; thanks for taking the time to write this up and to put metrics on your efforts!

We have just done a very similar thing to what you describe: we had a client who needed offline iPad use but wanted to sync with the home server when they had a good WiFi connection. We also rolled our own sync mechanism, using (it sounds like) very similar techniques: a connector file to establish the link to the host file and then a script that does field-by-field copies. We went with field-by-field because we wanted to take advantage of transactional methods (i.e. the 'reliability' or 'surety' you spoke of). We used the iOS sync whitepaper, by Katherine Russell of Nightwing Enterprises, on FileMaker's technical resources page as a template; is that what you were using?

You didn't mention whether you were doing this transactionally during the runs where you used a field-by-field copy. Are you syncing one record at a time or doing the whole batch as a transaction? If you sync one record at a time, or do anything else non-transactional, you will have MANY commits thrown in there, which could significantly impact your speed. With transactions there is only one commit. It will be a BIG one, given the data sets you describe, but still less overhead than one commit per record.
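To make that concrete, the transactional shape we use is roughly the following. Names are invented, where the values come from is hand-waved, and real error handling is more involved. The trick is that every edit happens through relationships from one utility record, so nothing is committed until the very end:

    # One utility record anchors the whole batch
    Go to Layout [ "SyncSession" ]
    New Record/Request
    Set Variable [ $error ; Value: 0 ]
    Loop
        Set Variable [ $i ; Value: $i + 1 ]
        Exit Loop If [ $i > ValueCount ( $orderIDs ) ]
        # Re-point the match key; 'allow creation' on the relationship adds missing records
        Set Field [ SyncSession::MatchOrderID ; GetValue ( $orderIDs ; $i ) ]
        Set Field [ OrdersByMatch::Total ; GetValue ( $totals ; $i ) ]
        Set Variable [ $error ; Value: Max ( $error ; Get ( LastError ) ) ]
    End Loop
    If [ $error = 0 ]
        # One commit covers every record touched above
        Commit Records/Requests [ No dialog ]
    Else
        # One revert rolls the whole batch back
        Revert Record/Request [ No dialog ]
    End If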

Also, keep in mind that Base64-encoded data is roughly a third larger than the original file (every 3 bytes of the original become 4 characters of text). So while you gain the advantage of being able to move the image over ASCII-only interfaces, you are moving a noticeably larger chunk of data.

Also, I was curious why you were doing a date range of records rather than watching which records were modified/created after the 'last successful sync' time, and only moving those over. That seems like it might cut down on the number of records you need to move around. Is that the only filter you have on which records to sync? Our testing has been with limited numbers, but 6-8 records only take 20-30 seconds. We aren't moving any binary or image files around; it is all text fields. We also only have...100?...fields across 4-5 tables total that are being synced. We haven't tried it over a cellular network.
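The pattern I have in mind is roughly this (field names are made up): store the timestamp of the last sync that finished cleanly, ask only for records modified on or after it, and only advance the marker once everything has gone through.

    # Ask only for records touched since the last clean sync
    Set Variable [ $since ; Value: Prefs::LastSuccessfulSync ]
    Go to Layout [ "Inventory" ]
    Enter Find Mode [ ]
    Set Field [ Inventory::ModifiedTimestamp ; "≥" & $since ]
    Perform Find [ ]
    # ...sync the found set here...
    # Advance the marker only after everything finished without errors
    Set Field [ Prefs::LastSuccessfulSync ; Get ( CurrentHostTimestamp ) ]
    Commit Records/Requests [ No dialog ]

Using Get ( CurrentHostTimestamp ) rather than the device clock avoids problems when an iPad's clock is off.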

Are you doing bi-directional sync, or just one-way (up from the iPad)?

 

--  Justin
