Having Trouble with Large Record Sets



I have a solution with twenty tables, most of them with small record sets of up to a few hundred records. One table has 4,000 records and one has 60,000 records.

 

I'm able to set up the sync process for all the smaller record sets, including the table with 4,000 records. Everything is OK. I can make changes to a few records and the sync takes about 20 seconds.

 

As soon as I add the large table to the process, things fail.

 

I've read previous posts. 

 

I have populated all the ES_ fields and then duplicated the file, to ensure that records have matching UUIDs and timestamps.

I have used the reset script to update the global fields on the mobile side. 

Without making any changes, the sync process takes up to 20 minutes on the laptop.

The iPad chugs along (I have to keep stroking the screen to prevent it from sleeping) and eventually FileMaker Go crashes.

 

Is there any way around this?

 


Here are two things to consider:

 

Do you really need access to all 60,000 records in that larger table? If not, then only sync the records that you absolutely need.

 

If you do need all 60,000 records, do they change often? If not, then you might want to pre-load them into the mobile database, and in the future, only sync what has changed.
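For example, the pre-load could be as simple as importing the big table into the mobile copy before you send it out, and then stamping the sync timestamps so the first real sync only looks for later changes. A rough sketch (file, table, and field names here are placeholders, not the framework's actual ones):

    # Run in the mobile copy, once, before distribution
    Go to Layout [ "BigTable" ]
    Import Records [ With dialog: Off ; "HostedFile.fmp12" ; Add ; Matching field names ]
    # Then stamp "now" into the mobile file's last-sync fields so the first
    # real sync only considers records modified after this point.
    Commit Records/Requests [ With dialog: Off ]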

 

-- Tim


No, we don't need all the records. We're distributing the database to sales reps, and they only need the data sets for the state they represent. That reduces the load to 19,000 records in the largest state.

 

With that data set, when I make a single change and the relationship is set to PUSH, the process completes but returns an error. Unfortunately, the server error is blank (""), and I don't know what that means yet.
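One thing I might try is having the server-side script hand back its own error code, so a blank error at least becomes a number. Roughly (exact placement depends on the sync scripts, so this is only a sketch):

    # At the end of the script that runs via Perform Script on Server
    # (capturing Get ( LastError ) after each critical step would be more thorough)
    Set Variable [ $error ; Value: Get ( LastError ) ]
    Exit Script [ Text Result: $error ]

    # Back on the mobile side, immediately after the Perform Script on Server step
    Set Variable [ $serverResult ; Value: Get ( ScriptResult ) ]
    Show Custom Dialog [ "Server result" ; $serverResult ]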


I'm fairly sure that the problems I'm seeing relate to PSoS (Perform Script on Server). When I turn off PSoS, the process runs, though it is slow.

 

That leaves me with two issues to resolve.

 

First, how do we ensure that the file copy we send to a user is ready to sync only the records that have changed? Currently, when I take a backup and delete the unwanted records, the first sync checks every record. I want to avoid that; I want the distributed copy to be "up to date." I've seen advice in other threads, but I haven't developed a clear, step-by-step solution. Is there a thread that explains this task?

 

Second, I can see in the code that there is scope for reducing sync times by doing more to determine the record set on the host side, using business rules. I obviously need to do that.
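What I have in mind is constraining the found set in the host-side script before it builds the payload, something like this (layout, field, and variable names are placeholders):

    # In the host-side script that prepares the payload, before it loops over records
    Go to Layout [ "ES_BigTable" ]
    Enter Find Mode [ Pause: Off ]
    Set Field [ BigTable::State ; $state ]   # $state passed in as a script parameter
    Perform Find []
    # Only the found set (one rep's state) ends up in the payload.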


I've resolved the first problem. I modified the Sync Utilities script to include an extra option that I called "Configure". It sets both UTC time fields to the current UTC time and sets the last full sync field to Get ( CurrentTimeStamp ).
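In outline, the extra branch looks like this (the field names are from my file, so treat them as placeholders):

    # New "Configure" branch in the Sync Utilities script
    Else If [ Get ( ScriptParameter ) = "Configure" ]
        Set Field [ Sync::UTC_Time_Last_Sync ; Get ( CurrentTimeUTCMilliseconds ) ]
        Set Field [ Sync::UTC_Time_Last_Pull ; Get ( CurrentTimeUTCMilliseconds ) ]
        Set Field [ Sync::Last_Full_Sync ; Get ( CurrentTimeStamp ) ]
        Commit Records/Requests [ With dialog: Off ]
    End If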

 

We script the file copy, and during the process we set a field value (user_config_required = 1). On startup, when user_config_required = 1, we run a subscript which now includes Perform Script [ "Sync Utilities"; Parameter: "Configure" ]. The times are set, and they are more recent than all the records' timestamps. The user_config_required field is then set to empty.
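For reference, the startup piece is roughly (again, my own field and script names):

    # In the mobile file's startup script
    If [ Settings::user_config_required = 1 ]
        Perform Script [ "Sync Utilities" ; Parameter: "Configure" ]
        Set Field [ Settings::user_config_required ; "" ]
        Commit Records/Requests [ With dialog: Off ]
    End If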

