
Community Reputation

0 Neutral

About flybynight

Profile Information

  • Title
    Senior System Engineer
  • Industry
    Custom software
  1. image sync speed improvement

    bcooney, Care to go into any detail about HOW you are more carefully creating the segments so they never split a tag? We haven't run into this issue… but I suppose it's just a matter of time? Thanks! -Shawn
  2. image sync speed improvement

    Perren, Interesting find. The only thing I wonder about is how many repetitions you have. That has to be pre-defined in the schema, correct? What happens if your sync is too large for the number of repetitions you have? Or with Deploy, if your new solution file adds a lot of great features and ends up being larger than you anticipated?

    We did some playing with dansmith65's proof-of-concept file (the one on the OP, not the one on GitHub) and tweaked it to get it working for a small project. Our project did not need the "replace" method, so we didn't have to worry about that piece that was incomplete.

    We experimented with the max pull size setting and found a sweet spot for us around 250,000. At 500,000, we could see the processing of each segment slow down, then get back up to speed at the beginning of the next segment. Much lower than that (we also tried down to 100,000), and the processing went a little faster, but there were more round trips to the server, so overall performance declined.

    Looked at GoZync, and I can see why it doesn't have the slow-down issue. With the intermediary file making the connection between the hosted and mobile files, it does all of its Set Field operations there, directly from one to the other, so it is only dealing with one record/field at a time and not storing a big payload in a field or variable.

    Great to see all of the experimenting going on! -Shawn
  3. image sync speed improvement

    Dan, Curious if you have revisited this recently. Your last note on this thread said you were still in the middle of modifying EasySync, and that was a couple of months after the last submission to GitHub.

    We have been testing out EasySync and are running into some of the performance issues you talk about. We don't have any container fields, but we do have a solution where different users will sync different sets of customers and products. The product tables have tens of thousands of records, but each user will sync a few hundred to a few thousand.

    Testing on some of the small tables, it goes pretty fast, but as soon as the payload gets above 1 MB, things slow down. At 3 MB, performance becomes unacceptable, taking over an hour to process a couple thousand records. It's odd, because gathering and downloading the payload from the server doesn't take that long; it's the looping through, setting all of the fields, and creating the records locally that bogs down. Eliminating unused fields helped a lot, but it's still not enough.

    Have you done testing on larger sets of records, with or without container data? Are the modifications you have made easy to apply to a solution already set up with EasySync, or would you recommend tearing ES out and starting over? Have you tried any of the paid sync solutions? Curious whether they have similar performance issues.

    Appreciate the research and work that you have done! Thanks! -Shawn
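The segmenting idea discussed in these posts can be sketched in pseudocode. EasySync itself is a FileMaker scripting framework, so the Python below is only an illustration of the approach, not EasySync's actual implementation; `RECORD_DELIM` and `split_payload` are hypothetical names. The sketch cuts the payload into segments of roughly a target size (e.g. the ~250,000-character sweet spot mentioned above), always cutting immediately after a record delimiter so a segment never splits a tag:

```python
# Hypothetical sketch: tag-safe segmentation of a sync payload.
RECORD_DELIM = "</record>"  # assumed per-record terminator in the payload


def split_payload(payload: str, max_segment_size: int = 250_000) -> list[str]:
    """Split payload into segments of at most ~max_segment_size characters,
    cutting only at record boundaries so no tag is ever split."""
    segments = []
    start = 0
    while start < len(payload):
        end = start + max_segment_size
        if end >= len(payload):
            segments.append(payload[start:])  # final (possibly short) segment
            break
        # Back up to the last complete record delimiter inside the window.
        cut = payload.rfind(RECORD_DELIM, start, end)
        if cut == -1:
            # A single record exceeds the limit; extend forward to its end.
            cut = payload.find(RECORD_DELIM, end)
            if cut == -1:
                segments.append(payload[start:])
                break
        end = cut + len(RECORD_DELIM)
        segments.append(payload[start:end])
        start = end
    return segments
```

Larger segments mean fewer round trips to the server but slower per-segment processing as the working field/variable grows; smaller segments process faster individually but add round-trip overhead, which matches the 100,000 / 250,000 / 500,000 observations above.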
