Joshua Willing Halpern

Joshua Willing Halpern last won the day on September 14, 2016

About Joshua Willing Halpern

Profile Information

  • Title
    Nice Person
  • Location
    Los Angeles, CA
  • Interests
    Sync, Calendars, Guitars, Cats

Platform Environment

  • OS Version
    High Sierra

  1. If you set the ES last push and last pull timestamps correctly, it should jump over those two sections super quickly, so confirm that. Is it getting stuck on the “sync check” part?
  2. Hey, I'm going from memory here, but to avoid an initial sync: before you deploy the local copy, set ES_Last_Push_UTC_Time and ES_Last_Pull_UTC_Time to the current UTC time using Get ( CurrentTimeUTCMilliseconds ). That will trick the local file into thinking it just synced, and it will consider itself up to date (until a record changes on the server or locally). I think there is also a "last full sync" field somewhere that you may want to set to the current timestamp, but that one should automatically update after your next sync.
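Going from memory again, a deploy-time script doing this might look like the sketch below. The two field names come from the post above, but the table occurrence name is an assumption — check both against your copy of EasySync:

```text
# Run once in the local copy just before deployment, so the file
# believes it has already synced and skips the initial push/pull.
# "EasySync::" is an assumed table occurrence name.
Set Field [ EasySync::ES_Last_Push_UTC_Time ; Get ( CurrentTimeUTCMilliseconds ) ]
Set Field [ EasySync::ES_Last_Pull_UTC_Time ; Get ( CurrentTimeUTCMilliseconds ) ]
```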
  3. Barbara! Looks great, and excellent refinements. I want to implement something like this in a future version as well. Thanks for sharing your code.
  4. Barbara, you're totally right. Just to be clear, updating the ES_UTC on the server should always prevent that problem of records not being pulled, barring any simultaneous-sync snafus. However, this brings up another issue. Say two users edit the same record: User A edits first and User B edits second. If User A syncs first, the record on the server will update with a new, later timestamp, and when User B goes to sync, she will end up pulling the data as edited by User A even though she (User B) was the last to edit the record :/ Just spitballing, but I believe this could be solved with a second timestamp field: use ES_UTC to determine whether the record should be examined at all during a pull, and a new ES_UTC_ACTUAL that only ever reflects actual user-edit timestamps and is used for conflict resolution. Cheers, J
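As a rough sketch of that idea — ES_UTC_ACTUAL is the hypothetical new field proposed above, so its definition and the resolution logic here are assumptions, not existing EasySync code:

```text
# ES_UTC_ACTUAL: auto-enter calculated value, "replace existing".
# Written only by real user edits; the sync scripts would never
# touch it, unlike ES_UTC, which the server rewrites on a push.
Get ( CurrentTimeUTCMilliseconds )

# Conflict resolution during a pull (pseudologic): the later
# actual edit wins, regardless of which user synced first.
If [ Local::ES_UTC_ACTUAL > Server::ES_UTC_ACTUAL ]
    # keep the local edits; they go up on the next push
Else
    # accept the pulled server values
End If
```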
  5. Here's my understanding. The $script_override is used during a pull to keep ES_UTC and ES_Device_ID from auto-entering on the local file, so that's fine. ES_Device_ID should also not auto-enter on the server; it should be set to match the pushed record. Tim originally scripted it to do that by sneaking this into the middle of 'Process Payload from Client': the calc shows that instead of letting ES_Device_ID update by itself with the server's ID, it uses $client_persistent_id, which comes from the script parameter. In summary: $script_override prevents auto-enter while the field is written manually. All good so far. But I don't think he actually addressed the issue you raised, where a lazy syncer might not sync their device for months and other users will therefore never pull his new/updated records. We need to allow the server to auto-enter the ES_UTC field. I did this by splitting the variable into $script_override for the ES_Device_ID field and $time_override for the ES_UTC field. That way you can independently control when each field's auto-enter calc is turned off, which is what you need during 'Process Payload from Client'. Round-tripping will still be prevented because ES_Device_ID still matches, but other users will be able to pull the data because ES_UTC is now later than their last sync. Once the variables are split, make sure both are set to 1 during a pull and only $script_override is set during 'Process Payload from Client'. It's been a while, so I hope my understanding is correct and that this helps. J
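A sketch of the split, using the variable names from the post — the exact auto-enter calculations in your copy of EasySync may differ, so treat these as illustrative:

```text
# ES_Device_ID auto-enter calculation:
If ( $script_override ; Self ; Get ( PersistentID ) )

# ES_UTC auto-enter calculation (now controlled independently):
If ( $time_override ; Self ; Get ( CurrentTimeUTCMilliseconds ) )

# During a pull: suppress both auto-enters.
Set Variable [ $script_override ; Value: 1 ]
Set Variable [ $time_override ; Value: 1 ]

# During "Process Payload from Client": set only $script_override,
# so ES_Device_ID keeps the client's ID (no round-tripping) while
# ES_UTC auto-enters a fresh server-side timestamp that other
# clients' pulls will see as new.
Set Variable [ $script_override ; Value: 1 ]
```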
  6. I'd say MirrorSync is the quickest sync tool I've come across, but EasySync can be a great solution. How many records will be in your sync payloads, and across how many tables?
  7. Hey, try adding this line (including the quotation marks) at the end of your SQL statement: "AND ( \"_kf_uuid_companys\" = ? )" Please note the \" around the field name; you'll need these since your field name starts with _. Then include $additional_settings as the argument in your ExecuteSQL calculation (see attached image). Alternatively, you can embed the variable right in the SQL text: "AND ( \"_kf_uuid_companys\" = '" & $additional_settings & "' )" P.S. This doesn't matter at all, but I believe it's spelled companies.
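Put together, the full call might look something like this sketch. Only _kf_uuid_companys and $additional_settings come from the post; the table and other field names are made up for illustration:

```text
ExecuteSQL (
  "SELECT \"uuid\" FROM \"Invoices\"
   WHERE \"_kf_uuid_companys\" = ?" ;
  "" ; "" ;                // default field and row separators
  $additional_settings     // bound to the ? placeholder
)
```

Using the ? placeholder with an argument is generally safer than concatenating the variable into the SQL text, since the value never needs quoting or escaping.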
  8. Hey, I want to create invoices with three different types of line items: 1) rental items, 2) people for hire, and 3) rental add-ons. Should I place all three in a single products table, with an extra child table for each type's specific info, or should I create three different foreign keys in the line items table and create relationships to all three tables?
  9. Glad it's working for you! You may want to read tmr_slh's comment a couple of entries up and verify that this modification works on all the platforms you need it for. If not, his suggestions look promising, though I haven't tried them and I don't know whether they all have the same linear complexity. Cheers.
  10. Why don't you want to run Sync Check? If a record is deleted on the server, it won't be deleted from the client without Sync Check. Maybe I misunderstand your question? Why don't we continue via email: look in the EasySync scripts for my email, or message me directly.
  11. Hey Jonathan, DELETIONS: The reason your records are being deleted is that Sync Check will also exclude records with the ES_Exclude flag. In your case, to fix this you'll need to update $$additional_sync_check_info in the client settings to include your state list, and in the server-side Sync Check script set the $States_Selected variable again so that the ES_Exclude flags are correct when the server UUID list is assembled. 1. Copy the calculation from $$additional_pull_info into $$additional_sync_check_info. 2. Then, in the server Sync Check script, add this line after the line Set Variable [ $additional_sync_check_info ]: Set Variable [ $States_Selected ; Substitute ( $additional_sync_check_info ; "-" ; ¶ ) ] EXCLUDE: Your ES_Exclude seems to be working to me. Try this: select a state or two in your global picker, then click the Utilities button at the bottom and wipe and reset. Your client DB should be empty now. Click Sync, and the records that are pulled down should only be from those two states. That's how your ES_Exclude is currently set up: records that don't match your selected states are excluded during the pull.
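In script-step form, the server-side change from step 2 is just the following (a sketch — the surrounding lines of the Sync Check script are abbreviated):

```text
# Server-side "Sync Check" script
Set Variable [ $additional_sync_check_info ; Value: ... ]   # existing line
# New line -- rebuild the state list so the ES_Exclude flags are
# evaluated against the same states used during a pull:
Set Variable [ $States_Selected ; Value: Substitute ( $additional_sync_check_info ; "-" ; ¶ ) ]
```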
  12. It sounds like your ES_Exclude calculation is not working. If you're trying to exclude records that match the value list, maybe use the calculation PatternCount ( $additional_settings ; <field> ) > 0. It would help to know what you specifically need. As for the accidental-deletion problem: you need to ensure your users can't delete records without logging the deletion in a payload record. If that means removing Delete from the menu, then do that. To get around the ES_Device_ID problem, I changed the field calculation to include either the system or host IP address, depending on whether the user has accessed the local or the hosted file. I had to modify the scripts to accommodate this change and prevent round-tripping; that's a subject for another thread, though. More about exclusions: there are two ways to exclude records from the sync. 1) Use the ES_Exclude field. Any record with an ES_Exclude value that evaluates to true will be excluded from the push or pull payloads. You can see that in the Push Payload script here: 2) Modify the $dyn_sql SELECT statement in "Prepare Payload for Client." Modify this however you want. Want to exclude all Georges from the pull? Add "AND name <> 'George'" to the end of the SELECT statement. To exclude values that match your value list, maybe create a loop that adds a clause each time, e.g. & " AND field <> " & GetValue ( $additional_info ; $i ) is appended on each pass. You could technically add more conditions to the Push Payload SELECT statement too, but it's probably better to push all new/edited records and only pull what you need. J
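For the loop idea in option 2, a sketch that appends one exclusion clause per value-list entry — $dyn_sql and $additional_info come from the post, while the field name is a stand-in:

```text
# In "Prepare Payload for Client": append one exclusion clause to
# $dyn_sql for each value in $additional_info.
Set Variable [ $i ; Value: 1 ]
Loop
    Exit Loop If [ $i > ValueCount ( $additional_info ) ]
    Set Variable [ $dyn_sql ; Value:
        $dyn_sql & " AND \"field\" <> '" & GetValue ( $additional_info ; $i ) & "'" ]
    Set Variable [ $i ; Value: $i + 1 ]
End Loop
```

Quoting the value into the SQL text works for a trusted value list; for user-entered values, binding each one as a ? argument would be the safer route.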