Justin Close

Members
  • Content count

    172
  • Joined

  • Last visited

Community Reputation

1 Neutral

About Justin Close

  • Rank
    member

Profile Information

  • Gender
    Male
  • Location
    Salem, OR
  • Interests
    FileMaker, programming, soccer, hiking...

Contact Methods

  • Website URL
    www.mazamadataworks.com

FileMaker Profile

  • FM Application
    14 Advanced
  • Platform
    Mac OS X Mavericks
  • Skill Level
    Intermediate
  • Membership
    TechNet
    FileMaker Business Alliance

Recent Profile Visitors

7,072 profile views
  1. I have run into this error as well. It seems to only happen on Safari, though; Chrome and Firefox (yeah, I know - not official) don't show the error. In my case it only shows up briefly as the user is loading the WebD layouts, and then it goes away and the proper layout is displayed - so there is no loss of functionality.
  2. Diver, two things. First, your string of values must be in a specific format to work in the IN clause - something like "23, 432, 523". If the values are strings, they have to be quoted. A return-separated list won't work. Second, in your first example it doesn't look like you are using the variable "$itemIDs" correctly - you never exit FileMaker's string mode, so the query contains the literal text "$itemIDs" rather than the variable's value. Here's what it should look like: "...IN ( " & $itemIDs & " ) ..."
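     Something like this, as a rough sketch (the table and field names here are made up):
       # Turn a return-separated list into "23, 432, 523"
       Set Variable [ $itemIDs ; Value: Substitute ( $idList ; ¶ ; ", " ) ]
       # Exit string mode around the variable when building the query
       Set Variable [ $result ; Value: ExecuteSQL ( "SELECT ItemName FROM Items WHERE ItemID IN ( " & $itemIDs & " )" ; "" ; "" ) ]
       # For text values, quote each one first: "'" & Substitute ( $idList ; ¶ ; "', '" ) & "'"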
  3. Hmmm... maybe the page wasn't loading fully for me earlier, because I didn't see all the various sub-forums lower down the page that I was expecting - just the first block, the 'FileMaker Platform' group, with 'General', 'Web Direct', and 'iPhone/iPad'. This specific instance is an FMGo-deployed file syncing to FMServer, but it seems this issue - modification timestamps being updated for an entire transactional block - could happen in many different configurations. Not that I have tested that... This is one of those things that seems to touch so many aspects.
  4. I'm having an issue with an offline sync setup that I created. I used the basic outline described in the FM white paper for offline syncing. I'm using a transactional model for the whole sync process, so there is a final commit at the end if nothing throws an error. It is this final commit that is currently giving me problems: when it fires, all of the records created during the sync process are once again tagged as 'modified', so their modification timestamps update again. Here's an example: say I sync 3 records at 22:43. During the sync process, the modification timestamps for these 3 new records, reflected at the destination, are: 1: 15:00, 2: 13:24, 3: 18:31. That's as it should be. But at the end of the process, when the final commit happens, the three records change to: 1: 18:31, 2: 18:31, 3: 18:31.
     This sync process has a modification override feature, such that when a record is recreated at the destination it still shows the modification TS from when it was created/edited at the source, not when it was copied to the destination (the server). The FM white paper does this with a 'modification_TS' override field. So I have two mod_TS fields: one is a standard 'last record modification TS' called "zModTS_Always", and the other is a "ModTS_Useful" field that is supposed to reflect when the record was modified by the user, not when it was touched by the sync process. This override field is used only during the sync process, and is defined thusly: If ( zModTS_Always and Sync::ModOverride_g ; Sync::ModOverride_g ; zModTS_Always ). This is important because you don't want the modification time changing whenever someone syncs - that would cause all kinds of re-syncing between users.
     My problem is that the final commit at the end of the sync process causes all the records that were just created at the destination to have their 'Useful' timestamp updated one last time, set equal to the 'useful' timestamp of the last record synced. It isn't the timestamp of the sync itself - the override field is still functioning as expected - it is just being fired for records that are not the current record. This might not be too much of a problem... at least the times are prior to the sync time. But I am sorting a portal of these records on this timestamp, and having them all share the same timestamp makes the sort rather useless. Committing each record right after it was created would probably solve the problem - but that eliminates the transactional benefits. So I'm just not certain what is causing all these records to be modified one final time when the commit is done. Anyone familiar with maintaining modification timestamps using a transactional technique? Thanks, Justin
     PS. Lee, sorry if this is in the wrong place; I didn't see any other sub-forums anymore, just the 14/13/Older general discussions.
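     For reference, here is roughly the shape of the loop (a minimal sketch with simplified names, not my actual code). My working theory: if the auto-enter on ModTS_Useful re-evaluates for every open record at the final commit, each one would pick up the last value of the override global - which would match what I'm seeing.
       Go to Record/Request/Page [ First ]
       Loop
         # carry the source record's user-modification timestamp in the global
         Set Field [ Sync::ModOverride_g ; Source::ModTS_Useful ]
         # create/fill the target record through the relationship - no commit yet
         Set Field [ Target::SomeField ; Source::SomeField ]
         Go to Record/Request/Page [ Next ; Exit after last ]
       End Loop
       # the single, final commit - where all the timestamps go wrong
       Commit Records/Requests [ With dialog: Off ]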
  5. Olger, I have run into this issue a few times with the Selector technique as well: you can't create new records in a table if there are no records in that table to begin with. Yes, this is a situation where the Source and Target tables are the same base table. The root cause is that without a record, FileMaker can't evaluate the relationship to the Selector TO - at least that's my understanding. And this happens with a found set of 0, too, so it isn't enough for the table to merely contain records; you have to be showing one. In my current scenario I hit this because I'm allowing creation of new records on a layout - say, creating a new Line Item for an Order, where the layout is based on Line Items. If there aren't already Line Items existing (and shown in the found set), it won't work. This cropped up because users can delete Line Items as well; if they delete the last one from the found set, a new one couldn't be created. (I got around this by checking whether it was the last one being deleted and just wiping that record clean instead of deleting it - essentially forcing them to have a default line item; rough sketch below.) -- Justin
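     The guard looks something like this (a sketch; the field names are hypothetical):
       If [ Get ( FoundCount ) = 1 ]
         # last Line Item: wipe it instead of deleting, so a record stays
         # in the found set for the Selector relationship to evaluate
         Set Field [ LineItems::Product_fk ; "" ]
         Set Field [ LineItems::Quantity ; "" ]
         Commit Records/Requests [ With dialog: Off ]
       Else
         Delete Record/Request [ With dialog: Off ]
       End If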
  6. Ah, clever! I was wondering how a file might be deleted from the device. I like it!
  7. Thanks, Barbara, for pointing me at EasyDeploy. I ended up implementing the EasyDeploy parts from EasySync and got it working in our solution. There are more moving parts, but it is working well, whereas One-Click was giving me the errors mentioned in my original posting. I was able to get One-Click to work using their demo files, so I don't believe the technique itself is the problem. I actually kind of like One-Click better, from the point of view that it has fewer moving parts and less debris (EasyDeploy leaves a stray file on the device that is visible in the 'Recent' and 'Device' file lists in FMGo). There just appears to be something finicky that wasn't working in our solution. If I had more time I would boil it down and try to figure it out - but I don't. -- Justin
  8. Well, no dice so far. I have tried a different server, one that doesn't have a PW-protected list of files, and that didn't work. Just to be sure, I tried the vanilla files on the client's server, and that DID work. I also tried deploying the vanilla 'mobile' file from the client's solution, and that also DID work. I then copied all of the original scripts from the vanilla file into my file, making only minor changes, to be sure I hadn't messed anything up there. (I do have to make some changes, as the server name is different, I had a different file name for a while, etc.) That wasn't getting me success, so I even went back to using the vanilla filename. Then I thought that the external data sources in our deployable file might be causing problems; there was some discussion in the documentation for the sync model (which this file is) about external data sources preventing the file from closing the hosted file, etc. So I removed the EDSs that were there and tried again: it failed. I then tested file size as a possible contributor; that didn't change anything - stuff still failed on my file, but it DID work on the vanilla file even after adding some container data to make it large. I am going to try a clone and a recovered file next; I'm still wondering if the EDSs (even though I deleted them) might still have some hooks in the file. (This didn't work.) After that, short of starting from a completely new file and building it up, I'm not sure what else to try. I guess I could try the external-file approach mentioned above, i.e. have a 2nd file that is exported from the current file and then does the updating of the current file.
  9. Has anyone ever implemented the update-in-place technique from Colibri Solutions? http://www.colibrisolutions.com/2011/08/24/one-click-updating-of-a-filemaker-go-solution-on-a-mobile-ios-device/ The general outline is that the infrastructure is all in place in a deployed file and a hosted file. The deployed file is opened on the iOS device; from inside that file you hit a button to start the update script, which then does all the heavy lifting. The trick behind this technique is to start an on-timer script on the device and then close the file on the device. The on-timer script apparently still has an existence outside the file and keeps running; it then downloads the new file from the server, replacing the existing one on the device, and opens that new file. I was just giving it a try, and it almost works... except for one critical part: when the iOS device tries to download the new file, I get a 'file is locked or in use' error. There was one time when this worked for me earlier today - my first try of the day, whereas yesterday many attempts had failed. But I don't know what conditions caused it to work; an immediate retry failed, and it has consistently failed since then. Before the one success there was an SSL error at the server, and I just had to log in (not unexpected, although I expected a basic file login prompt, not an SSL error prompt). That error has come up again since then, but the update has continued to fail. Thanks, Justin
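     In script-step form, the outline above is roughly this (my paraphrase with placeholder names and URL, not Colibri's actual code):
       # -- in the deployed file, the update button runs: --
       Install OnTimer Script [ "Fetch New Version" ; Interval: 1 ]
       Close File [ Current File ]
       # -- "Fetch New Version" then fires after the file has closed: --
       Install OnTimer Script [ ]   # clear the timer so it fires only once
       Open URL [ "http://myserver.example.com/MobileFile.fmp12" ]   # FMGo downloads and opens the replacement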
  10. Ah, I think I get what you were originally saying: the customer sees the initial ID generated in the field, and perhaps not the one generated by the central system. But as you were also saying... then why is a new central ID needed? Erm... I guess it doesn't really matter whether there is a new CentralID, as long as the ID generated in the field is unique and human-readable. If it's unique across the system, including the other offline, in-the-field-generated numbers, then it solves the problem. That's the real root of the question: how to generate those numbers and keep them unique? I believe the idea behind the Central ID is that it is THAT one that is unique across the system, and the one generated in the field is replaced with it. This system is replacing old paper forms. Those forms were preprinted with IDs on them, so as long as you told the printer what ID to start from, you were OK.
  11. Yes, they all end up back on the server eventually (within a few hours or a day, most likely). The salespeople are out driving around, visiting potential clients - they are selling ad space. Yes, I think a TempID and a FinalID, both persistent, are a good idea. But wouldn't you want the customer to have the OrderID? They could have multiple Orders over time. Makes them feel more... secure or certain? The idea was to generate all this on the iPad, have the customer sign it even, and give them a receipt/invoice of their Order all at the same time. Immediacy, not having to wait for other processes, etc.
  12. Do folks have recommendations on setting up a system for creating regular, human-readable and consumable IDs in an offline/online-sync hosted solution? The offline part of the solution allows full creation of new Orders. These Orders will (should? could?) have an OrderID that will be visible to the customer and given to them on an invoice at the time of creation in the offline world. (The invoice is generated along with the order and will be emailed to the customer, so the OrderID will be there.) But someone else in a different offline copy, or someone who creates an Order on the server, will also get OrderIDs that might then conflict. How do you reconcile these different creation points? Just using UUIDs isn't a solution; it needs to be something simple that a customer can read off - a 6-digit number or some such. Maybe a prefix for each device/salesperson (see the sketch below)? Maybe create and control blocks of IDs that go to each device? Maybe a temporary ID somehow, e.g. a 'purchase order ID' for the offline-generated items that then gets translated to a full OrderID when they sync to the server? But then the customer would have a document (I guess it would be a P.O. now) with one number and an invoice with another - seems a bit confusing. Anyone have something they have successfully used? Thanks, Justin
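     To make the device-prefix idea concrete, I'm picturing something like this (a sketch; the prefix scheme and field names are assumed):
       // auto-enter calculation for a human-readable OrderID: a device prefix
       // (assigned per iPad/salesperson) plus a zero-padded local serial
       $$DEVICE_PREFIX & "-" & Right ( "00000" & OrderSerial ; 6 )
       // e.g. device "A3", serial 47  ->  "A3-000047"
       // unique across devices as long as no two devices share a prefix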
  13. Salim, I'm a bit late to the party on this one, but you could start with what Kris was saying: unhost the file from the server. However, it sounds like you want to work on the file while you aren't connected to the host. For example, your host server is somewhere safe and sound, but you want to work on the file at your home or on your laptop while you don't have a network connection. Is that right? You didn't mention what version of FMServer you are using. I will assume 14 for now - other versions can do this same thing but with a different series of steps. For FMS 14, I would close the file on the server, and download a copy of it to your remote machine. This is all easily done from the Admin Console of the server, which you can run from your local machine through a web browser. Do NOT 'reopen' the file on the server. Actually, you could 'remove' the file from the server at this point...or just leave it 'closed' until you are ready to repost your new file (at which point you will have to 'remove' the old one). This assumes that you don't need to leave the file open for others to use while you have it offline, of course. But if you are making changes, I would guess you don't want people using it anyway. As Beverly noted, it is best practice to not try and make changes while others are using the file. If you are using the separation model, you might have more flexibility here... there are many different nuances to this scenario. Back to the process: Once you have the file copied to your local machine you can edit it as you like. When you are done editing, you can use the FMPA 14 'upload to server' feature to send the file back to the server. That's it. No worries about a damaged file or things crashing. If something happens to your file while you have it offline, you still have the server copy as a backup. -- Justin
  14. Hey Cable (Dan), a very interesting post; thanks for taking the time to write this up and to put metrics on your efforts! We have just done a very similar thing: a client needed offline iPad use but wanted to sync with the home server whenever they had a good WiFi connection. We also rolled our own sync mechanism, using (it sounds like) very similar techniques: a connector file to establish the link to the host file, and then a script that does field-by-field copies. We went with field-by-field because we wanted to take advantage of transactional methods (i.e. the 'reliability' or 'surety' you spoke of). We used the iOS sync white paper by Katherine Russell of Nightwing Enterprises, on FileMaker's technical resources page, as a template; is that what you were using? You didn't mention whether you were doing this transactionally when you used the field-by-field copy. Are you syncing one record at a time or doing a transaction? If you are doing one record at a time, or anything non-transactional, then you will have MANY commits in there, which could significantly impact your speed. With a transaction there is only one commit - a BIG one, given the data sets you describe, but still less overhead than one at a time. Also, keep in mind that Base64-encoded data is about 33% larger than the original file; you gain the ability to move the image over text-only interfaces, but you are moving a noticeably larger chunk of data. I was also curious why you were doing a date range of records rather than watching which records were modified/created after the 'last successful sync' time and only moving those over - that seems like it might cut down on the number of records you need to move around. Is that the only filter you have on which records to sync? Our testing has been with limited numbers, but 6-8 records only takes 20-30 seconds. We aren't moving any binary or image files around; it is all text fields. We also only have... 100?... fields across 4-5 tables total that are being synced. We haven't tried it over a cellular network. Are you doing bi-directional sync, or just one-way (up from the iPad)? -- Justin
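     By the 'last successful sync' filter I mean something like this one-liner (names assumed; $$LAST_SYNC_TS would hold a real timestamp):
       // fetch only the records touched since the last successful sync
       ExecuteSQL ( "SELECT aaOrderUUID FROM Orders WHERE zModTS_Always > ?" ; "" ; "" ; $$LAST_SYNC_TS )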
  15. I'm having a bit of an issue with an ESQL query: I would like it to always return 2 decimal places on the value retrieved. This is a currency field, and "$45.5" looks weird. I am generating a list of payments to be displayed to the user (via a global variable in the UI), so it should look money-like - a simple 'here's what you currently have set up' kind of display. Here's the basic query (I believe folks here are familiar with the GFN() and GTN() functions - they are just a way to robustify the query in FileMaker): ExecuteSQL ( "SELECT '$' || " & GFN ( Payments::Amount ) & " FROM " & GTN ( Payments::aaPaymentUUID ) & " WHERE " & GFN ( Payments::aLineItemID_fk ) & " = ? " ; "" ; "" ; $ID ) I have also tried various versions of this without luck (using "numeric(10,2)" or "decimal(10,2)"): ExecuteSQL ( "SELECT '$' || CAST ( " & GFN ( Payments::Amount ) & " as decimal (5,2) )" & " FROM " & GTN ( Payments::aaPaymentUUID ) & " WHERE " & GFN ( Payments::aLineItemID_fk ) & " = ? " ; "" ; "" ; $ID ) But I always get responses like (yes, they should be the same - it's an equal-payment calculator, so only the last one might differ):
     $125.5
     $125.5
     $125.5
     when I would like:
     $125.50
     $125.50
     $125.50
     Anyone have a quick answer as to how to get each record value to show 2 decimal places? I have shied away from doing the string manipulation in FileMaker because I thought the ESQL fix would be easy, and it would save some steps in the script. But maybe it's easier to do it in FM and just reprocess the whole list.
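     Following up on my own post: since ExecuteSQL returns numbers as FileMaker numbers, trailing zeros seem to get dropped no matter what you CAST to, so one workaround would be padding the cents on the FM side afterwards. A sketch, with $amt standing in for one value from the result list:
       // pad one value to two decimal places
       Let ( [
         n = Round ( GetAsNumber ( $amt ) ; 2 ) ;
         cents = Right ( "00" & ( n * 100 - Int ( n ) * 100 ) ; 2 )
       ] ;
         "$" & Int ( n ) & "." & cents
       )
       // apply it to each value of the result via a script loop or a
       // recursive custom function (FM 14 has no While function)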