DataCruncher

Members
  • Content count

    23
  • Joined

  • Last visited

Community Reputation

0 Neutral

About DataCruncher

  • Rank
    member

Recent Profile Visitors

1,902 profile views
  1. Hello everybody - Finally moving SuperContainer to a dedicated CentOS machine with Tomcat 8 and all the bells and whistles. Got it set up and running nicely - only when I upload files to SuperContainer, I am suddenly hit with a 1 MB file size limit. The error message seems to be thrown by Catalina/Tomcat: "The upload exceeded the maximum allowed size on the server." I tried, to no avail, to increase the maxPostSize parameter in server.xml and in the manager web app from 50 MB to 100 MB - yet I can't even upload a 2 MB file. Downloads work fine at all sizes. What am I missing here? Where is another size limit in a straightforward SuperContainer/Tomcat 8 installation that otherwise works perfectly? I also can't explain where a 1 MB limit would come from; the default web.xml for the Tomcat manager web app has a 50 MB limit. Thank you!
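For context, Tomcat 8 enforces request-size limits in more than one place, so raising maxPostSize alone may not be enough. A sketch of the usual places to check (attribute values are illustrative, not confirmed settings from this installation; SuperContainer's own web.xml, if it declares multipart limits, would be another spot):

```xml
<!-- conf/server.xml: maxPostSize caps form POSTs in bytes (-1 = unlimited);
     maxSwallowSize (default 2 MB in Tomcat 8) caps how much of an aborted
     upload Tomcat will swallow and can surface as small-upload failures -->
<Connector port="8080" protocol="HTTP/1.1"
           maxPostSize="104857600" maxSwallowSize="-1" />

<!-- per-webapp web.xml: servlets that declare multipart-config get their
     own independent limits, separate from the Connector's -->
<multipart-config>
    <max-file-size>104857600</max-file-size>
    <max-request-size>104857600</max-request-size>
</multipart-config>
```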
  2. I apologize, but I've searched up and down and done my fair share of research, and I can't for the life of me figure this one out. What I am trying to achieve is straightforward: I have a main table, ORDERS, with one record per order. Among the fields in ORDERS are ORDERS::Client and ORDERS::Agent. Both contain text values; ORDERS::Agent is any one of a value list containing my 20 agents. ORDERS::Client is whatever the client entered, so the same client may enter his/her name slightly differently each time: Mike Miller one time, M. Miller the next, and Michael Miller the time after that - even though it's always the same client.

Then I have a related table called TRUSTED AGENTS. Its purpose is simply to route a returning client to the same agent. The relationship is ORDERS::Agent = TRUSTED AGENTS::Agent. The only fields in that table are TRUSTED AGENTS::Agent and TRUSTED AGENTS::Client, and I populate both for every order.

What I need is to flag every case where a client was previously handled by one of my agents, so I can route the order to the same agent. The workaround I have now is a calculation field called ORDERS::TrustedAgentCheck, defined as: If ( Count ( TRUSTED AGENTS::Client ) > 0 ; ORDERS::Agent ; "NEW CLIENT" ). The amazing thing is this actually works - somewhat.

The problem with this approach is that it only catches exact, verbatim matches. If the client entered his name as M. Miller the first time, it will only match M. Miller the second time, and will miss Michael Miller completely. You could say I should give clients a structure that makes it less likely they spell their names differently, but the names are not always typed in by the clients; sometimes they come through third-party platforms that format them differently. What I'd like is to truncate the TRUSTED AGENTS::Client entry to the smallest common denominator, if you will. If a client enters his name as Michael J. Miller, Bend, OR, all I want to store in my TRUSTED AGENTS::Client field is 'Miller'. That's all I need. But how can I achieve a match that I would best describe as: If ( ORDERS::Client contains or partially contains any of the records in TRUSTED AGENTS::Client ; flag this order )? I am really struggling with this and can't seem to find a viable solution. Any help is appreciated! Thank you
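One possible direction, sketched in FileMaker calculation syntax (a sketch only: the field name Client_LastNameKey is made up for illustration, and it assumes names arrive roughly as "First [Middle] Last", optionally followed by ", City, State"): store a normalized surname key next to the raw entry and match on the key instead of the full name.

```
// TRUSTED AGENTS::Client_LastNameKey
// Stored calculation, text result, so it remains indexable as a match field.
// Substitute turns the ", "-separated segments into return-separated values;
// GetValue ( ... ; 1 ) keeps only the name segment before the first comma;
// RightWords ( ... ; 1 ) keeps the last word of that segment, e.g. "Miller".
RightWords ( GetValue ( Substitute ( Client ; ", " ; ¶ ) ; 1 ) ; 1 )
```

An identical key calculation on ORDERS::Client, plus a second relationship predicate (ORDERS::Client_LastNameKey = TRUSTED AGENTS::Client_LastNameKey), would let the existing Count() check catch Mike Miller, M. Miller, and Michael Miller alike - though it would still miss misspelled surnames and conflate unrelated clients who happen to share one.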
  3. ALL FIXED! Thank you Jesse for helping me through this. If anybody experiences similar issues, these are the steps we undertook to fix our MirrorSync FMS15-to-FMS15 issue: 1) Upgrade to FMS 15. That makes JDBC an option. 2) Don't use JDBC if one of your records has a field with more than 35,000 characters in it. It will not work; use XML instead. 3) Make sure you increase the memory allocation, if required, in three different spots: the WPE on the hub server, the WPE on the spoke server, and also the MirrorSync Tomcat. We used 2048M and fared fine in the end. 4) If you time out, increase the timeout. 5) Make sure you don't have calculation fields on your sync layouts; these can recalculate locally and will likely slow down your sync. 6) Make sure you don't sync external containers; they are not supported by MirrorSync. 7) Make sure your MirrorSync table has no client record under 'internal' before you first sync. That's pretty much all I remember, but our installation works really well now. Thanks again, 360Works, for all your assistance in getting this going!
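For the Tomcat part of step 3, the heap is usually raised via CATALINA_OPTS; a sketch assuming a standard Tomcat layout (the file name setenv.sh and its location in Tomcat's bin/ directory are the Tomcat convention, not details from this installation; 2048M is the value we used):

```shell
# setenv.sh in the MirrorSync Tomcat's bin/ directory.
# Appends a 2 GB max-heap setting to whatever CATALINA_OPTS already holds;
# catalina.sh sources this file on startup.
export CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m"
```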
  4. Hello all - I got a larger MirrorSync installation to run beautifully between two FMS 15 server sites. Sync times are a charm. My question is about the best practice for making MirrorSync sync as fast as possible. Currently, our hub server is in a colocation datacenter in Los Angeles with a gigabit connection. The spoke server is in our branch office, syncing through a 100/50 Mbit connection. The MirrorSync server runs in the same datacenter as the hub server. My question is whether it would make sense to install the MirrorSync server at the branch office instead. That way, MirrorSync could connect locally to the spoke server at 1 Gbit and only pull changes from the hub server over the 100/50 Mbit internet connection. The way we have it now, with MirrorSync at the datacenter, it can only reach the spoke server over the 100/50 Mbit connection. Since I don't know how MirrorSync handles data traffic to the hub and spoke servers (or if there is any difference at all), I don't know which way is better. Any advice would be appreciated! Robert
  5. DataCruncher

    Bad Gateway?

    Bad Gateway error on sync = increase timeout in: /Library/FileMaker\ Server/Admin/admin-helper/WEB-INF/conf/mod_proxy.conf
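For anyone searching later: the relevant directive in that file is standard Apache mod_proxy; a sketch (the 600-second value is illustrative, not from the original post):

```apache
# mod_proxy.conf (FileMaker Server's admin-helper copy)
# ProxyTimeout is the standard mod_proxy directive; raise it if syncs run
# long enough for the proxy to give up with a 502 Bad Gateway.
ProxyTimeout 600
```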
  6. Thank you! Sending my email now... I did get it to work, but only on a sample database. I can't apply the approach I used on one or two tables to my live database with 71 tables... So I'd love to get to the bottom of this...
  7. Jesse - Thank you. I still need you to look into this; the initial sync never works unless I delete all data and manually import piece by piece. It is impossible to keep doing this for every table. Once the initial sync is done, all incremental syncs work flawlessly. To answer your questions: Did increasing the timeout to 2000 have any effect on how long it took for that error to come up? - No. I even increased the timeout for XML web publishing in catalina.sh to 20,000 - still got the internal server error. If you have the WPE at 8,000 MB of RAM, I doubt adding more will help. - I set it to 49,000 just to try - still the same thing. What version of MirrorSync are you running? - 360Works MirrorSync 3.17, build 9239 (3/24/17). This is my system setup: Hub: FMS 12.0.6 on Mac OS. Spoke: FMS 12.0.2 on Mac OS. MirrorSync is running on Ubuntu 17.04 (GNU/Linux 4.10.0-20-generic x86_64). We followed the install instructions by the book; the only thing I can think of now is that we somehow corrupted the installation of the MirrorSync scripts. During the initial MirrorSync configuration, there is a page that required us to copy a table, some scripts, and a layout into our FM database file. The table and the layout copied fine, but the copy-scripts button asked us which version of FM we are running (FM 13+) and then gave us an error: Could not copy script onto clipboard: java.lang.NullPointerException. Perhaps I'm not supposed to have spaces in my layout names? Either way, we have tried the configuration on 8 different client machines running different versions of Java - all to no avail. As a workaround, 360Works support emailed us a .fmp12 file containing nothing but the MirrorSync scripts, which we manually copied into our database file, and we went through which fields should be referenced over the phone. I am uploading the empty .fmp12 with the scripts I got from MirrorSync.
Perhaps the most prudent way to untangle all of this would be to entirely reverse the process where we manually inserted the MirrorSync scripts, and instead get the copy-scripts function to run during initial MirrorSync configuration. We did send a ticket to 360Works, but I doubt it logged any relevant information, as the NullPointerException occurs on the local machine, I believe. Thank you! Robert MirrorSyncScripts-2.fmp12
  8. So increasing the timeout didn't work. Feeding the initial sync spoon by spoon, however, did. I first synced an empty clone database, which worked fine. Next, I imported 3,000 of my 64,000 records - which worked fine, too. I then imported another 5,000 records - you get the idea. Once all 64,000 records were imported, incremental sync works fine and takes about 4 seconds. The next step will be to take the spoke machine off-site and see how latency changes these numbers. Thank you for your assistance!
  9. OK - some news here. I have limited the sync to just one of my 72 tables. That table has 62,000 records, text only, no containers. I have two instances of FMS 12 set up on a local subnet, and MirrorSync running on a dedicated third machine with 8 GB of RAM and SSD disks; they are all connected through a gigabit switch. The initial sync seems to start fine, but then fails after about 8 minutes with: java.io.IOException: Server returned a 500 (Internal Server) error. This could be caused by many things, but it usually means that the Web Publishing Engine ran out of memory. The URL that generated the error is http://domain.com/fmi/xml/FMPXMLRESULT.xml?-db=TEST&-lay=MirrorSync&-script=MirrorSync+(TEST)&-script.param=doServerAction 1 Let([%24command%3D"getModificationsSince"%3B%24table%3D"sync_CASES"%3B%24%24sessionId%3D"67ab1e2d-dcf1-4505-91c0-8d16aebb9634"%3B%24dataMode%3D1%3B%24timestamp%3D""%3B%24%24MIRRORSYNC_USERTOKEN%3D""]%3BTrue)&-max=all&-findany So what I did was up the WPE timeout to 2000, and I also gradually increased the host FMS memory allocation from the default 500 MB to 2000 MB and now 8000 MB - still the same thing. I'll allocate 16,000 MB next - my concern is simply that both databases are already there; I'm merely trying to sync the one new record. Hopefully, once I get past this initial sync, incremental sync will be more responsive - otherwise I may really have to go CWP... I'm running out of ideas - it fails again with error 500 after 7 minutes, with 16 GB of memory allocated to the WPE on FMS 12. My entire database file is only 5 GB. Perhaps I'm doing it wrong...
  10. DataCruncher

    FileMaker crashes in field picker

    Having the same issue.
  11. Thank you, that's very good information. I'll run it and get back with results.
  12. I haven't run a full sync yet - partially because I want my database structure optimized before I do. I am not sure an SSD in the MirrorSync machine would make that much of a difference; we have set aside a machine with about 50 GB of RAM. I will report current sync times as soon as I have them.
  13. Hello all - I'm thinking about MirrorSync for FileMaker Server to FileMaker Server replication. The old problem: office in CA, office in NY - latency kills the FileMaker Pro clients connected to the FMS on the other coast. Think of our database as a medical records system for a clinic that also manages patient handling. Currently, we have one main table, PATIENTS. One of the fields in this table, PATIENT_STATUS, is a numerical value from 0 to 6 (0 patient just showed up, 1 patient in waiting room, 2 patient seeing the nurse, 3 patient seeing the doctor, 4 report transcribed, 5 prescription issued, 6 patient discharged). The workstations display lists showing the patients in each status. Obviously, as patients come and go, status changes, and we need updates on these as close to real time as we can get. Syncing a CA server with a NY server is easy enough, and I don't mind if it takes a minute - each office gets fast local performance either way. What I'm not sure about is the best practice for improving sync performance. In our current setup, the main table, PATIENTS, has about 150,000 records: the 20 patients sitting in the clinic now, with PATIENT_STATUS from 0 to 5, plus 149,980 records for discharged patients with PATIENT_STATUS = 6. I need sync to be as fast as possible for the 20 patients in the office now, but I could absolutely live with less frequent sync intervals for the remaining 149,980 records. Also, something tells me that syncing 20 records takes a lot less time than syncing 150,000 - especially when you try to get sync intervals down to a few seconds. My options, as I understand them, are:
1 - Leave the database structure as is and hope it will be fast enough for our purposes (let FileMaker handle the load).
2 - Split the PATIENTS table into two tables, CURRENT_PATIENTS and ARCHIVE_PATIENTS. I would then run MirrorSync every five seconds on CURRENT_PATIENTS, but only once an hour on ARCHIVE_PATIENTS. This may come with its own set of problems, as we also search for patients by name, and I'd have to script two consecutive searches, one in CURRENT_PATIENTS and one in ARCHIVE_PATIENTS, and then merge the results somehow. I also imagine that splitting my tables would be quite a bit of work, adjusting layouts, scripts, and relationships. Lastly, upon discharge, each record would have to be moved from CURRENT_PATIENTS to ARCHIVE_PATIENTS.
3 - Do the same as in 2, but also create a new join table that serves as my new main table: GLOBAL_PATIENT, with a PATIENT_ID primary key related to PATIENT_ID in CURRENT_PATIENTS and to PATIENT_ID in ARCHIVE_PATIENTS. My understanding is that with this approach, all I would need to change in searches is to search GLOBAL_PATIENT instead of both the ARCHIVE_ and CURRENT_ tables.
Options 2 and 3 may well amount to the same thing, but to me they seem different. I am not sure if I am missing other ways to make this work. I would appreciate any advice... Have a great day!
  14. Installed MirrorSync 3.17 build 9239 on a Linux machine, connected to FMS 12 on Mac OS X using XML. All is well until I start to set up my first initial sync job. I copied and pasted the MirrorSync table just fine. But then the copy-scripts step throws a java.lang.NullPointerException error. Out of curiosity I checked the third step, copying the MirrorSync layout, and that would also work fine - but I'm not going there until I get the copy-scripts step done. I realize it's convenient to click a button and have the scripts on your clipboard, but at this rate, even a manual .txt version of the required scripts would at least let me copy them by hand... Any advice? Thank you!
  15. DataCruncher

    stderr log - newsyslog management

    Hi, it seems I ran into the same issue, with stderr growing to gigabytes in size. Newsyslog may work, but since I can't add the default FileMaker Server path with the space in it, I was wondering how you solved the issue. Can you simply rename the FileMaker Server directory to get rid of the space? I would be very hesitant to do that, as the path surely has dozens, if not hundreds, of references elsewhere... Thank you!