About DataCruncher

  1. The CentOS FMS19 performs strongly so far, but which ODBC/JDBC driver should I use?
  2. Per 360works, they are working on releasing FMS19 CentOS versions of all their plugins prior to, or around, the production release.
  3. So on the FileMaker Server 19 dev preview on CentOS, how do I enable XML? I ran fmsadmin set cwpconfig enablexml=true and the CLI indicates all went well, but fmi/xml/FMPXMLRESULT.xml?-dbnames returns Forbidden. Thank you
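A sketch of the CLI sequence involved, for anyone hitting the same wall. Assumptions: fmsadmin is on the PATH, and the Forbidden response may come from the hosted file rather than the server setting - both are worth checking:

```shell
# Enable XML Web Publishing on the server (the command from the post above),
# then restart the Web Publishing Engine so the setting takes effect.
fmsadmin set cwpconfig enablexml=true
fmsadmin restart wpe -y

# Verify the server-side setting took:
fmsadmin get cwpconfig

# A 403 Forbidden on /fmi/xml/... despite enablexml=true often means the
# hosted file itself is not sharing via XML: the account's privilege set
# needs the fmxml extended privilege enabled inside the database.
# (Assumption: this matches the poster's symptom; check the file, not just
# the server. Replace user:pass with a real account in the hosted file.)
curl -u user:pass "http://localhost/fmi/xml/FMPXMLRESULT.xml?-dbnames"
```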
  4. Oh, that's strange, but it makes sense. It's strange because calculation fields are not normally meant to involve client-side computing? Perhaps I am not understanding it fully. So if I serve a calculation using SCGetInfo through WebDirect, or as JSON through XML, do I need to have the Companion installed on the WebDirect instance?
  5. As macOS is really Darwin, I tried installing the 360works macOS plugins. Of course, that did not work either - I guess all the path references must be different. I cannot imagine 360works would drop the ball and NOT support the Linux version. However - and shout out to Jesse @360works - I did find 360works' release email yesterday a bit misleading, as it announced that all plugin versions are compatible with FileMaker 19, when they are clearly not compatible with FMS19 for CentOS. My guess is the guys over @360works will release new versions within days. I am not subscribing to the turf wars as to which OS is better; I could not care less. What I do care about is that Apple won't let you legally install macOS on industrial-grade server hardware - so how am I supposed to run FMS natively? That's the main reason I need the Linux version up and running: to unleash the full, unvirtualized power of my shiny new rackmounted hardware without having to deal with Windows' overhead.
  6. Hello - I am getting confused. When I have a calculation field in a FileMaker Server-hosted database file that is basically SCGetInfo(path), which SuperContainer Companion plugin instance is responsible for rendering that calculation result - the one installed on the FileMaker client computer, or the one installed on the FileMaker Server? I am confused because I was certain the server-side Companion instance would do the math. However, I just set up a brand-new FMS instance on a brand-new machine and have had no chance to install the Companion plugin yet - yet the calculation seems to return the correct result? Is this perhaps run client-side, and is that the reason why some of my calculations are super slow? Thank you!
  7. Hello 360works - What's the timeline to bring SuperContainer Companion, Scribe, FTPeek, and the Email plugin to the Linux version of FMS19? Can't wait! Thank you
  8. Import Records is a great idea. I somehow forgot that one may script imports. I guess the import would have to be two separate ones - one to the archive file and one to the live file. This does not even have to be real time; I could probably run it overnight and have a delete script in place that only deletes the live record once the archive record is positively consistent. Thank you.
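The overnight move-then-verify-then-delete pattern described above can be sketched generically - plain Python with in-memory dicts standing in for the live and archive files, purely illustrative (the function and table names are hypothetical, not a FileMaker API):

```python
def archive_record(live: dict, archive: dict, record_id: str) -> bool:
    """Copy a record into the archive, verify the copy is consistent,
    and only then delete it from the live table (the pattern above)."""
    record = live.get(record_id)
    if record is None:
        return False
    archive[record_id] = dict(record)        # "import" into the archive
    if archive.get(record_id) == record:     # positive consistency check
        del live[record_id]                  # delete the live copy last
        return True
    return False

live = {"42": {"status": "done", "notes": "draft notes"}}
archive = {}
assert archive_record(live, archive, "42")
assert "42" not in live and "42" in archive
```

The point of the ordering is crash safety: if the job dies between the copy and the delete, the worst case is a duplicate, never a lost record.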
  9. I CANNOT wait for this. Forcing us into either Windows or a macOS virtual instance on server hardware is pretty unacceptable for an Apple-affiliated company. FMS was meant to run on native Linux all along.
  10. Hello, I was wondering: since MirrorSync centers so much on the MSMOD field (last-modified timestamp), what is the reason these are left populated after they were successfully synced? Both initial and incremental sync times grow longer and longer the more MSMOD fields have timestamps in them, even when they are the same across all synced instances. Would it not make sense to clear the MSMOD timestamp following a successful sync, to reduce the load on the sync process? Thank you.
  11. Update: Of course, I attempted the most obvious solution, scripting a copy step:
Copy Record/Request
Go to Layout (MAIN_ARCHIVE)
New Record/Request
Paste [Select]
This yields no result other than pasting the contents of all fields of the copied record into the first available field in the new record. There is no way to mirror the paste field by field.
  12. Hello all - This may not be the correct subforum, but interestingly enough, there seems to be none for Database Design. I have a rather complex FileMaker solution, grown historically and with large-ish file sizes (around 10 GB each). I say each because we use 360works MirrorSync to keep several FMS instances (5 at present) in sync. We need to do that because FM clients access the FMS from all 5 continents, where latency becomes an issue in the way the FMP client handles list requests: every line item compounds two round-trip ping latencies. So a user in Australia looking at a list of 100 records served by our New York FMS is looking at a response time of 100 line items x 2 roundtrips x 150 msec ping latency = about 30 seconds just to have that list page load. It's a very different scenario if you have a server within 5 msec ping times - locally - so the solution actually remains workable. No files are stored in FileMaker containers - they are all external on the NFS - so the FM database size is not due to stored files; there is just a lot of text data in that database.
My issue is that of the roughly 300,000 records, only 200 or so are 'live' data; the remaining 299,800 are merely archived records that need to be accessible somehow, but should really not be part of the main table. I am well aware of the pros and cons of maintaining ONE table that holds all like records. A table with 300k+ records should not be an issue. And it's not, UNLESS:
- Changes to calculated fields are made. With a table of 200 records, the update takes 2 seconds. With 300k+ records, it may take over 5 minutes of database locking until all changes are calculated.
- The same applies to re-indexing.
- The real show-stopper for us is MirrorSync. If any operation ever changes one flag in too many records, MirrorSync will then be unable to sync hundreds of thousands of records. We only need the live cases synced, not the rest.
So, all these things considered, my easy out would be to split my main table, MAIN, into two: MAIN_LIVE for active records and MAIN_ARCHIVE for archived ones. I could then MirrorSync only MAIN_LIVE across the 5 FMS instances, with a mere 200 records. MAIN_ARCHIVE would live on one hub server only, and all the other FMS instances would plug into that hub as an external data source; since the archived cases never need to be displayed in a list view and only one at a time will be parsed, ping latency is no issue there.
My issue now with that approach is how to move a record from MAIN_LIVE to MAIN_ARCHIVE once it's done, or back for revisions. MAIN_LIVE, like MAIN_ARCHIVE, has about 300 fields defined. It would be a tremendous burden to script a field-by-field record-moving script from MAIN_LIVE to MAIN_ARCHIVE, and the other way - in particular because that script would need to be updated every time a field definition is changed or a field gets added. Is there no way in FileMaker to simply copy an existing table, and move an entire record in that table to the copied table instance? It sounds straightforward enough, but I don't think this function exists? Thank you - Data Cruncher
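The latency arithmetic in the post above checks out; here it is as a quick back-of-the-envelope sketch (numbers taken straight from the post, the function name is just illustrative):

```python
# Back-of-the-envelope WAN load time for a FileMaker list view,
# assuming every line item costs its own round trips over the wire.

def list_load_time_ms(records: int, roundtrips_per_record: int, ping_ms: float) -> float:
    """Total wire time for a list where each record pays full round-trip latency."""
    return records * roundtrips_per_record * ping_ms

# Australia -> New York FMS: 100 records, 2 round trips each, ~150 ms ping
remote = list_load_time_ms(100, 2, 150)   # 30,000 ms = 30 s
# Same list against a local FMS at ~5 ms ping
local = list_load_time_ms(100, 2, 5)      # 1,000 ms = 1 s

print(f"remote: {remote / 1000:.0f} s, local: {local / 1000:.0f} s")
```

This also shows why a local replica helps so much: the cost scales linearly with ping, so cutting latency 30x cuts the page load 30x.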
  13. UPDATE: I may be missing something here. I have two pretty much identical FMS 18 instances running on virtualized OS X instances - alike hardware, same software versions - running the same FileMaker file. One processes SCGetInfo without noticeable delay; the other lags and eventually bogs down the FMS response time to about 35 seconds to render one record. Is there any log file, either on the FMS machine or the SuperContainer server, that would point to why SCGetInfo is running slowly on one machine but not the other? Am I correct to assume that SCGetInfo is run server-side, rather than on my FMP client? It must be, since SCGetInfo is part of a calculation field. Both FMS machines and the SC server are even on the same local subnet and ping the SuperContainer server in well under 1 msec. Perhaps my SC Companion plugin is corrupted on one of the machines, or an outdated Java version is installed?