cbishop

Members
  • Content Count

    9
  • Joined

  • Last visited

Community Reputation

0 Neutral

About cbishop

  • Rank
    newbie

Profile Information

  • Title
    Software Developer

FileMaker Experience

  • Skill Level
    Expert
  • FM Application
    18

Platform Environment

  • OS Platform
    X-Platform

FileMaker Partner

  • Certification
    8
    10
    11
    12
    14

Recent Profile Visitors

976 profile views
  1. Probably. We have a lot of graphical web views within (and occasionally outside) FileMaker Pro that summarize daily data points, and a quarter of our users look at these throughout the day. They are PHP calls that run an FM script; the script calls ExecuteSQL and returns the results, which are then parsed and charted in amCharts or other software. We also have several nightly processes that take data out, perform calculations in PHP, and update static summary fields on records (like worked time on an episode and on each episode task). I'd imagine we do between 100 GB and 1 TB of data transfer a month through PHP calls, but since we're not logging total output length, I can't say for sure.
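The ExecuteSQL call inside such a script looks roughly like this (the table, field names, and separators here are hypothetical examples, not our actual schema):

```
// FileMaker script step: Set Variable [ $rows ; Value: ... ]
ExecuteSQL (
  "SELECT task_id, SUM ( worked_minutes )
   FROM TimeEntries
   WHERE work_date = ?
   GROUP BY task_id" ;
  "|" ;                 // field separator
  "¶" ;                 // row separator
  Get ( CurrentDate )   // bound to the ? parameter
)
```

The script hands $rows back to PHP via Exit Script, and the PHP side splits on the separators before feeding the data to the charting library.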
  2. Yah - the file size will probably grow 25% or more, so I totally agree with going with the Data API. We are still holdouts on the PHP API because of the number of scripts relying on it, and because our company would probably hit the limits and unfairly have to pay to extract our own on-premise data through the Data API.
  3. There is a workaround for this. You can Base64Encode your file contents in PHP and save the result into a text field in FileMaker; then you can run a FileMaker script to Base64Decode it and store it in a container field. If you execute the FileMaker script in the same PHP file / connection, you can use a global field for the Base64 text, and that way it won't take up database storage space.
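A rough sketch of the round trip, assuming the classic FileMaker PHP API and a FileMaker script (the "Store PDF From Base64" name, layout, and field names are hypothetical) whose one step is Set Field [ Docs::Container ; Base64Decode ( Docs::gBase64Text ; "file.pdf" ) ]:

```php
require_once 'FileMaker.php';

$fm = new FileMaker('MyDatabase', 'fms.example.com', 'webuser', 'secret');

// 1. Encode the file contents in PHP.
$b64 = base64_encode(file_get_contents('/path/to/file.pdf'));

// 2. Write it into a global text field ($recId is a record ID you already
//    hold). Globals are per-session, so this costs no stored space as long
//    as the script runs on the same connection.
$edit = $fm->newEditCommand('Docs', $recId, array('gBase64Text' => $b64));
$edit->execute();

// 3. Run the FileMaker script that decodes into the container field.
$result = $fm->newPerformScriptCommand('Docs', 'Store PDF From Base64')->execute();
```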
  4. Yes, XML did have much higher write speeds (2 or 3 times as fast). However, reads and connections were taking longer. It would be good to have a blend - tell it to use XML for batch writes only, but JDBC for everything else. Would this be possible to implement?
  5. Hi Jesse, That sounds great. Not sure about the pricing issue with the Data API though (the per GB charge). It would be nice to have a pass-through small server instance located wherever you need it though...
  6. I'm having this issue too when testing with MirrorSync, and I already found that thread and upvoted it. I don't see FileMaker, Inc. improving the JDBC engine since they're heavily pushing the Data API now. I currently use SyncServer Pro (SSP). The way it handles remote server syncing is by letting you install additional SSP client software copies on each FM Server machine (or on a different machine local to that FMS machine). The hub SSP software communicates with the SSP clients rather than directly with the remote FileMaker Servers, and the spoke SSP clients then connect via JDBC. SyncServer has many, many issues, but its write speed is good. Can I suggest adding this as a feature request?
  7. Yes, we've been using a task schedule with "fmsadmin start xdbc" that runs every 15 minutes, and it does indeed keep it online. We also run "fmsadmin restart xdbc" nightly just to flush out any memory leaks from its Java processes.
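On macOS/Linux the same schedule can be expressed as cron entries (the fmsadmin path, credentials, and times below are placeholders; on Windows you'd use Task Scheduler):

```
# keep the xDBC listener online, every 15 minutes
*/15 * * * * /usr/bin/fmsadmin start xdbc -u admin -p PASSWORD -y

# nightly restart to flush memory leaks from the Java processes
30 3 * * * /usr/bin/fmsadmin restart xdbc -u admin -p PASSWORD -y
```

The -y flag answers yes to the confirmation prompt so the commands run unattended.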
  8. Years late, but Get(ApplicationVersion) = ("Filemaker Web Publishing" or "Web Publishing Engine") would always be false anyway. The ("Filemaker Web Publishing" or "Web Publishing Engine") part is an expression on its own and evaluates to 0, so you are basically asking whether Get(ApplicationVersion) = 0. That is, unless you were just giving us a loose example of what you were trying to do. :-)
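If the goal was to detect either web-publishing client, a test along these lines should work (Get(ApplicationVersion) returns text such as "Web Publishing Engine 18.0.1"; the exact wording varies by version and client):

```
PatternCount ( Get ( ApplicationVersion ) ; "Web Publishing" ) > 0
```

This matches both "FileMaker Web Publishing" and "Web Publishing Engine" without depending on an exact string comparison.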
  9. As far as the original search goes, I find that searching on related tables is unnecessarily slow; it seems to search all records and then push through to the current table. What I do to speed it up - especially when constraining from a smaller set of records - is to create an unstored calc field in the table you're searching in, whose formula grabs the contents of the related field you want to search on. So if you're in TABLE_A and you're constraining on TABLE_B::STATUS, create a new field in TABLE_A called STATUS_CALC whose formula is TABLE_B::STATUS (unstored), and perform your constrain on TABLE_A::STATUS_CALC instead. I've had searches that took up to 3 seconds now perform almost instantly.
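Concretely, using the TABLE_A / TABLE_B names from above (the "Active" criterion is just an example value):

```
// In TABLE_A, define STATUS_CALC as an unstored calculation, result Text:
TABLE_B::STATUS

// Then constrain within TABLE_A's found set:
Enter Find Mode [ Pause: Off ]
Set Field [ TABLE_A::STATUS_CALC ; "Active" ]
Constrain Found Set [ ]
```

Because the criterion now lives on a field in the local table, FileMaker evaluates it against the current found set instead of searching the whole related table first.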
