FileMaker Performance - can it work?
Posted 29 January 2009 - 03:45 PM
I finally surrendered and called technical support and used my "one free ticket" on essentially the most basic question, "Does FileMaker work?"
The answer, in the end, is that FileMaker will not work for my needs. I am abandoning 30 days of work attempting to get this tool to work for my needs. I will contact the client whose project first led me to this product, and advise them that the project is returning to "zero," with no technology solution selected. My personal project will also revert to exactly where it was 6 months ago when a programmer "flaked out" after partially developing a solution using PHP and MySQL.
FYI, the issue seems to be that FileMaker's "sweet spot" for performance lies somewhere between 250,000 and 500,000 records in the database (it might be the database size, I don't really know). When I pushed it past a million and then past 2 million, it slowed to a crawl and could simply no longer function. When I went through an excruciating series of steps to remove data, it crawled while removing records until it fell somewhere around 500,000, at which point performance was fast again.
Both of my projects require a database tool that can work with many millions of records. Therefore, I'm returning to MySQL and PHP, and will be starting over.
What's especially frustrating is that FileMaker (the company) kept "stringing me along," suggesting that if only I upgraded, my problems might be solved. So I did, and I invested still more time, until I was unable to proceed because of FileMaker's undisclosed limits.
I have uninstalled FileMaker from my system, and I am requesting a refund of the license fees paid. I don't know if they'll refund my money, but I need to stop wasting time on this solution and move on to a database solution that will work.
Posted 29 January 2009 - 03:59 PM
But not every tool fits the job - such is the way of IT.
Posted 29 January 2009 - 05:14 PM
Although indices make for faster searches, constantly regenerating those indices eats processor and memory (emphasis on the latter).
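That trade-off isn't specific to FileMaker. As a rough illustration (using SQLite as a stand-in, since it ships with Python; the table and index names here are made up for the demo), every insert into an indexed table pays an index-maintenance cost, while searches on that table avoid a full scan:

```python
import sqlite3
import time

# Two identical tables, one with a secondary index on "name".
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE plain (id INTEGER, name TEXT)")
cur.execute("CREATE TABLE indexed (id INTEGER, name TEXT)")
cur.execute("CREATE INDEX idx_name ON indexed (name)")

rows = [(i, f"record-{i}") for i in range(100_000)]

t0 = time.perf_counter()
cur.executemany("INSERT INTO plain VALUES (?, ?)", rows)
plain_insert = time.perf_counter() - t0

t0 = time.perf_counter()
# The index must be updated for every row, so this insert costs more.
cur.executemany("INSERT INTO indexed VALUES (?, ?)", rows)
indexed_insert = time.perf_counter() - t0

# The payoff: an equality search uses the index instead of scanning.
cur.execute("SELECT id FROM indexed WHERE name = ?", ("record-99999",))
hit = cur.fetchone()

print(f"insert without index: {plain_insert:.3f}s")
print(f"insert with index:    {indexed_insert:.3f}s")
print(f"indexed lookup found id: {hit[0]}")
```

The same principle applies to FileMaker fields set to index automatically: each stored index speeds searches on that field but taxes every record write.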
Relational design has lots of performance pitfalls. We took a VSAM flat-file DB on a mainframe to an Oracle relational model (a 375-terabyte DB holding a rolling three-month window of constantly changing data) on a 256-way Sun E10000. The processing time to produce the reports went from under an hour to over 27 hours, and the migration project got canned.
FileMaker has a unique and powerful construct that affords significant performance improvements. Using global match fields and self-join table occurrences, you can have many select-style operations performed with near-zero performance impact.
The lesson I am attempting to communicate is that there are ways to gain (or lose) performance, mostly in design. Many factors impact performance, and FM's table occurrence model affords significant benefits, but it requires some good, and possibly unique, design approaches.
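FileMaker's construct itself can't be shown outside FileMaker, but the reason it is fast can be sketched in any language: a match-field relationship behaves like a precomputed hash index, so "finding related records" becomes a constant-time lookup instead of a scan of the whole table. A minimal, purely illustrative analogy (all names here are hypothetical):

```python
from collections import defaultdict

# 200,000 fake records, each tagged with one of 50 categories.
records = [{"id": i, "category": f"cat-{i % 50}"} for i in range(200_000)]

# Build the "relationship" once, like defining a table occurrence
# joined on a match field: a hash index from key -> related records.
by_category = defaultdict(list)
for rec in records:
    by_category[rec["category"]].append(rec)

def select_via_match_field(category):
    """Analogous to setting a global match field and reading the
    related records through the self-join: no scan, just a lookup."""
    return by_category.get(category, [])

matches = select_via_match_field("cat-7")
print(len(matches))  # every 50th record matches -> 4000
```

The one-time cost of building (and maintaining) the index is what buys the near-zero cost per "select," which is the same bargain the post above describes.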
Posted 29 January 2009 - 06:50 PM
And there's the issue: to achieve success with FileMaker, I need to adopt and learn a "completely different" way of doing things. My whole reason for using FileMaker was for simplicity, ease of use, and quick development -- not a long learning curve requiring help at every single stage.
The cache issue is a perfect example. On the phone today, after I had complained several times about the fact that even running "flat-out," FileMaker rarely used more than a few percent of CPU capacity (peaking at 9% maximum), the FM technician eventually suggested that I check the cache size setting, which defaults to 8MB. Huh? An 8MB cache on a computer with 4GB? My cache size, for some bizarre reason, was 7MB. The technician suggested that I change it to 16MB to see if performance improved; I immediately asked, why 16MB? Why not 32MB or 64MB? The technician couldn't answer. When I set it to 16MB, CPU usage seemed to nearly double; when I set it to 64MB, CPU usage bumped up to 44 percent (it's a dual-core CPU; FileMaker can use only one, so its max CPU% would be 50%). I tried in vain to find any useful references to the cache settings in my FileMaker books or in the online help (I did find some comments in this forum, of course, but only after I knew what to search for). A learning curve, indeed.
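FileMaker's cache lives in its preferences dialog, so there is nothing to script there, but the knob itself is common to database engines. As a hedged analogy only, SQLite exposes the same idea through `PRAGMA cache_size`, and it likewise defaults to a small cache that you can raise to something like 64 MB:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# SQLite's page cache also defaults to a modest size.
cur.execute("PRAGMA cache_size")
print("default cache:", cur.fetchone()[0])

# A negative value sets the cache in kibibytes: -65536 = 64 MB.
cur.execute("PRAGMA cache_size = -65536")
cur.execute("PRAGMA cache_size")
print("new cache:", cur.fetchone()[0])
```

The general pattern holds: a cache far smaller than available RAM starves the engine, and raising it lets more of the working set stay in memory, which is consistent with the CPU-usage jump described above.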
And yes, getting CPU usage from 2% up to 44% would certainly have improved overall performance (especially in sorting, summarizing, and deleting records). It would still not have solved the problems that triggered the worst performance (and I was only just getting started, with more fields still needing to be indexed and more tables needing to be related).
Posted 29 January 2009 - 07:10 PM
I posted a global field URL_match approach in reply to one of your comments. It is really quick to implement. If you are stuck, comment back, and I'll walk you through it. If you have something else that is more time-intensive, describe it, and we'll see about walking through a quick-and-dirty test so you can compare the speed. I think you'll be amazed.
Posted 25 February 2012 - 01:39 PM
When your solution becomes slow at some point, it is most probably because you are (unintentionally) using one of the things that are slow by nature (or by design). There is often a simple way to avoid that thing and implement your feature more efficiently.
The only problem is finding the one point where this happens, the one you should focus on. If you want to learn how to find this weak spot, check out my video at http://fmbench.com/bottleneck, or you can go directly to the homepage of 24U FM Bench, our new product built specifically to address this issue.
If you want to learn (or find help) how to optimize the bottleneck you find, or discuss how to get the most out of FileMaker Pro in specific tasks, you may find the FileMaker Optimizers LinkedIn Group useful.
Software Division Manager, 24U s.r.o.
FileMaker Business Alliance Member
FileMaker 8, 10, 11, 12 Certified Developer