
FileMaker Server Performance Testing



Recommended Posts

I have a system of 8 FileMaker files and about 50 concurrent FMP users running on FileMaker 11. Recently, our system has started to suffer severe performance degradation, which I thought was the result of an old server (Xserve, single 2.0 GHz Intel Xeon, 1x 7200 RPM SATA drive, 16GB of RAM). To test this, I designed a performance test that navigated to some of the larger tables in the system and scripted the creation of 7,500 records in each table. I recorded the start and stop timestamps of the loop and placed them in a table where I could compare the performance of several test servers against the live server. (Spreadsheet available at https://docs.google....a0E&output=html )
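For reference, the comparison in the spreadsheet boils down to elapsed time per loop and records created per second on each server. The sketch below shows that calculation with made-up server names and timestamps; the real numbers are in the linked spreadsheet.

```python
# Hypothetical sketch of the comparison done in the linked spreadsheet: from
# the start/stop timestamps captured by the test script, work out elapsed
# time and records created per second for each server. Server names and
# timestamps below are made up.
from datetime import datetime

RECORDS_PER_RUN = 7500  # records created by the scripted loop in each table
FMT = "%Y-%m-%d %H:%M:%S"

runs = {
    "Live Xserve":   ("2012-05-01 10:00:00", "2012-05-01 10:04:10"),
    "Test server A": ("2012-05-01 11:00:00", "2012-05-01 11:05:02"),
}

for server, (start, stop) in runs.items():
    elapsed = (datetime.strptime(stop, FMT) - datetime.strptime(start, FMT)).total_seconds()
    print(f"{server}: {elapsed:.0f} s, {RECORDS_PER_RUN / elapsed:.1f} records/s")
```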

What I found was surprising: the live server actually performed BETTER on many of the speed tests than servers running newer, faster hardware. These results are concerning because, in my mind, they point to a bigger issue: database optimization.

Before I go down that treacherous road, I'd like to get input on the testing I performed and also get some clarification on how FileMaker performance monitoring works.

First, what is a "Remote Call"? From what I can tell, FileMaker doesn't define what a "call" is anywhere in its documentation, only that remote users may need multiple calls for a single task.

Second, how CPU-dependent is FileMaker from the server's end? It's not uncommon to see in Activity Monitor that the CPU is pegged at 100% while a user performs an export or updates a set of related records. My goal was to put FileMaker on the fastest CPU out there and see how it performs. I know that, being a database application, FileMaker is very dependent on hard drive speed, but given the results I've seen (very little drive activity while the CPU is pegged), I think the performance issues are more of a CPU bottleneck than a drive bottleneck.
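One way to check that hunch directly is to sample the CPU usage of the FileMaker Server processes themselves while an export or relookup is running. This is only a minimal sketch, assuming the psutil Python package; the process names to watch ("fmserverd", "fmsased") are an assumption and may differ by FMS version, so adjust them to whatever ps shows on your server.

```python
# Minimal sketch (assumes the psutil package is installed) for checking
# whether the FileMaker Server processes are the ones pegging the CPU while
# an export or relookup runs. The process names are an assumption; adjust
# them to whatever `ps` shows on your server.
import time
import psutil

WATCH = ("fmserverd", "fmsased")

procs = [p for p in psutil.process_iter(attrs=["name"])
         if p.info["name"] and p.info["name"].lower() in WATCH]

for p in procs:
    p.cpu_percent(None)          # prime the counter; the first call returns 0.0

for _ in range(30):              # sample for roughly 30 seconds
    time.sleep(1)
    for p in procs:
        print(f"{p.info['name']:>10}  {p.cpu_percent(None):5.1f}% CPU")
```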

I'm in the middle of implementing FM Bench in my solution and I'm hoping to at least find some optimizations there, but are there more "behind the scenes" tools for looking at what is going on when the server is pegged?

Thanks for any insight you guys can provide!


There is no documentation on what a remote call is. So while it's useful to see the stat in the admin console and the stats.log, it's hard to translate it into what is going on in the solution.

One thing you can do is add some logging of your own to your scripts so that you know what is running when. That too is somewhat limited, because the load on your server can come from other, unoptimized parts (graphics-heavy layouts, layouts with many portals, sorted relationships, users doing manual searches on unindexed fields, layouts with many unstored calcs and/or summary fields, ...).
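As an illustration of what that logging buys you: if each script wrote its name plus start/end timestamps to a log table, an analysis like the sketch below could show where server time is actually going. The CSV file name and column layout are made up for the example; adapt them to however you export the log.

```python
# Sketch of mining a hypothetical script log: assumes each script writes a
# row with its name plus start/end timestamps to a log table, exported here
# as script_log.csv with columns script, start, end (made-up names).
import csv
from collections import defaultdict
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S"
totals = defaultdict(float)
counts = defaultdict(int)

with open("script_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        secs = (datetime.strptime(row["end"], FMT)
                - datetime.strptime(row["start"], FMT)).total_seconds()
        totals[row["script"]] += secs
        counts[row["script"]] += 1

# Scripts eating the most total server time come first
for script, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{script}: {counts[script]} runs, {total:.0f} s total")
```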

FMS has the four traditional bottlenecks, and disk I/O carries the biggest performance penalty:

- disk I/O

- memory

- processor

- network

Load on the server is a combination of the complexity of the solution, the number of users, the usage profile of those users, and the hardware of the server. So there are many variables to consider.

I would suggest doing extended performance monitoring using the OS tools (harder to do on OS X than on Windows) to make sure you have a good picture of what is going on. That will answer your question about the CPU dependency of your solution. Overall, FMS and its core tasks are more disk-intensive than CPU-intensive.
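A rough sketch of what that extended monitoring could look like, assuming the psutil Python package is available on the server: sample the four resources above once a second and append them to a CSV you can graph later. The file name and sampling interval are arbitrary choices for the example.

```python
# Rough sketch of extended monitoring with the psutil package: sample CPU,
# memory, disk I/O and network once a second and append to a CSV for later
# graphing. Disk and network counters are cumulative since boot, so diff
# consecutive rows when charting. Stop with Ctrl-C.
import csv
import time
import psutil

with open("server_load.csv", "a", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["time", "cpu_pct", "mem_pct", "disk_read_B",
                     "disk_write_B", "net_sent_B", "net_recv_B"])
    psutil.cpu_percent(None)                     # prime the CPU counter
    while True:
        time.sleep(1)
        disk = psutil.disk_io_counters()
        net = psutil.net_io_counters()
        writer.writerow([time.strftime("%H:%M:%S"),
                         psutil.cpu_percent(None),
                         psutil.virtual_memory().percent,
                         disk.read_bytes, disk.write_bytes,
                         net.bytes_sent, net.bytes_recv])
        f.flush()
```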

Your performance test is not very representative, as it is just one operation; it doesn't mimic the complex... well... complexity of users working in your solution with all its tables, layouts, scripts, ... FM Bench should help there.


Investigate why this is so recent. Assuming little change to your DBs, I am thinking drive problems on the Xserve.

I have used an Xserve with many more files (40+) and 30+ users without issue, but had another Xserve as a file server that suffered an HD failure, with performance issues in the run-up to the failure.


CPU is important, but not as important as the drives. The SATA drive in that Xserve is hardly optimal. See what Wim had to say.

Also, have you read the Server Configuration White Papers from the SEs?

Steven

Steven,

I haven't read them, but I am familiar with "best practices" as far as FMP Server is concerned. A quick Google search reveals nothing for "Filemaker Server Configuration Best Practices SE". Could you link to one of them?

Investigate why this is so recent. Assuming little change to your DBs, I am thinking drive problems on the Xserve.

I have used an Xserve with many more files (40+) and 30+ users without issue, but had another Xserve as a file server that suffered an HD failure, with performance issues in the run-up to the failure.

I have a feeling that the recent performance issues are a result of an ever-growing data set. Unfortunately our business model dictates that we can't just "archive" information, so most users have access to in excess of 500k records (with the aforementioned 2.5M+ sub-records as overhead).

I met up with Jay Gonzales of BeezWax and he suggested running a compacted "Save a Copy As..." on one of the files. According to him, if the database is in good shape I should see less than a 5% change in database size after doing that. Well, my largest file (~3GB) went down to 2GB, so I think there's some optimization to be had there.
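For what it's worth, that shrinkage is well past the 5% rule of thumb (a quick back-of-the-envelope check using the rounded sizes above):

```python
# Quick check against the "less than 5% change" rule of thumb, using the
# rounded file sizes from the post above.
before_gb, after_gb = 3.0, 2.0
shrink_pct = (before_gb - after_gb) / before_gb * 100
print(f"Compacted copy is about {shrink_pct:.0f}% smaller")   # ~33%, well above 5%
```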

It's tough to diagnose hardware issues in Mac OS X Server. Does anybody have a good tool for that (other than the notoriously finicky Disk Utility built into Mac OS X)?


Hardware is important, but in my opinion you can spend lots of money, hours, and nerves trying to speed up your server (which currently runs on older, but not bad, hardware). I have done it myself and it was more or less useless. So I sat down and started to debug and optimize some script sequences. With an increasing number of records, some scripts just need to be modified to run efficiently. Optimization is not a one-week task, more like a week each quarter. But I found it's worth it, and you can gain more speed within the first few hours than you could achieve with hardware upgrades. And I also need to mention that optimization is pure fun ;)

Regards,

markus


  • 2 weeks later...

I agree with Markus - I have the same experience. In the past my company invested a lot of money to improve our server hardware, but we never saw the solution's performance improve by more than 20-30% as a result.

We also tried to go down the long-term optimization path Markus is suggesting, but it was a lot of work and a lot of guessing.

Last year I discovered that the most efficient way to improve performance of our solution was to identify the real bottleneck (learn more about it at http://FMBench.com/bottleneck) and optimize just the bottleneck.

However, sometimes it's hard to find the bottleneck even with FM Bench. For example, FM Bench lets you easily identify a single script step as your bottleneck, but when the script step is a Set Field or something as atomic as that, you may need to go a bit deeper. My usual approach then is to make a copy of the solution and keep deleting parts of it, examining how the performance is affected. You can, for example, delete half of all your tables, and then measure how long that "bottleneck" script step takes to execute. Is it significantly faster or not? If it is, then your real bottleneck is somewhere in the tables you have just deleted. If it's still very slow, then the bottleneck is in the half you have not deleted. And so on...
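That halving strategy is essentially a binary search over a list of suspect components. A small illustrative sketch, where measure() is a hypothetical stand-in for "delete these parts from a copy of the file and time the slow script step again":

```python
# Illustrative sketch of the halving approach described above. The measure()
# callable is hypothetical: it stands for "delete the given parts from a copy
# of the solution and time the slow script step again" (seconds returned).
def find_bottleneck(suspects, measure, fast_enough_secs):
    """Narrow a list of suspect components down to the one whose removal
    makes the slow step fast again."""
    while len(suspects) > 1:
        first_half = suspects[: len(suspects) // 2]
        second_half = suspects[len(suspects) // 2 :]
        if measure(removed=first_half) < fast_enough_secs:
            # Removing the first half fixed it, so the culprit is in there.
            suspects = first_half
        else:
            # Still slow without the first half: look in the second half.
            suspects = second_half
    return suspects[0]

# Example call (hypothetical): find_bottleneck(all_tables, measure, 2.0)
```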

You can replace your hardware, but at a certain point you reach a level where better hardware won't help you, because your solution won't be able to take advantage of it. But the solution itself - the algorithms, calculations, database structure - can be optimized, and optimizing it will help you regardless of what hardware you use. So my suggestion is to never give up on identifying the right bottleneck. Even if it's difficult, it is still the most efficient way to optimize your solution.

I hope this helps...

HOnza

