
Posted

Are those CURRENT, AVERAGE, or PEAK values?

Bear in mind that the performance statistics are only collected at the stipulated time interval. The most frequent sampling you can do is 1 second - but then that will affect FM Server performance!

I can't test my system above 30 users (that's all I've got), but I suspect that FM Server will deliver on demand and I/O statistics will rise with increasing users.

Maybe something a bit more concrete from FM Inc could help.

My system is

Mac G4-400, OS 8.6, 100Base NIC

PEAK values:

Disk: 161

Guests: 28

Network: 59

Transactions: 273 (???)

Posted

Sorry -- of course Peak values.

Even with 200 idle users, nothing will show :)

But the peak shows the potential of FM Server, and it is collected over time.

Obviously, lazy users will not push FM Server very hard :)

I am interested to see the values of RAM disks!

Posted

Yeah, I do realize it depends on traffic.

In any case, I'm thinking about creating a test file with a heavy workload and running it in a loop on 2-3 workstations. That will create some demand on the server, and then we can get higher numbers.
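Roughly what I have in mind, in ScriptMaker terms (the field names LoadStamp and gStopTime are just placeholders, and the find and sort would be against one of the hosted files):

  Loop
    Perform Find [Restore]                            # a stored find request against the hosted file
    Sort [Restore, No dialog]                         # sorting pulls the found set down from the server
    Set Field ["LoadStamp", "Status(CurrentTime)"]    # a write on the current record, to generate transactions
    Exit Loop If ["Status(CurrentTime) > gStopTime"]  # gStopTime = a global time field set before the run
  End Loop

Run that on 2-3 clients at the same time and the Disk, Network and Transactions figures should start to climb.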

Posted

What are the drives?

Is that a serious 17516 KB/sec?

And 11576 KB/sec network?

And 32767 transactions?

If those are all serious peak figures, that is great from a single FileMaker Server! What are those Guests processing?

Posted

Well, I'm trying to be serious, Anatoli ; )

Hopefully I'm not giving out bad numbers; they're copied straight off the remote server admin usage window.

In terms of extra info: one 80 GB HD.

I just did some testing and found where I get my peak values, in part. There's an update/import script run every morning, where the 1000 files are updated by 3 match DBs and 2 match Excel files (each match file up to 5000 records).

This got me right to my peak transactions...

The one client (which runs unlimited and does the import) has probably handled up to 500 simultaneous users during its peak period, when the students are signing on to the system at the beginning of the academic year, although I can't imagine that as individuals they are taxing the system much...

Bevin

(PS--this is the client who is looking for a FT solutions administrator (see the posting under the help wanted section) to run my system and tend to other miscellaneous computer needs.)

Posted

In that case FMI screwed up the statistics big time.

We have a similar scenario, with a script machine running a big job on 500+ MB of data.

The machine has 5 of the latest SCSI drives as RAID 10.

You probably have an excellent setup, but 17516 KB/sec from a single HD and 2850 KB/sec from the fastest RAID?

It doesn't make sense.

Posted

Anatoli,

Can I have you take a look, then? Do you have Timbuktu, so that I can give you temporary view privileges for an account on this server?

Just wanna make sure it all looks OK...

Bevin

Posted

Probably everything is OK with your server.

I was just thinking it would be nice to have some kind of statistics on what one can expect from FM Server. Especially if someone must justify the expense first.

Sorry, I do not have Timbuktu, but that is all right.

But why the numbers are like that beats me.

Posted

I wish those were true numbers :)

Anyway -- I've noticed quite a high unsaved cache. The same happens here after the night job. When the person responsible for writing those routines is back, we will put a "Flush Cache to Disk" at the end of each routine. If that does not help, we will put a pause of 10-20 seconds after the flush.

Maybe you can also think about this. If you have a crash or a power cut (obviously without a UPS), data loss is very probable. And if you don't have a backup made just before the night run, data reconstruction can take a long time.
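What I mean is something like this at the very end of each routine (the 15 seconds is just a first guess within the 10-20 I mentioned):

  Flush Cache to Disk                 # write the unsaved cache out to the drive
  Pause/Resume Script [15 seconds]    # give the disk a moment before the next routine or backup starts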

  • 2 weeks later...
Posted

I'm running Mac OS 9.2.2 on a 450 MHz G4 with 256 MB of RAM, with 50 users (actual peak shows 29) and 83 files.

Peak Transactions/record: 1532

Peak Network: 633

Peak Disk Kbytes/sec: 2204

Cache Hit %: 100!!! (set to the max, 40 MB)

Cache Unsaved %: 0

We're finding our performance is not adequate for large searches. I've been asked to find out whether a server hardware upgrade would make a big difference or if the bottleneck is really the network.

My questions:

1. Should I be concerned that the cache hit is so high and unsaved is 0? What is unsaved anyway?

2. Has anyone out there done performance tests to compare server performance on their own? Do you recommend Linux, Jaguar (Mac OS X 10.2), Windows 2000, etc.? A Dell 2650 with Linux, or an Apple Xserve?

BTW: I've followed the advice on filemaker.com for performance, db design, etc.

Posted

The Cache statistics are a tricky thing. A bigger cache means that more stuff can be cached, but unless that same stuff is called for over and over again, it will be just more overhead for the processor to deal with. A smaller cache is, in my opinion, better. Most of my caches are set between 2MB and 8MB.

A couple of other things to improve performance:

Get a fast SCSI hard drive. Most likely you are running the ATA drive that comes in the G4s, and those are not that good for server-class machines, especially with a disk-intensive solution like FileMaker Server.

How much RAM is allocated to FM Server? If more than, say, 40 MB, it is WAAAAAYYYY too much. Lower it way down (to, say, 16 MB), set your cache to 8 MB (it may adjust the memory again; let it) and see how that works.

Since you have a fair amount of RAM and would need to spend more on a new HD anyway, why not consider a RAM disk? We have 5 FMP Servers running out of RAM disks on G4 400 Cubes and the performance is great. Of course, you pretty much need to load them with max RAM to make this work efficiently. We use RAMBunctious by Clarkwood Software.

We also find that OS 9.0.4 is the best "Server" OS and use it on all of our servers (ASIP systems included), and even then we strip out EVERY unneeded extension (and there are lots of them to remove).

Finally, look at Peek-a-Boo, also by Clarkwood. It allows you to specify processor priorities for the various applications and services running. This way you can give FileMaker Server maximum processor time.

Posted

Thanks for the advice! Now I have to wait for a good time to bump everyone out in order to experiment. We have people who stay connected for months at a time.

More questions:

3. Is our ATA drive still as much of a factor if we use a RAM Disk?

4. Why not use Apple's built-in RAM Disk? What's better about RAMBunctious?

5. How big should I make the RAM disk if the total size of the folder of databases on the FileMaker server is 227 MB? It doesn't seem like I'll have enough memory to put them all there and still have room left for the system without VM.

6. Is Peek-a-Boo going to give me a gain over just using the checkbox in the Preferences dialog for making the server the priority process?

7. Are there any databases out there used for benchmarking performance? If not, I have some simple ones that I'm working on, which I plan to run on a few different platforms. I'll publish my results later on.
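The ones I'm working on are very simple -- roughly this, with gStart and gEnd as global time fields (the names are just placeholders):

  Set Field ["gStart", "Status(CurrentTime)"]
  Perform Find [Restore]              # the search being timed
  Sort [Restore, No dialog]
  Set Field ["gEnd", "Status(CurrentTime)"]
  # elapsed seconds = gEnd - gStart

I run the same script against the same data on each platform and compare the elapsed times; Status(CurrentTime) only has one-second resolution, so the test has to be long enough for the difference to show.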

Here are some answers to your follow-up questions:

- We do indexed searches when we can. Particularly bad searches combine related fields with local fields where the related fields only have a few possible values. For example, a related weekday field can only have 5 values. Searching the related db of 25k records for any of those 5 values takes 30 seconds.

- We have 63 MB of RAM allocated to the server app. I recall gradually increasing it to that after it complained about memory.

Thanks again.

TED

Posted

Once you go to a RAM disk, the ATA drive ceases to be a factor.

RAMBunctious has many more features than the built-in RAM disk functions.

Add more physical RAM, as much as you can. Really, in your case 512 MB should be enough: give the RAM disk 384 MB (room for your 227 MB of files plus growth) and save the remaining 128 MB for the system and FileMaker Server.

Peek-a-Boo allows you to actually set the OS settings for application priority. Not only does it allow you to modify it for FileMaker Server, it allows you to lower it for other processes.

Posted

I ran my performance files on my server with various configurations. The best numbers, of course, came from running the files locally -- that was half the time of running them over the network using my FileMaker Pro Server. The best server performance was, as captkurt said, with a low cache, low server app memory, running from the RAM disk.

I want to compare this to a Linux or Xserve server.

I'm willing to share the files I used for my benchmark, so I can attach them here.

TED

  • 2 months later...
  • 1 month later...
Posted

As we throw more and more tasks at our FM Server, we get higher performance numbers:

Disk: 9725 KB/sec

Network: 2503 KB/sec

Transactions: 5357

What are good values on other platforms -- Linux? Mac OS X?

Posted

How come my numbers are so off? They seem impossible, but it's what I'm getting. Is the plug-in screwy for OS X, or what? I noticed on the first page of this thread that someone else had similar peak values; in fact, a couple were exactly the same.

Posted

Hey guys,

We get similar performance with an X server:

Disk: 12012

Files: 51

Guests: 10

Network: 7260

Transactions: 32767

I can drive transactions to the peak rate by doing a sort on one of our larger files, or by running a summary calculation in the same place. Near as I can tell, this is the maximum transaction rate (32767 is 2^15 - 1, so it may just be where the counter tops out).
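For anyone who wants to try it, the "script" is nothing more than this, run against our largest file (the sort field is whatever unstored or related field you have handy):

  Show All Records
  Sort [Restore, No dialog]          # sort the whole file on an unstored or related field

Viewing a summary field (a Total of a number field) over all records does the same thing, since, as far as I can tell, the client has to pull every record to compute it.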

Carlisle
