
Major Problems across Servers




We have a complex solution built using the separation model, with 1 Interface File and 5 Data Files. The 5 Data Files contain many tables. Previously, we had all 6 files on 1 server. However, a load test showed it performed very slowly once more than 50 people were connected, and the CPU on the server would max out (the server is a dual G5 Xserve).

So, we decided to split the solution across multiple servers. The configuration is as follows:

Server 1: Interface File

Server 2: Data File 1 (main)

Server 3: Data File 2

Server 4: Data Files 3, 4, and 5

All file references were updated and double-checked. After opening the Interface File, it would automatically open Data File 1, Data File 2, and Data File 3, since they are related. Then, after going to any layout in the Interface File that is based on Data File 1 and contains related fields from ANY other Data File, Data File 1 closes and we receive the following message:

"File has been forcefully disconnected by the host. All affected windows will be closed.[From Administrator:"""

If we put all of the files back on the same server and update the File References, it works perfectly. I have also moved Data File 1 and the Interface File onto the same server, and it still has disconnect problems with one of the other Data Files.


However, a load test showed it performed very slowly once more than 50 people were connected.

For more than 50 connections on the Mac, it's important that the server be running Mac OS X Server. It has something to do with thread management.


It only runs FileMaker and is an LDAP Master.

And therein likely lies part of the problem. You ought not to mix LDAP or other Directory Services with FileMaker Server. It will definitely degrade performance.

Also, how much installed RAM? And, since this is Server 8, I assume you are not running any web publishing.

Steven


Are you positive that LDAP specifically will degrade performance? We use LDAP for FileMaker external authentication only. When no users are in FileMaker, the CPU doesn't normally even reach 2%.

It has 2 GB of RAM. Also, the CPUs are dual 2 GHz.

We are not running web publishing at this time on that server.

At this point, I think FileMaker Server just runs that slowly because of the complexity of this solution. The biggest problem we are facing is that when we try to split the solution across servers, we get the message "File has been forcefully disconnected by the host. All affected windows will be closed." We have a ticket in with FileMaker and it is in the process of being escalated.


Are you positive that LDAP specifically will degrade performance?

Pretty much. See the Server and the External Server Authentication Tech Briefs.

2 GB is probably just enough RAM. Does this problem persist if you host the files on a different single-processor machine, or if you disable one of the processors on the G5?

Steven


In the tech brief, it gives the example of the LDAP master being on a different machine, but it does not state that running on the same machine will degrade performance.

I have not tried to disable one of the processors. Does FileMaker have a problem with dual processors?


Does FileMaker have a problem with dual processors?

FileMaker Server does in some instances. What I am really trying to isolate here is whether the specific server has a problem or whether it's the files. See Tech Info 5303 for one example where dual processors cause problems. It may or may not be applicable to your situation.

Steven


We purchased this server specifically for this purpose, so all it has ever done is LDAP and FileMaker. I looked at the Tech Info article, and it applies to running FileMaker Server Advanced, which we aren't running on that machine.

Is it possible that we have just reached the max of what FileMaker can do? Here is our current system setup.

5 total FileMaker servers.

4 are dual G5 towers running FileMaker 7.

There are about 250 total databases across those 4 servers.

1 is a dual G5 Xserve running FileMaker 8. This one runs the new system we have developed using the separation model with 6 total files: 1 Interface File and 5 Data Files.

Our Data Files total 162 tables, and the largest table has over 2 million records.

Our Interface File contains 552 Relationships, 666 Layouts, 353 Value Lists, 30 custom functions, and 27 File References.

As you can see, we run entirely on FileMaker. We have 4 full-time in-house FileMaker developers and have been using FileMaker since the company started over 10 years ago.


I believe this is a FMS8 issue.

I have experienced this very same problem when using version 8, but not in version 7. I have two files (1.5 GB and 28 GB) on only one server, a 1.8 GHz dual processor running 10.3.9. The initial conversion to FMS 8 was preceded by a 48-hour test with only one user and backups every hour. No problems, so we went live...

About four hours into normal use, the problems started. The first time, the server locked up. The live files were discarded and we went to backups. Those backups appeared fine; we recovered and compressed them and put them back on Server 8, and four hours into use the files closed during a backup. The server log indicated the files were corrupt and had been closed.

Tech Support and I are trying to reproduce this on my test server (it's too risky to do on the live solution), but no luck yet. They have suggested reducing to a single processor, and they are wondering if it is related to the files being on a drive other than the startup drive, but I first want to be able to reproduce the problem to get some level of security prior to trying the live files again.


In the tech brief, it gives the example of the LDAP master being on a different machine, but it does not state that running on the same machine will degrade performance.

We didn't mention it because it seemed obvious to us. An OD master or machine with an AD role can get really busy depending on the deployment. In your case it may not put much of an extra load on it, but chances are you'll start using the OD for much more than just FM. Having FMS on that machine will give you grief over time when that happens...


Given the overhead of TCP communications between more than one server, I'd have a hard time believing the two-server solution would work better, unless you had an unusual relational design.

Here are some things to look at:

1. What are your FM RAM cache settings on the server and client? For server it should be maxed out. I'm not so sure about on the client side, but you may want to try increasing it there.

2. The goal is to make sure your "working set" fits inside the RAM cache at all times. You can estimate the working set roughly as the number of records that need to be accessed for a given operation, times the number of users, minus any shared data (see the sketch after this list). For example, if you have an unstored calculation that summarizes a bunch of records, then each time that calc is referenced, ALL of those records must be brought into the cache.

3. For example, I have one solution where I have to summarize a single field across several hundred thousand records. It turns out it was much faster to intentionally denormalize this data by putting it into a small table that includes only an ID field and the data field.
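To make the working-set rule of thumb in point 2 concrete, here is a minimal Python sketch. It is purely illustrative: the record count, average record size, user count, and sharing fraction are assumptions, not figures taken from this thread.

```python
# Rough working-set estimate per the rule of thumb in point 2 above.
# All of the numbers below are hypothetical; plug in your own averages.

def working_set_mb(records_per_operation, avg_record_kb, concurrent_users, shared_fraction=0.0):
    """Records touched per operation, times users, minus the portion shared between users."""
    per_user_kb = records_per_operation * avg_record_kb
    total_kb = per_user_kb * concurrent_users * (1.0 - shared_fraction)
    return total_kb / 1024.0

# Example: an unstored calc that summarizes 200,000 records of ~2 KB each,
# referenced by 50 users, where roughly 80% of those records are common to everyone.
print(f"{working_set_mb(200_000, 2, 50, shared_fraction=0.8):,.0f} MB")  # ~3,906 MB
```

If an estimate like this comes out far larger than the cache ceiling discussed later in the thread, that is a strong hint the working set will not fit in the cache and disk I/O will dominate.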


1. What are your FM RAM cache settings on the server and client? For server it should be maxed out. I'm not so sure about on the client side, but you may want to try increasing it there.

Not always. You should use the OS and FMS statistics to figure out the ideal server cache setting. Best performance does not usually occur with the FMS cache set to its max.


But I agree with Xochi in general: splitting the files over different servers in order to get more processing power doesn't seem like a good solution to me. A solution redesign is probably in order.

Also more RAM and faster hard disks can help.


For server it should be maxed out. I'm not so sure about on the client side, but you may want to try increasing it there.

Whoops! Optimal usage occurs when the cache hits are in the 90 to 95 percent range in normal operations. Check the statistics in the SAT Tool to verify this.

Also, there is a difference between RAM reserved for the cache and the RAM used by the service itself. The cache ceiling is purposefully constrained:

C = (R - 128) * 0.25

where R is the amount of installed RAM (in MB), and C is the resulting amount of memory available for the cache, with an absolute ceiling of approximately 800 MB.
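As a quick sanity check, here is the formula worked through in a short Python sketch; the function name and the explicit 800 MB cap parameter are mine, and R is assumed to be in MB.

```python
# C = (R - 128) * 0.25, capped at roughly 800 MB (per the formula above).
# R is installed RAM in MB; 2048 matches the 2 GB server discussed earlier.

def fms_cache_ceiling_mb(installed_ram_mb, hard_cap_mb=800):
    return min((installed_ram_mb - 128) * 0.25, hard_cap_mb)

print(fms_cache_ceiling_mb(2048))  # 480.0 MB available for cache on a 2 GB machine
print(fms_cache_ceiling_mb(4096))  # 800 -> the ~800 MB ceiling kicks in
```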

HTH

Steven


Whoops! Optimal usage occurs when the cache hits are in the 90 to 95 percent range in normal operations. Check the statistics in the SAT Tool to verify this.

I'm not sure I agree. The best performance logically will occur when you have a 100% cache hit rate. The only reason to aim for 90-95% is that you can be penalized by setting your cache too large (because that RAM could be used by the OS instead). However, as you point out, this is not allowed by FM:

The cache ceiling is purposefully constrained.

C = (R - 128) * 0.25

Steven

If you tune your system for a 95% cache hit rate at a given point in time, you will have performance that is not noticeably different from setting it to get a 100% hit rate. However, you run the risk of your system growing at a later date and outgrowing your cache.

Thus, my recommendation is to set the cache to its highest possible setting. Since FM already limits this to about 25% of total RAM, there is no risk of setting the cache too high (and causing other OS processes to suffer).

However, setting the cache too low is bound to give poor performance, and the vast majority of database solutions grow in size rather than shrink.

