
Server cache settings best practice?


Flarb

This topic is 2762 days old. Please don't post here. Open a new topic instead.

Objective: We want to make the whole solution faster, for both web and local FMPro users. Here's what we are working with:

- Machine 1: FileMaker Server 14, Blade server with 512GB ram, 1TB SSD, running Windows Server 2012 R2, intranet connected only, main machine.

- Machine 2: iMac, runs Admin Console only.

- Machine 3: FileMaker Web Engine 14, Mac Pro with 128GB ram, 1TB SSD, running El Capitan, intranet and internet connected, exclusively for web.

- FileMaker file is a single file at 14GB, no data separation currently; containers are stored externally. Let's assume the file's inner workings are optimized and that we are here to discuss server deployment, server specifications, and server settings.

- FMPro simultaneous users: ~100. XML and PHP total Web simultaneous connections: ~50.

Question 1a) Would there be much benefit to setting the Server Cache from its default of 512MB to 490GB (yes, that's GB)? We see a 99-100% cache hit rate at 512MB and a consistent 100% hit rate at 490GB. Disk access is orders of magnitude lower with the cache set to 490GB, almost none.

Question 1b) Would there be any penalties associated with the above setting? For example, we've heard that regular backups would miss the latest data changes unless the Progressive Backup feature is used. We've also heard that, should the server crash, more data would be lost (beyond what the backups cover, of course). We've also heard that FileMaker Server has to spend more CPU time to "manage" such a large cache, resulting in less performance, not more. Does any of this ring true? Are there other penalties to consider?
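To illustrate the crash-loss concern in 1b, here is a minimal write-back cache sketch in Python. This is a conceptual model only, not how FileMaker Server actually manages its cache internally: writes land in RAM first and only reach disk on flush, so anything still dirty when the process dies is gone, and a bigger cache can hold more such unflushed changes.

```python
# Minimal write-back cache sketch. Hypothetical model for illustration;
# FileMaker Server's real cache management is more sophisticated. The
# point: a larger cache can hold more unflushed changes at crash time.

class WriteBackCache:
    def __init__(self):
        self.cache = {}      # in-RAM copy of records
        self.dirty = set()   # records changed but not yet written to disk
        self.disk = {}       # simulated persistent storage

    def write(self, key, value):
        self.cache[key] = value
        self.dirty.add(key)  # change exists only in RAM until flushed

    def flush(self):
        for key in self.dirty:
            self.disk[key] = self.cache[key]
        self.dirty.clear()

    def crash(self):
        """Simulate a power loss: RAM contents vanish."""
        self.cache.clear()
        self.dirty.clear()
        return dict(self.disk)  # only flushed data survives

c = WriteBackCache()
c.write("invoice:1", "paid")
c.flush()                        # invoice:1 is now safe on disk
c.write("invoice:2", "draft")    # still dirty, never flushed
survivors = c.crash()
print(survivors)  # {'invoice:1': 'paid'} -- invoice:2 is lost
```

The record names are made up; the takeaway is only that the window of potentially lost work scales with how much dirty data the cache is allowed to accumulate between flushes.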

Question 2) Would breaking the file up into smaller chunks, e.g. via the data separation model, net any performance benefits?

Question 3) If using a smaller Server Cache (say, back down to 512MB), would the system benefit from a RAM disk (such as the RAM-SAN 440) instead of an SSD, and give better net performance than the very large Server Cache setting?

Observation: 512MB cache makes FMPro clients faster, but web slower. 490GB cache makes FMPro clients slower, but web significantly faster.

Observation: FM Inc and one consulting firm tell us to set the cache to 90% of this server's RAM capacity. Other experienced developers tell us to "manage" the cache at a much smaller number, paying mind to the cache hit percentage. What say you?
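The "manage the cache, watch the hit percentage" advice can be sanity-checked with a toy simulation. The access pattern and sizes below are made up, not measured from any FileMaker workload; the point is the shape of the curve: once an LRU cache covers the hot working set, extra capacity barely moves the hit rate.

```python
# Toy LRU cache hit-rate simulation. The skewed access pattern is a
# made-up stand-in for a real workload; only the diminishing-returns
# shape of the result matters, not the absolute numbers.
from collections import OrderedDict
import random

def hit_rate(cache_size, accesses):
    cache = OrderedDict()
    hits = 0
    for key in accesses:
        if key in cache:
            hits += 1
            cache.move_to_end(key)         # mark as most recently used
        else:
            cache[key] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(accesses)

random.seed(42)
# 90% of accesses go to a "hot" working set of 100 records;
# 10% scatter over 10,000 cold records.
accesses = [
    random.randrange(100) if random.random() < 0.9
    else 100 + random.randrange(10_000)
    for _ in range(50_000)
]

for size in (50, 100, 200, 1000, 5000):
    print(f"cache={size:>5}: hit rate {hit_rate(size, accesses):.1%}")
```

Growing the cache from 50 to 200 entries buys a lot here; growing it from 200 to 5000 buys almost nothing, which is the behaviour the "watch the hit percentage" camp is pointing at.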


Thank you for any insight, folks!


13 hours ago, Flarb said:

 

Question 1a) Would there be much benefit to setting the Server Cache from its default of 512MB to 490GB (yes, that's GB)?

Totally not. In fact, you'd probably increase instability, not improve stability or speed. Based on what you are describing, I would increase the cache a bit, to say 800MB, but not beyond that.

13 hours ago, Flarb said:

For example, we've heard that regular backups would be missing the latest data changes unless the Progressive Backup feature is used.

 

Where have you heard that!?

FMS backups are very robust and I've never heard of any lost data, and using progressive backups does not make the regular backups better. Progressives should be used to complement the existing backup strategy.

Regular backups (and progressives) are going to be fairly slow in your case because you have one monolith of a file at 14GB. I would break that file up, not just along the lines of UI and Data, but along the lines of static vs. dynamic data, tables that are used more intensely than others, and so on.
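As a back-of-envelope illustration of why that split helps backups (the 10GB/4GB static/dynamic proportions are a hypothetical assumption, not a measurement), if frequent backups only need to copy the file that actually changes, daily backup I/O drops sharply:

```python
# Back-of-envelope backup I/O estimate. The 10GB/4GB static/dynamic
# split is an assumed illustration, not a measured figure.
MONOLITH_GB = 14                 # the single-file solution
STATIC_GB, DYNAMIC_GB = 10, 4    # assumed sizes after separating static data
HOURLY_BACKUPS_PER_DAY = 24

# Single file: every hourly backup copies all 14GB.
monolith_daily_io = MONOLITH_GB * HOURLY_BACKUPS_PER_DAY

# Split files: dynamic file backed up hourly, static file once a day.
split_daily_io = DYNAMIC_GB * HOURLY_BACKUPS_PER_DAY + STATIC_GB * 1

print(f"monolith: {monolith_daily_io} GB/day, split: {split_daily_io} GB/day")
# 336 GB/day vs. 106 GB/day under these assumed numbers
```

Shorter backup windows also mean less time with files paused, which matters at 100+ concurrent users.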

14 hours ago, Flarb said:

Objective: We want to make the whole solution faster, for both web and local FMPro users. Here's what we are working with:

- Machine 1: FileMaker Server 14, Blade server with 512GB ram, 1TB SSD, running Windows Server 2012 R2, intranet connected only, main machine.

 

I think you are focusing too much on the RAM/cache to get better speed.  I would focus instead on the processing power and the disk speed.

(And no: I would not consider a RAM disk, that's too volatile for my liking).

With that many concurrent users I would:

- split the file as I mentioned above

- depending on the number of cores available and their speed: consider using machine #2 to host some files and not everything on machine #1.  There's no point in using a machine for just the admin console.



Thank you Wim for the excellent information. As to where we heard that: we have been privately pinging some other developers and FMI for input. Because the answers have varied quite a bit, I've posted here for additional perspectives. I tend to agree with your assessment; it's in line with a couple of other top developers.

As for the RAM disk, I would never dream of running a system-level RAM disk (an OS X RAM disk, for example) in production. The relatively obscure TMS RAM-SAN 440 is a dedicated hardware box with its own internal battery bank, system controller, and an internal standby SSD RAID that receives a RAM dump in case of power failure; a dedicated hardware unit that shows up to the system as a curiously fast hard disk. The fastest enterprise SSD boxes from IBM available today boast 150/190 µs read/write latency, while the RAM-SAN 440 boasts 14 µs r/w latency. We figure that ten-plus times lower latency could help make a dent?
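The latency argument can be put in rough numbers using the figures quoted above. This ignores queue depth, parallelism, caching, and everything else that matters in practice, so treat it as an upper bound on the benefit for strictly serialized small I/Os, where operations per second is simply the inverse of latency:

```python
# Serialized small-I/O throughput from the latencies quoted above.
# Real workloads overlap I/Os, so this is an upper bound on the gain.
ssd_read_latency_s = 150e-6    # 150 µs (quoted enterprise SSD read latency)
ramsan_latency_s = 14e-6       # 14 µs (quoted RAM-SAN r/w latency)

ssd_iops = 1 / ssd_read_latency_s
ramsan_iops = 1 / ramsan_latency_s

print(f"SSD:     ~{ssd_iops:,.0f} serialized IOPS")
print(f"RAM-SAN: ~{ramsan_iops:,.0f} serialized IOPS")
print(f"speedup: ~{ramsan_iops / ssd_iops:.1f}x")
```

That is roughly a 10.7x ceiling; whether FileMaker Server's I/O pattern is serialized enough to cash in much of it is exactly the open question.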

Agreed on splitting the file, and on purchasing more processor cores and spreading the load across them. Which brings me to Question 4) The powers that be may want to purchase a new 56-core system and use VMware to run a number of instances of Windows Server 2012 R2 + FileMaker Server. Has anyone had a net positive experience with this approach, as opposed to, say, a deskful of Mac Pro cans with an equivalent number of cores, not virtualized, running stock OS X and FileMaker Server instances? We've been reading mostly negative experiences with the VM approach thus far.

Thanks again!


23 hours ago, Flarb said:

We've been reading of mostly negative experiences with the VM approach thus far.

 

Really? It's pretty much the default deployment in bigger organizations, and I would prefer it over a rack of OS X Server cans. Especially since the VMs let you dynamically assign resources if and when necessary, and you have vMotion to load-balance across physical hosts, etc.

Most issues with virtualized machines come from trying to cram too many instances onto one physical server and/or not giving each VM enough resources.

If you go virtualized then you really need more than one badass physical server; you obviously want redundancy. And you don't want to combine VM instances that each tax the same resource bottlenecks.

I am also surprised that the choice is between VMware (all Windows, plus the VMware tuning skills) and OS X servers. Those are very different ends of the spectrum, with very different skill sets. It'd make me nervous, because I have rarely found both skill sets in the same team.

 

 

