Perform Script on Server - Push Boundaries




I was wondering if it would be worth opening a separate forum for this functionality? It has huge potential and could completely transform the performance of the solutions we build, but at the same time it is challenging to take advantage of.

 

This is an excellent article on the subject: http://buzz.beezwax.net/2014/04/04/an-introduction-to-perform-script-on-server

It covers all the basics. It also correctly states that the server can't be aware of the user's context (found set, variables, sort order, etc.).
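
For anyone who hasn't used the step yet, here is a minimal sketch of handing context over explicitly via a script parameter (the layout, field, and script names are hypothetical, and sAmountTotal is assumed to be a summary field):

Client script:

    Perform Script on Server [ "Sum Invoices For Customer" ; Parameter: Invoices::CustomerID ; Wait for completion: On ]
    Set Variable [ $total ; Value: Get ( ScriptResult ) ]

Server script "Sum Invoices For Customer":

    # the only context the server receives is the script parameter
    Set Variable [ $customerID ; Value: Get ( ScriptParameter ) ]
    Go to Layout [ "Invoices" ]
    Enter Find Mode [ Pause: Off ]
    Set Field [ Invoices::CustomerID ; "==" & $customerID ]
    Perform Find []
    # the summary field evaluates over the server's own found set
    Exit Script [ Result: Invoices::sAmountTotal ]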

 

It would be great if there were a space where we could discuss, dive deeper, and exchange ideas and concepts on how to utilize this powerful function.

 

For starters, I would be very interested to hear if anyone has found a good way to transfer a found set to the server, perform an operation (such as summarizing or sorting, for example - try 200k records in FMP over 3G), and get the result back to the client. I know the answer probably is: it can't be done. But what IF... imagine! AND often, there IS a way with FileMaker.

 

Anyone care to join in? :-o

 

--

Further reading:

http://timdietrich.me/blog/filemaker-13-perform-script-on-server-insanity/

http://www.filemakertoday.com/component/k2/4035-filemaker-13/5181-the-new-perform-script-on-server-script-step-filemaker-13


Can you give me an idea as to where the server starts being the bottleneck, as opposed to the network connection? For example, take an 802.11g wireless network (approx. 2.5 MB/s effective throughput) and a pool of 50 clients connecting through FMP (WebDirect not active) to a Mac mini server (i7, 1TB SSD, 16GB RAM).


Servers have four potential bottlenecks: processing power, memory, disk I/O speed, and network bandwidth. Depending on the server specs, the user load, the design of the solution, and the nature of the user interaction (reading data vs. running summary reports, for instance), you will see stress on one or more of these. Typically, processing power and disk I/O get hit first. So if you have a minimal server (which a Mac mini is), pushing more processing tasks to the server instead of letting the clients process them can kill performance for everything, and you may actually see a decrease in performance instead.


Even with 64-bit and SSDs, both of which we have these days?

 

I'm not questioning your statement, but I guess I have to revise my take on the processing power available.


Personally, I would consider the inability to allocate large segments of RAM to specific processes and to use large chunks for caching a "bottleneck" in 32-bit.

 

Mac mini vs. "real server" aside (let's assume we have an adequate server), I'm sure there are many situations where the server-to-client caching of records is slower than server-side processing.


Conceptual idea for transferring a user's context to the server

 

Based on all the reading I've done, I claim that we can't transfer a found set of records directly without taking a significant performance hit. This would call for aggregation - and we know how that goes... entire records being cached, etc. Also see: http://www.teamdf.com/weetbicks/100/the-search-for-fast-aggregates-implementation-results

 

But what if we interpret "context" differently? 

 

If you need to store information about accounts (preferences, etc.), you probably already have a table with records matching your FM accounts. One record for each account. My idea is to add a field to the accounts table for each global field and each global variable used during a user's (local) session - plus corresponding "result" fields. If you now modify your scripts to also write the values of $$vars and global fields to the account record, you have a way of piping the "context" to the server - without any overhead. Now that the context is known to the server, it can perform, calculate, aggregate, etc. and write the result back to the account record. Once again with no overhead. From there, a local script can re-insert the result into the user's local context.
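
Here is a minimal sketch of the client side of this idea (field and script names are hypothetical; I'm assuming the accounts table is called Accounts and the context being piped is a Quick Find string):

    # end of the local search script: persist the user's context
    Set Variable [ $search ; Value: $$quickFindText ]
    Go to Layout [ "Accounts" ]
    Enter Find Mode [ Pause: Off ]
    Set Field [ Accounts::AccountName ; "==" & Get ( AccountName ) ]
    Perform Find []
    Set Field [ Accounts::LastQuickFind ; $search ]
    Commit Records/Requests [ With dialog: Off ]
    # hand off to the server; only the account name travels
    Perform Script on Server [ "Aggregate For Account" ; Parameter: Get ( AccountName ) ; Wait for completion: On ]
    Go to Layout [ original layout ]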

 

Let's take aggregating records based on a found set, for example. If you don't have the Quick Find field on the layout yet, place it, and create an equivalent field and a result field in the accounts table. Have your search script write the search string to the account record after a successful search. Now this very search can be replicated on the server: execute the search server-side, run the needed aggregate on the found set, and insert the result into the result field on the account record. A local script can then pick up the result and transfer it into a field of choice in the desired/original destination table.
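
Continuing the sketch from above, the server-side script might look like this (again, the names are hypothetical, and sAmountTotal is assumed to be a summary field in the target table):

Server script "Aggregate For Account":

    # look up the calling user's stored context
    Set Variable [ $account ; Value: Get ( ScriptParameter ) ]
    Go to Layout [ "Accounts" ]
    Enter Find Mode [ Pause: Off ]
    Set Field [ Accounts::AccountName ; "==" & $account ]
    Perform Find []
    Set Variable [ $search ; Value: Accounts::LastQuickFind ]
    # replicate the user's search on the server
    Go to Layout [ "Invoices" ]
    Perform Quick Find [ $search ]
    # the summary field evaluates over the server's found set
    Set Variable [ $total ; Value: Invoices::sAmountTotal ]
    # write the result back for the client to pick up
    Go to Layout [ "Accounts" ]
    Set Field [ Accounts::AggregateResult ; $total ]
    Commit Records/Requests [ With dialog: Off ]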

 

IF I'm thinking correctly, this could replace the need for a summary field... under certain conditions, of course.

