
Unstored calculations, global fields, and efficiency



I want to do what I can to improve the efficiency of a few procedures that are much slower than I would like, particularly when the client is not on the host's LAN. I'm hoping someone can clarify what kinds of fields can slow things down.

(1) I think I can get rid of most of my unstored calculations in tables by moving the calculations directly into scripts, where they will be evaluated when needed, or onto layouts, e.g., in the form of conditions for conditional formatting. My question: Are unstored calculations in tables a problem in general for speed, or do they cause a problem only when they appear on layouts? If the latter, then is there any pressing need to expunge them from tables as long as I am careful to keep them off layouts where they aren't needed?

(2) What about global fields? Are they always a problem or do they slow things down only when they appear on a layout that has been invoked?

(3) If a global field appears in an unstored calculation, does that change the answer to (1)?

The logic of my scripting aside, is there anything else I should be looking out for as I try to improve performance?


Unstored calculations require basically no resources until they are evaluated (by display or reference).

Global fields almost never slow performance.

Can you clarify what issues you're having? Is it when a layout displays? New record creation? Sorting? Record deletion?

There are a handful of reasons why a database may seem slow and a dozen ways to fix it.



The database manages a yearly schedule of workshops. I have attached a sample of the main page one sees on login. The two very slow operations are (1) going from one year to another (instantaneous on my own computer but as long as 14 seconds from a remote computer off the LAN), and (2) producing an email batch, which goes through all of the workshops and constructs, for each presenter, an email covering all of the sessions they teach as a lead presenter or as a team member, based on an email template that the user can modify before sending the batch (this can take over 30 seconds off the LAN and 6 or 7 on).

I already think I know at least some reasons why (1) would be slow: (a) I use a portal filter to allow the user to search any of the fields (see the second attachment for an example of what a filter does), and (b) I use conditional formatting to show (in blue) which fields are modifiable (e.g., for a workshop that has not yet occurred). These two features may be the main culprits, but I was hoping there was some other problem with how I go about things. Just after reading your response, I deleted all calculation fields and globals from my basic table layouts (used for database manipulation, never for display), where they are not needed. That may have helped some. I also tried grouping fields that had identical conditional formats, but I'm not sure that had any effect. I experimented with using list views rather than portals (managing the filtering through scripted finds rather than through a portal filter), but that doesn't seem to have made much difference.

As for the batch email (2), creating a batch email is a somewhat complex operation involving additional finds on a found set (so the operation is basically quadratic), but it is still surprisingly slow. Note that the batch is created from the filtered workshops (whether or not the user actually uses any of the filter fields), so I imagine that slows things down. Maybe I am paying for the extra features.

If it would help to see the database, I can attach it in a future response. The information eventually ends up on the web, so there are no confidentiality issues. But I don't assume anyone would want to wade through the code.

I much appreciate your interest in my problem. If I am doing something in a needlessly inefficient manner, it would be great for me to know.

BTW, I got the idea that unstored calculations involving global fields are a potential problem from the FileMaker Help page on using global fields. I'll quote the passage in question:

If a file is shared, some calculations that include global fields are evaluated on the host instead of on the client. The host performs the evaluation in the following situations:

• when a find request is searching an unstored calculation field whose calculation includes a global field.

• during a find request on a table with access privileges that restrict access to certain records, and the calculation that determines record access includes a global field.

• the evaluation of any other record access calculation that includes a global field in order to determine whether to display related data. For example, a record access calculation to determine the rows to display in a portal or the values to display in a related value list will occur on the host if the record access calculation includes a global field.

To accurately evaluate these calculations on the host, FileMaker Pro transfers all the global field values in the current table from the client to the host. If you know that certain global fields will never be used in unstored calculations or record access calculations, you can improve database performance by creating these global fields in a separate table. This will prevent unneeded global field data from being repeatedly transferred to the host.

The filter for the workshops is one big calculation involving eight global fields.
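To give a concrete idea of its shape (the field and global names here are made up for illustration; the real calculation differs), it looks roughly like this:

    // Simplified portal filter calculation; placeholder names.
    // Each clause passes when its filter global is empty or matches.
    ( IsEmpty ( Globals::gSeriesFilter ) or
        PatternCount ( Workshops::Series ; Globals::gSeriesFilter ) ) and
    ( IsEmpty ( Globals::gPresenterFilter ) or
        PatternCount ( Workshops::Presenter ; Globals::gPresenterFilter ) ) and
    ( IsEmpty ( Globals::gLocationFilter ) or
        PatternCount ( Workshops::Location ; Globals::gLocationFilter ) )
    // ...and five more clauses of the same form, one per filter global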

SamplePageOfDatabase.png.zip

SamplePageOfDatabaseFiltered.png.zip


One of the key principles is to minimise the amount of data flowing between the server and the client. It sounds as though your real problem is the slow WAN connection: optimising around that constraint might produce better results than anything else.

Graphics on layouts are a common cause of excess data. Get rid of all bitmaps and stick to FMP-native objects (lines, rectangles, etc.). Yes, they may look boring, but they are fast.

Portals that have a sort defined (and relationships with a sort) need to pull down the sort field data from all the related records.

Unstored calculation fields don't slow anything down until they are used, which means be careful when and where you display them.


(1) going from one year to another

Is this portal filtered at the portal (rather than the relationship) level for the years?

Portal filtering is helpful, but on a hosted db, if the relationship would show more than a few dozen records with no filtering, you see performance issues. What happens is that ALL the records in the unfiltered relationship get "touched" by FileMaker, meaning ALL the stored data (not unstored calcs or container fields) in each record is brought down locally, and only then is the filter applied. If the filter references an unstored calc, that calc is evaluated for every record. If you've got a text-heavy field, your performance is hosed.

One way to speed this up is to use relationship filtering rather than portal filtering.
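As a rough sketch (the table and field names here are only illustrative, not from your file), the year test moves onto the relationship graph as a global-to-indexed-field match, and any remaining portal filter then only has that year's records to chew on:

    // Relationship predicate, defined on the relationship graph:
    Interface::gSelectedYear = Workshops::Year

    // Optional portal filter, now applied only to the matched records:
    IsEmpty ( Interface::gFilterText ) or
        PatternCount ( Workshops::Series ; Interface::gFilterText )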

Conditional formatting doesn't really affect performance much. I'm not sure how FMP handles multiple instances of the same conditional formatting calculation, but you could try placing a single conditionally formatted text box that, for example, grays out the background. That would be one calc per related record rather than the five or six you have now.
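For instance, something like this as the single condition on that background object (the test itself is made up; use whatever marks a record as modifiable):

    // One conditional-formatting formula on a background object per row,
    // instead of one per field:
    Workshops::Date ≥ Get ( CurrentDate )   // workshop not yet occurred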

(2) producing an email batch, which goes through all of the workshops and constructs, for each presenter, an email covering all of the sessions they teach as a lead presenter or as a team member, based on an email template that the user can modify before sending the batch (this can take over 30 seconds off the LAN and 6 or 7 on).

It sounds like this is a pretty complicated script. I suspect there are ways to optimize the script to get it to run 3-4 times faster, but without seeing the script it's hard to advise. I would try to limit the number of finds you're doing.
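To illustrate the kind of restructuring I mean (step and field names here are hypothetical, not taken from your script): sort the found set by presenter once, then accumulate each presenter's sessions in a single pass instead of performing a find per presenter.

    # One sorted pass instead of one find per presenter (hypothetical names)
    Sort Records [ Restore ; No dialog ]   # sorted by Workshops::PresenterID
    Go to Record/Request/Page [ First ]
    Loop
      Set Variable [ $body ; Value: $body & ¶ & Workshops::SessionSummary ]
      # When the next record belongs to a different presenter (or there is
      # no next record), the current presenter's email body is complete
      If [ Get ( RecordNumber ) = Get ( FoundCount ) or
           Workshops::PresenterID ≠
             GetNthRecord ( Workshops::PresenterID ; Get ( RecordNumber ) + 1 ) ]
        Perform Script [ "Queue One Email" ; Parameter: $body ]
        Set Variable [ $body ; Value: "" ]
      End If
      Go to Record/Request/Page [ Next ; Exit after last ]
    End Loop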

And post your file.

An unstored calc referencing a global field can in fact be faster. It depends on what's being stored in global fields compared to regular fields. Regardless, it doesn't sound like you're doing any of the things that FM says will cause all the globals to go to the server.


Many thanks for your response. Since my last post I finished moving all of my globals to separate tables, on my new understanding that when global fields are needed by the host for, say, a find, all of the global field values in the encompassing table get sent to the host. I was a bit confused about the issue in my earlier post and hope I am correct now in understanding that what one wants to avoid is unnecessarily sending a whole table's worth of globals, not globals themselves, up to the host (and just as much so, judging by your response below, in the other direction as well). In any case, my change didn't have a huge effect. Things are still quite slow across a WAN for the two operations I wrote about, and the batch email initialization (item 2) remains noticeably slow even on the host. So hopefully your explanation provides the key to improving speed.

Is this portal filtered at the portal (rather than the relationship) level for the years?

Though I am not completely sure what you mean by filtering at the relationship level, I think the answer is definitely that I am filtering at the level of the portal.

One way to speed this up is to use relationship filtering rather than portal filtering.

I have been thinking a lot about moving from using a portal to using list view, and managing all of the filtering and sorting through finds and sorts at the scripting level. Is this what you're essentially talking about?

List view is looking attractive for other reasons. For one thing, I will need to have several versions of the main schedule layout depending on whether a "program" (any of the possible adopters of the database, each with its own distinct schedule) wants to include certain fields (series, unit, team, etc.). Since I sort the schedule in two different ways (by series name or chronologically), I could cut the number of different layouts for the schedule by a factor of two using a list view instead of a portal: I would manage the type of sort via scripting rather than creating a separate layout with a different sort. I have also found that portals do weird things when asking you if you want to revert a record that doesn't pass validation criteria. Finally, I don't like the fact that there are in effect two scroll bars with portals.

If, however, I were to change my portals across the board to tables displayed in list view, I run into a number of difficulties. Since I was unhappy with data selection via drop-down value lists, I created my own layouts for data selection and came up with generalized scripts for performing most of my key operations. This was to avoid multiple scripts all doing basically the same thing. These generalized scripts rely on some tables I created that the scripts can use to decide which layouts to invoke, which fields to change, etc. But if I let go of portals, I won't be able to perform many of these operations using even modified versions of my generalized scripts, because it appears that there is no way to sort a table in list view other than to create a script step that does a very specific sort. I.e., there is no equivalent to Set Field by Name for sorting. (Am I wrong about this?) So I would need to go back to separate scripts for all of the different things I do. I would consider doing this if I can improve performance. Using generalized scripts has its downside too.
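One workaround I have seen described (I have not tried it myself, so this is only a sketch with made-up names) is to sort on a single unstored calculation that switches on a global, so that one generic Sort Records step covers every ordering, though of course sorting on an unstored calculation has its own cost over a WAN:

    // zSortKey, an unstored calculation:
    Case (
      Globals::gSortBy = "series" ;
        Workshops::SeriesName ;
      // otherwise chronological: zero-padded so text order = date order
        Right ( "0000" & Year ( Workshops::Date ) ; 4 ) &
        Right ( "00" & Month ( Workshops::Date ) ; 2 ) &
        Right ( "00" & Day ( Workshops::Date ) ; 2 )
    )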

This is a whole other issue, so don't feel compelled to address it unless you have any insight you would like to share. I could still display my main schedule in list view and keep other layouts in portals.

Conditional formatting doesn't really affect performance much.

Thanks, good to know.

It sounds like this is a pretty complicated script. I suspect there are ways to optimize the script to get it to run 3-4 times faster, but without seeing the script it's hard to advise. I would try to limit the number of finds you're doing.

And post your file.

3-4 times faster would be terrific.

See the attachment. You can log in as guest1; the password is the same. I fixed things up so that it will be hard to mistakenly send a batch email, but you can do pretty much whatever you like after clicking Send Email, and then hit Cancel when you are done.

Navigating from year to year should be self-evident, but please let me know if anything needs clarification.

ForFMForum.fp7.zip


D J:

I think I just got (about 5 minutes after finishing my last post) what you mean by filtering at the relationship level. That is what I did much earlier in my development of this database, and I switched to portal filtering because it seemed so much cleaner and simpler.

I will try to implement what I believe you mean on the relationship graph, but I think I will need to get the necessary globals back into the relevant tables. If you have any other insights in the meantime, please feel free to send them on.

Jerry



What is really needed to get good performance over a WAN is to separate data fields from calculations and globals. The way to do this would be the separation model, as this separates the data from the business logic. With such a model, each user can have a local copy of the front end while the data resides on the server. Thus no graphics or other items requiring heavy lifting pass over the WAN (unless the data itself is graphics or similar).

My impression (I'm not an expert on the subject) is that in FileMaker it is not possible to achieve complete separation. It also appears to be quite involved, particularly when retrofitting the model to an existing solution. You'll find plenty in the forums on the subject of the separation model.

It occurs to me that another way of doing this might be to move the data to an SQL system and replace the data fields in the FileMaker solution with calculations that point to the data now on an SQL server. This would, I think, be much less work than altering a current system to adopt the separation model. There would be a hit to native performance, but the improvement over the WAN should make this appear minimal. This approach would bring other benefits, such as global search capability (search for a term anywhere in the database) via the SQL query tool.

Norman


One way to speed this up is to use relationship filtering rather than portal filtering.

There may be something wrong with how I did the implementation, but I set up my relationships to do the filtering and the execution is much slower. I also tried other variations of the original file, e.g., removing filters and sorts. Here are the surprising results:

portal filter and sort used: 11.7 seconds

portal filter removed: 12.7 seconds

portal filter and conditional formats removed: 9.3 seconds

portal filter, sort, and conditionals removed: 9.3 seconds

filter through relationships: 32 seconds

I didn't test the email batch procedure, but the corresponding times would have been about three times as long.

The only way for me to implement filtering through relationships was to use an unstored calculation that referred to global and indexed fields on the left side. There were a total of eight extra relationships added to the relationship graph, and more than two new TOs needed to be inserted. So maybe the slowdown isn't that surprising after all.

I'm back to considering getting rid of the portal and changing to list view. The year would be set up through an initial find, and then whenever the user wanted to add a filter or change the year, I would modify the current find in a script. At most one field would change in the find at any one time.
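In script-step terms, I imagine something like the following (the field and global names are placeholders):

    # Scripted find replacing the portal filter (placeholder names)
    Enter Find Mode [ ]
    Set Field [ Workshops::Year ; Globals::gSelectedYear ]
    If [ not IsEmpty ( Globals::gSeriesFilter ) ]
      Set Field [ Workshops::Series ; Globals::gSeriesFilter ]
    End If
    Perform Find [ ]
    Sort Records [ Restore ; No dialog ]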

If removing the portal filter made at best a minor difference, then I don't expect to see anything more than a minor difference in moving to list view and scripting the data filtering. The question is whether portals will become very inefficient as the database grows. My sense is that the issue for portals is the number of records left after the data has already been filtered by relationships. In the case of my database, there will only ever be at most 50-60 records going into the portal filter (most of the filtering will happen via the relationship graph), so if my understanding is correct, portal filters may not turn out to be a problem for me. If I am wrong, anyone, please let me know. I may still opt for list view for the other reasons listed earlier in the post. But Vaughan may be right that the problem is essentially one of bandwidth, unless data separation or server-side scripting offers a way around it.

I have to say that I am mystified as to why a basic find can be instantaneous on my own computer and take around ten seconds when I'm working remotely. Doesn't the program upload the necessary globals and unstored calculations to be used in the find (minimal transfer), after which the server performs the find on its copy of the data and the found records are downloaded and displayed on the client? Obviously I am missing something. Is there some document that explains what happens when and where?



I tried data separation a while back and thought it was somewhat inelegant and potentially problematic to replace one version of the interface with a substantially developed version. I decided to go back to the one-file model before I got too deeply into it. But if there would be a very noticeable difference in performance (say, at most three or four seconds to change years rather than ten or eleven), then I would be willing to try. Can anyone else say whether data separation will make a big difference? And if so, is there a description of how to take an already developed file and split it into data and interface files? I'm unclear about whether one can start with two copies of the same file and pare them down while changing some references, or whether at least one of the files needs to be recreated from the ground up.


Could you move the email blast to a server script? See Agnes Riley's article.

This suggestion is intriguing. Would it be like IWP, which does a lot of the work on the server side and can therefore be much faster (until, I have heard, the server gets loaded down with too many other IWP users)? I tried, however, to implement your suggestion and wasn't clear exactly how to go about it. One must, it seems, change the reference to the file in the Perform Script step within the script or button doing the calling, but when I did that on the server itself, the reference just changed back automatically to what it was. When I tried changing the reference in a file I was developing and then opened it remotely from another computer on my home network, the script did nothing at all. Is there a fuller description of this process somewhere? And, as with data separation, does anyone else have an idea of whether this will make a substantial contribution?



I thought about what I said about an FM calc field pointing to an SQL data field and realized that is probably not a built-in feature. So I looked around and saw this:

http://www.dracoventions.com/products/2empowerFM/family/sql.php

This makes what I described possible and, much to my surprise, it is a free plug-in. Of course, one would need to be sure the product is going to continue and that it is fast, bug-free (doesn't cause crashes), and reliable. But, as I said before, I think linking to an SQL server in this way is better than retrofitting the separation model. SQL databases are much faster than FileMaker Server, and they are mostly free.

Norman

