
Relationship and portal combination


sal88

This topic is 959 days old. Please don't post here. Open a new topic instead.

Recommended Posts

Hi all

I can't work out what combination of relationship/portals/portal filters is best for performance.

The first scenario is the customer record, which has three portals that all ultimately point to the same Log table; the portals are mainly distinguished by the value of the Type field. Each portal sits in its own tab panel, so only one is displayed at a time. The portals also have a "free text" filter that refers to a global field, so users can search the portal.

The portals display the following data:

  • Recurring Services (type = 1 and isempty (invoice_idf)) - 100 records - sorted descending by Log_ID (number field)
  • Jobs (type = 2) - 10,000 records - sorted descending by Log_ID
  • Items (type = 3) - 920 records - sorted descending by a "due date" field - but could probably be changed if it were an issue

The record counts are as per the 'busiest' customer - which is quite slow to load.

So I'm wondering what configuration would be best when it comes to:

  • Sorting - should this be via the relationship, or portal, or both?
  • Filtering - relationship or portal or both?
  • Table occurrences - one shared by all portals, or one for each portal?

 

The second scenario is similar but relates to the Invoice record pulling in that same Log table data for totals.

The Invoice record has four calculations which I believe are the source of slowness:

  • Recurring Services total (currently looks to all Log records that have the required Invoice ID, and a type field of '1')
  • Jobs total (same as above but type 2)
  • Items total (same as above but type 3)
  • net total - sum of the above three

In this scenario, should there be a TO for each of the top three calculations, or a single TO plus three line_net_sale fields in the Log table, each one populated depending on the Log's type field, so that the Recurring Services total calculation would be, for example, sum (invoices::line_net_sale_RS)?

Also, should the net total add up the other three calculation fields, or should it point to its own TO, which shows all Logs of the same Invoice ID?

All advice appreciated!


If you are concerned about performance then try to avoid portal filters, especially when dealing with such a large number of records. Does it even make sense to cram 10k records into a portal? Who can make any sensible use of that?

Anyway, seeing as your primary goal (in the first scenario) is to divide the records by type, you could simply use a filtered relationship along the lines of:

Customers::CustomerID = Log 2::CustomerID
AND
Customers::gType = Log 2::Type

and populate the global gType field with the corresponding value when switching to a tab. In fact, if it weren't for the requirement of different sorting, you could use a single portal and fake the tabs.
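The suggested filtered relationship can be modelled outside FileMaker. A toy sketch in Python (table and field names hypothetical), showing how matching on both the customer id and a global gType value narrows the related set, with a tab switch simply changing the gType argument:

```python
# Toy model (Python, not FileMaker) of the suggested filtered relationship:
# matching on both CustomerID and a global gType field, where a tab-switch
# script step sets gType and the portal's related set changes accordingly.

logs = [
    {"customer_id": 7, "type": 1, "log_id": 101},
    {"customer_id": 7, "type": 2, "log_id": 102},
    {"customer_id": 7, "type": 2, "log_id": 103},
    {"customer_id": 8, "type": 1, "log_id": 104},
]

def related_logs(customer_id, g_type):
    # Both predicates act like relationship match fields, so the host can
    # resolve them from the Log table's index before sending records down.
    return [r["log_id"] for r in logs
            if r["customer_id"] == customer_id and r["type"] == g_type]

assert related_logs(7, 1) == [101]       # "Recurring Services" tab
assert related_logs(7, 2) == [102, 103]  # "Jobs" tab sets gType = 2
```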

The additional "isempty (invoice_idf)" requirement could be accomplished by portal filtering, with only about 100 records of type 1 to process. However, "searching" the portal using a "free text" global field is simply out of the question when viewing type 2 records. I would suggest you look into performing an actual find and showing the results in a card window.

IMHO, your second scenario deserves a separate thread.

 

 


Thank you Comment!

I guess that means the fewer related table occurrences the better, even if they are effectively not on display via a layout object. Or is it the sorting of them that affects performance?

Does it make any difference whether the sorting is done via the relationship, or the portal?

I'll test the changeable global field approach; however, I've just realised it might not be an option, as I have a multi-window configuration: one window might have the Jobs tab open, and another the Items tab. Multiple windows are, I'm sure, another problem for speed, so I'll find out whether anyone is actually using multiple windows.

If I have to stick with multiple windows, I will keep the multiple TOs but ensure that the filtering is done via the relationship and not the portal (with the exception of the free text filter), as you suggest.

Yes I think it is time to put a limit on the number of records that it is pulling through. I take it that should be done via the relationship? What is the best field to filter by? 

I may have misunderstood where the problem lies with so many related records: is it the display of them in the portal, searching the portal, or a bit of both?

Do you have an idea of what the maximum number of records should be? I'd say the average customer has 800 related type 2 records. And we have thirty users over WAN, who will all have this layout open (now I think about it I'm not surprised there are performance issues!!).

I'll post the other scenario separately.

Many thanks


That's a lot of questions ... I think you will find answers for some, if not most of them here:
https://support.claris.com/s/article/Top-Tips-Optimizing-Performance-of-FileMaker-Custom-Apps-in-the-Cloud?language=en_US

 

2 hours ago, sal88 said:

one window might have the Jobs tab open, and the other the Items tab.

Well, that kind of contradicts what you said at the beginning:

On 8/25/2021 at 4:52 PM, sal88 said:

They are all in their own tab panel so only one is displayed at a time.

 

2 hours ago, sal88 said:

I may have misunderstood where problem lies with so many related records

The problem I pointed out is portal filtering. Unlike filtering the relationship which uses the child table's index, portal filtering is the equivalent of an unstored calculation; if there are 10k related records, the portal filter must perform 10k evaluations of the portal's filtering expression before it can decide which of the related records should be shown. In practice, this restricts the use of portal filtering to cases where the related set size is counted in hundreds, not thousands (YMMV). 
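The index-versus-scan distinction above can be sketched as a rough cost model in plain Python (not FileMaker; all names hypothetical). The relationship consults a prebuilt index keyed on the match fields, while the portal filter must visit every related record and evaluate an expression for each:

```python
# Rough cost model contrasting a relationship match, which can use the
# child table's index, with a portal filter, which must evaluate an
# expression once per related record (like an unstored calculation).

from collections import defaultdict

# Hypothetical Log rows: (log_id, customer_id, type)
logs = [(i, i % 3, 1 + i % 2) for i in range(30)]

# "Index" on (customer_id, type): the relationship consults this directly.
index = defaultdict(list)
for log_id, cust, typ in logs:
    index[(cust, typ)].append(log_id)

def via_relationship(cust, typ):
    # One keyed lookup; no per-row evaluation.
    return index[(cust, typ)]

def via_portal_filter(cust, typ):
    # Every related record is tested, one evaluation per row.
    related = [r for r in logs if r[1] == cust]
    return [log_id for log_id, _, t in related if t == typ]

# Same result either way; only the amount of work per request differs.
assert sorted(via_relationship(0, 1)) == sorted(via_portal_filter(0, 1))
```

With 10k related records, the second approach performs 10k evaluations per refresh, which is the freeze described below.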


1 minute ago, comment said:

Well, that kind of contradicts what you said at the beginning:

That's right, sorry it completely slipped my mind. Hopefully that's a sign that I can remove the capability!

Thank you for the article I will read it right now.

2 minutes ago, comment said:

The problem I pointed out is portal filtering. Unlike filtering the relationship which uses the child table's index, portal filtering is the equivalent of an unstored calculation; if there are 10k related records, the portal filter must perform 10k evaluations of the portal's filtering expression before it can decide which of the related records should be shown. In practice, this restricts the use of portal filtering to cases where the related set size is counted in hundreds, not thousands (YMMV). 

Ah, I understand. That would explain why FileMaker freezes when I try to search those 10,000 related records.

Does that mean that even if the free text filter field is empty, it still performs 10k evaluations, because the filter includes the line IsEmpty ( g_filter )?


7 minutes ago, sal88 said:

Does that mean that even if the free text filter field is empty, it still performs 10k evaluations,

In theory, yes (although the evaluation exits when it finds that the first condition is true). In practice, you would have to ask a Claris engineer what kind of optimization they put in place - or conduct a comparative speed test.
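The short-circuit behaviour described here can be sketched in Python (not FileMaker; the function and field names are hypothetical). Python's `or` stops at the first true operand, just as the filter expression is said to exit on the first condition, yet the expression still runs once per related record:

```python
# Sketch of a portal-filter expression like
#   IsEmpty ( g_filter ) or PatternCount ( Log::notes ; g_filter )
# The expression runs once per related record, but `or` short-circuits:
# with an empty g_filter the costly text search is never reached.

searches = 0  # counts how often the expensive second condition runs

def expensive_match(text, needle):
    global searches
    searches += 1
    return needle.lower() in text.lower()

def portal_filter(rows, g_filter):
    # Python's `or` stops at the first true operand.
    return [r for r in rows if g_filter == "" or expensive_match(r, g_filter)]

rows = ["alpha", "beta", "gamma"] * 1000        # 3,000 related records

kept = portal_filter(rows, "")                  # empty filter: keep everything
assert len(kept) == 3000 and searches == 0      # the text search never ran

kept = portal_filter(rows, "beta")              # non-empty filter
assert len(kept) == 1000 and searches == 3000   # one evaluation per row
```

Even in the cheap empty-filter case, the first condition is still evaluated 3,000 times; only a comparative speed test would show whether that overhead matters in practice.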

 


Great article, and I've implemented all the fixes; however, I can't find a recommended way to limit the number of records that the relationship pulls in. Is it enough to just have a script-populated global date field (e.g. thirty days ago) and then base the relationship on that? Or is there a better way? Ultimately I just need the hundred or so most recently created records.
Thanks


As comment suggested, try using a find. Then grab those ids and set them into a selector global field on the parent side of the portal relationship. When you perform the find, enter Find mode first and then go to the layout (or use a layout with no fields on it for the find) so that no records are pulled down when you first enter the layout. We often use this technique instead of filtered portals.
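The selector-global technique can be modelled in Python (a toy sketch; all names hypothetical). The found ids are written, return-separated, into one global field on the parent side, and "=" matches a child whenever its id appears anywhere in that list; a ">" range match on a stored threshold is shown alongside for comparison:

```python
# Toy model of a multikey selector global versus a ">" range match.
logs = set(range(1, 1001))  # child log_ids

# Range style: the parent stores a threshold; children above it match.
min_log_id = 900
range_match = {i for i in logs if i > min_log_id}

# Multikey style: the parent's selector global holds the wanted ids as a
# return-separated list (FileMaker uses carriage returns); "=" matches a
# child when its id appears anywhere in the list.
selector_global = "\r".join(str(i) for i in range(901, 1001))
wanted = set(selector_global.split("\r"))
multikey_match = {i for i in logs if str(i) in wanted}

# Both styles select the same 100 newest records here.
assert range_match == multikey_match == set(range(901, 1001))
```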


There used to be a big speed difference between a relationship using a < comparison vs. one using the = operator with multikey. I believe this has been reduced to negligible - but you can conduct your own test to verify this in your own environment.

As I mentioned earlier, my main concern here was to warn you against portal filtering. If I am not mistaken, if the relationship itself is filtered using a global field as an additional matchfield, only related records will be sent down from the host. But I am not an expert on hosted solutions optimization, so you might post any questions on that separately and get more qualified answers from someone who is.

 


I added a 'minimum log id' field to the customer record table, to be populated on record load when the number of related records goes over 500; it holds the 500th-newest log id for that customer. The relationship would then show records where the log id was above the value in the new 'minimum log id' field.

While it was an improvement, it was still slow; from what I can tell, this was due to the "greater than" operator in the relationship. I'm not sure whether this would have been different if it were a date field.

Instead, I've added a new 'top 500 client id' field to the Jobs/Cases tables. This is an auto-enter field which is populated with the client id. So the relationship is clients::client_idf==cases::top_500_client_idf

To maintain this I have a daily scheduled server script which starts with an ExecuteSQL query:

select  top_500_client_idf
,       count(*) as case_count
from    cases where top_500_client_idf is not null and closed = '1'
group by
        top_500_client_idf
having  count(*) > 500

From there it then blanks the top_500_client_idf field where the record is not part of the latest 500. I'm trying to think of a simpler way of doing this, without looping through records, e.g. a lookup.

This solution means the user does not have to perform any queries or match key population on record load.
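The maintenance logic above can be sketched in Python's sqlite3 (not FileMaker: the ExecuteSQL function is SELECT-only, so the blanking step would still need native script steps or an external connection). Field names follow the post; the cap is shrunk from 500 to 3 to keep the demo small, and the set-based trim uses a window function (SQLite 3.25+) rather than a record loop:

```python
# Sketch of the daily maintenance job: find clients over the cap, then
# blank top_500_client_idf on everything past each client's N newest
# closed cases, without looping through found sets in a script.
import sqlite3

N = 3  # stand-in for 500

con = sqlite3.connect(":memory:")
con.execute("""
    create table cases (
        case_id integer primary key,
        top_500_client_idf text,
        closed text)""")
# Client 'A' has 5 closed cases (over the cap); client 'B' has 2.
con.executemany(
    "insert into cases values (?, ?, ?)",
    [(i, "A", "1") for i in range(1, 6)] + [(i, "B", "1") for i in (10, 11)])

# The post's diagnostic query: which clients exceed the cap?
over = con.execute("""
    select top_500_client_idf, count(*) as case_count
    from   cases
    where  top_500_client_idf is not null and closed = '1'
    group by top_500_client_idf
    having count(*) > ?""", (N,)).fetchall()
assert over == [("A", 5)]

# Set-based trim: rank each client's cases newest-first and collect the
# ids past the cap, then blank the match key on just those rows.
stale = [r[0] for r in con.execute("""
    select case_id from (
        select case_id,
               row_number() over (partition by top_500_client_idf
                                  order by case_id desc) as rn
        from cases
        where top_500_client_idf is not null and closed = '1')
    where rn > ?""", (N,))]
con.executemany(
    "update cases set top_500_client_idf = null where case_id = ?",
    [(i,) for i in stale])

remaining = [r[0] for r in con.execute(
    "select case_id from cases where top_500_client_idf is not null "
    "order by case_id")]
assert remaining == [3, 4, 5, 10, 11]  # each client keeps its newest N
```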

Thanks for all your help guys!


