FM_Newbie

'Find Operation' Going Slow


Hi Guys,

I am having trouble with a Find operation that is getting slower and slower each day. There are also some design considerations from articles I have read recently that I will be applying.

I inherited this solution from another developer. The Find is based on calculation fields that in turn reference other calculation fields (both sets of calculation fields have the 'Do not store calculation results' option checked).

I know it's not the best design; I wanted to ask about any possible improvements to the Find process, as it is getting clumsy now.

I have indexed the relevant fields, and the data in the table isn't huge - around 2,000 records with 50 fields - but the Find is mainly based on unstored calculation fields.

Can PSoS (Perform Script on Server) be used for the Find operation?

Is there anything else I should do at the outset?

Thanks much,



A Find on unstored calc fields will always end up being slow. Do those fields _really_ need to be unstored calcs, or can they be normal Text/Number fields with auto-enter calculations? Then they can be indexed for much faster searches.
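Roughly, the difference looks like this (field and relationship names are just placeholders, not taken from your file):

   # Unstored calculation - cannot be indexed, so every find evaluates it record by record:
   #   PickupCompany (Calculation) = Clients_PickUp::CompanyName   [ Do not store calculation results ]

   # Indexed alternative - a regular text field:
   #   PickupCompany (Text, indexed)
   #   Auto-enter > Calculated value = Clients_PickUp::CompanyName
   #   with "Do not replace existing value of field" unchecked

One caveat: an auto-enter calculation only re-evaluates when a field it references in the same record changes, so edits made directly in the related Clients record will not flow back into Jobs automatically.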


I'll echo Webko's sentiment here: my take on it is that you have to bite the bullet and retool the solution so that these fields are regular stored fields.  There is no reason - except bad design - why a search on 2,000 records should be slow.

PSoS may help as a band-aid for now but you'll have to test.  You also have to consider whether the server can handle the PSoS load you throw at it.  If the server is marginal then you are just going to tip it over.
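For what it's worth, here is a rough PSoS sketch (script, field, and layout names are just placeholders): the client hands the criteria to the server, the server performs the find and returns the matching primary keys, and the client then isolates those records locally.

   # Client script: "Find Jobs (PSoS)"
   Set Variable [ $searchText ; Value: Jobs::gSearchCriteria ]   # a global field holding the user's criteria
   Perform Script on Server [ "Find Jobs (Server)" ; Parameter: $searchText ; Wait for completion: On ]
   Set Variable [ $idList ; Value: Get ( ScriptResult ) ]
   # ...then isolate the returned IDs locally, e.g. via a global multikey field and Go to Related Record

   # Server script: "Find Jobs (Server)"
   Go to Layout [ "Jobs" (Jobs) ]
   Enter Find Mode [ Pause: Off ]
   Set Field [ Jobs::PickupCompany ; Get ( ScriptParameter ) ]
   Set Error Capture [ On ]
   Perform Find []
   # Collect the primary keys of the found set and hand them back to the client
   Set Variable [ $ids ; Value: "" ]
   Go to Record/Request/Page [ First ]
   Loop
      Set Variable [ $ids ; Value: List ( $ids ; Jobs::JobID ) ]
      Go to Record/Request/Page [ Next ; Exit after last: On ]
   End Loop
   Exit Script [ Result: $ids ]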

On 01/24/2017 at 9:30 AM, FM_Newbie said:

I wanted to ask about any possible improvements to the Find process

That's hard to say without knowing what those calculations do and what you are trying to find. In many cases you can simply move the find to operate on the original stored fields.

For example, say you have a field for DateOfBirth and an unstored calculation of Age. Instead of searching the Age field for numbers higher than 18, you can calculate the date 18 years before the current date and search the DateOfBirth field for dates earlier than that.
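A rough scripted version of that (assuming a People table with a stored DateOfBirth field):

   # Find everyone older than 18 by searching the indexed DateOfBirth field
   # instead of an unstored Age calculation
   Set Variable [ $cutoff ; Value: Date ( Month ( Get ( CurrentDate ) ) ; Day ( Get ( CurrentDate ) ) ; Year ( Get ( CurrentDate ) ) - 18 ) ]
   Enter Find Mode [ Pause: Off ]
   Set Field [ People::DateOfBirth ; "<" & GetAsText ( $cutoff ) ]
   Set Error Capture [ On ]
   Perform Find []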

 



Thanks for the comments. I will try to explain the scenario here to be clear:

There is a Jobs table and a Clients table. Every job has more than one client: the client where we pick up, the client where we drop off, the client who pays, etc. These clients can also change during a job's lifecycle - someone entered the wrong client, or there is some other issue. I had to add these client roles to the Jobs table as calculation fields so that they update if something changes.

The search is mainly along the lines of 'find all jobs with the same pick-up address', or similar. These target search fields are calculations and hence unindexed.

I hope I have made my question clear.

Cheers

 


I am afraid this part makes no sense to me:

3 minutes ago, FM_Newbie said:

I had to add these client roles to the Jobs table as calculation fields so that they update if something changes.

I don't think you need to have any client details in the Jobs table at all. Except perhaps as a convenience, for display purposes (when a portal would not provide the required view). Even then, there should be no need for one calculation to depend on another.

 

6 minutes ago, FM_Newbie said:

The search is mainly along the lines of 'find all jobs with the same pick-up address'

That too is not quite clear. If you're speaking about a particular, single address, then the correct procedure would be to search the address field of the related Clients table. Otherwise it might be necessary to construct some auxiliary relationships to interlink Jobs with common addresses.
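As a very rough sketch of the second approach (relationship and field names are just placeholders): give Jobs a stored, indexed key field holding the pick-up address, add a self-join on it, and use Go to Related Record to isolate all jobs that share the current job's address.

   # Relationship graph (self-join on a stored, indexed field):
   #   Jobs::PickupAddressKey  =  Jobs_SamePickup::PickupAddressKey

   # Script step, run from a record on a Jobs layout:
   Go to Related Record [ Show only related records ; From table: "Jobs_SamePickup" ; Using layout: "Jobs" (Jobs) ]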

 

With all that said, I am still puzzled about one thing: you say you have around 2,000 records. That's practically nothing. Even with complex unstored calculations, the find should be instant or very close to it. When you say that the find is "getting slower and slower each day", I can't help thinking there must be some other factors at play here.

 


Hi @comment, sorry for the confusion on this. I will try to clarify a bit more.

On 1/27/2017 at 4:23 PM, comment said:

I don't think you need to have any client details in the Jobs table at all. Except perhaps as a convenience, for display purposes (when a portal would not provide the required view). Even then, there should be no need for one calculation to depend on another.

 

The layout in question is a Jobs table layout, and the calculation fields are there for the user's convenience, to display data from the Clients table. The problem is that the user wants to search on the same layout, on those calculation fields.

What I meant by 'getting slower every day' was a bit of a sweeping comment, out of frustration. The search has gradually gotten slower over time and now seems much slower than it was when the database started out.

1 hour ago, FM_Newbie said:

the user wants to search on the same layout

Users can still search on the Jobs layout. In fact, they must search on a Jobs layout if they hope to find Job records that meet the criteria. The point is that they need to enter the search criteria into a field of the Clients table - either directly, or with the help of a script.
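A minimal sketch of such a script (field names are placeholders, since I haven't seen your full definitions); it runs from a Jobs layout and puts the criteria into the related Clients field:

   # Run from a layout based on Jobs
   Set Variable [ $address ; Value: Get ( ScriptParameter ) ]
   Enter Find Mode [ Pause: Off ]
   Set Field [ Clients::Address ; Quote ( $address ) ]   # quoted for an exact-phrase match
   Set Error Capture [ On ]
   Perform Find []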

 

1 hour ago, FM_Newbie said:

the calculation fields are there for the user's convenience, to display data from the Clients table.

Well, are you sure you need them? Why not display the actual client data directly in a portal?

 

1 hour ago, FM_Newbie said:

The search has gradually gotten slower over time

My point still stands. With only 2,000 records, even a grossly inefficient search should be nearly instant.

 


It would be very helpful if you could post the file.

Or a clone of the file.

Or a print-to-PDF or screen shot of the field and table definitions. 


Attached are three files:

Script: the script used for the search on the main layout

Field def1: the main calculation field used in the find

Field def2: the base field on which the calculation field is built

field def2.jpg

Field def1.jpg

Script.jpg


Use Not IsEmpty

And make TOP_LocalCompany a text field with an auto-enter calculation that matches the current calc definition - I'm assuming the actual value doesn't really change very often.
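Since I can only see the screenshots, the exact expression below is a guess (Parties::CompanyName stands in for whatever the real calc references) - but the shape would be something like:

   # In the script, prefer the boolean form of the test:
   If [ not IsEmpty ( Parties::CompanyName ) ]
      # ...
   End If

   # And define TOP_LocalCompany as a Text field instead of an unstored calculation:
   #   Auto-enter > Calculated value = Parties::CompanyName
   #   with "Do not replace existing value of field" unchecked
   # That keeps the field indexed and therefore fast to search.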


Hi @Webko, unfortunately it does change quite often after the record is created, so I can't miss any change that is made. ;(


Let me make two points here:

First, the calculation does not fit your earlier description. I am not sure whether this is a mistake in the description or in the actual design, but the calculation returns the first paragraph of the first (in the relationship's order) related party's name. This is very different from:

On 01/27/2017 at 10:01 AM, FM_Newbie said:

The search is mainly along the lines of 'find all jobs with the same pick-up address', or similar.

 

Next, I made a test file with 2,000 parent (Jobs) records and 20,000 child (Parties) records and added a calculation field similar to yours. Searching this field produces instant results. I am attaching the file so you can test this for yourself.

 

 

Test.fp7.zip


Thank you, @comment, for the demo.

I will have a look at the design again to simplify it. I took this over from another developer, and since it was working, I didn't touch it.

Overall, the database file size is nearly 4 GB now (with container data stored externally), and the Jobs table in question has a total of 115 fields. This could be contributing to the slow searches.

Thanks again for your help 


4GB is not massive enough to account for the slowdowns you have described.  But it is big enough to make me wonder if you are really storing all container data externally.  How big is the file when you save it as an empty clone?  And how big when you save it as compacted?  Do things get noticeably faster after you save it as compacted?

115 fields in the job table is iffy; it usually does not take 115 attributes to describe an entity so I would look hard at getting rid of some of these, especially if you have a lot of calcs in there.


This isn't really getting anywhere. Statements about timing have been vague. It may be time to move on and tell the company to work with an experienced developer.

It would be very helpful if you could post the file. Or a clone of the file.

Or a print-to-PDF or screen shot of the (complete) field and table definitions. 

 


@Wim Decorte, the clone file size is approximately 8 MB. I haven't compacted the file recently, but compacting would reduce it by around 500 MB.

There is just one container field that stores data internally; the rest is all stored externally.

The problem with the search going slow was due to bad design: I was performing the search on the same layout that was used to display various calculated fields from related tables. I have moved the Find operation to another layout based directly on the actual table, and the results are pretty good.
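Roughly, the pattern is this (layout and field names changed here for illustration):

   # Perform the find on a bare layout based on the same Jobs table occurrence,
   # so no unstored calculations have to render while finding,
   # then return to the display layout - the found set carries over.
   Set Variable [ $criteria ; Value: Get ( ScriptParameter ) ]
   Go to Layout [ "Jobs_Find" (Jobs) ]
   Enter Find Mode [ Pause: Off ]
   Set Field [ Jobs::PickupAddress ; $criteria ]
   Set Error Capture [ On ]
   Perform Find []
   Go to Layout [ original layout ]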

@BruceR, don't we all make silly mistakes at the start? Every question seems silly until you know the answer. I would rather keep trying hard and become that 'experienced developer' one day.


Answer: no.

To a significant extent, it is about making it easy for volunteers here to see enough detail of the solution to be able to help and being considerate of their time.

Having seen some of the detail of this solution, I find this even more apparent. Playing twenty questions, revealing tiny, unrepresentative, and incomplete slices of a solution for us to review, and preventing the company from operating on reliable data: those seem to be important considerations.


