
FM8 Faster than FM7?


xochi

This topic is 6255 days old. Please don't post here. Open a new topic instead.

Recommended Posts

I've read a couple of times that FM8 is "faster" than FM7.

Have there been any specifics? Does anyone have any real-world experience?

I find FM7 vastly faster than FM6 when hosted on FM Server. The ability to have a 200+ MB RAM Cache on the server (vs 40MB on FMS5.5) makes a world of difference.

The main complaints I have about FM7's performance are in the areas of Importing and Deleting records. Have these been improved in FM8/FM8Server? (I realize FM8 Server is not out, but perhaps people saw demos or heard from beta testers)


Well, this doesn't directly affect it, but I have found that when moving between layouts and scrolling through scripts, FM8 Advanced was faster than FM7 Developer. Just my 2 cents.


Have there been any specifics? Does anyone have any real-world experience?

At DevCon, Andy LeCates showed a radical speed improvement when using theta-joins.

The main complaints I have about FM7's performance are in the areas of Importing and Deleting records.

This might have to do with bad practices on your part. It's obvious that if a lot of auto-entering is triggered during import, there is a price to pay. But turning on indexing for almost every field is apparently an all-too-common mistake.

If you search several fields during your request series, a Cartesian-product calc field can be a much more efficient place to make the searches. I saw Chris Moyer demonstrate this technique, but his slides haven't found their way onto the CD-ROM we received.

I guess that's the purpose Daniele Raybaudi made this custom function for:

http://www.briandunning.com/filemaker-custom-functions/detail.php?fn_id=277
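The idea behind the Cartesian-product search field is simple denormalization: concatenate several fields into one stored, indexed field so a single search replaces a multi-field request. A minimal Python sketch of the concept (field names are hypothetical, not from the custom function above):

```python
# Sketch: combine several fields into one searchable "index" field,
# so one substring search replaces separate searches on each field.
records = [
    {"first": "Ann", "last": "Lee", "city": "Oslo"},
    {"first": "Bo", "last": "Berg", "city": "Aarhus"},
]

def combined_key(rec):
    # Concatenate the searchable fields into a single value,
    # mimicking a stored combined calc field in FileMaker.
    return " ".join(rec[f].lower() for f in ("first", "last", "city"))

# Precompute once; FileMaker would store and index this field.
for rec in records:
    rec["search_key"] = combined_key(rec)

def find(term):
    term = term.lower()
    return [r for r in records if term in r["search_key"]]

print([r["first"] for r in find("berg")])  # ['Bo']
```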

Perhaps someone who took better notes than me in Moyer's session could expand a little here?

--sd


At DevCon, Andy LeCates showed a radical speed improvement when using theta-joins.

What is a theta-join?

This might have to do with bad practices on your part. It's obvious that if a lot of auto-entering is triggered during import, there is a price to pay. But turning on indexing for almost every field is apparently an all-too-common mistake.

Yes, I do have lots of auto-enter calcs and indexed fields, and I understand the performance hit. But I still maintain that FM7 just has bad performance. For example, using FM7 Server, I commonly see performance of about 100 records per second when deleting all records in a table! This table, by the way, does not have any cascading deletes. I suspect that (perhaps in a misguided attempt to handle record locking / ACID properties) FM is handling the deletes one record at a time and re-indexing the table after each one. Silly. I want the deletion of the entire table to be atomic.
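For comparison, SQL engines treat a whole-table delete as one atomic statement rather than a row-by-row loop. A minimal SQLite sketch of the behavior being asked for (table and column names invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO payments (amount) VALUES (?)",
                 [(i * 1.5,) for i in range(1000)])
conn.commit()

# One atomic statement inside one transaction: the engine removes all
# rows and maintains the index once, not once per record.
with conn:
    conn.execute("DELETE FROM payments")

remaining = conn.execute("SELECT COUNT(*) FROM payments").fetchone()[0]
print(remaining)  # 0
```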

Fortunately I don't have to delete records that often, but it's still an example of extremely unoptimized programming...

Another problem I hope is fixed in FM8 :) In FM7 Server, a find on an unindexed field is sometimes simply not cancellable. You click the cancel button and it disappears! You either have to wait until the find completes, or force-quit the app.


A theta-join is a relationship that uses match operators beyond equality (<, >, ≤, ≥, ≠).
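In SQL terms, a theta-join is simply a join whose predicate uses an operator other than `=`, for example a range test. A small SQLite illustration (tables and data invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (name TEXT, day INTEGER);
    CREATE TABLE ranges (label TEXT, start INTEGER, stop INTEGER);
    INSERT INTO events VALUES ('kickoff', 3), ('review', 12);
    INSERT INTO ranges VALUES ('week1', 1, 7), ('week2', 8, 14);
""")

# Theta-join: the join predicate is a range test (>= and <=),
# not an equality match between key fields.
rows = conn.execute("""
    SELECT e.name, r.label
    FROM events e
    JOIN ranges r ON e.day >= r.start AND e.day <= r.stop
""").fetchall()

print(sorted(rows))  # [('kickoff', 'week1'), ('review', 'week2')]
```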

I do have lots of auto-enter calcs and indexed fields

...And they're used for? Relational keys or sort fields? Are you importing between tables to turn quotations into orders? Is it the most crafty relational structure known to mankind?

I'm still figuring out the reasoning behind Cartesian fields for searching, but it must be beyond doubt that indexing just a single field instead of 4-5 makes a difference in speed.

--sd


In my case, I'm using a join table to summarize a bunch of related tables. The join table has one record per worker per payroll cycle, and a bunch of auto-enter calcs that pull Sum()s and Count()s from a bunch of related tables (paychecks, paycheck deductions, etc.)

Since the auto-enter calcs all reference external tables, I set up each with a local trigger field. So to update the join table, I just do a Replace Field Contents ( TriggerField = 1 ) with the latest month's found set.

There's probably more than one way to do this, but I chose to use triggered auto-enter calcs for several reasons:

• Regular calculations would not be indexable since they reference related tables

• Lookups would be awkward, since I'm trying to get summary statistics (Sum() and Count())

I'm pretty happy with the final result :) Since all the summary fields are regular numeric fields, they can be indexed and I can search them very fast. My main complaint is that getting all the tables imported and updated is quite slow. However, it is a batch process that runs monthly, so it's quite feasible to just start it and walk away for 4 hours or so while it runs.
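The pattern described, storing Sum() and Count() results as plain numbers in a join table so they index and search fast, is ordinary denormalization. A hypothetical Python sketch of the batch update (table and field names invented, not from the poster's solution):

```python
from collections import defaultdict

# Child table: one row per paycheck (hypothetical data).
paychecks = [
    {"worker": "w1", "cycle": "2006-01", "gross": 1200.0},
    {"worker": "w1", "cycle": "2006-01", "gross": 300.0},
    {"worker": "w2", "cycle": "2006-01", "gross": 900.0},
]

def rebuild_summaries(checks):
    # "Join table": one summary record per worker per payroll cycle,
    # equivalent to re-firing the triggered auto-enter calcs in a batch.
    totals = defaultdict(lambda: {"pay_sum": 0.0, "pay_count": 0})
    for c in checks:
        key = (c["worker"], c["cycle"])
        totals[key]["pay_sum"] += c["gross"]
        totals[key]["pay_count"] += 1
    # Results are plain numbers, so they are indexable and fast to search.
    return dict(totals)

summaries = rebuild_summaries(paychecks)
print(summaries[("w1", "2006-01")])  # {'pay_sum': 1500.0, 'pay_count': 2}
```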

I'm hoping it scales well. Right now I have about 250,000 records, and I'm adding another 25,000-50,000 per month.


In my case, I'm using a join table to summarize a bunch of related tables. The join table has one record per worker per payroll cycle, and a bunch of auto-enter calcs that pull Sum()s and Count()s from a bunch of related tables (paychecks, paycheck deductions, etc.)

Well, in my book that's nothing more than a range defined as a theta-join, and then tunneling related values such as summaries through it. Check out:

http://www.fmforums.com/forum/showtopic.php?tid/159548/tp/1/

...and you can often re-use these two fields in several theta-joins, on top of which you build a dynamic multi-criteria relationship to show the figures for each worker ID. Please note that no sorting needs to take place whatsoever.

My main complaint is that getting all the tables imported and updated is quite slow

I've sussed it so far :) but you haven't explained why you import in the first place?

--sd


  • 1 year later...

XOCHI noted:

I commonly see performance of about 100 records per second when deleting all records in a table!

Is there a way to speed up or bypass this process while making sure the related child and parent records are properly deleted?

Thanks


Is there a way to speed up or bypass this process while making sure the related child and parent records are properly deleted?

Yes, by not deleting anything at all, or rather by postponing it to a nightly routine. You could overwrite the foreign key with a null value, and the link evaporates with it. A rough estimate has shown me a ratio of approximately 1,000 records per second on my dozy old G4 PowerBook:

Set Variable [ $time; Value:Get ( CurrentTime ) ]

Go to Related Record [ From table: “LineItems”; Using layout: “LineItems” (LineItems) ] [ Show only related records ]

Replace Field Contents [ LineItems::ForeignKey; Replace with calculation: Case ( 0;0 ) ] [ No dialog ]

Go to Layout [ original layout ]

Set Field [ Invoicing::Performance; Get ( CurrentTime ) - $time ]

What remains is a script to find the records with the cleared foreign keys, which obviously doesn't have to run while the office is at full throttle, and delete them if necessary!
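The script above amounts to a soft delete: break the link now with a cheap write, then physically delete the orphans off-hours. The same two-phase idea in a generic Python sketch (names are hypothetical):

```python
# Phase 1 (during office hours): detach line items by clearing the
# foreign key -- a cheap write, so the interface stays responsive.
line_items = [
    {"id": 1, "invoice_id": 42, "desc": "widget"},
    {"id": 2, "invoice_id": 42, "desc": "gadget"},
    {"id": 3, "invoice_id": 7, "desc": "cog"},
]

def detach(items, invoice_id):
    for item in items:
        if item["invoice_id"] == invoice_id:
            item["invoice_id"] = None  # the link "evaporates"

# Phase 2 (nightly routine): find and purge the orphaned rows.
def purge_orphans(items):
    return [item for item in items if item["invoice_id"] is not None]

detach(line_items, 42)
line_items = purge_orphans(line_items)
print([item["id"] for item in line_items])  # [3]
```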

--sd


Go to Related Record [ From table: “LineItems”; Using layout: “LineItems” (LineItems) ] [ Show only related records ]

Thank you for the response.

I do agree that:

1. Going to related records (as above)

2. Replacing foreign key

3. Deleting overnight

Is probably faster... but it still takes a lot of time compared to FM6!?

I have 60,000 person records, with around 500,000 visits, and each visit is related to 7 other tables (diagnosis, consults, payment, medication, etc.). I am trying to extract a separate subset database for 15,000 persons. Going to the related records from the subset of persons to visits took 6 hours. I estimate it will take 96 hours to delete the unwanted 45,000 persons with their child visits and multiple grandchildren!

Maybe the next FM will solve this issue...

GH


Going to the related records from the subset of persons to visits took 6 hours

There must be live summary fields in the layout you're trying to do it from? Have you paid enough attention to what the migration papers say about file references probably being cluttered, which makes any solution as slow as molasses?

I am trying to extract a separate subset database for 15,000 persons

Why in a separate database?

May be the next FM will solve this issue..

It's not an algorithmic issue for FileMaker to optimize, but rather an issue reflecting a poor migration and perhaps even a poorly structured solution. To take one single issue: have you remembered to convert your tunneling of values over relationships so that it does NOT use calc fields for the purpose? Are your Lookups changed into auto-enter calcs using:

http://www.filemaker.com/help/FunctionsRef-315.html

Have you taken any steps to put all the tables involved into one and the same file? Frankly, do you need more than one file at all, or rather two: logic and interface?

--sd


1. No file references. I rebuilt the whole database from scratch and everything is in one file.

2. I need a subset database for offline statistical manipulation.

3. I think I may have too many calculated fields. I will strip the tables and see if that makes a difference in GTRR.

Thanks


I think I may have too many calculated fields. I will strip the tables and see if that makes a difference in GTRR

It is if the layout you're arriving in renders summaries and unstored calcs in a list view! Interdependent calc fields are another closely related flaw... you should never calculate on something a relationship away if you're scaling a solution; instead take a more transactional approach, like:

http://www.filemakerpros.com/LULAST.zip

--sd


