
Hi Guys, 

So I am CLOSE to my near-final question, until it comes to reports... yay!

This is possibly opening a can of worms, but I want to create a Date Calculation. 

The table will look something as follows: 

Type     Buy/Sell   Date
Banana     7        10 April 2020
Apple     10         5 April 2020
Apple     -3         9 April 2020
Banana     4        17 April 2020
Apple      9        14 April 2020
Banana    -4        20 April 2020
Apple     16        30 April 2020

I want to run a calculation that says "How many Apples did I have on Date X?". The date will come from a TimeStamp on another table.

The correct answer is of course 32.   

 The same formula for bananas would be 7. 
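Hypothetical sketch, not FileMaker: the arithmetic being asked for, expressed in Python just to pin down the logic (the list literal mirrors the sample table above):

```python
# Sum the Buy/Sell quantities for one type, for transactions on or
# before a cut-off date (inclusive).
from datetime import date

# Transactions from the sample table: (type, quantity, date)
transactions = [
    ("Banana",  7, date(2020, 4, 10)),
    ("Apple",  10, date(2020, 4, 5)),
    ("Apple",  -3, date(2020, 4, 9)),
    ("Banana",  4, date(2020, 4, 17)),
    ("Apple",   9, date(2020, 4, 14)),
    ("Banana", -4, date(2020, 4, 20)),
    ("Apple",  16, date(2020, 4, 30)),
]

def on_hand(item_type, cutoff):
    """Quantity owned of item_type as of the cutoff date."""
    return sum(q for t, q, d in transactions if t == item_type and d <= cutoff)

print(on_hand("Apple", date(2020, 4, 30)))   # 10 - 3 + 9 + 16 = 32
print(on_hand("Banana", date(2020, 4, 30)))  # 7 + 4 - 4 = 7
```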

So really my question is: how would I go about making a calculation like this?

Once this is mastered, I think/hope to have no more questions other than on Reporting!








4 minutes ago, Neil Scrivener said:

This is possibly opening a can of worms


it would be easy to define a relationship from a table of Types to an occurrence of the table shown (let's call it Transactions) as:

Types::Type = Transactions 2::Type
Types::gCutOffDate ≥ Transactions 2::Date

and then just sum the related transaction quantities.

It would be even easier to find the transactions that occurred before the cut-off date, sort them by Type, and show them in a summary report.

However, as time goes by, the number of transactions will increase and these methods may become too slow. And that's a can of worms I don't feel like opening right now.



In a script you can do an executeSQL() to find all the records where type=banana and use the date as a WHERE clause, SELECTing only the buy/sell field.  That will give you a list of values that you can loop through to get the final number.

If you want that for all products then it's a loop within a loop by first asking for all DISTINCT types in your time frame and then asking for their values in turn.
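A rough illustration of that loop-within-a-loop, using Python's sqlite3 as a stand-in for ExecuteSQL() (the table and field names here are invented for the demo, not taken from the posted file):

```python
# Outer query asks for all DISTINCT types in the time frame; the inner
# loop fetches each type's buy/sell values and totals them in code.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (type TEXT, qty INTEGER, tx_date TEXT)")
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?, ?)",
    [("Banana", 7, "2020-04-10"), ("Apple", 10, "2020-04-05"),
     ("Apple", -3, "2020-04-09"), ("Banana", 4, "2020-04-17"),
     ("Apple", 9, "2020-04-14"), ("Banana", -4, "2020-04-20"),
     ("Apple", 16, "2020-04-30")],
)

cutoff = "2020-04-30"
types = [r[0] for r in conn.execute(
    "SELECT DISTINCT type FROM transactions WHERE tx_date <= ?", (cutoff,))]

for t in types:
    rows = conn.execute(
        "SELECT qty FROM transactions WHERE type = ? AND tx_date <= ?",
        (t, cutoff))
    print(t, sum(q for (q,) in rows))
```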

You can display the results in a virtual list table or a JavaScript DataTable.

If you want to have this report available quickly I would suggest you run these kinds of aggregations on a schedule and store the results in their own reporting tables.  This is typically not the kind of stuff you want to have collected at the moment you run the report.

1 hour ago, Wim Decorte said:

That will give you a list of values that you can loop through to get the final number.

If you're going to use SQL (which I would be reluctant to do), why wouldn't you use it to also aggregate the values?



It's one of the things to test. eSQL() is blazingly fast when used under the right circumstances. Using complex joins or SQL functions is usually where it starts to slow down, and I tend to prefer to do the basic queries with it, then post-process the data in FM. But not always; sometimes using SQL functions inside the query is just fine. Much depends on the amount of data and the underlying FM schema.
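For illustration, the two approaches side by side, with sqlite3 standing in for eSQL() and a made-up one-column schema: aggregating inside the query versus fetching the raw rows and post-processing in the host language.

```python
# Same total either way; the trade-off is where the work happens.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (type TEXT, qty INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("Apple", 10), ("Apple", -3), ("Apple", 9), ("Apple", 16)])

# Aggregate inside SQL...
(in_sql,) = conn.execute("SELECT SUM(qty) FROM t WHERE type = 'Apple'").fetchone()

# ...or fetch raw rows and post-process them outside the query.
rows = conn.execute("SELECT qty FROM t WHERE type = 'Apple'").fetchall()
post = sum(q for (q,) in rows)

print(in_sql, post)  # both are 32
```

Which is faster on real data is exactly the kind of thing to profile rather than assume.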



Thanks for the info guys. 

If I was comparing for the month of April, would I not need to have a 'start date' too?


I've put together a quick demo file of what I'm looking to achieve.

So if you look on the Companies layout, I'm looking for the Owned field in the Valuation portal to be a calculation at that snapshot in time.

So where a Valuation is on 19 May 2020, I want it to tell me at that point how many items I 'own', based on the records under the 'Previous Data of Transactions' field. If that makes sense?

Demo File+.fmp12


What I'm in essence trying to do is work out a snapshot of how much the items were worth per month, which will tell me whether they have lost or made money.

So the eventual calculation will tell me: 

- How many units did I own in that month?

- What was the total cost of those units in that particular month?

- What is the value of those units according to that month's entry in the Valuation portal?

- What is the outcome for that month, both in £ and % terms?

Hopefully that explains better. 
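To make those four monthly metrics concrete, a tiny worked example with made-up figures (none of these numbers come from the demo file):

```python
# Hypothetical inputs for one month's snapshot.
units = 32            # units owned at the month's valuation date
total_cost = 48.00    # what those units cost to acquire (made-up figure)
unit_value = 1.75     # per-unit valuation for the month (made-up figure)

# Derived metrics: current value, then outcome in £ and %.
value = units * unit_value
gain_gbp = value - total_cost
gain_pct = 100 * gain_gbp / total_cost

print(f"value £{value:.2f}, outcome £{gain_gbp:+.2f} ({gain_pct:+.1f}%)")
```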



We get the business use case of the questions; they're valid questions that fall in the general realm of "business intelligence".

Take a page out of the BI book and look at how it works with things like data warehousing and the core concepts of ETL: extract, transform, load.

It speaks to the difference between your data as generated and used for purely transactional business workflows, and the data as used for reporting and gathering insights. Very often that means extracting and transforming your data (usually pre-aggregating, flattening, de-normalizing, ...) so that the BI tools use that data, not the original data, for speedy and accurate reports.

My point being: be somewhat wary of writing calculations that have to run on the basic data over and over again. Build in a step that will collect and pre-aggregate the data for your reports.
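A minimal sketch of that pre-aggregation step, again with sqlite3 in place of a scheduled FileMaker script (the schema and table names are invented): transactions are rolled up into a small reporting table once, and reports then read the summary instead of re-scanning the raw data.

```python
# The 'ETL' step runs on a schedule, not at report time.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (type TEXT, qty INTEGER, tx_date TEXT)")
conn.executemany("INSERT INTO transactions VALUES (?, ?, ?)",
                 [("Apple", 10, "2020-04-05"), ("Apple", -3, "2020-04-09"),
                  ("Banana", 7, "2020-04-10"), ("Apple", 25, "2020-05-02")])
conn.execute("CREATE TABLE monthly_totals (month TEXT, type TEXT, qty INTEGER)")

# Pre-aggregate: one row per (month, type) in the reporting table.
conn.execute("""
    INSERT INTO monthly_totals
    SELECT substr(tx_date, 1, 7) AS month, type, SUM(qty)
    FROM transactions
    GROUP BY month, type
""")

# Reports now query the small summary table.
for row in conn.execute("SELECT * FROM monthly_totals ORDER BY month, type"):
    print(row)
```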



I'm sure there are a good number of general courses on BI available on sites like Udemy and so on.  Beyond the core concepts, the learning curve is in getting to know the BI tool of your choosing.  Some of the decision around whether or not to use a dedicated BI tool is 'make or buy' - how much does it cost to create the functionality in FM vs. buying a tool and learning it.


Great, thanks. One final question on this. 

Is there a temporary fix I can use in the way of a calculation to resolve this particular issue? 

I totally understand the arguments on speed and efficiency, but I'm also mindful that I want this database to 'work'. I would rather have a beta with a slower calculation whilst I learn the BI and scripting than delay getting a beta version together whilst I learn something else, if that makes sense?




This calc will get you the list as you can see from the result of the data viewer.

But you'd still have to parse it out and present the user with something that looks better. Unless, that is, you want the calc to include only the type you're after rather than all the types found in the chosen time frame.

As to delivering a system 'that works' but is suboptimal: more often than not I see systems where that choice is made but never rectified afterwards. The term for it is 'technical debt'. After you deliver, it will become much harder to convince the client to let you undo what you did to replace it with something they think they already paid for. Better to do it right the first time around and use it to learn the required scripting and architecture, to avoid performance issues down the road.



You have a bunch of typos in that calc, mostly missing spaces:

- after the SELECT

- before the FROM and WHERE and GROUP BY

- after the GROUP BY

Also, remove the space after SUM and before the (.


Makes sense, thank you. So with the script working, I guess I need to do a lot of learning on scripting.

So I've taken this script and added it as a custom function. Easy.

I've inserted it as a Field. Easy. But I'm getting the result below. 

Is there a straightforward way I can manipulate this to show me just the number related to that record only? For example, in the first record Apple is the 'Type', so I'd like 'Items in April' to show just 16.

Also, if I change the date, I've noticed the quantities don't update. Do I need to add a Refresh command to refresh the calculation when a Date or Quantity is changed?  





[Attached image: Screen Shot 2020-05-19 at 6.56.54 PM.png]


There may be a confusion of terms because there is no script involved at this point... and to re-iterate: you should be scripting this kind of report, not using calculated fields.  Otherwise it *will* come back and bite you.

If you have never seen a penalty in action then download the SQL performance test from my old devcon session: https://www.soliantconsulting.com/blog/executesql-named-buckets/

It shows how using executeSQL with an open record in the target table can lead to serious performance issues. And with calculated fields that need to be unstored to work, you have no control over when the calculation fires, so you may find yourself in exactly this situation. And this is just one example, just one that is extremely easy to demonstrate.

As to your questions:

- to make it work for just the type of the record you are on, add " AND " & _type & " = ?" to the WHERE clause and add a parameter to the executeSQL() at the end to give it the value of the type field

- to make it refresh automatically, make the calculation unstored --> and this is where the performance problems will begin, you may not feel it now because you don't have many records, but this is how it starts.
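The parameter-binding pattern from the first bullet, sketched with sqlite3 standing in for executeSQL() (the demo table and field names are invented): each ? in the statement is filled by one extra trailing argument, in order.

```python
# The extra "AND type = ?" clause restricts the result to one type,
# whose value is supplied as the final bound parameter.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE demo (type TEXT, qty INTEGER, tx_date TEXT)")
conn.executemany("INSERT INTO demo VALUES (?, ?, ?)",
                 [("Apple", 10, "2020-04-05"), ("Apple", -3, "2020-04-09"),
                  ("Banana", 7, "2020-04-10"), ("Apple", 9, "2020-04-14"),
                  ("Apple", 16, "2020-04-30")])

sql = ("SELECT type, SUM(qty) FROM demo "
       "WHERE tx_date >= ? AND tx_date <= ? AND type = ? "  # the added clause
       "GROUP BY type")

current_type = "Apple"  # in FM this would be the field reference, e.g. demo::type
row = conn.execute(sql, ("2020-04-01", "2020-04-30", current_type)).fetchone()
print(row)  # ('Apple', 32)
```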


Thanks, I should make it clear that this is completely new territory for me. I really don't know anything about anything I'm writing here other than what you're telling me, so I am taking everything extremely literally while I learn this syntax.

I simply don't understand the SQL side. I'm happy to learn, but it's a slow process.

Turning to the calculations, I've made the following changes: 


"WHERE" & _matchfield & " >= ? AND " & _matchfield & " <= ? " AND " & _type & "=?" &



ExecuteSQL( _sql ; "" ; "" ; "4/1/2020" ; "4/30/2020" ; &_type )

I'm getting Table Missing. 

Should I delete the "GROUP BY" command? 


No, don't delete the GROUP BY. The GROUP BY is responsible for summarizing the data so that each type is listed only once; it's how SQL knows how to sum the values.

Your SQL syntax looks correct, but your executeSQL() doesn't.

As the last parameter, use a reference to the field itself, because you need the field's value. Using my demo file, that would make it:


ExecuteSQL( _sql ; "" ; "" ; "4/1/2020" ; "4/30/2020" ; demo::type )


Unfortunately no cigar: 




_type = _FIELDREF_getField ( demo::Type ; True ) ;
_qty = _FIELDREF_getField ( demo::Buy_Sell ; True ) ;
_table = _FIELDREF_getTO ( demo::Type ; True ) ;
_matchfield = _FIELDREF_getField ( demo::Date ; True ) ;

_sql =
"SELECT " & _type & ", Sum(" & _qty & ")" &
"FROM" & _table &
"WHERE" & _matchfield & " >= ? AND " & _matchfield & " <= ? " AND " & _type & "=?" &
"GROUP BY" & _type ];

executeSQL( _sql ; "" ; "" ; "4/1/2020" ; "4/30/2020" ; demo::type )


This seems to be exactly as per your directions, unless I've done a typo. 😭

