FMForums.com


Triggers for database update


  • Author

MAC ONLY - MAC ONLY - MAC ONLY - MAC ONLY - MAC ONLY

Good catch! Everything is pulled by the layout event. There are two solutions to the problem: either show the calc field in the child table's layout, stashed away in a corner where the text color matches the background...

There is no display. No user. No interactive processing. Everything is driven by the arrival of the email message, and everything after that has to happen without user interaction.

...or even better, do it via a self-join ... provided we keep the parent record for processing purposes only, the cartesian relation actually becomes redundant...

I cannot see how a self-join is useful here.

Take a look at the template: it consists of two layouts, each belonging to its own table, but the viewer layout is related to the other table via a cartesian join, looking only at the values established by GetNthRecord( ) without any relational filtering .... the refreshing issues still prevail!

I had a look at your example but I am afraid that I cannot see how it applies to my application - and I am not sure what it was supposed to do. At any rate it did not do what I expected...

But as such, could the AppleScript-based creation driven by the mail rule actually handle it reliably, as long as we don't strain the found set in any way?

I very much appreciate the effort - but I am not sure what you are getting at here. If I have received 26 messages in the last 3 hours from a router with a problem, I need to decide what to do about all 26 messages based on the information which is stored in the equipment table for the specific router, or even in a parent record for routers in general. In my database the router "knows" what it is supposed to do when a message arrives. It "knows" how to respond - this "knowledge" is embedded in the calculation fields in the router equipment record.

I just need to tell the equipment record that a new message for it has arrived and it should therefore do something...

The same is true for every piece of equipment - all of which will "know" what to do when a message arrives, because the equipment record contains that intelligence.

  • Author

It has suddenly occurred to me why you seem to be making french fries with a tennis racket, metaphorically speaking: you wish to implement the transaction model, while we, when it comes to stock levels, use recursive structures:

http://jonathanstark.com/recursive_data_structures.php

I am familiar with this technique - I started life as an analyst for order entry/inventory systems and my first system design was for IBM S/38. This is commonly used for products which are related - such as an iPod, and its case, and its strap, and its warranty, etc. But I have no such application here that I can see...
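To make the "recursive structure" idea concrete, here is a minimal sketch (in Python, not FileMaker) of the self-join pattern the linked article describes: products point at a parent product, so a bundle and its accessories live in one table. All names and data here are illustrative, taken from the iPod example above.

```python
# Hypothetical one-table product structure: each row's parent_id is a
# self-join key back into the same table.
products = [
    {"id": 1, "name": "iPod",     "parent_id": None},
    {"id": 2, "name": "Case",     "parent_id": 1},
    {"id": 3, "name": "Strap",    "parent_id": 1},
    {"id": 4, "name": "Warranty", "parent_id": 1},
]

def related_items(parent_id, rows):
    """Return the children of one product via the self-join key."""
    return [r["name"] for r in rows if r["parent_id"] == parent_id]

print(related_items(1, products))  # ['Case', 'Strap', 'Warranty']
```

In FileMaker terms the `parent_id` comparison would be a self-join relationship on the product table.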

Then use an event-trigger plugin for your task; zippscript comes to mind. But I would think you would probably be better off using the MBS plugin's ability to direct SQL commands at FileMaker itself, simply by stuffing the calls into either the field validation or some adjacent auto-enter fields - check out:

http://mbs.geistinteractive.com/node/7

Thanks - this looks very powerful. There are several scripting tools - I will play with some and see what I can achieve...

There is no display. No user. No interactive processing. Everything is driven by the arrival of the email message, and everything after that has to happen without user interaction.

No, of course not; then use command 1 .... the AppleScript is only there to emulate what a rule in Mail does. What I am trying to prove here is that the update occurs based on the input made by AppleScript ... which, BTW, strictly speaking uses an interface.

How would you make the setup for the test, then? You wish to digest the latest arrived mail ... so it can be the source for the assembly of a new mail to transmit. This is where I count the a's and b's in the latest one .... should this in any way be compared to previous arrivals?

--sd

An email arrives, is processed and a new message record is created.

That part you've obviously got down pat.

At the equipment level, a check must be run to decide whether to react to this message, or ignore it, depending on the type of message and whether any action has been taken already on a similar message already received. In my scenario, several simple calculation fields on the equipment record set various flags, and a more elaborate calculation field or fields will then decide what to do and execute the appropriate instructions (perhaps a script to send an email to an end user; perhaps a script to log a service call; perhaps a script to order a spare part. Or all of the above.)

Briefly, I think I understand this, though I will say I think some of the action logic should be pushed to a data structure rather than explicitly embedded in a calculation. For one thing, a generic calc called EquipmentAction in a table will need to contain explicit logic for each equipment record and every possible action, and that to me seems as cumbersome as a script.

I would see this as a calc in Equipment that checks the most recent message in the Messages table for any data that matches items in a related ExpectedActions table. It keeps the code a little more abstracted, and when you need to add actions to the expected behavior, you just add an Action record to the ExpectedActions table.

When you actually perform the Actions, you would write the Action to a PerformedActions table with the EquipmentID and the MessageID the Action was a response to.
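The ExpectedActions/PerformedActions design above can be sketched as follows (a Python model, not FileMaker; every table and field name is illustrative): match the latest message against ExpectedActions records, then log whatever fires to PerformedActions with the EquipmentID and MessageID.

```python
import datetime

# Hypothetical "tables" as lists of dicts.
expected_actions = [
    {"equipment_id": "R1", "trigger": "TEMP_HIGH", "action": "Email"},
    {"equipment_id": "R1", "trigger": "FAN_FAIL",  "action": "Log Service Call"},
]
performed_actions = []  # the audit trail

def handle_message(message):
    """Find ExpectedActions matching this message; log each as performed."""
    matches = [a for a in expected_actions
               if a["equipment_id"] == message["equipment_id"]
               and a["trigger"] == message["type"]]
    for a in matches:
        performed_actions.append({
            "equipment_id": message["equipment_id"],
            "message_id": message["id"],
            "action": a["action"],
            "timestamp": datetime.datetime.now(),
        })
    return [a["action"] for a in matches]

msg = {"id": "M26", "equipment_id": "R1", "type": "TEMP_HIGH"}
print(handle_message(msg))  # ['Email']
```

Adding a new behavior means adding a record to `expected_actions`, not editing code - which is the point of the abstraction.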

See attached Relationships Graph for a simplified example.

I guess that's quibbling and not the main issue, which is event triggers. But it does simplify some of the other things you're trying to do, below.

If an action is taken, the message record is updated to include the details of the action taken (so the message record becomes the permanent audit trail). If no action is taken then the message record is updated to include the flags or parameters which were set at the equipment level, and an error message (which might just say - duplicate message, ignored). Later, if you are trying to figure out why your system never sent a spare part for a specific piece of equipment, you can follow the decision making process more readily because of the audit trail.

Logging Actions in a PerformedActions table with appropriate keys would provide this audit trail without having to actually do anything to the original Message record. The Message record and the Equipment record would both have related PerformedActions, timestamped, with details.

It also gets around the whole issue of updating Equipment records and Message records, since it's not actually the Equipment or Message that's changing - it's the Action you're logging.

At the equipment level, I would like to reset some fields with the ID of the latest alert, some data from this alert and some statistics. This is not essential, but it makes the "system" more robust and easier to debug since it is self-logging. I could write the details of what transpired to a log file, but I feel that this is more efficient. The message-handling process updates the equipment record, and the calculation fields on the equipment record then update the message record with the action taken. It is probably not necessary to include the details of the latest alert in the equipment table - though it will make exception reporting more efficient perhaps. I am not sure what performance of this application will be like at this point...

I think the first part of this paragraph - setting fields in Equipment with the latest message data - isn't necessary, if you model the data so that the most recent (or earliest, or whatever filter you wish) message data is accessible via relationships. Ditto with Actions.

Anyway, I need something which will trigger the calculation fields at the related equipment record when a new message arrives. I expected to be able to write the ID of my message record into the related equipment record, thereby triggering the calculation fields in the equipment record.

To the first part:

I know you hate the idea of embedding this in a script, because of concerns that all the logic will be socked away in there, and make the data structure opaque.

But when a new message arrives, it's arriving via a scripted process, and I sincerely don't believe that having that same script execute the logic embedded in a set of Equipment, Message, ExpectedAction and PerformedAction tables will make that logic as opaque as you think. Remember, the script can be dumb. It doesn't need to know which actions need to be performed - the Equipment record will inform the script based on what related ExpectedActions match the Message - it just needs to know what to do with the actions that match the current context of Equipment/ExpectedActions/CurrentMessage. If your data is modeled correctly, that can be as simple as:

from Equipment context:

  Go to Related Record [Required Actions]   // assumes a relationship that filters only the required actions, based on the expected/current message
  Go to Record/Request [First]
  Loop
    If [ActionType = "Email"]
      // perform the email action here
    Else If [ActionType = "Log Warning"]
      // perform the log-writing code here
    // etc.
    End If
    Go to Record/Request [Next; Exit after last]
  End Loop

The above script doesn't know or care what the particulars of the Equipment record are, or know in advance what actions are expected to perform, or what the particulars of the message are. All of that is informed by the data model. And all of it can be driven by the script that inserts the incoming message into the message table.
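The "dumb script" idea can also be sketched outside FileMaker. Below is a hedged Python model of the same dispatch loop: the script only walks the related action records and branches on ActionType; it knows nothing about any particular piece of equipment. The handler names are hypothetical.

```python
# Hypothetical handlers for each ActionType the data model can name.
def send_email(action):
    return f"emailed: {action['detail']}"

def log_warning(action):
    return f"logged: {action['detail']}"

HANDLERS = {"Email": send_email, "Log Warning": log_warning}

def run_required_actions(required_actions):
    """Walk the related action records, exactly like the
    Go to Related Record / Loop construct in the script sketch."""
    results = []
    for action in required_actions:            # Loop ... Go to Next
        handler = HANDLERS.get(action["type"])  # If / Else If on ActionType
        if handler:
            results.append(handler(action))
    return results

actions = [{"type": "Email",       "detail": "alert 26"},
           {"type": "Log Warning", "detail": "duplicate"}]
print(run_required_actions(actions))
```

The dispatcher stays generic; everything specific lives in the action records it is handed.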

I understand that records will occasionally be created manually by an admin. Remember that for that there will be an interface, and the user experience can be controlled. Replacing FileMaker default menu choices (command/control-N for New Record) with Custom Menus tied to an abstracted script process can handle that exception.

Anyway...take all of this with a grain of salt.

Best,

Colin

ERD.jpg

I want to define the response in the equipment table

IMHO, that is a conceptual mistake. Because the response is not to equipment. The response is to some attribute of equipment, which arrived as a message.

Let's simplify this example to the bare minimum. Let's have a table of Equipment with fields:

• EquipmentID

• MaxTemperature

and a table of Messages with fields:

• MessageID

• EquipmentID

• Timestamp

• Temperature

We want an e-mail to be sent when a message arrives with temperature higher than the maximum set in the parent equipment record. Since e-mails cannot be produced by a calculation, we will need to trigger a script.

I would have the script that created the new message go on and check if the temperature exceeds the parent's maximum, and if so, send an e-mail. No event-trigger is required.

If you don't want "a mega script which controls everything", this can be (and should be) broken into individual sub-scripts: one to create the record, one to check the condition, one to send the notification. The amount of required work needed is the same in any case (except the redundant duplication of data into the parent, which my method avoids) - so the only question is who/what calls each sub-script.
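The three sub-scripts described above - create the record, check the condition, send the notification - can be sketched like this (a Python model with illustrative names; in FileMaker each function would be its own sub-script called by the message-creation script):

```python
# Hypothetical bare-minimum tables from the example above.
equipment = {"E1": {"max_temperature": 70}}
messages = []
outbox = []   # stands in for sent emails

def create_message(equipment_id, temperature):
    rec = {"equipment_id": equipment_id, "temperature": temperature}
    messages.append(rec)
    return rec

def condition_exceeded(rec):
    # Compare against the parent equipment record's threshold.
    return rec["temperature"] > equipment[rec["equipment_id"]]["max_temperature"]

def send_notification(rec):
    outbox.append(f"ALERT {rec['equipment_id']}: {rec['temperature']}")

# The caller chains the sub-scripts, so no event trigger is needed:
rec = create_message("E1", 82)
if condition_exceeded(rec):
    send_notification(rec)
print(outbox)
```

Nothing here writes to the parent record; the check reads the threshold through the relationship, which is the redundancy the poster's method avoids.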

If I understand you correctly, your method would run a script to create a value, whose creation is then supposed to trigger another script to modify another value, whose modification is then supposed to trigger yet another script to finally get to the point of sending the e-mail. All you have achieved is data redundancy and more potential points of failure. The conceptual mistake has spawned mistakes on the strategic and tactical levels.

I think it is becoming apparent that the real issue here is that FM has no triggers. There are those in the FM community who think this is no big deal, and others who think it is.

I used to think it was an issue, too. When I understood better what a relational database is about, I realized it's not. You may find the following note interesting in this context:

http://fmforums.com/forum/showpost.php?post/187127/

  • Author

Good catch! Everything is pulled by the layout event. There are two solutions to the problem: either show the calc field in the child table's layout, stashed away in a corner where the text color matches the background, or even better, do it via a self-join ... provided we keep the parent record for processing purposes only, the cartesian relation actually becomes redundant...

This is very obtuse for me I have to say. I have had a good look at the code but I don't understand what it is doing.

However, as is often the case, you have helped me find an easy solution to another problem we discussed earlier - which is to obtain the first and last alerts. The first is just straightforward, leaving the sort order as creation order. To find the last alert I create a calculation field which counts the related record set and then I use your GetNthRecord (field, record number) to pull in the ID of the most recent message for a piece of equipment.

Very neat. :-)

Now I have to see if this is enough to trigger my calculation fields on the equipment table. :-)

  • Author

IMHO, that is a conceptual mistake. Because the response is not to equipment. The response is to some attribute of equipment, which arrived as a message.

Is it? I think it is a response to a piece of equipment which finds itself in a particular situation, when governed by a range of parameters set for itself specifically, the class of device itself, the location at which it is running, the organisation within which it is deployed.

Let's simplify this example to the bare minimum. Let's have a table of Equipment with fields:

• EquipmentID

• MaxTemperature

and a table of Messages with fields:

• MessageID

• EquipmentID

• Timestamp

• Temperature

We want an e-mail to be sent when a message arrives with temperature higher than the maximum set in the parent equipment record. Since e-mails cannot be produced by a calculation, we will need to trigger a script.

If it was simple, this would be ok. But it is not so simple. The equipment class may be being retired, or it may be out of warranty, or it may be in a remote location, or it may be a known problem which can be ignored, or it may be close to the end of the contract, or a rollout of replacements may be in progress...

I would have the script that created the new message go on and check if the temperature exceeds the parent's maximum, and if so, send an e-mail. No event-trigger is required.

If you don't want "a mega script which controls everything", this can be (and should be) broken into individual sub-scripts: one to create the record, one to check the condition, one to send the notification. The amount of required work needed is the same in any case (except the redundant duplication of data into the parent, which my method avoids) - so the only question is who/what calls each sub-script.

The only data I think I really need in the parent is the last alert ID. The rest can be pulled from the child record when required.

If I understand you correctly, your method would run a script to create a value, whose creation is then supposed to trigger another script to modify another value, whose modification is then supposed to trigger yet another script to finally get to the point of sending the e-mail. All you have achieved is data redundancy and more potential points of failure. The conceptual mistake has spawned mistakes on the strategic and tactical levels.

see below...

I think it is becoming apparent that the real issue here is that FM has no triggers. There are those in the FM community who think this is no big deal, and others who think it is.

I used to think it was an issue, too. When I understood better what a relational database is about, I realized it's not. You may find the following note interesting in this context:

http://fmforums.com/forum/showpost.php?post/187127/

Is it? I think it is a response to a piece of equipment which finds itself in a particular situation

Well, you keep saying so, but that's where I think is your major obstacle. You think you need to respond to a SITUATION. I say the response should be the result of an ACTION. The action, in this case, being the arrival of a new message.

I believe I have explained why such approach is preferable. If you have more questions on this, I will try to answer them - but I will not try to convince you.

If it was simple, this would be ok. But it is not so simple. The equipment class may be being retired, or it may be out of warranty, ...

So? I picked a simple example to demonstrate a principle. The real situation may be more complex, but the principle stays the same. All you are showing me is more conditions that need to be checked.

As for the rest, I am puzzled why you think you need to explain these things to me.

To find the last alert I create a calculation field which counts the related record set and then I use your GetNthRecord (field, record number) to pull in the ID of the most recent message

The Last() function does all of that at once.
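Modeling the related record set as a plain list makes the equivalence clear: FileMaker's Last(relatedField) returns the value from the last related record in one step, which is exactly what counting the set and then calling GetNthRecord achieves in two. A small Python sketch:

```python
# Related message IDs for one piece of equipment, in creation order.
related_message_ids = ["M24", "M25", "M26"]

# The Count() + GetNthRecord() approach from the earlier post:
n = len(related_message_ids)          # Count ( Messages::MessageID )
last_by_nth = related_message_ids[n - 1]   # GetNthRecord ( field ; n )

# The Last() approach:
last_direct = related_message_ids[-1]      # Last ( Messages::MessageID )

print(last_by_nth, last_direct)  # M26 M26
```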

Also, you should consider archival. In this system, the message record contains everything you would need to know about the system's response to an alert, including whether it was acted upon, and if so why, or if not, why not. It will show you what part was ordered, when, on what purchase order and what technician was despatched on what date to effect the repair - and on what invoice the repair was charged, and what date the invoice was paid. You can safely delete all the associated records in every other table once this incident has been closed, because the message record is an audit trail that contains everything you need to know to reconstruct the event. Compare this to my financial system (Great Plains Dynamics running on MS SQL) which will not allow me to delete employees who have not worked for the company for years; customers who have not bought in years and any product I have ever sold. This is an elegant RDBMS design, but it is a nightmare to manage historically.

Conversely, if you store all the logic for your event triggers in granular calc fields, when the business logic later changes, and you need to change those calcs, another programmer has no way of looking at those old messages and understanding WHY they were acted upon that way. The business logic is lost, unless you aggressively archive/comment your old logic calcs.

Granularizing business rules into easy-to-understand snippets is an attractive idea in theory. But if I were to inherit such code from someone else, I would have a hard time pulling all the pieces together and understanding the dependencies.

Storing business rules as records and abstracting the logic to perform those rules provides a method for archiving/updating business logic without the need to update calcs or scripts.

Two sides to every coin here, I suppose.

  • Author

Conversely, if you store all the logic for your event triggers in granular calc fields, when the business logic later changes, and you need to change those calcs, another programmer has no way of looking at those old messages and understanding WHY they were acted upon that way. The business logic is lost, unless you aggressively archive/comment your old logic calcs.

Granularizing business rules into easy-to-understand snippets is an attractive idea in theory. But if I were to inherit such code from someone else, I would have a hard time pulling all the pieces together and understanding the dependencies.

Storing business rules as records and abstracting the logic to perform those rules provides a method for archiving/updating business logic without the need to update calcs or scripts.

Two sides to every coin here, I suppose.

Well, not really. The "granular" calculations are just things that are self-evident. They just pull together all the "facts" on which the decision will be made. For instance, one calc field may simply establish whether the item is under warranty and set a field value to "Y", while another may establish the threshold for action for this device. Each of these parameters is written to the messages record, so the archived record shows you the state of the device, the nature of the message and the resulting action or inaction along with an english-language message to explain the decision.

The granular calculations are not part of the decision making logic in any real sense - you will always need to establish the facts before you decide how to deal with them.

  • Author

You are so right. Thanks. :-)

  • Author

So? I picked a simple example to demonstrate a principle. The real situation may be more complex, but the principle stays the same. All you are showing me is more conditions that need to be checked.

Each piece of equipment has its own complex algorithm to determine what alerts to respond to, and how to respond to them. It is not just a question of checking more parameters, but of using quite different logic for different equipment types. The processing logic (and parameters) involved in managing the supply and installation of air filters, say, for an air conditioning system (where new filters are required every nn hours of operation, and where the alerts continue until the filter is replaced and a switch reset, but the unit is still functioning) is very different from the processing involved in a memory module failure on a PC. The former involves complex analysis of the incoming alerts to determine whether an alert is a continuation of an event you have already actioned, or a new event which requires a response. In this case, where such alerts are expected, the logic also involves predictive processing to ascertain when the next filter will be due (using calculations based on reported usage over time).

I wouldn't want to include all this complex logic in my processing of the simpler DIMM failure alert. The KISS principle is at work here, even if it doesn't appear that way at first glance...

If all of this processing were to be built in to the initial script it would be a very large script and one which would be, I believe, difficult to read and maintain: ensuring that large chunks of the logic were NOT applied to a specific situation would take as much careful design as the creation of the logic in the first place. In the Equipment Type table (parent of the individual devices) I can build simple rules which will apply to all devices of that type, and I can add exceptions to specific pieces of equipment which might, for some reason, require an individual approach (the CEO's notebook for instance).

As for the rest, I am puzzled why you think you need to explain these things to me.

Since you have very generously offered your assistance I have erred on the side of verbosity. And, particularly, where I am suggesting I will not implement your advice, I feel obliged to offer some reasoning.

  • Author

I used to think it was an issue, too. When I understood better what a relational database is about, I realized it's not. You may find the following note interesting in this context:

http://fmforums.com/forum/showpost.php?post/187127/

This example is about the pitfalls of duplicating fields: RDBMS 101 I guess...

However, when I "duplicate" information in my messages table, I am storing the actual value of a field at a specific point in time. The values of these fields will change over time in the parent record, and it is essential that these changes are not rolled down to the messages table.

A good example of where you would do this is in an order lines table where you "duplicate" the product description, price, weight etc. In the IT industry where products have a life of months rather than years, most of the products in your order lines table will have been discontinued within 12 months. Indeed, within 3 months, a large number of the products will have different prices.

So although the field is duplicated at the time of creation, it is really a new field - one with a value which is unchanging from this point onwards irrespective of any change to the original field.
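The order-line example above can be sketched as follows (a Python model with illustrative field names): the order line copies the product's price and description at creation time, so a later catalog change never touches the historical record.

```python
# Hypothetical product catalog; the "parent" data that will change over time.
catalog = {"SKU1": {"description": "iPod", "price": 299.00}}

def create_order_line(sku, qty):
    p = catalog[sku]
    # Copy the values, don't reference the catalog row:
    # from this point on the line item is frozen.
    return {"sku": sku, "qty": qty,
            "description": p["description"], "price": p["price"]}

line = create_order_line("SKU1", 2)
catalog["SKU1"]["price"] = 249.00   # price drops three months later

print(line["price"])  # still 299.0 - the point-in-time value survives
```

In FileMaker this copy would typically be an auto-enter calculation or a scripted Set Field at record creation, rather than a live relationship into the catalog.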

  • Author

You may find the following note interesting in this context:

http://fmforums.com/forum/showpost.php?post/187127/

I think we are talking at cross purposes here now anyway. When I first looked at this problem I had not explored very far into FM, and (as you can see from the length of this post) a great deal of valuable assistance has been provided by those who have generously given their time.

In my equipment table, I have now defined several fields which duplicate information in the child records. However, these are unstored calculation fields.

I am still working through the issue of writing data to the child record from a calculation defined in the parent, but I no longer need to write to the parent from the child - given that I will be running a script from a calculation field in the child (using a plugin) which will update the parent and, hopefully, provoke the execution of the relevant calculation fields at the parent record.

If all of this processing were to be built in to the initial script it would be a very large script and one which would be, I believe, difficult to read and maintain: ensuring that large chunks of the logic were NOT applied to a specific situation would take as much careful design as the creation of the logic in the first place.

You've repeated this theme several times in this thread and it's caught my eye. You do know you can create multiple scripts that can be called from within one main script?

For example whatever processing is generic to all types can be performed then:

If [Equipment::Type = "SIMM"]
  Perform Script ["SIMM Alert"]
Else If [Equipment::Type = "Air Filter"]
  Perform Script ["Air Filter Alert"]
End If

  • Author

You've repeated this theme several times in this thread and it's caught my eye. You do know you can create multiple scripts that can be called from within one main script?

For example whatever processing is generic to all types can be performed then:

If [Equipment::Type = "SIMM"]
  Perform Script ["SIMM Alert"]
Else If [Equipment::Type = "Air Filter"]
  Perform Script ["Air Filter Alert"]
End If

LOLOL... I think we have done this to death now... If I place the logic outside the equipment table, or preferably equipment type table, I have to build a script which allows for each equipment type and then calls the appropriate sub-script. If I add a new equipment type I have to remember to change the script, and if I delete one, I have to do the same. By leaving the decision process in the equipment table it will always happen when necessary - automatically...

It's really about using the power of the database to manage itself - right now I don't have any scripts (except the external script which dumps the message data into the database). I just have calculation fields, and most of these just produce data. I guess I am going to need a couple of scripts to update data in the message table from a calculation field in the equipment table, but that should be a simple script which just updates a single field: thereafter calculation fields in the message table can take over.

So far, so good. Only I can't for the life of me work out how to get today's date in a calculation field: I need to subtract the date of the latest message from today's date and get an answer in days (actually an answer in week days would be better, but I can live with days). The books say that "//" should produce today's date in a calculation, but it doesn't do it for me... Any clues?

Get (CurrentDate)

Must be unstored or it will not update in a calculation; it is always current when in a script.

Edited by Guest

  • Author

Only I can't for the life of me work out how to get today's date in a calculation field: I need to subtract the date of the latest message from today's date and get an answer in days (actually an answer in week days would be better, but I can live with days). The books say that "//" should produce today's date in a calculation, but it doesn't do it for me... Any clues?

Solved my problem. Get( CurrentDate ) - LatestAlertDate gives me the answer in days... :-)
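The Get( CurrentDate ) - LatestAlertDate arithmetic can be sketched with Python dates, including the weekday-only count the author said would be even better (the function names and dates here are illustrative):

```python
import datetime

def days_since(alert_date, today):
    """Plain day difference, like Get(CurrentDate) - LatestAlertDate."""
    return (today - alert_date).days

def weekdays_since(alert_date, today):
    """Count Mon-Fri days strictly after alert_date, up to and including today."""
    d, count = alert_date, 0
    while d < today:
        d += datetime.timedelta(days=1)
        if d.weekday() < 5:   # 0=Mon .. 4=Fri
            count += 1
    return count

alert = datetime.date(2024, 6, 7)    # a Friday
today = datetime.date(2024, 6, 10)   # the following Monday
print(days_since(alert, today), weekdays_since(alert, today))  # 3 1
```

In FileMaker the calendar-day version is a one-line unstored calc; a weekday version would need a similar loop in a custom function or script.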

  • Author

Get (CurrentDate)

Must be unstored or it will not update in a calculation; it is always current when in a script.

Thanks matey - just figured it out. Didn't know about the unstored part, but it's used in an unstored calc field so that's cool...

The books say that "//" should produce today's date in a calculation

It actually does, but only when being rendered on a layout!

--sd

If I place the logic outside the equipment table, or preferably equipment type table, I have to build a script which allows for each equipment type and then calls the appropriate sub-script. If I add a new equipment type I have to remember to change the script and if I delete one, I have to do the same. By leaving the decision process in the equipment table it will always happen when necessary - automatically...

It depends on what you mean by "logic". If a new type of equipment requires a different script action, you will have to modify the existing script (or add a new one) - anyway. OTOH, if the difference can be calculated using data (or even a formula) from the type record, then the script can run unmodified.

So ultimately it comes back to the question of what triggers the script. I still think your method adds unnecessary complexity and - by using a plugin - vulnerability to the process.

when I "duplicate" information in my messages table, I am storing the actual value of a field at a specific point in time. ... A good example of where you would do this is in an order lines table where you "duplicate" the product description, price, weight etc.

I'm afraid I cannot see the similarity at all. You already have the history of changes in the messages table. Those are your 'line items' - the ones that won't be changing. Copying the most recent data into the parent equipment record, and replacing it every time a new message arrives serves no purpose.

where I am suggesting I will not implement your advice, I feel obliged to offer some reasoning.

You are free to take my advice or dismiss it, with or without explanation. However, when the explanation turns into a rather condescending lecture about good design practices, it implies that my advice goes against those and therefore I need to be lectured about them.

Perhaps when you get more familiar with Filemaker (say a year from now), you will come to realize why the suggestions made by me and others in this thread ARE good practice - even if right now they may seem to contradict your experience.

Of the 70 odd long posts in this thread I missed at least one in which comment already covered sub scripts. My apologies to you both.

I would still have made the same post however, but simply emphasizing what comment and Colin have already said.

I think the point needs hammering home. Other posters have done a valiant job, but you still are stuck in non-FM think. Scripts are powerful tools for accomplishing tasks in Filemaker. They're not "outside the database".

If I place the logic outside the equipment table, or preferably equipment type table, I have to build a script which allows for each equipment type and then calls the appropriate sub-script.

Yes, the alternative being you build the logic into a calculation field. I am loath to change calculation fields. I prefer to change scripts. They're far easier to edit and debug.

If I add a new equipment type, I have to remember to change the script, and if I delete one, I have to do the same. By leaving the decision process in the equipment table it will always happen when necessary - automatically...

Well, you don't need to delete the old one. Keep it around - it's not hurting anyone, and you may get that equipment type back again. As far as remembering to change the script if you add one... I'm not sure I understand that as an argument. You do remember to build the logic into your calculation field, right?

It's really about using the power of the database to manage itself - right now I don't have any scripts (except the external script which dumps the message data into the database).

Having no scripts in a solution is generally a sign of an FM novice.

FileMaker has some weaknesses in its calculation engine; these include the inability of a record to monitor changes to its relations, the speed of unstored calculations over a network, and the lack of script triggers.

Scripts are one of the three basic parts of FileMaker; they can make up for some of the problems in other places.

I understand your philosophy. I think everyone on this thread does. But you're really making things more difficult for yourself than necessary by sticking to your theory rather than discovering what is practical. You've got some great resources here, and some great advice.

Why not listen to it? :twocents:


Having no scripts in a solution is generally a sign of an FM novice.

Well, he isn't building a user interface for this, supposedly, so I can understand the argument for a single script. And I think the method he's pursuing is possible. On your other points, DJ, I absolutely agree.

Where I differ in opinion from the OP (and I think you and everyone else are on board with this) is the notion that storing all the business rules embedded in calcs is:

1. more self documenting than a script

2. more comprehensible to a developer newly exposed to the product than a script

3. easier to maintain and update than a script when new rules come down the pike.

That said, I don't have any argument with storing what he's calling facts as calculated data in a table, or nesting logical chains in a series of calcs - e.g. field 1 is Y/N; field 2 is: if field 1 = Y then X, else Y; etc. Those are facts that can resolve to true/false.

I do disagree that what happens as a result of those facts should also be embedded in calcs - e.g. field 1 is Y/N, and field 2 is a script trigger that runs a specific event (email, for example) based on the contents of field 1.

I think this is an extremely smart guy who is diving into FileMaker with what he knows and is making progress. The problem is the perspective he's bringing with him is closing avenues to elegant, proven programming methods that are particular to the programming architecture of FileMaker, which is admittedly unique. With time, he'll undoubtedly re-evaluate what we're talking about here, and if after a few months of exposure to the product what we're saying makes sense to him, mix it with his own methodology and come up with something that satisfies his personal requirements.

And, I think it's made for interesting discussion, at the very least.

Colin

Well, he isn't building a user interface for this, supposedly, so I can understand the argument for a single script.

Fair enough.

I don't understand the issue entirely...how does he have one script that runs when a mail message is received and then needs to trigger a script? Are we talking about AppleScripts and FileMaker scripts?

And I think the method he's pursuing is possible.

Sure it is. I just think he's trying to fit an oval peg into a round hole.

That said, I don't have any argument with storing what he's calling facts as calculated data in a table, or nesting logical chains in a series of calcs, e.g. field 1 Y/N, field two, if field 1 Y, X, else Y, etc. Those are facts that can resolve to true false.

I do disagree that what happens as a result of those facts also is embedded in calcs

I think that's an important distinction...storing facts versus storing actions which I know has already been said, by comment I believe.

And, I think it's made for interesting discussion, at the very least.

Very true.

I just wanted to add one more comment generally - not about Byronic in particular, but about developers coming from other life experiences to the world of FileMaker:

I think it's a very good thing when people with perspectives from other development environments bring that perspective to bear when looking at FileMaker development.

FileMaker's had such a long history of the data layer being wedded to the programming layer - of context being made explicit in calculations, scripts, layouts and so on - that many of us have had to relearn how to think about programming when FileMaker finally gave us tools that allowed us to abstract our code - variables, custom functions, script parameters and script results, for example. Especially those of us who were with FileMaker since the earlier versions - 2, 3, 4, 5 and 6.

Programming practices like abstraction and OOP are things that we have in the past implemented piecemeal rather than rigorously, partly out of expediency on particular projects, but partly because the programming environment didn't reward us for doing so, and in large part made it impossible.

That's no longer quite the case, and we as a community have some catching up to do in terms of thinking the way programmers who've had those advantages do. We can only learn when we hear someone speak from a context outside the cloistered world of FileMaker programming.

We also, I think, should be pushing FileMaker in the direction it's already started, by implementing coding practices that encourage abstraction and demanding better tools for abstraction. I'm sure you can think of many obvious examples beyond being able to specify a field in Set Field with a variable. There are plenty of places where the programming layer is still superglued onto the data layer.

Now I'll step off my stupid little soapbox and leave this thread alone :)


I think this is an extremely smart guy who is diving into FileMaker with what he knows and is making progress.

Indeed, this isn't one of those users Ryan Rosenberg dreams about, who would ditch their spreadsheet for bookings in the near future.

--sd

  • Author

I'm afraid I cannot see the similarity at all. You already have the history of changes in the messages table. Those are your 'line items' - the ones that won't be changing. Copying the most recent data into the parent equipment record, and replacing it every time a new message arrives serves no purpose.

When I started looking at this I was not sure I would be able to create a view of the data using information from the messages file. However, thanks to the assistance of the forum I have been able to accomplish that, so no duplicate information from the messages file needs to be stored in the equipment table.

But I still need to take a "snapshot" of some of the fields in the equipment, and other, tables and store them in the messages record where they behave as a permanent audit trail, as in the order lines example I gave.

  • Author

So ultimately it comes back to the question of what triggers the script. I still think your method adds unnecessary complexity and - by using a plugin - vulnerability to the process.

Actually, it's much simpler than you think. Most of the work in the equipment table is establishing the "facts". Then the decision about what to do about them is fairly easy. The nice thing about this is that the "facts" are all displayed on a layout for anyone who wants to review the process - and the decision making calculation script is quite short and simple for each equipment type.

By keeping the logic in the equipment record, it makes the database much less obtuse. For any piece of equipment you can readily see all the parameters which are being considered, and what the decision-making process is. Adding a new type of equipment is easy, and entirely modular. Removing an equipment type has no impact on the logic elsewhere in the system.

I still need to take a "snapshot" of some of the fields in the equipment, and other, tables and store them in the messages record where they behave as a permanent audit trail, as in the order lines example I gave.

That's an entirely different matter - easily accomplished by lookups.

  • Author

I think there are many ways to solve a problem with a product as sophisticated as Filemaker. I guess my approach has developed from 30 years or so of building solutions in all sorts of environments, starting in RPG-II on IBM S/3 in 1977. Since then I have worked with a lot of RDBMS products and built systems in many of them. All have their strengths and weaknesses.

But here, I think my approach is driven by the view I have of the process. What are we doing here? The answer is we are managing equipment. Various things happen to that equipment - some of which are in response to alerts. But other actions will be taken as warranties approach expiry, new software patches are released, or operational hours approach recommended service intervals and so forth. It is true that some of the most complex processes are those that involve alerts delivered to the messages table - but, again, it is not the alert itself that drives the process, but the business rules associated with the piece of equipment. Indeed, most of the alerts are ignored once the system determines that no response is required.

Also, philosophically, a process which relates to a piece of equipment is, to my mind, an attribute of that equipment. This is really no different from a price or a product code - and should be associated directly with the equipment. If possible, I believe this should be stored along with the equipment in the database table.

As far as readability is concerned, a database analyst new to this system only has to know that the business logic for every piece of equipment is embedded in clearly labelled fields in the equipment record. This logic is relatively simple since it works with the facts which have already been established by other calc fields.

Thus a response to a warning error will only have a few if-tests: if warning level > threshold, and if currentwarranty = Y, and if blah blah blah... All the complex calculations are elsewhere - but easily discovered, since you can look at the currentwarranty field and see the date calculations involved. Once a decision is made about what to do, the process can be handed off to an email process, or a purchase order process etc, so the decision-making logic remains uncluttered.
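The shape of that decision layer can be sketched in Python (the field names, threshold, and action labels here are all invented for illustration - they are not the poster's actual schema):

```python
# Hypothetical sketch of the layered decision described above: the
# "facts" (warning_level, current_warranty) are established by separate
# calculation fields, so the decision itself stays short and readable.

def decide_response(warning_level, threshold, current_warranty):
    """Return the action for a warning alert, given pre-computed facts."""
    if warning_level <= threshold:
        return "Ignored"         # below threshold: no response required
    if current_warranty == "Y":
        return "Email"           # under warranty: notify the provider
    return "Purchase Order"      # out of warranty: order a replacement
```

The point being illustrated is that the complexity lives in the fact-establishing calculations, not in the final decision.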

In a complex situation, the very complexity of a long script, with multiple nested IF tests and loops, can make it very difficult to manage over time, and it is more prone to errors in logic. Adding new logic for specific equipment types, and removing old logic for devices that have been withdrawn, is not only laborious, but each time the script is disturbed there is an opportunity to introduce bugs. Indeed, my existing system is built like this - it does not use any of the features of MS SQL, and all the logic is in code - poorly documented and requiring a specific development environment to maintain. There are not only serious logic errors, but the code has proven very hard to modify to take account of things which were not imagined at the time the solution was built.

A good example of this is that we found that the initial life of some items is significantly less than the life of the replacement items. This played havoc with the system when we had a rollout of new devices - since the system worked on the life expectancy of the replacement items. To add the extra logic to allow for a one-off situation such as this would make an already complex application even more so.

In the new system, I have a field for the life of the initial item - it defaults to the life of the replacement item but can be overridden at an EquipmentType level. A discrete calculation field looks at things such as purchase date and alert history to decide whether the currently installed item is the initial, short-life, item or the standard replacement. The result of this calculation is displayed, along with the parameters used to make this decision, so that an IT manager can understand, at a glance, why a replacement part has been ordered earlier than he or she might otherwise expect.

The field which contains the value of the life of the item currently in the machine is a simple calculation field which adjusts the lifetime to reflect the decision already made.

So, here there are two calculation fields where one might have sufficed - but the system is more readable; the reasoning is less obtuse and the situation is immediately apparent to anyone wanting reassurance that the system is working properly.

I feel this is important - (nearly) all systems have bugs, and a system which inspires confidence in the user by revealing all its workings in this way saves time in the long run. I can assure you that, in 12 months time, when I have cause to wonder why something happened, I will have only a dim recollection of the situation and circumstances involved.

When I posted here I was hoping for some assistance to enable me to implement the solution I had in my head. I wasn't expecting, but have thoroughly enjoyed, the opportunity to debate the philosophy behind various ways of doing things. As a consequence the design of my application has changed, though not (so it seems) to satisfy everyone. However, a robust discussion provides the opportunity to test one's own decisions. Since this is my own application for my own business I am at liberty to build it any which way - and since this is the third version of this system I am determined to get it right this time.

Time will tell... :-)

  • Author

That's an entirely different matter - easily accomplished by lookups.

Is it? I am not so sure. I use Lookups elsewhere, but here the values of the fields of which snapshots are required are determined over the course of the decision-making process.

(a) The message is received

(b) Parameters are checked, calculations performed

(c) Decisions are made

(d) A snapshot is taken and written to the messages record

Perhaps a lookup in a calculation field in the messages table which is fired off somehow in step (c)... Dunno. Haven't gotten to that stage as yet.

  • Author

You are free to take my advice or dismiss it, with or without explanation. However, when the explanation turns into a rather condescending lecture about good design practices, it implies that my advice goes against those and therefore I need to be lectured about them.

I didn't think I was being condescending - merely stating my case in response to your arguments about what constitutes good design practice. Perhaps you feel that, being new to FM, I should have no views on this. I am not new to RDBMS, or business applications. And I have the grey hair to show for that :-)

Perhaps when you get more familiar with Filemaker (say a year from now), you will come to realize why the suggestions made by me and others in this thread ARE good practice - even if right now they may seem to contradict your experience.

Perhaps. Anything is possible...

I use Lookups elsewhere, but here the values of the fields of which snapshots are required are determined over the course of the decision-making process

...

Perhaps a lookup in a calculation field in the messages table which is fired off somehow in step (c).

I'm afraid I cannot answer that without knowing a lot more specific details. Offhand, I would say that if an attribute requires monitoring, it should be in a separate table. Thus no lookup from the parent table is required - one simply creates a new record with the current value. A good example of this is a car odometer: each refueling is a record of the current mileage and the fuel intake. Nothing takes place in the parent car record (except when one views it - then an unstored calculation will evaluate to display the fuel efficiency).
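The odometer analogy can be made concrete with a small sketch (all records and field names here are invented): each refueling is its own child record, and the efficiency figure is derived on demand, like an unstored calculation in the parent car record.

```python
# Sketch of the odometer analogy: child records hold the monitored
# values; nothing is written back to the parent.

refuelings = [
    {"odometer": 10000, "litres": 40},   # first fill: baseline reading
    {"odometer": 10500, "litres": 42},
    {"odometer": 11050, "litres": 45},
]

def fuel_efficiency(records):
    """Distance per litre since the first recorded fill - computed on
    demand from the child records, never stored in the parent."""
    distance = records[-1]["odometer"] - records[0]["odometer"]
    litres_used = sum(r["litres"] for r in records[1:])
    return distance / litres_used
```

Adding another refueling record changes nothing except the next evaluation of `fuel_efficiency`.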

There are other options, e.g. you could trigger a relookup by setting the ParentID in the child record to itself. Or you could have an audit field in the parent logging the changes to a value (auto-entering itself and the monitored field).

Perhaps you feel that, being new to FM, I should have no views on this.

If I thought so, I would have said so. I said something else instead. I believe that's enough on this subject.

  • Author

I'm afraid I cannot answer that without knowing a lot more specific details. Offhand I would say that if an attribute requires monitoring, it should be in a separate table. Thus no lookup from the parent table is required - one simply creates a new record with the current value.

Are we talking at cross-purposes again? The warranty details for the piece of equipment are stored in the equipment table. The replacement parts for a piece of equipment are stored in a separate table linked to the EquipmentType record. Calc fields in the equipment record pull down this info when required.

A good example of this is a car odometer: each refueling is a record of the current mileage and the fuel intake. Nothing takes place in the parent car record (except when one views it - then an unstored calculation will evaluate to display the fuel efficiency).

Good example - so my refuelling record would contain a snapshot of the odometer at the time of the refuelling. In this case there may be a dozen or so parameters which are checked by the decision making process - it is these which need to be recorded for audit purposes. eg:

a) warranty status

b) new issue or duplicate message about existing issue

c) if duplicate, what is the ID of the message which is being actioned

d) threshold level

e) ...

f) action taken (Unprocessed, Ignored, Actioned)

g) Email? (recipient address)

h) Purchase Order No

i) ...

The idea is that the message record becomes the log - when all else is deleted it is still possible to reconstruct what occurred, and why.
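The "message record as log" idea sketched above amounts to copying the facts "on the day" into the audit record at processing time. A minimal illustration (every field name here is hypothetical, not the poster's schema):

```python
# Hypothetical sketch: freeze the decision parameters into the message
# (audit) record when the message is processed, so the record alone can
# later reconstruct what occurred and why.

def snapshot_into_message(message, equipment, action, email=None):
    """Copy the current facts into the permanent message record."""
    message.update({
        "warranty_status": equipment["warranty_status"],
        "threshold": equipment["threshold"],
        "action_taken": action,        # Unprocessed / Ignored / Actioned
        "email_recipient": email,
    })
    return message

msg = snapshot_into_message(
    {"id": 26},
    {"warranty_status": "Maintenance", "threshold": 5},
    "Actioned",
    email="support@example.com",
)
```

Once written, the message record no longer depends on the parent equipment record surviving unchanged.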

There are other options, e.g. you could trigger a relookup by setting the ParentID in the child record to itself. Or you could have an audit field in the parent logging the changes to a value (auto-entering itself and the monitored field).

I don't quite follow this - the messages record is the audit trail and I just need to write a snapshot of the parameters which existed in the parent record at the time the message was processed, along with the result (ie: ignored, duplicate of prior alert / processed, part ordered etc etc). The parameters are, at the moment, mostly unstored calculation fields in the equipment record so I am not sure whether I can use a lookup from the child to pull these values down into the audit fields in the message record itself.

I am sorry - I don't understand what you mean by "there may be a dozen or so parameters which are checked by the decision making process - it is these which need to be recorded for audit purposes".

I believe it's best to analyze a single parameter at a time. For example, warranty status. You say that:

The warranty details for the piece of equipment are stored in the equipment table.

Why then is it necessary to record them elsewhere - if they are not being modified? I cannot see the connection between checking something and recording it. Of course, you may want to record this for convenience purposes alone - but it is not REQUIRED.

Regarding unstored calculations, they may be a red herring. I would go after stored data. Only stored data needs tracking. Once you have that, anything can be calculated again. In any case, you can lookup stored data and/or unstored calculations alike.

  • Author

I am sorry - I don't understand what you mean by "there may be a dozen or so parameters which are checked by the decision making process - it is these which need to be recorded for audit purposes".

I believe it's best to analyze a single parameter at a time. For example, warranty status. You say that:

Why then is it necessary to record them elsewhere - if they are not being modified? I cannot see the connection between checking something and recording it. Of course, you may want to record this for convenience purposes alone - but it is not REQUIRED.

The warranty status changes over time. A product will usually have an initial warranty, perhaps an extension purchased at the time of the initial order, and then perhaps be under an annual maintenance agreement. In the equipment record there is a calculation field which determines what the status is "right now", and it is this status which is written to the audit record (message record) at the time that a response to the message is determined. If I don't lock down this value in the messages record, the warranty status might have changed when you later review the transaction.

The same is true for other parameters - in your example it would be the equivalent of the odometer reading on the day.

It is quite usual to take this approach in an order/entry environment, because the price may fluctuate or you may want to delete products which are no longer available. If you do not freeze these values in your order lines record you will either have null values for things like description and price (because the parent record no longer exists) or your order will reflect a newer price and therefore be meaningless.
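The order-lines argument can be shown in a few lines (product data and field names invented for illustration): the line item freezes description and price at order time, so a later price change or deletion of the product cannot corrupt old invoices.

```python
# Sketch: a line item takes frozen copies of the product attributes at
# the moment of the order.

products = {"P1": {"description": "Router", "price": 100.0}}

def add_line_item(order, product_id, qty):
    p = products[product_id]
    order.append({
        "product_id": product_id,
        "description": p["description"],   # frozen copy
        "price": p["price"],               # frozen copy
        "qty": qty,
    })

order = []
add_line_item(order, "P1", 2)
products["P1"]["price"] = 120.0   # the price rises later...
del products["P1"]                # ...or the product is withdrawn
# the line item still reads exactly as invoiced
```

Without the frozen copies, the deleted product would leave null values in any reconstructed invoice, exactly as described above.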

Regarding unstored calculations, they may be a red herring. I would go after stored data. Only stored data needs tracking. Once you have that, anything can be calculated again. In any case, you can lookup stored data and/or unstored calculations alike.

In the equipment record I store only those things which cannot be calculated "on the fly". In the messages record I record only those details which need to be "frozen in time". So, I will write a permanent record in the message record of a value which is calculated "at the time" in the equipment record. Does that make sense?

In most invoicing solutions, no one bothers to track price changes per se. So if your situation is similar (i.e. there's no audit log on the warranty status, or on the stored data used to calculate the warranty status), then yes - you do need a snapshot.

However, I would need a good reason to take a snapshot of the unstored calculated result, rather than of the stored data itself - especially when the snapshot is taken for auditing purposes. It is easy to reconstruct the calculated result from its original input, but the reverse may not be true.

  • Author

In most invoicing solutions, no one bothers to track price changes per se. So if your situation is similar (i.e. there's no audit log on the warranty status, or on the stored data used to calculate the warranty status), then yes - you do need a snapshot.

Hmmm. Well I suppose it depends on the likelihood of there being a change to a price between the time of the initial transaction and a later retrieval of the details. If a customer asked for a copy of an invoice 2 years down the track, it would be impossible to regenerate accurately if the product had been deleted or the price had changed. In the IT industry you must take a snapshot of the detail at the time of the order if you want to be able to reliably reconstruct an invoice from electronic records down the track...

However, I would need a good reason to take a snapshot of the unstored calculated result, rather than of the stored data itself - especially when the snapshot is taken for auditing purposes. It is easy to reconstruct the calculated result from its original input, but the reverse may not be true.

Let's say that you are monitoring the performance of your service provider 18 months down the track. A piece of equipment was purchased with a standard warranty, but was then transferred to a maintenance agreement on which there were specific Service Level Agreements. You are measuring the performance of the service provider against these SLAs in the process of renegotiating the agreement.

If you have not taken a snapshot of the warranty status at the time, you have a complex task to determine what level of performance was in force at the time. If you have not stored the parameters, you will need to write, debug and implement a complex report to go back and determine what the state of play was at the time of the alert. You will also need to refer to any manual records you have made to check that priority, threshold, and other parameters have not been altered in the time since the alert occurred.

In my experience, no-one really remembers that level of detail for very long in a complex environment. A snapshot of the situation "on the day" is by far the most reliable way to ensure that you are working with the correct data.

Much better, in my opinion, to be able to point to the log file (which is what we have here) and see immediately that the alert was dealt with under a specific agreement with a specific SLA, and that the system made its decision based on 7 parameters, the values of which are recorded in the log file itself.

Perhaps you will think this is overkill - and in many environments it may be. But not this one.

I am afraid you misunderstood my first point. I meant no one bothers to record what the price of a product was before it was changed. The only historical records that exist are the line items of products sold. Therefore, it is impossible to tell exactly WHEN a price was changed - all you can see is that the price was x on the day of this sale, and y when another sale took place. Somewhere in-between the price has changed - perhaps more than once.

Now, it would be quite possible to monitor the price fluctuations themselves. In that case, you wouldn't need to look up the current price into a line item, because each line item has a date and a ProductID - so the correct price can be reconstructed at any time.

There are good reasons not to do so in an invoicing solution, but it's possible - and it might be preferable in other types of solution.
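That alternative - tracking the price changes themselves and reconstructing the price from a dated history - can be sketched as follows (the product, dates, and prices are invented):

```python
# Sketch: a dated price-change log lets a line item carry only a date
# and a ProductID; the historical price is reconstructed on demand.

from datetime import date

price_history = {  # ProductID -> [(effective_date, price), ...]
    "P1": [(date(2007, 1, 1), 100.0), (date(2007, 6, 1), 120.0)],
}

def price_on(product_id, when):
    """Latest price whose effective date is on or before `when`."""
    current = None
    for effective, price in sorted(price_history[product_id]):
        if effective <= when:
            current = price
    return current
```

A sale dated between the two changes resolves to the earlier price; one dated after the second change resolves to the later price.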

Let's say I am tracking the performance of employees over time. I have some parameters that I measure, and a formula that weights these parameters to produce an overall score. Suppose I only record the overall scores. Then, after doing so for 18 months, I decide that the original weights do not suit my purpose as well as I thought, and I want to change them. Consequence: 18 months worth of data down the drain.

I cannot tell from your description which example fits your case better, but I hope the point is clear.
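The employee-score example can be made concrete (all data and weights here are invented): if the raw parameters are stored, the weighting formula can be revised later and every score recomputed; if only the overall scores were stored, the 18 months of data could not be re-weighted.

```python
# Sketch: store the measured parameters, not just the weighted result.

measurements = [
    {"quality": 8, "speed": 6},
    {"quality": 5, "speed": 9},
]

def score(rec, weights):
    """Weighted overall score from the raw, stored parameters."""
    return sum(rec[k] * w for k, w in weights.items())

original_weights = {"quality": 0.7, "speed": 0.3}
revised_weights = {"quality": 0.5, "speed": 0.5}

original_scores = [score(r, original_weights) for r in measurements]
# Because the raw measurements were kept, revising the weights simply
# means recomputing - nothing goes "down the drain":
revised_scores = [score(r, revised_weights) for r in measurements]
```

This is the "snapshot the inputs, not the unstored result" point in miniature.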

  • Author

I am afraid you misunderstood my first point. I meant no one bothers to record what the price of a product was before it was changed. The only historical records that exist are the line items of products sold. Therefore, it is impossible to tell exactly WHEN a price was changed - all you can see is that the price was x on the day of this sale, and y when another sale took place. Somewhere in-between the price has changed - perhaps more than once.

Now, it would be quite possible to monitor the price fluctuations themselves. In such case, you wouldn't need to lookup the current price into a line item, because each line item has a date and a ProductID - so the correct price can be reconstructed at any time.

There are good reasons not to do so in an invoicing solution, but it's possible - and it might be preferable in other types of solution.

Let's say I am tracking the performance of employees over time. I have some parameters that I measure, and a formula that weights these parameters to produce an overall score. Suppose I only record the overall scores. Then, after doing so for 18 months, I decide that the original weights do not suit my purpose as well as I thought, and I want to change them. Consequence: 18 months worth of data down the drain.

I cannot tell from your description which example fits your case better, but I hope the point is clear.

Cool - we are of the same mind here more or less. I will be recording the data (parameters) as well as the decision.

Insofar as historical records are concerned, I am of the view that less is more. One record with all the data you need is preferable to having to maintain a whole database. I speak from experience - I cannot delete an employee, creditor, salesperson, product or customer from the well-normalised MS SQL database in my financial system. Consequently 90% or more of the data I am lugging around after several years is irrelevant.
