
Triggers for database update



If I place the logic outside the equipment table, or preferably the equipment type table, I have to build a script which allows for each equipment type and then calls the appropriate sub-script. If I add a new equipment type I have to remember to change the script, and if I delete one, I have to do the same. By leaving the decision process in the equipment table it will always happen when necessary - automatically...

It depends on what you mean by "logic". If a new type of equipment requires a different script action, you will have to modify the existing script (or add a new one) anyway. OTOH, if the difference can be calculated using data (or even a formula) from the type record, then the script can run unmodified.

So ultimately it comes back to the question of what triggers the script. I still think your method adds unnecessary complexity and - by using a plugin - vulnerability to the process.

when I "duplicate" information in my messages table, I am storing the actual value of a field at a specific point in time. ... A good example of where you would do this is in an order lines table where you "duplicate" the product description, price, weight etc.

I'm afraid I cannot see the similarity at all. You already have the history of changes in the messages table. Those are your 'line items' - the ones that won't be changing. Copying the most recent data into the parent equipment record, and replacing it every time a new message arrives serves no purpose.


where I am suggesting I will not implement your advice, I feel obliged to offer some reasoning.

You are free to take my advice or dismiss it, with or without explanation. However, when the explanation turns into a rather condescending lecture about good design practices, it implies that my advice goes against those and therefore I need to be lectured about them.

Perhaps when you get more familiar with FileMaker (say a year from now), you will come to realize why the suggestions made by me and others in this thread ARE good practice - even if right now they may seem to contradict your experience.


Of the 70-odd long posts in this thread I missed at least one in which comment already covered sub-scripts. My apologies to you both.

I would still have made the same post, however, simply emphasizing what comment and Colin have already said.

I think the point needs hammering home. Other posters have done a valiant job, but you're still stuck in non-FM think. Scripts are powerful tools for accomplishing tasks in FileMaker. They're not "outside the database".

If I place the logic outside the equipment table, or preferably the equipment type table, I have to build a script which allows for each equipment type and then calls the appropriate sub-script.

Yes, the alternative being that you build the logic into a calculation field. I am loath to change calculation fields. I prefer to change scripts. They're far easier to edit and debug.

If I add a new equipment type I have to remember to change the script, and if I delete one, I have to do the same. By leaving the decision process in the equipment table it will always happen when necessary - automatically...

Well, you don't need to delete the old one. Keep it around; it's not hurting anyone, and you may get that equipment type back again. As far as remembering to change the script if you add one...I'm not sure I understand that as an argument. You do remember to build the logic into your calculation field, right?

It's really about using the power of the database to manage itself - right now I don't have any scripts (except the external script which dumps the message data into the database).

Having no scripts in a solution is generally a sign of an FM novice.

FileMaker has some weaknesses in terms of its calculation engine; those include the inability of a record to monitor changes to its relations, the speed of unstored calculations over a network, and the lack of script triggers.

Scripts are one of the three basic parts of FileMaker; they can make up for some of the problems in other places.

I understand your philosophy. I think everyone on this thread does. But you're really making things more difficult for yourself than necessary by sticking to your theory rather than discovering what is practical. You've got some great resources here, and some great advice.

Why not listen to it? :twocents:


Having no scripts in a solution is generally a sign of an FM novice.

Well, he isn't building a user interface for this, supposedly, so I can understand the argument for a single script. And I think the method he's pursuing is possible. On your other points, DJ, I absolutely agree.

Where I differ in opinion from the OP (and I think you and everyone else are on board with this : ) is the notion that storing all the business rules embedded in calcs is:

1. more self-documenting than a script

2. more comprehensible to a developer newly exposed to the product than a script

3. easier to maintain and update than a script when new rules come down the pike.

That said, I don't have any argument with storing what he's calling facts as calculated data in a table, or nesting logical chains in a series of calcs, e.g. field 1 Y/N, field two, if field 1 Y, X, else Y, etc. Those are facts that can resolve to true/false.
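As a minimal sketch of such a chain in FileMaker calculation syntax (field names and result values are placeholders, not anything from the OP's file):

    // Field1 - a fact field; the comparison resolves to True (1) or False (0)
    WarrantyExpiry >= Get ( CurrentDate )

    // Field2 - the next link in the chain, consuming the fact in Field1
    If ( Field1 ; "X" ; "Y" )

Each field establishes one fact, and later fields consume earlier ones, so the chain stays short and readable.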

I do disagree that what happens as a result of those facts is also embedded in calcs - e.g. field 1 Y/N, field two is a script trigger that runs a specific event (email, for example) based on the contents of field 1.

I think this is an extremely smart guy who is diving into FileMaker with what he knows and is making progress. The problem is that the perspective he's bringing with him is closing avenues to elegant, proven programming methods that are particular to the programming architecture of FileMaker, which is admittedly unique. With time he'll undoubtedly re-evaluate what we're talking about here, and if, after a few months of exposure to the product, what we're saying makes sense to him, he'll mix it with his own methodology and come up with something that satisfies his personal requirements.

And, I think it's made for interesting discussion, at the very least.

Colin


Well, he isn't building a user interface for this, supposedly, so I can understand the argument for a single script.

Fair enough.

I don't understand the issue entirely...how does he have one script that runs when a mail message is received and then needs to trigger a script? Are we talking about AppleScripts and FileMaker scripts?

And I think the method he's pursuing is possible.

Sure it is. I just think he's trying to fit an oval peg into a round hole.

That said, I don't have any argument with storing what he's calling facts as calculated data in a table, or nesting logical chains in a series of calcs, e.g. field 1 Y/N, field two, if field 1 Y, X, else Y, etc. Those are facts that can resolve to true/false.

I do disagree that what happens as a result of those facts is also embedded in calcs

I think that's an important distinction...storing facts versus storing actions, which I know has already been said, by comment I believe.

And, I think it's made for interesting discussion, at the very least.

Very true.


I just wanted to add one more comment generally - not about Byronic in particular, but about developers coming from other life experiences to the world of FileMaker:

I think it's a very good thing when people with perspectives from other development environments bring that perspective to bear when looking at FileMaker development.

FileMaker's had such a long history of the data layer being wedded to the programming layer - of context being made explicit in calculations, scripts, layouts and so on - that many of us have had to relearn how to think about programming when FileMaker finally gave us tools that allowed us to abstract our code - variables, custom functions, script parameters and script results, for example. Especially those of us who were with FileMaker since the earlier versions - 2, 3, 4, 5 and 6.

Programming practices like abstraction and OOP are things that we have in the past implemented piecemeal rather than rigorously - partly because of expediency on particular projects, but partly because the programming environment didn't reward us for doing so, and in large part made it impossible.

That's no longer quite the case, and we as a community have some catching up to do in terms of thinking the way programmers who've had those advantages do. We can only learn when we hear someone speak from a context outside the cloistered world of FileMaker programming.

We also, I think, should be pushing FileMaker in the direction it's already started, by implementing coding practices that encourage abstraction and demanding better tools for abstraction. I'm sure you can think of many obvious examples beyond being able to specify a field in Set Field with a variable. There are plenty of places where the programming layer is still superglued onto the data layer.

Now I'll step off my stupid little soapbox and leave this thread alone :)


I think this is an extremely smart guy who is diving into FileMaker with what he knows and is making progress.

Indeed, this isn't one of those users Ryan Rosenberg has wet dreams about, the kind who would ditch their spreadsheet for bookings in the near future.

--sd


I'm afraid I cannot see the similarity at all. You already have the history of changes in the messages table. Those are your 'line items' - the ones that won't be changing. Copying the most recent data into the parent equipment record, and replacing it every time a new message arrives serves no purpose.

When I started looking at this I was not sure I would be able to create a view of the data using information from the messages file. However, thanks to the assistance of the forum I have been able to accomplish that, so no duplicate information from the messages file needs to be stored in the equipment table.

But I still need to take a "snapshot" of some of the fields in the equipment (and other) tables and store them in the messages record, where they serve as a permanent audit trail, as in the order lines example I gave.


So ultimately it comes back to the question of what triggers the script. I still think your method adds unnecessary complexity and - by using a plugin - vulnerability to the process.

Actually, it's much simpler than you think. Most of the work in the equipment table is establishing the "facts". Then the decision about what to do about them is fairly easy. The nice thing about this is that the "facts" are all displayed on a layout for anyone who wants to review the process - and the decision-making calculation script is quite short and simple for each equipment type.

Keeping the logic in the equipment record makes the database much less obtuse. For any piece of equipment you can readily see all the parameters which are being considered, and what the decision-making process is. Adding a new type of equipment is easy, and entirely modular. Removing an equipment type has no impact on the logic elsewhere in the system.


I still need to take a "snapshot" of some of the fields in the equipment (and other) tables and store them in the messages record, where they serve as a permanent audit trail, as in the order lines example I gave.

That's an entirely different matter - easily accomplished by lookups.
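For example (assuming a Messages-to-Equipment relationship, with field names invented for illustration), a snapshot field in the messages table could be an auto-enter calculation built on the Lookup function, evaluated when the message record is created:

    // Messages::WarrantySnapshot - auto-enter calculation;
    // copies the related value once, and is never re-evaluated afterwards
    Lookup ( Equipment::WarrantyStatus ; "n/a" )

The second argument is simply a fallback value in case no related record is found.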


I think there are many ways to solve a problem with a product as sophisticated as FileMaker. I guess my approach has developed from 30 years or so of building solutions in all sorts of environments, starting with RPG-II on IBM S/3 in 1977. Since then I have worked with a lot of RDBMS products and built systems in many of them. All have their strengths and weaknesses.

But here, I think my approach is driven by the view I have of the process. What are we doing here? The answer is we are managing equipment. Various things happen to that equipment - some of which are in response to alerts. But other actions will be taken as warranties approach expiry, new software patches are released, or operational hours approach recommended service intervals and so forth. It is true that some of the most complex processes are those that involve alerts delivered to the messages table - but, again, it is not the alert itself that drives the process, but the business rules associated with the piece of equipment. Indeed, most of the alerts are ignored once the system determines that no response is required.

Also, philosophically, a process which relates to a piece of equipment is, to my mind, an attribute of that equipment. This is really no different from a price or a product code - and should be associated directly with the equipment. If possible, I believe this should be stored along with the equipment in the database table.

As far as readability is concerned, a database analyst new to this system only has to know that the business logic for every piece of equipment is embedded in clearly labelled fields in the equipment record. This logic is relatively simple since it works with the facts which have already been established by other calc fields.

Thus a response to a warning error will only have a few if tests: if warning level > threshold and if currentwarranty = Y and if blah blah blah... All the complex calculations are elsewhere - but easily discovered, since you can look at the currentwarranty field and see the date calculations involved. Once a decision is made about what to do, the process can be handed off to an email process, or a purchase order process, etc., so the decision-making logic remains uncluttered.
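A sketch of what such a decision field might look like (fact-field names hypothetical; the heavy lifting has already happened in those fields):

    // Equipment::AlertResponse - short decision calculation
    Case (
        WarningLevel > Threshold and CurrentWarranty = "Y" ; "Order replacement part" ;
        WarningLevel > Threshold ; "Notify IT manager" ;
        "Ignore"
    )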

In a complex situation, the very complexity of a long script, with multiple nested IF tests and loops, can make it very difficult to manage over time, and more prone to errors in logic. Adding new logic for specific equipment types, and removing old logic for devices that have been withdrawn, is not only laborious; each time the script is disturbed there is an opportunity to introduce bugs. Indeed, my existing system is built like this - it does not use any of the features of MS SQL, and all the logic is in code - poorly documented and requiring a specific development environment to maintain it. There are not only serious logic errors, but the code has proven very hard to modify to take account of things which were not imagined at the time the solution was built.

A good example of this is that we found that the initial life of some items is significantly less than the life of the replacement items. This played havoc with the system when we had a rollout of new devices - since the system worked on the life expectancy of the replacement items. To add the extra logic to allow for a one-off situation such as this would make an already complex application even more so.

In the new system, I have a field for the life of the initial item - it defaults to the life of the replacement item but can be overridden at the EquipmentType level. A discrete calculation field looks at things such as purchase date and alert history to decide whether the currently installed item is the initial, short-life item or the standard replacement. The result of this calculation is displayed, along with the parameters used to make the decision, so that an IT manager can understand, at a glance, why a replacement part has been ordered earlier than he or she might otherwise expect.

The field which contains the value of the life of the item currently in the machine is a simple calculation field which adjusts the lifetime to reflect the decision already made.
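A rough sketch of those two fields (names hypothetical; the real decision would weigh the purchase date and alert history described above):

    // Equipment::IsInitialItem - true while the original, short-life part
    // is still installed; here crudely inferred from a replacement date
    IsEmpty ( FirstReplacementDate )

    // Equipment::CurrentItemLife - adjusts the lifetime to match that decision
    If ( IsInitialItem ; EquipmentType::InitialLife ; EquipmentType::ReplacementLife )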

So, here there are two calculation fields where one might have sufficed - but the system is more readable; the reasoning is less obtuse and the situation is immediately apparent to anyone wanting reassurance that the system is working properly.

I feel this is important - (nearly) all systems have bugs, and a system which inspires confidence in the user by revealing all its workings in this way saves time in the long run. I can assure you that, in 12 months' time, when I have cause to wonder why something happened, I will have only a dim recollection of the situation and circumstances involved.

When I posted here I was hoping for some assistance to enable me to implement the solution I had in my head. I wasn't expecting, but have thoroughly enjoyed, the opportunity to debate the philosophy behind various ways of doing things. As a consequence the design of my application has changed, though not (so it seems) enough to satisfy everyone. However, a robust discussion provides the opportunity to test one's own decisions. Since this is my own application for my own business I am at liberty to build it any which way - and since this is the third version of this system I am determined to get it right this time.

Time will tell... :-)


That's an entirely different matter - easily accomplished by lookups.

Is it? I am not so sure. I use Lookups elsewhere, but here the values of the fields that require snapshots are determined over the course of the decision-making process:

(a) The message is received

(b) Parameters are checked, calculations performed

(c) Decisions are made

(d) A snapshot is taken and written to the messages record

Perhaps a lookup in a calculation field in the messages table which is fired off somehow in step (c)... Dunno. Haven't gotten to that stage as yet.


You are free to take my advice or dismiss it, with or without explanation. However, when the explanation turns into a rather condescending lecture about good design practices, it implies that my advice goes against those and therefore I need to be lectured about them.

I didn't think I was being condescending - merely stating my case in response to your arguments about what constitutes good design practice. Perhaps you feel that, being new to FM, I should have no views on this. I am not new to RDBMS, or business applications. And I have the grey hair to show for that :-)

Perhaps when you get more familiar with FileMaker (say a year from now), you will come to realize why the suggestions made by me and others in this thread ARE good practice - even if right now they may seem to contradict your experience.

Perhaps. Anything is possible...


I use Lookups elsewhere, but here the values of the fields that require snapshots are determined over the course of the decision-making process

...

Perhaps a lookup in a calculation field in the messages table which is fired off somehow in step (c).

I'm afraid I cannot answer that without knowing a lot more specific details. Offhand, I would say that if an attribute requires monitoring, it should be in a separate table. Thus no lookup from the parent table is required - one simply creates a new record with the current value. A good example of this is a car odometer: each refueling is a record of the current mileage and the fuel intake. Nothing takes place in the parent car record (except when one views it - then an unstored calculation will evaluate to display the fuel efficiency).
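That unstored calculation might be sketched as follows (table and field names invented; only a rough approximation, since the fuel from the very first fill-up should really be excluded):

    // Car::FuelEfficiency - unstored calculation in the parent record;
    // distance covered across all logged refuelings, divided by fuel used
    ( Max ( Refuelings::Odometer ) - Min ( Refuelings::Odometer ) ) /
    Sum ( Refuelings::Fuel )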

There are other options, e.g. you could trigger a relookup by setting the ParentID in the child record to itself. Or you could have an audit field in the parent logging the changes to a value (auto-entering itself and the monitored field).
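The first of those, as a script sketch (assuming the match field is a hypothetical Messages::EquipmentID):

    # Script: Refresh Snapshots (run from a Messages layout)
    Set Field [ Messages::EquipmentID ; Messages::EquipmentID ]
    # Re-setting the match field to itself re-triggers any lookups
    # defined against the Messages-to-Equipment relationship

The Relookup Field Contents script step achieves much the same thing across a whole found set.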


I'm afraid I cannot answer that without knowing a lot more specific details. Offhand, I would say that if an attribute requires monitoring, it should be in a separate table. Thus no lookup from the parent table is required - one simply creates a new record with the current value.

Are we talking at cross-purposes again? The warranty details for the piece of equipment are stored in the equipment table. The replacement parts for a piece of equipment are stored in a separate table linked to the EquipmentType record. Calc fields in the equipment record pull down this info when required.

A good example of this is a car odometer: each refueling is a record of the current mileage and the fuel intake. Nothing takes place in the parent car record (except when one views it - then an unstored calculation will evaluate to display the fuel efficiency).

Good example - so my refuelling record would contain a snapshot of the odometer at the time of the refuelling. In this case there may be a dozen or so parameters which are checked by the decision-making process - it is these which need to be recorded for audit purposes, e.g.:

a) warranty status

b) new issue or duplicate message about existing issue

c) if duplicate, what is the ID of the message which is being actioned

d) threshold level

e) ...

f) action taken (Unprocessed, Ignored, Actioned)

g) Email? (recipient address)

h) Purchase Order No

i) ...

The idea is that the message record becomes the log - when all else is deleted it is still possible to reconstruct what occurred, and why.

There are other options, e.g. you could trigger a relookup by setting the ParentID in the child record to itself. Or you could have an audit field in the parent logging the changes to a value (auto-entering itself and the monitored field).

I don't quite follow this - the messages record is the audit trail, and I just need to write a snapshot of the parameters which existed in the parent record at the time the message was processed, along with the result (i.e. ignored, duplicate of prior alert / processed, part ordered, etc.). The parameters are, at the moment, mostly unstored calculation fields in the equipment record, so I am not sure whether I can use a lookup from the child to pull these values down into the audit fields in the message record itself.


I am sorry - I don't understand what you mean by "there may be a dozen or so parameters which are checked by the decision-making process - it is these which need to be recorded for audit purposes".

I believe it's best to analyze a single parameter at a time. For example, warranty status. You say that:

The warranty details for the piece of equipment are stored in the equipment table.

Why then is it necessary to record them elsewhere - if they are not being modified? I cannot see the connection between checking something and recording it. Of course, you may want to record this for convenience purposes alone - but it is not REQUIRED.

Regarding unstored calculations, they may be a red herring. I would go after stored data. Only stored data needs tracking. Once you have that, anything can be calculated again. In any case, you can look up stored data and/or unstored calculations alike.


I am sorry - I don't understand what you mean by "there may be a dozen or so parameters which are checked by the decision-making process - it is these which need to be recorded for audit purposes".

I believe it's best to analyze a single parameter at a time. For example, warranty status.

Why then is it necessary to record them elsewhere - if they are not being modified? I cannot see the connection between checking something and recording it. Of course, you may want to record this for convenience purposes alone - but it is not REQUIRED.

The warranty status changes over time. A product will usually have an initial warranty, perhaps an extension purchased at the time of the initial order, and then perhaps be under an annual maintenance agreement. In the equipment record there is a calculation field which determines what the status is "right now", and it is this status which is written to the audit record (the message record) at the time that a response to the message is determined. If I don't lock down this value in the messages record, the warranty status might have changed by the time you later review the transaction.
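That "right now" field might be sketched like this (field names hypothetical; unstored, so it always reflects the current date):

    // Equipment::CurrentWarrantyStatus
    Case (
        Get ( CurrentDate ) <= InitialWarrantyEnd ; "Initial warranty" ;
        Get ( CurrentDate ) <= ExtendedWarrantyEnd ; "Extended warranty" ;
        not IsEmpty ( MaintenanceAgreementID ) ; "Maintenance agreement" ;
        "Out of coverage"
    )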

The same is true for other parameters - in your example it would be the equivalent of the odometer reading on the day.

It is quite usual to take this approach in an order-entry environment, because the price may fluctuate or you may want to delete products which are no longer available. If you do not freeze these values in your order lines record, you will either have null values for things like description and price (because the parent record no longer exists) or your order will reflect a newer price and therefore be meaningless.

Regarding unstored calculations, they may be a red herring. I would go after stored data. Only stored data needs tracking. Once you have that, anything can be calculated again. In any case, you can look up stored data and/or unstored calculations alike.

In the equipment record I store only those things which cannot be calculated "on the fly". In the messages record I record only those details which need to be "frozen in time". So, I will write a permanent record in the message record of a value which is calculated "at the time" in the equipment record. Does that make sense?


In most invoicing solutions, no one bothers to track price changes per se. So if your situation is similar (i.e. there's no audit log on the warranty status, or on the stored data used to calculate the warranty status), then yes - you do need a snapshot.

However, I would need a good reason to take a snapshot of the unstored calculated result, rather than of the stored data itself - especially when the snapshot is taken for auditing purposes. It is easy to reconstruct the calculated result from its original input, but the reverse may not be true.


In most invoicing solutions, no one bothers to track price changes per se. So if your situation is similar (i.e. there's no audit log on the warranty status, or on the stored data used to calculate the warranty status), then yes - you do need a snapshot.

Hmmm. Well I suppose it depends on the likelihood of there being a change to a price between the time of the initial transaction and a later retrieval of the details. If a customer asked for a copy of an invoice 2 years down the track, it would be impossible to regenerate accurately if the product had been deleted or the price had changed. In the IT industry you must take a snapshot of the detail at the time of the order if you want to be able to reliably reconstruct an invoice from electronic records down the track...

However, I would need a good reason to take a snapshot of the unstored calculated result, rather than of the stored data itself - especially when the snapshot is taken for auditing purposes. It is easy to reconstruct the calculated result from its original input, but the reverse may not be true.

Let's say that you are monitoring the performance of your service provider 18 months down the track. A piece of equipment was purchased with a standard warranty, but was then transferred to a maintenance agreement on which there were specific Service Level Agreements. You are measuring the performance of the service provider against these SLAs in the process of renegotiating the agreement.

If you have not taken a snapshot of the warranty status at the time, you have a complex task to determine what level of performance was in force at the time. If you have not stored the parameters, you will need to write, debug and implement a complex report to go back and determine what the state of play was at the time of the alert. You will also need to refer to any manual records you have made to check that priority, threshold, and other parameters have not been altered in the time since the alert occurred.

In my experience, no-one really remembers that level of detail for very long in a complex environment. A snapshot of the situation "on the day" is by far the most reliable way to ensure that you are working with the correct data.

Much better, in my opinion, to be able to point to the log file (which is what we have here) and see immediately that the alert was dealt with under a specific agreement with a specific SLA, and that the system made its decision based on 7 parameters, the values of which are recorded in the log file itself.

Perhaps you will think this is overkill - and in many environments it may be. But not this one.


I am afraid you misunderstood my first point. I meant that no one bothers to record what the price of a product was before it was changed. The only historical records that exist are the line items of products sold. Therefore, it is impossible to tell exactly WHEN a price was changed - all you can see is that the price was x on the day of this sale, and y when another sale took place. Somewhere in between, the price has changed - perhaps more than once.

Now, it would be quite possible to monitor the price fluctuations themselves. In that case, you wouldn't need to look up the current price into a line item, because each line item has a date and a ProductID - so the correct price can be reconstructed at any time.
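In FileMaker terms, that reconstruction might be sketched like this (assuming a hypothetical PriceHistory table with one record per price change):

    // Relationship: LineItems -> PriceHistory, defined as
    //   LineItems::ProductID = PriceHistory::ProductID
    //   AND LineItems::SaleDate >= PriceHistory::EffectiveDate
    //   sorted by PriceHistory::EffectiveDate, descending

    // LineItems::PriceAtSale - the first related record is then
    // the price that was in force on the day of the sale
    PriceHistory::Price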

There are good reasons not to do so in an invoicing solution, but it's possible - and it might be preferable in other types of solution.

Let's say I am tracking the performance of employees over time. I have some parameters that I measure, and a formula that weights these parameters to produce an overall score. Suppose I only record the overall scores. Then, after doing so for 18 months, I decide that the original weights do not suit my purpose as well as I thought, and I want to change them. Consequence: 18 months' worth of data down the drain.
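For instance (weights and parameter names invented purely for illustration):

    // Employee::OverallScore - if only this result is stored, the inputs
    // are gone and the weights can never be revised retroactively
    0.5 * Productivity + 0.3 * Quality + 0.2 * Attendance

Storing Productivity, Quality and Attendance themselves keeps that option open.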

I cannot tell from your description which example fits your case better, but I hope the point is clear.


I am afraid you misunderstood my first point. [...] I cannot tell from your description which example fits your case better, but I hope the point is clear.

Cool - we are of the same mind here, more or less. I will be recording the data (parameters) as well as the decision.

Insofar as historical records are concerned, I am of the view that less is more. One record with all the data you need is preferable to having to maintain a whole database. I speak from experience - I cannot delete an employee, creditor, salesperson, product or customer from the well-normalised MS SQL database in my financial system. Consequently, 90% or more of the data I am lugging around after several years is irrelevant.

