
Automatically indexing non-indexable fields




Background

In FileMaker Pro, calculations which include references to global fields or related fields (ie fields from other files) cannot be indexed.

Indexes, on the other hand, perform several important functions:

1. they facilitate searching and sorting on a field

2. they enable the creation of value lists based on field contents

3. indexed fields can be used as the target (foreign) key in a relationship

Each of these considerations is significant; however, the latter two sometimes present the database designer with a dilemma. Occasionally, the natural connections between data elements require that a relational dependency (or a global value) be factored into the production of a key field or value list.

This article is concerned with techniques which may be used to ameliorate a situation in which a relationship or value list cannot be created because the field on which it is to be based cannot be indexed.

The Scripted Solution

One answer to this problem lies in the use of a script rather than a calculation.

Whereas a calculation determines a result based on a formula, a script is able to place a calculated result (using an equivalent formula) into a text or number field. A result so placed may be indexed as data, regardless of whether its components are local, global or related.

While a scripted procedure to write a calculated value to a data field throughout the current file can be set up using the 'Replace Contents' (formerly 'Replace') script step, this is not suitable for use in a multi-user solution: in multi-user mode it will skip over any records currently being edited by another user, without producing an error code or any means of tracing the records which have not been updated.

A better alternative, therefore, is a loop which passes through the file applying the calculation to each record in turn, via a Set Field [ ] command. Immediately after each Set Field step, error-trapping can be used to ensure that the record was not locked, and an alternate procedure can be invoked if it was (eg the record number can be stored, so that a subsequent pass can be made, and/or the problem can be reported to the user at the closure of the script sequence).
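As a sketch, such a loop might run along the following lines (in FileMaker 5/6 script-step notation; the field name 'Your Field', the formula placeholder and the global field 'gLockedRecords' are illustrative only, not prescriptive):

Set Error Capture [On]
Go to Record/Request/Page [First]
Loop
  Set Field ["Your Field", "Your Formula Here"]
  If ["Status(CurrentError) = 301"]
    Comment ["Record is in use by another user - note its number for a later pass"]
    Set Field ["gLockedRecords", "gLockedRecords & Status(CurrentRecordNumber) & ¶"]
  End If
  Go to Record/Request/Page [Exit after last, Next]
End Loop

At the close of the script, the contents of 'gLockedRecords' can be used to report the skipped records, or to drive a second pass.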

Since either approach (Replace Contents or Loop/Set Field) works on the current found set, it will be necessary to either:

- precede the relevant step/s with a 'Show All Records' step so that all records are included, or

- if it is desired to complete the procedure without disturbing the found set, to perform the procedure twice, with a 'Show Omitted' step between the two passes, and a further 'Show Omitted' step at the end of the script.

In this latter case, a test to ascertain whether there is a found set in place should precede the second sequence, along the lines of:

If ["Status(CurrentFoundCount) < Status(CurrentRecordCount)"]

  Show Omitted

  Replace Contents ["Your Field", Replace with calculated result: "Your Formula Here"]

  Show Omitted

End If


(Note that this is the single-user-only Replace Contents version of the script.)
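Putting the pieces together, the complete two-pass, found-set-preserving script (again single-user-only, since it uses Replace Contents; 'Your Field' and 'Your Formula Here' are placeholders) would run along these lines:

Replace Contents ["Your Field", Replace with calculated result: "Your Formula Here"]
If ["Status(CurrentFoundCount) < Status(CurrentRecordCount)"]
  Show Omitted
  Replace Contents ["Your Field", Replace with calculated result: "Your Formula Here"]
  Show Omitted
End If

The first Replace Contents updates the current found set; the second pass updates the omitted records, and the final Show Omitted restores the original found set.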

Using this approach, however, it will be necessary to run the script whenever the indexed field values need to be refreshed. This is a limitation, since it will not always be convenient or efficient for users to run the script, so the solution is not suitable for all situations.

In a few cases, it may be possible to solve this problem procedurally (eg to call the script whenever a user enters a layout where a relationship or value list based on the field is to be accessed). In other situations, the solution will be less than seamless.

Automation

A better approach in some instances may be to set the indexable field to update automatically whenever a relevant event occurs. This can be done by defining the indexable field as a lookup (to copy the contents of a calculation field via a self-join based on record ID).

The trigger for the lookup should be based on a stored value local to the current record which, when it changes, will cause a refreshing of the lookup of the calculated value. This can be something specific to the nature and purpose of the calculation (eg a local value associated with it) or something as general as the record modification time. Once a suitable event to trigger by has been selected, a primary key field for the trigger should be created. This can be called 'cTrigger.key', and should be defined with a formula along the lines of:

Case(IsEmpty(TriggerField), Status(CurrentRecordID), Status(CurrentRecordID))

(Both branches return the same value; the reference to TriggerField exists solely to make the key dependent on the trigger field, so that a change to the trigger causes the key to re-evaluate and the lookup to refresh.)

In addition to this, the following will be required:

1. a stored calculation field (call it 'RecordID') of number type, with the formula set to Status(CurrentRecordID)

2. a self-join relationship called 'StoredCalcUpdate' which matches cTrigger.key with RecordID.

3. a data field (ie of type text, number, date or time, as appropriate) which will be the indexable field upon which a relationship or value list is to be based

The data field referred to at 3 (above) should be defined as a lookup to copy the value of the pre-existing unstored calculation field, based on the 'StoredCalcUpdate' relationship.
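In summary, the definitions involved (using the names suggested above; 'cYourCalc' and 'Your Data Field' are illustrative placeholders for the pre-existing unstored calculation and the indexable data field) are:

RecordID          Calculation (number result, stored)   = Status(CurrentRecordID)
cTrigger.key      Calculation (number result, unstored) = Case(IsEmpty(TriggerField), Status(CurrentRecordID), Status(CurrentRecordID))
StoredCalcUpdate  Self-join relationship matching cTrigger.key to RecordID
Your Data Field   Text/number/date/time field, defined as a lookup of cYourCalc via StoredCalcUpdate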

With these procedures in place, the stored data field will be refreshed automatically each time the value you have chosen as the basis of your trigger changes (eg if it is a field which auto-enters the modification time, every time anything on the record changes).

Choice of Trigger/s for the Update

It is important to recognise that this automation technique has a limitation insofar as it will only trigger successfully in response to changes which are local to the current record. This is because the table of field dependencies which FileMaker maintains in the background (to ensure that dependent calculation fields are updated) is local to each file.

So, for example, if the trigger field references a global value, only the current record will update when that value is changed. If the trigger field references a value in another file (via a relationship), the triggering may be limited to changes made to the local value from a field placed on a layout in the current record.

For this reason it is preferable that a value be chosen as the trigger for the update, which is either:

- closely associated with the calculation function and therefore likely to change whenever there is a corresponding need to update the calculation value, or

- general to the record and likely to force frequent updates to ensure that the calculation is refreshed with maximum frequency.

For this latter purpose, a generic record-related value such as a record modification time field may be the best choice as the point of reference for the trigger.

A Combined Approach

The table of field dependencies that FileMaker maintains in the background and uses to track cascading recalculations is confined to stored calculations. Therefore it is important to note that although the approach can work very well in some instances, it is theoretically imperfect in that a value in another file can change remotely and this will not necessarily trigger the relookup.

In implementations where this limitation is of concern, a combined approach which includes a refresh script (eg based on a Replace Contents script step) as an adjunct to the triggering technique is a possible solution. This can either be triggered periodically by users, or run routinely at start-up or shut-down, or linked to some other action such as a change of layouts.

In cases where data on the current records is changing frequently, the added measure is unlikely to be necessary. However some combination of the two techniques outlined above is likely to provide an adequate solution for indexing problems in most situations.

Note: a pdf version of this article is now also available from the NightWing web site (see URL below).


  • 3 weeks later...

Hi Dj,

GetField(" ") is a fine way to reference a value, including global field values or related file values.

However, if the calc in which the GetField resides is stored, it will not automatically update when the source value changes (just as with other methods of referencing global or remote values). This is because the table of dependencies does not extend to fields outside the current record.

Hence, whether or not you use GetField( ) as the method of referencing the values which are to be stored, the mechanisms outlined above are still needed to keep the stored/indexed values up to date as the source values change.

Cheers,


Hi Ray,

I'm perfectly aware of the problem of updating the stored value when the source field's value gets changed.

I didn't say that GetField could bypass the limitations of the lookup method (the only advantage is when referencing globals: there's no need for an additional relationship).

As with the method you've described in every particular, you can only trigger its updating using some indexable fields, or by binding it to some scripted user action.

So, assuming that you want to get values from field x using relationship Y through key field kXY, then Case(kXY, GetField("Y::x"), GetField("Y::x")) will update every time the key kXY changes.

As always, the change will affect only the current record; this is very important in case you are referencing globals.

Dj



We do not use the 'Get Field' function here; we just use the field name directly in the formula.

We have been doing this since before Get Field became available in 5.5, even for globals and related fields, and it always works.

Get Field is good for determining the source field by calculation, but it is not needed if you are just referencing the same field every time.


  • 1 month later...

So, yesterday I was messing with the exact lookup strategy you suggested here, and then today I saw your article. There is a problem with your suggestion that I hope you know a way around...

You said if there's no indexable field that changes whenever the calc is updated, you should use something general, like a last-modified time or something.

Unfortunately, while these fields ARE indexable, they do NOT remain current once they have been indexed. Once you index them, they seem to update when they feel like it, by some voodoo that I, at least, don't understand. For instance, I set up a Status(CurrentRecordModificationCount) field and just sat there and watched while I altered the record, and the number never incremented. How do I get around this?

The other problem, of course, is that sometimes the only change is in the related records, NOT in the host record, so the last modification time does not change. For instance, I have been dealing with a database of pages in which I need to make an indexable key. It's part of a system that paginates classified ads onto Quark pages, and it relates to a database of gaps on the page which are filled with classified ads. Now, if all those gaps are filled, then the page is filled, and I want to make a relationship ONLY to non-full pages of a particular issue of the paper. So the key needs to be IssueNumber & " | " & Filled, where Filled is (Count(Filled Gaps::Gap Number) = Count(Gaps on This Page::Gap Number)). Now, the page itself, once imported, should be static, but the status of the gaps changes all the time, and a change in the status of the gaps does NOT increment the modification count or time of the host record. So what do I trigger off? I COULD script it, but that would be NOTORIOUSLY unreliable.


Hello Keshalyi,

There are several reasons why a calculation based on the Status(CurrentRecordModificationCount) function might appear to not be updating as the record changes.

1. The calculation is stored.

Status( ) functions do not automatically trigger updates; however, if the field is used as the originating key for a self-join relationship, there is no necessity to store or index the value (only the destination key on the right side of the relationship need be indexed).

2. You have to exit the record after making changes, in order for the modification count to be incremented.

If you change a field and then tab to a new field, since you are still within the record, the mod count will not be incremented.

3. The record is new and has not yet been 'committed'.

The modification count for a record begins not when it is created, but the first time a user leaves the record after returning to edit it. Therefore, to begin incrementing the modification count you must create the record, exit it, go into a field (or several fields) and make a change, then exit the record. It is on this second exiting of the record (after making a change) that the modification count is first incremented. If this is an issue, the customary solution is to create new records via script, where the first exit of the record is part of the script, followed by a 'Go to Field [ ]' step, which ensures that the modification count begins incrementing the first time the record is exited.

4. Even though the modification count has been incremented, an unstored calculation may not update unless there is an event which requires it to recalculate. Simply exiting the record or refreshing the screen may not always suffice, however changing records or layouts will force recalculation.
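The scripted record-creation workaround described under point 3 above can be sketched as follows (the field name 'First Field' is an illustrative placeholder for whichever field the user should land in):

New Record/Request
Exit Record/Request
Go to Field ["First Field"]

Because the record is exited once within the script, the user's first manual exit after editing will count as the second exit, and the modification count will begin incrementing immediately.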

Once you take the above issues into account, you will likely find that your calculation is updating. If, however, you require that a lookup be refreshed while the user remains within the record, then modification count will not suffice. In this case you would be best to use a series of other fields within the record as triggers.

However, notwithstanding the above, I suspect that you may be happier with an approach which comes at your problem from the other end. Since the fields which change are in your related file, perhaps you should consider creating a calculated key field in that file, one which is null when all the gaps are filled, but which produces a key value when there are unfilled gaps. This can presumably be a stored and indexed calculation referencing the classified ad fields within the same db.

With this latter calculation in place, you should be able to achieve what you've indicated you want using a static key in the main file.
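As a sketch of that suggestion: assuming the gaps file holds (hypothetical) local data fields 'Issue Number', 'Page Number' and a 'Filled' flag that is set when the gap is filled, a key in the gaps file might be defined as:

Case(IsEmpty(Filled), Issue Number & " | " & Page Number)

Since this references only local, stored fields, it can itself be stored and indexed, and a static IssueNumber & " | " & PageNumber key in the pages file will then match a page's gap records only while the page still has unfilled gaps.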


  • 6 months later...

I'm interested in the issue Keshalyi brought up but which was not resolved here:

How do you trap changes made in related records so that you know values in the master record have changed?

Here's our situation. We have a content management system built in FileMaker. It uses many related tables. We now have to synchronise the data with a SQL database on a webserver that uses a "functional" subset of tables and fields. We have settled on doing this by a periodic export of .csv files in the format used on the SQL database, and what we'd like to do is export only the records where values have changed. Hence our interest.


"...export only the records where values have changed..."

We may be going off the thread topic -- but how about just storing the export date as part of your export script, either in the records, or in a single-record Preferences file. Then you could easily script a Find for mod. date > exp. date.
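Sketched as script steps (the field names 'Mod Date' and 'Prefs::gLastExport' are hypothetical, the latter being a field in a single-record Preferences file), the export routine might run:

Enter Find Mode [ ]
Set Field ["Mod Date", ">" & Prefs::gLastExport]
Perform Find [ ]
Export Records [No dialog, "export.csv"]
Set Field ["Prefs::gLastExport", Status(CurrentDate)]

Storing the export date only after a successful export ensures that records modified on the day of a failed run are still picked up next time.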

