kennedy Posted August 19, 2002

I don't want to limit access to the database in general; anyone who can log in to my OS X Macs can access it. However, if someone changes something improperly, I want to be able to determine who made the change. Does FileMaker offer any simple way of tracking who made what changes, at any level of granularity?
CobaltSky Posted August 19, 2002

There are many variations on the theme of logging modifications to a database, with ever finer levels of granularity. One of the relatively simple - but nevertheless very elegant and infinitely extendable - techniques for doing this can be traced to a demo from ISO. I'm not sure if it is still available from their site (it dates back to FMP v4 days), but there is a copy on my server which you can download directly if you're interested: http://www.nightwing.com.au/FileMaker/ModLog.zip

The demo is unlocked, so you can look under the hood to check how it works. What it does is store a log of changes recording the user, the field changed, the date and time of the change, and what the field was changed from and to. Different implementations of this kind of approach can be set up to store the log in a continuous or rolling form, or to store it indefinitely until manually cleared and reset (as illustrated in the demo).
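For readers who want the gist without opening the demo, here is a minimal sketch in Python (not FileMaker calc syntax) of the kind of entry such a log maintains; the names are illustrative, not taken from the demo's field definitions:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class LogEntry:
    """One change-log entry, mirroring what the ModLog demo stores."""
    user: str
    field_name: str
    when: datetime
    old_value: str
    new_value: str

def log_change(log: list, user: str, field_name: str,
               old_value: str, new_value: str) -> None:
    """Append an entry only when the value actually changed."""
    if old_value != new_value:
        log.append(LogEntry(user, field_name, datetime.now(),
                            old_value, new_value))

audit = []   # in FileMaker this lives in a text field; here, a simple list
log_change(audit, "kennedy", "Status", "Open", "Closed")
print(audit[0])
```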
DreamGardner Posted August 19, 2002

I have the same need for an audit trail in a FileMaker project I am currently working on, but I was unsuccessful in connecting to the link you posted. Is this a plug-in? Thanks!
CobaltSky Posted August 20, 2002

Hi DreamGardner, Not sure why you weren't able to access the link - it seems to be working from here. Perhaps give it another try and, if it's still not functioning, email me at [email protected] and I'll send you the demo by reply as an attachment. Cheers,
Kundinger Posted August 28, 2002

Take a look at "Change Archive-History-Audit Field Sample File" in the 'Sample Files' forum.
kennedy (Author) Posted September 3, 2002

Thanks, CobaltSky... I wanted to follow up for other readers. The demo linked above (http://www.nightwing.com.au/FileMaker/ModLog.zip) is, as stated, very elegant and easy to follow.

I modified it a bit using some FMP 5.5 functionality, namely GetField: by using GetField, you can avoid having to modify the big text calculation each time you add a field. Unfortunately, GetField won't work for the trigger field, so you must still modify the trigger field each time you add a new field.

So now, for each field 'fieldname', I have three companion fields: 'LOG fieldname', 'SORT fieldname', and 'HELP fieldname'. To operate the log itself, I have three fields, 'Log', 'LogTrigger', and 'LogNext', plus the relationship LOG on LogTrigger. That's now part of every file I am building. Swwweeeeeet! Thanks again!
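For anyone comparing notes, here is a rough Python model of the trigger logic just described; the shadow dict stands in for the 'LOG fieldname' companion fields, and all names and flow are illustration only, not the demo's actual definitions:

```python
from datetime import datetime

def fire_trigger(record: dict, shadow: dict, log: list, user: str,
                 tracked_fields: list) -> None:
    """Model of the LogTrigger idea: compare each tracked field against
    its stored companion ('LOG fieldname') value and log any difference."""
    stamp = datetime.now().isoformat(timespec="seconds")
    for name in tracked_fields:
        old, new = shadow.get(name, ""), record.get(name, "")
        if old != new:
            log.append(f"{stamp} {user}: {name} '{old}' -> '{new}'")
            shadow[name] = new  # refresh the companion field, like a re-lookup

record = {"Name": "Widget", "Qty": "10"}
shadow = dict(record)   # the 'LOG fieldname' companions start as copies
log = []
record["Qty"] = "12"
fire_trigger(record, shadow, log, "kennedy", ["Name", "Qty"])
print(log)   # one entry, for Qty only
```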
kennedy (Author) Posted September 21, 2002

So, I picked up the Moyer/Bowers book for its implementation of user passwords (as recommended in another thread), and they also have an implementation of an audit trail. Comparing the two, there is one primary difference between theirs and what I got from this thread: they have one lookup field capturing the old values of all the fields, and then have to parse the changed field's value out of it. In contrast, the mechanism I am using adds an extra lookup field for each field I want to capture, to hold that field's old value.

Is there any reason to avoid the extra fields? Which is more efficient: one huge text field, or many small text fields? Which is more efficient: one lookup that computes a large text concatenation, or multiple lookups that each capture a single value? (Given that it is a self-join lookup.)

As for moving the data to an Audit Log file on a constant basis, that seems tedious to implement. Are there inefficiencies associated with log fields that grow steadily in each record, such that it is much better to keep those fields empty?

Their rollback mechanism is obviously cool, and the Audit Log makes it cooler. But I am thinking I will just have a script that periodically purges all the logs to the Audit Log file; you can run it before doing rollbacks, and otherwise it can be run in conjunction with regular backups. Thoughts? Thanks.
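To make the two capture strategies concrete, a toy Python comparison (the delimiter and field names are invented for illustration; neither matches the actual implementations):

```python
# Approach A (Moyer/Bowers style): one lookup captures a snapshot blob,
# and the changed field's old value must be parsed out of it.
snapshot_blob = "Name=Widget|Qty=10|Status=Open"

def old_value_from_blob(blob: str, field_name: str) -> str:
    """Parse a single field's old value out of the concatenated snapshot."""
    for pair in blob.split("|"):
        key, _, value = pair.partition("=")
        if key == field_name:
            return value
    return ""

# Approach B (this thread): one small lookup field per tracked field,
# so the old value is read directly with no parsing step.
snapshot_per_field = {"Name": "Widget", "Qty": "10", "Status": "Open"}

assert old_value_from_blob(snapshot_blob, "Qty") == "10"
assert snapshot_per_field["Qty"] == "10"
```

The per-field approach trades schema bulk for trivially simple retrieval; the blob approach keeps the schema lean but pays a parsing cost on every read.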
kennedy (Author) Posted September 26, 2002

So, I guess nobody knows the answers to those questions? Anyway, as a follow-up to this thread, in case someone likes the idea above and wants an implementation, you can get one in the Starter Template I posted in Sample Files: http://www.fmforums.com/threads/showflat.php?Cat=&Board=files&Number=46219&page=0&view=collapsed&sb=5&o=&fpart=1
Anatoli Posted September 26, 2002

RE: Are there inefficiencies associated with log fields that grow steadily in each record, such that it is much better to keep those fields empty?

All I can say is that in my 4-5 years of experience with this technique, the log field never grows too much. People create records, process them, and never edit them again. By contrast, I serve web pages from text fields that are huge, and I have never had an issue with that - and those long fields are PR material or articles, much longer than a log field will ever grow.
CobaltSky Posted September 27, 2002

Hi Kennedy, Sorry - I overlooked your post initially and have just caught up with it...

As you say, the different approaches achieve a similar end result, and there are generally several ways to approach anything. I suspect that the pros and cons of each approach (ie multiple single-field lookups versus one big lookup with parsing) roughly balance out in most situations. The latter approach is unlikely to save storage - and what it saves in processor cycles on the lookups, it spends on the concatenation and the parsing.

As regards the accumulation of archive data, this can be an issue. It depends on the kind of data and the way it is used. Some clients have a large number of records that they rarely edit, whereas other business requirements can result in (relatively) smaller numbers of records that are constantly changing. It is in this latter case that the accumulation of log entries can present a dilemma. There are several ways I've addressed this, depending on the needs of a particular solution. For example (sketched in code below):

1. A technique I call a 'rolling log', which caps the number of stored changes at a pre-defined limit (eg the last 20 changes) and drops off older entries as new ones are added.

2. A variation of the above, in the form of a script which runs automatically at shutdown (or at file open) and cycles through the records, deleting any log entries over the pre-defined limit after first writing them to the log file.

3. A further variation which runs a script when the file is closed to detect and delete any log entries older than a certain number of days (the number of days being pre-defined and set in a global preference field). Again, if desired, the 'expired' change-log entries can be written to a log file before being deleted from the main database. Expiry timelines and a maximum log length can also be combined.

If you're interested in exploring these techniques further, you'll find a simplified (single-field) example of the 'rolling log' principle in a demo file on my site at: http://www.nightwing.com.au/FileMaker/demos/RollingLog.zip

A related technique which provides a form of 'rollback' from rolling-log data (this demo is also single-field, but the principle isn't) is also online at: http://www.nightwing.com.au/FileMaker/demos/Undo.zip

It is possible to link undo/rollback features to log data drawn from local log fields first and then (when the local log data is exhausted) from a separate archive file. This ensures that the rollback will work whenever there is data available for the current record, regardless of whether it has been archived. It's also possible to do implementations which provide 'roll back and roll forward again' options, but I only do those if they are specifically requested by the client. The technique is an extension of the approach used in the Undo demo mentioned above. Cheers,
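A minimal Python sketch of the three variations above (a deque models the capped log field; the entry format, cap, and age limit are invented for illustration, not taken from the demos):

```python
from collections import deque
from datetime import datetime, timedelta

MAX_ENTRIES = 20   # the pre-defined cap ('the last 20 changes')

def add_to_rolling_log(log: deque, archive: list, entry: str) -> None:
    """Variations 1 and 2: cap the per-record log, archiving what drops off."""
    if len(log) == log.maxlen:
        archive.append(log[0])   # write the oldest entry to the log file first
    log.append(entry)            # deque(maxlen=...) then discards the oldest

def expire_old_entries(log: deque, archive: list, max_age_days: int) -> None:
    """Variation 3: on file close, archive entries older than N days.
    Assumes each entry begins with an ISO date stamp."""
    cutoff = datetime.now() - timedelta(days=max_age_days)
    while log and datetime.fromisoformat(log[0].split(" ")[0]) < cutoff:
        archive.append(log.popleft())

log = deque(maxlen=MAX_ENTRIES)
archive = []
add_to_rolling_log(log, archive, "2002-09-27 Qty '10' -> '12'")
expire_old_entries(log, archive, max_age_days=30)   # the 2002 entry expires
```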