alecgregory

Members
  • Posts

    14
  • Joined

  • Last visited

  1. If a commit log times out, the timeout value is not written to the log entry for the timeout. This is caused by an incorrect global variable name in the Set Field calculation for the EasyAudit::Note field in the "EasyAudit Commit - Server" script. You can find this step towards the end of the STEP 1 section of "EasyAudit Commit - Server". To fix, change the step to the following (the corrected reference is $$ea_timeout_seconds in the Note calculation):

     #If EA timed out waiting for changes to be committed to the database...
     If [ $e > $$ea_timeout_seconds ]
         #Log the timeout.
         New Record/Request
         Set Field By Name [ "EasyAudit::Transaction_ID"; $Transaction_ID ]
         Set Field By Name [ "EasyAudit::Entry_Type"; "Timeout" ]
         Set Field By Name [ "EasyAudit::Note"; "Timed out waiting for changes to be saved to database. Waited " & $$ea_timeout_seconds & " seconds, starting at " & $TS_Pre_Commit & "." ]
         Commit Records/Requests [ Skip data entry validation; No dialog ]
         Exit Script [ ]
     End If
  2. I'd like to add a few general thoughts to this thread. I'm working in Beta 2.

     1. For the bulk operations (such as delete all, import and replace field contents), the framework takes the approach of looping through all records in the found set. This is simple and effective, but can have performance and logging implications. Performance-wise, if you have a fairly heavy layout or are accessing over the WAN with a large found set, it could take ages to loop through all the records and build the list of UUIDs. Obviously we can try to avoid heavy layouts, but they are fairly common, and WAN access is sometimes unavoidable. Logging-wise, and more relevant to the framework, if you have a view log set up on a layout you would get a lot of spurious view log records when performing bulk operations. This could be solved by using Suppressible Triggered Scripts. An alternative approach would be to get all UUIDs in the found set by other means, either via a custom function or the new ListOf summary field. I intend to do some testing to find the optimal approach to bulk operations.
     2. As far as I can tell there's no "Record Created" log type. There are various ways this could be implemented. The most generic would involve looking for a "New Value" log type in the EA_UUID field of a table.
     3. In some cases (such as a massive sync transaction with thousands of records across many tables), a commit can take far longer than a timeout value chosen for normal circumstances. You can usually work this out in advance, for example by counting the number of open records before the commit, and revise the timeout value upwards temporarily.
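     The adaptive-timeout idea in point 3 can be sketched as a simple calculation. This is a hypothetical illustration in Python, not part of the framework; the function name and default values are assumptions:

     ```python
     def revised_timeout(open_record_count, base_timeout=30, per_record=0.05):
         """Scale the commit timeout with the number of open records.

         base_timeout: seconds allowed for a small, ordinary commit.
         per_record:   extra seconds granted per open (uncommitted) record.
         Both defaults are illustrative, not values from the framework.
         """
         return base_timeout + per_record * open_record_count

     # An ordinary commit keeps roughly the base timeout...
     assert revised_timeout(10) == 30.5
     # ...while a massive sync transaction gets a much larger allowance.
     assert revised_timeout(20000) == 1030.0
     ```

     The same arithmetic could be done in a FileMaker Set Variable step just before the commit, using Get ( OpenRecordCount ) as the input.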
  3. I've been working with the beta this weekend and really liking it. Logging record views is particularly good to see, as I've always wanted to be able to provide stats for this sort of thing to managers, so they can see how their users behave. I've made a couple of changes to the version I have in my current solution and thought I'd post them here in case anyone else is after the same functionality. I've tried to stay true to the coding style in the framework and implemented the changes via the settings script and existing scripts. These work for me, but I've not tested extensively, so proceed with caution.

     Changes:
     • Omitting EA fields when you have changed their names
     • Omitting fields from all or particular tables without renaming them to include the 'EXCL_' prefix

     Omitting EA fields when you have changed their names

     In the scripts "EasyAudit - Commit - Server" and "EasyAudit - Log Single Record - Server", change the Set Variable ($fields) step to the following calculation:

     // Modified to allow excluding EA fields when they have custom names
     ExecuteSQL (
         "SELECT FieldName, FieldReps FROM FileMaker_Fields WHERE ( TableName = ? ) AND ( FieldType NOT LIKE 'global%' ) AND ( FieldName NOT LIKE 'EXCL_%' )" &
         If ( $$omit_ea_fields;
             " AND ( FieldName NOT IN ( '" & Substitute ( List ( $$EA_UUID; $$EA_Modifier; $$EA_Mod_Timestamp; $$EA_Mod_Count ); ¶; "','" ) & "' ) )" );
         "|"; ¶; $table_occurrence_name )

     // Original
     /*
     ExecuteSQL (
         "SELECT FieldName, FieldReps FROM FileMaker_Fields WHERE ( TableName = ? ) AND ( FieldType NOT LIKE 'global%' ) AND ( FieldName NOT LIKE 'EXCL_%' )" &
         If ( $$omit_ea_fields; " AND ( FieldName NOT LIKE 'EA_%' )"; "" );
         "|"; ¶; $table_occurrence_name )
     */

     Omitting fields from all or particular tables without renaming them to include the 'EXCL_' prefix

     In the script "EasyAudit - Settings", add a global variable named $$fields_to_omit_all. This variable should hold a list of fields that you want to omit from all tables (such as an admin field you have in every table). E.g.

     // List of fields to omit from all tables
     List ( "z_ModifyAccount_View"; "z_ModifyName_View"; "z_ModifyTs_View" )

     Also in "EasyAudit - Settings", add a global variable named $$fields_to_omit_[table_name], where [table_name] is the base table name of a particular table you want to omit fields from. This variable should hold a list of fields to omit from the table [table_name] only. E.g.

     // List of fields to omit from [table_name]
     List ( "Field1_To_Not_Log"; "Field2_To_Not_Log" )

     Then, in the scripts "EasyAudit - Commit - Server" and "EasyAudit - Log Single Record - Server", change the Set Variable ($fields) step to the following calculation (the same as the change above, with two additional lines in the List function):

     // Modified to allow excluding EA fields when they have custom names
     ExecuteSQL (
         "SELECT FieldName, FieldReps FROM FileMaker_Fields WHERE ( TableName = ? ) AND ( FieldType NOT LIKE 'global%' ) AND ( FieldName NOT LIKE 'EXCL_%' )" &
         If ( $$omit_ea_fields;
             " AND ( FieldName NOT IN ( '" & Substitute ( List ( $$EA_UUID; $$EA_Modifier; $$EA_Mod_Timestamp; $$EA_Mod_Count; $$fields_to_omit_all; Evaluate ( "$$fields_to_omit_" & $table_name ) ); ¶; "','" ) & "' ) )" );
         "|"; ¶; $table_occurrence_name )

     // Original: as above
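     The core trick in both calculations is turning a return-delimited List ( ) of field names into a quoted SQL NOT IN clause via Substitute. A small Python sketch of the same string construction (the function name is my own; FileMaker's List ( ) likewise drops empty values, which is what makes unset $$fields_to_omit_ variables harmless):

     ```python
     def not_in_clause(field_names):
         """Build a "FieldName NOT IN (...)" SQL fragment from a list of names,
         mirroring Substitute ( List ( ... ); ¶; "','" ) in the calculation:
         List() joins names with returns, Substitute swaps each return for "','".
         Empty entries (unset variables) are dropped, as FileMaker's List() does.
         """
         names = [n for n in field_names if n]
         quoted = "','".join(names)
         return "FieldName NOT IN ( '" + quoted + "' )"

     clause = not_in_clause(["EA_UUID", "EA_Modifier", "", "z_ModifyTs_View"])
     # → "FieldName NOT IN ( 'EA_UUID','EA_Modifier','z_ModifyTs_View' )"
     ```

     One caveat with this construction (in either language): it builds the field names directly into the SQL text, so it relies on field names never containing single quotes, which is a safe assumption for FileMaker field names.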
  4. I've been doing some work with the GetThumbnail function in FileMaker recently and I've found that GetThumbnail often creates files larger than the original, even though their pixel dimensions and dpi are lower. For example, my original file has the following spec:

     • 300 dpi
     • 1000 x 1000 pixels
     • 76,490 bytes

     Using GetThumbnail ( Image; 750; 750 ) gives me an image that is:

     • 72 dpi
     • 750 x 750 pixels
     • 85,579 bytes

     So a file that is a little over half the pixel area and less than a third of the dpi is over 10% larger than the original file. That's not very good, is it? Anyone care to suggest an explanation for this? Could it be that FileMaker is decompressing the image to process it and then re-compressing it really badly?
  5. I agree that any additional measures a developer takes are unlikely to increase the security of a file. I don't think any serious developer adds security features with this expectation. They are usually added because the FileMaker security model is not suited to all client requirements. Take the role-based access control (RBAC) model, for example, which has grown in popularity in the last few years. It isn't possible to implement RBAC using the default FileMaker Pro security features because users can only be assigned a single privilege set, which torpedoes the inheritance concept that RBAC relies on. So the only option if you want to implement RBAC is to create your own RBAC tables for users, auth items and child auth items and use them to manage access. And you use them in tandem with FileMaker accounts that are as restrictive as possible while still giving users the access levels they need. Does this make your solution insecure? Only if you mess up the implementation. But, as Steven points out, your solution is also insecure if you mess up the implementation of a regular FileMaker security setup. I do think getting the basics of FileMaker security right is essential, and I appreciate the points in this blog. I also think that with most FileMaker solutions you are best served sticking to the built-in FileMaker security features, even if they do make development more time-consuming. But in some cases you do need to go outside the built-in security in order to provide users with functionality and compete with other products. At heart, FileMaker is a DBMS like any other, and in other DBMSs (Oracle, MSSQL, MySQL) managing security through customized tables on top of built-in security is common and perfectly acceptable.
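     To make the RBAC-tables idea concrete, here is a minimal sketch of the users / auth items / child auth items structure, using SQLite purely for illustration. All table, column and role names are my own assumptions, not a prescribed schema; the point is that role inheritance (which a single FileMaker privilege set cannot express) becomes a simple recursive lookup:

     ```python
     import sqlite3

     # Minimal RBAC schema: roles (auth items) can contain child auth items,
     # giving the inheritance hierarchy that single privilege sets lack.
     con = sqlite3.connect(":memory:")
     con.executescript("""
     CREATE TABLE users         (user_id INTEGER PRIMARY KEY, name TEXT);
     CREATE TABLE auth_items    (item TEXT PRIMARY KEY);        -- roles and permissions
     CREATE TABLE auth_children (parent TEXT, child TEXT);      -- inheritance links
     CREATE TABLE user_roles    (user_id INTEGER, item TEXT);
     INSERT INTO users VALUES (1, 'alec');
     INSERT INTO auth_items VALUES ('admin'), ('editor'), ('edit_record');
     INSERT INTO auth_children VALUES ('admin', 'editor'), ('editor', 'edit_record');
     INSERT INTO user_roles VALUES (1, 'admin');
     """)

     def can(user_id, permission):
         """Check a permission by walking the role hierarchy (recursive CTE)."""
         row = con.execute("""
             WITH RECURSIVE granted(item) AS (
                 SELECT item FROM user_roles WHERE user_id = ?
                 UNION
                 SELECT c.child FROM auth_children c JOIN granted g ON c.parent = g.item
             )
             SELECT 1 FROM granted WHERE item = ? LIMIT 1
         """, (user_id, permission)).fetchone()
         return row is not None

     assert can(1, 'edit_record')   # inherited via admin -> editor -> edit_record
     assert not can(1, 'nonexistent')
     ```

     In a FileMaker solution the recursive walk would be done with a custom function or a looping script rather than a CTE, but the table structure carries over directly.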
  6. An update to this: The bug seems to be connected to the IN clause. I recently replaced an equals with an IN and the results from the sub-query appeared in the result set again. Shame =ANY doesn't work in FileMaker for a comparison. I really should get this one reported.
  7. Can you post the exact URL? It's hard to tell what's going on with just the abridged version. For one thing, I cannot see from what you posted that the app has returned result=APPROVED Thanks, Alec
  8. OK, to recreate, try this:

     • Create two tables, Parent (pk_parent, other_fields) and Child (fk_parent, other_fields).
     • Link Child to Parent via pk_parent and fk_parent and allow creation of records in the parent table.
     • Don't check "Delete this record when record is deleted in the other table".
     • Go to the layout based on the child table and add some fields from the parent table.
     • Create a new record in the child table and fill in the parent table fields to force the creation of a new parent.
     • Go to the layout based on the parent table and try to delete the parent record, ideally with the script debugger running.

     In my system, as soon as the delete step completes, Get ( RecordOpenState ) is set to 3 and the record is still visible on the layout. The record only disappears when a second delete action occurs or when a commit occurs.

     Notes: I'm not sure the steps above will reproduce this, as I've not had time to try it on a vanilla solution; there could be other factors I haven't considered in my solution that cause it. I noticed this because I am preventing commits until the user presses save or cancel. Chances are you wouldn't notice the issue if you weren't catching commits using an OnRecordCommit script trigger that sometimes returns 0. I can't help thinking this is somehow related to the bug where portal records sometimes don't delete if you try to delete them before committing them, which has existed since at least FM11 as far as I know. Perhaps fixing that will fix this...
  9. I've found an undocumented RecordOpenState (see attachment). The state is 3 and, as far as I can tell, it means "record that you just deleted but is still showing on the layout because an OnRecordCommit trigger is stopping it from committing". I post it for information and to see if anyone else has ever seen it. There seem to be some specific conditions required to make it happen, as I can currently only make it happen on records created via a certain relationship. I'll report back if I can find out enough info to report it as a bug.
  10. Update: This has now been acknowledged as a bug by FileMaker tech support: http://forums.filemaker.com/posts/15d66c2223
  11. I think I've found a bug related to setting fields in related records using scripts. If, in an open record (i.e. Get ( RecordOpenState ) = 1 or 2), an attempt is made to set a field from a related table in a portal by a script set to run with full access privileges while the logged-in user is NOT assigned the [Full Access] privilege set, the changes are not shown in the field until the record is committed.

     Notes:
     • A Refresh Window (flush cached join results) script step does not help.
     • The field can be directly on the layout or in the portal.
     • I'm using a data-entry pattern based on the ideas in this Soliant article: http://www.soliantconsulting.com/blog/2012/08/easy-filemaker-modal-edit-dialogs-full-rollback-support. The idea is that changes aren't committed until the user hits save, and are rolled back if the user presses cancel. Some of these changes will be made by scripts and will need full access privileges, so it's pretty important that uncommitted changes are shown on the layout.

     Has anyone encountered this before and/or have any ideas how to solve it without breaking the pattern? It's pretty easily reproducible, so I fear it's a FileMaker bug. The steps to reproduce are below, and an example file is attached.

     To confirm a user with the [Full Access] privilege set can see data in uncommitted portal records:
     1. Open the database “full-access-bug.fmp12”.
     2. Log in as “Admin” with no password.
     3. Create a new record.
     4. Click on the button titled “Set Field in Portal”.
     5. Confirm a new row has been added to the portal.
     6. Click “Save”.
     7. Repeat steps 3 through 6 as desired, then close the database.

     To confirm a user without the [Full Access] privilege set cannot see data in uncommitted portal records:
     1. Re-open the database “full-access-bug”.
     2. Log in as “User” with no password.
     3. Create a new record (by going to ‘Records’ then clicking on ‘New Record’).
     4. Click on the button titled “Set Field in Portal”. You will notice that the new row does not appear in the portal.
     5. Click “Save”. Now the new row(s) will appear in the portal.

     full-access-bug.zip
  12. Just done the same test with a MySQL database and the UNION keyword performs as expected, so this does seem to be a FileMaker-specific issue.
  13. Many thanks for the suggestion, Wim. I did try some experiments with parentheses but they didn't seem to alter the result. I think the next step is to recreate this using a MySQL database to make sure I'm not just misusing the UNION keyword. I'll report back once I've given it a go.
  14. I have found what appears to be a bug in FQL when using the UNION clause with subqueries. The query below gives the expected result, which is a list of all organisations that the contact isn't currently linked to:

     SELECT d_OrganisationName FROM ORGANISATION
     WHERE a__kp_t_ORGANISATION NOT IN (
         SELECT a_kf_t_Organisation FROM ORGANISATION_CONTACT_LINK
         WHERE a_kf_t_Contact = '" & CONTACT::a__kp_t_CONTACT & "' )

     However, I also want to combine this result with another result, for example some default organisation:

     SELECT d_OrganisationName FROM ORGANISATION
     WHERE d_OrganisationName = 'Default Organisation'

     On its own, the above query also gives the expected result, which is just 'Default Organisation'. However, when I put a UNION clause between the two queries, like so:

     SELECT d_OrganisationName FROM ORGANISATION
     WHERE a__kp_t_ORGANISATION NOT IN (
         SELECT a_kf_t_Organisation FROM ORGANISATION_CONTACT_LINK
         WHERE a_kf_t_Contact = '" & CONTACT::a__kp_t_CONTACT & "' )
     UNION
     SELECT d_OrganisationName FROM ORGANISATION
     WHERE d_OrganisationName = 'Default Organisation'

     the results include the combination of the two queries PLUS the result of the subquery, which is a list of ids, i.e.

     Some Org
     Some Org
     Default Org
     OrgId1FromSubquery
     OrgId2FromSubquery

     It seems like FileMaker is seeing three result sets to UNION instead of two: the first query, the second query, AND the subquery in the first query. Has anyone else experienced this or have any suggestions for correcting it? Thanks, Alec
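     For comparison, the same shape of query against a standard SQL engine returns only the two outer result sets. A SQLite sketch (the table names and sample data are invented for illustration, simplified from the naming above) showing the subquery's ids never leaking into the UNION result:

     ```python
     import sqlite3

     con = sqlite3.connect(":memory:")
     con.executescript("""
     CREATE TABLE organisation (id TEXT PRIMARY KEY, name TEXT);
     CREATE TABLE organisation_contact_link (org_id TEXT, contact_id TEXT);
     INSERT INTO organisation VALUES ('o1','Some Org'), ('o2','Other Org'),
                                     ('o3','Default Organisation'), ('o4','Linked Org');
     INSERT INTO organisation_contact_link VALUES ('o4','c1');
     """)

     # Organisations not linked to contact c1, UNIONed with the default org.
     rows = con.execute("""
         SELECT name FROM organisation
         WHERE id NOT IN (SELECT org_id FROM organisation_contact_link
                          WHERE contact_id = 'c1')
         UNION
         SELECT name FROM organisation WHERE name = 'Default Organisation'
     """).fetchall()

     names = sorted(r[0] for r in rows)
     # Only the two outer queries are combined; no ids from the subquery appear.
     assert names == ['Default Organisation', 'Other Org', 'Some Org']
     ```

     Note also that standard UNION removes duplicates, so the repeated 'Some Org' in the FQL output above is a second deviation from expected behaviour.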
  15. I've been trying this process out and it seems to work well for short variables. However, for very long variables (in my case the contents of a 2,000-line batch file), performance is very slow: it takes a good few minutes for the passed variable to become available to the script. For now I'll put the file contents into a field earlier in the process, but if anyone knows an efficient way of passing long variables between scripts without going via a field, I'd be interested to hear it. Alec