
This topic is 8059 days old. Please don't post here. Open a new topic instead.

Recommended Posts

Posted

Here is my situation:

I have just recommended that the company I work for upgrade from 5.0 to 6.0 to gain the record-level access privileges functionality in our database. The reason is that we are going to be allowing independent contractors access to areas of our database via Citrix. The record-level privileges would provide security to make sure the contractors only view the information that they are supposed to view.

The record-level security is nice because it filters portals and value lists so they only see information they are supposed to see.

Now the caveat.

We currently have 10 db's. The main one that the contractors would mostly be in has nearly 300,000 records. I've created a test file (see attached) to see what the performance would be like with these kinds of record volumes. My test is the SIMPLEST situation I could imagine.

1 file

3 fields

_RecordID //Serial Number

_RecordDescription //for emulating editing

_Access //A 1 in the field means that access is granted.

To use the test file, you need to run the script "Create 300,000 Records". (takes a bit of time, but keeps the attachment small)

For the test, the master password (admin) must be used to enter a 1 in the fields to "grant" access. Then, close the file and reopen it with the user password (user).

Do a find on the Access field (by entering a 1); this find is FAST.

Next, do a find on the description field (I used the word "project"). The find itself takes a few seconds, THEN it evaluates the security. This TAKES A LONG TIME compared to the find (about 5 or more times longer than the find itself).

Like I said, this is the simplest security access method I could think of. Now imagine using related fields and checking a user database for permissions through a join file (which is what I think I would need to allow multiple users access to different projects at the same time).
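To make the test concrete for anyone who can't open the attachment: here is a rough sketch in Python (NOT FileMaker code, and not how FileMaker is implemented internally) of the access model described above. The field names mirror the test file; the record counts and data distribution are made up for illustration. The point is the order of operations: the find runs first, then the access predicate is evaluated against every record in the found set.

```python
import random

# Hypothetical simulation of the test file's three fields:
# _RecordID, _RecordDescription, _Access (1 = access granted).
# The 300,000-record size matches the test described above.
random.seed(42)
records = [
    {
        "_RecordID": i,
        "_RecordDescription": "project" if i % 3 == 0 else "other",
        "_Access": 1 if i % 2 == 0 else 0,
    }
    for i in range(300_000)
]

def find(recs, field, value):
    """Stand-in for a FileMaker find. A real indexed find is fast;
    the cost we care about comes after it."""
    return [r for r in recs if r[field] == value]

def apply_record_level_access(found_set):
    """Record-level security as the thread describes it: the access
    test is evaluated per record AFTER the find returns."""
    return [r for r in found_set if r["_Access"] == 1]

found = find(records, "_RecordDescription", "project")
visible = apply_record_level_access(found)
print(len(found), len(visible))
```

The second pass touches every record the find returned, which is why a broad find pays a security-evaluation cost proportional to the found-set size even when every field involved is indexed.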

I also created another test, where I had 2 files. One was the 300,000-record project file, and the other was a file with a single record and a portal view into the projects file. Because the record-level access filters through the portal, it needed to re-evaluate the security every time a change was made. Let me tell you... only a person with the patience of Job could use this setup.

Does anyone use record-level access with large volumes of records successfully? Am I missing something? By the way, nothing is based on calculations, and everything is indexed, so that is not the problem.

Maybe FileMaker thinks that the only people who would use this feature have databases of fewer than a few thousand records (this is where it starts to slow down).

I've read a few posts now where people are struggling with this. Should I just scrap this idea now before spending any more time developing for this "feature"?

AccessTest.zip

  • 3 weeks later...
Posted

Trevorg --

I'm curious if you found any solution to your problem -- which is also my problem. As far as I can tell, limiting browse access by even the simplest boolean test -- whether stored or unstored -- significantly slows down finds in large databases. I even tried sticking just the number "1" directly in the access calculation box (which essentially allows access to all records), and it still took a lot longer to do finds than if I set browse access to "All." The complexity of the boolean calculation seems less important than the simple fact that you're limiting access. I'm pretty much out of ideas -- so if you've come up with anything that works, I'd really appreciate hearing about it. Thanks a lot.

--Gerry

Posted

Hi Gerry,

Unfortunately I haven't found the definitive answer yet. And I am somewhat foolishly avoiding it for the time being. This implementation has not been given priority, but within the next month or so it will be the TOP priority.

However, I did find a few things out that I am keeping in the back of my mind.

1. The security is evaluated after the find. It appears to be VERY important to have a limited number of records returned from each find. If the find returns a large number of records (or worse, all the records in the DB), then forget it; it just takes too long. Also, when evaluating security permissions through related files, I think it's evaluating AFTER the related records are returned. So if a large number of related records exist, the security takes a long time to evaluate.
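The observation above can be illustrated with a small sketch (again Python, not FileMaker internals): if the access predicate is checked once per record in the found set, the number of evaluations, and therefore the cost, grows linearly with the found-set size, regardless of how cheap each individual check is. The class and field names here are invented for the demonstration.

```python
class AccessCounter:
    """Counts how many times the security predicate is evaluated."""
    def __init__(self):
        self.evaluations = 0

    def allowed(self, record):
        self.evaluations += 1  # every record in the found set pays this cost
        return record.get("access") == 1

records = [{"id": i, "access": 1} for i in range(100_000)]

# Narrow find: only 50 records come back, so only 50 evaluations.
counter = AccessCounter()
visible_small = [r for r in records[:50] if counter.allowed(r)]
small_evals = counter.evaluations

# "Find all": every record comes back, so every record is evaluated.
counter = AccessCounter()
visible_all = [r for r in records if counter.allowed(r)]
large_evals = counter.evaluations

print(small_evals, large_evals)
```

This matches what both posters report: even a trivial predicate (a constant "1") is slow on a big found set, because the complexity of the calculation matters less than the sheer number of records it must be evaluated against.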

I'm scared about this one, because everyone here is betting the farm on this security feature. If I can't get it to work efficiently, I'm going to be eating a lot of crow.

I did call FileMaker Inc on this. The answer I got was vague. I was told, "The performance of the record-level security access is highly dependent upon the architecture of the DB that the feature is being utilized in." When I told them my test file is 1 DB, no relations, no calculations, and a few fields... they said... well, we know there have been some implementations where it can take a few minutes to evaluate the security. They even went on to say that they know of one client that got the performance down from OVER AN HOUR to evaluate to about 5 minutes by changing the method of implementation. I said 5 SECONDS is still too long! I don't know how 5 minutes is anything to be proud of.

So... to sum up. No. I have nothing yet. I asked if there were any white papers for "best practices" when using this feature. They said, "No, but I suppose one would be pretty helpful eh?"

The last bit of "advice" they gave was a suggestion that I join the FSA to gain "insider information" and access to their private message boards. I wasn't too surprised that they were trying to make another sale...

Posted

Trevorg--

Thanks a lot for the detailed answer. I've found pretty much what you've found: a large found set = lousy performance. It would be very helpful to find out exactly how FileMaker is limiting access. However they're doing it, I think they may want to go back to the drawing board.

If it makes you feel any better, I'm dealing with a place that is also "betting the farm" on this feature. Fortunately, we're dealing with a maximum of 10,000 records in the main file, which is slow enough -- but it's a lot better than what you've got to deal with. If I find anything helpful as we implement this thing, I'll pass it along....


