
RDBMS design for FileMaker - worth the effort/feasible?


This topic is 8119 days old. Please don't post here. Open a new topic instead.


Posted

Here's a question for all of you familiar with RDBMS design!

I have studied relational database theory in my Master's education in Computer Science and I wonder how applicable it could be in the FileMaker development process. I have also developed many FileMaker solutions, albeit not really by using the "traditional" relational database design steps. By "traditional" I mean the process of first creating a conceptual schema, then a relational schema, then normalizing the tables, and finally implementing the solution in an RDBMS.

During my years developing FileMaker solutions I have found that the implementation techniques you use in FM differ significantly from most SQL-compliant RDBMSs. For example, there is no concept of a "primary key" in a FileMaker file (which is the equivalent of an SQL "table", or in RDBMS theory a "relation"), and there is little support for referential integrity. Furthermore, the normalization process often gives rise to a lot of tables (files in FM), and there are, relatively speaking, severe limitations in FileMaker as to how many files can be related and how many can be open simultaneously on a FM server.

Nevertheless, it would feel good to have a "clean" design from the beginning, but I'm not sure whether it is feasible and worth the effort.

Has anyone out there tried this "traditional" approach for developing FM databases and how well did it work out?

Thank you for any input!

/Daniel

Posted

quote:

Originally posted by DanielS:

Here's a question for all of you familiar with RDBMS design!

I have studied relational database theory in my Master's education in Computer Science and I wonder how applicable it could be in the FileMaker development process. I have also developed many FileMaker solutions, albeit not really by using the "traditional" relational database design steps. By "traditional" I mean the process of first creating a conceptual schema, then a relational schema, then normalizing the tables, and finally implementing the solution in an RDBMS.

/Daniel

FileMaker Pro is different, but the design process you describe is almost mandatory for larger projects. FileMaker Pro 5 supports multiple relations as well as self joins. And in some instances there can be several hundred of these per file.

This topic is well covered at the annual FileMaker Pro DEVCON, usually by a team of System Engineers from FMI and some FSA Partners.

HTH

Old Advance Man

Posted

Thank you for your comments, "Old Advance Man"!

That sounds good, could you (or anyone else) point me to an example project where this has been done? That would be very helpful...

Thanks,

Daniel

Posted

I can't release DEVCON material to you. But perhaps you might find some helpful information in the book by Rich Coulombre, "Special Edition Using FileMaker Pro 5", available at Amazon.com.

HTH

Old Advance Man

Posted

I have not seen any modern RDBMS which truly implements the relational design theory that you read about. All of them make some compromises to the model.

Since FileMaker is based on a non-relational model, it tends to implement the least of the model directly, but do not think for a moment that you should not use good relational design.

Check out my article (in the Database Standards forum) on unique IDs to see my implementation of primary keys.

Posted

Thanks to both of you, "OAM" and "CaptKurt"!

I really appreciate your input.

I have looked at your "Primary ID"-scheme and it really looks interesting, I think I'll use it!

I also looked at the "Special Edition..." book and that, too, seems to be a valuable resource. I think I'll order it. The only books/products I've found to be of much value earlier is the "Scriptology series" by Matt Petrowsky. But he doesn't talk that much about the design process and the PK->FK relational aspects.

Thank you again!

/Daniel

Posted

Hi Daniel,

IMHO, with FileMaker you can absolutely create a well-formed relational data model, with relatively pure functional dependency and relatively robust referential integrity. For creating RDBs, FileMaker is probably stronger than you'd imagine.

Although FileMaker doesn't require you to explicitly define primary keys, there is nothing preventing you from doing it, and from constraining the keys properly to be {unique, non-null, persistent, consistent, meaningless}.

Be careful of the file = table assumption. There are many other "objects" in FileMaker that equate to tables in traditional systems also: value lists, portals, Find requests, Found sets, summaries etc. So there is usually a need for fewer FileMaker files than tables in other systems.

Also keep in mind that FileMaker Relationships are a superset of relationships from other database environments -- in other environments, relationships traditionally support only the data layer. FileMaker Relationships are also used to derive data (business logic layer) and to control the application (presentation layer). In other words, FileMaker Relationships equate to queries, views, programming variables, et al. FileMaker Relationships can be surprisingly powerful and versatile because of their tight coupling to the data source.

As an RDBMS, FileMaker conforms far less to the traditional model, principally because of the unified file format, which means no distinct data/logic/presentation layers and no administrable transaction model (i.e. FileMaker doesn't pass the ACID test; in fact, it doesn't even take the test).

Nonetheless, there are many developers that deploy distinct FileMaker files for application and data separation. If you already have a good head for the traditional tiered RDBMS architecture, you can definitely apply that knowledge to FileMaker application construction to your benefit. There are a number of developers in the FileMaker community who have been successfully leading this charge and it was a hot topic at this past year's DevCon.

Although there is a unified file format, the elements of FileMaker can be generally identified with a specific RDBMS layer. For instance, although all fields are defined in one place, they participate functionally at different "layers":

Data layer: text, num, date, time

Logic layer: calculations, summaries

Presentation layer: globals, calculations

Similarly for relationships:

Data layer: primary key - foreign key relationships

Logic layer: query relationships, portal filter relationships etc.

Presentation layer: constant match, null relationships (for passing global data etc.)

The unified file format of FileMaker, although it breaks with traditional RDBMS theory, is fundamentally responsible for almost everything good about FileMaker (ease of use, performance, on-the-fly updates etc.). So I don't imagine that it will go away soon. Nonetheless, one can design and develop with traditional architecture models in mind to create the strongest FileMaker databases, IMHO.

This is a long discussion - too long for this medium perhaps, but HTH.

Good luck!

[ November 30, 2001: Message edited by: droid ]

Posted

Andrew,

Thank you very much for your insightful comments. You confirm and put down in words, many things that I have pondered over in FileMaker-land. It feels good that a FileMaker representative can confirm my thoughts!

I have some questions and comments to your statements, though.

>Nonetheless, there are many developers that deploy distinct FileMaker files for application and data separation. If you already have a good head for the traditional tiered RDBMS architecture, you can definitely apply that knowledge to FileMaker application construction to your benefit. There are a number of developers in the FileMaker community who have been successfully leading this charge and it was a hot topic at this past year's DevCon.

What is FileMaker's view on this? In general, it's a good thing to separate data and presentation, especially when it comes to bigger and more complex solutions. Wouldn't it be an idea for FileMaker to write a "developer's guide" for these more complex projects, covering how to separate data, business logic, and presentation objects? I haven't seen one yet, anyway! Do you know if there is any other material from the DevCon that is available on the Internet or for purchase elsewhere? (I have now bought the "Special Edition Using FileMaker Pro 5" book by Rich Coulombre, which touches on the subject more than any other book I've read, but I'd like more sources!)

>As an RDBMS, FileMaker conforms far less to the traditional model, principally because of the unified file format, which means no distinct data/logic/presentation layers and no administrable transaction model (i.e. FileMaker doesn't pass the ACID test; in fact, it doesn't even take the test).

Is it FileMaker's intention to include any support for transactions in the future in FileMaker, or is it a definite "no-no" from FileMaker's side?

>Similarly for relationships:

>Data layer: primary key - foreign key relationships

>Logic layer: query relationships, portal filter relationships etc.

>Presentation layer: constant match, null relationships (for passing global data etc.)

In my view, I would put the portal filter relationships in the presentation layer as well. And the constant relationships would primarily go in the logic layer, since they are mostly used for global data transfer. Do we have different definitions of the "logic" versus "presentation" layers?

Finally, just another thanks to all of you contributing to my knowledge in FileMaker!

/Daniel

Posted

Hi Daniel,

<copious snipping to follow>

quote:

Originally posted by DanielS:

What is FileMaker's view on this? In general, it's a good thing to separate data and presentation, especially when it comes to bigger and more complex solutions. Wouldn't it be an idea for FileMaker to write a "developer's guide" for these more complex projects when it comes to separating data, business logic, and presentation objects. I haven't seen one yet, anyway!

It would be a great idea, but the material might actually best come from the development community, where practical experience with the product gets the most exercise. I used to be a developer consultant, so I have my views, but there are a number of highly qualified members of the development community out there who are really pushing boundaries with the tiered approach. I hope we see more published material from them. At DevCon01, this was a hot topic. I'm fairly sure we'll see more there next August. I intend to talk about it a bit, if I get the forum to do so.

Do you know if there is any other material from the DevCon that is available on the Internet or for purchase elsewhere? (I have now bought the "Special Edition Using FileMaker Pro 5" book by Rich Coulombre, which touches on the subject more than any other book I've read, but I'd like more sources!)

I'm a fan of Rich's book for sure. At this time, for more information on relational architecture, I read traditional sources (CJ Date et al), or books on other dbs (SQL Server admin guides) and apply them to FileMaker myself.

Note that I'm not claiming that one can explicitly create a tiered architecture (obviously?) but that one can enjoy the benefits of one with analogous FileMaker architecture.

Is it FileMaker's intention to include any support for transactions in the future in FileMaker, or is it a definite "no-no" from FileMaker's side?

A tough question and one that I'm not allowed to answer directly. I wouldn't say that it's a definite "no-no", but it would be exceptionally difficult to eliminate FileMaker's unified file format without losing everything that makes it unique (and successful).

In my view, I would put the portal filter relationsships in the presentation layer as well. And for the constant relationsships, they would primarily go in the logic layer, since they are mostly used for global data tranfser. Do we have a different definition of the "Logic" versus "Presentation" layers?

Portal filters just for viewing could be called presentation, but they are essentially stored queries (views) that derive information and teach us. Thus, I don't think of them as purely interface/application layer. Filtered relationships are also used for Sum(), Count() etc..

Conversely, I see global data transfer as (generally) an application driving exercise - hence presentation layer.

But of course it's hard to draw distinctions that are not (thankfully) enforced. Plus, this is a discussion probably too big for this medium, so my apologies for any inaccuracies.

Good luck!

/Daniel

[ December 03, 2001: Message edited by: droid ]


Posted

Andrew,

Thanks again.

I don't really have anything to add for the moment. It's time for me to start designing, developing and coding!

I'll come back later for more interesting discussions!

/Daniel

Posted

quote:

Originally posted by CaptKurt:

Check out my article (in the Database Standards forum) on unique IDs to see my implementation of primary keys.

After reading some more in Matt Petrowsky & John Mark Osborne's "Scriptology" about "A serial number field", I wonder what your comments are on this:

"In FileMaker Pro 4.0, a new Status function was added that returns the unique ID value for each record. This value can often be substituted for a serial number field. However, there is a stability issue to consider when using the Status(CurrentRecordID) as a match value for a relationship. If the records from your database need to be transferred to a new shell, the ID on each record will change, which will invalidate all of your relationships. In this case, a serial number will work better since the values can be imported into the new FileMaker database. However, other situations could benefit from the Status(CurrentRecordID) function."

It seems that your scheme should also be vulnerable to this, but maybe there is no "perfect" solution?

Thanks for any help,

Daniel

Posted

Another thing: to maintain referential integrity I thought that, on the foreign key side, you should have validation on the foreign key field of "Not empty" and "Member of value list", where the value list is based on the primary key field from the "parent" database.
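For illustration, the effect of that validation can be sketched in Python (a hypothetical helper, not anything FileMaker provides; the names are made up for the example):

```python
# Hypothetical sketch of the proposed validation: a foreign key must be
# non-empty ("Not empty") and must exist among the parent file's primary
# keys ("Member of value list").

def validate_foreign_key(fk, parent_primary_keys):
    return bool(fk) and fk in set(parent_primary_keys)

customers = ["CUS000001", "CUS000002"]               # parent primary keys
print(validate_foreign_key("CUS000002", customers))  # True: valid reference
print(validate_foreign_key("", customers))           # False: fails "Not empty"
print(validate_foreign_key("CUS000099", customers))  # False: orphan
```

Note too that validation of this kind is typically checked only when the child field is set, so deleting a parent record afterwards can still leave orphans behind.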

Any comments? Are there any disadvantages?

Thanks,

Daniel

Posted

First, do not use Status(CurrentRecordID) as a primary key or foreign key field. That is not its purpose.

Second, validation on the foreign key field is not really required provided you create the "child" record with the foreign key from the parent file via a portal. Or, you can use a scripted routine.

HTH

Old Advance Man

Posted

Kurt and I may take up this issue in other quarters, but it is worthy of note that if a file is cloned, then all the Record IDs are reset. Also, please remember that while the IDs are unique and persistent, they are NOT sequential. They go from 1 to 127 and then to 32768, as I recall. There is another break at about 16000 records, give or take, to an 8-digit Record ID.

HTH

Old Advance Man

Posted

quote:

Originally posted by Old Advance Man:

First, do not use Status(CurrentRecordID) as a primary key or foreign key field. That is not its purpose.

It is OK to use this as PART of another stored key (an auto-entered calculation into a text field). For example, I use it in my key-building calculation, which is:

code:


Right("000000" & Status(CurrentRecordID), 6) & "-" &
Right("000000" & Status(CurrentDate), 6) & "-" &
Right("000000" & Status(CurrentTime), 6)


This basically amounts to a key like:

000123-765432-345653

However, you never want to use Status(CurrentRecordID) straight since, as OAM points out, that was NOT its purpose and it will cause problems between files.
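For readers who think better in a general-purpose language, the calculation above can be sketched in Python (illustration only; record_id, date_serial and time_seconds are stand-ins for Status(CurrentRecordID), Status(CurrentDate) and Status(CurrentTime)):

```python
# Right("000000" & value, 6) in FileMaker keeps the last 6 characters after
# left-padding with zeros; str(v).zfill(6)[-6:] does the same in Python.

def build_key(record_id, date_serial, time_seconds):
    parts = (record_id, date_serial, time_seconds)
    return "-".join(str(v).zfill(6)[-6:] for v in parts)

print(build_key(123, 765432, 45653))  # -> 000123-765432-045653
```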

Posted

Hi Kurt and others,

I have a small suggestion to improvement of your primary key scheme.

I have read your "FAQ" for creating primary keys, and, as I have written earlier, I like it a lot. In his book "Scriptology" (pp. 299-300), Matt Petrowsky also suggests almost the exact same scheme for implementing a unique key - a combination of Status(CurrentDate), Status(CurrentTime) and Status(CurrentRecordID).

Even though it is unlikely, there might be key duplicates if users work on clones of a database simultaneously.

Consider the following scenario:

Several salespersons need to bring clones of a database out into the field, adding new records to them. When these clones are made from the master database, the "Record ID" part of the key for newly added records in these clones will be the same, since the "Record ID counter" will start at the same value for all clones if they are saved at the same time.

Now, what distinguishes new records in these clones from each other is the time part of the key. But the resolution is "only" one second. There is a not insignificant risk that users working with different clones will create a new record in the same second. I agree that the creation of a record with the same Record ID as well is extremely unlikely. But in some instances you might create hundreds, or thousands, of records via a scripted loop. Many of these will have the same "time part" of the key, thus increasing the probability that some keys will be the same between different clones. When these are imported into the main database, we have a problem.

In order to further minimize the probability of generating a duplicate key, I would suggest including some random characters in the key. For instance, the Status(CurrentTime) part of the key need not occupy six characters; five are enough (60 seconds/minute x 60 minutes/hour x 24 hours = 86,400, which fits in five digits). If we also skip the "-" delimiters, which do not add to the uniqueness of the key, we have three characters left for random values. A new, improved scheme would then be:

Right( "000000" & Status(CurrentRecordID); 6 ) &
Right( "00000" & Status(CurrentTime); 5 ) &
Right( "000000" & Status(CurrentDate); 6 ) &
Middle( "0123456789ABCDEFGHIJKLMNOPQRSTUVXYZ"; Int(Random * 35) + 1; 1 ) &
Middle( "0123456789ABCDEFGHIJKLMNOPQRSTUVXYZ"; Int(Random * 35) + 1; 1 ) &
Middle( "0123456789ABCDEFGHIJKLMNOPQRSTUVXYZ"; Int(Random * 35) + 1; 1 )

(Note that Int(Random * 35) + 1 always yields a valid character position from 1 to 35; rounding Random * 35 could produce 0, which would make Middle() return an empty string and shorten the key.)

And, you could of course always add more random characters, provided that you add a space after these 20 characters first.

For (developer) display purposes, you could create a calculation field that inserts "-" delimiters at the proper places within the key, since they are no longer part of the key itself.

A final note:

Of course, if you really must make sure that your keys are unique in the database, you must check them for duplicates via a script on import. But with the proposed scheme above, the probability of a duplicate becomes infinitesimal.
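A rough Python rendering of this 20-character scheme (illustration only; the 35-character alphabet is copied from the calculation above, which omits W, and the rng parameter is just a hook for testing):

```python
import random

ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVXYZ"  # 35 characters, as in the calc

def build_key(record_id, time_seconds, date_serial, rng=random):
    # Fixed part: 6-digit record ID + 5-digit time + 6-digit date = 17 chars
    fixed = (str(record_id).zfill(6)[-6:]
             + str(time_seconds).zfill(5)[-5:]
             + str(date_serial).zfill(6)[-6:])
    # Random part: 3 characters drawn from the alphabet, for 20 chars total
    return fixed + "".join(rng.choice(ALPHABET) for _ in range(3))

key = build_key(46442, 38555, 730915)
print(len(key))  # 20
```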

Any comments?

/Daniel

Posted

Your suggestions are good, but I have two problems with them.

One is that although someone MIGHT create records at the same Time or on the same Date or with the same RecordID, it is almost impossible (without deliberate effort) to create a record with the same RecordID at the same Time on the same Date as another user.

System times would need to be synchronized to the second and systems would need to be performance synchronized so that they each took exactly the same amount of time to create and save a record.

I have tested this in a situation much like the one you describe. In fact, it was this project that gave birth to this scheme some 5 or so years ago. It was a case of a distributed database given out to salesmen, who then returned the database to their territory manager, who combined them all into one and turned the database in to the regional director, who did the same and turned it in to the divisional director, who did the same before turning it in to the VP of Sales. Never with a duplicated ID. The probabilities were calculated once and were astronomically high against any kind of duplicate.

Since that time, I have used this scheme in every system that I have worked on and have had no duplicated keys.

I chose not to go with any kind of random generation, simply because a random number can be duplicated. Although unlikely, it is possible for the random number generator to produce the same sequence every single time. This makes it very easy to duplicate keys.

That said, I do like your approach of adding one or more random characters into the sequence. I would perhaps modify it as follows to make them break up the sequence a bit more.

code:


Right( "000000" & Status(CurrentRecordID); 6 ) &
Middle( "0123456789ABCDEFGHIJKLMNOPQRSTUVXYZ"; Int(Random * 35) + 1; 1 ) &
Right( "00000" & Status(CurrentTime); 5 ) &
Middle( "0123456789ABCDEFGHIJKLMNOPQRSTUVXYZ"; Int(Random * 35) + 1; 1 ) &
Right( "000000" & Status(CurrentDate); 6 ) &
Middle( "0123456789ABCDEFGHIJKLMNOPQRSTUVXYZ"; Int(Random * 35) + 1; 1 )


This way you get the random characters mixed into the key and still stay within the bounds of 20 characters.

  • 2 months later...
Posted

Hi CaptKurt!

I have yet another suggestion for improvement to your scheme. Now, I think this one adds more value than my previous suggestion of adding random values to the key.

We still want to keep the key 20 characters long, so the random characters unfortunately will have to go away.

What I've noticed is that it's good to have an indication of which file a primary key belongs to. This is also something that's suggested in Rich Coulombre's book "Special Edition Using FileMaker Pro 5", although he uses the auto-entered serial number for the rest of the primary key.

But the main thing is that it's good to prefix the primary key with an identifier that identifies the file it belongs to.

When developing and doing troubleshooting, you might have lots of different primary and foreign keys on a layout just to see that a script executes correctly (in the "test phase" of the script). If we can then see which file a certain key belongs to, it's much easier to spot logical errors, such as defining a relationship to the wrong file. It can also be useful when importing records.

My suggestion is to use a three-letter combination, which should make it relatively easy to come up with logical combinations for a file. Here are some examples:

Invoices - "INV"

Invoices line items - "INL"

Customers - "CUS"

Calendar - "CAL"

Reports - "REP"

Orders - "ORD"

Orders line items - "ORL"

and so on..

So, how do we change your scheme to include a three-letter prefix?

This is the definition of the new key I'm suggesting:

"XXX" &
Right( "000000" & Status(CurrentRecordID); 6 ) &
"-" &
Right( "00000" & Status(CurrentTime); 5 ) &
Right( "00000" & Status(CurrentDate); 5 )

I have removed one character each from the "CurrentTime" and "CurrentDate" compared to your original key, and removed the last dash separator.

Now, removing one character from the time part does not change anything, since CurrentTime will occupy at most 5 characters.

Removing the leftmost character from the date part will not make any practical difference. This is because CurrentDate is stored as the number of days since "year 0". So all days between circa 1917 (700,000 / 365 = 1918) and 2192 (799,999 / 365 = 2192) will have a "7" as the leftmost character, so I simply do not use it.

I like having a separator between the "CurrentRecordID" and the time part of the key; it makes it more legible and "human-parsable".

A key for an invoices file would then look like:

"INV046442-3855530915".
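Sketched in Python (illustration only; the arguments stand in for the Status functions, and the date serial has its leftmost digit dropped as described):

```python
def build_prefixed_key(prefix, record_id, time_seconds, date_serial):
    return (prefix                              # 3-letter file identifier
            + str(record_id).zfill(6)[-6:]      # 6-digit record ID
            + "-"                               # human-readable separator
            + str(time_seconds).zfill(5)[-5:]   # 5-digit time of day
            + str(date_serial).zfill(5)[-5:])   # date, leftmost digit dropped

print(build_prefixed_key("INV", 46442, 38555, 730915))  # -> INV046442-3855530915
```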

What do you think about it?

Daniel

Posted

quote:

Originally posted by DanielS:

A key for an invoices file would then look like:

"INV046442-3855530915".

Actually this was part of my original scheme as it was implemented in the original system. While it was helpful, I found that it was not really that useful in normal implementation.

I have long since stopped using it. Additionally, it is a problem in that it adds human-readable values to the key and implies a possible range of values, both of which are violations of the rules for key fields.

If someone works with different keys mixed into a single multi-key field, then this is very useful, but only for human readability.

Posted

quote:

Originally posted by CaptKurt:

quote:

Originally posted by DanielS:

A key for an invoices file would then look like:

"INV046442-3855530915".

Additionally, it is a problem in that it adds human-readable values to the key and implies a possible range of values, both of which are violations of the rules for key fields.


I cannot see the problem. What exactly are the "rules" you are talking about? I guess you are talking about key fields with respect to FileMaker, and not relational database theory, since there is a big difference here.

I have a Master's education in computer science and I've studied relational database theory so I know what these terms mean. But that's in the RDT world, which is quite different from FileMaker.

I can only see positive things about having a prefix to the key in FileMaker.

Could you please give a more specific explanation on what you mean?

Thanks,

Daniel

  • 5 months later...
Posted

Daniel,

You might also want to check out Chris Moyer and Bob Bowers' new book "Advanced FileMaker Techniques for Developers", which devotes the first 5 chapters to a discussion of the relational model specifically in the context of FileMaker.

HTH

William

William Akey

Peninsula db, LLC

Cell: 650-906-5104

Posted

Here is a basic assumption in the discussion so far: "Several salespersons need to bring clones of a database out into the field, adding new records to them."

Change the assumption. Since FileMaker can be served over the web, and with the web connectivity that exists today, if the only need (or even just the initial need) is for salesmen to add new records to the db, addressing the issue as a browser solution could very easily change (and simplify) the approach to record identification keys: they would be created "from the field" rather than "in the field". Of course, the web would also allow the salesman to access data (inventory, services, etc.) to serve the client.

I just mention this database/design option since the thread is "RDBMS design for FileMaker - worth the effort/feasible?". The important thing is design. Design and design options are what successful solutions are about.

