Been racking my brain: Supertype / subtype


If I take just the PARTY ROLE portion literally, I would have 29 tables to build. Am I understanding this correctly?

If you built it fully normalized, then that's probably correct. NB, this logical model covers a lot of area that your database is likely not to cover. Unless you are using the database for HR purposes, you aren't likely to track EMPLOYEE roles. One would only add the tables that are appropriate to their solution, and I doubt very highly that any one solution would include all of these shown in the model.

A big part of the problem is my inexperience, coupled with the fact that the book (and any other material I have researched) does not give extensive sample data for each of these "subtypes", and so the "meaning" of these "subtypes" is really subject to misinterpretation - on my part.

In addition to the book there is a CD which contains the complete schema of all the logical models. I'm not sure if there is also some sample data, which would be really cool.

Looking at Figure 1, am I correct in assuming that this is modeled to a high degree of normalization? And if so, do most DB designers easily see what and how to denormalize the model to make implementation or retrieval easier for them, their system, their requirements?

I believe that all of Len's models are in 3NF; it's in the book somewhere. The process of denormalization is usually a little tough, especially in FileMaker. You might start out by trying to build as pure a model as possible and then backtrack in order to make something simpler for FileMaker to handle. Sometimes it's difficult to see all of the potential pitfalls in advance.

And lastly, most importantly to me, is it "possible" to physically build each subtype as shown in the figure as their own tables?

It's certainly possible, but will involve a lot of trade-offs since FileMaker's relational model is so weak.
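
As a rough sketch of what "each subtype as its own table" means - written here in generic SQL with invented table and column names, not the book's actual schema; in FileMaker these would become tables and relationships in the graph rather than DDL you run:

CREATE TABLE party (
    party_id   INT PRIMARY KEY,   -- one row per party, whatever its subtype
    party_type VARCHAR(20)        -- e.g. 'PERSON' or 'ORGANIZATION'
);

CREATE TABLE person (
    party_id    INT PRIMARY KEY,  -- same key value as the matching PARTY row (1:1)
    first_name  VARCHAR(50),
    middle_name VARCHAR(50),
    last_name   VARCHAR(50)
);

CREATE TABLE organization (
    party_id   INT PRIMARY KEY,   -- same key value as the matching PARTY row (1:1)
    org_name   VARCHAR(100)
);

Each subtype row shares its key with its supertype row, which is what lets you treat a PERSON and an ORGANIZATION uniformly as a PARTY.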

Some of the previous posts have me observing that it is at the designer's discretion where to normalize to a higher degree in some areas than in others. Am I somewhat on the right track with that assumption?

That's largely correct. The two best justifications for denormalizing a schema in FileMaker are: (a) for performance reasons, or (b) to work around FileMaker's relational limitations.

Thank you David - your input is greatly appreciated.

Unless you are using the database for HR purposes, you aren't likely to track EMPLOYEE roles. One would only add the tables that are appropriate to their solution, and I doubt very highly that any one solution would include all of these shown in the model.

Understand. However, this may be an odd statement, but hopefully it will make some sense...

I am coming to the belief that almost ANYTHING that covers an enterprise will eventually creep up and become a need in the solution - depending upon the growth of an organization and consequently their needs. And this is the crux of it: if I build what I can see as a need today, it may very well be in need of a retrofit or complete rebuild when the next thing that is unforeseen becomes a need and the initial build does not accommodate it.

I have heard that one should not try to model the universe. Obviously, I agree. But the other extreme of modeling for a very specific or "specialized" solution to a problem places one smack dab in the situation I describe above.

I "think" I have had a taste of this and it is not fun. Much of this may be generally known and accepted, so forgive me if this is elementary.

In addition to the book there is a CD which contains the complete schema of all the logical models. I'm not sure if there is also some sample data, which would be really cool.

From what I understand, the samples are in SQL Code and I do not have SQL. Would be great if they were available in a form I could use.

I believe that all of Len's models are in 3NF; it's in the book somewhere.

I believe I read that as well.

The process of denormalization is usually a little tough, especially in FileMaker. You might start out by trying to build as pure a model as possible and then backtrack in order to make something simpler for FileMaker to handle. Sometimes it's difficult to see all of the potential pitfalls in advance.

So, in other words, I will have to see it for myself to understand these limitations.

It's certainly possible, but will involve a lot of trade-offs since FileMaker's relational model is so weak.

I find this to be a common observation among DB designers in other forums using other RDBMSs as well. It is touched upon in almost all of the books I have read except for one - I believe it was "Database Design for Mere Mortals". I REALLY like the "principles" he is proposing. Very linear and literal. I believe I have also read that some of the "great minds" feel that we are denormalizing because of the lack of sufficient RDBMSs available to us, and that doing so is not really correct "relational theory".

Denormalization while keeping integrity intact seems to require one to understand normal forms AND the implications of denormalizing AND how to deal with them effectively. Knowledge I do not possess. Hence the reason I want to cling to normal form as best I can.

That's largely correct. The two best justifications for denormalizing a schema in FileMaker are: (a) for performance reasons, or (b) to work around FileMaker's relational limitations.

It would seem to me that a normalized structure would actually produce less data - due to less redundancy. It also seems that the tables would not be as "deep" in some cases, and that they would be segregated, so retrieval happens over a smaller number of records to parse through - which seems to make for an efficient framework. I guess the flip side is the burden of having to dig through the relationships to retrieve data in disparate locales.

Regarding the samples you posted, I have a couple of questions:

1. Your sample = calculated value of "1" for Organization.

Is this the "subtype discriminator" that I have read about?

2. My subtype discriminator "names" are in their own tables. Could I, instead of the "calc = 1" in your sample, use the actual primary key of the subtype discriminator, and then use its related field to display the name value?

I am clinging on to my security blanket "all my relationships are attached with keys and only keys." Once again, maybe unrealistic - but so far so good. I think.

I am coming to the belief that almost ANYTHING that covers an enterprise will eventually creep up and become a need in the solution - depending upon the growth of an organization and consequently their needs. And this is the crux of it: if I build what I can see as a need today, it may very well be in need of a retrofit or complete rebuild when the next thing that is unforeseen becomes a need and the initial build does not accommodate it.

Certain businesses will never have a use for some elements of these models (e.g., a law firm will never need a product distribution network). You should start with what you need and/or anticipate and expand as needed. A minor retrofit may be required for future expansion, but a well-designed system should never need a complete rebuild. If you attempt now to build a universal enterprise system I guarantee you it will never get done!

From what I understand, the samples are in SQL Code and I do not have SQL. Would be great if they were available in a form I could use.

I did a brief test on a small snippet of SQL code that comes on the sample disc and was able to create tables and fields with it using FMP as an ODBC source. There were some things it choked on and would have to be parsed out, but it sure beats starting from scratch, eh?

Denormalization while keeping integrity intact seems to require one to understand normal forms AND the implications of denormalizing AND how to deal with them effectively. Knowledge I do not possess. Hence the reason I want to cling to normal form as best I can.

I'd argue that it's actually the opposite. Most people's first inclination is to build denormalized structures. It takes a lot of forethought to properly plan a relational database.

It would seem to me that a normalized structure would actually produce less data - due to less redundancy. It also seems that the tables would not be as "deep" in some cases, and that they would be segregated, so retrieval happens over a smaller number of records to parse through - which seems to make for an efficient framework. I guess the flip side is the burden of having to dig through the relationships to retrieve data in disparate locales.

There is certainly a performance cost paid for each and every relation, which is why you'd typically denormalize for data mining purposes, where speed is extremely important.

Regarding the samples you posted, I have a couple of questions:

1. Your sample = calculated value of "1" for Organization.

Is this the "subtype discriminator" that I have read about?

I'm not sure. I actually don't recall reading about a subtype discriminator.

2. My subtype discriminator "names" are in their own tables. Could I, instead of the "calc = 1" in your sample, use the actual primary key of the subtype discriminator, and then use its related field to display the name value?

That's a different purpose. Let's say that from an entirely different table I want to create a value list of all of the customers in the database. That's easy, because you can just target the CUSTOMER table, right? But what if you wanted a value list of customers and donors? Now you need to target the PARTY table - but how are you going to filter that list down when many different subtypes are represented there? You see, FileMaker doesn't allow multiple paths to a destination, so all of the filtering criteria have to match a single target. In this case I need PARTY to know what its subtypes are and to let you match on one or more of them.
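
To make that concrete, here is a hedged sketch in generic SQL (the flag fields are invented for illustration; in FileMaker the WHERE clause becomes the match criteria of a single relationship rather than a query):

-- PARTY carries a flag (or type code) for each subtype/role it participates in,
-- so one pass over PARTY can return a mixed list of customers and donors.
-- 'name' assumes an identifying name stored in PARTY, as discussed later in the thread.
SELECT party_id, name
FROM   party
WHERE  is_customer = 1
   OR  is_donor    = 1;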

"I am coming to the belief that almost ANYTHING that covers an enterprise will eventually creep up and become a need in the solution - depending upon the growth of an organization and consequently their needs."

A young couple just married might need a 4 bedroom house with a pool and sauna in the future, but their needs for the next few years (and all they can afford) might be adequately met with a single bedroom apartment.

"And this is the crux of it: if I build what I can see as a need today, it may very well be in need of a retrofit or complete rebuild when the next thing that is unforeseen becomes a need and the initial build does not accommodate it."

There is nothing wrong with a re-build or a retrofit in the future.

Look at it this way: something that is built well to meet the current needs might cost $10,000 to develop. Reasonably cheap. It'll last 5 years, at which time another development for $10,000 will do again: scrap the old database and build anew.

Compare with going right now with something that has ultimate flexibility... $30,000? $50,000? Easily, because of all the complexity and the planning for the future involved. But it'll still need to be re-written in 5 years, because the business will have changed and/or the software will have improved. Guaranteed.

I'm working on a project right now involving a database that has been around for 10 years and tracks travel requisitions (plane tickets and hotel bookings). About 3 years ago a project was started to make it web enabled, but it fell over because it all got too hard... but an analysis that I did last week has shown that the business process has changed so much that a PDF file is sufficient to meet the end users' needs! What the back-end people really need is a little database to process an Excel file they get sent from the travel agent, which contains all the information, so it can be e-mailed to the cost centre administrators for confirmation.

Build to the current business processes, but do a risk analysis to determine how likely the process is to change, and what the impact of that change is likely to be. Then let the client decide what to do: accept the risk; mitigate it; or avoid it.

Can you tell I'm getting into project management?

Well said. I would add a third case to your two examples, and that is the project that's estimated at $50,000 and, after spending $75,000, ends up being abandoned because of over-specification.

There's also the time factor to be considered: often it's better to have something modest that fulfills the basic needs now than another 6 months of using makeshift spreadsheets while waiting for the ultimate thing.

This is especially true for in-house solutions where the developer can constantly kaizen the solution, or even rebuild it.

David, Vaughn, and Comment - I appreciate the input.

Everyone's point(s) are well taken - truly. I am not ignoring what anyone has mentioned. In fact, it is the contrast of the different perspectives that is a tremendous help.

I will be working on this tonight and over the weekend.

I made a quick diagram that shows the model I am trying to interpret and what I built in FileMaker. In the diagram, the top portion is a replication of a data model in the book I have been referring to in this thread. I have struggled with interpreting the book's models, which makes trying to build anything into a physical FileMaker set of tables and relationships very confusing.

The bottom portion of the posted diagram shows what I built and seems to be functioning - basically. The book presents the Party Classification as a Supertype, and the rounded rectangles inside of it as Subtypes of the Supertype.

However - in my struggles of trying to build something - I ended up with what you see in the bottom portion - but it is not in a supertype / subtype structure (I think).

Can anyone help clarify ANY of this for me? I believe that if I can get over some of the interpretation issues, then I can focus more on the difficulties I have in implementing the "FileMaker" connections, such as filtering and conditional value lists to retrieve the proper "type" by first selecting the "category", then "subcategory", then "type", while in a "Party Classification" portal on the "Person" or "Organization" layout.

Note how I "invented" the two tables - "Category" and "Subcategory" to provide a place to store those values, but they are outside of the Supertype.

I have no experience on the different methods one may choose from in how to implement in FileMaker. So any other avenues are educational as well.

Thanks!

party.jpg

Can I get anyone's ideas or input on this?

I have attached some slices of screenshots to show some of the values - if that helps at all. I am trying to figure out precisely what the "ways" are in which the model could be built. Obviously, what I am posting now falls outside of a super/sub build.

I like to help and I like discussing theory, but I prefer not to spend much time on theoretical solutions.

Advising on a database that's been designed to model an ERD from a book doesn't hold much attraction.

But, I've built a couple of these classification systems before and when you start getting into multiple levels of classes, a recursive data structure becomes useful. But that's a whole 'nother new concept.
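
For anyone unfamiliar with the idea, a recursive classification structure is usually shaped roughly like this (a generic SQL sketch with invented names, not a prescription for this particular solution):

-- One table holds every classification node; parent_id points back at the same table,
-- so "category", "subcategory", and "type" are just depths in a single hierarchy.
CREATE TABLE classification (
    classification_id INT PRIMARY KEY,
    parent_id         INT,            -- empty/NULL for a top-level category
    name              VARCHAR(100)
);

-- A join table then tags any party with any node, regardless of its depth.
CREATE TABLE party_classification (
    party_id          INT,
    classification_id INT,
    PRIMARY KEY (party_id, classification_id)
);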

Hi DJ, thanks for the response. Actually I am trying to build this for a solution. I am not debating the theory of it, rather I am sold on the concept and am struggling in how to build it.

Anyone else have some insight?

  • 2 weeks later...

Wow! Look at the party I was missing. I suppose better late than never.

Sorry for my confusing previous post:

Here is my observation about no shared attributes:

PERSON

First Name

Middle Name

Last Name

ORGANIZATION

Name

You must be from the US, right? Because "First Middle Last" is a very US-centric view of names that will lead to this confusion. If, on the other hand, you took a more international view of names you would find that construct lacking. Take my friend "Manuel Luiz Rodrigue Jesus Ponce de Vega." Or my other friend "Viet Tran."

The descriptor both of my friends have in common is their "Name." Which part of the name is "first" or "last" or "given" or "family" or "christian" is a cultural construct. As such, I would most certainly store "Name" as an attribute of the Party table which addresses the "WHO" part of "WHO is the party we are referring to."

I have no idea - nor a leg to stand on - whether this observation is true or false. Are these shared attributes? If so, do we agree the fields as presented above are physically separate?

I hope my answer is clear in so far as "Name" is stored in the PARTY table, and contains the full name that identifies the party in question. Attributes that are not general to the party belong in the subtype tables.

If my assumptions are correct, does one concatenate the PERSON FirstName, MiddleName, and LastName, and then for ORGANIZATION use "Name", as the values for the Party Name field? Seems obviously simple - but the fact that I cannot store a calculated field from a related table in another table is problematic, and may be one reason why this model is not used more in FileMaker.

The reason the model isn't used more is unfamiliarity with it, and those who do know about it don't necessarily understand it. But that's not just FileMaker - the same is true of just about every other RDBMS out there.

Which part of the name is "first" or "last" or "given" or "family" or "christian" is a cultural construct.

That is true - but it doesn't make the distinction any less useful. Because my solution also has a cultural context. The purpose of this distinction is to tell me which part of the name I should use in a formal letter, which part in an informal one, and what goes on the envelope. My letters are by necessity written in my own cultural context, and I don't see why I should sacrifice this for the sake of some idealized political correctness.

I don't think the issue corn raises is about political correctness. It's about properly structuring a database for a user's needs.

If your needs include letter etiquette then the "traditional" First/Middle/Last structure will fail in some cases.

Cher and Kim Jong-Il are two examples that come to mind.

I don't have a solution, but I know there's a problem.

That is true - but it doesn't make the distinction any less useful. Because my solution also has a cultural context. The purpose of this distinction is to tell me which part of the name I should use in a formal letter, which part in an informal one, and what goes on the envelope. My letters are by necessity written in my own cultural context, and I don't see why I should sacrifice this for the sake of some idealized political correctness.

This has nothing to do with political correctness; rather, it's about why I believe most people fail to see the distinction between a "Name" - which all Parties have - and the other attributes specific to the subtype. If you think only of First Middle Last, then it's a simple mistake to think of "John Sydney McCain" as data derived from First = "John", Middle = "Sydney", and Last = "McCain". It's only when you're confronted with names that don't fit the model that you start to consider that the full name perhaps isn't always derived from three constituent attributes. Having dealt with all manner of names, I have come to realize that the commonality between them is most likely "Family Name" and "Given Name(s)".

Moreover, the names you mention are different attributes of the contact. The "salutation name" and the "informal" or "preferred" name need not be composed entirely of the legal name components. A constituent in a fundraising system might be named "John Patrick Deval" but prefer to be addressed as "JP" on all correspondence. And you can bet that if "JP" is a big donor, "JP" will be addressed how "JP" wants to be addressed, despite the derivation rules you set up in your system. Many people set up a system of calculated fields with "override" fields that allow one to change the default derivation behavior, but this is simply an implementation variant of the true rule: other "name" attributes for people are usually, but not always, related to the other name attributes in the system.

The point being, "Name" is not derived in the Party model and is therefore suitable to be physically implemented in the PARTY table as an attribute common to all Party types and subtypes.

I have no issue with the name field (or fields) being in the supertype table. In fact, my own little address book (made in version 4, before I knew anything about relational databases) already has a single LastName/CompanyName field.

However, had I decided that FullName must NOT be calculated from the other name components, I would have to enter every single name component at least twice (with no doubt some typing errors creeping in during the process). That seems to be a high price to pay to accommodate a few exceptions.

However, had I decided that FullName must NOT be calculated from the other name components, I would have to enter every single name component at least twice (with no doubt some typing errors creeping in during the process). That seems to be a high price to pay to accommodate a few exceptions.

I didn't say that you shouldn't have some mechanism by which to facilitate data entry, but that is a tangential issue compared to the modeling of the data.

I go back to the idea of a "PreferredName." In the data model it is a separate attribute from "FirstName" or "GivenName" or what have you. At the same time the assumption is that unless specifically stated otherwise we can assume the "FirstName" is also the "PreferredName" and can use data entry techniques (auto-enter calculation, scripted process, script trigger, etc.) to facilitate this.

It's a similar concept to that of web checkout forms that allow you to say your shipping address is also your credit card billing address. They present a simple mechanism to help you avoid double-entry of data but that's not necessarily indicative of how the data is stored on the back end.

FMP makes it difficult to separate the collection and presentation of data from the underlying storage of data. At the same time, however, I don't believe that the additional effort is significantly more than the effort required in other systems. On the contrary, I think what FMP excels at is making it incredibly easy to create both simple and complex database applications with its tight data/interface binding. Unfortunately, in the case of the latter, that often comes at the expense of systems being poorly modeled and therefore susceptible to problems down the road.

I am not quite sure what you are suggesting, in practical terms. Certainly, the labels "FirstName", "GivenName", etc. are of no importance - what's important is that the correct "name element" is readily available when and where needed.

To take the simpler example: what exactly is the advantage of storing duplicate shipping and billing addresses as a default, compared to using an override rule for the billing address? It seems to me more an issue of workflow and user interface than data modeling.

Thanks for the further contributions Corn, Comment, and DJ.

I am actively working on this "type" of structure in a FileMaker solution - so this is all very helpful.

Hi Corn, yes I am from the US and your point is well taken about names. Could this be solved by modeling names as:

TABLE: person_name_types

ASSOCIATIVE TABLE: person_names

TABLE: person

John (first name type)

Robert (middle name type)

Allen (middle name type)

Doe (last name type)

III (suffix name type)

But once again these items would have to be concatenated to have meaningful use when selecting a "party". There is obviously a need to have both: single values where appropriate and combined values where deemed appropriate.
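
To picture that, the name-type idea above could be read as tables along these lines (invented names, purely an illustration rather than the book's layout):

CREATE TABLE person_name_type (
    name_type_id INT PRIMARY KEY,
    description  VARCHAR(30)          -- 'first', 'middle', 'last', 'suffix', ...
);

CREATE TABLE person_name (
    person_id    INT,                 -- which person the fragment belongs to
    name_type_id INT,                 -- which kind of fragment it is
    sequence_num INT,                 -- ordering, e.g. for multiple middle names
    name_value   VARCHAR(100),        -- 'John', 'Robert', 'Allen', 'Doe', 'III'
    PRIMARY KEY (person_id, name_type_id, sequence_num)
);

Anything displayed to the user would still have to be assembled from these rows, which is where the concatenation burden comes back in.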

I really wrestled with "where" to put the "name" fields... in the super or the subtype. As a matter of fact, I went back and forth a couple of times. Currently, the name fields reside in each subtype table. I believe I moved them back into their respective subtype tables because of this very issue of names for people being broken out and identified by their individual components.

The problem this has left me is "unstored calc" fields for names and I strongly suspect this is not a "good" thing.

So how does one go about creating a name field in the supertype that can also serve the other needs of just providing the first name, or the last name, etc.? I am not clear on what is being suggested.

It seems to me that the whole "names" thing is getting a bit esoteric. And back-to-front. Determine the data that is needed, then store it appropriately.

If there is a need to store the person's "full name" to print on an envelope, then store it: "Professor Jon Robert Allen Dow III".

If there is a need to store a salutation for a letter, then store it: "Prof Dow" or "Robert" for formal and informal.

Yes, this loads up the data entry hugely, and I have never done it this way and have no plans to, because I've never encountered the need. This academic discussion about name storage is getting a bit frustrating, because it appears to be a solution looking for a problem.

Sorry for the rant. :)

If you're truly sorry for ranting then don't rant and then you won't have to be sorry. :)

The names issue is precisely the problem here: of what use is the Party supertype if it contains no attributes? More specific to FMP, how do you display anything meaningful in a portal of Party records if all of the attributes are located in the subtype tables and the portal contains a mix of subtypes?

The obvious answer is to store an attribute - you can call it "Foo" but I'm going to call it "Name" - in the Party supertype table. This name will identify a particular party, whether it be a company, a person, or perhaps an aggregation of people.

Here's where the "cultural construct" comes into play. Because of a Western bias we tend to think of names as "first middle last" and that there is this thing called a "full name" that can be constructed from "first" + "middle" + "last". It's a lie I tell you. Let's take a real person, 'Dr. Phil'. His full legal name, should I need to store it, is 'Phillip Calvin McGraw'. Now there is some question here - do we store that as 'Dr. Phillip Calvin McGraw, PhD' or do we keep the honorific and titular attributes separate? Meanwhile he is more commonly known as 'Dr. Phil' in most situations - should we just store that as the "full name?" All these names and what to do...

But that's exactly my point. If you only have a view of names as being "first middle last" then storage of the so-called "full name" becomes a problem because you tend to see it as a derivation of the separate attributes. Once you see that "first middle last" is just one of many different naming conventions and that they are insufficient for composing many of the names you will need to actually use in your system you will find, I believe, that storing a "Name" attribute in the PARTY table is the least of your worries.

Okay, I've been reading along and see your points (and I am speaking to all of you). But if you split what you know should be split and store how you know it should be stored, it is far easier to grab what you need when you need it than to later attempt to pick it apart. As I was told once, "why put it together only to pull it apart later?"

I suggest providing every possible piece in a separate field. If a User isn't sure how portions should be placed then they can 1) ask the customer while speaking with them (WHAT A CONCEPT) or 2) guess. Even a guess is better than dumping everything into one field simply because it seems like too much work to do it right the first time.

Well, that's my opinion anyway ... :)

Corn & LaRetta

I think we're all on the same page. :)

Now that we have names sorted out, shall we move the discussion on to handling addresses as well? :hair:

My ONE post in this 4-page thread means I'm over-talking a subject? I thought expressing our opinion on databases is why we are here! :wink2:

I personally like the "provide all possibilities" approach. If I DO NOT provide a field label that prompts a data entry person to enter the "last name", then the field will end up with some first names, middle names, nicknames, and whatever other name is known at time of entry. To a large degree, these broken-out fields for names and their respective labels do help promote some data consistency on data entry.

I know the value of the above observation is not what is being argued. Furthermore, the "Name" as one complete attribute makes sense, and the "first" name is really only a fragment of the whole. However, we do refer and infer in fragments. By separating them to begin with, I believe we have a much higher probability of successfully reconstructing the name as we see fit for a particular application - versus breaking a whole name down into usable fragments. For example: the formal name when we are sending legal notice; the informal one when we are inviting the person to our company weekend BBQ.

But in the context of supertype and subtype, I think the primary issue is "where" to store the names. Corn, you made it clear that they should reside at the supertype level. That makes sense and follows all that I have read about common attributes in the supertype, unique attributes in the subtypes. (For the life of me I cannot recall why I went back and forth and finally settled on placing them in their subtypes.)

I have posted a slice of a screenshot that shows my field definitions in my "party" supertype table. I have still left the "name" fields in their respective subtype tables, and you will notice that I have unstored calcs for the names in the parties table (these values are coming from related tables). I suspect that the unstored calc for the "names" is bad. Can anyone confirm this?

If this is bad, then I have to move these name fields AGAIN back into the supertype - right?

If this is the case, is this an example of FileMaker being difficult to work with or weak in trying to implement a normalized schema?

Picture-7.jpg

I think what you're getting hung up on is the idea that either the Full Name should be derived from its constituent parts (e.g. First Name, Middle Name, LastName) or that the individual name fields should be derived from the Full Name.

What I am advocating is that the two are stored separately - "Full Name" in PARTY, and FirstName, MiddleName, and LastName in PERSON. Likewise, in ORGANIZATION you might have attributes specific to organizations (like its legal name), but its "Full Name" is also stored in PARTY. I hesitate to use the term "Full Name" here since it is a fuzzy term (in my models I call the attribute simply "Name"), but I think you know what I mean.

That is NOT to say that your data entry forms and methods should force the user to enter data twice. On the contrary, you should make data entry as convenient as possible while encouraging quality data. So if you want to capture FML as separate fields by all means do so, and automatically calculate the "Full" name to be stored in the PARTY table as a convenience. But... you should provide a mechanism by which they can change the Name in the PARTY table to something meaningful. So if they have two "Joe Wurzelbacher" records they can modify the name to better distinguish them. One might be 'Samuel "Joe the Plumber" Wurzelbacher' while the other is 'Joe Wurzelbacher Sr.'.

Which is also to say, you should have fields specific to the purpose for which they're needed. For example, if you need "addressee" or "formal salutation" fields you should create and label them as such. When someone asks for "First Middle Initial Last" I ask them how they intend to use it. More often than not the intent is as an addressee field so that's what I give them. I have other fields that are used literally for filing - for example a "File As" field that might contain 'WURZELBACHER, Samuel Joseph (Joe)'.
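
To picture the "stored separately" point using the tables sketched earlier in the thread (illustrative values and field names only, not a prescription):

-- The identifying name in PARTY is ordinary stored data: seeded from the parts
-- as a convenience, then editable on its own afterwards.
ALTER TABLE party ADD name VARCHAR(150);

INSERT INTO party  (party_id, party_type, name)
VALUES (42, 'PERSON', 'Samuel "Joe the Plumber" Wurzelbacher');

INSERT INTO person (party_id, first_name, middle_name, last_name)
VALUES (42, 'Samuel', 'Joseph', 'Wurzelbacher');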

When storing the PARTY name fields, instead of using unstored calc fields, I use text fields with an auto-enter. In the PARTY table I also have a "name trigger" field that gets tickled to trigger the various auto-updates and set that field by script when editing PERSON or COMPANY fields.

Is this a sensible thing? I do it so I can view, find, and sort on a stored field rather than unstored.

When storing the PARTY name fields, instead of using unstored calc fields, I use text fields with an auto-enter. In the PARTY table I also have a "name trigger" field that gets tickled to trigger the various auto-updates and set that field by script when editing PERSON or COMPANY fields.

Is this a sensible thing? I do it so I can view, find, and sort on a stored field rather than unstored.

Sorts don't use the index although Finds do. Since I am explicitly saying the "Name" field in PARTY is an "identity" field that means I would have it stored and indexed.

The operative concept here is that this Name field can be modified independently of the various name fields stored in the subtype. If this is not desirable in your situation then an auto-enter calculation can work for you. You might consider using a script triggering plug-in like ZippScript to force a re-calc of the PARTY::Name field when any of the other name fields in PERSON or ORGANIZATION are changed. Alternatively you can use a SQL plug-in to modify the PARTY::Name field in the same way (this is pretty cool, no scripts needed to update related data but if the record is locked, the update doesn't happen).
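
For example, the SQL plug-in route amounts to running something like the following whenever a PERSON name field changes (hypothetical field names; the exact syntax accepted depends on the plug-in and its SQL dialect):

-- Recompose PARTY name for the one record just edited; if that record is locked
-- by another user, the statement simply fails and the name is not updated.
UPDATE party
SET    name = (SELECT TRIM(COALESCE(p.first_name, '') || ' ' || COALESCE(p.last_name, ''))
               FROM   person p
               WHERE  p.party_id = party.party_id)
WHERE  party.party_id = 42;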

  • 2 months later...

The operative concept here is that this Name field can be modified independently of the various name fields stored in the subtype. If this is not desirable in your situation then an auto-enter calculation can work for you. You might consider using a script triggering plug-in like ZippScript to force a re-calc of the PARTY::Name field when any of the other name fields in PERSON or ORGANIZATION are changed. Alternatively you can use a SQL plug-in to modify the PARTY::Name field in the same way (this is pretty cool, no scripts needed to update related data but if the record is locked, the update doesn't happen).

This thread isn't quite long enough, so I thought I'd try to resurrect it! ;-)

I always forced the Name field on my users and never left it up to them to modify, mostly because I didn't like the options for triggering the update on the Name field. Without automatically updating the field, I figured I'd get a lot of garbage entered, or nothing at all (and don't get me started on FileMaker validation). This has changed now in FM 10, because we now have a reliable approach to updating data, even if it exists in related tables.

For instance, you could have an indicator field, Name_Modified_Ind, which is set when you've manually entered a value in the name field and cleared if you erase the current value. Then, using OnObjectModify event traps[1], you could update the Name field from any or all of the name fragment fields in a live fashion (i.e., as you type), but only if Name_Modified_Ind is empty. Providing this type of realtime feedback to the user is immensely valuable, because they no longer have to wait for the record commit or the field exit to know what's going to happen.

_____

[1] I used the same type of technique in a demo file that provides Spotlight-style search filtering.

This has changed now in FM 10, because we now have a reliable approach to updating data, even if it exists in related tables.

I'll take exception to that. IMHO, nothing has changed. A script is still a script. A transaction that depends on data entry AND a script to be complete is still less reliable than a transaction that depends on data entry alone.

In the context of my post, FM 10 script events are more reliable than a plug-in because: you have fewer failure points, it's integrated with FMP at a lower level than a plug-in, and it doesn't require additional software to be installed.

I fail to see how a user performing data entry alone could be any more reliable than a scripted transaction which controls the entry of data. In my world the opposite is true (i.e., users left to their own devices will rapidly fill a database with garbage). Color me puzzled.

In the context of my post, FM 10 script events are more reliable than a plug-in because: you have fewer failure points, it's integrated with FMP at a lower level than a plug-in, and it doesn't require additional software to be installed.

Well, considering that they are "object" triggers they may in fact be a bit less reliable since they are tied to your interface, not to your data. In FMP 10 we still have the problem of the "rogue" interface file that bypasses any script triggering mechanisms.

The disadvantages of the plug-in methods are that they require the plug-in to be installed at the workstation and, in the case of script triggering plug-ins, they run "post-event." The first problem can be addressed rather easily, the second not as well.

I don't know if native script events are more reliable than a plug-in, but that's hardly the issue. From the point of view of data integrity, a script is a script - and it doesn't matter if it's triggered by an event or by a user clicking a button. If the transaction isn't complete until the script has run, then there's a vulnerability.

For example, one might be tempted to accompany user entry of a sale by a triggered script to update the inventory - "because we now have a reliable approach to updating data, even if it exists in related tables." My exception is directed at the dangers of such illusion.

Well, considering that they are "object" triggers they may in fact be a bit less reliable since they are tied to your interface, not to your data. In FMP 10 we still have the problem of the "rogue" interface file that bypasses any script triggering mechanisms.

... or even alternate layouts which a developer has overlooked and not added triggering to. I guess what I was trying to understand is why -- other than having an extra potential point of failure -- a scripted operation would be inherently less reliable than just data entry. It sounds as if Comment glossed over the potential issues with users entering data when he took issue with scripted operations. I still submit that a procedure controlled via scripting -- even triggers based on object events -- is more reliable for the situation that I presented than data entry alone.

All that said, I would never trust a truly important transaction (e.g., posting to inventory or a general ledger) to a scripted operation initiated by a triggered event. And I absolutely would require such an important transaction to be performed via script and not data entry.

I think we both agree that a developer should use proper discretion for routines which are triggered by FM 10's new event model. I still don't understand how -- for transactional processes like updating inventory -- data entry could even approach the reliability of a scripted operation initiated by a click of a button.
