FMForums.com


single table structure- why not?

Featured Replies

Could someone remind me again why we use more than one table? The more I think about it, the more it seems purely cosmetic. I am considering implementing an upcoming DB with all records in a single FileMaker table, and doing all the normalizing through a tTable field that would be used in relationships. Anyone care to talk me down from that ledge?

-Raz

I can only envy people who are able to see recursive data structures in every scenario, but I would say, from a maintainability point of view: say a year and a half later a client calls in and asks for a tiny change because you haven't nailed their business logic more than 99% (honestly, this occurs to pure geniuses as well!!) ...which is fastest to maintain? The principle usually applied here is called Occam's razor: http://www.answers.com/occam%27s+razor&r=67

But chances are that we're not really talking about tables and table occurrences, but instead about whether to put the tables in a single file or in several, which is an entirely different debate, and as usual it is a matter of discourse ...one highly dependent on the task at hand, or rather the hardware it's deployed on. There are no definitive answers here, but it gives some perspective to listen to this podcast:

http://podcast.Adatasol.com/media/Filemaker_PodCast_07112005.mp3

Because even though they're selling a tool to ease migration, Danny isn't entirely devoted or convinced by the single-file approach!

Even though your line of questioning is pretty much like:

True wisdom knows

it must comprise

some nonsense

as a compromise,

lest fools should fail

to find it wise.

Which is one of the most concise ways of expressing the gist of http://en.wikipedia.org/wiki/The_Emperor's_New_Clothes ...it is indeed a challenging question to deal with if you really are into recursive structures and see them right away ...but I admittedly belong to the mere mortal kind here!

There is another aspect to this as well, discussed eloquently in this video:

http://previews.filemakermagazine.com/videos/513/DataTagging_full.mov

So before we get deeper into the topic, I would at least be pleased to know which of the 3 interpretations of your question fits your reasoning best, without dismissing you as someone with a clearer view than us lemmings.

--sd

Edited by Guest

  • Author

Hi Soren,

To be honest, I am completely at a loss as to what your three interpretations of my question are, but I gather that my post could use some clarification. I am planning a 2-file separation model approach (Data and UI/BL, with probably a third file for reports and possibly a fourth plugging in Bob's snazzy audit trail module), but while dissecting the audit trail and going over many, many posts, I realized some fundamental errors in my thinking and am currently re-evaluating all of my previously held FM beliefs.

I am working up a template that is a near-pure application of the separation model and allows addition, modification, and control of any data field from the logic/UI file, but I got stuck on how to modify or create new tables from the UI, found that I was having to duplicate much work across tables, and would be locked into my normalization structure once I defined it in Define Fields. Then I started thinking: why do I need them anyway? Especially with the limitations FM imposes between them and the complexity of running reports across them. I looked up the FM definition of a table and it said:

A collection of data pertaining to a subject

And then I was thinking, why would you need a relational database unless all data in it pertained to each other in some way?

This may sound like a novice question, but I have been using FM for years and am quite familiar with the established practices of using it. I am waxing theoretical here because I am challenging all of my assumptions after realizing that relatively few people on this forum have a deep understanding of FM7+ (I fear I am one of them, and am trying to fix that) and are forging on ahead with mentalities imposed by earlier versions. I am beginning to suspect that tables are one of these artifacts, but am not confident yet.

While Lisa Simpson taught me all about Occam's Razor in the early nineties, she did not relay how to interpret that principle concerning the question of tables. In regards to

say a year and a half later a client calls in and asks for a tiny change because you haven't nailed their business logic more than 99% ...which is fastest to maintain?

This is precisely the scenario that is leading me towards a single table solution in the data file. I am thinking that not having to import records across tables when adjusting normalization would be a significant improvement.

Aside from having to adjust to paying attention to only table occurrences as opposed to tables and TO's, I am having a hard time coming up with any downsides.

I need some devil's advocates before I commit much more time to this though...

-Raz

Separation issues aside, could you explain how you would implement, say Customers - Invoices - LineItems - Products in a single table?

  • Author

Fields:

xID: (unique failsafe)

tContext: text field would hold "Customer", "Invoice", "LineItems", or "Products"

tSerial: serial num based on next higher value by tContext

tContext defines which 'table' the record would be a part of. Relationships would use serial and context together to filter. Everything else would be SOP. I haven't decided yet if I will keep empty fields in each record that don't pertain to the context, or have each context use the same fields to store different types of info (say, Invoice::Num1 would be invoice number where Item::Num1 would be item quantity). I seem to remember that empty text fields do not take up much space at all, but I will need to woodshed on that a bit.
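As a rough sketch of the idea (in Python rather than FileMaker, purely to illustrate; the record shape and helper names are invented):

```python
# Illustrative model of the single-table scheme: one list of records,
# with tContext naming the virtual "table" and tSerial a per-context serial.
records = []

def next_serial(context):
    """Serial number based on the next higher value by tContext."""
    serials = [r["tSerial"] for r in records if r["tContext"] == context]
    return max(serials, default=0) + 1

def create(context, **fields):
    rec = {"xID": len(records) + 1,        # unique failsafe
           "tContext": context,
           "tSerial": next_serial(context),
           **fields}
    records.append(rec)
    return rec

def related(context, key_field, value):
    """A 'relationship' filters on the key field and context together."""
    return [r for r in records
            if r["tContext"] == context and r.get(key_field) == value]

create("Customer", Name="Acme")   # Customer gets tSerial 1
create("Invoice", CustomerID=1)   # Invoice gets its own tSerial 1
create("Invoice", CustomerID=1)
print(len(related("Invoice", "CustomerID", 1)))  # 2
```

Note how each context keeps its own serial sequence, which is what lets serial-plus-context stand in for a per-table primary key.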

Is that enough detail to start?

Q: Could someone remind me again why we use more than one table?

A: for data that only needs to be entered / edited in one place, and not be repeated in each record

yes, very often, a single table "flat file" is enough (and that's all we had, in FileMaker 2)

I would suggest you try it both ways, and see which is easier / cleaner / more efficient for your particular solution
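A minimal illustration of that point (Python, with invented sample data): repeated data must be edited everywhere in a flat layout, but only once when it is factored into its own table.

```python
# Flat, one-table style: the address is repeated on every invoice,
# so correcting it means touching every record.
flat = [
    {"invoice": 1, "customer": "Acme", "address": "1 Main St"},
    {"invoice": 2, "customer": "Acme", "address": "1 Main St"},
]

# Normalized: the address is entered/edited in one place only.
customers = {"Acme": {"address": "1 Main St"}}
invoices = [{"invoice": 1, "customer": "Acme"},
            {"invoice": 2, "customer": "Acme"}]

customers["Acme"]["address"] = "2 Oak Ave"   # one edit...
# ...and every invoice sees the change through the relationship:
print([customers[i["customer"]]["address"] for i in invoices])
# ['2 Oak Ave', '2 Oak Ave']
```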

What's the advantage of essentially having all the entities in one table? It would seem to me this would greatly decrease the readability of the solution.

  • Author

Ah, Speak of the devil... I do appreciate the feedback.

A: for data that only needs to be entered / edited in one place, and not be repeated in each record

In this case, the only duplication of data happens in the tContext field, for each record that would formerly have been in a separate table. Yes, it is still a duplication of data, but I suspect that FM dupes this parameter on its own when it is defining/referencing tables, but does not allow a developer to modify it outside of Define Fields. And is the suggestion I propose really any more of a flat file than using FM tables? In the spirit of reconsidering old assumptions I did some review on what a flat file is, but I am not clear whether FM qualifies as one in this case - the distinction seems to be how/if you use relationships, regardless of FM table structure. If I put all records in one table and used no relationships, I would agree with you about being flat, just as if I used a bunch of different tables with no relationships. But I can't see any limitations, and only added functionality, from FM's point of view.

Ender:

What's the advantage of essentially having all the entities in one table? It would seem to me this would greatly decrease the readability of the solution.

I thought so as well, and encountered that initially, but am finding that a lot of this is due to my expectations and prior routine. Including initial dummy records, with each field's function-by-context as their data, lets you view context-specific field names in Layout mode using the View > Show Sample Data feature, which greatly improves readability, as does having clear and informative TO names.

As far as what you gain;

Primarily, for me it began as a conceptual liberation from being locked into pre-defined tables. You could add or adjust tables whenever you like from the logic file. Also, creating and viewing reports involving distantly related data seems much easier with this method. The relationship graph is much cleaner (for example, you don't have to make sure you plug into all contexts in a 'global preferences' type relationship; a single one-to-all relationship to your data file would accomplish it). Also, I am thinking this might help eliminate the persistent problem of updating stored calculation results that reference related fields, but that thought is far from developed at the moment.

Let me turn the question around though, What is the advantage of not being able to dynamically define your table structure?

-Raz

Especially with the limitations FM imposes between them and the complexity of running reports across them.

If that's the real issue at stake here, is it just another way of putting a previous thread of yours:

http://www.fmforums.com/forum/showtopic.php?tid/174852/post/198817/hl//#198817

That makes you believe that you actually need a flatter structure?? So there is really nothing utterly clever in the direction of recursive structures that got muffled in the debates during the transition to FM7 and 8?? Nor does it seem like you're following the reasoning behind Petrowski's video.

I am thinking that not having to import records across tables when adjusting normalization would be a significant improvement

It's a feature in Valentina that for some time I really couldn't get my head around - why would anyone need it anyway? But later the very same idea occurred to me for a template I made dealing with importing denormalized data and then organizing it accordingly: http://www.filemakerpros.com/Splitting.sit

But I suggest that, while you're at it, you should think of alternative ways of summarizing beyond the usual summary reports - investigate this:

http://fmforums.com/forum/showpost.php?post/152237/

...although I haven't made up my mind whether it really is impossible to make GetSummary() work over relations. It's too late now for experimentation ...perhaps I'll find time tomorrow??

--sd

  • Author

If that's the real issue at stake here, is it just another way of putting a previous thread of yours:

No, that issue was just that FM neglected to mention that GetSummary couldn't break by a related field. There may be a way to use GetSummary involving related fields, but that will not change the fact that it does not seem to be able to break by a related field.

I also have definitely found alternative ways to summarize reports. I am not looking for workarounds in the present investigation, I am questioning my design foundations that required these workarounds.

*added a note:

The real issue for me started as an attempt at a functional SepModel that would allow Unlimited modification and expansion of functionality and design purely through the UI/Logic file.

**added another:

Also, I am not so convinced that my proposition involves flattening the DB at all. I would still use the same number of relationships as with a multi-table approach. Again, isn't the distinction between flat and relational files in the relationship graph, and not in the Define Tables section?

Edited by Guest
more thoughts

I hope I have understood your idea.

No it is not. Tables define objects that belong to the same class, and relations, well, the relations between them.

Of course you can use a single table for everything and an additional field that tells what object class you have in a record, but:

- this solution will project an n-dimensional space (or tensor) down to a one-dimensional space (a vector), and the projection will always take only two dimensions (a matrix) at once. You will not be able to reconstruct the n-dimensional tensor from this again, except at the price of redundancy (as gdurniak said, in other words, as well)

- finding and sorting will be more complicated, because one always needs to specify the class as well. It will work with at most two different classes.

In a normalized solution, you can specify the tables (the object classes), and the DBMS will do the rest for you (indexing, sorting).

- self-joins will eventually decrease performance

- objects and their keys must be duplicated as soon as you have to relate one object to two other objects of the same class; effect on sorting?

If solutions like yours were always feasible, we would still have our data in spreadsheets such as Excel.

Edited by Guest
Additional point on finding and sorting

  • Author

Thanks Martin! This is exactly what I need.

Considering tensor to vector, I believe I get what you are saying, but isn't this true for relational DBs using different tables as well? Aren't tables just FM's way of automating this redundancy and hiding it from view? Isn't the n-dimensional space purely a mathematical schema that needs to be reconstructed from the relationship graph regardless of tables? Isn't having 3 different tables without relationships like having three different one-dimensional spaces, not a multidimensional space? Three different flat files, not a relational space at all? Ah, so many questions!

Yes, I agree that finding and sorting would involve one extra parameter on the developer side, but isn’t this parameter already used in some fashion when FM decides which table to do the find in, and wouldn’t it be nice to have access to it?

I don't see self-joins (in a one-to-one manner) as any more necessary in this implementation than in a multi-table solution, but I might be missing something here.

"- objects and their keys must be duplicated as soon as you have to relate one object to two other objects of the same class; effect on sorting?"

I am not sure I understand your point here, could you provide a brief scenario where this would be necessary in a single table approach, but not in a multi table?

For me, speed is the big unknown and concern. I do think that using filtered portal searches would be pretty fast, and allow you access to all distantly related records in a single portal... I need to get it mocked up and populated soon to test it out, but every time I sit down to work on it, I realize some other preconception that has crept into the design and muddied the view.

Raz--

Technically, you're presenting very interesting ideas. I agree with comment, though.

Any knowledge representation in software is an abstraction; none of us is instructing the computer to store the zeroes and ones in a particular data stream. So FileMaker (and RDBMSs in general) makes use of intellectual constructs called tables to make it easier for humans to map their concepts of the real world into data structures. Everything we do in an RDBMS can be accomplished using text files, if you're so inclined--but you have to do all the work yourself.

The system you're describing is elegant and extremely malleable, but my experience is that this doesn't reduce the amount of work that goes into a specific solution. My past experience always has been that I put all this effort into creating a generic, reusable application, only to find that it would have been easier if I had simply written a straightforward purpose-built one. Moreover, I think the real value in a RDBMS is in its specific structures--which you would still have to build within the single table realm you envision.

I think the reason that tables are used is that it makes sense to people in a fundamental way that pure abstraction does not. It's a tool that works.

David

  • Author

David,

I usually agree with comment as well (if not at first, then after chewing on my foot for a while...) but am not sure what his point was in this thread other than "less talk, more action."

My past experience always has been that I put all this effort into creating a generic, reusable application, only to find that it would have been easier if I had simply written a straightforward purpose-built one.

Isn't that the truth! I have been down this road before as well. Part of me is resigned to the fact that this is purely a theoretical exercise, but part of me honestly believes that this is completely possible to pull off with the new features of FM7 (especially the Let and Evaluate functions). Regardless, I have been learning a tremendous amount about things I didn't even know I didn't know.

I think the reason that tables are used is that it makes sense to people in a fundamental way that pure abstraction does not. It's a tool that works.

I completely agree. There are many tools in FileMaker that allow some sophisticated things to be done without much understanding of them. These tools can be quite valuable, but have (at least in my case) at times trapped me into a certain limited way of thinking. But I do think it is possible to maintain the conceptual aid and readability that tables offer without actually defining them in Define Fields, although I haven't quite worked it out yet.

OK, enough talk. Back to the action...

-Raz

A fascinating thread gentlemen.

Phil

Have moved my derived question to this thread:

http://www.fmforums.com/forum/showtopic.php?fid/36/tid/175344/pid/199461/post/last/#LAST

--sd

Edited by Guest

I am not sure I understand your point here, could you provide a brief scenario where this would be necessary in a single table approach, but not in a multi table?

If I'm understanding the question, it's basically the same question in a regular normalized multi-table system, for which the answer is "Why do we need Join tables, Alex?"

Stated another way: let's say you have a Students table and a Classes table. Since a student can take multiple classes, and a class can have multiple students, this is an M-N relation (many-to-many). It turns out the only real way to do this is with a third table, the so-called "Join" table. The join table in its minimal form is a two-field table that contains records matching a Student primary key with a Class primary key.

As an exercise, perhaps you could try to explain to us how you would model this type of relationship (many to many) with your single table system. I'm sure it can be done, but it doesn't sound easy...
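For reference, the conventional join-table structure described above can be sketched like this (a Python stand-in, not FileMaker; entity names are taken from the post, the sample data is invented):

```python
# Students and Classes, many-to-many via a minimal two-field join table:
# each join record pairs a Student primary key with a Class primary key.
students = {1: "Ann", 2: "Bob"}
classes = {10: "Math", 20: "History"}
student_classes = [(1, 10), (1, 20), (2, 10)]   # (StudentID, ClassID)

def classes_for(student_id):
    return [classes[c] for s, c in student_classes if s == student_id]

def students_in(class_id):
    return [students[s] for s, c in student_classes if c == class_id]

print(classes_for(1))   # ['Math', 'History']
print(students_in(10))  # ['Ann', 'Bob']
```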

  • Author

it's basically the same question in a regular normalized multi-table system

Yes, that is what I was thinking as well, which would mean that it does not have much impact on the model I am proposing.

Instead of a separate record in a join table with two fields, you simulate the same thing with a "StudentClasses" T.O. and a new record that has tContext="StudentClasses", then use two fields on that record that you would assign to StudentID and ClassID. Your relationship graph would look identical to a multi-table approach, as each record, regardless of tContext, has a UniqueID:

Students::UniqueID---->StudentClasses::StudentID

StudentClasses::ClassID----->Classes::UniqueID
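That mapping can be sketched as follows (Python, illustrative only; the record layout is my assumption):

```python
# The same many-to-many join, but inside one table: a record with
# tContext="StudentClasses" plays the role of the join record.
table = [
    {"UniqueID": 1, "tContext": "Students", "Name": "Ann"},
    {"UniqueID": 2, "tContext": "Classes", "Name": "Math"},
    {"UniqueID": 3, "tContext": "Classes", "Name": "History"},
    {"UniqueID": 4, "tContext": "StudentClasses", "StudentID": 1, "ClassID": 2},
    {"UniqueID": 5, "tContext": "StudentClasses", "StudentID": 1, "ClassID": 3},
]

def classes_for(student_id):
    # First filter the join records for this student...
    joins = [r for r in table
             if r["tContext"] == "StudentClasses" and r["StudentID"] == student_id]
    # ...then follow each ClassID back to the matching Classes record.
    return [r["Name"] for j in joins for r in table
            if r["tContext"] == "Classes" and r["UniqueID"] == j["ClassID"]]

print(classes_for(1))  # ['Math', 'History']
```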

-Raz

That might work, but how would you handle data typing & naming? I.e. it's really useful to have a field named "StudentID" that has field validations (numeric only, 6 digits, unique, must exist, not editable).

Would your single-table solution use fields in this way? Or would you just have generic, un-named fields, e.g. Field1, Field2, Field3, all of which would have to be text fields with no validation?

If the former, I think you'd quickly end up with a table with 500+ fields in it, which as a designer would quickly drive me insane, and may have performance implications (e.g. using a 500-field record to store a single number is not efficient).

If the latter, then I think you'd run into other problems, such as data type conversions (numeric<->text<->date), indexing and sorting issues (dates stored as text don't sort properly, key field matches may not work across data types, etc.)
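The sorting issue mentioned above is easy to demonstrate: dates kept in text fields sort character-by-character rather than chronologically (a quick Python illustration; the date format is assumed):

```python
from datetime import datetime

# Dates stored as text sort lexically, not chronologically:
text_dates = ["10/1/2005", "2/1/2006", "9/15/2005"]
print(sorted(text_dates))
# ['10/1/2005', '2/1/2006', '9/15/2005']  <- 2006 lands in the middle

# Converting to a real date type restores chronological order:
real_order = sorted(text_dates, key=lambda d: datetime.strptime(d, "%m/%d/%Y"))
print(real_order)
# ['9/15/2005', '10/1/2005', '2/1/2006']
```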

  • Author

Yes, thanks!

These are the readability issues I am working on at the moment.

I usually validate through script anyway (regardless of table structure). If you really must have field level validation, I suppose it could be done in the logic file.

At the moment I am tinkering with the generic approach:

UniqueID: auto enter Unique ID

Trigger: auto-enter, replace existing contents: Let( x = Text1 & Text2 & Text3 ... ; Trigger + 1 ) (this is a simplified version)

Text1

Text2

Text3...

Some generic auto entered replace fields:

Auto1:

Let( x1=Trigger;

Evaluate( "Let([vAuto= " & "\"Auto1\"" & "];" & GetField("View::xgLogic") & " )" )

)

Auto2:

Let( x1=Trigger;

Evaluate( "Let([vAuto= " & "\"Auto2\"" & "];" & GetField("View::xgLogic") & " )" )

)

Auto3...

xgLogic is in the Logic file, and is a global text field which has ALL of the auto-entered tContext-specific calcs conditionally parsed by a Case statement:

Case(

vAuto="Auto1" and tContext = "Students"; YourDesiredStudentAutoCalc1;

vAuto="Auto1" and tContext = "Classes"; YourDesiredClassesAutoCalc1;...

//last step keeps the data the same if no condition was set for it to change:

GetField(vAuto)

)
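The Case() dispatch above amounts to a lookup keyed on (auto field, context), with a pass-through default. A hypothetical Python analogue (the rules themselves are invented for illustration):

```python
# Central "xgLogic"-style dispatch: one rule table keyed by
# (auto field, tContext), falling back to the field's current value.
logic = {
    ("Auto1", "Students"): lambda rec: rec["Text1"].upper(),   # invented calc
    ("Auto1", "Classes"): lambda rec: rec["Text1"].title(),    # invented calc
}

def evaluate_auto(field, rec):
    rule = logic.get((field, rec["tContext"]))
    # Last step keeps the data the same if no condition was set:
    return rule(rec) if rule else rec.get(field, "")

print(evaluate_auto("Auto1", {"tContext": "Students", "Text1": "ann"}))
# ANN
```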

Sorting/indexing could be done through the logic file with GetAsDate(Text1), GetAsNumber(Text2), etc. auto-calcs.

I might also just make a point to have a few date1 num1 timestamp1 container1 fields in there as well, and store the appropriate data in those fields.

Readability-wise, using a template with each field assignment per tContext as a text label next to the fields, and then just copying and pasting these fields onto layouts (as opposed to dragging and selecting from a difficult-to-read generic table), helps. Also, the initial dummy record with field assignment names as the field data, combined with 'show sample data' in Layout mode, is working pretty well.

I am also working up a separate FieldAssignments file that tracks these assignments per tContext. I am thinking of incorporating this into a Substitute calc so that calcs and the like can be written out using intuitive field name assignments and then 'translated' into their generic names...

Much more work to go though!

-Raz

I would hate to be the developer who inherits a solution built like this. Way too much trouble.

Hmm. I'm thinking that this single-table solution is the worst of both worlds. FileMaker (bless its heart) absolutely sucks if you are a super power user and want to do true object-oriented design (e.g. to have objects in a 4th-generation language automatically bound to database fields), and it also sucks if you want to do weird-dimensional object hierarchies (such as linked lists, recursive lists, etc.). And it sucks at separating data from design, has no inheritance, etc. FileMaker is awesome at RAD prototyping, cross-platform design, simple, maintainable scripting, etc., which is probably why most of us are here on this forum.

So I think your single-table design is keeping the bad and throwing out the good.

I'd suggest you consider Python, DHTML and a SQL server of some sort if you really want to do "weird" stuff like this }:(

  • Author

Yes, that is the best argument I can think of against it. It is not for the faint of heart, and is not a standard method of FM DB design (which is no small drawback). Also, it is seeming more complex than it is because I am tying it in with a separation model approach, which is not standardized as well.

However, the use of script parameters was not standardized either when it was first introduced, and would have been a headache to figure out if you were not aware of it - not that I am saying the two are the same degree off the previous model, more that just because a standard doesn't exist yet doesn't mean a particular method should be abandoned (triple negative?), just that standards need to be developed.

I do have little doubt that we should be heading this direction - The power it offers and easy reusability of foundations are very compelling to me. The simple ability to view a field name as a calculated result in the developer windows (scheduled for FM9, right guys? right?) would pretty much change everything, no?

Again, this is a theoretical exercise, and is providing some valuable insights for me. Storing all auto-entered calcs in a single field in the logic file is something I think I will definitely salvage, regardless of the number of tables.

-Raz

Edited by Guest
fleshed out response

  • Author

Python is my summer project, but mostly for music purposes, where weirdness is called for more frequently.

Edited by Guest

And it sucks at separating data from design, has no inheritance, etc.

Well, it has to a certain extent; take a look at:

http://www.newcenturydata.com/downloads/separation_demo.zip

...for the separation issues, and

http://www.filemakermagazine.com/modules.php?op=modload&name=News&file=article&sid=607&mode=thread&order=0&thold=0

For the inheritance; but it's correct that "overriding" ...leaves something to be desired!!!

--sd

Going back to the readability point, I started having a few issues with this in my larger tables, where I'd have about 100 fields in total, maybe about 50 of which would store data.

So I started to split the fields into different sections by creating extra fields - mainly separating calcs from data, from globals, from keys, etc. But you could do the same here, splitting them up into your relevant groupings.

Though what you'd be doing in this case would be completely redundant, where you could just be using tables anyway...

Overall though, this thread is really interesting, good job guys }:( You've managed to both intrigue and ... umm... entertain?... me.

~Genx

I think the disadvantages are quite clear. The advantage is not. You have mentioned 'conceptual liberation'. I am not sure that is a worthy goal. The most flexible, concept-liberated, database is a fresh, empty file. When you start creating tables, fields, relationships, etc., you are making CHOICES. As a result of these choices, the file is becoming less generic and better suited to its proposed task. That, I think, is the way it should be.

  • Author

Xochi:

...absolutely sucks if you are a super power user and want to do true object oriented design (e.g. to have objects in a 4th generation language automatically bound to database field), and it also sucks if you want to do weird-dimensional object hierarchies (such as linked lists, recursive lists, etc.).

Could you please explain this in a bit more detail? I have a vague sense about what you are saying and think I might agree, but some specific examples of, well, 'suckiness' would benefit the discussion.

Comment:

You have mentioned 'conceptual liberation'. I am not sure that is a worthy goal.

You bring up some excellent points (that will take some time to consider), but on this one I completely disagree. We are in the theory forum; this should be in some ways our ultimate goal. You are to blame for this thread in some regard, as it was your posts Here and Here that were the bamboo rod which whacked me on the back of the head while I was out gardening in my FM5 philosophy patch (ouch, but thanks!).

You were absolutely right. I am beginning to see how almost all of the big workarounds and walls I have come up against in many files are just symptoms of a misconstrued understanding of the nature of the beast. The way this understanding is flawed is largely due to the imposition of concepts that aid human comprehension into the fundamental and rigid design of how the data is actually stored. FileMaker does not need these concepts; we need these concepts. I believe there is a way to maintain these conceptual aids in how we interact with the data, but not impose them as actual physical barriers, in a sense, in how the data is handled by FileMaker.

I feel like we have just upgraded from a rickshaw to a Porsche. Yes, we have a great stereo, shiny leather seats, and a sunroof, but we are still pulling the ******* thing around. Let us get in and drive! Of course we will have to learn how to shift, steer, and park, but come on now...

There has been some great observations and feedback so far - I hope it continues, but it is a lot to digest. I am going to put it all in my pipe and smoke it for a bit.

Thanks, - I will be back.

-Raz

I spent hundreds of hours consolidating my database solution, which did yield some noticeable performance gains, but now we have started deploying Mobile 8 and found that 1 database means only 1 table can be mobile... this SUCKS!

That's a bummer, but it doesn't seem to fit into this discussion.

Why sure, blame me for it...

I am not sure my point got across. We ARE in a theory forum, but I don't see why that would make ultimate flexibility a desirable goal. I am against flexibility at the expense of other concerns (such as readability, for instance), in theory as well as in practice. I am saying that "flexible" is, in a way, the opposite of "well-suited".

There is always the temptation to leave decisions in a reversible state - because you can. But if you want to get ahead, you need to burn some bridges behind you. I think that is true for almost any endeavor, not just FileMaking.

Also, it is seeming more complex than it is because I am tying it in with a separation model approach, which is not standardized as well.

Try to read this thread where this quote stems from:

When looking at the amount of work required to simulate the type of development model I might create in another environment I can't help but wonder why I am using FMP at all. This extends beyond mere FMP usage to encompass the interface things we do (e.g. hide the status area and then replicate it; tightly script everything including Finds and Sorts; etc.). After all of that investment would the time spent on making FMP behave contrary to its design have been better spent learning a new tool like REALBasic or Revolution? Do your solutions end up looking "cookie cutter" as you attempt to leverage the investment you made in the methodology? Does the user end up with a better experience than if you had worked within the FMP idiom?

From: http://www.nabble.com/The-placement-of-Calc_fields-in-the-separation-model.-t1329384.html

What Corn Walker says is pretty much the same as what Xochi and Ender, and not least Comment, say!

Wrong tool!

--sd

Edited by Guest

  • Author

Okay,

“I am saying that "flexible" is, in a way, the opposite of "well-suited".”

Yes, in a way, but not in essence. There are a multitude of things where what is most "well-suited" is "flexible" in nature (theater, database design, and parenting, to name a few).

Tables exist to contain data that pertains to each other, and separate it in some way from that which does not.

All data in a relational DB pertains to each other, or is in the wrong place.

All data in a relational database should be in a single table.

Relational depth has nothing to do with the number of tables. In fact, number of TO’s is also somewhat irrelevant. The Singles Table at the Champagne Room could have just as easily left off all of the TO’s except for Data, ParentData, and ChildData. What determines relational depth is the interaction of your data with the TO’s.

Practicality

It turns out that nearly all the issues of practicality came up from trying to address ideas of separation at the same time. Evaluating text outside of an unstored calc sucks the life out of my machine. That whole separation thing will have to wait. With the SingleTable approach, issues of practicality recede once the concept is understood. Implementing a mildly complex solution was surprisingly fast - significantly faster than a multi-table approach.

Speed-

It seems pretty snappy. Will try loading it up later on.

Readability-

The concept of tables is a great aid to human understanding. To continue to utilize this tool, Table structure is charted and formalized in the Table, TableFields, and Fields table. There should probably also be a Table Occurrences table as well.

It has forced a more methodical approach, but one that is completely manageable. Some more scripting will aid in this, and will only have to be done once.

After the initial wooziness, it actually became much more readable with the SingleTable approach. The relational structure is completely transparent.

The only real concern I have is:

I would hate to be the developer who inherits a solution built like this. Way too much trouble.

I am thinking if I do go this route, I would have to script something that would just export all the ‘Tables’ to actual Tables, and a relational chart as well. I will also have to be on my best commenting.

Benefits:

There are many.

I just inherited a 36 file monster from 1995. With some moderate scripting, I am thinking I could dump them all in the SingleTable and re-normalize with the click of a button. Use this exact script again for any other DB.

The simplicity of the relational design greatly improves the readability of how calcs will refresh through an update. Everything is considered from the Data TO context.

Join tables are a thing of the past.

Stored calcs do not have to be monitored for refreshing.

I am sure there are plenty more benefits as well, it suggests many possibilities.

Anyway, I invite you to come play in the Champagne Room tonight, and then stumble back tomorrow to see what specific practical issues are brought up.

-Raz

Well, I'm baffled.

I've seen your sample, but I don't see how you can say this is an improvement--substituting the well-designed developer interface provided by FileMaker, for one with generic field and relationship names that can't do half of what you could with normal fields. You rave about the benefits, but haven't provided evidence.

Some of the problems:

1. There are too many places to go to in order to add a field, and you have to keep track of both the assigned FieldName and the FileMaker field name. Suppose for example, a particular entity needs a new number field, you would need to first try to figure out if any of the number fields are available for that particular entity, and if not, add it to the FileMaker fields in Data. Then you would be able to assign it in that entity's definition. Then you have to remember which Data field it was so it can be placed in a layout.
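The bookkeeping burden Comment describes can be made concrete. Because the single table only has generic fields (Text1, Number1, ...), each entity type needs its own map from logical field names to physical slots. A hypothetical sketch (all names here are invented for illustration):

```python
# Per-entity map from logical field names to generic physical slots.
field_map = {
    "Customer": {"NameFirst": "Text1", "NameLast": "Text2"},
    "Invoice":  {"DateOrdered": "Date1", "Total": "Number1"},
}

def get(record, logical_name):
    """Resolve a logical name through the map for the record's entity type."""
    slot = field_map[record["tTable"]][logical_name]
    return record[slot]

customer = {"tTable": "Customer", "Text1": "Ada", "Text2": "Lovelace"}
print(get(customer, "NameFirst"))  # -> Ada
```

Every read and write has to pass through a lookup like this, which is exactly the extra indirection a conventional multi-table schema avoids.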

2. Calculation definitions are unwieldy. In addition to having to decipher and adhere to the unusual syntax for the calculations, you would again have the issues with using/mapping generic field names ("Now which of those fifteen date fields was the 'Date Last Ordered'?").

3. What about stored calcs? Is this not possible, or did this not seem like something that would be useful?

4. It doesn't look like filtered relationships are possible in this solution. How about: Multi-keys? Range relationships? Multi-criteria relationships?

5. You seem to say that looking further up or down the chain of relationships than the immediate Parent or Child record is unnecessary (or at least, you don't account for it,) but I've found the ability to view grandparent and grandchild records to be a great benefit of FM7/8. For example, if you wanted to see all the Items that a particular Customer has ordered in the past.

Well, I could probably go on, but I'm getting tired. You've posed an interesting design, but I just don't see it as practical.

  • Author

1. Agreed. Purely interface issues though, and separate from the underlying design. They could easily be refined (come on, you know you have tackled much more challenging interface problems...). Say, for example, to automate the assignment of data fields all at once after you were done. The layout palette feature is not up yet, but does help tremendously, as will the set-initial-record-to-table-field-names script before layout editing.

2. Agreed. Same as above. Some slight tweaks to a handy replace function (stripped from a mail merge demo...) should allow all calcs to be written by their Table Field names. This issue could be mitigated in other ways as well.

Both points above are things that need to be tackled only once. Once the model is in order, it would not be an issue for any successive files.

3. What about stored calcs? They are just as possible/awkward as in a multi-table approach. I usually do not find them useful, but there would be no problem using them here.

4. No problem at all. Why would they be?

5. Not unnecessary, it is already being done. The data is always looking everywhere. Traditionally, a relational graph involves a flattening out of the dimensional space the data exists in. In this flattening out, data, in the abstract form of relationships, becomes duplicated and redundant. That is, you have the same quality of relationship duplicated over and over throughout the graph. These redundancies obscure the true nature and timing of how the data relates. This results in update issues, increased processing power, and poor performance. In the SingleTable model, there is only one center point for the data (Data TO), and each relationship is of a unique quality. The relationship between Customer and Invoice is identical to that between Invoice and Item. There is no need to recreate this space, just ask which one you want to stand in the center of it.

To see all the items a customer ordered is simple and can be done from any of the 4 imposed tables with the same technique.

From Customers:

Create a value list of all ChildData MatchIDs from Data. Assign an unused calc to return that value list. Create a relationship from the Data TO: Calc=Item ParentID. It is just a matter of asking the children what their view from the center is.
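The "ask the children what they see from the center" technique amounts to a recursive walk over the one parent-child relationship. A hypothetical sketch of the idea (function and field names are mine, not from the demo file):

```python
def descendants_of_type(rows, root_id, t_table):
    """Walk child MatchIDs recursively, collecting rows of the requested
    type -- e.g. every Item under a Customer, however many levels down."""
    found = []
    for r in rows:
        if r["parent_id"] == root_id:
            if r["tTable"] == t_table:
                found.append(r)
            found.extend(descendants_of_type(rows, r["id"], t_table))
    return found

rows = [
    {"id": 1, "tTable": "Customer", "parent_id": None},
    {"id": 2, "tTable": "Invoice",  "parent_id": 1},
    {"id": 3, "tTable": "Item",     "parent_id": 2},
    {"id": 4, "tTable": "Item",     "parent_id": 2},
]
ordered_items = descendants_of_type(rows, 1, "Item")  # Items under Customer 1
```

In FileMaker terms, each level of the recursion is one more hop through the same ChildData relationship.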

It is a different way of working, but it is much easier to see reusing something like this than more traditional structures.

I suppose there is always runtimes as well...

Your solutions to these seem to involve adding more complexity to this already complex system, but you gloss over them as if they are self-evident.

For example #4, where you say filtered relationships are "No problem at all." Well, how? Your relationships between the data consist of a parent to child via a single key field. So if you're filtering, wouldn't you need to add additional relationships? Same problem with range relationships and multi-criteria relationships.

There are many ways to make Filemaker work. For example, we often see here a "dumb" file that is worked almost entirely through extensive scripting.

I believe there's a way that Filemaker was intended to work, and that the best way to utilize the application is to follow that way. It is very hard to justify this belief with rational arguments. It is even harder to come up with rational arguments against other methods: indeed, why not let scripts handle all data shuffling and all calculations?

But I have found that following what I called "the Filemaker paradigm" makes most problems easy to solve, and that those solutions are the most elegant ones.

It is a different way of working, but it is much easier to see reusing something like this than more traditional structures.

Let me take one more stab at this 're-usability' issue. You can make a refrigerator suitable for tropical climates. Or, you can make a very cost-effective refrigerator. Try to do both at the same time, leaving the decision to the last minute, and you will end up with a unit that is twice as expensive to make and breaks twice as often.

If I am making an invoicing solution, my primary goal is to make the best invoicing solution for the particular situation. I am not at all bothered whether I will be able to re-use this solution for tracking ads in a newspaper. I firmly believe that if such consideration entered my mind, I would end up with less than optimal invoicing solution.

Granted, I might have a basic invoicing template. I might even have a more basic join-table template. But the most basic template is a new file. Anything already defined in the file could be contributing to my primary goal - or hindering it.

Raz,

This whole exercise seems to be a lot of work to go through to make sure that you do things differently. I can understand that imports, and data transfers may seem easier if you're new to FM7/8, but once you get used to doing things the new way, I think you'll find things aren't as tough as they were.

In FM6, if you wanted to script an import of a bunch of data, you had to go into each affected table, and create scripts to prepare the table, perform the import, then do any further processing. Then you had to create one script that managed all these import scripts.

In FM7, you can do it all in one script. Just change layouts to work on a different table.

You can develop a solution your suggested way, and have to document what each field is based on what type of record you are storing ... OR ... you can develop using the tools FMI provided, the way they intended, and not have to look at your documentation to discover the use of every field.

I prefer to look at a field name and know what it is. For example, a field called NameFirst, in the Contact table is the first name of a contact. What is field Text1 when the record is a contact record? Better look at the book that you wrote to document the solution!

All that being said, it would probably be a fun exercise to create a solution the way you suggest. I just wouldn't want to have to maintain it!

  • Author

Apologies,

The design is still new to me and was not 'simple' at all until it finally clicked in (in the course of an hour the number of table occurrences dropped from 16 to 3). If I am glossing over some things, it is because I haven’t had to explain them to anyone yet. The key difference is that this model evaluates everything from the data's perspective, where a more standard design requires the data to flatten out to our perspective before we interact.

In the Data TO, all needed relationships between all data exist. To access remotely related data, a record stands in the Data TO, asks a child record what it would see if it was in the middle, who asks its child records what they would see in the middle, who asks its child records what they see... I don’t mean to say that all can be accomplished by two relationships, just that there is no need to duplicate relational states. You only need to create one 'heart' that is essentially the representation of all possible states of relation within the DB.

Yes, your question about filtering would require an additional relationship (just as it would in a multi-table approach), because it is introducing a new state of relation, that of a parent being directly related to a child and its grandchildren at the same time. However, this relationship would only have to be made once, and from the Data TO. Now all data can exist in this state of parent-child, child-parent, parent-grandchild relation. Without changing a thing but the record you are viewing, a portal that would display all products ordered by a certain customer on a customer record would display all Items on all purchase orders related to an invoice from an invoice record.
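The parent-to-grandchild relation described above can be sketched as a calculated multikey: each record computes a key list containing its children's and grandchildren's MatchIDs, and a single relationship (key membership) then spans both generations. This is a hypothetical illustration of the mechanism, not the demo file's actual calc:

```python
rows = [
    {"id": 1, "tTable": "Customer", "parent_id": None},
    {"id": 2, "tTable": "Invoice",  "parent_id": 1},
    {"id": 3, "tTable": "Item",     "parent_id": 2},
]

def multikey(rows, row_id):
    """Calculated key list: children plus grandchildren of row_id."""
    child_ids = [r["id"] for r in rows if r["parent_id"] == row_id]
    grandchild_ids = [r["id"] for r in rows if r["parent_id"] in child_ids]
    return child_ids + grandchild_ids

def portal(rows, row_id):
    """The 'portal': every row whose MatchID appears in the multikey."""
    keys = set(multikey(rows, row_id))
    return [r for r in rows if r["id"] in keys]
```

Viewed from the Customer record the portal shows its Invoice and the Invoice's Item; viewed from any other record, the same one relationship shows that record's children and grandchildren.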

It is not simple, but is it really more complex than a standard graph? I would maintain that it is actually much simpler, as the relational state of the data is crisply defined and easy to access. No more searching all over the graph to see where else something is related, it only needs to be considered from a single point.

Comment,

I get your points about the Filemaker Paradigm and re-usability. I think that this sort of relational structure is actually more in line with this paradigm. It does not try and do anything fancy with the program, but rather eliminates redundancies and allows FM to only have to consider things directly related to each piece of data. There is no distantly related data, it all occupies the same centerpoint.

"If I am making an invoicing solution, my primary goal is to make the best invoicing solution for the particular situation."

I think that there is much wisdom in that approach. Now, let me ask you what your approach would be if you were designing a product for a database developer to streamline their process of database creation. You are hired as a Meta-Consultant, if you will. You would point out that many of his projects have a remarkably similar relational structure. You would notice that within a single project, identical relational structures are being created and employed all over the place. You would observe that much time is spent adjusting and tweaking this structure over the course of the project. You would notice that this tweaking does not come easily, and is inhibited by tables. These are major inefficiencies in the work process, and your solution would aim at eliminating them as much as possible without creating more work in the process. Yes, it would be overkill for a one-off invoicing system, but it would be perfect for me.

Readability issues aside (I don't think those are insurmountable), I don’t see much against this method other than a belief structure, albeit a largely held belief structure. The data is actually happier in a single-table state.

You would point out that many of his projects have a remarkably similar relational structure. You would notice that within a single project, identical relational structures are being created and employed all over the place. You would observe that much time is spent adjusting and tweaking this structure over the course of the project. You would notice that this tweaking does not come easily, and is inhibited by tables. These are major inefficiencies in the work process, and your solution would aim at eliminating them as much as possible without creating more work in the process.

Raz,

I think you are really over-estimating the time and effort involved in making schema changes. Adding a table and relating it to your current structure, or changing a table's place in the relational structure can be done quite quickly, even in the most complex of databases.

As a meta-consultant, I would recommend that the client build a good template file, that has a reliable contact module in it. After that, I would tell him to make sure that he has a firm understanding of data theory, so that it is not difficult for him to create new tables, TOs and relationships on the fly. After all, I've seen a lot of extensive template files that really restrict the developer (okay, ME) by having too much created ahead of time.

Since all our clients have different businesses, expectations, and desires, there is no way to streamline our processes too much. A lot of it is a creative process, just like for a Web developer. The Web developer may have a few template CSS pages that he starts with, but there are too many variables in each project for him to go beyond that.

what your approach would be if you were designing a product for a database developer

Someone already did that - it's called Filemaker. It has enough limitations already, thank you very much. Ask me instead whether I would use Filemaker if it came out of the box using your paradigm.

  • Author

Thanks again for the feedback- it remains very helpful.

Adding a table and relating it to your current structure, or changing a table's place in the relational structure can be done quite quickly, even in the most complex of databases.

Yes, very easy. However, re-normalizing your data is not. Neither is creating reports with data from multiple distantly related tables. Imagine a calculated multikey that could access all data in the system. Set that multikey correctly, and you can display any data you want. All this would take in a SingleTable model is creating a single relationship from the Data TO: CalcField=MatchID. Now all data could access all other data directly through a single portal.
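The CalcField=MatchID idea can be sketched as follows: a portal that displays whatever rows a calculated key list names, so "any data you want" is just a different calc producing a different key list. A hypothetical illustration (the rows, the Paid flag, and the helper names are invented for this sketch):

```python
rows = [
    {"id": 1, "tTable": "Customer", "Text1": "Acme"},
    {"id": 2, "tTable": "Invoice",  "Paid": True},
    {"id": 3, "tTable": "Invoice",  "Paid": False},
]

def portal(rows, calc_keys):
    """One relationship, CalcField = MatchID: show whatever the calc names."""
    return [r for r in rows if r["id"] in set(calc_keys)]

# 'All unpaid invoices' is nothing more than a calc producing a key list:
unpaid_keys = [r["id"] for r in rows
               if r["tTable"] == "Invoice" and not r.get("Paid")]
unpaid = portal(rows, unpaid_keys)
```

The trade-off Comment raises still applies: the flexibility lives in the calc, which has to be written and maintained in place of a named, self-documenting relationship.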

Also, you cannot script in FM's define fields, and many features of importing still require manual attention. All of these could be controlled through scripting in what I propose.

Don't get me wrong- I agree completely about the over-detailed template point. I have walked down that road as well. That is what I find appealing about this model- the 're-usable' features deal with how data interacts on a fundamental level, and have nothing to do with workflow or interface issues that plague more surface-level templates.

I am resigned to the fact that I won't be making my fortune selling SingleTable Developer kits to the masses, but I cannot find any limitations that this model brings (beyond the mentioned readability ones, which appear resolvable), and see many multi-table limitations that it overcomes. If there are limitations imposed by this model (beyond the cosmetic), I would like to know.

Ask me instead whether I would use Filemaker if it came out of the box using your paradigm.

It does come out of the box using this paradigm, we just choose not to use it in this way, and for good reason. It demands a more complex understanding of things. I think the real question is if a more complex understanding would help the process in any significant way. That remains to be seen, but at the very least it has given me a much more accurate and readable model of how complex systems relate.

-Raz

...If there are limitations imposed by this model (beyond the cosmetic), I would like to know.

:bang: Well, you can lead a horse to water...

I am resigned to the fact that I won't be making my fortune selling SingleTable Developer kits to the masses, but I cannot find any limitations that this model brings (beyond the mentioned readability ones, which appear resolvable), and see many multi-table limitations that it overcomes. If there are limitations imposed by this model (beyond the cosmetic), I would like to know.

I don't think I'll be able to change your mind, Raz. You definitely haven't changed mine.

And now, I am bowing out.

It does come out of the box using this paradigm, we just choose not to use it in this way, and for good reason. It demands a more complex understanding of things.

I am not sure I agree to the first part, but that's not important. I agree there's a very good reason not to use in this way. But I would state the reason as "It demands we think in computer language instead of in human concepts". You might think this is a minor point - I consider it crucial.

There's absolutely no logical reason why turning a wheel clockwise should make a vehicle turn right, instead of left. Still, no car manufacturer would ever consider switching. One important role of software is to provide a usable interface to the humans involved in the process.

There's absolutely no logical reason why turning a wheel clockwise should make a vehicle turn right, instead of left.

Actually, there is a very good reason for this. Imagine if the steering wheel were, in fact, a wheel sitting on the ground. If you rotated it clockwise, which way would it go? To the right. It would only go to the left if gravity were reversed and objects stuck to the ceiling instead of the ground.

There are similar explanations why "up" generally means "more" and "down" usually means "less" with manually operated controls. Humans have built-in physical analogies for abstract concepts.

Edited by Guest

There IS a very good reason for it - but it is not a logical one. In fact, you have given us a prime example of associative thinking. All in all, you are only strengthening my point.

There IS a very good reason for it - but it is not a logical one. In fact, you have given us a prime example of associative thinking. All in all, you are only strengthening my point.

Objects behaving according to the laws of physics is, in my book, a pretty darn good (and logical) explanation. Carrying that to one level of abstraction is still logical, though one can argue it's not causally necessary.

  • Author

I think I am not presenting myself clearly. To me, the readability issue is the only valid point brought forth so far. I do not mean to discount that point, but would like to investigate if there are any structural/technical flaws with the model. Closer to the beginning of the thread, people said:

If solutions like yours were always feasible, we would still have our data in spreadsheets such as Excel.

...I'm sure it can be done, but it doesn't sound easy...

...And it sucks at separating data from design

...I think the disadvantages are quite clear. The advantage is not.

3. What about stored calcs? Is this not possible, or did this not seem like something that would be useful?

4. It doesn't look like filtered relationships are possible in this solution. How about: Multi-keys? Range relationships? Multi-criteria relationships?

5. You seem to say that looking further up or down the chain of relationships than the immediate Parent or Child record is unnecessary (or at least, you don't account for it,) but I've found the ability to view grandparent and grandchild records to be a great benefit of FM7/8. For example, if you wanted to see all the Items that a particular Customer has ordered in the past.

These objections no longer seem to be present. I think I have demonstrated that not only is it possible, but a single table model is pretty easy to implement structurally, and easy to adapt to new needs. I think I have also indicated some of the advantages that come with this model.

I agree there's a very good reason not to use in this way. But I would state the reason as "It demands we think in computer language instead of in human concepts". You might think this is a minor point - I consider it crucial.

Yes that is the only issue as far as I can see. I realize that I have been glossing over this in a fashion, but I do not consider it a minor point at all either. It in fact could ultimately be the deal breaker. No matter how comfortable I get with the model, no matter how readable it seems to me, if no one else could/would interact with it, it would be worse than not giving the client the master password. It would be forever wedding me and the database in a certain sense. Way too much commitment for me, and not terribly fair to the client.

I don't think I'll be able to change your mind, Raz. You definitely haven't changed mine.

In spite of my rhetorical excess, I have no intention of converting the world to a single table approach. It has proven a valuable exercise for me, I realize that others may find it a waste of time. You are welcome to play with the idea or disregard it as you wish.

I think I will pursue it more though. If asked to justify it practically, I will tell myself that I am building a normalizing tool: something into which I can do a scripted dump of all 36 files from this legacy database (all of them with InvoiceItem1, InvoiceItem2, InvoiceItem3 fields...), experiment with some different ERD models, render new field, table, and relationship lists, and then dump them back into new tables for a more traditional and accessible final product. I believe it could be a useful tool and will let me confront some more of the readability issues along the way.
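The renormalizing dump described above could be sketched as a split of the single table back into per-entity tables, renaming generic slots through a field map along the way. This is only an illustration under my own assumptions (the field map and row layout are hypothetical, not from the legacy files):

```python
from collections import defaultdict

def renormalize(rows, field_map):
    """Split single-table rows into per-entity tables, mapping generic
    slots (Text1, Number1, ...) back to meaningful field names."""
    tables = defaultdict(list)
    for r in rows:
        t = r["tTable"]
        mapped = {logical: r[slot]
                  for logical, slot in field_map.get(t, {}).items()
                  if slot in r}
        mapped["id"] = r["id"]
        tables[t].append(mapped)
    return dict(tables)

rows = [
    {"id": 1, "tTable": "Customer", "Text1": "Ada"},
    {"id": 2, "tTable": "Invoice",  "Number1": 99.0},
]
field_map = {"Customer": {"NameFirst": "Text1"},
             "Invoice":  {"Total": "Number1"}}
tables = renormalize(rows, field_map)
```

The same script, pointed at a different field map, would handle any other database dumped into the single table, which is the reuse Raz is after.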

I will try and post a copy in the Champagne Room when I get one up and rolling. Spring just arrived here, so it may take a while to get on it...

Thanks,

-Raz

Carrying that to one level of abstraction is still logical

A is very smart. A has red hair.

Carrying that to one level of abstraction:

People with red hair are very smart.
