Razumovsky Posted March 19, 2006 Could someone remind me again why we use more than one table? The more I think about it, the more it seems purely cosmetic. I am considering implementing an upcoming DB with all records in a single FileMaker table, and doing all the normalizing through a tTable field that would be used in relationships. Anyone care to talk me down from that ledge? -Raz
Søren Dyhr Posted March 19, 2006 (edited) I can only envy people who are able to see recursive data structures in every scenario, but I would say, from a maintainability point of view: say a year and a half later a client calls in and asks for a tiny change, because you haven't nailed their business logic more than 99% (honestly, this happens to pure geniuses as well!!) ...which is fastest to maintain? The principle usually applied here is called Occam's razor: http://www.answers.com/occam%27s+razor&r=67 But chances are that we're not really talking tables and table occurrences, but instead where to put the tables, in a single file or in several, which is an entirely different debate, and as usual it is a matter of discourse ...that is highly dependent on the task at hand, or rather the hardware it's deployed on. There are no definitive answers here, but it gives some perspective to listen to this podcast: http://podcast.Adatasol.com/media/Filemaker_PodCast_07112005.mp3 Because even though they're selling a tool to ease migration, Danny isn't entirely devoted to, or convinced by, the single-file approach! Even though your line of questioning is pretty much like: True wisdom knows it must comprise some nonsense as a compromise, lest fools should fail to find it wise. Which is one of the most concise ways of expressing the gist of http://en.wikipedia.org/wiki/The_Emperor's_New_Clothes ...it is indeed a challenging question to deal with if you really are into recursive structures and see them right away ...but I admittedly belong to the mere mortal kind here! There is another aspect to this as well, discussed eloquently in this video: http://previews.filemakermagazine.com/videos/513/DataTagging_full.mov So before we get deeper into the topic, I would at least be pleased to know which of the 3 interpretations of your question fits your reasoning best, without disregarding you as someone with a clearer view than us lemmings. --sd Edited March 19, 2006 by Guest
Razumovsky (Author) Posted March 19, 2006 Hi Soren, To be honest, I am completely at a loss as to what your three interpretations of my question are, but I gather that my post could use some more clarification. I am planning on a 2-file SepModel approach (Data and UI/BL, with probably a third for reports, and possibly a fourth plugging in Bob's snazzy audit trail module), but while dissecting the audit trail and going over many many many posts, I realized some fundamental errors in my thinking and am currently re-evaluating all of my previously held FM beliefs. I am working up a template that is a near-pure application of the sep model and allows addition, modification, and control of any data field from the logic/UI file, but I got stuck at how to modify or create new tables from the UI, found that I was having to duplicate much work across tables, and would be locked into my normalization structure once I defined it in Define Fields. Then I started thinking: why do I need them anyway? Especially with the limitations FM imposes between them and the complexity of running reports across them. I looked up the FM definition of a table and it said: "A collection of data pertaining to a subject." And then I was thinking: why would you need a relational database unless all data in it pertained to each other in some way? This may sound like a novice question, but I have been using FM for years and am quite familiar with the established practices of using it. I am waxing theoretical here because I am challenging all of my assumptions, after realizing that relatively few people on this forum have a deep understanding of FM7+ (I fear I am one of them, and am trying to fix that) and most are forging on ahead with mentalities imposed by earlier versions. I am beginning to suspect that tables are one of these artifacts, but am not confident yet.
While Lisa Simpson taught me all about Occam's Razor in the early nineties, she did not relay how to interpret that principle concerning the question of tables. In regards to "say a year and a half later a client calls in and asks for a tiny change because you haven't nailed their business logic more than 99% ...which is fastest to maintain?": This is precisely the scenario that is leading me towards a single-table solution in the data file. I am thinking that not having to import records across tables when adjusting normalization would be a significant improvement. Aside from having to adjust to paying attention only to table occurrences, as opposed to tables and TO's, I am having a hard time coming up with any downsides. I need some devil's advocates before I commit much more time to this, though... -Raz
comment Posted March 19, 2006 Separation issues aside, could you explain how you would implement, say, Customers - Invoices - LineItems - Products in a single table?
Razumovsky (Author) Posted March 19, 2006 Fields:
xID: (unique failsafe)
tContext: text field; would hold "Customer", "Invoice", "LineItems", or "Products"
tSerial: serial number based on the next higher value by tContext
tContext defines which 'table' the record would be a part of. Relationships would use serial and context together to filter. Everything else would be SOP. I haven't decided yet if I will keep empty fields in each record that don't pertain to the context, or have each context use the same fields to store different types of info (say, Invoice::Num1 would be the invoice number where Item::Num1 would be the item quantity). I seem to remember that empty text fields do not take up much space at all, but I will need to woodshed on that a bit. Is that enough detail to start?
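[Editor's note] The schema above can be sketched outside FileMaker to make the mechanics concrete. This is a minimal Python sketch (not FileMaker, and the helper names add_record / find_by_context are invented for illustration) of the idea: every record carries a tContext discriminator, and serial numbers increment independently per context.

```python
from collections import defaultdict

table = []                      # the single "table": a list of record dicts
next_serial = defaultdict(int)  # per-context tSerial counters

def add_record(context, **fields):
    # every record gets a unique failsafe xID plus a per-context tSerial
    next_serial[context] += 1
    rec = {"xID": len(table) + 1,
           "tContext": context,
           "tSerial": next_serial[context],
           **fields}
    table.append(rec)
    return rec

def find_by_context(context):
    # the context half of the "serial + context" relationship filter
    return [r for r in table if r["tContext"] == context]

add_record("Customer", Name="Acme")
add_record("Invoice", CustomerID=1)
add_record("Customer", Name="Zenith")

print([r["tSerial"] for r in find_by_context("Customer")])  # [1, 2]
```

Note that the two Customer records get serials 1 and 2 even though an Invoice record sits between them, which is what "next higher value by tContext" buys you.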
gdurniak Posted March 19, 2006 Q: Could someone remind me again why we use more than one table? A: For data that only needs to be entered / edited in one place, and not be repeated in each record. Yes, very often a single-table "flat file" is enough (and that's all we had, in FileMaker 2). I would suggest you try it both ways, and see which is easier / cleaner / more efficient for your particular solution.
Ender Posted March 19, 2006 What's the advantage of essentially having all the entities in one table? It would seem to me this would greatly decrease the readability of the solution.
Razumovsky (Author) Posted March 19, 2006 Ah, speak of the devil... I do appreciate the feedback. "A: for data that only needs to be entered / edited in one place, and not be repeated in each record" In this case, the only duplication of data happens in the tContext field, for each record that would formerly have been in the same table. Yes, it is still a duplication of data, but I suspect that FM dupes this parameter on its own when it is defining/referencing tables, and just does not allow a developer to modify it outside of Define Fields. And is the suggestion I propose really any more of a flat file than using FM tables? In the spirit of reconsidering old assumptions, I did some review on what a flat file is, but I am not clear exactly whether FM qualifies as one in this case; the distinction seems to be how/if you use relationships, regardless of FM table structure. If I put all records in one table and used no relationships, I would agree with you about being flat, just as if I used a bunch of different tables with no relationships. But I can't see any limitations, and only added functionality, from FM's point of view. Ender: "What's the advantage of essentially having all the entities in one table? It would seem to me this would greatly decrease the readability of the solution." I thought so as well, and encountered that initially, but am finding that a lot of this is due to my expectations and prior routine. Including initial dummy records with field-function-by-context data in them allows you to view context-specific field names in Layout mode by using the View > Show Sample Data feature, which greatly improves readability, as does having clear and informative TO names. As far as what you gain: primarily, for me it began as a conceptual liberation from being locked into pre-defined tables. You could add or adjust tables whenever, from the logic file. Also, creating and viewing reports involving distantly related data is seeming much easier in this method.
The relational graph is much cleaner (for example, you don't have to make sure you plug into all contexts in a 'global preferences'-type relationship; a single one-to-all relationship to your data file would accomplish it). Also, I am thinking this might help in eliminating the persistent problem of updating stored calculation results that reference related fields, but that thought is far from developed at the moment. Let me turn the question around, though: what is the advantage of not being able to dynamically define your table structure? -Raz
Søren Dyhr Posted March 19, 2006 "Especially with the limitations FM imposes between them and the complexity of running reports across them." If that's the real issue at stake here, isn't it just another way of putting a previous thread of yours: http://www.fmforums.com/forum/showtopic.php?tid/174852/post/198817/hl//#198817 That makes you believe that you actually need a flatter structure?? So is there really nothing utterly clever in the direction of the recursive structures that have surfaced in the debates since the move to FM7 and 8?? Neither does it seem like you're following the reasoning behind Petrowski's video. "I am thinking that not having to import records across tables when adjusting normalization would be a significant improvement" It's a feature in Valentina that I for some time couldn't get my head around: why would anyone need it anyway? But later the very same idea occurred to me for a template I made dealing with importing denormalized data and then organizing it accordingly: http://www.filemakerpros.com/Splitting.sit But I suggest that, while at it, you think of alternative ways of summarizing than the usual summary reports; investigate this: http://fmforums.com/forum/showpost.php?post/152237/ ...although I haven't made my mind up whether it really is impossible to make GetSummary() work over relations. It's too late now for experimentation ...perhaps I'll find time tomorrow?? --sd
Razumovsky (Author) Posted March 19, 2006 (edited) "If that's the real issue at stake here, isn't it just another way of putting a previous thread of yours:" No, that issue was just that FM neglected to mention that GetSummary couldn't break by a related field. There may be a way to use GetSummary involving related fields, but that will not change the fact that it does not seem to be able to break by a related field. I also have definitely found alternative ways to summarize reports. I am not looking for workarounds in the present investigation; I am questioning the design foundations that required these workarounds. *Added a note: The real issue for me started as an attempt at a functional SepModel that would allow unlimited modification and expansion of functionality and design purely through the UI/Logic file. **Added another: Also, I am not so convinced that my proposition involves flattening the DB at all. I would still use the same number of relationships as with a multi-table approach. Again, isn't the distinction between flat and related files in the relationship graph, and not in the Define Tables section? Edited March 19, 2006 by Guest more thoughts
Martin Brändle Posted March 19, 2006 (edited) I hope I have understood your idea. No, it is not. Tables define objects that belong to the same class, and relations, well, the relations between them. Of course you can use a single table for everything plus an additional field that tells what object class a record belongs to, but:
- this solution projects an n-dimensional space (or tensor) down to a one-dimensional space (a vector), and the projection will always take only two dimensions (a matrix) at once. You will not be able to reconstruct the n-dimensional tensor from this again, except at the price of redundancy (as gdurniak said in other words as well)
- finding and sorting will be more complicated, because one always needs to specify the class as well. It will work for at most two different classes. In a normalized solution, you can specify the tables (the object classes), and the DBMS will do the rest for you (indexing, sorting).
- self-joins will eventually decrease performance
- objects and their keys must be duplicated as soon as you have to relate one object to two other objects of the same class; effect on sorting?
If solutions like yours were always feasible, we would still have our data in spreadsheets such as Excel. Edited March 19, 2006 by Guest Additional point on finding and sorting
Razumovsky (Author) Posted March 19, 2006 Thanks Martin! This is exactly what I need. Considering tensor-to-vector: I believe I get what you are saying, but isn't this true for related DBs using different tables as well? Aren't tables just FM's way of automating this redundancy and hiding it from view? Isn't the n-dimensional space purely a mathematical schema that needs to be reconstructed from the relational graph regardless of tables? Isn't having 3 different tables without relationships like having three different one-dimensional spaces, not a multidimensional space? Three different flat files, not a relational space at all? Ah, so many questions! Yes, I agree that finding and sorting would involve one extra parameter on the developer side, but isn't this parameter already used in some fashion when FM decides which table to do the find in, and wouldn't it be nice to have access to it? I don't see self-joins (in a one-to-one manner) as any more necessary in this implementation than in a multi-table solution, but I might be missing something here. "- objects and their keys must be duplicated as soon as you have to relate one object to two other objects of the same class; effect on sorting?" I am not sure I understand your point here; could you provide a brief scenario where this would be necessary in a single-table approach, but not in a multi-table one? For me, speed is the big unknown and concern. I do think that using filtered portal searches would be pretty fast, and would allow you access to all distantly related records in a single portal... I need to get it mocked up and populated soon to test it out, but every time I sit down to work on it, I realize some other preconception that has crept into the design and muddied the view.
T-Square Posted March 20, 2006 Raz-- Technically, you're presenting very interesting ideas. I agree with comment, though. Any knowledge representation in software is an abstraction; none of us is instructing the computer to store the zeroes and ones in a particular data stream. So FileMaker (and RDBMSs in general) make use of intellectual constructs called tables to make it easier for humans to map their concepts of the real world into data structures. Everything we do in an RDBMS can be accomplished using text files, if you're so inclined, but you have to do all the work yourself. The system you're describing is elegant and extremely malleable, but my experience is that this doesn't reduce the amount of work that goes into a specific solution. My past experience has always been that I put all this effort into creating a generic, reusable application, only to find that it would have been easier if I had simply written a straightforward purpose-built one. Moreover, I think the real value in an RDBMS is in its specific structures, which you would still have to build within the single-table realm you envision. I think the reason that tables are used is that they make sense to people in a fundamental way that pure abstraction does not. It's a tool that works. David
Razumovsky (Author) Posted March 22, 2006 David, I usually agree with comment as well (if not at first, then after chewing on my foot for a while...) but am not sure what his point was in this thread, other than "less talk, more action." "My past experience has always been that I put all this effort into creating a generic, reusable application, only to find that it would have been easier if I had simply written a straightforward purpose-built one." Isn't that the truth! I have been down this road before as well. Part of me is resigned to the fact that this is purely a theoretical exercise, but part of me honestly believes that this is completely possible to pull off with the new features of FM7 (especially the Let and Evaluate functions). Regardless, I have been learning a tremendous amount about things I didn't even know I didn't know. "I think the reason that tables are used is that they make sense to people in a fundamental way that pure abstraction does not. It's a tool that works." I completely agree. There are many tools in FileMaker that allow some sophisticated things to be done without much understanding of them. These tools can be quite valuable, but have (at least in my case) at times trapped me into a certain limited way of thinking. But I do think it is possible to maintain the conceptual aid and readability that tables offer without actually defining them in Define Fields, although I haven't quite worked it out yet. OK, enough talk. Back to the action... -Raz
Søren Dyhr Posted March 23, 2006 (edited) Have moved my derived question to this thread: http://www.fmforums.com/forum/showtopic.php?fid/36/tid/175344/pid/199461/post/last/#LAST --sd Edited March 23, 2006 by Guest
xochi Posted March 23, 2006 "I am not sure I understand your point here, could you provide a brief scenario where this would be necessary in a single-table approach, but not in a multi-table one?" If I'm understanding the question, it's basically the same question in a regular normalized multi-table system, for which the answer is "Why do we need join tables, Alex?" Stated another way: let's say you have a Students table and a Classes table. Since a student can take multiple classes, and a class can have multiple students, this is an M-N relation (many-to-many). It turns out the only real way to do this is with a third table, the so-called "join" table. The join table in its minimal form is a two-field table that contains records matching a Student primary key with a Class primary key. As an exercise, perhaps you could try to explain to us how you would model this type of relationship (many-to-many) with your single-table system. I'm sure it can be done, but it doesn't sound easy...
Razumovsky (Author) Posted March 23, 2006 "it's basically the same question in a regular normalized multi-table system" Yes, that is what I was thinking as well, which would mean that it does not have much impact on the model I am proposing. Instead of a separate record in a join table with two fields, you simulate the same thing with a "StudentClasses" TO: a new record that has tContext = "StudentClasses", with two fields on that record that you would assign to StudentID and ClassID. Your relational graph would look identical to a multi-table approach, as each record, regardless of tContext, has a UniqueID: Students::UniqueID ----> StudentClasses::StudentID StudentClasses::ClassID -----> Classes::UniqueID -Raz
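[Editor's note] The join simulation described above can be sanity-checked with a short sketch. This is Python, not FileMaker, and the helpers (new_rec, classes_for) are invented for illustration: join records live in the same single table, distinguished only by their tContext value.

```python
table = []

def new_rec(context, **fields):
    # every record gets a unique failsafe ID, whatever its context
    rec = {"xID": len(table) + 1, "tContext": context, **fields}
    table.append(rec)
    return rec

alice = new_rec("Students", Name="Alice")
math = new_rec("Classes", Title="Math")
art = new_rec("Classes", Title="Art")

# these records play the role of a StudentClasses join table
new_rec("StudentClasses", StudentID=alice["xID"], ClassID=math["xID"])
new_rec("StudentClasses", StudentID=alice["xID"], ClassID=art["xID"])

def classes_for(student_id):
    # hop across the join records: Students -> StudentClasses -> Classes
    joins = [r for r in table
             if r["tContext"] == "StudentClasses" and r["StudentID"] == student_id]
    ids = {j["ClassID"] for j in joins}
    return [r["Title"] for r in table
            if r["tContext"] == "Classes" and r["xID"] in ids]

print(sorted(classes_for(alice["xID"])))  # ['Art', 'Math']
```

The many-to-many resolves exactly as it would with a real join table; the cost, as others note in this thread, is that every query must also filter on tContext.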
xochi Posted March 23, 2006 That might work, but how would you handle data typing and naming? I.e., it's really useful to have a field named "StudentID" that has field validations (numeric only, 6 digits, unique, must exist, not editable). Would your single-table solution use fields in this way? Or would you just have generic, un-named fields, e.g. Field1, Field2, Field3, all of which would have to be text fields with no validation? If the former, I think you'd quickly end up with a table with 500+ fields in it, which as a designer would quickly drive me insane, and it may have performance implications (e.g. using a 500-field record to store a single number is not efficient). If the latter, then I think you'd run into other problems, such as data type conversions (numeric <-> text <-> date), indexing and sorting issues (dates stored as text don't sort properly, key field matches may not work across data types, etc.)
Razumovsky (Author) Posted March 23, 2006 Yes, thanks! These are the readability issues I am working on at the moment. I usually validate through script anyway (regardless of table structure). If you really must have field-level validation, I suppose it could be done in the logic file. At the moment I am tinkering with the generic approach:
UniqueID: auto-enter unique ID
Trigger: auto-enter, replace contents: Let ( [ x = Text1 & Text2 & Text3 ... ] ; Trigger + 1 ) (this is a simplified version)
Text1, Text2, Text3...
Some generic auto-entered replace fields:
Auto1: Let ( x1 = Trigger ; Evaluate ( "Let ( [ vAuto = \"Auto1\" ] ; " & GetField ( "View::xgLogic" ) & " )" ) )
Auto2: Let ( x1 = Trigger ; Evaluate ( "Let ( [ vAuto = \"Auto2\" ] ; " & GetField ( "View::xgLogic" ) & " )" ) )
Auto3...
xgLogic is in the Logic file, and is a global text field which holds ALL of the auto-entered, tContext-specific calcs, conditionally parsed by a Case statement:
Case (
vAuto = "Auto1" and tContext = "Students" ; YourDesiredStudentAutoCalc1 ;
vAuto = "Auto1" and tContext = "Classes" ; YourDesiredClassesAutoCalc1 ; ...
// last step keeps the data the same if no condition was set for it to change:
GetField ( vAuto )
)
Sorting/indexing could be done through the logic file with GetAsDate ( Text1 ), GetAsNumber ( Text2 ), etc. auto-calcs. I might also just make a point of having a few Date1, Num1, Timestamp1, Container1 fields in there as well, and store the appropriate data in those fields. Readability-wise, it helps to use a template with each field assignment per tContext as a text label next to the field, and then just copy and paste these fields onto layouts, as opposed to dragging and selecting from a difficult-to-read generic table. Also, the initial dummy record with field assignment names as the field data, combined with 'Show Sample Data' in Layout mode, is working pretty well. I am also working up a separate FieldAssignments file that tracks these assignments per tContext.
I am thinking of incorporating this into a substitute calc, so that calcs and the like can be written out using intuitive field name assignments and then 'translated' into their generic names... Much more work to go, though! -Raz
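[Editor's note] The FieldAssignments idea above has a compact analogue outside FileMaker. This Python sketch (names and the sample assignments are purely illustrative, not from the thread's actual files) shows the translation layer: generic slots (Num1, Text1, ...) get per-context logical names, and a lookup maps between them in both directions.

```python
# (context, logical field name) -> generic slot; the same slot can mean
# different things in different contexts, exactly as Raz proposes
ASSIGNMENTS = {
    ("Invoice", "InvoiceNumber"): "Num1",
    ("LineItems", "Quantity"): "Num1",
    ("LineItems", "ProductID"): "Text1",
}

def set_field(record, logical_name, value):
    slot = ASSIGNMENTS[(record["tContext"], logical_name)]
    record[slot] = value

def get_field(record, logical_name):
    return record[ASSIGNMENTS[(record["tContext"], logical_name)]]

inv = {"tContext": "Invoice"}
line = {"tContext": "LineItems"}
set_field(inv, "InvoiceNumber", 1042)
set_field(line, "Quantity", 3)
print(inv["Num1"], line["Num1"])  # 1042 3
```

This also makes xochi's objection visible: nothing but the ASSIGNMENTS map stops you from storing a date in a slot that another context treats as a number.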
Ender Posted March 23, 2006 I would hate to be the developer who inherits a solution built like this. Way too much trouble.
xochi Posted March 23, 2006 Hmm. I'm thinking that this single-table solution is the worst of both worlds. FileMaker (bless its heart) absolutely sucks if you are a super power user and want to do true object-oriented design (e.g. to have objects in a 4th-generation language automatically bound to database fields), and it also sucks if you want to do weird-dimensional object hierarchies (such as linked lists, recursive lists, etc.). And it sucks at separating data from design, has no inheritance, etc. FileMaker is awesome at RAD prototyping, cross-platform design, simple, maintainable scripting, etc., which is probably why most of us are here on this forum. So I think your single-table design is keeping the bad and throwing out the good. I'd suggest you consider Python, DHTML, and a SQL server of some sort if you really want to do "weird" stuff like this }:(
Razumovsky (Author) Posted March 23, 2006 (edited) Yes, that is the best argument I can think of against it. It is not for the faint of heart, and is not a standard method of FM DB design (which is no small drawback). Also, it seems more complex than it is because I am tying it in with a separation model approach, which is not standardized either. However, the use of script parameters was not standardized either when it was first introduced, and would have been a headache to figure out if you were not aware of it. Not that I am saying they are the same degree off the previous model; more that just because a standard doesn't exist yet doesn't mean that a particular method should be abandoned (triple negative?), just that standards need to be developed. I have little doubt that we should be heading in this direction; the power it offers and the easy reusability of foundations are very compelling to me. The simple ability to view a field name as a calculated result in the developer windows (scheduled for FM9, right guys? right?) would pretty much change everything, no? Again, this is a theoretical exercise, and it is providing some valuable insights for me. Storing all auto-entered calcs in a single field in the logic file is something I think I will definitely salvage, regardless of the number of tables. -Raz Edited March 23, 2006 by Guest fleshed out response
Razumovsky (Author) Posted March 23, 2006 (edited) Python is my summer project, but mostly for music purposes, where weirdness is called for more frequently. Edited March 23, 2006 by Guest
Søren Dyhr Posted March 23, 2006 "And it sucks at separating data from design, has no inheritance, etc." Well, it has, to a certain extent; take a look at: http://www.newcenturydata.com/downloads/separation_demo.zip ...for the separation issues, and http://www.filemakermagazine.com/modules.php?op=modload&name=News&file=article&sid=607&mode=thread&order=0&thold=0 for the inheritance, but it's correct that "overriding" ...leaves something to be desired!!! --sd
Genx Posted March 24, 2006 Going back to the readability point, I started having a few issues with this in my larger tables, where I'd have about 100 fields in total, maybe about 50 of which would store data. So I started to split the fields into different sections by creating extra divider fields, mainly separating calcs from data from globals from keys, etc. You could do the same here, splitting them up into your relevant groupings. Though what you'd be doing in this case would be completely redundant, when you could just be using tables anyway... Overall though, this thread is really interesting. Good job guys, you've managed to both intrigue and... umm... entertain?... me. ~Genx
comment Posted March 24, 2006 I think the disadvantages are quite clear. The advantage is not. You have mentioned 'conceptual liberation'. I am not sure that is a worthy goal. The most flexible, concept-liberated database is a fresh, empty file. When you start creating tables, fields, relationships, etc., you are making CHOICES. As a result of these choices, the file becomes less generic and better suited to its proposed task. That, I think, is the way it should be.
Razumovsky (Author) Posted March 24, 2006 Xochi: "...absolutely sucks if you are a super power user and want to do true object-oriented design (e.g. to have objects in a 4th-generation language automatically bound to database fields), and it also sucks if you want to do weird-dimensional object hierarchies (such as linked lists, recursive lists, etc.)." Could you please explain this in a bit more detail? I have a vague sense of what you are saying and think I might agree, but some specific examples of, well, 'suckiness' would benefit the discussion. Comment: "You have mentioned 'conceptual liberation'. I am not sure that is a worthy goal." You bring up some excellent points (that will take some time to consider), but on this one I completely disagree. We are in the theory forum; this should in some ways be our ultimate goal. You are to blame for this thread in some regard, as it was your posts Here and Here that were the bamboo rod that whacked me on the back of the head while I was out gardening in my FM5 philosophy patch (ouch, but thanks!). You were absolutely right. I am beginning to see how almost all of the big workarounds and walls I have come up against in many files are just symptoms of a misconstrued understanding of the nature of the beast. The way this understanding is flawed is largely due to the imposition of concepts that aid human comprehension onto the fundamental and rigid design of how the data is actually stored. FileMaker does not need these concepts; we need these concepts. I believe there is a way to maintain these conceptual aids in how we interact with the data without imposing them as actual physical barriers, in a sense, on how the data is handled by FileMaker. I feel like we have just upgraded from a rickshaw to a Porsche. Yes, we have a great stereo, shiny leather seats, and a sunroof, but we are still pulling the ******* thing around. Let us get in and drive!
Of course we will have to learn how to shift, steer, and park, but come on now... There have been some great observations and feedback so far; I hope it continues, but it is a lot to digest. I am going to put it all in my pipe and smoke it for a bit. Thanks, I will be back. -Raz
Edward Newman Posted March 24, 2006 I spent hundreds of hours consolidating my database solution, which did yield some noticeable performance gains, but now we have started deploying Mobile 8 and found that 1 database means only 1 table can be mobile... this SUCKS!
Ender Posted March 24, 2006 That's a bummer, but it doesn't seem to fit into this discussion.
comment Posted March 24, 2006 Why sure, blame me for it... I am not sure my point got across. We ARE in a theory forum, but I don't see why that would make ultimate flexibility a desirable goal. I am against flexibility at the expense of other concerns (such as readability, for instance), in theory as well as in practice. I am saying that "flexible" is, in a way, the opposite of "well-suited". There is always the temptation to leave decisions in a reversible state, because you can. But if you want to get ahead, you need to burn some bridges behind you. I think that is true for almost any endeavor, not just FileMaking.
Søren Dyhr Posted March 27, 2006 (edited) "Also, it is seeming more complex than it is because I am tying it in with a separation model approach, which is not standardized as well." Try reading the thread this quote stems from: "When looking at the amount of work required to simulate the type of development model I might create in another environment I can't help but wonder why I am using FMP at all. This extends beyond mere FMP usage to encompass the interface things we do (e.g. hide the status area and then replicate it; tightly script everything including Finds and Sorts; etc.). After all of that investment would the time spent on making FMP behave contrary to its design have been better spent learning a new tool like REALBasic or Revolution? Do your solutions end up looking "cookie cutter" as you attempt to leverage the investment you made in the methodology? Does the user end up with a better experience than if you had worked within the FMP idiom?" From: http://www.nabble.com/The-placement-of-Calc_fields-in-the-separation-model.-t1329384.html What Corn Walker says is pretty much the same as what Xochi and Ender, and not least Comment, say: wrong tool! --sd Edited March 27, 2006 by Guest
Razumovsky (Author) Posted March 29, 2006 Okay. "I am saying that "flexible" is, in a way, the opposite of "well-suited"." Yes, in a way, but not in essence. There are a multitude of things where what is most "well-suited" is "flexible" in nature (theater, database design, and parenting, to name a few). Tables exist to contain data that pertains to each other, and to separate it in some way from data that does not. All data in a relational DB pertains to each other, or is in the wrong place. All data in a relational database should be in a single table. Relational depth has nothing to do with the number of tables. In fact, the number of TO's is also somewhat irrelevant. The Singles Table at the Champagne Room could just as easily have left off all of the TO's except for Data, ParentData, and ChildData. What determines relational depth is the interaction of your data with the TO's. Practicality: It turns out that nearly all the issues of practicality came up from trying to address ideas of separation at the same time. Evaluating text outside of an unstored calc sucks the life out of my machine. That whole sep thing will have to wait. With the SingleTable approach, issues of practicality recede once the concept is understood. Implementing a mildly complex solution was surprisingly fast, significantly faster than a multi-table approach. Speed: It seems pretty snappy. Will try loading it up later on. Readability: The concept of tables is a great aid to human understanding. To continue to utilize this tool, table structure is charted and formalized in the Tables, TableFields, and Fields tables. There should probably be a TableOccurrences table as well. It has forced a more methodical approach, but it is completely manageable. Some more scripting will aid in this, and will only have to be done once. After the initial wooziness, it actually became much more readable with the SingleTable approach. The relational structure is completely transparent.
The only real concern I have is that I would hate to be the developer who inherits a solution built like this. Way too much trouble. If I do go this route, I am thinking I would have to script something that would export all the 'Tables' to actual tables, along with a relational chart. I will also have to be on my best commenting behavior.

Benefits: There are many. I just inherited a 36-file monster from 1995. With some moderate scripting, I am thinking I could dump them all into the SingleTable and re-normalize with the click of a button, then reuse that exact script for any other DB. The simplicity of the relational design greatly improves the readability of how calcs will refresh through an update. Everything is considered from the Data TO context. Join tables are a thing of the past. Stored calcs do not have to be monitored for refreshing. I am sure there are plenty more benefits as well; it suggests many possibilities.

Anyway, I invite you to come play in the Champagne Room tonight, and then stumble back tomorrow to see what specific practical issues come up.

-Raz
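To make the "dump everything in one table and re-normalize with the click of a button" idea concrete, here is a minimal sketch of the single-table design under discussion. This is Python rather than FileMaker, and all names (tTable, parent_id, d1) are hypothetical stand-ins for the discriminator field, parent match field, and generic data fields:

```python
# One physical table; a tTable field says which logical entity each
# record belongs to. "Re-normalizing" is then just a group-by on tTable.
from collections import defaultdict

rows = [
    {"tTable": "Customer", "id": 1, "parent_id": None, "d1": "Acme Co."},
    {"tTable": "Invoice",  "id": 2, "parent_id": 1,    "d1": "2006-03-29"},
    {"tTable": "Item",     "id": 3, "parent_id": 2,    "d1": "Widget"},
    {"tTable": "Item",     "id": 4, "parent_id": 2,    "d1": "Gadget"},
]

def renormalize(rows):
    """Split the single table back into per-entity tables."""
    tables = defaultdict(list)
    for r in rows:
        tables[r["tTable"]].append(r)
    return dict(tables)

tables = renormalize(rows)
print(sorted(tables))       # ['Customer', 'Invoice', 'Item']
print(len(tables["Item"]))  # 2
```

The same split would work on any database dumped into the single table, which is what makes the "reuse this exact script" claim plausible in principle.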
Ender Posted March 29, 2006

Well, I'm baffled. I've seen your sample, but I don't see how you can say this is an improvement--substituting the well-designed developer interface provided by FileMaker for one with generic field and relationship names that can't do half of what you could with normal fields. You rave about the benefits, but haven't provided evidence. Some of the problems:

1. There are too many places to go in order to add a field, and you have to keep track of both the assigned FieldName and the FileMaker field name. Suppose, for example, a particular entity needs a new number field: you would first need to figure out whether any of the number fields are available for that entity, and if not, add one to the FileMaker fields in Data. Then you would be able to assign it in that entity's definition. Then you have to remember which Data field it was so it can be placed on a layout.

2. Calculation definitions are unwieldy. In addition to having to decipher and adhere to the unusual syntax for the calculations, you would again have the issues of using and mapping generic field names ("Now which of those fifteen date fields was the 'Date Last Ordered'?").

3. What about stored calcs? Is this not possible, or did it not seem like something that would be useful?

4. It doesn't look like filtered relationships are possible in this solution. How about multi-keys? Range relationships? Multi-criteria relationships?

5. You seem to say that looking further up or down the chain of relationships than the immediate parent or child record is unnecessary (or at least, you don't account for it), but I've found the ability to view grandparent and grandchild records to be a great benefit of FM7/8. For example, if you wanted to see all the Items that a particular Customer has ordered in the past.

Well, I could probably go on, but I'm getting tired. You've posed an interesting design, but I just don't see it as practical.
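The bookkeeping burden in points 1 and 2 can be sketched outside FileMaker. This is a hypothetical illustration, not anything from the sample file: the slot names (num01, date07, ...) and the allocate helper are invented to show what "figure out if any of the number fields are available" amounts to:

```python
# Every logical field must be mapped to one of a fixed pool of generic
# fields, and adding a field means finding a free slot first.
field_map = {
    ("Invoice", "Date Last Ordered"): "date07",
    ("Invoice", "Total"): "num03",
}

NUMBER_POOL = ["num01", "num02", "num03", "num04"]

def allocate(entity, logical, pool, field_map):
    """Assign the next unused generic field for this entity, if any."""
    used = {slot for (e, _), slot in field_map.items() if e == entity}
    for slot in pool:
        if slot not in used:
            field_map[(entity, logical)] = slot
            return slot
    # this is the "add it to the FileMaker fields in Data" step
    raise RuntimeError("no free generic field; add one to Data first")

print(allocate("Invoice", "Discount", NUMBER_POOL, field_map))  # num01
```

The point of the sketch is that every layout and every calc must go through field_map, which is exactly the indirection Ender is objecting to.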
Razumovsky Posted March 29, 2006 Author

1. Agreed. These are purely interface issues, though, and separate from the underlying design. They could easily be refined (come on, you know you have tackled much more challenging interface problems...)--say, by automating the assignment of data fields all at once after you are done. The layout palette feature is not up yet, but it does help tremendously, as will the "set initial record to table field names" script before layout editing.

2. Agreed. Same as above. Some slight tweaks to a handy replace function (stripped from a mail merge demo...) should allow all calcs to be written with their table field names. This issue could be mitigated in other ways as well. Both points above are things that need to be tackled only once. Once the model is in order, it would not be an issue for any successive files.

3. Stored calcs are just as possible (and just as awkward) as in a multi-table approach. I usually do not find them useful, but there would be no problem using them here.

4. No problem at all. Why would they be?

5. Not unnecessary--it is already being done. The data is always looking everywhere. Traditionally, a relationship graph involves flattening out the dimensional space the data exists in. In this flattening, data, in the abstract form of relationships, becomes duplicated and redundant. That is, you have the same quality of relationship duplicated over and over throughout the graph. These redundancies obscure the true nature and timing of how the data relates, resulting in update issues, increased processing demands, and poor performance. In the SingleTable model, there is only one center point for the data (the Data TO), and each relationship is of a unique quality. The relationship between Customer and Invoice is structurally identical to that between Invoice and Item. There is no need to recreate this space--just ask which record you want to stand at the center of it.
Seeing all the items a customer has ordered is simple, and can be done from any of the four imposed tables with the same technique. From Customers: create a value list of all ChildData MatchIDs from Data; assign an unused calc to return that value list; create a relationship from the Data TO where Calc = Item ParentID. It is just a matter of asking the children what their view from the center is. It is a different way of working, but it is much easier to see reusing something like this than more traditional structures. I suppose there are always runtimes as well...
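The grandchild lookup described above (Customer to Invoice to Item) can be sketched the same way. Again, this is Python rather than FileMaker, and the tTable/parent_id names are assumed stand-ins for the single-table match fields:

```python
# "Ask the children what their view from the center is": collect the
# customer's child Invoice IDs (the value list), then match Items whose
# parent_id falls in that ID set (the Calc = Item ParentID relationship).
rows = [
    {"tTable": "Customer", "id": 1, "parent_id": None, "d1": "Acme Co."},
    {"tTable": "Invoice",  "id": 2, "parent_id": 1,    "d1": "INV-001"},
    {"tTable": "Invoice",  "id": 3, "parent_id": 1,    "d1": "INV-002"},
    {"tTable": "Item",     "id": 4, "parent_id": 2,    "d1": "Widget"},
    {"tTable": "Item",     "id": 5, "parent_id": 3,    "d1": "Gadget"},
]

def items_for_customer(rows, customer_id):
    """All Items anywhere under a Customer, via the Invoice layer."""
    invoice_ids = {r["id"] for r in rows
                   if r["tTable"] == "Invoice" and r["parent_id"] == customer_id}
    return [r["d1"] for r in rows
            if r["tTable"] == "Item" and r["parent_id"] in invoice_ids]

print(items_for_customer(rows, 1))  # ['Widget', 'Gadget']
```

Note that nothing here is specific to the Customer/Invoice/Item trio; the same two-step hop works between any parent and grandchild, which is the reuse being claimed.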