Heathbo Posted October 30, 2007

Lately I've been working on a Call Log system. The database has two tables. One is the Call Log, where a record is created for each call, including whether it was a call received or made. The other is a reporting table, which has only one record.

I wanted to see how many incoming and outgoing calls were made each week. The Call Log has a Call Status field with two values (incoming or outgoing), so I assumed I would have to create 14 fields: one for each day of the week for incoming, and another 7 for outgoing. For example, the Monday incoming field is a Case statement that tests whether the record was created on Monday and the Call Status field is set to incoming; if so, its value is 1, otherwise 0. The reporting table is linked to the Call Log via a date field holding the current week (this is used to filter the calls), and a calculation field there uses the Sum function to total the values in all the related Monday incoming fields.

The problem with this system is that I had to create 14 fields for one week, and I'm looking at 50+ fields if I want to expand it to a month. It was mentioned in another post that it may be possible to do this with repeating fields. Is this possible, and how? Any help you can give is much appreciated.
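For illustration, one of the 14 calculation fields described above might look roughly like this (field names such as CallDate and CallStatus are assumed for the example; in FileMaker, DayOfWeek returns 2 for Monday):

    MondayIncoming = Case ( DayOfWeek ( CallDate ) = 2 and CallStatus = "Incoming" ; 1 ; 0 )

and the matching total in the reporting table would sum that field across the related week of calls:

    TotalMondayIncoming = Sum ( CallLog::MondayIncoming )

Multiply by 7 days and 2 call types and the field count grows exactly as described.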
Heathbo (Author) Posted October 30, 2007 I've attached a file as an example. Duplicate_fields.fp7.zip
comment Posted October 30, 2007 You could produce a report, sub-summarized first by date, then by call type. That would be the most "FileMakerish" solution. For a "live" view of summary data, you could have a table of 7 records (one for each day of the week), with dates calculated from a global date set by the user. Each record would summarize the calls for one day, and you could show the 7 records in a portal for a weekly overview. The same goes for a month - all you need is more records in the days table. See a very similar technique here: http://fmforums.com/forum/showpost.php?post/205259/ I wouldn't completely rule out the option of having 7 calculated dates linked to 7 TO's of the CallLog table either. But if you're thinking of expanding this to a month view, I'd say forget about it.
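A rough sketch of the "days" table idea, with table and field names assumed for illustration: each of the 7 records carries a slot number, and its date is calculated from a global date field the user sets (kept in the same table here to avoid context issues):

    cSlotDate = gWeekStart + ( SlotNumber - 1 )

Relate Days::cSlotDate to CallLog::CallDate (optionally adding CallStatus against a constant field as a second predicate, to get separate incoming and outgoing counts), then summarize each day's calls with something like:

    cCallCount = Count ( CallLog::CallID )

Showing the 7 Days records in a portal gives the weekly overview; a month view only needs more records and a suitable global.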
Søren Dyhr Posted October 30, 2007 "I wouldn't completely rule out the option of having 7 calculated dates linked to 7 TO's" The issue here is speed. While it could indeed be pulled off, opening such a layout is likely to take some time to render. Why the single-relationship approach is faster is a good question ... it seems to avoid unnecessary evaluations of anything more than one relationship away; suddenly the index is one common index instead of 7 different ones - well, I'm open to a good explanation here??? What bothers me is that the behaviour is clearly experienced, but the pure math behind it is a riddle to me ... --sd
Heathbo (Author) Posted October 30, 2007 "You could produce a report, sub-summarized first by date, then by call type. That would be the most 'FileMakerish' solution." Could you please explain?
Søren Dyhr Posted October 30, 2007 His point is that as long as you insist on a vast number of fields instead of breaking the data out into separate records, any kind of statistics, including summaries, is at best wishful thinking, often burdened with massive scripting to remedy the artifacts the approach produces. Realize that proper methods actually exist, based on genuine mathematical foundations, and that they contradict your initial hunches about how the data should be sliced and diced to behave efficiently ... I won't say that a crafty soul might not be able to pull it off your way - wonders were actually made with FileMaker 2.1 - but it was received with widespread pleasure among developers when FileMaker became relational with version 3! --sd
comment Posted October 30, 2007 Go to the Calls table, find the calls for the week, sort by date and call type, and enter Preview mode. Your report layout needs two sub-summary parts, one by date, the other by call type. Define a summary field as Count of CallID (or any other field that's guaranteed not to be empty), and place it in both sub-summary parts (and in a grand summary part as well, if you want one). You can delete the body part.
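A minimal script sketch of those steps (the layout name and the global week-range fields are assumed for the example; the report layout itself carries the two sub-summary parts and the Count-of-CallID summary field):

    Go to Layout [ "Weekly Call Report" (CallLog) ]
    Enter Find Mode [ ]
    Set Field [ CallLog::CallDate ; GetAsText ( CallLog::gWeekStart ) & "..." & GetAsText ( CallLog::gWeekEnd ) ]
    Perform Find [ ]
    # Sort order: CallDate, then CallStatus
    Sort Records [ Restore ; No dialog ]
    Enter Preview Mode [ ]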
comment Posted October 30, 2007 "The issue here is speed. While it could indeed be pulled off, opening such a layout is likely to take some time to render." With 7 TO's aggregating a week's worth of calls, I don't think it's going to be such a problem. In any case, the 'slots' approach saves an awful lot of calc fields, so I would lean more towards it.
Søren Dyhr Posted October 30, 2007 "the 'slots' approach saves an awful lot of calc fields" Yes! Do you remember when we debated prefab records against a fixed set of records whose calc fields are adjusted by a global field, where on my previous dozy G4 I could indeed see a marginal difference? The number of calc fields clearly must have significance, but is it the entire explanation? I recall Chris Moyer at DevCon '05 talking about a single Cartesian index for several fields, instead of turning indexing on for all of them, when the amount of data to process was just below genuine data mining - but the case here is the opposite? The number of fields locking relationships to the same index when the layout renders apparently plays a part in the explanation as well. With the slots it is just one and the same relationship. --sd
comment Posted October 30, 2007 "The number of calc fields clearly must have significance, but is it the entire explanation?" I don't know. I am not a speed freak, and I will gladly sacrifice a few milliseconds for overall simplicity and elegance. I am well aware that one may need to adopt a different attitude when dealing with large amounts of data on a congested network, and the solution needs to work today. But the thing is that hardware speeds increase all the time, while a data structure tends to stay put...
Søren Dyhr Posted October 31, 2007 "But the thing is that hardware speeds increase all the time, while a data structure tends to stay put..." Not every company's volume of data matches Moore's Law over time! Obviously the less normalized structures tend to outgrow it faster than the properly constructed ones. Furthermore, the OS usually seems to be the first to claim, or rather hog, the new frontier territories. What you seem to imply with your statement would indeed please the geniuses behind this strategy: http://my.advisor.com/doc/13023 where one-size-fits-all is apparently expected to be saved by the Moore-ish bell! I really do subscribe to Hernandez's rule - never adopt the legacy structure as the basis for the new structure. You surprisingly seem to legitimize jerry-builds??? We should strive to build solutions as thoroughly as possible ... as close to mathematical aesthetics and abstraction as possible, to withstand the sudden twists of fate an organization might encounter. Who could foresee the great famine in Ireland, or the Phylloxera bug that almost wiped out all wine production in Spain and France? --sd
comment Posted October 31, 2007 I'm afraid you must have missed my point entirely. I don't mind that, but you could have given me a little more credit before suggesting that I "legitimize jerry-builds". The calendar example illustrates my point quite well. You may have experienced a small lag using my solution on your old computer, but the structure itself is efficient. As the hardware improves, the lag goes away - but the alternative solution still has to deal with megatons of redundant records.
Søren Dyhr Posted October 31, 2007 What I had in mind was the use of 7 primary keys. I usually use the calculated version of "slots" as well, although you should keep in mind that the global directing the calc fields can't be placed arbitrarily - it has to reside in the sandwiched layer to ensure proper refreshing. "the lag goes away - but the alternative solution still has to deal with megatons of redundant records" This is not correct; the difference will always be there, but the chance that anyone will notice it will disappear over time. --sd
comment Posted October 31, 2007 A solution using 7 primary keys works well in many situations and is easy to implement - so I cannot rule it out as a valid option to be considered. "This is not correct; the difference will always be there, but the chance that anyone will notice it will disappear over time." Yeah, well. http://en.wikipedia.org/wiki/If_a_tree_falls_in_a_forest
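For completeness, a rough sketch of the 7-primary-keys variant (names assumed for illustration): the reporting table gets one calculated date per weekday, each matched to its own table occurrence of CallLog:

    cMondayDate = WeekStartDate          // assuming WeekStartDate holds the week's Monday
    cTuesdayDate = WeekStartDate + 1
    // ... and so on; relationship: Report::cMondayDate = CallLog_Mon::CallDate
    cMondayIncoming = Count ( CallLog_Mon_Incoming::CallID )

The incoming/outgoing split again needs CallStatus matched against a constant as a second predicate, as in the days-table sketch above. Easy enough for one week, but it multiplies quickly once a month view is wanted.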
Søren Dyhr Posted October 31, 2007 I thought you had something like this up your sleeve: http://icrontic.com/articles/pipelining_explained ... in dual-core processors? I believe dependencies force a certain order in which each instruction is performed, and the more actions are waiting, the later the result will arrive. --sd
Heathbo (Author) Posted October 31, 2007 This has been a most interesting and educational post. I want to thank both of you for attempting to clear things up for me. Unfortunately you two have lost me; some of what you're talking about is above my current knowledge. Can both of you either upload or link to an example of what you're referring to? Thanks again.
Søren Dyhr Posted October 31, 2007 Sure...! http://fmforums.com/forum/showpost.php?post/205259/ --sd