
Everything posted by SurferNate
-
There are at least three or four ways to get what you want. I can tell you that a self join based on any field for which you want a summary is a good place to start. Also, make sure that everything is STORED and INDEXED. On 200K records, if you have unstored or non-indexed values in your relationship, you will most likely have a long wait for your results each time. Others here will recommend scripting and reporting on subsummary values. It all depends on the specific output you need. P.S. - That's an Excel file. You will almost always get better results here if you build something close to what you want in FM and post it. P.P.S. - You might want to look into GetSummary() also; a sketch follows.
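For illustration, a minimal GetSummary() setup, assuming hypothetical fields (Amount as the data field, Category as the break field):

Amount          Number field
Category        Text field
sTotalAmount    Summary field, Total of Amount
CategoryTotal   Calculation (number) = GetSummary ( sTotalAmount ; Category )

Note that GetSummary() only returns a value when the found set is sorted by the break field (Category here).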
-
Just brainstorming, but could you set your font size to one point? That typically renders as just dots on the screen. It's a hack even if it does work, but you could try it. Another option comes to mind: maybe you could default the login to a guest account, which has no access rights except to your login screen, and then have a button on your login screen that calls the Re-Login script step. That would cause a username and password dialog to pop up, but at least it gives you the secure password input.
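If you go the guest-account route, the button script can be a single step; a minimal sketch:

# Pops FileMaker's standard account/password dialog, so the
# password entry is masked by the application itself.
Re-Login [ With dialog: On ]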
-
I don't know how best to frame this question, as it only became important to me upon some very recent "big" learning steps. I guess I have two major ponderings...

Is there any good example file out there that shows an efficient way to build the structure for a Product-Assembly-Component type database? I hacked my way through this and am not satisfied with the "cost" of the results in terms of both complexity and speed. Anyone reading or contributing to my Summarize Grandchildren headache has the background to see what I mean.

Also, as far as contact (CRM) relationships are concerned, is there a preferred structure where people have many-to-many relationships with various locations, phone numbers, businesses, etc.? Is there a "best practice" example file available to show how to handle this? For instance, I may have John Smith, his personal residence, his main business (a corporation), his side business (DBA Somethingorother), his mailing addresses, etc. What is the best structure to relate John Smith to all of that, and yet have each type of contact/location also be functionally independent of John Smith, available for other relationships (a corporation, or any address for that matter, can be independent and does not actually need John Smith)? To complete the question, how might I take all of that and simplify the end user's selection of the exact Contact/Company/Location relationship? Is there a preferred balance in the complexity of such structures? The shape I have in mind so far is sketched below.

I know the question is vague. That's because I don't have one explicit issue to resolve, but more a set of functional paradigms to build.

EDIT: I just opened and started playing a little with SeedCode Complete. That probably just answered most of my CRM question. For what they include, it looks pretty darn cost effective to just pay for either the open site or developer license.
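Just to make the many-to-many side concrete (hypothetical table names):

People ---< PersonAddresses >--- Addresses
People ---< PersonCompanies >--- Companies
Companies ---< CompanyAddresses >--- Addresses

PersonAddresses:  PersonID | AddressID | Type (home, mailing, ...)
PersonCompanies:  PersonID | CompanyID | Role (owner, employee, ...)

Each join record carries the nature of the link, so John Smith, his corporation, and any address all stay independent entities that merely share join records.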
-
Finding records that do not match a normal relationship
SurferNate replied to gerrys's topic in Relationships
Without seeing your structure, I would start by suggesting a separate calculated field that determines whether your given set of conditions has been met. Try to have this calculation be stored/indexed if possible (i.e., it does not refer to globals, unstored calcs, or related fields). You can then use this field for your conditional relationship. EDIT: You used the word "report". Why not just script a find and then go to your report layout? A minimal sketch of both follows.
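Something like this, with hypothetical field and layout names:

# Stored calculation (number result) on the child table:
MatchFlag = If ( Status = "Active" and not IsEmpty ( DueDate ) ; 1 ; 0 )

# Scripted alternative for the report:
Enter Find Mode [ ]
Set Field [ Items::Status ; "Active" ]
Perform Find [ ]
Go to Layout [ "Report" ]
-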
Yah, anyway, while I burned keys typing this up, it seems others came to a similar conclusion!

It occurs to me that someone with time to waste might create a pure test file that generates arbitrary-sized data sets in at least two tables, including stored and unstored values, along with dates, text, and numbers, via a script (rough sketch below). Then one could just download the file and generate a test set of data locally to see whether a certain structure is practical. So many conditions might affect the outcome. What if one needed to import and validate against the current value of an unstored calc across 100K records?

LaRetta, I would think, after what I have learned here, that the validation is extremely fast because the "unique" value can be compared to a stored index on Target. The only speed issue here is the import of the Source data. You are really only parsing the full set of Source records here, not Target records. I would be willing to wager that if an imported value on Source is either unstored, or Source is a very large set, then it will drag the process down. Comment may have named the real suspect here: working with indexed data on both sides of any database transaction is probably more efficient by several orders of magnitude.
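A rough sketch of the generator script I have in mind, with hypothetical table and field names:

# Generate $count test records with mixed field types.
Set Variable [ $count ; Value: 100000 ]
Set Variable [ $i ; Value: 0 ]
Go to Layout [ "TestData" ]
Loop
  Exit Loop If [ $i ≥ $count ]
  New Record/Request
  Set Field [ TestData::TextField ; "Row " & $i ]
  Set Field [ TestData::NumberField ; Int ( Random * 1000 ) ]
  Set Field [ TestData::DateField ; Get ( CurrentDate ) - Int ( Random * 365 ) ]
  Set Variable [ $i ; Value: $i + 1 ]
End Loop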
-
So there is still in fact a time delay for the import? More than anything, the news, and the new learning experience for me, is to understand that large data sets can actually take a lot of time to process. I have become accustomed to my nice "small" data sets processing so fast that I can basically ignore small processing-time differences. It is interesting, and important to me, to know that when data sets grow beyond a very limited range, structure and method are paramount to success. Shoot, the ability of Mac OS X to pull a clean list of found records almost "instantly" from a 120 GB drive (100K+ files) tells me a little something about the differences in types of data processing and storage. This little fish just took a tiny peek at the ocean...
-
Ahhh, I understand better now, on both sides of the question. Your main goal is to periodically ADD a "copy" of any records from Data to Trigger that have changed since the last transaction (this is partially what I mean by transactional: the data is not manipulated "live", on demand, by a relationship, but explicitly, and in a controlled way, by a specific transaction). Then, also periodically, you have a script that "fires" emails based upon any new records added to Trigger since the last successful script run, yes? The portal is relational; the import is transactional (a sketch of how I picture it is below). One might want to look up "transactional database" on Wikipedia for a much better understanding than what I have so far (I could even be missing the point entirely, who knows)... Cool, I learned a whole bucket of good stuff here...
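A loose sketch of the periodic "copy" pass as I understand it (hypothetical names; the Import Records mapping options are not spelled out here):

# Run on a schedule: pull Data records changed since the last run into Trigger.
Set Error Capture [ On ]
Go to Layout [ "Data" ]
Enter Find Mode [ ]
Set Field [ Data::ModTimestamp ; "> " & $$LastRun ]
Perform Find [ ]
Go to Layout [ "Trigger" ]
Import Records [ No dialog ]   # import the found set from Data, matching field names
Set Variable [ $$LastRun ; Value: Get ( CurrentTimeStamp ) ]
# A stored preference field would survive file closes better than $$LastRun does.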
-
Okay, I found the near limit on recursion in this case: it works relatively smoothly at ~5,000 records. There's still a lag, but about 2 seconds at most. Still, it's not capable of handling a large enough data set to be practical in any real sense...
-
So, in thinking about this, is this more of a case where a transactional approach is best? I mean, should the Triggers file be transactional in relation to the Data file? That of course assumes that my very limited understanding of the difference between relational and transactional is even applicable here. That's what you mean by suggesting a Find/Import/Export script?
-
Well, I think I did find out. I tried adding 150,000 records, and GTRR works, with about a 60-second pause the first time and a 5-second pause after the first run. My RCF version sputtered and gave up outright under the sheer amount of data. Can't dig through that mountain, gotta go around it. And so I learned a couple of new things. P.S. - I also see what you mean about specific cases being different. I was just noting that in this case the size of the data set alone basically excludes the use of recursion to solve the problem.
-
For my own education, I really do care whether my "messy" solution or the "clean" solution with GTRR is faster on the large data set. I think I am coming up against the large-data-set/speed barrier in a couple of my own solutions, and I want some background info to make better structural decisions in the future. Now that I understand recursive CFs a little and can actually write them, I'm wondering if I should reconsider the Summarize Grandchildren solution too. I just don't know enough yet about RCF-vs-relationship-vs-scripting as far as speed of results is concerned...
-
dynamic perform find from field contents
SurferNate replied to filemaker 8 user's topic in Finding & Searching
I forget whether the Set Variable script step exists in FM 8; if it does, it is your answer. Use it to hold your find criterion:

Set Variable [ $Variable ; Value: Table::Field ]
Enter Find Mode [ ]
Set Field [ Table::Field ; $Variable ]
Perform Find [ ]
Set Variable [ $Variable ; Value: "" ]

Otherwise, use a global field instead of a variable. In either case, you want to avoid using OS-level operations like Cut and Copy for in-solution scripts, since those values are handled/stored by the OS and may affect other user actions in other, unrelated applications. By the way, it looks like you also might want to use a Loop / Exit Loop If / End Loop to run through all iterations until the find/delete action is exhausted (sketch below). P.S. - Be very careful with this; you can inadvertently wipe out your entire data set. Back up the whole file before you start playing with loop/setfield/delete type scripts.
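And a sketch of the loop form, using the same hypothetical field (test on a backup copy first):

Set Error Capture [ On ]
Enter Find Mode [ ]
Set Field [ Table::Field ; $Variable ]
Perform Find [ ]
Loop
  Exit Loop If [ Get ( FoundCount ) = 0 ]
  Delete Record/Request [ No dialog ]
End Loop
-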
Two cents (that's all it's worth from me), but: it looks from your Excel file like you have a large number of common, similar values for each equipment type, values that don't seem to change, so you might want a table to normalize just those values for each equipment type. Let's call that EquipmentTypes. Then you have each equipment item record, which has common fields but differing values; that's your EquipmentItems table. Then, following the example of the DG structure, you have another separate source table for each EquipmentType (provided the EquipmentType values are all fully predetermined at time of database development). You can then add each piece of Equipment from its respective TO layout and have only the relevant fields on that layout. I'm pretty sure that the only way this works is when, at time of development, you can conclusively state a manageable range of category or value types that the user cannot and will not change throughout the use of the solution. A rough sketch of the split is below.
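With hypothetical field names:

EquipmentTypes:  TypeID | TypeName | (the common, unchanging per-type values)
EquipmentItems:  ItemID | TypeID | SerialNumber | (the per-item values)

Relationship: EquipmentTypes::TypeID = EquipmentItems::TypeID (one-to-many)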
-
Well, I can see why; personally, on one occasion I really, really wanted to see records in a portal that otherwise had no natural relationship. That doesn't make it the best way, but it might still be the desired way at the moment. I solved the question. File attached. It took two recursive CFs and five more TOs for me to do it, I think, the way LaRetta was asking for the result. Tell me if this is actually correct and what you wanted. I can explain and post the CFs too, but they're fairly specific to this example file. You may want to adjust them to your needs. NonRelate_Copy.fp7.zip
-
Am I missing the point in thinking that Data must be manipulated from a Trigger layout? Otherwise Trigger will not "see" any changes made to Data, not in such a way that it can instantly evaluate and fire a script. In fact, the Data changes must actually originate in Trigger, if I understand FM logic correctly. P.S. - Several years ago you had a very neat trick for auto-entering notes and logging them. FM 8 broke this, and I figured out a new way to do it with a separate Journal table and a recursive CF to pull the data back into the NoteLog format. Ummm... I digress. I think I might have one way to do what I think you want... working on your file now...
-
Portal value list won't populate because of a relationship issue
SurferNate replied to mickeyfinn's topic in Relationships
Questions: Service Items are attributes of Services, therefore you have many-to-many? Second, Difficulty and Labor are direct attributes of Services but not direct attributes of Service Items, so again you would have another many-to-many relationship?
-
Comment: Yes, metaphors are not necessarily accurate. In this case, it fits the model I have in mind perfectly. I am more than aware that after muddling my way through "forcing" the result I want, I do sometimes revert back to a more simplistic solution. Such is the learning curve. I still don't think I can go recursive in a practical way, although I did already play with Jonathan Stark's Inventory file when working on this problem. I'm still more than open to the idea of recursion; it just means, AFAIK, that I wind up with a whole huge mess of fields in one or two tables, not including the requisite multikeys. Also, I didn't know that repeaters could go to such a high value. I was held back thinking the limit was 100.
-
In answer to the last few replies: no, of course I don't plan picnics for a living. The Product-Assembly-Material structure is practical to me business-wise, and so I used a fairly simple picnic as a metaphor.

I did spend a little time playing with Matt Petrowsky's recursive "Infinite Hierarchies" solution, understood it pretty well, and realized I wanted to force a more explicit set of rules and levels. Yes, recursion can be limited pretty easily; I just failed to see a substantial benefit. I realize that at some point I might want to add optional Super-Assemblies, but that's not terribly concerning to me right now. One big part of this is the point of knowing exactly how and why this structure works, as I can foresee using it again.

Comment, you mentioned using a repeating field, except I understand that to have its own limits. What if an Assembly eventually contains more than 100 different materials? Also, reporting on the join table only works if all of the record joins are similar in nature. Comment nailed the point down pretty well in recognizing that one has separate join records from Parent to Child and from Child to Grandchild. Even though I use a multipurpose join table, there is a subset of Parent=Child records and another subset of Child=Grandchild records, where the only commonality is Child. Grandchild never actually equals Parent (see the sketch below).
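To illustrate the two subsets in the multipurpose join table (hypothetical fields, using the picnic metaphor):

Joins:  ParentID  | ChildID   | Qty
        Picnic1   | Sandwich1 | 2      <- Product=Assembly subset
        Sandwich1 | Tomato1   | 2      <- Assembly=Material subset

Grandchild (Tomato) never joins directly to Parent (Picnic); the only thing the two subsets share is the Child (Sandwich).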
-
Count no of sub summaries in report
SurferNate replied to MitchBVI's topic in Calculation Engine (Define Fields)
Soren, I read that thread, and I'll have to read it again to figure out what the point is. There's a lot of back and forth about whether it's faster to report on stored or unstored calcs; I think I read at the end that it's six of one and half a dozen of the other. In any case, I've been "gone" from posting and reading for quite a while, and I come back to the conversations with a little more hard-knocks DB development practice and a lot better understanding of the logical landscape. It seems to me that you like reports a lot better than relational summaries. I'm curious about that, as I assume you have a very good and time-tested reason. Is it a paradigm that carries over from other DB work? Or is it FM specific?
-
And so I feel compelled to rant. /rant/ Why is it that the more "normal" I try to make my data and structure, the more difficult it becomes to aggregate and analyze? /end rant/
-
So, I've been over to that other thread, where you and Soren discuss the efficiency of find operations on stored and unstored calcs. Is it reasonable to think that a scripted solution could be faster than unstored calcs?

I'm studying up on recursion, and more than once I have played with the idea of creating a dedicated Reports table, containing only the calculated info, that could "bridge" multiple tables and perform recursive summaries on a RecordNumber = RecordNumber basis. This way I can have one actual unique record for each recursive summary, and I could also use that as a temporary join table from Product to Materials.

I guess the hangup that brought me this far is that I don't want to just display a summary from Product; I want to be able to also quickly interact with root values stored in Materials at the same time. This has to do more with business logic than DB logic. In business, if I am working on Product, I can quickly see if one of my Materials is out of date or incorrect and make the change "live". I also have calculations/logic that convert partial "lots" of material to whole "lots" for ordering purposes: even though a Sandwich contains two slices of Tomato, I still need to buy a whole Tomato (sketch below). Not that any of that constitutes requisite conditions for the structure I finally came to; just that, considering those conditions, I have not come up with any other more elegant solution.

Oh, and knowing very little about SQL, and learning a cursory amount about the differences between transactional and relational DB structure, I'm way over on the side of relational in this case.
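The lot conversion itself is simple; a minimal sketch, assuming hypothetical RequiredQty and LotSize fields on Materials:

# Round partial lots up to whole lots for ordering.
OrderQty = Ceiling ( RequiredQty / LotSize )
# e.g. 2 tomato slices needed, 8 slices per tomato: Ceiling ( 2 / 8 ) = 1 whole tomato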
-
Count no of sub summaries in report
SurferNate replied to MitchBVI's topic in Calculation Engine (Define Fields)
Mitch, a cartesian join is when you have two Table Occurrences on your graph and, instead of using field=field or another logical relationship like field>field or field<field, you use the cartesian product operator (×). It just means that in either direction, all records in the foreign Table Occurrence are visible to the current record. This is one method one might use if one wanted to perform a "live" simple aggregate calculation of "everything" (sketch below).

I think that there are strong differing opinions on structure and reporting simply because FM allows one to do some things that are not "good data structure" but are easier to understand from a layperson's standpoint (I myself fall squarely into that category, of course). I think as a general rule, and these folks might actually agree, that the bigger the job, the more critical it becomes that your structure be traditional and "normalized". For small projects and single-user or very-small-group solutions, traditional rules can be bent in the name of convenience or fun, so long as FM allows it and performs as expected.
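A minimal sketch of a cartesian aggregate, with hypothetical TO and field names:

Relationship:  Invoices::AnyField  ×  AllInvoices::AnyField   (cartesian operator)

GrandTotal  (unstored calculation) = Sum ( AllInvoices::Amount )
RecordCount (unstored calculation) = Count ( AllInvoices::InvoiceID )
-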
I agree that makes sense for data summary viewing, which I could potentially decide is really the only important issue. For now, I wanted the option of interactivity with the data in a portal, including the ability to see the current aggregated summary of grandchild records through all current recursive relationships. I remember reading a post from Soren, where he remarks on newbies feeling accomplished at making FM do something it is not designed to do. Maybe that is what I have done, but if so, I am frustrated that I could not accomplish the same goal in another more robust manner. If there is another piece of the puzzle that I am missing, I certainly wish I had it. Am I alone in wanting to do this? Again, it seems to me that the question has been asked and considered by at least a few in the past, but so far I understand none of the answers to be direct solutions...shoot, my workaround is not even remotely direct, except it does use relationships and calcs, which are understood AFAIK to be the most reliable means to an end here.
-
I may have put that wrong. I can GTRR Tomato; I know I can do that anyway, though. What I want to see is "Tomato 5 $2.00" and have that info live in front of the user. It's entirely possible that I'll change my mind about this when records multiply and the speed cost becomes more apparent, but for now, adding another Sandwich to Picnic produces an apparently instant result of "Tomato 7 $2.80". Soren is trying to send me over to the "report" camp on this issue, but I still don't see how I can do that without a recursive multikey.
-
Thanks. One reason I felt I had to do this is that I can now GTRR from my portal. I'm sure there is a way to do so with a custom function too, but again, I'm still struggling with truly understanding and writing recursive calcs. I do worry that this structure will "break" with a FileMaker update. If nothing else, though, it is proof of concept and gets me what I need right now. I definitely would be excited to see another way to accomplish this goal. Somehow relating Grandchild to Parent in a reliable way, such that I can recur, summarize, filter, and aggregate Grandchild records "live", would be a very good thing indeed.