
SurferNate
Members · 101 posts · Slogan: Troublemaker
There are at least three or four ways to get what you want. I can tell you that a self-join based on any field for which you want a summary is a good place to start. Also, make sure that everything is STORED and INDEXED. On 200K records, if you have unstored or non-indexed values in your relationship, you will most likely have to wait a long time for your results each time. Others here will recommend scripting and reporting on subsummary values. It all depends on the specific output you need. P.S. - That's an Excel file. You will almost always get better results here if you build something close to what you want in FM and post it. P.P.S. - You might want to look into GetSummary() also.
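For example (field names here are made up, not from your file), GetSummary() takes a summary field and a break field, and it only returns a value when the found set is sorted by that break field:

GetSummary ( sTotalQuantity ; CustomerID )
// sTotalQuantity = a summary field defined as Total of Quantity
// CustomerID = the break field; sort the found set by it first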
-
Just brainstorming, but could you set your font size to one point? That typically renders as just dots on the screen. It's a hack even if it works, but you could try it. Another option comes to mind: maybe you could default the login to a guest account, which has no access rights except to your login screen, and then have a button on that login screen that calls the Re-Login script step. That would cause a username and password dialog to pop up, but at least it gives you the secure password input.
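Roughly, the button script could be as simple as this (just a sketch; the dialog text is invented):

Re-Login [ ]
If [ Get ( AccountName ) = "Guest" ]
    Show Custom Dialog [ "Login was cancelled or failed." ]
End If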
-
I don't know how best to frame this question, as it only became important to me upon some very recent "big" learning steps. I guess I have two major ponderings...

Is there any good example file out there that shows an efficient way to build the structure for a Product-Assembly-Component type database? I hacked my way through this and am not satisfied with the "cost" of the results in terms of both complexity and speed. Anyone reading or contributing to my Summarize Grandchildren headache has the background to see what I mean.

Also, as far as contact (CRM) relationships are concerned, is there a preferred structure where people have many-to-many relationships with various locations, phone numbers, businesses, etc.? Is there a "best practice" example file available to show how best to handle this? For instance, I may have John Smith, his personal residence, his main business (a corporation), his side business (DBA Somethingorother), his mailing addresses, and so on. What is the best structure to relate John Smith to all of that, and yet have each type of contact/location also be functionally independent of John Smith for other relationships (a corporation, or any address for that matter, can be independent and does not actually need John Smith)? To complete the question, how might I take all of that and simplify the end-user selection of the exact Contact/Company/Location relationship? Is there a preferred balance in the complexity of such structures?

I know the question is vague. That's because I don't have one explicit issue to resolve, but more a set of functional paradigms to build.

EDIT: I just opened and started playing a little with SeedCode Complete. That probably just answered most of my CRM question. For what they include, it looks pretty darn cost effective to just pay for either the open site or developer license.
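To make the middle question a little more concrete, the rough shape I was imagining is a join table carrying the role of each link (table and field names below are only placeholders):

People ( PersonID ; Name ; ... )
Entities ( EntityID ; Type ; Name ; ... )            // companies, addresses, phone numbers
PeopleEntities ( PersonID ; EntityID ; Role )        // "home", "mailing", "owner", etc.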
-
Finding records that do not match a normal relationship
SurferNate replied to gerrys's topic in Relationships
Without seeing your structure, I would start by suggesting a separate calculated field that determines whether your given set of conditions has been met. Try to have this calculation be stored/indexed if possible (i.e., it does not refer to globals, unstored calcs, or related fields). You can then use this field for your conditional relationship. EDIT: You used the word "report". Why not just script a find and then go to your report layout?
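Something along these lines (layout and field names are only placeholders for whatever you actually have):

Enter Find Mode [ Pause: Off ]
Set Field [ Invoices::ConditionMet ; 1 ]
Perform Find [ ]
Go to Layout [ "Report" ]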
-
Yah, anyway, while I burned keys typing this up it seems others came to a similar conclusion! It occurs to me that someone with time to waste might create a pure test file that generates arbitrary-sized data sets in at least two tables, including stored and unstored values along with dates, text, and numbers, via a script. Then one could just download the file and generate a test set of data locally to see if a certain structure is practical. So many conditions might affect the outcome. What if one needed to import and validate against the current value of an unstored calc across 100K records? LaRetta, I would think, after what I have learned here, that the validation is extremely fast because the "unique" value can be compared to a stored index on Target. The only speed issue here is the import of the Source data. You are really only parsing the full set of Source records here, not Target records. I would be willing to wager that if an imported value on Source is either unstored, or Source is a very large set, then it will drag the process down. Comment may have named the real suspect here. Working with indexed data on both sides of any database transaction is probably more efficient by several orders of magnitude.
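A bare-bones generator script might look something like this (table and field names invented for the example; a real test file would obviously need more fields and a second table):

Set Variable [ $count ; Value: 100000 ]
Freeze Window
Go to Layout [ "TestData" ]
Loop
    Exit Loop If [ Get ( FoundCount ) ≥ $count ]
    New Record/Request
    Set Field [ TestData::NumberField ; Random * 1000 ]
    Set Field [ TestData::DateField ; Get ( CurrentDate ) - Int ( Random * 365 ) ]
    Set Field [ TestData::TextField ; "Row " & Get ( RecordNumber ) ]
End Loop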
-
So there is still in fact a time delay for the import? More than anything, the news, and the new learning experience for me, is understanding that large data sets can actually take a lot of time to process. I have become accustomed to my nice "small" data sets processing so fast that I basically can ignore small processing-time differences. It is interesting, and important to me, to know that when data sets grow beyond a very limited range, structure and method are paramount to success. Shoot, the ability of Mac OS X to pull a clean list of found records almost "instantly" from a 120 GB drive (100K+ files) tells me a little something about the differences in types of data processing and storage. This little fish just took a tiny peek at the ocean...
-
Ahhh, I understand better now, on both sides of the question. Your main goal is to periodically ADD a "copy" of any records from Data to Trigger that have changed since the last transaction (this is partially what I mean by transactional: the data is not manipulated "live", on demand, by a relationship, but explicitly, and in a controlled way, by a specific transaction). Then, also periodically, you have a script that "fires" emails based upon any new records added to Trigger since the last successful script run, yes? The portal is relational; the import is transactional. One might want to look up "transactional database" on Wikipedia for a much better understanding than what I have so far (I could even be missing the point entirely, who knows)... Cool, I learned a whole bucket of good stuff here...
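Just to make sure I have the shape of the second script right, it would be something like this (layout, field, and flag names are mine, not yours; the EmailSent flag is hypothetical):

Go to Layout [ "Trigger" ]
Enter Find Mode [ Pause: Off ]
Set Field [ Trigger::EmailSent ; "=" ]      // "=" finds empty, i.e. not yet processed
Set Error Capture [ On ]
Perform Find [ ]
If [ Get ( FoundCount ) > 0 ]
    Go to Record/Request/Page [ First ]
    Loop
        Send Mail [ No dialog ; To: Trigger::Email ; Subject: "Record changed" ]
        Set Field [ Trigger::EmailSent ; Get ( CurrentTimestamp ) ]
        Go to Record/Request/Page [ Next ; Exit after last ]
    End Loop
End If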
-
Okay, I found the near limit on recursion in this case: it works relatively smoothly at ~5000 records. There's still a lag, but about 2 seconds at most. Still, it's not capable of handling a large enough data set to be practical in any real sense...
-
So, in thinking about this, is this more of a case where a transactional approach is best? I mean, should the Triggers file be transactional in relation to the Data file? That of course assumes that my very limited understanding of the difference between relational and transactional is even applicable here. Is that what you mean by suggesting a Find/Import/Export script?
-
Well, I think I did find out. I tried adding 150,000 records, and GTRR works, with about a 60-second pause the first time and a 5-second pause after the first run. My RCF version sputtered and gave up outright under the sheer amount of data. Can't dig through that mountain; gotta go around it. And so I learned a couple of new things. P.S. - I also see what you mean about specific cases being different. I was just noting that in this case the size of the data set alone basically excludes the use of recursion to solve the problem.
-
For my own education, I really do care whether my "messy" solution or the "clean" solution with GTRR is faster on the large data set. I think I am coming up against the large-data-set/speed barrier in a couple of my own solutions, and I want some background info to make better structural decisions in the future. Now that I understand recursive CFs a little and can actually write them, I'm wondering if I should reconsider the Summarize Grandchildren solution too. I just don't know enough yet about RCF vs. relationship vs. scripting, as far as speed of results is concerned...
-
dynamic perform find from field contents
SurferNate replied to filemaker 8 user's topic in Finding & Searching
I forget if the Set Variable script step exists in FM8; if it does, it is your answer. Use it to hold your find value:

Set Variable [ $Variable ; Field ]
Enter Find Mode [ Pause: Off ]
Set Field [ Field ; $Variable ]
Perform Find [ ] (no dialog)
Set Variable [ $Variable ; "" ]

Otherwise, use a global field instead of a variable. In either case, you want to avoid using OS-level operations like "cut" and "copy" for in-solution scripts, since those values are handled/stored by the OS and may affect other user actions in other, unrelated applications. By the way, it looks like you also might want to call a Loop / Exit Loop If / End Loop to go through all iterations of your script until the find/delete action is exhausted. P.S. - Be very careful with this; you can inadvertently wipe out your entire data set. Back up the whole file before you start playing with loop/setfield/delete type scripts.
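The looped version would look roughly like this (table and field names are placeholders, and the loop simply repeats the find until nothing is left to delete):

Set Error Capture [ On ]
Set Variable [ $Variable ; YourTable::SearchField ]
Loop
    Enter Find Mode [ Pause: Off ]
    Set Field [ YourTable::TargetField ; $Variable ]
    Perform Find [ ]
    Exit Loop If [ Get ( FoundCount ) = 0 ]
    Delete All Records [ No dialog ]
End Loop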
-
Two cents (that's all it's worth from me), but: it looks from your Excel file that you have a large number of common, similar values for each equipment type, values that don't seem to change, so you might want a table to normalize just those values for each equipment type. Let's call that EquipmentTypes. Then you have each equipment item record, which has common fields but differing values; that's your EquipmentItems table. Then, following the example of the DG structure, you have another separate source table for each EquipmentType (provided the EquipmentType values are all fully predetermined at the time of database development). You can then add each piece of Equipment from its respective TO layout, and have only the relevant fields on that layout. I'm pretty sure that the only way this works is when, at the time of development, you can conclusively state a manageable range of category or value types that the user cannot and will not change throughout the use of the solution.
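Sketched out (the names below are my placeholders, not anything from your file):

EquipmentTypes    - one record per type, holding the common, unchanging values
EquipmentItems    - one record per piece of equipment, with the fields shared by every type
CraneSource, ForkliftSource, ...  - one source table (with its own TO and layout) per type, carrying only that type's fields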
-
Well, I can see why; personally, on one occasion I really, really wanted to see records in a portal that otherwise had no natural relationship. That doesn't make it the best way, but it might still be the desired way at the moment. I solved the question. File attached. It took two recursive CFs and five more TOs for me to do it, I think, the way LaRetta was asking for the result. Tell me if this is actually correct and what you wanted. I can explain and post the CFs too, but they're fairly specific to this example file. You may want to adjust them to your needs. NonRelate_Copy.fp7.zip
-
Am I missing the point in thinking that Data must be manipulated from a Trigger layout? Otherwise, Trigger will not "see" any changes made to Data, not in such a way that it can instantly evaluate and fire a script. In fact, the Data changes must actually originate in Trigger, if I understand FM logic correctly. P.S. - Several years ago you had a very neat trick for auto-entering notes and logging them. FM 8 broke this, and I figured out a new way to do it with a separate Journal table and a recursive CF to pull the data back into the NoteLog format. Ummm... I digress. I think I might have one way to do what I think you want... working on your file now...