bac mac

Members
  • Content count

    41
  • Joined

  • Last visited

Community Reputation

0 Neutral

About bac mac

  • Rank
    novice

Profile Information

  • Gender
    Not Telling

FileMaker Profile

  • FM Application
    12 Advanced
  • Platform
    Mac OS X Lion
  • Skill Level
    Intermediate
  1. Usually, we only need to look at the exact matches, because the most common test subject (a Table A record) will match enough records in Table B for statistical significance. Failing that, we need to look further, e.g., include matches with one additional or one fewer disability; both cases would score 8/9, compared to 9/9 for a perfect match. And yes, it will be entirely dependent on the test subject. We will include a limit on the mismatches, and it may be 8/9 or possibly lower.
  2. That is a very concise calculation that would do well identifying records with one mismatch, allowing that mismatch to be in any of the 9 fields. Would it also go beyond that and help evaluate less equivalent matches? The context: Table B holds hundreds to thousands of records (depending on which data set we use), each representing a subject type with 0 to 9 different disabilities, e.g., inability to walk without assistance, inability to bathe without assistance, etc. For each subject type, a record indicates the status with respect to each problem. Each problem gets its own field; a fully functional subject type would have "able" (or 1) in each of the 9 fields, and a problem with one or more of these activities would have "unable" (or 0) in the corresponding fields. (Actually, right now it uses text, able/unable, but I'm considering switching to 1/0; thoughts on that with respect to performance?) Then, in a separate table (A), we generate a new record for a test subject and populate its 9 activity fields with the status for each activity. We can identify Table B records that exactly match the state of the 9 activity fields in the Table A record (easy via a single relationship matching each of the 9 fields between the tables). The question is how best to identify the Table B records that include a mismatch and sort them from best to worst match. What do you think?
  3. From a record in Table A that includes 9 fields, I need to identify the best-matching records in Table B (hundreds of records, with 9 corresponding fields). I can easily define a relationship that retrieves the Table B records where the contents of all 9 fields exactly match (9 out of 9) between the Table A query record and the Table B records. How do I include the best, but not perfect, matches, e.g., where 8 out of 9 fields match? I could script through all of the Table B records and set 9 additional temporary fields to 1 for exact matches and 0 for mismatches, but that seems cumbersome. I'm using Pro 12, but if 14 has features that would make this easier, that would be fine. And just to be clear, I don't need to handle partial matches between corresponding fields: a pair of corresponding fields either matches exactly or it doesn't. In fact, the fields could be boolean to start with and just contain a 1 or 0 each. What is the recommended way to determine the best-match records across a set of multiple fields?
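The logic discussed in the three items above (score each Table B record by how many of its 9 boolean fields equal the Table A query record, keep scores above a cutoff, and sort best to worst) can be sketched outside FileMaker. This is an illustrative Python sketch, not FileMaker syntax; the field names and sample data are invented:

```python
# Score candidate records by how many of their 9 boolean "activity"
# fields match the query record, then sort from best to worst match.
# Field names (activity_1..activity_9) are made up for illustration.

FIELDS = [f"activity_{i}" for i in range(1, 10)]

def match_score(query, candidate):
    """Number of fields (0-9) in which candidate equals query."""
    return sum(query[f] == candidate[f] for f in FIELDS)

def best_matches(query, table_b, min_score=8):
    """Return (score, record) pairs with score >= min_score, best first."""
    scored = [(match_score(query, rec), rec) for rec in table_b]
    kept = [(s, r) for s, r in scored if s >= min_score]
    return sorted(kept, key=lambda sr: sr[0], reverse=True)
```

With `min_score=8` this returns the perfect matches followed by the one-mismatch records, which mirrors the 9/9-then-8/9 ranking described above; lowering `min_score` widens the net.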
  4. I am accessing SQL tables from FMP 12 via ODBC, and this has mostly been working fine, except for some performance issues that crop up now and then, e.g., "Not Responding" appearing at the top of the window for a few minutes, though it always clears and continues to function. I don't fully understand exactly how the connection operates or what the best practices are. For some of the connections, I just obtain a total count of the records and the names of all of the fields in the remote table. The remote tables contain 2-40 fields and anywhere from about 80 records to 150 million records.
     Since the last time I added one of the remote tables to FMP, I keep getting an error in a FileMaker Pro popup window: "There is not enough room on the disk to complete this operation. Make more room on the disk, then try again." The only option is to click OK. When I do, it goes away, but then reappears about 3 minutes later. This last table has 50 million records and about 30 fields. Unlike the other tables, I could not identify a unique field or a unique pair of fields; after trial and error, I ended up selecting all 30 fields to get the table to appear.
     1. Does the fact that I selected 30 fields when adding the table through ODBC result in a performance hit affecting memory and/or disk space? (I have not yet had luck getting a simple unique key added on the remote.)
     2. What happens when I access SQL data through ODBC? Is the entire remote table brought into FMP memory, regardless of what subset of the data I may actually be using? Would it be significantly more efficient if I did have a single unique key field in the remote table to use?
     3. Any other advice about optimizing use of SQL data by FMP 12? Would there be significant benefits if we switched to FMP 13?
     Thank you, Bruce
  5. That totally does it, and I greatly appreciate your speedy replies. I never would have come up with that syntax; I don't think I've ever used the Quote function. In my current incarnation, I have Table Name fed by a drop-down list populated by a field calculation, TableNames(""), and now, when I select the desired table, I instantly see the total record count in the desired SQL table. I'll investigate the other points that you mentioned. Thank you!!! Bruce
  6. That solved it instantly! I had initially used the *, and I also tried escaping the table name, as well as too many things to mention, but never hit the correct combination. Now I'm trying to get the TO name into the calculation from another field. I tried escaping that as well, but can't get it to work. I also tried using the ? placeholder: ExecuteSQL ("Select Count (*) From ?";"";"";Table Name), where Table Name is a field in the current table. I suspect I have to add a calculation to generate the appropriately escaped TO name. Do you have a suggestion for that syntax, with the TO name in a field called Table Name? Thank you very much, -Bruce
  7. I need some info from many tables in an SQL system. I have ODBC working, I can use the data through relationships with FileMaker commands, and almost everything is fine. I'd like to count the records in various SQL tables without setting up relationships, and I've tried a large number of variations and searched the posts and the web with no luck. For example, in a calculation: ExecuteSQL ("Select Count (*) from Dflt.epCLC_Step3_RAN50";"";"") yields a ?. This is a simple table of one field of scrambled IDs, just 50 records. Eventually, I plan to have the table specified by a FileMaker field, to present record counts for different tables that might be of interest. What do I need to do to get a record count without establishing a relationship?
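The obstacle in this exchange is a general SQL rule: `?` parameter markers bind values, not identifiers, so a table name passed as an argument cannot work and must instead be quoted into the query text (which is what FileMaker's Quote function helps with). The same restriction exists in most SQL APIs; here is an illustrative Python/sqlite3 sketch of the pattern, with the table contents invented to echo the 50-record example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE "epCLC_Step3_RAN50" (id TEXT)')
conn.executemany('INSERT INTO "epCLC_Step3_RAN50" VALUES (?)',
                 [(str(i),) for i in range(50)])

table = "epCLC_Step3_RAN50"  # e.g., read from a field / drop-down list

# A ? marker here would fail: parameter binding is for values only.
# The identifier must be spliced into the statement text, safely
# double-quoted (doubling any embedded quotes), analogous to wrapping
# the TO name with FileMaker's Quote() before concatenating.
quoted = '"' + table.replace('"', '""') + '"'
count = conn.execute(f"SELECT COUNT(*) FROM {quoted}").fetchone()[0]
```

Building the identifier this way also keeps a user-supplied table name from injecting extra SQL into the statement.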
  8. The filter was hard-coded into the portal, but I was trying to get away from using that portal. Good observations. Thank you.
  9. OK, that is what I thought: I have to add the additional field to Table A to accomplish the relationship, whereas with the portal I could specify that in the portal filter. The instability I mentioned was due to the fact that it is a served file, and I keep forgetting that globals can easily disappear. I guess globals in this case are best set during an open script. The value in the global Course Type field in Table A is going to be pretty constant and only rarely changed by the end user. Thanks very much for your help.
  10. I'd be very happy with a filtered relationship, but I'm not sure how to best accomplish that. It was easy for the portal, but I don't see a similar mechanism without the portal. Is there?
  11. In layout A, I can readily display the desired field value (a date) from related Table B, using a filtered portal that displays the related records where a different field (Course Type) in Table B equals a particular string. The relationship involves multiple fields and is also sorted to display the desired date. That works fine. I need to use that date (obtained from Table B) in other calculations in layout A/Table A, so I need to place that date in a field in Table A, and I'd rather not use a script for this. -> Is it possible to write a calculation that would yield the same value (the date, in this case) that I see in the filtered portal? (I can do this if I set up fields in Table A that contain globally stored Course Type designators for each of the desired strings, and then set up TOs of Table B keyed on the designator plus the other parameters of the relationship. But this seems a bit unstable in this served file, and it also seems a bit messy, requiring the additional designator fields in Table A.)
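What the portal filter plus sort is computing above amounts to: take the related Table B rows, keep those whose Course Type equals the target string, sort by date, and return the top one. As an illustrative Python sketch of that selection logic (field names and sample rows are invented, and whether "desired" means latest or earliest is an assumption exposed as a parameter):

```python
from datetime import date

# Hypothetical related rows from Table B for the current Table A record.
related_b = [
    {"course_type": "Safety", "course_date": date(2014, 3, 1)},
    {"course_type": "CPR",    "course_date": date(2014, 5, 20)},
    {"course_type": "CPR",    "course_date": date(2013, 1, 15)},
]

def filtered_date(rows, course_type, latest=True):
    """Mimic the filtered, sorted portal: date of the first matching row."""
    matches = [r["course_date"] for r in rows
               if r["course_type"] == course_type]
    if not matches:
        return None  # no related record passes the filter
    return max(matches) if latest else min(matches)
```

In FileMaker terms this is the job a single calculation over the relationship would do; the sketch just makes explicit that the filter condition and the sort order are the only two inputs.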
  12. Right, I'll create the calculation fields and then determine the numbers in each state and avoid the excess finds that I was heading towards. Thank you.
  13. For some analyses, I have a script that works fine determining the number of records meeting various constraints, but the finds seem to slow things down. For example, out of ~3500 total records, I can narrow the found set to the ~40 records that correspond to one parameter (determined through some related tables) and then to the ~20 of those that meet a second criterion, e.g., a field called Due Priority contains "Overdue". The script uses combinations of finds, extend finds, and constrain finds to count the various subsets. Am I just being lazy?
     • Should I just add a calculation to each record and then sum up the new field with an aggregate or summary function to determine the counts for the subsets? It seems as if that might be faster, since it would be restricted each time to the initial found set of ~40 records.
     • Or is there another recommended philosophy for this?
     Thanks, Bruce
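The alternative being weighed above (classify each record once and tally the subsets in a single pass, instead of running a chain of Find / Extend / Constrain operations) can be sketched as follows. This is an illustrative Python sketch, with the field names and criteria invented to match the example:

```python
# One pass over the found set replaces repeated find operations:
# each record is classified once and every subset counter is updated.
from collections import Counter

def tally(found_set):
    counts = Counter()
    for rec in found_set:
        counts["total"] += 1
        if rec.get("parameter_ok"):                   # first criterion (~40 of ~3500)
            counts["parameter_ok"] += 1
            if rec.get("due_priority") == "Overdue":  # second criterion (~20 of ~40)
                counts["overdue"] += 1
    return counts
```

The FileMaker analogue is the approach suggested in the thread: a per-record calculation flag summed with an aggregate or summary field, so the cost is one evaluation per record rather than one find per subset.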
  14. Thanks all! I now have an understanding of context in Filemaker. I appreciate all the tips, Bruce
  15. What was mysterious to me was that the global field causing the relationship problem looked fine in a layout displaying that field, yet it eventually developed relationship problems, likely due, as you suggest, to setting that field's value differently while the database was single-user vs. reinstalled in FMS. But the field always looked OK.
     1. How do I "confirm that you are in the correct context when you update your global fields"? I'm not sure what is meant by context.
     2. So I can use global storage for a field in a relationship as long as I assign that field during an open script, and it should work if I:
        a. set the current term in the Prefs table, not globally stored;
        b. during the open script, assign that value from the Prefs table's current term field to a term field in an unrelated table, with that term field stored globally.
     Correct?