Sholly

Members · Content Count: 32 · Community Reputation: 0 Neutral · Rank: member
  1. I am trying to export a field that is a return-separated list. It is a list of all the chimpanzees who are together in a group at a given time. One particular record might look like this in FileMaker:

     BAR
     JOL
     LIL
     BT
     BS

     When I export it to Excel it looks like this: BARJOLLILBTBS. But I need it to look like this when it gets to Excel: BAR JOL LIL BT BS. Is there a way to do this?
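Inside FileMaker, the usual fix is to export a calculation field such as Substitute ( GroupList ; ¶ ; " " ) instead of the raw field, so the carriage returns become spaces before Excel ever sees them. If the data has already been exported, the same cleanup can be done afterwards. A minimal Python sketch (the function name, and the assumption about which separator characters survive the export, are mine):

```python
def flatten_return_separated(value, sep=" "):
    """Replace the line separators in a FileMaker multi-line value.

    FileMaker separates list entries with carriage returns; depending
    on the export path they may arrive as \\r\\n, \\r, \\n, or a
    vertical tab (assumption: normalize all of them to `sep`).
    """
    for token in ("\r\n", "\x0b", "\r", "\n"):
        value = value.replace(token, sep)
    return value.strip()

print(flatten_return_separated("BAR\rJOL\rLIL\rBT\rBS"))  # BAR JOL LIL BT BS
```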
  2. Hi again. I tried Tominator's solution (I want to be able to export into statistical software, so a dedicated field for the baseline is best) and I'm doing something wrong. It must be with my relationship, but I just can't see the problem. Would either of you guys be willing to take a look? I have attached a simplified version of my actual file. As far as I can tell, it mirrors Tominator's solution exactly. The only difference I can see is that "value" in Tominator's solution is a number and mine is a calculation. Not sure if that makes a difference. Thanks! Cortisol_Basel
  3. Hmm, the TRIMMEAN is interesting. I think it might actually have similar logic, though. Again, I'm no statistics expert, but it seems that in a normal distribution ±2 SD is equivalent to a 95.45% interval around the mean (close to the 95% confidence interval most people use). Maybe this is the logic behind the 2SD method?
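That reading checks out: for a normal distribution, the probability mass within ±2 SD of the mean is erf(2/√2) ≈ 95.45%, which is why ±2 SD is commonly treated as a rough 95% cutoff. A quick check in Python:

```python
import math

# For a standard normal, P(|Z| <= k) = erf(k / sqrt(2)).
coverage_2sd = math.erf(2 / math.sqrt(2))
print(f"{coverage_2sd:.4%}")  # 95.4500%
```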
  4. Thank you Tominator. I will try to integrate this into my database and see how it goes!
  5. No, no function in Excel. Just calculating the numbers, then deleting those that are out of range "by hand".
  6. OK, you're totally right. And re-reading the references on the method, it looks like I am wrong that you need to/will ever get down to one number. The idea is obviously to remove all outliers from your baseline calculation. So, in your scenario, at 888 you would just take the mean and that would be your baseline. It probably only seemed to work for me because I have tiny sample sizes (75 max per individual) and non-normally distributed values. Thanks, once again, for solving a problem that I didn't even know I had yet!
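The loop described above (drop everything outside mean ± 2 SD, recompute, repeat until nothing else drops out, then take the mean of the survivors as the baseline) can be sketched outside FileMaker like this. Function and parameter names are mine; in FileMaker, this is the logic a recursive custom function would implement:

```python
import statistics

def trimmed_baseline(values, k=2.0, max_iter=100):
    """Iteratively remove values outside mean +/- k*SD, then return
    the mean of the survivors as the baseline.

    A sketch of the forum's "2SD method"; names are hypothetical.
    """
    vals = list(values)
    for _ in range(max_iter):
        if len(vals) < 2:
            break  # SD is undefined for a single value
        m = statistics.mean(vals)
        sd = statistics.stdev(vals)
        kept = [v for v in vals if abs(v - m) <= k * sd]
        if len(kept) == len(vals):
            break  # converged: no outliers left to remove
        vals = kept
    return statistics.mean(vals)

print(trimmed_baseline([10, 11, 9, 10, 12, 100]))  # 10.4
```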
  7. I am no expert in statistics either! I just do what I'm told by my advisors. lol. I will try what you are suggesting in FileMaker. I have done it in Excel and it works for individuals who have larger sample pools: I can get down to one number relatively quickly. The problem arises in individuals who only have 5-10 samples or so. I often end up with two or three numbers remaining that are within the "acceptable range" (not +/- 2 SD), and then I have to go and just take the mean of those numbers... and that becomes the baseline. I'm thinking it may just be easier to do this in Excel.
  8. There will be more samples in the future, but this is the kind of thing that may only need to be re-calculated once every few years, not on the fly. Regarding a reference for my method: I think you mean the method of calculating baseline hormone levels (but maybe you mean who told me a custom function would be good). The developer I've been working with a bit told me about the recursive function. As far as the method for calculating the baseline goes, I do my laboratory work at the Smithsonian and it is standard practice there. We usually cite the following pubs, but there are more.
  9. Hi again. So, I've been told I need to use a "recursive custom function" to accomplish a task, and I'm wondering if that sounds right. I have about 1500 records that represent individual chimpanzee urine samples. Each record has a field for "cortisol". This represents the amount of the hormone cortisol found in the sample. I am trying to establish a baseline cortisol level for each individual chimpanzee. I currently have the "urine sample" table linked to the "chimp" table via "chimpID#", and the new "baseline" field will live in the "chimp" table. Basically, what I need the "baseline"…
  10. LaRetta, I think this must be a big problem for me. I do this frequently. Comment, I believe you are right about having too much data in the layout. Also, thank you for your answer about the auto-enter calcs. I will NOT be using that as a solution to this particular issue. :)
  11. I'm just trying to speed up performance. I've got quite a few calc fields in my layout, and many of them are dependent on one another... so things are becoming very slow. As I understand it (from looking on here and elsewhere), you can speed up performance in these situations by:
      1. Not including all the calc fields in the layout
      2. Changing calc fields to "stored"
      3. Writing a script to do the calcs and enter them all at once
      #1 is a possibility for me if I mess around with my layouts a bit. #2 I know how to do, so it seems very attractive, though I want the field to auto-update…
  12. This is a simple question, but searching the internet and looking at my books have yet to give me a simple answer. If I set a field as an auto-enter calc field (instead of a regular/unstored calc) and uncheck "Do not replace existing value", will FileMaker automatically update the value when a related field is modified? I just want to make sure I don't stick myself with a field that won't re-calculate if/when I change a related value in my database. Thanks!
  13. Once again, you have solved my problem. It worked like a charm. Thanks!
  14. Hmm, the "GetValue..." solution isn't working. I currently have the calc below, but it isn't giving me a real number for everything, because it is finding the first value, and not the first NON-EMPTY value. And for some records, the first value is empty.

      ( Last ( Behavior to Follow by Date ID#::Timestamp_Begin ) - ( Behavior to Follow by Date ID#::Timestamp_Begin ) ) / 60

      When I change it to incorporate comment's solution I get unusable values for all records. My adjusted calc that doesn't seem to work at all looks like this:

      ( Last ( Behavior to Follow by Date ID#::Timestamp_Begin )…
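For the underlying problem here (the first entry of the related list can be blank, so grabbing position 1 fails), the fix is to skip empty entries rather than always take the first one. In FileMaker terms that means walking the value list until GetValue returns something non-empty; the logic, sketched in Python with a hypothetical helper name:

```python
def first_nonempty_value(return_separated):
    """Return the first non-blank entry of a return-separated list.

    Mirrors scanning GetValue(list; i) for i = 1, 2, ... until the
    result is non-empty (helper name is mine).
    """
    for entry in return_separated.split("\r"):
        if entry.strip():
            return entry
    return ""

print(first_nonempty_value("\r\r08:15:00"))  # 08:15:00
```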