July 2, 2007 Hello everyone, I am compounding 2 fields (date, time) in order to create a duplicate-records flag. Question: by just doing a simple compound of c_flag = date & time, the result is e.g. 6/25/20079:43:13PM. Would it be better (or possible) to have the calculation field hold 625200794313PM instead? I guess I would be looking for a 'DateToText' conversion. Again, this compound is to be indexed, so my line of thought was that a text field without the '/' and ':' characters would 'index better'. Thoughts? Many thanks!
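For what it's worth, stripping the separators can make the compound less reliable, not more: once the '/' and ':' are gone, different date/time pairs can collapse into the same string. A quick sketch in Python (illustrative only, not FileMaker syntax):

```python
def strip_separators(date_str: str, time_str: str) -> str:
    # Concatenate date and time, then drop the "/" and ":" characters,
    # as proposed above.
    return (date_str + time_str).replace("/", "").replace(":", "")

# Two genuinely different dates collide once the separators are removed:
a = strip_separators("1/12/2007", "1:11:11AM")   # "112200711111AM"
b = strip_separators("11/2/2007", "1:11:11AM")   # "112200711111AM"
print(a == b)  # True -- the compound is no longer unique
```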
July 2, 2007 Author Hi comment, This gives the same result as the basic compound... not sure if it matters, but the 'date' and 'time' fields are stored as calc text. (BTW, thanks for Post #257075.) Thanks a bunch!
July 2, 2007 "This gives the same result as the basic compound" Not really - the result, if kept as type timestamp, is basically a number (the number of seconds elapsed since 1/1/0001 0:00:00). It is only DISPLAYED as a date & time concatenation. This should not only address your indexing concerns, it will also sort correctly. "not sure if it matters but 'date' and 'time' fields are stored as calc text" Yes, it matters. Why is this a good thing? The best way, I think, would be to change the calcs to result in date and time, respectively. In a pinch you could use: Timestamp ( GetAsDate ( PseudoDate ) ; GetAsTime ( PseudoTime ) ) but this relies on the file's date & time formats.
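The point that a timestamp is really just a number can be sketched in Python (a rough model of the behavior described above, not FileMaker code; the format string is an assumption based on the example values in this thread):

```python
from datetime import datetime

# A FileMaker-style timestamp is the count of seconds elapsed
# since 1/1/0001 0:00:00, stored as a single number.
EPOCH = datetime(1, 1, 1)

def to_timestamp(date_str: str, time_str: str) -> int:
    """Combine separate date and time strings into one numeric timestamp."""
    dt = datetime.strptime(date_str + " " + time_str, "%m/%d/%Y %I:%M:%S%p")
    return int((dt - EPOCH).total_seconds())

# Midnight on day one is second zero...
print(to_timestamp("1/1/0001", "12:00:00AM"))   # 0
# ...and later moments compare (and therefore sort and index) numerically:
print(to_timestamp("6/25/2007", "9:43:13PM") >
      to_timestamp("12/1/2006", "1:00:00AM"))  # True
```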
July 2, 2007 Author "but this relies on the file's date & time formats" Now it makes sense, thank you. After I ran a few tests, I realised that my compound would not be unique enough, so I may have to cram in a few more fields. Knowing this definitely helps. Thanks again!
July 2, 2007 Author Hi comment, Hopefully I can get your attention on this post again. After I added a few more fields and essentially 'broke' the date/time format, the timestamp approach no longer applies. However, I can still index the fields... but something tells me that this is not a strong index, especially when dealing with 100,000s of records. File attached. Any ideas? Thanks everyone for your attention to this post. All the best! TimeCompound.zip
July 2, 2007 Author Still the same issue of format and the ':' and '/' characters. Maybe I am just looking at it too deeply, as either way it can be indexed. Thanks!
July 2, 2007 I am guessing you want to flag duplicates based on the exact time, like this? TimeCompound.fp7.zip
July 2, 2007 Author Hi comment, That is exactly what I was trying to accomplish. After running a few examples, I was amazed that the log file (RawString) has multiple entries down to the millisecond. I may have to cram yet another field in there to make a 'unique id' out of it. This is great! Thanks again!!!
July 2, 2007 Ultimately, the question of what constitutes a duplicate is not a Filemaker question - it's a decision you need to make, based on your understanding of the meaning of the data. If you want to force a unique ID per record, use an auto-entered serial ID - this will be "unique" even if ALL other data is duplicated in another record. BTW, there's no need to concatenate fields - you can simply add more matchfield pairs to the relationship.
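The multi-matchfield idea is the general technique of keying on a tuple of fields instead of a concatenated string. A hypothetical Python sketch (field values invented for illustration):

```python
from collections import Counter

# Hypothetical log records: (date, time, milliseconds) triples.
records = [
    ("6/25/2007", "9:43:13PM", 120),
    ("6/25/2007", "9:43:13PM", 120),   # duplicate down to the millisecond
    ("6/25/2007", "9:43:13PM", 457),   # same second, different millisecond
]

# Key on the tuple of match fields -- no concatenation, no separator games.
counts = Counter(records)
dup_flags = [counts[rec] > 1 for rec in records]
print(dup_flags)  # [True, True, False]
```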
July 2, 2007 Author "BTW, there's no need to concatenate fields - you can simply add more matchfield pairs to the relationship." Yes, I do have to determine what constitutes dups... and FM opened up a whole other discussion. Strangely enough, the manufacturer of the device that produced these logs (RawStrings) was not even aware (or not admitting) that their 'reporting' was completely misleading. After I added the measure of milliseconds, I was still able to find skipped heartbeats in their logs and unjustified duplicates. Sort of re-inventing the wheel. Much obliged for your patience!