May 11, 2008 Hi: I have a FileMaker database with duplicate records. I am trying to find out how to write a script that will search for duplicates based on field A and insert the number of duplicates into field B. Any help is appreciated. Thanks, Gary
May 11, 2008 Well, we'd have to start with: how do you know it's a duplicate record? That seems like a silly question, but unless the primary key is the same, you'll need to define the rules for what counts as a duplicate. I've been down this road, and ideally you end up in a list view and have a human delete the duplicates. Also, if a record has dependent records, you need a mechanism for handling orphans and reassigning them to the correct parent. For example, say you have duplicate client records, and each client has related invoices. If you delete a client, you don't want to delete the invoices (watch for cascading deletes!). You want to reassign them to the client record that you are keeping, right?
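A minimal sketch of that reassignment step, assuming an Invoices table with a ClientID foreign key and script variables holding the duplicate and surviving client IDs (all names here are hypothetical), using standard script steps:

# Reassign orphaned invoices from the duplicate client to the survivor
Enter Find Mode [ Pause: Off ]
Set Field [ Invoices::ClientID ; $duplicateClientID ]
Perform Find []
Replace Field Contents [ Invoices::ClientID ; $survivorClientID ]

Only after that kind of reassignment is it safe to delete the duplicate client record without orphaning its invoices.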
May 11, 2008 Author For reasons that would be too long to explain, we're not looking to delete the record. We just want to know how many occurrences there are.
May 11, 2008 Author The record may not be a duplicate, just the value in the field. An example would be a duplicate phone number.
May 11, 2008 Define a self-join relationship based on matching field A. Then you can calculate how many there are of each with Count ( related::field A ).
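To make that concrete, here is a minimal sketch assuming a Contacts table, a Phone field standing in for field A, and a self-join relationship named Contacts_byPhone that matches Contacts::Phone to itself (all names hypothetical). Field B can then be defined as an unstored calculation:

// count of records sharing this record's phone number (includes this record)
Count ( Contacts_byPhone::Phone )

If field B needs to hold a stored number rather than a live calculation, a script can find the records of interest and use Replace Field Contents to write the result of that same Count() expression into field B.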
May 11, 2008 For reasons that would be too long to explain, we're not looking to delete the record. We just want to know how many occurrences there are. Like it or not, the devil is in the details. To get more than a series of stabs in the dark, repliers need both context and purpose; otherwise you might as well just say, "I insist on feeling important!" Solving technical problems is an entirely different game. Keep abstractions out of your explanations and don't beat around the bush! --sd
May 11, 2008 Yes, I've spent a lot of time on this forum guessing at needs, and I really am trying to avoid doing so. I do my best, but I find that I'm usually way off with my guesses. It is easier to just answer the question asked, but it is more helpful, especially to beginners, to probe for the "why", because it usually uncovers data modeling errors. In your case, finding duplicates typically leads to deleting, and I wanted to warn you that deleting is a big deal, with considerations you should be aware of.