susan Posted November 2, 2001 I have a file with many, many records, all of which have at least one duplicate. I want to be able to find all the unique records and then delete the duplicates. Is there a quick and dirty way to do this? Susan
LeCates Posted November 2, 2001 Hi Susan, You probably already know that you can use the ! find operator to find duplicates in a field. Then you can sort your found set by the values in that same field -- thus lining up all the duplicates. Then you can use a script that loops through the found set, omitting the first instance of each value but leaving the duplicates. After the loop gets to the bottom of the list, you can delete the remaining found set. For a quick reference on using Omit in a looping script, check out this thread: http://www.fmforums.com/cgi-bin/ultimatebb.cgi?ubb=get_topic&f=15&t=000821 Hopefully this is enough to get you started. Good luck!
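In case it helps, here is a rough sketch of what that omit-then-delete loop might look like in script steps. The field names are only placeholders: MatchField stands in for whatever field you are checking for duplicates, and g_LastValue would be a global text field you define to remember the value of the previous record.

Enter Browse Mode
Set Field [ g_LastValue, "" ]
Perform Find [ Request 1: MatchField ! ] [ Restore find requests ]
Sort [ Sort Order: MatchField (Ascending) ] [ Restore sort order, No dialog ]
Go to Record/Request/Page [ First ]
Loop
    If [ MatchField = g_LastValue ]
        Comment [ "A duplicate -- leave it in the found set and move on" ]
        Go to Record/Request/Page [ Next, Exit after last ]
    Else
        Comment [ "First instance of this value -- remember it and omit it" ]
        Set Field [ g_LastValue, MatchField ]
        Omit Record
    End If
End Loop
Delete All Records [ No dialog ]
Show All Records
Set Field [ g_LastValue, "" ]

After the loop, the found set holds only the extra copies, so Delete All Records removes just the duplicates and the omitted first instances survive. Treat this as a starting point rather than a finished script -- for example, it assumes the ! find actually returns records.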
bobsmith Posted November 3, 2001 As I read it, Andrew's solution works if there is only one duplicate for any record. Here is a sample of a script I use to delete duplicates where there is/may be more than one duplicate.

Enter Browse Mode
Perform Find [ Request 1: master field ! ] [ Restore find requests ]
Sort [ Sort Order: master field (Ascending) ] [ Restore sort order, No dialog ]
Go to Record/Request/Page [ First ]
Set Field [ g_duplicates, master field ]
Omit Record
Loop
    If [ master field = g_duplicates ]
        Delete Record/Request [ No dialog ]
    Else
        Set Field [ g_duplicates, master field ]
        Omit Record
    End If
    Exit Loop If [ Status(CurrentFoundCount) = 0 ]
End Loop
Set Field [ g_duplicates, "" ]
Show All Records
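One detail worth noting, if I'm reading the script right: g_duplicates needs to be defined as a global text field, since it has to keep its value as the script moves from record to record while comparing and deleting.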