October 23, 2006 Hello Everyone,

My current database project is made up of read-only data imported from fixed-field text files. Every field is a calculation that grabs certain characters from the line of text using the Middle function, so no data is modifiable.

My problem is that most of these text files contain an even 15,000 lines, or records. One import, however, shows 15,001 records even though its corresponding text file has only 15,000 lines (i.e., a duplicate record snuck in somehow). Without visually scanning all records, is there an easy way to find the duplicate?

I'm currently experimenting with validation on the import field to ensure that each line of text is unique, but I'm afraid this might slow an import that already takes 10 minutes, though it may be a necessary failsafe. That only helps future imports; I still need a way to identify the duplicate that has already been created. I know I can find the imported records by their import number (a field created in the table), delete them, and re-import, but that seems a waste if there might be a way to capture just the duplicate and delete it.

Any ideas will be appreciated greatly. Thank you in advance for your time and attention. Mac Hammer
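For readers unfamiliar with fixed-field parsing, the extraction described above can be sketched in Python (illustrative only; the field positions and the sample line are made up, not taken from the poster's files):

```python
def middle(text, start, size):
    """Rough Python equivalent of FileMaker's Middle(): 1-based start position."""
    return text[start - 1:start - 1 + size]

# Hypothetical fixed-field line: positions 1-4 hold an ID, positions 5-9 a quantity.
line = "ABCD12345XYZ"
record_id = middle(line, 1, 4)  # "ABCD"
quantity = middle(line, 5, 5)   # "12345"
```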
October 23, 2006 Go to Find mode, display the status area, and pop up the list of symbols. "!" is the one you want.
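The "!" operator finds every record whose value in the searched field is not unique. As a rough illustration of what that found set contains (Python, not FileMaker; the sample values are made up):

```python
from collections import Counter

def find_duplicates(values):
    """Return every value that occurs more than once, like a "!" find in FileMaker."""
    counts = Counter(values)
    return [v for v in values if counts[v] > 1]

lines = ["rec A", "rec B", "rec A", "rec C"]
# Note that BOTH copies of "rec A" land in the result, just as "!" returns
# both the original and its duplicate.
print(find_duplicates(lines))
```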
October 23, 2006 Another thing to check is whether you have a blank record that was accidentally created before or after the import. A quick way to check is to unsort the table, then look at the first and last records.
October 24, 2006 Author Thank you all. I had already sorted to look for a blank record, but I wasn't aware of the "!" symbol in a Find request. I've used it, and it found duplicates of another variety not related to my exact records, but I've got the right tool now; I just need to search the right field and I'll be set. Gracias Amigos! Mac Hammer
October 30, 2006 Hey guys, I have a similar problem: I want a script that will automatically delete any duplicate records in a specific table. I'm sure it can be done; I just can't figure it out. I've been looking around all the posts, and this was the closest one I found related to what I need to accomplish. Thanks ahead of time.
October 30, 2006 Author Although I haven't tried this, it seems easy enough. First, perform a find using the "!" symbol to see a list of your duplicate records. Then build a script that includes a find, and specify your last find as the find criteria. Finally, issue the script command to delete the found set. One caution: "!" finds every record that shares a duplicated value, including the first occurrence, so omit the copy you want to keep before deleting. You might want to build a custom dialog or four into it that asks if you "really, really, really, really" know what you are doing before you allow the records to be deleted. }:| Best of luck, Mac
October 30, 2006 Since you guys are on FM 8.5: do a search in the Help section for "Finding duplicate values", including the quotes; it will give you some info. There should also be lots of past posts in these forums about omitting / deleting / marking duplicate records. Here is one technique:

Sort Records [ No Dialog; fieldWhatever ]
Go to Record [ First ]
Set Variable [ $DupCheck; fieldWhatever ]   (set a global field instead if using FM 7 or earlier)
Go to Record [ Next ]
Loop
    If [ fieldWhatever = $DupCheck ]
        Omit Record
    Else
        Set Variable [ $DupCheck; fieldWhatever ]
        Go to Record [ Next; Exit After Last ]
    End If
End Loop
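The script above sorts the records, keeps the first record of each run of equal values, and omits the later repeats. A minimal Python sketch of the same logic (the record dictionaries and the key name are hypothetical stand-ins for FileMaker records and fieldWhatever):

```python
def omit_duplicate_records(records, key):
    """Sort by key, keep the first record of each run, omit later repeats."""
    dup_check = object()  # unique sentinel, like an unset $DupCheck
    kept = []
    for rec in sorted(records, key=lambda r: r[key]):
        if rec[key] == dup_check:
            continue  # "Omit Record": drop this repeat from the found set
        dup_check = rec[key]  # "Set Variable [$DupCheck; fieldWhatever]"
        kept.append(rec)
    return kept

deduped = omit_duplicate_records([{"id": 2}, {"id": 1}, {"id": 2}], "id")
print(deduped)  # one record per distinct id, sorted
```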
October 31, 2006 I hope this helps. I am currently using this script to find dups. It's not perfect, but it works. If any readers can suggest improvements to the script, please let me know. wps_Contacts.pdf