
Recommended Posts

Posted

I thought there would be a simple way to do this, but I cannot find it.

Basically, I am importing large numbers of records at a time, converting the data, and then exporting the result to another program. I have a calculated field that produces a list of values for each record, some of which might be duplicates.

I want to remove the duplicate values from the field, preferably with a calculation rather than by creating a value list and scripting a Replace Field Contents step... i.e. I would like to complete all calculations during the import process.

During import, Field A is a calculated field that may well contain values such as:

ACPARGF

ACFRAGF

ACPARGF

ACLONGF

ACZRHGF

ACLONGF

ACPARGF

ACPARGF is repeated three times and ACLONGF twice. The result I am trying to achieve would be a second calculated field (Field B) which would contain only the unique values from Field A:

ACPARGF

ACFRAGF

ACLONGF

ACZRHGF

I tried using the new List function in FM 8.5, but that does not filter out duplicate values. I know I can create a related table (each record has a unique serial number) and use a value list, but I cannot find a way to insert this during import... The best I have come up with so far is to import the data and then use a scripted Replace Field Contents with ValueListItems to enter the values in Field B.
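
One idea I have been toying with, if FileMaker Advanced is available to define custom functions, is a small recursive custom function that walks the list and keeps only the first occurrence of each value, so Field B could stay a plain calculated field. A rough, untested sketch (the function and parameter names are just placeholders):

// Custom function: UniqueList ( remainingList ; resultList )
// Returns resultList plus every value of remainingList that is not already present,
// keeping the first occurrence of each value in its original order.
Case (
    ValueCount ( remainingList ) = 0 ;
        resultList ;

    Let ( [
        v    = GetValue ( remainingList ; 1 ) ;
        rest = MiddleValues ( remainingList ; 2 ; ValueCount ( remainingList ) - 1 ) ;
        // append v only if it is not already in resultList
        new  = If ( IsEmpty ( FilterValues ( resultList ; v ) ) ;
                    List ( resultList ; v ) ;
                    resultList )
    ] ;
        UniqueList ( rest ; new )
    )
)

Field B would then simply be defined as UniqueList ( Field A ; "" ).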

It seems faster to perform the calculations during the import process than by script afterwards. The time difference matters because of the sheer volume of data being run through the program: on average there are 250 files, each with between forty and fifty thousand records... i.e. up to 10 million records per batch, so any small time savings become significant over each data batch.

Posted

Hey,

Make sure that your table has a Serial Number field.

You could create a self-join based on the calculated field that you are looking for duplicates of (you may want to switch it to an auto-enter calculated text field instead of a calculation field, for indexing purposes).

So you'll have the table occurrences Table1 & Table1SelfJoin.

Then create a DupeCheck calculation in Table1 that is as follows:

If ( SerialNumber = Table1SelfJoin::SerialNumber ; 1 ; 0 )

Then if you do a find on DupeCheck = 0, those are all the records that are duplicates.
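
If you want to script that find, something like this should do it (just a sketch; swap in your real table and field names):

Enter Find Mode [ ]
Set Field [ Table1::DupeCheck ; 0 ]
Perform Find [ ]
# the found set now contains only the duplicate records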

Does that make sense? Let me know if you need some type of sample file.

Hope that helps a bit!

Martha
