FMForums.com


Removing duplicate values in a calculation


I thought there would be a simple way to do this, but I cannot find it.

Basically I am importing large numbers of records at a time, converting the data, and then exporting the result to another program. I have a calculated field that produces a list of values for each record, some of which may be duplicates.

I want to remove the duplicate values from the field, preferably with a calculation rather than by creating a value list and scripting a Replace Field Contents script step, i.e. I would like to complete all calculations during the import process.

During import, Field A is a calculated field that may well contain values such as:

ACPARGF

ACFRAGF

ACPARGF

ACLONGF

ACZRHGF

ACLONGF

ACPARGF

ACPARGF is repeated 3 times and ACLONGF twice. The result I am trying to achieve would be a second calculated field (Field B) containing only the unique values from Field A:

ACPARGF

ACFRAGF

ACLONGF

ACZRHGF
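Outside FileMaker, the Field A → Field B transformation described above is just order-preserving deduplication of a return-delimited list; a minimal Python sketch, using the example values from the thread:

```python
def unique_values(field_a: str) -> str:
    """Return the values of a return-delimited list with duplicates
    removed, keeping the first occurrence of each value in order."""
    seen = set()
    result = []
    for value in field_a.split("\n"):
        if value and value not in seen:
            seen.add(value)
            result.append(value)
    return "\n".join(result)

field_a = "ACPARGF\nACFRAGF\nACPARGF\nACLONGF\nACZRHGF\nACLONGF\nACPARGF"
print(unique_values(field_a))
# ACPARGF
# ACFRAGF
# ACLONGF
# ACZRHGF
```

In FileMaker itself the same first-occurrence-wins logic would have to be expressed as a recursive custom function or a script, since a plain calculation has no loop construct.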

I tried using the new List function in FM 8.5, but that does not filter out duplicate values. I know I can create a related table (each record has a unique serial number) and use a value list, but I cannot find a way to insert this during import. The best I have come up with so far is to import the data, then use a scripted Replace Field Contents with ValueListItems to enter the values in Field B.

It seems faster to perform the calculations during the import process than by script afterwards, and the time difference matters because of the sheer volume of data being run through the program: on average there are 250 files, each with between forty and fifty thousand records, i.e. on the order of 10 million records per batch, so any small time savings become significant over each data batch.

Hey,

Make sure that your table has a Serial Number

You could create a self-join based on the calculated field that you are checking for duplicates (you may want to switch it to an auto-enter calculated text field instead of an unstored calculation field, for indexing purposes).

So you'll have the two table occurrences Table1 and Table1SelfJoin.

Then create a DupeCheck calculation in Table1 that is as follows:

If ( SerialNumber = Table1SelfJoin::SerialNumber ; 1 ; 0 )

Then if you do a find on DupeCheck = 0, those are all the duplicate records; each value's first occurrence evaluates to 1.
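The first-match behavior behind this calculation can be sketched in Python. The assumption (which the technique relies on) is that the self-join's first related record for any key value is the earliest-imported one, i.e. the one with the lowest serial number:

```python
def dupe_check(records):
    """records: list of (serial_number, key_value) tuples in serial order.
    Returns {serial_number: flag}, mirroring
    If ( SerialNumber = Table1SelfJoin::SerialNumber ; 1 ; 0 ):
    1 for the first occurrence of each key_value, 0 for later duplicates."""
    first_serial = {}  # key_value -> serial number of its first occurrence
    flags = {}
    for serial, key in records:
        first_serial.setdefault(key, serial)
        flags[serial] = 1 if first_serial[key] == serial else 0
    return flags

records = [(1, "ACPARGF"), (2, "ACFRAGF"), (3, "ACPARGF"), (4, "ACLONGF")]
print(dupe_check(records))  # {1: 1, 2: 1, 3: 0, 4: 1}
```

Record 3 repeats ACPARGF, so its flag is 0 and a find on DupeCheck = 0 would isolate it.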

Does that make sense? Let me know if you need some type of sample file.

Hope that helps a bit!

Martha
