
This topic is 7088 days old. Please don't post here. Open a new topic instead.

Recommended Posts

Posted

Hi All,

I know this has been asked in the past, but I searched the site and can't find a straightforward answer.

I have a database with several thousand records. One field is called AT Number. By design, many records share the same value in this field. Most of the time that duplication is important (it makes sense, honest!), but now I need to display only one occurrence of each AT Number.

Is there a way I can delete or omit records that have this duplicate AT Number?

Thanks for helping if you can. And again, sorry for asking something I know has probably already been addressed.

Thanks,

Steve

Posted

but now I have a need to only display 1 occurrence of each of the AT Numbers.

Hi Steve,

The best way to approach this would depend upon your need. How will this information be viewed or used? You can use the ValueListItems() function or a self-join on ID with Max() serial to identify only one of each record.

There have been many posts on 'finding unique.' If you want a report, you can create a columnar list layout with grouped data (leading part) by your AT field (be sure to sort by the AT field). You can delete the body entirely if you have no need of the detail. The best solution for you will depend upon what you're trying to accomplish with the information.

LaRetta

Posted

Hi LaRetta,

What I really want to do is make a backup of the DB in question, permanently delete all records from before the start of the year in the new copy, and then delete all AT #'s except for the very first occurrence. In the original design it was necessary to have this number in more than one record, but now I need to do some statistical analysis based on only the first occurrence of that number. I could just sort and manually remove them (only about 3,000 records since the start of the year), but I was hoping for a quicker way to do this!

Steve

Posted

Hi Baylah,

I suggest you use a self-join. Create another table occurrence and join on the AT number using =. We'll call this join UNIQUE.

How you define the 'first' record is up to you, but here are some fields to consider. You need a number or date field because the Min() function only works on numbers (and dates, which are stored as numbers). We'll call this field FIRST:

Creation Date

Activity Date

Unique Serial

If you don't have a field you can use, create a calculation (number) with: Get(RecordID). Then create a calculation (number) with:

FIRST = Min(UNIQUE::FIRST)

This will produce a 1 on the first record of each matching AT entry. Then perform a find for 0 and delete those records. Back up first.
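Outside FileMaker, the logic of this self-join flag can be sketched in Python. This is only an illustration of the technique, not FileMaker code; the field names (serial, at) and sample data are made up:

```python
# Each record carries a serial number (FIRST) and an AT number.
# The self-join UNIQUE matches records on the AT number, so
# Min(UNIQUE::FIRST) is the smallest serial among records sharing
# that AT number. The flag FIRST = Min(UNIQUE::FIRST) is 1 only
# for the first record of each AT group.
records = [
    {"serial": 1, "at": "A100"},
    {"serial": 2, "at": "A100"},
    {"serial": 3, "at": "B200"},
    {"serial": 4, "at": "A100"},
    {"serial": 5, "at": "B200"},
]

# Min(UNIQUE::FIRST) per AT number -- the related set of the self-join
first_serial = {}
for r in records:
    at = r["at"]
    if at not in first_serial or r["serial"] < first_serial[at]:
        first_serial[at] = r["serial"]

# FIRST = Min(UNIQUE::FIRST), evaluated per record
for r in records:
    r["flag"] = 1 if r["serial"] == first_serial[r["at"]] else 0

# "Perform a find for 0 and delete them" leaves one record per AT number
kept = [r for r in records if r["flag"] == 1]
```

Here the find-for-0-and-delete step leaves only serials 1 and 3, i.e. the first occurrence of each AT number.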

You may want to leave this join here for other uses in your solution - or for yearly 'cleansing.'

LaRetta

  • 1 month later...
Posted

I hate to jump in, but I tried this same thing and instead of Boolean 1 and 0 I get the number of the first unique record. Changing the field format to Boolean results in all values = Yes.

This seems to correctly produce the number of the first unique record, but I am unsure how to get the Boolean 1 and 0 result you mention.

Any idea what I might be doing wrong here?

thanks

stimpy

Posted

FIRST = Min(UNIQUE::FIRST) will not return the number of the first unique record. It sounds as if you may have omitted the "FIRST =" portion of the calculation.

You can also use FIRST = UNIQUE::FIRST, which will return 1 for the first unique record according to the sort order of the UNIQUE relationship and will be faster than using Min.

Posted (edited)

Here's a quick and dangerous solution which I have just tested in connection with a similar thread, so back up first!

Make two clone copies of your file. In the main file, sort the table so that the first record for each duplicated ID (AT number) is the one you want to keep. Now import that file into the first empty clone; this changes the (unsorted) creation order of the records to the sort order you just specified. In the second, still-empty clone, set the duplicated ID field to validate as Unique, Not empty, Always, with user override disallowed, and import the data from the now-sorted copy. FMP will keep the first record of each ID and discard the rest on validation.
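The effect of that validated import can be sketched in Python. Again, this is just an illustration of what the Unique validation does during import, with made-up field names and data:

```python
# Records arrive in the sorted order of the source copy. The Unique
# validation rejects any incoming row whose ID is already present,
# so the first row per ID in import order survives and the rest
# are discarded.
sorted_rows = [
    {"at": "A100", "date": "2005-03-01"},  # first A100 in sort order: kept
    {"at": "A100", "date": "2005-01-15"},  # fails Unique validation
    {"at": "B200", "date": "2005-02-20"},  # first B200: kept
    {"at": "B200", "date": "2005-02-01"},  # fails Unique validation
]

seen = set()
imported = []
for row in sorted_rows:
    if row["at"] in seen:
        continue  # validation failure: record discarded on import
    seen.add(row["at"])
    imported.append(row)
```

This is why the sort order matters: whichever record comes first per ID in the source copy is the one that survives the import.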

EDIT reading through this again you don't really need two clones do you? .. or do you?

Edited by Guest
Posted

Interesting method (though I wouldn't call it quick...).

A couple of notes:

1. I don't think you need the intermediate file. FileMaker recognizes non-unique values in any order.

2. Unique + Validate Always are the key here. AFAICT, the other options do not matter.

Posted

The purpose of the intermediate file was to ensure that the first value picked up was the one you wanted, but as you say it probably isn't needed. I tested this with a copy of an invoices file and tried to pick up the most recent invoice for each customer. My first try picked up the oldest invoice for each customer, which is why I went through the rigmarole of two copies: one to sort and one to filter out the duplicates.

Posted

Note that the same goes for the found set. If the source file has any open windows, the found set from the frontmost window will be imported, in the sort order of that window. If not, all records will be imported, in order of creation (that last statement is untested).

  • 3 weeks later...
  • 2 weeks later...
Posted (edited)

I can't figure out what I am doing wrong. I was able to identify duplicates, but when I go to do a find, no records are displayed. The self-join doesn't seem to be working either.

SELFJOIN.zip

Edited by Guest
Posted (edited)

I'm returning your file.

You needed to join Major to Major. Then your FIRST calculation was changed to:

serial = unique::serial

Now if you view the table, a 1 is produced for the first record. Note that record 7 doesn't display a 1 even though you said it was the newest; it was NOT. Since records are added in their natural (unsorted) order, the first record of each Major will always be the first, right?

UPDATE: Ah, I see you are looking for the LAST even though you said FIRST? Then use:

serial = Max( unique::serial)

But that is incorrect by your example. You say the 'first' record you want is, ummmm, the first and NOT the last. You have confused me royally, but hopefully the theory of the above calcs will clarify whatever you need.

LaRetta

SELFJOINRev.zip

Edited by Guest
Posted

I got everything to work now. Thanks, LaRetta; sorry about the mental anguish. Here is the file I came up with. The problem with the file I attached was that it had some other stuff I was playing around with to get the same result. It didn't matter if I had the first or the last record; I just had to unload those in between. This file was written in FileMaker 4.0. Using version 7 I didn't have to do all this, but I thought someone else might like to see it in its rudimentary version. The application counts the total number of members within a given zip code.

example.zip
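The counting step in that application can be sketched in Python, with made-up names and data, purely to show the idea of tallying members per zip code once the duplicates have been unloaded:

```python
# After deduplication, each member appears once; count members per
# zip code by tallying into a dictionary keyed on the zip field.
members = [
    {"name": "Ann", "zip": "97301"},
    {"name": "Bob", "zip": "97301"},
    {"name": "Cid", "zip": "10001"},
]

counts = {}
for m in members:
    counts[m["zip"]] = counts.get(m["zip"], 0) + 1
```

In FileMaker terms this corresponds to a summary count grouped by the zip code field; the sketch just shows the tally the report produces.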

