

Recommended Posts

Posted

I'm in advertising and I wrote a database to track which products fall on which pages of a book. I have my employees generate an XML report that contains the Job Number, the Page Number and the SKUs. I import that into FileMaker, and now I can see where specific SKUs end up.

Later, my customer makes changes and moves SKUs around. I have my employees generate a new XML report, which I import. As you can imagine, I'm creating a lot of dupes, not to mention disrupting my data integrity.

I need a way of importing the XML and having FileMaker keep only the newest set of records for any given page. I'm thinking of a few ways to do it, but they feel a bit clunky... I wonder if anyone can suggest an elegant way of getting there.

My way is essentially to do 2 imports. The first would be to get the Job Number and Page from the XML. I would then do a search to see if I already have records for that page of that job, and delete them if I do. I would then do the second import like normal.
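In rough script steps, the idea might look something like this (the table, layout and field names are just placeholders, and it assumes the staging import contains a single Job Number/Page pair):

Set Error Capture [ On ]
# Pass 1: import only Job Number and Page into a staging table
Import Records [ "report.xml" ; Target: Staging ; Add new records ]
Set Variable [ $job ; Value: Staging::JobNumber ]
Set Variable [ $page ; Value: Staging::PageNumber ]
# Find and delete any existing records for that page of that job
Go to Layout [ "Products" ]
Enter Find Mode [ ]
Set Field [ Products::JobNumber ; $job ]
Set Field [ Products::PageNumber ; $page ]
Perform Find [ ]
If [ Get ( FoundCount ) > 0 ]
  Delete All Records [ No dialog ]
End If
# Pass 2: the normal, full import
Import Records [ "report.xml" ; Target: Products ; Add new records ]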

By the way, I had written a process to identify duplicate SKU records, then delete the older one, but then I remembered that doesn't really accomplish the main goal. If page 10 has 5 SKUs on it, I want to end up with 5 records for page 10 of Job X. If the customer removes a SKU at some point, I want to have 4 records. Looking for dupes at the SKU level doesn't help when SKUs are added or removed. So now I'm back in search of a way to deal with the entire record set.

Posted

Why can't you import matching on SKU, update existing and add new if no match?

Posted

I could do that, but it wouldn't take care of deleted SKUs, would it?

Posted (edited)

Deleting all records (since FM only deletes 25 at a time) takes a very long time. And then you would still have to import/add all records again. That would take longer, I believe.

A question: Do you want to ALWAYS match the source? Are any records created in Target directly? You would lose them if you did, so I will continue my suggestion based upon wanting an EXACT match.

I might suggest a script from target such as:

Show All Records
Import Records [ Update matching records in found set ] - match on unique ID or SKU?
If [ Get ( FoundCount ) ≠ Get ( TotalRecordCount ) ]
  Show Omitted Only
  Delete All Records [ No dialog ]
  Show All Records
End If

This will update all matching records in your target, then show any records which were NOT updated so you can delete them. Keep in mind that, when importing, all records which match will appear in your imported found set even if no field data in the record has changed. So the records NOT in the found set immediately after the import are exactly those NOT in the source.

Would that work for you?

Posted (edited)

FM only deletes 25 at a time

Where is this coming from? I see that deleting 10,000 records takes about a second or less.

In any case, I asked the question in order to clarify the requirements - not to make a suggestion.

Posted

The time required to delete records is influenced by how many dependencies are attached. Deleting from large, heavily related structures is very slow, and it slows down further when the records are being deleted from a served file.

Mine was just a suggestion.

Posted (edited)

Where is this coming from?

It was discussed by FMI engineers at the last DevCon, I believe. The number of records it deletes at a time might only be 15 ... but I think 25 was what they said.

UPDATE: It is too bad that FM can't simply drop tables like MySQL or use FM's 'create empty clone' on a single, specified table. What I mean is that creating an empty clone of a file is very fast, and it would be nice if that same technology were made available to the Developer when deleting all records from a table. It was explained that it isn't that simple.

Posted

It is too bad that FM can't simply drop tables like MySQL or use FM's 'create empty clone' on a single, specified table.

Hm... If you have such a need (and I presume you do, if you have studied this in such depth), perhaps you could devise something using a separate file and/or a recurring import.

Posted

Back to topic:

I believe the OP doesn't want to delete ALL records in the omitted set, only some of them. Especially now that I read this again:

My way is essentially to do 2 imports. The first would be to get the Job Number and Page from the XML. I would then do a search to see if I already have records for that page of that job, and delete them if I do. I would then do the second import like normal.

If the imported set contains only one Job Number and one Page, it should be easy to adapt your script to switch to omitted and constrain the found set.
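For example, something along these lines (assuming the job and page from the import were captured into variables first; names are illustrative and the constrain criteria are shown inline for brevity):

Show All Records
Import Records [ Update matching records in found set ; match on SKU ]
Show Omitted Only
# keep only the omitted records belonging to the just-imported job and page
Constrain Found Set [ Products::JobNumber = $job ; Products::PageNumber = $page ]
If [ Get ( FoundCount ) > 0 ]
  Delete All Records [ No dialog ]
End If
Show All Records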

Posted (edited)

I don't personally have the need now. We rarely delete all records except during Developer maintenance, migration or recoveries. But there are many times when this would have come in handy. And yes, I have run time studies on deleting as well. Deleting from a file on my desktop is (at minimum) 5 times as fast as deleting the same records from the same file when it is served, and we use top-of-the-line server, network and equipment.

We don't always have the luxury of using a separate file when a solution already exists. Even so, we currently use the separation model, but all of the large data tables still exist within one data file, and it would be really nice if we could leave some tables intact (value list tables, etc.) and only 'drop' others.

It is faster to import into a raw FM table with no relationships or auto-enters, and then import from that table into another FM table, than to import directly from the external source. Anyway, I don't want to hijack this thread, but deleting all records from large tables makes me cringe.
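For what it's worth, the two-hop import I mean looks roughly like this (file, layout and table names invented):

# Hop 1: XML into a flat staging table with no relationships or auto-enters
Go to Layout [ "Staging" ]
Import Records [ "report.xml" ; Target: Staging ; Add new records ]
# Hop 2: plain FM-to-FM import into the real table, which avoids parsing XML straight into it
Go to Layout [ "Products" ]
Import Records [ "MyData.fp7" ; Source: Staging ; Target: Products ; Add new records ]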

UPDATE: I just read your second post. Excellent point.

Posted

Okay, so here's what I ended up doing... not sure it's the best solution, but it works and is relatively quick.

I set up a temp table, then pointed my XML import there. Once imported, I have a calculation flag (based on a relationship matching the job number and page number) that is "1" if there are related records in my live table, and empty if there aren't. I then loop through the imported data in the temp table, and if the calculation flag is 1, I go to related records, showing only related records, then delete all. I then return and continue the loop. Once the loop finishes, the script imports all records from the temp table into the live table, then deletes all records in the temp table.
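In outline (layout, table and field names here are placeholders), the flag calc and the script look something like:

# Temp::HasLiveRecords is an unstored calc through a relationship that matches
# Temp::JobNumber = Live::JobNumber AND Temp::PageNumber = Live::PageNumber:
#   If ( not IsEmpty ( Live::JobNumber ) ; 1 ; "" )

Go to Layout [ "Temp" ]
Import Records [ "report.xml" ; Target: Temp ; Add new records ]
Go to Record/Request/Page [ First ]
Loop
  If [ Temp::HasLiveRecords = 1 ]
    Go to Related Record [ From table: Live ; Using layout: "Live" ; Show only related records ]
    Delete All Records [ No dialog ]
    Go to Layout [ "Temp" ]
  End If
  Go to Record/Request/Page [ Next ; Exit after last ]
End Loop
# move the cleaned batch into the live table, then empty the temp table
Show All Records
Import Records [ Source: Temp ; Target: Live ; Add new records ]
Go to Layout [ "Temp" ]
Show All Records
Delete All Records [ No dialog ]
Go to Layout [ original layout ]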

