

Recommended Posts

Posted

I have deleted some records from a large DB file by mistake. I have a backup of the file and the records, but I don't know which records were deleted. However, the backup DB does not contain some modifications which I made to the current DB and which I would like to keep, e.g. relationships and scripts. What is the best way for me to import the data from the old file (e.g. do I match fields and reimport all the data?) without messing up my existing database? Many thanks.

Posted

Maybe something like this?

1. Open both DBs side by side (your Original and the Backup)

2. Maybe create a formula field which summarizes some (like ten) fields in both DBs.

3. Write a script in the Original which displays all records, sorts them, and calls a second script in the Backup file which does the same (show all and sort by the same criteria).

4. Now cycle through the records in the Original and compare each record's new formula field with the corresponding one from the Backup file. If they are equal, step to the next record in both files. If not equal, omit the record in the Backup DB.

5. Finally, export the omitted records from your Backup and re-import them into your Original file. (A rough script sketch follows below.)
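Untested, but as FileMaker script steps the main loop in the Original might look roughly like this (file, script, and field names are placeholders):

  Show All Records
  Sort Records [ Restore ; No dialog ]
  Go to Record/Request/Page [ First ]
  Loop
    # Ask the Backup file to compare its current record against this one
    Perform Script [ "Compare Helper" from file: "Backup" ; Parameter: YourTable::CompareField ]
    If [ Get ( ScriptResult ) = "omitted" ]
      # The Backup record had no match (it was deleted); stay here and test the Backup's next record
    Else
      # Records matched and the helper stepped the Backup forward; step the Original forward too
      Go to Record/Request/Page [ Next ; Exit after last ]
    End If
  End Loop
  # Note: any Backup records sorting after the Original's last record would still need omitting by hand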

HTH,

demski

Posted

What is the calculation/script which checks the field in one file against the other? I have done as you suggest and made identical fields in both files which summarize ten or so fields into one. I see I need to compare them, but how do I do that? I can't do this by hand; there are 170k records! I can't see how to run a script which operates on two files at the same time, since all the tables are called the same anyway. I suppose I could export them and look at them in Winmerge?

Posted

Hi enquirerfm,

I thought of a "helper script" in the Backup DB which you could call from the script in the Original DB. You could call it with a script parameter, passing the value of your comparison field to the helper, and have it respond with a script result saying something like "was equal, step on" or "was not equal, omitted".

As I have no access to FileMaker at the moment, I am not sure whether you have to add the Backup DB to the external data sources of the Original DB to have access to the remote scripts.
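For what it's worth, an untested sketch of such a helper in the Backup file (names are placeholders; the comparison field must use the same formula in both files):

  # "Compare Helper" in the Backup file; receives the Original's comparison value as its parameter
  If [ YourTable::CompareField = Get ( ScriptParameter ) ]
    # Match: step to the next Backup record and tell the caller to step as well
    Go to Record/Request/Page [ Next ]
    Exit Script [ Result: "equal" ]
  Else
    # No match: this is a deleted record; omitting it drops it from the found set and lands on the next one
    Omit Record
    Exit Script [ Result: "omitted" ]
  End If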

HTH

Posted (edited)

I think this could be a lot simpler. There are two methods, but the first (comment's) is likely the easiest.

First, you must have a unique primary ID for any of this to work accurately; if not, you'll have to construct a calculated one by concatenating fields. It is too late to use Replace to reserialize, because the records differ.
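For example, a calculated key might be a Text calculation concatenating a few fields with a separator (these field names are just an illustration):

  FirstName & "|" & LastName & "|" & GetAsText ( CreationDate ) & "|" & Phone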

Method 1:

In your original database, set the Validation of your primary ID to [x] Always validate, and [x] Unique. Then just Import [x] New records. Duplicates cannot be imported because they'll fail the validation. No "updates" will occur, because it is not a "matching records" import.
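If you prefer it scripted rather than done through the menus, a minimal sketch (assuming the backup file is named "Backup.fp7", with the import mapping set to match field names):

  # With [x] Unique validation on the ID, duplicates are rejected and only the missing records come in
  Import Records [ No dialog ; "Backup.fp7" ; Add ]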

Method 2:

Create a relationship in the Backup database to the newer one, based on primary ID. Put a field (the ID) from the newer file on a corresponding table layout of the Backup file. Do a Find for * (anything) in the field.* These are the records which are the same. Then switch to the rest (Records > Show Omitted Only); these are the records you deleted. Import them into the newer file.

* You may be tempted to use = (nothing) so you don't have to flip the found set. It will not work, however, as you cannot directly find "records which do not match the relationship", because there is no valid relationship for them.

The * in the Find does not have quotes.
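As script steps, Method 2 might look like this, run in the Backup file on a layout of the table in question (the relationship and names are placeholders):

  Enter Find Mode [ ]
  # NewerFile::PrimaryID is the newer file's ID field, placed on this layout via the relationship
  Set Field [ NewerFile::PrimaryID ; "*" ]
  Perform Find [ ]
  # Found set = records that still exist in the newer file; flip it to get the ones you deleted
  Show Omitted Only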

[P.S. Always back up a file before doing such an operation. It will have to be done for each table.]

Edited by Guest
Posted

Thank you so much! Option 1 worked fine because I have a unique ID which is auto-entered when every record is created. Maybe you can suggest an easy way to de-dupe now, or should I set up a new query? I could match 5-6 fields to check for dupes.

Posted

You mean de-dupe your existing records (nothing really to do with the Import)?

You could create a multi-criteria relationship on your "dupe" fields. Every one of them must not be empty and must match for it to be a "dupe". Then create a calculation, result Number:

not IsEmpty ( dupe relationship::Serial_ID ) and
Serial_ID ≠ dupe relationship::Serial_ID

That will produce a 1 for later duplicates. Now, whether the other data is the same or not is a different question, and not so easy to tell.
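Once the flag calculation is in place, isolating the later duplicates for review is a simple find (a sketch, with hypothetical names):

  Enter Find Mode [ ]
  Set Field [ YourTable::DupeFlag ; 1 ]
  Perform Find [ ]
  # Review the found set before deleting anything; Delete All Records removes the entire found set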
