
Empty record cannot be deleted




My solution crashed, and I recovered it fully.

The problem now is that a specific find returns a record that is empty. If I delete it and run that find again, it reappears.

Any idea what may be wrong?

Thank you in advance,

Fed


Unfortunately, these kinds of things happen with corruption. Your safest bet is to export the data out into text files and then reimport it into a clone of a backup.


Try saving a compressed copy and see if that fixes it.

If not, recover the file again and see if it is fixed. Hopefully it is.

If not, you're in trouble because the file is really broken. As Mr Vodka suggested, at this point you either look for a known-good backup to clone and export/import all the data into (unfortunately there is no automated way to do it) or you rebuild the file from scratch.

Exporting as FileMaker files will bring all the container data across as well. Exporting as a text format like CSV will not.
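
If there are a lot of tables, a small script can take some of the pain out of the export. This is only a rough sketch (plain script steps; the layout and file names are placeholders, not from your solution):

    Go to Layout [ "Contacts" ]
    Show All Records
    Export Records [ With dialog: Off ; "Contacts.fmp12" ]
    # Repeat the three steps above once per table.
    # Exporting to FileMaker Pro format keeps container data; Merge or CSV keeps text only.

You would still import into the clone table by table afterwards.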


Thank you. It seems to have worked.

Here is a summary:

CRASH - I could not open the file at all

Recovery done - a record full of question marks showed up in one particular find request

Compression done - no change

Recovery done again - seems to be working fine

My question now is: how safe is this database to work with going forward?

Thank you very much again!


The only way to be 100% sure is to rebuild completely from scratch or restore from a known undamaged copy.

You should also do a thorough investigation into why the corruption happened (root cause analysis) so you can mitigate the true cause of the corruption.


The purpose of the Recover command is to get the DATA out of the file, not to fix the file itself.

The premise is that there is a known-good backup of the file, since the file structure itself does not change (unless it's under development) whereas the data in the file may change frequently.


I have a good backup, but it is a day old.

I have about 30 tables, and many have new records. One has 60 new records. Most of the tables are related. Should I take the last good backup, then import the new data into each of the tables in turn from the recovered file?

Also, how do I do a thorough investigation of the cause of the crash?


First test the last backup and see whether it has corruption: run the Recover command on a copy of it and check the recover log for issues. Then create a clone and import the data from the production file. Remember to reset serial numbers!
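
The import into the clone can be scripted per table as well; a rough sketch, with placeholder file and layout names:

    Go to Layout [ "Contacts" ]
    Import Records [ With dialog: Off ; "Recovered.fmp12" ; Add ]
    # Check the field mapping once per table before running without the dialog.

Do a dry run on a spare copy of the clone first, so a wrong mapping costs you nothing.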


Thanks again. If you don't mind, I'd like to ask a few more questions before I do this.

1) The current recovered file has no apparent issues in the log when I run the Recover function on it now. Is this not enough to say it is OK?

2) When I create a clone of the backup, and find it has no issues when I run the Recover function, does that actually mean it is OK, or is this just the best we can do without a full rebuild?

3) When I import from the production file, is this production file the current recovered file?

4) Do I keep the data in the backup, and just tick the 'add new records' box when I import?

5) What do you mean by resetting the serial numbers? I thought the import function would use the serial numbers from the production file tables. Otherwise wouldn't the linked tables potentially get messed up (I use serial numbers for linking tables)?

Sorry about all the questions, but I want to do this properly.

Thanks again!


Clone your known-good backup. This will become the NEW production file.

Import the data from the old recovered file into the new production file, one table at a time. Remember to update the next serial numbers in the auto-enter options.

Say your backup file has a Contacts table, with a primary key serial number that is up to 101. Your production file is a day newer and had some new contacts added, so it's up to 105.

When you import into the cloned backup, the next serial number will remain at 102, which has already been used. So you need to check the next serial number for each field and update the new file.
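
If you want to script that step, here is a rough sketch (Contacts and ContactID are placeholder names; this assumes the key is a plain number and that you are on FileMaker 12 or later for ExecuteSQL):

    # Find the highest key that was just imported, then point the auto-enter serial past it
    Set Variable [ $max ; Value: ExecuteSQL ( "SELECT MAX(ContactID) FROM Contacts" ; "" ; "" ) ]
    Set Next Serial Value [ Contacts::ContactID ; GetAsNumber ( $max ) + 1 ]

Repeat for every serial-number key field. For a handful of tables, doing it by hand in each field's Auto-Enter options works just as well.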


Root cause analysis goes roughly like this:

Problem statement: Finds in my FileMaker file do not work as expected.

q. Why do the finds not work right?

a. My FileMaker solution crashed (improperly closed) and is probably corrupted.

q. Why did the file crash (improperly close)?

a. Make a list of all possible reasons that could cause improper file closure and for each answer ask...

q. Did this reason occur and cause the corruption?

a. If yes, then flag the reason as a root cause. Iterate through the entire list of reasons.

Define what needs to happen to permanently and verifiably prevent the root cause(s) from recurring (list of corrective actions).

Do a risk assessment of corruption happening again if nothing is changed.

Evaluate the root cause / corrective actions against the risk assessment and implement the solutions that make sense for your business.

