FileMaker Pro 9 - database recover fails silently

We are using the recover feature in FM Pro 9 to periodically remove uneditable, blank records from a large database. We were surprised to find that the recovery process did not recover the database consistently. We ran several tests of the recover process, using the same corrupt file on different computers, and found that recover removed the blank records on some computers but not on others. Then we discovered that recover was running out of disk space on the computers where it didn't remove the records. When recover ran out of disk space, it cleaned up its temporary files and displayed the "recovery complete" dialog - no sign of trouble. We also found that recover needs temporary disk space equal to about 4x the size of the file it is recovering (our file is 13GB). We observed the same behavior in versions 7, 8, and 9.
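For anyone hitting the same thing, a quick way to check up front whether a machine has enough room is something like the following (a Python sketch only; the paths are hypothetical and the 4x factor is just what we measured, so adjust both for your own setup):

import os
import shutil
import sys

# Hypothetical paths - adjust for your own setup.
DB_FILE = "/Volumes/Data/BigDatabase.fp7"   # the file to be recovered
TEMP_DIR = "/private/var/tmp"               # the volume where recover writes its temp files
SAFETY_FACTOR = 4                           # assumption: recover needs roughly 4x the file size

def enough_space_for_recover(db_file, temp_dir, factor=SAFETY_FACTOR):
    """Return True if the temp volume has at least factor x the file size free."""
    needed = os.path.getsize(db_file) * factor
    free = shutil.disk_usage(temp_dir).free
    print("file: %.1f GB, free: %.1f GB, needed: %.1f GB"
          % (os.path.getsize(db_file) / 2**30, free / 2**30, needed / 2**30))
    return free >= needed

if __name__ == "__main__":
    if not enough_space_for_recover(DB_FILE, TEMP_DIR):
        sys.exit("Not enough free disk space for recover - aborting.")
    print("Enough free disk space - OK to run recover.")

Running something like this before recover would at least make the failure loud instead of silent.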

The Recover function should not be used for this purpose. It is meant to recover files that have failed (i.e., become corrupted and will not open).

Lee

p.s.

Have you tried the File Maintenance Tool?

Main Menu >> Tools >> File Maintenance.

Do a search for the tool's name and see if there has been any discussion about it. I seem to remember a caution about using it.


I just did the search, and here is the thread I was thinking of: Link

I'd suggest skimming through help files and the knowledge base on the FileMaker site for a better understanding of what recovery does and is meant for. I'd also look for possible causes of the corruption, like antivirus software running on open database files or on backup files while a backup is running, etc.

Have you tried Save a Copy as Compacted first when these issues crop up? It sounds like your blank records could be a result of corrupt indexes, which a compacted copy should help eliminate.

Lee, yes, the issue with File Maintenance is that it modifies the open file, so if it's run on a truly munged up file, it can make it even worse. Save A Copy as Compacted does the same File Maintenance, but on a separate copy of the file, leaving the original untouched.

As to the actual recover inconsistencies noted above, I can't speak to that. It's interesting, though.

  • Author

Thanks, guys. Save a Copy as Compacted does not correct the problem we are seeing. The problem is very rare: 5 occurrences in 9 months, out of about 2,500 inserts into the database. We have staff in 15 offices across the state accessing the database. The problem seems to be network related (connection glitches?), but it is difficult to troubleshoot because it happens so rarely. Our strategy is to figure out the most reliable way to fix the problem and then monitor the system to see if a pattern develops over time.

I wanted to report recover's odd behavior to the forum. I would expect the tool to check that it has enough disk space before starting, or at the very least to report that it encountered problems and ended abnormally.

It could be network dropouts, though as you say that's hard to diagnose. If users are tunneling through a VPN rather than, say, logging into a terminal server or Citrix, there's an increased chance of dropped or lost packets.

But just to reiterate, since you don't specify your whole maintenance/repair process - you shouldn't be re-hosting recovered files. You should be using Recover to obtain a data set that you can then import into a last known good clone of your file.

You probably already know that, I know - just making sure.

  • 4 weeks later...

I've encountered the same issue. It seemed to be coming from users working on the file during a backup. I know this shouldn't be an issue, but it was. 13GB isn't that large, considering each database can hold 8TB, but a 13GB file still takes a while to back up. First, I would suggest scheduling backups for when no one is expected to be using the file (if at all possible). Next, take the file, omit the phantom records, and import the remaining found set into a clean, empty file. That will eliminate the records. Just a suggestion - let us know if that works.
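If it's easier to do that clean-up step outside FileMaker, one option is to export everything to CSV, drop the all-blank rows, and import the result into a clean clone. A rough sketch of the filtering step (Python, with hypothetical file names - not a FileMaker feature, just an outside-the-database way to do the same thing):

import csv

# Hypothetical file names - adjust for your own export.
SOURCE = "exported_records.csv"      # full export from the damaged file
CLEANED = "records_to_import.csv"    # what you would import into the clean clone

with open(SOURCE, newline="", encoding="utf-8") as src, \
     open(CLEANED, "w", newline="", encoding="utf-8") as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    kept = dropped = 0
    for row in reader:
        # A phantom record exports as a row where every field is empty.
        if any(field.strip() for field in row):
            writer.writerow(row)
            kept += 1
        else:
            dropped += 1

print("kept %d rows, dropped %d blank rows" % (kept, dropped))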
