
Database corruption - Multi file a better solution??


10 replies to this topic

#1 Electric_Soul  novice

Electric_Soul
  • Members
  • 49 posts
  • FM Application:11 Advance
  • Platform:Windows XP
  • Skill Level:Intermediate
  • Time Online: 17h 37m 23s

Posted 30 November 2010 - 01:19 PM

Hello everybody

Today we discovered that our file is corrupted. Somehow there are scripts in the database that cannot be deleted. Each time we try, we get the message "Script cannot be found".

Our file is pretty big, and restoring a backup made 7 days ago is not an option; that would take too much time. Our file has around 150-200 TOs.

Now we are thinking of splitting the file into several files, creating a multi-file solution.

This way, we hope that a corruption will not hit the whole solution.

I checked out the separation model as well. I like the interface idea.

But speaking of file corruption, is a multi-file solution a good approach?


Thanks

#2 Kris M  Noisemaker

Kris M
  • Members
  • 281 posts
  • FM Application:13 Advance
  • Platform:Windows 7
  • Skill Level:Expert
  • Certification:10, 11, 12
  • Membership:TechNet
  • Time Online: 1d 50m 55s

Posted 30 November 2010 - 01:26 PM

Stolen quote from Vaughan, because it's very relevant...

"Invest some time working out why the files are becoming corrupted. FMP databases just don't go bad.

Don't run the files from a network shared volume. Always have them on a local hard disk, or running in FM Server (or FMP as peer-to-peer).

If running in FMS, don't use a wireless network."

Implementing the separation model won't reduce the chance of corruption (experts, please chime in here if I'm delusional).
“Everything should be as simple as it is, but not simpler.”
Albert Einstein

#3 Fenton  Post Master General

Fenton
  • Moderators
  • 5,046 posts
  • FM Application:11 Advance
  • Platform:Mac OS X Snow Leopard
  • Skill Level:Expert
  • Membership:TechNet, FileMaker Business Alliance
  • Time Online: 17h 12m 20s

Posted 30 November 2010 - 01:34 PM

Well, it is true that multiple files are less likely to have "discoverable" corruption in all files at the same time. But notice I said "discoverable." When the files crash, they tend to all crash, since they are connected. So unless only one file crashes, they were all exposed to possible damage. Best practice would therefore dictate replacing all files with clean backups, after importing all the data.

It depends, however, on how much data we're talking about. For example, I've got a couple of external files in a large solution with PDFs in them. These files are single-table, simple as pie, but huge in size (>3 GB). It makes sense to keep them separate. If the main files crash, I do not redo those files.

The Separation Model helps, because it puts the most complex relationships and scripts into a file which can easily be replaced by a clean backup, without having to import much (one or a few tables, one or a few records).

In the very worst scenario, i.e. the Data file becomes badly corrupted and you have no backup (not likely), you could actually recreate it with far less trouble than the Interface file. Yes, a long import of data, but few scripts, only basic layouts, etc.

You should absolutely have a routine to import all data into a clean file (or files).

#4 Kris M  Noisemaker


Posted 30 November 2010 - 01:47 PM

Delusion confirmed!!!! Now will someone make the SpongeBobs go away, because they are scaring me.

#5 Electric_Soul  novice


Posted 30 November 2010 - 11:50 PM

@Kris M
Well, the IT guys were lurking around our switch and pulled some cables.

@Fenton
You are right. We do already have more than one file, and it looks like 2 of them (the ones I was working on) got corrupted. So my thought is that this kind of error (broken scripts and functions) would not spread to linked files. All other files should hopefully be OK.

BTW
Isn't it kinda risky to import data from a corrupted file?
That's what I did with the smaller file, but the big one has 71 tables (not TOs) in it. How can I make sure that I import all tables in the correct order? I am not the guy who designed the whole thing.

@both
Does the table import order matter? Or can I just import them as I like, one by one, without thinking about the order?
Right now, without knowing much about the data, I would import all the data with "Auto perform calculation" disabled.
Are there examples where auto perform should be enabled?

#6 Vaughan  Mostly Harmless

Vaughan
  • Moderators
  • 10,294 posts
  • LocationSydney, Australia
  • FM Application:13 Advance
  • FMGo:iPhone / iPod Touch, iPad
  • Platform:Cross Platform
  • Skill Level:Expert
  • Certification:8, 9, 10, 11
  • Membership:TechNet
  • Time Online: 4d 9h 48s

Posted 01 December 2010 - 12:11 AM

Isn't it kinda risky to import data from a corrupted file?


Believe it or not, this is EXACTLY what the Recover command is for: to recover the *data* from a corrupted file.

Back up the databases, then run the Recover command over the files. Import the data from these recovered files.

The Recover command doesn't guarantee the data is not corrupted, but it's the best (and only) option.

Import the data into known-good clones of the production databases. Often the trick is getting your hands on known-good copies: frequently there are no backups, or all the backups have corruption.


Does the table import order matter?


That depends on the solution. (Sorry I cannot be more specific.) Most likely it will not matter, but be sure to perform a practical test. Also be sure to update the next value for all the auto-enter serial numbers in the tables.
Vaughan Bromfield
Sydney, Australia

Please post questions to the Forum, not directly to me. Back-up your files before making changes!

Whenever I hear the term "popular culture" I reach for my Iridium Q-36 Space Modulator.

#7 Fenton  Post Master General


Posted 01 December 2010 - 09:21 AM

The order in which you import the tables should not matter, as auto-enter should be off for all of them. You do not need it, since you are importing every bit of data.

Serial IDs are another matter, however. You absolutely must reset every auto-enter serial ID field's "next" number to the "next" value of the file you're importing from.

There are two methods to do this via script. The first and easiest is to go to the last record of each table (with all records showing, unsorted) and use the last serial ID as the starting point. Then use the SerialIncrement ( text ; incrementBy ) function to produce the value (it respects a text prefix), and the Set Next Serial Value script step to reset it. This method can be used on a closed file.
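FileMaker's SerialIncrement keeps any text prefix intact while incrementing the trailing number. As a rough illustration only (a hypothetical Python re-implementation, not FileMaker code):

```python
import re

def serial_increment(text: str, increment_by: int) -> str:
    """Rough mimic of FileMaker's SerialIncrement ( text ; incrementBy ):
    add increment_by to the trailing digits, keeping any text prefix
    and the original zero-padding."""
    match = re.search(r"(\d+)(?!.*\d)", text)  # last run of digits
    if not match:
        return text
    digits = match.group(1)
    bumped = str(int(digits) + increment_by).zfill(len(digits))
    return text[:match.start(1)] + bumped + text[match.end(1):]

print(serial_increment("INV0041", 1))  # -> INV0042
print(serial_increment("A99", 1))      # -> A100
```

In the actual script you would feed the last record's serial ID into SerialIncrement and pass the result to the Set Next Serial Value step.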

This method has one small weakness: if you do not have [x] "Delete related records" turned on for all "child" relationships, there is a slim possibility that orphaned records will get reassigned to the wrong parent.

How? If you created a record that was the last record in its table, created children for it, then deleted the parent, the children would be orphaned, with a parent ID that would then match the "next" serial ID of a new parent. So it's important to make sure children are deleted and no orphans are left behind.
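The collision described above can be sketched in Python (the table and record names are hypothetical, purely to show the mechanism):

```python
# Sketch of the orphaned-record collision: a deleted parent's ID
# is re-issued to a new parent, and old orphans silently re-attach.
parents = {}          # parent_id -> name
children = []         # (child_name, parent_id)
next_serial = 1

def new_parent(name):
    global next_serial
    pid = next_serial
    next_serial += 1
    parents[pid] = name
    return pid

pid = new_parent("Acme Ltd")             # gets ID 1
children.append(("invoice-A", pid))
del parents[pid]                         # parent deleted, child orphaned

# Clone rebuilt via "last record's ID + 1": with the last parent gone,
# the next serial is naively reset to 1 instead of the true value (2).
next_serial = max(parents, default=0) + 1

pid2 = new_parent("Bolt Inc")            # also gets ID 1
linked = [c for c, p in children if p == pid2]
print(linked)  # -> ['invoice-A']: the orphan now points at the wrong parent
```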

Another method, which avoids that problem, is to use the GetNextSerialValue ( "file" ; "table::field" ) function to read the "next" serial number from the old file (which will always be correct). I would also use FileMaker 10's GetFieldName ( field ) function to target the ID field (GetNextSerialValue expects a quoted text value), so that it will not break if you change the name of the ID field (not likely). This method requires that the source file be open.

In either case, build and test, then look at the serial IDs. If you use the second method, you can write a subscript that gets and compares the IDs of the tables, to make sure they're the same. (We had a problematic server machine, 50 tables, and had to do this fairly often for a while.)
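That comparison subscript might look like this in outline (field names are hypothetical; the dicts stand in for what GetNextSerialValue would return for each file):

```python
# Hypothetical "next serial" values read from the old (source) file
# and from the freshly rebuilt clone, keyed by table::field.
source_next = {"Invoices::id": "INV0042", "Lines::id": "901", "Contacts::id": "77"}
clone_next  = {"Invoices::id": "INV0042", "Lines::id": "900", "Contacts::id": "77"}

# Any field whose next-serial values differ needs its clone reset.
mismatches = [field for field in source_next
              if clone_next.get(field) != source_next[field]]
print(mismatches)  # -> ['Lines::id']
```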

#8 Electric_Soul  novice


Posted 02 December 2010 - 02:48 AM

Hello

@Vaughan / Fenton

Thanks for the valuable advice. Really nice, thanks a lot. I'll get into scripting all your tips right now.

Now:
I discussed the separation model idea with my boss, and he likes it very much, likely because he will take the beating from the big boss if a crash throws us back a week. So I will do the following:

1. Create a copy of our original database.
2. Inside the copy (the new interface file), replace the source of each TO with the original source file.
3. Delete all tables inside the interface file.

4.
I plan to put the interface file on each client machine, so I can use the power of each machine to do all the calculations. But do you know if this is really worth the effort?
What calculations does the server perform that could also be performed on a client machine? Example: let's say I am sorting 100,000 records and my interface file resides on the client machine. Who is doing the sorting now, the server or the client?



bye and thank you

#9 Vaughan  Mostly Harmless


Posted 02 December 2010 - 05:09 AM

Sorry to be blunt but your rationale for "separation" is crap.

The "separated" model is where the "interface" is separate from the "data", so that changes can be made to the interface without requiring changes to the data file.

I don't think it will help you. It's only of benefit for multiple installations of the database, where only changes to the interface are made.

Separation is MORE work, because the TOs have to be set up in both the data and interface files.

It's not as clear-cut as it seems on paper. Many things become more difficult.

In your original post you said: "Our file is pretty big, and restoring a backup made 7 days ago is not an option. That would take too much time."

Do you think that re-building the file using the separation model will be quicker?

#10 Electric_Soul  novice


Posted 03 December 2010 - 01:28 AM

OK, I like straightforward answers:

So you are saying that a separation model will not give me extra protection from data corruption?

If that's correct, then this is one of those issues where discussions never end, because others tell me differently. I started the same discussion in the FileMaker Forum, and checked out some developer blogs and FileMaker Magazine.

Did I mention that I screwed up the database while making changes to scripts and functions? Now the whole database is affected.

So in the end I have to decide. But before I do, could you please point out some of the difficulties?

Maintaining TOs in 2 files, well, that's not a big thing.
Will there be calculation problems? I'm talking about all field calculations (auto-enter and formulas). When I think about the proper context, and about calculations that depend on relationships, I'd say I have to maintain those relationships in the data file.

Fenton already pointed out that some relationships may become obsolete, depending on what I do. Sounds plausible.
I thought about it, and I'd say that all field calculations (auto-enter and formulas) have to be done in the data file, simply because you can only maintain those fields in the source file, and the source file (data file) only shows tables and TOs from itself, not from the interface file. The rest can be done in the interface file.

Well, when I have to restore a separation model, I do not see a big problem. I have an interface file that I can simply replace, and then I have my data file. I'll write some import/export scripts, and that's it.

#11 Electric_Soul  novice


Posted 03 December 2010 - 03:35 AM

Can anyone explain what the author is talking about regarding the portal?

http://my.advisor.com/doc/13691