
FMS restart vs shut down and start


This topic is 3587 days old. Please don't post here. Open a new topic instead.

Recommended Posts

     I'm far away from my home office right now. For some time I've used No-IP to address my server when I'm away, and I have FMS set to start automatically on restart. Two days ago I believe my place had a power failure or brown-out; since then I haven't been able to connect. Today I had a friend go over to my place and, sure enough, the server machine was off. She restarted it and rebooted the AirPort Extreme as well. No dice. My question: will FMS restart on its own in the circumstances described above? Or is this quite different from a proper restart?

 

Thanks.


You don't want FMS to auto-start in these types of situations.  You really need to replace the files with the last backup, or you risk damaging the files in the long run.

 

The FMS admin service is OK to start automatically, but not the db engine; that's a setting you turn off in the admin console.


No, Recover is not a maintenance tool.  It is designed to go through the files aggressively and do whatever is required to get as much of the data out as it can.  To do that it will sacrifice whatever it takes: parts of the schema, records it finds suspect...

 

Even if it reports that no errors were found, I would not re-use the recovered files.


Wim,

That's not what I meant. I meant if I get no errors I would use the original files on the server machine, never the recovered copy.

 

In other words, is it likely a power failure will corrupt?


 is it likely a power failure will corrupt?

 

That depends on what was happening in the solution at the exact moment the power went out.  Since you don't know, you have to assume the worst, not the best.

That's what backups are for.

 

If you have the FMS db engine set to auto-start, you run the risk of overwriting your good backups with bad backups if you don't catch that a crash or power-out happened.

 

You're taking a bit of a leap of faith by re-using crashed files, even if a recover on a copy does not show anything.  This type of damage can start small and build up over time as you repeat the same practice, up to the point where the damage does become apparent but the only backup you have is one with slightly less damage.

Personally I'm not prepared to take that leap.


Nothing was happening with the solution at the time. I know because I'm the only client. Still, I take your advice.

 

It could be as simple as the machine losing its IP address!


Wim.

        It turns out (I'm not on site at the moment) that the power failure caused the main modem/router in my office to revert to a state where all ports are closed. This appears to be the cause of the lack of connectivity. Since, AFAIK, I can't access the router settings remotely, this will have to wait until I'm on site. I had packed up to return, but luckily I had the radio on in the car: the 401 is closed until at least 3 pm. Too nice a day to sit in traffic, so I turned around.


I agree with Wim. Never use the existing database files after a crash; always replace them from your last good backup.  This may mean having to recreate data entered since the last good backup, but it will save you a lot of trouble compared to what you will go through later if your files become corrupted. You might want to think about adding more frequent backups to ensure you don't lose too much data entry.

 

The 'Automatically Start Database Server' option should never be used. If you are using the 'Enable Progressive Backups' feature, it becomes even more critical not to use the autostart option. If you do, you risk overwriting your progressive backup with bad/corrupted data, thus negating its entire time-saving purpose.


This may mean having to recreate data entered since the last good backup, but it will save you a lot of trouble compared to what you will go through later if your files become corrupted.

You can review what data might have been lost by searching a table for records with a modification timestamp >= the last backup timestamp, less a few minutes. If the number of changes is small, or in your case, Rick, only you are entering data, then re-entering it by hand should work fine. But if 1) there is a larger number of users, 2) the volume of data entry is substantial, or 3) the relational structure is more complex, you might miss a related record or a field change, or re-entering data is just too much. It might be easier, quicker and safer to export/import and let FM do the updating for you. To do so, you would run Recover on a copy of the crashed file and then:

  • In this crashed file, perform a find in the table for all records with a modification timestamp >= the last backup timestamp less 180 seconds*.
  • Export the found set as CSV.
  • Go to a layout in your backup file for the same table.
  • Show All Records.
  • Import using 'Update matching records' and check 'Add remaining records'. DO NOT check 'Perform auto-enter'.
  • Map fields by 'matching field names'. Check your development log to be sure you haven't changed a field name, or scan your map to be sure all fields mapped.
  • Use the primary key as the match field.
  • Then, after the import, set your primary key's auto-enter next serial number (this can be scripted) to the same value as in the crashed file (if you don't use a UUID).
  • Repeat for each table with found records that were modified.

It is easier to do all your exporting from the crashed file first, naming each CSV for its table name; then you can move to the backup file to begin the imports. To me, this is the only way to guarantee that nothing is missed. This should all be scripted as part of your disaster recovery. The difficult part is having the discipline to keep the mapping and tables current in the script immediately after you make a schema change. So your data file should have both export and import pieces kept up to date (export with an option for ALL records or for records based upon a timestamp entered by the developer) and import of the matching CSV files ... or ... use a good syncing plugin. I've always thought that XML would be good for this but I've not explored it (yet).

* An arbitrary number of seconds that you think will protect against overlap; it depends on how large your solution is, the speed of your server box, etc. I figure it is better to go back too far than not far enough. The idea is to catch any records modified in that moment.
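For readers outside FileMaker, the cutoff-and-merge logic above can be sketched in plain Python. Everything here is hypothetical (the field names, the backup timestamp, and the `merge_recovered` helper are my own); it assumes each table has a unique primary key and a modification timestamp, as recommended below.

```python
from datetime import datetime, timedelta

# Hypothetical last good backup, minus the 180-second safety margin
# discussed above. Real values would come from your backup schedule.
LAST_BACKUP = datetime(2015, 6, 1, 14, 0, 0)
CUTOFF = LAST_BACKUP - timedelta(seconds=180)

def merge_recovered(recovered_rows, backup_rows, key="id", ts_field="mod_ts"):
    """Mimic an import with 'Update matching records' plus 'Add remaining
    records': recovered rows modified on or after the cutoff overwrite
    backup rows with the same primary key; unmatched recovered rows are
    appended as new records."""
    merged = {row[key]: row for row in backup_rows}
    for row in recovered_rows:
        if datetime.fromisoformat(row[ts_field]) >= CUTOFF:
            merged[row[key]] = row  # update a match, or add as remaining
    return list(merged.values())
```

The point of matching on the primary key (rather than blindly appending) is exactly what the import step above relies on: a record changed after the backup replaces its stale copy instead of duplicating it.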

BTW, everyone should have the big five fields in EVERY table (except possibly small static tables): creation timestamp, modification timestamp, unique primary key, who created, and who modified. These fields are your only guarantee of data retrieval in case of a problem. If you have users who handle heavy data entry, they would faint if you told them they had to re-enter an hour's worth of data. And if they are a call centre or sales reps who take information by phone, there may be no way to determine who placed an order that was lost.
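As a rough illustration of those five audit fields, here is a small Python sketch (the function and field names are my own invention, not FileMaker's; in FileMaker these would be auto-enter options on the table's fields):

```python
import uuid
from datetime import datetime, timezone

def new_record(user, **fields):
    """Attach the 'big five' audit fields to a new record: unique primary
    key, creation and modification timestamps, and who created/modified it."""
    now = datetime.now(timezone.utc).isoformat()
    return {
        "id": str(uuid.uuid4()),   # unique primary key
        "created_ts": now,         # creation timestamp
        "modified_ts": now,        # modification timestamp (same at creation)
        "created_by": user,        # account that created the record
        "modified_by": user,       # account that last modified it
        **fields,                  # the record's actual data
    }
```

With these five in every table, the timestamp find and per-user attribution described above become possible; without them, there is nothing to search on after a crash.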

 

So for those reading: where data changes might number more than a few, there are options other than requiring your users to re-enter their data. That would be a business decision, of course, but if you maintain your update/recovery scripts, you can have users back up and running (even with large data sets) within 15-30 minutes, IOW, MUCH quicker than it would take them to re-enter the same data. AND ... if the data lost is invoices, quotes, purchase orders, etc., you risk that the re-entry will introduce an error. In all, a scripted recovery seems like a bit much, but once you've used it in a crisis, you will think otherwise.

Sorry to go on and on, but I have a soft spot for this subject. I agree also: never use a file after it has crashed ... EVER. If in doubt, throw it out.


Many thanks for all the replies. As it turns out, no user (me) was logged in when the power failed. I restarted the machine, copied the files to a desktop machine, and checked them every way I could; they check out fine. No data was lost, and I have redundant backup strategies in place should a problem occur down the road. I know this goes against some of the advice I've been given, but this was a simple shutdown. As for having FMS set to start automatically on restart, I believe this is safe because I have the server machine set NOT to restart automatically after a power failure. In other words, FMS only restarts automatically after a voluntary restart, and of course I can stop FMS before a restart.

 

Rick.


Steven,

 

I was speaking of a voluntary restart where FMS is shut down before the restart. In any case, I'm going to turn off the auto-start feature when I get back on site.

 

Thanks.

