Doug Gardner

Members · Content Count: 6 · Community Reputation: 0 Neutral
  1. Thanks for the ideas, Dan. If I hit a snag, I might end up going the route of an OS-level script, but I'm trying to avoid it because processing the results would require bringing in other technologies and external connections, and I think it's best to have the smallest possible set of dependencies: the simpler it is, the easier it is to troubleshoot, and the less it relies on being really clever.
  2. Hi Claus, I appreciate the response. Actually, the behaviour is a little different from what you describe: under a specific set of circumstances, there is no prompt to select another path to the missing file.

     For the internal check (that is, the script running server-side, checking which of its files are open), as long as you're running the script with Set Error Capture [On], there is no pause and no dialogue when a file is not open or not found. The attempt to run a script in a missing file just returns error 100.

     For the external check (that is, the script running on the FM Server that polls the main file on each of the other FM Servers), again as long as the script runs with Set Error Capture [On], it runs all the way through and returns either error 100 or 802. (On FM Server the errors are suppressed anyway, but this also works without interruption in FM Pro.)

     I understand this might be unexpected behaviour; there are all kinds of ways to end up with a dialogue asking you to locate a missing file. I should have mentioned that in this case the files in question have no tables represented on the relationship graph, so there is never a case where they are required to display data.

     You brought up a great point, and made me consider how FM Server running the script would handle a case where a file was unavailable. How would that affect subsequent attempts? Would the file need to be closed and re-opened (because the FileMaker way is to try once, then stop trying until the file is closed and re-opened, as is the case with FM Pro)? I've done some preliminary testing: running the script on FM Server against files hosted elsewhere, then bringing a file down and running the script (and being notified the file is down), then bringing the file back up and running the script (and being notified the file is available). So far, it's looking good. Thanks again for all the help.
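     For reference, the internal check described above can be sketched in script steps. This is an illustrative sketch only; the file name "SomeFile" and the logging step are placeholders, not the actual solution files:

     ```
     # Internal check, run server-side (file name is illustrative)
     Set Error Capture [ On ]
     Perform Script [ "Heartbeat" ; from file: "SomeFile" ]
     Set Variable [ $err ; Value: Get ( LastError ) ]
     # 0 = the file answered; 100 = file missing; 802 = unable to open file
     If [ $err = 100 or $err = 802 ]
         # log the failure with a timestamp instead of showing a dialogue
     End If
     ```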
  3. Hi Claus, That's very close to the solution I've arrived at, though without the XML part (because it's not an option). If you've got a minute, I'd appreciate any critical comment on the basic method outlined in the previous post. Thanks!
  4. I was thinking that the function could be rewritten so that, when run from FMS, it observes the purview of the administrator group from which the scheduled script runs. That could be a non-trivial rewrite, though, because functions are designed to operate within the client and might not be able to reach into the server environment without significant work.

     Regarding the initial post: I agree that monitoring cannot be done with FMS alone if you want a positive result; something outside the target FMS must actively perform the checking, at least ultimately. Here's what I think I'm going to do, given that there is no web access to the server machines but there is external FM access. All criticism is greatly appreciated.

     ______________________________________
     On the FM Server Deployments
     Create a script (called "Heartbeat") with just one script step, Exit Script [ Result: Get ( FileName ) ], and paste it into each file on the server (about 10 files). In one of the files (which already happens to have file references to the other files), create a script "Heartbeat_Poll_Internal" that runs Heartbeat on each of the files and records the results in a timestamped log.

     ______________________________________
     On an FM Server Set to Poll the Others
     Run a script, "Heartbeat_Poll_External", that collects the most recent log entry from each of the FM Server deployments. The following results are flagged for investigation:
       • A machine is down (so no report).
       • The main file is down (the one that runs Heartbeat_Poll_Internal).
       • A report is returned showing that one or more files are down.
     This data can be displayed in a dashboard, and the results (positive and negative) can be emailed to the admin.

     _________
     Pros
       • Very quick and easy to deploy, with a minimal database footprint.
       • Uses existing software, configuration, and setup.

     _________
     Cons
       • The FM Server that is set up to poll the others could itself stop responding, and it's up to the admin to notice that a status report hasn't been received (there are ways to shift the locus of this type of problem, but I don't think there's a way to solve it; it leads to an infinite regress).
       • A file can be accessible to FM Server yet inaccessible to others on the network; this approach does not address that type of problem.
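     A minimal sketch of the two server-side scripts, in illustrative script steps. The Log:: fields and the file name "FileA" are assumptions for the sketch, not part of the actual deployment:

     ```
     # Heartbeat — pasted into every hosted file
     Exit Script [ Result: Get ( FileName ) ]

     # Heartbeat_Poll_Internal — in the main file, run as an FMS schedule
     Set Error Capture [ On ]
     Perform Script [ "Heartbeat" ; from file: "FileA" ]
     # error 0 and the file name returned = up; 100 or 802 = down
     New Record/Request
     Set Field [ Log::Timestamp ; Get ( CurrentHostTimestamp ) ]
     Set Field [ Log::File ; "FileA" ]
     Set Field [ Log::Error ; Get ( LastError ) ]
     # ...repeat the Perform Script / logging block for each of the other files
     Commit Records/Requests
     ```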
  5. Hey Wim, thanks for responding. Yes, I agree that the challenge is collecting the data. Ideally, the data would be gathered at a central location, a routine would check each set of files against the canonical list for each location, and the results would be summarized for each server, possibly in an email. So once or twice a day the admin would get a simple report.

     One thing I just remembered, though: I have seen cases where an FM Server reports that a file is open and its status is "normal", yet the file isn't really available on the network. I can't remember where I've seen this, or how long ago, and I have no idea whether it's a reasonable concern. I might be overly cautious, because these aren't super-high-value, "mission critical", people-might-not-get-their-medication systems that I'm concerned with in this case.

     Positive result: a check shows that these files are open and responding properly. Negative result: a check shows that these files are not open or not responding. FM Server can alert you to a problem, but it won't send you an alert that everything is OK. The distinction wouldn't matter if we could trust that all problems always get reported, but as a matter of logic that's not possible, because one problem could be that the reporting itself isn't working.

     On another, somewhat related note, it's a bummer that the DatabaseNames function hasn't been updated to operate on FM Server. It just returns the name of the file in which the script runs, not all the files running on the server. There might be a security reason for that, but I'm guessing it's just an old function that hasn't been revisited in light of more recent changes.
  6. I'm looking for the best way to test that files are open and "normal" on a number of FM Server 13 Windows deployments that a client has scattered all over the place. Note that this is testing for a positive result, not looking for a negative one (like a file not being open because of a problem). A couple of approaches come to mind: write an OS-level script to get the filenames and status from fmsadmin at the command line, send them to a text file, and process that appropriately; or have an FM script check that each file is open on the host and record the results. Either way seems like more work than it should be. Is there a better way? A simple way? All help is appreciated. Doug
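     The OS-level route might look something like the sketch below. It assumes `fmsadmin list files` output with a trailing status column; the exact flags and column layout vary by FMS version (check `fmsadmin help list`), and the sample line format in the comments is hypothetical:

     ```shell
     #!/bin/sh
     # Print the names of hosted files whose status is not "Normal".
     # Assumes each input line ends with a status word, e.g.
     # "Sales.fmp12 ... Normal"; the real layout depends on the fmsadmin version.
     check_files() {
         awk 'NF >= 2 && $NF != "Normal" { print $1 }'
     }

     # Intended usage on the server (credentials are placeholders):
     #   fmsadmin list files -s -u admin -p pass | check_files > bad_files.txt
     ```

     The filtering is kept in a function so it can be tested against canned output without a live server.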