Things to do


This topic is 1817 days old. Please don't post here. Open a new topic instead.

Recommended Posts

It was great to see all those positive reactions when we presented WhistleBlower's functionality and installation at Berlin's dotFMP. There was also some good feedback on what can be done next.

Let me first explain how Claus and I divided our efforts on this monitoring solution. While Claus has been working hard on the front-end FileMaker solution, my job was to take care of the part that is installed on the FileMaker servers. We chose a Windows service and a macOS daemon, and we decided to handle all monitoring communication by reading from, and writing to, a MySQL database. The daemon uses the macOS FSEvents framework to monitor the FileMaker logs, and the service uses the FindFirstChangeNotification function. For process reporting, the service uses Windows Management Instrumentation, and the daemon reads a terminal feed from the top command on macOS. The rest of the code for the daemon and service is pretty much the same, and everything is written in Xojo. So Claus has been building the FileMaker solution, and while doing so, asked me to add or improve something in the WhistleBlower daemon/service each time something was needed on the FileMaker side.

The FileMaker solution we provide is an example of what you could do, but in fact you can brew your own FileMaker solution, and we even encourage that. We feel everyone has slightly different needs.

For the daemon/service I have two things on my wish list:

  • Write the platform-specific code for CentOS so WhistleBlower can run on a FileMaker Cloud server. I already did some research on inotify, and it seems to be the way to monitor the FileMaker Server logs on Linux. I still have to set up a Xojo IDE, and I'm not sure yet whether I would be developing on Ubuntu or on CentOS. There is still a lot to prepare there, so don't hold your breath.
  • Improve the MySQL connection to use encryption. We already considered doing that, but decided not to include it in the first release. For the moment, you can always set up a VPN client service on the FileMaker Server machine to make things more secure.

Please feel free to share improvement ideas in this thread.

Edited by Peter Wagemans

  • 4 weeks later...

Wishlist:

Auto-deletion of SQL entries (my server filled up in 3 weeks, using over 60 GB)

Encryption of username/password on the SQL server (already discussed at dotFMP)

Configuration of services (which service is being monitored per server; we have lots of servers where we deliberately switch off WebDirect)

Ability to use curl (included on macOS, free for Windows) for ping monitoring (a very rough and not completely tested/functional demo can be provided)

Prowl notification time threshold (only send a notification after X seconds of downtime; there are false positives now)

One notification per X minutes (if a server crashes, only send one notification every X minutes)

 

Thanks

Tobias


Hi Tobias,

Thanks for this feedback.

On 7/16/2018 at 3:40 PM, TobiasLiebhartKoschierSE said:

Auto-deletion of SQL entries (my server filled up in 3 weeks, using over 60 GB)

Yeah, even on my test server, I was amazed by all the data it is generating. I think FileMaker Server schedules are the best way to schedule regular deletion. @Claus Lavendt, is this something we should create in the FileMaker front end? Maybe we could just make a deletion script with some parameters, like a datestamp cutoff offset and a log file name; the script could default to all logs if that parameter were not provided. The front-end FileMaker solution is using an ODBC data source as a FileMaker external reference for occurrences, so scripting this from FileMaker would be the best solution, I think.

Quote

Encryption of username/password on the SQL server (already discussed at dotFMP)

Definitely one for me, I know. I've been spending considerable time installing Xojo on CentOS 7 (Xojo installation on Linux really sucks), and it already compiles, without functioning of course. But I should put that on hold and go for the encrypted connection first; I think it will be much easier to implement.

On 7/16/2018 at 3:40 PM, TobiasLiebhartKoschierSE said:

Configuration of services (which service is being monitored per server; we have lots of servers where we deliberately switch off WebDirect)

Ability to use curl (included on macOS, free for Windows) for ping monitoring (a very rough and not completely tested/functional demo can be provided)

Prowl notification time threshold (only send a notification after X seconds of downtime; there are false positives now)

One notification per X minutes (if a server crashes, only send one notification every X minutes)

I think these are all features to put in the FileMaker front-end file. That rough demo would be nice to look at. Are you doing it with or without plug-ins? Please post it here.

I don't know exactly how the Prowl feature works; @Claus Lavendt added this feature to the front-end file. Maybe he can answer this one.

The daemon/service only sends log data; interpretation of that data is done on the FileMaker side.

 


5 hours ago, Peter Wagemans said:

Yeah, even on my test server, I was amazed by all the data it is generating. I think FileMaker Server schedules are the best way to schedule regular deletion. @Claus Lavendt, is this something we should create in the FileMaker front end? Maybe we could just make a deletion script with some parameters, like a datestamp cutoff offset and a log file name; the script could default to all logs if that parameter were not provided. The front-end FileMaker solution is using an ODBC data source as a FileMaker external reference for occurrences, so scripting this from FileMaker would be the best solution, I think.

I tried doing it with a FileMaker script, but that gets slow really fast. The problem seems to be the process logs. As I understand it, you use them to monitor the running FM services, but they generate about 300-500 log entries a second with only 5 monitored servers, and deleting them from within FileMaker is really slow. Truncating or deleting them from SQL is way faster. I think the daemon should only maintain one set of process info (the current one). Everything else is just too confusing, and there is no way one can make use of the amount of data right now.
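For example (assuming the `whistleblower.processes` table name used later in this thread), truncating from the MySQL side is near-instant, while a FileMaker record loop has to delete row by row:

```sql
-- Remove all rows in one shot; much faster than row-by-row deletes:
TRUNCATE TABLE whistleblower.processes;
```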

 

6 hours ago, Peter Wagemans said:

I think these are all features to put in the FileMaker front-end file. That rough demo would be nice to look at. Are you doing it with or without plug-ins? Please post it here.

Attached; user and pass are both "admin". Find the script by searching for CURL in the Script Workspace. I also made a few adaptations regarding notification frequency.
Regarding this, a threshold for notifications would probably be nice to avoid false positives. Sometimes WB reports a server problem and the next minute it's gone. So a configurable threshold would be nice (e.g. if we set it to double the frequency of the server-side script, it would avoid false positives, with the caveat of notifying a tad delayed).

 

It would also be nice if you could enter more than one Prowl account for a server, and if those accounts (API keys) could be set on a per-server basis.

Additionally, I'd love to have HDD monitoring as well, to see if the HDD fills up and be able to take action before everything crashes.

RAM as well, probably.

 

You told us in Berlin, but I forgot: what is the reason the daemon does not work with FM11? I also have problems with an FM14 host of mine; 15+ is working fine.

Thanks for all this fantastic work btw!!

FMSwhistleBlower.fmp12


On 7/18/2018 at 4:31 PM, TobiasLiebhartKoschierSE said:
On 7/18/2018 at 10:16 AM, Peter Wagemans said:

Yeah, even on my test server, I was amazed by all the data it is generating. I think FileMaker Server schedules are the best way to schedule regular deletion. @Claus Lavendt, is this something we should create in the FileMaker front end? Maybe we could just make a deletion script with some parameters, like a datestamp cutoff offset and a log file name; the script could default to all logs if that parameter were not provided. The front-end FileMaker solution is using an ODBC data source as a FileMaker external reference for occurrences, so scripting this from FileMaker would be the best solution, I think.

I tried doing it with a FileMaker script, but that gets slow really fast. The problem seems to be the process logs. As I understand it, you use them to monitor the running FM services, but they generate about 300-500 log entries a second with only 5 monitored servers, and deleting them from within FileMaker is really slow. Truncating or deleting them from SQL is way faster.

I found a better way to do this, from the MySQL server itself. Do this in MySQL Workbench:

In the MySQL menu (I have an older version running here, so things could be slightly different), choose Server > Options File.

Under the "General" tab, the first option is "event-scheduler". Enable that and set it to "On", then restart the MySQL service.

Then execute the following SQL:

DROP EVENT IF EXISTS Processes_Cleanup;
DELIMITER $$
CREATE EVENT Processes_Cleanup
  ON SCHEDULE EVERY 1 MINUTE STARTS CURRENT_TIMESTAMP + INTERVAL 60 MINUTE
  ON COMPLETION PRESERVE
DO BEGIN
  SET SQL_SAFE_UPDATES = 0;
  DELETE FROM whistleblower.processes WHERE TIMESTAMPDIFF(HOUR, TimeStamp, NOW()) > 96;
END$$
DELIMITER ;

This creates an event that starts an hour from now, then runs every minute and deletes records that are older than 96 hours. Of course, you can adjust those numbers to whatever pleases you.

You can check the event schedule with:

SHOW EVENTS;

After the start interval has passed, your processes table will be cleaned up to contain only the more recent records.

On 7/18/2018 at 4:31 PM, TobiasLiebhartKoschierSE said:

I think the deamon should only maintain 1 set of process info (the current one). Everything else is just too confusing and there is no way one can make use of the amount of data right now.

It is confusing indeed, but you need performance records over time if you want to make statistics with that data. If you do not want to do that, set the cutoff to something really small.

Edited by Peter Wagemans
typo and some additional thought

Thanks a lot. I'm not an SQL server power user, so that event scripting will come in handy.

 

On 7/22/2018 at 12:47 PM, Peter Wagemans said:

It is confusing indeed, but you need performance records over time if you want to make statistics with that data. If you do not want to do that, set the cutoff to something really small.

So the best course of action would probably be a script (on the SQL server? in the daemon?) that monitors the table, logs excessive CPU usage, and then deletes the records no longer needed?

 


Yes, the MySQL server is much better equipped to delete the records that are no longer needed. I wouldn't let the wbDaemon do all that. MySQL Workbench is a good interface to your data.

I don't understand how MySQL could log excessive CPU usage; maybe I'm just misunderstanding you. It could be a strategy to have a MySQL schedule that deletes CPU figures that are very low, and as that is the case most of the time, you would end up with a rather compact set of meaningful data.

BTW, I have created a new version of the WBDaemon Config app that CAN use SSL to connect. Setting up the MySQL server for SSL connections is not what I would call "easy peasy". Here's a good article to use as a guideline: https://serverfault.com/questions/783861/enabling-ssl-in-mysql-when-using-windows-as-a-server-and-client . My MySQL test server runs on a Windows Server 2012 VM, so I did not have to translate this to macOS, but I think it's probably about 99% the same, except for the paths, of course.

You have to create a lot of certificates and keys, and in the config application, you need to enter the client private key, the client certificate, and the server CA certificate. There is now a "Test Connection" button that you can use to see if you entered everything correctly. If the three extra fields are filled (actually I only check one, lazy me), the application tries to connect using SSL mode.
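As a sanity check, the server-side part of the SSL setup can be verified with plain SQL (the account name below is just an example, not part of WhistleBlower):

```sql
-- Did the server pick up the certificates? have_ssl should report YES.
SHOW VARIABLES LIKE '%ssl%';

-- Optionally force a given account to always connect over SSL
-- ('whistleblower' is a hypothetical account name):
ALTER USER 'whistleblower'@'%' REQUIRE SSL;

-- From a connected client: a non-empty cipher means the session is encrypted.
SHOW SESSION STATUS LIKE 'Ssl_cipher';
```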

I have dropboxed the config app to @Claus Lavendt, and he will let me know how it works for him, when he has some time to test it.

I currently have no issues from the macOS config app, but, with exactly the same source code for the connection, the Windows config app is not able to connect using SSL, and connects without SSL even if the MySQL (5.7) server is configured to only accept SSL connections. Weird, and probably a MySQL security bug. I plan to install MySQL 8.0 to see if this fixes the problem. Or it could be a Xojo MySQL Community Plugin problem. Anyway, I'm not completely stuck yet; I'll come back here when I find out what's happening.

Screen Shot 2018-07-28 at 18.20.58.png



Hi Peter,

After some extensive testing (15 servers reporting), I have an issue that is really worrying me.

It seems that the processes are collected continuously, not giving our SQL server enough room to breathe for other applications.

Could you put in a pause after each collection cycle? Otherwise our server won't survive the 45 servers we want to monitor.

Right now the daemons are submitting between 100 and 350 records per second; that's hard work even for a grown-up SQL server.

That would be great.

Stef

Edited by StefP

I think this is something I should add to the config app, so you can specify it for every daemon separately. I think I have hard-coded 30 seconds right now, and this can indeed result in peak traffic with so many daemons connected. I will give this priority. Thanks for the feedback, Stef.


I'm currently looking into this code, and I think I will implement it like this:

1344467638_ScreenShot2018-08-08at14_26_39.png.81658acee39fa39f008a3feb4c110b75.png

  • you enter the frequency in seconds, or
  • you enter the number of seconds after the current minute, plus the frequency in minutes

Even with a frequency in seconds, you can experience peak moments once the logging timer starts to drift, which it will.

To avoid this, the logging can instead happen every N minutes, at a set second within the minute. I hope this sentence is readable 🙂

You can then configure each of your daemons with a different offset, and if they are all properly configured to use a time server, that should spread the load. The last field here doesn't really help spread the load, but allows you to have an interval greater than 60 seconds.

Would this approach solve things?


I am really getting fed up with the SSL connection. Xojo's MySQL Community plug-in does not support it very well, and is not maintained as well as I would like. It is also inconsistent in its behaviour across platforms.

I would like to leave it as it is now, and hope that Xojo will support it better in a future release of their IDE. For now I will put a comment on the tab that things are "experimental".

Encrypting the data ourselves seems like a good idea instead. The interface to configure that would be simple: provide a password to encrypt, and one to decrypt. Switch these on the FileMaker side.

Maybe some options to encrypt only fmsadmin traffic, and not the logs, so the less critical things remain readable.


  • 3 weeks later...
On 8/11/2018 at 12:14 PM, Peter Wagemans said:

I am really getting fed up with the SSL connection. Xojo's MySQL Community plug-in does not support it very well, and is not maintained as well as I would like. It is also inconsistent in its behaviour across platforms.

I would like to leave it as it is now, and hope that Xojo will support it better in a future release of their IDE. For now I will put a comment on the tab that things are "experimental".

Encrypting the data ourselves seems like a good idea instead. The interface to configure that would be simple: provide a password to encrypt, and one to decrypt. Switch these on the FileMaker side.

Maybe some options to encrypt only fmsadmin traffic, and not the logs, so the less critical things remain readable.

Sounds good. When will the next version be ready? 🤩


Hi everyone.

For some reason I never got notified of activity here, which is why I had to go to Peter’s home in order to learn about it...

Just wanted to let you know that I will respond to this next week.

 

Regarding process log entries filling up fast:

I’ve been using a ExecuteSQL script step to clean up this table, which should be in the FileMaker file. You just need to configure it. To me, that’s easier than using the MySQL workbench and it is working well, since it’s a SQL command.

As said, I will get back next week.

 

Thank you for your interest and feedback.


33 minutes ago, Claus Lavendt said:

Hi everyone.

For some reason I never got notified of activity here, which is why I had to go to Peter’s home in order to learn about it...

I never do either. I shrugged it off, thinking it was probably just me, but obviously it must be something on FMForums. I'm set to get notified on any threads I've commented on, but never get any notifications. Correction: in the past year, I may have been notified maybe once.

Perhaps the admins can shed some light on this?


  • 4 weeks later...
  • 5 months later...
  • 2 weeks later...

I have put this project aside. I think it works well enough for smaller setups; those with more servers should maybe consider a commercial product.

WB was written in Xojo and requires an MBS plug-in license. If you are serious about continuing work on this, let me and/or Claus know in a PM. I think I will transfer my code to Claus and let him decide on all this. I am a full-time FileMaker developer and should not engage in too many side projects; I have learned that lesson in the past.

A centralised service that all FileMaker Servers push information to is one way to do things, but a model where JSON can be used to pull information and send commands through that same API requires zero deployment, and seems to have become the better approach. This was not yet available when we started the project. Of course, not all features of WB are supported by the Admin API yet, but that is a good reason to ask FileMaker about it. FileMaker is going for subscription-based hosting; maybe the server will evolve more quickly once they get first-hand experience with the problems encountered when hosting a lot of servers.


I've built a monitoring DB for our company using the Data API and scheduled scripts. It pings just once a minute, but that is fast enough for us, and it also allows us to send CPU load, RAM usage, and HD usage, all quite efficiently. It still does not allow remote control of the server, since it depends on the scripting engine, but at least it is a two-way controlled setup (if the pings do not come in on time you get a message, and if the monitoring server is not reachable you get a message). Ideally, you'd use two monitoring servers that control each other for additional safety.

So far we are quite satisfied with the setup, and it is much easier to keep the data produced small enough.

