Feature request


Hi guys

Sync is running beautifully, really fast. We're able to sync every 120 seconds, so our Financial Data Warehouse is effectively real-time.

Now that it's been running a while, I do have one suggestion that I would find helpful. When I get data type conversion errors - typically converting string to decimal - it would be really useful if the error message in the MirrorSync app window told me which table and primary key were involved. As it stands, when these errors happen I know there's a data type issue, but I need to dig through the log files to find which table and record. I'm tightening up data types throughout the system, but it's not a common problem - once every few weeks is typical.
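To illustrate, something along these lines is what I'm imagining - purely a sketch from my side, and the table, primary key, and method names are all hypothetical rather than anything from MirrorSync's internals:

```java
import java.math.BigDecimal;

// Sketch only: surface the table name and primary key in the conversion
// error, so the app window shows which record broke, not just the type issue.
public class ConversionErrorExample {

    static BigDecimal toDecimal(String raw, String table, Object primaryKey) {
        try {
            return new BigDecimal(raw.trim());
        } catch (NumberFormatException e) {
            // Wrap the low-level failure with the context I'd like to see
            throw new IllegalArgumentException(
                "Conversion from String to DECIMAL failed in table '" + table
                + "', primary key " + primaryKey + ", value \"" + raw + "\"", e);
        }
    }

    public static void main(String[] args) {
        try {
            toDecimal("12,50", "Invoices", 1042); // comma makes this unparseable
        } catch (IllegalArgumentException e) {
            System.err.println(e.getMessage());
        }
    }
}
```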

I suppose the other alternative would be for me to write an import format so Logstash and Kibana can deal with it. Then I could just take advantage of all that ElasticSearch goodness. But I don't really have the time...
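For what it's worth, the kind of import format I mean would just be one JSON object per log line, which Logstash's json codec can ingest without custom parsing - a hypothetical sketch, with field names invented by me:

```java
import java.time.Instant;

// Hypothetical sketch: emit one JSON object per log line so Logstash's
// json codec can ingest it directly. All field names are my own invention.
public class JsonLogLine {

    static String format(String level, String table, String pk, String message) {
        return String.format(
            "{\"ts\":\"%s\",\"level\":\"%s\",\"table\":\"%s\",\"pk\":\"%s\",\"msg\":\"%s\"}",
            Instant.now(), level, table, pk, message.replace("\"", "\\\""));
    }

    public static void main(String[] args) {
        System.out.println(format("ERROR", "Invoices", "1042",
            "the conversion from String to DECIMAL is unsupported."));
    }
}
```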

Could you copy and paste an example of one of the errors you're seeing? It will help me find the spot in the code to make the change.

I'll send you an example the next time this happens.

  • 2 weeks later...

Could you copy and paste an example of one of the errors you're seeing? It will help me find the spot in the code to make the change.

Hi Jesse

It happened again this morning. Here's a screenshot.

While I'm at it, another feature suggestion: perhaps the ability to set a user-defined limit on the number of error emails? I had a sync problem at around 04:57 this morning, and by the time I got into the office I found close to 100 emails letting me know there was a problem. If I could limit it, it would still cover the situation of a temporary glitch which resolves itself (e.g. the error message when someone is deleting while the sync happens) but also let me know when it's a more major issue (e.g. this morning's "Last sync failed: java.sql.SQLException: Connections could not be acquired from the underlying database!").
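A crude sketch of the kind of limit I'm imagining - everything here is my own invention rather than how MirrorSync actually sends mail:

```java
// Hypothetical sketch: send the first N error emails in a window, then
// suppress the rest; when the window rolls over, note how many were swallowed.
public class ErrorEmailLimiter {

    private final int maxPerWindow;
    private final long windowMillis;
    private long windowStart;
    private int sent;
    private int suppressed;

    ErrorEmailLimiter(int maxPerWindow, long windowMillis) {
        this.maxPerWindow = maxPerWindow;
        this.windowMillis = windowMillis;
        this.windowStart = System.currentTimeMillis();
    }

    synchronized boolean allowEmail() {
        long now = System.currentTimeMillis();
        if (now - windowStart >= windowMillis) {
            if (suppressed > 0) {
                System.out.println(suppressed + " further error emails suppressed");
            }
            windowStart = now;
            sent = 0;
            suppressed = 0;
        }
        if (sent < maxPerWindow) {
            sent++;
            return true;  // caller sends the email
        }
        suppressed++;
        return false;     // caller skips it
    }

    public static void main(String[] args) {
        ErrorEmailLimiter limiter = new ErrorEmailLimiter(5, 60 * 60 * 1000L);
        for (int i = 0; i < 100; i++) {
            if (limiter.allowEmail()) {
                System.out.println("email " + (i + 1) + " sent");
            }
        }
    }
}
```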

thanks

ian

[Attachment: Screen Shot 2015-06-02 at 09.08.33.png]

And, while I'm on the topic of features... I've just investigated last night's sync failure. I'm pretty sure it's down to running out of disk space. A way to restrict the maximum size of the log files would be nice. Or, what might be simpler: a sync warning (maybe hourly) when disk space is getting low, as in the sketch below.
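Something as simple as this would do - a sketch that assumes a log path and an arbitrary 500 MB threshold:

```java
import java.io.File;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of an hourly low-disk warning; the log path and
// the 500 MB threshold are placeholders, not anything from MirrorSync.
public class LowDiskWarning {

    public static void main(String[] args) {
        File logDir = new File("/var/log/mirrorsync"); // assumed location
        Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(() -> {
            long freeMb = logDir.getUsableSpace() / (1024 * 1024);
            if (freeMb < 500) {
                // In the real feature this could go out as a sync warning email
                System.err.println("WARNING: only " + freeMb
                    + " MB free on the log volume; syncs may start failing");
            }
        }, 0, 1, TimeUnit.HOURS);
    }
}
```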

Do you prefer receiving feature requests directly via email or through this forum?

thanks

Hi Ian - I've added a feature to the next release to delete log files older than 2 weeks. If you want me to post that version, let me know.
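The gist is roughly this - a simplified sketch rather than the actual shipping code, with an illustrative log path:

```java
import java.io.File;
import java.util.concurrent.TimeUnit;

// Simplified sketch of the idea (not the shipping code): remove any .log
// file in the log directory whose last-modified time is over two weeks old.
public class OldLogCleanup {

    public static void main(String[] args) {
        File logDir = new File("/var/log/mirrorsync"); // illustrative path
        long cutoff = System.currentTimeMillis() - TimeUnit.DAYS.toMillis(14);
        File[] logs = logDir.listFiles((dir, name) -> name.endsWith(".log"));
        if (logs == null) return; // directory missing or unreadable
        for (File log : logs) {
            if (log.lastModified() < cutoff && log.delete()) {
                System.out.println("Deleted old log: " + log.getName());
            }
        }
    }
}
```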

Regarding your original point about "the conversion from String to DECIMAL is unsupported." - for some reason I could not find that text (or even fragments of it) in my source code, so it's hard to know where to make the change. The next time that happens, use the 'send problem report' feature to send us the log file. If I have the log file, it will have the line numbers showing me where in the source code that error is generated.

Thanks Jesse. That sounds like it will sort that issue very nicely. No need to post that version. I'm off to Berlin for dotFMP at silly o'clock tomorrow morning and have a load of other stuff queued up for when I'm back. I'll just download and install when you do the next release. I've deleted all the old log files, so I'm not in a massive rush.

  • 1 year later...
On 22 May, 2015 at 11:49 AM, _ian said:

I suppose the other alternative would be for me to write an import format so Logstash and Kibana can deal with it. Then I could just take advantage of all that ElasticSearch goodness. But I don't really have the time...

I'm not sure I would involve Logstash and Kibana in such a matter; however, writing the FileMaker data to Elasticsearch could improve search speed, especially when the search criteria are in related fields.
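As a rough sketch of what I mean - host, index, and field names are all assumptions on my part - pushing one record into Elasticsearch over its plain REST API:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Rough sketch: index one FileMaker record into Elasticsearch over plain
// HTTP. Host, index, type, and field names are all assumptions.
public class EsIndexSketch {

    public static void main(String[] args) throws Exception {
        String doc = "{\"invoiceId\":1042,\"customer\":\"ACME\",\"total\":12.50}";
        URL url = new URL("http://localhost:9200/filemaker/records/1042");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(doc.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("Indexed, HTTP " + conn.getResponseCode());
    }
}
```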

42 minutes ago, ggt667 said:

I'm not sure I would involve Logstash and Kibana in such a matter; however, writing the FileMaker data to Elasticsearch could improve search speed, especially when the search criteria are in related fields.

You could remove a couple of layers of complexity and just do it with Nutch and SOLR.

How would you query Nutch or SOLR from within FileMaker? SOLR and Elasticsearch are both interfaces on top of Lucene. SOLR does not have aggregations, and as such wouldn't have the features I would need to speed up searches. I'm not sure which complexity you are trying to get rid of.

"Don't forget about the aggregations ElasticSearch provides for those requiring OLAP like functionality. Solr cloud has only limited faceting. And if you need alerts on aggregations ES percolation delivers." – http://stackoverflow.com/questions/10213009/solr-vs-elasticsearch
