_ian

Members
  • Posts
    62
  • Joined
  • Last visited
  • Days Won
    1

_ian last won the day on October 6 2016

_ian had the most liked content!

Profile Information

  • Title
    Founder
  • Industry
    Tech Strategy
  • Gender
    Not Telling
  • Location
    London

Contact Methods

  • Website URL
    https://www.transformingdigital.ai

FileMaker Experience

  • Skill Level
    Expert
  • FM Application
    19

Platform Environment

  • OS Platform
    X-Platform
  • OS Version
    Mojave

FileMaker Partner

  • Certification
    9
    10
    11
    12
    13
    18
  • Membership
    FileMaker TechNet
    FileMaker Business Alliance

_ian's Achievements

Enthusiast (6/14)

Recent Badges

  • First Post
  • Collaborator
  • Conversation Starter
  • Week One Done
  • One Month Later

Reputation: 4

  1. Thanks, Jesse. My current thinking is to set it to ignore all deletions. I'll also revise my selection criteria to replicate only sales invoices that have been finalised. Those are never edited, so that avoids the need to replicate any deletions. (The first sketch after this list shows the warehouse-side checks.)
  2. Hi Jesse. FileMaker is configured as the hub. Ordinarily we do want to sync record deletions; however, that is probably only true for some tables. Sales invoice lines, for example, require deletions to be synced because project managers will add and delete lines while the invoice is in a draft state. Is that something we can set on a table-by-table basis? We will not be archiving any invoice-related data.
  3. We've been doing a one-way sync from FileMaker Server to a data warehouse on SQL Server for years: MirrorSync runs every 15 minutes to pump the data from FileMaker tables to SQL Server tables. We'd like to archive records in the FileMaker system but retain them in the data warehouse, primarily for performance and usability reasons. The FileMaker users have no need to refer to records that are more than 5 years old, of which there are a few hundred thousand. Hence we'd like to archive them, but the finance team doesn't want to lose them from their data warehouse. Is there a way to remove the records from FileMaker while retaining them in the data warehouse? Ideally we'd run one sync that didn't replicate the deletions, then return to business as usual after that sync was done (see the first sketch after this list for how we'd verify the warehouse keeps the archived rows). ETA: Sorry, I'd not read the prior post, which sounds quite pertinent. I'll look into the selection criteria through the script, as that may be able to accommodate our needs.
  4. I just had to implement "Hotdog / Not Hotdog" in FileMaker... HotDog.mp4
  5. One of our IT guys has had a look at the snapshot of the VM from last night and found an alias pointing to a directory in the 360Works folder, so he's re-enabled IIS and is putting it back the way it was. Interestingly, the important bit, the sync process itself, just kept on going with no problems.
  6. Hi. We're doing a one-way sync from our FileMaker Server to a SQL Server instance. MirrorSync is installed on the SQL Server instance, which is configured as the spoke. This has been running for several years now with no problems. Do we need to run IIS on the SQL Server box? IT is on a mission to disable/remove all unnecessary services from our server VMs and asked me whether IIS is required here. I somewhat rashly said "I don't think so", and they've disabled IIS. I'm not seeing any error messages in the MirrorSync logs and data is synchronising with no problems; however, my MirrorSyncConfigClient.jnlp client doesn't seem to be responding. So is the config client dependent upon IIS? I suspect it is, but everything else is working as expected... thanks
  7. Hi. I've come up with a solution for a problem I had last week and wanted to post it here in case it helps anyone else. We had a strange thing with our MirrorSync server: its internal database got out of sync with the FileMaker Server (hub) and the SQL Server (spoke), so it kept trying to add Suppliers that had already been added to SQL Server, returning an error message for each one. The suppliers had already been inserted into the SQL Server spoke, but MirrorSync kept trying to add them again. I checked, and we had no duplicate keys in the FileMaker database, so that wasn't the problem. My solution was to temporarily switch off SQL Server's duplicate-key errors on the table in question. First, on the SQL Server, I executed: ALTER TABLE [Suppliers] REBUILD WITH (IGNORE_DUP_KEY = ON). Then I ran a sync: the redundant additions failed silently and MirrorSync updated its internal database to record the additions. Then back to the SQL Server to run: ALTER TABLE [Suppliers] REBUILD WITH (IGNORE_DUP_KEY = OFF). Run a sync again and everything's happy now. (The full sequence is consolidated as a script after this list.)
  8. Given what you want to do, you might consider looking at MirrorSync by 360Works. It's a synchronisation product that lets you sync record creations, deletions and updates between two databases. In your scenario you could configure it to sync record changes from the MySQL tables into your FileMaker system. We use it to read from our FileMaker system into a data warehouse in SQL Server.
  9. I'm setting $_import_directory to the result of Let ( _path = Substitute ( Get ( DocumentsPath ) ; "/" ; "\\" ) & "Manifest_Detail_Imports\\" ; Right ( _path ; Length ( _path ) - 1 ) ) — the Right ( ) call strips the leading slash left over from FileMaker's path format — and then calling BE_ListFilesInFolder ( $_import_directory ; "csv" ) to list the contents of \Documents\Manifest_Detail_Imports on my server. Doing it server-side is massively faster and less troublesome than using a client to run imports.
  10. Thanks, Jesse. That sounds like it will sort that issue very nicely. No need to post that version. I'm off to Berlin for dotFMP at silly o'clock tomorrow morning and have a load of other stuff queued up for when I'm back, so I'll just download and install when you do the next release. I've deleted all the old log files, so I'm not in a massive rush.
  11. I agree with Brent; we've been running WebDirect for about a year now and it's been absolutely rock solid. Slow, but rock solid. If it were my files sitting on a server that was being restarted every day, I'd be asking some questions of the hosting company.
  12. And... while I'm on the topic of features... I've just investigated last night's sync failure. I'm pretty sure it's down to running out of disk space. A way to restrict the maximum size of the log files would be nice. Or, what might be simpler: provide a sync warning (maybe hourly) when disk space is getting low. Do you prefer receiving feature requests directly via email or through this forum? thanks
  13. Hi Jesse. It happened again this morning; here's a screenshot. While I'm at it, another feature suggestion: perhaps the ability to set a user-defined limit on the number of error emails? I had a sync problem at around 04:57 this morning, and by the time I got into the office I found close to 100 emails letting me know there was a problem. If I could limit it, it would still cover the situation of a temporary glitch which resolves itself (e.g. the error if someone is deleting while the sync happens) but also let me know when it's a more major issue (e.g. this morning's "Last sync failed: java.sql.SQLException: Connections could not be acquired from the underlying database!"). thanks ian
  14. I'll send you an example the next time this happens.
  15. Hi guys. Sync is running beautifully, really fast. We're able to sync every 120 seconds, so our financial data warehouse is effectively real-time. Now it's been running a while, I do have one suggestion that I would find helpful: when I have data-type conversion errors (typically converting string to decimal) it would be really useful if the error message in the MirrorSync app window told me which table and primary key. As it stands, when these errors happen I know that there's a data-type issue, but I need to dig through the log files to find which table and record (see the last sketch after this list for one way to locate the offending rows on the SQL side). I'm tightening up data types throughout the system, but it's not a common problem; once every few weeks is typical. I suppose the other alternative would be for me to write an import format so Logstash and Kibana can deal with it; then I could just take advantage of all that Elasticsearch goodness. But I don't really have the time...
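
A minimal warehouse-side sanity check for the archiving scenario in posts 1 and 3: it verifies that the 5-year-old records survive in the data warehouse after they are purged from FileMaker, and that only finalised invoices were ever replicated. The table and column names (dbo.SalesInvoices, InvoiceDate, Status) are hypothetical stand-ins, not the actual schema.

    -- Run on the SQL Server data warehouse (spoke), not on FileMaker.
    -- dbo.SalesInvoices, InvoiceDate and Status are assumed names.

    -- 1. The archived rows (older than 5 years) should still be present
    --    after the FileMaker-side purge.
    SELECT COUNT(*) AS archived_rows
    FROM dbo.SalesInvoices
    WHERE InvoiceDate < DATEADD(YEAR, -5, GETDATE());

    -- 2. With the finalised-only selection criteria in place, no draft
    --    invoices should ever reach the warehouse; expect zero here.
    SELECT COUNT(*) AS draft_rows
    FROM dbo.SalesInvoices
    WHERE Status <> 'Finalised';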
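
The duplicate-key workaround from post 7, consolidated into a single script. The two ALTER TABLE statements are exactly the ones quoted in the post; the MirrorSync runs happen outside SQL Server, so they appear here only as comments.

    -- Workaround for MirrorSync re-sending rows that already exist on the spoke.
    -- [Suppliers] is the table named in the post; adjust for your own table.

    -- Step 1: silence duplicate-key errors on the table's unique index.
    ALTER TABLE [Suppliers] REBUILD WITH (IGNORE_DUP_KEY = ON);

    -- Step 2: run a MirrorSync sync. The redundant INSERTs fail silently and
    -- MirrorSync records the rows as added in its internal database.

    -- Step 3: restore normal duplicate-key enforcement.
    ALTER TABLE [Suppliers] REBUILD WITH (IGNORE_DUP_KEY = OFF);

    -- Step 4: run a sync again to confirm everything is clean.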
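
One way to hunt down the string-to-decimal conversion failures described in post 15, assuming SQL Server 2012 or later (for TRY_CONVERT). The table, column and key names (dbo.InvoiceLines, Amount_Text, InvoiceLineID) and the DECIMAL(18,2) target type are hypothetical.

    -- List the rows whose text value cannot be converted to DECIMAL(18,2);
    -- TRY_CONVERT returns NULL instead of raising an error on failure.
    SELECT InvoiceLineID, Amount_Text
    FROM dbo.InvoiceLines
    WHERE Amount_Text IS NOT NULL
      AND TRY_CONVERT(DECIMAL(18, 2), Amount_Text) IS NULL;

Once the offending rows are known, the source fields in FileMaker can be corrected (or the warehouse column types tightened) before the next sync.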