So I have a hosted solution with a dashboard containing a couple of portals. Scrolling a portal, even by one screen, is delayed by a few seconds: I click the scroll bar or try to drag it, and it takes about 3 seconds before it moves. Running locally, it's instantaneous. Running on a dev server on my LAN, there's a slight delay, but it's minimal.
Having read various threads about this situation, my first thought was that it was due to a filtered portal, so I removed the filtering, but that made no difference.
I also read various posts about unstored calcs and other factors that would cause all the data to have to be transferred to the client over the WAN.
But here's the thing - at the moment, there are only about 20 records in the database related to the particular portal.
Clearly, moving that data can't be the primary issue - right?
I do have a lot of ExecuteSQL calculations in related tables, and I've read that those can cause poor performance. But wouldn't that only be a factor when there are a lot of records?
Are there design/schema choices that would cause significant lag regardless of the amount of data?
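To illustrate the kind of calc I mean, here is a sketch of an unstored ExecuteSQL calculation (table and field names are hypothetical):

```
// Unstored calculation field in a related table.
// ExecuteSQL ( sqlQuery ; fieldSeparator ; rowSeparator { ; arguments... } )
// An unstored calc like this re-evaluates on the client, which generally has
// to pull the queried table's records over the WAN before the SQL can run --
// so the cost depends on the size of the queried table, not the portal.
ExecuteSQL (
    "SELECT COUNT(*) FROM OrderLines WHERE OrderID = ?" ;
    "" ; "" ;
    Orders::ID
)
```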
By Hampden Tech
I am seeing an error after syncing when I have a new record on the spoke (FileMaker DB) to be synced into a Microsoft SQL server table on the hub. The MirrorSync setup uses a custom primary key to do the insert into the SQL table.
According to the sync window as well as the sync log, the insert fails. However, when I review the table in SQL Server on the hub, the record has indeed been inserted properly, yet for some reason MirrorSync reports it as an error. In addition, MirrorSync does not seem to remember the PK of the newly inserted hub record, so if I do another sync on the spoke, it inserts the record again. If I run the sync 3 times, I get 3 new records.
This is the error that I see in the log. No further information is available.
table Hub node MS SQL Server/KeyPhrases failed for source nodeId '275'
This also happens on another table.
Is this a bug in version 4 of MirrorSync or is there some configuration step that is missing?
Any help would be greatly appreciated, as this is preventing us from releasing this solution to production.
By Hampden Tech
I have a situation that is causing me some issues. In my MirrorSync configuration I am able to download data from the HUB, make changes in the spoke DB, and then sync back up, and the changes are properly reflected in the HUB (Microsoft SQL) database.
However, I tried to add a new record on the spoke DB and when I did a sync operation, the error on the HUB indicated that the primary key field could not be null.
We manage the primary keys in our SQL database ourselves, so the next number for each table is stored and updated within the DB. MirrorSync must be expecting that our HUB table uses an Identity column to set the primary key, and I see no way of specifying the primary key value in the HUB database myself. In other words, if I could match up a value in the spoke DB with the primary key and then insert it, I could use an insert trigger on the HUB database to catch this and assign it a new number in sequence. For example, I could set up a new field in the table called "Temp PK" and set it to some magic number like "999916". Then I can trap for this in an insert trigger in SQL and assign the key the correct way. The only issues that I see with this are:
1. I'm not sure how MirrorSync will be able to match up this new HUB PK with the Spoke PK, and
2. I'm not sure if there would be any conflict issues with multiple users syncing at the same time
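To make the trigger idea concrete, here is a minimal T-SQL sketch (the table, column, and sentinel names are hypothetical, and it assumes a NextKeys table that this database already uses to hand out numbers; it only handles single-row inserts):

```sql
-- Hypothetical schema: KeyPhrases(PK int PRIMARY KEY, Phrase nvarchar(255))
-- plus NextKeys(TableName, NextValue) managed by the database itself.
CREATE TRIGGER trg_KeyPhrases_AssignPK
ON dbo.KeyPhrases
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @next int;

    -- Atomically reserve the next key for this table; the locking hints
    -- keep two concurrent syncs from grabbing the same number.
    UPDATE dbo.NextKeys WITH (UPDLOCK, HOLDLOCK)
    SET @next = NextValue,
        NextValue = NextValue + 1
    WHERE TableName = 'KeyPhrases';

    -- Replace the magic sentinel PK with the reserved number;
    -- pass any other PK through unchanged.
    INSERT INTO dbo.KeyPhrases (PK, Phrase)
    SELECT CASE WHEN i.PK = 999916 THEN @next ELSE i.PK END,
           i.Phrase
    FROM inserted AS i;
END;
```

Note that this sketch addresses the assignment itself but not concern 1 below: the spoke still has no way to learn the trigger-assigned PK unless MirrorSync reads it back after the insert.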
Any advice on how to work around this would be helpful.
FMS 15, Windows Server 2012 host on Azure. Plug-in: 360Works ScriptMaster. Purpose for using the plug-in: zipping PDF files. The problem we have:
A large number of temp files are created on the client machines that run FileMaker Pro to connect to the server, and these temp files are never removed. What I am looking for:
A script step that can keep these temp files from accumulating in the temp folder. How are these files created? I attached a screenshot.
Let’s say we have two related tables: “Invoice” and “Invoice_Item”. We could create a calculation field in the “Invoice” table called “total_amount” with this formula:
total_amount = Sum (Invoice_Item::amount)
This field would have a negative impact on performance when it appears on a layout, since it would have to be defined as unstored because it references a field from a related table.
Now let’s suppose this field is not used in any scripts, tooltips, conditional formatting, etc. Would the performance of the database be negatively affected ONLY when this field appeared on a layout?
In other words, would adding an unstored calculation field to a table incur a performance penalty even in the “unreal” case where the field didn’t appear on any layout or in any script, conditional formatting, etc.? Thanks in advance!