daveinc

Members
  • Posts: 53

Reputation: 0
Community Answers: 1

  1. Thanks Wim. I will dig around and see if I can identify anything else that is different.
  2. Hi Wim, Thanks for the Zabbix recommendation. I am using it to monitor all of our well-running servers now (although I never need to look at it, because they all run very well!). Is there anyone who would know what is indicated when the situation I describe above with the FMTEMPFM files is occurring? I am desperately trying to figure this out, with no luck so far, and I consider you to be the authority (or at least someone who would know who the authority is!) on all things FM. Thanks in advance for any suggestions, Dave
  3. Hi Wim, thanks for responding. I already know which design issues are problematic in this solution. I'm not looking for tips on redesigning it; I'm looking for the reason a slower, older hardware setup performs MUCH better than a newer, faster hardware setup with the exact same solution, load, FM Server settings, and hardware settings (the number of CPUs and GB of RAM are identical; as for disk, the poorly performing system has more free space, SSD drives, Windows on C: (150 GB free), and the data files on E: (180 GB free), while the well-performing system has far less free disk space, Fast SCSI drives, and both Windows and the data files on a single C: drive (34 GB free)). I have been looking at this in depth on both sets of hardware for a long time.

     On the older, slower hardware, the solution does not sit at the top of the disk stats in Windows Resource Monitor writing 3-6 MB per second to FMTEMPFM files for ten minutes to half an hour at a time. The older hardware does write to the FMTEMPFM files, but in short bursts of up to 1.5 MB per second for 30 seconds at most. CPU usage never rises above 7% on either system and looks very similar on both. RAM stays very steady on both, with about 5 GB in use, 28 GB on standby, and 26 GB free.

     The solution itself is not the issue, because it performs quite well on the older, slower hardware. I just want to know what is happening when the System process (not the fmserver process) is writing tons of data to FMTEMPFM files for a very long time, as it does when the new hardware is performing poorly. When the new hardware is performing well, very little data is written to the FMTEMPFM files, just as on the older hardware. I'm guessing there is something wrong with either the CPUs or the RAM on the newer hardware that does not show up in Resource Monitor. Either that, or faster CPU and disk hardware is not necessarily better for hosting FM Server in certain peculiar situations.

     I would not be bothered by this and would just leave the solution on the older, slower hardware, but my boss does not like the idea of investing in new hardware only to find it useless for the main reason it was bought in the first place! The old hardware was supposed to be retired and on the scrap heap by now!
  4. Hi, I have a performance conundrum that I have not been able to figure out for the past month and a half or so, and I am looking here for some guidance. We have a nine-year-old FM system that we run our entire production operation on, with approximately 150 users in nine locations across the US and Canada. Our production server is a Windows Server 2012 VM with 14 CPU cores, 60 GB of RAM, and an SSD-array SAN, with separate drives configured for the operating system and the data. All users outside the facility housing the server use Remote Desktop to access the system.

     The design of this system is not optimal for performance, but it had been running very well for the last two years, until early February 2020. In early February it began to stall, no matter what we did, whenever we had more than 120 or so users connected and working normally. We identified several severely taxing actions, mainly Finds on related tables with millions of records, and eliminated those. No luck. We added unnecessary extra CPUs, RAM, and disk space, to no avail. We created a completely new VM with a fresh install of FM Server 16. No love. Finally, in desperation, we moved the server to older, slower hardware with a non-SSD hard drive array and, voilà, the system works fine again. This older VM is similar in every other way: Windows Server 2012, 14 (slower) CPU cores, 60 GB of (slower) RAM, and a Fast SCSI disk array. As an anti-bonus, thinking this move would be temporary, we put both the OS and the data on a single C: drive on this older hardware. It works splendidly on this lesser setup.

     We had all variety of hardware experts in to make sure the newer/faster setup was performing correctly. We updated all firmware and restarted the whole setup. All benchmark tests show the newer system to be considerably faster in all phases, especially disk. We have no problems with any other VMs on this newer/faster setup (including some less intensive FileMaker Servers). The one thing that occurs on the newer/faster machine that DOES NOT happen on the older/slower one is that it stalls, and the disk is consumed by writing tons of data to FMTEMPFM* files for an extended period of time, while there is no increase in data being written to the .fmp12 files. This particular FM Server and dataset is the only one this happens to. We have two other FM Servers on that hardware that have 200-plus users 24 hours a day and nary an issue.

     Does anyone know what is happening when the disk monitor shows the System process writing tons of data to FMTEMPFM files while writing no more data than normal to the .fmp12 files? Thanks in advance for any guidance. Dave
  5. Value lists come from the Production Server? Value lists can be a critical part of development. Or is it only value lists that users can edit (which I never use anywhere, ever)? I just want to be clear on this.
  6. Yes. That does not help. I'm even committing the parent record just for good measure.
  7. Wim, I understand the transactional model, and in fact I am using it in this script repeatedly, but again, that is not the question I am seeking an answer to. Because of the nature of our parts list, I have to create the initial Base Estimate line item records in their native table. The issue is NOT that my transactional edits, made after all of the records for the multilevel estimate are created, are failing at any level. The script in question creates the initial Base Estimate line items and all of the initial Tiered Estimate line items based on the user's selection of a Part made up of many materials and work centers. After the initial Base Estimate line items are created in the Line Items table, my script uses relational transactions to create all of the initial Tiered Estimate line items. It successfully completes all of these creation transactions, but it misses the first record in the relationship and throws no error, because as far as FM is concerned at that point, the first related record DOES NOT EXIST.

     I compared the number of related records for the Tiered Estimates at the end of the script with the number of records in the Base Estimate, and it is always the same, without any errors (but in reality there is one less record in the Tiered Estimates). I added a Perform Script at the end of the creation script just to compare the number of records a second time from a different layout with the same underlying table, and the second script sees the discrepancy.

     Why would the first record in the relationship be the only one that isn't seen (until after the script finishes and I switch layouts, where all is well), and ONLY when a certain script is used, when it never once happened through 12,000 previous estimates created with the exact same procedure/tables/relationships in FM11? And why would a .01-second pause, added after the initial record creation but before I use the relational transactions to create the tiered records, eliminate the problem entirely (admittedly only on 100 estimates or so, so far)? You have to remember that this problem is there when there are NO other users connected to the server after hours, and it can be reproduced consistently every single time. If I run the script in the old FM11 system from a month ago (which has around 200 fewer estimates), I can't make it happen even once, and it never happened during the 12,000 estimates made in 11 with all the users connected.

     The transactional model is obviously the best method, especially for updating the existing Tiered Estimate records, but I would like to know why a very recently created related record is not seen immediately by FM12 when it clearly is by FM11. This makes me question using relationships for anything where it is important that the script knows what is there when the related record was created in its native table earlier in the same script. It seems I would have to do a lot of extra layout switching or pausing if I can't rely on FM12 relationships to do this work.
  8. Hi Wim, These triggers are all within the estimate, and the estimate can only be edited by its owner. The estimate MUST update all of the iterations if there is a change to any one of tens of fields that affect price in the Base Estimate. Otherwise our estimators would spend their entire day creating new estimates for minor changes to the original and piling up hundreds of old, worthless records. But that is not the issue anyway. The issue is that a record created in another table isn't seen via any available function through a direct relationship unless a pause is put into the script. Committing the records has no effect, nor does resetting the primary key. The pause HAS to be there for FM to see that the record is there through the relationship. I have never had this issue before in 15 years of FileMakering. The question is whether I should add this pause as a safeguard to all of my scripts that assume FM can see a record through a relationship when that record was created less than "x" seconds earlier in the same script.
  9. Hi guys, me again. I've already solved this issue myself, but I found it aggravating enough that I thought I would post it here and see if anyone else has had anything similar. I have a complicated script that builds an Estimate after the entry of some base information. It is quite fast for the work that it does and has been in use for a couple of years without any problems. The script builds a multi-tiered Estimate for printed materials based on discounting at different quantity levels. The user builds an Estimate from materials and work centers, plus markup, for the base quantity, and then the script creates duplicate estimates for all of the higher quantities desired.

     Since the upgrade to FM13, the users have reported that the very first material line present in the base estimate was missing on all of the higher-quantity estimates constructed via a certain sequence and never edited. There are multiple triggers to update all of the estimates when the base estimate is changed, and if any of those triggers but one particular one is used, the missing first lines are generated properly when the estimate is edited after its original incarnation. If I run the offending script with the debugger on and a stop right before the first line item is duplicated, it always works perfectly. We looked through the 12,000 or so estimates generated by FM11 with the same script on the same server, and there was never a missing first line item.

     I had to add a .01-second pause to the script in place of the debugger stop, and now it works correctly every time (so far). The script checks for the existence of a related record through a relationship at that spot; the check would not work using IsValid, Count, or an unstored FoundCount until the pause was added (a sketch of the workaround appears after this list). I am just curious whether I need to be on the lookout for this anywhere I rely on the existence of a related record created earlier in the same script (which is the case in many of my scripts), because it could be wreaking havoc already with no one noticing yet. Thanks, Dave
  10. Just as a follow up, the FM issues have solved themselves with no direct intervention on my part. I guess it simply needs a week or two to get comfortable? My console still won't allow me to add Scheduled Scripts, but that's a minor issue compared to what was happening before.
  11. Thanks for your advice, Josh. I knew the way layouts are constructed had changed, but I didn't think it would be quite so dramatically slow. I just visited Richard Carlton's site and didn't find a video on converting an FM11 layout to a new theme. Do you know where I can find one? YouTube, maybe?
  12. The server is a Cove XFM setup that uses a RAID array of solid-state drives. Top of the line in every facet. It is the exact same machine we had FM11 Server on, reformatted and set up with FMS13, running Windows Server 2008 R2. I don't think it's a server issue, because the server stats are fine all the time. We also bought a number of new client machines to meet the requirements. It's almost as if the client can't redraw the screen at an acceptable rate or is hogging resources for something. It happens at different times to different clients, on both Macs and PCs, but again far too often to ignore. Regardless of the new features and their neato abilities, you have to be able to use the database for the upgrade to be worth it. I'm hoping the theme change will be a simple fix that won't create a ton of work for us. It just doesn't make sense to me, because everything is exactly the same as it was on Friday, when 11 was zooming along wonderfully. I have restarted the server after hours a couple of times as well, but still the behavior persists.
  13. Thanks for the quick replies, guys. We tested extensively using a similar (but not identical) setup on a sandbox network. Of course, we couldn't hire a team of 150 testers from all nine of our locations to simulate a real load; we had 5 testers, and even then we were not performing work at the rates our expert users do. Scripts and finds are not the issue now; typing in completely unformatted fields is. All files were checked for index problems and compacted before conversion. They were not slow in any fashion during testing; in fact, everything seemed faster, but that might have been because we only had 5 users. Finds are only slow when users perform them on unindexed or related fields, and we have been going through and closing all of those loopholes or creating new ways to get to that information. We've never had any issues on FM11 that caused literal work stoppage and errors like this as long as the server was up and functional. What effect will changing the theme have? I don't want changes to our layouts. There were a host of minor display issues that were easy to resolve, and I expected that. I would be thrilled if our issues were limited to slow scripts and bad finds, but not being able to type data into a field is a whole new level of problem. It's almost as if there's an intense Flash movie running within the FM window during these times. No other applications seem to be affected at all. Thanks again for any advice you may have.
  14. Hi guys, We recently upgraded our servers and clients to FileMaker Server 13 / Pro 12/13, and FM is practically unusable at times (far too often), where simply typing a character into a field takes 10 seconds. It is literally unusable in this state, especially since our users are accustomed to 11, which had its own issues with slow scripts and finds, but never anything like this that makes it impossible to even work and causes entry errors all over the place while FM catches up with simple data entry. This is a huge database solution used by 150 users in nine locations, and recreating it from scratch would literally take years with our current FM staff. There are some optimizations to be made, and we are always looking to optimize everything, but this issue occurs when typing into a standard text field with no triggers, no special formatting, nothing to optimize!

     Our server console (not as nice as the old one, by the way) shows nothing abnormal, according to FileMaker and the consultants we have had look at it. The server machine, disk subsystem, and network are more than enough according to everyone we have spoken with, and all of our current client machines easily meet the system requirements. Does anyone have an idea of why this would be and what can be done, short of a complete rewrite, to alleviate this untenable situation? Neither FM nor our consultant had a solution short of starting over, and that's just not an option given the scope of this solution. We have been using FM since version 6 and have never had anything like this, so I'm not sure what to make of it. We tested in the only practical manner before the upgrade and didn't notice any of these issues; of course, we couldn't simulate 150 users in nine locations. We may have to abandon FM for good if this can't be resolved, and I really can't believe others have not had these kinds of issues, so I'm hoping the forums can help more than FM with this. Worried we've made a huge mistake, Dave
  15. Hi, I'm trying to import records via ODBC from FM12 Server to FM11. I can successfully import using the Query Builder with a WHERE clause, but this needs to be a repetitive, automated process for a user, so I would like to use the Calculated SQL text option. I have no problem if I just use a SELECT statement with no WHERE clause to bring in records, but as soon as I add the WHERE clause, FQL errors out. The text from the Query Builder is exactly identical to the text in the Calculated SQL text, yet the Query Builder works and the Calculated SQL text does not. I have tried every trick I could find on this, to no avail. Thanks for any help you can provide. I will post the exact Calculated SQL text on Monday if needed. Thanks, Dave

     I found the solution, and I can't believe it hasn't been stressed in the ODBC documentation or in the forums. You have to make sure the quote marks that end up in the SQL text are "straight" quotes and not "curly" or "smart" quotes (a sketch of a working expression appears after this list). Unchecking the smart quotes box in File Options => Text on the importing machine takes care of it. Thanks to anyone who may have looked into this. Dave
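
For reference, here is a minimal sketch of the pause workaround described in posts 7-9 above, written as FileMaker script steps. The layout, table, field, and variable names (LineItems, Estimates, Estimates_LineItems, $estimateID) are illustrative placeholders, not the actual solution's schema; the essential pieces are the explicit commit and the brief pause before the relationship is tested:

    # Create the line item in its native table
    Go to Layout [ "LineItems" ]
    New Record/Request
    Set Field [ LineItems::EstimateID ; $estimateID ]
    Commit Records/Requests [ With dialog: Off ]
    # Workaround: a brief pause so the just-created record becomes
    # visible through the relationship before it is tested
    Pause/Resume Script [ Duration (seconds): .01 ]
    # Back on the parent context, check for the related record
    Go to Layout [ "Estimates" ]
    If [ IsEmpty ( Estimates_LineItems::ID ) ]
        # Per posts 7-9, without the pause above this branch fired for
        # the first related record even though it had just been created
        Show Custom Dialog [ "Related record not visible yet" ]
    End If

Post 9 reports the same failure with IsValid, Count, and an unstored FoundCount; all of them saw the first related record only once the pause was in place.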
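
And a hedged illustration of the straight-versus-curly quote fix from post 15. The table and field names (Estimates, EstimateID, Status) are invented for the example; the point is that every quote mark that ends up in the Calculated SQL text for the ODBC import must be a straight quote:

    // Calculated SQL text for Import Records [ODBC]. Every quote mark must
    // be straight ( ' or " ), never curly/smart ( ' ' " " ), or FQL errors out.
    "SELECT \"EstimateID\", \"Status\" FROM \"Estimates\" WHERE \"Status\" = 'Open'"

With smart quotes enabled in File Options => Text, quotes typed into a calculation are silently curled, which would explain why the Query Builder text worked while the visually identical calculated text failed.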