happymac

  1. happymac

    large transaction solution

    I've been pondering this and feel it is the best way to proceed, but I'm still struggling with how to deal with updates or deletes. Let me pose a scenario: imagine that for August of this year we have 100 transactions for the month, and a user deletes two of the transactions in the middle of the month, so we now have 98 transactions. Would all 98 of these need to be re-totaled, or only the ones AFTER the removed transactions (roughly transactions 51-98)? Or do we leave transactions 51-97 out of sync and only update the last entry in the month with the summary totals? I assume this would also mean that all transactions in September, October, and beyond would need re-totaling, since we want to display all-time totals on the report as well. Probably a better solution would be to use a related 'summary' table with one record per month, so that any change in ANY of the transactions in the month would result in that one 'summary' field getting re-calc'd? Your insight would be really helpful!
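
    To make that last idea concrete, this is roughly the recalculation I'm picturing whenever any transaction in a month changes; the table and field names below are placeholders, not the real schema:

        # Recompute the stored total on the one SUMMARY record for a given
        # product / year / month (hypothetical field names).
        Set Field [ SUMMARY::total ;
            ExecuteSQL (
                "SELECT COALESCE ( SUM ( amount ) , 0 )
                 FROM TRANSACTIONS
                 WHERE fk_product = ? AND trans_year = ? AND trans_month = ?" ;
                "" ; "" ;
                SUMMARY::fk_product ; SUMMARY::trans_year ; SUMMARY::trans_month ) ]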
  2. happymac

    large transaction solution

    Yes, I was thinking the same thing: that we put data in a number field that is essentially a summary of all records in the transaction table for a specific product, specific revenue/expense stream, and specific month. So ultimately, if a product has 1,000 royalty transactions for July, the transaction table would retain the 1,000 transactions, but we would also have a separate table called "TRANSACTIONSSUMMARIZED" containing one record with a single number field summarizing those 1,000 transactions. If the 1,000 transactions total 3,085.13, this one record would have a number field with 3,085.13 in it. This way the reports would, in a sense, have one record to evaluate per month instead of 1,000. Is this more or less what you were implying?
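
    In other words, the summarized table would hold one row per product / stream / month, each carrying the result of a grouped sum over the transaction table; something like the query below (invented field names), run by a server script that writes each result row into a TRANSACTIONSSUMMARIZED record:

        /* One grouped pass over the transaction table; each result row maps to
           one TRANSACTIONSSUMMARIZED record (hypothetical field names). */
        ExecuteSQL (
            "SELECT fk_product , stream_type , trans_year , trans_month , SUM ( amount )
             FROM TRANSACTIONS
             GROUP BY fk_product , stream_type , trans_year , trans_month" ;
            "|" ; "¶" )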
  3. happymac

    large transaction solution

    Comment ... I may not have been clear in my first posts: I AM using summary fields, but a typical report has roughly 10-15 summary fields, each having to look at potentially millions of records. I have not tried the Fast Summaries technique, and that might be a good solution to the performance problem; of course ExecuteSQL might be effective here as well. I guess before I go down those rabbit holes, the question is: are you suggesting that, with millions of records, it is quite reasonable to keep the data normalized and use better summary techniques to get the reports to load reasonably quickly, OR is normalization never going to work well when we are talking about 5-10 million records?
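
    For anyone else following along, my understanding of the Fast Summaries pattern is roughly the script below; I haven't built it yet, so treat it as a sketch. It assumes the found set is sorted by the break field and that sTotal and sCount are summary fields (total of amount, count of records):

        # Fast Summaries sketch: grab each group's subtotal, then jump past the group.
        Sort Records [ Restore ; With dialog: Off ]
        Go to Record/Request/Page [ First ]
        Loop
          # Subtotal for the current break group (e.g. one month).
          Set Variable [ $groupTotal ; Value: GetSummary ( TRANSACTIONS::sTotal ; TRANSACTIONS::month ) ]
          # ... write $groupTotal wherever it is needed ...
          # Jump to the first record of the next group; past the last group this errors out.
          Go to Record/Request/Page [ With dialog: Off ;
              Get ( RecordNumber ) + GetSummary ( TRANSACTIONS::sCount ; TRANSACTIONS::month ) ]
          Exit Loop If [ Get ( LastError ) <> 0 ]
        End Loop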
  4. happymac

    large transaction solution

    Again ... thank you to ALL for weighing in! I REALLY appreciate the suggestions. I think in a perfect world we would achieve a denormalized state that is sufficient and not shop around for non-FMP solutions. With that said, I have really struggled with this particular situation where the records get out of sync, and I think it has to do with the quantity of records. Denormalizing is easy for smaller sets, but when dealing with millions of records, inevitably the "sync" doesn't complete or the user ends up having to wait for it to complete. Obviously we should be able to solve this, but it seems like one of two things happens with lots of records: 1) the records get out of sync, which is an error and can be solved, or 2) the user needs to wait while the denormalization occurs.

    A silly example to illustrate number 2: a user imports transactions (roughly 30,000 records) through an import script that does lots of things, then immediately wants to run a report for that product, and has to wait for the denormalization to occur. The way we solve this currently is that when the import script completes, it triggers a server-side script to run and start updating the records (see the rough sketch below). Great in theory, but this can take lots of time to complete, and in the meantime, if the user tries to pull a report, they get incorrect data. It sounds like my techniques are correct; I just need to find ways to denormalize quicker.

    JBANTE also suggested denormalizing at the time of import or input, which I agree is ideal, but since in this case we have so many possible reporting outcomes, I'm not sure we can achieve a perfect solution here. I think we could severely cut down on the number of records by having a summary for each product, each month, and each revenue/expense stream, and then use summary fields (unstored calcs) to summarize that data. Since this would probably reduce the number of transactions that need to be summarized by 90-95%, it would help drastically, but it wouldn't perfectly solve the issue. AND ... yes, we "re-normalize" on the first of every month to make certain things are accurate, but it takes days (literally) to force an update of all the records. I don't know ... maybe I'm going down the right path and just need to perfect the code so it is error-proof, but it certainly seems frustrating, and getting calls from the panicked customer that the report says one thing and the database says something different is definitely no bueno.
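
    For what it's worth, the hand-off at the end of the import currently looks more or less like this (script and variable names are placeholders); the user gets control back immediately, which is exactly the window where a report can go stale:

        # Final steps of the client-side import script (placeholder names).
        Perform Script on Server [ Specified: From list ; "Update Product Summaries" ;
            Parameter: $productID ; Wait for completion: Off ]
        # Control returns to the user here while the server recalculates,
        # so a report run right away can still show stale totals.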
  5. happymac

    large transaction solution

    I appreciate your thoughts, thank you. Maybe giving an example of one report would be helpful to see how PSOS could help. A typical report shows one year of all transactions for a product. It is broken down with columns that display each month: column A is January, column B is February, etc. In each column there are rows for various revenue and expenses related to that product, so you might have a line "revenue from xx", another "marketing expense", and another "royalty paid". Then at the bottom of each column we have totals that add up the pluses and minuses above. It's a bit more complicated than this, but this is more or less a good description.

    The fields are calculation fields located in the product table. These fields summarize (Sum is used primarily) the totals for each revenue/expense type by month, using a relationship from product->transaction with match fields like month, year, expense type, revenue type, etc. (all number fields). Bottom line: when you open the layout, it immediately starts calculating the fields on the layout and forces you to wait while they run. There is very little scripting going on; the user selects some global values (year, product, and one or more revenue types), then the unstored calculations run while the user waits.

    One idea, I suppose, is to use PSOS to "load" the layout on the server, then push the unstored but calculated values into a new temporary reporting table that the user then sees. So the user never sees the unstored calculations, but rather waits for the server to resolve the unstored values and then imports those values into a temp table that the user views. Or maybe the whole layout-load approach is arcane and there is a better way to load the data.

    One problem related to this report is that they have tens of thousands of products, and each product has about 20 different revenue types which the user selects when running the report. So it would be next to impossible to try to run the reports after hours so they are completed before the user walks in each day. I realize you aren't suggesting this, but I certainly thought this would be the answer early on ... it won't work here.
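
    To give a flavor of what each cell in a column is doing today: every cell is effectively a sum across that multi-key relationship, i.e. the unstored equivalent of something like this (invented field and global names; the real fields use Sum() over the relationship rather than ExecuteSQL):

        /* One report cell: total for one product, one month, one revenue/expense
           type. Field and global names here are hypothetical. */
        ExecuteSQL (
            "SELECT SUM ( amount ) FROM TRANSACTIONS
             WHERE fk_product = ? AND trans_year = ? AND trans_month = ? AND rev_exp_type = ?" ;
            "" ; "" ;
            PRODUCT::id ; REPORT::gYear ; 1 /* January column */ ; REPORT::gRevenueType )

    Multiply that by twelve month columns and every revenue/expense row, and it adds up quickly.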
  6. happymac

    large transaction solution

    When we "archive" a month, it sums the balance into a number field, so it is stored at that point. BUT ... until we archive the transactions, we use unstored calculations to summarize. So if the report is showing the total revenue for July for one product, we have an unstored calculation that finds all the transactions for that one product for that one month and displays the total (via a relationship). It's super slow to generate the reports, but I don't see any other way while these transactions are potentially still in flux. After roughly a year, they get archived with stored values.

    We are NOT using PSOS to generate these reports since they are run by employees on the fly; that is something we could entertain changing to speed things up, but in all honesty PSOS hasn't proven too impressive to me. I have many processes running with PSOS, but the speed increase still hasn't wowed me in most cases (maybe a 20% boost if I were to guess). AND ... I've definitely had situations where PSOS has bogged down the entire server with too many processes running at once, so that worries me in this case. I would entertain moving things to the server if I felt the speed boost would be sizable. I'm really looking to take these reports from 30 minutes down to just a few minutes. Storing the transactions DID do this (denormalizing), but it created inaccuracies; they could potentially be debugged until perfect, but I have been trying for years to get it right and am still dealing with issues...
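
    Put another way, each report total effectively resolves like this today (I'm paraphrasing our setup; the relationship and field names are not the real ones):

        // Per product per month: use the stored archive total once the month
        // has been archived, otherwise fall back to the live unstored sum.
        If ( not IsEmpty ( product_ARCHIVE_month::storedTotal ) ;
             product_ARCHIVE_month::storedTotal ;
             Sum ( product_TRANSACTIONS_month::amount )    // unstored; evaluated while the user waits
        )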
  7. happymac

    large transaction solution

    We have implemented this, but it still isn't enough. We have a historical table that holds data older than 2 years, but the last two years of data is still crazy. Basically the historical table contains an archive of all the historical records, and in their place we create one record in the transaction table per month of similar transactions. So maybe 100,000 records get deleted from the transaction table and in their place is one record (totaling the values). This 'archiving' helps, but isn't enough. The problem is that each month we are importing about half a million transactions (records in the transaction table), so each monthly report we run takes about 30 minutes to create (about 24 hours for the entire batch). The reports are fairly complex, showing multiple products over multiple months, but I have to think that a more enterprise-level system would be able to summarize this data within minutes. I'm hugely loyal to FMP, but I am thinking it's time to entertain a more robust system just for reporting???
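
    The archive pass itself is conceptually simple; per product per month it amounts to something like this (step names paraphrased, field names are placeholders, and sTotalAmount is a summary field over the amount):

        # Archive one product / month (sketch only).
        Perform Find [ TRANSACTIONS for one product, one month ]
        # A summary field referenced in a calculation returns the found-set total.
        Set Variable [ $monthTotal ; Value: TRANSACTIONS::sTotalAmount ]
        # Copy the found set into the historical table (e.g. via Import Records),
        # then remove the detail rows from the live table.
        Delete All Records [ With dialog: Off ]
        # One roll-up record takes their place.
        New Record/Request
        Set Field [ TRANSACTIONS::amount ; $monthTotal ]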
  8. happymac

    large transaction solution

    Actually, all of them are required because the reports show historicals.
  9. We have a database solution that is basically an accounting solution, with a transaction table that stores all the transactions. This table contains millions of records. We have been dealing with speed issues for a few years: since it is such a large set of records, the reports take hours to complete when we run them. The transaction table contains no unstored fields; just the sheer size of the table makes the reporting a bear. We have tried storing the calculated values overnight so that the users don't have to wait for the reports to run, but sadly the stored values occasionally become inaccurate and cause obvious issues. The transaction table is always accurate, but the stored values many times are not.

     After years of dealing with this and adjusting code to try to eliminate the issue, we are thinking the best solution is to find a system that can handle the immense table and reporting: use FileMaker as a front end for the users, but some other system (SQL or Python or ???) for quick calculations / reporting. The problem is not going away, and in fact is getting worse as more and more records get added on a daily basis. I feel like we can keep band-aiding the solution, but the real long-term answer may be to realize that we have outgrown FileMaker's capabilities and move the data storage and calculations to a different platform. I'm hoping others can shed some light on similar situations they have run into and what solutions I should consider to solve this frustrating issue. Thank you in advance!
  10. happymac

    Quickest way to see if child objects exist

    BruceR ... thank you for your suggestion. Unfortunately, though, it is just as slow. I tried this technique already.
  11. Summary of problem: any thoughts on really fast ways to validate whether child records exist? I have a parent table (called SUMMARY) that stores values from a child table (called TRANSACTIONS). The purpose of the parent table is to store summaries of records from the child table, so instead of the user waiting for an unstored calculation, they can access the parent table's stored value. For example, we might store the total of all sales for one month: the child table may have 10,000 'sales' records in the month, while the parent table has one record with a stored number field showing the sales for that month.

      We are running into a problem with a maintenance script where we want to know if there are any SUMMARY records that do not have child records. In other words, there's a chance the TRANSACTIONS got deleted, so we need to delete the SUMMARY (parent) record as a result. Currently we run a simple loop on the parent (SUMMARY) table:

          Show All Records
          Go to Record/Request/Page [ First ]
          Loop
            If [ IsEmpty ( child::pktransaction ) ]
              Delete Record/Request
            End If
            Go to Record/Request/Page [ Next ; Exit after last ]
          End Loop

      This works, but with 100,000 parent records and over 4 million child records it is REALLY slow: it takes about 3-9 seconds per record and therefore over 24 hours to complete. I am trying to find a faster method. The goal is only to see if there are related records for the summary record. I tried the following:

      1) changing IsEmpty to IsValid
      2) ExecuteSQL with one query: "select pktransaction from transaction where xx = ? and xx = ? and xx = ?"
      3) adding an unstored Get ( FoundCount ) field in the child table and checking whether that is empty
      4) doing an old-fashioned find from a layout on the parent (SUMMARY) table for "omit *" in the child table's pktransaction field

      None of these seem to improve the speed. I thought of maybe doing some kind of subquery using ExecuteSQL, something like: select pktransaction from transaction where xx = ? and xx = ? and xx = ? IN (select pktransaction from transaction where date = xx). This would significantly reduce the child records in the SQL query by only looking at one month's records (10,000 records instead of 4 million), but I am having trouble getting the query written correctly and don't know if it will help anyway. Any thoughts on really fast ways to check whether child records exist? Thank you! FYI - all the related fields that I am querying or that are part of the relationship criteria are fully indexed.
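
      For reference, the single-query version I keep trying to write is something like the anti-join below. Here pk_summary / fk_summary stand in for whatever combination of match fields actually links the two tables (placeholder names), and I haven't confirmed that FileMaker's internal SQL engine accepts this exact form:

          /* Keys of SUMMARY records that have no child TRANSACTIONS.
             pk_summary / fk_summary are placeholder field names. */
          ExecuteSQL (
              "SELECT pk_summary FROM SUMMARY
               WHERE pk_summary NOT IN ( SELECT fk_summary FROM TRANSACTIONS )" ;
              "" ; "¶" )

      The resulting list could then drive a find or a loop that deletes only those SUMMARY records, instead of touching all 100,000.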
  12. Strange issue ... I am running an ExecuteSQL to get a value and store it into a variable, then when the user goes to another record and runs the same ExecuteSQL to get the new value, it does not update correctly. I have tried adding a go-to-field, refresh window, commit record, going to another layout and coming back, and nothing seems to work. More detail on the issue below.

      We have an INVOICE NUMBER field that gets a value incrementally higher than the last entry. It is technically not a "serial number" but rather a text field (since the value has text and numbers in it) that users can alter, so don't confuse it with a primary key. Basically, when the user clicks into the INVOICE NUMBER field, a script trigger (on field enter) shows a popover with the next value the user should use. The value we show the user is generated by an ExecuteSQL (below) that is put into a merge variable for the user to see; they can then type that value into the field if they want (or ignore the suggestion). The goal is for the ExecuteSQL to find the highest numerical value of the records in the table, then display that value + 1. So if the last used invoice number (we call the field 'code') is SQ0014, it would show the user SQ0015.

      It works one time, but if you then go to the next record and click into the field again, the script trigger still returns the same value, as if the last entered value never got stored in the database. So you open the database, click into the field, and the merge variable shows SQ0015; then you manually set the value, go to the next record, click into the field again, and it still shows SQ0015 even though it has been used (it should show SQ0016, since that is the next available code). By the way, at the beginning of the script trigger and at the end, I clear the variable with Set Variable [ "" ], so it should be resetting?? The Set Variable calculation is as follows:

          Let ( [
              assetcode = GetValue (
                  ExecuteSQL ( "select code from ASSET where upper(code) like 'SQ%'" ; "" ; "" ) ;
                  ValueCount ( ExecuteSQL ( "select code from ASSET where upper(code) like 'SQ%'" ; "" ; "" ) )
              ) + 1 ;
              codenumber = Filter ( assetcode ; "0123456789" )
          ] ;
              "SQ" & codenumber
          )

      What can I do to force the ExecuteSQL to recalculate and not use the previously stored value? Thank you in advance!!
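
      Just to make the intent concrete, a variant of the same calculation with an explicit ORDER BY and SerialIncrement would look roughly like this; those two pieces are not in the current script, and I have no idea yet whether they change the refresh behavior:

          Let ( [
              // All matching codes, sorted so the last row is the highest (assumes
              // consistent zero-padding so text order matches numeric order).
              codes = ExecuteSQL ( "select code from ASSET where upper(code) like 'SQ%' order by code" ; "" ; "" ) ;
              lastcode = GetValue ( codes ; ValueCount ( codes ) )
          ] ;
              // SerialIncrement keeps the leading zeros: SQ0014 -> SQ0015.
              SerialIncrement ( lastcode ; 1 )
          )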
  13. I have a Windows Server 2012 R2 Standard machine (64-bit, 48 GB RAM) running FileMaker Server 13.0.9.905, and we have a problem where PSOS 'clients' are not getting disconnected after the script runs. They eventually fill up the activity 'clients' window, and over a few days the server stops accepting new connections because we hit the 'maximum simultaneous script sessions' limit. I have been watching this over a few months, and the problem seems to go away after a reboot: it works fine for a few days, disconnecting the session after the script runs, but then we go back to the same issue where it never seems to disconnect the sessions. We have a handful of PSOS scripts, and when the problem happens it doesn't disconnect any of them.

      What's odd is that the users quit at night, so they are no longer connected, and we have a 4-hour disconnect-idle-client setting ... yet these ghost clients stay connected until I forcefully "disconnect" them (in the activity screen) or reboot the server. The scripts normally take about 5-10 seconds to complete; they are basic scripts that find records and create new records based on the found set. I don't see any odd records appearing in the database (like an infinite loop occurring), and if an infinite loop were occurring I assume this wouldn't be a random situation but would consistently fail. Also, in the log I see no reference to the session after the first minute of the script running, even though it is still listed in the activity window, which implies to me that the session is not doing anything. These scripts have the "Exit Script" step at various junctures (for example, if none found ... exit script), so the script rarely finishes without a true exit. Scratching my head on this one and wondering if anyone else has seen similar issues.
  14. happymac

    email attachment using applescript

    There is never an error message; AppleScript processes it with no errors, just no attachment appears.
  15. happymac

    email attachment using applescript

    I have a solution that creates an email with an attachment. We need to use AppleScript to do this, not the default 'Send Mail' function, and I am having trouble getting the AppleScript to attach the attachment. What is strange is that for some records it works consistently, and for other records it never works. Basically I save a PDF to the desktop, then send an AppleScript command to use that attachment. The desktop PDF always gets created, the new mail message always gets created with the recipient pulled from the FM database, but the attachment only sometimes attaches to the message. In FileMaker, I set a variable with the AppleScript code, then run the AppleScript, which calls the variable. The variable's calculation is as follows:

        "set theAttachment to " & Quote ( $MacPosixPath & $filename ) &
        "¶ set RecipientAddress to " & Quote ( estimate_CLIENTCONTACT__~fkclientcontact::email1 ) &
        "¶ set RecipientName to " & Quote ( estimate_CLIENTCONTACT__~fkclientcontact::name_full ) &
        "¶ tell application \"Mail\" ¶ set newMessage to make new outgoing message with properties {visible:true} ¶ tell newMessage ¶ make new to recipient with properties {name:recipientName, address:RecipientAddress} ¶ tell content of newMessage¶ make new attachment with properties {file name:theAttachment} at after the last paragraph¶ end tell¶ end tell¶ activate ¶ end tell ¶"

    The variable $MacPosixPath is defined earlier in the script as:

        Substitute ( Right ( $desktopPath ; Length ( $desktopPath ) - 1 ) ; "/" ; ":" )

    and $desktopPath is simply defined as Get ( DesktopPath ). I have attached two images to this post showing the resulting AppleScript that is executed: one that works correctly and a second that does not. In both cases the PDF is present on the desktop. They look identical ... strange for sure! Thank you in advance!