happymac

Members
  • Content Count: 32
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About happymac
  • Rank: member

Recent Profile Visitors: 1,450 profile views

  1. I've been pondering this and feel it is the best way to proceed, but I'm still struggling with how to deal with updates or deletes. Let me pose a scenario. Imagine that for August of this year we have 100 transactions for the month and a user deletes two of the transactions in the middle of the month, so we now have 98 transactions. Would all 98 of these need to be 're-totaled', or only the ones AFTER the removed transactions (roughly transactions 51-98)? Or do we leave transactions 51-97 out of sync and only update the last entry in the month with the summary totals (see the first sketch after this list)? Also, I assume this ...
  2. Yes, I was thinking the same thing: that we put data in a number field that is essentially a summary of all records in the transaction table for a specific product, specific revenue/expense stream and specific month. So ultimately, if a product has 1,000 transactions for July for royalties paid, the transaction table would retain the 1,000 transactions, but we would also have a separate table called "TRANSACTIONSSUMMARIZED" that would contain 1 record with one number field entry summarizing the 1,000 transactions (see the aggregation sketch after this list). So if the 1,000 transactions total 3,085.13 ... this one record would have a ...
  3. Comment ... I may not have been clear in my first posts ... I AM using summary fields, but a typical report has roughly 10-15 summary fields, each having to look at potentially millions of records. I have not tried FASTSUMMARY, and that might be a good solution to the performance problem ... and of course ExecuteSQL might be effective here as well (see the report sketch after this list). I guess before I go down those rabbit holes, the question is ... are you suggesting that it is quite reasonable, with millions of records, to keep the data normalized and use better summary techniques to get the reports to load reasonably quickly ...
  4. Again ... thank you to ALL for weighing in! I REALLY appreciate the suggestions. I think in a perfect world we would achieve a denormalized state that would be sufficient, and not shop around for non-FMP solutions. With that said, I have really struggled with this particular situation where the records get out of sync ... and I think it has to do with the quantity of records. Denormalizing is easy for smaller sets, but when dealing with millions of records, inevitably the "sync" doesn't complete or the user ends up having to wait for it to complete. Obviously we should be able to ...
  5. I appreciate your thoughts, thank you. Maybe giving an example of one report would be helpful to see how PSOS could help. A typical report shows one year of all transactions for a product. It is broken down with columns that display each month, so column A is January, column B is February, etc. In each column, there are rows for various revenue and expenses related to that product, so you might have a line "revenue from xx" and another "marketing expense" and another "royalty paid". Then at the bottom of each column, we have totals that add up the above pluses and minuses. It's a ...
  6. when we "archive" a month, it sums the balance in a number field ... so it is stored at that point. BUT ... until we archive the transactions we use unstored calculations to summarize. so if the report is showing the total revenue for July for one product, we have an unstored calculation that finds all the transactions for that one product for that one month and displays the total (via a relationship). super slow to generate the reports, but I don't see any other way while these transactions are potentially still in flux. After roughly a year, they get archived with stored values. We
  7. We have implemented this, but it still isn't enough. We have a historical table that holds data that is older than 2 years ... but the last two years of data is still crazy. Basically, the historical table contains an archive of all the historical records, and in its place it creates one record in the transaction table per month of similar transaction records. So maybe 100,000 records get deleted from the transaction table and in their place is one record (totaling the values). This 'archiving' helps, but isn't enough. Problem is that each month we are importing ...
  8. Actually, all of them are required because the reports show historicals.
  9. We have a database solution that is basically an accounting solution. We have a transaction table that stores all the transactions. This table contains millions of records. We have been dealing with speed issues for a few years: when we run reports, since it is such a large set of records, the reports take hours to complete. The Transaction table contains no unstored fields, but just the sheer size of the table makes the reporting a bear. We have tried storing the calculated values overnight so that the users don't have to wait for the reports to run, but sadly the stored values ...
  10. BruceR ... thank you for your suggestion. Unfortunately though it is just as slow. I tried this technique already.
  11. Summary of problem: any thoughts on really fast validation to see if child records exist? I have a parent table (called SUMMARY) that has records that store values from a child table (called TRANSACTIONS). The purpose of the parent table is to store summaries of records from the child table, so instead of the user waiting for an unstored calculation, they can access the parent table's stored value (see the existence-check sketch after this list). An example of this is ... we might store the total for all sales for one month. So the child table may have 10,000 'sales' records in the month, while the parent table ...
  12. Strange issue ... I am running an ExecuteSQL to get a value and store it in a variable; then, when the user goes to another record and runs the same ExecuteSQL to get the new value, it does not update correctly. Have tried ... adding a go-to-field, refresh window, commit record, going to another layout and coming back, and nothing seems to work (see the invoice-number sketch after this list). More detail on the issue below ... we have an INVOICE NUMBER field that gets a value that is incrementally higher than the last entry. It is technically not a "serial number" but rather a text field (since the value has text and numbers in it) that ...
  13. I have a Windows Server 2012 R2 Standard (64-bit, 48 GB RAM) running FileMaker Server 13.0.9.905, and we have a problem where PSOS 'clients' are not getting disconnected after the script runs. They eventually fill up the activity 'clients' window, and over a few days the server stops accepting new connections since we hit the 'maximum simultaneous script sessions' limit. I have been watching this over a few months and the problem seems to go away after a reboot ... it seems to work fine for a few days, disconnecting the session after the script runs, but then we go back to the same issue ...
  14. There is never an error message; AppleScript processes it correctly with no errors, just no attachment appears.
  15. I have a solution that creates an email with an attachment. We need to use AppleScript to do this, not the default 'Send Mail' function, and I am having trouble getting the AppleScript to attach the attachment. What is strange is that for some records it works consistently, and for other records it never works. Basically, I save a PDF to the desktop, then send an AppleScript command to use that attachment. The desktop PDF always gets created, the new mail message always gets created with the recipient pulled from the FM database, but the attachment only sometimes attaches to the message. In FileMaker, I am ...
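
A rough sketch of the delete scenario in item 1, assuming the summary is stored per product/stream/month bucket rather than as a running total carried on each transaction: in that case, deleting two August transactions only dirties the one August bucket, and transactions 51-97 never need touching. The names TRANSACTIONS, TRANSACTIONSSUMMARIZED, ProductID, Stream, MonthKey and Amount are illustrative, not the actual schema; a script would write the result back into the matching summary record with Set Field.

    // Re-derive the total for just the affected bucket after a delete.
    Let ( [
        ~productID = TRANSACTIONSSUMMARIZED::ProductID ;
        ~stream    = TRANSACTIONSSUMMARIZED::Stream ;
        ~monthKey  = TRANSACTIONSSUMMARIZED::MonthKey ;   // e.g. "2025-08"
        ~total = ExecuteSQL (
            "SELECT SUM ( Amount )
               FROM TRANSACTIONS
              WHERE ProductID = ? AND Stream = ? AND MonthKey = ?" ;
            "" ; "" ; ~productID ; ~stream ; ~monthKey )
    ] ;
        GetAsNumber ( ~total )   // Set Field [ TRANSACTIONSSUMMARIZED::Total ; this ]
    )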
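
For the TRANSACTIONSSUMMARIZED idea in item 2, one way to rebuild every bucket in a single pass is a grouped query that a server-side script then loops over, creating or updating one summary record per result row. This is only a sketch; the field names are assumptions, and the empty separator arguments give the ExecuteSQL defaults (comma between columns, return between rows).

    // Every product / stream / month bucket and its total in one pass.
    ExecuteSQL (
        "SELECT ProductID, Stream, MonthKey, SUM ( Amount )
           FROM TRANSACTIONS
          GROUP BY ProductID, Stream, MonthKey" ;
        "" ; "" )

In the July royalties example from item 2, the row for that product, stream and month would carry 3,085.13, which becomes the single stored number in the summary record.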
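
On the ExecuteSQL idea in item 3: instead of 10-15 unstored summary fields each scanning millions of records, one grouped query per product and stream can return all twelve monthly totals for the report year at once. Again just a sketch; $productID, $stream and $year are assumed script variables, and MonthKey is assumed to be a text field like "2025-07".

    // Twelve monthly totals for one product and one revenue/expense stream.
    ExecuteSQL (
        "SELECT MonthKey, SUM ( Amount )
           FROM TRANSACTIONS
          WHERE ProductID = ? AND Stream = ? AND MonthKey LIKE ?
          GROUP BY MonthKey
          ORDER BY MonthKey" ;
        "" ; "" ;
        $productID ; $stream ; $year & "-%" )

Run inside a PSOS script, the scan happens on the server and only the handful of result rows come back to the client.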
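
For the child-record check in item 11, a lightweight probe can ask only whether matching TRANSACTIONS rows exist, without summarizing them; the field names are assumptions. A plain equijoin relationship tested with IsEmpty ( TRANSACTIONS::ProductID ) is the other obvious candidate and is often quicker still, since it can resolve from the field index.

    // 1 if any child rows exist for this summary record, otherwise 0.
    Let (
        ~n = ExecuteSQL (
            "SELECT COUNT ( * )
               FROM TRANSACTIONS
              WHERE ProductID = ? AND MonthKey = ?" ;
            "" ; "" ; SUMMARY::ProductID ; SUMMARY::MonthKey ) ;
        GetAsNumber ( ~n ) > 0
    )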
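
On the stale variable in item 12, one common culprit is that ExecuteSQL only reads committed data, so a just-created or just-edited record stays invisible to it until a Commit Records/Requests step runs; that would explain the old value coming back even after refreshes and layout changes. A sketch of the lookup, with INVOICES, InvoiceNumber and CreationTimestamp as assumed names:

    // Most recently created invoice number; commit open records first,
    // otherwise the newest entry is not yet visible to ExecuteSQL.
    ExecuteSQL (
        "SELECT InvoiceNumber
           FROM INVOICES
          ORDER BY CreationTimestamp DESC
          FETCH FIRST 1 ROWS ONLY" ;
        "" ; "" )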