March 17, 2011

Hi FMForums,

This issue has been bugging me enough to finally jump on the bandwagon, register a member name for FMForums, and start posting. Here is my problem:

I am currently importing ~3 million records from a text file. Each record has a number of qualitative fields, let's call them A, B and C, and several numerical values, let's call them X, Y and Z. The table has summary fields that calculate totals and counts of X, Y and Z. After importing the data, I create a concatenated field, say combining A, B and C to make ABC. Into a new table, I import the unique occurrences of ABC, and I want to pull in the totals of X, counts of Z, and so on from all the related records in the original table. Let's call these summarized counts and totals X', Y' and Z'. To accomplish this, I use Replace Field Contents on X', Y' and Z', filling them from the summary fields in the original table.

It all works very well. The problem is that it takes a very long time, something like 50-80 hours. The mystery is that if I run only the data import, then close and reopen the file before running the rest of the script, the second half (which usually takes up the lion's share of the time) runs much faster, maybe 50x faster than it does if I run the script all the way through. I've tried adding "Flush Cache to Disk" script steps, but that doesn't help. I've even used a different file to open the file remotely, run part of the script, close the file, and run the rest. No luck so far.

Any ideas?!

Regards,
Josh
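P.S. In case it helps to see the calculation spelled out: what I'm doing in the second half amounts to a group-by over the ABC key. Here's a rough sketch of the same aggregation outside FileMaker (pandas, assuming a tab-delimited export with columns A, B, C, X, Y, Z; the file names and delimiter are just placeholders, not my actual setup):

```python
import pandas as pd

# Placeholder file name and delimiter; the real export layout isn't shown above.
df = pd.read_csv("export.txt", sep="\t", usecols=["A", "B", "C", "X", "Y", "Z"])

# The concatenated key ABC, built the same way as the calculation field.
df["ABC"] = df["A"].astype(str) + df["B"].astype(str) + df["C"].astype(str)

# One pass groups the ~3 million rows by ABC and produces the per-key
# totals and counts (X', Y' and Z' in the naming above).
summary = df.groupby("ABC").agg(
    X_total=("X", "sum"),
    Y_total=("Y", "sum"),
    Z_count=("Z", "count"),
).reset_index()

summary.to_csv("summary.csv", index=False)
```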