
datashouldbefree
Members - 15 posts
I have a database with unique ID numbers for every record. Database 1 has 920,000 records; database 2 has 6,000 records. All of the records in database 2 have matching IDs in database 1. (In fact, all of the IDs in database 2 came from database 1; those records were sent out to have new data added to them.) Now, when I attempt to import the new data added to the 6,000 records in database 2 into database 1, some of the records are dropped. For example: A136756 is a common ID between both databases, yet when importing the new data into database 1, FileMaker fails to import from this ID. I am losing about 700 records. I have the two databases in a separate program and matched/imported the new data in that program, so I know all of the IDs are correct. (I need the formatting functions in FileMaker, otherwise I would not be bothering...) Any thoughts as to why FileMaker is failing to import some of the IDs and not others? The ID in database 1 is generated by parsing a long ASCII text record.
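A match failure on IDs that "look" identical is often invisible whitespace or a stray control character left over from parsing. As a sanity check outside FileMaker, here is a minimal Python sketch that compares the two ID sets after normalization; the field values and the exact normalization rule are assumptions for illustration:

```python
# Hypothetical diagnostic: find update-file IDs whose normalized form
# is absent from the master file -- candidates for the ~700 dropped
# records. The sample values below are made up.

def normalize(raw_id):
    """Strip whitespace and invisible characters that fixed-width
    parsing often leaves attached to an ID."""
    return raw_id.strip().strip("\x00\ufeff").upper()

def unmatched_ids(master_ids, update_ids):
    """Return update IDs that do not match any master ID."""
    master = {normalize(i) for i in master_ids}
    return sorted(i for i in update_ids if normalize(i) not in master)

master = ["A136756 ", "B000001", "C222222"]
update = ["A136756", "B000001\r"]   # trailing CR from a Windows export
print(unmatched_ids(master, update))  # -> [] once normalized
```

If the unmatched list is non-empty on the real files, inspecting those raw IDs byte by byte usually reveals the culprit character.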
-
MS Access Can but FM Can't
datashouldbefree replied to datashouldbefree's topic in Importing & Exporting
6.4 million records (20 variables, about 400 characters each) imported from a dBASE file took about 4 hours, which is significantly shorter than importing that same file using text-parsing functions, which took about 10 hours. The data is stored and managed on a MacBook Pro with 1 GB of DDR2 SDRAM and a 2 GHz Intel Core Duo. Importing the same text file into SPSS, performing three parsing functions, and converting the file to dBASE took about 20 minutes. For those who manage large data sets using FileMaker: when you have to reorganize the data or perform new calculations, do you perform these actions in a more robust data-processing program and then return the new or revised variables to the FileMaker database? For example, another FMForums user working with voter data got some good advice on how to combine voters within one household. If I tried to perform those calculations on my files, it would take countless hours. I assume it would be much faster to do all those functions in SPSS or SAS and add the new data to FileMaker. Is there any significant downside to this?
-
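For what the "combine voters within one household" step might look like outside FileMaker, here is a rough Python sketch that groups voter rows by street address before re-import. The record layout (ID, name, address tuples) is an assumption, not the actual file structure:

```python
from collections import defaultdict

# Hypothetical household-grouping step done outside FileMaker:
# collect one entry per address, listing the voters who share it.

def households(voters):
    """voters: iterable of (voter_id, name, address) tuples.
    Returns {address: [names]}, one entry per household."""
    grouped = defaultdict(list)
    for voter_id, name, address in voters:
        grouped[address].append(name)
    return dict(grouped)

rows = [
    ("V1", "ALAN J GERSON", "10 MAIN ST"),
    ("V2", "PETER GLEASON", "10 MAIN ST"),
    ("V3", "JANE DOE", "22 OAK AVE"),
]
print(households(rows))  # two households, one with two voters
```

A single pass like this over millions of rows runs in minutes in an external tool, which is the appeal of doing the reorganization outside FileMaker and importing only the results.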
MS Access Can but FM Can't
datashouldbefree replied to datashouldbefree's topic in Importing & Exporting
Thanks for the input. Having moved several large sets of fixed-width text files (400+ MB) into FileMaker using a variety of strategies (parsing the data with Middle(data; x; x) as field calculations, and a similar strategy using a looping script), I believe Fenton's original recommendation that I use Text Wrangler to turn the data into a delimited text file was correct. FileMaker simply "chokes" on making so many calculations.
-
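The pre-processing that Text Wrangler performs can be sketched in a few lines of Python: slice each fixed-width line into fields once, up front, and emit a tab-delimited file that FileMaker can import with no Middle() calculations at all. The field offsets below are made-up examples, not the real layout:

```python
# Minimal fixed-width -> tab-delimited conversion sketch.
# (start, end) are 0-based slice offsets; these three are examples only.
FIELDS = [(0, 5), (5, 7), (7, 8)]

def to_delimited(lines, fields=FIELDS):
    """Yield one tab-joined, stripped row per fixed-width record."""
    for line in lines:
        yield "\t".join(line[a:b].strip() for a, b in fields)

record = "2003PPD1"
print(list(to_delimited([record])))  # -> ['2003P\tPD\t1']
```

Because the slicing happens once per record in a streaming pass, this scales to files far larger than anything a per-record calculation engine handles comfortably.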
MS Access Can but FM Can't
datashouldbefree replied to datashouldbefree's topic in Importing & Exporting
Sorry for the language confusion. I usually work with larger data sets using SPSS, a statistical package, where that language would make more sense. I am importing all of the information from each record (text, numbers, and spaces) as a single line of text about 2,500 characters long. Once the data has been imported into FileMaker, I rearrange it into the proper fields using the Middle(variable; X; X) calculation. I am importing the data as text. The problem appears to have been that Mac OS was not sure how to handle some of the files, despite their having a .txt extension. A suggestion was made to clean up the data before importing into FileMaker using Text Wrangler. That program returned a Mac OS error 116 when the files were opened. After looking into this error code, I understand that it means Mac OS is not sure how to handle the file and is having trouble determining its actual size. With a fix provided by the folks who use Text Wrangler, this problem appears to have been solved. The most recent import of the most troublesome file returned the correct 1.1+ million records, and the parsing is occurring now... well, for the past seven hours or so. A seven-million-record file processed just fine, though I will have to go on vacation or buy a new computer while FileMaker parses the data into the appropriate variables. For those who work with larger data sets: had the data not required calculated text functions to parse the information into the correct variables, and had it instead used set delimiters (tabs, commas), how long would moving 7 million records into a new database take? The folks who use Text Wrangler assured me that the program can manage absurdly large text files. Perhaps it is worth the time to clean up the data first with Text Wrangler and then import once the records have a delimited file structure. Thank you for your on- and off-list suggestions and help.
-
MS Access Can but FM Can't
datashouldbefree replied to datashouldbefree's topic in Importing & Exporting
FM says that everything imported fine, with no skipped records. Curious as to what was happening, I re-copied all of the original files to separate folders. I then imported each file as a "single string" and imported that "single string" FM file into the registered-voter file with all of the parsing equations. My thought was that FM might be failing on all of the calculations on the initial import. The first "single string" import returned more than 200,000 additional records. When I began to import this file into the voter-registration file, FM initially saw only about 700K records, rather than the 920K+ records. I stopped the import, deleted the records, and tried again. The second time, it imported all of the records and returned the correct number. I did the same procedure on a second file with more than 1.1 million records in Access (the correct amount), but FM returned just 670K when imported to the "string file" and then to the voter-registration file. FM incorrectly split a single string into two pieces 5,008 times. Well, this has left me confused as to what is going on. Thanks for your thoughts.
-
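A record split into two pieces usually means a stray line ending (a lone carriage return or newline) inside the data itself. Since every record here should be a fixed width, under-length lines pinpoint the splits. A hypothetical diagnostic sketch (the 2,500-character width comes from the earlier post; sample lengths are invented):

```python
# Flag fixed-width records that came out shorter than expected --
# each short pair is one record that was split by a stray line ending.

RECORD_LEN = 2500  # expected fixed width, per the file spec

def short_lines(lines, expected=RECORD_LEN):
    """Return (line_number, length) for every under-length record."""
    return [(n, len(line)) for n, line in enumerate(lines, start=1)
            if len(line) < expected]

sample = ["x" * 2500, "x" * 1200, "x" * 1300]  # one record split in two
print(short_lines(sample))  # -> [(2, 1200), (3, 1300)]
```

Counting the flagged lines on the real file and comparing against the 5,008 splits FM reported would confirm or rule out this explanation.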
import large ascii file
datashouldbefree replied to datashouldbefree's topic in Importing & Exporting
This is a very handy way of building the import strategy for multiple fixed-width files. Much more straightforward than having to define the calculation each time, and it allows you to see the variables and lengths at a glance. Thanks for offering this up!
-
I have tried to import a large text file into FM. The file has more than 1 million records. FM seems to import the file, but when you look at the total number of records and check whether the data was imported properly, you find that many records are missing. The same text files were imported into MS Access, and all of the records and data are found. The text file is being put into FM 7 on a new MacBook Pro laptop with 1 GB of memory. Any thoughts as to why Access is managing the large text file but FM is not?
-
import large ascii file
datashouldbefree replied to datashouldbefree's topic in Importing & Exporting
Thanks for the file. Once FileMaker gets through importing the data (going on 3 hours now), I will look at it. The file is 514 MB and will likely have more than 1 million records when done. I know this is a lot of data, but more than 3 hours to import the file? Generally, I use SPSS to manage data sets this large, but we want to have the data available on the web. I am running on a MacBook Pro with 1 GB of memory. I have the file structured to import the first "long string", from which all of the other variables are calculated. Would it be faster to do the calculations via an "insert calculation" script after the initial "long string" has been imported? Seeing FM process one record at a time using the loop function led me to think that it would be slower, but I could be wrong. Is there some batch-processing function that I missed? I have five more CDs, and the idea of having to wait several hours for each one leads me to believe that I have missed some import step that would speed the process along. Thanks for your thoughts.
-
import large ascii file
datashouldbefree replied to datashouldbefree's topic in Importing & Exporting
I solved my problem: I needed to install the correct drivers. When taking additional files into the database, I found that defining the calculations, though time-consuming the first time, makes the importing/conversion process much faster than running a script/loop to perform the calculations.
-
import large ascii file
datashouldbefree replied to datashouldbefree's topic in Importing & Exporting
In order to get the data into FM, I first put the "long string" into Excel. After it was imported/opened in FileMaker, I used the parse function as suggested. I tried out both a loop and a defined calculation, and both worked fine, though defining the field was much faster. However, I am running into a problem on the larger files, which cannot be opened in Excel or Word due to their size. When I try to open or import the original .txt file directly into FileMaker, I get an error message saying the file type is not supported. Any thoughts?
-
import large ascii file
datashouldbefree replied to datashouldbefree's topic in Importing & Exporting
All good suggestions. I will work on using sbg2's suggestion. As a novice, I think laying out the structure so plainly will allow me to write an import script that will work. While it may be longer, I can understand it. Thanks again. I will post the outcome on Monday.
-
import large ascii file
datashouldbefree replied to datashouldbefree's topic in Importing & Exporting
Thanks for the insights. Here is some more information. All of the records and disks have the same structure: fixed width, with a carriage return at the end of each record. The data files contain many padded "blank" spaces, making visual inspection of the records a little off-putting (the blanks don't reproduce in copy/paste), but here is what a record looks like:

2003PD126640010208250101S C Y00260011000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000ALAN J GERSON PETER GLEASON DEMOCRATIC

Here are some sample variables:

Field         Start  End  Type  Length
Election-ID     1     5    A      5
Office          6     7    A      2
County          8     8    A      1
District        9    10    A      2
Ass. Dist.     11    12    A      2
Elect. Dist.   13    15    A      3

And so on. Two additional sets of data, of similar file structure and size, will also need to be converted and linked to each other to generate reports and match address/name data from membership lists to voter histories, election results, and turnout information by census block. Thanks for your further thoughts.
-
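A layout table like the one above translates directly into slice offsets. One wrinkle worth noting: FileMaker's Middle(data; 1; 5) is 1-based, while Python slices are 0-based, so a Start..End pair becomes [start-1:end]. A sketch using only the six fields quoted in the post:

```python
# Parse one fixed-width record using the posted layout table.
# (field_name, start, end) with 1-based inclusive positions, as given.
LAYOUT = [
    ("election_id",   1,  5),
    ("office",        6,  7),
    ("county",        8,  8),
    ("district",      9, 10),
    ("assembly_dist", 11, 12),
    ("election_dist", 13, 15),
]

def parse_record(line):
    """Slice a fixed-width record into a dict of stripped fields."""
    return {name: line[start - 1:end].strip()
            for name, start, end in LAYOUT}

rec = "2003PD126640010208250101S"
print(parse_record(rec))
```

Extending LAYOUT to all 200+ variables is mechanical, and the same table doubles as documentation of the file structure.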
I have 20 fixed-width ASCII files with 20,000+ records each that I would like to import into FileMaker. The files are text files with no delimiters, 200+ variables, taking up 3,200 "positions" (characters). I would like some advice on how best to move these files into FileMaker. Thank you for your advice. Chris
-
I am new to FMP 7 and am looking for a more elegant solution to the question below. I have 3 variables, each with a value of a, b, or c. I want to create a new variable that counts the number of times the value "c" occurs within a record. I could create three new variables that isolate the "c" using the If and/or Case functions and then create a variable that counts the number of "c"s in the record, but there must be a more efficient way of doing this. Also, is there a way to create multiple new variables using ScriptMaker so you don't have to "muscle" your way through the point-and-click menus, as you could using SPSS syntax? Any help would be appreciated.
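The "count how many of the three fields equal c" idea collapses to a single expression once you treat the fields as a list rather than three separate If/Case tests. A Python sketch of the logic (the field names are assumptions):

```python
# Count how many of the named fields in a record hold a given value --
# the one-calculation equivalent of three If() tests plus a sum.

def count_value(record, fields, value="c"):
    """Return the number of listed fields whose value equals `value`."""
    return sum(1 for f in fields if record.get(f) == value)

row = {"var1": "a", "var2": "c", "var3": "c"}
print(count_value(row, ["var1", "var2", "var3"]))  # -> 2
```

The same shape works in any tool: build the list of field values once, then count matches, instead of creating an intermediate variable per field.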