FMForums.com

Featured Replies

I have tried to import a large text file into FM. The file has more than 1 million records. FM seems to import the file, but when you look at the total number of records and check whether the data was imported properly, you find that many records are not being imported.

The same text files were imported into MS Access, and all of the records and data are present.

The text file is being imported into FM 7 on a new MacBook Pro laptop with 1 GB of memory.

Any thoughts as to why Access is managing the large text file but FM is not?

There usually is a log file FM produces after the import, or a message saying records were skipped. What does it say?

Another thought.

Your import into Access was done on Windows, and the import into FMP was done on a Mac?

From memory, Mac and Windows use slightly different end-of-record markers.
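That difference can be checked and removed before import. A minimal Python sketch of normalizing the record markers, using made-up sample data (classic Mac exports end records with CR, Windows with CRLF, Unix with LF):

```python
def normalize_line_endings(data: bytes) -> bytes:
    """Convert Windows CRLF and classic Mac CR record markers to Unix LF."""
    return data.replace(b"\r\n", b"\n").replace(b"\r", b"\n")

# A file mixing Windows (\r\n) and classic Mac (\r) endings:
sample = b"SMITH,JOHN\r\nDOE,JANE\rLEE,KIM\n"
print(normalize_line_endings(sample).split(b"\n"))
# [b'SMITH,JOHN', b'DOE,JANE', b'LEE,KIM', b'']
```

If an importer only recognizes one of these markers, records separated by the other marker can run together and be silently dropped or miscounted.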

  • Author

FM says that everything imported fine, with no skipped records. Curious about what was happening, I re-copied all of the original files to separate folders. I then imported the file as a "single string" and imported that "single string" FM file into the registered voter file with all of the parsing equations. My thought was that FM might be failing on all of the calculations during the initial import. The first "single string" import returned more than 200,000 additional records. When I began to import this file into the voter registration file, FM initially saw only about 700K records, rather than the 920K+ records. I stopped the import, deleted the records, and tried again. The second time, it imported all of the records and returned the correct number.

I did the same procedure on a second file with more than 1.1 million records in Access (the correct amount), but FM returned just 670K records when imported into the "string file" and then into the voter registration file. FM incorrectly split a single string into two pieces 5,008 times.

Well, this has left me confused as to what is going on.

Thanks for your thoughts.

That is a most confusing explanation... what format are you using for import?

What's this about importing the file as a "single string"?

  • Author

Sorry for the language confusion. I usually work with larger data sets using SPSS, a statistical package, where that language would make more sense. I am importing all of the information from each record (text, numbers, and spaces) as a single line of text about 2,500 characters long. Once the data has been imported into FileMaker, I then rearrange the data into the proper fields using the Middle(variable;X;X) calculation.

I am importing the data as text.
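The Middle() step described above amounts to slicing a fixed-width record at known offsets. A sketch in Python, with a hypothetical field layout (the actual record layout isn't given in the thread); note that Middle(text; start; length) is 1-based, while Python slices are 0-based:

```python
# Hypothetical fixed-width layout: (field name, 1-based start, length),
# mirroring FileMaker's Middle(record; start; length).
LAYOUT = [
    ("last_name", 1, 10),
    ("first_name", 11, 10),
    ("precinct", 21, 4),
]

def parse_record(line: str) -> dict:
    """Slice one fixed-width line into named fields, trimming the padding."""
    return {name: line[start - 1 : start - 1 + length].strip()
            for name, start, length in LAYOUT}

row = parse_record("SMITH     JOHN      0042")
print(row)
# {'last_name': 'SMITH', 'first_name': 'JOHN', 'precinct': '0042'}
```

Doing this once per field per record as a calculated field is what makes the import so slow when multiplied across a million-plus records.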

The problem appears to have been that Mac OS was not sure how to handle some of the files, despite their having a .txt extension. A suggestion was made to clean up the data prior to importing into FileMaker using TextWrangler. That program returned a Mac OS error 116 when the files were opened. After looking into this error code, I understand that it means Mac OS is not sure how to handle the file and is having trouble determining its actual size. With a fix provided by folks that use TextWrangler, this problem appears to have been solved.

The most recent import of the most troublesome file returned the correct 1.1+ million records, and the parsing is occurring now... well, for the past seven hours or so.

A seven-million-record file processed just fine, though I will have to go on vacation or buy a new computer while FileMaker parses the data into the appropriate variables.

For those who work with larger data sets: had the data not required calculated text functions to parse the information into the correct variables, and had it instead used set delimiters (tabs, commas), how long would moving 7 million records into a new database take?

The folks who use TextWrangler assured me that the program can manage absurdly large text files. Perhaps it's worth the time to clean up the data first with TextWrangler and then import once the records have a delimited file structure.

Thank you for your on- and off-list suggestions and help.

Yes, I would definitely pre-process the text file to make sure each field value is properly delimited so you can avoid all the calculated fields. That's where the slowdown occurs.

As to how long importing 7 million records would take if FM doesn't have to calculate things for each record coming in: impossible to say. It all depends on the machine specs. Lots of RAM and a fast hard disk will help a lot. Importing into a local copy of the file rather than a hosted file will also speed things up (with a hosted file there is a lot of network overhead involved).
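The pre-processing being recommended here, converting the fixed-width file to delimited text before import, can be sketched in Python. The field widths below are hypothetical stand-ins for the real 2,500-character layout:

```python
import io

WIDTHS = [10, 10, 4]  # hypothetical column widths for the fixed-width layout

def to_delimited(src, dst, widths=WIDTHS, sep="\t"):
    """Rewrite fixed-width lines from src as delimited lines into dst,
    so the database import needs no per-record parsing calculations."""
    for line in src:
        line = line.rstrip("\n")
        fields, pos = [], 0
        for w in widths:
            fields.append(line[pos:pos + w].strip())
            pos += w
        dst.write(sep.join(fields) + "\n")

# In-memory demo; in practice src and dst would be large files on disk.
src = io.StringIO("SMITH     JOHN      0042\n")
dst = io.StringIO()
to_delimited(src, dst)
print(dst.getvalue())
```

Because this streams one line at a time, it handles files far larger than available RAM, which is the same reason a dedicated text tool like TextWrangler copes with them.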

  • Author

Thanks for the input. Having moved several large sets of fixed-width text files (400+ MB) into FileMaker using a variety of strategies (parsing the data with Middle(data;x;x) as field calculations, and a similar strategy using a looping script), I believe that Fenton's original recommendation that I use TextWrangler to turn the data into a delimited text file was correct.

FileMaker simply "chokes" on making so many calculations.

  • Author

6.4 million records (20 variables, about 400 characters each) imported from a dBase file took about 4 hours, which is significantly shorter than importing the same file using text parsing functions, which took about 10 hours. The data is stored and managed on a MacBook Pro with 1 GB of DDR2 SDRAM and a 2 GHz Intel Core Duo.

Importing the same text file into SPSS, performing three parsing functions, and converting the file to dBase took about 20 minutes.

For those who manage large data sets using FileMaker: when you have to reorganize the data or perform new calculations, do you perform these actions in a more robust data-processing program and then return the new or revised variables to the FileMaker database?

For example, another FMForums user working with voter data got some good advice on how to combine voters within one household. If I tried to perform those calculations on my files, it would take countless hours. I would assume it would be much faster to do all those functions in SPSS or SAS and add the new data to FileMaker.
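The household-combining operation mentioned above is essentially grouping records on a shared key, such as the address. A minimal Python sketch with made-up records (the real voter files would of course have many more fields):

```python
from collections import defaultdict

# Hypothetical voter records; real fields would come from the exported file.
voters = [
    {"name": "JOHN SMITH", "address": "12 OAK ST"},
    {"name": "JANE SMITH", "address": "12 OAK ST"},
    {"name": "KIM LEE", "address": "9 ELM AVE"},
]

# Group voters that share an address into one household.
households = defaultdict(list)
for v in voters:
    households[v["address"]].append(v["name"])

print(dict(households))
# {'12 OAK ST': ['JOHN SMITH', 'JANE SMITH'], '9 ELM AVE': ['KIM LEE']}
```

A statistics package or script does this in a single pass over the data, which is why it tends to be much faster than evaluating a calculated field per record in the database.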

Is there any significant downside to this?
