
Importing ASCII file into FMP


This topic is 7687 days old. Please don't post here. Open a new topic instead.


I have an ASCII list file which I am trying to import into FMP. The problem is that the list is not one record per line: any record may span up to seven lines, and the only way to tell that you have reached a new record is that its first line begins with "10".

Two records are given as an example:

1034127 01.12.199100.00.00003 T8 3 SN
4001.11.19940029 17729
5001.12.1991Arterio-venous fistula of the abdomen,
5001.12.1991dissection and repair of, with restoration of

1034130 01.12.199100.00.00003 T8 3 SN
4001.11.19940012 17712
5001.12.1991Surgically created arterio-venous fistula of an
5001.12.1991extremity, closure of
The numbers 10, 20, 30, 40, 50 at the start of a line designate different fields within the same record. My problems are (1) how to separate the records, as there is no delimiter other than the occurrence of "10" at the start of a line, and (2) the lines which start with "50" all need to end up in one field of the final database for each record. I am not sure how to combine them.
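For comparison, the same grouping logic could be sketched outside FileMaker, e.g. in Python. This is only an illustration of the splitting/joining idea, not part of the FMP solution discussed below; the 12-character prefix ("50" plus a 10-character date) is an assumption based on the sample lines above:

```python
# Sketch: start a new record at every line beginning "10", and join the
# text portions of all "50" lines within a record into one field.
# Assumption: the text portion of a "50" line starts at character 13
# (after the 2-char type code and 10-char date), as in the samples.

def parse_records(lines):
    records = []
    current = None
    for line in lines:
        if line.startswith("10"):
            if current is not None:
                records.append(current)
            current = {"raw": [], "50": []}
        if current is None:
            continue  # ignore anything before the first "10" line
        current["raw"].append(line)
        if line.startswith("50"):
            current["50"].append(line[12:].strip())
    if current is not None:
        records.append(current)
    for rec in records:
        rec["50"] = " ".join(rec["50"])  # merge all "50" lines into one field
    return records
```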

Any ideas much appreciated.


Thanks for the idea, Bob.

The problem is that the file is 1.3 MB.

I might just have to split it up, but the idea of a looping script for each line is a great one. I suppose I can use the Middle function before appending the "50" items, to extract just the required text for each record.
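For anyone unfamiliar with it, FileMaker's Middle(text, start, size) takes a 1-based start position and a character count. A minimal Python equivalent (the function name and sample extractions here are illustrative, not from the posts):

```python
def middle(text, start, size):
    """Equivalent of FileMaker's Middle(): 1-based start, fixed length."""
    return text[start - 1:start - 1 + size]

line = "5001.12.1991Arterio-venous fistula of the abdomen,"
record_type = middle(line, 1, 2)    # "50"
date_part   = middle(line, 3, 10)   # "01.12.1991"
text_part   = middle(line, 13, 80)  # the description portion of the line
```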

I'll give it a go when I get some time.

Thanks again.


If the file is too big to fit completely into a global field, then I suggest importing it into Filemaker as one line of raw data per record. Just set it up to import tab delimited into one field. Then you can create a similar looping script to look at one record at a time and parse it into new records. You can either create your new records in the same file, which involves jumping around a bit since new records are created at the end of the file and you have to remember which record you were parsing so that you can jump back to the one following it; or you can have a related file where you create the new records.

Personally, I would probably use the single file method because I suspect it would be faster. If you do, here are a few tips:

1. Have a field that identifies the record type as either imported raw data or a record your script created. That way, you can find and delete the raw-data records when you are done.

2. As each raw-data record is processed, omit it from the found set. That way, you can get to the next record to be processed with a "Go to Record/Request [First]" step.

3. Keep your data in global fields temporarily until you have a complete record. Then create your new record and transfer the data to the non-global fields. This prevents a lot of jumping back and forth between records.

4. After you create each new record, omit it from the found set. That way, your looping script won't accidentally start trying to process your new records when it runs past the raw-data records. You can use an "Exit Loop If [Status(CurrentFoundCount) = 0]" step to terminate the script.
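The four tips amount to a queue-driven loop: always look at the first record in the found set, accumulate values into temporary globals, flush a finished record whenever a new "10" line appears, and stop when the found set is empty. A rough Python analogue (illustrative only, not FileMaker syntax; the "50" portions are concatenated here without separators for brevity):

```python
from collections import deque

def process(found_set):
    queue = deque(found_set)   # the found set of raw-data records
    globals_ = {}              # temporary per-record storage (tip 3)
    done = []                  # completed records, "omitted" from the set (tip 4)
    while queue:               # loop until the found count reaches zero
        line = queue.popleft() # go to the first record, then omit it (tip 2)
        if line.startswith("10") and globals_:
            done.append(globals_)  # flush the completed record
            globals_ = {}
        key = line[:2]             # "10", "20", ... "50"
        globals_[key] = globals_.get(key, "") + line[2:]
    if globals_:
        done.append(globals_)      # flush the final record
    return done
```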


Bob, thanks for your ideas. I finally got a looping script to work as you suggested, with one line of raw data per record. My only remaining problem, and forgive me if this is a simple technique (I'm a doctor, not an FMP guru :-)), is that I am unsure how to append or join the "50" items together, as several "50" lines make up the complete field in any one record.



Thought about it this a.m. in theatre and, believe it or not, it works! The whole file took three minutes to process. This is the looping script; it might be of value to anyone who has a similar problem.


Go to Record/Request/Page [First]
Loop
   If [Left(Raw data, 2) = "10"]
      New Record/Request
      Set Field [Mark, "New"]
      Set Field ["10", Parsed 10 field]
      Set Field ["20", Parsed 20 field]
      :: and so on for all the fields you want in your final record
      Omit Record
      Go to Record/Request/Page [First]
   End If
   If [Left(Raw data, 2) = "10"]
      Set Field [Parsed 10 field, Middle(Raw data, 3, 5)]
   End If
   If [Left(Raw data, 2) = "20"]
      Set Field [Parsed 20 field, Middle(Raw data, 22, 7)]
   End If
   :: and so on for each line, parse out the data you want
   :: this is how I appended or merged the "50" items into one field
   If [Left(Raw data, 2) = "50"]
      If [IsEmpty(Parsed 50 field)]
         Set Field [Parsed 50 field, Middle(Raw data, 13, 80)]
      Else
         Insert Calculated Result [Parsed 50 field, " " & Middle(Raw data, 13, 80)]
      End If
   End If
   Omit Record
   Exit Loop If [Status(CurrentFoundCount) = 0]
End Loop

Note: "Parsed 10 field" and the like are the global fields, and the fields "10", "20", etc. are the final fields for each record. You can find all the records the script created because they have "New" in the Mark field.

You will have to delete the very first record marked with "New" as it is created right at the start of the script and contains no data.

Thank you very much for the advice which allowed me to achieve this.

Feel free to improve on this script.


Instead of using the If step to check whether the parsed 50 field is empty, you can simply use a Set Field step as follows:

Set Field [Parsed 50 field, Parsed 50 field & Middle(Raw data, 13, 80)]

This will append rather than overwrite. If the field is currently empty, it appends the data to the empty field, which is the same as setting it to the raw input value. This may speed things up a bit, because you don't need to execute the If.
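The same shortcut sketched in Python (illustrative names, not from the posts): concatenating onto an empty string is identical to plain assignment, so the emptiness check can be dropped.

```python
def append_portion(parsed_50, raw_line):
    # Analogue of: Set Field [Parsed 50 field, Parsed 50 field & Middle(Raw data, 13, 80)]
    # Works identically whether parsed_50 is "" or already holds text.
    return parsed_50 + raw_line[12:]

field = ""
field = append_portion(field, "5001.12.1991Surgically created arterio-venous ")
field = append_portion(field, "5001.12.1991fistula of an extremity, closure of")
```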
