richardb Posted May 3, 2001

I have an ASCII list file which I am trying to import into FMP. The problem is that the list is not 1 record per line: any record may be made up of as many as 7 lines in the list. The only way you know you have come to another record is that a line begins with "10". Two records are given as an example:

1034127 01.12.199100.00.00003 T8 3 SN
2001.11.200001235.3000926.5001182.80
4001.11.19940029 17729
5001.12.1991Arterio-venous fistula of the abdomen,
5001.12.1991dissection and repair of, with restoration of
5001.12.1991continuity
5001.12.1991(Assist.)

1034130 01.12.199100.00.00003 T8 3 SN
2001.11.200000386.4500289.8500333.95
4001.11.19940012 17712
5001.12.1991Surgically created arterio-venous fistula of an
5001.12.1991extremity, closure of
5001.12.1991(Assist.)

10, 20, 30, 40, 50 at the start of a line designate different fields in the same record. My problems are (1) how to separate the records, as there is no delimiter other than the occurrence of a "10" at the start of a line, and (2) the lines which start with 50 all need to end up in one field in the final database for each record, and I am not sure how to combine them. Any ideas much appreciated.
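[For anyone hitting the same file format outside FMP, the grouping logic can be sketched in a few lines of Python. This is a minimal sketch, not part of the original thread; the 12-character "50"-plus-date prefix width is an assumption read off the sample lines above.]

```python
def parse_records(lines):
    """Group lines into records: a line starting with "10" begins a new
    record, and all "50" lines are merged into one description field."""
    records = []
    for line in lines:
        tag = line[:2]
        if tag == "10":
            # a "10" line marks the start of the next record
            records.append({"10": line, "50": ""})
        elif records:
            if tag == "50":
                # skip the assumed 12-char prefix ("50" + the date) and
                # append the remaining text to the combined description
                text = line[12:].rstrip()
                records[-1]["50"] = (records[-1]["50"] + " " + text).strip()
            else:
                records[-1][tag] = line
    return records
```

Feeding the sample lines through this yields two records, with the multi-line descriptions joined by single spaces.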
Chris Wood Posted May 3, 2001

If you can sort the ASCII file out in Excel, it is then a simple matter of opening the Excel file in FMPro. If you have made the first Excel row the field labels, FMPro makes the database for you.

Chris
BobWeaver Posted May 4, 2001

If the file is less than 64K in size, you can paste the whole thing into a global text field and then run a script that loops through each line (by finding the position of the return characters).
richardb Posted May 6, 2001 Author

Thanks for the idea, Bob. The problem is that the file is 1.3 MB, so I might just have to split it up, but the idea of a looping script for each line is a great one. I suppose I can use the Middle function before appending the "50" items, to extract just the required text for each record. I'll give it a go when I get some time. Thanks again.
BobWeaver Posted May 6, 2001

If the file is too big to fit completely into a global field, then I suggest importing it into FileMaker as one line of raw data per record. Just set it up to import tab-delimited into one field. Then you can create a similar looping script to look at one record at a time and parse it into new records. You can either create your new records in the same file, which involves jumping around a bit (new records are created at the end of the file, so you have to remember which record you were parsing in order to jump back to the one following it), or you can have a related file where you create the new records. Personally, I would probably use the single-file method because I suspect it would be faster. If you do, here are a few tips:

1. Have a field that identifies the record type as either imported raw data or a record your script created. That way, you can find and delete the raw data records when you are done.

2. As each raw data record is processed, omit it from the found set. That way, you can get to the next record to be processed with a "Go to Record/Request [First]".

3. Keep your data in global fields temporarily until you have a complete record. Then create your new record and transfer the data to the non-global fields. This prevents a lot of jumping back and forth between records.

4. After you create each new record, omit it from the found set. That way, your looping script won't accidentally start trying to process your new records when it runs past the raw data records. You can use the "Exit Loop If [Status(CurrentFoundCount) = 0]" step to terminate the script.
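[Bob's accumulate-then-flush approach can be sketched in Python; this is an outside-FMP illustration, and the function names are mine, not FMP script steps. Buffering fields until the next "10" line mirrors keeping data in globals until the record is complete; flushing once more after the loop handles the last record.]

```python
def flush(buffer, out):
    # create a "new record" from the buffered fields, if any; this
    # mirrors transferring the globals into a freshly created record
    if buffer:
        out.append(dict(buffer))
        buffer.clear()

def process(raw_lines):
    out, buffer = [], {}
    for line in raw_lines:        # one raw-data record per pass of the loop
        if line.startswith("10"):
            flush(buffer, out)    # a "10" line means a new record starts here
        buffer.setdefault(line[:2], []).append(line)
    flush(buffer, out)            # flush the final buffered record
    return out
```

A side benefit of flushing *before* buffering the "10" line (rather than creating the record first) is that no empty record is created at the start of the run.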
richardb Posted May 7, 2001 Author

Bob, thanks for your ideas. I finally got a looping script to work as you suggested, with one line of raw data per record. My only remaining problem is (and forgive me if this is a simple technique; I'm a doctor, not an FMP guru :-)) that I am unsure how to append or join the "50" records together, as several "50" items make up the complete field in any one record. Cheers
richardb Posted May 9, 2001 Author

Thought about it this a.m. in theatre and, believe it or not, it works. The whole file took 3 mins to process. This is the looping script; it might be of value to anyone who has a similar problem.

Loop
  Go to Record/Request/Page [First]
  If ["Left(Raw data, 2) = "10""]
    New Record/Request
    Set Field ["Mark", ""New""]
    Set Field ["10", "Parsed 10 field"]
    Set Field ["20", "Parsed 20 field"]
    :: and so on, for all the fields you want in your final record ::
    Set Field ["Parsed 50 field", """"]  :: clear the global ready for the next record ::
    Omit Record
    Go to Record/Request/Page [First]
  End If
  If ["Left(Raw data, 2) = "10""]
    Set Field ["Parsed 10 field", "Middle(Raw data, 3, 5)"]
  Else If ["Left(Raw data, 2) = "20""]
    Set Field ["Parsed 20 field", "Middle(Raw data, 22, 7)"]
  :: and so on; for each line type, parse out the data you want ::
  Else If ["Left(Raw data, 2) = "50""]
    :: this is how I appended or merged the "50" items into one field ::
    If ["IsEmpty(Parsed 50 field)"]
      Set Field ["Parsed 50 field", "Middle(Raw data, 13, 80)"]
    Else
      Insert Calculated Result ["Parsed 50 field", "" " & Middle(Raw data, 13, 80)"]
    End If
  End If
  Omit Record
  Exit Loop If ["Status(CurrentFoundCount) = 0"]
End Loop

Note: "Parsed 10 field" and the like are the global fields, and the fields "10", "20" etc. are the final fields for each record. You find all the records that you created as they have "New" in the Mark field. You will have to delete the very first record marked "New", as it is created right at the start of the script and contains no data. Thank you very much for the advice which allowed me to achieve this. Feel free to improve on this script.
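[The Middle() offsets used in the script can be sanity-checked outside FileMaker against the sample lines from the first post. FileMaker's Middle(text, start, size) counts characters from 1, so a throwaway Python equivalent (my helper, not an FMP function) is just an off-by-one slice:]

```python
def middle(text, start, size):
    # FileMaker's Middle() is 1-based; Python slicing is 0-based
    return text[start - 1 : start - 1 + size]

# checking the offsets from the script against the first sample "10" line
line10 = "1034127 01.12.199100.00.00003 T8 3 SN"
code = middle(line10, 3, 5)   # "34127", the number following the "10" tag
```

Middle(Raw data, 13, 80) on a "50" line likewise skips the "50" tag plus the 10-character date and keeps the description text.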
BobWeaver Posted May 10, 2001

Instead of using the If statement to check whether the Parsed 50 field is empty, you can simply use a single Set Field instruction as follows:

Set Field ["Parsed 50 field", "Parsed 50 field & " " & Middle(Raw data, 13, 80)"]

This will append rather than overwrite. If the field is currently empty, it appends the data to the empty field, which amounts to setting it to the raw input value (bar a leading space, which Trim can remove). This may speed things up a bit, because you don't need to execute the If.