
Recommended Posts

Posted

Hello

Is there a simple way to import a large number of files (more than 2,000), in which tabs separate 9 different fields, so that each import populates the fields as they are?

The files are plain text, encoded as UTF-8.

Importing them with the FileMaker command File > Import Records > Folder (as text files) simply brings each file into one record, which is useless to me, as I then need a script to separate each chunk into a different field.

Importing each text file manually is impossible, given the number of files in the folder.

Each file is properly formatted, so there would be no "empty" fields in the import process.

Any help would be highly appreciated

Dan

Posted (edited)
41 minutes ago, dkey said:

Importing them with the FileMaker command File > Import Records > Folder (as text files) simply brings each file into one record, which is useless to me, as I then need a script to separate each chunk into a different field.

It's not entirely useless, because you can either parse them individually (as you say) or tell a script to export each file in turn and import it into the target table.

Another option is to use AppleScript to automate the import of the original files.
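For illustration, a rough sketch of the export-and-reimport loop in script steps (the Files and Target layout names and the temporary path are assumptions for the example, not part of an actual solution):

Go to Layout [ "Files" ]
Go to Record [ First ]
Loop
  # Write the current record's raw text back out to disk, tabs intact
  Set Variable [ $path ; Value: Get ( TemporaryPath ) & "current.tab" ]
  Export Field Contents [ Files::Text Content ; "$path" ]
  # Import that one file as tab-separated data into the target table
  Go to Layout [ "Target" ]
  Import Records [ With dialog: Off ; "$path" ]
  Go to Layout [ "Files" ]
  Go to Record [ Next ; Exit after last ]
End Loop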

 

Edited by comment
Posted
On 9/4/2017 at 9:36 PM, comment said:

Another option is to use AppleScript to automate the import of the original files.

 

Thanks for your reply ... unfortunately I don't know how to write the script, and with about 2,000 files to import, doing it manually would be very tough, although it could be done ...

The db has 8 text fields and one timestamp field (Media Date); the field order in the tab-separated files is exactly the same for each file, as follows:

 SCNSEQTAKE    Scene    Seq    Take    Duration    Name    Media Date    Media Folder    FileHash

 

Thanks a lot for your help

 

Posted

Why don't you import them as files in a folder and parse out the data into individual fields? That would be really easy (I am assuming each file holds one record only). All it would take is 9 Set Field[] steps in a loop over the 2,000 records, roughly:

Import Records
Go to Record [ First ]
Loop
  # Turn the tab-separated content into a return-separated list
  Set Variable [ $data ; Value: Substitute ( YourTable::Text Content ; Char(9) ; ¶ ) ]
  Set Field [ YourTable::SCNSEQTAKE ; GetValue ( $data ; 1 ) ]
  Set Field [ YourTable::Scene ; GetValue ( $data ; 2 ) ]
  Set Field [ YourTable::Seq ; GetValue ( $data ; 3 ) ]
  Set Field [ YourTable::Take ; GetValue ( $data ; 4 ) ]
  Set Field [ YourTable::Duration ; GetValue ( $data ; 5 ) ]
  Set Field [ YourTable::Name ; GetValue ( $data ; 6 ) ]
  Set Field [ YourTable::Media Date ; GetValue ( $data ; 7 ) ]
  Set Field [ YourTable::Media Folder ; GetValue ( $data ; 8 ) ]
  Set Field [ YourTable::FileHash ; GetValue ( $data ; 9 ) ]
  Go to Record [ Next ; Exit after last ]
End Loop
Commit Records

where Text Content is the target field for importing the file's content. 

Posted (edited)
23 hours ago, comment said:

Why don't you import them as files in a folder and parse out the data into individual fields? That would be really easy (I am assuming each file holds one record only). All it would take is 9 Set Field[] steps in a loop over the 2,000 records, roughly:


Import Records
Go to Record [ First ]
Loop
  # Turn the tab-separated content into a return-separated list
  Set Variable [ $data ; Value: Substitute ( YourTable::Text Content ; Char(9) ; ¶ ) ]
  Set Field [ YourTable::SCNSEQTAKE ; GetValue ( $data ; 1 ) ]
  Set Field [ YourTable::Scene ; GetValue ( $data ; 2 ) ]
  Set Field [ YourTable::Seq ; GetValue ( $data ; 3 ) ]
  Set Field [ YourTable::Take ; GetValue ( $data ; 4 ) ]
  Set Field [ YourTable::Duration ; GetValue ( $data ; 5 ) ]
  Set Field [ YourTable::Name ; GetValue ( $data ; 6 ) ]
  Set Field [ YourTable::Media Date ; GetValue ( $data ; 7 ) ]
  Set Field [ YourTable::Media Folder ; GetValue ( $data ; 8 ) ]
  Set Field [ YourTable::FileHash ; GetValue ( $data ; 9 ) ]
  Go to Record [ Next ; Exit after last ]
End Loop
Commit Records

where Text Content is the target field for importing the file's content.

 

Thanks a lot for your time and kindness

The script works perfectly; however, each text file has more than one paragraph. How do I "loop" through each file so that every paragraph becomes a separate record?

Here is an example of a file, although some files have more or fewer paragraphs.

This is file 0033.txt

330101    33    01    01    00:00:50:16    DSC9267.MOV    15/7/2017 15:31:46    /Volumes/SHOWDOW_2017/OriginalClips    FA43E370D45F362A
330102    33    01    02    00:00:44:16    DSC9268.MOV    15/7/2017 15:31:46    /Volumes/SHOWDOW_2017/OriginalClips    BF4577E2E0F8B9E6
330103    33    01    03    00:00:53:13    DSC9269.MOV    15/7/2017 15:31:46    /Volumes/SHOWDOW_2017/OriginalClips    E0684C862923D8A1
330104    33    01    04    00:00:53:22    DSC9270.MOV    15/7/2017 15:31:46    /Volumes/SHOWDOW_2017/OriginalClips    7499E6FC14FE6334
330105    33    01    05    00:00:56:04    DSC9271.MOV    15/7/2017 15:31:46    /Volumes/SHOWDOW_2017/OriginalClips    89424E00BDA46984
330201    33    02    01    00:00:29:16    DSC9272.MOV    15/7/2017 15:31:46    /Volumes/SHOWDOW_2017/OriginalClips    F625681411136E05
330202    33    02    02    00:00:35:19    DSC9273.MOV    15/7/2017 15:31:46    /Volumes/SHOWDOW_2017/OriginalClips    E16113C4DCB7BF73
330301    33    03    01    00:00:28:23    DSC9274.MOV    15/7/2017 15:31:46    /Volumes/SHOWDOW_2017/OriginalClips    7255CE2131EE9FE4
330401    33    04    01    00:01:00:09    DSC9275.MOV    15/7/2017 15:31:46    /Volumes/SHOWDOW_2017/OriginalClips    5A23DC0EDEE3A5AB
330501    33    05    01    00:01:11:01    DSC9276.MOV    15/7/2017 15:31:46    /Volumes/SHOWDOW_2017/OriginalClips    BE2403BAA6DDDAE4
330601    33    06    01    00:00:51:03    DSC9277.MOV    15/7/2017 15:31:46    /Volumes/SHOWDOW_2017/OriginalClips    2670DFB1ED76BD78
330602    33    06    02    00:00:48:18    DSC9278.MOV    15/7/2017 15:31:46    /Volumes/SHOWDOW_2017/OriginalClips    5EFB822868AEE245
330701    33    07    01    00:01:30:00    DSC9279.MOV    15/7/2017 15:31:46    /Volumes/SHOWDOW_2017/OriginalClips    309304CE359B5E9C
330801    33    08    01    00:01:37:20    DSC9280.MOV    15/7/2017 15:31:46    /Volumes/SHOWDOW_2017/OriginalClips    F12D6E0680A36767

 

Thanks again

Edited by dkey
Posted

You need a different strategy. Let me explain how to do this manually first, then we can talk about scripting some or all parts.

You will need two tables, let's call them TEMP and TARGET. The TEMP table needs these fields:

  • Text Content (Text)
  • sList (Summary, List of Text Content)

The TARGET table will have the 9 fields used by your data files.

  1. Start by importing the files (as a folder) into the TEMP table, mapping Text Content to Text Content.
  2. Next, control-click into the sList field, select Export Field Contents… from the shortcut menu and export to a known location.
  3. Import the exported file (as tab-separated file) into the TARGET table.

In the end you should have about 2,000 records in the TEMP table, and many more in the TARGET table.
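If you later want to script some or all of this, a minimal sketch of the same three steps might look like the following (the layout names and the export path are assumptions):

Go to Layout [ "TEMP" ]
# 1. Import the folder of text files; each file becomes one record
#    (the folder source is chosen in the step's options)
Import Records [ With dialog: Off ]
Show All Records
# 2. Export the summary field's contents; this keeps the tabs intact
Set Variable [ $path ; Value: Get ( TemporaryPath ) & "combined.tab" ]
Export Field Contents [ TEMP::sList ; "$path" ]
# 3. Import the combined file as tab-separated data into TARGET
Go to Layout [ "TARGET" ]
Import Records [ With dialog: Off ; "$path" ]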


Posted
1 hour ago, comment said:

then we can talk about scripting some or all parts.

Thanks again, Comment ... I did everything as suggested.

How do I need to fix the previous script, if it is still the one I need to use?

Dan

Posted
5 minutes ago, dkey said:

How do I need to fix the previous script, if it is still the one I need to use?

There is nothing in the previous script that's applicable to this method. 

Posted (edited)

I do get the 1,998 files; however, the tabs get lost.

This seems to be a regular problem with FileMaker.

Opening the file in TextWrangler, I just see spaces where I should find <tab>, except at the end of each paragraph, where TextWrangler shows two "red" inverted question marks, like this:

000101 00 01 01 00:00:24:24 DSC_0661.MOV 7/8/2017 15:54:58 /Volumes/SHOWDOW_2017/MEILI Walks 01/010 1FA556300502009F000102 00 01 02 00:00:57:12 DSC_0662.MOV 7/8/2017 15:55:52 /Volumes/SHOWDOW_2017/MEILI Walks 01/010 200AC7FEE96A2ED4000103 00 01 03 00:00:09:15 DSC_0663.MOV 7/8/2017 15:59:50 /Volumes/SHOWDOW_2017/MEILI Walks 01/010 F32E306D85C20ECD000104 00 01 04 00:00:50:19 DSC_0664.MOV 7/8/2017 18:01:04 /Volumes/SHOWDOW_2017/MEILI Walks 01/010 21F2E7348A2D4B3E000105 00 01 05 00:00:52:23 DSC_0665.MOV 7/8/2017 16:02:42 /Volumes/SHOWDOW_2017/MEILI Walks 01/010 4DC482E8DED4C943000201 00 02 01 00:00:01:02 DSC_0666.MOV 7/8/2017 16:06:18 /Volumes/SHOWDOW_2017/MEILI Walks 01/010 CDAC4BC790AB5592000202 00 02 02 00:00:27:12 DSC_0667.MOV 7/8/2017 16:06:22 /Volumes/SHOWDOW_2017/MEILI Walks 01/010 B1731833D7BF69B8000203 00 02 03 00:00:18:15 DSC_0668.MOV 7/8/2017 16:07:40 /Volumes/SHOWDOW_2017/MEILI Walks 01/010 F95ABC734A63BE81000204 00 02 04 00:00:18:00 DSC_0669.MOV 7/8/2017 16:08:12 /Volumes/SHOWDOW_2017/MEILI Walks 01/010 16D9DD0074C87250

This is the first line as I see it in TextWrangler, as it contains the 9 paragraphs of the first record.

Is there a way to import the files so that the tabs are preserved in the Text Content field?

The original files in TextWrangler are standard text in basic UTF-8 with Macintosh carriage returns.

Sorry to give you all these problems

Edited by dkey
Posted

Hi Comment

I found a "very silly" way to fix the problem (although I always run into trouble importing text files into text fields in FMP).

In TextWrangler, I replaced the tabs with "ç", as it doesn't appear in any of the files, and added a "∂" before each carriage return.

Then I exported to FMP using UTF-8 with BOM.

I followed your instructions and exported the files as tab-separated text.

Back in TextWrangler, I converted the "ç" back to tabs and removed the "∂" before the carriage returns.

Importing into the TARGET table this time placed every part into the correct field, as needed.

Well, after all I am no spring chicken, but it worked, so thanks again for your very good assistance.

Kind regards

Dan


PS

I still wonder how to import text into FMP in a way that keeps things exactly as they originally are; I've had all kinds of problems with it and never found a proper solution.

If you like, I will attach an original file for you ... hoping I don't abuse your time.

Here it is

 

00.txt

Posted (edited)
11 minutes ago, dkey said:

I followed your instructions and exported the files as tab-separated text.

That's not following my instructions. I suggested exporting the summary field's contents using the Export Field Contents… command. If you exported the records to a tab-delimited file, then of course the in-field tabs were converted to spaces: http://www.filemaker.com/help/16/fmp/en/#page/FMP_Help%2Ftab-separated-text-format.html%23
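In script-step terms, the difference is roughly this (a sketch; "$path" stands for whatever export path you use):

# Export Records writes a tab-delimited file, so tabs inside a field
# must be converted (tab is reserved as the field separator):
Export Records [ With dialog: Off ; "$path" ]
# Export Field Contents writes the field's text verbatim, tabs and all:
Export Field Contents [ TEMP::sList ; "$path" ]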

Edited by comment
Posted

I believe I did. This is the visual sequence of my actions:

Import file:

[screenshot: import dialog]

Result of the import in the TEMP table:

[screenshot: TEMP table]

Export from the sList field:

[screenshot: export step]

TextWrangler page after the export:

[screenshot: TextWrangler]

Import file into the TARGET table:

[screenshot: import dialog]

Result in the TARGET table after importing:

[screenshot: TARGET table]

 

I am not sure what I did wrong. Sorry


Posted
3 minutes ago, dkey said:

I am not sure what I did wrong.

Your third step (Export from the sList field) is wrong. You are exporting the records as a tab-separated file; you should be exporting the field contents instead. Please read this carefully:

3 hours ago, comment said:
  • Next, control-click into the sList field, select Export Field Contents… from the shortcut menu and export to a known location.

 

Posted

Thanks, I got lost previously; now everything is OK.

Thanks and regards
