April 1, 2012

Hi all - I recently wrote a script that converts images stored in container fields to images stored in a directory, with a reference kept in the container field. The script loops through the records, exporting each image to a directory and then inserting the exported image via the Insert Picture script step. There are roughly 75,000 records/images to process.

What seems to be happening is that the more images there are in the directory, the longer the Insert Picture step takes to run. This seems odd, since: 1. the path to the file to insert is passed to the step, and 2. we are merely storing a reference to the file. Performance has slowed to a crawl - it has taken 24 hours to get through 50,000 records. I truly thought this would take an hour or two at most.

Am I missing something? The script is really simple: Step 1 - export the image. Step 2 - insert the image as a reference. The Export step takes a second or two at most; the Insert step takes several minutes. Any knowledge or suggestions would be appreciated. Thanks!
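For reference, the script is essentially the following (the table, field, and variable names here are placeholders, not my real ones):

    Go to Record/Request/Page [ First ]
    Loop
      # Build the destination path for this record
      Set Variable [ $path ; Value: $exportFolder & Images::ImageID & ".jpg" ]
      # Step 1 - write the container contents out to disk
      Export Field Contents [ Images::Picture ; "$path" ]
      # Step 2 - re-insert the same file, storing only a reference
      Insert Picture [ Reference ; "$path" ]
      Go to Record/Request/Page [ Next ; Exit after last ]
    End Loop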
April 1, 2012

That's definitely too long. I've done similar things (just exporting) with 60,000 pictures in a few hours, to a local disk on a G4. Could it be that the destination folder is on a network drive? I would not put 75,000 items in a single folder on a network drive. If you can script it, add a directory layer like this:

basefolder/
    1000/
        pict1001
        pict1002
        pict1003
        ...
    2000/
        pict2001
        ...

These directories need to be created before you export.
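If the pictures have a sequential numeric ID, the subfolder can be computed as part of the export path - roughly like this (a sketch only; ImageID and the $variables are placeholders):

    # Group IDs by thousands: pict1001..pict2000 land in 1000/, pict2001.. in 2000/
    # (IDs up to 1000 would land in a 0/ folder)
    Set Variable [ $subfolder ; Value: Div ( Images::ImageID - 1 ; 1000 ) * 1000 ]
    Set Variable [ $path ; Value: $baseFolder & $subfolder & "/pict" & Images::ImageID ]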
April 1, 2012  Author

> This thread and especially Fenton's response should help.

Thanks bcooney. I did learn something from that post - it never occurred to me to have a calculation container field with the path in the calculation. Alas, that won't help me here. I have a system in place that lets our clients choose how they want their images stored: they can switch the storage type (in the database or as a reference) and set the directory if they choose references. The system then automatically goes through each record and "updates" it from one form to the other. So a calculation field wouldn't help in this case.

Could the performance issue be that I'm using a single directory to store all the files? Does that cause a performance problem in general?

Karsten, I think the sub-directory approach is a good idea - keeping the number of files per directory down should help. I can't really do much about the network directory issue; it's a multi-user app, so it makes sense to store the images on the server.
April 1, 2012

> ... Could the performance issue be that I'm using a single directory to store all the files? Does that cause a performance problem in general?
> Karsten, I think the sub-directory approach is a good idea - keeping the number of files per directory down should help. I can't really do much about the network directory issue; it's a multi-user app, so it makes sense to store the images on the server.

On the Mac, a network folder with only 5,000 pictures is already annoyingly slow (it takes ages to load all the metadata). Another aspect not yet discussed is the size of the pictures: an image referenced from a media field is, IIRC, loaded fully for display. Generally I avoid using references altogether; I leave the pictures on disk and create & import a scaled version for use in FileMaker.

What kind of system do you have where clients/users choose how the pictures are stored? I consider this an implementation issue that shouldn't concern my users - though I do give them tools to open their pictures from the database in Photoshop or Preview, or to reveal the originals (not the database thumbnails) in the Finder.
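In script terms that workflow is roughly the following - a sketch only, assuming a scaled copy has already been created on disk by some external tool ($thumbFolder and the field names are placeholders):

    # The full-size original stays on disk, untouched
    Set Variable [ $thumbPath ; Value: $thumbFolder & Images::ImageID & ".jpg" ]
    # Import the small version and store it IN the database (no Reference option),
    # so display never has to pull the full-size file over the network
    Insert Picture [ "$thumbPath" ]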