
This topic is 6898 days old. Please don't post here. Open a new topic instead.

Recommended Posts

Posted

I'm currently running an import into the newest version of our file, bringing all the information over from the old file. A few fields are missing from the old file, but I'm just using matching names to handle the field mapping.

Anyway, the file I'm importing from has roughly 61k records, and the total file size is probably around 200MB (a lot of text). I ran the import for over an hour and had only imported roughly 15k records when I had to cancel it.

If it makes a difference, I was terminal-serving into our server at work, and I had the FileMaker Server service stopped. I can't understand why it would be running THAT slowly. Maybe because of Terminal Server?

Posted

Importing large numbers of records, defining fields, and recovering a file are all things that should not be done over a Terminal Services connection. If you need to do these things remotely, use a connectivity tool that won't make the activity fail if the internet connection drops partway through. Such tools include pcAnywhere, VNC, or Timbuktu (Mac).

While it is not ALWAYS true, it is almost always true that the Terminal Services server is set up to reset sessions if the connection is dropped. There is nothing worse than being two hours into a recover and having your connection drop, forcing you to start over.

That said, my experience with importing large sets of records has shown me that many factors influence import performance: the number of indexed fields in the receiving file, the location of the file containing the records being imported, and the capabilities of the workstation performing the import.

The more fields that are set to be indexed in the receiving file, the slower the import will be, because FileMaker not only imports the data from each record but also indexes it as it imports.
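This indexing-during-load overhead isn't unique to FileMaker, and it's easy to demonstrate with SQLite (used here purely as an analogy, since FileMaker's engine isn't scriptable this way). A minimal sketch comparing a bulk insert into a pre-indexed table against inserting first and building the indexes afterwards:

```python
import sqlite3
import time

def bulk_insert(index_first: bool, n: int = 50_000) -> float:
    """Insert n rows into a 3-column table; time the load plus indexing.

    If index_first is True, the indexes exist during the insert (the
    FileMaker situation); otherwise they are built after the bulk load.
    """
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (a TEXT, b TEXT, c TEXT)")
    index_sql = [f"CREATE INDEX idx_{col} ON t ({col})" for col in "abc"]
    if index_first:
        for sql in index_sql:
            con.execute(sql)
    rows = [(f"a{i}", f"b{i}", f"c{i}") for i in range(n)]
    start = time.perf_counter()
    con.executemany("INSERT INTO t VALUES (?, ?, ?)", rows)
    if not index_first:
        # Build the same indexes after the load instead of during it.
        for sql in index_sql:
            con.execute(sql)
    con.commit()
    elapsed = time.perf_counter() - start
    con.close()
    return elapsed

# Maintaining indexes row-by-row during the load is typically slower
# than one bulk index build afterwards:
print(f"index first: {bulk_insert(True):.2f}s")
print(f"index after: {bulk_insert(False):.2f}s")
```

The FileMaker equivalent of the "index after" strategy would be temporarily setting fields to not be indexed (or to index "as needed") before a big import, then letting them re-index afterwards.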

I have found that copying the file holding the records to be imported onto the workstation performing the import can help performance. On top of that, I have found that setting the sharing status of both files to Single User can help. Lastly, and this probably doesn't need to be said: the more capable the workstation, the faster the import.

Hope this helps.

Posted

Matt, thanks for your insight. I'm still not sure what the problem may be, but you've given me some good ideas to test. I think I may take the files home and see how long the import takes on my computer, off the work network. Workstation performance shouldn't be much of an issue; it's a dual-proc server with 1G or 2G of RAM, and that's all that machine does. I guess further testing is required on my part.

Thanks for the help!

Posted

The workstation performance shouldn't be much of an issue, it's a dual-proc server with 1G or 2G RAM and that's all that machine does. I guess further testing is required on my part.

Thanks for the help!

Yup, sounds like the workstation is definitely not the bottleneck. I would bet that indexing is the bottleneck; it is the most common culprit.

Good luck. :thumbup:

