

Recommended Posts

  • Newbies
Posted

I hope this is the right place for this.

I have a client who runs Server 5.5 with 14 version 6 desktop clients. Their business involves multiple databases, one of which is a collection of pictures. This pictures database reaches the server's 2 GB file size limit roughly every two months. They have a procedure for catching it before the limit hits: they back up the older records to CDs by date range, so those pictures remain available if they are needed for further reference. The CDs are just copies of the pictures database with records removed based on dates as the primary key. The problem is that the process they go through to do this is time consuming, and there has got to be a better way to achieve the same end result.

Their process is to shut down the Server service, copy the database in question to another location three times, then open each of those three copies, do a find for a particular date range, and delete the omitted records. This leaves two database files to be burned to CD and one to go back on the server. They also run into the time-consuming step of checking for unused blocks in those files. All in all the process takes a few hours, and while it does give a safe result in the end, there has got to be a better way to do this. I am not a FileMaker expert by any means; they have asked me to look into a better solution along with some minor networking issues I am covering for them. I have looked into the import and export features and have not found any great solutions.

I should also mention that beyond the actual picture fields, most of the other fields in the database are pulled through a relationship to a main order database. I have been thinking there could be a way to do this involving an empty clone and importing a found set, but I would like to know if anyone has a better option for this issue.

I know that version 7 and Server 7 remove the 2 GB barrier, but upgrading is not an option due to budget at this time. I would also rather they not sidestep the problem by upgrading to 7 and letting the file grow unchecked: they do not employ a full-time technician to monitor their hardware, and with the limit gone and nobody watching the file size, they could fill up the server's disk rather easily.

If more information is needed, please let me know.

Posted

I can't think of anything other than importing into a clone (or clones), then tossing the original file. The big question is whether an import is faster than deleting records and then removing unused blocks. I believe importing would be easier to script and automate.
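Roughly sketched, it could be two ScriptMaker scripts (file names and the saved find below are placeholders, and you would still swap the live file with the Server service stopped):

    In Pictures.fp5:
        Perform Find [Restore]                              (saved find for the date range to archive)
        Perform Script [External: "PicturesArchive.fp5"]    (calls the import script in the clone)

    In PicturesArchive.fp5 (the empty clone):
        Import Records [Restore, No dialog, "Pictures.fp5"] (imports the source file's current found set)

Since an import from an open FileMaker file pulls in that file's found set, the Perform Find in the first script controls exactly which records land in the archive.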

As far as not watching the file size goes, there is the Status(CurrentFileSize) function (called Get ( FileSize ) in version 7).
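In 6, a startup or nightly script could use it to throw up a warning automatically. A sketch (the 1.9 GB threshold is just an example; the function returns bytes):

    If [Status(CurrentFileSize) > 1900000000]
        Show Message ["The pictures file is approaching the 2 GB limit - time to archive."]
    End If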

The Troi File plug-in, or AppleScript on a Mac, could help with creating and renaming the clones. It takes very little time to create a clone, especially if there are few fields.
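In fact, in 6 the clone itself can be created by script with no plug-in at all (file name is a placeholder):

    Save a Copy as ["PicturesArchive.fp5", clone (no records)]

The plug-in or AppleScript would only be needed to rename or move the resulting file afterward, say to stamp it with the archive date range.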

  • Newbies
Posted

Thanks for the reply. I have been looking at cloning and importing and believe that is going to be the better way to go.

