
File Size Increasing - Why? (not a beginner user)




I have a bizarre problem that I cannot figure out.

Two days ago, I was working on a script involving two related files of a 13-file database hosted on a Windows FileMaker Server (I posted this on the FM 5-6 forum as well, since I'm not sure whether the problem is client- or server-side). While creating some calculation fields for the script, I noticed that performance had severely bogged down when clicking Done, i.e. when the changes were written to disk. Creating a relatively simple calculation and committing that change took 10-15 seconds, a far cry from how quickly this usually goes. Each record in this particular file has an image field stored in the database, originating from JPEGs and TIFs. Occasionally, while scrolling through the records, an image would take a while to be retrieved.

Why was performance dropping off so rapidly? I decided to close the database and reopen it. Reopening opens all 13 files, and this particular file took noticeably longer to open. It houses 500+ records and, due to the images, is about 45 megs. I looked at the file's location on the server and noticed it had ballooned to 90 megs! I unhosted the file, made a local copy, and opened it up. Performance still dragged. I then decided to Recover the file. It was recovered with no errors or corrupted records. I opened the file and everything seemed fine. I then closed it, and this is where it starts to get interesting: on closing, the file removed "empty data blocks" and voila, the file size was back to ~45 megs.

I wasn't sure why this happened, but later that day the performance issue came up again. This time the file size had increased to ~52 megs. The same recovery process as before brought the file size back down.

What was I doing to cause this? I am not sure. I had recently written a script and thought it might have something to do with it. Here's the setup:

The mysteriously growing file is the "parent" of the relevant files; I am doing the scripting in the "child" file (1300+ records). It's basically a one-to-many relationship. While scripting in the child file, I am setting number fields in the parent. Since I was making so many requests to the served file, I suspected a problem with indexing (it was turned on for the fields being set): the fields were changing so quickly and so often that I thought indexes were being created and overwritten, and maybe that was causing the problem. I turned indexing off and everything seemed normal, but later that day performance slowed and the file size increased again. I have been trying to find a way to reproduce the problem consistently but can't. I think it must be related to the scripts I've been writing, since the problem started, as far as I can tell (not positive, but pretty sure it hasn't happened previously), only after I wrote them.
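To try to catch the growth in the act, something like this minimal Python sketch could poll the hosted file and log any size change, so the bloat can be matched against whatever script was running at the time. The UNC path, file name, and interval are all hypothetical; substitute your own server share.

```python
import os
import time

DB_PATH = r"\\fmserver\databases\Parent.fp5"  # hypothetical server share and file name
INTERVAL = 60  # seconds between size checks

last_size = os.path.getsize(DB_PATH)
print(f"start: {last_size} bytes")
while True:
    time.sleep(INTERVAL)
    size = os.path.getsize(DB_PATH)
    if size != last_size:
        # Log the change with a timestamp so it can be correlated with activity.
        stamp = time.strftime("%Y-%m-%d %H:%M:%S")
        print(f"{stamp}: {last_size} -> {size} bytes")
        last_size = size
```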

Has anyone ever seen something like this? The file size mysteriously grows; I recover it and no corrupt records are found; I open the recovered file, close it, and the file size goes back to what it should be. Also of note: if the growing file is opened locally without being recovered first, the file size stays increased after closing. Anyone have anything? I am at a loss, having spent the majority of yesterday trying to figure this out.

TIA


I deleted your duplicate post in General Discussions >> FileMaker v5 - v6.

Please do not double post in the Forums!

1). It is not necessary.

2). It can be confusing.

3). It is counter productive.

4). It can make the members mad at you.

5). It is against List Etiquette.

Think of this Forum as one list with several areas of interest (or topics). You, the poster, are expected to read the topic descriptions and then try to determine which area comes closest to your question. It really doesn't matter if you miss the topic area (i.e. put it in Scripts instead of Define Fields), just as long as you think that is where the problem lies. Rest assured that we will still supply answers to your question (provided there is one). We do ask one thing, though: please avoid the General Discussions areas if a Topic area is already available.

If you would like additional information about any of this, or specifically the info in 1 through 5 above, feel free to contact me by PM.

TIA

Lee

:cool:


Not sure what's going on. I had a similar situation once in a file where each user has one record created at the start of their session (including a logo container), and the record is deleted when they log out. The file size would balloon even though there were virtually no records in it when it was closed. I never understood why, but I worked around it by not populating the logo field.

But what I really wanted to say: do NOT, I repeat NOT, recover the file to fix problems like this. The Recover command is meant to get data out of a badly crashed file, and it is by nature very aggressive. Recover will toss out anything it thinks is not right (data, field definitions, field options, script steps or whole scripts, layouts or layout objects, ...). It's very likely that you'll cause more problems than you fix.


How else can I fix the problem, though? The only way to reduce the file size back to its normal level is to recover, open, then close. Otherwise, file size and performance remain an issue.

So far, no data has been lost, and I have done it 4 times. I am still trying to pinpoint the cause. Extremely frustrating, to say the least!

Also, unlike what you experienced, no extra records are being created. The only tangible effects I can see are:

1. decreased performance, to the point of near unusability

2. increased physical file size

I see no extra records or anything like that. I am completely stumped. Also, given the dearth of free technical data and support available about FileMaker (I'm too used to working with open source stuff), it's hard to even formulate searches to see if anyone else has had similar problems. In the last 5-6 work hours the problem has not recurred; hopefully it stays that way. Still, I would like to know what happened/is happening!


The accepted way to restore files to their minimum size is to Save a Copy As (compressed). Then delete the original file and rename the copy to the original name. This is not easily scripted; it's more of a manual operation. If you continue to recover the file, you will probably break it eventually, if you haven't already. You'd be wise to go back to an earlier clone (obviously already the smallest size) and re-import your records.
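For the OS-side half of that swap (the Save a Copy As step itself still has to be done from FileMaker's menu), a minimal Python sketch along these lines could handle the rename, here keeping the bloated original as a dated backup rather than deleting it outright. The file names are hypothetical.

```python
import shutil
import time

ORIGINAL = "Parent.fp5"         # hypothetical name of the bloated file
COMPRESSED = "Parent Copy.fp5"  # hypothetical name FileMaker gave the compacted copy

# Keep the bloated original as a dated backup instead of deleting it.
backup = ORIGINAL + "." + time.strftime("%Y%m%d") + ".bak"
shutil.move(ORIGINAL, backup)
# The compacted copy takes over the original name.
shutil.move(COMPRESSED, ORIGINAL)
print(f"swapped; old file kept as {backup}")
```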

In FileMaker Developer 7 there is a File Maintenance command to compact a file in one step. Unfortunately, it is also not scriptable.


I have FileMaker Server 7 v2 and, as a (Citrix) client, FileMaker Pro 7 v3.

My file size also increases, and records disappear!

I've called FileMaker and they say the file is corrupted; recovering won't help.

They can recover the database for $100 a table!

The worst thing is that the records disappear and there is no solution available. Rebuilding the database and tables is costing me more than 3 weeks of work, and if it happens again, then what?

I never had this problem in 5 and 6.


4 weeks later...

Not sure if anyone was following this thread, but I figured out how to "fix" it. The Save a Copy As (compressed) option did not always return the file to its original size.

When I notice performance becoming sluggish in a file, I unhost it. Then I make a copy and open it locally. After that, I create a calculation field called tmp and set it equal to one (or whatever). I turn indexing *on* for this field (this is the key right here). Then I close the file and voila, the file returns to its normal size. The change in size is often quite dramatic, with a "bloated" file going from 30 megs back to 10.

I still have no idea why any of this is occurring, but at least there is a solution in hand.


If you have a lot of indexed fields, your file will grow just from users doing finds and sorts. And if you have corrupt data in your file, guess what: your index gets corrupted with that same corrupt data, causing all sorts of problems, including the file bloat you have described.

One way of clearing data corruption if you do not have container fields:

1. Save a copy of your file as a clone (no records); this also strips the index from the file. This ensures you are not carrying any corruption via corrupt records or the index.

2. Export your data as a TXT or XML file, with all fields and all records exported. This filters your data through the ASCII export, which will strip nearly 90% of data corruption out of your records (see the sketch after this list for a quick way to inspect the result).

3. Assuming you have no layout corruption in your clone file, import the TXT or XML file into your clone. Make sure you first save a copy of the clone for later emergencies.
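As a rough, optional check on step 2 (not part of the procedure itself), a minimal Python sketch like this could scan the exported TXT for unexpected control characters, giving a sense of what the ASCII export filtered out. The file name is hypothetical, and the allowed set assumes a tab-separated export where FileMaker writes in-field returns as vertical tabs.

```python
# Bytes we expect in a plain-text export: tab, LF, vertical tab
# (FileMaker's in-field return in tab-separated exports), and CR.
ALLOWED = {9, 10, 11, 13}

with open("export.txt", "rb") as f:  # hypothetical export file name
    data = f.read()

# Count control bytes that have no business being in a clean text export.
suspicious = sum(1 for b in data if b < 32 and b not in ALLOWED)
print(f"{len(data)} bytes, {suspicious} unexpected control characters")
```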

I also recommend keeping indexed fields to a minimum. I only index the key fields that finds are regularly performed on. I do not index sort fields; instead, I script the regular sorts. The first time a scripted sort runs, the sort order is saved to the index, and after that it runs nearly as fast as an indexed sort. Seldom-used sort fields I also leave unindexed: the first time a non-indexed sort is performed it will run slowly, but it too is then saved to the index.

IMHO you either have some data corruption happening in your file, or you have some user(s) doing a lot of complicated finds and sorts that are bloating your index.

Remember: the Recover command is only designed to recover lost data. It will NOT eliminate data corruption or layout corruption.


