
pteaxwa

Everything posted by pteaxwa

  1. Is there a way to quickly clear an entire global repeating field without deleting each repetition individually? I am currently implementing a set of back/forward buttons in a database using a global repeating field of arbitrarily large size. Since these buttons act as a stack, I tried to implement the functionality the way one normally would with an array (i.e., each new page visit pushes onto the back stack, and clicking back pops the value off back and pushes it onto the forward stack). Since FM8 supports accessing individual repetitions of a repeating field dynamically (I have done this in previous versions of FileMaker via string operations), using a repeating field seemed like the most 'natural' solution. However, I want to clear the global repeating fields on file exit (or open) without having to loop through each repetition. Moreover, whenever a new page is visited, the forward stack needs to be cleared. Currently, I can avoid having to clear the entire stack by creating another global variable to act as a pointer to the proper index (a Python sketch of this pointer approach is included after the post list below). However, I do not want additional data stored in globals where it is not necessary. (In a quick test, storing a floating-point number in each of the maximum possible 32,000 repetitions resulted in about a megabyte of storage.) So, how can I clear a repeating field quickly?
  2. Not sure if anyone was following this thread, but I figured out how to "fix" it. The Save a Copy as "compressed" option did not always return the file to its original size. When I notice performance becoming sluggish on a file, I unhost it, make a copy, and open it locally. Then I create a calculation field called tmp and set it equal to one (or whatever). I turn indexing *on* for this field (this is the key right here). Then I close the file and voila - the file returns to its normal size. This change in size is often quite dramatic, with a "bloated" file going from 30 megs back to 10. I still have no idea why any of this is occurring, but at least there is a solution in hand.
  3. Anyone have experience using FileMaker as a COM object? I haven't done much Windows programming, but I am looking at doing some Python/FileMaker interaction via COM (a rough pywin32 sketch of the general approach is included after the post list below). Any good resources to look into?
  4. I realized that in that file I forgot to ensure the calculation is unstored. It needs to be updated depending on the "current found count". Attached is the slightly modified version. median.zip
  5. I was asked to calculate some median values and was surprised to find that FileMaker 6 (the version I use - not sure about 7) has no built-in function to calculate the median of a found set. Hopefully this file can be of use to someone who needs to do the same (a short Python sketch of the median logic is also included after the post list below). Also, if something is incorrect in the script, I would appreciate some feedback! median.zip
  6. How else can I fix the problem, though? The only way to reduce the file size back to its normal level is to recover, open, then close. Otherwise, file size and performance remain an issue. So far no data has been lost, and I have done it 4 times. I am still trying to pinpoint the cause. Extremely frustrating, to say the least! Also, no extra records are being created like what you had experienced. The only tangible effects I can see are: 1. decreased performance, to the point of near-unusability; 2. increased physical file size. I see no extra records or anything like that. I am completely stumped. Also, given the dearth of technical data and support available about FileMaker (well, free info - I'm too used to working with open source stuff), it's hard to even formulate searches to see whether anyone else has had similar problems. In the last 5-6 work hours this problem has not recurred - hopefully it stays that way. Still, I would like to know what happened/is happening!
  7. I have a bizarre problem that I cannot figure out. Two days ago, I was working on a script in two related files of a 13-file DB hosted on a Windows FileMaker Server (posted this on the FM 5-6 forum also, as I am not sure whether the problem is client- or server-side). While creating some calculation fields for the script, I noticed that performance had severely bogged down when clicking Done - when the changes were written to disk. Creating a relatively simple calculation and committing that change took 10-15 seconds, a far cry from the relatively quick time this usually takes. Each record in this particular file has an image field stored in the DB, originating from JPEGs and TIFFs. Occasionally, while scrolling through the records, an image would take a while to be retrieved. Why was performance dropping off so rapidly?

     I decided to close the DB and reopen it. On reopening, all 13 files are opened, and this particular file took noticeably longer than usual. It houses 500+ records and, due to the images, is about 45 megs. I looked at the file's location on the server and noticed it had ballooned to 90 megs! I unhosted the file, made a local copy, and opened it up. Performance still dragged. I then decided to "recover" the file. The file was recovered with no errors or corrupted records. I opened the file and everything seemed fine. I then closed the file - this is where it starts to get interesting. The file removed "empty data blocks" and voila - file size back to ~45 megs. I wasn't sure why this happened, but later that day the performance issue came up again. This time the file size increased to ~52 megs. Same recovery process as before, and the file size came back down.

     What was I doing to cause this? I am not sure. I had recently written a script and thought it might have something to do with it. Here's the setup: the mysteriously growing file is the "parent" of the relevant files, and I am doing the scripting in the "child" file (1300+ records) - basically a one-to-many relationship. While scripting in the child file, I am setting number fields in the parent. Since I am making so many requests to the served file, I thought there might be a problem with indexing (it was turned on for the fields being set). The fields were changing so quickly and so often that I thought indexes were being created and overwritten - maybe that was causing the problem. I turned indexing off and everything seemed normal, but as it turned out, later that day performance slowed and the file size increased again.

     I have been trying to find a way to consistently re-create this problem but can't. I think it must be related to the scripts I am writing, as I believe the problem started after writing them (not positive, but pretty sure this hasn't happened previously). Has anyone ever seen something like this? File size mysteriously "grows"; recover it - no corrupt records are found; open the recovered file, close it, and the file size goes back to what it should be. Also of note: if the "growing" file is opened locally and not recovered first, the file size stays "increased" after closing. Anyone have anything? I am at a loss here, having spent the majority of yesterday trying to figure this out. TIA
  8. I am fairly new to FileMaker and come from a relational database background. Working with an old data set strewn with errors, I came upon a set of 300+ records out of 2000 that were duplicates of some sort. I set out to delete those records but quickly saw some hurdles. I found this post on how to delete duplicates: http://www.afilemakeraffliction.com/list/howto/afa1095.html I didn't really feel like implementing it the way he did, so I fudged this solution. For the sake of simplicity, say you have one field, id, in a file called Foo.

     1. Create a self-relationship on 'id'.
     2. Create a calculation field '_count_dupes' defined as Count(Foo::id).
     3. Write a script to loop through all the records:

        # begin the script
        Go to Record [First]
        Freeze Window
        Enter Find Mode
        Set Field [id, "!"]   # not strictly necessary, but should speed up the search
        Perform Find [Replace Found Set]
        Loop
          Loop
            Exit Loop If [_count_dupes <= 1]   # could just do = 1, but <= to be safe
            Delete Record/Request [No Dialog]
          End Loop
          Go to Record [Next, Exit after last]
        End Loop
        Refresh Window
        Go to Record [First]
        # end the script

     Naturally, after the Perform Find you may want to sort the records by some sort of creation date so that the first-created record is not the one deleted. I haven't seen this particular method posted anywhere. Hopefully it can be of some use to someone, and maybe you can use it or, better yet, improve upon it!
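A minimal Python sketch of the pointer approach mentioned in post 1 above. This is not FileMaker code; the class and names are purely illustrative of the idea that keeping an index to the "top" repetition makes physically clearing the forward stack unnecessary:

    class History:
        """Back/forward history kept in one list plus two indexes, so the
        forward stack is never physically cleared - entries past 'top'
        are simply ignored and overwritten by later visits."""

        def __init__(self):
            self.pages = []   # analogous to the global repeating field
            self.pos = -1     # index of the current page
            self.top = -1     # highest valid index; everything past it is stale

        def visit(self, page):
            self.pos += 1
            if self.pos < len(self.pages):
                self.pages[self.pos] = page   # overwrite a stale repetition
            else:
                self.pages.append(page)
            self.top = self.pos               # "clears" forward history by fiat

        def go_back(self):
            if self.pos > 0:
                self.pos -= 1
            return self.pages[self.pos] if self.pos >= 0 else None

        def go_forward(self):
            if self.pos < self.top:
                self.pos += 1
            return self.pages[self.pos]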
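On the COM question in post 3 above: a rough pywin32 sketch of the general client-side pattern only. win32com.client.Dispatch and gencache.EnsureDispatch are real pywin32 calls, but the ProgID string below is an assumption, and whatever objects and methods FileMaker actually exposes depend entirely on its own ActiveX/Automation documentation and registered type library:

    import win32com.client

    # Late binding: ask Windows for whatever object this ProgID maps to.
    # "FMPRO.Application" is an assumed ProgID, not confirmed.
    app = win32com.client.Dispatch("FMPRO.Application")

    # Early binding generates a Python wrapper from the registered type
    # library, which is also a convenient way to inspect what the object exposes.
    app = win32com.client.gencache.EnsureDispatch("FMPRO.Application")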
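For reference alongside post 5 above, here is the standard median logic in a short Python sketch (no claim that the attached median.zip is implemented exactly this way): sort the values in the found set, take the middle one, or average the two middle values when the count is even.

    def median(values):
        ordered = sorted(values)
        n = len(ordered)
        if n == 0:
            raise ValueError("median of an empty set is undefined")
        mid = n // 2
        if n % 2 == 1:
            return ordered[mid]
        return (ordered[mid - 1] + ordered[mid]) / 2

    print(median([3, 1, 7]))       # 3
    print(median([3, 1, 7, 10]))   # 5.0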