
Mark Welch

Members
  • Content Count

    33
  • Joined

  • Last visited

Community Reputation

0 Neutral

About Mark Welch

  • Rank
    member
  • Birthday 09/19/1961
  1. > "...requires some good - and possibly unique - design approaches" < And there's the issue: to achieve success with FileMaker, I need to adopt and learn a "completely different" way of doing things. My whole reason for using FileMaker was simplicity, ease of use, and quick development -- not a long learning curve requiring help at every single stage. The cache issue is a perfect example. On the phone today, after I had complained several times that even running "flat-out," FileMaker rarely used more than a few percent of CPU capacity (peaking at 9% maximum
  2. At some point, I must decide whether to continue with a solution that is rapidly getting more complex and unworkable, in the hope that somehow I might make it work, or to go back and start over with another solution. That "help geek at FM," like the other help geeks at FM that I have talked with, made it very clear that FM does not wish to provide any support for its product; he was happy to confirm that yes, FileMaker is just really slow, and with that many records, yes, it might just not work. And absolutely yes, to get a list of unique values and a count for each, I'd need to go throug
  3. Thanks to everyone for your advice. After spending time on the phone with FileMaker support, it seems very clear that these performance issues are neither unusual nor unexpected -- FileMaker simply cannot handle the size database that I need, despite its marketing claims.
  4. I am afraid that I'm at the point of pulling my hair out over FileMaker's performance. I finally surrendered and called technical support and used my "one free ticket" on essentially the most basic question, "Does FileMaker work?" The answer, in the end, is that FileMaker will not work for my needs. I am abandoning 30 days of work attempting to get this tool to work for my needs. I will contact the client whose project first led me to this product, and advise them that the project is returning to "zero," with no technology solution selected. My personal project will also revert to
  5. Thanks -- I definitely did try to exclude the body section, but I'm not sure if I succeeded, since I cannot see what the three sections were called. I am simply not going to waste any more time on a "summary field," as it clearly drags down performance. But then again, maybe it's not the summary field. Even after deleting it, I am getting long periods when FileMaker won't respond but doesn't actually seem to be doing anything. Eventually I had to shut down the program via Windows, and when I reloaded it, it started "checking for consistency..." and if the progress bar is a reasonable measure,
  6. IdealData wrote: > "I think you need to define a SELF JOIN RELATIONSHIP using the URL as the match on both sides of the relationship. Then you can use the COUNT function to evaluate the number of records that contain the same URL. The SELF JOIN is a strange concept at first - a TABLE pointing at ITSELF, however it is perfectly valid and works just like any other relationship." < I simply don't understand this -- it just doesn't sound right, and given the incredibly long delays I've experienced trying to use a summary field in my database, I don't think this is something I should try
  7. I'm trying to identify duplicate URLs (for thumbnail images) in a database, in order to identify "stand-in" images that should not be shown (for example, many merchants have a standard image that says "image not available," which is silly for me to display). I've defined a summary field called "count_of_thumb" and I've tried creating new layouts several different ways, but all I ever end up with is either all 1's or all values the same (being the count of all records; I'm not sure if it's all records with a value in the field or a count of unique values). Clearly I am not understandin
  8. Adam sent me a link for a new beta version of the plug-in, and it definitely works now for the http downloads to a specified destination. Thanks!
  9. Thanks. My first reaction was: how could I possibly delete all records without having the records in the current view? The obvious answer was to use a short script (go to layout, show all, delete all). Sure enough, it was much faster.
  10. I'm not sure how "having fields on the screen" would affect this; the screen isn't updated during the deletion process. I assume that you are right that some (perhaps most) of the delay may come from updating the index. What I don't understand is, if I'm deleting all the records from a table (with no data from the table related to any other tables), why can't the entire table contents AND all index entries for the table be deleted together, much more quickly?
  11. During the current development stage of my project, I need to delete all records from certain tables before re-importing data again. What confuses me is the incredibly slow speed to "delete all records." It seems to be deleting one record at a time, taking more than a second per 100 records deleted. I'm still waiting for 550,000 records to be deleted, before I can start importing again (this was already a "reduced data set" which excluded files containing another 1.5 to 2 million records of source data; for the next few cycles I'll use an even smaller set of source files). At this
  12. I'm still a newbie at using FileMaker, and I'm definitely having some trouble finding the right plug-ins to handle these tasks: (1) Download FTP files (fine, I can do this with MooPlug, if I know the exact path and file name). (2) Get file-dates and directory listings for FTP sites (MooPlug can't do it; after spending an hour with FTPit Pro, I'm confident that it will take many more hours to decipher its sparse documentation, to see if it can actually meet this set of needs -- but it appears that it can't handle other functionality I need). (3) Download files via http (MooPlug c
  13. To follow up: TextPipe Pro is definitely doing more for me than I think I could have ever expected from scripting within FileMaker. I've been able to incorporate several different filters (including removal of embedded HTML code, character mapping, white-space removal, and a wide range of text-string substitutions), as well as transforming from pipe-delimited to CSV. My current "test suite" of files consists of 139 files totalling 1.2GB, and TextPipe Pro filtered through the files in 25 minutes. FileMaker was then able to import more than 1 million records from those files in less t
  14. > So it seems the best route would be to pre-process the data in another application. < I reached the same conclusion, and right now I'm test-driving TextPipe Pro, which seems like it might actually help me a lot more.
  15. Thanks for the replies. I am writing a script for this, and that's where I'm encountering problems. There are several hundred of these pipe-delimited text files, each with its own irregular update schedule. A few files are updated daily; some are updated once a week or so; some don't seem to be updated more than once a quarter. Each file contains these fields: ProductID|Name|MerchantID|Merchant|Link|Thumbnail|BigImage|Price|RetailPrice|Category|SubCategory|Description|Custom1|Custom2|Custom3|Custom4|Custom5|LastUpdated|status|manufacturer|partnumber|merchantCategory|merchantSubc
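The duplicate-thumbnail problem in posts 6 and 7 (a self-join relationship plus a Count function, or a summary field) amounts to counting how often each URL value repeats. Outside FileMaker, the same idea can be sketched in a few lines of Python; this is a minimal illustration, not anything from the original posts, and the URLs and threshold are made up for the example:

```python
from collections import Counter

def find_standin_urls(thumbnail_urls, threshold=2):
    """Return URLs that appear at least `threshold` times.

    A URL shared by many records is likely a generic "image not
    available" stand-in rather than a real product thumbnail.
    """
    counts = Counter(thumbnail_urls)
    return {url: n for url, n in counts.items() if n >= threshold}

# Example: two records share the same placeholder image.
urls = [
    "http://merchant.example/img/noimage.gif",
    "http://merchant.example/img/12345.jpg",
    "http://merchant.example/img/noimage.gif",
]
print(find_standin_urls(urls))
# {'http://merchant.example/img/noimage.gif': 2}
```

The single pass over the data is why an external tool handles this quickly even at the record counts discussed above, where the in-database summary-field approach stalled.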
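The pipe-delimited-to-CSV transformation that TextPipe Pro performs in posts 13-15 can also be sketched directly; this is a hypothetical stand-in for illustration only, using a truncated subset of the field names listed in post 15 and an invented sample row:

```python
import csv
import io

# First few field names from post 15 (the full list is longer).
FIELDS = ["ProductID", "Name", "MerchantID", "Merchant", "Link",
          "Thumbnail", "BigImage", "Price", "RetailPrice"]

def pipe_to_csv(pipe_text, header=FIELDS):
    """Convert pipe-delimited text to CSV with a header row.

    csv.writer quotes any field containing a comma, which a naive
    string replace of "|" with "," would get wrong.
    """
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(header)
    for line in pipe_text.splitlines():
        writer.writerow(line.split("|"))
    return out.getvalue()

sample = "123|Widget, large|42|Acme|http://example.com/w|http://example.com/t.jpg||9.99|12.99"
print(pipe_to_csv(sample))
```

Note that the product name containing a comma comes out quoted in the CSV, which is the main reason a real delimiter conversion beats a plain find-and-replace.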
