December 11, 2020
Hi, I have 5,000 records with a URL to check in a web viewer (loop > open URL in web viewer... check the source code, go to the next record). After roughly 400 records it slows down and eventually stops responding at all. I checked Activity Monitor and saw 30 GB of memory in use. If I close the file it keeps the 30 GB and keeps slowing down the whole Mac, but if I quit FileMaker and open it again it works well until about 400 more records... Has anyone had to deal with such an issue?
December 11, 2020
That's an old version of FM there. In your loop, do a pause of a few seconds every 50 or so iterations. See if that keeps the memory down.
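A minimal sketch of that suggestion as FileMaker script steps, assuming the existing loop already maintains a counter variable $i; the counter name and the 3-second pause are hypothetical:

If [ Mod ( $i ; 50 ) = 0 ]
  # Every 50th iteration, pause to give the web viewer a chance to release memory
  Pause/Resume Script [ Duration (seconds): 3 ]
End If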
December 11, 2020  Author
Thank you Wim. I did some research and it seems to be a known issue: the web viewer leaks memory. The best workaround I found is to use two windows, one for the web viewer and one for your fields, then after each load reset the web viewer and wait 1 second:

Set Web Viewer [Object Name: "wv"; URL: ""]
Set Web Viewer [Object Name: "wv"; Action: Reset]
Pause/Resume Script [Duration (seconds): 1]
Close Window [Current Window]

Credit to "keeztha"; that was 4 years ago, maybe someone has sorted it out since ^^

Result: I keep one window and just reset and wait 1 second. I use 3 times less memory and got to 800 records. I'm sure it will get further, but memory still keeps increasing until it freezes, so I keep looking.
Edited December 11, 2020 by ibobo
December 13, 2020  Author
Finally, the best approach. Setup: 1 window, 1 web viewer, 1 portal. Script:
- Loop
- Go to the portal, select the next row, set the web viewer to the URL (from the portal row)
- Reset the web viewer, wait 2 seconds, go to the next portal row
- End Loop

With that I can get through all 5,000 records.
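Put together as FileMaker script steps, that loop might look like this; the portal object name "urls", the web viewer name "wv", and the field Links::URL are hypothetical placeholders, not names from the post:

Go to Object [ Object Name: "urls" ]
Go to Portal Row [ Select: On ; First ]
Loop
  # Load the current row's URL (a pause may be needed here so the page
  # finishes loading before the source is checked)
  Set Web Viewer [ Object Name: "wv" ; URL: Links::URL ]
  # ... check the source here, e.g. GetLayoutObjectAttribute ( "wv" ; "content" )
  # Reset releases the loaded page, which is what keeps memory flat
  Set Web Viewer [ Object Name: "wv" ; Action: Reset ]
  Pause/Resume Script [ Duration (seconds): 2 ]
  Go to Portal Row [ Select: On ; Next ; Exit after last: On ]
End Loop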
December 13, 2020
What is it that you're checking on those URLs? Couldn't you interact with an API instead of loading a web viewer?
December 14, 2020  Author
I'm checking prices of products on Amazon; if I use cURL, the website detects it, and I don't want to deal with their API. Since I reset every time, the issue now is a captcha that I didn't get before XD
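One way to at least detect the block inside the loop: a hedged sketch assuming the web viewer is named "wv" and that the block page mentions "captcha" somewhere in its source (both are assumptions, not verified against Amazon's current markup):

Set Variable [ $source ; Value: GetLayoutObjectAttribute ( "wv" ; "content" ) ]
If [ PatternCount ( $source ; "captcha" ) > 0 ]
  # Back off before retrying this row; the 60-second wait is a guess
  Pause/Resume Script [ Duration (seconds): 60 ]
End If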
December 14, 2020
Pretty sure you're violating some sort of Amazon EULA by scraping their site, so be careful that you don't end up on their blacklist. They have APIs for this and they're not that hard; if you don't want to deal with them, find a local developer that you like and trust and get them to do the integration for you. Your solution will be a lot more stable, performant, and scalable.