
Go to prior page or reload on web viewer


EllenG



I have a web viewer that starts here: http://alpacanation.com/alpacas-for-sale.aspx

The user can then enter the search criteria and select an animal from the search list. I have a script that extracts the data from the animal's page. After the extraction, I want to return the user to the list of animals from the search criteria page.

I can't figure out how to capture the options selected in the search, and none of the Set Web Viewer options (back, forward, reload) seem to work. There must be a simple solution!

Also, is there a way to extract a picture from the web page into FMP10?

Thanks.


The script step Set Web Viewer [ Object Name: "WebViewerName" ; Action: Go Back ] will work.

A likely reason why it would not work is that the web viewer has not been named, or the name used in the script step doesn't match the object name on the layout; in that case, FileMaker reports no error and the Set Web Viewer step simply does nothing. To check this, go to Layout mode, select the web viewer, choose Object Info from the View menu, and make sure the Object Name is filled in and matches the name used in the Go Back step (you may want to copy/paste it).
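As a quick sanity check (assuming the web viewer object is named "MyViewer", a placeholder for your own object name), evaluate the following in the Data Viewer or a temporary Set Field; if it comes back empty, the name isn't resolving to the object on the layout:

GetLayoutObjectAttribute ( "MyViewer" ; "source" )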

For downloading a remote web image and placing it in a FileMaker container field, try using ScriptMaster from 360Works (a free plugin) to download an image URL into a container. Specifically, see the "Get URL As Container" example in their example file. Alternatively, you could download the image using cURL, FTP, etc., but that is going to be more complicated.
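On the FileMaker side, the usage would look roughly like this; the field names are placeholders and the registered function name may differ in the example file:

Set Variable [ $imageURL ; Value: Animal::PhotoURL ]
Set Field [ Animal::Photo ; GetURLAsContainer ( $imageURL ) ]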


I verified that I have the correct object name, but no matter what I do, it keeps returning to the first web page and not the prior page the user was on (the list of animals from the selected criteria). Is there any way to get back to that page? My web viewer file is attached, if that will help.

Also, I need help extracting some data. I am able to get all the other fields I need except this one. There are two possible scenarios for how it will show up in the HTML source code, so I am also including the two HTML files; here is what they look like (the common initial code that I can position to is lineage_dam.gif):

Source 1 example:

(need to extract Yes Suri Cinder)

lineage_dam.gif" align="left">

Yes Suri Cinder

Source 2 example:

(need to extract Café Aulait)

lineage_dam.gif" align="left">

Cafe Aulait

Thanks again... Don't know what I would do without this forum!

webviewer.zip


First, let me say that Alpacas are adorable!

Thanks for providing this sample. Much of my work with FileMaker has involved screen scraping, so it's helpful to see the solution.

The problem lies in creating a new record after extracting the relevant data; this causes a reload of the URL that initiates the search, with all fields blank. The reason is that the web viewer is under 'hybrid' control: at first the program loads the search page, but after that it's under user control; the user navigates through a page to list alpacas, selects one, presses Extract, and repeats this for each alpaca they're interested in (at least, that's how I understand the process flow). To see this happen, navigate to an alpaca detail page, then go to Records > New Record and you'll see a reload occur.

However, the URL you are navigating to is still set to the first search page, so that loads with no sex, age, type, etc. set, rather than the list you want.

I believe the simplest resolution is this: before you create a new record, send a Go Back command to the web viewer. That will take you back to the list of alpacas. Next, set a variable to the web viewer's "source" (its URL) to store the URL of the list page so you can go to it later; that URL contains the search criteria, so going to it later will present the last list the user viewed. Then execute a Set Web Viewer [Go Forward] and proceed to extract the fields from the individual alpaca screen. Following the field extraction, load the page stored in the variable to list the alpacas again. Since it appears to be an HTTP GET request, this should work. Alternatively, you could put the web viewer under total FileMaker control, where the user enters the age, sex, and type into FileMaker fields.
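A rough sketch of that order, assuming the web viewer object in your file is named "AN_Extract" and using placeholder names for the variable and the extraction steps:

Set Web Viewer [ Object Name: "AN_Extract" ; Action: Go Back ]
Pause/Resume Script [ Duration (seconds): 2 ]
Set Variable [ $listURL ; Value: GetLayoutObjectAttribute ( "AN_Extract" ; "source" ) ]
Set Web Viewer [ Object Name: "AN_Extract" ; Action: Go Forward ]
Pause/Resume Script [ Duration (seconds): 2 ]
# ...extract the fields from the detail page and create the new record...
Set Web Viewer [ Object Name: "AN_Extract" ; URL: $listURL ]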


I wanted to respond to this in a separate message since it's a different question (parsing HTML):

I suggest first setting an index field to the position of the closing tag that ends the string of interest for case 1, starting that search at the position of "lineage_dam.gif" (the bracketed strings below stand in for the actual end tags from your two HTML samples), i.e.:

Set Field [ i ; Position ( HTML ; "lineage_dam.gif" ; 1 ; 1 ) ]

Set Field [ j ; Position ( HTML ; "[end tag for case 1]" ; i ; 1 ) ]

If [ j < 1 ]  (the case 1 tag wasn't found, so the data conforms to case 2)

Set Field [ j ; Position ( HTML ; "[end tag for case 2]" ; i ; 1 ) ]

Set Field [ i ; Position ( HTML ; ">" ; j ; -1 ) ]  (use -1 to search backwards for the preceding tag delimiter)

Set Field [ alpacaname ; Middle ( HTML ; i ; j - i ) ]

Else  (meaning it was case 1)

Set Field [ i ; Position ( HTML ; ">" ; j ; -1 ) ]

Set Field [ alpacaname ; Middle ( HTML ; i ; j - i ) ]

End If

You will likely need to eliminate a return or shift the positions over by one or two characters, but this is the general idea.
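Put together as a single calculation, the same idea might look something like this; the two end-tag strings are placeholders for the actual markup that follows the name in each case:

Let ( [
  anchor = Position ( HTML ; "lineage_dam.gif" ; 1 ; 1 ) ;
  endCase1 = Position ( HTML ; "[end tag for case 1]" ; anchor ; 1 ) ;
  endPos = If ( endCase1 > 0 ; endCase1 ; Position ( HTML ; "[end tag for case 2]" ; anchor ; 1 ) ) ;
  startPos = Position ( HTML ; ">" ; endPos ; -1 )
] ;
  Trim ( Middle ( HTML ; startPos + 1 ; endPos - startPos - 1 ) )
)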

You can also attempt to parse the data using JavaScript and the DOM. Any solution that parses HTML is prone to break if/when the web page changes, so if the same data can be obtained via web services, that would be better.

Checking overall lengths before filling fields may also be advantageous (e.g., if the result is more than 25 characters, don't fill the field).
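For example, assuming the extracted value has been placed in a variable called $name and the target field is a placeholder:

If [ Length ( $name ) > 0 and Length ( $name ) < 25 ]
  Set Field [ Animal::DamName ; $name ]
End If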


Use Set Variable [ $URL_with_List ; Value: GetLayoutObjectAttribute ( "AN_Extract" ; "source" ) ]

"source" = the URL of the page currently displayed in the web viewer.

That would go in the extract script before a new record is created. I'd also include a Set Web Viewer [ Object Name: "AN_Extract" ; Action: Go Back ], then a Pause/Resume of 2 seconds before the Set Variable step above, and after that step, a Set Web Viewer [ Object Name: "AN_Extract" ; Action: Go Forward ].

Even better, replace the pause with logic that checks the page has fully loaded (rather than a fixed-duration pause).

That way, it navigates BACK to the list view, records the URL, goes FORWARD to the detail view, then after adding the record, you can navigate to the saved URL.
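For the "fully loaded" check, here is a minimal sketch; it assumes that seeing the closing </html> tag in the web viewer's "content" is a good enough signal, though you may want to test for a marker specific to the page instead:

Set Variable [ $tries ; Value: 0 ]
Loop
  Pause/Resume Script [ Duration (seconds): .5 ]
  Set Variable [ $tries ; Value: $tries + 1 ]
  Exit Loop If [ $tries > 20 or PatternCount ( GetLayoutObjectAttribute ( "AN_Extract" ; "content" ) ; "</html>" ) > 0 ]
End Loop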


Glad to hear it's working. You might try the Filter function against, say, "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789" to clip spaces, returns, unprintable characters, etc. all in a single function.
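For example (the field name is a placeholder; the trailing space in the character list keeps multi-word names like "Yes Suri Cinder" intact, so drop it if you really do want spaces removed):

Set Field [ Animal::DamName ; Filter ( Animal::DamName ; "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 " ) ]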

If the solution is distributed, you may want to consider maintaining a version number and querying it against a number on a page of your own when the solution loads, since web pages change frequently, which may break the screen-scraping logic and require updates. It is very frustrating dealing with sources that don't have web services, which are so much more predictable.

If the screen scraping is done in JavaScript, it will be a bit more portable, and the scraping logic could be downloaded to update the solution without re-downloading the entire solution if it breaks.


