Ben Kreunen

Members
  • Content Count: 16
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About Ben Kreunen
  • Rank: Member

Profile Information
  • Gender: Male
  • Location: Melbourne, AU

Recent Posts
  1. Olger's approach is pretty much what we use for our work orders, although we don't hide closed work orders; we sort by status first and then due date. Changing a due date seems to make the purpose of setting one a bit pointless... but you could print via a script that checks the due date first and resets it if necessary.
  2. Built some more complex searches, but used http://www.url-encode-decode.com/ to avoid making mistakes ;-) Back on track now. Adding in optional criteria where available/relevant to narrow down the data retrieved, e.g. the person was alive in the same decade as the publication of the journal article (for authors)... I think I'll switch to using multiple templates, as in your example, for the different components to reduce the clutter in the calculation for debugging. (A sketch of building an encoded lookup URL follows the list.)
  3. I had tried it with a decoded URL, but not with single quotes... yes, I was too focused on one problem and not thinking about what I "should" have been using.
  4. Yes. Thanks. It's not the same URL that their form returns but it also makes sense. Should have thought of that ;-)
  5. That may be, but I was in a situation where direct import of the XML would instantly crash FileMaker. The calculations/scripts I set up are reasonably flexible in that I only need to copy a few lines of script and define the text for the start and end tags. The other nice thing (from a scraping perspective) is that the start and end "tags" can be any text. I've used this a number of times to quickly pull bits of data buried in ugly HTML code, and even in OCR'ed documents of structured text data. But for XML it's definitely only a last resort, once all direct import options have been exhausted. (A rough sketch of the between-tags idea follows the list.)
  6. Neither. Substitute is what you need here, as all you're doing is multiple substitutions:
     Substitute ( DecorationTechnique ;
       ["Incised" ; "Incised leather-hard paste"] ;
       ["Leather-hard paste" ; "Incised leather-hard paste"] ;
       ...etc )
  7. Ah, those were the days. FWIW, in the end I gave up on fixing the XML and wrote a script to insert the XML into a text field and then scrape the data fields out recursively. Extremely crude, but it made the solution completely self-contained, so it works on mobile devices as well. Ended up presenting it at a conference: http://www.vala.org.au/component/docman/?task=doc_download&gid=482&Itemid=269
  8. (Posted too quickly; this should probably be moved to Importing & exporting > XML/XSL.) Hi All, I'm trying to import an XML data source but am having problems getting the data in. I can download the XML locally and import it with my XSL without a problem. I've also tested the URL generated by the script on http://www.freeformatter.com/xsl-transformer.html successfully, and I can view the URL in a web viewer object OK. It fails when I try to import AND when I try to insert from URL into a text field (a small sketch for testing the URL outside FileMaker follows the list). I've tried various combinations of encoded/decoded characters in the URL but no luck yet. Open
  9. Also have a look at the --post-data=string and --post-file=file section in the wget manual. That authentication method possibly doesn't apply in this case; I haven't tried this yet, so that's about as far as I can help. (A sketch of an equivalent POST request follows the list.)
  10. Tested out the download > import > export > import process on a list of the RECORDKEYs we've digitised already. (I work in a digitisation service, so we're caught in the middle between collections and the digital repository.)
      • Batch downloading was OK, although our catalogue timed out occasionally... one benefit of using wget is that it retries failed connections.
      • The rest of the processing was nice and fast: import the folder of TXT into a text field, an auto-enter calculation strips out the DOCTYPE, loop, transfer the text of the corrected XML to a global text field in an unrelated...
  11. Your URL requires an authenticated, encrypted session, so you'll need a bit more than just the URL in this case. There are a few examples in the wget manual (the PDF in the /man folder of wget). (A hedged sketch of one possible approach follows the list.)
  12. I had overlooked that for now... partly trying to keep it as portable as possible. A Perl script may be an option for our internal use. ...but I'm hoping I can make a good enough case out of the benefits of re-using the data that we can push for the DOCTYPE to be fixed/removed. Not so much at the correction, but that I had to wrestle with a number of dumb ideas (read up on nesting for-each) to get to the final solution... Definitely need to learn more XML. To simplify things a bit I keep the apps and data in folders with the database. It's not essential b...
  13. Continuing with my workaround... I've looked at various ways of importing into a text field from a calculated URL. Being on Windows, I tend to use DOS batch files for different things. In this case:
      • Export a .BAT that has wget download the XML to a text file with a fixed name, "xrecord.TXT", in a directory where it's the only file.
      • Run the .BAT.
      • Import the folder to a global text field and strip out the DOCTYPE so FileMaker won't explode.
      • Export the field as plain text (XML with only the field contents) to "xrecord.XML".
      • Import "xrecord.XML".
      As I already have the RECORDKEY in a field... (A Python sketch collapsing the download and DOCTYPE cleanup into one script follows the list.)
  14. Many thanks for this. Changing the DOCTYPE will require asking the vendor to make a change... hmmm... but for FileMaker on a desktop I can always import into a text field, strip it out, and export to a temporary XML file. Not sure yet what I can do with FileMaker Go, but I have a temporary hack that scrapes the title and author from the HTML record for quick verification, which is enough for the mobile usage. I'm looking at using an iPhone/iPod Touch as a barcode scanner to create lists of items being sent to us for scanning, rather than having them fill in spreadsheets. Quicker for them...
  15. Hi All, Yes, yet another XSL cry for help. I have a couple of FileMaker databases for managing scanned images of library collection items that currently import metadata from Excel spreadsheets. This process is extremely inefficient (for those preparing the XLS) and suffers from numerous data quality issues. We've recently been provided with a few extra URL schemas for specific lookups and an XML feed of an individual record. I've hacked together a means of importing the metadata (scraping the source code of the XML) but it's not pretty. Importing the XML is obviously much more sensible, but this...
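
A sketch for post 2 above: building an encoded lookup URL in Python rather than pasting strings through url-encode-decode.com. The base URL, parameter names and the decade criterion are invented placeholders, not the real catalogue schema.

    from urllib.parse import urlencode

    # Hypothetical base URL and parameter names -- substitute the real lookup schema.
    BASE_URL = "https://catalogue.example.edu/search"

    def build_search_url(author, decade=None):
        """Return an encoded lookup URL; the decade criterion is optional."""
        params = {"author": author}
        if decade is not None:
            # Optional narrowing criterion, e.g. decade of publication.
            params["year_from"] = decade
            params["year_to"] = decade + 9
        return BASE_URL + "?" + urlencode(params)

    print(build_search_url('O\'Brien, Patrick "Paddy"', decade=1950))
    # Quotes and apostrophes are percent-encoded automatically (spaces become '+').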
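
A sketch for post 5 above: the start/end "tag" scraping isn't shown in the post, but the idea is roughly the following, where the delimiters can be any literal text (the sample HTML and delimiters are made up).

    def between(text, start_tag, end_tag):
        """Yield every chunk of text found between start_tag and end_tag."""
        pos = 0
        while True:
            s = text.find(start_tag, pos)
            if s == -1:
                return
            s += len(start_tag)
            e = text.find(end_tag, s)
            if e == -1:
                return
            yield text[s:e]
            pos = e + len(end_tag)

    # The delimiters do not have to be well-formed XML tags.
    ugly = '<td class="title">First record</td> junk <td class="title">Second record</td>'
    print(list(between(ugly, '<td class="title">', '</td>')))
    # ['First record', 'Second record']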
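
A sketch for post 8 above: when Insert From URL and the XML import both fail but a web viewer displays the page, it can help to fetch the same URL outside FileMaker and inspect the status code and headers. The URL here is a placeholder; the real source may need encoding or authentication.

    from urllib.request import urlopen
    from urllib.error import HTTPError, URLError

    url = "https://example.org/record.xml?key=12345"   # placeholder URL

    try:
        with urlopen(url, timeout=30) as resp:
            print(resp.status, resp.headers.get("Content-Type"))
            print(resp.read(500).decode("utf-8", errors="replace"))  # first 500 bytes
    except HTTPError as e:
        print("HTTP error:", e.code, e.reason)
    except URLError as e:
        print("Connection problem:", e.reason)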
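
A sketch for post 9 above: --post-data / --post-file make wget send an HTTP POST instead of a GET. The same idea in Python, with an invented login form and URL purely for illustration.

    from urllib.parse import urlencode
    from urllib.request import urlopen

    url = "https://catalogue.example.edu/login"              # placeholder
    form = {"username": "someuser", "password": "secret"}    # placeholder fields

    # Equivalent in spirit to: wget --post-data="username=...&password=..." <url>
    data = urlencode(form).encode("ascii")   # passing data= turns the request into a POST
    with urlopen(url, data=data, timeout=30) as resp:
        print(resp.status)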
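
A sketch for post 11 above, assuming the server happens to use HTTP Basic authentication over HTTPS; a catalogue that uses a login form and session cookies would need a different approach (e.g. posting the form first, as in the previous sketch, and keeping the cookie). The URL and credentials are placeholders.

    from urllib.request import (HTTPBasicAuthHandler,
                                HTTPPasswordMgrWithDefaultRealm, build_opener)

    url = "https://catalogue.example.edu/record.xml?key=12345"   # placeholder URL

    # Register the (placeholder) credentials for this URL.
    mgr = HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, url, "someuser", "secret")
    opener = build_opener(HTTPBasicAuthHandler(mgr))

    with opener.open(url, timeout=30) as resp:
        print(resp.status, resp.read(200))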
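
A sketch for post 13 above: the .BAT plus import/export round trip could also be collapsed into a single script run outside FileMaker (download the record, strip the DOCTYPE, write a clean xrecord.XML). The URL pattern is an assumption.

    import re
    from urllib.request import urlopen

    record_key = "12345"                                          # placeholder RECORDKEY
    url = "https://catalogue.example.edu/xml?key=" + record_key   # placeholder URL pattern

    with urlopen(url, timeout=60) as resp:
        xml_text = resp.read().decode("utf-8", errors="replace")

    # Strip the DOCTYPE declaration so the import doesn't choke on it.
    # (Assumes a simple external DOCTYPE with no internal [...] subset.)
    xml_text = re.sub(r"<!DOCTYPE[^>]*>", "", xml_text, count=1)

    with open("xrecord.XML", "w", encoding="utf-8") as out:
        out.write(xml_text)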