
Quito
Everything posted by Quito
-
Hi, I'm trying to obtain or refresh a Google Cloud OAuth 2.0 Access Token using a Service Account Key and FileMaker's native Base64Encode. Dependencies: the Service Account JSON key comes from a secure container field, and FileMaker's native TextDecode function interprets the container data as UTF-8 text. I'm out of ideas on what could be causing the failure. Anyone care to pitch in with ideas? All the very best, Daniel
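One frequent failure point in this flow is the assertion encoding: Google's token endpoint expects unpadded, URL-safe Base64 (base64url) for the JWT, while a plain Base64Encode emits `+`, `/`, and `=` padding. Below is a minimal Python sketch of the unsigned portion of the JWT assertion, useful for comparing against the FileMaker output. The function name is mine, the email and scope are placeholders, and the RS256 signature over this string still has to be produced with the service account's private key (FileMaker needs a plug-in or external service for that step):

```python
import base64
import json
import time

def b64url(data: bytes) -> str:
    # JWT requires unpadded URL-safe Base64; plain Base64 output must
    # have '+' -> '-', '/' -> '_' translated and '=' padding stripped.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def build_jwt_signing_input(client_email: str, scope: str) -> str:
    """Assemble the header.claims portion of the OAuth 2.0 JWT assertion.

    The result must still be signed (RS256) with the service account's
    private key before being POSTed to the token endpoint.
    """
    now = int(time.time())
    header = {"alg": "RS256", "typ": "JWT"}
    claims = {
        "iss": client_email,
        "scope": scope,
        "aud": "https://oauth2.googleapis.com/token",
        "iat": now,
        "exp": now + 3600,  # token assertions are valid for at most 1 hour
    }
    return (b64url(json.dumps(header).encode("utf-8"))
            + "." + b64url(json.dumps(claims).encode("utf-8")))
```

If the FileMaker calculation produces `+`, `/`, or trailing `=` anywhere in the assertion, the token endpoint will reject it, which is worth ruling out before debugging the signature itself.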
-
xml import Distinguish fields during XML import
Quito replied to Quito's topic in Script Workspace and Script Triggers
23 XSLTs and counting... These are really complex XMLs. For example, since there are four Medical Record Numbers (MRNs), the GP, the Hospital, the Specialist, and the Research Institution each get different contents from the XML. It's truly daunting, but amazing when you get the expected result. -
Hi, Comment, Following your advice, after making the script a bit more legible and adding Close Data File, the script progressed successfully:

-----------
Insert from URL [ Select ; With dialog: Off ; Target: $Pubmedxml ; "https://eutils.ncbi.nlm.nih.gov/...
Set Variable [ $Pubmedxml ; Value: RightValues ( $Pubmedxml ; ValueCount ( $Pubmedxml ) - 2 ) ]
Set Variable [ $filePath_XML ; Value: Get ( DesktopPath ) & "pubmed.xml" ]
Get File Exists [ "$filePath_XML" ; Target: $fileExists ]
If [ not $fileExists ]
	Create Data File [ "$filePath_XML" ; Create folders: Off ]
End If
Open Data File [ "$filePath_XML" ; Target: $dataFile_XML ]
Show Custom Dialog [ "File ID" ; "File ID for " & $filePath_XML & ": " & $dataFile_XML ]
Write to Data File [ File ID: $dataFile_XML ; Data source: $Pubmedxml ; Write as: UTF-8 ]
Close Data File [ File ID: $dataFile_XML ]
-----------

In time, the DesktopPath will change to TemporaryPath. Now I'm getting a 719 error (Error in transforming XML using XSL) when parsing the second stylesheet, but that's for another topic. Thank you so much and, All the very best, Daniel
-
xml import Distinguish fields during XML import
Quito replied to Quito's topic in Script Workspace and Script Triggers
Hi, Comment, Just to follow up on this, your advice was very sound. The text functions were abandoned and I'm currently juggling 23 XSLTs. Not ready yet, but the test results look amazing. Best regards, Daniel -
Hi, @comment, Write to Data File has replaced what was scripted previously. I am getting a 300 error (because the file is open?). I've checked around the Forums but cannot find a way to fix it. Is something missing in the script? I'm adding a screenshot. All the very best, Daniel
-
"I think you could make this significantly simpler by using variables instead of fields." Please elaborate further. "It won't work in a server-side script because you are using Export Field Contents. You should be writing to a data file instead. This was also already mentioned in the previous thread." So, I tested it on Windows, made a few adjustments, and finally it's working on both macOS and Windows 11. I'll work on the Write to Data File script now, and I'll open another topic if necessary. Although the software has always been intended to run on a server, I had to see it working locally first. Does Write to Data File work for both local and server use? All the very best, Daniel
-
Solved it: In GetContainerAttribute, I was writing a specific name instead of "filename" in some portions of the script. I noticed it when the script went through with "filename", yet failed with the specific name. Thanks, Comment. After at least 7 years, the PubMed en español project is finally ready for use on macOS. Will be testing it shortly on Windows and on the server. Best regards, Daniel
-
Hi, Comment, The contents of cXML_source are:

Substitute ( XML_source ;
	[ "<!DOCTYPE PubmedArticleSet PUBLIC \"-//NLM//DTD PubMedArticle, 1st January 2024//EN\" \"https://dtd.nlm.nih.gov/ncbi/pubmed/out/pubmed_240101.dtd\">" ;
	  "<!-- <!DOCTYPE PubmedArticleSet PUBLIC \"-//NLM//DTD PubMedArticle, 1st January 2024//EN\" \"https://dtd.nlm.nih.gov/ncbi/pubmed/out/pubmed_240101.dtd\"> -->" ]
)

If I import the XML using the XSLT from File/Import Records/XML Data Source, it performs flawlessly. So, it has to be something wrong with the script. The XML contains the following declaration: <?xml version="1.0" ?> Your assumption is correct. The XSLT states the "utf-8" encoding, twice. AFAIK, "utf-16" is necessary for Asian languages, but otherwise I don't understand the implications, nor how to correct the script. Maybe I have to force the use of "utf-8" during the download?
-
OK, so it's taken me 20 days to progress to a promising, yet non-working script: Inside the first line is:

Choose ( Abs ( Get ( SystemPlatform ) ) - 1 ;
	/*MAC OS X*/ Get ( TemporaryPath ) & "Pubmed.xml" ;
	/*WINDOWS*/ "filewin:" & Get ( TemporaryPath ) & "Pubmed.xml"
)

cXML_source contains a Substitute that removes the second line from the XML (the DOCTYPE with the DTD). Import fails with a [719] Error in transforming XML using XSL (from Xalan).
-
OK, so POE.ai provided the following script, based on your reply:

-------------
# Define script variables
Set Variable [ $url ; "https://example.com/file.txt" ]
Set Variable [ $tempFolder ; Get ( TemporaryPath ) ]
Set Variable [ $tempFilePath ; $tempFolder & "temp.txt" ]
# Insert file from URL into a variable
Insert from URL [ Select ; $url ; $tempFilePath ]
# Remove second line from the text
Set Variable [ $text ; Substitute ( $text ; ¶ & GetValue ( $text ; 2 ) & ¶ ; ¶ ) ]
# Write modified text to a temporary file
Set Variable [ $fileHandle ; Open for Write ( $tempFilePath ) ]
If [ $fileHandle ≠ "" ]
	Set Variable [ $writeResult ; Write to File ( $fileHandle ; $text ) ]
	Close File [ $fileHandle ]
End If
# Import the temporary file
Import Records [ With dialog: Off ; "$tempFilePath" ]
-------------

I don't expect it to work as is but, do you notice something overtly wrong for any step in particular?
-
Hi, Comment, Thank you. Yes, during the import. The XML processing works if the XML file is manually sent to a container field. This triggers the XML/XSLT processing script correctly: the XML is sent to the Desktop and reimported using the XSLTs. My problem occurs if the XML is stored as calculated text in a field OR as an XML within its container. Then the script fails. Can't the XML processing + XSLT occur directly against the stored files, without the need for the export-reimport step?

Yes, we have discussed this before, and I have updated the scripts accordingly, taking your insight into account as much as possible. Thank you again! My position is that both the XML and the XSLT are stored in their corresponding container fields, and that exporting the XML just to reimport it seems unnecessary. It does work, yet stripping the second line would make it perform faster.

Also, there can be thousands of separate XMLs in the processing queue (one of my tests involves a batch with 4800 separate XML records; another test involves a single XML with tens of thousands of records that can be over 2 GB in size). If every time an XML needs to be processed the Desktop gets a copy, then pretty soon the Desktop will get madly cluttered with XML files that then need to be deleted. As I do not know the path of the user's Desktop, I don't think I can script-delete the XML files after processing occurs. It also seems pretty dangerous to me to have scripts running against the Desktop, unless the user allows it. Thus the idea of handling everything from within the tool.

Now, I'm thinking that perhaps storing the XML from the text field into a temporary variable might do the trick. Or just importing the record directly from the XML server using an HTTP request, and skipping the Insert from URL step altogether. Yet the problem regarding the second line will persist, and the import of a large file will take months. All the very best, Daniel
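On the clutter and collision concern above: the usual pattern is to write each working copy into the system temporary directory under a unique, auto-generated name, then delete it as soon as the import finishes, which is essentially what Get ( TemporaryPath ) offers on the FileMaker side. A rough Python equivalent of that idea, purely to illustrate the unique-name-per-batch-item approach (the function name is mine):

```python
import os
import tempfile

def write_xml_to_temp(xml_text: str) -> str:
    """Write XML to a uniquely named file in the system temp directory,
    so a batch of thousands of XMLs never collides or clutters the Desktop."""
    fd, path = tempfile.mkstemp(suffix=".xml")  # unique name guaranteed by the OS
    with os.fdopen(fd, "w", encoding="utf-8") as f:
        f.write(xml_text)
    return path
```

After processing, the caller removes the file by the returned path, which sidesteps both the unknown-Desktop-path problem and the need for user-visible cleanup.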
-
Hi, Using Insert from URL, the result can be: 1. Downloaded as text into a text field (in XML format). 2. Downloaded as a text file into a container field, also in XML format, with a generic file name (efetch.fcgi). The tricky part is that the second line contains the DOCTYPE. When FileMaker reads this line, it spends up to twenty seconds there, and then progresses with the script. As there are many XSLTs, FileMaker reads the DOCTYPE that many times, extending the entire process to well over 3 minutes. If I manually remove the second line, the entire processing time goes down to the expected 1-2 seconds in total. In case 1, I've commented out the second line with a calculation, but I haven't been able to get the script to recognize the calculated text as XML and use that modified XML to process with the XSLTs. In case 2, I don't really know if I can modify the contents of efetch.fcgi from within its container field. What would be the best way to do this? Would both cases require the modified XML to be downloaded and reuploaded? Best regards, Daniel
-
Weird behavior with Checkbox Set using Value Lists from Second field
Quito replied to Quito's topic in Value Lists
Thank you, Comment, Fortunately, it was possible to modify the values just enough to make them differ within the first 100 characters. All is well now. Best regards, Daniel -
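The constraint behind this fix, as described in the thread, is that checkbox matching effectively keys on a truncated prefix of each value, so two long items identical in their first 100 characters behave as the same value and light up the wrong checkbox. A small Python illustration of that kind of prefix collision (the 100-character figure comes from this thread; the function and sample values are mine):

```python
def value_key(value: str, limit: int = 100) -> str:
    # Items whose first `limit` characters match collapse to the same key,
    # which is why selecting one checkbox highlighted another.
    return value[:limit]

# Two values that only differ after character 104 -- indistinguishable keys.
long_a = "Adverse event description " * 4 + "variant alpha"
long_b = "Adverse event description " * 4 + "variant beta"
```

Making the values differ inside the prefix, as done here, removes the collision without changing the value list mechanism.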
Weird behavior with Checkbox Set using Value Lists from Second field
Quito replied to Quito's topic in Value Lists
Thank you, Comment, The Checkbox Set in English, upon quick inspection, seems to be fine. The Checkbox Set in Spanish messes up the selection, even though both sets pull the data from the same value list. Best regards, Daniel weird checkbox.fmp12 -
Hi, There is a weird behavior occurring with a couple of Checkbox Sets (not all of the sets present in the layout):

1. When the user selects Checkbox #5, the selected value is correct but the visual indicator goes to Checkbox #3.
2. When the user selects Checkbox #6, the value is correct but the visual indicator goes to Checkbox #4.
3. When the user selects Checkbox #7, the value is correct but the visual indicator goes to Checkbox #4.
4. When the user selects Checkbox #8, the value is correct but the visual indicator goes to Checkbox #3.

This happens using the "x" mark or the "Check" mark. Deselecting the checked values becomes a messy challenge, because more values appear selected:

5. When the user selects Checkbox #8 again, the visual indicator goes to Checkbox #5!

Repositioning and resizing the field that contains the Checkbox Set, deleting the fields and starting over, and changing the Value list fields (and back) didn't fix the behavior. At first, it seemed to be related to lists with more than 12 values, but further testing a list with over 100 values performed fine. It doesn't matter if the values are plain words, or if they contain numbers, percentages, accents, or dots (COVID-19, 98%, .). What on Earth is going on with Value lists #3-8?
-
Thank you, Comment, Before I forget, the second line of the PubMed XML contains the DTD:

<!DOCTYPE PubmedArticleSet PUBLIC "-//NLM//DTD PubMedArticle, 1st January 2024//EN" "https://dtd.nlm.nih.gov/ncbi/pubmed/out/pubmed_240101.dtd">

This line is causing the spinning beach ball prior to processing the XML. If I delete the line, the import speeds up considerably (down from minutes to seconds). Is there a way of telling the import script to comment out/disregard/not import/delete the second line of the PubMed XML, prior to processing? The XSLTs are in Container (Global) fields and are working as expected. As I believe you are correct, the plausible explanation is that FileMaker is creating temporary files and using those. Regarding your second hint, I'll test it now. I'm guessing that if the scripts are performed on the Server/Cloud, the local XML file will be stored on the server, not on the user's machine, and that I'll have to add a unique ID to the file name to avoid collisions during import. All the very best, Daniel
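On the "delete the second line before processing" idea: since the stall comes from the XSLT processor resolving the remote DTD, removing the whole DOCTYPE declaration by pattern (rather than counting on it always being line two) is the more robust move. A hedged Python sketch of that preprocessing step (the regex and function name are mine; it assumes the declaration has no internal subset, which holds for the PubMed DTD line quoted above):

```python
import re

def strip_doctype(xml_text: str) -> str:
    """Remove the <!DOCTYPE ...> declaration so the XSLT processor never
    stalls trying to fetch the remote DTD from dtd.nlm.nih.gov."""
    return re.sub(r"<!DOCTYPE[^>]*>\r?\n?", "", xml_text, count=1)
```

The same substitution could be done with FileMaker text functions on the downloaded text before it is written to the data file, so every XSLT pass works on DTD-free input.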
-
Hi, An "Insert from URL" script populates a field with PubMed XML. I can confirm that the XML is imported in its entirety. Multiple XSLTs were written to handle the correct import of the XML into tables and their corresponding fields (the PubMed Horcrux described a few years back). Testing the XSLTs from the menu File/Import Records/XML Data Source, the import proceeds fine, although it takes about 15-20 seconds for the spinning beach ball to disappear and FileMaker to progress with the XSLT/XML import. I don't know why there is this initial delay. Currently, I have placed the XSLTs into Container (Global) fields. When I run a new script to process the PubMed XML (within its field) using the XSLTs within the Containers, I get the dialog window: "XML/XSL information is not enough to proceed with import/export." Pressing "OK" on the dialog window moves the script to the next XSLT, and so on, yet no records are imported. I don't understand why the import works with one method and not with the other. Can you advise on the reason for the initial delay, and suggest what might be missing in the import script? Best, Daniel
-
xml import Distinguish fields during XML import
Quito replied to Quito's topic in Script Workspace and Script Triggers
OK, so I looked at the XML and there are two unique identifiers for the MRN, namely, the id root (last numbers ending in 7, 8, 9, and 10) and the <code code= (numbered 1, 2, 3, and 4). The id root 2.16.840.1.113883.3.989.2.1.3.7 is on the record prior to GP12345567. Maybe there's a way of running the XML file through XML to FMP (Drag n'Drop) twice? Once in the usual drag n'drop fashion, while a second script copies the entire XML and places it into a "MyXML_pasted_into_a_field" (or maybe even a global) field. I'm probably chasing my tail with the following workaround but, a simple Let could do the trick for most of the field clashes, without going into XSLT hell. Something like:

// First, a Substitute to remove all unnecessary spaces from the XML. Then:
Let ( [
	field = MyXML_pasted_into_a_field ;
	begin = "<id root=\"2.16.840.1.113883.3.989.2.1.3.7\" extension=\"" ;
	start = Position ( field ; begin ; 1 ; 1 ) + Length ( begin ) ;
	end = Position ( field ; "\" /><code code=\"1\"" ; start ; 1 )
] ;
	Trim ( Substitute ( Middle ( field ; start ; end - start ) ; Char ( 10 ) ; "" ) )
)

...and so on for the remaining MRNs. Below is the bit of XML that contains the four MRNs, after supposedly running the Substitute function to remove unnecessary spaces.
--------------------
<component typeCode="COMP"><adverseEventAssessment classCode="INVSTG" moodCode="EVN"><subject1 typeCode="SBJ"><primaryRole classCode="INVSBJ"><player1 classCode="PSN" determinerCode="INSTANCE">
<name>ZXDR</name>
<administrativeGenderCode code="1" codeSystem="1.0.5218" />
<birthTime value="20200202" />
<deceasedTime value="20210303" />
<asIdentifiedEntity classCode="IDENT"><id root="2.16.840.1.113883.3.989.2.1.3.7" extension="GP12345567" /><code code="1" codeSystem="2.16.840.1.113883.3.989.2.1.1.4" codeSystemVersion="2.0" /></asIdentifiedEntity>
<asIdentifiedEntity classCode="IDENT"><id root="2.16.840.1.113883.3.989.2.1.3.8" extension="SPE3456789" /><code code="2" codeSystem="2.16.840.1.113883.3.989.2.1.1.4" codeSystemVersion="2.0" /></asIdentifiedEntity>
<asIdentifiedEntity classCode="IDENT"><id root="2.16.840.1.113883.3.989.2.1.3.9" extension="HOS1234567" /><code code="3" codeSystem="2.16.840.1.113883.3.989.2.1.1.4" codeSystemVersion="2.0" /></asIdentifiedEntity>
<asIdentifiedEntity classCode="IDENT"><id root="2.16.840.1.113883.3.989.2.1.3.10" extension="RES9876543" /><code code="4" codeSystem="2.16.840.1.113883.3.989.2.1.1.4" codeSystemVersion="2.0" /></asIdentifiedEntity>
--------------------

Haven't tried it yet. Your insight on this subject is very important. All the very best, Daniel -
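The Position/Middle workaround sketched above amounts to keying each extraction on the id root. The same idea expressed compactly in Python, for testing the approach against the snippet quoted above before committing to it in FileMaker (function name is mine; it assumes the root-then-extension attribute order shown in the sample):

```python
import re

def extract_mrn(xml_text: str, id_root: str) -> str:
    """Return the extension (the MRN) paired with a given id root,
    e.g. the GP record under root ...3.7 in the sample above."""
    pattern = r'<id root="' + re.escape(id_root) + r'" extension="([^"]*)"'
    m = re.search(pattern, xml_text)
    return m.group(1) if m else ""
```

Because each id root is unique per role (GP, Specialist, Hospital, Research), one lookup per root disambiguates the four MRNs without any XSLT changes.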
xml import Distinguish fields during XML import
Quito replied to Quito's topic in Script Workspace and Script Triggers
Thank you, Comment, OK, I'll go with your suggestion. First I'll tweak the "Universal XSLT", in hopes I can add the extra field without too much effort. Otherwise, an entirely new XSLT will turn into a huge task for me. The XSLT seems to be in a global field, not something I'm familiar with taking apart/replacing. Best regards, Daniel -
Hi, I'm using Jens Teich's XML to FMP (Drag n'Drop), which includes its own "Universal XSLT" to import XML. The resulting fields are named path, parent, node, value, and type. The XML file can hold information from 1, 2, 3, 4 or more Medical Records. I've added a calculation to capture each Medical Record Number (MRN):

Case ( path = "MCCI_IN200100UV01/PORR_IN049016UV/controlActProcess/subject/investigationEvent/component/adverseEventAssessment/subject1/primaryRole/player1/asIdentifiedEntity/id/" and parent = "id" and node = "extension" and type = "Attribute" ; value ; "" )

It finds the relevant value and sends it to the Summary field. The problem I have is that there is still some granularity missing to distinguish the Medical Record Numbers, as the calculation will bring multiple MRNs into the same Summary field (because the calculation criteria are the same for all MRNs, see attachments). I cannot anticipate the structure of the Medical Record Number (the example includes a prefix to distinguish whether the MRN is from a General Practitioner, a Specialist, a Hospital, or a Research Study, but I don't think I'll be that lucky in a Real World Environment). This is happening with other fields that also have repeating path, parent, node, and type patterns (such as the patient's age, Adverse Event date, weight, and height). I'm guessing that the "Universal XSLT" is missing a field that distinguishes what goes where and allows sending the values to the correct Summary fields. Any thoughts will be well received. Thanks in advance!
-
Hi, I have this script:

------------
If [ $$RecordClean [Child::ChildID] ]
	Revert Record/Request [ With dialog: Off ]
	Set Variable [ $$RecordClean [Child::ChildID] ; Value: "" ]
Else
	Set Field [ Child::Value ; TextStyleRemove ( Child::Value ; If ( Get ( SystemPlatform ) = 3 ; Bold ; HighlightYellow ) ) ]
End If
Exit Script [ Text Result: ]
------------

It's working correctly on the text field (Child::Value) of the first ChildID, but the search term stays selected from a previous search on the text field (Child::Value) of the remaining ChildIDs. TextStyleRemove should loop using the ParentID to perform the TextStyleRemove on ChildID 1, ChildID 2, etc. I hope this makes sense. Many thanks beforehand, Daniel
-
Changing the time offset to GMT -05:00
Quito replied to Quito's topic in Custom Functions Discussions
Yes, you are correct in everything, including that we do not observe DST anymore. I'll look into it again soon to see if I can get the same result with your other suggestions and will post results here. All the very best, Daniel -
Changing the time offset to GMT -05:00
Quito replied to Quito's topic in Custom Functions Discussions
I replaced the entire script (provided at the beginning of this topic) with: Get ( CurrentHostTimestamp ) - Time ( 5 ; 0 ; 0 ) and it's bringing the current local time successfully. Thank you very much, Comment!
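For reference, subtracting a fixed five hours is exactly a fixed-offset conversion, and it stays correct only because no DST is observed (as confirmed earlier in this topic) and the host timestamp runs on the reference clock. The same conversion in Python, with the offset made explicit rather than baked into arithmetic (the function name is mine; it assumes the host clock is UTC):

```python
from datetime import datetime, timedelta, timezone

# Fixed GMT-05:00 offset; valid only where DST is never observed.
GMT_MINUS_5 = timezone(timedelta(hours=-5))

def host_to_local(utc_timestamp: datetime) -> datetime:
    """Convert a UTC host timestamp to fixed GMT-05:00 local time."""
    return utc_timestamp.astimezone(GMT_MINUS_5)
```

Keeping the offset in one named constant makes it easy to adjust if the deployment ever moves to a zone that does observe DST, where a simple subtraction would silently drift an hour twice a year.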