Hi - I'm developing a business solution to be hosted on FM Server. It will be hosted on Soliant Cloud. This is my first time developing for Server. I read Steven and Wim's whitepaper on FM 16 security, which was very helpful.
In the past, when I've created upgrades to my solution, I've imported data from the previous version into the new one. Each update is a modified version of the previous file.
I read about the benefits of using File Access Protection. My solution is a single-file solution, so I can basically exclude every other file from accessing it - except I'm not sure what impact that will have on importing from previous versions. I assume both files will have the same internal file ID, but I'm not sure whether that means FileMaker will treat the older version as trusted or not.
I have a database that we use to update our website inventory. A few years ago we began offering customized merchandise that gets dropshipped direct from suppliers. Suppliers give us data feed files with their inventory levels, pricing, etc. and this file manipulates the data. It takes our current web database, compares values and exports those products with their updated values.
Importing and exporting the data used to take less than an hour, but now it takes several, since both the size of the web database and the number of suppliers have grown. Everything is automated through scripts. In the main table (the web database), the proposed quantities, pricing, lead time, variations, etc. all use unstored calculation fields to determine the new value. I then have a separate field that flags items needing an update. The major bottleneck of the entire process is searching this flag field - it can sometimes take over an hour. Other steps, like exporting the changes, can take a while too.
I have done some things to optimize the database, but these unstored calc fields still seem to be what is dragging everything down. I have tried replacing some of those calc fields with text/number fields populated by "Replace Field Contents" script steps (or auto-enter), but it does not seem to make a difference because of the indexing. The database is not hosted or shared, and my computer has decent specs with an SSD. I've attached a simplified design chart for reference.
I am not sure whether this is just what comes with a large, complex database file or whether my design is flawed. The only two things I can think of to try to reduce the processing time are:
1) Rewrite the scripts to update the supplier/inventory table records instead of replacing the records fresh each time.
2) Use a looped set field script to set the "change flag" field and/or the other updated price/qty/etc fields
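For option 2, here is a minimal sketch of the looped Set Field approach. The table and field names (Web, ChangeFlag, ProposedPrice, etc.) are placeholders - substitute your own schema:

```
# Sketch only: "Web" and its field names are hypothetical.
Go to Layout [ "Web" ]
Show All Records
Go to Record/Request/Page [ First ]
Loop
  # Write the flag into a plain (stored, indexable) number field
  Set Field [ Web::ChangeFlag ;
    If ( Web::ProposedPrice ≠ Web::CurrentPrice or Web::ProposedQty ≠ Web::CurrentQty ; 1 ; 0 ) ]
  Go to Record/Request/Page [ Next ; Exit after last: On ]
End Loop
```

Once ChangeFlag is a stored number field, the subsequent Perform Find can use its index, which is usually where most of the time goes when searching an unstored calc.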
Any thoughts or advice is much appreciated.
I'm completely new to XML, and I'm writing an XSLT stylesheet for an XML feed into my FileMaker database.
Surprisingly, I got the basics down rather fast, but I'm really stumped on a few fields.
See the attached XSLT, sample dataset, and FileMaker test database.
Referencing the attached XSLT, the import works perfectly fine for the nodes named 'titleId' down through 'dealType', as well as the nodes under 'licensingWindow'. However, I can't seem to get the values of a few other nodes. The trouble is that they are nested as child nodes that aren't named uniquely, and I just can't figure out how to get their values.
<value>Mickey continues to feel mounting pressure from the network.</value>
<value>Mickey continues to feel mounting pressure from the network as an affiliates dinner is fast approaching and they need something to sell to advertisers.</value>
What I'm looking for is the value (the text beginning with the word 'Mickey') of the 'value' node in the group whose 'name' node contains 'SYNOPSIS75' or 'SYNOPSIS234'. Those names seem to be the only way to uniquely identify the 'value' fields, and that's the part I'm not getting. I had one attempt that did correctly pull the value data, but it broke the overall for-each I need for importing multiple records, so I only got the data for a single record.
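The usual technique for this is an XPath predicate that filters on the sibling 'name' node. A sketch, assuming each synopsis sits in a wrapper element I'll call 'property' under a 'properties' parent, with 'name' and 'value' children - substitute your feed's actual element names:

```xml
<!-- Inside the per-record xsl:for-each; these paths are relative to the
     current record node, so the outer loop is unaffected. -->
<xsl:value-of select="properties/property[name = 'SYNOPSIS75']/value"/>
<xsl:value-of select="properties/property[name = 'SYNOPSIS234']/value"/>
```

Because the select stays relative to the current record, it won't collapse the import to a single record the way an absolute path (one starting with '/') can.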
I'm guessing there is an easy fix for this, but I'm just not familiar enough with XML to know it yet.
I'd really appreciate it if someone could point me toward the right technique to use on this.
You'll see some of my failed attempts in my XSLT, if you'd like to see what I was trying.
Thanks in advance,
I'm sure this is quite a simple task ... for an expert.
I see that I can export record content using a variable as the filename for the exported file.
I have more than 1200 records in my db, and I'd like to export 3 fields, using one of them [fieldname: Title] as the filename.
The path for my db is as follows
file:/Macintosh HD/Users/Claudio/Documents/Synch Test Copy Copy.fmp12
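A minimal sketch of one exported file per record, with the Title field as the filename. The table name 'Synch' and its fields are placeholders; note that Export Records writes the whole found set, so each record is isolated before exporting:

```
# Sketch only: "Synch" and its field names are hypothetical.
Show All Records
Set Variable [ $i ; Value: 0 ]
Loop
  Set Variable [ $i ; Value: $i + 1 ]
  Exit Loop If [ $i > Get ( FoundCount ) ]
  Go to Record/Request/Page [ With dialog: Off ; $i ]
  # Build the export path from the Title field
  Set Variable [ $path ; Value: "file:" & Get ( DocumentsPath ) & Synch::Title & ".csv" ]
  # Isolate the current record, export it, then restore the found set
  Omit Record
  Show Omitted Only
  Export Records [ With dialog: Off ; "$path" ]   # export order: your 3 fields
  Show All Records
End Loop
```

You may also want to strip characters that are illegal in filenames (e.g. "/" or ":") from Title before building $path.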
Thanks a lot
I have a small issue with an export to a network path. Generally the path is alive, but very rarely it can happen that it is not reachable. When it's not reachable, you get the error message from FileMaker that the file cannot be created ... (error code 800). Since this dialog box is not suppressed by Set Error Capture [On], the script stops there until someone goes and clicks the OK button on the message.
I was wondering: is there a way to check whether the path is reachable, and to exit the script if it is not?
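FileMaker has no native "does this folder exist" test, but if you can install a free plugin such as BaseElements, you can check the path before exporting. A sketch - the function name and the $networkFolder / $exportPath variables are illustrative, so verify them against your plugin version's documentation:

```
# Assumes the BaseElements plugin is installed.
If [ BE_FileExists ( $networkFolder ) ≠ 1 ]
  Exit Script [ Text Result: "Export path unreachable" ]
End If
Export Records [ With dialog: Off ; "$exportPath" ]
```

An alternative without a plugin is to run the check outside FileMaker (e.g. an OS-level script scheduled alongside yours) and have it write a sentinel value your script reads before exporting.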
Thank you, Toni