MonkeybreadSoftware
Posted March 22, 2023

In the past we may have connected a FileMaker solution to a web server with a MySQL database to push data there. But nowadays we prefer MongoDB for a lot of things: there is no hassle with defining the schema upfront, and we enjoy built-in features like audit logs and scaling via a cluster.

With MongoDB we can set up a cluster and link a few servers together. Within the cluster the MongoDB servers synchronize themselves automatically over time. You may, for example, connect a server in America with one in Europe and one in Asia. This distributes the load across these servers and makes access more responsive. If needed, we can later add more servers, e.g. for the USA one in California, one in Texas and one in New York to distribute the load within the USA better. The cluster can grow as you need more servers to handle the load. You may even add a local on-premises server to the cluster, so you have a local copy of the MongoDB data and can keep working when your office is offline.

Once you have the cluster, you can connect to it. We may have several web servers that show a website and access the MongoDB cluster to query the data. Since the web servers are distributed to be close to the customers, they connect to the cluster and end up talking to the closest server, which provides the data shown on the website. With MongoDB you get audit logging on the server, so changes can be logged. You may also run one of the cluster's servers in-house as the local copy of the data. That is great if the internet is down for your company and you still like to work on the data. Once the connection resumes, synchronization continues and the new data appears on the other servers.

Now you can use the FileMaker platform to write the tools to manage this data set. Think of running a business with a lot of customers worldwide who may need to look at the data, e.g. by clicking on a URL in a newsletter. You manage your customers, create the newsletters and the data for the landing pages in FileMaker as records in the FileMaker database. The data is sent to the MongoDB cluster when needed, and the web servers can access it later to show it. When the client sees the newsletter and clicks on the links, they can view the details on the website and you know the newsletter arrived. With this synchronization we also get a backup copy of all our data in FileMaker as well as an audit log in MongoDB.

You may ask what is needed to do the synchronization. For the example here we make one script using the MongoDB functions in MBS FileMaker Plugin, which runs on commit of a record and sends this record to MongoDB. This script may be called from all layouts where users edit records. We usually don't delete records directly, but set a flag to mark them as deleted, so we don't need to synchronize deletes. For the sync we assemble a JSON object of the current record by walking over the fields in the layout with a While loop.
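To give an idea of what this produces, for a hypothetical Contacts layout with the fields FirstName, LastName and Email (example names only; the script simply uses whatever fields are on the current layout), the assembled JSON could look like this:

{
    "Email": "jane.doe@example.com",
    "FirstName": "Jane",
    "LastName": "Doe"
}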
Here is our sample script:

# URL of the MongoDB server
Set Variable [ $URL ; Value: "mongodb://localhost/" ]
# name of the database to use
Set Variable [ $databaseName ; Value: "mirror" ]
# clear the error log for MBS error logging
Set Variable [ $r ; Value: MBS("ClearErrors") ]
#
# assemble the record data
Set Variable [ $json ; Value: Let ( [
    // create a JSON object as reference
    j = MBS( "JSON.CreateObjectRef" );
    // list the fields on the current layout
    fields = FieldNames ( Get(FileName) ; Get(LayoutName) );
    // how many?
    c = ValueCount ( fields );
    // loop to add the field values
    r = While ( i = 1 ; i ≤ c ; [
        name = GetValue ( fields ; i );
        r = MBS( "JSON.AddItemToObject"; j; name; MBS( "JSON.CreateValue"; GetField ( name ) ));
        i = i + 1
    ] ; i );
    // format the result
    text = MBS( "JSON.Format"; j );
    // free the JSON object
    r = MBS( "JSON.Release"; j )
];
    // return the result as text
    text ) ]
#
# connect to MongoDB
Set Variable [ $Mongo ; Value: MBS( "MongoDB.New" ) ]
Set Variable [ $r ; Value: MBS( "MongoDB.SetURI"; $Mongo; $URL ) ]
Set Variable [ $r ; Value: MBS( "MongoDB.Connect"; $Mongo ) ]
#
# open the database
Set Variable [ $r ; Value: MBS( "MongoDB.OpenDatabase"; $Mongo; $databaseName ) ]
# open the collection, which we name after our table name
Set Variable [ $r ; Value: MBS( "MongoDB.OpenCollection"; $Mongo; Get(LayoutTableName) ) ]
#
# insert or update the record
Set Variable [ $r ; Value: MBS( "MongoDB.UpdateOne"; $Mongo;
    // we select the record via the ID field
    JSONSetElement ( "{}" ; "ROWID" ; Get(RecordID) ; JSONNumber );
    // here we pass the whole record including all fields we like to move
    JSONSetElement ( "{}" ; "$set" ; $json ; JSONObject );
    // and here we allow an insert, if the record is new
    JSONSetElement ( "{}" ; "upsert" ; True ; JSONBoolean ) ) ]
#
# cleanup
Set Variable [ $x ; Value: MBS( "MongoDB.Release"; $Mongo ) ]
#
# check the error log and show errors here
If [ MBS("HadErrors") ]
    Set Variable [ $r ; Value: MBS("ErrorLog") ]
    Show Custom Dialog [ "Error Log" ; $r ]
End If
#
# success
Exit Script [ Text Result: 1 ]

As you see, we connect and "upsert" the record: it is either inserted or updated, whichever is needed, and that happens automatically. We use the same names for the MongoDB collections as for the FileMaker tables and keep the same field names. You could do the same with a SQL database and our SQL functions, e.g. mirror everything to MariaDB or PostgreSQL. But the key thing is that MongoDB doesn't need a predefined schema, so we can just pass in the data as JSON without caring about the structure. MongoDB does the insert or update as needed, keeps a history if you like, and synchronizes the data to multiple servers.

Optionally you can of course do the same backwards: have the web server write answers (votes, comments, sign-ups) to MongoDB and have a script in FileMaker regularly download the new entries into the FileMaker database. A rough sketch of such a download script follows below.

Please do not hesitate to contact us if you have questions.
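Here is one way such a download script could look. This is only a sketch under assumptions that are not part of the post above: we assume a collection named "Answers" written by the web server, a boolean "imported" flag the script sets on processed documents, and a FileMaker table Answers with fields Comment, Email and MongoID. We also assume the plugin's MongoDB.Find and MongoDB.NextRecord functions to step through the query results; please check the MBS FileMaker Plugin documentation for the exact parameters.

# connect to the MongoDB cluster, same as in the upload script
Set Variable [ $Mongo ; Value: MBS( "MongoDB.New" ) ]
Set Variable [ $r ; Value: MBS( "MongoDB.SetURI"; $Mongo; "mongodb://localhost/" ) ]
Set Variable [ $r ; Value: MBS( "MongoDB.Connect"; $Mongo ) ]
Set Variable [ $r ; Value: MBS( "MongoDB.OpenDatabase"; $Mongo; "mirror" ) ]
# hypothetical collection where the web server stores the answers
Set Variable [ $r ; Value: MBS( "MongoDB.OpenCollection"; $Mongo; "Answers" ) ]
#
# query the documents not yet marked as imported (hypothetical flag)
Set Variable [ $r ; Value: MBS( "MongoDB.Find"; $Mongo; JSONSetElement ( "{}" ; "imported" ; False ; JSONBoolean ) ) ]
#
# walk over the result set, one document at a time
Loop
    Set Variable [ $record ; Value: MBS( "MongoDB.NextRecord"; $Mongo ) ]
    Exit Loop If [ IsEmpty ( $record ) ]
    # create a FileMaker record and copy the values we need (hypothetical field names)
    New Record/Request
    Set Field [ Answers::Comment ; JSONGetElement ( $record ; "comment" ) ]
    Set Field [ Answers::Email ; JSONGetElement ( $record ; "email" ) ]
    Set Field [ Answers::MongoID ; JSONGetElement ( $record ; "_id" ) ]
    Commit Records/Requests [ With dialog: Off ]
    # mark the document as imported, so we don't download it again next time
    # (the _id is passed back as it appears in the downloaded document JSON)
    Set Variable [ $r ; Value: MBS( "MongoDB.UpdateOne"; $Mongo;
        JSONSetElement ( "{}" ; "_id" ; JSONGetElement ( $record ; "_id" ) ; JSONObject );
        JSONSetElement ( "{}" ; "$set.imported" ; True ; JSONBoolean ) ) ]
End Loop
#
# cleanup
Set Variable [ $r ; Value: MBS( "MongoDB.Release"; $Mongo ) ]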
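One more note on the $URL variable used in the scripts above: the samples connect to a local server via "mongodb://localhost/". For a real cluster you would put the cluster's connection string there instead, for example (placeholder host name and credentials):

Set Variable [ $URL ; Value: "mongodb+srv://user:password@cluster0.example.mongodb.net/" ]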