
Posted

Back from the conference in Liechtenstein, I wrote down a few of the tidbits we learnt.

Missing serial numbers

If you use serial numbers for new records in FileMaker, you may notice that the sequence sometimes has gaps. Why does this happen?

  • Someone deleted a record.
  • Someone created a record, but never committed it.
  • FileMaker crashed or the network disconnected while a transaction was running, so it never completed.

You need to know that on record creation, the client requests a new serial number from the server. If the record never gets committed or is reverted, the serial value is consumed, but no record is saved. If you want to make sure the serial number doesn't get lost, call the Commit Records/Requests script step right after creating the new record, so the (still empty) record is definitely stored.
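A minimal sketch of that pattern in script steps (table and field names are just placeholders):

New Record/Request
Set Field [ Invoices::CreatedAt ; Get ( CurrentTimeStamp ) ]
# commit right away, so the serial number assigned to this record is not lost
Commit Records/Requests [ With dialog: Off ]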

Consider moving to UUIDs instead: people can't guess the next number from a sequence, and the values are random yet unique.
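For example, instead of the serial number option, the key field can auto-enter a calculated value:

Get ( UUID )  // returns a random, globally unique text value

Since the UUID is generated on the client, this also avoids the round trip to the server for a serial value.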

Export on Server

Do you remember when FileMaker gained the ability to create PDF files on the server?
In one of the technical sessions at a developer conference we learned that this only became possible because the code for PDF generation moved from the FileMaker Pro application into the database engine code. The database engine code is used everywhere, including Server and FileMaker Go, so once the PDF creation code had moved, you could use it everywhere.

At the conference we discussed why you can't export to an FMP12 file in server scripts. Various ideas came up, but nobody knew exactly. To figure out what code runs when FileMaker does an export, I set up a script to export in an endless loop and sampled the process with Activity Monitor. Among the thousands of lines in the sample, we can see that most of the export code is in the FMEngine library, but a few small parts are in FileMaker Pro itself:

281 Draco::FMExportManager::PerformExport(Draco::FMWindowModel&)  (in FMEngine) + 1032  [0x10cdaaa68]
| 263 Draco::FMExportManager::ExportRecords(Draco::File&, Draco::DBError&)  (in FMEngine) + 360  [0x10cda93f4]
| + 118 exportFMProLayout(Draco::DataExportInfo&)  (in FileMaker Pro) + 308  [0x102924814]
| + ! 95 LAY_GenerateLayout(FMDocWindow&, Draco::FMLayout&, unsigned char, bool, bool, bool, Draco::HBSerialKey<(unsigned char)5> const&, CNewLayoutModel*)  (in FileMaker Pro) + 952  [0x102760fe4]
| + ! : 76 Draco::FMLayout::Commit(bool, Draco::InteractiveErrorHandler*, bool)  (in FMEngine) + 64  [0x10ce2f218]

After seeing this, the answer to the question above is that the code to generate a new layout and to export it lives in the FileMaker Pro application. To export server-side, this code would need to be refactored and moved into FMEngine, and there may be a couple of technical difficulties to overcome. Alternatively, an export on the server could skip layouts and just create the FMP12 file with tables and records, but without layouts.

How many CPU cores does FileMaker Server use?

We started a FileMaker Server on a Mac and checked how many processes there are: 29 processes with over 400 threads. Mainly these processes are running:

  • fmserverd: the server core engine.
  • fmsased: the process running server-side scripts.
  • fmscwpc: the process for Custom Web Publishing.
  • fmwipd: the process for the Data API.
  • fmsib: the process doing backups.
  • java: some parts still need Java, so a copy of Java runs.
  • node: some parts of FileMaker use JavaScript and run in the node application.

With so many processes, it is quite good to have 4 or 8 cores on the server. We can run multiple scripts at the same time for various users with PSoS, WebDirect or the Data API and thus use multiple CPU cores. Every script runs on its own thread, so it can be scheduled to a different core.

The server process itself runs over 50 threads. Several of them listen for incoming commands on network sockets. The other processes for WebDirect, the Data API and server-side scripting keep an open connection to the server core, where threads listen for commands. There are threads for logging, timed schedules and various background tasks. In the core, all read and write operations on the database have to be serialized, so only one thread at a time can modify the structures.

In the end you need to measure how high the load on the server is. Nowadays we use virtual machines, and frequently we start with small ones with only 2 cores. That is fine for development, where just one or two people use the server at a time. Later in production you may go up to 4 cores, and if there is more load, to 8 cores or higher. The server should sit at 1% or less when idle and somewhere around 20% under normal usage. Why? Because you want reserves for peak times, when many users arrive at once or multiple server-side scripts run in parallel.

PSoS vs Job Queue

Do you use Perform Script on Server frequently? Then make a simple measurement: have a script create a new record with the current timestamp, then do a PSoS with a script that goes to that record and stores a second timestamp. The difference tells you how quickly a script launches on the server. If it is just a second, that may be fine for you, but in huge solutions it can take 10 seconds.
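A rough sketch of such a measurement, assuming a Timing table with ID, ClientStart and ServerStart fields (all names made up for this example):

# client script
New Record/Request
Set Field [ Timing::ID ; Get ( UUID ) ]
Set Field [ Timing::ClientStart ; Get ( CurrentTimeStamp ) ]
Set Variable [ $id ; Value: Timing::ID ]
Commit Records/Requests [ With dialog: Off ]
Perform Script on Server [ "Stamp Arrival" ; Parameter: $id ; Wait for completion: On ]

# server script "Stamp Arrival"
Go to Layout [ "Timing" ]
Enter Find Mode [ Pause: Off ]
Set Field [ Timing::ID ; Get ( ScriptParameter ) ]
Perform Find []
Set Field [ Timing::ServerStart ; Get ( CurrentTimeStamp ) ]
Commit Records/Requests [ With dialog: Off ]

The difference between ServerStart and ClientStart is roughly the session startup time, including loading the solution and running its start script.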

On each PSoS start, the FileMaker engine on the server starts a new session for the remote user. This means a thread starts, opens a connection to the server core, requests data and loads the relationship graph of the files into data structures in memory. It then runs the start script of the solution before getting to your script.

Instead, you can run one or more scheduled scripts on the server. These scripts run an endless loop looking for new job records in a job queue table. Each script finds the new records in the job table and loops over them. On each record it tries the Open Record/Request script step to lock the record. On success, it performs the script listed in the record with the parameter given in the record. The script result is stored in a field and the job is marked done. After the loop finishes, the script waits a second before looking for new jobs again.
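A sketch of such a worker script, assuming a Jobs table with Status, ScriptName, Parameter and Result fields (all names are illustrative):

Loop
  # look for unprocessed jobs
  Go to Layout [ "Jobs" ]
  Enter Find Mode [ Pause: Off ]
  Set Field [ Jobs::Status ; "new" ]
  Perform Find []
  If [ Get ( FoundCount ) > 0 ]
    Go to Record/Request/Page [ First ]
    Loop
      # try to lock the record; another worker may have it already
      Open Record/Request
      If [ Get ( LastError ) = 0 ]
        Perform Script [ Specified: By name ; Jobs::ScriptName ; Parameter: Jobs::Parameter ]
        Set Field [ Jobs::Result ; Get ( ScriptResult ) ]
        Set Field [ Jobs::Status ; "done" ]
        Commit Records/Requests [ With dialog: Off ]
      End If
      Go to Record/Request/Page [ Next ; Exit after last: On ]
    End Loop
  End If
  # wait a second before polling again
  Pause/Resume Script [ Duration (seconds): 1 ]
End Loop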

On the client, you launch a job by creating a new record in the job table. Then loop with script pauses to wait for the result, or come back later for asynchronous jobs. If implemented well, you can run multiple worker scripts on the server and get from job creation to execution within one second.
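On the client side, a launcher could look like this (again with the illustrative names from above):

# create the job record
New Record/Request
Set Field [ Jobs::ScriptName ; "Build Report" ]
Set Field [ Jobs::Parameter ; $parameter ]
Set Field [ Jobs::Status ; "new" ]
Commit Records/Requests [ With dialog: Off ]
# poll until the worker has stored the result
Loop
  Pause/Resume Script [ Duration (seconds): .5 ]
  Exit Loop If [ Jobs::Status = "done" ]
End Loop
Set Variable [ $result ; Value: Jobs::Result ]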

Now let's see what we learn at the next conference...

Posted (edited)
5 hours ago, MonkeybreadSoftware said:

You need to know that on record creation, the client requests a new serial number from the server. If the record never gets committed or is reverted, the serial value is consumed, but no record is saved. If you want to make sure the serial number doesn't get lost, call the Commit Records/Requests script step right after creating the new record, so the (still empty) record is definitely stored.

The same thing will happen in a local file. But the proper solution is to define the field to generate the serial number on commit only:

[Screenshot: the field's auto-enter serial number options, with Generate set to "On commit" instead of "On creation"]

This way you can still revert a new record while keeping an unbroken consecutive series of serial numbers.

 

5 hours ago, MonkeybreadSoftware said:

Consider moving to UUIDs instead

That's not a solution when you do need serial numbers (for example, in some jurisdictions invoices are required to be numbered serially).

 

Posted

Thanks for pointing out the option of generating the serial on commit.

 

And for your invoices table, I have to say that the primary key does not necessarily have to be the invoice ID.

Invoice IDs here are often built from year, month and a counter within the month, so we calculate them in the script.
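For example, such a number could be assembled like this, with $counter holding the next free number within the month (scheme and names made up for illustration):

Set Field [ Invoices::InvoiceID ;
  Year ( Get ( CurrentDate ) ) & "-" &
  Right ( "0" & Month ( Get ( CurrentDate ) ) ; 2 ) & "-" &
  Right ( "0000" & $counter ; 4 ) ]

This yields numbers like 2025-03-0042 that stay consecutive within each month.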
