Leaderboard
Popular Content
Showing content with the highest reputation since 09/09/2024 in Posts
-
LaRetta was one of the most fiercely loving and loyal friends I've ever had, despite never having had the pleasure to meet her in person. I'm so blessed to have worked with her until the end. She was unashamedly opinionated and caring, about people, justice, and about our craft. She sent me this passage from Barbara Kingsolver late last year: And she followed it with: I love you LaRetta, and miss you dearly. Guess I'm a little hokey too. ❤️ Your friend, Josh
2 points
-
Here is another way you could do this. It uses conditional formatting to identify the exact items that cause the conflict with other orders. Again, more work is required if you want to see exactly which overlapping order has the offending item, but it might not be worth the trouble. BookingItemsConflict.fmp12
2 points
-
Is it safe to assume the context is an Order with a portal to LineItems? If so, I would do something like the attached. Re your calculation, I don't understand the logic it attempts to implement. And I don't think it can be done by calculation alone. Depending on the format you need this in, there may be a much simpler alternative: sort your line items by ProductID and use a summary field that counts them with restart. This will work if you print your orders from the LineItems (as you should), as well as in a sorted portal. SimilarChildrenNumerator.fmp12
2 points
-
I don't understand your question, especially this part: It seems you have a parent-child relationship, with DeliveryNotes being the child table. In such an arrangement, there should be a foreign key field in the child table holding the parent record's unique ID value. There should be no fields in the parent table that refer to the parent's children. To display the parent's children on the parent layout you would use a portal. This is the standard practice for a parent-child relationship. If you have some sort of a special requirement that isn't covered by this then please explain in more detail. For example, if you wish to isolate a specific child record, you could click on it in the portal (thereby setting a global field or variable to its unique ID) and then use a 2nd relationship or a filtered one-row portal to display it.
P.S. Please use the standard font when posting.
P.P.S. Please update your profile to reflect your version and OS so that we know what you can use.
1 point
-
Yes. Every script has its own parameter that must be passed to it explicitly when the script is called. To pass its own parameter to a subscript, the calling script would need to do:
Perform Script [ “abc” ; Parameter: Get ( ScriptParameter ) ]
Keep in mind that a subscript does not "know" it's a subscript. It runs independently and does not inherit anything from the calling script.
1 point
-
This is a little too much to take in all at once. Consider simplifying the issue and/or breaking it up into individual points. Speaking in general, the script parameter remains constant throughout the life of the script, while a script variable can be defined and redefined at any point. If you want to export a file using a variable as the file's name, and the file's name is supposed to contain the count of exported records, then of course you will want to define that variable as close to the export as possible and not before any event that can modify the current found set. HTH.
1 point
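A minimal sketch of that idea; the folder, file name and export step here are assumptions for illustration, not from the original thread:
Set Variable [ $path ; Value: Get ( DocumentsPath ) & "Export_" & Get ( FoundCount ) & "_records.xlsx" ]
Save Records as Excel [ With dialog: Off ; "$path" ]
Defining $path immediately before the export step is what guarantees the count embedded in the name matches what actually gets exported.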
-
In MBS FileMaker Plugin 15.3 we have a Format button on macOS for the data viewer's detail view. If the data is XML or JSON, we can use the format and colorize functions: JSON.Format & JSON.Colorize, XML.Format & XML.Colorize. Let's say you have some variables in the data viewer with XML or JSON content. When you double click the text, you get a new window showing the detail. Here we find the Format button added by MBS Plugin. Press the button and it will format the content: If we get a parse error or the content is not XML/JSON, we beep. If you click OK, the formatted text is stored in the variable. If you press cancel, the unformatted text stays in the variable. Please try the feature and let us know what you think. Available for macOS in MBS FileMaker Plugin 15.3.
1 point
-
Just to comment on this line of thinking: “I have a script that I would like to use in multiple "similar" situations”. Be aware that what seems economical can lead to highly conditional scripts that are difficult to maintain or change without risk. Separate scripts without much indirection or abstraction are easier to maintain and troubleshoot. Consider the trade-off you’re making.
1 point
-
You cannot use a variable to specify the field in the Edit Find Request window. But a find request can also be constructed through setting fields - and then you can use the script parameter to select which field to use, similar to what we discussed only recently here: https://fmforums.com/topic/110860-script-problem-remove-container-file-from-a-record-trying-to-transition-to-json-to-pass-multiple-parameters/#findComment-494081
1 point
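A minimal sketch of a find constructed by setting fields, assuming the script parameter carries a fully qualified field name (e.g. "Contacts::City") and $searchValue already holds the criteria; the names are hypothetical:
Set Variable [ $field ; Value: Get ( ScriptParameter ) ]
Enter Find Mode [ Pause: Off ]
Set Field By Name [ $field ; $searchValue ]
Perform Find []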
-
As you can see from the release notes, folders for custom functions have been implemented: https://help.claris.com/en/pro-release-notes/content/index.html
1 point
-
There are various tools that can compare DDRs such as:
BaseElements https://baseelements.com/
CrossCheck http://www.fm-crosscheck.com
FMDiff http://fmdiff.com/
FMperception https://www.fmperception.com/
InspectorPro https://www.beezwax.net/products/inspectorpro-8
I have no recommendation to make since I don't use any of them. You can also use general tools for comparing XML documents (either DDR or the output of Save a Copy as XML). The result may be more difficult to read but it could be all you need for a one-time task such as you describe. Oh, and for fields only you could simply compare the results of ExecuteSQL() querying the FileMaker_BaseTableFields table.
1 point
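A minimal sketch of that last suggestion; the column names are written from memory and should be checked against the system table in your FileMaker version:
ExecuteSQL (
  "SELECT BaseTableName, FieldName, FieldType FROM FileMaker_BaseTableFields" ;
  Char ( 9 ) ; ¶
)
Run it in each copy of the file and compare the two text results with any diff tool.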
-
I don't know if that's a good analogy to your situation. Anyway, the answer here is yes, at least WRT interference. They would simply log in as different users and their privilege set would deny them access to any records other than those tagged by their account name. Then it's up to the developer to prevent situations where a bunch of records labeled <<no access>> would crop up - such as replacing the Show All Records command with a bogus find (any find will automatically omit records for which the user has no access). And there may be other details to consider, e.g. serial numbering of records. Maybe not: https://support.claris.com/s/article/New-FileMaker-data-migration-tool?language=en_US
1 point
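For illustration, a minimal sketch of such a restriction, assuming each record auto-enters the creator's account name into a CreatedBy field (the field name is an assumption): in Manage Security, set the privilege set's record access for the table to a limited value and use a calculation like:
CreatedBy = Get ( AccountName )
Records created by other accounts then show as <<no access>> and are automatically omitted by any find.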
-
Actually, it's quite the opposite: you would have a List layout to show a bunch of records (typically non-editable). Then you use a popover or a card window to drill into a specific record for more details and/or editing. It's functionally similar to list-detail layout but you have more space to work with, since the detail temporarily conceals the list.
1 point
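A minimal sketch of opening such a detail card from a list row (the layout name and dimensions are hypothetical):
New Window [ Style: Card ; Using layout: "Record Detail" ; Height: 500 ; Width: 700 ]
Because the card opens on the same record and found set as the list, a button in the list row needs only this one step; closing the card returns the user to the untouched list behind it.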
-
Yes, that's exactly what I meant. You are not alone in that wish, it's been often suggested (usually as a type of a layout part). As a side note: while I often switch between form and table view in solutions for my own use, I would almost always create separate layouts in solutions designed for others. So there would be a button for "detailed view" on the list layout and a "back to list" button on the form layout. Also keep in mind the possibilities offered by list-detail layouts, popovers and card windows.
1 point
-
When I read this, I was taken aback: I thought surely a button cannot override the layout setup and allow access to a view which the developer has disabled?? But you are right, it does do exactly that. I consider this a bug. I think you have no choice other than to customize the bar for its specific layout. It's not like we have the option to share objects across layouts anyway.
1 point
-
That's not going to work. Exactly. A button activates only by tabbing into it. Again, your hunch is correct. You don't need to add the selected button's object name to the script parameter. In fact, the buttons do not need to have object names at all (at least not for this). You only need to add a recognizable value to the script parameter of each button. It could be as simple as 1, 2 and 3 or perhaps something more explicit - say "current", "found" and "all". Then extract this value from the script parameter and use it to branch your script.
1 point
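A minimal sketch of that branching, assuming the three buttons pass "current", "found" and "all" as their parameters (the commented actions are placeholders):
If [ Get ( ScriptParameter ) = "current" ]
    # act on the current record only
Else If [ Get ( ScriptParameter ) = "found" ]
    # act on the current found set
Else If [ Get ( ScriptParameter ) = "all" ]
    Show All Records
    # act on all records
End If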
-
The simple method is to open a new window, isolate the current record and do the export. Then close the current window to return to the original found set. Alternatively you could switch to a layout that has the fields you want in the order you want them and do Save Records as Excel from there. But if that's all such a layout would be used for, it's hardly worth the effort.
1 point
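A minimal sketch of the first method; the Show All Records / Omit Record / Show Omitted Only sequence is one common way to isolate the current record without a find, and the export path is a made-up placeholder:
New Window [ Style: Document ]
Show All Records
Omit Record
Show Omitted Only
Set Variable [ $path ; Value: Get ( DesktopPath ) & "CurrentRecord.xlsx" ]
Save Records as Excel [ With dialog: Off ; "$path" ]
Close Window [ Current Window ]
The original window keeps its found set throughout, so closing the new window puts you right back where you were.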
-
From what I can see the problem is that you are passing the container field's value instead of its name. Try defining the script parameter along the lines of:
JSONSetElement ( "" ;
[ "container_field_name" ; GetFieldName ( document::document_file ) ; JSONString ] ;
[ "container_file_name" ; document::document_filename ; JSONString ]
)
You could also get by with:
JSONSetElement ( "" ;
[ "container_field_name" ; "document::document_file" ; JSONString ] ;
[ "container_file_name" ; document::document_filename ; JSONString ]
)
but this would break if you renamed the container field.
1 point
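On the receiving end, a minimal sketch of how the script might unpack such a parameter; the final step (clearing the container) is only an example of using the extracted name, and the key names match the formula above:
Set Variable [ $param ; Value: Get ( ScriptParameter ) ]
Set Variable [ $fieldName ; Value: JSONGetElement ( $param ; "container_field_name" ) ]
Set Variable [ $fileName ; Value: JSONGetElement ( $param ; "container_file_name" ) ]
# clear the container field by name, as an example of using $fieldName
Set Field By Name [ $fieldName ; "" ]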
-
Just a random thought: Whether a field is optimized for static or interactive content is a matter of formatting the specific instance of the field on a specific layout. You could have two separate layouts showing the same field optimized differently. Or two different instances of the same field on different panels of a tab/slide control. Or even just hiding one of them conditionally.
1 point
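As a small illustration of the last option (the $$interactiveMode variable is made up, not from the original thread), the two instances could carry opposite Hide object when conditions:
$$interactiveMode        // hides the static instance while interactive mode is on
not $$interactiveMode    // hides the interactive instance the rest of the time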
-
Not really. Your formula for constructing the JSON is correct and if the referenced field contains the text "Active" you should be getting the result you expect. Is it possible that the two tests were performed from different records?
1 point
-
It won't happen in the 1st file until you populate the Data__lxn field in the newly created child records. It will happen in the 2nd file, but only after you commit the record (that's a good thing: you don't want portal records to fly up and down while you're still working on them).
1 point
-
For the next version of the MBS FileMaker Plugin, 15.3, we add the Window.SetRoundCorners function to provide round corners. At the recent Vienna Calling conference a developer asked if we can get the edges of a card in FileMaker to be round. And yes, that is indeed possible. Once the card is shown, the MBS Plugin can find the card window and apply round corners to it. This even works on Windows. It seems to work fine in FileMaker Pro on macOS and Windows; it does, of course, not work for WebDirect or FileMaker Go. To add the round corners, you simply call our plugin function Window.SetRoundCorners just after showing the card. The plugin finds the front window and applies the corners. Here is an example showing a card with round corners:
New Window [ Style: Card ; Name: "Card" ; Using layout: “Tabelle” ; Height: 400 ; Width: 600 ]
Set Variable [ $r ; Value: MBS("Window.SetRoundCorners"; 0; 12) ]
Please try with the 15.3 plugin and let us know how well it works for you.
1 point
-
This file shows how I would approach this using the aforementioned method of filtering a portal to display only unique values. A few notes:
For simplicity, I have left out the Positions and Subjects tables and used meaningful values for PositionID and SubjectID in the Assignments join table instead. This has no impact on the calculation formulae that need to be used.
To some extent, this is a cop-out: I believe I could have done without the cCombinedKey field in the Assignments table. But it would have taken some time and - perhaps more importantly - the formula used for portal filtering would be much more difficult to understand.
A note about your setup: I don't understand why you need the Levels table. Does it hold any other information besides an ID and the level? It seems to me that a custom value list of these levels would be quite sufficient. The other thing that puzzles me is the checkbox of these levels shown in your screenshot. It looks like users actually select multiple levels for each unique combination of Position and Subject, and your script breaks these down to individual records. And now you are asking how to combine them back to the original form? Wouldn't it be easier just to store the data as entered by the user?
Link to the file (expires in 24 hours): https://wormhole.app/3D9xaz#GF8aSO2FXKXPIp8mfOLBkQ
1 point
-
Did you notice that MBS FileMaker Plugin 15.2 includes a new feature to add keyboard shortcuts for the result data types for a formula? If you visit the manage database dialog, you can use keyboard shortcuts to pick data types for a field:
Text ⌘ T
Number ⌘ N
Date ⌘ D
Time ⌘ T
Timestamp ⌘ M
Container ⌘ R
Calculation ⌘ L
Summary ⌘ S
If you define the formula, you get a dialog like the one shown above. The MBS Plugin looks for the popup menu on the bottom left and adds the same shortcuts for the data types. If it finds the menu, it adds the shortcuts to the menu entries. This way you can press e.g. command-T to pick text. Just a little convenience, but our clients asked us for it. Enjoy!
1 point
-
This is a very simple arrangement. The left-most portal, where you select the category, is a portal that shows records from the current table (Category) - a.k.a. a list-detail layout: https://help.claris.com/en/pro-help/content/creating-portals-list-detail.html Selecting a category in this portal causes the corresponding record to become the current record. And the portal to the Product table shows only records that are related to the current record.
1 point
-
🕯️ I was informed today of the passing of @LaRetta this past February. Thank you, LaRetta, for the many years of sage wisdom and insights to our community. You will be missed!
1 point
-
Sad news to hear. She was kind, sharp as a tack and very funny. I was fortunate to have worked with her. She will be missed.
1 point
-
This is indeed a great loss to the FM community. No one can equal her sharp eye for mistakes and her ability to pull a great idea out of a bucket of mediocre ones. Above all, her good spirits and great sense of humor made it a pleasure to collaborate with her. It was a privilege to know her.
1 point
-
This is the second time you are posting a comparison between XSLT 1.0 and XSLT 2.0 and 3.0, and just like the first time it is full of inaccurate and false statements. This is absolutely and unequivocally wrong. XSLT 1.0 recognizes the following data types, defined in the XPath 1.0 specification:
node-set
boolean
number
string
The XSLT 1.0 specification adds result tree fragment as another data type (although it's no more than a special case of node-set). True, in XSLT 2.0 there are more data types - most notably date, time and dateTime. But that doesn't mean you cannot "perform real (as opposed to what?) arithmetic, date calculations, and type validation" in XSLT 1.0. There is only one Muenchian method - and whether it's a "convoluted workaround" is a matter of opinion. True, XSLT 2.0 introduced built-in grouping which is often more convenient. Often, but not always. Technically, it's true. But if you are running FileMaker Pro 2024 or later, you already can produce multiple outputs in a single transformation, because the libxslt processor supports both the EXSLT exsl:document extension element and the multiple output documents method added in the XSLT 1.1 specification. This is true. But it is also true that a named template is not much different from a user-defined function. And again, the libxslt processor introduced in FMP 2024 does support defining custom functions using the EXSLT extensions. Not really. A "sequence" is just an expansion of the "node-set" concept to allow items other than a node and duplicate items. Hardly a "paradigm shift", and certainly XSLT 1.0 is also a functional programming language. Here is my conclusion: As I said in the previous round, there are very few things you cannot accomplish using XSLT 1.0 - esp. with the rich support of extensions that the built-in processor provides (see my post here). The most important point remains the question of performing the transformation as an integral part of importing and exporting records. Currently that's only possible with the built-in XSLT 1.0 processor (please correct me if I am wrong on this).
1 point
-
It wouldn't have worked even with an indexed field, because a value list based on a field will never include a blank value. You could define a calculation field in the related table that combines the "real" value with a placeholder, for example:
List ( Valuefield ; "   " )
Then define the value list to use values from this calculation field and (if necessary) make the target field auto-enter a calculated value substituting 3 consecutive spaces with nothing.
1 point
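A minimal sketch of that auto-enter calculated value on the target field, replacing the three-space placeholder with a true empty value (with "Do not replace existing value" unchecked so it re-evaluates on every change):
Substitute ( Self ; "   " ; "" )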
-
Well, yes - if you are joining 2 tables, and both tables have an id field, then you must specify from which table the values of id should be fetched, e.g.
SELECT \"ESQL_Cities|Sel\".id
instead of just:
SELECT id
As an aside, I doubt ExecuteSQL() is a good tool to use here. Do a search for "dwindling value list" to learn some FM native methods.
1 point
-
The space character is the character immediately after the last hyphen in the text. Suppose the text were just: "a - b". The length of this text is 5, and the position of the hyphen is 3. 5 - 3 = 2, so your expression:
Right ( db ; len - lastSpace )
will return the last two characters of the text, i.e. " b". It depends. If the hyphen separator is always followed by a space, you might simply subtract it:
Right ( db ; len - lastSpace - 1 )
A better solution would look for the position of the entire separator pattern " - " (a hyphen surrounded by spaces) and do:
Let ( [
len = Length ( db ) ;
lastSeparator = Position ( db ; " - " ; len ; -1 )
] ;
Right ( db ; len - lastSeparator - 2 )
)
This would allow you to correctly extract "Carter-Brown" from "Smith - Jones - Carter-Brown". If you cannot be sure the space/s will always be there, you may use Trim() on the result (this is assuming the extracted portion will not contain any significant whitespace characters).
1 point
-
This list is full of inaccurate, even downright false statements. For example, both xsl:key and xsl:message are available in XSLT 1.0. Not to belittle the advantages of XSLT 2.0 and 3.0, it needs to be stated that XSLT 1.0 is Turing-complete - which means it can produce any output that depends solely on the input. True, some operations - such as grouping - are easier to perform in XSLT 2.0+, but that's just a matter of convenience. If I had to point out the main advantages of the later versions, I would focus on:
Dates: XSLT 2.0+ has dedicated functions to handle dates, times and dateTimes (what we call timestamps in FM), including the ability to generate the current date and time.
Random: XSLT 2.0+ can generate random numbers. The XSLT 3.0 random generator is especially powerful.
RegEx: XSLT 2.0+ supports processing text using Regular Expressions.
JSON: XSLT 3.0 can both parse JSON input data as well as produce a JSON output.
Still, even with this in mind it needs to be pointed out that many XSLT 1.0 processors support extensions that enhance their capabilities beyond pure XSLT 1.0. The processor embedded in FMP has always allowed producing the current date and time or generating a random number, as well as other goodies. And now, if you are using the latest versions of FMP, you also get access to a wide array of functions that manipulate dates. So really it's back to a question of convenience. The crucial point here, IMHO, is this: as a database developer, your interest in XSLT is purely for input and output. So I'll be watching the next installment to see if it provides a way to replace the embedded processor during import and export. If not, then there is very little attraction to having this available in a plugin. You can always do as I have done for a long time now and use the standalone Saxon processor from the command line.
1 point
-
@fbugeja I notice you have cross-posted this question on Claris Community. This doesn't happen very often, but the answer you received there is better than any of the other options mentioned here. It may not be easy to understand at first glance, but I think it's worth spending the necessary time to learn it.
1 point
-
What you ask for is impossible to do precisely. The reason for this is that the popover area is always positioned with its center aligned with the popover button. If your button is positioned to the right of the center of the area you want to cover, there will be either a slit exposed on the left side, or an additional band covering stuff on the right side. Or a little bit of both, as you can see in the attached file where I have adjusted the right-side popover to the approximate dimensions. There are other methods that would allow precise positioning of the covering area, such as a card window (already mentioned) or a popover with invisible button that you pop open with a script, or even a slide control. AUSCOIN+.fmp12
1 point
-
I would use a card window to ensure precise sizing and placement. You’ll find how to do so by referring to Dan Smith’s card positioning demo file here.
1 point
-
That's why I said "ideally". Denormalization is a necessary evil, not a goal.
1 point
-
Hey Ocean, Great question. Check out our documentation on resetting the audit log here. If you're interested, here's the explanation: The audit log keeps a record of past changes to allow MirrorSync to merge conflicts; however, it can get huge, up to a maximum of 1 TB. The solution is to use the configuration client to reset it. Log in, right-click the configuration, mouse over Reset SyncData and click "Only AuditLog". Despite its size, the audit log isn't critical to MirrorSync's function; you won't need to replace your spoke or even expect much longer syncs after you wipe it out. You may not even notice the downside: that MirrorSync won't be able to merge some record changes after you delete the audit log. However, after resetting, this problem will quickly resolve itself; after a few days of normal use it'll be as though nothing happened, and you'll have your ~80GB of space back. Hope that clears things up! - Adam
1 point
-
The term you're thinking of might be, not magic key, but 'multi-key.' It can indeed be quite useful, not only for going to related records, but also for displaying those records in a portal.
1 point
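For illustration, a minimal sketch of the idea with made-up names: a multi-key is simply a match field holding several key values, one per line, so one record can relate to many at once:
Set Field [ Interface::gMultiKey ; List ( "1001" ; "1002" ; "1007" ) ]
# with the relationship  Interface::gMultiKey = Orders::OrderID
# a portal based on Orders (or Go to Related Record) now shows all three orders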
-
Actually, there is. The table you call the Track table is a join table. The table you are missing is the real Tracks table, where each track would be a unique record. I believe it can - at least this part: You need to construct a self-join of the (not) Track table as:
Track::Track# = Track2::Track# AND Track::Job# ≠ Track2::Job#
This allows each track belonging to the current job to look at its "siblings" that belong to other jobs and ask: are any of you pending? The exact method to do that depends on what type of field Pending is and how it is populated (ideally, it would be a Number field with the value of 1 if true, 0 or empty for false). Scripting is also a possibility, and it doesn't limit you to "some sort of pop-up message". You can populate a global variable or a field when loading a job record. But I am not sure you need to do this, at least not for the highlighting part (I haven't really thought about the 2nd portal option).
1 point
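Assuming Pending really is a number field with 1 meaning true, a minimal sketch of a conditional formatting (or Hide object when) formula for highlighting a track row via that self-join could be:
Sum ( Track2::Pending ) > 0
This evaluates to true whenever at least one sibling track on another job is still pending.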
-
Just thinking out loud for a moment: I think the obstacle here is printing the orders for your staff. To send an email confirming the order with a PDF of the order attached to it, you could use a script performed on the server (this needs to be checked). But the Print command is not supported in PSOS and printing on the customer's machine would not help, even if it were easy.
1 point
-
I am not sure you do, since you are so eager to disregard the warnings. I strongly disagree. A primary key must satisfy two conditions: (a) it must be unique and (b) it must be immutable. In practical terms, this often translates to a meaningless value, such as a serial number or a UUID. A primary key that depends on another field does not satisfy these conditions (see 3NF). In your proposed scheme, the serial number depends on the type field - and as I already pointed out, if you modify the type you are very likely to get a duplicate, even if you are the only user. That part is actually easy. If you are scripting the process of creating a new record, you can find the previous records with the same type and get the value from the last one. Or use the ExecuteSQL() function to do the same thing. Or you could define a self-join relationship matching on type and get the last/max value from there. That's the easiest part, just do something like:
AgreementType & "-" & Right ( Year ( AgreementDate ) ; 2 ) & SerialIncrement ( "-000" ; SerialNumber )
but that's something you do at the very end, so it's hardly "a step in the right direction". And we haven't even mentioned the need to reset the serial numbers at the beginning of each year.
1 point
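A minimal sketch of the ExecuteSQL() variant, with hypothetical table and field names and no handling of the yearly reset or of two users creating records at the same moment:
ExecuteSQL (
  "SELECT MAX ( SerialNumber ) FROM Agreements WHERE AgreementType = ?" ;
  "" ; "" ; Agreements::AgreementType
) + 1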
-
FileMaker has a simple and reliable mechanism to number all records in a table (regardless of type) sequentially: https://help.claris.com/en/pro-help/content/automatic-data-entry.html Your proposed numbering scheme, which would require maintaining a separate series for each type, is not simple to implement - esp. in a multi-user scenario where you run the danger of two users creating a new record at the same time and ending up getting the same serial number. This may not be a task you want to undertake if you are "brand new to FMP". I would advise you to just number all your records sequentially. I doubt anyone will notice the difference. Note also that you don't need a series of custom dialogs to get user input for a new record. They could just fill out the fields directly on a layout and commit the record when ready. Even with a custom dialog, the user can populate up to 3 fields/variables in a single step. And don't use global variables (prefixed with $$) where script variables (prefixed with $) will do.
1 point
-
I presume you are talking about the macOS Character Viewer? You can open it from the Input menu. Re your 2nd question, consider using custom menus or macOS Shortcuts. Please update your profile with your version and OS.
1 point
-
https://community.claris.com/en/s/question/0D5Vy000006IQnpKAG/free-prompt-engineering-training-resources
https://www.cloudskillsboost.google/paths/118
https://microsoft.github.io/AI-For-Beginners/
https://www.edx.org/learn/artificial-intelligence/harvard-university-cs50-s-introduction-to-artificial-intelligence-with-python
https://www.coursera.org/learn/prompt-engineering
https://www.deeplearning.ai/short-courses/chatgpt-prompt-engineering-for-developers/
https://www.deeplearning.ai/short-courses/llmops/
https://www.coursera.org/learn/big-data-ai-ethics
https://www.edx.org/learn/computer-programming/edx-ai-applications-and-prompt-engineering
1 point
-
Does the attached test work for you? InsertFromRedirectURL.fmp12
1 point
This leaderboard is set to Los Angeles/GMT-07:00