Leaderboard
Popular Content
Showing content with the highest reputation since 06/19/2024 in Posts
-
LaRetta was one of the most fiercely loving and loyal friends I've ever had, despite never having had the pleasure of meeting her in person. I'm so blessed to have worked with her until the end. She was unashamedly opinionated and caring: about people, about justice, and about our craft. She sent me this passage from Barbara Kingsolver late last year: And she followed it with: I love you LaRetta, and miss you dearly. Guess I'm a little hokey too. ❤️ Your friend, Josh
2 points
-
Here is another way you could do this. It uses conditional formatting to identify the exact items that cause the conflict with other orders. Again, more work would be required if you wanted to see exactly which overlapping order has the offending item, but it might not be worth the trouble.
BookingItemsConflict.fmp12
2 points
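To give an idea of the kind of formula involved: a sketch, not necessarily what the attached file does, assuming the Orders 2 self-join that matches overlapping orders and the unstored ProductIDs field described elsewhere in this thread. The conditional formatting formula on each line item could be:

not IsEmpty ( FilterValues ( LineItems::ProductID ; List ( Orders 2::ProductIDs ) ) )

which highlights a line item whenever its product also appears in an overlapping order.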
-
Is it safe to assume the context is an Order with a portal to LineItems? If so, I would do something like the attached. Re your calculation: I don't understand the logic it attempts to implement, and I don't think it can be done by calculation alone. Depending on the format you need this in, there may be a much simpler alternative: sort your line items by ProductID and use a summary field that counts them with restart. This will work if you print your orders from the LineItems (as you should), as well as in a sorted portal.
SimilarChildrenNumerator.fmp12
2 points
-
Such manipulation is certainly possible, using either the While() function, a recursive custom function, or even a looping script. However, it is far from being a convenient way to process data. FileMaker is designed to store data in a structure of records and fields - and it has plenty of built-in tools to assist in processing such data. In the given example, sorting the data by date and by employee, and summarizing the amounts (?) using a summary field, would let you produce the expected result much more easily. If your data is actually structured properly as a table of Employees and a child table for the amounts (where each amount is a separate record with fields for Date, EmployeeID and Amount), then I would suggest you start there instead of producing the "Input List" first and then trying to wrestle with it.
2 points
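To illustrate why it's inconvenient, here is a minimal While() sketch under assumed names and format (an inputList variable whose lines are tab-separated as date, employee, amount, and a targetEmployee value - both hypothetical), summing the amounts for one employee:

While (
  [
    i = 1 ;
    n = ValueCount ( inputList ) ;
    total = 0
  ] ;
  i ≤ n ;
  [
    // split the current line into its tab-separated fields
    fields = Substitute ( GetValue ( inputList ; i ) ; Char ( 9 ) ; ¶ ) ;
    // add the amount only when the line belongs to the target employee
    total = total + If ( GetValue ( fields ; 2 ) = targetEmployee ; GetAsNumber ( GetValue ( fields ; 3 ) ) ; 0 ) ;
    i = i + 1
  ] ;
  total
)

A summary field over properly structured records does the same with no code at all.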
-
2 points
-
For the next version of the MBS FileMaker Plugin, 15.3, we are adding the Window.SetRoundCorners function to provide round corners. At the recent Vienna Calling conference, a developer asked whether the edges of a card in FileMaker can be made round. And yes, that is indeed possible. Once the card is shown, the MBS Plugin can find the card window and apply round corners to it. This even works on Windows. It seems to work fine in FileMaker Pro on macOS and Windows; it does, of course, not work for WebDirect or FileMaker Go. To add the round corners, you simply call our plugin function Window.SetRoundCorners just after showing the card. The plugin finds the front window and applies the corners. Here is an example:

Show card with round rectangle:
New Window [ Style: Card ; Name: "Card" ; Using layout: "Tabelle" ; Height: 400 ; Width: 600 ]
Set Variable [ $r ; Value: MBS ( "Window.SetRoundCorners" ; 0 ; 12 ) ]

Please try it with the 15.3 plugin and let us know how well it works for you.
1 point
-
This file shows how I would approach this, using the aforementioned method of filtering a portal to display only unique values. A few notes:

For simplicity, I have left out the Positions and Subjects tables and used meaningful values for PositionID and SubjectID in the Assignments join table instead. This has no impact on the calculation formulae that need to be used.

To some extent, this is a cop-out: I believe I could have done without the cCombinedKey field in the Assignments table. But it would have taken some time and - perhaps more importantly - the formula used for portal filtering would be much more difficult to understand.

A note about your setup: I don't understand why you need the Levels table. Does it hold any other information besides an ID and the level? It seems to me that a custom value list of these levels would be quite sufficient. The other thing that puzzles me is the checkbox set of these levels shown in your screenshot. It looks like users actually select multiple levels for each unique combination of Position and Subject, and your script breaks these down into individual records. And now you are asking how to combine them back into the original form? Wouldn't it be easier just to store the data as entered by the user?

Link to the file (expires in 24 hours): https://wormhole.app/3D9xaz#GF8aSO2FXKXPIp8mfOLBkQ
1 point
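For reference, the usual shape of such a unique-values portal filter (a sketch with assumed names, not necessarily the exact formula in the file): given a self-join of Assignments to itself matching on cCombinedKey - call it Assignments|SameKey - the portal could be filtered with:

Assignments::AssignmentID = GetValue ( List ( Assignments|SameKey::AssignmentID ) ; 1 )

so that only the first record of each key group is displayed.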
-
This is a very simple arrangement. The left-most portal, where you select the category, is a portal that shows records from the current table (Category) - a.k.a. a list-detail layout: https://help.claris.com/en/pro-help/content/creating-portals-list-detail.html Selecting a category in this portal causes the corresponding record to become the current record, and the portal to the Product table shows only records that are related to the current record.
1 point
-
🕯️ I was informed today of the passing of @LaRetta this past February. Thank you, LaRetta, for the many years of sage wisdom and insights you gave to our community. You will be missed!
1 point
-
This is indeed a great loss to the FM community. No one can equal her sharp eye for mistakes and her ability to pull a great idea out of a bucket of mediocre ones. Above all, her good spirits and great sense of humor made it a pleasure to collaborate with her. It was a privilege to know her.
1 point
-
This is the second time you are posting a comparison between XSLT 1.0 and XSLT 2.0/3.0, and just like the first time it is full of inaccurate and false statements.

This is absolutely and unequivocally wrong. XSLT 1.0 recognizes the following data types, defined in the XPath 1.0 specification: node-set, boolean, number and string. The XSLT 1.0 specification adds result tree fragment as another data type (although it's no more than a special case of node-set). True, in XSLT 2.0 there are more data types - most notably date, time and dateTime. But that doesn't mean you cannot "perform real (as opposed to what?) arithmetic, date calculations, and type validation" in XSLT 1.0.

There is only one Muenchian method - and whether it's a "convoluted workaround" is a matter of opinion. True, XSLT 2.0 introduced built-in grouping, which is often more convenient. Often, but not always.

Technically, it's true. But if you are running FileMaker Pro 2024 or later, you already can produce multiple outputs in a single transformation, because the libxslt processor supports both the EXSLT exsl:document extension element and the multiple-output-documents method added in the XSLT 1.1 specification.

This is true. But it is also true that a named template is not much different from a user-defined function. And again, the libxslt processor introduced in FMP 2024 does support defining custom functions using the EXSLT extensions.

Not really. A "sequence" is just an expansion of the "node-set" concept to allow items other than a node, as well as duplicate items. Hardly a "paradigm shift" - and certainly XSLT 1.0 is also a functional programming language.

Here is my conclusion: As I said in the previous round, there are very few things you cannot accomplish using XSLT 1.0 - esp. with the rich support of extensions that the built-in processor provides (see my post here). The most important point remains the question of performing the transformation as an integral part of importing and exporting records. Currently that's only possible with the built-in XSLT 1.0 processor (please correct me if I am wrong on this).
1 point
-
It wouldn't have worked even with an indexed field, because a value list based on a field will never include a blank value. You could define a calculation field in the related table that combines the "real" value with a placeholder, for example:

List ( Valuefield ; "   " )  // the placeholder is three spaces

Then define the value list to use values from this calculation field and (if necessary) make the target field auto-enter a calculated value substituting the 3 consecutive spaces with nothing.
1 point
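The auto-enter calculation on the target field could then be as simple as this sketch:

Substitute ( Self ; "   " ; "" )  // strips the three-space placeholder, leaving the field empty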
-
This list is full of inaccurate, even downright false statements. For example, both xsl:key and xsl:message are available in XSLT 1.0. Not to belittle the advantages of XSLT 2.0 and 3.0, it needs to be stated that XSLT 1.0 is Turing-complete - which means it can produce any output that depends solely on the input. True, some operations - such as grouping - are easier to perform in XSLT 2.0+, but that's just a matter of convenience.

If I had to point out the main advantages of the later versions, I would focus on:

Dates: XSLT 2.0+ has dedicated functions to handle dates, times and dateTimes (what we call timestamps in FM), including the ability to generate the current date and time.
Random: XSLT 2.0+ can generate random numbers. The XSLT 3.0 random generator is especially powerful.
RegEx: XSLT 2.0+ supports processing text using Regular Expressions.
JSON: XSLT 3.0 can both parse JSON input data and produce JSON output.

Still, even with this in mind, it needs to be pointed out that many XSLT 1.0 processors support extensions that enhance their capabilities beyond pure XSLT 1.0. The processor embedded in FMP has always allowed producing the current date and time or generating a random number, as well as other goodies. And now, if you are using the latest versions of FMP, you also get access to a wide array of functions that manipulate dates. So really it's back to a question of convenience.

The crucial point here, IMHO, is this: as a database developer, your interest in XSLT is purely for input and output. So I'll be watching the next installment to see if it provides a way to replace the embedded processor during import and export. If not, then there is very little attraction to having this available in a plugin. You can always do as I have done for a long time now and use the standalone Saxon processor from the command line.
1 point
-
I see two problems with your request. First, if you have a user named Smith and another user named Smithson, there is no way to know when the user has entered "the last letter of their login name" unless you already know who is trying to log in (in which case, why bother with a login procedure?). The other problem is much more serious: it seems you are designing your own "login" procedure that works after the user is already logged in somehow (possibly as a guest?). This is an extremely bad and dangerous practice - see here why:
1 point
-
I am afraid we are not talking about the same thing. My suggestion is to divide the problem into two parts. In the first part, you define a self-join relationship of the Orders table that identifies orders that overlap the current order's time span. This is easy to do using the existing, stored fields of the Orders table:

Orders::DateIn ≤ Orders 2::DateOut
and Orders::DateOut ≥ Orders 2::DateIn
and Orders::OrderID ≠ Orders 2::OrderID

The second part is to identify which of the overlapping orders are conflicting - i.e. have a same product. This could be done in a number of ways, for example filtering the portal to Orders 2 by a calculation of:

not IsEmpty ( FilterValues ( Orders::ProductIDs ; Orders 2::ProductIDs ) )

where ProductIDs is an unstored calculation field = List ( LineItems::ProductID )

Any records displayed in such a filtered portal would be conflicting. You will have to make an additional effort to see exactly why they're conflicting, but perhaps it does not matter? Anyway, the idea is that the number of overlapping orders should be fairly small, so using an unstored calculation to find the conflicting ones among them should be sufficiently quick. Otherwise I see no choice but to move to a denormalized solution where the dates need to be replicated in the LineItems table, and you must take great care that this happens on every relevant layout, in every relevant scenario.

---
Caveat: untested code.
1 point
-
What do you see when you select Manage > External Data Sources… ?
1 point
-
I need to correct something I wrote earlier: This implies that in normal circumstances the two tests should return the same result, and that the only difference is the unnecessary complexity added by using PatternCount() instead of a direct comparison. That is not the case. Let's assume that both of the compared fields are calculation fields returning a result of type Date, and that the date format used by the file is m/d/y, with no leading zero for the month. Now, let's take an example where DateA is Feb 2, 2025 and DateB is Dec 2, 2025. These two are different dates, and if the comparison is performed in the date domain:

DateB = DateA

the result will be False. But the suggested comparison:

PatternCount ( DateB ; DateA )

will start by converting the dates to Text, and then:

PatternCount ( "12/2/2025" ; "2/2/2025" )

will return 1 (True). In addition to a false positive, it is also possible to get a false negative if one or both of the fields contain user-entered data, which may or may not have leading zeros.
1 point
-
That's actually wrong. You may not notice it's wrong if your date is never a Sunday; but when it is, your formula will return the date of the following Monday - i.e. the starting day of the next week. The correct formula to use would be:

date - DayOfWeek ( date - 1 ) + 1

I don't see that I made any suggestion regarding portal filtering - other than to warn you that it will get slow as your number of records increases. I think that could be simplified to:

IsEmpty ( Employee::gFilter ) or Time::Week_Start = Employee::gFilter

If that doesn't work the same way for you, then there is something wrong with the data in one (or both) of the fields.
1 point
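A quick worked check of the corrected week-start formula, using the fact that DayOfWeek() returns 1 for Sunday through 7 for Saturday:

// date = Date ( 1 ; 12 ; 2025 )  // a Sunday
// date - DayOfWeek ( date - 1 ) + 1
//   = date - DayOfWeek ( Date ( 1 ; 11 ; 2025 ) ) + 1   // Jan 11 is a Saturday, so DayOfWeek = 7
//   = date - 7 + 1 = Date ( 1 ; 6 ; 2025 )              // the preceding Monday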
-
I don't know (I am currently stuck at v.18). But I wouldn't be surprised if it's still the same.
1 point
-
I see the same thing (in version 18). This is apparently a bug. But the solution is simple: do not go back to the script. And if you do, do not click OK. Or switch it back to 'File' before clicking OK. Or do not save the script changes.
1 point
-
I am not sure what exactly you are asking, or what to look at in the attached file. From what I can see, the JSON in the GRANT::JSON field in the 4th record of your file is properly formatted - at least by the rules that FileMaker uses for formatting JSON (there is no official standard for this, and you may see various online formatters return different results). Well, Grant in your JSON is also an array. The keys of any array are the numerical indexes of the array's child elements. The District array is a grandchild of Grant, and you will see it listed if you look at:

JSONListKeys ( GRANT::JSON ; "Grant[0]" )

or

JSONListKeys ( GRANT::JSON ; "Grant[1]" )

and so on.
1 point
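To illustrate with a hypothetical fragment shaped like that structure:

// $json = "{ \"Grant\" : [ { \"District\" : [ 101 , 102 ] } ] }"
JSONListKeys ( $json ; "Grant" )      // returns 0 - the array's only index
JSONListKeys ( $json ; "Grant[0]" )   // returns District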
-
Mmm... maybe I spoke too soon. Now that I have implemented the suggested solution myself, I notice a nasty "jump" as the popover settles in its place. Still, it's something worth learning - and the flaw may be less noticeable in your situation, where you need less padding. The great advantage here is that it's all done just by styling the popover object's components (popover, popover content area and padding), with no need for extra buttons, scripts, script triggers and what have you.
PopoverContentArea.fmp12
1 point
-
@fbugeja I notice you have cross-posted this question on Claris Community. This doesn't happen very often, but the answer you received there is better than any of the other options mentioned here. It may not be easy to understand at first glance, but I think it's worth spending the necessary time to learn it.
1 point
-
What you ask for is impossible to do precisely. The reason is that the popover area is always positioned with its center aligned with the popover button. If your button is positioned to the right of the center of the area you want to cover, there will be either a slit exposed on the left side, or an additional band covering stuff on the right side - or a little bit of both, as you can see in the attached file, where I have adjusted the right-side popover to the approximate dimensions. There are other methods that would allow precise positioning of the covering area, such as a card window (already mentioned), a popover with an invisible button that you pop open with a script, or even a slide control.
AUSCOIN+.fmp12
1 point
-
1 point
-
The term you're thinking of might be not 'magic key' but 'multi-key.' It can indeed be quite useful, not only for going to related records, but also for displaying those records in a portal.
1 point
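A sketch of the idea, with hypothetical names: a text field (often global) holds several keys, one per line, and thereby matches all of the corresponding related records at once.

// Contact::gSelectedIDs (global text field) contains:
//   ID23¶ID57¶ID64
// relationship: Contact::gSelectedIDs = Invoice::ContactID
// a portal built on this relationship now shows the invoices of all
// three contacts, and Go to Related Records reaches them all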
-
Actually, there is. The table you call the Track table is a join table. The table you are missing is the real Tracks table, where each track would be a unique record. I believe it can - at least this part: You need to construct a self-join of the (not) Track table as:

Track::Track# = Track2::Track#
AND Track::Job# ≠ Track2::Job#

This allows each track belonging to the current job to look at its "siblings" that belong to other jobs and ask: are any of you pending? The exact method to do that depends on what type of field Pending is and how it is populated (ideally, it would be a Number field with the value of 1 if true, and 0 or empty if false). Scripting is also a possibility, and it doesn't limit you to "some sort of pop-up message". You can populate a global variable or a field when loading a job record. But I am not sure you need to do this, at least not for the highlighting part (I haven't really thought about the 2nd portal option).
1 point
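For example, assuming Pending is such a Number field, the highlighting could be done with a conditional formatting formula as simple as this sketch:

Max ( Track2::Pending ) = 1

Evaluated for each Track record, this is true whenever at least one sibling track on another job is still pending.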
-
1 point
-
Just thinking out loud for a moment: I think the obstacle here is printing the orders for your staff. To send an email confirming the order with a PDF of the order attached to it, you could use a script performed on the server (this needs to be checked). But the Print command is not supported in PSOS, and printing on the customer's machine would not help, even if it were easy.
1 point
-
I am not sure you do, since you are so eager to disregard the warnings. I strongly disagree. A primary key must satisfy two conditions: (a) it must be unique and (b) it must be immutable. In practical terms, this often translates to a meaningless value, such as a serial number or a UUID. A primary key that depends on another field does not satisfy these conditions (see 3NF). In your proposed scheme, the serial number depends on the type field - and as I already pointed out, if you modify the type you are very likely to get a duplicate, even if you are the only user.

That part is actually easy. If you are scripting the process of creating a new record, you can find the previous records with the same type and get the value from the last one. Or use the ExecuteSQL() function to do the same thing. Or you could define a self-join relationship matching on type and get the last/max value from there.

That's the easiest part; just do something like:

AgreementType & "-" & Right ( Year ( AgreementDate ) ; 2 ) & SerialIncrement ( "-000" ; SerialNumber )

but that's something you do at the very end, so it's hardly "a step in the right direction". And we haven't even mentioned the need to reset the serial numbers at the beginning of each year.
1 point
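For example, the ExecuteSQL() route might look like this sketch (the table and field names are assumed, and the yearly reset mentioned above would need an extra condition):

// highest existing serial for this type, plus one
ExecuteSQL ( "SELECT MAX ( SerialNumber ) FROM Agreements WHERE AgreementType = ?" ; "" ; "" ; Agreements::AgreementType ) + 1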
-
That's an interesting observation; I wasn't aware of that. Apparently, the 'update matching records' option when importing relies on the index - and the index stores only the first ~100 characters of the indexed value. Try making it a Text field with an auto-entered calculated value (replacing the existing value).
1 point
-
I would do it like this:

Enter Find Mode [ ]
Set Field [ YourTable::ClientName ; $searchClient ]
New Record/Request
Set Field [ YourTable::CaseNumber ; "*.*" ]
Omit Record
Perform Find [ ]

where $searchClient is the client name you are looking for.

---
P.S. Please update your profile with your current version.
1 point
-
Here is the relevant portion from a script I posted somewhere:

Set Variable [ $filePath ; Value: "Exported.csv" ]
Export Records [ “$filePath” ]
Open Data File [ “$filePath” ; Target: $myFile ]
Read from Data File [ File ID: $myFile ; Target: $csv_data ; Read as: UTF-8 ]
Set Variable [ $csv_data ; Value: Substitute ( $csv_data ; Char (11) ; ¶ ) ]
Set Data File Position [ File ID: $myFile ; New position: 0 ]
Write to Data File [ File ID: $myFile ; Data source: $csv_data ; Write as: UTF-8 ]
Close Data File [ File ID: $myFile ]

As I said, this will replace the vertical tab characters in the exported file with carriage returns. However, that does not mean that every child name will be on a separate line. The carriage returns will appear only between the child names, not before the first one or after the last.
1 point
-
Hello tweller927, that does sound like a firewall issue. You can download the support files from our docs page here. Please let me know if you run into any trouble or if you have any other questions.
Matt
360Works Support
1 point
-
Except for account name: https://help.claris.com/en/pro-help/content/running-scripts-on-server.html
1 point
-
This is not a good way to ask a question. First, I don't know how MS SQL behaves. Next, I don't know what exactly "too many ?'s" means. If you are constructing the SQL in the Data Viewer, you should be getting a (more or less) helpful error message. As it is, it's not clear what the problem is, nor what the expected solution would be. It would be much better to post a file with sample data, alongside the exact result you expect to get from it. I can only guess you are after something like the attached example.
SubGroupSQL.fmp12
The best way to do this is to use the "Fast Summaries" method.
1 point
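For what it's worth, each ? placeholder in the query must be matched by one trailing argument, as in this sketch (hypothetical table, field and variable names):

ExecuteSQL ( "SELECT SUM ( Amount ) FROM Orders WHERE ClientID = ? AND OrderDate >= ?" ; "" ; "" ; $clientID ; $startDate )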
-
I would use a script for this. I believe you need a script to get the data to Google Charts anyway, so this could be just a part of that. The tricky part here is that you need a "cell" in each line for every employee, even if they have no amount on that date. So I would do 3 preliminary steps: first, get a list of unique dates (sorted in chronological order, unlike your "Input List"). Then get a list of unique employees. Then use the "Fast Summaries" method to get the actual amounts to be charted. I would store these in a JSON object so they can be easily retrieved in the final step. The final step would use two nested loops: first, create a line for each unique date; in each line, create a cell for each employee and retrieve the corresponding amount from the JSON (which of course will return nothing for employees with no amount on that date). See the sketch below.

---
BTW, just for my own amusement I ran a script to summarize the "Input List" provided in your question. I had to correct it first, because some lines have a space before the Employee N value. But the values I am getting are quite different from the ones shown in your "Output List". For example, you show the amount of 1482 for Employee 1 on 4 July 24 - but Employee 1 has no entry at all on that date!
1 point
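A rough sketch of that final step, with hypothetical variables: $dates and $employees are the return-separated lists from the preliminary steps, and $amounts is a JSON object keyed on date|employee:

Set Variable [ $rows ; Value: "" ]
Set Variable [ $d ; Value: 0 ]
Loop
  Set Variable [ $d ; Value: $d + 1 ]
  Exit Loop If [ $d > ValueCount ( $dates ) ]
  Set Variable [ $date ; Value: GetValue ( $dates ; $d ) ]
  // the first cell of each line is the date itself
  Set Variable [ $row ; Value: JSONSetElement ( "[]" ; "[0]" ; $date ; JSONString ) ]
  Set Variable [ $e ; Value: 0 ]
  Loop
    Set Variable [ $e ; Value: $e + 1 ]
    Exit Loop If [ $e > ValueCount ( $employees ) ]
    // returns an empty cell for employees with no amount on this date
    Set Variable [ $row ; Value: JSONSetElement ( $row ; "[" & $e & "]" ; JSONGetElement ( $amounts ; $date & "|" & GetValue ( $employees ; $e ) ) ; JSONNumber ) ]
  End Loop
  Set Variable [ $rows ; Value: List ( $rows ; $row ) ]
End Loop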
-
And if you receive the data exactly as you've indicated, I would first convert it into tables and records as Comment suggests. Converting the import can be handled in a single looping script. Once normalized, you can generate reports, place portals on an Employee's layout, and anything else you can imagine. 😀
1 point
-
Ah. I forgot to mention that when I suggested exporting the field's contents on the server. That won't work in version 16; the Data File script steps originated in version 18.
1 point
-
The question makes very little sense, because the two functions have very little, if anything, in common. The Case() function evaluates one or more test expressions and returns the result for the first test expression that evaluates as true. The List() function simply constructs a return-separated list of all its arguments. The important point here is that the List() function ignores empty values. So the difference between your:

If ( a ; b ) & ¶ & If ( c ; d )

and mine:

List ( If ( a ; b ) ; If ( c ; d ) )

is that when a is false and c is true, you will end up with an empty line above d, while my result will be just d. Or, in more general terms, your result will always contain a carriage return, while mine will have one only when both tests are true. And, as Søren already pointed out, the Case() function would only ever output either b or d, never both.
1 point
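To make the difference concrete, when a is False and c is True:

If ( a ; b ) & ¶ & If ( c ; d )          // returns an empty line, then d
List ( If ( a ; b ) ; If ( c ; d ) )     // returns just d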
-
You're wrong - they are two distinctly different functions! If both If()s are true, two "lines" of result will occur simultaneously in Michael's function above! It could almost be written like: But on top of that, it deals with the case where the first or second half is undefined, by leaving out the pilcrow if need be!
--sd
1 point
-
1 point
-
I suspect your formula could be simplified to:

List (
  If ( B1::AR ALERT = 1 ; B1::AR ALERTS ) ;
  If ( B1::MR ALERT = 1 ; B1::MR ALERTS )
)

No. Conditional formatting can only change the text style, fill color and icon color. See also: https://fmforums.com/topic/110317-change-field-color-as-well-as-the-color-of-the-rectangle-shape-tool-upon-click/?do=findComment&comment=492669

--
P.S. Please fix your keyboard so that you don't appear to be SHOUTING.
1 point
-
Could also be written this way:

Let ( tt = final ;
  Choose ( Min ( Mod ( tt ; 1 ) * 10 ; 5 ) ;
    tt & ",0" ;
    Int ( tt ) + 0,5 ;
    Int ( tt ) + 0,5 ;
    Int ( tt ) + 0,5 ;
    Int ( tt ) + 0,5 ;
    tt
  )
)

(if you are outside Europe, substitute the commas with periods...)
--sd
1 point
-
Not really, because we still don't know what the result should be when the decimal is below .1 or above .4. I am guessing (!) you want to do something like:

Let ( [ r = Mod ( final ; 1 ) ] ;
  Int ( final ) + If ( 0 < r and r < 0.5 ; 0.5 ; r )
)

which would give the following results in my example:

1.0 ==> 1.0
1.1 ==> 1.5
1.2 ==> 1.5
1.3 ==> 1.5
1.4 ==> 1.5
1.5 ==> 1.5
1.6 ==> 1.6
1.7 ==> 1.7
1.8 ==> 1.8
1.9 ==> 1.9
2.0 ==> 2.0
1 point
-
https://community.claris.com/en/s/question/0D5Vy000006IQnpKAG/free-prompt-engineering-training-resources
https://www.cloudskillsboost.google/paths/118
https://microsoft.github.io/AI-For-Beginners/
https://www.edx.org/learn/artificial-intelligence/harvard-university-cs50-s-introduction-to-artificial-intelligence-with-python
https://www.coursera.org/learn/prompt-engineering
https://www.deeplearning.ai/short-courses/chatgpt-prompt-engineering-for-developers/
https://www.deeplearning.ai/short-courses/llmops/
https://www.coursera.org/learn/big-data-ai-ethics
https://www.edx.org/learn/computer-programming/edx-ai-applications-and-prompt-engineering
1 point
-
Here's a simple method to duplicate the found set:

Unsort Records
Set Variable [ $n ; Value: Get ( FoundCount ) ]
Loop
  Set Variable [ $i ; Value: $i + 1 ]
  Exit Loop If [ $i > $n ]
  Go to Record/Request/Page [ $i ] [ No dialog ]
  Duplicate Record/Request
  // perform changes to the duplicated record
End Loop
1 point
-
Does the attached test work for you?
InsertFromRedirectURL.fmp12
1 point
-
Trim ( GetValue ( Substitute ( text ; "/" ; ¶ ) ; 1 ) )

This will give you the first value. Replace the last '1' with a 2 to retrieve the second.

Hey Bruce, I agree that it is good to study ALL functions, but there are 45 text functions, so it certainly doesn't hurt to provide a solution from which to learn, no? :-)

BrainOnAStick, you might wish to protect against a field with more than one forward slash. For instance, what if an address is 141 1/2 E. Main, for example? Just be aware of the potential problem with it.
1 point
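For example, with such an address the first value would be truncated at the fraction:

Trim ( GetValue ( Substitute ( "141 1/2 E. Main" ; "/" ; ¶ ) ; 1 ) )  // returns "141 1"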
-
On a Mac, it's Option-7. You also have a button for it in the "Specify Calculation" window, under "Operators".
1 point
This leaderboard is set to Los Angeles/GMT-07:00