jbante last won the day on November 2 2016

jbante had the most liked content!

Community Reputation

136 Excellent

About jbante


Profile Information

  • Gender
    Not Telling
  • Location
    CA, USA

FileMaker Profile

  • FM Application
    15 Advanced
  • Platform
    Cross Platform
  1. Let ( [ #error = Get ( LastError ) ; #newError = If ( #error ≠ 0 ; Trim ( #error & " " & scriptStep ) ) ; ...
     It sure looks to me like the function is trying to ignore error code 0, even if there are previously recorded errors. It looks like FileMaker 15 is successfully setting $Error to empty until the first error, and successfully not appending new lines for error code 0. So FileMaker 15 isn't crashing. Great!
  2. I just tested this, and FileMaker 15 custom functions were happy to set local variables that persist outside the function for me. It looks like the ErrorList function in your demo file isn't setting the $Error variable because the custom function chooses not to save error code 0, and the script doesn't trigger any other errors for it to save.
  3. Since your result from step 1 is sorted, you can get your result from step 3 without making another ExecuteSQL call. You could do a binary search on the result from step 1 to find the last value within your upper limit, and use LeftValues ( $step1Result ; $positionOfLastValueToInclude ). This may or may not be faster; try it, measure the time, and use the faster approach.
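To make the binary-search-then-LeftValues idea in the reply above concrete, here is a minimal sketch in Python (used for illustration only, since FileMaker calculations can't be run here). The names and the sample data are hypothetical; `bisect_right` plays the role of the hand-rolled binary search, and the list slice plays the role of `LeftValues`.

```python
import bisect

def left_values_within_limit(sorted_values, upper_limit):
    """Return the leading values that are <= upper_limit: a binary search
    for the cut point, then the FileMaker-LeftValues-style prefix."""
    # bisect_right finds the index just past the last value <= upper_limit
    cut = bisect.bisect_right(sorted_values, upper_limit)
    return sorted_values[:cut]

step1_result = [3, 7, 12, 19, 25, 40]  # sorted result of the first query
print(left_values_within_limit(step1_result, 20))  # [3, 7, 12, 19]
```

Because the input is already sorted, the search is O(log n) instead of scanning every value, which is the whole point of skipping the second ExecuteSQL call.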
  4. When working in FileMaker, you should usually presume that ExecuteSQL is one of the slower links in the chain, not the calculation engine. You may have a point in this case, but only because naïve parsing of return-delimited lists with calculations is a quadratic-time operation — not that it makes much difference up to a couple hundred rows or so.
  5. What is the reason you would prefer to do this with SQL instead of a function in FileMaker's calculation engine? You could easily use a (non-SQL) custom function to calculate an average from the return-delimited list result of your first query.
  6. For even the simplest of the calculation options, some historical data is necessary to fit the harmonic series. However, as long as you don't need the accuracy provided by details of local currents and topography, you shouldn't need that reference data after you've fit your coefficients.
  7. Judging from a quick Google search, tide calculations can range from moderately complicated to you-can-get-a-masters-degree-in-it complicated. How accurate do you need it to be, and how much detail do you need? Do you just want high and low times, or do you want to calculate height for a given time?
     I suppose you could do a simple calculation taking into account only the angles of the sun and moon relative to a position in a 2D model of the solar system. You could get a little more complicated by taking into account the declination of the sun (season) and moon, i.e. a 3D model of the solar system. I understand that local currents and topography can also have a big effect, but then you're getting into needing to feed in lots more location-specific data to the calculation.
     Mathematically, we're talking about a harmonic series for the more basic versions, i.e., a sum of sine functions representing the different periodic cycles, with different coefficients representing the relative strengths of their effects. You could pick out which effects you want to take into account (sun and moon phase at minimum, seasons and elliptical orbit effects if you want to get fancier, latitude of the location if you want to show off), and fit a harmonic series with those components to a data set of actual tides.
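The "fit a harmonic series to a data set of actual tides" step in the reply above can be sketched with an ordinary linear least-squares fit, since the constituent periods are known in advance and only the coefficients are unknown. This is an illustrative Python/NumPy sketch, not production tide code: the two periods used (the M2 lunar and S2 solar semidiurnal constituents) and the synthetic "observed" series are assumptions for the demo.

```python
import numpy as np

# Known tidal constituent periods in hours (just two for the sketch):
# M2 = principal lunar semidiurnal, S2 = principal solar semidiurnal.
PERIODS = [12.4206012, 12.0]

def design_matrix(t):
    """One sin and one cos column per constituent, plus a mean-level column."""
    cols = [np.ones_like(t)]
    for p in PERIODS:
        w = 2 * np.pi / p
        cols.append(np.sin(w * t))
        cols.append(np.cos(w * t))
    return np.column_stack(cols)

def fit_harmonics(t, heights):
    """Fit the harmonic coefficients by linear least squares."""
    coeffs, *_ = np.linalg.lstsq(design_matrix(t), heights, rcond=None)
    return coeffs

def predict(t, coeffs):
    return design_matrix(t) @ coeffs

# Make a synthetic "observed" tide and check that the fit recovers it.
t = np.arange(0.0, 24 * 30, 0.5)  # a month of half-hourly samples
true = (1.5
        + 0.9 * np.sin(2 * np.pi * t / PERIODS[0])
        + 0.3 * np.cos(2 * np.pi * t / PERIODS[1]))
coeffs = fit_harmonics(t, true)
print(np.max(np.abs(predict(t, coeffs) - true)))  # essentially 0 (no noise)
```

Adding more constituents (seasonal, latitude-dependent, etc.) just means adding more sin/cos column pairs with the corresponding periods; the fit itself stays the same.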
  8. In general, there's more to how Google does what it does than just having smart people and good software. Google doesn't work "smarter, not harder"; Google works "smarter AND harder".
     In particular, you're right that solutions for searching massive data sets quickly (i.e. in sub-linear time with respect to the size of the data set) exist. Indexing, for example. FileMaker has indexes. (There are other types of indexes that might be nice to add, but none that would help the content of this thread.) However, there are no solutions for quickly making massive data sets searchable, at least in terms of doing it with few CPU cycles. Building indexes is a super-linear affair: the time it takes to build an index grows faster than the amount of data to index. This is due to computing constraints more fundamental than gravity. Google's solution is to throw billions of dollars of hardware at the problem (with some non-trivial intelligence going into how to parallelize the indexing), but the total compute time is still super-linear. Search is fast when you ask for it because some computer has already spent a lot of time preparing for the search. "Big iron" has some different techniques at its disposal, but the governing principles of computing are no different.
     There are problems big enough that using FileMaker for them is impractical (by which I mean either impossible or taking substantially more resources than a solution using competing tools), but those problems are rare, and that threshold is very high. Very few people are trying to index the entire internet, and those who are will have to spend monumental resources to do it, no matter how smart they are.
  9. I don't know of any apps that let you access barometer data from a URL scheme. FileMaker has a Product Ideas section of their own forum where they take feature requests.
  10. Use Substitute to convert the hyphen to a return, then you can get the different numbers with GetValue:
      Let ( [
        _values = Substitute ( Table::field ; "-" ; ¶ ) ;
        _left = GetValue ( _values ; 1 ) ;
        _right = GetValue ( _values ; 2 )
      ] ;
        /* ... */
      )
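For anyone reading along in another language, the Substitute + GetValue pattern above is just "split on the delimiter, take the pieces". A hypothetical Python equivalent:

```python
def split_pair(field, sep="-"):
    """Python analog of Substitute + GetValue: split 'left-right' into parts."""
    values = field.split(sep)       # Substitute ( field ; "-" ; ¶ ), conceptually
    left = values[0]                # GetValue ( _values ; 1 )
    right = values[1]               # GetValue ( _values ; 2 )
    return left, right

print(split_pair("120-240"))  # ('120', '240')
```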
  11. That's a start. Yes to having a "TRANSACTIONSSUMMARIZED" table. No to having one record per month (or other period) to "evaluate". In my suggestion, there should be nothing to evaluate (on the client or server) at the time of the report except a find for the target TransactionSummary records. No calculation fields to evaluate; no summary fields (the FileMaker field type) to evaluate. (Or at least only over some negligibly small found set.) I'm suggesting you achieve this by updating the applicable TransactionSummary records at the time of each transaction: TransactionSummary::total = TransactionSummary::total + Transaction::amount, not TransactionSummary::total = Sum ( Transaction::amount ). If the TransactionSummary results are getting out of sync with the actual transactions, start by looking into transactional scripting (database transaction, not business transaction).
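The "update the summary at the time of each transaction" idea in the reply above can be sketched as follows. This is a hedged, in-memory Python illustration, not FileMaker scripting: the table stand-ins, the `(account, month)` key, and the function names are all hypothetical.

```python
from collections import defaultdict

# Hypothetical in-memory stand-ins for the Transaction and
# TransactionSummary tables, with summaries keyed by (account, month).
summary_totals = defaultdict(float)

def post_transaction(account, month, amount, transactions):
    """Record the transaction AND update the running summary in one step,
    mirroring TransactionSummary::total = total + Transaction::amount."""
    transactions.append((account, month, amount))
    summary_totals[(account, month)] += amount  # no Sum() over all rows later

transactions = []
post_transaction("A-100", "2016-11", 25.0, transactions)
post_transaction("A-100", "2016-11", 10.0, transactions)
print(summary_totals[("A-100", "2016-11")])  # 35.0
```

At report time there is nothing left to evaluate: the report just reads `summary_totals`, which is exactly the "find the target TransactionSummary records" step with no calculation or summary fields involved.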
  12. I suppose you could say I suggested denormalizing, but not quite in the same sense as how I usually hear that word used. Those stored "summary" records? (Summary of the data, not a FileMaker summary or calculation field.) That's exactly what I'm talking about. (Except it may make sense to store the summaries of those, too.) I suggest you do some research into how data warehouses are structured, star schemas, etc.
  13. Nope. Well, yes, but not using techniques any different from what we could do in FileMaker for the same effect. Any time you see a report showing summarized values over large quantities of records showing up super fast, it's because the expensive calculations were done before any user asked for the report (even for reports where the user thinks they're doing something ad hoc). There are two ways to do this: batching and streaming. The idea to run a report after hours that you've already mentioned is the batching option. I think you should consider streaming. Rather than accumulating your summary values in reports after the fact, each transaction updates the appropriate summaries at the time of the transaction continuously over the course of the period. The processing time isn't hours and hours at a time when you're building reports; it's milliseconds at a time as each transaction is processed. As you mentioned, this can be a problem if your updates to the summaries get out of sync with your transaction data. 2 things about that: (1) That's a bug that you need to track down and fix, and (2) you can re-normalize the data periodically while you're doing your archiving (which you mentioned you're already doing) until you do track down that bug.
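The periodic re-normalization mentioned in the reply above amounts to recomputing the summaries from the raw transactions (the batch calculation) and comparing them with the stored streaming totals. A minimal Python sketch, with hypothetical names and toy data:

```python
from collections import defaultdict

def recompute_summaries(transactions):
    """Rebuild the summary totals from scratch (the 'batch' calculation)."""
    totals = defaultdict(float)
    for account, month, amount in transactions:
        totals[(account, month)] += amount
    return totals

def find_drift(stored, transactions, tolerance=1e-9):
    """Return the summary keys whose stored total disagrees with the raw data."""
    fresh = recompute_summaries(transactions)
    keys = set(stored) | set(fresh)
    return {k for k in keys
            if abs(stored.get(k, 0.0) - fresh.get(k, 0.0)) > tolerance}

txns = [("A", "2016-10", 5.0), ("A", "2016-10", 7.0), ("B", "2016-10", 2.0)]
stored = {("A", "2016-10"): 12.0, ("B", "2016-10"): 3.0}  # B has drifted
print(find_drift(stored, txns))  # {('B', '2016-10')}
```

Any key this check flags is evidence of the bug mentioned in point (1): some transaction updated the data without updating its summary (or vice versa), which is exactly what transactional scripting is meant to prevent.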
  14. Needing to understand how it works would defeat half the value of using someone else's code. Encapsulation is a beautiful thing.
  15. Using text UUIDs can make certain database operations very slightly slower than their numeric equivalents, but I've never seen an application where that was the performance bottleneck. There's practically always something else you can do to get a more substantial speed improvement. If you do insist on numeric UUIDs, my function that Mike linked to could work. I started that family of functions before FileMaker introduced the Get ( UUID ) functions. Now that we have Get ( UUID ), you can just use one of many functions to convert Get ( UUID ) from hexadecimal to decimal. This is slower to generate each UUID, but the resulting UUIDs are slightly smaller and there's better security since the content is driven by the random number generator behind Get ( UUID ). (The result of the Get ( UUID ) function is a type 4 UUID, which is supposed to be created by a cryptographic-strength random number generator.)
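The hex-to-decimal conversion described above is easy to illustrate outside FileMaker. This Python sketch uses the standard library's version 4 (random) UUIDs, the same type the reply says `Get ( UUID )` produces; the function name is made up for the example.

```python
import uuid

def numeric_uuid():
    """Generate a type 4 (random) UUID and express it as a decimal string,
    analogous to converting Get ( UUID ) from hexadecimal to decimal."""
    u = uuid.uuid4()   # 128 random-ish bits from a version 4 UUID
    return str(u.int)  # the same 128-bit value written in base 10

n = numeric_uuid()
print(len(n))  # at most 39 digits, since 2^128 - 1 has 39 decimal digits
```

The decimal form is shorter than the 36-character hyphenated hex text, which is the "slightly smaller" trade-off the reply describes.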