About AndrewPurvis

  1. AndrewPurvis

    Calculating on two dimensions

    I see your point about StdDevP vs. StdDev here: only when a population is fixed, including in time, can I count it as the entire population. I had always considered that a purely backward look at the totality of available data represented the population. I am, indeed, using past performance as an approximation of probable future performance. Thus far, though admittedly with only a modest sample since developing the system (32 games total, played by three 5-star decks and one 4-star deck, for an aggregate 25-7, with results leaning slightly in favor of the 5-star decks), it has been almost exactly on target with its predictions, facing a variety of opposing decks, some of them multiple times.

    I am hesitant to use the Laplacian adjustment as you describe it here. Due to the large reserve I have of decks that have never been used (58 rated 4 stars and another 12 rated 5 stars, plus a further 128 with lower ratings, though those will not see service unless I feel like throwing one in for further testing of predictions), most will not have a great deal of time in service. They will see anywhere from 4 to 8 games, which would produce a significant shift if I used the Laplacian adjustment. Instead, I am making predictions for individual decks based on their star ratings, tracking those predictions, and adding the results to the pool of records being evaluated. Significant errors in individual cases will show up immediately and can be evaluated from there, as the conditions of each game are also tracked (6 recent games excepted).

    The nature of the star rating system is also relevant here. It is based on my interest in having FMP shuffle up the deck and deal out new starts that I can then play out. If I find something more intriguing or compelling, I give it a higher rating; if I find it more frustrating, I give it a lower rating. In this sense it is not based on any objective foundation, but once I found a high correlation, I was intrigued enough to look at how it might work moving forward.
It does not assume anything about the opposition other than that the deck across the table will fall within the range of capabilities represented by the more than 2,000 games in the pool played by those decks (sometimes against one another). Obviously, two 5-star decks would be expected to run about 50% against one another, given a large enough sample, barring specific strength-weakness match-ups that might skew the results toward one or the other. Thus, it assumes a fairly normal distribution of opposing decks. Even so, I would still expect better performance from a 5-star deck against any random opposing deck than from, say, a 3-star deck, making it the better bet to win over time.

I had actually encountered Kendall tau calculations in my research into what might be useful for this use case, but I had not run my data through the formula. Based on what you have provided here, I would need to use Tau-b, as there are significant numbers of ties in the rankings on both sides. My expectation is not actually 90% or better, but rather a mean of 90% (in the range of 80% to 100%) for 5-star decks, and so on. My reason for taking the difference between the scaled percentage and the star rating is to see how far the two differ in individual cases; to those differences I currently apply StdDevP, treating them as a population, to see how closely they correlate. My thinking is that some cases will fall outside of the expected ranges, and the standard deviation would give me a sense of how closely the values came out to the expected distribution: a value of <1 would show a distribution clustered closely around the mean, while one >1 would indicate a population whose performance was not as accurately reflective of expectations (as expressed by star rating). It seems, however, that the Kendall tau might be more appropriate here. Ultimately, the star rating is a measure of how much I drool over the thought of going after an opponent with the deck.
If I dread pulling a deck out, that will a) be less fun for me and b) likely be less successful in terms of performance. (Note that I am not concerning myself with the likelihood my opponent will enjoy the experience, as I prefer 4- and 5-star decks to those of lesser ratings.) In its first incarnation, back in November of 2006, this database was envisioned as a means of improving my deck design habits and those of a friend of mine, as he was my primary opponent (and indeed, the only one tracked here at this time, which skews the data somewhat). That said, my ultimate goal is to provide a free tool others can download and use to design decks and track their performance, ideally seeing which concepts work better or worse and leading to improvements. With that in mind, your feedback on all of this, at both the design level and the statistical level, is fantastic, and I greatly appreciate it.
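Since Tau-b came up: it can be sketched in a few lines of plain Python for experimenting with the star-rating and win-percentage ranks outside FileMaker. This is an illustrative implementation of the standard Tau-b formula, not anything from the database itself:

```python
from itertools import combinations
from collections import Counter
from math import sqrt

def kendall_tau_b(xs, ys):
    """Kendall's tau-b: rank correlation with a correction for ties,
    appropriate when both rankings (e.g. star rating and win-percent
    bucket) contain many tied values."""
    assert len(xs) == len(ys)
    concordant = discordant = 0
    for (x1, y1), (x2, y2) in combinations(zip(xs, ys), 2):
        prod = (x1 - x2) * (y1 - y2)
        if prod > 0:
            concordant += 1
        elif prod < 0:
            discordant += 1
        # pairs tied in either variable are handled by the tie terms below
    n = len(xs)
    n0 = n * (n - 1) // 2                                   # all pairs
    n1 = sum(t * (t - 1) // 2 for t in Counter(xs).values())  # ties in x
    n2 = sum(t * (t - 1) // 2 for t in Counter(ys).values())  # ties in y
    denom = sqrt((n0 - n1) * (n0 - n2))
    return (concordant - discordant) / denom if denom else 0.0
```

A perfectly concordant pairing returns 1.0, a perfectly reversed one returns -1.0, and tied values shrink the denominator rather than being counted as disagreements.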
  2. AndrewPurvis

    Calculating on two dimensions

    I had initially used StdDev rather than StdDevP, before checking with someone far more knowledgeable than I. His thinking was that this was a complete population, in the sense that the records represented the full set meeting the requirements of minimum games played and star rating, rather than a sampling of that population. As to things Laplacian, I would be at a loss (my degrees are in English). The approximation of the correlation between expected and actual performance is based on the idea that 5-star decks might be expected to hover around 90%, with a 20% drop-off per star. I achieve this by taking the winning percentage, multiplying it by 4, and adding 1 to the product. The result is anything from 1 ((0*4)+1) to 5 ((1*4)+1). I average the difference between the scaled percentage and the star rating, then take the StdDevP of those differences to determine how closely grouped the figures are. The value of that is largely as a measure of how accurately the rating system reflects reality, as any number greater than 1 suggests I may have something off, a very small sample size, or both. All that said (and probably too far off the topic of databases for this forum), you have given me a great deal of food for thought as to how I am producing this, and I will have to digest it and act accordingly before I put an executable out there for people to toy around with.
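As a cross-check of the arithmetic described above (in plain Python, not a FileMaker calculation), the scaled-percentage mapping and the population standard deviation of the differences look like this; the deck data is hypothetical:

```python
from statistics import mean, pstdev  # pstdev is the analogue of StdDevP

def scaled_rating(win_pct):
    """Map a winning percentage (0.0-1.0) onto the 1-5 star scale:
    0% -> 1, 100% -> 5, via (win_pct * 4) + 1."""
    return win_pct * 4 + 1

# hypothetical sample: (star rating, winning percentage) per deck
decks = [(5, 0.92), (5, 0.85), (4, 0.70), (3, 0.55), (2, 0.30)]

# gap between the scaled percentage and the assigned star rating
diffs = [scaled_rating(pct) - stars for stars, pct in decks]

avg_error = mean(diffs)   # how far, on average, ratings miss reality
spread = pstdev(diffs)    # StdDevP: >1 suggests ratings track poorly
```

`statistics.pstdev` divides by N (population), where `statistics.stdev` divides by N-1 (sample), mirroring the StdDevP/StdDev distinction discussed in the thread.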
  3. AndrewPurvis

    Adding a dimension to portals

    Thank you. I will give this a try. I wonder if it is possible to create a table that is constructed, virtually, of other records. The outcome of the division would be that the table would create record 1 out of records 1-3 from the source table, record 2 from records 4-6, and so on. If I then displayed those records, with the fields corresponding to the data in the other table, might this avoid the need for multiple portals side by side? The number of records I am looking at would never exceed 100 (practically speaking, never even 99), generally hovering around 18 to 24.
  4. AndrewPurvis

    Adding a dimension to portals

    I had once seen someone, someplace, attempt to explain what I am asking, but it didn't register with me at the time, and I have not found it since. I have data I want to view in a portal such that records 1 through 3 run across the first portal row, records 4 through 6 across the second row, and so on. I realize I could probably do something with the related records that would take their sequential number, divide it by three, and then set things up so the first row would display data from two tables away, but FileMaker says to avoid this. In this case, I think I would have to set up the portal to display records from the child table, then place values from three different grandchild tables based upon the remainder of the division (remainder 1, remainder 2, remainder 0, then on to the next row). The second problem is working out how to assign them to the proper rows, such that record n belongs in row 1 + Floor ((n - 1)/3). I have a value list I can use for sorting the related records properly before the division, so that is not a problem. Can something like that work? I would include a graphic, but I have not even gotten something to fail elegantly enough to produce one that is meaningful.
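The row and column arithmetic can be sanity-checked outside FileMaker. A small sketch using 1-based record numbers, as in the post:

```python
def portal_position(n, per_row=3):
    """Return (row, column) for sequential record n, both 1-based:
    records 1-3 land on row 1, 4-6 on row 2, and so on."""
    row = 1 + (n - 1) // per_row   # 1 + Floor((n - 1)/3)
    col = 1 + (n - 1) % per_row    # remainders 1, 2, 0 -> columns 1, 2, 3
    return row, col
```

Note the (n - 1) shift: with 1 + Floor(n/3) alone, record 3 would land on row 2 instead of finishing row 1.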
  5. AndrewPurvis

    Calculating on two dimensions

    I have a database (admittedly a frivolous one) that tracks my rating (1-5 stars) for Magic decks I have built. It also tracks how the decks have performed. In one table, I have created threshold values for five groups, based on games won of those played: ≤20%, >20% and ≤40%, >40% and ≤60%, >60% and ≤80%, and >80%. I then want to look at the degree of correlation between these two values (star rating and winning percentage), which I do based on a minimum number of games played.

First of all, what I have works fine, even when I drop my threshold to one game played and bring up 378 related records. The time to get everything pulled and calculated (the size of each star rating's population, the average number of games played for each population, their average winning percentage, the StdDevP of each population as scaled winning percentage relates to star rating, the number in each population that falls into each of the five groupings of winning percentages, and the percentage of each group by winning percentage), as well as charting that last part, is perhaps 1 second, or a hair over. However, I would like to speed this up and simplify the relationship graph as it relates here. This is how it looks now; I build my interface for just my own use, so it doesn't have to be particularly well labeled.

So far, everything is being done with calculation fields, and I am wondering if I am doing things the wrong way. Below is the portion of my relationship graph that relates to these two tables. Global fields in the Aggregator table hold the values 1 through 5. These, along with the threshold minimum number of games played, form the relationships. Each of those relates to one of the TOs that has the data about the percentage of games won by each deck and the star rating of that deck. Each deck has its winning percentage checked by five fields, each returning 1 if the value falls within the range conditions set (say, >.4 and ≤.6). Those fields then display across each row in the center grid above. Each row is a relationship, and each column is the same field as viewed through the different relationships. I figure there has GOT to be a smarter way of doing this, but given that this beast has evolved over a period of nearly a decade, I am still finding other efficiencies in old stuff. This part is relatively new, so I am almost certainly doing it poorly.
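As a cross-check of the aggregation logic (not FileMaker code), the five-bucket grouping and the per-star summaries can be sketched in Python; the deck records below are hypothetical:

```python
from collections import defaultdict
from statistics import mean

def bucket(win_pct):
    """Return the 1-5 group for a winning percentage:
    <=20% -> 1, >20% and <=40% -> 2, ..., >80% -> 5."""
    for group, upper in enumerate((0.2, 0.4, 0.6, 0.8), start=1):
        if win_pct <= upper:
            return group
    return 5

# hypothetical records: (star rating, winning percentage)
decks = [(5, 0.9), (5, 0.75), (4, 0.65), (3, 0.5), (3, 0.45), (1, 0.1)]

per_star = defaultdict(list)
for stars, pct in decks:
    per_star[stars].append(pct)

# per star rating: population size, average win %, count in each bucket
summary = {}
for stars, pcts in sorted(per_star.items()):
    counts = [sum(1 for p in pcts if bucket(p) == g) for g in range(1, 6)]
    summary[stars] = (len(pcts), mean(pcts), counts)
```

Each row of `summary` corresponds to one row of the center grid described above: one relationship per star rating, with the five bucket counts across the columns.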
  6. AndrewPurvis

    Need help creating total counts

    You want to use Get ( FoundCount ) for this. It is fast and efficient. The really great part is that if you look through a relationship, it will return the number of related records. You can also write individual calculation fields that find the count for conditions you specify inside them, which reduces the need to have a relationship for each province. Your parent (Provinces) table would have a record for each province. Your Members table would then have a calculation field defined solely as "Get ( FoundCount )" (without the quotation marks, of course). On a layout displaying data from the Provinces table, you would list each province and place the child table's counting field next to it. A potentially better solution would be to use a Cartesian relationship between two copies of the Provinces table and link to the children past the second copy. This would allow you to use a portal to display the provinces and their related counts in rows that FMP will keep aligned. You list 8 provinces here, but what happens if you add another? Will you want to change your interface, or just have it update itself? The portal will do the latter, and with the formatting options and button capabilities, you will have no trouble viewing things the way you want and navigating to the child records you wish to review.
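Outside FileMaker terms, the per-province tally is simply a group-and-count. A minimal Python sketch (the province names are hypothetical) of what Get ( FoundCount ) evaluates to through each relationship:

```python
from collections import Counter

# hypothetical child records: one entry per member, tagged with a province
members = ["Ontario", "Quebec", "Ontario", "Alberta", "Ontario"]

# one count per province, analogous to Get ( FoundCount ) evaluated
# through each province's relationship to the Members table
counts = Counter(members)
```

Adding a ninth province requires no change here: a new tag simply produces a new entry in `counts`, just as a new Provinces record produces a new portal row.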
  7. AndrewPurvis

    Finding overlapping data

    I stripped the immaterial bits out to get this to under 8MB from 1.58GB. I realize I probably have a great deal of normalization to do, as I am not a professional developer, and this database has evolved rather like Windows: new things added on as the technology allowed, and the old never (or rarely) thrown away. Some of what remains in the relationship graph is extraneous for the purposes here, but the relevant fields are all in place. While the schema probably needs a ton of work, the underlying field definitions should be about as useful as they can be for this. Deck Tester Copy.fmp12
  8. AndrewPurvis

    Finding overlapping data

    I don't see how a portal could even get the right data, because I cannot use a single relationship to test for all cards and return a result when some cards do not match. I use portal filtering successfully on numbers that can run into the hundreds when specifying minimum games played, minimum wins, maximum losses, minimum winning percentage, and minimum power rating (itself a complex, unstored calculation), so the numbers don't worry me. The decks, once set, do not generally change, though there are sometimes last-minute tweaks after I import them from the design side. I can see using a script for 1:1 deck comparisons that sets a fixed value, but I don't see how I can compare one deck to all other decks without cycling through every deck each time. The point is that years on I may create a deck that is 90% the same as a previous deck, which is what I want to identify without having to perform a manual search and review.
  9. AndrewPurvis

    Finding overlapping data

    A deck may have any number of cards equal to or greater than 60, though most are exactly 60. My personal collection is roughly 5,000 unique titles. The current deck count is in the hundreds, and not likely to reach 1,000 for years. The number of decks with common cards would reach into the dozens. The only fields needed for the relationships are these: Contents::CardCount, Contents::CardName, Contents::DeckName, Contents::DeckVersion. The DeckName/DeckVersion pairing is unique. A given CardName can exist only once within a DeckName/DeckVersion pairing, but there could be anywhere from 10 to 20-some unique CardName entries (each its own record) within a pairing, each (generally) with a CardCount between 1 and 4.

Here is a partial example of two decks, as CardCount and CardName:

4 Champion's Drake
4 Coralhelm Commander
3 Hada Spy Patrol
3 Skywatcher Adept
4 Training Grounds

and...

2 Champion's Drake
4 Hada Spy Patrol
4 Skywatcher Adept
2 Training Grounds
3 Venerated Teacher

Other cards would be involved, but if these were it, looking from the first deck at the second I would see 10 cards in common (the minimum of CardCount where CardName is the same), and that would be a 71.43% overlap. From the second deck's perspective, there is an 83.33% overlap. The problem is that I need a way to compare decks that do not necessarily have cards in common, then calculate the percentage of cards in common.
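For what it's worth, the "minimum of CardCount where CardName matches" rule is easy to prototype outside FileMaker. A sketch using the two partial decks from the post (the percentages depend on total deck size, so with only the listed cards the denominators are 18 and 15; the full decks would have other cards):

```python
from collections import Counter

def overlap(deck_a, deck_b):
    """Cards in common between two decks: for each shared CardName,
    take the minimum CardCount, then sum. The overlap percentage is
    relative to each deck's own total card count."""
    common = sum((Counter(deck_a) & Counter(deck_b)).values())  # & = min counts
    return common, common / sum(deck_a.values()), common / sum(deck_b.values())

# the two partial decks from the example above, as CardName -> CardCount
a = {"Champion's Drake": 4, "Coralhelm Commander": 4,
     "Hada Spy Patrol": 3, "Skywatcher Adept": 3, "Training Grounds": 4}
b = {"Champion's Drake": 2, "Hada Spy Patrol": 4,
     "Skywatcher Adept": 4, "Training Grounds": 2, "Venerated Teacher": 3}

common, pct_a, pct_b = overlap(a, b)   # common cards: 10
```

`Counter & Counter` is Python's multiset intersection, which is exactly the min-of-counts rule: 2 Champion's Drake + 3 Hada Spy Patrol + 3 Skywatcher Adept + 2 Training Grounds = 10 cards in common.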
  10. I'd tear my hair out at times, but I shave my head.

  11. AndrewPurvis

    PNG Button Issue

    The question is this: is the alpha layer set on the PNG file? It is possible for something to appear to have an alpha layer simply because its background matches the surrounding color. Using the GIMP, however, you can create the alpha layer yourself by selecting the background color and specifying it as the alpha color.
  12. AndrewPurvis

    Deselect Field or Bypass Tab Order

    Another solution that has worked for me, without problem, is using Go To Field [] with no field specified. This will actually send focus to field NULL.
  13. AndrewPurvis

    Narrowing List of Related Records

    I have a database in which data could be in one of two places. It tracks games between two Magic players: Games::FirstDeckName holds the name of the deck that plays first, and Games::SecondDeckName holds the other deck. I have a layout that presents the user with a means of selecting any deck that has played a number of games greater than 0 (Matchups::Friend), then selecting from a list of only those decks that participated in those games (Matchups::Foe). It then shows their records against one another and summaries of the games, with the summaries acting as links back to the related games. That works great.

However, the second menu uses a value list constructed by a relationship whose child key field is set to the names of both decks (carriage-return-delimited). If a Friend deck has played against five other decks, when I go to Matchups::Foe I get, of course, six results: the five decks the Friend has played against AND the Friend deck itself. What I want is a conditional value list that includes only values not equal to the primary key (Matchups::Friend). All I can think to do at this point is use a script trigger (fired on change) on the Matchups::Friend field to set a field to the related values with the current Friend value removed with the Replace function, done in such a way as to eliminate any unsightly extra carriage returns. Then I could use that field for the value list (it can be, but need not be, global, since there is never any cause to have more than a single record in the table). Is there a programmatic solution that avoids writing a script, attaching a script trigger, and creating a new field?
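The script-trigger workaround described above amounts to filtering one value out of a return-delimited list without leaving blank lines behind. A minimal Python sketch of that step (FileMaker value lists are carriage-return-delimited, hence the `\r`):

```python
def exclude_value(value_list, exclude):
    """Remove one value from a carriage-return-delimited value list,
    dropping any empty entries so no stray delimiters remain."""
    values = [v for v in value_list.split("\r") if v and v != exclude]
    return "\r".join(values)
```

So filtering "Friend" out of the Foe list leaves only the five opposing decks, with no dangling carriage return where the removed value used to be.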
  14. AndrewPurvis

    Finding overlapping data

    I have a database that stores the contents of—I'll just say it—Magic decks in such a way that each card-number-deck combination is its own record (this allows for other features not available with a single text field per deck). My ultimate goal is a means of comparing a given deck to all other decks by the number of cards in common, though this will always be less than 100%. I have a Decks table, a Comments table, and a Contents table. The relevant fields, where "relevant" is interpreted loosely, are these: Decks::DeckName, Decks::VersionNumber, Comments::DeckName, Comments::VersionNumber, Contents::DeckName, Contents::VersionNumber, Contents::CardName, Contents::CardCount.

In Decks, I can see, for a given DeckName and VersionNumber, the contents of that version of the deck and all notes on it (stored in Comments::DeckNotes). I can also view in a portal the list of all related records from Contents::CardCount and Contents::CardName (how many of which cards). I can even view individual card art via a popover button. This is great and lets me do about everything I want. Except one key thing: I want to be able to see, when looking at a deck, all other decks in the database that have a minimum number of cards in common with the deck being viewed. I currently use a self-join on Contents::CardName to see other decks that have included one particular card, but I want to be able to look at all cards from one deck and find, for instance, all decks with ≥60% of cards in common. Can this be done?

***Update*** I would love to reply in the thread, but there is no option on the page for me to respond to the reply.
  15. I have a database that contains information on groups of objects that, in the physical world, are discrete entities. I want to create a table that can extrapolate and hold the discrete elements, assign unique values to each, and then randomize them. For my purposes, it is simple enough to think of the table in question as having only two fields: ItemCount and ItemName. To use the common example of fruit, I could have the following:

4 Orange
3 Apple
2 Banana
1 Pomegranate

From this I want a table holding 10 records, using ItemName and SequenceNumber:

Orange
Orange
Orange
Orange
Apple
Apple
Apple
Banana
Banana
Pomegranate

From there, I should be able to use a pretty simple randomizing tool that looks at the records with blank SequenceNumber fields and applies the next incremented number to one of them, cycling until each has been given a number, then sorts them into a stack, akin to shuffling a deck of cards, from which I can look at the first record, then the second, and so on. There are a number of things I plan to do once it is in that sorted order, but those are simple enough once this step is done. My current thinking is something I fear would be horribly kludgy and inefficient: looping through each original record and decrementing a local variable until it is done, then moving down to the next record, populating a blank table one record at a time. It seems there must be a better, more efficient way.
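For comparison, the expand-then-shuffle step described above can be sketched outside FileMaker in a few lines; the source records below are the fruit example from the post:

```python
import random

# source records: (ItemCount, ItemName), as in the two-field table
records = [(4, "Orange"), (3, "Apple"), (2, "Banana"), (1, "Pomegranate")]

# expand each record into ItemCount discrete rows
items = [name for count, name in records for _ in range(count)]

# assign a random, unique SequenceNumber to every row in one pass,
# which is equivalent to shuffling the whole stack at once
sequence = list(range(1, len(items) + 1))
random.shuffle(sequence)

# (SequenceNumber, ItemName) pairs, sorted into final stack order
shuffled = sorted(zip(sequence, items))
```

Assigning all sequence numbers in a single shuffle avoids the loop-and-decrement approach: there is no cycling through blank records one at a time, and every row gets a unique number by construction.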
