
AndrewPurvis

Members
  • Content Count: 14
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About AndrewPurvis
  • Rank: newbie

Recent Profile Visitors
  1,291 profile views
  1. I see your point about StdDevP vs StdDev here: only when a population is fixed, including in time, can I count it as the entire population. I had always considered that a purely backward look at the totality of available data represented the population. I am, indeed, using past performance as an approximation of probable future performance. Thus far, though admittedly with only a modest sample since developing the system (32 games total, played by three 5-star decks and one 4-star, with an aggregate record of 25-7 and games leaning slightly toward the 5-star decks), it has been almost exactly on target for predict…
  2. I had initially used StdDev rather than StdDevP, before checking with someone far more knowledgeable than I. His thinking was that the records formed a complete population in the sense that they represented the full number of decks meeting the requirements of minimum games played and star rating, rather than a sampling of that population. As to things Laplacian, I would be at a loss (my degrees are in English). The approximation of the correlation between expected and actual performance is based on the idea that 5-star decks might be expected to hover around 90%, with a 20% drop-off per star. I achieve th…
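     A quick note on the two functions discussed above (not from the original posts): FileMaker's StdDevP divides the sum of squared deviations by the number of values $N$, treating the data as a complete population, while StdDev divides by $N - 1$, treating it as a sample. With mean $\bar{x}$,
     $$\sigma = \sqrt{\tfrac{1}{N}\textstyle\sum_{i=1}^{N}(x_i - \bar{x})^2} \quad\text{(StdDevP)} \qquad s = \sqrt{\tfrac{1}{N-1}\textstyle\sum_{i=1}^{N}(x_i - \bar{x})^2} \quad\text{(StdDev)}$$
     With a sample in the dozens, as described here, the two values differ only slightly; the distinction mainly matters for small groups.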
  3. Thank you. I will give this a try. I wonder if it is possible to create a table that is constructed, virtually, of other records. The outcome of the division would be that the table would create record 1 out of records 1-3 from the source table, record 2 out of records 4-6, and so on. If I then displayed those records, with the fields corresponding to the data in the other table, might this prevent the need for multiple portals side by side? The number of records I am looking at would never exceed 100 (practically speaking, never even 99), generally hovering around 18 to 24.
  4. I had once seen someone, someplace, attempt to explain what I am asking, but it didn't register with me at the time, and I have not found it since. I have data I want to view in a portal such that records 1 through 3 are across the first portal row, records 4 through 6 are in the second row, and so on. I realize I could probably do something with the related records that would get their sequential number, divide it by three, and then set things up so the first row would display data from two tables away, but FileMaker says to avoid this. In this case, I think I would have to set up the portal to d…
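     Not from the original posts, but a minimal sketch of the divide-by-three idea, assuming each related record carries a sequential number field (here called Seq, a placeholder name):
       RowGroup = Ceiling ( Seq / 3 )             // which portal row: 1 for records 1-3, 2 for 4-6, and so on
       ColumnPosition = Mod ( Seq - 1 ; 3 ) + 1   // position within that row: 1, 2, or 3
     Three filtered portals (or three filtered relationships) placed side by side, each keeping only the records whose ColumnPosition is 1, 2, or 3, would then give the three-across arrangement described above.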
  5. I have a database (admittedly a frivolous one) that tracks my rating (1-5 stars) for Magic decks I have built. It also tracks how the decks have performed. In one table, I have created threshold values for five groups, based on the percentage of games won of those played: those ≤20%, those >20% and ≤40%, those >40% and ≤60%, those >60% and ≤80%, and those >80%. I then want to look at the degree of correlation between these two values (star rating and winning percent), which I do based on a minimum number of games played. First of all, what I have works fine, even when I drop my threshold to one game pl…
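     Not from the original post, but one way the five threshold groups might be expressed as a single FileMaker calculation, assuming the winning percent is stored as a decimal fraction and using placeholder field names (WinPercent, PerformanceGroup):
       PerformanceGroup =
       Case (
         WinPercent ≤ .2 ; 1 ;
         WinPercent ≤ .4 ; 2 ;
         WinPercent ≤ .6 ; 3 ;
         WinPercent ≤ .8 ; 4 ;
         5
       )
     Case evaluates its tests in order, so each later test already implies the earlier ones failed, which matches the ">20% and ≤40%" style of the thresholds above.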
  6. You want to use Get ( FoundCount ) for this. It is fast and efficient. The really great part is that if you look through a relationship, it will return the number of related records. However, you can also write individual calculation fields that will find it for conditions you specify inside of them. This would reduce the need to have a relationship for each province. Your parent (Provinces) table would have a record for each province. Your Members table would then have a calculation field that is defined solely as "Get ( FoundCount )" (without the quotation marks, of course). In a table…
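     A minimal sketch of the setup described above, using placeholder names (Provinces related to Members by ProvinceID): in Members, define an unstored calculation field such as
       MembersInProvince = Get ( FoundCount )
     and place Members::MembersInProvince on a Provinces layout. Evaluated through that relationship, it returns the number of related Members records for the province, as the post describes.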
  7. I stripped the immaterial bits out to get this to under 8MB from 1.58GB. I realize I probably have a great deal of normalization to do, as I am not a professional developer, and this database has evolved rather like Windows: new things added on as the technology allowed, and the old never (or rarely) thrown away. Some of what remains in the relationship graph is extraneous for the purposes here, but the relevant fields are all in place. While the schema probably needs a ton of work, the underlying field definitions should be about as useful as they can be for this. Deck Tester Copy.fmp12
  8. I don't see how a portal could even get the right data, because I cannot use a single relationship to test for all cards and return a result when some cards do not match. I use portal filtering successfully on numbers that can run into the hundreds when specifying minimum games played, minimum wins, maximum losses, minimum winning percent, and minimum power rating (itself a complex, unstored calculation), so the numbers don't worry me. The decks, once set, do not generally change, though there are sometimes last-minute tweaks after I import them from the design side. I can see using a…
  9. A deck may have any number of cards equal to or greater than 60, though most are just 60. My personal collection is roughly 5,000 unique titles. The current deck count is in the hundreds, but not likely to reach 1,000 for years. The number of decks with common cards would reach into the dozens. The only fields needed for the relationships are these: Contents::CardCount, Contents::CardName, Contents::DeckName, and Contents::DeckVersion. The DeckName/DeckVersion pairing is unique. A given CardName can exist only once in a DeckName/DeckVersion pairing, but there could be anywhere from 10 to 20-some…
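     Not suggested in the thread, but since the file is .fmp12, one possible sketch for the cards-in-common count is ExecuteSQL over the Contents table; the $variables here are placeholders for the two decks being compared:
       ExecuteSQL (
         "SELECT COUNT(*)
          FROM Contents a INNER JOIN Contents b ON a.CardName = b.CardName
          WHERE a.DeckName = ? AND a.DeckVersion = ?
            AND b.DeckName = ? AND b.DeckVersion = ?" ;
         "" ; "" ;
         $deckA ; $versionA ; $deckB ; $versionB
       )
     Because a CardName appears at most once per DeckName/DeckVersion pairing, this counts shared titles; weighting the overlap by Contents::CardCount would take a further step.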
  10. I'd tear my hair out at times, but I shave my head.

  11. The question is this: Is the alpha channel set on the PNG file? It is possible for something to appear to have transparency simply because its background matches the surrounding color. Using the GIMP, however, you can add the alpha channel yourself by selecting the background color and making that color transparent.
  12. Another solution that has worked for me, without problem, is using Go to Field [] with no field specified. This effectively sends focus to a null field, taking focus out of every field on the layout.
  13. I have a database in which data could be in one of two places. It tracks games between two Magic players: Games::FirstDeckName holds the name of the deck that plays first, with Games::SecondDeckName holding the other deck. I have a layout that presents the user with a means of selecting any deck that has played at least one game (Matchups::Friend), then selecting from a list of only those decks that participated in those games (Matchups::Foe). It then shows their records against one another and summaries of the games, with the summaries acting as links back to the related…
  14. I have a database that stores the contents of—I'll just say it—Magic decks in such a way that each card-number-deck combination is its own record (this allows for other features not available with a single text field per deck). My ultimate goal would be to have a means of comparing a given deck to all other decks for the number of cards in common, but this will always be less than 100%. I have a Decks table, a Comments table, and a Contents table. The relevant fields are these, where "relevant" is interpreted loosely: Decks::DeckName, Decks::VersionNumber, Comments::DeckName, Comments::Vers…
  15. I have a database that contains information on groups of objects that, in the physical world, are discrete entities. I want to create a table that can extrapolate and hold the discrete elements, assign a unique value to each, and then randomize them. For my purposes, it is simple enough to think of the source table as having only two fields: ItemCount and ItemName. To use the common example of fruit, I could have the following: 4 Orange, 3 Apple, 2 Banana, 1 Pomegranate. From this I want a second table to hold 10 records, using ItemName and SequenceNumber: Orange, Orange, Orange, Orange…
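     Not from the original post, but a rough script sketch of the expansion step, assuming a source table (Items, with ItemCount and ItemName) and a target table (Expanded, with ItemName and SequenceNumber), each with its own layout; every name other than ItemCount and ItemName is a placeholder:
       Go to Layout [ "Items" ]
       Go to Record/Request/Page [ First ]
       Loop
         # Capture the current group, then create one Expanded record per counted item
         Set Variable [ $name ; Value: Items::ItemName ]
         Set Variable [ $count ; Value: Items::ItemCount ]
         Set Variable [ $i ; Value: 0 ]
         Loop
           Exit Loop If [ $i ≥ $count ]
           Go to Layout [ "Expanded" ]
           New Record/Request
           Set Field [ Expanded::ItemName ; $name ]
           Set Field [ Expanded::SequenceNumber ; Random ]
           Go to Layout [ "Items" ]
           Set Variable [ $i ; Value: $i + 1 ]
         End Loop
         Go to Record/Request/Page [ Next ; Exit after last: On ]
       End Loop
     Sorting Expanded by SequenceNumber then randomizes the order; if a clean 1-10 numbering is wanted, SequenceNumber could afterward be replaced with serial numbers in that sorted order.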
