
This topic is 4382 days old. Please don't post here. Open a new topic instead.

Recommended Posts

Posted

Hi 

I have a series of pictures in FileMaker container fields: 30 records, each with a photo in a container field. I want to play the records back randomly as a slideshow. I can get a random record number and set the script to loop and play back. Aside from the fact that FileMaker has no way to cross-dissolve from one record to the next (which I would like to see added someday), I have the users flag their favorite photos and their least favorite. Based on their votes in the field "VOTE", they can choose a number from one to ten. I would like to find a calculation that would play their favorites more often than the others. I am not sure how to write a calc like that.

So with 30 records, if they vote on slide 8, it plays 20 percent more often than the others until the vote changes to another number. I am at a loss as to how to do this. As a novice I can see duplicating a record, but I don't think that would really work after a while. I just want to find a way to get the highest-voted slide to be viewed 20 per cent more often until there is another vote. Any help here would be wonderful. Thanks

Posted (edited)

a way to get the highest-voted slide to be viewed 20 per cent more often

 

Not sure I understand this requirement the way you have defined it.

Suppose each picture is shown for 5 seconds: in a period of 12.5 minutes, 150 different pictures will be shown. With 30 pictures to choose from, a "regular" picture will be shown 5 times, for a total of 25 seconds, while the "favorite" picture will be shown 6 times (a 20% increase in frequency), for a total exposure of 30 seconds. Is this what you had in mind?

 

Before you say yes or no, I suggest you consider the significance of the favorite when the total number of pictures is say 5, and then again with the total increased to 1,000.

 

 

FileMaker has no way to cross-dissolve from one record to the next

 

Not from one record to the next - but perhaps from one picture to the next, using the web viewer?

 

Edited by comment
Posted

I wrote a RandomWeightedRank custom function for dealing with a situation similar to this. I use it to randomly select elements from a ranked list (such as a sorted found set of records) so that elements at the beginning or end of the list are more likely. For your situation, I might sort the photo records according to how many votes each has, then select the next slide to jump to by going to a record number based on RandomWeightedRank ( Get ( FoundCount ) ). The effect is similar to duplicating each record a number of times according to its voted rank and using Go to Record with Floor ( Random * Get ( FoundCount ) ) + 1, but without having to duplicate the records.

(To make sure that records with an equal number of votes wind up with an equal probability of being selected, I might have an unstored calculation field set to voteCount + Random and re-sort on that field before each Go to Record. There are cleaner ways to accomplish this, but they aren't as expedient in a pinch, nor as concise to describe.)

 

The function I came up with may not work for you, depending on exactly how you want vote counts to affect how frequently different photos come up in the rotation. Perhaps we could better help you with a more precise scenario. For example, if I have 5 photos in rotation with 0 votes, 1 vote, 1 vote, 2 votes, and 5 votes each, each photo will be selected about 7% (1/15), 17%, 17%, 27%, and 33% of the time, respectively, under the method I just described. (The percentages add to 101% due to rounding error. Also, the domination of the top voted photo will be less extreme the larger the set gets — about 6.5% of the time for 30 photos.) Is that close enough to what you're trying to accomplish? What would your ideal probabilities be for the (0, 1, 1, 2, 5)-vote scenario I described?

 

An enhancement that might be nice would be to make sure the same record doesn't get chosen twice in a row. To do this, just save a $variable with the primary key of the last photo you were on, and repeat the random record selection procedure until the new selected record's primary key doesn't match (unless Get ( FoundCount ) < 2). Additionally, you might make more recent photos of the same rank less likely to come up next by adding a lastDisplayTimestamp field to each photo and using that as an additional sort criterion. (If you do this, you should NOT be sorting on a voteCount + Random calculation as I described above, since that would "dominate" the sort.)
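The selection scheme described in this post (sort ascending by votes with random tie-breaking, then jump to a weighted-random record number) can be sketched in Python. Note that `random_weighted_rank` below is a guess at the custom function's behavior, assuming position k gets k "tickets" out of n(n+1)/2, not its actual FileMaker definition:

```python
import random

def random_weighted_rank(n):
    """Pick a 1-based position in 1..n, where position k holds k of the
    n * (n + 1) / 2 "tickets" (so later positions are more likely).
    An assumed Python analogue of the RandomWeightedRank idea; the real
    FileMaker custom function may differ in detail."""
    ticket = random.randrange(n * (n + 1) // 2)
    k, cumulative = 1, 0
    while ticket >= cumulative + k:
        cumulative += k
        k += 1
    return k

def pick_photo(photos):
    """photos: list of (photo_id, vote_count) pairs. Sort ascending by
    votes, breaking ties randomly (like the voteCount + Random sort
    field), then favor positions near the end of the sorted list."""
    ranked = sorted(photos, key=lambda p: (p[1], random.random()))
    return ranked[random_weighted_rank(len(ranked)) - 1][0]
```

Under this ticket model, with 30 photos the top-ranked position is chosen 30/465 of the time, about 6.5%.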

Posted

First of all, thanks so much. I was struggling with this all night, but I am a real novice and I am not sure how to implement a custom function in my file. I need to take a bit of time this morning to try your solution. I can't thank you enough for your time, and I only hope that one day I will be able to help others the way people here help. It is outstanding. Hats off to you! I will get back to you later after I digest all this. Thanks, God bless

 
Might you have a sample file to upload that I could use to understand this?
 
I might just be able to add the record numbers to a field, listing some of them more than once, and then somehow get a random choice from that list. But I am not sure how to get a random value from a text field.
 
 
 
Posted

Jeremy,

I am having a hard time understanding how your custom function is supposed to be applied here.

I have a table with 10 records. Each record has a SerialID and a Rank value as follows:

• Records 1 - 5 have a rank of 1;
• Records 6 - 8 have a rank of 2;
• Records 9 - 10 have a rank of 3.

I am now going to repeatedly select one of the 10 records at random, with the expectation that the probability of drawing a record increases with its rank. Accepting your arbitrary weighting, whereby 2 is twice as likely as 1, and 3 is 3 times as likely as 1, I would expect the following distribution of the draws:

• Records 1 - 5: 5.88% each;
• Records 6 - 8: 11.76% each;
• Records 9 - 10: 17.65% each.

I don't see how to achieve this using your function.

Posted (edited)
Your probabilities are close; but, assuming I wrote the function right, they should actually be:
 
- Records 1 - 5, rank 1, Sum ( 1:5 ) / Sum ( 1:10 ) / 5 = 5.4% each
- Records 6 - 8, rank 2, Sum ( 6:8 ) / Sum ( 1:10 ) / 3 = 12.7% each
- Records 9 - 10, rank 3, Sum ( 9:10 ) / Sum ( 1:10 ) / 2 = 17.2% each
 
Without making recent selections less likely to be selected next as in the demo file I posted, we can select records with the above probabilities by doing this:
 
Loop
Sort Records [by rank, ascending; then by unstoredRandomCalculation, <any order>]
Go to Record/Request/Page [RandomWeightedRank ( Get ( FoundCount ) )]
End Loop
 
Note that using the lastDisplayTimestamp I described and demoed above instead of the unstoredRandomCalculation will make the probabilities for different records within a ranked group converge to uniformity faster. Also, rejecting the previously selected record as I demoed will nudge the probabilities for the entire set very slightly closer to uniform than what I describe in this post.
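The three percentages above are truncations of exact fractions, which can be computed directly. A small Python sketch using exact rational arithmetic (the ticket model is the one described in this thread; the function name is an invention for illustration):

```python
from collections import defaultdict
from fractions import Fraction

def pooled_probabilities(ranks):
    """Exact per-record selection probability when sorted position k
    (1-based, ascending by rank) holds k of the n * (n + 1) / 2 tickets
    and records sharing a rank pool their tickets evenly -- the effect
    of re-randomizing the within-rank sort order before every draw."""
    n = len(ranks)
    total = Fraction(n * (n + 1), 2)
    positions = defaultdict(list)
    for pos, r in enumerate(sorted(ranks), start=1):
        positions[r].append(pos)
    return [Fraction(sum(positions[r])) / total / len(positions[r])
            for r in sorted(ranks)]
```

For ranks (1, 1, 1, 1, 1, 2, 2, 2, 3, 3) this yields 3/55 (5.45...%), 7/55 (12.72...%), and 19/110 (17.27...%) per record, matching the truncated figures quoted above.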
Edited by Jeremy Bante
Posted

I calculated the probabilities as follows:

 

5 records x 1 + 3 records x 2 + 2 records x 3 = 17 tickets altogether.

 

Each ticket has a 1 in 17 chance of winning, therefore:

- Records 1 - 5, 1/17 = 5.88% each
- Records 6 - 8, 2/17 = 11.76% each
- Records 9 - 10, 3/17 = 17.65% each
 
 

 

we can select records with the above probabilities by doing this:

 
Sort Records [by rank, ascending; then by Random, ascending]
Go to Record/Request/Page [RandomWeightedRank ( Get ( FoundCount ) )]

 

I am afraid that doesn't work for me. My "found count" is 10 and testing with a population of 10k records I get the following distribution:

 

1    1.76%
2    3.71%
3    4.84%
4    7.71%
5    9.23%


6    10.55%
7    12.48%
8    14.86%


9    16.65%
10    18.21%

 

I presume one is meant to average the distribution within each group?

Posted
Within the found set, each record gets the same number of tickets as its record number; record 1 gets 1 ticket, record 2 gets 2 tickets, ... , record 9 gets 9 tickets, and record 10 gets 10 tickets, for a total of 55 tickets.
 
1/55 = 1.8%
2/55 = 3.6%
3/55 = 5.4%
4/55 = 7.2%
5/55 = 9.1%
6/55 = 10.9%
7/55 = 12.7%
8/55 = 14.5%
9/55 = 16.3%
10/55 = 18.1%
 
Judging by your results, that's working right for the scenario where each of the 10 records has a different rank. But we want each group to pool its tickets by rank, so that the 15-in-55 chance of selecting records 1 - 5 is uniformly distributed among the records with rank 1. This is achieved by uniformly randomizing the sort order within ranks before each selection.
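The claim that re-shuffling ties before each draw pools the tickets within each rank group can be checked by Monte Carlo simulation. A Python sketch (names and structure are illustrative, not the posted demo file):

```python
import random
from collections import Counter

def draw_with_shuffled_ties(ranks):
    """One draw: order the records ascending by rank, shuffling ties
    randomly, then pick position k (1-based) with probability
    proportional to k (position k holds k of n * (n + 1) / 2 tickets)."""
    n = len(ranks)
    order = sorted(range(n), key=lambda i: (ranks[i], random.random()))
    ticket = random.randrange(n * (n + 1) // 2)
    k, cumulative = 1, 0
    while ticket >= cumulative + k:
        cumulative += k
        k += 1
    return order[k - 1]
```

Over many draws with ranks (1, 1, 1, 1, 1, 2, 2, 2, 3, 3), each rank-1 record converges to 15/55/5 of the selections and each rank-3 record to 19/55/2, i.e. the pooled probabilities rather than the per-position ones.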
Posted
Within the found set, each record gets the same number of tickets as its record number; record 1 gets 1 ticket, record 2 gets 2 tickets, ... , record 9 gets 9 tickets, and record 10 gets 10 tickets, for a total of 55 tickets.

 

No, that's your scenario, not mine - and not the OP's either, I think. In my scenario, any record can hold any quantity of tickets, and the quantities do not need to be unique.

 

 

The solution in the given example could be arrived at by evaluating:

Let (
  r = Random ;
  Case (
    r < 1/17 ; 1 ;
    r < 2/17 ; 2 ;
    r < 3/17 ; 3 ;
    r < 4/17 ; 4 ;
    r < 5/17 ; 5 ;

    r < 7/17 ; 6 ;
    r < 9/17 ; 7 ;
    r < 11/17 ; 8 ;

    r < 14/17 ; 9 ;
    10
  )
)

 

Now, that shouldn't be too hard to generalize, using a list of probabilities as the input.
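One way that generalization might look, sketched in Python rather than as a FileMaker custom function (the name `categorical` and the interface are inventions for illustration): walk a cumulative total of the weights and return the first index whose running sum exceeds a uniform draw.

```python
import random

def categorical(weights):
    """Generalized version of the Case() ladder above: return an index
    0..len(weights)-1 with probability proportional to its weight.
    Weights need not sum to 1; raw ticket counts work directly."""
    total = sum(weights)
    r = random.uniform(0, total)
    cumulative = 0.0
    for i, w in enumerate(weights):
        cumulative += w
        if r < cumulative:
            return i
    return len(weights) - 1  # guard against floating-point round-off
```

Calling `categorical` with the ticket counts (1, 1, 1, 1, 1, 2, 2, 2, 3, 3) and adding 1 reproduces the 17-ticket example above.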

Posted

 

I might just be able to add the record numbers to a field, listing some of them more than once, and then somehow get a random choice from that list. But I am not sure how to get a random value from a text field.

 
 
 

 

Going with this idea, the easiest way to get a random value from that list would be:

GetValue ( YOURLIST ; Truncate ( Random * ValueCount ( YOURLIST ) ; 0 ) + 1 )

 

You'd just have to decide how many times you want the "top voted" record to be repeated in the list, to get the increased probability you want. The formula should work no matter how many records you wind up with, so you don't have to hardcode probability ranges for 30 records, then constantly update them.

 

As for how many times the "top voted" record should be repeated in the list, that depends what you meant by 20% more. In a set of 30, each record has roughly a 3.3% chance of being chosen. Did you mean for the "top voted" record to have a 4% chance (which is 3.3% * 120%)? If so, then my method won't work, since just adding a single repetition will give it a 2/31 chance of playing, at roughly 6.4%, or about 100% more.  If you meant you wanted it to play at 23.3% (3.3% + 20%), you'd have to add 8 repetitions.

 

Either way you're still hardcoding to a 30 record set, so you probably want to whip up a calculation to figure out how many repetitions to add, depending on the current total number of records.
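Solving (1 + r) / (n + r) >= target for the number of extra repetitions r gives r >= (target * n - 1) / (1 - target), which just needs rounding up. A quick Python helper (the name is an invention for illustration):

```python
import math

def repetitions_needed(n, target):
    """Fewest extra copies r of one record, appended to a list of n
    records, so that the record's selection probability
    (1 + r) / (n + r) reaches at least `target` (with target < 1)."""
    r = (target * n - 1) / (1 - target)
    return max(0, math.ceil(r))
```

For the example above, a 23.3% target over 30 records works out to 8 repetitions, matching the count in the post.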

Posted
No, that's your scenario, not mine - and not the OP's either, I think.

 

The original poster's scenario is not very precisely defined (interpreting their "20% more frequent" figure as an example rather than a specification), and either of our proposed solutions achieves the desired effect. Perhaps Mountain can correct us if either of us misunderstands. You and I have added some nuances, but those nuances are our own. Even if Mountain was more particular about the distribution he wanted, revising the parameters of a problem to fit an easy solution is a time-honored tradition. Achieving precisely the distribution you describe was not my goal, but the technique I described is awfully close anyway (and I doubt that's coincidental). I was merely clarifying what my solution does, not fitting it to your, or the original poster's, parameters. I don't claim to have the one and only solution.

 

I proposed my solution because I've done it before, the most difficult part is ready to be pasted into another file, and what remains to implement is straightforward; it's cheap. A different solution could be made more general, but I don't accept that more general solutions are more virtuous prima facie. They're often (not always!) more complicated, less optimized for particular purposes, or more demanding of their users. A more general solution might save work down the road, but it also might be a waste of time if the need for more general capabilities never actually comes up. When we misjudge, over-simple solutions waste less development effort than over-general ones; hence the YAGNI principle. If the original poster has a more complete idea of how much more frequently photos with more votes should come up in rotation, a more general solution than mine may be necessary. However, it isn't clear that that's the case.

Posted

Hi All

Jeremy has graciously answered my call to do what I would like to do with my pictures. I was possibly not as precise as I needed to be, but the demo he provided pointed me in the right direction and I can learn from it. I am very happy with the solution. I will have to try it in my educational setting, and maybe at some later date it will be modified. I am just thankful as a newbie that people here would take the time to answer my questions. I aspire to do the same for others when I get more of an understanding of scripting. Thanks Comment and Jeremy

Posted

I have given this some more thought, and I find I cannot agree with any one of your points.

 

 

 Achieving precisely the distribution you describe was not my goal, but the technique I described is awfully close anyway (and I doubt that's coincidental).

 

I think there can be no doubt that the "closeness" in results is purely coincidental. After all, in my scenario the distribution is entirely arbitrary (that's the whole point).  I could have just as easily picked a different distribution with drastically different odds. Your scenario has fixed odds per record, with the total number of records being the only variable.

 

 

 

A different solution could be made more general, but I don't accept that more general solutions are more virtuous prima facie.

 

I can only speak of the present case. I think that a scenario where an event has several possible outcomes with the probability of each outcome separately specified is a very common one. It is common enough for Excel's random number generator to include it among the seven (IIRC) types of distribution you can choose from. In fact, it is so common it has its own Wikipedia entry:

http://en.wikipedia.org/wiki/Categorical_distribution

 

That's why I believe a more general solution would be useful and consequently I have gone ahead and constructed one:

http://www.briandunning.com/cf/1517

 

In fact, it's even more general than just providing a way to generate a random discrete variable, as shown by the examples in the link.

 

See also:

http://www.briandunning.com/cf/716

http://www.briandunning.com/cf/416

http://www.briandunning.com/cf/417

Posted
I presumed that your calculated probabilities were based on your interpretation of my description of the RandomWeightedRank function. After all, your first statement of your calculated probabilities is preceded by, "Accepting your arbitrary weighting ... I would expect the following distribution of the draws". My mistake, I suppose. However, under that presumption, the similarity of our calculations (not just their results) is noteworthy. A number of "tickets" is assigned to each record according to rank and the probability of selecting any one record is evenly distributed within each group by rank. I don't care to attempt a proof or disproof this late on a Sunday evening, but I wouldn't be surprised if it turned out that our two variations on that same idea are never more than some small bounded error different from each other.
 
I agree that categorical distributions are common, especially for categorical data. The original poster's problem involves ordinal data, which can often be modeled with a categorical distribution, but invites ordinal distributions, too.
 
On generalized solutions, I only meant to say that more general solutions to programming problems should not be accepted as better just by virtue of their generality — a developer reflex that I believe is worth at least questioning when the opportunity presents itself.

Your Bin function is a great example of the points I raised. It has the flexibility to use a greater variety of finite discrete probability distributions, but it doesn't get any closer to answering the question of what that distribution might be. It's more general at the expense of being more complicated, more "powerful" at the expense of more developer effort, pre-computation, or both.

We could maximize the generality of our efforts ad absurdum by writing a Turing machine script with the understanding that we just have to pass it the correct parameter to get whatever situation-specific result we want, but that doesn't get us any closer to solving any situation-specific problem. We make computers useful to our feeble human minds by building tools more specialized than the computer itself, not more general.

The degree to which we should do that, and in what increments, is a question for which there is no good simple answer. Of course more general solutions are useful, even fun (and worthwhile for being fun even when they don't turn out to be useful otherwise), but for some applications more generality might be overkill, and sometimes even simultaneously underkill.
 
