A popular class of randomized algorithms is what are called “Monte Carlo algorithms”. These algorithms are applied to decision problems and give a “yes/no” answer. The peculiarity of Monte Carlo algorithms is that if they say “no” you can be sure that the answer is “no”, while if they say “yes”, the answer is “yes” with only a certain probability! The most “popular” such algorithm is the Primality Test, which uses Fermat’s Little Theorem.
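As a rough sketch of what that one-sided error looks like in practice, here is a minimal Fermat primality test in Python (my own illustration, not any particular library’s implementation):

```python
import random

def fermat_test(n, rounds=20):
    """Fermat primality test: a Monte Carlo algorithm.

    A "composite" answer (False) is always correct; a "probably prime"
    answer (True) may occasionally be wrong (e.g. for Carmichael numbers).
    """
    if n < 4:
        return n in (2, 3)
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        # Fermat's Little Theorem: if n is prime, a^(n-1) ≡ 1 (mod n).
        if pow(a, n - 1, n) != 1:
            return False  # definitely composite -- the certain "no"
    return True  # probably prime -- the uncertain "yes"
```

A `False` here is final; a `True` only becomes trustworthy after enough rounds, which is exactly the peculiarity above.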
I have found an interesting practical instance of a Monte Carlo algorithm. It concerns the decision I’ve been trying to make over the last two months – is she the ideal girl for me? From my “experiences” I’ve found that the algorithm I use for this is Monte Carlo!
I input a girl into my algorithm and ask it if there is a possibility of a relationship with her. If it says no, I can completely believe it and get on with life. A ‘yes’, however, means trouble. It means there is a possibility of a relationship, but there is no guarantee. An MBA like me is actually supposed to revel in ambiguity, but what the hell.
The way computer scientists get around this peculiarity is by running the algorithm multiple times. They define a probability limit (say 95%) and say, “if I am 95% sure that it is a yes, then I’ll take it as a yes”. Now, the number of times the algorithm needs to be run in order to reach 95% confidence is determined by the probability that the algorithm is right when it says “yes”.
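The repeated-runs arithmetic can be sketched as follows (a back-of-the-envelope calculation of my own, assuming each run errs independently):

```python
import math

def runs_for_confidence(p_yes_correct, target=0.95):
    """Number of independent runs, all answering "yes", needed before
    the answer can be trusted as "yes" with the target confidence.

    Assumes 0 < p_yes_correct < 1 and that runs err independently:
    after k unanimous "yes" answers, the residual doubt is
    (1 - p_yes_correct)**k, which must drop below 1 - target.
    """
    err = 1 - p_yes_correct
    return math.ceil(math.log(1 - target) / math.log(err))
```

For example, if each “yes” is right only half the time, you need 5 unanimous runs to hit 95% confidence; at 90% per run, 2 suffice. The lower the per-run reliability, the more runs you need – which is exactly the problem described next.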
What has been happening in my case is that this probability is not too high (judging from historical data). Hence, I need to run the algorithm a really large number of times in order to reach the required confidence level. Consequently, it is taking a fairly long time (there have been occasions in the past when the algorithm kept saying “yes” for two months and then finally said “no”).
Wanted: A better algorithm! Hopefully a Las Vegas one (those always give the correct answer – only the running time is random).