### Ending the Monkey Typewriter Problem

#### by Mithrandir

In this post I will show the final results for this problem, while also showing a way to stop any Monte Carlo simulation before it wastes runs.

First, a summary of the problem:

- The first article introduced the reader to the problem and showed a way to obtain the right results using another method: a Markov chain was used to model the process.
- The second one introduced the reader to the problems that appear when this is solved via a Monte Carlo simulation. In fact, it can be shown that those problems are general: the errors grow, or the number of iterations must be raised, as the dimension of the problem grows.
- A few days ago, a mathematical deduction of one method was presented. Here we present the results and finally conclude the series.

Following the mathematics from the previous post, we have written the following code to test our assumptions. One can easily check that this is almost the same as the code used in the second installment of the problem, with only slight modifications.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <math.h>

#define SEED 5
#define MAXITER 1000000
#define eps 1e-5

int full(char *keys, int n)
{
    int i;
    for (i = 0; i < n; i++)
        if (!keys[i])
            return 0;
    return 1;
}

unsigned long long test(int n)
{
    unsigned long long t = 0ll; /* the time counter */
    int fallplace;
    char *keys = (char*) calloc(n, sizeof(char)); /* the keys */

    while (!full(keys, n)) {
        keys[rand() % n] = 1; /* new keystroke */
        /* comment above and uncomment below for the second case */
        // fallplace = rand() % n; keys[fallplace] = !keys[fallplace];
        /* end here */
        t++;
    }
    free(keys);
    return t;
}

int main(int argc, char **argv)
{
    int n;
    unsigned long long iters = 0ll; /* needed number of iterations */
    double s;     /* sum of squared deviations */
    double m;     /* mean */
    double old_m; /* old mean */
    double sigma; /* standard deviation */
    double old_v; /* old standard deviation */
    double Iapp;  /* information quantity Iapp */
    unsigned long long value; /* one simulation step */

    if (argc != 2)
        return 1;
    srand(time(NULL));
    sscanf(argv[1], "%d", &n);

    s = m = old_m = 0.0;
    for (; iters < SEED; iters++) { /* see comments */
        value = test(n);
        old_m = m;
        m += (value - m) / (iters + 1);
        s += (value - old_m) * (value - m);
    }

    /* first value */
    old_v = sqrt(s / iters);
    value = test(n);
    old_m = m;
    iters++;
    m += (value - m) / iters;
    s += (value - old_m) * (value - m);
    sigma = sqrt(s / iters);
    Iapp = log(sigma / old_v) / log(2);

    while (fabs(Iapp) > eps && iters < MAXITER) {
        /* next simulation round */
        iters++;
        value = test(n);
        old_m = m;
        m += (value - m) / iters;
        s += (value - old_m) * (value - m);
        old_v = sigma;
        sigma = sqrt(s / iters);
        Iapp = log(sigma / old_v) / log(2);
    }

    printf("Value: %5.2f\nRuns: %llu\n", m, iters);
    if (fabs(Iapp) > eps)
        printf("Not finished\n");
    return 0;
}
```

We used fast inline formulas for computing the mean and the variance of the results. These methods can be found on Wikipedia and I won't go into details about them here, as that is not the purpose of this post.

One little comment needs to be made. Before computing the first value of Iapp, the code performs a few (SEED) initial simulation runs to seed the mean and variance estimates. Mainly, this is done to prevent division by 0 or by numbers very close to 0.

And now the results.

The first case can be summarized below:

Number of keys | Number of presses | Number of runs |
---|---|---|
1 | 1.0 | 6 |
2 | 3.0 | 1365 |
3 | 5.4 | 414 |
4 | 8.2 | 1222 |
5 | 11.4 | 1230 |
6 | 14.7 | 909 |
7 | 18.1 | 423 |
8 | 21.9 | 960 |
9 | 25.6 | 510 |
10 | 29.1 | 472 |
11 | 32.6 | 944 |
12 | 37.4 | 1210 |
13 | 41.2 | 278 |
14 | 46.9 | 271 |
15 | 49.9 | 456 |
16 | 53.4 | 447 |
17 | 60.1 | 551 |
18 | 62.5 | 919 |
19 | 67.5 | 1083 |
20 | 70.6 | 137 |

The second case is summarized below:

Number of keys | Number of presses | Number of runs |
---|---|---|
1 | 1 | 6 |
2 | 4 | 35926 |
3 | 10.1 | 122 |
4 | 22.5 | 386 |
5 | 42.4 | 650 |
6 | 80 | 138 |
7 | 162 | 334 |
8 | 334 | 238 |
9 | 586 | 257 |
10 | 1.17e3 | 702 |
11 | 2.31e3 | 252 |
12 | 4.45e3 | 1069 |
13 | 8.7e3 | 391 |
14 | 18e3 | 124 |
15 | 37e3 | 356 |
16 | 74e3 | 546 |
17 | 133e3 | 302 |
18 | 264e3 | 174 |
19 | 548e3 | 900 |
20 | 1.04e6 | 280 |

We can now draw some conclusions.

We see that in the first case the results are the same as those obtained from a full run with a large number of simulations. Moreover, we see that the exact value is obtained very quickly. Thus, for this linear case the above idea can be used successfully to reduce the needed number of runs.

The second case behaves the same way: a smaller number of runs sufficed to obtain the values we were interested in, and the errors are very small, as one can easily see.

One more thing needs to be shown: the number of needed runs is not monotonically increasing, as one would expect at first sight of the idea. This is partly due to the random number generator and partly due to the fact that some values are more easily obtained than others.

Also, we could obtain results with greater precision if needed, simply by lowering the eps definition. Experimentally, eps must carry at least twice the number of decimal places we want in the results.

---

I believe that this is a breakthrough in reducing Monte Carlo complexity. I will use this idea once more when I implement another Monte Carlo simulation, this time for a real process, not for a thought experiment like this one with the monkey.