If you have too much memory
We have integrated new nodes into our cluster. All of the new nodes have a local SSD for fast temporary scratch data. To find the best combination of file system options and I/O scheduler, I wrote a script that tries a lot of combinations (80, to be precise). As the nodes have 64 GB of RAM, the first run of the script took 40 hours, because I always wrote twice the size of the RAM in my benchmarks to avoid any caching effects. To reduce the amount of available memory, I wrote a program called memhog which malloc()s the memory and then also mlock()s it. The usage is really simple:
$ ./memhog
Usage: memhog <size in GB>
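The source of my memhog is not shown here, but a minimal sketch of such a tool could look like the following. It assumes the requested size fits into a size_t and that the process is allowed to lock that much memory (locking large amounts usually requires root or a raised RLIMIT_MEMLOCK):

/* memhog.c -- a minimal sketch, not necessarily the original source.
 * Allocates the requested amount of memory and pins it into RAM with
 * mlock(), so the kernel can neither swap it out nor reuse it for
 * the page cache.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "Usage: %s <size in GB>\n", argv[0]);
        return 1;
    }

    size_t gb = (size_t)atoll(argv[1]);
    size_t size = gb << 30;           /* GB -> bytes */

    char *buf = malloc(size);
    if (buf == NULL) {
        perror("malloc");
        return 1;
    }

    /* Force every page of the region into physical memory and keep
     * it there for the lifetime of the process. */
    if (mlock(buf, size) != 0) {
        perror("mlock");
        return 1;
    }

    printf("Locked %zu GB, sleeping. Press Ctrl-C to release.\n", gb);
    pause();                          /* hold the memory until killed */
    return 0;
}

The mlock() call is the important part: malloc() alone only reserves address space, while mlock() faults every page in and pins it, which is exactly what is needed to shrink the memory available for the page cache during the benchmark.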
I am now locking 56 GB with memhog, and I have reduced the benchmark file size to 30 GB.
So, if you have too much memory and want to waste it… just use memhog.c.