Most members of the HeuristicLab development team are currently attending GECCO 2012 in Philadelphia. Before leaving, however, we finalized the HeuristicLab 3.3.7 release, which is packed with new features for benchmark testing, parameter variation experiments, and more.
One of the core motivations for this release was that finding suitable parameter settings for a particular problem instance and algorithm is still very cumbersome. We therefore started to collect algorithm parameters and the results of executed test runs systematically in the so-called optimization knowledge base (OKB). The OKB serves as centralized storage for experiments and results and will be useful for detailed post-hoc analyses of algorithmic behavior.
To perform such analyses in a structured manner, the availability of benchmark instances is essential. HeuristicLab 3.3.7 now includes several libraries of published benchmark problem instances for combinatorial optimization problems (TSPLIB, QAPLIB, Taillard, Golden, Cordeau, Solomon, etc.) as well as regression and classification problems (Keijzer, Korns, Nguyen, real-world problems, etc.). HeuristicLab architects Andreas Beham and Gabriel Kronberger blogged about this in detail on the HeuristicLab Blog.
Finally, we added a new feature that makes it more convenient to create parameter variation experiments; it is described in detail in the Parameter Variation Experiments article on the blog.