The interpretation of metagenomes relies on sophisticated computational approaches such as short-read assembly, binning, and taxonomic classification. All subsequent analyses can only be as meaningful as the outcome of these initial data-processing steps. Although tremendous progress has been achieved in recent years, none of these approaches can completely recover the complex information encoded in metagenomes. Simplifying assumptions are required, and they lead to strong limitations and potential inaccuracies in practical use. So far, the accuracy of computational methods in metagenomics has mainly been evaluated in publications presenting novel or improved methods.
However, such snapshots are hardly comparable, as there is no general standard for assessing computational methods in metagenomics. Users are therefore not well informed about the general and specific limitations of these methods, which can lead to misinterpretation of computational predictions. Furthermore, method developers must individually evaluate existing approaches in order to devise improvements and new algorithms; this consumes substantial time and computational resources and may introduce unintended biases.
To tackle this problem, we founded CAMI, the initiative for the Critical Assessment of Metagenome Interpretation, in 2014. CAMI evaluates methods in metagenomics independently, comprehensively, and without bias. The initiative supplies users with exhaustive quantitative data on the performance of methods across all relevant scenarios, guiding them in the selection and application of methods and in the proper interpretation of their results. It also provides valuable information to developers, allowing them to identify promising directions for future work. In 2015, CAMI organized the first community-driven benchmarking challenge in metagenomics. The second CAMI challenge (CAMI II) started on January 16, 2019, and CAMI III is currently in preparation.