About CAMI
The interpretation of metagenomes relies on sophisticated computational approaches such as short-read assembly, binning, and taxonomic classification. All subsequent analyses can only be as meaningful as the outcome of these initial data processing steps. Tremendous progress has been made in recent years. However, none of these approaches can completely recover the complex information encoded in metagenomes. Simplifying assumptions are needed, and they lead to strong limitations and potential inaccuracies in practical use. So far, the accuracy of computational methods in metagenomics has mostly been evaluated in publications presenting novel or improved methods.
However, these snapshots are hardly comparable, due to the lack of a general standard for assessing computational methods in metagenomics. Users are thus not well informed about the general and specific limitations of these methods, which may lead to misinterpretation of computational predictions. Furthermore, method developers must individually evaluate existing approaches in order to develop ideas and concepts for improvements and new algorithms. This consumes substantial time and computational resources, and may introduce unintended biases.
To tackle this problem, we founded the initiative for the Critical Assessment of Metagenome Interpretation (CAMI) in 2014. CAMI evaluates methods in metagenomics independently, comprehensively, and without bias. It supplies users with exhaustive quantitative data about the performance of methods in all relevant scenarios, thereby guiding users in the selection and application of methods and in the proper interpretation of their results. Furthermore, it provides valuable information to developers, allowing them to identify promising directions for their future work. In 2015, CAMI organized the first community-driven benchmarking challenge in metagenomics. The second CAMI challenge (CAMI II) started on January 16th, 2019, and CAMI III is currently in preparation.
CAMI portal benchmarking functionalities
The new CAMI portal benchmarking functionalities enable straightforward online assessment of software for common metagenome analyses, including metagenome assembly, genome and taxon binning, and taxonomic profiling. For this, the portal computes metrics and visualizations using the CAMI-supported evaluation packages AMBER, MetaQUAST, and OPAL. Currently, assessments can be performed on datasets of the CAMI I and CAMI II challenges.
A typical benchmarking workflow can be summarized as follows: (1) a user downloads CAMI benchmark data from the CAMI benchmarking portal, (2) applies their method of choice, or one they are developing, to the data, and (3) uploads and evaluates the method's results on the portal. The portal frees users from installing and running additional evaluation software, while allowing them to rank and compare their results with those of other users.
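As an illustration of step (3), genome binning results are uploaded in the tab-separated Bioboxes binning format that CAMI evaluation tools such as AMBER consume. The sketch below renders a made-up set of contig-to-bin assignments in that format; the sample ID, contig names, and bin names are hypothetical examples, not CAMI dataset identifiers.

```python
# Sketch: serializing a genome binning result in the Bioboxes binning
# format accepted by CAMI evaluation tools. All identifiers below are
# illustrative placeholders, not real CAMI sample or contig names.

def write_binning(sample_id, assignments):
    """Render (sequence_id, bin_id) pairs as Bioboxes binning text."""
    lines = [
        "@Version:0.9.0",           # format version header
        f"@SampleID:{sample_id}",   # identifies the benchmark sample
        "@@SEQUENCEID\tBINID",      # column header row
    ]
    lines += [f"{seq}\t{bin_id}" for seq, bin_id in assignments]
    return "\n".join(lines) + "\n"

result = write_binning("example_sample", [
    ("contig_001", "bin_1"),
    ("contig_002", "bin_1"),
    ("contig_003", "bin_2"),
])
print(result)
```

The resulting file can then be uploaded to the portal and scored against the gold-standard binning of the corresponding CAMI dataset.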
License and Terms
Usage: The CAMI benchmarking portal is free for everybody to use, and logging in is optional. Logging in allows users to share reproducible results, rank them among other public results displayed on the portal, and store them indefinitely.
Terms: By using this service, users agree to our terms of use and privacy policy.