CAMI brings together the metagenomics research community to facilitate the benchmarking of metagenomic computational methods, promote standards and good practices, and accelerate progress in this rapidly evolving field of bioinformatics.
New
- Browse results of previous CAMI Challenges for metagenome assembly, genome and taxon binning, and taxonomic profiling
- Upload and evaluate your new results, and add them to the online benchmark
If you use the CAMI benchmarking service, datasets, or results in your work, please consider citing:
- Meyer, F., Fritz, A., Deng, ZL. et al. Critical Assessment of Metagenome Interpretation: the second round of challenges. Nat Methods 19, 429–440 (2022). DOI: 10.1038/s41592-022-01431-4
- Sczyrba, A., Hofmann, P., Belmann, P. et al. Critical Assessment of Metagenome Interpretation—a benchmark of metagenomics software. Nat Methods 14, 1063–1071 (2017). DOI: 10.1038/nmeth.4458
CAMI follows the principles described in:
- Meyer, F., Lesker, TR., Koslicki, D. et al. Tutorial: assessing metagenomics software with the CAMI benchmarking toolkit. Nat Protoc 16, 1785–1801 (2021). DOI: 10.1038/s41596-020-00480-3
To assess metagenome assembly, genome and taxon binning, and taxonomic profiling, this service uses the following software:
- Mikheenko, A., Saveliev, V. & Gurevich, A. MetaQUAST: evaluation of metagenome assemblies. Bioinformatics 32, 1088–1090 (2016). DOI: 10.1093/bioinformatics/btv697
- Meyer, F. et al. AMBER: assessment of metagenome BinnERs. Gigascience 7, giy069 (2018). DOI: 10.1093/gigascience/giy069
- Meyer, F. et al. Assessing taxonomic metagenome profilers with OPAL. Genome Biol. 20, 51 (2019). DOI: 10.1186/s13059-019-1646-y
For datasets, please refer to the DOIs on the datasets page.