Grigori Fursin

Known for: MILEPOST GCC, cTuning foundation, Collective Knowledge framework, Collective Mind workflow automation, Artifact Evaluation at ACM and IEEE conferences
Awards: Test of Time Award (IEEE/ACM International Symposium on Code Generation and Optimization, 2017)[1]
Fields: Computer engineering, Machine learning
Thesis: Iterative Compilation and Performance Prediction for Numerical Applications (2004)
Website: fursin.net

Grigori Fursin is a British[2] computer scientist, president of the non-profit cTuning foundation, a founding member of MLCommons,[3] and co-chair of the MLCommons Task Force on Automation and Reproducibility.[4] His research group created MILEPOST GCC, an open-source machine-learning-based self-optimizing compiler considered to be the first in the world.[5] At the end of the MILEPOST project, he established the cTuning foundation to crowdsource program optimisation and machine learning across diverse devices provided by volunteers. The foundation also developed the Collective Knowledge framework and the Collective Mind workflow automation framework[6] to support open research. Since 2015, Fursin has led Artifact Evaluation at several ACM and IEEE computer systems conferences. He is also a founding member of the ACM taskforce on Data, Software, and Reproducibility in Publication.[7][8][9]

Education


Fursin completed his PhD in computer science at the University of Edinburgh in 2005. While in Edinburgh, he worked on the foundations of practical program autotuning and performance prediction.[10]

Notable projects

  • Collective Mind – a collection of portable, extensible, ready-to-use automation recipes with a human-friendly interface that helps the community compose, benchmark, and optimize complex AI, ML, and other applications and systems across diverse and continuously changing models, datasets, software, and hardware.[11][6][12][13]
  • Collective Knowledge – an open-source framework that helps researchers and practitioners organize their software projects as a database of reusable components and portable workflows with common APIs based on FAIR principles,[14] and quickly prototype, crowdsource, and reproduce research experiments.
  • MILEPOST GCC – open-source technology for building machine-learning-based compilers.
  • Interactive Compilation Interface – a plugin framework that exposes internal features and optimisation decisions of compilers for external auto-tuning and learning.
  • cTuning foundation – a non-profit research organisation developing open-source tools and a common methodology for collaborative and reproducible experimentation.
  • Artifact Evaluation – validation of experimental results from published papers at computer systems and machine learning conferences.[15][16][17]

References

  1. ^ HiPEAC info 50 (page 8), April 2017
  2. ^ Companies House profile, June 2015
  3. ^ MLCommons press-release, December 2020
  4. ^ MLCommons Task Force on Automation and Reproducibility, June 2022
  5. ^ World's First Intelligent, Open Source Compiler Provides Automated Advice on Software Code Optimization, IBM press-release, June 2009
  6. ^ a b Fursin, Grigori (June 2024). Enabling more efficient and cost-effective AI/ML systems with Collective Mind, virtualized MLOps, MLPerf, Collective Knowledge Playground and reproducible optimization tournaments. doi:10.48550/arXiv.2406.16791.
  7. ^ "The ACM Task Force on Data, Software, and Reproducibility in Publication". Retrieved 5 December 2017.
  8. ^ Fursin, Grigori; Bruce Childers; Alex K. Jones; Daniel Mosse (June 2014). TRUST'14. Proceedings of the 1st ACM SIGPLAN Workshop on Reproducible Research Methodologies and New Publication Models in Computer Engineering at PLDI'14. doi:10.1145/2618137.
  9. ^ "ACM TechTalk "Reproducing 150 Research Papers and Testing Them in the Real World: Challenges and Solutions with Grigori Fursin"". Retrieved 11 February 2021.
  10. ^ Grigori Fursin. "PhD thesis". Retrieved 21 May 2017.
  11. ^ Fursin, Grigori (June 2023). Toward a common language to facilitate reproducible research and technology transfer: challenges and solutions. keynote at the 1st ACM Conference on Reproducibility and Replicability. doi:10.5281/zenodo.8105339.
  12. ^ Online catalog of automation recipes developed by MLCommons
  13. ^ HPCWire: MLPerf Releases Latest Inference Results and New Storage Benchmark, September 2023
  14. ^ Fursin, Grigori (October 2020). Collective Knowledge: organizing research projects as a database of reusable components and portable workflows with common interfaces. Philosophical Transactions of the Royal Society. arXiv:2011.01149. doi:10.1098/rsta.2020.0211. Retrieved 22 October 2020.
  15. ^ Fursin, Grigori; Bruce Childers; Alex K. Jones; Daniel Mosse (June 2014). TRUST'14. Proceedings of the 1st ACM SIGPLAN Workshop on Reproducible Research Methodologies and New Publication Models in Computer Engineering at PLDI'14. doi:10.1145/2618137.
  16. ^ Fursin, Grigori; Christophe Dubach (June 2014). Community-driven reviewing and validation of publications. Proceedings of TRUST'14 at PLDI'14. arXiv:1406.4020. doi:10.1145/2618137.2618142.
  17. ^ Childers, Bruce R; Grigori Fursin; Shriram Krishnamurthi; Andreas Zeller (March 2016). Artifact evaluation for publications. Dagstuhl Perspectives Workshop 15452. doi:10.4230/DagRep.5.11.29.