The field of Artificial Intelligence has rapidly grown in recent years, leading to the development of complex machine learning models with billions of parameters and significant computational cost.
These models consume substantial energy for both training and inference, resulting in a sizeable carbon footprint. To mitigate this issue, a new paradigm of frugal AI needs to be established, focusing on models that are efficient in terms of memory, computation, power consumption, and carbon emissions.
The workshop "Simplification, Compression, Efficiency and Frugality for Artificial Intelligence" aims to bring together experts and practitioners in the field to discuss and explore new methods for improving the energy efficiency of AI models. The focus of the workshop will be on developing frugal models that handle data with high memory and computational efficiency while remaining robust to noise. The workshop will explore several challenges, including but not limited to:
- Deploying energy-efficient models. Approaches like pruning, quantization, knowledge distillation and neural architecture search can be explored to reduce the energy consumption of models during deployment.
- Limited computational resources. Algorithms can be developed that train models using limited computational resources, reducing energy consumption and carbon emissions.
- Training with limited data. The focus here is on developing algorithms that can effectively learn from small datasets.
- Reducing energy consumption at the hardware level. The workshop will consider levers with a quadratic effect on energy consumption (dynamic power scales with the square of the supply voltage), such as lowering the supply voltage or operating frequency of the underlying hardware.
- Probabilistic machine learning algorithms. The workshop will explore the use of probabilistic machine learning algorithms to handle noisy computations, reducing the energy consumption of models during deployment.
- Real power consumption benchmarking at both train and test time.
- Energy-limited applications and challenges, such as AI deployment on battery-powered devices, smart energy saving, and deployment of TinyML models.
- Brave new ideas.
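To make the first challenge above concrete, one of the simplest compression techniques it mentions, magnitude pruning, removes the weights with the smallest absolute value. The following is a minimal sketch in pure Python; the function name and the example weights are ours, chosen purely for illustration:

```python
def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude.

    Weights whose magnitude falls at or below the cut-off are set to zero;
    with ties at the threshold, slightly more than `sparsity` may be pruned.
    """
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    # Cut-off = magnitude of the k-th smallest weight
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

# 40% of 5 weights: the 2 smallest magnitudes (-0.01 and 0.03) are zeroed
pruned = magnitude_prune([0.5, -0.01, 2.0, 0.03, -1.5], sparsity=0.4)
# pruned == [0.5, 0.0, 2.0, 0.0, -1.5]
```

In practice such unstructured sparsity only saves energy when paired with sparse storage formats or hardware support, which is exactly the kind of deployment question the workshop targets.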
Environmental impact section
As researchers and practitioners in the field of AI, it is important to consider the environmental impact of our work. The energy consumption and CO2 emissions generated by running algorithms can have significant consequences for the environment. We therefore strongly encourage authors to include a section in their workshop papers, placed right before the conclusion, that specifically addresses this issue.
To help facilitate this process, many publicly available open-source tools allow for easy estimation of the energy consumption and CO2 emissions of running algorithms.
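At their core, such estimates multiply the energy consumed by the carbon intensity of the local electricity grid. A back-of-the-envelope sketch follows; the function name is ours, and the power and grid-intensity figures are illustrative placeholders, not measured values:

```python
def co2_emissions_kg(power_watts, runtime_hours, grid_kg_per_kwh):
    """Estimate CO2 emissions as energy (kWh) x grid carbon intensity (kgCO2/kWh)."""
    energy_kwh = power_watts * runtime_hours / 1000.0
    return energy_kwh * grid_kg_per_kwh

# Example: a 300 W accelerator running for 24 h on a grid at 0.4 kgCO2/kWh
# consumes 300 * 24 / 1000 = 7.2 kWh, emitting 7.2 * 0.4 = 2.88 kg CO2.
estimate = co2_emissions_kg(300, 24, 0.4)
```

Dedicated libraries refine this by sampling actual hardware power draw and looking up region-specific grid intensities, but the reported number ultimately comes from this same product.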
If you have any questions on this point, you are welcome to reach us at enzo.tartaglione@telecom-paris.fr.
Submission
Submissions will be made through CMT3, at the link below:
https://cmt3.research.microsoft.com/ECMLPKDDworkshop2023/Track/43/Submission/Create
The following kinds of submissions will be considered:
- Extended abstract, up to 5 pages, which may also include already published material (extended abstracts will not be included in the proceedings volume, but may optionally be shared on the website)
- Short paper, between 6 and 11 pages, including references
- Full paper, between 12 and 16 pages, including references
Reviewing will be single-blind.
The workshops and tutorials will be included in joint post-workshop proceedings published by Springer in the Communications in Computer and Information Science (CCIS) series, in 1-2 volumes organized by focused scope and possibly indexed by Web of Science.
Authors will be able to opt in or out of inclusion in the proceedings.
The official template for the workshop paper submission is provided at this link.
An Overleaf template is also available at this link.