AI systems' energy consumption remains significant, according to Mistral AI environmental report
The environmental impact of generative AI models, such as Mistral Large 2, can be quantified using a comprehensive lifecycle analysis (LCA) focusing on three key metrics: greenhouse gas (GHG) emissions, water consumption, and materials use. A recent peer-reviewed study by Mistral AI provides one of the most detailed examples of this approach.
After 18 months of training and usage, the key quantified impacts for Mistral Large 2 are as follows:
- GHG emissions: Approximately 20.4 kilotons of CO₂ equivalents (ktCO₂e) generated, primarily due to the energy-intensive training process and inference operations.
- Water consumption: Estimated at 281,000 cubic meters (approximately 112 Olympic-sized swimming pools), mainly used in cooling data centers housing GPUs running the model.
- Materials use: About 660 kilograms of antimony equivalents (Sb eq), a standard measure for resource depletion reflecting the consumption of rare metals and minerals required for hardware production associated with the AI infrastructure.
The study also breaks down the marginal environmental cost per inference (a roughly 400-token text response), offering a granular view of the cost of individual AI operations:
- 1.14 grams of CO₂e,
- 45 milliliters of water,
- 0.16 milligrams of Sb eq.
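These per-inference figures lend themselves to rough cumulative estimates. The sketch below uses the study's reported marginal values; the one-million-query workload is a made-up example, and the arithmetic covers marginal inference cost only, not the fixed training footprint:

```python
# Marginal environmental cost of one ~400-token inference,
# as reported in Mistral's LCA for Mistral Large 2.
CO2E_G_PER_QUERY = 1.14    # grams of CO2 equivalents
WATER_ML_PER_QUERY = 45.0  # milliliters of water
SB_MG_PER_QUERY = 0.16     # milligrams of antimony equivalents

def inference_footprint(n_queries: int) -> dict:
    """Rough cumulative footprint of n_queries inferences.

    Marginal cost only: excludes the fixed training footprint.
    """
    return {
        "co2e_kg": n_queries * CO2E_G_PER_QUERY / 1_000,
        "water_liters": n_queries * WATER_ML_PER_QUERY / 1_000,
        "sb_eq_g": n_queries * SB_MG_PER_QUERY / 1_000,
    }

# Example: one million queries (hypothetical workload).
print(inference_footprint(1_000_000))
# ≈ 1,140 kg CO2e, 45,000 L of water, 160 g Sb eq
```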
The LCA was conducted in collaboration with Carbone 4, a CSR and sustainability consultancy, and the French ecological transition agency ADEME, ensuring robust methodology. The assessment considered not only the training phase but also the entire lifecycle, including inference over 18 months.
Mistral's study sets a benchmark for transparency and standardization in reporting AI's environmental costs, an area that was previously opaque. Model size correlates roughly linearly with environmental impact, which underscores the importance of matching model selection to application needs. Comparisons with other AI providers show discrepancies, highlighting the need for uniform reporting standards to better assess industry-wide impact.
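The roughly linear size-impact relationship permits a crude first-order estimate when right-sizing a model. In this sketch, the linearity assumption and the per-query CO₂e figure come from the study; the 123-billion-parameter count is Mistral Large 2's published size, and the 12B comparison model is purely hypothetical:

```python
# First-order estimate: if environmental impact scales roughly
# linearly with parameter count, a smaller model's per-query
# footprint can be approximated from a measured reference model.
REF_PARAMS_B = 123.0  # Mistral Large 2 parameters, in billions
REF_CO2E_G = 1.14     # grams CO2e per ~400-token query (from the study)

def scaled_co2e_g(params_b: float) -> float:
    """Per-query CO2e estimate under a linear size-impact assumption."""
    return REF_CO2E_G * (params_b / REF_PARAMS_B)

# Hypothetical 12B-parameter model for a narrow use case:
print(f"{scaled_co2e_g(12.0):.3f} g CO2e per query")
```

This is deliberately simplistic: real per-query impact also depends on hardware, data-center location, and serving efficiency, as the report itself notes.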
Here's a summary table:
| Metric | Mistral Large 2 (18 months) | Per 400-token inference |
|----------------------------|-------------------------------------|---------------|
| Greenhouse gas emissions | 20.4 kilotons CO₂e | 1.14 g CO₂e |
| Water consumption | 281,000 m³ (≈112 Olympic pools) | 45 mL |
| Materials use (resource depletion) | 660 kg antimony equivalents (Sb eq) | 0.16 mg Sb eq |
This method of quantification, using lifecycle analysis and breaking down impacts from training to inference, allows stakeholders to understand and compare environmental costs of generative AI models accurately across core sustainability metrics.
Note: No conflicting data was found in the sources; the figures come from the first publicly available, peer-reviewed LCA specifically for a large language model. Beyond the headline numbers, the report makes several further points:
- Mistral contends that customers can minimize the environmental impact of GenAI by opting for smaller, use-case-specific models, and suggests grouping queries to minimize wasted compute cycles.
- Mistral argues that the impact of training a model, the ongoing environmental cost of running it, and the portion of the model's lifespan spent on inference versus training are essential details for users, developers, and policymakers.
- The report shows that AI's environmental impact is influenced by geographic location: AI data centers often employ cooling towers to keep equipment from overheating, which can be problematic in drought-prone regions where water is scarce and expensive.
- About 29% of materials consumption occurred during the training and inference stage; this stage's materials figure includes the materials necessary to generate and supply electricity to the data centers.
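Mistral's suggestion to group queries can be illustrated with a simple client-side batching sketch. The batch size and the model-call placeholder are hypothetical; the point is amortizing fixed per-request overhead across many prompts instead of paying it once per prompt:

```python
from itertools import islice

def batched(items, size):
    """Yield successive batches of up to `size` items."""
    it = iter(items)
    while batch := list(islice(it, size)):
        yield batch

# Instead of one model call per prompt, send prompts in groups so
# fixed per-request overhead (and idle compute) is shared.
prompts = [f"question {i}" for i in range(10)]
for batch in batched(prompts, size=4):
    # hypothetical call: send_to_model(batch)
    print(len(batch))  # prints 4, 4, 2
```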