International Journal of Computer Trends and Technology

Research Article | Open Access
Volume 73 | Issue 12 | Year 2025 | Article Id. IJCTT-V73I12P108 | DOI : https://doi.org/10.14445/22312803/IJCTT-V73I12P108

Energy Consumption and Computational Demand of Modern AI Systems: A Practical Survey


Yingqiong Gu

Received: 29 Oct 2025 | Revised: 30 Nov 2025 | Accepted: 13 Dec 2025 | Published: 30 Dec 2025

Citation:

Yingqiong Gu, "Energy Consumption and Computational Demand of Modern AI Systems: A Practical Survey," International Journal of Computer Trends and Technology (IJCTT), vol. 73, no. 12, pp. 52-56, 2025. Crossref, https://doi.org/10.14445/22312803/IJCTT-V73I12P108

Abstract

The rapid growth of Artificial Intelligence (AI) has led to unprecedented demand for computational resources and energy consumption. Large-scale deep learning models, particularly Convolutional Neural Networks (CNNs) and transformer-based architectures, require substantial computing power during both training and inference. As AI systems are increasingly deployed at scale, from cloud data centers to edge devices, energy efficiency and sustainability have become critical concerns. This paper presents a practical survey of the computational and energy demands of modern AI systems. We analyze energy consumption across training and inference stages, compare cloud-based and edge-based deployments, and discuss the environmental impact of data centers. Furthermore, we examine emerging directions in energy-efficient AI, including model compression, quantization, knowledge distillation, and hardware-aware optimization. The goal is to provide engineers and researchers with a concise, deployment-oriented reference for understanding AI energy challenges and selecting practical approaches toward sustainable AI systems.
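
For readers looking for a concrete entry point to the techniques named above, the sketch below illustrates one of them, post-training dynamic quantization, using PyTorch. It is a minimal illustration under assumed conditions: the toy model, layer sizes, and input shape are placeholders chosen for this example and do not come from the paper.

# Minimal sketch: post-training dynamic quantization in PyTorch.
# The model below is a stand-in for a larger network; sizes are illustrative.
import torch
import torch.nn as nn

model_fp32 = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
model_fp32.eval()

# Replace Linear layers with int8-weight versions; activations are
# quantized dynamically at inference time.
model_int8 = torch.quantization.quantize_dynamic(
    model_fp32, {nn.Linear}, dtype=torch.qint8
)

# Rough parameter footprint of the float32 model, for comparison.
fp32_bytes = sum(p.numel() * p.element_size() for p in model_fp32.parameters())
print(f"fp32 parameter size: {fp32_bytes / 1024:.1f} KiB")

# The quantized model runs the same forward pass with lower memory
# traffic, which is a major source of inference-time energy savings.
x = torch.randn(1, 512)
print("int8 model output shape:", model_int8(x).shape)

Int8 weights occupy roughly a quarter of the memory of float32 weights, which reduces memory traffic and, on supported CPUs, inference energy; the exact savings depend on hardware and workload.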

Keywords

Artificial Intelligence, Energy Consumption, Computational Demand, Cloud AI, Edge AI, Sustainable AI.
