Why some AI models emit 50 times more greenhouse gas to answer the same question

Like it or not, large language models have quickly become embedded in our lives. Their hefty energy and water demands could also be speeding us toward climate chaos. However, some LLMs may release far more pollution than others, a new study finds.

Unfortunately, and perhaps unsurprisingly, more accurate models tend to have the greatest energy costs, according to the study, published in the journal Frontiers in Communication.

It is difficult to estimate exactly how bad LLMs are for the environment, but some studies have suggested that training ChatGPT used up to 30 times more energy than the average American consumes in a year. What has been less clear is whether some models have steeper energy costs than their peers when answering questions.

Researchers at the Hochschule München University of Applied Sciences in Germany evaluated 14 LLMs ranging from 7 billion to 72 billion parameters (the internal settings that fine-tune a model's understanding and language generation) on 1,000 benchmark questions covering a variety of subjects.

LLMs convert each word, or part of a word, in a prompt into a string of numbers called a token. Some LLMs, particularly reasoning models, also insert special "thinking tokens" into the sequence, allowing for extra internal computation and reasoning before the output is generated. This conversion, and the subsequent computations the LLM performs on the tokens, use energy and release CO2.
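To make the token idea concrete, here is a minimal sketch that counts tokens with the open-source tiktoken tokenizer. The encoding name and the example texts are illustrative assumptions, not the tokenizer or prompts used in the study; the point is simply that a verbose reasoning trace means many more tokens to process than a concise reply.

```python
# Illustrative token counting; tokenizer choice and texts are assumptions.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a common GPT-style encoding

prompt = "What is the derivative of x**2?"

# A concise model might answer directly...
concise_answer = "The derivative of x**2 is 2*x."

# ...while a reasoning model first emits "thinking" text before the answer,
# so far more tokens are processed overall for the same question.
reasoning_trace = (
    "Let me think. The power rule says d/dx x**n = n*x**(n-1). "
    "Here n = 2, so the derivative is 2*x**1, which is 2*x."
)

for label, text in [
    ("prompt", prompt),
    ("concise answer", concise_answer),
    ("reasoning trace + answer", reasoning_trace + " " + concise_answer),
]:
    print(f"{label}: {len(enc.encode(text))} tokens")
```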

The scientists compared the number of tokens generated by each model they tested. On average, the study found, reasoning models created 543.5 thinking tokens per question, while concise models required only 37.7 tokens per question. In the ChatGPT world, for example, GPT-3.5 is a concise model, while GPT-4o is a reasoning model.

The authors found that this reasoning process drives up energy demand. "The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach," Maximilian Dauner, a researcher at Hochschule München University of Applied Sciences, said in a statement. "We found that reasoning-enabled models produced up to 50 times more CO2 emissions than concise response models."

The more accurate the models were, the more carbon they emitted. The reasoning model Cogito, which has 70 billion parameters, reached up to 84.9% accuracy, but it also emitted three times more CO2 than similarly sized models that produced more concise answers.

"Currently, we see a clear accuracy-sustainability trade-off inherent in LLM technologies," Dauner said. "None of the models that kept emissions below 500 grams of CO2 equivalent achieved higher than 80% accuracy on answering the 1,000 questions correctly." CO2 equivalent is the unit used to measure the climate impact of various greenhouse gases.

Subject matter also mattered. Questions that required detailed or complex reasoning, such as abstract algebra or philosophy, led to emissions six times higher than more straightforward topics, the study found.

There are some caveats, however. Emissions depend heavily on how the local energy grid is structured and on which models are examined, so it is not clear how well these findings generalize. Nevertheless, the study authors said they hope the work will encourage people to be "selective and thoughtful" about their LLM use.
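A back-of-the-envelope sketch shows why the local grid matters so much: the same amount of energy per answer translates into very different CO2-equivalent emissions depending on grid carbon intensity. All numbers below are illustrative assumptions, not figures from the study.

```python
# Assumed energy for one long, reasoning-style answer, in kWh (illustrative).
ENERGY_PER_ANSWER_KWH = 0.002

# Rough, assumed grid carbon intensities in grams of CO2 equivalent per kWh.
grid_intensity_g_per_kwh = {
    "mostly coal": 900,
    "mixed grid": 400,
    "mostly renewables": 50,
}

for grid, intensity in grid_intensity_g_per_kwh.items():
    emissions_g = ENERGY_PER_ANSWER_KWH * intensity
    print(f"{grid}: ~{emissions_g:.2f} g CO2e per answer")
```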

"Users can significantly reduce emissions by prompting AI to generate concise answers or by limiting the use of high-capacity models to tasks that genuinely require that power," Dauner said.
