
‘ChatGridPT’: Large language models could tackle energy sector challenges

21 June 2024

Large language models like ChatGPT could help run and maintain the grid, a new study suggests.

Image courtesy of Harvard John A. Paulson SEAS

Much has been discussed about the promise and limitations of large language models in industries such as education, healthcare and even manufacturing. But what about energy? Could large language models (LLMs), like those that power ChatGPT, help run the energy grid? 

New research, co-authored by Na Li, Winokur Family Professor of Electrical Engineering and Applied Mathematics at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), suggests that LLMs could play an important role in co-managing some aspects of the grid, including emergency and outage response, crew assignments, and wildfire preparedness and prevention. 

However, security and safety concerns need to be addressed before LLMs can be deployed in the field. 

“There is so much hype with large language models, it’s important for us to ask what LLMs can do well and, perhaps more importantly, what they can’t do well, at least not yet, in the power sector,” said Le Xie, Professor of Electrical & Computer Engineering at Texas A&M University and corresponding author of the study. 

“The best way to describe the potential of LLMs in this sector is as a co-pilot. It’s not a pilot yet – but it can provide advice, a second opinion, and very timely responses with very few training data samples, which is really beneficial to human decision making.”

The research team, which included engineers from Houston-based energy provider CenterPoint Energy and grid operator Midcontinent Independent System Operator, used GPT models to explore the capabilities of LLMs in the energy sector – and identified both strengths and weaknesses. 

The strengths of LLMs – their ability to generate logical responses from prompts, to learn from limited data, to delegate tasks to embedded tools and to work with non-text data such as images – could be leveraged for tasks such as detecting broken equipment, forecasting electricity load in real time and analysing wildfire patterns for risk assessments.
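As a rough, hypothetical sketch of one of those tasks (not code from the study), a few-shot load-forecasting prompt might be assembled along these lines; the `query_llm` call is a placeholder for whichever LLM API an operator actually uses, and the load figures are invented example values.

```python
# Hypothetical sketch: few-shot prompting an LLM for short-term load forecasting.
# The example readings are invented, and `query_llm` is a placeholder for a real
# chat-completion client - neither comes from the study itself.

def build_forecast_prompt(recent_load_mw, horizon_hours):
    """Assemble a few-shot prompt asking the model for an hourly load forecast."""
    examples = (
        "Past 6 hourly loads (MW): 410, 425, 460, 502, 540, 555 -> next 2 hours: 563, 548\n"
        "Past 6 hourly loads (MW): 390, 380, 372, 368, 371, 385 -> next 2 hours: 402, 430\n"
    )
    history = ", ".join(f"{x:.0f}" for x in recent_load_mw)
    return (
        "You are assisting a grid operator with short-term load forecasting.\n"
        f"{examples}"
        f"Past 6 hourly loads (MW): {history} -> next {horizon_hours} hours:"
    )

prompt = build_forecast_prompt([480, 495, 510, 530, 545, 552], horizon_hours=2)
# response = query_llm(prompt)  # placeholder call to the chosen LLM API
print(prompt)
```

In such a setup the handful of worked examples in the prompt stands in for the "very few training data samples" the researchers describe, with the model's reply treated as advice for a human operator rather than an automatic control action.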

However, there are significant challenges to implementing LLMs in the energy sector – not the least of which is the lack of grid-specific data to train the models. For obvious security reasons, crucial data about the national power system is not – and cannot be made – publicly available. 

Another issue is the lack of safety guardrails. The power grid, like autonomous vehicles, needs to prioritise safety and incorporate a large safety margin when making real-time decisions. 

LLMs also need to get better at providing reliable solutions and at being transparent about their uncertainties, said Li. 

“We want foundational LLMs to be able to say, ‘I don’t know’ or ‘I only have 50 percent certainty about this response’, rather than give us an answer that might be wrong,” said Li. 

“We need to be able to count on these models to provide us with reliable solutions that meet specified standards for safety and resiliency.”
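As a hedged illustration of what such a guardrail could look like in practice (an assumption, not the study's method), an application could instruct the model to report a confidence score or abstain, then escalate low-confidence replies to a human; the JSON format, the 0.8 threshold and the mocked reply below are all hypothetical.

```python
import json

# Hypothetical sketch: require the model to report its own confidence and allow
# it to say "I don't know", then escalate anything below a conservative threshold.
# The instructions, threshold and mocked reply are assumptions for illustration.

UNCERTAINTY_INSTRUCTIONS = (
    "Answer the operator's question about the outage report. "
    'Respond in JSON as {"answer": ..., "confidence": 0.0-1.0}. '
    "If unsure, set \"answer\" to \"I don't know\" and give a low confidence."
)

def gate_response(raw_json, threshold=0.8):
    """Parse the model's JSON reply and escalate to a human when confidence is low."""
    reply = json.loads(raw_json)
    if reply["confidence"] < threshold or reply["answer"] == "I don't know":
        return "Escalate to a human operator"
    return reply["answer"]

# Example with a mocked model reply (no real LLM call is made here):
print(gate_response('{"answer": "Feeder 12 likely tripped", "confidence": 0.55}'))
```

The point of a gate like this is to keep the model in the "co-pilot" role the researchers describe: uncertain answers are routed to people rather than acted on directly.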

All of these challenges give engineers a roadmap for future work. 

“As engineers, we want to highlight these limitations because we want to see how we can improve them,” said Li. 

“Power system engineers can help improve security and safety guarantees by either fine-tuning the foundational LLM or developing our own foundational model for the power systems. 

“One exciting part of this research is that it is a snapshot in time. Next year or even sooner, we can go back and revisit all these challenges and see if there has been any improvement.” 

