Some time ago I was asked by an energy commissioner to provide an estimate of how much forecasting accuracy has improved over the past two decades, and what accuracy level she could expect in her area. Being asked such a question is comparable to being asked: what is the ideal place to go on holiday? The answer depends on many factors. But how do you explain such a complex question in a simple way to someone not familiar with the many dependencies of forecast accuracy?
Thinking about today’s world driven more by economics than by engineering or science, where everything has to relate to an economic factor and indicate whether there is need for subsidy or not, I reckoned that giving a simple and yet qualified answer is impossible without an investigation and analysis of exactly that case. The example shows how strangely we deal with complexity today and how the focus on economic growth has reduced our lives to rather simplistic models of a reality that in fact is growing exponentially in complexity, in all areas of our life.
What Level of Accuracy Can We Expect? It’s Not So Simple
So let me explain why progress in forecasting accuracy is not something that can easily be put into numbers and related to a single economic factor: forecasting for wind and solar resources has been in constant flux over the past decade. Many new and different algorithms and methodologies have been developed for all sorts of problems around the world. When we look at accuracy, however, the picture becomes very skewed. In some countries or areas, forecasting accuracy — when measured with an average statistical metric such as mean absolute error — is better than it was 10 years ago; in other areas, it is worse or unchanged.
Why is this? One thing that holds true across locations is that almost all countries, regions, and areas in which wind or solar energy has been part of the energy transition have experienced constant change, be it in capacity, policy, or economies of scale. But the speed of change differs across areas: deployment of wind and solar energy is increasing in some areas from month to month, while in others the change is slower, perceptible only year over year. In addition, as is well known, spatial dispersion lowers forecast error significantly. When assets are distributed evenly over an area, aggregate accuracy increases without any change to the forecasting methodology. But as soon as the dispersion level decreases, or some areas see stronger or weaker deployment, such imbalances change the forecast accuracy, again without any change to the algorithm.
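To make the dispersion effect concrete, here is a minimal, hypothetical sketch. It assumes independent, identically distributed forecast errors at each site (real sites are spatially correlated, so the reduction in practice is weaker than this idealized case), and shows how the mean absolute error of the aggregated portfolio shrinks as more dispersed sites are combined, with no change to the forecasting method itself:

```python
import numpy as np

rng = np.random.default_rng(42)
hours = 10_000  # number of simulated forecast intervals

def aggregate_mae(n_sites: int) -> float:
    """MAE of the portfolio forecast error, per unit of installed capacity.

    Hypothetical setup: each site's forecast error is independent and
    normally distributed with a standard deviation of 10% of its capacity.
    """
    errors = rng.normal(0.0, 0.10, size=(hours, n_sites))
    portfolio_error = errors.mean(axis=1)  # aggregate error per unit capacity
    return float(np.abs(portfolio_error).mean())

for n in (1, 4, 16, 64):
    print(f"{n:3d} dispersed sites: portfolio MAE = {aggregate_mae(n):.4f}")
```

Under these idealized assumptions the portfolio MAE falls roughly as one over the square root of the number of sites, which is why the same forecasting algorithm looks more accurate as an evenly dispersed fleet grows.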
Let’s have a look at Ireland. In 2000, Ireland’s transmission system operator (TSO) placed a moratorium on wind energy development at an installed capacity of 500 MW; the TSO could not securely handle more wind on the island. About two decades later, Ireland runs the island grid with wind feeding nearly 70 percent of demand in windy hours and is proud of an installed wind fleet of 5000 MW. How was that possible? Did forecast accuracy improve so much that the TSO can operate with a tenfold increase in installed capacity and feed-in? The short answer is no. It was made possible through continuous development of improved security measures and forecasting tools as deployment increased and new issues arose for grid operation. Forecasting started with average long-term forecasts for the next day; today it comprises long-term and short-term forecasts with their respective uncertainties, ramping reserve forecasts, and a high-wind-speed shutdown warning system. If only one part of that system improves — for example, the accuracy of a day-ahead forecast — the system as a whole does not necessarily improve. It is a set of tools that need to work together in an optimal way.
Measuring forecast quality and giving incentives to a forecast provider is complex: it requires investigating the costs that suboptimal grid operation incurs due to an inaccurate forecast, and defining what “bad” actually means in terms of operation, security, and costs. Is “bad” a large short-term error that may lead to a lack of reserve on the grid and hence congestion or even blackouts? Or is “bad” many small errors that significantly increase balancing costs? If a missed warning of a high-wind-speed event can cause a system blackout, or costs significant enough that the energy commissioner gets involved, one simple metric is no longer sufficient to evaluate the forecast accuracy of such a complex system. It makes a difference, though, if accuracy measures indicate what is important to the user and thereby help the forecast provider direct improvement efforts to where they have impact. For this reason, a single measure is just as useless as having one forecast for everything!
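The trouble with a single metric can be shown with a small, entirely hypothetical example: two forecast error series with identical mean absolute error, one consisting of many small deviations (which drive up balancing costs) and one consisting of rare, very large misses (the kind that threaten reserves and grid security). Judged by MAE alone, the two are indistinguishable:

```python
import numpy as np

# Hypothetical forecast error series, in MW, over 1000 intervals.
steady = np.full(1000, 10.0)   # many small 10 MW errors every interval
spiky = np.zeros(1000)
spiky[::100] = 1000.0          # rare 1000 MW misses, e.g. storm shutdowns

for name, err in [("steady", steady), ("spiky", spiky)]:
    print(f"{name}: MAE = {np.abs(err).mean():.0f} MW, "
          f"max error = {np.abs(err).max():.0f} MW")
```

Both series have an MAE of 10 MW, yet their operational consequences are entirely different, which is why a validation scheme needs metrics that reflect what actually matters to the user, such as extreme-event misses alongside average error.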
How to Deal with the Complexity
In IEA Wind Task 36, I was involved, together with a group of experts, in developing a best-practice guideline for the industry, the “IEA Recommended Practice for Selecting Renewable Power Forecasting Solutions.” The guideline encouraged and inspired the industry to start analyzing the impact that selected forecast types have on internal processes, security, and their associated costs. Such a process assists the development of validation schemes that measure progress and show where accuracy is low and improvement is needed. In other words, it is an encouragement to “become comfortable with the uncomfortable” and accept the complexity of today’s energy systems. This makes it possible to sustainably improve forecast accuracy, rather than letting forecast providers tweak one metric that may have little or no impact on real operational issues. It is an individual process that takes time, but once established, it can lead to sustainable growth and the incorporation of more and more renewable energy until we reach 100 percent. What seems impossible today can only become possible if we have the courage to engage in complex problem-solving. The reward is achieving the impossible.
Corinna Möhrlen
WEPROG
Mohamed Abuella says
Thanks for posting this article…
Yes, relying on economic factors alone to evaluate the forecast accuracy of renewables is a dangerous thing. Some folks try to test forecasts with plain economic dispatch (caring only about the cost) to convince electric operators of the value of their forecast models. For electric power systems, however, the forecasts should be tested against security-constrained economic dispatch, or an optimal power flow should be run with the forecasts, to check that the cost as well as the security and reliability constraints are not violated (constraints such as generator, transmission line, and node voltage ratings and limits).
Jack Fox says
Thanks Corinna for posting this.
I completely agree that improvements to forecasting accuracy should not be seen as a silver bullet.
A holistic approach, centred on a thorough understanding of forecasting uncertainty and combined with a considered integration of the realities (i.e., the errors) of forecasting into grid operation and market design, is increasingly important to ensure cost-effective, efficient, and reliable electric grid supply.