The 10th anniversary edition of the ESIG forecasting workshop returned to the scene of the inaugural workshop in St. Paul, MN on June 19-21, 2018. To many of those who attended the first workshop on Feb. 21-22, 2008, it was apparent that there had been significant, though in some cases subtle, changes over the past decade. One of the not-so-subtle changes was the weather: the average outside temperature during the 2018 workshop was 70.3°F (21°C), about 64°F (35°C) warmer than the average of 6.7°F (-14°C) recorded during the 2008 event, when a low of -11°F (-24°C) greeted attendees arriving the day before the workshop. This change provided immediate evidence that the ESIG community had learned how to extract greater value from meteorological information in its planning processes!
This dramatic change in the workshop’s external environment provided some motivation to reflect upon the changes in the content and perspectives of the workshops over the past 10 years and to look ahead to the likely direction of their future evolution.
Another obvious sign of change was the name on the workshop agenda. In 2008, it was the Utility Wind Integration Group (UWIG). During the following decade, the name evolved first to UVIG (the Utility Variable-Generation Integration Group) and then, during the past year, to ESIG (the Energy Systems Integration Group). The evolution of the name is symbolic of changes in the stakeholder perspective on the nature, scope and interconnection of the weather forecasting issues that are relevant to energy system operation. The 2008 event was narrowly focused on wind power forecasting. Over the past decade, a more integrated view has emerged: weather-related impacts touch many parts of the electric grid, and these relationships need to be considered together in order to optimize the operation of the electric system.
There has also been a major change in the scope, focus and funding of North American R&D activities in renewable energy forecasting. At the 2008 workshop, there was not a single session or presentation focused on forecasting-related R&D. That activity expanded rapidly after the 2008 workshop, and there have been a number of significant forecasting-related research projects in North America over the past 10 years. These have included major sensor-deployment and modeling projects sponsored by the Department of Energy (DOE) to improve wind and solar forecasting performance, as well as smaller projects sponsored by the DOE and by other public (e.g., the California Energy Commission) and private (e.g., EPRI) entities. Presentations, and even entire sessions, focused on the design and results of these projects gradually became a core part of the workshop agenda during the past decade. This continued in 2018 and is likely to remain a key component of future workshops.
A common phrase heard in the discussions at the 2008 workshop was “we need more accurate forecasts.” The consensus of the ESIG community is that this R&D (among other factors) has enabled forecast accuracy to improve significantly over the past 10 years, and many presentations at the past decade’s workshops (including some at the 2018 event) documented improvements in wind or solar forecast performance for specific applications. However, the lament “we need better forecasts” was still heard during the breaks between the 2018 sessions.
One continuing issue is that a representative quantification of the improvement in state-of-the-art performance over this period has been elusive, and this has been a topic of lively discussion at workshops throughout the past decade. The primary reason is that forecast performance is highly variable: it depends on the evaluation metric, the location, the weather regime, the amount and quality of data from the forecast target facility, the amount and diversity of aggregation, and many other factors. In addition, the evidence indicates that there is no single combination of forecasting tools that will perform best for all application scenarios.
A consequence of this is that it is very difficult to determine the actual degree of improvement in state-of-the-art forecast performance (if any) that has resulted from specific or collective R&D activities. In practice, almost all forecasting improvement projects “demonstrate” improvements because the project’s comparison benchmark is typically defined in a way that facilitates a favorable outcome. This topic has yielded spirited debate at recent workshops and is likely to continue to do so, fueled by the component of the DOE’s second solar forecast improvement project that is focused on formulating methods to provide more rigorous and consistent benchmarks of forecast performance.
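To make the benchmarking pitfall concrete, here is a minimal sketch in Python using an entirely synthetic wind series (the AR(1) parameters and the forecasts are invented for illustration, not taken from any project): the same hypothetical forecast earns a healthy skill score against a weak climatology benchmark while actually losing to simple persistence.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Synthetic hourly wind capacity factor as a persistent AR(1) process.
# Purely illustrative -- real plant data is far messier.
obs = np.empty(n)
obs[0] = 0.4
for t in range(1, n):
    obs[t] = np.clip(0.4 + 0.9 * (obs[t - 1] - 0.4) + rng.normal(0, 0.05),
                     0.0, 1.0)

# A hypothetical "improved" forecast: persistence plus extra noise.
persistence = np.roll(obs, 1)          # previous hour's value (the
                                       # wrap-around at t=0 is ignored here)
fcst = persistence + rng.normal(0, 0.03, n)

climatology = np.full(n, obs.mean())   # a weak benchmark

def rmse(f, o):
    return np.sqrt(np.mean((f - o) ** 2))

# Skill score: 1 - RMSE(forecast) / RMSE(benchmark).
for name, bench in [("climatology", climatology),
                    ("persistence", persistence)]:
    skill = 1.0 - rmse(fcst, obs) / rmse(bench, obs)
    print(f"skill vs {name}: {skill:+.2f}")
```

With this synthetic series the forecast scores roughly +0.5 against climatology but roughly -0.15 against persistence: the headline improvement number depends almost entirely on which benchmark the project chose.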
Another major change over the past decade is the increased awareness of the difference between “accuracy” and “value.” “Accuracy” is an assessment of how well a forecast agrees with observations of the forecast variable, and it is unavoidably tied to what one means by “how well.” “Value” is a measure of how much benefit the forecast information provides to a user’s application. As the number of renewable energy applications has expanded, it has become apparent to some researchers and forecast providers that many forecast users are (to adapt a lyric from a 1980 movie) “looking for accuracy in all the wrong places.” Several presentations and discussions at the 2018 workshop noted that traditional forecast performance metrics such as mean absolute error (MAE) and root mean square error (RMSE) do not provide a good measure of the value of a forecast to a specific application and often provide the wrong incentive for forecast improvement efforts. Many went on to describe how forecasts can be (or should be) formulated and evaluated for specific operational issues or scenarios. These statements were often focused on the prediction of extreme or atypical events, such as large short-term ramps or variability in wind or solar production, and situations that lead to rapid wind production cut-outs, such as high wind speeds or icing.
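As a concrete illustration of the point about MAE and RMSE, the short sketch below uses invented numbers (not drawn from any workshop presentation) to reproduce the well-known “double penalty” effect: a forecast that misses a ramp event entirely can score better on both metrics than one that predicts the event a few intervals late.

```python
import numpy as np

# Hypothetical normalized wind power: steady output with a 10-interval
# ramp-down event (e.g., a frontal passage) from t=40 to t=50.
n = 100
obs = np.full(n, 0.8)
obs[40:50] = 0.2

# Forecast A misses the event entirely and predicts steady output.
fcst_a = np.full(n, 0.8)

# Forecast B predicts the full event, but six intervals late.
fcst_b = np.full(n, 0.8)
fcst_b[46:56] = 0.2

def mae(f, o):
    return float(np.mean(np.abs(f - o)))

def rmse(f, o):
    return float(np.sqrt(np.mean((f - o) ** 2)))

for name, f in [("A (no event)", fcst_a),
                ("B (event, 6 intervals late)", fcst_b)]:
    print(f"{name}: MAE={mae(f, obs):.3f}, RMSE={rmse(f, obs):.3f}")

# Prints:
#   A (no event): MAE=0.060, RMSE=0.190
#   B (event, 6 intervals late): MAE=0.072, RMSE=0.208
```

An operator scheduling reserves would almost certainly prefer Forecast B, yet both metrics rank it below the forecast that missed the event entirely, because a slightly out-of-phase event prediction is penalized twice.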
Another potential contributor to forecast value that has been underutilized in applications is uncertainty estimation. Estimates of uncertainty are typically delivered in the form of a probabilistic forecast. Users often dislike probabilistic forecasts (and often avoid their use) because they generally do not align well with the deterministic decision-making environment of most users and they add complexity to the forecast information flow. An increasing amount of time has been devoted to this topic at recent workshops, and that was certainly the case in 2018. A major emerging topic in this area is the flow of probabilistic forecast information directly into energy and market management systems.
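For readers less familiar with how probabilistic forecasts are scored, the sketch below implements the standard pinball (quantile) loss; the observations and the 10th/90th-percentile forecasts are hypothetical numbers chosen only for illustration.

```python
import numpy as np

def pinball_loss(obs, q_fcst, tau):
    """Pinball (quantile) loss for quantile level tau in (0, 1).

    The standard score for a single quantile of a probabilistic
    forecast: under-forecasts are weighted by tau, over-forecasts
    by (1 - tau), so minimizing it yields the true tau-quantile.
    """
    diff = obs - q_fcst
    return float(np.mean(np.maximum(tau * diff, (tau - 1.0) * diff)))

# Hypothetical observations and quantile forecasts of normalized
# solar power (numbers invented for illustration).
obs = np.array([0.55, 0.60, 0.20, 0.75])
q10 = np.array([0.35, 0.40, 0.05, 0.50])
q90 = np.array([0.70, 0.72, 0.45, 0.85])

print(f"pinball @ q10: {pinball_loss(obs, q10, 0.1):.4f}")
print(f"pinball @ q90: {pinball_loss(obs, q90, 0.9):.4f}")
```

Averaging the pinball loss over all quantile levels gives, up to a constant factor, the continuous ranked probability score (CRPS), a common single-number summary of probabilistic forecast quality.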
Although there have been many changes in the content and perspective of the workshops over the past 10 years, the core mission has not really changed: it is still to facilitate the optimal use of forecast information to achieve the required high level of electric system reliability at the lowest possible cost. The perspective on what that statement means, and how it can best be accomplished, is likely to continue to evolve, perhaps in surprising ways, over the next decade. However, the expectation among many stakeholders still appears to be that the ESIG forecasting workshops will continue to provide one of the most valuable forums for shaping that evolution.
John W. Zack
Senior Advisor
UL Renewables