May 28, 2023


Paper on a new way to test the accuracy of tropical cyclone forecasts released online in Weather and Forecasting – Hurricane Research Division


Evaluating the impact of observing systems, new modeling systems, or model upgrades on forecasts of tropical cyclones (TCs) is vital to ensure optimal forecast improvements. Differences between observations (what really happened) and forecasts are calculated and are called forecast errors. Historically, analyzing these errors involved simply looking at average errors. Yet doing so can lead to misleading conclusions if the errors do not follow a regular pattern, for example if one forecast or a small set of forecasts is particularly poor. This paper presents a new, straightforward approach (that we call a consistency metric) that combines useful information from several different metrics to enable a holistic comparison between two sets of forecasts, and provides examples of how it improves upon older verification techniques.

An important way to improve forecasts is to look at those that failed and learn why that happened. Scientists measure the errors by taking the difference between the forecast and what actually happened, and a group of forecasts is typically averaged together to create an average error. Because forecasters and models predict many different things all over the globe, the amount of information can be overwhelming and time-consuming to sort through. Perhaps this is why the average error, the simplest combination of all this information, is routinely used.
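As a simple illustration of how forecast errors are computed and averaged, the sketch below uses hypothetical TC intensity values in knots (not data from the paper):

```python
# Hypothetical TC intensity forecasts and verifying observations, in knots.
forecasts    = [62.0, 75.0, 48.0, 90.0, 55.0]
observations = [60.0, 70.0, 50.0, 65.0, 54.0]

# Forecast error = forecast minus what actually happened.
errors = [f - o for f, o in zip(forecasts, observations)]

# Verification typically works with the magnitude of each error.
abs_errors = [abs(e) for e in errors]

# The average error summarizes the whole set in one number, but the single
# bad forecast here (90 kt versus 65 kt observed, a 25-kt miss) inflates it
# well above the other errors of 1-5 kt.
mean_abs_error = sum(abs_errors) / len(abs_errors)
print(mean_abs_error)  # → 7.0
```

Four of the five forecasts are within 5 kt of the observations, yet the average error is 7 kt, which is the kind of skew the next paragraph describes.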

The average error can be skewed by a few forecasts with very large errors and yield misleading results. This paper introduces a new tool, a consistency metric (e.g., Fig. 1a), that combines information from three different measures of how good TC forecasts are to determine whether results are consistent across forecasts. These measures are: 1) the percentage difference between the average errors of two sets of forecasts (e.g., Fig. 1b); 2) a metric that compares forecasts from the two sets head-to-head and records how often one outperforms the other (e.g., Fig. 1c); and 3) the percentage difference between the median errors (e.g., Fig. 1d) of the two sets, where the median is the error value with as many forecasts above it as below it.
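The three ingredients can be sketched as follows, using two hypothetical sets of absolute intensity errors (EXP A and EXP B) verified against the same cases at one forecast lead time. The agreement rule at the end is an illustrative assumption, not the paper's exact criteria:

```python
from statistics import mean, median

# Hypothetical absolute intensity errors (kt), paired by forecast case.
exp_a = [4.0, 6.0, 3.0, 18.0, 5.0, 7.0]
exp_b = [5.0, 7.0, 4.0, 10.0, 6.0, 8.0]

# 1) Percentage difference between the average errors (positive = A better).
mean_pct_diff = 100.0 * (mean(exp_b) - mean(exp_a)) / mean(exp_b)

# 2) Head-to-head: how often does A beat B on the same case?
wins_a = sum(a < b for a, b in zip(exp_a, exp_b))
freq_a_better = 100.0 * wins_a / len(exp_a)

# 3) Percentage difference between the median errors (positive = A better).
median_pct_diff = 100.0 * (median(exp_b) - median(exp_a)) / median(exp_b)

# Call the result improved (or degraded) only when all three metrics agree;
# anything else is neutral. This all-agree rule is an assumption here.
improved = mean_pct_diff > 0 and freq_a_better > 50.0 and median_pct_diff > 0
degraded = mean_pct_diff < 0 and freq_a_better < 50.0 and median_pct_diff < 0
verdict = "improved" if improved else ("degraded" if degraded else "neutral")
```

In this example EXP A beats EXP B on five of six cases and has the lower median error, yet its average error is worse because of the single 18-kt miss; since the three measures disagree, the verdict is neutral. This is exactly the situation where an average-error-only comparison would mislead.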

Fig. 1. The impact of dropsonde data on TC intensity forecasts. (a) The consistency metric, where shaded boxes indicate forecast lead times at which one set of forecasts was better (green) or worse (rust) than another set, captures the information from the three metrics below. (b–d) The green line represents the percentage difference between one set of forecasts (EXP A) and another (EXP B). For example, if EXP A had an average error of 19 kt and EXP B had an average error of 20 kt, the percent difference is (20 − 19)/20, or 5%. The average of all percentage values across all forecast lead times is given in the bottom right corner. Panel d shows the same thing except for median values instead of average values. Panel c shows the percentage of times individual EXP A forecasts are better than EXP B forecasts, and the total number of forecasts, or sample size, is shown at the bottom. Note that while the average error (panel b) shows improvement at nearly every forecast lead time, particularly after 60 h, the head-to-head forecast comparison (panel c) and the median error across all forecasts (panel d) show that these improvements are not consistent across the three metrics. This is concisely shown in panel a at the top, as the consistency metric indicates mostly neutral (gray) results after 60 h despite the average-error improvement.

Important Conclusions:

  • The new consistency metric is a straightforward way to guide analysis and increase confidence in results.
  • This could in turn help forecasters learn from challenging cases and accelerate and optimize developments and upgrades in numerical weather prediction (NWP) models.

The consistency metric was developed in part under the auspices of the Cooperative Institute for Marine and Atmospheric Studies (CIMAS), a Cooperative Institute of the University of Miami and the National Oceanic and Atmospheric Administration, under cooperative agreement NA20OAR4320472, while the lead author was supported by the FY18 Hurricane Supplemental (NOAA Award ID NA19OAR0220188).

