Towards a Reproducible Photovoltaic Modeling Validation Process
Lelia Deville, Kevin S Anderson, Marios Theristis
Sandia National Laboratories, Albuquerque, NM, United States

Predicting the output of a photovoltaic (PV) system requires a chain of models, such as plane-of-array (POA) irradiance transposition and cell/module temperature, with the exact set depending on the available meteorological data. As data flow through this modeling pipeline, uncertainties arise and accumulate, making it difficult to determine whether discrepancies stem from measurement or modeling errors. When multiple models are combined in a single estimate, errors cannot be directly attributed to any one model. Furthermore, because modeling pipelines are typically assessed independently, without a standardized process, the outcomes are irreproducible, making meaningful side-by-side comparisons of different models challenging. To reduce these uncertainties, a validation process was created for the models commonly used in the PV modeling pipeline. This process gauges a model's performance through three phases of analysis: basic error analysis (RMSE, MBE, etc.), residual analysis, and baseline model comparison. The process is demonstrated for irradiance transposition and decomposition, module temperature, incidence angle modifier (IAM), and PV performance modeling in the form of individual Jupyter Notebooks. Using a full year of hourly field data, these notebooks allow modelers to evaluate their models across a wide range of conditions and to identify seasonal or time-of-day performance differences. The annual energy yield is calculated from both modeled and measured values to observe the direct impact of a model's errors. The analysis also provides insight into which variables within a model could be degrading its output. Finally, the user's model is compared against a well-established, validated baseline model of the same type.
Following this process will allow modelers and model developers to improve their tools and will ultimately result in lower uncertainty and more consistent results.
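As a minimal sketch of the first phase (basic error analysis), the metrics named above can be computed as follows. The function names, sign convention (modeled minus measured), and sample irradiance values are illustrative assumptions, not taken from the notebooks themselves:

```python
import numpy as np

def rmse(measured, modeled):
    """Root mean square error between measured and modeled series."""
    measured = np.asarray(measured, dtype=float)
    modeled = np.asarray(modeled, dtype=float)
    return float(np.sqrt(np.mean((modeled - measured) ** 2)))

def mbe(measured, modeled):
    """Mean bias error; positive values indicate model over-prediction
    under the (assumed) modeled-minus-measured convention."""
    measured = np.asarray(measured, dtype=float)
    modeled = np.asarray(modeled, dtype=float)
    return float(np.mean(modeled - measured))

# Hypothetical hourly POA irradiance samples (W/m^2)
measured = [800.0, 820.0, 790.0]
modeled = [810.0, 815.0, 800.0]
print(rmse(measured, modeled))  # aggregate error magnitude
print(mbe(measured, modeled))   # systematic bias
```

In practice these metrics would be computed over a full year of hourly data and grouped by season or hour of day, which is what enables the time-of-day performance assessment described above.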