Battle of the module reliability rankings


On the surface, PV Evolution Labs (PVEL) and the Renewable Energy Test Center (RETC) may appear to have a fair amount in common. Both have been in the business of testing solar modules for around a decade, with tests that go above and beyond what is needed to qualify for UL or IEC certification.

For years, PVEL has been producing an annual Module Reliability Scorecard, showing the results of its module testing and identifying top performers, with the fifth annual edition released today at the SNEC trade show in Shanghai. But this year RETC has released its inaugural PV Module Index, a ranking which looks a whole lot like the PVEL Scorecard, including many identical tests.

RETC’s Module Index was released to the press a day earlier, and while the beginning of the SNEC show is the most strategic time of the year to catch the eye of PV makers, the timing hardly seems like a coincidence.

We take a look at the two to find the similarities and the differences.

All the same tests, presented differently

One striking feature of the PV Module Index versus the Module Reliability Scorecard is that they run all of the same tests. Both use a Damp Heat test that runs at 85°C and 85% relative humidity for 2,000 hours, and both feature a Thermal Cycling test that runs from 85°C to -40°C; however, while PVEL specifies 800 hours of 3-hour cycles, RETC does not specify how many 6-hour cycles it runs products through.

Many other tests are quite similar, with both testing for Potential Induced Degradation (PID). And while RETC’s Module Index ranks modules according to a Humid Freeze test which does not appear as its own category in the PVEL Scorecard, PVEL includes this test as part of its Dynamic Mechanical Load Sequence.

RETC has emphasized that in addition to testing reliability, it also looks at what it describes as Performance Indicators and Quality Indicators. In some ways the tests and data shown in these categories are similar to those in the PVEL Scorecard, but in some places they differ.

For example, RETC’s Performance Indicators look at Light Induced Degradation (LID) tests, and PVEL has LID tests as part of its Scorecard. However, the Performance Indicators also include an assessment that combines module efficiency with PAN files run through the PVsyst simulation package, to model expected energy output in different climates.

PVEL also collects PAN files and shares this data through its Product Qualification Program (PQP), but does not present it as part of its Scorecard, meaning that while the data collected is similar, RETC has found a way to provide additional information in its Module Index.

However, instead of running simulations on PAN files, PVEL’s data collection includes results from modules that sit in an actual field for a year, through its field exposure program.

Quality and the BOM

In both of these sets of testing regimes, the essential question is identical: how to determine actual performance over an extended lifetime in the field, under varying climate conditions.

To this end both companies will be including tests for Light and Elevated Temperature Induced Degradation (LeTID), a performance issue separate from LID, in future editions of their ranking reports.

In an effort to further differentiate its Index, RETC includes a Quality section that looks at not only how well modules perform over a series of tests, but also how many tests manufacturers participate in, which could be seen as a reward for not cherry-picking the tests that your module will do well in.

One aspect of RETC’s Quality ranking is whether or not manufacturers re-test their products when they change bill of materials (BOM):

RETC works with customers to help them analyze their changes to see if additional testing should take place in order to have confidence that a change does not result in negative reliability results. Those manufacturers that are transparent and show a commitment toward testing their specific changes are noted as demonstrating high levels of commitment to quality.

However, a specific BOM is the baseline for its competitor, as PVEL is very clear that it only certifies a specific BOM, and that if this BOM changes, the module must be submitted again to the program. It also shares the specific BOM that it tests with its downstream partners through its PQP.

Additionally, both companies state that they send representatives to factories to watch over the process of producing modules for testing, from the opening of materials through all process steps to the sealing of crates for shipment. But in the Scorecard, PVEL is more specific on this point, citing its five-point factory witness system.

In the end, more companies competing on who can deliver the most stringent test results and share the most information could be good for the industry. It is also notable that many companies showed up as top performers in both sets of results, with Jinko and LONGi among the brands achieving recognition from both RETC and PVEL.