
We welcome the opportunity to submit our views on this fifth consultation of the Market Performance Framework Reform Programme.

Our response

2. Proposed MPF Performance STANDARDS: To what extent do you consider that the standards for the following KPIs are set at an appropriate level to incentivise good outcomes for all customers for…

Meter reading is a core service that business customers expect and depend on for accurate billing. While reduced competitive pressures dilute incentives for retailers to provide meter reading services to a high standard, it is vital that the MPF provides strong incentives. Therefore, while we understand the reasons for performance standards not being set at 100% (to reflect where some meters cannot be read on first attempt due to challenging circumstances), the standard needs to be stretching in order to drive a high level of performance to the benefit of business customers.

We support a key part of MOSL’s rationale for setting performance standards: setting a target that is stretching compared to average market performance, while also taking into account the range of individual retailer performance. From the supplementary data provided in the consultation, it is positive to see that the proposed 85% and 75% standards will represent a stretching target, particularly for those retailers currently performing well below the market average. However, in respect of the external target of 85%, we want this to be increased to 90%, as a number of retailers have already met or exceeded 85%. As biannually read meters make up the majority of the market, it is important that the target is stretching enough to drive improved performance.

For the benefit of customers, it is important that retailers are meeting or exceeding the required standards for meter reading. Given the difference in market performance between non-smart and smart meters, we support the proposal for two KPIs measuring these separately. Combining the two into one measure would make it very difficult to set a single performance standard given the current variance in performance. It would also create the risk that retailers with a high proportion of smart meters could rely on these to meet the performance standard, at the expense of non-smart meters that are harder to read. We do not want to see customers with non-smart meters deprioritised, leading to a reduction in their billing accuracy and service delivery.

We agree the performance standard for smart meters should be set higher than the non-smart meter standard to reflect the relative ease of obtaining smart reads. While the proposed standard for external meters appears reasonable, MOSL and the PAC need to determine why current performance is lower for biannually read internal meters, and for all monthly read meters, which has led to the proposal for lower performance standards for these meters. It is important that the reasons for lower performance are then addressed, as the location of a smart meter, and the read cycle it is on, should not make a material difference to how successfully smart meter reads are obtained. To ensure retailers are required to meet stretching performance standards in respect of all smart meters, MOSL and the PAC need to prioritise this as an area of review, as all smart metered customers need to receive more regular reads and the associated benefits these bring.

We agree that the same performance standard should be used for all non-market meters. Given the low number of meters in this category, there is insufficient variance in terms of read frequency and meter type for different performance standards to be meaningful. We also agree that setting this at 98% represents a stretching target for wholesalers. Ensuring non-market meters are frequently read should be as important for wholesalers as reading their regular household meters, so their entire customer base receives a consistent level of service. It is important they are strongly incentivised to do so.

In addition, given the low number of non-market smart meters, we understand the reason for setting separate standards, as is proposed for market meters under M01 and M02. However, the risk of good smart meter performance masking poor non-smart meter performance may increase as more smart meters are rolled out. This may lead to retailers de-prioritising the reading of non-smart meters, which will lead to a deterioration in billing accuracy and service standards. We therefore agree that the PAC has a key monitoring role to play in this area, and separate performance standards may need to be developed if the number of non-market meters increases.

We agree with the proposed standard of 100% for submission of transfer reads for the reasons MOSL has provided. As the market codes allow retailers to submit estimates in the limited circumstances where an actual meter read cannot be taken, there should be no reason why a read cannot be submitted within the required timescale.

While we agree with the proposed performance standard of 100%, retailers successfully meeting this will not necessarily result in improved customer outcomes. Given estimated reads are permissible in limited circumstances, this can lead to customers not receiving an accurate opening bill, with the potential for a bill shock in the future. Retailers must strive to obtain actual reads when a customer transfers, with estimated reads only used as a last resort. Where retailers are meeting this standard with a high level of estimated reads, we would expect the PAC to monitor this carefully and act swiftly to address poor retailer behaviour and inform any changes required to the codes. Ultimately, the relevant section of the market codes may need to be strengthened to ensure compliance in this area, and encourage more actual reads to be taken.

While we generally agree with the proposed standard, we have the same concerns as outlined in respect of M04. Market performance against this standard is likely to be high as an estimated read can be submitted on time in the event an actual read is not obtained, thereby greatly reducing the possibility of late reads. However, poorly estimated opening bills, for example, those based on long unread meters, could have a detrimental impact on customers. More work is needed to review retailers’ transfer read obligations to ensure the right outcomes are being produced for customers. We comment further on this in response to Q2.11.

Customers rightly expect bills to be based on actual reads. Our 2021 research into SME customer preferences on meter reading frequency showed that 88% believe it is important that their water and sewerage bills are based on meter reads rather than estimates, rising to 90% for micro businesses. We therefore agree with the need for more visibility in this area to improve meter reading services. Peer comparison could be a way of encouraging retailers to improve the number of actual transfer reads. However, we do not agree with measuring this as a KPI. A failure to meet a performance standard should mean that a trading party has failed to meet their obligations under the performance framework and the market codes. However, as retailers are permitted to submit estimated transfer reads in certain circumstances, a high level of estimates does not in itself mean that there has been a performance failure, unless there is evidence that estimates have not been submitted in line with the circumstances outlined in the codes. On the same basis, highlighting M09 performance in peer comparison tables may have limited value for customers, as they are likely to interpret ‘poor’ performance as retailers breaching a rule or obligation, when this would not be the case if estimated reads have been submitted in line with the codes.

To genuinely drive improvements in this area, a KPI is needed that incentivises retailers to improve the quality of transfer reads, and drive better customer outcomes. We support the work that is currently being undertaken with the aim of strengthening the market codes so it is clearer that an actual read is expected for customer transfers. Work in this area needs to continue to ensure that the overall code provisions are fit for purpose, and are not inadvertently driving poor retailer behaviours. Alongside this, work is needed to better define ‘good’ vs ‘poor’ quality estimation in respect of transfer reads, as this is a more robust indicator of billing accuracy and therefore how well a retailer is performing. We would then support the implementation of a KPI which can measure the level of ‘poor’ quality estimation, underpinned by a revised market code which is clear that estimation is only permitted in limited circumstances.

At this stage, we believe M09 as proposed should be an additional performance metric, as we agree there must be more visibility concerning actual transfer reads vs estimated reads. This will allow the PAC to scrutinise performance, uncover the reasons why the number of estimated readings is high, and help shape proposals for any further changes in obligations that may be needed. In addition, peer comparison tables available to retailers will be useful and may encourage them to improve the number of actual reads being obtained.

3. Proposed MPF Performance CHARGES – To what extent do you consider that the proposed charges for the following KPIs are more appropriate than the current framework’s Market Performance Standards (MPS) in incentivising good outcomes for all customers?

In terms of the design of M01, we strongly agree with retailers incurring charges on unread meters until a read is obtained. This provides a greater incentive than only applying a charge at the point the read is missed, as currently, retailers have limited incentive to re-attempt to read a meter before the next cycle is due. In terms of the charge value, although this is an improvement on the current MPS18 charges of £10 every 200 business days, we have concerns with the rationale behind the proposed charge for biannual meters. We believe there is a risk that it will not drive improved performance as it is a weaker financial incentive compared to the other charge options that have been disregarded.

MOSL highlight a degree of market uncertainty regarding the introduction of higher charges. However, this should not be a driving factor in determining an appropriate charge value. It is important that charges are set at a level that will incentivise retailers to perform their meter reading services to a high standard for the benefit of business customers. Concerns regarding billing and charges are consistently the main root cause of business complaints to CCW, and a key area of dissatisfaction in our biennial Testing the Waters research. Driving up meter reading performance should improve billing accuracy, which should reduce associated billing and charges complaints, as well as improve customer satisfaction with retail services.

In setting an appropriate charge, it is important this takes into account the average cost of reading meters, both the cost of regular cyclic reads and the cost of additional reads to resolve a failure. It should not be more cost effective for retailers to pay penalties rather than read meters, as this risks hard to read meters becoming deprioritised, which will lead to a deterioration in billing accuracy.
With this as the primary consideration, we believe that Option 2 – £3.30 per failure – is more likely to provide a greater incentive on retailers than the currently proposed option. This is on the basis that the value is closer to the average cost of reading a meter (according to the value detailed in Option 3), so is more likely to drive the right behaviour. As £3.30 is a higher value than the proposed option, it will also exceed the cost of taking additional reads sooner, meaning the incentive to attempt such reads will bite earlier. We therefore believe that Option 2 strikes the right balance between strengthening retailers’ incentives and not being too punitive. Higher charges should provide an increased incentive on retailers to ensure that any problems reading meters are resolved, and encourage them to work with customers to obtain reads and resolve access issues. Therefore, it is not necessarily the case that higher value penalties will result in those charges being incurred on a significant basis.

We support the proposed £17.50 per failure charge for monthly market meters. As these customers are more likely to have greater levels of fluctuating consumption, it is vital that retailers obtain reads at the required frequency to ensure accurate billing. While the charge is a lot higher than the proposal for biannual meters, we do not believe this presents a market instability risk given the low percentage of monthly meters as evidenced by MOSL.

We agree that the charges for M02 should be the same as M01 given the simplicity of applying a common standard, and so the incentivisation across all meter read types is consistent. This is particularly important in circumstances where visual reads may still be needed when there is a fault with the AMI functionality. In all circumstances, retailers need to be incentivised to ensure that all meter reads are being received and taken in line with the required frequency. However, as outlined in our response to Q3.1, we believe the performance charge should be based on £3.30 per failure, rather than the proposed £2.50.

We support the proposed charge of £17.50 per failure for non-market meters, and agree that given the low numbers, it is simpler to apply the same charge regardless of the meter read frequency. A sharper incentive is also appropriate as wholesalers are monopoly providers, so the natural incentives to perform meter reading services to a high standard are reduced. In addition, poor performance against these meters has an impact on some household customers too.

At a minimum, a £5 charge per failure is appropriate, given that the ad-hoc nature of transfer reads means their cost is likely to be higher than the average meter reading cost. However, given the limited timescale for submission, any charge, regardless of its value, is more likely to incentivise retailers to submit an estimate than to attempt to obtain an actual read. For the reasons we provided in our answer to Q2.7, we do not believe that retailers meeting the proposed standard, and incurring a low number of charges (if any), will be indicative of good customer outcomes. We want to see the code strengthened in parallel with the reform of the MPF.

We are comfortable with the proposed charges. However, as estimated reads can be submitted when it has not been possible to obtain an actual read within the submission window, it is unlikely that retailers need extra incentives to submit reads on time. As outlined in our comments for M04, this would also not necessarily indicate improved customer outcomes in terms of accurate billing.

We agree there cannot be a charge associated with this KPI, as it would likely prompt significant challenge from retailers on the basis that they are permitted to submit estimated reads. However, the absence of a charge reinforces our previous point that as currently proposed, M09 is not workable as a KPI. We believe this needs to be changed to an additional performance metric, and MOSL need to develop a new KPI that measures good vs poor estimation, as well as further examining whether any changes to market codes are needed to drive the submission of more accurate reads.

We do not agree that the PAC should be constrained in terms of when, and by how much, it can adjust performance standards. We believe this compromises the principle of flexibility, and means the PAC is less able to quickly respond where it has concluded performance standards are not driving the desired customer outcomes. We agree that if performance standards are constantly changing, this is likely to be ineffective at improving performance. However, we believe that a committee of market experts with a clear remit to only recommend ‘appropriate and effective’ performance rectification measures, is unlikely to collectively make poor decisions.

The setting of the proposed performance standards has been largely informed by indicative modelling at this stage. It is therefore too risky to constrain the PAC’s abilities to review and adjust the standards, particularly if it can be concluded from new data that some significant changes may be needed. We do not believe there should be constraints on how much the performance level can be changed by, nor limits on how many times it can be changed in a given period. It is particularly important within the first two years of the implementation of the revised MPF that the PAC has this agility to make changes where necessary.

A core principle of the revised MPF is flexibility, and the ability to react quickly when poor performance needs to be addressed. Where the PAC identifies a potential issue, it is therefore important for the process leading to a change being implemented to be as streamlined as possible. We are concerned that the proposed process outlined in the consultation may not achieve this. For example, it may not be appropriate to consult with the industry on every proposed change, provided the PAC has clearly evidenced the rationale for the change and given sufficient notice to trading parties. We believe the PAC needs to be empowered to make reasoned decisions without the need for a prescriptive process.

We therefore believe the penalty cap should be removed, in line with the views we have expressed in previous MPF consultations and the proposal MOSL made in consultation 4. We do not believe anything has materially changed between then and now to warrant keeping it in place. Under the new MPF, if trading parties are incurring a significant amount of charges, this will be because of poor performance that they need to be incentivised to address, and that incentive could be undermined by a penalty cap. The risk of trading parties incurring significant charges for failures outside of their control is also low, due to some proposed exemptions (e.g. the C1/B5 bilateral request exemption under M01) and performance standards not being set at 100%. We do not believe there are any circumstances that warrant keeping the cap in place.

While we have not agreed with every MOSL recommendation in the consultation, we would like to thank the MPF programme team for the quality of the information in the consultation documents, particularly the supplementary information in document 4. The level of detail, and the clear rationale for proposals, has made it easier to understand the proposals in the technical detail needed. The industry’s desire for this level of detail was clear, so it is positive that MOSL responded accordingly.