Post-Royal Commission World – Why Is Financial Services Customer Data Prone to Error?


The Hayne Royal Commission raised a question mark over the quality of customer data held by financial institutions and emphasised that action after the fact, via costly data remediation events, was not good enough. In a post-Royal Commission world, it is important to understand why customer data is so prone to error.

Firstly, the more data there is, the greater the margin for error. Secondly, data is inherently ‘dirty’. If new data is entering a system, remediation activities will not be going away; nor should they. Customer data must, at all times, be treated with ongoing and systematic data quality processes.   


Consider a typical financial services institution in which multiple business units manage constantly-changing customer data from multiple channels, across multiple technology platforms, and in line with ever-changing business rules and regulatory requirements. Keeping data clean is understandably a challenging, but not insurmountable, task.

In my view, the main challenges impacting data quality are the following: 


The sheer size of many financial institutions, not just in terms of customer numbers or dollars, but also organisational scale and complexity. Banks, superannuation funds, insurers, wealth managers and administration companies are huge organisations. They have clearly delineated organisational roles, but unintended or incorrect changes can quite readily slip through the cracks.


Who is responsible for what? There is often no clear responsibility for, or knowledge of, which department owns data quality and the various issues that arise. Data quality is an organisation-wide effort, from the call centre to investments, legal, compliance, marketing, the board and back again.


Grandfathering is extensive because some financial institutions have been operating for decades. Depending on the industry (superannuation is a perfect example), constant legislative change further complicates things.


Not all data is controlled on a single platform. Organisational data can be spread across multiple systems, vendors, geographies and teams. Determining responsibility for that data is often difficult and confusing, and increasingly so as the organisation grows.


Timing of data integrity checks across different systems may mean that by the time an issue is found in area A, area B has already used the data, acted on it and transacted across thousands of customer accounts. A cohesive and comprehensive organisation-wide data integrity process is difficult to achieve.


How do we solve the data quality problem? 

It’s a big question with a long answer, but data quality can be made much simpler with the right people, process and technology. To truly do data well, a dedicated internal data quality team, perhaps with the support of a third-party specialist, is a good move. Then, of course, data quality management technology like Investigate is essential to monitor, remediate and report on data quality effectively and economically.


Best regards 


Stephen Mahoney – Executive Director 


If your organisation needs assistance with data remediation, QMV can help.

QMV has performed hundreds of data remediation projects. We use an innovative and flexible approach to help clients identify and create visibility of data quality issues.

QMV’s extensive work in data quality management led to the development of Investigate, our data quality software solution.

Please reach out to QMV for further information on +61 3 9620 0707 or by submitting an online form.


QMV provides trusted advisory, consulting and technology to Australia’s leading superannuation, insurance, banking and wealth management organisations.

Like what you see? Please subscribe to receive original QMV content! 

You may also benefit from our free monthly pensions and superannuation regulatory updates.