New data shows the largest refinance volume in history.
According to recently published data, first-lien mortgage refinances topped $1.1 trillion in Q2 2020. What’s more, trends indicate that Q3 2020 purchase lending will rise by 30-40%, which would push volume to another record high.
With the high speed of servicer and investor transfers, the data being boarded onto your systems barely has time for a true quality control check before it is released to another entity. Similarly, new loans coming onto your books have barely boarded their initial servicing system before RESPA notifications fly out the door and the loans land on yours.
How accurate can the data be?
How many were reviewed? One in ten? Five in a hundred?
There are errors in your data. But how can you find them? And which ones are crucial to correct right now?
Loan Data Quality Assurance
Several years ago, large mortgage banks acquired on the order of sixty thousand loans each month. Before the loans had even been boarded to the servicing system, they had been sold to another investor. The loans were to be interim serviced for 40 days, and QC departments consisted of just ten to twenty people.
Their approach was to review a sampling of loans – say ten – per deal (each pool held roughly six thousand loans). If they found an error, they would look at another ten. Even when errors were identified, there was little recourse, as the loans had already been sold. And loans found to be wanting were pulled from the pools, bundled as “scratch and dent,” and sold for pennies on the dollar just to get them out of portfolio.
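To see how little protection that kind of sampling offers, consider a simple back-of-the-envelope calculation. The 1% defect rate below is an illustrative assumption, not a figure from the deals described, and the model treats defects as independent:

```python
# Probability that a small QC sample misses every defective loan in a pool,
# assuming defects are independent and uniformly distributed (a simplifying
# assumption for illustration only).

def miss_probability(error_rate: float, sample_size: int) -> float:
    """Chance that a random sample contains zero defective loans."""
    return (1 - error_rate) ** sample_size

# A pool of ~6,000 loans with a hypothetical 1% defect rate:
print(f"{miss_probability(0.01, 10):.0%}")  # ~90%: a 10-loan sample sees no error
print(f"{miss_probability(0.01, 20):.0%}")  # ~82%: even the follow-up ten misses it
```

In other words, roughly nine times out of ten, a ten-loan sample would sail through QC while dozens of defective loans stayed in the pool.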
How does your firm compare?
How many loans get a quality check that ensures every data point boarded is accurate?
How many loans have boarded this year alone with incorrect ARM information or faulty origination elements?
It’s no secret that bad data erodes an organization’s ability to trust its own decisions. Worse, it forces firms to repeatedly repurchase loans, making it difficult to represent quality to clients and investors.
BaseCap customers run an automated rules engine against their data to ensure accuracy. With loan automation, errors are detected and resolved in real time. Through custom policy creation, users decide whether the data in each field stands up to the level of scrutiny they require, then run those policies against thousands of fields across thousands of data sets.
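BaseCap hasn’t published its rule syntax, so the sketch below only illustrates the general pattern: declarative, field-level policies applied to every record rather than a sample. All names, fields, and thresholds here are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Policy:
    """A field-level validation rule: a name, a target field, and a predicate."""
    name: str
    field: str
    check: Callable[[dict], bool]

# Hypothetical policies; real rules would encode a servicer's own standards.
POLICIES = [
    Policy("rate_in_bounds", "interest_rate",
           lambda loan: 0.0 < loan["interest_rate"] < 0.15),
    Policy("arm_has_index", "arm_index",
           lambda loan: loan["loan_type"] != "ARM" or bool(loan["arm_index"])),
    Policy("upb_positive", "unpaid_balance",
           lambda loan: loan["unpaid_balance"] > 0),
]

def validate(loans: list[dict]) -> list[tuple[str, str]]:
    """Run every policy against every loan; return (loan_id, failed_policy) pairs."""
    failures = []
    for loan in loans:
        for policy in POLICIES:
            if not policy.check(loan):
                failures.append((loan["loan_id"], policy.name))
    return failures

# Example: a boarded ARM loan with a missing index is flagged immediately.
boarded = [{"loan_id": "0001", "loan_type": "ARM", "arm_index": None,
            "interest_rate": 0.031, "unpaid_balance": 412_500.00}]
print(validate(boarded))  # [('0001', 'arm_has_index')]
```

The key contrast with the sampling approach above is coverage: every field of every loan is checked at boarding time, so errors surface before the loan is sold, not after.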
Ready to prep your organization for more loan volume? Tour the BaseCap platform today.
About BaseCap
BaseCap is the intuitive data validation platform that operations teams use to feed quality data to AI algorithms.
"I think the tool is great because it's an out of the box solution where you can give a business admin, or someone that's knowledgeable enough from a tech perspective and a business perspective, to really drive and make the changes and really own the administration of the tool."
Jeff Dodson, Lument