It’s time to take a hands-off approach to bad data.
It just seems to happen naturally: as the IT Manager, you get lumbered with everything integration-related.
While it’s expected that your team will resolve all integration errors, our experience tells us that 95% of post-go-live integration errors are, in fact, data issues. And that’s the problem: while you may own the framework, you can’t fix bad data.
If you currently work with a legacy integration layer, it’s probably a familiar frustration. You receive an error notification, so you push the data back through to replicate the error and see what’s going on, only to discover it’s nothing you can help with: the data is at fault, not the integration. So you track down the data owner and ask them to fix the issue, and then push it through again to check that the error is resolved. That sounds simple enough, but it doesn’t account for the fact that the data set could have accumulated hundreds of thousands of new records in just a few months.
Lesson learned? When your business decides to invest in a new integration layer, make sure you (or your partner) build smarts into it from the outset. Then, when there’s an error notification, the issue is routed past the IT team directly to the person who owns the troublesome data, so they can sort it out and even test it at their end. That saves time, effort and frustration.
For example, let’s say an accounts receivable team member sets up a new customer in your ERP but misses a mandatory field or enters an invalid email address with stray spaces. They’ll be notified of the error directly at their end and can open a dashboard, filtered by their integration point, to view the offending data values. Once they’ve corrected the original at-fault data entry, they hit ‘retry’, and the data is automatically pushed through again.
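To make the flow concrete, here’s a minimal sketch of that pattern: validate the record, and on failure notify the data owner rather than the IT team, so the owner can correct it and retry. All names here (`CustomerRecord`, `validate`, `process`) are illustrative assumptions, not a real product API.

```python
import re
from dataclasses import dataclass

@dataclass
class CustomerRecord:
    name: str
    email: str
    owner: str  # the accounts receivable user who created the record

def validate(record):
    """Return a list of data errors; an empty list means the record is clean."""
    errors = []
    if not record.name.strip():
        errors.append("missing mandatory field: name")
    # A stray space makes an otherwise valid address fail this check.
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", record.email):
        errors.append(f"invalid email: {record.email!r}")
    return errors

def process(record, notify):
    """Push a record through; on failure, notify the data owner, not IT."""
    errors = validate(record)
    if errors:
        notify(record.owner, errors)  # owner fixes the source data and hits 'retry'
        return False
    return True

# Usage: the bad record fails and its owner is notified once;
# after the owner corrects the email, the retry goes through.
notifications = []
record = CustomerRecord(name="Acme Ltd", email="ap @acme.example", owner="jo.ar")
process(record, lambda owner, errs: notifications.append((owner, errs)))
record.email = "ap@acme.example"
ok = process(record, lambda owner, errs: notifications.append((owner, errs)))
```

The design point is simply that the notification carries the owner and the specific failing values, so no one in IT has to replay the data to diagnose it.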
And because you and your team can see failures, resolutions, and ‘no action taken’ items on a dashboard, you’ll be able to address recurring data errors by making changes to the source systems. You then have the integration information that really matters in front of you: yes, it failed, but now it’s working. Issue resolved; what’s next?
From your perspective, it resolves those post-go-live integration issues faster. You’re in charge of making sure the source systems are fixed so bad data can’t come through again, while the owners of the data fix the failures themselves. There’s no need for you to chase anyone; the data-fix process is automated and reportable. Best of all, it’s entirely hands-off.