2012-01-01
I’d like to thank everyone who followed my duplicate payments serialisation in 2011. Hopefully it gave everyone pause for thought. Given it’s a New Year, and thoughts of 2011 are now fading from memory, I’d like to take the opportunity to recap and summarise the serialised content, and to let you know that, if you’d like assistance in setting up scripts, you can drop us a line at DataConsulting. Email or call anytime; details are below.
The duplicate payment serialisation was delivered in 5 parts.
In part 1 we introduced the notion that duplicate payments typically occur in 0.1% to 1% of transactions. Factors such as systems and processes will influence this percentage, but even the best-controlled systems fall foul from time to time simply because they are operated by people. Controls, as we know, are only as good as the people who operate them; they can be circumvented or ignored.
In part 2 we illustrated the point that no system is ever 100% perfect. We considered the search for duplicate payments using the analogy of needles in haystacks. To minimise the volume of false positives and hits not worth pursuing, we recommended excluding records that fall below a chosen value threshold, and we suggested scoring each potential duplicate according to the likelihood of the scenario that produced it. In this way, value and effort are prioritised, returning the most money for the least effort. In part 2 we also looked at who might perform the analytic. For those who have the inclination, the time and the appetite to perform a duplicate payments review, the rewards are enormous. Of course, a review such as this isn’t for everyone, but we explained the choices and their associated pitfalls.
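To make the threshold-and-scoring idea concrete, here is a rough sketch in Python rather than in ACL; the field names, the threshold figure and the scenario weights are my illustrative assumptions, not values taken from the series.

    # Illustrative sketch only: exclude low-value candidates, then rank the rest
    # by how likely their matching scenario is to be a true duplicate.
    # Threshold and weights below are assumed figures, not recommendations.

    THRESHOLD = 500.00  # assumed value floor for candidate matches

    SCENARIO_WEIGHTS = {  # assumed weights; higher = more likely a true duplicate
        "same vendor, same invoice, same amount": 100,
        "same vendor, similar invoice, same amount": 80,
        "different vendor, same invoice, same amount": 60,
        "same vendor, same amount, same date": 40,
    }

    def score_candidates(candidates):
        """Drop candidates below the value threshold, then rank by scenario likelihood and value."""
        kept = [c for c in candidates if c["amount"] >= THRESHOLD]
        for c in kept:
            c["score"] = SCENARIO_WEIGHTS.get(c["scenario"], 0)
        # Work the highest-likelihood, highest-value items first
        return sorted(kept, key=lambda c: (c["score"], c["amount"]), reverse=True)

Ranked output of this kind lets you start at the top of the list and stop when the recoveries no longer justify the effort.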
In part 3 we considered how Vendor Master Data can play a contributing role in the likelihood of duplicate payments occurring, by looking at the common scenarios which lead to duplication.
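As a flavour of the kind of vendor master check involved, the sketch below groups vendor records on a normalised name so that “ACME Ltd.” and “Acme Limited” fall together; the field names and the normalisation rules are assumptions for illustration only, not the tests from part 3.

    # Illustrative sketch only: surface possible duplicate vendor master records
    # by grouping on a normalised company name.
    import re
    from collections import defaultdict

    def normalise(name):
        """Strip punctuation, case and common suffixes so 'ACME Ltd.' matches 'Acme Limited'."""
        name = re.sub(r"[^a-z0-9 ]", "", name.lower())
        name = re.sub(r"\b(ltd|limited|plc|inc|co)\b", "", name)
        return " ".join(name.split())

    def possible_duplicate_vendors(vendors):
        """Return groups of vendor records sharing a normalised name (assumed fields: vendor_id, name)."""
        groups = defaultdict(list)
        for v in vendors:
            groups[normalise(v["name"])].append(v)
        return [group for group in groups.values() if len(group) > 1]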
In part 4 we paid close attention to duplicate detection, the scenarios that can lead to duplication and the 80/20 rule which, albeit anecdotal, suggests that 80% of your duplicates will be found with 20% of the effort. It was in this part that we divulged details on what a potential duplicate analytic might look like, including the sections it might contain.
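For readers who like to see the shape of such an analytic in code, here is a hedged Python sketch of the matching pass: each “scenario” is a progressively looser key applied over the payments file. The scenarios shown are common examples rather than the specific tests set out in part 4.

    # Illustrative sketch only: run several matching scenarios, strict to relaxed,
    # and collect any group of payments sharing a key as a candidate duplicate.
    from collections import defaultdict

    SCENARIOS = [
        ("exact match",
         lambda p: (p["vendor_id"], p["invoice_no"], p["amount"])),
        ("invoice number digits only",
         lambda p: (p["vendor_id"],
                    "".join(ch for ch in p["invoice_no"] if ch.isdigit()),
                    p["amount"])),
        ("same vendor, amount and date",
         lambda p: (p["vendor_id"], p["amount"], p["invoice_date"])),
    ]

    def find_candidates(payments):
        """Return (scenario label, group of payments) pairs where a key matched more than once."""
        candidates = []
        for label, key in SCENARIOS:
            groups = defaultdict(list)
            for p in payments:
                groups[key(p)].append(p)
            for group in groups.values():
                if len(group) > 1:
                    candidates.append((label, group))
        return candidates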
In part 5 we concluded the series by looking at the types of data you might want to include in your analysis, and the methods by which that data could be obtained.
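By way of example, a minimal accounts payable extract loaded along these lines is usually enough to get started; the file name and column headings below are assumptions about what your own extract might contain, not a prescribed layout.

    # Illustrative sketch only: read a flat accounts payable extract containing
    # the handful of fields a duplicate review typically needs.
    import csv

    FIELDS = ["vendor_id", "vendor_name", "invoice_no", "invoice_date", "amount", "payment_ref"]

    def load_payments(path="ap_extract.csv"):
        """Load the assumed columns from a CSV extract, converting amounts to numbers."""
        payments = []
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                record = {field: row[field] for field in FIELDS}
                record["amount"] = float(record["amount"])
                payments.append(record)
        return payments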
I hope that the serialisation has given you some interesting insights into how analytics are developed. I appreciate that it is not a blueprint for writing the ACL script, but it is a guide that could help some of the more experienced ACL users. That said, if duplicate payments are on your agenda this year then please give us a call; we can advise, assist or deliver your scripting. I can think of no more cost-effective method of delivering returns to your bottom line than duplicate payment analysis.
Good luck!
Ian Anderson – Senior Consultant
ian.anderson@www.dataconsulting.co.uk
075 9565 2712