The rise of big data.
Over the last few years, I’ve been writing more and more about analytics. When I first started out in technology, the dot-com crash was just beginning, and career-wise I had opted to go the corporate route. Early on, during an ERP implementation, I remember coming across the concept of separating your transactional and reporting systems. At the time, it was for very practical reasons – performance chief among them. Let’s just say that was a few years ago!
With the rise of entire systems and teams devoted solely to spinning up and supporting the reporting machine, this premise held for many years. We built entire bureaucracies around reporting. It usually sat at the tail end of the tiger, getting whipped around by process and systems changes. Add new fields, or start using previously unused fields, and forget to update your extractors? Oops. Scramble, scramble, scramble to fix the extracts so the reports were usable again. We pushed these processes as far as the technology would allow, constantly extracting, optimizing, and tuning.
We always had a lot of data, even if we didn’t always know what to do with it. Am I alone in that feeling? Theoretically, it seemed like we should be doing more with our data. But our data was designed to support individual functions, and given the internal structures at many organizations, that was the best we could hope for. So we optimized our own functions, sometimes at the cost of the overall process.
And then came the rise of big data. Descriptive, predictive, and prescriptive became our battle cry. Descriptive and predictive were already in use inside many companies – we just didn’t call them that. Descriptive analytics were our key performance indicators or monthly metrics. Predictive analytics were our forecasts: historical data from previous years paired with current-year trends to determine how we were doing – our attempts to peer into the crystal ball and see what the future holds. Prescriptive analytics was out of the grasp of many, because it requires in-depth knowledge not only of what is going on, but of how to influence, course correct, or magnify it. Frankly, for many this is still out of reach.
But I’m hung up on something. These exercises are fine, and, in many cases, necessary. But it all feels so rear-view mirror to me. I find myself wondering – where are the preventive analytics? A virtual apple a day keeps the problems away. We spend so much time cleaning problems up after they happen. How do we get our systems and processes to be far more proactive, so that the problems don’t occur in the first place?
If we are all so good at integration, how do we get to this next level in our solutions?
For example, let’s say I am issuing an invoice to one of my clients. When I issue that invoice, I include a purchase order number and an hourly rate. In my case, I am often sending PDFs to a central mailbox where they are scanned and routed through for approvals. Here is where I wish the preventive analytics started. What if I put the wrong purchase order number on the invoice due to a simple keying mistake? What if the hourly rate doesn’t match what was on the PO? Yeah, exactly. My invoice went into some fix-it queue to be dealt with. And now that it has hit exception processing…uh oh. We have now diverged from the happy path and, depending on how far the invoice got on the other side, fixing the problem can become quite difficult. It is HERE that I see so much potential for machine learning and artificial intelligence – preventing the problems from occurring in the first place. How do we get our systems and processes to evolve to catch these problems sooner? How could that purchase order or hourly rate problem get caught and sent back to me BEFORE it ever got into the other systems? Then I could correct the problem and resubmit before it causes a headache.
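Even before any machine learning enters the picture, the spirit of that check can be sketched in a few lines. This is a minimal, hypothetical illustration – the purchase order table, field names, and function are all invented for the example – of validating an invoice against its PO at the point of entry, so a bad PO number or mismatched rate bounces back to the sender instead of landing in an exception queue downstream:

```python
# Hypothetical sketch: validate an invoice against its purchase order
# BEFORE it enters the approval workflow. The PO data and field names
# are illustrative assumptions, not a real system's schema.

PURCHASE_ORDERS = {
    "PO-10042": {"hourly_rate": 150.00},
    "PO-10043": {"hourly_rate": 175.00},
}

def validate_invoice(invoice: dict) -> list[str]:
    """Return a list of problems; an empty list means safe to submit."""
    problems = []
    po = PURCHASE_ORDERS.get(invoice["po_number"])
    if po is None:
        # A keying mistake in the PO number is caught immediately.
        problems.append(f"Unknown purchase order: {invoice['po_number']}")
        return problems  # cannot check rates without a matching PO
    if invoice["hourly_rate"] != po["hourly_rate"]:
        # A rate mismatch is caught before exception processing begins.
        problems.append(
            f"Rate {invoice['hourly_rate']:.2f} does not match "
            f"PO rate {po['hourly_rate']:.2f}"
        )
    return problems

# Mistyped PO number: rejected at the door, not in a fix-it queue.
print(validate_invoice({"po_number": "PO-10024", "hourly_rate": 150.00}))
# Rate that does not match the PO: same story.
print(validate_invoice({"po_number": "PO-10042", "hourly_rate": 155.00}))
```

Where machine learning could add value is in the fuzzier cases this rigid check misses – a PO number that is valid but probably the wrong one for this client, say – but the principle is the same: push the validation to the moment of entry.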
For the sake of demonstration, I chose a simple example – the proverbial apple. But examples abound if we step back and look at our processes. How about you? What problems do you wish we could prevent from happening altogether?