Data infrastructures should be flexible, scalable, secure and compliant. (I’m not going to mention cloud-based, because if that sounds like news, you should stop reading right now and give me a call.) Your systems of engagement (app, mobile app, CRM, store, etc.) are the outer layer of the data infrastructure: they are sources of raw data and consumers of information & insights. When building your data setup, they must therefore be part of it.

Working within an architecture that was not designed with data in mind is challenging. CRM/Sales Operations and BI teams often fight. Placing Sales Operations in the org chart is a struggle. People are confused about what the CRM and the BI tools are for. Harnessing data feels like an uphill exercise. You get different reports from different executives, depending on the tool they used to generate them… These are the small, visible peeves that come from not handling data holistically. The real problem is that, gradually, the organisation starts hemorrhaging money and missing out on opportunities. Most of the projects I have been involved in shared a common difficulty: adding efficiency on top of the core product.
Whatever your starting point, you want to get better and you want to scale. Here are the top three problems I have seen standing in the way of scale…
I’m departing from the generic CRM here, because this particular problem is very pronounced for SFDC users, given the costs associated with implementing and maintaining it. The usual cause, in my experience, is critical data never reaching salesforce.com because the CRM isn’t part of the data processes. SFDC is very powerful, but it can only show its capabilities if the data about customers, partners, competitors, etc. is in it. Data can come in one of two ways: people type it in (e.g. salespeople record the outcomes of sales pitches), or it flows in from another interface (an app, a web form, an email). As to the former: from the employee’s perspective, entering data into the CRM is one more thing to do, and usually one that brings them no benefit. As to the latter: major unexpected costs and delays come from the integrations between the CRM and other interfaces (application databases, ERP, DWH, KYC, etc.). The reason for both is often the lack of a common logic & strategy across the different data handling processes.
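To make "common logic at the boundary" concrete, here is a minimal sketch of validating and shaping an internal record before it is pushed to salesforce.com (e.g. via a POST to the REST API’s Account endpoint). The internal field names (`customer_name`, `city`) are illustrative assumptions, not anything from a specific system; the point is that every source applies the same rules before data enters the CRM.

```python
# Hedged sketch: map an internal customer record to a Salesforce Account
# payload. Internal field names are assumptions for illustration.

REQUIRED = ("customer_name", "city")

def to_sfdc_account(record: dict) -> dict:
    """Validate an internal record and shape it for the CRM."""
    missing = [f for f in REQUIRED if not record.get(f)]
    if missing:
        # Reject at the boundary instead of letting half-filled rows
        # pollute the CRM -- the same logic applied at every source.
        raise ValueError(f"missing fields: {missing}")
    return {
        "Name": record["customer_name"].strip(),
        "BillingCity": record["city"].strip().title(),
    }
```

Whether the record then travels over an API, a batch file or an ETL job, the CRM only ever sees data that passed one shared gate.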
Data arrives in the pipeline unstructured or faulty, so transformations & modelling have to be applied to it. Fixing the data at the source and coming up with reusable tables suitable for all stakeholders are extra efforts, which become necessary whenever there are no quality checks in place. At the very least, the data pipeline will ingest data in some form of streaming (real-time or batch), process it, and democratise it at the output. All of this has no value, however, if pieces of data travelling through the pipeline under the same title do not mean the same thing. A very simple example: “City” (filled in the CRM by a salesperson, or in a web form by a customer) can be a free-text field, a drop-down, or auto-detected. These are the three levels of structuring data at the source, and they determine the transformation effort later on. The difference in outcome is upgrading from a never-ending 3 months to a stellar 3 minutes to get a nice-looking report of, say: “Revenue per City per Month”.
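The three levels can be sketched as three capture strategies for the same "City" field. This is a minimal illustration, with an assumed reference list of cities and a stubbed lookup; the further right you go, the less downstream transformation the pipeline has to do.

```python
# Hedged sketch of the three levels of structuring "City" at the source.
# KNOWN_CITIES and the IP lookup are illustrative stand-ins.

KNOWN_CITIES = {"berlin", "london", "new york"}

def city_free_text(value: str) -> str:
    """Level 1: free text -- anything goes; cleanup is deferred downstream."""
    return value.strip()

def city_dropdown(value: str) -> str:
    """Level 2: drop-down -- the source only accepts known values."""
    cleaned = value.strip().lower()
    if cleaned not in KNOWN_CITIES:
        raise ValueError(f"unknown city: {value!r}")
    return cleaned.title()

def city_autodetect(ip_lookup: dict, ip: str) -> str:
    """Level 3: auto-detected -- derived from context, no typing at all."""
    return ip_lookup.get(ip, "unknown").title()
```

With level 1, "Revenue per City per Month" first needs months of deduplicating "Berlin", "berlin " and "BERLIN!"; with levels 2 and 3, the grouping key is clean on arrival.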
Privacy-by-design implies designing (or, more frequently, re-designing) your data handling: processes, architecture and structures (tools). Oh – and mindset, too. I’m a sucker for compliance. I actually enjoy implementing GDPR (and the rest of the data protection regulations), but I understand that this is a quirk. If all of your data handling is tied together, making it compliant is a lot quicker and a lot less painful. Tying the CRM into the data architecture allows personal data (the thing the GDPR protects) to be filtered out, and extra layers of access & change control to be installed on it. Personal data (and PII) will come in from the raw data sources and sit in the raw data vault. There it acquires its special status: it is extracted into a dedicated pipeline and repository, to which access is controlled. Most business models need only aggregated data for their BI tools, so personal data will not flow in that direction. It will be displayed in the CRM under the need-to-know rule, and any modification of it will be recorded by the CRM tool’s logs. The bottom line: if you think of all data handling tools as parts of one value stream, compliance takes less effort and your data actually ends up compliant.
The good news is that avoiding this hassle only requires putting all the smart people who handle data in the same room (physical or Zoom) and having them come up with the data value stream TOGETHER. The earlier you start thinking about data as a byproduct of the customer journey, the better – it will help you understand how your product is perceived, what works and what doesn’t.
Take time to think about data: how it will come in, which touchpoints are good for building metrics, how you want to consult the customer profile, what insights you’d like to find in your reports. When you have 10 customers, you know their data by heart – you need no structure to give you insights. But your goal is to grow, and without a good data architecture you will become detached from your customers and the market; they will no longer be able to speak to you clearly. I’m not recommending that you close the door on gut feeling and intuition – just keep them disciplined through a steady inflow of facts. This is how you pave the way to being always ON TOP.