If you ask the average person why manual data entry into CRM is bad, they’ll probably say ‘because it takes away selling time’.

But a recent study run by Truly.co and a Fortune 500 company found another reason to do away with manual data entry — almost half the structured data entered by reps was either incomplete or wrong.

Now, you might think the point of this study was to perpetuate the stereotype of the ‘lazy’ rep who doesn’t follow process, picks random CRM field values to beat validation rules, and causes Sales Ops chaos.

The truth is actually the opposite… we were betting on reps being better than machines.

The study was originally commissioned to investigate how well machines could replicate the judgment of the company’s reps when it came to entering data. The client had a sophisticated sales infrastructure, including dedicated sales trainers and data scientists, so our initial assumption was that they were a lean, mean, data-entering machine.

What we were trying to understand was whether our technology could complement them by making a ‘best guess’ attempt at filling in the 15% of data fields that they typically missed.

Experiment Setup

In this experiment, we created a set of virtual fields to mimic the fields that reps were entering into CRM when they called on clients.

Over 100,000 rep calls were automatically recorded, transcribed, and analyzed.

A rules engine was then used to simulate the rep’s decision-making based on the call contents. This rules engine was configured by Sales Ops, who ultimately owned the CRM configuration and therefore the definition of each of these fields.
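
To make this concrete, here is a minimal sketch of what such a rules engine might look like, assuming calls have already been transcribed and diarized. The Call fields, the disposition names, and the 5-second threshold are illustrative assumptions, not the client’s actual configuration.

```python
from dataclasses import dataclass

@dataclass
class Call:
    """Hypothetical shape of one analyzed call; every field here is an assumption."""
    connected: bool           # did the call connect to a person or a VM box?
    hit_voicemail: bool       # did the call reach a voicemail greeting?
    rep_talktime_secs: float  # seconds the rep actually spoke (no rings, hold, pauses)

def simulate_disposition(call: Call) -> str:
    """Apply Sales-Ops-style rules to guess the disposition a rep would log."""
    if not call.connected:
        return "No Answer"
    if call.hit_voicemail:
        # A rep who spoke for several seconds into the VM box probably left a
        # message; the 5-second threshold is illustrative, not the study's value.
        return "Left Voicemail" if call.rep_talktime_secs >= 5 else "Hit VM, No Message"
    return "Connected"
```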

Learnings: Data Consistently 30-40% Off

Our findings showed a consistent 30-40% deviation between the call outcomes that reps were logging and what Sales Ops was expecting.

Take, for example, the call outcome logged as ‘Left a Voicemail’. The chart below shows the distribution of ‘rep talktime’ — a measure of how many seconds the rep spoke on a connected call. This measure focuses specifically on speech and ignores ringtones, hold music, pauses, and the like.

[Chart: distribution of rep talktime for calls logged as ‘Left Voicemail’]

This chart shows two things. The first is unsurprising: reps mislabel calls 10% of the time. After all, this is manual data entry, and a 10% error rate wasn’t unexpected.

The second insight was more remarkable: 30% of the time that reps marked the call as “Left VM”, they didn’t actually leave a voicemail.
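
A rough sketch of the kind of check that surfaces this mismatch, assuming each call carries diarized (speaker, start, end) speech segments. The segment format, the dictionary keys, and the 2-second cutoff are illustrative assumptions, not the study’s actual pipeline.

```python
def rep_talktime(segments: list[tuple[str, float, float]]) -> float:
    """Total seconds the rep spoke, given diarized (speaker, start, end) segments.
    Ringtones, hold music, and silence never appear as 'rep' speech segments."""
    return sum(end - start for speaker, start, end in segments if speaker == "rep")

def suspect_vm_labels(calls: list[dict], cutoff_secs: float = 2.0) -> list[dict]:
    """Flag calls logged as 'Left Voicemail' where the rep barely spoke; it is
    hard to leave a message in under two seconds. The cutoff is an assumption."""
    return [
        call for call in calls
        if call["logged_disposition"] == "Left Voicemail"
        and rep_talktime(call["segments"]) < cutoff_secs
    ]
```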

Now, upon discovering this, the Sales Ops team started asking itself a bunch of questions.

1. Is the rep doing something wrong? When we looked at the data, we found that the reps were indeed hitting a voicemail box, so there was truth in what they were saying. Also, there were no other picklist options related to ‘voicemail’, and there was no specific expectation anywhere that reps were to leave voicemails.

2. Does leaving a voicemail improve performance? This was a question that nobody had asked. If not, we could save the reps hours a week without fundamentally changing the impact or behavior.

3. Is the solution more picklist options? The concern, of course, is that adding more options increases complexity and the odds that something will go wrong. Options like ‘Hit Voicemail Box, Didn’t Leave Voicemail’ are visually and cognitively taxing, and data confidence would surely go down.

4. Has it always been this way? The inevitable question then becomes how long this has been going on. Did something change in the training? Did the reps figure something out over time? With these types of analyses, we can always run them going forward by changing CRM options, but running them retroactively is often such a heavy task that it rarely gets done.

5. What do we do next? These questions all forced the company to rethink its assumptions about its data.

What They Did

After pondering these questions for weeks, the company realized that to reach the next level, they needed to master their data. Relying on reps wasn’t going to get them there, and if they could crack the code on automated data entry, they could get back 5 hours per rep per week… a huge upside to their sales efforts.

As a first step, they started rolling out ‘shadow’ CRM fields to automate the inputs that reps were already making. They then compared the two outcomes and iterated on the automation rules until they had high data confidence. Finally, they gradually removed the required fields from the reps’ day-to-day experience, resulting in greater productivity and results for the entire org.
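
A minimal sketch of that comparison step, assuming each record pairs the rep’s manual entry with the automated shadow value. The rep_value/shadow_value keys and the helper names are hypothetical, not the client’s schema.

```python
from collections import Counter

def shadow_agreement_rate(records: list[dict]) -> float:
    """Fraction of calls where the automated shadow value matches the rep's entry."""
    if not records:
        return 0.0
    return sum(r["rep_value"] == r["shadow_value"] for r in records) / len(records)

def top_disagreements(records: list[dict], n: int = 5) -> list[tuple[tuple, int]]:
    """The most common (rep value, shadow value) mismatches -- the pairs worth
    iterating the automation rules on before retiring the manual field."""
    return Counter(
        (r["rep_value"], r["shadow_value"])
        for r in records
        if r["rep_value"] != r["shadow_value"]
    ).most_common(n)
```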

While the initiative is in its early days, revenue impact has been almost immediate — by removing bias and surfacing insights from calls directly in daily activity reports, managers were able to uncover 1.5 net new cross-sell opportunities per rep per week.