We say and hear phrases like "our research suggests that...", or "I have observed this myself..." pretty often. What follows the phrase is usually logical... or at least presented that way. But it may not be so.
As much as we love to believe and portray that we are rational managers, using logical decision-making trees and considering all possible alternatives, we don't actually do so. So what gets in our way?
1. We have made up our minds even before we see the actual research findings
Take two people with opposing views and present some data favouring one side. One side is as quick to dismiss it as the other is to accept it. The data couldn't influence the decision; instead, it got 'rationalised' by the receiver.
2. Our 'sample size' is too small
Like it or not, we move around in the same kinds of groups, among the same kinds of people. As a result, we get to observe or receive only a limited set of information. Even first-hand research done through market visits and 'vox-pops' is exactly this: small samples of a huge population. How can our findings be complete?
3. We tend to polarise all findings
Most data is presented as averages, with differences that are more or less significant. But during decision-making, we categorise it into yes/no or present/absent groups. Just a few minutes into a discussion, data presented as 'men use gadgets more frequently than women' will morph into 'men are gadget-crazy, and women don't use gadgets at all'.
4. Our minds can't hold opposite views together
People are both selfish and generous, passive and aggressive, friendly and unapproachable (in different situations, at different times). The same is true of research findings. But we expect things to be either good or bad. We simply can't synthesise information when it's partly good and partly bad. And of course, we lose out on important information when we force things to be that way.
So in a nutshell, our interpretations are neither right nor wrong, just incomplete. Now if only we could be ok with that.