I have been asked by my personal fitness trainer to list out all the food I eat in one week. And already, I am trying to plan my meals to "look good" - in other words, already altering the way I eat.
My doctor has also asked me to consider purchasing a home blood pressure monitor, which I did. For a week now, I have been monitoring my BP - once in the morning and once in the evening. I am beginning to see trends. But a few things have left me wondering: are the trends I am seeing in my home BP monitor caused by my medication, or by the simple fact that I am monitoring my BP regularly?
That's the problem with self-reported measurements: once respondents are asked, they either put their best foot forward or resent the questioner for asking such an inane question. It is human to do that - and I wouldn't blame it all on the respondent. Some questions, I suspect, are simply too inane and unclear. In fact, some questions should not be asked of respondents at all - but that doesn't mean the underlying behaviours are unobservable or unmeasurable. They still are - one just has to use a different way of measuring them.
One of my pet peeves is 'self-reported attention levels', in which one asks respondents to rate, "on a scale of 1 to 3, with 3 being very highly attentive and 1 not at all attentive, how attentive are you when watching TV between 0800 and 0830h?" The results are then 'correlated' with real ratings to come up with "effective, high-attention GRPs".
But the mere asking of the question is faulty: What does "highly attentive" mean? What does "not at all attentive" mean? And by imposing a discrete scale, are we saying that a 2.5 score cannot exist? If it can, what would it mean? And what if my 'self-reported attention' falls outside the scale altogether?
It's the same with asking people "what is important when you purchase a car, a soft-drink, a pair of runners, or a laundry detergent?" People will always say "price" if you give them that option. And if you include "value for money" as an option, it will almost always land in the top third of the attributes or considerations you have listed for them to rate. The only way you can really go beyond this is to do more analytics on the results - perhaps structural equation modeling and response modeling - relating the attributes to (yet another) self-reported variable of purchase intent or purchase history, or relating 'latent' factors to other latent dependent factors.
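The response-modeling idea above can be sketched in a few lines: instead of taking stated importance at face value, relate respondents' attribute ratings to an actual outcome and read importance off the model. Everything below is a minimal illustration on synthetic data - the attribute names, the sample size, and the "true" drivers are all assumptions, not results from any real survey.

```python
# Sketch: derive attribute importance from a response model rather than
# from a direct "what is important to you?" question.
# All data is simulated; attribute names are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Simulated 1-5 attribute ratings from n respondents.
ratings = rng.integers(1, 6, size=(n, 3)).astype(float)
attrs = ["price", "value_for_money", "brand"]

# Simulated "ground truth": purchase is driven mostly by brand,
# even though respondents would *say* price matters most.
logit = 0.1 * ratings[:, 0] + 0.2 * ratings[:, 1] + 1.0 * ratings[:, 2] - 4.0
purchased = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# Response model: revealed importance = standardized coefficients.
X = (ratings - ratings.mean(axis=0)) / ratings.std(axis=0)
model = LogisticRegression().fit(X, purchased)

for name, coef in sorted(zip(attrs, model.coef_[0]), key=lambda t: -t[1]):
    print(f"{name:16s} {coef:+.2f}")
```

In this toy setup the model recovers "brand" as the dominant driver, which is exactly the kind of gap between stated and revealed importance the paragraph above is pointing at.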
Asking about self-reported attention, however, is even more dubious.
My solution? Don't ask consumers about their "attention level". Ask them what else they do whilst watching TV. Of course, it goes without saying that one first has to establish that watching TV while doing something else - perhaps being on the phone, surfing, or talking to someone else - is actually related to attention, negatively or positively.
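That validation step can itself be sketched: before treating "other activities while viewing" as an attention proxy, check that the count of concurrent activities actually relates to some outcome we care about, such as ad recall. The data below, and the negative relationship baked into it, are simulated assumptions purely for illustration.

```python
# Sketch of validating a behavioural attention proxy.
# Simulated data; the recall/distraction relationship is an assumption.
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Observed (not self-rated): count of concurrent activities per viewing
# session, e.g. phone use, surfing, conversation.
concurrent_acts = rng.poisson(1.5, size=n)

# Simulated recall score: assumed to drop as distraction rises, plus noise.
recall = np.clip(1.0 - 0.25 * concurrent_acts + rng.normal(0, 0.2, n), 0, 1)

r = np.corrcoef(concurrent_acts, recall)[0, 1]
print(f"correlation(concurrent activities, recall) = {r:.2f}")
# A clearly negative r would support using the behaviour as a proxy;
# a weak r would mean the proxy is no better than the 1-to-3 scale.
```

Only once such a relationship is established in real data does the behavioural question earn its place as a substitute for the self-rated scale.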
I have gone on a rant - but really, it's not about measurements that I am against. I am against making simplistic measurements and taking them as if they are papal doctrines and dogmas. I am against shortcuts. I am against theoretically unfounded measurements and unsound assumptions - measurements that are done for the sake of having a number.
It's time that measurements in the comms planning business were revisited and revalidated.