Women traditionally face disadvantages in business. These can be especially pronounced in some societies. There is good reason to want to change this, not least the moral argument. That said, it is often helpful to go beyond the moral argument. What impact does empowering/hiring female micro-entrepreneurs have on society? Is it a zero-sum game, where the men get hurt when the women get hired? Spoiler: it isn't. Beyond its specifics, the paper I am talking about was published in a marketing journal, which raises the question: is it marketing? Who cares? I don't.
Working In The Field
Chatterjee, Chauradia, and Pedada, in a recently published paper, look at whether interventions to hire female micro-entrepreneurs work. Getting data on actual behavior in the field is an aim for most academics. This paper does that, teaming up with a social enterprise, the Digital Empowerment Foundation, to look at the effects of its activities in India. The field data makes the paper so much more impactful.
Of course, few things in life come without some problems. The challenge with data from the field is that it is messy. You rarely capture exactly the data you want. Events happen beyond your control. Plus, your interventions are rarely that well targeted. In an ideal experiment you would randomly assign micro-entrepreneurs individually to different conditions, but often that isn't possible. For example, for practical reasons, training might be group-based in a specific location, e.g., a village. You might randomly assign training modules to different groups, but you can't reasonably ask someone who is hovering around poverty to travel a large distance to take training in a different village just to keep your randomization clean.
There is often a tension between external validity (think generalizability) and internal validity (think control over what is going on in the test). As professors like to teach, it isn't an automatic tradeoff: you can easily have tests that are neither internally nor externally valid. Just because something has no relationship to anything outside the lab doesn't mean it was well done. It is, however, true that externally valid tests tend to give up a bit of control. Still, the upsides are often worth it.
What To Look For?
When assessing a field experiment you have an intervention, i.e., something that was done to the treatment group but not the control group. You want the control group to be as similar as possible to the treatment group before the intervention. If you then see a big difference between the groups after the intervention, you are a happy experimenter. Is this perfect evidence of causation? No. Still, it is an excellent sign.
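To make that logic concrete, here is a minimal sketch of the comparison a field experimenter has in mind: check that the groups look similar before the intervention, then compare how much each group moved afterwards (a simple difference-in-differences). The numbers are made up for illustration; this is not the paper's data or its actual analysis.

```python
# Minimal sketch of the treatment-vs-control logic.
# Hypothetical outcome: customers acquired per micro-entrepreneur, before and after the intervention.
# These numbers are invented for illustration; they are not from the paper.

treatment_before = [12, 15, 11, 14, 13]  # treatment group, pre-intervention
treatment_after  = [18, 21, 17, 20, 19]  # treatment group, post-intervention
control_before   = [13, 14, 12, 15, 12]  # control group, pre-intervention
control_after    = [14, 15, 13, 16, 13]  # control group, post-intervention

def mean(xs):
    return sum(xs) / len(xs)

# Step 1: the groups should look similar before the intervention.
baseline_gap = mean(treatment_before) - mean(control_before)

# Step 2: compare how much each group changed, then difference the changes.
treatment_change = mean(treatment_after) - mean(treatment_before)
control_change = mean(control_after) - mean(control_before)
diff_in_diff = treatment_change - control_change

print(f"Baseline gap (want this close to 0): {baseline_gap:.2f}")
print(f"Treatment change: {treatment_change:.2f}, control change: {control_change:.2f}")
print(f"Difference-in-differences estimate: {diff_in_diff:.2f}")
```

A real analysis would, of course, add standard errors, controls, and checks that the groups really were comparable before the intervention, but the underlying logic is the one above.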
What Did They Find?
So what did the interventions to hire more female micro-entrepreneurs achieve?
The male micro-entrepreneurs seemed to do a better job serving their clients when they had competition with, and the ability to learn from, female colleagues.
The world isn’t zero-sum. It is very often possible to break barriers for those who have traditionally been excluded, here women, and have everyone be better off.
Is It Marketing? Who Cares?
I am not a fan of disciplinary silos. It is trendy to say that in academia, but I genuinely mean it. (See here for more on the importance of range.) One of my biggest criticisms of marketers is that they haven't read outside marketing. I know I bang on about Tobin's Q far too much (see here), but that was an example: scholars go down intellectual dead-ends, like the inappropriate use marketers made of Tobin's Q, when they lack understanding beyond their discipline. In the case of Tobin's Q, marketers not knowing anything about financial accounting caused the problem.
In many ways, you can't really understand your own discipline properly if you don't know something about other disciplines. This explains why psychology, economics, computer science, etc. are so important to marketing programs, but we can't just have an official canon of knowledge if we want marketing to make its own intellectual contribution. To do a study right, you might need knowledge from accounting, finance, HR, climate science, political science, philosophy, or pretty much anything else.
Of course, there are practical challenges when you abolish intellectual walls. If a paper relies on in-depth knowledge that the reviewers simply don't have, that is a problem. (See the Sokal hoax, here, for a journal just letting any old bullshit through.)
Despite the challenges, thinking broadly is important. I also want our work to matter. Life rarely has clear disciplinary boundaries. This paper addresses an important issue, but it is about supporting micro-entrepreneurs. Does this make it entrepreneurship? Does it make it economics, where a lot of similar work happens? Is it marketing? Who cares? I don't, and neither should you.
For more on zero-sum thinking, see here, here, here, and here.
Read: Aindrila Chatterjee, Amit J. Chauradia, and Kiran Pedada (2024), "Rural women microentrepreneurs, consumer acquisition, and value delivery: Evidence from a quasi-experiment in rural India," Journal of the Academy of Marketing Science, published online 23rd September 2024.