

A ‘Lab in the Field’ Approach to Evidence-Based Management


Simplified experimentation in the field may be the best of both worlds, provided its results are viewed with the proper perspective.

Which would you rather base major business decisions on: interventions that merely seem intuitive and trendy, or interventions backed by real evidence? If you chose the latter, you can understand the ascent of experimental techniques for determining whether something truly causes something else – such as randomised controlled trials (RCTs) or A/B tests. In theory, randomisation-based statistical methods are the gold standard for estimating real impact: the difference between what happens when an intervention is adopted and what would have happened had it not been.
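To make that logic concrete, here is a minimal, purely illustrative sketch in Python. The data are synthetic and the numbers come from no real study; the point is only that under random assignment, the control group stands in for the counterfactual, so a simple difference in group means estimates the causal effect.

```python
# Illustrative only: synthetic data, not results from any real experiment.
# Under random assignment, the control group approximates the counterfactual,
# so the difference in group means estimates the average causal effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

n = 200                                  # hypothetical number of participants
treated = rng.integers(0, 2, size=n)     # coin-flip assignment: 1 = intervention, 0 = control
true_effect = 0.5                        # effect baked into the synthetic data
outcome = rng.normal(loc=3.0, scale=1.0, size=n) + true_effect * treated

effect_estimate = outcome[treated == 1].mean() - outcome[treated == 0].mean()
t_stat, p_value = stats.ttest_ind(outcome[treated == 1], outcome[treated == 0])

print(f"Estimated effect: {effect_estimate:.2f} (p = {p_value:.3f})")
```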

Still, there is a significant sticking point in the adoption of RCTs. They are designed to elicit rigorous and unambiguous inference in specific contexts, but their execution is often time-consuming and expensive, and the answers are often incomplete given the complexity of the questions behind them. Often, the extent of that complexity comes to light only after the experiment, as leaders strain to turn the nuanced, sometimes even contradictory results revealed through RCTs into actionable insights. Noisy data gleaned from the field only makes this problem worse. Given the time and expense involved in RCTs, receiving vague results can be disappointing indeed.

So what is the alternative? The most common approach in management and the social sciences is to rely on an artificial lab environment that allows researchers to run small-scale yet highly controllable experiments in a protected setting. What lab experiments may lack in realism, they make up for in specificity. They are ideal for testing partial theories and targeted hypotheses cheaply and relatively quickly. On the other hand, their limited scope and artificial context mean they usually provide only indirect evidence on the expected value of an intervention. It’s like relying on seemingly reasonable pen-and-paper sketches of a plane when attempting to get a real one to take off.

We think that in many cases, there is a middle ground between full-blown RCTs and highly artificial lab-based studies: evidence-based management that brings a lab-like experimental approach into the field as part of a more open-ended process of exploration. When big questions with murky implications are at play, contained field experiments with finely honed parameters can produce results that serve as points of light in pitch darkness – not enough to stride confidently ahead, but sufficient to edge towards the next bit of illuminating information. Think of it as a wind tunnel for testing model planes: not the real thing, but superior to paper sketches alone and much cheaper than failing with the real thing.

The impact of teaching design thinking to middle-school students

Our recent study with an Indian educational NGO demonstrates how this can work. Our article, “Does design thinking training increase creativity? Results from a field experiment with middle-school students”, describes a “lab in the field” study conducted in partnership with Agastya International Foundation, a Bangalore-based NGO. Agastya focuses on science education programmes, and one of its key goals is to teach creative problem-solving. The NGO’s leadership team was considering adding design thinking to the Agastya curriculum as a complement to its science-based education, but hesitated to take such a step without at least some evidence indicating whether and how it might be effective.

We spent several months exploring whether a full-scale RCT might make sense, but our pilot studies made it clear that it would be too costly and impractical to execute in the given context. We also considered the traditional “lab” approach involving the manipulation of underlying psychological mechanisms and the recruitment of typical study participants from universities or online platforms such as Mechanical Turk. However, these methods seemed far removed from Agastya’s context. Eventually, we settled on a hybrid: a lab-like study conducted in a fairly controlled environment over a short period of time, but using a limited set of design thinking training materials and recruiting participants from the actual pool of students who come to Agastya’s programmes.

We integrated our experiment into a workshop Agastya was conducting during a school holiday week. The study would still be run in the field, but it would take four days rather than the weeks or even months that a full-blown RCT would have required. Students came from nearby villages to Agastya’s campus in Kuppam (near Bangalore), where they were randomly assigned to three “treatment groups” and a “control group” of 50-75 students each.
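As a purely hypothetical illustration of this kind of randomisation – the roster size, seed and group labels below are ours, not the study’s actual procedure – the assignment could look something like this:

```python
# Hypothetical sketch of randomly assigning a student roster to four groups.
# Roster, seed and group labels are illustrative, not the study's actual code.
import random

students = [f"student_{i:03d}" for i in range(1, 251)]   # e.g. 250 participants
groups = ["treatment_1", "treatment_2", "treatment_3", "control"]

random.seed(7)                # fixed seed so the assignment can be reproduced and audited
random.shuffle(students)      # random order removes any systematic pattern in the roster

assignment = {g: [] for g in groups}
for idx, student in enumerate(students):
    assignment[groups[idx % len(groups)]].append(student)   # round-robin over the shuffled list

for g, members in assignment.items():
    print(f"{g}: {len(members)} students")
```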

The treatment groups completed three design thinking exercises. These were selected (with input from a panel of Agastya instructors and colleagues in our field) from an extensive workbook on design thinking developed by another NGO, Design for Change. The aim was to extract the “active ingredients” – the portions of the workbook that were most likely to influence outcomes like creativity and confidence. The control group underwent Agastya’s standard experiential science instruction. At the end of the four days, all four groups were given paper-and-pencil tests of creativity and confidence.

An analysis of the experimental data revealed that the design thinking students generated a greater number of ideas than the control group, even though the average originality of each idea was somewhat lower. Interestingly, design thinking also appeared to boost students’ confidence more than the regular Agastya courses – especially for the female students.
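For readers curious about what such an analysis might involve, here is a sketch of the kind of comparison described above. The file name, column names and group labels are assumptions for illustration only; they are not the study’s actual dataset or code.

```python
# Illustrative comparison of two outcomes (number of ideas, mean originality)
# between the design thinking and control groups. The dataset and column
# names are hypothetical, not the study's actual data.
import pandas as pd
from scipy import stats

df = pd.read_csv("creativity_scores.csv")   # assumed columns: group, n_ideas, originality

treat = df[df["group"] == "design_thinking"]
ctrl = df[df["group"] == "control"]

for outcome in ["n_ideas", "originality"]:
    diff = treat[outcome].mean() - ctrl[outcome].mean()
    t_stat, p_value = stats.ttest_ind(treat[outcome], ctrl[outcome])
    print(f"{outcome}: difference in means = {diff:.2f}, p = {p_value:.3f}")
```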

Instead of answers, better questions

Viewed one way, the results of our “lab in the field” experiment are quite remarkable. With a modest sample size and a relatively weak intervention conducted over four days, we saw marked improvements in the sheer volume of creative output generated by the students. We might have seen even more impressive results with greater scale and duration, though the only way to be sure would be to conduct a more extensive follow-up study. By the same token, one could hypothesise that with more time in the classroom, the seemingly suppressive effect of design thinking on originality might get nullified, and we could see better results on that aspect of creativity too.

Detractors of our method will also find plenty of fodder here. Critics would deem it rash to draw grand conclusions from a one-off exercise that lasted four days and measured outcomes in very specific ways that – even though drawn from academic literature – are admittedly far from perfect. In addition, it remains unclear whether the effects we observed might wear off rather than strengthen with time, and whether design thinking training would work as well in the hands of other instructors (Agastya’s trainers are a very gifted and motivated lot).

We ourselves make no far-reaching claims for our study. Whether Agastya should integrate design thinking training, let alone use it to replace parts of its current curriculum, is a question that cannot be answered by our four-day trial alone. However, our “lab in the field” experiment was not designed as the final word to put Agastya’s questioning to rest – but more modestly to advance and enrich the overall conversation around it. With any luck, it suggested how design thinking training could help while pointing to its potential pitfalls – information that can be used as grist to the decision-making mill. Further exploration will no doubt be needed, but at least the direction of that exploration is clearer. The management decision is now a bit more informed, and hence a bit less like throwing a dart in the dark.

We believe the “lab in the field” paradigm could be applied to all sorts of organisational problems. For example, a question like “Will Agile teams work for us?” could be explored with quick-fire hackathons in which teams in different Agile configurations work on the same mini-project. If some configurations consistently outperform others, that points to which Agile set-ups – and which types of projects and contexts – are worth piloting more broadly.

Ultimately, hybrid lab/field experiments may represent the best of both worlds in at least some contexts. Grounded in a mindset of evidence-based management, the approach may be better than pure lab-based methods for deriving solutions that match the complexity of today’s organisational problems. But the findings from any single study need to be interpreted in their proper perspective rather than taken as conclusive on their own. At the end of the day, this is one more tool for making decisions based on real data rather than relying just on either intuition or hype.

Edited by:

Benjamin Kessler
