This is Part 5 of the Sound Check series, exploring the science and structure behind retail music.
Forty years of research. Hundreds of studies across psychology, marketing, neuroscience, and consumer behavior. Different methodologies, different sample sizes, different retail categories, different countries. And they all land in the same place.
Music changes how people behave in commercial environments.
That sentence should feel unremarkable by now. If you have been following this series, you have seen the evidence from multiple angles. Tempo affects pace. Mode affects mood. Volume affects duration. Familiarity affects attention. Genre affects perception. None of this is contested. The directional findings have been replicated enough times that arguing against them requires ignoring a body of work that spans four decades and crosses disciplinary lines.
So the interesting question is not whether music matters. The interesting question is what anyone has done with that knowledge.
The Gap Between Knowing and Controlling
Here is what the research actually established: music is a variable. Milliman demonstrated in 1982 that slower background music correlated with longer time spent in supermarkets and higher sales totals. He replicated the finding in restaurants in 1986. The mechanism was straightforward. Tempo influenced pace, pace influenced dwell time, dwell time influenced purchasing.
That was more than forty years ago.
The relationship between dwell time and purchasing has been confirmed so many times since then that it functions as a baseline assumption in retail analytics. Longer visits correlate with larger transactions. This is not a novel claim. Every retailer with foot traffic data and POS records can see it in their own numbers.
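Checking it takes only a few lines. Here is a minimal sketch in Python, assuming per-visit dwell times have already been joined to POS totals; the file and column names are illustrative, not a real schema.

```python
import pandas as pd

# Hypothetical export: one row per visit, dwell time joined to the POS total
visits = pd.read_csv("visits.csv")

# Pearson correlation between visit duration and spend
r = visits["dwell_minutes"].corr(visits["transaction_total"])
print(f"dwell vs. spend: r = {r:.2f}")

# Mean spend by dwell-time quartile makes the relationship visible at a glance
visits["dwell_quartile"] = pd.qcut(visits["dwell_minutes"], 4,
                                   labels=["Q1", "Q2", "Q3", "Q4"])
print(visits.groupby("dwell_quartile", observed=True)["transaction_total"].mean())
```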
But look at what happened next. Or rather, what did not happen next.
The logical follow-up to Milliman's work would have been: which specific musical parameters produce which specific dwell time effects in which specific retail contexts? Not "does slow music work" but "does 78 BPM in a minor key with acoustic instrumentation and low harmonic complexity produce a measurably different dwell pattern than 82 BPM in a major key with electronic instrumentation and moderate harmonic complexity in this particular store, with this particular customer base, selling this particular product mix?"
That question has never been answered. Not because it is unanswerable, but because the infrastructure to answer it has not existed.
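To be concrete about what answering it would require: log dwell times under two deployed parameter sets, then compare. A hypothetical sketch, with made-up condition labels and an assumed deployment log; no such log exists by default, which is exactly the gap.

```python
import pandas as pd
from scipy import stats

# Hypothetical deployment log: one row per visit, tagged with the
# parameter set that was playing during that visit
log = pd.read_csv("deployment_log.csv")

a = log.loc[log["parameter_set"] == "78bpm_minor_acoustic", "dwell_minutes"]
b = log.loc[log["parameter_set"] == "82bpm_major_electronic", "dwell_minutes"]

# Welch's t-test: is the dwell-time difference bigger than noise?
t, p = stats.ttest_ind(a, b, equal_var=False)
print(f"mean dwell: {a.mean():.1f} vs {b.mean():.1f} min (t={t:.2f}, p={p:.3f})")
```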
Where the Research Stops
Andersson and colleagues noted in 2012 that the majority of research on music and consumer behavior has been conducted in experimental settings, often with undergraduate students as participants. This is not a criticism of the researchers. Controlled experiments are how you isolate variables. That is sound methodology. But it means the findings describe what happens under laboratory conditions with convenience samples, not what happens on a Tuesday afternoon in your store with your actual customers.
Turley and Milliman identified this gap back in 2000. The distance between laboratory findings and real-world application remains wide. Their explanation was simple: no measurement infrastructure exists at the store level to connect specific atmospheric variables to specific commercial outcomes in real time.
Twenty-six years later, that sentence still describes the default condition for nearly every retailer running background music.
Think about what that means. The research tells you music is a powerful behavioral variable. The research also tells you that nobody has built the measurement apparatus to control that variable with any precision in a live commercial environment. The studies get published. The findings get cited. And the playlist keeps running on shuffle.
What Closing the Loop Looks Like
The phrase is deliberately mechanical. A closed loop has four parts: specify, deploy, measure, adjust. Each part depends on the one before it.
Specify means choosing musical parameters with intention. Not picking songs. Not selecting a genre. Defining the compositional characteristics that the research suggests will produce a particular behavioral response in a particular context. Tempo, mode, instrumentation, harmonic complexity, rhythmic density, vocal presence, dynamic range. These are measurable, controllable attributes. They can be prescribed.
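To make "prescribed" concrete, here is a minimal sketch of what a parameter-level specification could look like. The class and field names are illustrative, not any existing system's API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MusicSpec:
    tempo_bpm: int              # e.g. 78
    mode: str                   # "major" or "minor"
    instrumentation: str        # e.g. "acoustic", "electronic"
    harmonic_complexity: float  # normalized 0.0 (simple) to 1.0 (dense)
    rhythmic_density: float     # normalized 0.0 (sparse) to 1.0 (busy)
    vocal_presence: bool
    dynamic_range_db: float

# One prescription, not a playlist
weekday_afternoon = MusicSpec(
    tempo_bpm=78, mode="minor", instrumentation="acoustic",
    harmonic_complexity=0.3, rhythmic_density=0.4,
    vocal_presence=False, dynamic_range_db=8.0,
)
```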
Deploy means putting that specification into a live retail environment. Actual store. Actual customers. Actual conditions.
Measure means capturing commercial outcomes during deployment. Foot traffic patterns, dwell time by zone, transaction data, basket composition, conversion rates. The same metrics retailers already track, correlated against the specific musical parameters that were playing when those metrics were generated.
Adjust means using the correlation data to refine the next specification. If a particular parameter set produced a measurable increase in dwell time but no corresponding increase in conversion, that tells you something. If a different parameter set shortened dwell time but increased average transaction value, that tells you something else. Each deployment generates information that makes the next deployment more precise.
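The whole loop fits in a sketch. Every function below is a hypothetical stub standing in for real systems: an in-store player, a traffic sensor, a POS feed. The adjust heuristic is a placeholder that simply follows the tempo-dwell direction the research established.

```python
import random

def deploy(spec):
    """Deploy: push the specification to the in-store player (stub)."""
    print(f"deploying {spec}")

def measure():
    """Measure: capture outcomes for the window (stubbed with random placeholders)."""
    return {"dwell_minutes": random.uniform(8, 14),
            "conversion_rate": random.uniform(0.20, 0.35)}

def adjust(spec, history):
    """Adjust: refine the next spec from the correlation record.
    Stub heuristic: if dwell time fell last cycle, nudge tempo down."""
    if len(history) >= 2 and history[-1][1]["dwell_minutes"] < history[-2][1]["dwell_minutes"]:
        spec = {**spec, "tempo_bpm": spec["tempo_bpm"] - 2}
    return spec

def closed_loop(spec, cycles):
    history = []  # the correlational record that compounds
    for _ in range(cycles):
        deploy(spec)                      # specify + deploy
        outcomes = measure()              # measure
        history.append((spec, outcomes))
        spec = adjust(spec, history)      # adjust
    return history

closed_loop({"tempo_bpm": 78, "mode": "minor"}, cycles=4)
```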
This is not a complicated concept. Retailers run this kind of iterative measurement on pricing, on visual merchandising, on store layout, on digital advertising. The methodology is familiar. The only thing that changes is the variable being measured.
The Dataset That Compounds
Here is where it gets interesting.
Every hour of specified music deployment, correlated with foot traffic and point-of-sale data, generates information. Not generic information. Specific information about how specific musical parameters affect specific commercial outcomes in a specific store with a specific customer base.
Run it for a week and you have preliminary data. Run it for a month and patterns start to stabilize. Run it for a quarter and you have something that did not exist before: an empirical model of how musical composition variables correlate with commercial performance in your particular retail context.
Run it for a year and the dataset starts to compound. Seasonal patterns emerge. Day-of-week effects become visible. The model gets more granular with every deployment cycle.
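Here is a sketch of how those patterns surface once the log is long enough, assuming a year of deployment records with illustrative column names; nothing here is real data.

```python
import pandas as pd

# Hypothetical log: one row per deployment window, recording the spec that
# was playing and the outcomes measured while it played
log = pd.read_csv("deployment_log.csv", parse_dates=["timestamp"])
log["weekday"] = log["timestamp"].dt.day_name()
log["month"] = log["timestamp"].dt.month

# Day-of-week effects: mean dwell time per parameter set per weekday
print(log.pivot_table(index="weekday", columns="parameter_set",
                      values="dwell_minutes", aggfunc="mean"))

# Seasonal drift: the same spec can perform differently by month
print(log.groupby(["month", "parameter_set"])["conversion_rate"].mean())
```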
And here is the part that matters for competitive positioning: the dataset is non-transferable.
Your competitor can read the same research you read. They can hire the same consultants. They can subscribe to the same music services. What they cannot do is acquire a year of correlational data between musical parameters and commercial outcomes that is specific to your store, your customer demographics, your product mix, your floor plan, your brand positioning. That data does not exist anywhere except inside your measurement apparatus. It cannot be purchased. It cannot be replicated without running the same process in the same environment for the same duration.
Every other form of competitive intelligence in retail can be observed, copied, or bought. Pricing is visible. Visual merchandising can be photographed. Store layouts can be walked. Digital strategies can be reverse-engineered. A compound dataset correlating musical composition with commercial performance in your specific context is invisible to external observation and impossible to reconstruct.
The Only Question Left
This series started with a simple observation: the research on music and commercial behavior is extensive, convergent, and largely ignored in practice. The studies confirm that music is a behavioral variable. Retailers treat it as decoration.
The gap is not in the science. The science is settled enough to act on. The gap is in the measurement infrastructure. Nobody has connected the specification of musical parameters to the measurement of commercial outcomes in a continuous, real-time loop at the individual store level.
That gap is closable now.
The technology to specify music at the parameter level exists. The technology to measure foot traffic, dwell time, and transaction data exists. The analytical methods to correlate atmospheric variables with commercial outcomes exist. None of these components are new. What would be new is connecting them.
If you accept the research, and four decades of convergent findings make it difficult not to, then running music without measuring its commercial impact is running an uncontrolled variable in your retail environment every hour you are open. You are paying for the variable. You are deploying the variable. You are just not measuring what it produces.
The retailers who start measuring first will have a compound dataset that grows more valuable with every deployment cycle while remaining completely invisible to everyone who does not.
That is not a pitch. That is arithmetic.
Related reading: Sound Check: Music Is a Variable, Closing the Loop on Retail Analytics, and How to Measure the ROI of In-Store Music.
Key Takeaway: Running music without measuring its commercial impact is running an uncontrolled variable in your store every hour you are open — close the loop by connecting musical parameters to sales data.
The research is settled. The measurement infrastructure exists. Entuned closes the loop between musical parameters and commercial outcomes in your specific store.
Ask About a Pilot Program