Talk:Focus@Will

From Wikipedia, the free encyclopedia

Untitled comment

This is neuroscience based you say. Provide sources. — Preceding unsigned comment added by 109.190.115.38 (talk) 21:04, 10 December 2014 (UTC)

Borderline AfD

Just now I came close enough to nominating this article for AfD that I reviewed the deletion manual.

The situation is that this company had some venture capital to throw around, and the idea sounds cool, so they enjoyed a flurry of press coverage circa 2013.

From what I can see, it's been mostly crickets since.

One of the claims that this service is more than just another tiny menu of audio wallpaper is that its neurological gains have been put to the scientific test.

Well, yes, you can at least accurately say that this slate of sandpapered audio wallpaper has been run through the scientific sausage machine.

But what does that really mean? Not much, it appears.

CONFLICT OF INTEREST DISCLOSURE: At the time of conducting and reporting on these experiments, the author held a contract position as the Science Director at Focus@Will, the streamlined music service used in the experiments.

Abstract:

The tested form of streamlined music, which was tested primarily by listeners who felt they benefited from this type of music, significantly outperformed plain music on measures of perceived focus, task persistence, precognition, and creative thinking, with borderline effects on mood.

Translation #1: That's what you need to write when you're pushing something second-rate to arXiv and you ever want to publish in a reputable journal again.

Because translation #2 (reformatted for clarity):

  • 909 participants completed the first testing session
  • of these, 157 completed Testing Sessions Two and Three by our deadline
  • 10 of these were not able to follow instructions about background music listening and were removed from all analyses
  • of those remaining, 50 completed Testing Session Four by our deadline

Of the 50 fully qualified participants, 40 completed the cross-over component in one order, and only 10 completed it in the opposite order.
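Just running the arithmetic on the counts quoted above (the stage labels are my paraphrase of the paper's sessions):

```python
# Attrition arithmetic on the paper's reported counts (labels paraphrased).
stages = [
    ("completed Testing Session One", 909),
    ("also completed Sessions Two and Three", 157),
    ("...and followed the music instructions", 147),  # 157 minus the 10 removed
    ("also completed Testing Session Four", 50),
]
for label, n in stages:
    print(f"{label}: {n} ({100 * n / 909:.1f}% of the original pool)")
```

Barely one participant in eighteen survives to the end of the protocol; the other seventeen contribute nothing to the headline result.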

So you have a not-so-big pile of numbers, and you can of course feed these numbers through the p-machine. But why even bother? At most you've got a pool of twenty subjects remaining if you analyze a balanced cross-over, and it's not even clear how to winnow the 40 on one side of the cross-over down to a pool of ten comparable to the other side. If you don't do this, your p values could easily pertain to cross-over order rather than to treatment/control.
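To make the order confound concrete, here's a minimal sketch with invented scores: a pure practice effect (everyone scores higher in their second music week), zero true treatment effect, and the study's actual 40/10 split:

```python
# Sketch with invented numbers: how an unbalanced cross-over lets a pure
# order/practice effect masquerade as a treatment effect. No real data
# from the paper is used here.

def pooled_means(n_ab, n_ba, week1=10.0, week2=12.0):
    """Every subject scores `week1` in their first music week and `week2`
    in their second (a practice effect only; the treatment does nothing).
    Returns (mean_A, mean_B) pooled over both cross-over orders."""
    # A is heard first by the AB group and second by the BA group.
    total_a = n_ab * week1 + n_ba * week2
    total_b = n_ab * week2 + n_ba * week1
    n = n_ab + n_ba
    return total_a / n, total_b / n

# The study's split: 40 subjects ran A-then-B, only 10 ran B-then-A.
mean_a, mean_b = pooled_means(40, 10)
print(round(mean_b - mean_a, 3))  # 1.2, a spurious "treatment effect"

# A balanced split cancels the order effect exactly.
mean_a, mean_b = pooled_means(25, 25)
print(round(mean_b - mean_a, 3))  # 0.0
```

With 40 subjects on one side and 10 on the other, the naive pooled comparison manufactures an effect out of nothing but week-two practice gains; any p value computed on it is testing order as much as treatment.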

To put a squishy point on this, this paper is a high grade of horse shit: it does everything right except the one thing it needed to do, which was dump the entire dataset into the round file because of unsalvageable participation effects.

Here's the design, in four one-week sessions:

  • no background music
  • split A/B
  • split B/A
  • no background music

Now that I look at this, it's not even a correct design. What it really should have been:

  • no background music
  • split A/B
  • no background music
  • split B/A
  • no background music

What they ended up with:

  • 40 people: none, A, B, none
  • 10 people: none, B, A, none

If they had done it right:

  • none, A, none, B, none
  • none, B, none, A, none
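The difference the extra no-music legs make can be spelled out mechanically (a sketch; "A", "B", and "none" are just labels for the legs above):

```python
# The two schedules above, as data. The check: in the five-leg design,
# every music week is immediately preceded by a no-music week, so each
# condition gets its own clean baseline instead of inheriting carry-over
# from the other condition.
actual = ["none", "A", "B", "none"]
proposed = ["none", "A", "none", "B", "none"]

def clean_baselines(schedule):
    """Music weeks whose immediately preceding week was 'none'."""
    return [week for prev, week in zip(schedule, schedule[1:])
            if week != "none" and prev == "none"]

print(clean_baselines(actual))    # ['A']: B only ever follows A
print(clean_baselines(proposed))  # ['A', 'B']
```

In the four-leg schedule, the second music condition is never measured against a no-music baseline at all; whatever it shows is entangled with carry-over from the first condition.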

By the time the dust settled, only 10 usable subjects had gone directly from no music to their own streaming music week over week (many started to go this route, but then quickly dropped out).

Presumably what happened is that people who spent a week getting used to their own streaming preferences and were then switched to the weird music condition said to themselves "what is this noise" and quit the study, whereas the weird noise condition was more palatable when approached from the no-music condition.

This particular drop-out number is cleverly not reported.

The authors know this is a problem, and they rush to get out in front of this, before my own terrifying interpretation springs to mind:

We can assume these participants were largely motivated to continue testing as a result of their desire to obtain free access to the streamlined music service, but it is not clear why this should be more likely to be the case for participants who heard streamlined music first.

Ahh, they are just after more freebies ... no need to think further.

But who would make such a damaging confession in those words, if not to distract the audience from an even more apt and damaging interpretation: that people who listen to their own music first find the product, in direct comparison, too unpleasant to endure for even one week?

It may be that people choose pleasure over productivity, given a harsh step function from one to the other, just as the step function from pizza, coke, and chocolate cake to tofu and garbanzo bean salad might not be the most effective dietary intervention.

But the study surely didn't want to underline that switching from the music you love to their weirdly denatured music salad will subjectively feel like going from chocolate cake to garbanzo beans.

There's this group of hipsters with a fresh pile of VC moola and a weird hypothesis about music and productivity, with a professional neurologist on board:

  • many hipster-facing citations circa 2013

Actual neurological results once the dust settled:

  • crickets; no credible non–first party citations whatsoever (that I could find this morning)

As it seems to me, what credible citations exist were all written in a flurry of speculative presupposition, aka famous for possibly being poised to possibly become famous for something that would definitely be cool, if true.

So where does this finally land on the AfD spectrum? I have no idea. I've never been able to coherently articulate the precise Wikipedia boundary between OR and pertinent critical reflection. — MaxEnt 16:22, 3 April 2021 (UTC)

CODA: I'm not a practicing scientist, but I would guess that where you can get away with four legs instead of five (as I proposed above) is when the subjects have difficulty discerning which study leg they are participating in — say with a white pill designed to impact a blood marker that won't lead to an immediate sense of well-being. But even here you need high confidence in your belief that the effect latency is such that you can proceed to the B/A leg with no washout after the A/B leg. — MaxEnt 16:29, 3 April 2021 (UTC)