Why care about the Dow? Why not?
Just listened to this Planet Money podcast all about hating on the Dow Jones Industrial Average. Gist of it: the Dow calculates its index in a weird (and most certainly nonsensical) way, and is an anachronism that must die. They also say that no market "professional" (quotes added by me) ever talks about the Dow, and that measures like the S&P 500 and the Wilshire 5000 are far more sensible.
This strikes me as a criticism that distracts from the real issue, which is whether one should be using any stock market index as an indicator of anything. Sure, the Dow is "wrong" and the S&P 500 is more "right" in that it weights by market cap. Whatever. Take a look at this:
And here's the comparison with the S&P 500 in terms of fluctuations:
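If you want to make this kind of comparison yourself, here's a minimal sketch of how one might do it; the CSV file and its column names (date, dow, sp500) are assumptions for illustration, not data from this post.

```python
import pandas as pd

# Hypothetical file of daily closing values; the filename and the columns
# "date", "dow", and "sp500" are assumptions for this sketch.
prices = pd.read_csv("index_closes.csv", parse_dates=["date"], index_col="date")

# Daily percent change ("fluctuations") for each index
returns = prices[["dow", "sp500"]].pct_change().dropna()

# How tightly do the two indices move together day to day?
print(returns.corr())

# For a longer-run comparison, rebase both indices to 100 at the start
rebased = 100 * prices[["dow", "sp500"]] / prices[["dow", "sp500"]].iloc[0]
print(rebased.tail())
```

Daily percent change is the natural scale for the comparison, since the two indices sit at very different absolute levels.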
So I think the onus is on the pundits to demonstrate that whatever distinctions there are between the S&P and the Dow actually matter for predicting anything about the economy. Good luck with that.
Of course, as an academic, far be it from me to decry the importance of doing something the right way, even if it has no practical benefit :). Still, in the podcast, they mock the way the Dow touts its long historical dataset as an advantage, one that outweighs its somewhat silly method of calculation. This strikes me as a bit unfair. Given the very strong correlation between the Dow and the S&P 500, this long track record is a HUGE asset, allowing one to make historical inferences way back in time (again, to the extent that any of this stuff has meaning anyway).
I think there are some lessons here for science. It is of course important to calculate the right metric, e.g., TPM vs. FPKM. But we should not lose sight of the fact that ultimately, we need these metrics to reflect meaning. If the correspondence between a new "right" metric and an older, flawed one is very strong, then there's no a priori reason to discount results computed with the older metric, especially if the differences don't change any *scientific* conclusions. Maybe that's obvious, but I feel like I see this sort of thing a lot.
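To make the TPM vs. FPKM point concrete, here's a minimal sketch of both metrics computed from made-up read counts and gene lengths (the numbers are purely illustrative); within a single sample the two differ only by a scale factor, which is the kind of tight correspondence I'm talking about.

```python
import numpy as np

def fpkm(counts, lengths):
    """Fragments Per Kilobase of transcript per Million mapped reads (one sample)."""
    return counts * 1e9 / (lengths * counts.sum())

def tpm(counts, lengths):
    """Transcripts Per Million (one sample)."""
    rate = counts / lengths
    return rate / rate.sum() * 1e6

# Made-up read counts and gene lengths (bp) for five genes
lengths = np.array([2000, 1000, 5000, 800, 3000], dtype=float)
counts  = np.array([ 500,  300, 1200,  50,  900], dtype=float)

f = fpkm(counts, lengths)
t = tpm(counts, lengths)

# Within one sample, TPM is just FPKM rescaled to sum to a million,
# so the two are perfectly correlated here.
print(np.corrcoef(f, t)[0, 1])  # 1.0
print(t.sum())                  # 1e6 by construction
```

The two metrics diverge in interesting ways only when comparing across samples, where TPM's constant per-sample total makes proportions easier to compare; that's the usual argument for preferring it.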
Thursday 5 January 2017
Saturday 5 November 2016
Buying guides and the tyranny of the best
Ah, the scourge of the internet! Once upon a time, we would be satisfied just to get a decent taco in NYC. Now, unless you get the VERY best of anything as rated by the internet, you feel like you're somehow missing out.
Same goes for everything from chef's knives to backpacks to whatever it is (I recommend The Sweethome as a fantastic site with buying guides for tons of items). Funnily enough, I think we have ended up with this problem for the same reason that people complain about bar graphs: because we fail to show the data points underlying the summary statistic. Take a look at these examples from this paper:
Most buying guides essentially just report the max (as opposed to the mean in most scientific bar graphs), but the problem is the same. The max is most useful when your distribution looks like this:
But reporting the max is a far less useful statistic when your distribution looks like this or this:
What I mean by this is that when we read an online shopping guide, we assume that their top pick is WAY better than all the other options, a classic case of the outlier distribution I showed first. (This is why we feel like assholes for getting the second best anything.) But for many things, the top-scoring item isn't that much better than the second best. Or maybe the third best. Like this morning, when I was thinking of getting a toilet brush and instinctively went to look up a review. Maybe some toilet brushes are better than others. Maybe there are some with a fatal flaw that means you really shouldn't get them. But I'm guessing that most toilet brushes basically are just fine. Of course, that doesn't stop The Sweethome from providing me a guide for the best toilet brush: great, much appreciated. But if I just go to the local store and grab a toilet brush, I'm probably not too far off. Which is to say that the distribution of "scores" for toilet brushes is probably tightly packed and not particularly differentiated: there is no outlier toilet brush.
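To put some numbers on that intuition, here's a quick simulation (the scores are entirely made up) of the three shapes of distribution this post keeps coming back to: one with a true outlier, one that's tightly packed, and one that's bimodal (good products plus duds).

```python
import numpy as np

rng = np.random.default_rng(0)

# Entirely made-up "product scores" for three shapes of distribution
outlier = np.append(rng.normal(5.0, 0.3, 29), 9.5)   # one product is way better
packed  = rng.normal(7.0, 0.2, 30)                    # everything is about the same
bimodal = np.append(rng.normal(7.0, 0.3, 20),         # good products...
                    rng.normal(2.0, 0.3, 10))         # ...plus a cluster of duds

for name, scores in [("outlier", outlier), ("packed", packed), ("bimodal", bimodal)]:
    ranked = np.sort(scores)
    best, second = ranked[-1], ranked[-2]
    # How much does getting "the best" buy you over the runner-up?
    print(f"{name:8s} best={best:.2f} second={second:.2f} gap={best - second:.2f}")
```

Only in the outlier case does the top pick carry much of a premium over the runner-up; report just the max and all three cases look the same.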
While there may be cases where there is truly a clear outlier (like the early days of the iPod or Google (remember AltaVista?)), I venture to say that the distribution of goodness most of the time is probably bimodal. Some products are good and roughly equivalent, some are duds. Often the duds will have some particular characteristic to avoid, like when The Sweethome says this about toilet brushes:
We were quick to dismiss toilet brushes whose holders were entirely closed, or had no holders at all. In the latter category, that meant eliminating the swab-style Fuller brush, a $3 mop, and a very cheap wire-ring brush.

I think this sort of information should be at the top of the page, so your buying guide could say "Pretty much all decent toilet brushes are similar, but be sure to get one with an open holder. And spend around $5-10."
Then again, when you read these guides, it often seems that there's no other rational option than their top choice, which they portray as far and away the best based on their extensive testing. But that's mostly because they've just spent like 79 hours with toilet brushes and are probably magnifying subtle distinctions invisible to the majority of people, and have already long since discarded all the duds. It's like they did this:
Now this is not to say those smaller distinctions don’t matter, and by all means get the best one, but let’s not kill ourselves trying to get the very best everything. After all, do those differences really matter for the few hours you’re likely to spend with a toilet brush over your entire lifetime? (And how valuable was the time you spent on the decision itself?)
All of this reminds me of a trip I took to New York City to hang out with my brother a few months back. New York is the world capital of “Oh, don't bother with these, I know the best place to get toilet brushes”, and my brother is no exception. Which is actually pretty awesome—we had a great time checking out some amazing eats across town. But then, at the end, I saw a Haagen Dazs and was like "Oh, let's get a coffee milkshake!". My brother said "Oh, no, I know this incredible milkshake place, we should go there." To which I said, "You ever had a coffee milkshake from Haagen Dazs? It's actually pretty damn good." And good it was.
Sunday 21 August 2016
Review of the New Atlantis piece about saving science
Check out this New Atlantis piece by Daniel Sarewitz, a long, rambling essay about how to save science. From what? From itself, apparently. And here's the proposed solution:
To save the enterprise, scientists must come out of the lab and into the real world.
To expand upon this briefly: Sarewitz catalogs many ills that befall our current scientific enterprise, and claims that these ills all stem from this "lie" from Vannevar Bush:
Scientific progress on a broad front results from the free play of free intellects, working on subjects of their own choice, in the manner dictated by their curiosity for exploration of the unknown.

The argument is that scientists, left to our own devices and untethered to practical applications (and unaccountable to the public), will drift aimlessly and never produce anything of merit to society. Moreover, the science itself will suffer from the careerism of scientists when divorced from "reality". Finally, he advocates that scientists, in order to avoid these ills, should be brought into direct relationship with outside influences. He makes his case using a set of stories touching on virtually every aspect of science today, from the "reproducibility crisis" to careerism to poor-quality clinical studies to complexity to big data to model organisms—indeed, it is hard to find an issue with science that he does not ascribe to the lack of scientists focusing on practical, technology-oriented research.
Here's my overall take. Yes, science has issues. Yes, there's plenty we can do to fix it. Yes, applied science is great. No, this article most definitely does not make a strong case for Sarewitz's prescription that we all do applied science while being held accountable to non-scientists.
Indeed, at a primary level, Sarewitz's essay suffers from the exact same problem that he says much modern science suffers from. At the heart of this is the distinction between science and "trans-science", the latter of which basically means "complex systems". Here's an example from the essay:
For Weinberg, who wanted to advance the case for civilian nuclear power, calculating the probability of a catastrophic nuclear reactor accident was a prime example of a trans-scientific problem. “Because the probability is so small, there is no practical possibility of determining this failure rate directly — i.e., by building, let us say, 1,000 reactors, operating them for 10,000 years and tabulating their operating histories.” Instead of science, we are left with a mélange of science, engineering, values, assumptions, and ideology. Thus, as Weinberg explains, trans-scientific debate “inevitably weaves back and forth across the boundary between what is and what is not known and knowable.” More than forty years — and three major reactor accidents — later, scientists and advocates, fully armed with data and research results, continue to debate the risks and promise of nuclear power.
I rather like this concept of trans-science, and there are many parts of science, especially biomedical science, in which belief and narrative play bigger roles than we would like. This is true, I think, in any study of complex systems—including, for example, the study of science itself! Sarewitz's essay is riddled with narratives and implicit beliefs overriding fact, connecting dots of his choosing to suit his particular thesis and ignoring evidence to the contrary.
Sarewitz supports his argument with the following:
- The model of support from the Department of Defense (DOD), which is strongly tied to outcomes, provides more tangible benefits.
- Cancer biology has largely failed to deliver cures for cancer.
- Patient advocates can play a role in pushing science forward by holding scientists accountable.
- A PhD student made a low-cost diagnostic inspired by his experiences in the Peace Corps.
A full line-by-line rundown of the issues here would simply take more time than it's worth (indeed, I've already spent much more time on this than it's worth!), but in general, the major flaw in this piece is in attempting to draw clean narrative lines when the reality is a much more murky web of blind hope, false starts, and hard-won incremental truths. In particular, we as humans tend to ascribe progress to a few heroes in a three-act play, when the truth is that the groundwork of success is a rich network of connections with no end in sight. In fact, true successes are so rare and the network underlying them so complex that it's relatively easy to spin the reasons for their success in any way you want.
Let me give a few examples from the essay here. Given that I am most familiar with biomedical research (and that biomedical research seems to be Sarewitz's most prominent target), I'll stick with that.
First, Sarewitz spills much ink in extolling the virtues of the DOD results-based model. And sure, look, DOD clearly has an amazing track record of funding science projects that transform society—that much is not in dispute. (That is their explicit goal, and so it is perhaps unsurprising that they have many such prominent successes.) In the biomedical sciences, however, there is little evidence that the DOD style of research produces benefits. In the entire essay, there is exactly one example given, that of Herceptin:
DOD’s can-do approach, its enthusiasm about partnering with patient-advocates, and its dedication to solving the problem of breast cancer — rather than simply advancing our scientific understanding of the disease — won Visco over. And it didn’t take long for benefits to appear. During its first round of grantmaking in 1993–94, the program funded research on a new, biologically based targeted breast cancer therapy — a project that had already been turned down multiple times by NIH’s peer-review system because the conventional wisdom was that targeted therapies wouldn’t work. The DOD-funded studies led directly to the development of the drug Herceptin, one of the most important advances in breast cancer treatment in recent decades.

This is blatantly deceptive. I get that people love the "maverick", and the clear insinuation here is that DOD, together with patient advocates, played that role, upending all the status-quo eggheads at NIH to Get Real Results. Nice story, but false. A quick look at Genentech's Herceptin timeline shows that many of the key results were in place well before 1993—in fact, they started a clinical trial in 1992! Plus, look at the timeline more closely, and you will see many seminal, basic science discoveries that laid the groundwork for Herceptin's eventual discovery. Were any of these discoveries made with a mandate from above to "Cure breast cancer by 1997 or bust"?
Overall, though, it is true that cancer treatment has not made remotely the progress we had hoped for. Why? Perhaps somewhat because of lack of imagination, but I think it's also just a really hard problem. And I get that patient advocates are frustrated by the lack of progress. Sorry, but wishing for a cure isn't going to make it happen. In the end, progress in technical areas is going to require people with technical expertise. Sarewitz devotes much of his article to the efforts of Fran Visco, a lawyer who got breast cancer and became a patient advocate, demanding a seat at the table for granting decisions. Again, it makes a nice story for a lawyer with breast cancer to turn breast cancer research on its head. I ask: would she take legal advice from a cancer biologist? Probably not. Here's a passage about Visco:
It seemed to her that creativity was being stifled as researchers displayed “a lemming effect,” chasing abundant research dollars as they rushed from one hot but ultimately fruitless topic to another. “We got tired of seeing so many people build their careers around one gene or one protein,” she says. Visco has a scientist’s understanding of the extraordinary complexity of breast cancer and the difficulties of making progress toward a cure. But when it got to the point where NBCC had helped bring $2 billion to the DOD program, she started asking: “And what? And what is there to show? You want to do this science and what?”

There is some truth to the fact that researchers chase careers, fame, and fortune. So they're human, so what? Trust me, if I knew exactly how to cure cancer for sure, I would do it right now. It's not for lack of desire. Sometimes that's just science: real, hard science. Money won't necessarily change that fact, no matter how many zeros are behind the dollar sign.
“At some point,” Visco says, “you really have to save a life.”
Note: I have talked with patient advocates before, and many of them are incredibly smart and knowledgeable and can be invaluable in the search for cures. That said, I think it's a big and unwarranted leap to say that they would know how best to direct the research enterprise.
Also, I think it's unfair to judge the biomedical enterprise solely by cancer research. Cancer is in many ways an easy target: huge funding, limited (though non-trivial) practical impact, lots of low-quality research (sorry, but it's true). But there are many examples of success in biomedical science as well, including in cancer. Consider HIV, which has been transformed from a death sentence into a far more manageable disease. Or Gleevec. Or whatever. Many of these had no DOD involvement. And most of them relied on decades of blue-skies research in molecular biology. Sure, out-of-the-box ideas have trouble gaining traction; the reasons for that should be obvious to anyone. Still, even our current system tolerates them: now-fashionable ideas like immunotherapy for cancer managed to subsist for decades even when nobody was interested.
Oh, and to the point about the PhD student's low-cost diagnostic: I of course wish him luck, but if I had a dollar for every press release about a low-cost diagnostic developed in the lab, I'd have, well, a lot of dollars. :) And honestly, there's plenty of research going on in this and related areas, and certainly not all of it comes from DOD-style entities. Again, I would hardly take this story as a rationale for fundamentally changing the entire biomedical enterprise.
Anyway, to sum up, my point is that a fairer reading of the situation makes it clear that Sarewitz's arguments are essentially just opinion, with little if any concrete evidence to back up his assertion that curiosity-driven research is going to destroy science from within.
Epilogue:
Okay, so having spent a couple of hours writing this, I'm definitely questioning why I bothered spending the time. I suspect most scientists would already find most of Sarewitz's piece wrong for much the same reasons I did, and I doubt I'll convince him or his editors of anything, given their responses to my tweets:
I'm not familiar with The New Atlantis, and I don't know if they are some sort of scientific Fox News equivalent or what. I definitely get the feeling that this is some sort of icky political agenda thing. Still, if anyone reads this, my hope is that it may play some role in helping those outside science realize that science is just as hard and messy as their lives and work are, but that we're working on it and trying the best we can. And most of us do so with integrity, humility, and with a real desire to advance humanity.
Update, 8/21/2016: Okay, now I'm feeling really dumb. The New Atlantis is indeed some sort of scientific Fox News: it's supported/published by the Ethics and Public Policy Center, which is clearly a conservative "think" tank. Sigh. Bait taken.