
Entries in science communication (5)

Thursday
Aug 25, 2011

Scientific thought of the day

Richard Dawkins: "The power of a scientific theory may be measured as a ratio: the number of facts that it explains divided by the number of assumptions it needs to postulate in order to do the explaining."

Sunday
Apr 03, 2011

Why Economists should never be in charge of science policy

In the Economist this week is an article about the growing problem of antibiotic resistance. The issue is quite straightforward: the overuse of antibiotics is creating increasing levels of antibiotic resistance in pathogenic bacteria. This means that higher and longer doses of antibiotics are required to deal with infections, with more adverse effects and increased mortality. The Economist has quite a good outline of the effects this could have, such as changing the risk level of surgery.

Where the Economist fails is in what to do about the problem. In the article, and even more so in the accompanying editorial, the Economist comes down heavily in favour of simply inventing new antibiotics. The obvious routes, such as banning routine agricultural use of antibiotics, increasing regulation of doctors' prescriptions and restricting last-resort antibiotics to specialists, are more or less dismissed as "against human nature".

Invent new antibiotics: it sounds so simple, why hasn't someone tried it yet? The issue, according to the Economist, is not that it is a very difficult scientific problem with all the low-hanging fruit already captured ("early researchers were just lucky"); no, the issue is the structure of economic incentives. Those poor multinational pharmaceutical corporations: there just isn't the financial reward available to the company that invents the next penicillin. So what we need to do is put together a big Prize, and then all the companies will say "oh, maybe we should cure MRSA now". What a naive, money-orientated view of the world.

By endorsing the Prize option, the Economist implicitly buys into several fallacies of medical research. Firstly, it assumes that the multinational pharmaceutical corporations are centres of medical innovation. Not true. In fact, it is difficult to think of a single groundbreaking drug that was developed by a multinational pharmaceutical company. The centres of innovation are undoubtedly the universities. The typical course of a drug is decades of research in universities, working out the basic mechanisms and activity, followed by the generation of a spin-off company to perform the grunt-work of finding the optimal structure, delivery system and dose, followed by Big Pharma buying up the spin-off company and performing the clinical trials. Multinational pharmaceutical companies are simply not good at basic research or innovation; what they are good at is business: running clinical trials, GLP production, regulation, distribution and marketing. How would a big fat prize to Big Pharma, going straight into the pockets of the CEOs and stockholders, filter down to research scientists in universities? It just does not fit the reality of medical research.

Secondly, even if you modified the Prize option so that the benefit went to the scientists, what good would that do? Medical researchers are already working as hard as they can; the unlikely possibility of winning the research jackpot can't make us work any harder. I often get ribbed by friends for daring to suggest that people strive for excellence in their jobs for more than purely financial motivations. To me it is obvious that people take joy in success, and that independence and happiness can compensate for a bucket-load of cash. Maybe I am wrong. Maybe merchant bankers and corporate lawyers really would move from London to become sanitation workers in Uzbekistan for a 1% increase in after-tax pay. But that is not the way medical research works. People don't enter medical research for the decades of training, long hours, poor pay or job insecurity. No, people enter medical research because they are interested in solving scientific or medical problems. It can be altruism, a burning desire to help others, or simple self-interest, the enjoyment of being first to solve a difficult problem. The type of person who is already driving themselves 60 hours a week for a salary that a banker would laugh at simply cannot ramp up the effort for an unrealistic chance at a prize.

No, a Prize option is completely the wrong formula. If the Economist wants to encourage the discovery of new antibiotics, it should advocate putting more money into the tested method of peer-reviewed grants. This ensures that the money is spent on the best and most promising research within universities, and it gives medical researchers what they want most: not a one-in-a-million chance at a Prize, but secure, merit-based funding to do medical research.

Saturday
Mar 26, 2011

An alternative model for peer review

There is no doubt that the current model of peer review is an effective but inefficient system. The high quality of publications that complete peer review is a testament to the effectiveness of the system, as poor papers rarely get accepted in well-reviewed journals. The efficiency of the review system, however, is very low.

Consider that the highest-ranked journals have acceptance rates of around 10%, and even the middle-ranked journals have acceptance rates of less than 50%. Most papers get published sooner or later, but with the career reward of publishing in high impact factor journals, it is not unusual for a publication to get rejected four or five times as the authors work their way down the journal ranking list. Considering that each round of review generally involves three reviewers, a single paper that had a tough time could consume the (unpaid) time of fifteen reviewers before it is finally accepted. This is an enormous burden on the scientific community, and it is largely wasted: after all, each journal editor only sees three of those fifteen reviews when deciding whether to accept or decline an article. It also considerably slows the dissemination of information, as it is not unusual for the entire review process to consume a year or more.
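The arithmetic above can be put as a toy expected-value calculation. This is only an illustrative sketch: the per-submission acceptance rate and the treatment of each submission as an independent trial are assumptions, not data from this post.

```python
def expected_reviews(p_accept, reviewers_per_round=3):
    """Expected total reviewer-slots consumed by one paper, assuming each
    submission is accepted independently with probability p_accept, so the
    number of submission rounds is geometrically distributed."""
    expected_rounds = 1 / p_accept
    return expected_rounds * reviewers_per_round

# The worst-case path described above: five submissions, three reviewers each.
print(5 * 3)                   # 15 reviewers for one tough paper
# With a 20% acceptance rate per submission, the expected burden is the same:
print(expected_reviews(0.20))  # 15.0
```

The point of the sketch is that the burden scales inversely with the acceptance rate: halving the acceptance rate at each journal doubles the expected number of unpaid reviews a single paper consumes.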

So let's consider an alternative model for peer review, one which keeps the critical aspects that provide effectiveness but changes the policies that produce inefficiency. Imagine a consortium of four or five publishers, which between them might include 20 journals that publish papers on immunology. Rather than authors submitting to individual journals, they would submit to a centralised editorial staff, paid for by the publishers but independent of each journal. An immediate advantage would be the ability to have many more specialised editors available, allowing for better decisions in choosing and assessing the reviews.

Each paper would then be sent out to five or six reviewers, and the reviews would be made available to each of the journals. The editorial staff at each journal would assess the paper and put forward an offer to accept, conditionally accept or decline it. This information would be transmitted back to the consortium and provided to the authors, who would then choose which offer to accept. In effect, each journal would be making a blind offer to publish the paper, with full knowledge of the reviews but without knowing whether the other journals had put in a bid.

Consider the benefits of this alternative model to each player:

1. The journal gets to judge on more complete information, with double the number of reviews available for each paper, selected by more specialised editorial staff.

2. The reviewing community will more than halve the number of reviews required, while actually providing more information to the journals.

3. The authors will no longer have to make strategic decisions about where to submit; they will simply submit to the consortium and have the option to publish in the top-ranked journal that is interested in the paper.

4. The scientific community will have access to cutting-edge research months or even years earlier than under the current system.
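The blind-offer round described above can be sketched in a few lines of code. Everything here is a hypothetical illustration: the journal names, rank scores, review scores and decision rules are invented for the example, not part of any existing system.

```python
def consortium_round(reviews, journals):
    """One round of the proposed model: every journal sees the full set of
    consortium reviews and independently bids accept / conditional accept /
    decline; the authors then take the best accepting offer by journal rank.
    `journals` maps a name to a (rank, decide_fn) pair."""
    offers = {}
    for name, (rank, decide) in journals.items():
        decision = decide(reviews)  # blind: no journal sees the other bids
        if decision in ("accept", "conditional accept"):
            offers[name] = (rank, decision)
    if not offers:
        return None
    # The authors choose the highest-ranked journal that made an offer.
    best = max(offers, key=lambda name: offers[name][0])
    return best, offers[best][1]

# Toy data: scores out of 5 from five consortium reviewers (hypothetical).
reviews = [4, 5, 3, 4, 4]
journals = {
    "Journal A": (10, lambda r: "accept" if sum(r) / len(r) >= 4.5 else "decline"),
    "Journal B": (7,  lambda r: "accept" if sum(r) / len(r) >= 3.8 else "decline"),
    "Journal C": (3,  lambda r: "conditional accept"),
}
print(consortium_round(reviews, journals))  # ('Journal B', 'accept')
```

Note how the single set of five reviews serves all three journals at once, which is where the halving of the reviewing burden in point 2 comes from: the reviews are written once and the competition happens among the journals, not among repeated submissions.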



Friday
Aug 13, 2010

2010's worst failure in peer review

Even though it is only August, I think I can safely call 2010's worst failure in the peer review process. Just as a sampler, here is the abstract:

Influenza or not influenza: Analysis of a case of high fever that happened 2000 years ago in Biblical time

Kam LE Hon, Pak C Ng and Ting F Leung

The Bible describes the case of a woman with high fever cured by our Lord Jesus Christ. Based on the information provided by the gospels of Mark, Matthew and Luke, the diagnosis and the possible etiology of the febrile illness is discussed. Infectious diseases continue to be a threat to humanity, and influenza has been with us since the dawn of human history. If the postulation is indeed correct, the woman with fever in the Bible is among one of the very early description of human influenza disease.

If you read the rest of the paper, it is riddled with flaws at every possible level. My main problems with this article are:

1. You can't build up a hypothesis on top of an unproven hypothesis. From the first sentence it is clear that the authors believe in the literal truth of the Bible and want to make conclusions out of the Bible, without drawing in any natural evidence. What they believe is their own business, but if they don't have any actual evidence to bring to the table they can't dine with scientists.

2. The discussion of the "case" is completely nonsensical. The authors rule out any symptom that wasn't specifically mentioned in the Bible ("it was probably not an autoimmune disease such as systemic lupus erythematosus with multiple organ system involvement, as the Bible does not mention any skin rash or other organ system involvement"), because medical observation was so advanced 2000 years ago. They even felt the need to rule out demonic influence, on the basis that exorcising a demon would be expected to cause "convulsion or residual symptomatology".

This really makes me so mad. The basis for getting published in science is really very simple - use the scientific method. The answer doesn't have to fit dogma or please anyone, but the question has to be asked in a scientific manner. How on earth did these authors manage to get a Bible pamphlet past what is meant to be rigorous peer review? Virology Journal is hardly Nature, but with an impact factor of 2.44 it is at least a credible journal (or was, until this catastrophe). At least the journal has apologised and promised to retract the paper:

As Editor-in-Chief of Virology Journal I wish to apologize for the publication of the article entitled ''Influenza or not influenza: Analysis of a case of high fever that happened 2000 years ago in Biblical time", which clearly does not provide the type of robust supporting data required for a case report and does not meet the high standards expected of a peer-reviewed scientific journal.

Okay, Nature has also made some colossally stupid mistakes in letting industry-funded pseudo-science into its pages, but in the 21st century you would hope that scientific journals would be able to tell the difference between evidence-based science and faith-based pseudo-science.

Sunday
Nov 08, 2009

Polling on science in America

Interesting figures released by Pew on American public opinion of science and scientists.

Effect of science on society

Mostly positive 84%
Mostly negative 6%


Contribute "a lot" to society's well-being

Military 84%
Scientists 70%
Clergy 40%
Journalists 38%
Business executives 21%

US scientific achievements are the best in the world

American public agreement 17%
American scientists agreement 49%

Major problems for science, as identified by scientists:

The public does not know very much about science 85%
Public expects solutions to problems too quickly 49%
News media oversimplify scientific findings 48%
Lack of funding for basic research 46%

Differences of opinion between scientists and the public*:

Agreement with theory of evolution 87% vs 32%
Agreement with theory of climate change 84% vs 49%
Support for use of animals in research 93% vs 52%
Support for compulsory vaccination of children 82% vs 69%
Support for embryonic stem cell research 93% vs 58%

* Presumably these would be even more striking if broken down by scientific discipline.

Scientific training changes ideology (US public vs US scientists):

Liberal 20% vs 52%
Moderate 38% vs 35%
Conservative 37% vs 9%