The University of Leuven is looking to recruit a new faculty member in Leuven, explicitly targeted at foreign senior post-docs. Details are here; if you know anyone suitable, please pass the information on!
It is easy to discuss equality in science through anecdote. Just by spending most of my waking adult life on university campuses across three continents, I am fairly confident in saying that sexual equality is better in biology and medicine than in chemistry or physics, is great at undergraduate level and lagging at professorial level, and is better in Australia than in Belgium. Much better than anecdote, though, is quantitative analysis, which is why I love this website. If you don't publish your research it is a hobby, not science, and a good publication record is the A to Z of career success for a scientist. This website collates data on authorship across time and across disciplines, at a global level, and assesses the participation of women. There are a few caveats: papers are only assessed if they are listed in the JSTOR database, and gender is assigned only by first-name analysis (using the US Social Security database as a reference, so it probably fails for first names not commonly used in the US). Still, it is an absolutely beautiful reference point.
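The first-name approach described above can be sketched in a few lines. This is a toy illustration only: the counts, the names and the 95% threshold are all invented for the example, not taken from the website's actual method.

```python
# Toy sketch of first-name gender assignment against a reference database.
# All counts below are invented; the real analysis uses the US Social
# Security baby-name records.
SSA_COUNTS = {
    # name: (female_count, male_count) -- hypothetical numbers
    "Mary": (95000, 500),
    "John": (300, 120000),
    "Leslie": (40000, 35000),   # an ambiguous name
}

def assign_gender(first_name, threshold=0.95):
    """Assign a gender only when one sex dominates usage of the name;
    otherwise (ambiguous or unknown name) return None."""
    counts = SSA_COUNTS.get(first_name)
    if counts is None:
        return None              # name absent from the reference database
    female, male = counts
    total = female + male
    if female / total >= threshold:
        return "female"
    if male / total >= threshold:
        return "male"
    return None                  # too ambiguous to call

print(assign_gender("Mary"))     # female
print(assign_gender("Leslie"))   # None (ambiguous)
print(assign_gender("Anneke"))   # None (missing from a US-centric database)
```

The last line shows the caveat mentioned above: names rare in the US simply fall out of the analysis, rather than being misassigned.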
There is a wealth of knowledge in this database, but my interest is in molecular immunology, so how are we performing? Well, the answer rather depends on "compared to what?" In 1991-2010, 29.7% of authors on molecular immunology papers were women. This is an improvement on 1971-1990 (23.9%), and a huge improvement on pre-history (everything from 1970 and before, at 13.7%). It is also outstanding compared to fields such as mathematics, where women still account for only 10% of authors (maths clearly has a problem with women; anyone who says the reverse is kidding themselves). But 29.7% is still a long way from 50%. Even among first authors (typically PhD students or post-docs), only 33.2% of molecular immunology authors were women, and among last authors (typically professors) a dismal 15.4% were women.
I've said before what I think the problem is (hint: it is men), but this database gives us a resource to see who is fixing the problem, and how fast, and who is content to live in the stone age and try to do science with a 50% lobotomy. So many questions arise. Why has virology been more equal than immunology throughout the time period? I would love to see a break-down by country, to know whether this is a discipline effect or a statistical quirk due to regional differences in sexism correlating by chance with regional differences in research focus.
Oh, and for the trivia-minded, within molecular biology the most equal area of research is heat shock proteins, while the most sexist is prostaglandins. In the entire database, the most female-dominated area of research is gender studies (57.8% female authors), while the most male-dominated area of research is a discipline of mathematics called Riemannian manifolds (99.3% male authors). Check it out.
Fact: Circumcision protects against HIV infection
There are three tiers of evidence for the protective effect of circumcision against HIV infection. Firstly, there are the epidemiological observations, where rates of HIV in circumcised and uncircumcised populations are compared. Secondly, there are the case-control observations, where rates of HIV in circumcised and uncircumcised individuals are compared. And thirdly, there are the randomised clinical trials, where men are assigned to either circumcision or no circumcision and the rate of subsequent HIV infection is compared.
We can deal with these in turn. The first are the epidemiological surveys. There are multiple relevant studies, all with similar effects, but one of the best designed is the multicentre study set in four cities in different regions of Africa. These studies show a much lower rate of HIV infection in west and north Africa compared to east and south Africa. The infection prevalence closely mirrors the religious border, with lower rates in Muslim Africa and higher rates in Christian Africa. Despite the glee with which certain Muslim scholars touted this as a representation of increased sexual restraint among Muslims, the multicentre study showed very few differences in sexual activity (number of sex partners, prostitution levels, etc). This is not evidence for the role of circumcision in protection against HIV, but it is very strong evidence that something is different between these two communities and that it has a strong role in protection against HIV.
The epidemiological studies led multiple groups of researchers to investigate the circumcision hypothesis using case-control studies (comparing the infection rate in circumcised and uncircumcised men). With dozens of different studies, all of varying quality, the best way to assess the results is through the systematic reviews that have been performed. This systematic review in 2005 looked at all 36 studies into circumcision that had been performed to date. Among the 18 general-population studies, seven showed a protective effect and two showed a harmful effect (right). The difficulty with general-population studies is that the rate of HIV infection is low enough that it can be difficult to control for bias and to generate enough statistical power. High-risk studies, by contrast, tend to have higher HIV rates and less bias in risk factors, often giving additional statistical power. Among the 18 high-risk group studies, 13 showed a protective effect and none showed a harmful effect (left).
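For readers unfamiliar with how a single case-control comparison is summarised, the standard statistic is the odds ratio. Here is a minimal sketch with invented counts (not data from any of the 36 reviewed studies):

```python
# Toy odds-ratio calculation for a case-control comparison.
# The counts are invented for illustration only.
def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """OR < 1 means the exposure (here, circumcision) is associated
    with lower odds of being a case (HIV-positive)."""
    return (exposed_cases / exposed_controls) / (unexposed_cases / unexposed_controls)

# Hypothetical high-risk cohort: 30 of 500 circumcised men HIV-positive
# (470 negative) versus 60 of 500 uncircumcised men (440 negative).
or_value = odds_ratio(30, 470, 60, 440)
print(round(or_value, 2))  # ~0.47, i.e. roughly halved odds of infection
```

An odds ratio below 1 across many independent studies is exactly the pattern the 2005 review reports for the high-risk group, though (as the next paragraph explains) no odds ratio can rule out unknown confounders.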
The key criticism of any case-control study is that there may be confounding effects. When these confounding effects are known (eg number of sexual partners) they can be controlled for, but when they are unknown they cannot be. It is therefore always theoretically possible that there is some unknown confounding effect that correlates strongly with circumcision and is itself protective against HIV infection. The only way to control for this possibility is a randomised clinical trial, where HIV-negative men enroll and are then randomly assigned to either the control group or the circumcision group. In this ideal experiment any confounding factors are randomised across the two groups and the effect of the treatment alone can be identified. This randomised clinical trial design is exactly the experiment performed by three independent groups.
The study by Auvert et al enrolled a total of 3,274 uncircumcised men in South Africa, tested them for HIV and assigned half to be circumcised. The group then followed up both cohorts for HIV status, condom use, sexual activity and so forth, and found a 60% protective effect against HIV infection in the circumcised group. Condom use, sexual activity and the like were nearly identical between the two groups; normalisation for these factors resulted in a 61% protective effect. The study by Bailey et al enrolled 2,784 uncircumcised men in Kenya, tested them for HIV, assigned half to be circumcised and again followed up for HIV infection and behaviour change. Again, no changes in sexual behaviour were observed and the risk of HIV was reduced by 60%. Finally, the study by Gray et al, with a similar design in Uganda, enrolled 4,996 uncircumcised men and found a net protective effect of 60%. All three trials were independently run along best-practice guidelines with blinded testing, yet all three found the identical effect of 60% protection - which was also the average protective effect observed in the case-control studies. Together, with multiple independent lines of evidence pointing towards the same result, the effect can be considered conclusive, to the point that it is now considered unethical to conduct further clinical trials, as doing so would mean withholding treatment from the control group - the same way that we cannot ethically conduct more clinical trials on proven vaccines.
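For clarity, a "60% protective effect" means the infection rate in the circumcised arm was 40% of that in the control arm (protection = 1 - relative risk). A toy calculation with invented round numbers, not the actual trial counts:

```python
# Toy sketch of how a protective effect is computed from trial arms.
# The counts are illustrative round numbers, not data from any trial.
def protective_effect(cases_treated, n_treated, cases_control, n_control):
    """Protective effect = 1 - relative risk (RR)."""
    rr = (cases_treated / n_treated) / (cases_control / n_control)
    return 1 - rr

# Hypothetical: 20 infections among 1,600 circumcised men versus
# 50 infections among 1,600 uncircumcised controls.
effect = protective_effect(20, 1600, 50, 1600)
print(f"{effect:.0%}")  # 60%
```

Note that this is a relative measure: a 60% protective effect does not mean 60% of men are immune, it means the infection rate over the follow-up period was 60% lower.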
Several recurring criticisms are made of these three clinical trials. The most common objection is that all three trials were stopped early. This is true; however, they were stopped early by the ethical review board precisely because the results were clear early on. It is now a built-in feature of clinical trials that intermittent review will take place and the trial will be halted if adverse events surpass a particular level (so that excess participants are not exposed to the treatment) or if the protective effect surpasses a particular level (because it is considered unethical to withhold the treatment from the control group at that point). This is not a unique feature of the circumcision trials; it is an agreed-upon compromise between getting perfect scientific results and treating the participants of the trial in an ethical manner. The other main criticism raised is that the control group for circumcision is not like a traditional placebo - the trial doctors are blinded but the participants are not, and those assigned to the circumcision group may drop out at higher rates, creating a bias. While theoretically possible, each of the three studies investigated this possibility by looking at the drop-out rate. For example, the Gray study found that of 4,996 enrollments only 37 dropped out (24 in the circumcision group and 13 in the control group), not enough to create any substantial bias.
One further comment. The direct protective effect of circumcision is only known to apply to men during vaginal intercourse. It is also likely to protect men during anal intercourse, but this has not been studied. It provides little to no direct protection to women; however, mathematical modelling suggests that once the take-up of circumcision reaches 50%, the "herd immunity" effect would reduce HIV infection among women and uncircumcised men by 25-30%. While not exactly a "silver bullet", this would make an impact on millions of people within southern Africa, where existing circumcision rates are low.
Myth: the foreskin must be functional or it would have been eliminated by evolution
This myth comes in two flavours. The first is that because the foreskin is present it must be functional; the second is that if it actually were detrimental in HIV infection it would be selected against. As to the first, evolution does tend to result in the loss of anatomical features with no function; however, there are strong exceptions for sexually dimorphic features. Thus the male nipple has no function, but there is strong sexual selection to keep the female nipple. In the absence of selection to create a suppressive pathway against nipple development in males, the useless male nipple is maintained. In all likelihood the foreskin is similar to the male nipple, as the male relic of the female labium. As to the second argument, it must be remembered that evolution is responsive, not predictive. Prior to the emergence of widespread HIV infection there would have been no evolutionary pressure against the foreskin. If HIV had been a common infection for millions of years, the continued existence of the foreskin would indeed be a mystery, but this is not the case.
Myth: HIV protection is just a matter of cleanliness
A commonly stated myth about circumcision is that the HIV protection is simply due to the ease of keeping the circumcised penis clean, and that good hygiene would replicate the effect. As a starting hypothesis, this is not an unreasonable model to test. A prediction of this model would be that circumcision should protect against a broad range of sexually transmitted infections (STIs), as the "cleanliness hypothesis" would not predict any special status for HIV. The ability of circumcision to protect against multiple STIs has been tested in epidemiological studies and randomised trials, and so far no effect has been reliably measured for any STI other than HIV. It is possible that there are protective effects against some rare STIs in addition to HIV, and also possible that there is a weak effect against HSV, but to date the evidence against the "cleanliness hypothesis" is very strong - the protective effect of circumcision does not appear to be due to differential cleanliness and is almost certainly due to the unique biological properties of HIV outlined below.
Myth: there is no known biological mechanism to explain the protective effect of circumcision on HIV
The gold standard for incorporating a technique into evidence-based medicine is success in randomised clinical trials, as already demonstrated for circumcision. However, medical researchers prefer to understand the mechanism of protection for any intervention, as it allows optimisation or replacement with simpler strategies. It is sometimes claimed that there is no plausible mechanism by which circumcision protects against HIV, but a review of the literature demonstrates that the known biology of HIV suggests an optimal infection route via the foreskin. Unlike some sexually transmitted viruses, such as HPV, that are able to infect directly through the skin, HIV is exceptionally poor at crossing the epidermal barrier - purified HIV placed on the skin will remain safely external. Instead, HIV has to rely on two different mechanisms to breach the epidermal barrier - microabrasions and cellular trafficking.
Microabrasions are small tears in the skin barrier which expose the inner tissue and blood to the environment, allowing a direct passageway for HIV to enter. One common cause of microabrasions is other sexually transmitted diseases, which often form small ulcers to allow increased shedding. These ulcers allow the reverse infection of HIV, which is why the transmission rate of HIV increases 100-fold with coinfection by other sexually transmitted infections (such as HSV-2). This accounts for the recent data suggesting that anti-HSV-2 treatment programs may reduce HIV spread. The skin of different organs is more or less prone to microabrasions: the mucosa of the anus is the thinnest, followed by the vagina, then the oral cavity, then the penis, which correlates with the risk of HIV acquisition per sexual act (anal receptive > vaginal receptive > oral receptive > insertive).
The second mechanism is that of cellular trafficking. HIV infects through the CD4 receptor, using the coreceptors CCR5 and CXCR4. The expression pattern of these receptors limits HIV infection to CD4 T cells, macrophages and dendritic cells. Typically, these cells are found in circulation (which is why intravenous injection of HIV in contaminated blood provides the most efficient infectious route), but activated CD4 T cells and naive dendritic cells also circulate into the tissue. In the skin, the top layer is a keratinised barrier of dead cells, with the living tissue deep below this layer. The mucosa is quite different - as a functional interface it requires living cells to directly border the environment. While most of these cells are epithelial in origin, and hence not infected by HIV, dendritic cells lie just below the surface. The reason for this is the role of dendritic cells in antigen sampling, ironically a defence mechanism against common mucosal pathogens. Critically, these dendritic cells do not only lie just below the surface; they also push thin dendrites through the epithelial cell barrier so that they directly contact the surface (right). We even know exactly how the dendritic cells form these dendrites, as a key paper in Science demonstrated that dendritic cells lacking the chemokine receptor CX3CR1 still home to the epithelial cell surface, but are unable to produce the dendrites that penetrate to the surface (left).
With regards to circumcision, the key risk is the region of the inner foreskin, which has more in common with the mucosal surface of the vagina than with the keratinised surface of the rest of the penis. During an erection the inner foreskin of the uncircumcised penis is exposed (right), creating a region of relatively thin tissue that does not exist on the surface of an erect circumcised penis. This is the tissue that is thinner and populated by surface level dendritic cells, so it is also the tissue which is most prone to microabrasions and to cellular trafficking via infected dendritic cells. In the circumcised penis this tissue is absent, with the region covered in a thicker layer of non-mucosal skin. It is therefore likely that the biological mechanism of circumcision protection is simply the removal of this mucosal surface during intercourse.
Fact: Condoms are more protective than circumcision
The protective effect of circumcision against HIV is around 60% over a lifetime. For a single act, condom use is around 99% protective (with a 1.6% slippage/breakage factor), which translates into around an 80% protection rate in typical long-term use. When condom usage is accompanied by sex-ed classes on how to use a condom correctly, the lifetime protection rate goes up to 95%. Clearly a correctly used condom is more protective than circumcision.
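Per-act and long-run protection are different quantities: even a small per-act risk compounds over repeated exposures, which is why a very high per-act figure and a lower lifetime figure can both be correct. A toy sketch with assumed, purely illustrative risk numbers:

```python
# Toy sketch: per-act risk compounds over repeated independent exposures.
# Both risk numbers below are assumptions for illustration, not measured values.
def cumulative_infection_risk(per_act_risk, n_acts):
    """Probability of at least one infection over n independent exposures."""
    return 1 - (1 - per_act_risk) ** n_acts

base_per_act = 0.001                  # assumed per-act risk, unprotected
condom_per_act = base_per_act * 0.01  # assumed 99% per-act protection

n = 1000  # repeated exposures
unprotected = cumulative_infection_risk(base_per_act, n)
protected = cumulative_infection_risk(condom_per_act, n)
print(f"unprotected: {unprotected:.1%}, with condoms: {protected:.1%}")
```

Under these made-up numbers the unprotected cumulative risk is over 60% while consistent condom use keeps it around 1%; the point is only that cumulative risk grows with exposure count, so per-act and lifetime protection figures should never be compared directly.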
However, it is important to note that this does not mean that circumcision has no added value. In the randomised control trials men were still advised to wear condoms, but as you might expect 100% condom usage was not achieved (total condom usage was the same in both groups). The protective effect of circumcision in these trials is therefore an additive effect on top of typical condom usage. Public health is an experimental science and it needs to differentiate between the ideal effects of a treatment and the actual effects of implementation. For example, assuming that all sex was consensual (clearly not the case), voluntary abstinence would block the transmission of HIV. The ideal effect is therefore 100% protection. What happens when abstinence advice is rolled out as a campaign? Absolutely nothing. Circumcision may provide little additional protection when combined with ideal condom use, but in terms of public health what matters is that it provides substantial protection when combined with actual condom use.
Myth: Religious circumcision originated because of the health benefits
A number of religious supporters have leapt upon the scientific evidence for the protective effect of circumcision against HIV as support for ritual religious circumcision. They tout the proposition that the religious tradition of circumcision is validated by the scientific evidence, which therefore validates other aspects of their religion. This is an overly generous idea, for several important reasons:
1. While the western world tends to think of circumcision as the removal of the entire foreskin, anyone familiar with men will not be surprised to learn that religion has found many weird ways to manipulate the penis. For example, there is the dorsal slit circumcision, where the foreskin is cut only along one side of the penis, leaving it flapping below. In some places it is then common to create a hole in the free foreskin and fold it back over the penis, sometimes called the "cowboy cut" as the result looks a little like a cowboy hat. While most of these traditional circumcisions have not been tested for protective effects, based on the biological mechanism of HIV protection it is highly likely that only full foreskin removal will result in substantial protection.
2. HIV only originated within the past 100 years, so any protective effect of circumcision would have been non-existent at the time these practices originated. As circumcision has little to no protective effect against other STIs, there is currently no scientific basis on which to claim the practice was beneficial at the time it originated.
3. Ritual circumcision in traditional contexts is highly dangerous. These surgical operations were carried out in non-sterile circumstances by untrained religious leaders. This stands in stark contrast to the modern non-surgical approach to circumcision, where typically a band is used to cut off blood circulation to the foreskin so that it falls off - in exactly the way the umbilical cord stump is removed, leaving behind the belly-button. While modern (secular) circumcision has extremely low rates of complication (on the order of 1.5% minor events such as swelling, and 0% severe events), traditional/religious circumcision can have much higher rates (with adverse event rates of over 10% reported, including severe events). The cost-benefit balance of religious circumcision was therefore almost certainly a net negative, while that of modern secular circumcision is a net positive.
Myth: Circumcision reduces the pleasure of sex
This is a very common myth used in opposition to circumcision, often accompanied by an anecdote about some man the teller knows who had a "botched" circumcision and now has pain during sex (anecdotes of uncircumcised men who have pain during sex are duly ignored). Fortunately, in science we can actually go beyond anecdotes and look at some hard data on sexual pleasure.
Firstly, what are the effects of circumcision on subsequent adult sexual pleasure?
* when 1,410 American men aged 18-59 were asked if they had "trouble achieving sexual gratification" in the past 12 months, around 45% reported sexual dysfunction, with slightly lower rates in circumcised men. This small decrease in sexual dysfunction in circumcised men remained significant even after controlling for variables such as race, age and sexual preference.
* the same study found that circumcised men had a more varied sexual practice, with more masturbation and oral sex, inconsistent with a hypothesis that sex is less enjoyable to circumcised men.
* Payne et al directly tested the sensitivity of circumcised and uncircumcised penises by measuring the response to touch on the ventral and dorsal surfaces. No difference was observed in sensitivity between the two groups.
* most studies are performed on men circumcised as infants, with relatively few men being circumcised as adults. The recent push for adult circumcision in Kenya has allowed a survey of men before and two years after adult circumcision (with a randomised control group). No increase was observed in sexual dysfunction and most men actually reported an increase in sexual pleasure (64% said their penis was "much more sensitive" and 55% said it was "much easier" to reach orgasm). A Ugandan study found that men circumcised as infants were more likely to have earlier and more promiscuous sex than uncircumcised men.
Not all studies find such strong results as the Kenyan survey, which suggests a strong increase in sexual pleasure. Indeed, the three randomised clinical trials for HIV protection found no change in sexual behaviour. Thus the conservative reading of these studies would be that there is no decrease in sexual pleasure among circumcised men, whether circumcised as infants or adults. The only plausible exception may be within the group of men who have religious-traditional (non-modern) circumcision, where relatively little study has been performed.
Myth: Circumcision is the male equivalent of female genital mutilation
Female genital mutilation is the practice of scraping away part or all of the external genitalia of a woman, typically the removal of the clitoris and labia. While it is euphemistically called "female circumcision", it has almost nothing in common with male circumcision. Sexual dysfunction, while not ubiquitous, is increased in women who have been genitally mutilated, and sexual pleasure is generally decreased. Multiple health risks are associated with the practice, especially an increased risk of complications and even death during childbirth. Female genital mutilation is not protective against HIV, and may even increase the risk of HIV infection, either during the mutilation procedure itself or due to additional tissue damage during sexual intercourse. Male circumcision should never be compared to female genital mutilation, a procedure that is more akin to penectomy.
Do parents have a right to circumcise an infant, or should they wait until he can make his own decision in adulthood?
The Declaration of the Rights of the Child upholds the right of children to autonomy as individuals. This does not, however, preclude parents making decisions in the interest of the child, as a child cannot be considered truly autonomous. There are multiple widely accepted examples of parents making decisions for a child - such as in the area of education. The best comparison to infant circumcision is vaccination: both confer protection against infectious disease, both are irreversible, and both carry a small chance of minor side-effects (such as swelling for a few hours to days). While this provides a basis for a parental right to circumcision, it does not provide an unrestricted mandate - the least damaging form of intervention must be used (ie, non-surgical sterile circumcision over religious circumcision) and the benefits need to be placed in the context of the alternatives (eg, if a vaccine for HIV is successfully generated, the rationale for circumcision will be lost, just as the eradication of smallpox eliminated the rationale for the smallpox vaccine - a procedure with more complications than infant circumcision).
Another version of this objection, with somewhat more validity, is that since HIV is generally a sexually transmitted disease, the protective effects do not kick in until the child reaches adulthood and has sex, at which time he can decide for himself. Well... perhaps, although it would be naive to assume that all men wait until they are 18 to have sex. Even if you were to wait until the age of 16, the surgical advice for adult circumcision is to have no sexual intercourse or masturbation for at least two weeks following the procedure. That may be quite a hard sell to a 16-year-old boy, while being entirely irrelevant to an infant. Again, the best comparison is to vaccination. We have available an outstanding vaccine against human papilloma virus (HPV), which provides substantial (but not 100%) protection against cervical cancer in women who catch HPV. As HPV is a sexually transmitted disease you could advocate that this vaccination should also be delayed until the age of 18, but with side-effects as mild as tenderness for a couple of days, why not vaccinate all children as young as possible? Several Christian groups object to HPV vaccination of girls on the basis that it will create "moral hazard" and promote promiscuous sex, but there is no actual evidence to suggest that girls are refraining from sex due to a fear of cervical cancer, and no evidence to suggest that the vaccine changes the rate of sexual activity.
In 1998 Andrew Wakefield published a paper which has severely damaged public health over the last ten years. Based on his observations of only twelve children, nine of whom he claimed had autism, and without a control group, he concluded that the measles/mumps/rubella (MMR) vaccine caused autism. As a hypothesis, this was fine: unlikely, but not impossible. He saw nine children with autism, reported that their parents linked its onset with the MMR vaccine, and put it in the literature. Why on earth an underpowered observation like this made it into the Lancet is beyond me, but there is nothing wrong with even outlandish hypotheses being published in the scientific literature. Was it a real observation, or just an effect of a small sample size? Was it a causative link, or just a coincidence in timing?
As with any controversial hypothesis, after this one was published a large number of good scientists went out and tested it. It was tested over and over and over again, and the results are conclusive - there is no link between the MMR vaccine and autism.
In itself, this was of no shame to Andrew Wakefield. Every creative scientist comes up with multiple hypotheses that end up being wrong. People publish hypotheses all the time, then disprove them themselves or have them disproven by others. If you can't admit being wrong, you can't do science, and it is in fact the mark of a good scientist to be able to generate hypotheses that others seek to knock down. Ten of the thirteen authors on the study were able to see the new data and renounce the hypothesis.
The shame to Andrew Wakefield is not that his hypothesis was wrong. No, the shame he has brought upon himself was by being unscientific, unscrupulous and unethical:
- Firstly, Wakefield did not present his paper as a hypothesis generator, to be tested by independent scientists. Instead he went straight to the media and made the outrageous claim that his paper was evidence that the MMR vaccine should be stopped. This is not the way science or medicine works, and it was a conclusion unsupported by the data. Worst of all, it was a conclusion that many parents without scientific training were tricked into believing. Vaccination rates for MMR went down (autism rates have remained unchanged) and children started dying again of easily preventable childhood diseases. A doctor who sees half a dozen children develop leukemia after joining a football team does not hold a press conference telling parents that playing sport gives children cancer - yet that is the direct equivalent of Wakefield's actions.
- Secondly, it has now been conclusively demonstrated that his original data were fraudulent. Interviews with the parents of the original nine children with autism show that he faked much of the data on the time of onset, taking cases where autism started before the MMR vaccine and reversing the dates to suggest that the vaccine triggered the autism. Analysis of the medical records of these children shows that, as well as the timing being incorrect, many of the symptoms were simply faked and non-existent. The evidence on this charge alone makes Wakefield guilty of professional misconduct and criminal fraud.
- Thirdly, unknown to the coauthors of the study and the parents of the children, Wakefield had a financial conflict of interest. Before the study had begun, Wakefield had been paid £435,643 to find a link between vaccines and disease as part of a lawsuit. Every scientist must disclose their financial interests in publications so that possible conflicts are known - Wakefield did not. Had he disclosed this at his press conferences, the media might have been rather more skeptical about his outlandish claims.
These last two issues, scientific misconduct and financial conflict of interest, are the reason why the paper was formally retracted by the Lancet. Studies that are merely wrong don't get retracted; they just get swamped by correct data and gradually forgotten. Instead, the retraction indicates that the Wakefield paper was fraudulent and should never have been published in the first place. Likewise, the British General Medical Council investigated the matter, found that Wakefield "failed in his duties as a responsible consultant" and acted "dishonestly and irresponsibly", and struck him off the medical register.
The worst part about this sorry affair is that it is still dampening vaccination rates. Literally hundreds of studies, with a combined cohort size of a million children, have found no link between the MMR vaccine and autism, yet one fraudulent and retracted study of nine children is still talked about by parents. Some parents are withholding this lifesaving medical treatment from their children, and their good intentions do nothing to mitigate the fact that cases of measles and mumps are now more than 10 times more likely than they were in 1998, and confirmed deaths have resulted. And Andrew Wakefield, the discredited and struck-off doctor who started it all? Making big money in the US by selling fear to worried parents, and deadly disease to children who have no say in it at all.
And this is how you deal with anti-vaccine campaigners
The mechanism by which antibodies were formed was once one of the oldest and most perplexing mysteries of immunology. The properties of antibody generation, with the capacity of the immune system to generate specific antibodies against any foreign challenge – even artificial compounds which had never previously existed – defied the known laws of genetics.
Three major models of antibody production were proposed before the correct model was derived. The first was the “side-chain” hypothesis put forward by Ehrlich in 1900, in which antibodies were essentially a side-product of a normal cellular process (Ehrlich 1900). Rather than a specific class of proteins, antibodies were just normal cell-surface proteins that bound their antigen merely by chance, and the elevated production in the serum after immunisation was simply due to the bound proteins being released by the cell so that a functional, non-bound, protein could take its place. In this model antibodies “represent nothing more than the side-chains reproduced in excess during regeneration and are therefore pushed off from the protoplasm”.
Figure 1. The “side-chain” hypothesis of antibody formation. Under the side-chain hypothesis, antibodies were normal cell-surface molecules that by chance bound antigens (step 1). The binding of antigen disrupted the normal function of the protein so the antigen-antibody complex was shed (step 2), and the cell responded by replacing the absent protein (step 3). Notably, this model explained the large generation of specific antibodies after immunisation, as surface proteins without specificity would stay bound to the cell surface and not require additional production. The model also allowed a single cell to generate antibodies of multiple specificities.
The “side-chain” model was displaced in 1930 by the “direct template” hypothesis of Breinl and Haurowitz. Under this alternative scenario, antibodies were a distinct class of proteins but with no fixed structure. The antibody-forming cell would take in antigen and use it as a mould on which to cast the structure of the antibody (Breinl and Haurowitz 1930). The resulting fixed-structure protein would then be secreted as an antigen-specific antibody, and the antigen reused to create more antibody. Compared to the “side-chain” hypothesis, the “direct template” hypothesis better explained the enormous potential range of antibody specificities and the biochemical similarities between antibodies, but it lacked any mechanism to explain immunological tolerance.
Figure 2. The “direct-template” hypothesis of antibody formation. The direct-template hypothesis postulated that antibodies were a specific class of proteins with highly malleable structure. Antibody-forming cells would take in circulating antigen (step 1) and use this antigen as a mould to modify the structure of antibody (step 2). Upon antibody “setting”, the fixed structure antibody was released into circulation and the antigen cast was reused (step 3). In this model specificity is cast by the antigen, and a single antibody-producing cell can generate multiple different specificities of antibody.
A third alternative model was put forward by Jerne in 1955 (Jerne 1955). The “natural selection” hypothesis is, in retrospect, quite similar to the “clonal selection” hypothesis, but uses the antibody, rather than the cell, as the unit of selection. In this model the healthy serum contains minute amounts of all possible antibodies. After exposure to antigen, those antibodies which bind the antigen are taken up by phagocytes, and the bound antibodies are then used as templates for the production of more antibodies (the reverse of the “direct template” model). As with the “direct template” model, this hypothesis was useful in explaining many aspects of the immune response, but it strikingly failed to explain immunological tolerance.
Figure 3. The “natural selection” hypothesis of antibody formation. The theoretical basis of the natural selection hypothesis is the presence in the serum, at undetectable levels, of all possible antibodies, each with a fixed specificity. When antigen is introduced it binds only those antibodies with the correct specificity (step 1), which are then internalised by phagocytes (step 2). These antibodies then act as a template for the production of identical antibodies (step 3), which are secreted (step 4). As with the clonal selection theory, this model postulated fixed specificity antibodies, however it allowed single cells to amplify antibodies of multiple specificities.
When Talmage proposed a revision with more capacity to explain allergy and autoimmunity in 1957 (Talmage 1957), Burnet immediately saw the potential to create an alternative cohesive model, the “clonal selection model” (Burnet 1957). The elegance of the 1957 Burnet model was that by maintaining the basic premise of the Jerne model (that antibody specificity exists prior to antigen exposure) and restricting the production of antibody to at most a few specificities per cell, the unit of selection becomes the cell. Critically, each cell will have “available on its surface representative reactive sites equivalent to those of the globulin they produce” (Burnet 1957). This would then allow only those cells selected by specific antigen exposure to become activated and produce secreted antibody. The advantage of moving from the antibody to the cell as the unit of selection was that concepts of natural selection could then be applied to cells, both allowing immunological tolerance (deletion of particular cells) and specific responsiveness (proliferation of particular cells). As Burnet wrote in his seminal paper, “This is simply a recognition that the expendable cells of the body can be regarded as belonging to clones which have arisen as a result of somatic mutation or conceivably other inheritable change. Each such clone will have some individual characteristic and in a special sense will be subject to an evolutionary process of selective survival within the internal environment of the cell.” (Burnet 1957)
Figure 4. The “clonal selection” hypothesis of antibody formation. Unlike the other models described, the clonal selection model limits each antibody-forming cell to a single antibody specificity, which presents the antibody on the cell surface. Under this scenario, antibody-forming cells that never encounter antigen are simply maintained in the circulation and do not produce secreted antibody (fate 1). By contrast, those cells (or “clones”) which encounter their specific antigen are expanded and start to secrete large amounts of antibody (fate 2). Critically, the clonal selection theory provides a mechanism for immunological tolerance, based on the principle that antibody-producing cells which encounter specific antigen during ontogeny would be eliminated (fate 3).
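The logic of clonal selection lends itself to a toy simulation. The sketch below is my own illustration rather than anything from the papers cited; every name and number in it is an arbitrary assumption. Each cell carries a single fixed specificity, self-reactive clones are deleted during "ontogeny" (tolerance), and only clones that encounter their specific antigen are expanded:

```python
import random

# Toy illustration of clonal selection (not from the original papers).
# Specificities are modelled as integers in a small "shape space";
# the numbers below are arbitrary.

random.seed(1)

SPECIFICITIES = range(1000)      # all possible antigen "shapes"
self_antigens = {7, 42, 99}      # shapes belonging to the body itself

# Naive repertoire: each cell is assigned ONE random, fixed specificity.
repertoire = [random.choice(SPECIFICITIES) for _ in range(5000)]

# Tolerance: clones reactive to self antigens are deleted during ontogeny.
repertoire = [s for s in repertoire if s not in self_antigens]

def respond(repertoire, antigen, expansion=100):
    """Clones matching the antigen proliferate; all others are unchanged."""
    selected = [s for s in repertoire if s == antigen]
    return repertoire + selected * expansion

before = repertoire.count(500)
repertoire = respond(repertoire, antigen=500)
after = repertoire.count(500)

print(f"clones specific for antigen 500: {before} -> {after}")
```

The point of the sketch is that both tolerance (deletion of self-reactive clones) and specific responsiveness (expansion of antigen-selected clones) fall out of selection acting on cells, exactly the conceptual advantage Burnet gained by moving the unit of selection from the antibody molecule to the cell.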
It is important to note that while the clonal selection theory rapidly gained support for explaining the key features of antibody production, for decades it remained a working model rather than a proven theory. Key support for the model had been generated in 1958 when Nossal and Lederberg demonstrated that each antibody-producing cell has a single specificity (Nossal and Lederberg 1958); however, a central premise of the model remained pure speculation – the manner by which sufficient diversity in specificity could be generated such that each precursor cell would be unique. “One aspect, however, should be mentioned. The theory requires at some stage in early embryonic development a genetic process for which there is no available precedent. In some way we have to picture a “randomization” of the coding responsible for part of the specification of gamma globulin molecules” (Burnet 1957). Describing the different theories of antibody formation in 1968, ten years after the original hypothesis was put forward, Nossal was careful to add a postscript after his support of the clonal selection hypothesis: “Knowledge in this general area, particularly insights gained from structural analysis, are advancing so rapidly that any statement of view is bound to be out-of-date by the time this book is printed. As this knowledge accumulates, it will favour some theories, but also show up their rough edges. No doubt our idea will seem as primitive to twenty-first century immunologists as Ehrlich’s and Landsteiner’s do today.” (Nossal, 1969).
It was not until the research of Tonegawa, Hood and Leder that the genetic principles of antibody gene rearrangement were discovered (Barstad et al. 1974; Hozumi and Tonegawa 1976; Seidman et al. 1979), rewriting the one gene-one protein law of genetics and providing a mechanism for the most fragile of Burnet’s original axioms. The Burnet hypothesis, more than 50 years old and still the central tenet of the adaptive immune system, remains one of the best examples in immunology of the power of a good hypothesis to drive innovative experiments.
Barstad et al. (1974). "Mouse immunoglobulin heavy chains are coded by multiple germ line variable region genes." Proc Natl Acad Sci U S A 71(10): 4096-100.
Breinl and Haurowitz (1930). "Chemische Untersuchung des Präzipitates aus Hämoglobin und Anti-Hämoglobin-Serum und Bemerkungen über die Natur der Antikörper." Z Physiol Chem 192: 45-55.
Burnet (1957). "A modification of Jerne's theory of antibody production using the concept of clonal selection." Australian Journal of Science 20: 67-69.
Ehrlich (1900). "On immunity with special reference to cell life." Proc R Soc Lond 66: 424-448.
Hozumi and Tonegawa (1976). "Evidence for somatic rearrangement of immunoglobulin genes coding for variable and constant regions." Proc Natl Acad Sci U S A 73(10): 3628-32.
Jerne (1955). "The Natural-Selection Theory of Antibody Formation." Proc Natl Acad Sci U S A 41(11): 849-57.
Nossal and Lederberg (1958). "Antibody production by single cells." Nature 181(4620): 1419-20.
Nossal (1969). Antibodies and immunity.
Seidman et al. (1979). "A kappa-immunoglobulin gene is formed by site-specific recombination without further somatic mutation." Nature 280(5721): 370-5.
Talmage (1957). "Allergy and immunology." Annu Rev Med 8: 239-56.
A major investment of my time last year and this year was in putting together an application for a European Research Council Starting Grant. The process was quite an ordeal, with both a substantial written grant and a challenging oral defense, probably consuming over 100 hours of my time. Fortunately, with excellent independent researchers in the laboratory, great research continued to be done while I was locked away with the computer.
Because the grant is open to researchers across Europe, in any discipline, the competition is fierce. However, there are some large advantages to the ERC Starting Grant process: 1) the committee looks favourably upon large ideas, rather than safe ideas; 2) the competition is segregated according to career stage, so that I was only competing with other researchers less than five years out from their PhD; 3) the funding is sufficient in scale and duration to really put forward a grand plan. Just recently I found out that the application was approved, and the VIB put out the following press release:
VIB receives high score from European Research Council (ERC)
Two young top researchers awarded €1.5 million research grants!
Leuven - VIB landed two research grants worth 1.5 million euros each. The prestigious grants are courtesy of the European Research Council (ERC) and are aimed at giving talented young scientists the opportunity to develop their own research team. The honor fell to Adrian Liston and Patrik Verstreken, both recently transferred to VIB-K.U.Leuven from abroad.
The European Research Council
ERC was created to encourage excellent research in Europe. ERC starting grants give young talented researchers the opportunity to develop a research group. At present, there are still too few opportunities in Europe for young scientists to initiate and lead their own research, which is extremely unfortunate as it results in top researchers leaving the region to develop their careers elsewhere.
Adrian Liston studies autoimmune diseases.
The immune system is our body's defense system and allows it to fight off foreign substances and micro-organisms. In people with an autoimmune disease, the immune system has gone awry: it can no longer distinguish between the body's own and foreign substances and ends up attacking vital tissues and organs. Adrian Liston studies immune system cells (T cells) that are responsible for this malfunction. With his ERC research grant, he plans to bridge the gap between his research on mouse models and humans. This may be a first step in the development of new therapies for autoimmune diseases.
Patrik Verstreken explores the communication between brain cells.
Brain disorders take a major toll on society. Many brain diseases are caused by the disruption of communication between brain cells. Finding a solution depends on understanding this communication in the smallest detail. Patrik Verstreken uses the fruit fly as his model organism for studying genes involved in the communication between brain cells. The ERC research grant gives him the opportunity to expand his research to more complex neural communication networks that control behavior. This step is crucial if we are to understand neurological disorders such as Parkinson's disease.
Vertebrates are unique in developing an immune system capable of anticipating pathogens that are yet to evolve. Birds and mammals have taken this "adaptive" immune system to the pinnacle, with T cells and B cells using a randomised form of genomic engineering. The advantage of a system based on randomised generation is striking - by making every T cell and B cell unique it becomes exceptionally difficult for pathogens to "out-evolve" their hosts. Regardless of how a pathogen will change, pre-existing T cells and B cells will be capable of recognising the new modified pathogen. The importance of the adaptive immune system to humans is evident in the fatal consequences of its absence, such as patients with end-stage AIDS or primary immunodeficiencies caused by genetic mutations. These benefits greatly outweigh the cost of the adaptive immune system in resources used and the threat of autoimmune disease.
But does the adaptive immune system make vertebrates healthier? There is no obvious evidence that it does. In a key essay on the topic, Hedrick argues that vertebrates do not appear to have a lower pathogen-induced mortality rate than invertebrates. Instead, he argues that the development of the adaptive immune system provided only a short-term benefit, with pathogens rapidly specialising to vertebrate hosts. The result is an immunological arms race, with each side incrementally ratcheting up the armaments. Vertebrates are essentially impervious to non-specialised pathogens unless rendered immunodeficient, but the additional mortality from specialised pathogens is probably equivalent to the invertebrate state.
This still-controversial hypothesis highlights an important aspect of evolution by natural selection: it has highly inefficient consequences. Natural selection takes place at the level of the individual, while evolution takes place at the level of the species. Most importantly, natural selection only occurs in the present. An individual who has an advantage for even a single generation will be over-represented in the next generation. A species that has an advantage for a single generation will be able to exploit more resources for reproduction. The long-term consequence - that each species will waste ever more resources in an ever more expensive battle - is irrelevant.
The evolutionary arms-race between host and pathogen is one incredibly important example. A more illustrative example of the patent futility of this arms-race comes from Sir David Attenborough, one of the leading science communicators of all time. In Life in the Undergrowth, he films two species of harvester ants living in the desert. Each population needs to collect seeds to survive; however, the number of seeds produced in the desert is so low that there is fierce inter-species competition. One species of ant is diurnal, the other nocturnal, and each is capable of collecting the entire daily seed dispersal. In order to survive, every second night the nocturnal ants spend an evening carrying rocks to cover the entry hole of the diurnal ants. The diurnal ants can't collect seeds the next day as they need to spend the day clearing the rocks from the entrance, which gives the nocturnal ants a night to harvest the uncollected seeds. The following day the diurnal ants are able to collect every seed, and that night the nocturnal ants once again carry rocks. Two species end up literally carrying rocks backwards and forwards every second day.
The elegance of evolution is the beauty of such specialised behaviour, but the consequence is gross inefficiency in resource use. If each species simply spent alternate cycles conserving resources, both species could survive at a higher population density than currently exists. But neither species can be the first to stop the wasteful use of resources, as that would give a fatal advantage to the other, and so they are trapped together in a cycle of carrying stones. The battles of night ants vs day ants and of hosts vs pathogens illustrate the bizarre, elaborate and ofttimes perverse consequences of evolution by natural selection.
6th century BCE – The first known diagnosis of diabetes was made in India. Doctors called the condition madhumeha, meaning "sweet urine disease", and tested for it by seeing whether ants were attracted to the sweetness of the urine.
1st century CE – Diabetes was diagnosed by the ancient Greeks. Aretaeus of Cappadocia named the condition διαβήτης (diabētēs), meaning "one that straddles", referring to the copious production of urine. It was later called diabetes mellitus, "copious production of honey urine", again referring to the sweetness of the urine. Unlike the Indian doctors, Greek doctors tested this directly by drinking a urine sample. At the time a diagnosis of diabetes was a death sentence: "life (with diabetes) is short, disgusting and painful" (Aretaeus of Cappadocia).
It is probable that the ancient Egyptians and early Chinese cultures also independently discovered diabetes.
10th century CE - Avicenna of Persia provided the first detailed description of diabetes (diagnosed through "abnormal appetite and the collapse of sexual functions" as well as the "sweet taste of diabetic urine"). He also provided the first (partially) effective treatment, using a mixture of lupine, trigonella and zedoary seed.
1889 – Joseph von Mering and Oskar Minkowski in Germany developed the first animal model of diabetes using dogs, discovering the role of the pancreas.
1921 - Frederick Banting and Charles Best in Canada first successfully treated canine diabetes by purification and injection of canine insulin.
1922 - For the first time diabetes stopped being a death sentence. In 1922 Frederick Banting and Charles Best treated the first human patient with bovine insulin. Notably, they decided to make their patent available globally without charge.
1922-1980 - Treatment of patients with animal insulin or human insulin extracted from cadavers. Substantial life extension but also significant side-effects.
1955 - Determination of the protein sequence of insulin by Frederick Sanger in the United Kingdom.
1980 - First commercial production of recombinant human insulin, by Genentech.
Today there is no cure for diabetes, but when treated it only results in an average loss of 10 years (the same as smoking).
It has long been known that several causes of cancer are infectious. Typically a virus contains a number of oncogenes to enhance its own proliferation, and in an infection gone wrong (for both virus and host) a viral oncogene is incorporated into the host DNA, creating an uncontrollable tumour cell. One of the best examples of this is human papillomavirus (HPV), a virus which infects most sexually active adults and is responsible for nearly every case of cervical cancer worldwide (which is why all girls should be vaccinated before they become sexually active).
However these cases are not "infectious cancers", they are infectious diseases which are capable of causing cancer. True infectious cancers, where a cancer cell from one individual takes up residency in a second individual and grows into a new cancer, were unknown until recently. With the publication of a new study in PNAS we now have three examples of truly infectious cancers.
1. In the most recent study, researchers in Japan documented the tragic case of a 28 year old Japanese woman who gave birth to a healthy baby but within two months had been diagnosed with acute lymphoblastic leukemia and died. At 11 months of age the child also became ill and was diagnosed with acute lymphoblastic leukemia. Genetic analysis of the tumour cells in the baby demonstrated that the tumour cells were not from the child herself, but rather maternal leukemia cells that had crossed the placenta during pregnancy or childbirth and had taken up residency in their new host. With this information, retrospective analysis indicates that this is probably not a one-off event, and that at least 17 other cases of mother-to-child transmission of cancer have probably occurred.
2. In addition to mother-to-child transmission of cancer, cancer can spread from one identical twin to another. Identical (mono-zygotic) twins have identical immune systems, preventing rejection of "transplanted" cells, unlike non-identical (di-zygotic) twins. Thus a tumour which develops before birth in one identical twin can be transferred in utero to the other identical twin, where it can grow without being rejected. In one improbable but highly informative case, a set of triplets were born where two babies were identical and the third was non-identical. A tumour had arisen in one of the identical twins in utero and had passed to both other foetuses, but had been rejected by the non-identical foetus and accepted by the identical foetus. Of course, with the advent of medical transplantation, transmission of infectious cancers is now no longer limited to the uterus. Transplantation of an organ containing a cancer into a new host can allow the original cancer to grow and spread, as transplantation patients are immunosuppressed to prevent rejection. There is also a single case of a cancer being transmitted from a surgeon who cut his hand during surgery to a patient who was not immunosuppressed.
3. In a medical mystery well known to Australians, the population of Tasmanian Devils has been crashing as a fatal facial tumour has been spreading across the population. The way the fatal tumours spread steadily across Tasmania, sparing Devils on smaller islands, first suggested a new infectious disease that causes cancer, similar to HPV in humans. However a surprising study demonstrated that the cancer was directly spreading from one Devil to the next after having spontaneously developed in a single individual. These scrappy little monsters attack each other on first sight, biting each other's faces. The cancer resides in the salivary glands and gets transmitted by facial bites to the new Devil. Unfortunately for Tasmanian Devils, a genetic bottleneck left all Devils so genetically similar that they are, for immunological purposes, all identical twins. This means that the cancer cells transmitted from one Devil to another through biting are able to grow and kill Devil after Devil. The cancer from a single individual has already killed 50% of all Devils, and it is possible that we will have to wait until the cancer burns out by killing all potential hosts before reintroducing the Devil from the protected island populations. As unlikely as this seems, another similar spread occurs in dogs, where a cancer that arose in a single individual wolf is being spread through sexual transmission from dog to dog around the world. This example also illustrates the point made about cancers being "immortal" - the original cancer event may have occurred up to 2,500 years ago, with the tumour moving from host to host for thousands of years without dying out.
I am writing today from the European Congress for Immunology in Berlin. A talk by Thomas Boehm was the highlight of the first day for me.
The Boehm laboratory has been looking at the genetic evolution of thymus development. The thymus is the nursery for T cells, the coordinator of the adaptive immune response. The Boehm laboratory analysed the genetic phylogeny of sample species spanning the 500 million years of thymus evolution and found several key genes that have been conserved through this process. The master coordinator of thymus development, Foxn1, had already been known, but how this master coordinator worked was a mystery, so the Boehm laboratory used the evolutionary analysis to try to recapitulate thymic development in zebrafish and mice.
In zebrafish, Weyn and colleagues were able to use live imaging to analyse the genes that the thymus needs to express in order to recruit progenitor cells. This was done by using genetic expression of coloured dyes, making the primordial thymus glow red and the progenitor cells glow green. They found that just two conserved genes, Ccl25a and Cxcl12a, were acting synergistically to draw in all the precursor cells.
In mice, Bajoghli and colleagues tried to use the knowledge gleaned from evolutionary analysis to completely bypass Foxn1. The rationale is that if we know exactly what Foxn1 does to drive thymic development then we should be able to recapitulate thymic development in the absence of Foxn1 by simply expressing the downstream genes. So the Boehm team took the four key genes that were conserved over 500 million years of thymic development, Ccl25, Cxcl12, KitL and Dll4, and expressed them in isolation or in combination in thymic cells that were genetically deficient in Foxn1. Normally, these deficient thymic cells cannot attract T cell precursors. However, Bajoghli and colleagues found that, just as in zebrafish, two genes in mice were able to essentially restore the capacity to recruit precursors, Ccl25 and Cxcl12. A third gene, KitL, allowed these cells to proliferate and increase in number. What these three genes could not do, however, was turn the precursors into T cells. That job required the fourth gene, Dll4, which had no role in recruitment or proliferation but which was essential for the differentiation of recruited precursors into T cells. Through evolutionary genetics the gene network of an entire organ is being unravelled.
Some of this research is currently unpublished; other aspects just came out in the journal Cell.