Dec 11 2009
A few weeks ago, I wrote that the goal of a structural biology research program ought to be to “characterize the conformation and energy of key, functionally-relevant members of the protein’s structural ensemble and identify the pathways between them.” The Nature paper last week, among other examples I mentioned in the preceding post, described functionally significant minor members of the native-state ensemble, and this is certainly an area where structural studies are making a lot of progress. But what about the other part of that statement, the transition pathways? How are we to study them, and what can we learn about them? Experiments alone are unlikely to tell us everything we want to know about the intermediates between different native structures. We can, however, use simulations validated by experiments to investigate the mechanisms of structural change. Today in Cell, research performed primarily by my coworkers Alexandra Gardino and Janice Velos demonstrates that the bacterial signaling protein NtrC rapidly samples its active conformation even when it is not phosphorylated. Moreover, they confirm predictions that the intermediates between the active and inactive states are stabilized by hydrogen bonds present in neither endpoint.

Phosphorylation, or the covalent addition of a phosphate group to a protein, frequently appears as a way of transferring information within a cell. Often this chemical modification is described as a “switch” that flips a protein from an “off” state to an “on” state. The receiver domain of the bacterial protein NtrC (part of the nitrogen-fixing pathway) gets phosphorylated on aspartate 54 in response to environmental stimuli, causing a change in conformation. This causes NtrC to switch from a dimer to a hexamer, with the result that it binds to DNA and eventually activates transcription of target genes. As you can see from the figure on the right, the phosphorylation-driven change from the inactive (red, PDB: 1DC7) to the active (green, PDB: 1DC8) conformation significantly rearranges the ‘3445 face’ of the protein (dark colors), altering not only the position of the helices but also their length. That means numerous hydrogen bonds are broken and formed during the process, which would naturally lead one to suspect that going from one form to the other takes a while and is difficult to do. Neither is true.

That NtrC interconverts rapidly between its active and inactive states has been known for some time. Volkman et al. showed in 2001 that the 3445 face experiences some kind of chemical exchange process (2). Moreover, mutations that cause NtrC to become active in the absence of phosphate do not simply lock it into the active conformation. As shown in Fig. 2e in (1) and Fig. 4 in (2), the NMR spectra of these mutants show peaks that lie partway between the active and inactive chemical shifts. This indicates that the exchange process reflects conversion between the active and inactive conformations in the absence of phosphorylation, and that this process is fast on the NMR timescale. The activating mutations merely shift the relative populations of the two states. As Janice’s folding experiments in the current paper show, these mutations operate by destabilizing the inactive state, i.e. they make it less energetically favorable relative to the active conformation. By contrast, phosphorylation of D54 significantly stabilizes the active state. However, conformational exchange is still observed in the phosphorylated protein, indicating that the protein samples the inactive state even when it is presumably activated. NtrC is never “locked” into one state or the other.
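The “peaks partway between” observation is the signature of fast two-state exchange: when interconversion is much faster than the chemical-shift difference between the states, each nucleus gives a single resonance at the population-weighted average position. A toy illustration (the shift values and populations are invented for this sketch, not taken from the paper):

```python
def observed_shift(delta_inactive, delta_active, p_active):
    """Population-weighted average shift seen in the fast-exchange limit."""
    return (1.0 - p_active) * delta_inactive + p_active * delta_active

# Invented shifts (ppm) for a hypothetical residue on the 3445 face.
INACTIVE, ACTIVE = 8.50, 8.10

# A mutation that shifts the population toward the active state moves the
# single observed peak proportionally toward the active-state position.
print(round(observed_shift(INACTIVE, ACTIVE, 0.05), 2))  # 8.48, near inactive
print(round(observed_shift(INACTIVE, ACTIVE, 0.60), 2))  # 8.26, partway over
```

This is why an activating mutation shows up in the spectrum as a peak displaced part of the way toward the active-state shift, rather than as two separate peaks.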

This description obviously does not jibe with the typical language describing phosphorylation as a “switch” that turns signaling pathways “on”. One might well wonder why phosphorylation matters if the unphosphorylated protein can sample the active conformation. Of course, the NtrC receiver domain might not behave the same in isolation as it does when it’s integrated into the whole protein. NtrC signaling in vivo involves communication from the receiver domain to the DNA-binding domain, as well as changes in oligomeric state, neither of which is addressed by examining the receiver domain alone in solution. Given that D86N is functionally activating, it’s likely that these results translate to the whole protein, but phosphorylation may still be considered an effective “switch” because of the way it extends the lifetime of the active state. That is, the receiver domain samples the active state occasionally in solution but doesn’t stay there long enough for the full sequence of steps that are required to result in transcription. In this model, the effect of phosphorylation is simply to hold the protein in its active conformation long enough for full transcriptional activation to occur.

Talking about lifetimes is all well and good, but that’s a pretty nebulous discussion unless you have some idea of the rates at which these forms interconvert. The dispersion traces in Fig. 2 show that the rate is fast, so fast, in fact, that a refocusing field of 1,000 Hz has a negligible ability to suppress the effects of conformational exchange in the unphosphorylated protein. In order to fit these curves, Alex and Janice had to use an alternative approach to determine the intrinsic relaxation rate. Once they did that, they found that the rate of interconversion exceeded 10,000 /s for the unphosphorylated protein, but was only around 2,000 /s for the phosphorylated form. How can such a complex process, requiring so many bonds to be broken, occur so quickly? The only way to know is to examine the pathway of transition between the inactive and active forms.
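To see why a 1,000 Hz refocusing field barely touches exchange this fast, consider the fast-exchange (Luz-Meiboom) limit of CPMG relaxation dispersion, in which the fraction of exchange broadening that survives depends only on the ratio of the exchange rate to the refocusing field. This is a simplified sketch of that limit, not the fitting procedure actually used in the paper:

```python
import math

def residual_exchange_fraction(k_ex, nu_cpmg):
    """Fraction of exchange broadening left at a given CPMG field, in the
    fast-exchange (Luz-Meiboom) limit:
        R_ex(nu) / R_ex(0) = 1 - tanh(x) / x,   x = k_ex / (4 * nu_cpmg)
    """
    x = k_ex / (4.0 * nu_cpmg)
    return 1.0 - math.tanh(x) / x

# Unphosphorylated NtrC, k_ex > 10,000 /s: a 1 kHz field leaves ~60% of R_ex.
print(round(residual_exchange_fraction(1e4, 1000.0), 2))  # 0.61
# Phosphorylated, k_ex ~ 2,000 /s: the same field suppresses most of it.
print(round(residual_exchange_fraction(2e3, 1000.0), 2))  # 0.08
```

In other words, at 10,000 /s the exchange outruns even the fastest pulsing the spectrometer can manage, which is why the dispersion curves look nearly flat and an independent estimate of the intrinsic relaxation rate was needed.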

Identifying transition pathways between structural states is a tough problem, though, because the conformations of intermediates and transition states are poorly populated in the solution ensemble. Obtaining a high-resolution structure of one of these states through experiment is essentially impossible, which suggests that our best hope for getting detailed information about the transition pathway is to simulate the conformational changes in the protein. This presents its own problem, however, because these events occur on the microsecond-to-millisecond timescale, which means that a simulation would need to be on the order of a second long to really sample the relevant fluctuations. Even using hugely parallel computer clusters, simulating a protein at equilibrium for as little as a microsecond takes several months. You would need to run such a simulation for years to adequately sample even this rapid a transition. To get around this time barrier, researchers perform various tricks with their simulations, simplifying the representation of molecules and forces or biasing them so that transitions happen more frequently.
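The back-of-the-envelope arithmetic behind that pessimism, taking a standard 2 fs integration timestep and reading “several months per microsecond” as roughly four months (both numbers are assumptions for illustration):

```python
FS = 1e-15                       # one femtosecond, in seconds
TIMESTEP = 2 * FS                # typical MD integration step

# Timesteps needed to cover just one millisecond of simulated time.
steps = 1e-3 / TIMESTEP
print(f"{steps:.0e} steps")      # 5e+11 steps

# Wall-clock cost at the quoted throughput of ~4 months per microsecond.
MONTHS_PER_US = 4.0              # assumption
years = (1e-3 / 1e-6) * MONTHS_PER_US / 12.0
print(f"~{years:.0f} years")     # ~333 years
```

Half a trillion timesteps and centuries of wall-clock time for a single millisecond: hence the tricks.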

The latter approach was used by Ming Lei in his targeted molecular dynamics simulation of this transition (3): he applied an external force in that simulation that pushed the protein from one state to the other. Here some rotation students put in good work. Aleksandr (you may have noticed that the Kern lab attracts a lot of people named Alex) carefully examined Ming’s simulation to identify unusual interactions and noticed that a number of short-lived hydrogen bonds appeared to be forming, specifically hydrogen bonds that were not present in either the active or the inactive conformation. Of course, when you include a fictional force pushing a protein one way or the other you may end up with artifacts in the simulation, so Janice, along with rotation students Ce Feng and Phillip, tested the simulation’s predictions by generating mutants that were incapable of forming these hydrogen bonds.

As you can see from Fig. 4, these mutations dramatically slowed the conformational fluctuation, confirming the importance of these bonds in lowering the energy barrier of the structural transition. Even though the bonds exist for only a few nanoseconds in the simulation, they appear to play a significant role in stabilizing the transition pathway. Note, however, that the effect of these mutations is not necessarily additive: the double S85D/Y101F mutant is no slower than either S85D or Y101F alone. This suggests that these hydrogen bonds stabilize the transition pathway at different points, so that different non-native bonds are responsible for lowering the energy barriers of multiple independent steps. This finding has significant implications for efforts to model structural transitions using simplified potentials based entirely on native-state contacts; it is possible, perhaps even likely, that such simulations will miss critical interactions that stabilize intermediates along these transition pathways.
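Transition-state theory gives a feel for the size of the effect: a mutation that slows interconversion by some factor raises the barrier by ΔΔG‡ = RT·ln(k_fast/k_slow). The rates below are illustrative placeholders, not the paper’s fitted values:

```python
import math

R = 8.314  # gas constant, J / (mol K)

def ddg_barrier_kj(k_fast, k_slow, temp_k=298.0):
    """Barrier increase (kJ/mol) implied by a rate slowdown:
    Delta-Delta-G = R * T * ln(k_fast / k_slow)."""
    return R * temp_k * math.log(k_fast / k_slow) / 1000.0

# Illustrative: a 10-fold slowdown at room temperature corresponds to a
# barrier increase of only ~5.7 kJ/mol -- roughly one hydrogen bond's worth.
print(round(ddg_barrier_kj(1e4, 1e3), 1))  # 5.7
```

Because rate depends exponentially on barrier height, even a single transient hydrogen bond can change the interconversion rate by an order of magnitude, which is exactly the scale of effect the mutants show.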

Our ability to successfully design functional proteins will require that we consider not only the structural heterogeneity of the native state, but also how the movement of the protein between different conformational substates can be tuned by raising or lowering the energy barrier between them. Here, NMR experiments have shown how mutations and phosphorylation alter the energy landscape of NtrC and alter the balance between its inactive and active conformations. Moreover, this study validates the prediction from a targeted molecular dynamics simulation that NtrC uses short-lived, non-native hydrogen bonds to facilitate the transition between these two conformational states. Beyond expanding our knowledge of NtrC’s conformational energy landscape, these findings suggest possibilities for other proteins that are activated by phosphorylation, one of nature’s most pervasive signaling methods.

1) Gardino, A., Villali, J., Kivenson, A., Lei, M., Liu, C., Steindel, P., Eisenmesser, E., Labeikovsky, W., Wolf-Watz, M., Clarkson, M.W., & Kern, D. (2009). Transient Non-native Hydrogen Bonds Promote Activation of a Signaling Protein. Cell, 139 (6), 1109-1118. DOI: 10.1016/j.cell.2009.11.022

2) Volkman, B.F., Lipson, D., Wemmer, D.E., and Kern, D. (2001) Two-State Allosteric Behavior in a Single-Domain Signaling Protein. Science, 291 (5512), 2429-2433. DOI: 10.1126/science.291.5512.2429

3) Lei, M., Velos, J., Gardino, A., Kivenson, A., Karplus, M., and Kern, D. (2009). Segmented Transition Pathway of the Signaling Protein Nitrogen Regulatory Protein C. Journal of Molecular Biology, 392 (3), 823-836. DOI: 10.1016/j.jmb.2009.06.065

 Posted by at 3:02 PM
Mar 17 2009
The observation of plaques composed primarily of amyloid-β (Aβ) peptides in the brains of Alzheimer’s patients long ago gave rise to a hypothesis that Aβ was the agent that caused the disease. The plaques themselves, composed of long, insoluble fibrils of Aβ, were believed to cause the synapse loss and nerve death characteristic of the disease, and some data support this model. However, several experiments have suggested an alternative possibility: that the symptoms of Alzheimer’s may be attributed to soluble Aβ oligomers. In this view the fibrillar deposits may be an incidental feature of Alzheimer’s disease, or even a defense mechanism whereby the body tries to get rid of the oligomers by forcing them into insoluble aggregates. In the March 10 edition of PNAS, a team led by researchers at Massachusetts General Hospital claim to have reconciled these two models. Using fluorescence microscopy, they find that amyloid plaques are surrounded by a “halo” of Aβ oligomers that kill the surrounding synapses.

The authors of this study used fluorescence labeling to identify plaques, oligomers, and synapses in thinly-sliced tissue sections and in living brains. They performed their experiments in mice that had been genetically manipulated to develop amyloid plaques. When they examined the brains of live mice, Koffie et al. noticed that the fibrillar plaques were surrounded by a cloud of oligomers, as you can see for yourself in the figure below. On the left you can see the plaque core labeled by a fluorescent dye, and the middle image shows fluorescence associated with an antibody that specifically binds amyloid oligomers. When these images are merged, the diffuse “halo” of oligomers becomes obvious. The authors observed the same result when they performed a similar experiment using thin slices of brain tissue.

The authors also used a fluorescently-conjugated antibody to label elements of the post-synaptic density (PSD), allowing them to identify healthy synapses in the brain. Experiments in tissue sections demonstrated that the number of healthy synapses was reduced not only immediately adjacent to the plaque, but also in a region extending up to 50 µm away (a distance comparable to the diameter of a human hair). Aβ oligomers are also enriched in this region, and the relative concentration of the oligomer roughly correlates with the loss of synapses. By comparing the pattern of Aβ fluorescence to that of the PSD, the authors determined that oligomers were associated with many synapses, and that interactions between the PSD and Aβ oligomers resulted in decreased synapse size. The relationship between Aβ binding and reduced synapse size was also shown to hold in control mice expressing normal levels of native amyloid precursor protein.

The observation that the presence of Aβ oligomers correlates with synapse loss, together with the apparent degradation of synapses by Aβ, indicates that the soluble oligomers are a significant cause of Alzheimer’s symptoms, although this study does not rule out the possibility that the plaque itself is also toxic. Even if the plaques have no immediate toxic effect, the authors propose that they serve as reservoirs, releasing synaptotoxic Aβ oligomers into the surrounding neural tissue and increasing the size of the lesions beyond the extent of the plaque itself. In this way Koffie et al. believe they have reconciled the previous models: oligomers are directly toxic, plaques release toxic oligomers, so both can serve as causative agents in Alzheimer’s disease.

If this model is accurate, it implies that Alzheimer’s disease may be quite resilient to attack. Antibodies or drugs that break up the Aβ oligomers may mitigate the synaptic damage, but as long as the plaques persist they will continue to replenish the pool of oligomers. Treatments that successfully break up the plaques will probably worsen symptoms due to the release of toxic oligomers as the fibrils disintegrate. These possibilities reinforce the idea that the most effective treatment for Alzheimer’s will involve reducing the concentration of amyloidogenic Aβ peptides to prevent them from forming plaques in the first place.

(1) Koffie, R., Meyer-Luehmann, M., Hashimoto, T., Adams, K., Mielke, M., Garcia-Alloza, M., Micheva, K., Smith, S., Kim, M., Lee, V., Hyman, B., & Spires-Jones, T. (2009). Oligomeric amyloid associates with postsynaptic densities and correlates with excitatory synapse loss near senile plaques. Proceedings of the National Academy of Sciences, 106 (10), 4012-4017. DOI: 10.1073/pnas.0811698106

Jan 23 2009
Numerous and diverse biological processes depend on the functioning of an internal clock. Biological timers determine your heart rate, the frequency of cell division, and the way you feel at 3 AM, among other things. Similarly, mechanical and electronic clocks serve essential functions in many kinds of man-made devices. As we begin to develop synthetic organisms for medical and industrial purposes, it will be useful for us to be able to construct timers within these micro-organisms to control their activity. In two recent papers, scientists have created molecular systems in mammalian and bacterial cells with tunable oscillation periods.

Although the methods used to construct these oscillators and the kinds of cells they were made in differed significantly, the two systems shared one key feature: both combined a positive and a negative feedback loop. In principle, it should be possible to construct an oscillating system using only a negative feedback loop. For instance, you could have a system in which a transcriptional activator enhances the expression of a functional protein as well as that of a transcriptional repressor. As the concentration of the repressor increases, that of the activator falls, causing levels of the functional protein and the repressor to fall in turn, allowing the concentration of the activator to rise again. By tuning the lag in this system one could in theory produce an oscillator with a range of possible frequencies. Yet many natural systems seem to have evolved with a positive feedback loop as well (in which the activator enhances its own expression).
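That negative-feedback-only design can be made concrete with a toy model: a repressor that shuts off its own production after a delay. A minimal forward-Euler sketch (all parameters invented for illustration) oscillates indefinitely once the delay and the steepness of repression are large enough:

```python
def simulate(beta=10.0, K=1.0, n=4, gamma=1.0, tau=3.0, dt=0.01, t_end=60.0):
    """Delayed negative autoregulation:
        dx/dt = beta / (1 + (x(t - tau) / K)**n) - gamma * x(t)
    integrated with forward Euler; returns the full trajectory."""
    lag = int(tau / dt)
    x = [0.1] * (lag + 1)        # constant initial history for the delay term
    for _ in range(int(t_end / dt)):
        x_delayed = x[-lag - 1]  # concentration tau time units ago
        production = beta / (1.0 + (x_delayed / K) ** n)
        x.append(x[-1] + dt * (production - gamma * x[-1]))
    return x

traj = simulate()
second_half = traj[len(traj) // 2:]
amplitude = max(second_half) - min(second_half)
# Sustained swings long after the initial transient: the delayed repression
# keeps overshooting, so the system never settles to a steady state.
print(round(amplitude, 2))
```

Lengthening the delay (tau) slows the oscillation, but in a design like this the amplitude tends to shift along with the frequency; stabilizing frequency tuning against amplitude changes is where the positive feedback loop comes in.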

This curious feature was the subject of a series of simulations reported by Tsai et al. (1) last July in Science. Their studies indicated that a system using only a negative feedback loop would produce a periodic oscillation just as expected. However, they also found that systems relying only on negative feedback were limited in that it was difficult to adjust the frequency of the oscillation without also altering its amplitude. Introducing a positive feedback loop stabilized the system so that the oscillator could be tuned to a wider range of frequencies without altering peak amplitude.

The benefits of this approach were demonstrated in data reported by Stricker et al. last November in Nature (2). They constructed a circuit that expressed green fluorescent protein in an oscillatory manner in response to stimulation by arabinose and isopropyl β-D-thiogalactopyranoside (IPTG). They created a circuit (see their figure, right) in which every component ran off a hybrid promoter that could be activated by AraC (which binds arabinose) and inhibited by LacI (which binds IPTG). Arabinose binds to and activates AraC, while IPTG binds to and inactivates the LacI repressor, with the result that all three genes are transcribed, and the cells fluoresce due to the presence of GFP. As the concentration of LacI rises, the fixed amount of IPTG can no longer keep the repressor inactivated, leading to an eventual repression of transcription and the end of fluorescence. As the LacI proteins get degraded, the IPTG concentration again becomes sufficient to relieve repression and reactivate transcription, leading to a new fluorescent phase.

Stricker et al. found that they could alter the frequency and amplitude of this oscillation by altering the growth conditions of the bacteria (temperature and nutrient availability) as well as by adjusting the concentrations of the activating reagents arabinose and IPTG. By attempting to match computer models of their oscillator to the data they collected, they found that the time needed for translation, folding, and multimerization played a critical role in establishing the existence and period of the oscillation. Stricker et al. constructed an additional circuit using only negative feedback from LacI, proving that this was possible, but they found that in this case the period was not very sensitive to IPTG concentration and the oscillations were not as regular.

A similar system was constructed in Chinese hamster ovary cells by Tigges et al., who described their results recently in Nature (3). The circuit they constructed used tetracycline (TC) and pristinamycin I (PI) as activating molecules. The tetracycline-dependent transactivator (tTA) served as the positive feedback loop, activating transcription of itself, GFP, and the pristinamycin-dependent transactivator (PIT). In this system, increased levels of PIT cause the production of antisense RNA to tTA, causing its mRNA to be destroyed prior to protein production. This, in turn, diminishes production of all proteins until the reduced levels of PIT allow tTA to again activate transcription. They found that they could control the period of oscillation by altering the gene dosage (i.e. the quantity of DNA used to transfect the cells).

The oscillating systems constructed in these papers serve more as test cases and examinations of principles than as functional pieces of synthetic systems. You will not be using an E. coli alarm clock any time soon. However, it has always been true that you learn more from trying to build something than from trying to tear it apart. These attempts to construct artificial periodic oscillators have provided interesting insights into those that have evolved naturally. The knowledge gained from these experiments will help us to understand oscillatory systems like the circadian rhythm and cardiac pacemaker, in addition to illuminating design principles for synthetic biology.

(1) T. Y.-C. Tsai, Y. S. Choi, W. Ma, J. R. Pomerening, C. Tang, J. E. Ferrell (2008). Robust, Tunable Biological Oscillations from Interlinked Positive and Negative Feedback Loops. Science, 321 (5885), 126-129. DOI: 10.1126/science.1156951

(2) Jesse Stricker, Scott Cookson, Matthew R. Bennett, William H. Mather, Lev S. Tsimring, Jeff Hasty (2008). A fast, robust and tunable synthetic gene oscillator. Nature, 456 (7221), 516-519. DOI: 10.1038/nature07389

(3) Marcel Tigges, Tatiana T. Marquez-Lago, Jörg Stelling, Martin Fussenegger (2009). A tunable synthetic mammalian oscillator. Nature, 457 (7227), 309-312. DOI: 10.1038/nature07616

Aug 14 2008
How does the human brain react to the communication of emotion? Does the observation or imagination of emotions have anything in common with the personal experience of them? It is possible that the brain uses a setup in which seeing a person experience an emotion, imagining that emotion, and feeling that same emotion all use completely independent circuitry. Yet since all of these experiences make references to the same emotional state, it is also reasonable to think that some of the pathways are shared. In a recent article from PLoS ONE, a team of researchers uses functional Magnetic Resonance Imaging (fMRI) to determine similarities and differences in the patterns of brain activation following various means of communicating disgust. PLoS ONE is open access, so go ahead and open the article up in another window.

First, a word about fMRI, for those unfamiliar with it. As the name suggests, fMRI is an elaboration of the standard MRI techniques used to image the interior of your body without potentially harmful ionizing radiation. Neuronal activity in the brain causes a local depletion of oxygen from the blood, followed by a localized increase in blood flow. Because the magnetic properties of the iron in blood change with its oxygenation state, it is possible to detect these hemodynamics using magnetic resonance imaging. Thus, fMRI is able to detect neural activity indirectly, although the fMRI signal lags behind the activity by a few seconds. A given fMRI signal also encompasses a large number of individual neurons and therefore can only serve as a rough map of where things are happening in the brain. These temporal and spatial limitations constrain the conclusions that can be drawn reliably from fMRI, but the observed correlations can provide valuable insights.
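That few-second lag is commonly modeled by convolving the underlying neural activity with a hemodynamic response function (HRF). A rough sketch using the widely used double-gamma form (the parameter values are conventional defaults, assumed here for illustration, not taken from this paper):

```python
import math

def hrf(t, a1=6.0, a2=16.0, undershoot_ratio=1.0 / 6.0):
    """Canonical double-gamma hemodynamic response function (t in seconds)."""
    if t <= 0.0:
        return 0.0
    g = lambda a: t ** (a - 1.0) * math.exp(-t) / math.gamma(a)
    return g(a1) - undershoot_ratio * g(a2)

dt = 0.1
n = 300                                                      # 30 s window
stimulus = [1.0 if i * dt < 1.0 else 0.0 for i in range(n)]  # 1 s of activity

# The predicted BOLD signal is the neural activity convolved with the HRF.
bold = [sum(stimulus[j] * hrf((i - j) * dt) * dt for j in range(i + 1))
        for i in range(n)]

peak_time = max(range(n), key=lambda i: bold[i]) * dt
print(peak_time)  # the BOLD peak lags the brief stimulus by roughly 5 s
```

A one-second burst of neural activity thus produces a blood-oxygenation signal that peaks several seconds later, which is exactly why fMRI's temporal resolution is limited even when the scanner itself samples quickly.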

Jabbi et al. used fMRI to map the neural response of subjects to various encounters with disgust. Previous research had shown that a particular region of the brain (the IFO) showed increased activity when subjects either tasted something disgusting, or viewed a short clip of someone else tasting something disgusting. For this study, Jabbi et al. had participants read short scripts (samples can be found in the supplementary materials) intended to make the reader imagine being disgusted, pleased, or not feeling anything. They found that reading disgusting passages induced a neural response in this region of interest, just as it had for the cases of tasting or observing disgust.

While this may seem completely unsurprising, it bears some consideration. The experience of personal disgust differs significantly from the experience of observing disgust in others. Similarly, imagining or reading about disgust creates a very different subjective experience than, say, drinking quinine. Given that these are all quite different feelings, it is somewhat surprising that a single area is activated by all three.

Of course, there is a fine line to consider here — the passages meant to make the subjects imagine disgust may have actually disgusted them. The paragraphs that the authors make available in the supplementary materials are written in second person and involve things like accidentally ingesting animal waste. Because the subjects are reading passages that ask them to imagine themselves being disgusted, and the passages are themselves disgusting, the act of imagination may be contaminated by an immediate personal experience of disgust. In a more elaborate experiment it might be of value to use passages written in the third person. Additionally, it might be useful to employ passages in which the characters, because of particular phobias or personal experiences, are disgusted by items or actions the reader is likely to find innocuous.

Whether the readers were themselves disgusted or not, the overall response in the brain differed for each of the stimuli, as shown by a map of correlated activity (Figure 2). While the area outside the IFO activated by observation was relatively small, both the disgusting taste and the disgusting scripts produced widespread activity relative to a neutral taste or script. In general there was not much overlap between the networks, except for a small region shared by the imagination and experience groups. The authors propose that the similarities of imagining, observing, and experiencing emotion are due to the common activation of the IFO, while the differences between these are due to the largely distinct networks of correlated activity. Different modes of exposure to disgust may therefore act in complementary, rather than independent, ways.

Additionally, this result appears to be consistent with the view that our recognition of observed disgust and our imagination of disgust rely on an internal simulation of our own feelings of disgust. However, these experiments cannot establish exactly what a particular region of the brain is doing, so this remains an open question.

While this research does not indicate whether these results can be generalized to other emotional states, this finding may interest developers of media that make use of multiple modes of communication, specifically video games. Games often rely on video cutscenes to convey story and emotion, but this approach may be wasting a significant amount of potential. The participatory nature of games makes it possible to approach emotional communication not only through the observational route, but also the experiential route.

Consider the case of Agro’s fall in Shadow of the Colossus. Observing the cutscene, and hearing the voice of Wander, the player can understand that Wander feels grief at this event, in much the same way that anyone watching a movie could understand it. Additionally, the emptiness of the game’s landscape and the forced collaboration between the player and the Agro AI have helped to create a relationship between the player and the horse. Thus, in observing Agro’s fall, the player may feel his own sense of grief at the event, increasing the emotional resonance of the moment.

This suggests a possible, if lengthy, experiment. It would be interesting to compare the fMRI profiles of subjects observing Agro’s fall under two conditions: one in which they have actually played the game up to that point, and another in which they have watched the game as a movie, with exploration and battles recorded previously from an expert player’s run. Would the first group have activity in both the observational and experiential networks, or would each group activate a different network? What implications might these outcomes have for the development of emotionally fulfilling games?

Of course fMRI studies are not some holy grail that makes everything clear. The work of Jabbi et al. has given us a rough map to where things are happening, but understanding exactly what is happening and how it is happening will require additional experiments and possibly new investigative techniques. Nonetheless, this is an interesting piece of the puzzle, and perhaps some food for thought.

Mbemba Jabbi, Jojanneke Bastiaansen, Christian Keysers (2008). A Common Anterior Insula Representation of Disgust Observation, Experience and Imagination Shows Divergent Functional Connectivity Pathways. PLoS ONE, 3 (8). DOI: 10.1371/journal.pone.0002939