Aug 18, 2009
 
This post continues my series about selected articles from the dynamics-focused topical issue of JBNMR.

It is helpful, in examining some NMR articles, to understand that NMR spectroscopists have a long and resilient tradition of giving their pulse sequences silly names. You can think of it as the biophysical equivalent of fly geneticist behavior. From the basic COSY and NOESY experiments (pronounced “cozy” and “nosy”) to the INEPT spin-echo train, to more complicated pulse trains such as AMNESIA and DIPSI (which, I am not making this up, is used in an experiment sometimes called the HOHAHA), the field is just littered with ludicrous acronyms (look upon our words, ye mighty, and despair). A team from Josh Wand’s lab now joins this club with AMORE-HX, a multidimensional optimization of radial enhanced NMR-sampled hydrogen exchange. The name is ridiculous, but the experiment fills an important role and illustrates a very active area of technical development in NMR.

The experiment they developed is intended to measure the rate of hydrogen/deuterium exchange at amide groups on the backbone of a protein. This exchange reaction proceeds pretty quickly for most residue types and can be either acid- or base-catalyzed. For it to happen, however, two things must be true: the amide proton must not already be in a hydrogen bond, and the site of the reaction must be accessible to water. These requirements should indicate to you that HX measures the rate of local unfolding and can therefore be interpreted as a measure of fold stability at each NH group on the backbone. These data are of obvious interest to researchers studying protein folding. In addition, because some structural transitions are proposed to involve an unfolded state, HX may have explanatory power for protein interactions and regulation.

A typical HX experiment involves taking your protein, switching it rapidly into >75% D2O buffer, then placing it in the magnet and taking a series of HSQC or HMQC spectra that separate the signals from backbone NH groups by the proton and nitrogen chemical shifts. These spectra can be taken with very high time resolution (<2 min each), and the rate of exchange can then be measured from the decay of peak intensity as hydrogen is replaced by deuterium. Assuming that the chemical step occurs significantly faster than the rate of local unfolding and refolding, this decay can be directly interpreted as a local unfolding rate. This works quite well, but as proteins get larger there is a significant likelihood of signal overlap. It would be nice, with these large proteins, to separate the hydrogen signals using an additional chemical shift — say, that of the adjacent carbonyl. Unfortunately, taking these decay curves using 3-dimensional spectra like the HNCO turns out to be impractical because of the way these experiments are collected. Multidimensional NMR spectra rely on a series of internal delays during which a coherence acquires the frequency characteristics of a particular nucleus. In a typical experiment, the delays are multiples of a set dwell time, the length of which is determined by the frequency range one wishes to examine. Typically the collection proceeds linearly through the array, so for two indirect dimensions with dwell times y and z, sampled m and n times respectively, you would collect 1D spectra with the delay pairs:

0,0 0,y 0,2y 0,3y … 0,my

then

z,0 z,y z,2y z,3y … z,my

and so on until

nz,0 nz,y nz,2y nz,3y … nz,my

This is called Cartesian sampling, and it has some advantages. The numerous data points typically do a good job of specifying resonance frequencies, and processing this data is a fairly straightforward proposition. The glaringly obvious disadvantage is time, of which a great deal is required. Completely sampling either one of these dimensions separately can take less than 30 minutes, but sampling both can push a triple-resonance experiment into the 60 hour range. Most annoyingly, because triple-resonance spectra can be really rather sparse, this extremely long experiment often over-specifies the resonance frequencies. That is, much of this time is spent collecting data you don’t need.
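To make the bookkeeping concrete, here is a minimal sketch (in Python, with invented numbers rather than the parameters of any real HNCO) of how a linear Cartesian schedule for the two indirect dimensions is laid out, and how quickly the spectrometer time adds up:

```python
# Sketch: Cartesian sampling of two indirect dimensions (hypothetical numbers).
# Each (t1, t2) delay pair is collected as a separate 1D spectrum, so the total
# time scales as n1 * n2 * scans * time_per_transient.

def cartesian_schedule(n1, n2, dwell1, dwell2):
    """Return the (t1, t2) delay pairs visited in a linear Cartesian scheme."""
    return [(i * dwell1, j * dwell2) for i in range(n1) for j in range(n2)]

# Illustrative values only; real numbers depend on sweep widths and hardware.
n_co, n_n = 64, 32                        # increments in the two indirect dimensions
dwell_co, dwell_n = 1 / 1800.0, 1 / 1500.0  # dwell times (s) set by the spectral widths
scans, recycle = 16, 1.5                  # transients per FID, seconds per transient

schedule = cartesian_schedule(n_co, n_n, dwell_co, dwell_n)
hours = len(schedule) * scans * recycle / 3600.0
print(f"{len(schedule)} FIDs, roughly {hours:.1f} h of spectrometer time")
# Quadrature detection in both indirect dimensions multiplies this by ~4,
# which is how a triple-resonance experiment ends up in the tens of hours.
```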

Because spectrometer availability and sample stability are not infinite, there is considerable interest in making this process more efficient. One of the methods for doing so is called radial sampling. In this approach, the spectrum is built up from a series of “diagonal” spectra that lie along a certain defined angle with respect to the two time domains (imagine the above array as a rectangle with sides of my and nz to get a rough idea of what this means). If these angles are judiciously chosen, the spectrum can then be rebuilt from just a few of them with only modest losses in resolution. Gledhill et al. apply this approach as a means of addressing their time-resolution problem. Guided by a selection algorithm, they use just four angles (at 500 MHz) to resolve more than 90% of the peaks possible in myelin basic protein. As a result, they were able to collect HNCO-based HX data with 15-minute resolution. This isn’t enough to catch the fastest-exchanging peaks, but it’s more than sufficient to catch core residues.
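For contrast, here is a rough sketch of what a radial sampling schedule looks like: each plane steps the two evolution times together along a line at a fixed angle, so a handful of planes replaces the full grid. The four angles and other numbers below are invented placeholders, not the values produced by Gledhill et al.'s selection algorithm.

```python
import numpy as np

# Sketch: a radial (projection) sampling schedule. Each plane samples the two
# evolution times in lockstep along a line at angle theta in the (t1, t2) plane.

def radial_schedule(angles_deg, n_points, dwell1, dwell2):
    """Return {angle: [(t1, t2), ...]} for each requested projection angle."""
    planes = {}
    for theta in angles_deg:
        t = np.radians(theta)
        planes[theta] = [(k * dwell1 * np.cos(t), k * dwell2 * np.sin(t))
                         for k in range(n_points)]
    return planes

planes = radial_schedule([15, 35, 55, 75], n_points=48,
                         dwell1=1 / 1800.0, dwell2=1 / 1500.0)
print(sum(len(p) for p in planes.values()), "FIDs instead of a full Cartesian grid")
```

Rebuilding a full spectrum from these tilted planes takes specialized processing, but for the present purpose that reconstruction step can often be skipped entirely, as described below.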

Gledhill et al. used some additional tricks to gain extra speed in the experiment, however. Using band-selective excitation, they cut down the experiment’s relaxation delay to 0.6 s, which is important because this delay is a considerable portion of the duration of each transient. Having done this, they started to get really clever. Because this experiment is being used to measure the intensities of known frequencies, it is possible to significantly reduce the amount of processing required by employing the 2D-FT only for those regions that contained actual peak intensity. Moreover, they could extract peak intensities from each individual angle plane. Because they did not interleave the collection, this enabled them to substantially increase the time-resolution when necessary.

For peaks that exchanged quickly, Gledhill et al. took exchange data from the individual angle spectra, to maximize the time-sensitivity of the data. For slowly-exchanging peaks, they averaged the data from the angle spectra to maximize the signal-to-noise ratio. The resulting intensity curve seems a bit noisy, but this is an acceptable price to pay for access to new peaks. More importantly, the precision of the overall rate (as opposed to the instantaneous intensity) appears to be on par with simpler methods of measuring HX.
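However the intensities are extracted, the rate itself comes from fitting each peak's decay as a function of time in D2O. A minimal sketch, with made-up time points and intensities and scipy doing the least-squares fit:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch: extracting a hydrogen-exchange rate from the decay of one peak's
# intensity after transfer into D2O. Times and intensities are invented.

def decay(t, i0, k):
    return i0 * np.exp(-k * t)

t = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])        # hours after D2O transfer
i = np.array([0.95, 0.83, 0.69, 0.47, 0.22, 0.05])    # normalized peak intensity

(i0, k_ex), cov = curve_fit(decay, t, i, p0=(1.0, 0.3))
k_err = np.sqrt(np.diag(cov))[1]
print(f"k_ex = {k_ex:.2f} +/- {k_err:.2f} per hour")
```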

Successful use of the AMORE-HX experiment will depend on a wise selection of acquisition angles, a process that may benefit from further optimization. Because the HNCO has relatively good dispersion, the pulse sequence should enable HX measurements for just about any protein that is suitable for NMR. This would allow for a direct assessment of large enzymes and complexes, as well as a measurement of local stabilities in domain-domain interfaces.

Gledhill, J., Walters, B., & Wand, A. (2009). AMORE-HX: a multidimensional optimization of radial enhanced NMR-sampled hydrogen exchange. Journal of Biomolecular NMR. DOI: 10.1007/s10858-009-9357-4

Aug 06, 2009
 
Part of the motivation for my previous post about the spectral density was the recent appearance online (and upcoming appearance in print) of my paper in the Journal of Biomolecular NMR, which is open access, so you can open it up from home and read along as I tell you about it. The obscure-sounding title “Mesodynamics in the SARS nucleocapsid measured by NMR field cycling” means that we were able to characterize an interesting fluctuation in a protein from the SARS coronavirus, and that we used a cool technique to do it.

In the previous post, I mentioned that NMR dynamics studies ought to use data collected at multiple static magnetic field strengths. This is typically accomplished by moving to higher fields, because of the clear advantages in sensitivity and resolution at high and ultra-high field. Corresponding author Alfred Redfield, however, created a device (left) to capture information about relaxation at lower magnetic field strengths while retaining the advantages of, say, a 500 MHz magnet. This is accomplished by field-cycling, which in this case means physically moving the sample from the center of the magnet’s superconducting coil to a spot several centimeters away. If one has carefully measured how the magnetic field falls off with distance, one can reproducibly measure relaxation at a desired (lower) field within the bore of the 500 MHz magnet.
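As I understand it, the practical recipe is a lookup: measure the fringe field as a function of distance once, then interpolate to find where to park the sample for a desired relaxation field. A sketch with invented calibration numbers (not Redfield's actual measurements):

```python
import numpy as np

# Sketch: using a measured field-vs-distance calibration of the fringe field to
# decide how far to shuttle the sample for a desired relaxation field.
# The calibration points below are invented for illustration.

distance_cm = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
field_tesla = np.array([11.7, 9.5, 5.8, 2.4, 0.9, 0.4])

def shuttle_distance(target_field):
    """Interpolate the calibration to find the distance giving target_field."""
    # np.interp needs increasing x, so interpolate distance as a function of field.
    return np.interp(target_field, field_tesla[::-1], distance_cm[::-1])

# e.g. a ~17 MHz 15N relaxation field corresponds to a static field of roughly 4 T
print(f"shuttle to ~{shuttle_distance(3.9):.1f} cm from the coil center")
```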

As you might surmise from the photograph, Al built the field-cycling device himself, often jury-rigging it from whatever parts were convenient. For instance, as you can see at right, the push-rod that connects the sample in the tube to the motor assembly was made from an arrow purchased at a sporting goods store. I’ll count myself lucky if I’m half as creative and active in my 50s as Al is in his 70s. Al has used this device to investigate the dynamics of nucleic acids and lipids, but he was interested to see what we could learn about proteins by examining relaxation at low magnetic fields. Relatively low, at any rate — the weakest magnetic field I used is higher than you would typically encounter in, say, a clinical MRI. In his 31P research, however, Al has gone to near-zero field during the relaxation period.

Elan Eisenmesser, now a professor at the University of Colorado Health Sciences Center, did some initial investigations using this technique on cyclophilin A, and edited the pulse sequences so they could control the field-cycling device. Unfortunately, the results in CypA were kind of boring because for that protein the dynamics on the ps-ns timescale are relatively homogeneous. At this time, Elan was also working on the N-terminal domain of the SARS nucleocapsid protein (henceforth SARSN). As you can see from the structure at left (explore it at the PDB), SARSN has a long β-hairpin (sticking out to the right) which is known to be flexible. The hairpin is thought to interact with RNA as part of the viral assembly process, as well as binding to several host proteins during the process of infection. As Elan prepared to move on, he passed the project to me, and with Wladimir Labeikovsky assisting for the first couple of months, I took a bunch of spectra under various conditions.

You can see what a low-field spectrum looks like at right: this is an HSQC from an R1 experiment in which excitation and acquisition were performed at 50.7 MHz (15N) and the relaxation period took place at ~17 MHz (blue). I’ve also overlaid a spectrum collected entirely at 50.7 MHz (red). The peaks are all in the same place and the sensitivity is acceptable, but the signal-to-noise ratio is clearly lower for the 17 MHz spectrum, and we get some sidebands from the water on the right side of the spectrum. Getting the water signal to behave was a significant challenge for these experiments and took several tries to get right.

Besides the experiments I performed personally, spectra were collected by Elan and Geoffrey Armstrong at the Rocky Mountain magnet facility (the 900 MHz R1 and NOE) and Karl Koshlap at the UNC Pharmacy School (500 MHz data). Karl’s involvement was necessitated by a change in sample conditions and the unfortunate incident our spectrometer had with an HDTV channel (chronicled here, here, and here).

In the end I managed to gather relaxation data from four high fields (using standard equipment) and two low fields (using the field cycler). The R1 data are shown in Figure 2, and if you read the previous post then they shouldn’t surprise you very much. For most of the protein, R1 decreases steeply as the strength of the static magnetic field is increased, but for a subset of amides this field dependence is substantially reduced. Most of these residues fall into a continuous stretch encompassing the β-hairpin of SARSN and an adjacent loop (shown on the structure in Figure 1). In addition, the heteronuclear NOE measurements for these residues show a very large RNOE/R1 ratio at 50 MHz that decreases substantially as the field increases (Figure 3). As I discussed in the last post, these patterns of field dependence are characteristic of flexible regions in a protein, but more specifically they indicate flexibility with an internal correlation time of around a nanosecond.

One might expect a large, relatively unconstrained feature like the hairpin to have flexibility on multiple timescales. In particular, it seems like the sort of structural element that might move with a time constant of microseconds or milliseconds. These slower motions can’t be fit with great accuracy using the experiments performed here, but evidence of their absence can be found in the R2 experiments performed at 500 and 600 MHz (Fig. 4). Assuming that they are correlated with changes in chemical shift, we would expect motions on this timescale to increase the R2, but in the hairpin this relaxation rate is substantially reduced, consistent with high flexibility on the nanosecond timescale (low S2).

In order to gain a more complete picture of the dynamics, I fit the relaxation data to model-free formulations of the spectral density. For most residues, the classic Lipari-Szabo formalism worked quite well, although the S2 are generally higher than I like. An analysis of the fits, however, indicated that many residues needed to be fit to a more complex model, called extended model-free or model 5. In this model the spectral density is given as:
J(ω) = (2/5) [ S2f·S2s·τm / (1 + (ωτm)²) + S2f(1 − S2s)·τ / (1 + (ωτ)²) ], with 1/τ = 1/τm + 1/τs
where S2f and S2s are order parameters for a fast and slow internal motion, respectively, and τs is the internal correlation time for the slow motion (τf is assumed to be ~0). The residues that were fit to this alternative model happened to be those with anomalous R1 and NOE dispersions, meaning they mostly belonged to the β-hairpin and the loop incorporating residues 60-65.
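Written out as code, the extended model-free spectral density (with τf taken as ~0, as above) is only a few lines. The parameter values below are illustrative: S2s and τs are roughly in the range described for the hairpin later in this post, while S2f and τm are simply assumed, not numbers fitted in the paper.

```python
import numpy as np

# Sketch: the extended model-free spectral density with tau_f ~ 0.
def j_emf(omega, s2f, s2s, tau_m, tau_s):
    """Extended model-free J(omega); omega in rad/s, correlation times in s."""
    tau = 1.0 / (1.0 / tau_m + 1.0 / tau_s)
    return 0.4 * (s2f * s2s * tau_m / (1.0 + (omega * tau_m) ** 2)
                  + s2f * (1.0 - s2s) * tau / (1.0 + (omega * tau) ** 2))

# A hairpin-like residue: rigid on the fastest timescale (high S2f, assumed here)
# but mobile on the ~700 ps timescale (S2s ~ 0.6), in a protein assumed to tumble
# with tau_m ~ 9 ns.
omega_n = 2 * np.pi * 50.7e6      # 15N Larmor frequency at 11.7 T (rad/s)
print(j_emf(omega_n, s2f=0.85, s2s=0.6, tau_m=9e-9, tau_s=0.7e-9))
```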

Ultimately I didn’t include the low-field data in the quantitative fits. The large random errors in these rates (error bars in Fig. 2) meant that the more precise high-field data would dominate the fits, for one thing. For another, the low-field data were not entirely consistent with the high-field results. Although the general features of the relaxation at 17 and 30 MHz agree with predictions from high field, the observed low-field R1 differ substantially from predictions. This could be due to a number of error sources, the two biggest being positioning error and interference between the CSA and dipolar relaxation mechanisms (because we cannot suppress this interference in the fringe field). Al also thinks some of the error may be due to the influence of a low-amplitude fluctuation in the globular portion of SARSN. Qualitatively the R1 behave much as we would expect, but bringing them into line quantitatively will take more work.

The upshot of all of this effort to fit the dynamics is that the residues in the hairpin have an interesting duality. On very short timescales (< 10 ps or so) they are quite rigid, much like the rest of the protein. On a slightly longer timescale, however, they are very flexible, with S2s of around 0.6, and similar internal correlation times across the entire feature in the range of 600-800 ps (Fig. 5). Because the correlation time of this fluctuation is significantly faster than molecular tumbling but much slower than typical backbone fluctuations, Al called them “mesodynamic”, a word Dorothee seems to like. At any rate, these observations led us to propose that the hairpin fluctuates widely (based on S2s and τs) as a coherent structural unit (based on S2f), rather than having its strands fall apart and flop around randomly. The hairpin is both ordered and disordered, depending on the timescale of analysis and frame of reference.

The physical plausibility of this dynamic model was assessed using a pair of 15 ns all-atom molecular dynamics simulations performed by Ming Lei. What these found, shown in Fig. 7, was that the hairpin maintained its internal structure while moving freely with respect to the globular portion of the protein. In addition, the simulations suggested a reason the 60-65 loop had similar dynamics to the hairpin — transient hydrogen bonds formed between side chains in the hairpin and residues in the loop, causing their motions to be correlated.

The qualitative agreement between the low-field and high-field data supports our contention that this technique can be made to work and to give valuable data about certain kinds of fluctuations. Future work on proteins with this technique will require a rigorous approach to control for the systematic bias we observed. Additionally, this study re-emphasizes the value of taking relaxation data at many fields in order to fully characterize biomolecular dynamics.

As for the dynamics of SARSN, the finding is interesting but doesn’t yet provide any specific insight. Disordered regions of a protein are often associated with promiscuous binding activity, and this hairpin is no exception. However, the ability of such a region to bind multiple partners is usually attributed to a significant capacity to restructure itself. Here, that possibility would seem to be limited by the apparent persistence of the hairpin’s intrinsic structure. The ability of the hairpin to move freely while maintaining a particular internal arrangement may have advantages in capsid construction, an idea that could potentially be tested by inserting prolines or glycines in the β-strands, which should disrupt the hydrogen bonding that preserves the hairpin.

Al and his collaborator Mary Roberts are currently continuing their investigations of 31P dynamics in nucleic acids and lipids using low field.

Clarkson, M., Lei, M., Eisenmesser, E., Labeikovsky, W., Redfield, A., & Kern, D. (2009). Mesodynamics in the SARS nucleocapsid measured by NMR field cycling. Journal of Biomolecular NMR. DOI: 10.1007/s10858-009-9347-6 OPEN ACCESS

Aug 06, 2009
 
The model-free formalism of Lipari and Szabo is a way to convert experimental NMR data into a limited number of generalized parameters describing the internal dynamics of a protein. However, the relaxation rates that are typically measured by NMR — the R1, the R2, and the steady-state nuclear Overhauser effect (nOe) — do not themselves appear in the model-free formulas. Instead we see a term, J(ω), and this constitutes the interface between the data and the model. This term refers to the spectral density, which is a measure of the power available to relax spins at a given angular frequency. The relaxation rates measured by NMR spectroscopists interrogate this density at known frequencies, which means that we can use those rates to assess general information about the shape of the spectral density function and thus constrain the model-free parameters.

In biomolecular NMR, these rates are most frequently measured on the nitrogen of a backbone amide group, in which case they fundamentally depend on the spectral density at three frequencies: 0, the Larmor frequency of nitrogen (ωN), and the Larmor frequency of the proton (ωH). The precise relationships are as follows:

R1 = D [3J(ωN) + 6J(ωN + ωH) + J(ωN − ωH)] + C J(ωN)
R2 = (D/2) [4J(0) + 3J(ωN) + 6J(ωH) + 6J(ωN + ωH) + J(ωN − ωH)] + (C/6) [4J(0) + 3J(ωN)]
steady-state nOe = 1 + (γH/γN)(RNOE/R1)
RNOE = D [6J(ωN + ωH) − J(ωN − ωH)]
D = μ0²ℏ²γN²γH² / (64π²rNH⁶)
C = Δσ²ωN²/3

where γH and γN are the gyromagnetic ratios of these nuclei, ℏ is the reduced Planck constant, μ0 is the magnetic constant (or vacuum permeability, if you prefer), rNH is the length of the N–H bond, and Δσ is the chemical shift anisotropy of the 15N nucleus (typically −160 to −170 ppm).
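For readers who prefer code to algebra, here is a sketch that evaluates these expressions for any spectral density function you hand it. The constants follow the definitions above; the toy J(ω) at the bottom is a rigid-limit Lorentzian with an assumed 8 ns tumbling time, and the bond length and CSA values are standard assumptions, so treat the printed numbers as illustrative only.

```python
import numpy as np

# Constants from the definitions above (SI units).
mu0  = 4e-7 * np.pi        # vacuum permeability (T m/A)
hbar = 1.0546e-34          # reduced Planck constant (J s)
gH   = 2.675e8             # 1H gyromagnetic ratio (rad s^-1 T^-1)
gN   = -2.713e7            # 15N gyromagnetic ratio (rad s^-1 T^-1)
rNH  = 1.02e-10            # assumed N-H bond length (m)
dsig = -165e-6             # assumed 15N chemical shift anisotropy

D = mu0**2 * hbar**2 * gN**2 * gH**2 / (64 * np.pi**2 * rNH**6)

def rates(J, B0):
    """R1, R2 and steady-state NOE from a spectral density J (s/rad) at field B0 (T)."""
    wN, wH = gN * B0, gH * B0
    C = (dsig * wN) ** 2 / 3.0
    R1 = D * (3*J(wN) + 6*J(wN + wH) + J(wN - wH)) + C * J(wN)
    R2 = D/2 * (4*J(0) + 3*J(wN) + 6*J(wH) + 6*J(wN + wH) + J(wN - wH)) \
         + C/6 * (4*J(0) + 3*J(wN))
    Rnoe = D * (6*J(wN + wH) - J(wN - wH))
    return R1, R2, 1.0 + (gH / gN) * Rnoe / R1

# Toy spectral density: a rigid residue tumbling with an assumed tau_m of 8 ns.
j_rigid = lambda w, tau_m=8e-9: 0.4 * tau_m / (1.0 + (w * tau_m) ** 2)
print(rates(j_rigid, B0=11.7))   # ~500 MHz 1H / 50.7 MHz 15N
```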

I’m not going to cover precisely why they have these relationships today; instead I want to focus on how these relationships connect certain dynamic behaviors to particular observations about relaxation rates. The key to this is to think about how the spectral density looks. At right I have a simplified spectral density calculated for a rigid protein of reasonable NMR size (I only show the positive side of the function, the negative is a mirror image). While the particular shape of the spectral density function will depend strongly on the internal dynamics and overall size, certain general features will be the same for most proteins. It should be immediately evident, for instance, that J(0) >> J(ωN) >> J(ωH) (shown on the figure for a 500 MHz magnet). This implies that each relaxation rate reports on just one spot in the spectral density. R2 should be proportional to J(0), R1 to J(ωN), and RNOE to J(ωH), keeping in mind that ωH >> ωN.

The shape of this curve derives in a fairly obvious way from the Lorentzian used to calculate it, in this case the Lipari-Szabo formalism, which if you’ll recall is:
J(ω) = (2/5) [ S2·τm / (1 + (ωτm)²) + (1 − S2)·τ / (1 + (ωτ)²) ], with 1/τ = 1/τm + 1/τe
Where τm is the time it takes the protein to tumble through one radian in solution, S2 is the order parameter for the bond in question, and τe is the correlation time of internal motions. The Lipari-Szabo model is not the only model of the spectral density, but most of the alternatives just add more Lorentzians or scaling factors. These models differ in the fine structure of the spectral density, but the overall shape (and the features I’m about to describe) is generally not affected.
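As a sketch (with invented parameters), the two regimes described below are easy to see by evaluating J(ω) on either side of 1/τm for a rigid and a flexible bond vector:

```python
import numpy as np

# Sketch: the Lipari-Szabo spectral density and its two regimes.
def j_ls(w, s2, tau_m, tau_e):
    tau = 1.0 / (1.0 / tau_m + 1.0 / tau_e)
    return 0.4 * (s2 * tau_m / (1.0 + (w * tau_m) ** 2)
                  + (1.0 - s2) * tau / (1.0 + (w * tau) ** 2))

tau_m, tau_e = 8e-9, 50e-12           # assumed tumbling and internal correlation times
for s2 in (0.9, 0.4):                 # a rigid and a flexible bond vector
    low  = j_ls(1e7, s2, tau_m, tau_e)   # omega << 1/tau_m: tracks S2 * tau_m
    high = j_ls(5e9, s2, tau_m, tau_e)   # omega >> 1/tau_m: tracks (1 - S2) * tau_e
    print(f"S2={s2}: J(low)={low:.2e}  J(high)={high:.2e}")
```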

It should be clear from examining this (and given that τm >> τe) that the point where ωτm = 1 divides the spectral density into two regions. Where ωτm <= 1, the first term dominates, and the spectral density is determined by S2 and τm. Where ωτm >> 1, the second term dominates and the spectral density is essentially dependent on (1-S2) and τe. This being the case, you would expect highly flexible moieties (low S2) to have inefficient R2 and R1 relaxation and highly efficient NOE relaxation, and this we generally find to be the case.

Similarly, you would predict that increasing τm would cause R2 to increase. The graph at right simulates relaxation rates for a typical, rigid backbone amide nitrogen (at 500 MHz) as the τm increases (note log scale on x). As you can see, the R2 (red) does in fact get continuously higher as τm is increased; this is one of the reasons NMR spectroscopy of very large molecules is so difficult. Also note that R1 (blue) goes through a maximum and then declines. This is because as τm increases, the point where ωτm = 1 shifts to lower and lower frequency. When |ωN| > 1/τm, the spectral density at ωN starts to fall off, reducing R1. This might sound advantageous, but in fact it is another reason that spectroscopy on large molecules is difficult — their inefficient R1 relaxation means that additional time must be scheduled after each transient to create a sufficiently sensitive steady state. Because even a simple spectrum can have 2048 transients, adding just a few fractions of a second per transient can rapidly amount to a significant increase in experiment time.
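A quick way to see that R1 maximum is to track just the dominant 3J(ωN) dipolar term as tumbling slows. This is a schematic sketch using the rigid-limit J(ω) and the dipolar constant implied by the assumed values above, not the full rate expressions:

```python
import numpy as np

# Sketch: the dominant dipolar contribution to R1, 3*D*J(wN), as tumbling slows.
D  = 1.3e9                          # dipolar constant from the definitions above (s^-2)
wN = 2 * np.pi * 50.7e6             # 15N Larmor frequency at 11.7 T (rad/s)

def j_rigid(w, tau_m):              # rigid-limit Lipari-Szabo J(w)
    return 0.4 * tau_m / (1.0 + (w * tau_m) ** 2)

for tau_m in (2e-9, 5e-9, 10e-9, 20e-9, 40e-9):
    print(f"tau_m = {tau_m * 1e9:4.0f} ns  ->  ~R1 = {3 * D * j_rigid(wN, tau_m):.2f} s^-1")
# R1 peaks near wN*tau_m ~ 1 and then falls, while R2 (dominated by J(0)) keeps climbing.
```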

It’s obvious that it would be questionable to map the spectral density based on just three relaxation rates, if for no other reason than that we have four unknowns and three pieces of data. This is typically addressed in three ways, which are often used in combination. The first is to reduce the spectral density, by making some general assumptions about the nature of the spectral density around ωH and collapsing the J(ωN ± ωH) terms into J(0.87ωH). Another approach is to increase the number of relaxation rates measured, by incorporating R1zz or other measurements, but many of these rates incorporate additional factors (such as ρHH) that must also be fit, so their ability to reduce the dimensionality of the problem is sometimes limited.
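As a sketch of the first step of that reduction, using the definitions from the equations above and invented rate values: the cross-relaxation rate obtained from the NOE reports, to a good approximation, on a single effective high-frequency point of the spectral density.

```python
# Sketch: the NOE-derived piece of reduced spectral density mapping.
# Numbers are invented; D is the dipolar constant from the equations above.
gH, gN = 2.675e8, -2.713e7        # gyromagnetic ratios (rad s^-1 T^-1)
D = 1.3e9                         # dipolar constant (s^-2), as computed earlier

def j_high(noe, r1):
    """Approximate J(0.87*omega_H) from a measured NOE and R1."""
    r_noe = (noe - 1.0) * r1 * gN / gH       # invert NOE = 1 + (gH/gN)*Rnoe/R1
    return r_noe / (5.0 * D)                 # 6J(wN+wH) - J(wN-wH) ~= 5*J(0.87*wH)

print(j_high(noe=0.80, r1=2.1))
```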

The third approach is to take data at several fields. The Larmor frequencies ωH and ωN depend on the strength of the magnetic field in the spectrometer, while J(0) is obviously field-independent. As a result, each additional field of data taken improves the ratio between data and unknowns. This improvement is valuable even when the relaxation is being fit to a simplified representation such as the model-free formalism, and therefore dynamics experiments should always include measurements at more than one field if at all possible. Moreover, the field-dependence of relaxation rates can be very informative, in general terms, about the dynamics of the system.

In the simplified view it might seem that R2 should be essentially independent of field strength, but observations show this not to be the case. R2 increases at high fields primarily because of the chemical shift anisotropy contribution, which scales with the square of the field and therefore grows faster than the J(ωN) terms decline. As a result, R2 has a sort of chevron appearance as you vary the field, with differences in dynamics primarily affecting the magnitude rather than the shape. This means that for R2 the field-dependence is not particularly informative about the dynamics. However, if a residue has anomalous R2 field-dependence with respect to the rest of the protein, this can be an indicator of a chemical exchange process on the μs – ms timescale.

Because relaxation due to chemical shift anisotropy makes a lesser contribution to R1 (and, for this rate, depends entirely on J(ωN)), the behavior of R1 with respect to field is generally much simpler — for proteins, the R1 almost always decreases as field increases. The degree to which this occurs, however, can be quite different depending on the dynamic behavior that is going on. The reason for that can be seen in the sample spectral densities to the left, calculated for a typical backbone amide (blue) and a flexible one (red). As you can see, the more flexible residue has a lower J(0) and a smaller slope between the flat portions of the spectral density than the rigid one. This means that, relative to a rigid residue, the flexible one’s R1 will be lower at low field and higher at high field, decreasing the field-dependence of its relaxation. The exact magnetic field where this crossover occurs depends on the correlation times of the internal motion and global tumbling.

The gyromagnetic ratios of the hydrogen and nitrogen nuclei have opposite signs, so the heteronuclear NOE measured for these nuclei should be less than one. How much less depends on the relative ratio between J(ωH) and J(ωN). For flexible residues, the spectral density at large ω will be high (and that at lower ω will be low), this ratio will be large, and a low value will be measured in the hetNOE experiment. RNOE typically has a steep field-dependence for flexible residues, and because this rate dominates the ratio, one tends to see greater field-dependence of the hetNOE for flexible residues. However, the situation for the hetNOE is more complex than for the other two rates because the spectral density around ωH defines the relaxation. As a result, the internal correlation time (particularly if it’s on the order of 100 ps – 1 ns) starts to dictate the shape of the spectral density, and hence the magnetic field-dependence of relaxation. For certain τe, the hetNOE will have no apparent field dependence, whether the residue is flexible or not.

Actually parameterizing the dynamics of a given group requires numerical fitting of the relaxation data, but for many questions a qualitative estimate will suffice. In these cases just examining the field-dependence of one or two relaxation rates (especially R1 or NOE) can provide valuable insight into the heterogeneous dynamics of a given protein. In the next post I’ll describe an example of a case in which this turns out to be true.

Apr 14, 2009
 
One of the most-studied cases of the relationship between dynamics and catalysis is the bacterial dihydrofolate reductase (DHFR). DHFR catalyzes the reduction of dihydrofolate to tetrahydrofolate while oxidizing the cofactor nicotinamide adenine dinucleotide phosphate (NADPH). As part of this catalytic process, a region of the protein called the “Met 20 loop” switches from a “closed” state that shields the active site from solvent to an “occluded” state that separates the substrate from the cofactor. NMR studies of DHFR structural dynamics have correlated the protein motions with the chemical changes. In a recent study appearing in Structure, researchers from the University of North Carolina show that the binding of inhibitors such as methotrexate (MTX) and trimethoprim (TMP) appears to uniquely disrupt the dynamic networks of DHFR.

Previously, seminal work from the lab of Peter Wright surveyed the dynamics of DHFR in every step of its reaction pathway. Boehr et al. determined that structural fluctuations in each complex represented motions towards the next step in the reaction. The conformational exchange rates they obtained from their relaxation-dispersion experiments closely resembled the rate constants that had been independently determined for the chemical steps. In almost every complex the conformational exchange was widespread, affecting residues in both the substrate and cofactor binding sites, as well as important distal locations such as the Met 20 loop.
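For readers unfamiliar with relaxation dispersion: the exchange rate comes from how the effective R2 depends on the pulsing rate in a CPMG train. Below is a minimal sketch with invented data, fit to the simplest (fast-exchange, Luz-Meiboom) expression; real analyses often require the full two-state (Carver-Richards) treatment and global fits across multiple fields and residues.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch: fitting a CPMG relaxation-dispersion curve with the fast-exchange
# (Luz-Meiboom) model to pull out an exchange rate k_ex. Data are invented.

def r2_eff(nu_cpmg, r2_0, phi_ex, k_ex):
    x = k_ex / (4.0 * nu_cpmg)
    return r2_0 + (phi_ex / k_ex) * (1.0 - np.tanh(x) / x)

nu = np.array([50., 100., 200., 400., 600., 800., 1000.])    # CPMG field (Hz)
r2 = np.array([19.9, 18.8, 17.4, 15.3, 14.8, 14.5, 14.3])     # R2,eff (s^-1)

(r2_0, phi_ex, k_ex), _ = curve_fit(r2_eff, nu, r2, p0=(14.0, 8000.0, 1000.0))
print(f"k_ex ~ {k_ex:.0f} s^-1")
```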

Because the existing work from the Wright lab hewed as closely to the natural substrates and products as possible, Mauldin et al. chose to examine the dynamic effects of inhibitor binding to DHFR. Like Wright’s group, they used relaxation-dispersion experiments to identify conformational changes taking place on the μs-ms timescale. In the NADPH:DHFR complex the motions are widespread, encompassing the substrate binding site, the Met 20 loop, and distal locations. Binding of either inhibitor eliminates about half of this dynamic network and dramatically reduces the fluctuation rates of those residues for which conformational exchange continues to occur.

Based on their fits of the exchange rates, Mauldin et al. conclude that the substrate binding pocket moves in a way that mimics the enzyme’s normal motions in the transition from its closed state to its occluded state. The long-range conformational changes that actually complete this transition, however, have been completely quenched. With the inhibitors bound, DHFR is like a car that’s turning over but won’t start. Part of the enzyme is still moving in exactly the right way to proceed along the reaction coordinate, but for some reason this motion doesn’t catch on throughout the protein.

In order to gain a more complete understanding of the dynamic effects, Mauldin et al. performed experiments to identify the motion of the protein on the ps-ns timescale. Analyzing the dynamics of methyl and amide resonances using the Lipari-Szabo model-free formalism, the authors realized that inhibitor binding did cause long-range changes in dynamics, just in a faster regime. Where the natural substrate complexes have motions that occur hundreds or thousands of times per second, the inhibitor-bound forms have (smaller) motions that occur millions of times per second. Because these altered motions encompass the Met 20 loop and surrounding residues, the authors argue that they reflect abortive attempts by the protein to transition into the occluded state.

Although these inhibitors do not appear to change the protein’s overall conformation, they produce long-range dynamic effects on short timescales and quench distal motions on intermediate timescales. The binding pocket appears to still be experiencing fluctuations related to the transition between the closed and occluded conformational states, but the mechanism that couples the binding site dynamics to the motion of the loop that defines these two states appears to be broken.

The million-dollar question is this: do drugs alter DHFR dynamics because they inhibit the chemistry, or do these drugs inhibit the chemistry because they alter DHFR dynamics? Quenching dynamics costs energy in the form of conformational entropy, and it may be possible to tune a drug for improved efficiency by blocking the binding site without altering the dynamics. This is only true, however, if the dynamics don’t matter to successful inhibition. On the other hand, if blocking the conformational switching of the Met 20 loop inhibits the enzyme, then drugs can be designed for that angle of attack as well. In the case of a protein like DHFR, where the bacterial enzyme has similar activity but a very different structure from its human equivalent, drugs that target regions other than the active site may significantly reduce side-effects. As a result, protein targets that were previously off-limits due to shared chemistry may become tractable due to divergent dynamics and structure.

Mauldin, R., Carroll, M., & Lee, A. (2009). Dynamic Dysfunction in Dihydrofolate Reductase Results from Antifolate Drug Binding: Modulation of Dynamics within a Structural State. Structure, 17 (3), 386-394. DOI: 10.1016/j.str.2009.01.005

Boehr, D., McElheny, D., Dyson, H., & Wright, P. (2006). The Dynamic Energy Landscape of Dihydrofolate Reductase Catalysis. Science, 313 (5793), 1638-1642. DOI: 10.1126/science.1130258
