Jan 25 2012
 

Imagine that you could get an injection of a protein that would chop up arterial plaques. Imagine that you could drop a plastic bottle into a pool of bacteria that would transform it back into high-grade oil. Imagine that you could take any organic material at all and, with a minimum of planning, transform it into any kind of desired organic chemical with a bare minimum of energy input and no need to purify intermediates. This is the vision behind the applied structural biology of protein design, the holy grail of which is to come up with a way to make enzymes that will perform novel chemistry. A study recently published online in Nature Biotechnology by David Baker’s group (1) suggests that the design process could be improved by crowdsourcing certain parts of the problem to gamers (the paper is paywalled at Nature but freely available via the Foldit site).

To do this, the Baker group used their program Foldit, which they have used previously for predicting three-dimensional protein structures from their amino acid sequences. Rather than predicting a structure from a known sequence, however, the Baker group asked the Foldit players to figure out an amino acid sequence that would generate a desired structure. The goal was to enhance an enzyme that would perform the chemically useful Diels-Alder reaction.

An enzyme is a protein that increases the rate of (catalyzes) a chemical reaction, often by incredible amounts. The best enzymes can increase reaction rates by factors of up to 10^17 relative to the same reaction occurring in pure water. Protein design aims to produce artificial enzymes with rate enhancements comparable to their natural counterparts. To do this, biochemists try to design an active site that stabilizes the transition state of a chemical reaction. The transition state is the point of a reaction where the molecules are in their least stable state, and equally likely to revert to substrates or continue on and become products.

Unfortunately, it’s not just as simple as stabilizing a transition state. Enzymes have to bind and release their substrates and products, producing energy landscapes that are at least as complex as the one I have drawn below. Using a protein design protocol they had described in previous publications, Baker’s group managed to produce a weak enzyme. They then asked the Foldit players to help out, by posing some specific challenges to try and stabilize the bound substrates. The Foldit players eventually produced an 18-fold improvement in the enzyme’s kcat/KM value. To understand what that means and what the players accomplished, let’s examine this reaction coordinate:

That’s a busy little figure, but it’s not as bad as it looks. The position up or down in the figure indicates how much energy a state has. The more energy, the less likely the system is to occupy that state. Left to right positions show us how close we are to the desired state of the system, which is to have the product (P) we want separate from the enzyme (E) that catalyzed its production from substrate (S). To move from one stable state to another stable state, you have to push the system over hills (energy barriers) in the landscape, just like pushing a car up a hill. The higher the barrier, the slower that step becomes. For simplicity, this diagram shows only one substrate, but the artificial enzyme had two. We can pretend that the Foldit effort started with an enzyme that resembled the blue curve.

We start with E and S separate from each other in solution (E+S). E and S bind to each other to form ES, releasing binding energy. Here I’ve shown a small barrier between E+S and ES, but in many cases there is no barrier here, or it is negligible. Next S is converted to P, and as you can see there is usually a large energy barrier, at the top of which is the transition state (TS). The height of the barrier is determined by the activation energy, which is affected by the structure of the enzyme-substrate complex. Once P has been formed, the complex dissociates so we have free enzyme and product (E+P). Here I have shown E+P to be a lower-energy state than EP, but this won’t necessarily be true.

In the language of Michaelis-Menten kinetics, this landscape is described by two main parameters. KM, also called the Michaelis constant, describes the balance between E+S and ES, and therefore primarily reflects the binding energy. The larger the binding energy, the more ES will be favored, and the lower KM will be. The turnover number, or kcat (maybe we should call this the Menten constant?), describes the creation of product over time, and in this diagram it depends on the activation energy. Again, the larger the activation energy, the lower kcat will be. However, kcat really just depends on the slowest step of the catalytic cycle. If the largest energy barrier were between EP and E+P, kcat would depend on that barrier instead. Because kcat/KM is something like a normal rate constant, and combines the values in an easy-to-understand way (a higher kcat/KM means a better enzyme), it’s often used to describe an enzyme’s activity.
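
To make those terms concrete, here is a minimal Python sketch of single-substrate Michaelis-Menten kinetics with made-up numbers (the designed Diels-Alderase actually has two substrates, so this is only an illustration of the vocabulary). At low substrate concentration the rate reduces to (kcat/KM)[E][S], so lowering KM while leaving kcat alone raises kcat/KM, and the rate, roughly in proportion:

```python
# A minimal sketch of single-substrate Michaelis-Menten kinetics with made-up
# numbers (the designed Diels-Alderase actually has two substrates).
def mm_rate(kcat, KM, E, S):
    # Initial rate: v = kcat * [E] * [S] / (KM + [S]).
    return kcat * E * S / (KM + S)

kcat = 0.05   # s^-1, illustrative only
E = 1e-6      # M, enzyme concentration
S = 1e-5      # M, substrate concentration, well below KM

for KM in (1e-3, 1e-3 / 6):   # lowering KM six-fold...
    print(f"KM = {KM:.2e} M -> kcat/KM = {kcat / KM:.0f} M^-1 s^-1, "
          f"v = {mm_rate(kcat, KM, E, S):.2e} M/s")
# ...raises kcat/KM (and the low-[S] rate) about six-fold, with kcat unchanged.
```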

So how did the Foldit players improve the activity by a factor of 18? The original enzyme design left part of the active site open to water. Through a series of iterations, the Foldit players filled in this void with a self-stabilizing helix-loop-helix motif (Figure 1b). The upshot of this was that the affinity of the enzyme for both substrates increased. Thus, KM decreased, as shown in Table 1, for both substrates. At the end of the process, the diene bound six times as tightly and the affinity for the dienophile improved by about a factor of three. Those two gains multiply out to roughly 18, which accounts for all the observed change in kcat/KM, because kcat was not improved.

Although it may not seem like it, we can also learn a great deal from the fact that kcat did not change. This observation shows that the changes made by the Foldit players did stabilize the TS as well: had they stabilized only the ES complex, the energy barrier would have grown and kcat would have dropped. The best-case scenario, however, would have been for them to stabilize the TS uniquely, without also lowering the energy of ES, because that would have reduced the energy barrier and increased the reaction rate. Because this didn’t happen, the situation follows the orange curve in the figure above: the ES and TS states have shifted down in energy by the same amount, with no change to the activation energy.
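
A toy calculation makes the logic explicit. The energies below are invented, not values from the paper, but relative rate constants scale roughly as exp(-Ea/RT), so shifting ES and TS down together leaves kcat untouched, while selectively stabilizing the TS would speed the reaction up:

```python
# A toy calculation (invented energies in kJ/mol, not the paper's data) of how
# relative rates track the ES -> TS barrier height via exp(-Ea / RT).
import math

RT = 2.5  # roughly RT at room temperature, in kJ/mol

def rel_rate(E_ES, E_TS):
    # Relative rate constant for crossing the barrier from ES up to TS.
    return math.exp(-(E_TS - E_ES) / RT)

blue = rel_rate(E_ES=0.0, E_TS=60.0)      # original design
orange = rel_rate(E_ES=-5.0, E_TS=55.0)   # ES and TS both lowered by 5: same barrier
green = rel_rate(E_ES=-5.0, E_TS=50.0)    # TS lowered more than ES: smaller barrier

print(orange / blue)   # 1.0  -> kcat unchanged (what the redesign actually did)
print(green / blue)    # ~7.4 -> kcat would rise if the TS were uniquely stabilized
```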

The lack of change in kcat also indicates that the Diels-Alder reaction itself, rather than product dissociation, is rate-limiting for the enzyme. My reasoning here is that the increase in affinity is general. We know that both the ES and TS complexes were stabilized by the changes, so EP probably was too, as shown in the orange curve. If the EP → E+P transition were rate-limiting, these stabilizing mutations would have made the enzyme slower.

The Foldit players made this a better enzyme, but that doesn’t exactly mean that it’s an impressive one. The observed kcat is significantly slower than almost any natural enzyme, and the overall rate enhancement is on the order of 10^3-10^4, which is not much better than catalytic antibodies. The success of the Foldit players at improving the affinity of the enzyme for all the bound states suggests that it might be possible to use crowdsourced systems like Foldit to accomplish the more difficult feat of stabilizing a TS, or at least to generate folds that support a pre-defined TS. The ultimate goal is to produce something like the green curve, where substrate binding is stronger and activation energy is lower. I hope that such efforts will be taking place among the Foldit players soon, if they haven’t started already.

Disclaimer: I am part of an ongoing collaboration with David Baker’s group unrelated to the Foldit program.

1) Eiben, C., Siegel, J., Bale, J., Cooper, S., Khatib, F., Shen, B., Players, F., Stoddard, B., Popovic, Z., & Baker, D. (2012). Increased Diels-Alderase activity through backbone remodeling guided by Foldit players. Nature Biotechnology. DOI: 10.1038/nbt.2109. Also available for free from the Foldit site.

Sep 20 2011
 

One of the goals of computational biology is to predict the complete three-dimensional structure of a protein from its amino acid sequence. Often, reasonably good structures can be produced by modeling a new protein according to an already-known structure of a homologous protein, one with a similar sequence and presumably a similar structure. However, these modeled structures can be inaccurate, and obviously this method will not work if no homologous structure is known.

Foldit is an online game developed by the research team of Dr. David Baker that attempts to address this problem by combining an automated structure prediction program called ROSETTA with input from human players who manually remodel structures to improve them. Even though most of the players have little or no advanced biochemical knowledge, Foldit has already had some striking results improving on computational models. An upcoming paper in Nature Structural & Molecular Biology (1) (PDF also available directly from the Baker lab) details some interesting new successes from the Foldit players.

Contrary to some reports, the Foldit players did not solve any mystery directly related to HIV, although their work may prove helpful in developing new drugs for AIDS. What the Foldit players actually did was to outperform many protein structure prediction algorithms in the CASP9 contest, and to play a key role in helping solve the structure of an unusual protease from a simian retrovirus.

M-PMV Protease

If you don’t recognize Mason-Pfizer Monkey Virus (M-PMV) as a cause of AIDS in humans, that’s because it isn’t. It causes acquired immune deficiency in macaques, however, and it has an unusual protease that may tell us useful things.

A crystal structure of an inactive mutant of HIV-1 protease in complex with its substrate. The protease monomers are in dark green and cyan; the substrate is represented as purple bonds.

Retroviruses like HIV often produce proteins in a fused form rather than as individual folded units. In order to be functional, the various proteins must be snipped out of these long polyprotein strands, so the virus includes a protease (protein-cutting enzyme) to do this. In most retroviruses, this protease is dimeric: it is composed of two protein molecules with identical sequences and similar, symmetric structures. The long-known structure of HIV protease, seen on the right (learn more about HIV protease or explore this structure at the Protein Data Bank) is an example of this architecture.

People infected with HIV often take protease inhibitors to interfere with viral replication. These drugs attack the active site, where the chemical reaction that cuts the protein strand takes place, but it has been theorized that viral proteases could also be attacked by splitting up the dimers into single proteins, or monomers. The problem is, the free monomer structures aren’t known.

This is where the M-PMV protease comes in. Although it is homologous to the dimeric proteases, M-PMV protease is a monomer in the absence of its cutting target. If we knew this protein’s structure, we could perhaps design drugs that would stabilize other proteases in their monomer form, rendering them inactive. An attempt to determine the structure using nuclear magnetic resonance (NMR) data produced models that seemed poorly folded and had bad ROSETTA energy scores. And, although the protein formed crystals, X-ray crystallography could not solve its structure either, despite a decade of effort.

An X-ray diffraction pattern.

The reason for this has to do with how X-ray crystallography works. If you fire a beam of X-rays at a crystal of a protein, some of the rays will be scattered by the electrons within it, and you will observe a pattern of diffracted dots similar to the one at left, kindly provided by my colleague Young-Jin Cho. The intensities and locations of these dots depend on the structure and arrangement of the molecules within the crystal. X-ray crystallographers can use the diffraction patterns to calculate the electron density of the protein and fit the molecular bonds into it (below, also courtesy of Young-Jin). However, the electron density cannot be calculated from the diffraction pattern unless the phases of the diffracted X-rays are also known. Unfortunately, there is no direct way to calculate the phases from the dots themselves.

An electron density map (wireframe) with the chemical bonds of the peptide backbone (heavy lines) fitted into it.

There are many ways to solve this problem, but not all of them work in every system. One widely applicable approach is called “molecular replacement”. In this method, a protein with a structure similar to that of the one being studied is used to guess the phases. If this guess is close enough, the phase estimates can be refined from there. In the case of M-PMV protease, however, the dimeric homologues could not be used for replacement, and an attempt to use the NMR structure to calculate the phases also failed.
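
To get an intuition for why the phases matter so much, and what molecular replacement buys you, here is a one-dimensional toy in Python using numpy. The “density” and “search model” curves are invented, and this is nothing like real crystallographic software, but the principle is the same: the experiment hands you only amplitudes, and borrowing phases from a similar known structure lets you recover a usable map.

```python
# A 1D toy version of the phase problem (numpy; nothing like real
# crystallographic software). The "density" and "search model" curves are
# invented for illustration.
import numpy as np

x = np.linspace(0, 1, 256, endpoint=False)
true_density = np.exp(-((x - 0.30) / 0.03) ** 2) + 0.7 * np.exp(-((x - 0.62) / 0.05) ** 2)
search_model = np.exp(-((x - 0.32) / 0.03) ** 2) + 0.7 * np.exp(-((x - 0.60) / 0.05) ** 2)

F_true = np.fft.fft(true_density)    # complex structure factors: amplitude and phase
F_model = np.fft.fft(search_model)

amplitudes = np.abs(F_true)          # all the diffraction experiment gives us

# Reconstruction with no phase information (phases set to zero) fails badly;
# borrowing phases from the similar search model -- the molecular replacement
# idea -- recovers a usable map.
no_phases = np.fft.ifft(amplitudes).real
model_phases = np.fft.ifft(amplitudes * np.exp(1j * np.angle(F_model))).real

def error(recon):
    return np.linalg.norm(recon - true_density) / np.linalg.norm(true_density)

print("error with zero phases:     ", round(error(no_phases), 2))
print("error with borrowed phases: ", round(error(model_phases), 2))
```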

Then the Foldit players went to work. Starting from the NMR structure, Foldit players made a variety of refinements. A player called spvincent made some improvements using the new alignment tool, which a player called grabhorn improved further by rearranging the amino acid side chains in the core of the molecule. A player named mimi added the final touch by rearranging a critical loop.

Starting from mimi’s structure (several others also proved suitable), the crystallographers were able to solve the phase problem by molecular replacement and finally determine the protease’s structure. None of the Foldit results were exactly right, so it’s inaccurate to say that the players solved the structure. However, their models were very close to the right answer, and they provided the critical data that allowed the crystal structure to be solved. Once the paper is published, you’ll be able to find that structure at the PDB under the accession code 3SQF.

We can’t know right now whether this structure will enable the design of new drugs, but the Foldit players were the key to giving us a better chance of using it for this purpose. What may be even more exciting is the possibility that Foldit could be used in other structural studies to come up with improved starting models for molecular replacement. As with any method of predicting protein structures, however, the gold standard is CASP, so the Foldit teams participated in CASP9.

CASP9

The Critical Assessment of protein Structure Prediction is a long-running biennial test of computer algorithms that calculate a protein’s structure from its sequence. This experiment in prediction has a fairly simple setup.

1) Structural biologists give unpublished structures to the CASP organizers.

2) The sequences belonging to these structures are given to computational biologists.

3) After a set period, the computational predictions are compared to the known structural results.

The Baker group generated starting structures using ROSETTA, then handed the five lowest-energy results off to the Foldit players. For proteins that had known homologues, the results were somewhat disappointing: the Foldit players did well overall, but they overused Foldit’s ROSETTA-based minimization routine, which tended to distort conserved loops.

An energy landscape showing an incorrect move towards a false minimum and a correct, more difficult move towards a true minimum.

The nature of this problem became even more clear when the Baker group handed the Foldit players ROSETTA results for proteins that had no known homologues. In that case they noticed that players were using the minimization routine to “tunnel” to nearby, incorrect minima. You can get a feel for what that means by looking at the figure to the left.

In this energy landscape diagram, the blue line represents every possible structure of a pretend protein laid out in a line, with similar structures near each other and the higher-energy (worse) structures placed higher on the Y axis. From a relatively high-energy initial structure, Foldit players tended to use minimization to draw it ever-downward towards the nearest minimum-energy structure (red arrow). Overuse of the computer algorithm discouraged them from pulling the structure past a disfavored state that would then start to collapse towards the true, global minimum energy (green arrow).
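
The same idea in miniature: a sketch in Python of naive minimization on a made-up one-dimensional “energy landscape” (nothing to do with ROSETTA’s actual energy function). Gradient descent from the starting structure slides into the nearest shallow well and stays there; only a deliberate move past the barrier, the kind of move Foldit asks humans to make, reaches the deeper global minimum.

```python
# A toy 1D "energy landscape" (a made-up function, not ROSETTA's energy)
# showing why pure minimization "tunnels" into the nearest minimum.
def energy(x):
    # Two wells: a shallow one near x = 1 and a deeper, "native" one near x = 4.
    return (x - 1) ** 2 * (x - 4) ** 2 - 2 * x

def d_energy(x, h=1e-6):
    # Numerical derivative of the landscape.
    return (energy(x + h) - energy(x - h)) / (2 * h)

def minimize(x, step=0.01, iters=5000):
    # Naive gradient descent: always move downhill, never uphill.
    for _ in range(iters):
        x -= step * d_energy(x)
    return x

x_local = minimize(0.0)    # starting model slides into the nearby shallow well
x_global = minimize(3.0)   # a bold manual move past the barrier finds the deep well
print(round(x_local, 2), round(energy(x_local), 2))    # ~1.1, shallow minimum
print(round(x_global, 2), round(energy(x_global), 2))  # ~4.1, much lower energy
```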

The Foldit players still had some successes — for instance, they were able to recognize one structure that ROSETTA didn’t like very much as a near-native structure. The Void Crushers team successfully optimized this structure, producing the best score for that particular target, and one of the highest scores of the CASP test. If the initial ROSETTA structures had too low a starting energy, though, the players wouldn’t perturb them enough to get over humps in the landscape.

Thus, Baker’s group tried a new strategy. Taking the parts of one prediction that they knew (from the CASP organizers) were correct, they aligned the sequence with those parts and then took a hammer to the rest, pushing loops and structural elements out of alignment. This encouraged the players to be more daring in their remodeling of regions where the predictions had been poor, while preserving the good features of the structure. Again, the Void Crushers won special mention, producing the best-scoring structure of target TR624 in the whole competition.

Man over machine?

Does this prove that gamers know more about folding proteins than computers do? Some of them might, but Foldit doesn’t really use human expertise. Rather, the game uses human intelligence to identify when the ROSETTA program has gone down the wrong path and figure out how to push it over the hump. When the human intelligences aren’t daring enough, or trust the system too much, as in the case of the CASP results, Foldit doesn’t do any better than completely automated structural methods. When the human players are encouraged to challenge the computational results, however, the results can be striking. As Baker’s group are clearly aware, further development of the program needs to be oriented towards encouraging players to go further afield from the initial ROSETTA predictions. This will likely mean many more failed attempts by players, but also more significant successes like these.

Disclaimer: I am currently collaborating with David Baker’s group on a research project involving ROSETTA (but not Foldit).

1) Khatib, F., DiMaio, F., Cooper, S., Kazmierczyk, M., Gilski, M., Krzywda, S., Zabranska, H., Pichova, I., Thompson, J., Popović, Z., Jaskolski, M., & Baker, D. (2011). Crystal structure of a monomeric retroviral protease solved by protein folding game players. Nature Structural & Molecular Biology. DOI: 10.1038/nsmb.2119

Sep 19 2011
 

Given that videogames are often demonized by research (and “research”) blaming them for everything from rudeness to the epidemic of youth violence, gamers often take a great deal of cheer from research attaching positive outcomes to videogame play. One such article that recently attracted some attention was a study suggesting that playing videogames could correct amblyopia (often called “lazy eye”) in adults (1). Of course, given how often negative findings are oversold, it’s worth asking whether these positive results have been, too. The paper appeared in the open-access journal PLoS Biology, so let’s open it up and take a look.

The fundamental problem that the authors are out to solve is that, while amblyopia can generally be corrected if it is treated in childhood, success tends to be rarer in adults. Knowing that video games have proven useful in improving adults’ abilities to perform a wide variety of visual tasks, these researchers decided to ask whether they could help treat amblyopia.

Figure 1 shows their experimental design. First they screened and assessed a group of adults with amblyopia. Then they divided these individuals into three groups. One group (10 individuals) played a total of 40 hours of Medal of Honor: Pacific Assault with the normal eye patched. An additional 3 individuals were assigned to a group that played SimCity Societies for an equal amount of time (it is unknown whether the authors controlled for Societies’ well-known liberal bias), again with the normal eye patched. The final 7 individuals were given 20 hours of ordinary visual challenges (watching movies, reading, etc.) with the normal eye patched (occlusion therapy, or OT), in order to ensure that patching alone wasn’t causing any observed improvements. Most individuals from the last two groups, following an intermediate assessment, then went on to play 40 hours of MOH.

As the authors note, several limitations of this study are immediately apparent. The sample size was small, individuals were not assigned to groups randomly, and both participants and researchers knew what kind of treatment they were getting. This does not mean we should disregard the results. However, they do need to be taken with a grain of salt until the findings can be replicated in a larger sample.

And there is good cause to try to replicate these findings. Figure 2 is, unfortunately, something of a symbol party (the symbols and colors identify individual subjects by their type of amblyopia), so we’re better off focusing only on panel D, at lower right. The first item in panel D is a logMAR chart, used to measure visual acuity, and it probably looks familiar to you. Each line on the chart represents 0.1 logMAR units, and as you can see, the lower the score, the better your vision. The panel to the right of that shows the averaged data from all twenty individuals after OT and videogame therapy (VG). Here they are showing the percentage improvement in acuity in crowded conditions (the whole chart) or in isolation (a single letter). OT did not produce any improvement in acuity, while 40 hours of VG therapy produced an average 30% improvement in acuity. The other two graphs here indicate that improvement in acuity was unrelated to baseline acuity, and that the crowding index (the loss of acuity due to the presence of other letters) did not change substantially due to therapy.
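
As an aside, here is a rough sketch of how a logMAR change can be translated into a percentage figure. I am assuming improvement is expressed as the fractional reduction in the minimum angle of resolution (MAR = 10^logMAR); the function name and the example scores are mine, and the paper may define its percentages differently:

```python
# A back-of-the-envelope illustration (my assumption, not necessarily the
# paper's definition): express a drop in logMAR as a percentage reduction in
# the minimum angle of resolution, MAR = 10 ** logMAR.
def percent_improvement(logmar_before, logmar_after):
    mar_before = 10 ** logmar_before
    mar_after = 10 ** logmar_after
    return 100 * (mar_before - mar_after) / mar_before

# Dropping about a line and a half on the chart (0.15 logMAR) works out to
# roughly a 30% improvement:
print(round(percent_improvement(0.60, 0.45), 1))   # ~29.2
```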

This is a critical figure because, as the authors state, “reduced visual acuity is the sine qua non of amblyopia.” Substantial improvements in acuity, therefore, represent a major goal of therapy. Perceptual learning, in which participants make subtle visual judgments using their amblyopic eye, has been shown to improve acuity in adult amblyopes as well. If videogames can produce a comparable improvement, they may prove just as efficacious while holding a practical advantage: they encourage therapy (=play) through fun.

Panels A-C of Figure 2 show the raw results and percentage improvements for each individual group. Two additional points are worth noting. Panel B shows that the 20h of occlusion therapy were ineffective, but the subsequent 40h of MOH improved acuity in all continuing individuals. However, it should be noted that while the gaming took place at the research location, the occlusion therapy was done on the individual’s own time and self-reported. This study therefore does not control for the benefits of a monitored and enforced eye-exercise regimen.

Panel C is also of interest. Although the group here is small (and the data correspondingly noisy), it appears that their acuity was improved by both SimCity and MOH. This was somewhat unexpected, because in the past positive visual effects produced by action video games have not been replicated by non-action games. Understanding why that’s not the case here may help provide some additional insights into the mechanisms by which games improve acuity in these patients. I haven’t played SimCity Societies, but having played previous SimCity iterations I know that these games often require the player to integrate a variety of visual information (traffic flow, electricity, dynamic economies) simultaneously, which may underlie the observation. Had these subjects actually played videogame chess, their improvement might have been less.

The authors went on to test the subjects’ vision in various ways. Figure 3 shows a test of positional acuity, and is rather badly made, but gets the point across that positional acuity (assessed using the funky little chart in panel A) improved in the game-playing group (panel B). This included both increases in “sampling efficiency”, related to a fitted number of correct positions extracted (out of 8) (panels C and E, SB2), and decreases in “internal noise”, or the degree to which the individual’s own eyes interfere with his assessment of position (panels D and E, SA5). The results in panel E compare improvements in efficiency and internal noise, with the three labeled graphs comparing results in the non-amblyopic eye (NAE) to the amblyopic eye (AE) before and after videogame treatment.

The authors also decided to test the effect of the games on spatial attention, as they report in Figure 4, by briefly showing the subjects a field of dots (at a size where they could be easily seen), followed by a checkerboard pattern, and asking them to report the number of dots seen (panel A). Not all the individuals had an appreciable difference between the non-amblyopic eye and the amblyopic eye prior to the VG treatment (panel B). However, the degree of improvement in spatial attention tended to be greater the worse the initial condition was (panel C), including for SimCity players (symbols surrounded by dotted circles). For the worst-off subjects (dotted circle in panel B), significant improvements in accuracy and response time were observed (panels D-F).

Finally, the authors tested the stereovision of some subjects using a standard test (Figure 5). Again, substantial improvements were noted in all those tested (which excluded subjects with strabismus), to the degree that some of them were effectively cured.

These results show that playing video games produced dramatic improvements in vision for adults with amblyopia by a variety of measures. However, this study had many limitations, and nobody should go around prescribing (or self-prescribing) videogames as amblyopia therapy just yet. The sample size here was very small, and because of the way groups were assigned the various populations differed in non-trivial ways (the MOH group, for instance, was younger and more male than the others). The conditions for occlusion therapy were very different from those used in the videogame therapy, which could have contributed to the different outcomes. Even if a more comprehensive trial shows similar results, more work will be necessary to identify the best course of treatment, which I note is unlikely to take the form of a 24-hour Modern Warfare 3 binge fueled by Bawls and pizza.

That said, these results appear to justify a larger, more complete study, which we can certainly hope to see in a few years from these authors.

1) Li, R., Ngo, C., Nguyen, J., & Levi, D. (2011). Video-Game Play Induces Plasticity in the Visual System of Adults with Amblyopia. PLoS Biology, 9 (8). DOI: 10.1371/journal.pbio.1001135

Nov 05 2008
 
The latest evidence in the debate over the effects of video game violence has arrived in the November edition of the journal Pediatrics. Japanese and American psychologists, including well-known media violence researchers Craig Anderson and Douglas Gentile, report that violent video games constitute a causal risk factor for physical aggression. Perhaps unsurprisingly, the gaming internets have already expressed their disagreement with these results via angry blog postings based on secondary reporting (calmer coverage can be found at Gamasutra). A more professional critique has also been offered, in the form of a post-publication peer review by Texas A&M International University Professor Christopher Ferguson. The paper tries to sell itself as a significant piece of new proof, which it is not. Anderson et al. have found an interesting, if weak, correlation that they cannot prove to be causal, due to the limitations of the methods employed.

The study has two key advantages that, in principle, make it a unique addition to our knowledge about the effects of video game violence. Firstly, it attempts to correlate physical aggression (PA) in teens and kids with habitual exposure to video game violence (HVGV) 3-6 months earlier. While the use of a timecourse alone cannot prove causation, long-term correlations are thought to suggest a causal relationship more strongly than instantaneous correlations. Secondly, the study involves several age groups from two countries, the United States and Japan. Although more children play video games in Japan than in the US, the rate of violent crime in Japanese society is much lower than in the US. This has occasionally been held out as disproof of an HVGV-PA link, but all it really establishes is that other factors play a significant role. Therefore it would be interesting to determine whether cultural differences between the US and Japan alter the effect of HVGV on PA.

Three sets of children (two in Japan, one in the US) filled out questionnaires querying their gaming habits and physical aggression levels. Some months later, these same children were surveyed again to see whether their physical aggression levels had changed. The authors found that HVGV levels at the first time point had a weak correlation (r = 0.28) with PA at the second time point. This effect varied significantly over the individual datasets and was strongest in the youngest age group. However, the r value did not exceed 0.5 for any of these datasets.

In layman’s terms, one could see these results as evidence that HVGV predicts between 8% and 16% of the level of physical aggression, depending on the age group and nationality. I caution my readers that this interpretation is an oversimplification that depends on certain assumptions about the data to have any validity. Because no statistics of the underlying matrices are provided I cannot substantiate those assumptions, so this should not be taken as a definitive description of the study’s findings. Statistics (even averages) imply a model, and should not be trusted if it cannot be proved that the model is appropriate.
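
For the curious, here is the arithmetic behind that 8-16% range, under the usual reading of r-squared as the fraction of variance explained (the same assumption-laden interpretation flagged above); the upper end corresponds to an r of about 0.4:

```python
# Variance-explained arithmetic for the quoted correlations (r squared).
for r in (0.28, 0.40):
    print(f"r = {r:.2f} -> r^2 = {r ** 2:.2f} (~{100 * r ** 2:.0f}% of variance)")
# r = 0.28 -> ~8%; r = 0.40 -> 16%
```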

These results are interesting and indicate that, although the magnitude of the effect may differ between societies, there is nonetheless a universal positive correlation between HVGV and subsequent physical aggression. Despite the elaborate discussion of youth violence in the paper, this does not directly indicate a linkage to criminal behavior. Moreover, this correlation is difficult to interpret due to the study’s numerous flaws.

There are good reasons to wonder whether the interpretation of the questionnaires produced a valid measure of HVGV at all, the assignment of violence level by genre being particularly suspect. A more significant problem may be that HVGV and PA were assessed by different means in every single group. Each group used different delays between surveys, and each involved differently-aged children. This doesn’t necessarily mean that conclusions drawn by aggregating the three are wrong, and the authors contend that agreement across the varying methodologies indicates robustness. However, the differences in method and subjects multiply the potential sources of error considerably. Since the derived correlations are so weak, this is a significant concern. In addition, because the populations differ substantially in respects other than nationality, it is impossible to accurately assess the effect of culture on the relationship between HVGV and PA. Doubtless future longitudinal studies will apply more uniform methods.

This brings me to another weakness of the study. Scientists will occasionally joke that the best possible set of correlational data is the one that contains only two points, the reason being that you are assured of being able to draw a perfect, straight line through all your data. In practice, however, we know that having limited numbers of data points makes our interpretations much more likely to be wrong. A “longitudinal” study involving two questionnaires given a couple of months apart hardly provides firm footing for a long-term correlation or a causal relationship. The authors acknowledge that the study is limited in this regard, but argue that the short wait would most likely depress correlations from their true value. Still, a longer timecourse with more measurement points would be highly desirable.
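
The joke can be made literal in a couple of lines of Python (the numbers are arbitrary): the correlation through any two distinct points is a “perfect” ±1, and it tells you nothing about the underlying relationship.

```python
# Any two distinct points yield a "perfect" correlation, regardless of the data.
import numpy as np

x = [1.0, 2.0]
y = [3.7, -0.2]                    # pick any two different values
print(np.corrcoef(x, y)[0, 1])     # -1.0 here; always +/- 1 for two points
```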

The authors make no attempt to account for any confounding factors other than gender. They do not seem to have taken data on family situation, peer influence, parental involvement, or school performance, although all of these factors are known to correlate to greater or lesser degrees with both PA and video game habits. If we only wish to establish that there is a correlation between HVGV and PA, that’s not a huge problem. However, Anderson et al. clearly mean to establish video games as a causal factor for aggression. In the absence of controls for confounding factors, that is impossible.

Curiously, the authors also do not seem to have measured HVGV at the later time point. One objection to existing research linking HVGV to real-world violence has been that the observed correlations exist because people predisposed to violence choose to enjoy violent media. Testing the hypothesis that PA at the initial timepoint predicts HVGV at the second timepoint seems like an obvious thing to do, if only to squelch this objection. This seems to me particularly worthwhile, because the predictive power of HVGV for later aggression appears to be less than the instantaneous correlation between HVGV and aggression, significantly so for the older group. In light of these facts, the choice not to assess HVGV at the later time seems extremely odd.

Despite these flaws, this research is a step in the right direction. We need longitudinal studies, carefully controlled for confounding factors, over a range of ages and nationalities to parse out the true effects of video games on aggressive behavior in teens and adults. I do not find the present study terribly convincing, and I particularly dislike the more sensationalistic high points of its discussion section. Nonetheless, I hope that the authors will take criticisms like those of Dr. Ferguson into account as they design studies that will more rigorously investigate the causal relationship between HVGV and PA.

Only a particularly obstinate person would deny that there is a correlation between the intake of violent media, including video games, and aggressive behavior. They may inspire aggressive behavior, or serve as an outlet for existing aggression; either way, the correlation ought not be ignored. However, video games are just one, and doubtless not the most important one, of a constellation of potential factors affecting child behavior. Without a genuine analysis of the complicated causal relationships among these it is impossible to provide good advice to parents, doctors, and psychologists. The present study does not fill that gap in our understanding; it is doomed by its single-minded focus on video games and failure to account for confounding factors. While it is of value to know that the correlation between violent video games and aggression transcends national and cultural boundaries, it would be of greater value to know whether excessive playing of violent video games is a cause of aggressive behavior, a result of pre-existing aggression, or both. That is a question this research does not adequately, much less conclusively, address.

C. A. Anderson, A. Sakamoto, D. A. Gentile, N. Ihori, A. Shibuya, S. Yukawa, M. Naito, & K. Kobayashi (2008). Longitudinal Effects of Violent Video Games on Aggression in Japan and the United States. Pediatrics, 122 (5). DOI: 10.1542/peds.2008-1425