The very idea that cells proofread their genetic information makes intelligent design intuitively obvious. One doesn’t proofread gibberish. If cells had cobbled together haphazard strings of building blocks, it wouldn’t really matter in what order they were assembled. We know, of course, that the sequence matters: many mutations cause disease or death. Proofreading is evidence par excellence that genetic information represents real information, the kind found in books and software. ID advocates do not find it surprising, therefore, that cells go to great lengths to protect their genetic information.
Cell “quality control” has been recognized in the literature for some time. In fact, the Nobel Prize in Chemistry for 2015 went to three scientists who discovered DNA repair mechanisms. Cells inspect and correct their informational macromolecules at every stage: at transcription, at translation, and during post-translational modification. Roving molecular machines inspect other machines at work in the cell. They recognize misfolded proteins and tag them for degradation. And when the cell divides, molecular machines check every letter as DNA strands are duplicated. Cells are in the “quality control” business.
Proofreading, however, is a step beyond repair. A cell can repair a broken strand of DNA without regard to the sequence of nucleotide “letters.” Real proofreading must guarantee the accuracy of the sequence itself. Does the cell check for typos? Absolutely. A paper in the Proceedings of the National Academy of Sciences shares new evidence that bears on the design question. Researchers from Uppsala University in Sweden found not just one proofreading step in the ribosome, as previously believed, but two independent ones. They occur where messenger RNA transcripts are translated into proteins. The title says it: “Two proofreading steps amplify the accuracy of genetic code translation.” Here’s their statement on the significance of their discovery:
We have discovered that two proofreading steps amplify the accuracy of genetic code reading, not one step, as hitherto believed. We have characterized the molecular basis of each one of these steps, paving the way for structural analysis in conjunction with structure-based standard free energy computations. Our work highlights the essential role of elongation factor Tu for accurate genetic code translation in both initial codon selection and proofreading. Our results have implications for the evolution of efficient and accurate genetic code reading through multistep proofreading, which attenuates the otherwise harmful effects of the obligatory tradeoff between efficiency and accuracy in substrate selection by enzymes. [Emphasis added.]
If you recall the translation steps animated in Unlocking the Mystery of Life, you remember that messenger RNA (mRNA) transcripts are read in sets of three letters (codons). Matching the mRNA codons are transfer RNA molecules (tRNAs), each equipped with a complementary “anticodon” at one end and an amino acid at the other (when fully loaded, they are called aminoacyl-tRNAs, or aa-tRNAs). As codons and anticodons pair up one after another inside the ribosome, the amino acids are joined one by one with peptide bonds. The growing polypeptide chain becomes a protein after translation is complete. Additional molecular “chaperones” ensure that the resulting polypeptide chains fold correctly into functional molecular machines.
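The triplet-reading scheme just described can be sketched in a few lines of Python. The codon table below is a deliberately tiny slice of the standard genetic code, just enough entries for a toy transcript, and the function name is illustrative:

```python
# Toy translation: read an mRNA transcript three letters (one codon)
# at a time, appending the matching amino acid until a stop codon.
# CODON_TABLE holds only five entries from the standard genetic code,
# enough for the example transcript below.
CODON_TABLE = {
    "AUG": "Met",   # methionine (also the start codon)
    "UUU": "Phe",   # phenylalanine
    "GGC": "Gly",   # glycine
    "AAA": "Lys",   # lysine
    "UAA": "STOP",  # stop codon: release the finished chain
}

def translate(mrna):
    peptide = []
    for i in range(0, len(mrna) - 2, 3):  # step through codons
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break
        peptide.append(amino_acid)
    return peptide

print(translate("AUGUUUGGCAAAUAA"))  # ['Met', 'Phe', 'Gly', 'Lys']
```

In the cell, of course, each “lookup” is performed physically, by an aa-tRNA whose anticodon pairs with the exposed codon.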
The Uppsala team peered into the ribosome to take a closer look at the step where tRNA meets mRNA. They knew that selection of the correct tRNA was a crucial first step, a requirement Linus Pauling anticipated seven decades ago. When the measured accuracy of translation turned out to be far higher than Pauling’s estimates allowed, molecular biologists suspected that some kind of error-correction mechanism must be at work, and a proofreading mechanism was subsequently found in the ribosome. But how does it work? We can relate to human proofreaders, but how do molecules without eyes proofread in the dark interior of a ribosome? The authors explain the energetic requirement:
Accuracy amplification by proofreading requires substrate discarding to be driven by a chemical potential decrease from the entering of a substrate to its exit along the proofreading path. One way to implement such a drop in chemical potential is to couple the discarding of substrates by proofreading to hydrolysis of GTP or ATP at high chemical potential to the low chemical potential of their hydrolytic products.
In short, proofreading is not free. Discarding a wrong substrate must be driven energetically downhill, so the cell couples it to the hydrolysis of an energy-rich molecule such as GTP or ATP. That expenditure biases the reaction toward getting the right molecule where it belongs.
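The underlying logic resembles Hopfield’s classic kinetic proofreading scheme, and a toy calculation shows why an energy-driven, effectively irreversible discard step is so powerful: it lets the discrimination of successive checkpoints multiply. The discrimination factor below is an invented round number, purely for illustration:

```python
# Toy kinetic proofreading arithmetic. Assume each selection step
# rejects a wrong substrate f times more effectively than a right one.
# Because the GTP-driven discard step is effectively irreversible,
# each added checkpoint multiplies the overall discrimination.

def error_rate(f, steps):
    """Fraction of wrong substrates accepted after `steps` checkpoints,
    each with discrimination factor f (an illustrative assumption)."""
    return (1.0 / f) ** steps

print(error_rate(100, 1))  # initial selection only: ~1 error in 100
print(error_rate(100, 2))  # plus one proofreading step: ~1 in 10,000
print(error_rate(100, 3))  # plus two proofreading steps: ~1 in 1,000,000
```

Without the energy payment the rejected substrate could simply rebind, and the checkpoints would collapse into a single equilibrium discrimination.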
Biochemists knew that each aa-tRNA has to be prepped for its role by binding to an assistant called Elongation Factor Tu (EF-Tu) plus a fuel molecule, GTP. But after that step, the authors found two more:
We have found that the bacterial ribosome uses two proofreading steps following initial selection of transfer RNAs (tRNAs) to maintain high accuracy of translation of the genetic code. This means that there are three selection steps for codon recognition by aa-tRNAs. First, there is initial codon selection by aa-tRNA in ternary complex with elongation factor Tu (EF-Tu) and GTP. Second, there is proofreading of aa-tRNA in ternary complex with EF-Tu and GDP. Third, there is proofreading of aa-tRNA in an EF-Tu−independent manner, presumably after dissociation of EF-Tu·GDP from the ribosome (Fig. 1).
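The three checkpoints in that passage can be pictured as filters in series. The acceptance probabilities below are invented placeholders (the paper reports rate constants, not these numbers); the point is only the structure, in which a tRNA must survive all three gates:

```python
# Sketch of the three selection checkpoints described in the paper.
# Acceptance probabilities are hypothetical, chosen only to show how
# serial gates compound against a mismatched tRNA.

def survives(p_initial, p_proof1, p_proof2):
    """Probability a tRNA passes initial codon selection, then the
    EF-Tu.GDP-dependent proofreading step, then the EF-Tu-independent
    proofreading step (treating the gates as independent is itself an
    illustrative assumption)."""
    return p_initial * p_proof1 * p_proof2

# A correct tRNA passes each gate easily; a near-cognate (wrong)
# tRNA is mostly rejected at each gate. Numbers are invented.
correct = survives(0.99, 0.95, 0.95)
wrong = survives(0.05, 0.10, 0.10)
print(correct / wrong)  # wrong tRNAs end up roughly 1,800x rarer
```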
This significantly amplifies the accuracy of translation. “Although it was early recognized that multistep proofreading confers higher accuracy and kinetic efficiency to substrate-selective, enzyme-catalyzed reactions than single-step proofreading,” they say, “it has been taken for granted that there is but a single proofreading step in tRNA selection by the translating ribosome.” The findings shed light on the actual molecular steps required for high-accuracy proofreading. And although their work was done on bacteria, “we suggest that two-step proofreading mechanisms are at work not only in bacteria but also in eukaryotes and, perhaps, in all three kingdoms of life.”
How does an evolutionist explain this? Early in the paper, they say, “We suggest that multistep proofreading in genetic code translation has evolved to neutralize potential error hot spots originating in error-prone initial selection of aa-tRNA in ternary complex with EF-Tu and GTP.” But that cannot be true. It’s a teleological statement. Natural selection cannot “evolve to” do anything. Later in the paper, they focus more on the question, laying out the plot for an evolutionary fairy tale: “Why Did Mother Nature Evolve Two Proofreading Steps in Genetic Code Translation?”
The existence of two distinct proofreading steps may appear surprising, because the accuracy of initial codon selection by ternary complex normally is remarkably high. Therefore, we suggest that two-step proofreading has evolved to neutralize the deleterious effects of a small number of distinct error hot spots for initial codon selection as observed in vitro and in vivo.
This should cause even more grief for neo-Darwinism, because it shows that the accuracy of initial codon selection “normally is remarkably high.” In essence, the cell double-checks a translation that is already accurate; the authors even use the word “rechecking” to describe it. They estimate that initial selection can reach accuracies on the order of a million, “far above the here observed modest accuracy amplification in the range of 300.”
Apart from the unexpected finding of two proofreading steps, the present study has identified the structural basis of the first, EF-Tu−dependent, step and suggested mechanistic features of both proofreading steps. These findings will facilitate structural analysis of the proofreading steps along with structure-based computations of their codon-discriminating standard free energies for a deeper understanding of the evolution of accurate reading of the genetic code.
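As a back-of-the-envelope check on those figures: if the overall proofreading amplification of roughly 300 were split evenly between the two steps (an assumption made here purely for illustration, not a claim from the paper), each step would contribute a factor of about 17:

```python
import math

# Illustrative arithmetic only: splitting the paper's overall
# proofreading amplification (~300) equally between the two steps.
overall_amplification = 300
per_step = math.sqrt(overall_amplification)
print(round(per_step, 1))  # 17.3
```

A modest factor per step, compounded across checkpoints, is enough to suppress translation errors dramatically.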
Other Examples of Redundant Systems in the Cell
This is not the only case of multiple, independent systems in the cell. Three researchers in Massachusetts, also publishing in the Proceedings of the National Academy of Sciences, found redundant mechanisms for repairing double-strand breaks in DNA. The two pathways, nonhomologous end joining (NHEJ) and microhomology-mediated end joining (MMEJ), may work as primary and backup systems: “It is possible that there is partial redundancy between the NHEJ and MMEJ pathways, with MMEJ serving as a backup and NHEJ being the primary mechanism.” The backup pathway contributes to the repair of some double-strand breaks, but not all of them.
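The primary/backup relationship the researchers describe maps naturally onto a familiar software pattern: try the primary pathway first, and fall back to the backup only if it fails. Everything below (function names, the toy break labels) is illustrative, not drawn from the paper:

```python
# A fallback pattern mirroring the NHEJ-primary / MMEJ-backup idea.
# The pathway functions and break labels are invented for illustration.

def repair_break(dna_break, primary, backup):
    """Attempt the primary repair pathway; if it returns None
    (failure), hand the break to the backup pathway."""
    result = primary(dna_break)
    if result is not None:
        return result
    return backup(dna_break)

# Toy pathways: the primary handles most breaks, the backup the rest.
nhej = lambda b: "repaired by NHEJ" if b != "hard" else None
mmej = lambda b: "repaired by MMEJ"

print(repair_break("easy", nhej, mmej))  # repaired by NHEJ
print(repair_break("hard", nhej, mmej))  # repaired by MMEJ
```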
Past posts here at Evolution News have pointed out redundancy in biological systems, such as this one stating that “pathways are organized into an intertwined, often redundant network with architecture that is closely related to the robustness of cellular information processing.” Another article pointed out that chromosomes appear to have a backup site for centromeres.
What we learn in these papers comports well with what David Snoke said in an ID the Future podcast about Systems Biology as an engineer’s way of looking at life (for more, see this from Casey Luskin). Engineers understand concepts like backups, redundancy, double-checking, and quality control. They recognize the tradeoff between accuracy and speed, so they seek to balance competing design requirements.
Instead of the bottom-up view of the reductionist, the systems biologist takes the top-down view: how do all the components work together as a system? In actual practice, he says, systems biologists seek to understand living things as examples of optimized systems, and also to “reverse engineer” them in novel ways. In both contexts, intelligent design — not Darwinian evolution — is the operative concept driving the science.