Saturday, September 29, 2012

The Joseph Smith Papyri: Updating the Length Controversy

Last June I blogged about attempts to determine the original length of a scroll of papyrus that Joseph Smith possessed, and from which he may have derived the Book of Abraham. At the very least we know it is the source of Facsimile 1. Andrew Cook and Chris Smith (neither of whom, as I understand, is a believing Church member) developed a mathematical approach to measuring the winding lengths of the scroll, which is more precise and sophisticated than simply eyeballing the papyri, and then applied a formula to determine the original length of the scroll. Their conclusion was that the scroll of Hor was too short to have contained the Book of Abraham.

John Gee, an Egyptologist associated with FARMS/Maxwell Institute, published an article in which he criticized Cook and Smith's method. According to Gee, their formula gave inaccurate results when applied to an actual complete scroll. However, I noted that I found Gee's article unsatisfactory because it did not engage Cook and Smith's article at the level of detail it deserved. Further, in laboring to discredit their work, Gee missed or ignored the fact that their conclusion was compatible with what Gee claims to be a popular LDS view: that the Book of Abraham was more a product of revelation than a translation of text.

There have been a couple of additional developments in this little controversy, and so the story needs updating. I will take them in chronological order.

Last month Gee spoke at the annual FAIR conference and his talk touched on this issue. After briefly introducing his own attempts to apply a standard mathematical formula (developed by someone named Hoffmann), Gee said,

Andy Cook developed a slightly different formula and he and Chris Smith applied it to one of the papyri and they’ve been loudly proclaiming that they who have never worked with papyri know more than I who have been working with papyri for a quarter of a century.
Gee then showed his previously published figure, which compares the measured length of a known complete scroll with the lengths predicted by Hoffmann's formula and by Cook and Smith's. Gee claimed to have found five errors in Cook and Smith's work, though he did not specify what they were. He then showed the results of fixing one of those unspecified errors, which greatly improved the agreement (red) with the real scroll length (blue), compared to Cook and Smith's original formula (green). (For some reason, he could not be bothered to label his data series or axes.)



Gee commented:
The errors are therefore something in Cook’s formula and methodology and not something in the papyrus measurements. It shows us that Cook’s methodology is fundamentally flawed.

Now, I attribute Cook’s mistakes to working in a new field, where neither he nor Chris Smith have had any experience working with papyrus before. And there were some math mistakes that for some reason Cook did not catch. As you can see, if he corrected one mistake it would have made a big difference in his results.
Gee concluded by saying that both the Hoffmann and Cook/Smith formulas make some "fallacious assumptions" and that they can, at best, "give a ballpark estimate."

Andrew Cook has responded in the most recent issue of Dialogue. The article, "Formulas and Facts: A Response to John Gee" (subscription required) hits back at Gee on several counts. According to Cook,

1. Gee misunderstands the Cook/Smith equation, not realizing that it is the same as the Hoffmann equation, though expressed slightly differently.

2. As a corollary, Gee's representation of differing results from the Hoffmann and Cook/Smith formulas is incorrect; the two should give exactly the same results.

3. Gee erroneously accuses Cook and Smith of estimating the thickness of the papyri in order to derive winding lengths, when in reality Cook and Smith measured winding lengths in order to derive thickness.

4. Differences in the smoothness of the two lines suggest that Gee did not treat the formulas consistently, which had the effect of exaggerating the alleged inferiority of Cook and Smith's formula. Further, it appears that he applied the thickness derived by Cook and Smith for the Hor scroll to the known Royal Ontario Museum (ROM) scroll instead of deriving a new value for the ROM scroll.

5. Cook obtained winding measurements for the same ROM scroll, and applied the Hoffmann and Cook/Smith formulas, and compared the results to the actual scroll length. The results of the two formulas were identical, and both were nearly identical to the actual scroll.

6. The novel contribution of the Cook/Smith paper was the autocorrelation method of identifying the winding lengths, which has been mostly overlooked. Accurate measurement is important for the reliability of the result.

7. Cook concedes that their paper contains a calculation error. However, it is not known whether that was among the errors Gee claimed to find, and anyway it only made a difference of 5 cm in the final result.
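To illustrate point 6, here is a minimal sketch of the autocorrelation idea. This is my own toy example, not Cook and Smith's actual code or data, and every number in it is invented: damage that recurs once per winding of a rolled scroll shows up, once the scroll is unrolled, as a peak in the signal's autocorrelation at a lag equal to the winding length.

```python
import numpy as np

# Toy illustration of winding-length detection by autocorrelation.
# A real scroll's winding lengths slowly shrink toward the center;
# here the period is held constant to keep the idea visible.

dx = 0.1                      # sampling interval along the papyrus, cm
true_winding = 9.5            # hypothetical winding length, cm
x = np.arange(0, 200, dx)     # position along the unrolled papyrus, cm

# Synthetic "damage" signal: one narrow spike per winding
signal = np.exp(-(((x % true_winding) - 1.0) ** 2) / 0.05)
signal -= signal.mean()

# Autocorrelation; keep non-negative lags only
ac = np.correlate(signal, signal, mode="full")[signal.size - 1:]

# The strongest peak beyond very small lags estimates the winding length
skip = int(1.0 / dx)                      # ignore lags under 1 cm
est = (np.argmax(ac[skip:]) + skip) * dx  # estimated winding length, cm
```

On this toy signal, est recovers the 9.5 cm period that was built in; the point is that a repeating damage pattern lets you measure winding lengths objectively rather than by eyeballing.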

I can't adjudicate all of these points, but I think Cook makes a good case. For example, take point #3. In his "Formulas and Faith" article, Gee wrote:
Cook and Smith use the thickness of the papyri (which they did not measure but only estimated) as an indication of the change in diameter to calculate the difference between the lengths of successive windings in the scroll. Hoffmann—knowing that most papyri are already mounted, thus rendering it impossible to measure the thickness—uses the average difference between successive windings for the same purpose....

With the data gleaned from this intact roll in Toronto (that is, the individual winding lengths), I applied each of the mathematical formulas, using the assumptions made by the authors of the formulas concerning papyrus thickness, air-gap size, and size of smallest interior winding.
But as Cook correctly points out, they clearly did not begin by estimating the thickness and air-gap size. Rather, they used a method similar to Hoffmann's: deriving the effective thickness (i.e., the change in radius per winding) from the winding lengths. "Our primary task therefore, is to determine the effective thickness of the papyrus from the winding lengths." I can't help but conclude that Gee is wrong here.
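For the curious, the shared geometric idea behind both formulas can be sketched in a few lines. This is my own illustration with made-up numbers, not either author's actual formula or data: each winding of a rolled scroll is shorter than the one outside it by 2π times the effective thickness, so the windings form an arithmetic sequence that can be extrapolated inward and summed.

```python
import math

# Hypothetical winding lengths, measured outermost-first, in cm.
# These numbers are invented for illustration.
windings = [9.7, 9.5, 9.3, 9.1]

# Effective thickness (papyrus plus air gap) from the average drop
# between successive windings: each full turn reduces the radius by
# the effective thickness, shortening the circumference by 2*pi*t.
diffs = [a - b for a, b in zip(windings, windings[1:])]
step = sum(diffs) / len(diffs)          # mean drop per winding, cm
thickness = step / (2 * math.pi)        # effective thickness, cm

# Extrapolate inward to an assumed smallest interior winding and sum
# the arithmetic series to estimate the original scroll length.
w_outer, w_min = windings[0], 2.0       # w_min is an assumed value
n = int((w_outer - w_min) / step) + 1   # number of windings
w_inner = w_outer - (n - 1) * step      # innermost winding actually used
length = n * (w_outer + w_inner) / 2    # arithmetic series sum, cm
```

With these invented numbers the estimate comes out to roughly 230 cm; the point is only that the thickness is derived from the measured winding lengths, not guessed separately, which is exactly Cook's objection to Gee's characterization.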

It appears to me that on the technical merits Cook and Smith really do have the upper hand. Gee may have a quarter century of experience working with papyri, but it's not obvious to me why that should give him mathematical superiority, particularly when his adversary (Cook) is apparently a theoretical physicist. We are talking about physical dimensions here, after all. If Gee ends up eating humble pie after having trashed Cook and Smith's work, he has only himself to blame.

However, I do think that Gee's sense of caution is legitimate. I will explain why in my next post.




Friday, September 21, 2012

Important Experimental Evolution Result Detailed

In 1988 biologist Richard Lenski started 12 identical cultures of E. coli bacteria and began propagating them every day to study their evolution. Along the way, he and his students froze samples so that they could always go back and study them more closely. A few years ago his lab reported that after about 31,000 generations, one of the cultures had gained the ability to live on citrate as a food source. The results received a lot of fanfare and attracted the interest of anti-evolutionists. However, the paper describing the result was not fully satisfying because it didn't look into the underlying genetic changes that gave rise to the citrate-eating ability.

This week Lenski's lab published a follow-up paper and we now know what happened... mostly. E. coli already have a gene (citT) for a protein that imports citrate into the bacterial cell, but it isn't turned on in the presence of oxygen. When they looked at the citrate-eating bacteria, they found that the area around that gene had been duplicated, but in a way that put the duplicated gene under a new promoter, which is a section of DNA that regulates when genes are turned on. This is illustrated in Figure 2 of the paper:


The new arrangement allowed citT to be turned on in the presence of oxygen and, lo and behold, citrate-eating bacteria were born. However, they weren't very efficient at first so further tweaks to the new arrangement refined their ability. Some of these tweaks included further duplications of the new arrangement to increase the amount of the importer protein produced.

This would all be interesting enough, but they also found that if they went back and inserted the new gene arrangement into ancestral bacteria from before generation 20,000, it hardly worked. But it did work if they put it in ancestral bacteria after generation 20,000. This fits with their previous results where they found that the ability to eat citrate could re-evolve in cultures started from stocks after generation 20,000. It thus appears that other unspecified mutations elsewhere in the genome set the table for the gene duplication and rearrangement to be useful. Unfortunately we may never know what those other mutations were because even with the genome sequence, figuring out which mutations were important would be an enormous amount of work. Interestingly, when they looked at the genomes of the bacteria from the re-play experiments, they found that the same kind of gene duplication and rearrangement often occurred but that no two were exactly alike. In a few cases the genetic event was quite different while giving the same basic result.

The authors propose that evolution often proceeds in a manner that can be divided into three parts: potentiation, actualization and refinement. First, mutations accumulate that are of little significance on their own. Second, some kind of genetic event occurs that results in a new function or new regulation of the function. This genetic event is able to be accommodated because of the previously unimportant mutations. Third, the new ability is refined by further mutation.

The whole study is quite elegant and was clearly a lot of work, and it will go down as a landmark in experimental evolution. Yet, its results are not surprising. These kinds of genetic events can be inferred to have occurred many times in the genomes of all manner of organisms, including humans. But now we have a detailed record of one that occurred (and re-occurred) in a laboratory. This study is also a clear refutation of the ridiculous but frequent claim by creationists that mutations can only degrade genetic information.


See also Carl Zimmer's summary: The Birth of the New, The Rewiring of the Old



Sunday, September 09, 2012

Genome Junk

A mini scientific controversy has broken out this week, exacerbated by anti-evolutionists, that has to do with how much of the human genome is functional. It may surprise you to learn that only about 1% of the genome codes for proteins. Since proteins are the building blocks of the body and carry out nearly all of its chemistry, it is reasonable to wonder what the other 99% of the genome is for.

Although it has become common in both popular and scientific publications to claim that the rest was long dismissed as junk, this claim is ahistorical. Biologists have long recognized that other, non-coding parts of the genome (i.e. not coding for protein) play important regulatory roles in gene expression. For example, in order for a protein to be made, DNA must first be transcribed into RNA, which is then used as the template for protein synthesis. In order for transcription to occur, a collection of proteins must assemble on the DNA upstream of the gene and then proceed along the DNA while copying it into RNA form. Some stretches of DNA are particularly attractive places for the transcription machinery to assemble, and these are known as promoters (because they promote gene transcription; get it?). Over the decades additional methods of regulation have been recognized. More recently it has been recognized that RNA molecules themselves can serve important regulatory functions. But the basic point to be made here is that biologists have long understood that not all non-coding DNA is worthless junk.

However, biologists have also come to understand that over 50% of the genome is made of the remnants of viruses and other self-replicating pieces of DNA. In addition, there are extra broken copies of genes scattered around the genome. Although there are some interesting cases where these elements have taken on a function by helping to turn on/off a particular gene, or something similar, it is hard for many biologists to imagine that all of it has important function. After all, the genome is not static and these are ongoing phenomena. Sometimes a novel insertion of one of these elements can cause disease. Yet there are some biologists who think that most, if not all, of the genome has function because of stringent natural selection.

This debate would probably attract little attention if it weren't for anti-evolutionists. The notion that much of the genome is full of dispensable junk built up over millions of years is offensive to people who think that God created the human genome out of whole cloth. Why would he load it up with useless junk? As the Intelligent Design movement was taking shape, ID proponents made what they claimed was a scientific prediction based on ID: that there is no junk DNA. The reasoning is that if the human genome is designed, then it won't contain lots of non-functional DNA [1]. ID proponents have been pushing this and have trumpeted every report of DNA being moved from the non-functional category to the functional.

This week a high-profile paper was published claiming that 80% of the genome is functional, and most of the reporting has followed along with the narrative that whereas scientists used to think that the genome was mostly junk, it turns out to almost all be functional. However, a lot turns on the meaning of the word functional, and the lead author of the paper has admitted to setting the bar extremely low. For example, if a section of DNA is transcribed into RNA it is deemed functional. But what is missing is evidence that all of that RNA actually does anything important for the cells in question. After all, the basic interactions of DNA with its environment are governed by the laws of chemistry (and the underlying laws of physics), and there is going to be noise. Should we really believe that none of those RNA transcripts are accidental by-products of chemistry? Similar questions can be raised about the other criteria used to judge a piece of DNA functional, like protein binding, etc. Critics suspect, based on comments by the lead author, that the 80% bit was really a gambit to increase the splash of the paper.

Evolutionary biologist T. Ryan Gregory has been writing about the junk DNA issue for years at his blog. Since most vertebrates have about the same number of genes (20,000-30,000), an often unstated assumption is that more complex organisms need more extra DNA to regulate that same number of genes in more complex ways. However, it turns out that genome size does not correlate with organismal complexity, which has led Gregory to propose the onion test.

The onion test is a simple reality check for anyone who thinks they have come up with a universal function for non-coding DNA. Whatever your proposed function, ask yourself this question: Can I explain why an onion needs about five times more non-coding DNA for this function than a human?

I'm with the critics on this one, and not just because siding with them defies the ID brand. The claim that 80% of the genome is functional just doesn't pass the smell test (or, in my view, the onion test), and it renders the term functional virtually meaningless.

For more on this controversy, see Nature's blog post, Fighting about ENCODE and junk. T. Ryan Gregory and Larry Moran have been blogging about this at a furious pace. You can find entries into their analysis here. For more general background on junk DNA, Gregory has a nice collection of older posts (both his and others').



Notes:
1. Like so much else with ID, lack of junk DNA is not a scientific prediction based on ID. This is because ID proponents also disclaim any knowledge of the designer's motives or identity. That being the case, there is no reason to think that the designer would avoid accumulation of junk DNA. For example, the designer could have started life, or a few lineages of life, and let them evolve from there. If these lineages accumulated junk DNA, they would still be in line with the central claims of ID. You cannot claim that junk DNA is incompatible with design while simultaneously claiming not to know anything about the methods or motives of the designer. Well, actually you can because it's a free country, so you can say anything you want. But we're talking about logical discourse here.




Thursday, September 06, 2012

Arctic Sea Ice is Recovering

The summer of 2007 saw a record in seasonal Arctic sea ice melt. A few years later in an online discussion, I was told that Arctic sea ice was recovering. This was based on the fact that sea ice melt in subsequent years had not reached the low point of 2007. A look at decade-scale trends did not support the assertion, but hey--trends shmends!

This summer is smashing the 2007 record low (source).



We've already passed the previous record, and we probably have not hit bottom quite yet.

But here's the good news: Seasonal Arctic sea ice melt in the next couple of years will probably not reach this year's low. And that means Arctic sea ice is recovering! Yay!
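The game can be demonstrated with a few lines of code on synthetic data (not real sea-ice measurements; the trend and noise values below are invented): in a noisy series with a downward trend, every year after a record low necessarily sits above the record, so "no new record" always looks like recovery even while the long-term slope stays negative.

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1979, 2013)

# Synthetic September ice extent: downward trend plus year-to-year noise
extent = 7.5 - 0.06 * (years - 1979) + rng.normal(0.0, 0.3, years.size)

# The long-term trend is clearly negative...
slope = np.polyfit(years, extent, 1)[0]

# ...yet every year after the record low sits above the record,
# which is the entire basis of the "recovery" claim.
record = extent.min()
after_record = extent[years > years[np.argmin(extent)]]
no_new_record = bool((after_record > record).all())
```

Note that no_new_record is true by construction, since the record is the minimum of the whole series; that is what makes the "recovery" argument empty.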

Skeptical Science illustrates how this game is played (note that 2012 is not shown (yet?)):




