Writing our Genome (II)
In my first article, I talked about the recent GP-Write proposal and highlighted what I thought were key questions that needed answering. I was pleasantly surprised, a couple of days later, to receive an email from George Church providing some counter-arguments and answers. I’ve reproduced below (with permission) some points (divided by topic) from our email exchange.
i. Benefits for everyone?
from the original article: “Would GP-Write lead to synthesis companies focusing on low costs-per-base for high-volumes, while ignoring demand for low-volumes?”
GC: This is like asking: Did NGS ignore demand for low volumes of sequencing (typically 200 bp)? The answer is “yes”, but what is considered “low volume” today is one run of a MiSeq (15 billion bp). We’ve brought the cost of both high and low volume down by over a million-fold (and with many new applications).
from the original article: “It simply is not a technology that is universally applicable or even necessary in most biotechnological solutions…we tweak and modify”
GC: This assumes that synthesis remains slow and expensive. That statement sounds like some made about sequencing in 1994, before we did the first bacterial genome (H. pylori). Today when we alter one base pair, we check the whole genome, because it is so easy (even for human genomes). What was unattractive becomes very compelling as price and quality improve.
DM: Wouldn’t ‘tweaking and modifying’ usually be more efficient than building up from scratch? We don’t reinvent the wheel every time we build a new car, so why synthesise a genome to solve every problem?
GC: Yes. We don’t reinvent, but we do build cars from scratch (from raw metal and plastics). Same with genomes. We recoded a 4.7 Mbp E. coli genome using only editing (and conjugation) to make 0.007% changes, but to make 1.56% changes, whole genome writing (WGW) was clearly much more cost effective (manuscript in press). Maybe we won’t use WGW (or WGR) for “every” problem, but as costs plummet they are used often enough that it matters (including dropping costs of partial genomes and other tools as important side products).
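To get a feel for where that editing-versus-writing crossover sits, here is a quick back-of-envelope sketch. The genome size and change fractions are from the exchange above; the per-edit and per-base prices are placeholder assumptions for illustration only, not figures anyone quoted:

```python
# Back-of-envelope: per-site editing vs whole-genome writing (WGW)
# for recoding a 4.7 Mbp E. coli genome. Prices below are assumed,
# purely illustrative values.

GENOME_BP = 4_700_000          # E. coli genome, ~4.7 Mbp (from the exchange)

COST_PER_EDIT = 100.0          # assumed cost of one targeted edit (USD)
COST_PER_SYNTH_BP = 0.03       # assumed synthesis cost per base pair (USD)

def editing_cost(change_fraction):
    """Cost if every changed base is made as an individual edit."""
    return change_fraction * GENOME_BP * COST_PER_EDIT

def wgw_cost():
    """Cost if the whole genome is synthesised from scratch."""
    return GENOME_BP * COST_PER_SYNTH_BP

for frac in (0.00007, 0.0156):   # 0.007% and 1.56% of the genome
    n_changes = int(frac * GENOME_BP)
    print(f"{frac:.3%} changed ({n_changes} bp): "
          f"editing ≈ ${editing_cost(frac):,.0f}, WGW ≈ ${wgw_cost():,.0f}")
```

Under these assumed prices, editing wins at the 0.007% scale (a few hundred changes) while WGW wins decisively at 1.56% (tens of thousands of changes), which is the direction of the claim above; the exact crossover point shifts with whatever real per-edit and per-base costs apply.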
from the original article: “can’t imagine grant agencies funding multiple large-scale DNA synthesis projects”
GC: They already have funded Mycoplasma, yeast, E. coli, Salmonella. Lowering costs means many more (and larger) genomes. When cheap enough, they aren’t even called projects (or grants), just a routine assay (as in the example above of checking one bp).
DM: Wouldn’t a large outlay of USD 100 million towards a single genome synthesis project (yes, with multiple genomes) cause funding agencies to think twice before investing in multiple DNA synthesis approaches, given the limits on public science funding?
GC: For the BRAIN initiative we saw $200M the first year (mainly discretionary and related funding) to help reduce costs for ongoing $5 billion per year neuro grant costs. This involves a remarkable number of “Innovative Neurotechnologies”. Similarly, the NHGRI $1000 genome technology program, beginning in 2005, resulted in 40 different NGS technologies being tested. Exploring multiple innovative approaches to testing synthetic constructs is what we need today too.
ii. The cost of HGP-Write
from my original article: “the project will ultimately cost much more, given that HGP-Read cost $2.7 billion in 1991 dollars”
GC: … or we could say that it “will ultimately cost much less, given that a human genome reading today is $999”.
DM: One could also say that the tech would cost much less, 25 years after kick-off and a decade after primary development. Does the Engineering Biology Centre estimate that USD 100 million would be sufficient to reach completion?
GC: Fluorescent NGS development time was about 8 years. GP-read “kick-off” did not include any commitment to radical cost-reduction until 2005. I can’t speak for the whole team, but I’d guess that $100M (focused on tech cost reduction) would be enough to have a 1000-fold impact on both small scale (plasmids) and large scale (mammalian genome, editing and WGW) needs. This may not be considered “completion”, noting that we still have gaps in HGP-read, and we are now still dropping the price — to $100 human genome reads (then $10 and then $0 — as in google maps).
iii. DNA synthesis
GC: One noteworthy point: The first graph that you show is very different from the graph in our 2016 GP-write Science paper supplement (as well as our 2009 review figure 2). This is especially important in this context since human genomes (and equivalents) have been synthesized many times over (at oligonucleotide level) via Affymetrix, Perlegen, Nimblegen and Agilent technologies — for roughly $2000 per genome.
DM: The graph I used is, of course, from Rob Carlson, and I realise that it’s significantly different from the one in the 2016 GP-Write paper’s supplementary information. I did wonder about this when I first read the paper, but I didn’t realise it had been published before (with detailed data) in the 2009 Carr and Church paper. Rob Carlson seems to use much more conservative figures when assembling his graph, and I must say his figures chime better with the costs I face when ordering DNA synthesis (especially considering the minimum order quantities (MOQs) that Gen9, Twist and others require). I don’t think it’s possible for an academic researcher today to order gene synthesis at the rate the GP-Write graph shows. Of course, these concerns would not hold for a project the scale of GP-Write, but they are relevant, I think, to Question A from my story.
GC: Neither Rob’s plots nor my plots were limited to academics. Both companies and academics can order genes at $0.03/bp today and oligos from arrays at $1E-6/nt. Indeed, my lab and others (Elledge, Hannon, Shendure, Lander, Zhang, etc) have been using oligos from arrays since our Tian et al. Nature 2004 paper. What some labs do about MOQs is collective bargaining, which is more fun for our students than competing with robots (and learning obsolete skills).
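To make the price gap between those two quoted figures concrete, a small sketch. The $0.03/bp and $1e-6/nt prices are the ones quoted above; the ~3 Gbp "human-genome equivalent" is my own round-number assumption for scale:

```python
# Rough scale of the price gap quoted in the exchange:
# assembled genes at $0.03/bp vs array-derived oligos at $1e-6/nt.
# Using ~3 Gbp as a human-genome equivalent (an assumed round figure).

HUMAN_GENOME_BP = 3_000_000_000

GENE_SYNTHESIS_PER_BP = 0.03   # quoted price for assembled genes (USD/bp)
ARRAY_OLIGO_PER_NT = 1e-6      # quoted price for oligos from arrays (USD/nt)

gene_cost = HUMAN_GENOME_BP * GENE_SYNTHESIS_PER_BP
oligo_cost = HUMAN_GENOME_BP * ARRAY_OLIGO_PER_NT

print(f"Assembled genes: ${gene_cost:,.0f}")   # on the order of $90 million
print(f"Array oligos:    ${oligo_cost:,.0f}")  # on the order of $3,000
print(f"Ratio: {gene_cost / oligo_cost:,.0f}x")
```

The oligo-level figure lands in the same ballpark as the "roughly $2000 per genome" mentioned earlier; the residual difference comes down to assumptions about genome size and oligo overlap. The several-thousand-fold gap between the two rows is the point: the cost of synthesis depends enormously on whether you need assembled, verified genes or raw array oligos.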