Another PCR copy number question

Amtekoth

I'm trying to determine the copy number of viral DNA in infected cells from which I have isolated genomic DNA.

For my standards, I have 2 templates. The first is a plasmid carrying the viral gene of interest. The plasmid is 10 kb long; the DNA sequence being amplified is 176 bp.

The second standard is the 176 bp amplicon itself, which I amplified, ran on a gel and cut out.

I spec'ed both the plasmid and amplicon DNAs and calculated the copies per pg of starting material.
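
For anyone who wants to check my numbers, the usual conversion looks like this (a minimal Python sketch, assuming an average of ~650 g/mol per base pair of dsDNA; the 10 kb and 176 bp lengths are the ones above):

AVOGADRO = 6.022e23        # molecules per mole
GRAMS_PER_MOL_BP = 650.0   # approximate molar mass of one dsDNA base pair

def copies_per_pg(length_bp):
    """Copies of a dsDNA molecule of the given length in 1 pg of material."""
    moles = 1e-12 / (length_bp * GRAMS_PER_MOL_BP)   # 1 pg = 1e-12 g
    return moles * AVOGADRO

print(copies_per_pg(10_000))   # ~9.3e4 copies/pg for the 10 kb plasmid
print(copies_per_pg(176))      # ~5.3e6 copies/pg for the 176 bp amplicon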

When I run qPCR on the same 'copy number' of each template, I get different crossing points. I get nice standard curves for each, but they are off from each other by orders of magnitude!

I can't figure out how many copies of the gene I have in my test samples if my 2 standards are so different!

Help!

Ed

R Bishop

It's been a while, but here's a stab.

I suspect you are running into priming efficiency problems. Which "standard" gives the order of magnitude lower Ct values? Is it the 176bp or the plasmid?

Rb

Jason King

After you have isolated the genomic DNA, do you digest it with restriction enzymes before doing the PCR? If not, then I would guess that neither of the standards would be a perfect control, as the template DNAs are of such different length and structure.
 
Also, I'm assuming that prior to the DNA isolation you have counted the cells, and are thus assuming that you get a 100% efficient isolation of genomic DNA. It might be good to get hold of a cell clone with a known number of viral integrations, and a plasmid with the PCR target in it, to find out what factor you would have to multiply by to account for sub-100% accuracy in cell counting, DNA isolation, re-purification post RE digestion (if used) and PCR amplification.
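
To make that concrete, the correction is just a ratio. A rough sketch (all numbers here are hypothetical):

# Hypothetical example: a control clone verified to carry a known number of
# viral integrations per cell lets you back-calculate a correction factor.
known_copies_per_cell = 1.0      # assumed: single-integration clone
cells_counted = 1.0e6            # cells counted before DNA isolation
expected_copies = known_copies_per_cell * cells_counted

measured_copies = 4.2e5          # hypothetical qPCR result for that prep

# This factor absorbs losses from cell counting, DNA isolation,
# re-purification after RE digestion (if used) and PCR amplification.
correction_factor = expected_copies / measured_copies
print(correction_factor)         # ~2.4 in this made-up example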

Amtekoth

R Bishop wrote:

I suspect you are running into priming efficiency problems. Which "standard" gives the order of magnitude lower Ct values? Is it the 176bp or the plasmid? Rb

 
Thanks for the reply.  I'll recheck my notes and get back.

Amtekoth

parvoman wrote:

After you have isolated the genomic DNA, do you digest it with restriction enzymes before doing the PCR? If not, then I would guess that neither of the standards would be a perfect control, as the template DNAs are of such different length and structure.
 
Also, I'm assuming that prior to the DNA isolation you have counted the cells, and are thus assuming that you get a 100% efficient isolation of genomic DNA. It might be good to get hold of a cell clone with a known number of viral integrations, and a plasmid with the PCR target in it, to find out what factor you would have to multiply by to account for sub-100% accuracy in cell counting, DNA isolation, re-purification post RE digestion (if used) and PCR amplification.

Hi Parvoman,
 
No, I don't digest the genomic samples.  One of the other things I do is run an appropriate GAPDH or B2M PCR as a control to give me the number of cells.  I also try to isolate DNA from the same number of cells and then adjust based on the DNA concentration.  The control reactions are usually spot-on from sample to sample, so I'm pretty sure I'm using the same amount of template per genomic sample.
 
As for clones, I've recently infected control cell lines and then done limiting dilutions to isolate positive clones, but I'm not sure whether they carry more than 1 integration event per cell.  Still, I'll be running the clones as additional controls (not standards) in the future.

Ivan Delgado

This sounds like a tough problem. Here are some things I would consider in case you haven't already: 
 
1. Is the efficiency of your GAPDH assay comparable to the efficiency of your gene-of-interest assay? If the efficiencies are >2% different, you will likely come across problems when trying to normalize using GAPDH (see the sketch after this list).
 
2. I would try using more than one normalizer. While most of the time using a single normalizer like GAPDH is fine, you may be getting genetic effects that lead to the big differences you see in your controls. 
 
3. When you run your standards (10 kb plasmid and gel-purified PCR product), do you mix them with "equal" amounts of the same genomic DNA from your cells (without the infecting virus)? The efficiency of your PCR will be significantly higher in the purified DNA controls compared to the gDNA samples. In other words, the more closely you can replicate the conditions of all your samples, the better you will be able to relate them to each other. 
 
4. Last but not least, have you considered running a Southern? Sometimes the old way of doing things is the solution.
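
On point 1, a minimal sketch of that efficiency comparison in plain Python (the dilution series and all Ct values are hypothetical placeholders):

def slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def efficiency(m):
    """Per-cycle efficiency from a standard-curve slope; 1.0 means 100%."""
    return 10 ** (-1.0 / m) - 1.0

log10_input = [7, 6, 5, 4, 3]                  # log10 copies per reaction
ct_target   = [14.1, 17.5, 20.9, 24.3, 27.7]   # hypothetical gene-of-interest Cts
ct_gapdh    = [13.8, 17.1, 20.4, 23.7, 27.0]   # hypothetical GAPDH Cts

e_target = efficiency(slope(log10_input, ct_target))
e_gapdh  = efficiency(slope(log10_input, ct_gapdh))
print(e_target, e_gapdh)   # ~0.97 vs ~1.01 here: more than 2% apart, so
                           # normalizing to GAPDH would be suspect by the rule above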
 
Good luck

JMG

Hi Amtekoth
I just joined this forum because your topic was very interesting, and copy number questions have popped up on the BioTechniques forum as well.
As another has noted here, you are most likely dealing with different amplification efficiencies depending on the genesis of each template type (and how that genesis affects the qPCR reaction).
Your plasmid will most likely have one EAMP and the PCR product another, and understandably so, given the different isolations and the slightly different topography/geometry involved (even though we're talking about the exact same target here).
Also remember, the plasmid looks like 2 copies of target to the qPCR. Your PCR-product standard is most likely dsDNA as well going into the qPCR, so it too will look like 2 copies per molecular duplex: the reverse primer binds the sense strand and the forward primer binds the antisense strand right away in the tube, so per double-stranded target molecule the qPCR goes from 2 to 4 to 8 to 16 instead of 1 to 2 to 4 to 8 to 16.
I was initially thinking this might have something to do with your difference in calculated copies, but you state that you are dealing with an "orders of magnitude" difference between the two template types, so a 2:1 difference is definitely not the kind of difference you are seeing, correct? You are talking about a 100- to 1000-fold difference between the two (roughly 6.6 to 10 Cts at 100% efficiency).
When you spec'ed your samples, I am assuming you used the appropriate blanking buffer for each different template type. If your plasmid is in Buffer A, blank the spec with Buffer A, and if your PCR product is in Buffer B, blank the spec with Buffer B. If this was not done, it could be one root cause of your "orders of magnitude" difference between the two template types when assessing copy number by qPCR.
If the spec readings were not the problem, and since the number of copies of target per molecule of dsDNA cancels out for both template types, your concern must be related to how you use your EAMP (exponential amplification) or E (amplification efficiency) values in your quantitation calculations. The "orders of magnitude" difference is a little alarming, so I'm not quite convinced yet of my own argument here, because you'd also have to have quite an uncharacteristic difference in the efficiency of the qPCR for these two different representations of the exact same target template. But a small change in EAMP can indeed cause a big change in initially estimated copies, so...
 The only way I would know if I am talking about the right thing is if you put some of your results through the equations I've outlined below:
Absolute (ds plasmid) template 1:  Xo1 = 10^(-b1/m1) * EAMP1^(-Ct1)
Absolute (PCR product) template 2: Xo2 = 10^(-b2/m2) * EAMP2^(-Ct2)
What "Xo" value do you come up with in each case when using these equations?
They should give you your problematic "orders of magnitude" difference.
EAMP = 10^(-1/m), where m is of course the slope of each template's standard curve (Ct = m*log10(Xo) + b).
Then, put one in the language of the other by using the transformation:
transformed b2 = b1 * log(EAMP1)/log(EAMP2)
then put this transformed "b" into equation 2 above.
What do you get for Xo then? Just curious. Do you get agreement then?
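
And in case it's quicker to just plug numbers in, here are those equations as a small Python sketch (the slopes, intercepts and Cts are made-up placeholders; substitute your own):

import math

# Standard curve assumed to be: Ct = m * log10(Xo) + b
b1, m1 = 38.0, -3.45    # placeholder intercept/slope for the plasmid curve
b2, m2 = 36.5, -3.10    # placeholder intercept/slope for the PCR-product curve
ct1, ct2 = 22.0, 22.0   # placeholder Cts for the "same" number of input copies

eamp1 = 10 ** (-1.0 / m1)   # per-cycle amplification factor, curve 1
eamp2 = 10 ** (-1.0 / m2)   # per-cycle amplification factor, curve 2

xo1 = 10 ** (-b1 / m1) * eamp1 ** (-ct1)   # absolute copies, template 1
xo2 = 10 ** (-b2 / m2) * eamp2 ** (-ct2)   # absolute copies, template 2
print(xo1, xo2)             # how far apart are the two standards?

# The transformation: express curve 2's intercept in curve 1's "language",
# then re-run equation 2 with the transformed intercept.
b2_t = b1 * math.log10(eamp1) / math.log10(eamp2)
xo2_t = 10 ** (-b2_t / m2) * eamp2 ** (-ct2)
print(xo2_t)                # compare with xo1: do the standards agree now?
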
Very puzzling thing you have brought up here, since if there are real problems with something like this, then assuming equal efficiency between target genes and reference genes (for those who do that) becomes even more risky.
I am a strong proponent of always finding a way to assess the amplification efficiency whenever possible, since slight changes in its estimate are what inflate or deflate data exponentially, very quickly, and by orders of magnitude, like you are seeing here (and which you really should try to explain somehow). Please let me know if you look into this further; it is very curious.
Thanks~!!
JMG

Jen_Floyd

Besides the amplification efficiencies of your target gene and control gene, which are pretty important, I think another consideration is that using the spec to quantitate your DNA is sometimes imperfect, and since you're using two different templates it might be the source of your problems.  A more accurate way to quantitate your DNA is with a fluorescent dye, measured on a fluorimeter.  Also more accurate is to estimate concentration by fluorescence on the gel photo compared to the DNA ladder.

JMG

I appreciate what Jen_Floyd has said above.
Personal experience is indeed the only thing we can follow (and that is dictated by what our budgets allow), and in many cases right now... what budget?
Although I believe spec and NanoDrop readings can be very accurate/precise (as I originally, and I now think too vehemently, wrote here; my apologies for being so opinionated), one problem with nucleic acid spec/NanoDrop readings (to add to Jen's comments) is the inability to discern between single nucleotides, RNA and DNA (and other A260-contributing molecules) in the same sample. There is a new generation of dyes (in addition to PicoGreen for DNA/cDNA and RiboGreen for RNA assessments) that claim to give us just the DNA concentration, or just the RNA concentration. (This is the "Qubit Quantitation Platform," which uses "Quant-iT fluorescence technology"; I believe it is offered by Invitrogen, but I'm not sure.)
We too have little money for a fluorimeter purchase, so we merely rely on taking our readings (of RNA) post-DNase treatment only, but using the exactly correct zeroing buffer (including all DNase reaction components etc.; a very strange buffer), the same buffer the RNA samples themselves are in post-DNase treatment, to blank/zero the spec and/or NanoDrop before using them for A260/A280 measurements of our DNase-treated RNA samples.
Look out for free nucleotides: they absorb more at A260 than when they are incorporated into their polymeric cousins, DNA and RNA. (This is why it is nonsense to try to measure cDNA directly after RT from RNA, since the free nucleotides and any non-RNase H-degraded RNA contribute to the spec reading.)
I wonder if the qPCR product being discussed here had any spurious dNTPs or other sequences (left-over primers?) in it upon isolation or purification that may have overestimated its concentration (by spec/NanoDrop?) and therefore gave lower copy numbers than expected during the qPCR. But if the primers were 25 nt each (not sure if there was a probe) and the whole product was 176 bp, it should have been resolved away from the primers when cut out of the gel.
My apologies for keeping this topic going on ... (but it is very interesting) -- would like to know how it turns out as there is a colleague in Spain who is attempting a very similar thing... for the first time.
JMG

Jen_Floyd

Not all labs have access to sophisticated (read: expensive) fluorimeter dyes, etc., so I was offering a low-tech solution in the event that the former was not a possibility.  I'm well aware that looking at ethidium or SYBR Green fluorescence on a gel isn't ideal.  However, in my own personal experience with spec readings and nucleic acid concentrations, always using the proper and matching dilution buffers, and always keeping the A260 reading within 0.1 to 0.5 to avoid linearity problems, I have seen gross differences between spec readings on samples and the results of qPCR controls or loading-control band quantification on a Northern or Southern blot.  So, for that reason, I do not normally trust spec readings for an exact measure of nucleic acid concentration.  I don't have a scientific explanation, only personal experience.  I have found that when it comes to experiments, scientific explanations are sometimes inadequate when inexplicable things happen.

Amtekoth

We use a NanoDrop to spec DNA samples and it seems to hold up pretty well.  When I run 'equivalent' total DNA samples through the qPCR with housekeeping-gene primers (mGAPDH for mouse DNA, huB2M for our human samples), I get spot-on, almost superimposable traces over dozens of samples.
I'll need to work through the math that JMG proposed.  Thanks for the assistance.