
User talk:Kirk shanahan



The Major Fallacies of Cold Fusion


{Please place comments on this section in the following section. Just refer to the paragraph you want to discuss if it is relevant. I'd like to keep this article intact and not all chopped up by commentary. Thanks for your help on this.}


Cold Fusion (CF) advocates (CFers, short for ‘cold fusioneers’, a contraction of ‘cold fusion engineers’ proposed by CFers themselves) want the scientific mainstream and the general population to believe that a new, solid-state nuclear reaction was uncovered by the work of Fleischmann, Pons, and Hawkins. They have presented a large variety of ‘evidence’ that this must be the case. Their side of the story is well presented in the recent (2007) book by Dr. Edmund Storms. However, the contrary case is not so well presented there; in fact, it could be said to be misrepresented there by selective omissions on the part of Dr. Storms. I wish to attempt to delineate the problems here for the Wikipedia public.

In the Storms book are several tables of collected experimental evidence for several primary claims (primary in the sense that their numbers and prominence in CFers’ views make them highly notable). First is the table of experiments that have detected apparent excess heat. (Note the phraseology: they have detected signals, but the signals are not proven to be true excess heat.) That table takes up approximately 8 pages in the book, a very impressive number. The problem is that none of those experiments provides the information needed to evaluate the veracity of the ‘excess’ claim over the conventional explanation offered by the Calibration Constant Shift (CCS), with the one exception of Dr. Storms’ own work on Pt electrodes published at the ICCF9 in 2000, which in fact is what led to the presentation of the CCS explanation. That work led this author to reanalyze the data supplied by Dr. Storms and show that it was within experimental error, and thus cannot be confirmed as _true_ excess heat. Again, no other study has presented the information needed to make that determination, so no other study can confirm _true_ excess heat either.

To confirm excess heat is true, it must be shown that the excess heat signal is significantly above (usually this means 3 times larger than) the experimental noise level. The consistent error the CFers make is to assume the baseline calorimeter noise is the experimental error. In fact it is a small part of the total error, and the dominant term could well be due to the CCS. (Other things might be going on as well.) Typical baseline noise is perhaps 75 mW vs. signals of 780 mW in Dr. Storms’ case, and even higher in others. Unfortunately, as shown in my 2002 publication, Dr. Storms’ CCS is a “1%” error (i.e., 1 sigma ~ 1% of input power). In Dr. Storms’ case, the input power was up to ~20 W and the best excess signal was ~780 mW, for a percentage of ~4%. In chemistry a “1%” technique is a very, very good technique, and it is not expected that Dr. Storms (or anyone else) could do much better. His calorimeter was ~98-99% efficient as well, which is also top-notch design. Unfortunately, this means his 780 mW signal is ‘in the noise’ and is not conclusive proof of excess heat.

What Dr. Storms did to allow this analysis was to publish a) the input power (nicely correlated to apparent excess) and b) limited information on the repeatability of calibration constants. He pointed out that a resistive heater calibration was 1.7% off from an electrolytic calibration. Again, an excellent level of reproducibility, but not good enough to show his signal was above the noise. Unfortunately, no other researchers have published such information, particularly on calibration stability with time, i.e., ‘b’ above. Without some measure of both ‘a’ and ‘b’, the veracity of an excess heat signal cannot be determined. Again, this is nothing more than the standard scientific demand that the signal be significantly above the noise. The only requirement is to recognize that the noise in these experiments is not just the baseline fluctuation.

The CCS is simple to understand. It simply says that between the time the experiment is calibrated and the time that experimental data is collected, an instability in the experiment has occurred, and thus the prior calibration is no longer valid. Expressed mathematically: at time t0, the experimenter determines the calibration equation is Pout = 5 * X + 3. Then at time t1 a change occurs, such that at time t2, when the experimenter measures the unknown conditions, the true calibration equation (unknown to the researcher) is Pout = 4 * X + 3. Clearly, if you multiply X by 5 when the correct (at that time) value is 4, you will get the wrong Pout. This is why measuring the calibration equation at several times, to statistically assess the stability of those constants, is a necessity.
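To make the arithmetic concrete, here is a minimal Python sketch of the toy example above (the numbers come from the illustration, and the function and variable names are mine, not from any actual experiment):

 # Calibration at t0 gives Pout = 5 * X + 3; by t2 the true (unknown)
 # relation has drifted to Pout = 4 * X + 3. Applying the stale t0
 # calibration to data taken at t2 manufactures a phantom 'excess'.
 def calibrated_power(x, k, b):
     return k * x + b

 x = 2.0                                    # raw measured signal at t2 (arbitrary)
 true_Pout = calibrated_power(x, 4, 3)      # true relation at t2 -> 11 W
 inferred_Pout = calibrated_power(x, 5, 3)  # stale t0 calibration -> 13 W
 print('phantom excess =', inferred_Pout - true_Pout, 'W')  # 2.0 W of 'excess heat'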

In an even more detailed sense, such a measurement must contain every opportunity for the equation to vary; otherwise, the determination of the level of variation is incomplete. This is another potential problem with proving excess heat is real. But this almost brings up a ‘Catch-22’ situation: if you get the variable signal that CFers call ‘excess heat’, but I would call a CCS, how can you tell them apart? The answer is that you must know what is going on. In my publications I showed mathematically that a shift in heat distribution in the cell could be one way a CCS could occur, and I proposed a chemical mechanism to get such a shift. However, this mechanism has not been accepted by the CFers. That is fine, but the consequence is that they must propose a mechanism for the excess (which they do) AND show control over it (which they have NOT done). Admittedly, this is where it gets very difficult. But essentially, if the excess heat signal is true excess heat, the CFers need to develop detailed control over it. Once they do, they should be able to optimize it, increasing its strength to a point where it is well above the ‘CCS’ noise level. Conversely, if my proposed mechanism is correct, it should be optimizable as well, although the way to do that will undoubtedly be very different, as optimizing a conventional chemical reaction and optimizing a nuclear reaction are worlds apart. We all await such experimentation.

The second largest table in Dr. Storms’ book was on claims to have detected the mass 4 isotope of He, a putative fusion product. Unfortunately, the CFers and everyone else realize that 4He is found in the air, and that if air leaks into the experiment, one can find 4He. Complicating this is the fact that liquid He (which is primarily 4He) is used in many scientific labs, and thus can potentially be found at concentrations higher than in background air due to its use in nearby labs (connected via ventilation flow paths). That is particularly troublesome. This was recognized in the 1989 DOE report, and again in the 2004 report. What makes it crucial is the work of W. B. Clarke on samples supplied by SRI, where he showed conclusively that air had leaked into all 4 samples provided. This was in experiments published in 2002 and 2003. So, 13-14 years after the fact, we are still plagued with air inleakage problems. Therefore, no scientist will accept 4He claims if they are not backed up by proof from the same experiments that air inleakage was not a problem, and that 4He can be measured accurately. Air inleakage was detected by Clarke by discovering nitrogen in the samples, as well as neon at levels commensurate with the He if it came from air. No reports of 4He adequately address this issue; thus none are conclusive proof of a nuclear origin of the He. To repeat: to adequately address this issue, the analysis method must be proven reliable numerically and with respect to air exclusion (and such proof must be supplied, not inferred, assumed, or asserted).

The third largest table in Storms’ book was on heavy metal transmutation results. At this point, the table was not very long, but several labs/experimenters were noted. Just as with the He, yes, they were detecting new elements on their electrode surfaces (or membranes, or whatever). But whether these new elements came from a nuclear process is not well proven. In fact, the normal conclusion would have been otherwise in any other field. ‘New’ elements have been detected on Pd electrode surfaces since the beginning of the cold fusion story. In the beginning, they were rightly attributed to contaminants. In fact one prominent one was Pt, which in almost all cases is what the ‘other’ electrode is made of. In other words, the Pt electrode was somehow being dissolved and deposited on the Pd electrode. And this is _normal_. What is not normal is claiming it or other new elements were formed by a nuclear reaction in or on the Pd electrode.

To prove that these new elements arise from such a reaction is extremely difficult. For example, one of the earlier claims for transmutation came from the Patterson Power Cell, and its ‘commercial cousin’, the RIFEX kit. Scott Little performed an interesting study on one such kit, and got essentially the same results as Miley and Patterson (the purveyors of the RIFEX kit), i.e., he did find ‘new’ elements, but he went one step further (actually two steps). First he computed whether the new elements detected could have been detected in his starting materials. He found that the large majority _could not have been detected_, i.e., the RIFEX kit was probably just concentrating a sub-trace-level contaminant on the bead surfaces. Additionally, Little took those elements that were found above the detection limit and traced several of them down to components that the cell itself had been constructed of. In other words, the RIFEX cell was extracting the elements from itself and depositing them on the beads, to be found later as ‘new’ elements (just like the Pt). Yes, they were ‘new’, but their origin was not a solid-state nuclear reaction. CFers must prove that this is not true in their results, and none have done so to date.

Secondarily, CFers have not used their analytical techniques adequately. They have consistently misidentified these ‘new’ elements by assuming their analysis instruments function better than they actually do, or by choosing one identity from what was in fact a list of possible identities. For example, Iwamura has been noted as claiming to have produced Pr and Mo from a complex membrane structure that deuterium was diffused through. The problem is that these identifications were done with a single peak from an XPS spectrometer. Unfortunately, XPS is a ‘fingerprint’ technique, i.e., as in regular fingerprint analysis, more than one match point must be found to be able to identify the element, but Iwamura only used one peak (match point). Multiple match points are required because usually there are several elements that can produce a given peak. Later, Mizuno’s group found that Iwamura’s Mo peak was actually S (Mo and S have overlapping peaks where Iwamura chose to pick his one peak). While Mizuno did not challenge the Pr identification, someone else in fact did (an unnamed participant at one of the ICCFs), and they suggested the Pr was actually Cu, a metal commonly found in vacuum systems, as well as in trace contaminants from the membrane fabrication materials. Again, it was a one-peak identification, and thus is not definitive.

Another misuse of analytical instrumentation/results comes from experiments that claim to have shown altered natural isotopic distributions. This is only done with the SIMS technique. Going back to the Iwamura case above, the “Mo” claimed to have been produced supposedly had an altered isotopic distribution. Mo has several isotopes, and one of its major ones is mass 96. In Iwamura’s results, the mass 96 peak was greatly enhanced over normal distributions, thus supposedly proving the nuclear nature of the reaction (nuclear reactions tend to favor particular isotopes). However, sulfur is ~95% mass 32, and 3 x 32 = 96, i.e., the mass 96 peak came from an S3 species, so given Mizuno’s result, it is easy to see why Iwamura was misled. This kind of analysis _must_ be done on all claimed isotopic anomalies in order to substantiate that they are real. As well, the di- and triatomic metal hydride species are typically ignored. Being 1 and 2 mass units larger than the bare metal, they would convince the unwary that an isotope distribution shift has occurred, when in fact all one is looking at is hydride chemistry. To recap, then: transmutation results need to be properly interpreted and identified, and _then_ need to be shown NOT to have arisen from concentrating and collecting contaminants. So far, only Scott Little has done a thorough enough job on that.
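As an illustration of that ambiguity, here is a small Python sketch with a hand-picked species list (mine, for illustration only, not data from any paper) showing how several plausible species land on nominal mass 96 in a SIMS spectrum:

 # Nominal masses of candidate species near the Mo region (illustrative list)
 candidates = {
     '96Mo (bare metal isotope)': 96,
     '98Mo (another Mo isotope)': 98,
     '32S3 cluster (3 x 32)': 3 * 32,
     '95Mo-H (monohydride)': 95 + 1,
     '94Mo-H2 (dihydride)': 94 + 2,
 }
 for species, mass in candidates.items():
     if mass == 96:
         print('nominal mass 96 could be:', species)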

Other tables could have been placed in Storms’ book for other families of claims, but Storms only dealt with those bodies of evidence that he thought were adequately large to be persuasive. He was correct in that; it does require an adequate body of evidence to make a scientific claim. Unfortunately, he inaccurately presented the CCS story, left out the Clarke He story, and hand-waved away (i.e., did not reference) the anti-contamination argument. Proving a revolutionary new claim requires addressing all the issues, not ignoring them. Until such time as the CFers either a) address the issues above (and related ones) or b) conclusively demonstrate a cold-fusion-powered water heater, the rest of us are correct in assuming the CF case is unproven, and the Wiki article needs to reflect that reality.

Why Cold Fusion Calorimetry Can’t Be Trusted


PLEASE MAKE COMMENTS ON THIS SECTION IN THE NEXT SECTION. DO NOT EDIT THIS SECTION. EDITS HERE WILL BE DELETED.


Whenever an experimental result is computed from experimental measurements, the errors in the measurements must be propagated through the calculation to estimate the error in the computed result. This is done through what is known as propagation of error theory. There are two versions of this: the more complete version, which involves covariance matrices, and the simplified version, which assumes covariances will average to zero in a large data set. We will use the simplified version here.

To compute the variance (or its square root, the standard deviation) of a computed value ‘y’, which is a function of a number of variables ‘x_i’, the simplified propagation of error formula is used:

If y = f(x_i), then s_y^2 = Sum{ (part_d(f)/part_x_i)^2 * s_x_i^2 } (1)

(part_d(f)/part_x_i is the partial derivative of the function f with respect to the ith x variable, s_x_i is the standard deviation of x_i, and s_y is the standard deviation of y)
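A minimal numeric sketch of equation (1) in Python (the function name and the finite-difference approach are my own illustration, not anyone's published code):

 import math

 def propagated_sigma(f, x, sigmas, h=1e-6):
     # Simplified propagation of error: s_y^2 = Sum{ (part_d(f)/part_x_i)^2 * s_x_i^2 },
     # with each partial derivative estimated by a central finite difference.
     var = 0.0
     for i, s in enumerate(sigmas):
         xp, xm = list(x), list(x)
         xp[i] += h
         xm[i] -= h
         dfdx = (f(*xp) - f(*xm)) / (2 * h)
         var += dfdx**2 * s**2
     return math.sqrt(var)

 # Example: y = k * Pm with k = 1.0 +/- 0.010 and Pm = 10 W +/- 0.075 W
 print(propagated_sigma(lambda k, Pm: k * Pm, [1.0, 10.0], [0.010, 0.075]))  # ~0.125 W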

In a calorimetric experiment, power output is measured. Accuracy is ensured by calibration, which means a known (true) power P (Pt) is sent into the cell, and a measured P (Pm) is recorded. Pm never equals Pt because, at a minimum, some heat is lost through penetrations of the calorimeter boundary.

To ‘adjust’ for the lost heat, a calibration curve is established by measuring the paired Pt and Pm values at several values of input power. Then, a calibration equation of some sort is determined. Frequently, this is a simple linear equation (Po = k * Pm + b), but this is not required. The point is that the calibration constants of whatever equation is developed are experimentally determined values and thus their error must be propagated through to the computed output power Po.
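A hedged sketch of that procedure in Python (the data points below are invented for illustration): fit the paired (Pm, Pt) values to a line, then repeat the calibration over time so the spread in the fitted constants can be estimated:

 import numpy as np

 Pm = np.array([1.9, 4.1, 6.0, 8.2, 9.9])   # measured output powers (W)
 Pt = np.array([2.0, 4.0, 6.0, 8.0, 10.0])  # known (true) input powers (W)
 k, b = np.polyfit(Pm, Pt, 1)               # least-squares fit of Po = k*Pm + b
 # Repeating this calibration at times t0, t1, ... yields a set k_0, k_1, ...;
 # the standard deviation of that set is the s_k that must be propagated into Po.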

Note that Po - Pt is the ‘excess power’ (or ‘excess heat’) that is used to claim cold fusion has been shown. Also note that in most calibration equations, Pm is expanded in terms of measured variables. For example, in mass flow calorimetry,

Pm = Cp*f*dM/dt*(Tout-Tin)

In Seebeck calorimetry, the process of determining k folds in a conversion factor for the measured voltage to be converted to a power. I.e., the Seebeck equation is typically given as Po = K * Vt, where Vt is the summed voltage derived from a series arrangement of thermoelectrics, and K is now our k from above times a conversion factor. However, that does not impact the propagation of error.

We will discuss the simplified linear case where the ‘b’ coefficient of the standard linear regression equation is zero. (This means the measured power at zero input power was actually zero, a desired condition.) Thus Po, the output power, which is the adjusted measured power, is computed via a simple multiplicative constant we will call k, i.e.

Po = k * Pm (2)

So, let us compute the error in Po for the generic equation (2) via (1).

With x1 = k and x2 = Pm:

part_d(y)/part_x1 = Pm and part_d(y)/part_x2 = k

And thus

s_Po^2 = Pm^2 * s_k^2 + k^2 * s_Pm^2 ,

where s_Pm and s_k are the standard deviations of Pm and k, respectively.

What are the numeric values now for k, Pm, s_k, and s_Pm? Here we must turn to the literature for answers as to what values are often used. Normally Pt is in the range of 10-50 watts. Therefore we will use 10 for Pm, and later consider what using 50 would do. In a very good calorimeter, almost all the heat is captured and thus k would be nearly 1 (k=1 is the ideal case of no loss). Storms’ and McKubre’s calorimeters are of this caliber, with efficiencies of 98-99%. Others, however, such as those used by Energetics Technologies, are only 70% efficient. In that case k = 1/.7 = 1.43. We will use 1.0 in the calculations and later consider what using 1.43 and 2.0 would do. s_Pm is routinely given in the case of the good calorimeter as something around 75 mW or 0.075 watts. We won’t mess with that. That leaves the estimate of s_k. Unfortunately, data to evaluate this is typically not mentioned, and no literature values exist. The only data we have is that of Storms, which I reanalyzed in the 2002 publication. So, we will examine the case where s_k = 1% of k (a very good technique), and discuss what 5% or 10% might give later. To summarize, our base case is: Pm = 10 W, s_Pm = 0.075 W, k = 1.0 (unitless), and s_k = 0.010. (This set is for a ‘good’ calorimeter in terms of heat loss and calibration shift.) Thus:

s_Po^2 = 100 * 1.0x10^-4 + 1 * 5.625x10^-3 = 1x10^-2 + 5.625x10^-3 = 1.5625x10^-2 Watts^2

s_Po = SQRT( s_Po^2) = 0.125 Watts or 125 mW
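In code, the formula just derived and the base-case numbers look like this (a sketch; the function name is mine):

 import math

 def s_Po(Pm, s_Pm, k, s_k):
     # s_Po^2 = Pm^2 * s_k^2 + k^2 * s_Pm^2
     return math.sqrt(Pm**2 * s_k**2 + k**2 * s_Pm**2)

 print(s_Po(10.0, 0.075, 1.0, 0.010))  # base case -> 0.125 W, i.e. 125 mW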

As expected, the sigma has increased a bit from 75 mW. This almost always happens when you fold another variable into the error propagation. Even in the best case, the error increases a little. The 3-sigma band on Po is then +/- 375 mW. That is for a nearly perfect calorimeter (no losses). That means that any excess heat signal of 375 mW or less is considered ‘in the noise’.

Now let’s consider a 50% loss, i.e., k = 2, and a 10% standard deviation, i.e., s_k = 2 * .1 = 0.2 (a really ‘bad’ calorimeter in terms of heat loss and calibration shift). Now:

s_Po^2 = 100 * 0.04 + 4 * 5.625x10^-3 = 4 + 0.0225 = 4.0225 Watts^2

s_Po = 2.006 Watts or ~2000 mW

The 3-sigma band here will exceed 6 Watts!

Now, for a 50 W input power, the first term in the equations becomes 2500 instead of 100. For our best case, s_Po = SQRT(0.25 + 0.005625) = 0.5056 Watts or ~500 mW, and 3-sigma = 1.5 Watts. The worst case gives s_Po = SQRT(100.0 + 0.0225) = 10.00 (rounded) Watts! 3-sigma is then 30 Watts!

Notice that even if we double or triple the baseline noise on the Pm measurement, the dominant error term is the one involving the suspected shift in the calibration constant. The s_Pm term is just not that important, which is why when you look at an excess power plot, the apparent excess heat peaks appear to exceed the baseline noise by large factors. However, that apparent baseline noise is not the relevant error term as we just showed.

“Oh, but 50% heat loss is too big.” Could be, but 70% efficiency is typical. For 70% efficiency, k = 1/.7 = 1.43, and let’s consider a 10% shift in k, i.e., s_k = 0.143 (10 W input and 75 mW noise again).

s_Po^2 = 100 * 0.143^2 + 1.43^2 * 0.075^2 = 2.0449 + 0.0115 = 2.0564 Watts^2

s_Po = 1.434 Watts

And just for overkill, let’s consider the _real_ Storms calorimeter. It was ~98% efficient, meaning k = 1/.98 = 1.0204, and computing its s_k as 1% of k gives s_k = 0.0102. The maximum input power produced the maximum excess heat, and it was ~26 W; as usual, s_Pm = 75 mW.

s_Po^2 = 26^2 * 0.0102^2 + 1.0204^2 * 0.075^2 = 0.0703 + 5.86x10^-3 = 0.0762 W^2

s_Po = 0.276 Watts or 276 mW, 3-sigma = 828 mW

Compare the 276 mW 1-sigma value to the claimed precision of 75 mW and the observed maximum excess heat of 780 mW and you can see how important considering the calibration constant shift can be. It dominates!
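Reusing the s_Po sketch from above reproduces every scenario in this section in a few lines:

 cases = {
     'base: 10 W, k=1.0, 1% s_k':      (10.0, 0.075, 1.0,    0.010),
     'bad: 10 W, k=2.0, 10% s_k':      (10.0, 0.075, 2.0,    0.200),
     'best at 50 W: k=1.0, 1% s_k':    (50.0, 0.075, 1.0,    0.010),
     'worst at 50 W: k=2.0, 10% s_k':  (50.0, 0.075, 2.0,    0.200),
     '70% efficient: k=1.43, 10% s_k': (10.0, 0.075, 1.43,   0.143),
     'Storms: 26 W, k=1.0204, 1% s_k': (26.0, 0.075, 1.0204, 0.0102),
 }
 for name, args in cases.items():
     sig = s_Po(*args)
     print('%s -> 1-sigma = %.3f W, 3-sigma = %.3f W' % (name, sig, 3 * sig))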

This conclusion applies generically to Seebeck calorimeters too. There, Pm is replaced by Vt in the equations above, and we need to know values for Vt and s_Vt as well. I don’t know these, but the computations are the same.

One final comment. The biggest 3-sigma band computed here was 30 Watts. However, that in no way limits how bad a given experiment can be. It is entirely possible to get significantly larger error bands in other experiments.

The final point – the CCS must be considered to determine if a proposed excess power or heat signal is in the noise or not at the given experimental conditions.

One comment added after the Marwan et al. Response to my Comment on the Krivit and Marwan JEM paper came out: this computational approach is used to estimate the random error in a quantity given random errors in the measured quantities. It does not imply the system is random; it is just a 'rule-of-thumb' used to decide what is reasonably inside or outside the noise band (i.e., it is a 'sanity check'). The randomness of the system under study must be determined from the actual behavior. My papers do NOT claim the CCS is random. In fact, they claim exactly the opposite.


PLEASE MAKE COMMENTS ON THIS SECTION IN THE NEXT SECTION. DO NOT EDIT THIS SECTION. EDITS HERE WILL BE DELETED.