
Episode 82: How to Design a Hyperdimensional Physics Experiment

Download the Episode

Recap: Richard C. Hoagland has claimed now for at least a decade that there exists a “hyperdimensional torsion physics” which is based partly on spinning stuff. I assume he is correct in his model and claims and that his Accutron watch can actually measure what he claims. I then go through how you would properly design an experiment to test the claims and show why the data that Mr. Hoagland has presented is, quite literally, meaningless.

Puzzler for Episode 82: I mentioned that Richard Hoagland actually claims the orientation relative to the spinning sphere matters. If that's the case, how would you modify the basic design of the experiment I outlined, and what additional things would you need to keep track of?

Answer to Puzzler from Episode 81: There was no puzzler in Episode 81.

Q&A: This episode's question comes from Anders on the SGU forums who asks: "Everyone says there is a massive ocean on Europa, under a crapton of ice. But how do we know? How certain are we?"

There's one main line of evidence for a sub-surface ocean, and there are a few smaller ones.

The main one is that Europa is way too small to have its own magnetic field: any core should be solid and no longer moving, so it can't generate its own field. It also shouldn't be able to have an induced field, because there would be no liquid metal flowing around.

But, when the Galileo spacecraft flew by Europa, it found that there was in fact an induced magnetic field caused by the moon's passage through Jupiter's magnetosphere. To get an induced field, you have to have a highly electrically conductive layer of material somewhere inside the moon. Salty liquid water is a good candidate. A liquid metal core would be a good one, too, but Europa's core is almost certainly solid: the moon is so small that it would have cooled by now.

That's the observational evidence, but there's also the theoretical evidence, which actually came first. Europa is in a resonance with two other large Galilean satellites, Io and Ganymede. It also has an elliptical orbit around Jupiter. These all mean that it gets flexed and relaxed during every orbit through tidal effects, and this will heat it up. Theoretical models showed that it could heat up enough for there to be a liquid ocean under an icy crust.

New News, Related to Episode 81: Light Travel Time Problem

  • Quite conveniently, three days after last episode (which was about the speed of light changing so that creationists would be happy), Answers in Genesis published an article in the Answers Research Journal by Danny Faulkner, another creationist astronomer, about another possible solution to the light travel time problem for young-Earthers. To quote:
    • I spent more than 30 years looking for a solution to the light travel time problem, and recently I began thinking about a possibility that I find satisfactory. With so many other proposed solutions, one may legitimately ask why one more? I see that most of these solutions to the light travel time problem have advantages and disadvantages. If there were one solution that worked, there would not be so many solutions, and there would not be such sharp disagreement. Please consider my modest proposal. As I have previously argued (Faulkner 1999), I submit that God’s work of making the astronomical bodies on Day Four involved an act not of creating them ex nihilo, but rather of forming them from previously-created material, namely, material created on Day One. As a part of God’s formative work, light from the astronomical bodies was miraculously made to “shoot” its way to the earth at an abnormally accelerated rate in order to fulfill their function of serving to indicate signs, seasons, days, and years. I emphasize that my proposal differs from cdk in that no physical mechanism is invoked, it is likely space itself that has rapidly moved, and that the speed of light since Creation Week has been what is today.
  • In other words ... Goddidit. The rest of the article shows how this is consistent with the Bible, while the text before it goes over all the previous models.

Additional Materials:

Transcript

Claim: Richard C. Hoagland has claimed now for at least a decade that there exists a “hyperdimensional torsion physics” which is based partly on spinning stuff. In his mind, the greater black governmental forces know about this and use it and keep it secret from us. It’s the key to “free energy” and anti-gravity and many other things. Some of his strongest evidence is based on the frequency of a tuning fork inside a 40+ year-old watch. The purpose of this episode is to assume Richard is correct, examine how an experiment using such a watch would need to be designed to provide evidence for his claim, and then to examine the evidence from it that Richard has provided.

Predictions

Richard has often stated, “Science is nothing if not predictions.” He’s also stated, “Science is nothing if not numbers” or sometimes “Science is nothing if not data.” He is fairly correct in these statements, or at least in the first and last: For any hypothesis to be useful, it must be testable. It must make a prediction, that prediction must be able to be tested, and that test must result in data that is consistent with what the hypothesis predicted.

Over the years, he has made innumerable claims about what his hyperdimensional or torsion physics “does” and predicts, though most of his predictions have come after the observation, which invalidates them as predictions, or at the very least renders them useless.

In particular, for this experiment we’re going to design, Hoagland has claimed that when a mass (such as a ball or planet) spins, it creates a “torsion field” that changes the inertia of other objects; he generally equates inertia with mass. Inertia isn’t actually mass; it’s the resistance of any object to a change in its motion. For our purposes here, we’ll even give him the benefit of the doubt, as either one is hypothetically testable with his tuning-fork-based watch.

So, his specific claim, as I have seen it, is that the mass of an object will change based on its orientation relative to a massive spinning object. In other words, if you are oriented along the axis of spin of, say, Earth, then your mass will change one way (increase or decrease), and if you are oriented perpendicular to that axis of spin, your mass will change the other way.

Let’s simplify things even further from this more specific claim that complicates things: An object will change its mass in some direction in some orientation relative to a spinning object. This is part of the prediction we need to test. We could get more complicated if this basic prediction were shown to be true, but as we'll find out, that's unnecessary.

According to Richard, the other part of this prediction is that one way to actually see this change is when big spinning objects align in order to increase or decrease the mass from what we normally see. So, for example, if your baseball is on Earth, it has its mass based on it being on Earth as Earth is spinning the way it does. But, if, say, Venus aligns with the sun and transits in front of the Sun as seen from Earth (as it did back in June 2012), then the mass will change from what it normally is. Or, like during a solar eclipse when the Sun and Moon align. This is the other part of the prediction we can test.

Hoagland also has other claims, like that you have to be at sacred or “high energy” sites or somewhere “near” ±N·19.5° latitude on Earth (where N is an integer multiple, and “near” means you can be ±8° or so from that multiple … so much for a specific prediction). For example, this apparently justified his begging for people to pay for him and his significant other to go to Egypt last year during that Venus transit. Or taking his equipment on December 21, 2012 (when there wasn’t anything special alignment-wise…) to Chichen Itza, or going at some random time to Stonehenge. Yes, this is beginning to sound even more like magic, but for the purposes of our experimental design, let’s leave this part alone, at least for now.

Designing an Experiment: Equipment

To put it briefly, Richard uses a >40-year-old Accutron watch, which has a small tuning fork in it that provides the basic unit of time for the watch. A tuning fork’s vibration rate (its frequency) depends on several things, including the length of the prongs, the material used, and its moment of inertia. So, if its mass changes, or its moment of inertia changes, then the tuning fork will change frequency, meaning the watch will run either fast or slow. The watch was specially modified by a guy for the purpose of this measurement, including running wires from the tuning fork out of the watch to aid in measurement.

The second piece of equipment is a laptop computer, with diagnostic software that can read the frequency of the watch, and a connection to the watch.

So, we have the basic setup with a basic premise: During an astronomical alignment event, Hoagland’s Accutron watch should deviate from its expected frequency.

Designing an Experiment: Baseline

After we have designed an experiment and obtained equipment, usually the bulk of time is spent testing and calibrating that equipment. That’s what would need to be done in our hypothetical experiment here.

What this means is that we need to look up when there are no alignments that should affect our results, and then hook the watch up to the computer and measure the frequency. For a long time. Much longer than you expect to use the watch during the actual experiment.

You need to do this to understand how the equipment acts under normal circumstances. Without that, you can’t know if it acts differently – which is what your prediction is – during the time when you think it should. For example, let’s say that I only turn on a special fancy light over my special table when I have important people over for dinner. I notice that it flickers every time. I conclude that the light only flickers when there are important people there. Unfortunately, without the baseline measurement (which would be turning on the light when there AREN’T important people there and seeing if it flickers), my conclusion is invalid, because I don't know if the flickering is caused by the important people or if it's just a normal feature of the light.

So, in our hypothetical experiment, we test the watch. If it deviates at all from the manufacturer’s specifications during our baseline measurements (say, a 24-hour test), then we need to get a new one. Or we need to, say, make sure that the cables connecting the watch to the computer are connected properly and aren’t prone to surges or something else that could throw off the measurement. Make sure the software is working properly. Maybe try using a different computer to make sure that the connector port is acting properly. These are all things that one would normally do in a scientific experiment where we want our results to be believed.

In other words, this boils down to the fact that we need to make sure all of our equipment behaves as expected during our baseline measurements, when nothing that our hypothesis predicts should affect it is going on.
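To make this concrete, here's a minimal sketch (in Python) of what such a baseline logger might look like. The read_frequency() function is purely hypothetical, a stand-in for whatever interface the Accutron diagnostic software actually exposes:

```python
import csv
import time

def read_frequency():
    """Hypothetical stand-in for the diagnostic software's frequency
    readout; the real interface would replace this."""
    return 360.0  # placeholder value, in Hz

SAMPLE_PERIOD = 0.1       # seconds: 10 readings per second
DURATION = 24 * 60 * 60   # one 24-hour baseline run

with open("baseline.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["unix_time", "frequency_hz"])
    start = time.time()
    while time.time() - start < DURATION:
        writer.writerow([time.time(), read_frequency()])
        time.sleep(SAMPLE_PERIOD)
```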

A LOT of statistical analyses would then be run to characterize the baseline behavior, to compare with the later experiment and determine if it is statistically different. One of the most basic ways this would be looked at is to take an average and standard deviation. The average is the sum of all the values divided by the number of values. So, say you measure the frequency of the watch 10 times every second, and you let this run for a full day. You would have 864,000 measurements of the frequency of the watch. You'd add them all up and divide by 864,000, and that's your average. The average should be what the manufacturer said the frequency of the watch is, which is 360 Hz, or a vibration 360 times per second.
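That arithmetic takes only a few lines. Here's a sketch using simulated readings in place of real logged data; the 360 Hz center and 0.5 Hz spread are invented, illustrative values:

```python
import random

# Simulate one full day of baseline readings: 10 per second for 86,400 seconds.
# In the real experiment, these would be read from the logged baseline file.
random.seed(42)
samples = [random.gauss(360.0, 0.5) for _ in range(864_000)]

average = sum(samples) / len(samples)
print(f"{len(samples):,} samples, average = {average:.4f} Hz")
```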

The standard deviation is a bit more difficult to understand. It measures the spread of the data, which in large datasets most often follow a bell curve, also known as a normal distribution, also known as a Gaussian. In other words, if you were to make a histogram of all those 864,000 measurements of frequency, where the histogram is the number of times you recorded a certain range of frequency, like 360.0-360.1 Hz, 360.1-360.2 Hz, and so on, it would look like a bell curve, peaking in the middle right at the average, and with fewer and fewer measurements the farther you get from the average. A Gaussian has the property that 68.3% of the time, the values you measure will be within 1 standard deviation of the average.

I know some folks listening to this aren't math people, so let me explain what that means because it's important. Let's say the average I measure is right on the manufacturer's specs, 360 Hz. I have my 864,000 measurements of frequency, but they aren't all exactly 360 Hz. I calculate from my data a standard deviation of ±0.5 Hz. This means that 68.3% of the frequency measurements I have will actually be in the range of 359.5-360.5 Hz. Very few will be EXACTLY 360 Hz. What this also means is that the inverse is true: 31.7% of the measurements I make will be smaller than 359.5 Hz or larger than 360.5 Hz.

In physics, the gold standard is a 5-sigma detection. For a Gaussian distribution, 68.3% of the data are within 1-sigma. 95.4% of the data are within 2-sigma, so 359-361 Hz. 99.7% of the data are within 3-sigma, or 358.5-361.5 Hz. 99.994% of the data are within 4-sigma, or 358-362 Hz. And 5-sigma means that 99.99994% of the data are within that range, in this case 357.5-362.5 Hz. That means, out of my sample of 864,000 data points, either zero or only one point should be outside of that 5-sigma range. And that's my baseline. That means that if, when we run the experiment, we see something OUTSIDE that 5-sigma baseline range, it's significant.
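Continuing with the same simulated data, here's a sketch that checks those coverage fractions directly (again, 360 Hz and 0.5 Hz are just illustrative numbers):

```python
import random
import statistics

# Simulated day-long baseline, as before.
random.seed(42)
samples = [random.gauss(360.0, 0.5) for _ in range(864_000)]

mean = statistics.fmean(samples)
sigma = statistics.stdev(samples)

# For a true Gaussian, expect ~68.3%, 95.4%, 99.7%, 99.994%, 99.99994%.
for n in (1, 2, 3, 4, 5):
    lo, hi = mean - n * sigma, mean + n * sigma
    inside = sum(lo <= s <= hi for s in samples)
    print(f"{n}-sigma ({lo:.2f}-{hi:.2f} Hz): "
          f"{100 * inside / len(samples):.5f}% of samples inside")
```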

Now, those are just two basic statistics that we would run. You would also do tests to make sure that there wasn't a variation from day to night, that it was well insulated from temperature changes and didn't behave differently if it's 50°F versus 80°F outside, or low versus high humidity, especially if you plan on taking this on field trips. Like, to your sacred places.

Also, ideally, you'd have a couple of these and you'd characterize each one. This way, when you actually run your experiment at your anointed time, you could leave one running in the lab, one in your car, one exactly at your sacred site, and so on. These would be controls, but that's a separate issue that's not entirely necessary for this already long discussion.

Remember the bottom line: We need to know how the device behaves when nothing special is going on; without that knowledge, we can't say that it behaves differently when something special is going on. Just like my flickering light example.

Designing an Experiment: Running It

After we have working equipment, verified equipment, and a well documented and analyzed baseline, we then perform our actual measurements. Say, turn on our experiment during a solar eclipse. Or, if you want to follow the claim that we need to do this at some “high energy site,” then you’d need to take your equipment there and also get a baseline just to make sure that you haven’t broken your equipment in transit or messed up the setup.

Then, you gather your data. You run the experiment in the exact same way as you ran it before when doing your baseline. Again, if you had multiple apparatuses, you'd run them in various locations at the same time, and you'd have other people running them, in order to get multiple independent tests of the phenomenon, so that a weird result on one of them but not the others couldn't be chalked up as evidence for your phenomenon.

Data Analysis

In our basic experiment, with our basic premise, the data analysis should be fairly easy.

Remember that the prediction is that, during the alignment event, the inertia of the tuning fork changes. Maybe it’s just me, but based on this premise, here’s what I would expect to see during, say, the transit of Venus across the sun (if the hypothesis were true): The computer would record data identical to the baseline while Venus is away from the sun. When Venus makes contact with the sun’s disk, you would start to see a deviation that would increase until Venus’ disk is fully within the sun’s. Then, it would sit at a steady, different value from the baseline for the duration of the transit. Or it would perhaps increase slowly until Venus is most deeply inside the sun’s disk, then decrease slightly until Venus’ limb makes contact with the sun’s. Then you’d get a rapid return to baseline as Venus’ disk exits the sun’s, and you’d have a steady baseline thereafter.
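To make that predicted signature concrete, here's a toy sketch of the simplest, step-function version of the trace. Every number in it (the contact times, the 0.8 Hz offset) is invented purely for illustration; the hypothesis itself doesn't pin down a magnitude:

```python
BASELINE_HZ = 360.0
OFFSET_HZ = 0.8      # hypothetical shift while Venus is inside the sun's disk
INGRESS_HR = 3.0     # made-up contact times, in hours after recording starts
EGRESS_HR = 15.0

def predicted_frequency(t_hr):
    """Step-function prediction: offset only during the transit."""
    if INGRESS_HR <= t_hr <= EGRESS_HR:
        return BASELINE_HZ + OFFSET_HZ
    return BASELINE_HZ

# Print the predicted trace every 3 hours over an 18-hour window.
for t in range(0, 37, 6):
    t_hr = t / 2
    print(f"t = {t_hr:4.1f} hr: {predicted_frequency(t_hr):.1f} Hz")
```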

Regardless, the basic data is a series of numbers: The frequency you recorded around the event, and your baseline. You run the same statistics on each. And, you use the statistics to determine if one is different from the other. There are numerous ways to do this, but the easiest one to understand again gets to the average and standard deviation.

You might get a result that is very obviously different, like before your average was 360 Hz and during the transit it very clearly averages 450 Hz. But remember that standard deviation. What if your baseline was 360 Hz and the average during the transit is 360.8 Hz? Is that different enough?

In other words, you need to determine whether the variation you see is different enough from baseline to be considered a real effect. We'll use the same numbers from before for baseline: 360 Hz with a standard deviation of 0.5 Hz. Meaning that, if the null hypothesis is correct (that there is no hyperdimensional physics effect), we should expect the data to be within 0.5 Hz of 360 about 68.3% of the time. We have to assume the null hypothesis is correct unless the statistics show it can be rejected.

So, 360±0.5 Hz during our day-long baseline. Only, the transit of Venus doesn't last a day. It's a few hours. Let's say you could collect data for the whole thing and managed to get something like 12 hrs of the transit -- just to make the numbers easier. And let's say that IF the hyperdimensional physics thing were true, you'd get a single, different value throughout the transit than you do when it's not transiting. So it would look like a step function. You can use other statistics to determine other features, but I want to make this easy since I'm not using graphs right now.

So, for the 12 hours of the transit, taking measurements 10 times a second, you get 432,000 measurements. A lot, but not the 864,000 you got for baseline. So the statistics might be worse. So, let's say your average IS different during the transit, and you get 360.8 Hz on average. But, you couldn't control temperature very well out in the open, and you have fewer data points, so your standard deviation is larger. Maybe ±1.0 Hz.

So you have your baseline at 360 and your test at 360.8. But, 68.3% of the time, your baseline is 359.5-360.5 Hz and your test is 359.8-361.8 Hz. In other words, these overlap within 1-sigma. There is a non-trivial chance that you would get this result just by running the experiment in the lab again for 12 hours.

As I mentioned earlier, in physics, the gold standard for a detection is 5-sigma. This means, in this case, you need to multiply each standard deviation by 5: the baseline's 0.5 becomes 2.5, and the transit's 1.0 becomes 5.0. Add both to your baseline average: 360 + 2.5 + 5.0 = 367.5 Hz. So you would need to have had your average, from the transit of Venus, be at least 367.5 Hz for there to be less than about a 1 in a million chance of this being a statistical fluke.
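Here's that back-of-the-envelope check in code, using the episode's simple rule that the two 5-sigma bands must not overlap (all numbers are the illustrative ones from above):

```python
baseline_mean, baseline_sigma = 360.0, 0.5  # illustrative baseline stats
transit_mean, transit_sigma = 360.8, 1.0    # illustrative transit stats
N_SIGMA = 5

# Simple non-overlap rule: the transit average must sit beyond the
# baseline's 5-sigma band plus the transit's own 5-sigma half-width.
required = baseline_mean + N_SIGMA * (baseline_sigma + transit_sigma)
print(f"required transit average: {required} Hz")  # 367.5
verdict = "significant" if transit_mean >= required else "not significant"
print(f"observed {transit_mean} Hz -> {verdict}")  # not significant
```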

But now let's say you can't. Your theory predicts that the variation is only going to be up to around 360.8 Hz versus your 360 Hz baseline. How would you get better statistics to show this effect to within 5-sigma?

You’d need different and better equipment. A tuning fork that is more consistently 360 Hz (so better manufacturing = more expensive). A longer event. Maybe a faster reader so that instead of reading the tuning fork’s frequency every 0.1 seconds, you can read it every 0.01 seconds. Those are the only ways I can think of. But, this is why physics experiments are expensive. We seek to understand more and more subtle phenomena, smaller and smaller effects, and in order to do that, we have to have better and better equipment that can give much more consistent results. It's all about decreasing that standard deviation.
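Under that same simple rule, you can see directly how better equipment shrinks the average you'd need to measure. The sigma pairs below are made up for illustration:

```python
# How the required transit average shrinks as the standard deviations do,
# using the same 5-sigma non-overlap rule as above.
for baseline_sigma, transit_sigma in [(0.5, 1.0), (0.1, 0.2), (0.01, 0.02)]:
    required = 360.0 + 5 * (baseline_sigma + transit_sigma)
    print(f"sigmas ({baseline_sigma}, {transit_sigma}) Hz -> "
          f"need an average of >= {required:.2f} Hz")

# With sigmas of (0.01, 0.02) Hz, even a 360.8 Hz transit average
# would clear the 5-sigma bar.
```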

Repeat!

Now, despite what one may think or want, regardless of how extraordinary one’s results are, you have to repeat them. If in our experiment our baseline was 360±0.5 Hz and our transit was 400±1.0 Hz, that's exciting. That could get you more funding. But that doesn't prove the phenomenon.

You need to replicate it over and over and over again. Preferably, other independent groups with independent equipment would do the repetition, too.

Unfortunately, no matter how much you might want it to, one experiment by one person does not a radical change in physics make. Numerous experiments by one person that can't be duplicated by other people do not a radical change in physics make, either.

What Does Richard Hoagland’s Data Look Like?

I’ve now spent around 25 minutes explaining how you’d need to design and conduct an experiment with Richard’s apparatus and the basic form of his hypothesis. And why you have to do some of those more boring steps (like baseline measurements and statistical analysis).

To date, Richard claims to have conducted about ten trials. One was at Coral Castle in Florida, back, I think, during the 2004 Venus transit; another was outside Albuquerque, New Mexico, during the 2012 Venus transit. Another was in Hawai’i during a solar eclipse, another at Stonehenge during something, another in Mexico on December 21, 2012, etc., etc.

For all of these, he has neither stated that he has performed baseline measurements, nor has he presented any such baseline data. So, right off the bat, his results – whatever they are – are meaningless, because we don’t know how his equipment behaves under normal circumstances. Going back to my light example: if I never turn the light on when the important people aren't there, I can't know whether it flickers at all times or just when those important people are over.

Richard Hoagland also has not shown all his data, despite promises to do so.

You can find some graphs of his data on his website. One plot that he says was taken at Coral Castle during the Venus transit back in 2004 is typical of the kinds of graphs he shows. My reading of it is that his watch appears to have a baseline frequency of around 360 Hz, as it should. The average, however, as calculated by the Accutron diagnostic software he's using, is 361.611 Hz, though we don’t know over how long a period that’s averaged. The instability is 12.3 minutes per day, meaning it’s not a great watch. It's also over 40 years old. Remember that the frequency of a tuning fork depends on its mass ... if you got a bit of dirt stuck to it, that could be enough to throw it off.

On the actual graph, there's an apparent steady rate at around that 360 Hz, but there are spikes that deviate up to around ±0.3 Hz, and there is a series of deviations during the time Venus is leaving the disk of the sun. But that effect continues AFTER Venus is no longer in front of the sun. In fact, the deviations from 360 are bigger and more numerous after Venus exits the sun's disk than when it was in front of it in the first place, the opposite of what Richard should have predicted! In addition, the roughly steady rate while Venus is in front of the sun is the same frequency as the apparent steady rate when Venus is off the sun’s disk.

Interestingly, he also shows another trace from Coral Castle. And a third one on a different page on his website, too. Same location, same Accutron (because he only has one), and even the same time period. Same number of samples, same sampling rate. Problem is, all three graphs are DIFFERENT traces that are supposed to be happening at the same time! Maybe he mislabeled something. I’d prefer not to say that he faked his data -- I like to have at least an aloof and haughty sense of decorum on this podcast.

But, at the very least, this calls into question A LOT of his work on this. If a real scientist were found to have faked their data, they would lose their job and be disgraced. If a real scientist displayed different data that was supposed to have been collected by the same instrument at the same time, there would be a significant inquiry.

What Conclusions Can Be Drawn from Richard’s Public Data?

So, what conclusions can be drawn from Richard's public data? None.

As I stated, the lack of any baseline measurements automatically means his data are useless, because we don’t know how the watch acts under “normal” circumstances.

That aside, the data that he has released in picture form (as in, we don’t have something like a time-series text file we can graph and run statistics on) do not behave as one would predict from Richard’s hypothesis.

Other plots he presents from other events show even more steady readings, and then spikes up to 465 Hz at random times during or near when his special events are supposed to occur. None of those are what one would predict from his hypothesis. They are what one might expect if he jiggled the cord and didn't have a good connection.

What Conclusions does Richard Draw from His Data?

But, what conclusions does Richard draw from his data? To quote from his website in various places:

“stunning ‘physics anomalies’”

“staggering technological implications of these simple torsion measurements — for REAL ‘free energy’ … for REAL ‘anti-gravity’ … for REAL ‘civilian inheritance of the riches of an entire solar system …’”

“These Enterprise Accutron results, painstakingly recorded in 2004, now overwhelmingly confirm– We DO live in a Hyperdimensional Solar System … with ALL those attendant implications.”

Et cetera.

Final Thoughts

In the end, the craziness of claiming significant results from what – by all honest appearances – looks like a broken watch is the height of gall, ignorance, or some other words that I won’t say.

With Richard, I know he knows better, because it's been pointed out to him many times what he needs to do to make his experiment valid.

But this also gets to a broader issue of a so-called “amateur scientist” who may wish to conduct an experiment to try to “prove” their non-mainstream idea: They have to do this extra stuff. Doing your experiment and getting weird results does not prove anything. This is also why doing science is hard and why only a very small fraction of it is the glamorous press release and cool results. So much of it is testing, baseline measurements, data gathering, and data reduction and then repeating it over and over again.

Richard (and others) seem to think they can do a quick experiment and then that magically overturns centuries of "established" science. It doesn't.
