Episode 47: Image Processing and Anomalies, Part 1

Download the Episode

Recap: So many pseudoscientific claims rely on photographic evidence, and astronomy is no exception. In this Part 1 episode, I go through the basics of how an image gets from photons to pixels, the corrections we make to the image to try to get it to best represent the original, and some anomalies that can result.

There was no Puzzler in Episode 46.

Puzzler: Let's say you run across a website that claims to have found fossilized life on Mars in photos from the Mars rovers. What steps would YOU take in order to look into the claims?

Q&A: For this episode's Q&A, I really wasn't going to do another crater question, but this one was timely. Before I read it, a teensy bit of background: Richard C. Hoagland was on Coast to Coast AM all night on August 5th, with a few different people he didn't let talk, discussing the landing of the Mars Science Laboratory, AKA "Curiosity," that occurred during the first hour of the show. Richard, among other things, said that the central mound in Gale Crater (the MSL landing site) is a collapsed arcology. He explained that he believes this, in part, because a crater forms when you blast a huge hole in the surface, so how do you get a mountain?

The answer is that you get a rebound effect once craters get to a certain size. On Mars, that's about 6 km across, as I showed in my dissertation. Below that, you don't get a central peak; above that, you generally do. Though it should be emphasized that in the case of Gale Crater, the central mound is much more extensive than a normal crater central peak, and it's thought to be a bleep-load of sediment deposited by water.

So on the SGU forums, Belgarath stated: "You indicated that all Martian craters above a certain size leave a mountain in the middle. Is that true of the Moon, too? Does the body have to be in a certain size range for craters to form that way? Is the mountain basically happening because the surface liquifies and plops back up like what happens with a rock in water?"

And the answer is yes, that's exactly what happens. At a basic, basic level, the impact liquefies the surface, and at these temperatures and under these stresses, the rock will act like a fluid. That rebound effect you get when you throw a rock into a pond is what is thought to form the central peaks of craters.

If you dropped a small enough rock into a pond, you would NOT get the rebound effect, you'd just make a temporary hole in the pond's surface. That's the case for smaller craters. But when you get bigger, you get the rebound and the central peaks. If you get really big, the peak spreads out into a ring, and you get what we call a peak-ring crater. Mars has around a dozen of these that still have their peak-ring. The Moon, because of much less active erosion, has many more.

But the diameter transition of where we get these central peaks and peak-rings varies by target type and by location in the solar system.

On Earth, the transition is around 3 km. On Mars, it's around 6 km. On the Moon, it's around 15. What controls this transition was originally thought to be primarily surface gravity. Since Earth has more gravity than Mars, the transition is at a smaller diameter, and since Mars has more gravity than the Moon, its transition is at a smaller diameter than the Moon's. But target material does also play a role, as I showed in my doctoral dissertation, and impact velocity is also thought to play a role. So since the average velocity of impactors at Mars is something like 12-15 km/sec whereas the average at Mercury is closer to 30-40 km/sec, there will be a difference between those, as well.
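If you want to check that inverse-gravity trend yourself, here's a quick back-of-the-envelope sketch using the rough transition diameters above and standard textbook surface gravities. This is just the simple scaling idea for illustration, not how the transition is actually modeled:

```python
# Back-of-the-envelope check that the simple-to-complex crater
# transition diameter scales roughly as 1/(surface gravity).
# Transition diameters (km) are the rough values quoted above;
# surface gravities (m/s^2) are standard textbook values.
bodies = {
    "Earth": (3.0, 9.81),
    "Mars":  (6.0, 3.71),
    "Moon":  (15.0, 1.62),
}

for name, (d_transition, gravity) in bodies.items():
    # If D is proportional to 1/g, then D * g should be roughly constant.
    print(f"{name}: D * g = {d_transition * gravity:.1f}")
```

The products come out within roughly 25% of each other, which is consistent with gravity dominating while target material and impact velocity nudge the numbers around.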

Additional Materials:


The topic I'm going to talk about today, and next time, is image processing. I know that might sound a bit boring, but a lot of pseudoscience out there relies on images, and a fair number of astronomy conspiracies say that NASA or scientists are processing the images, they're not the original ones, and if THEY would just give us the original images, we'd see "insert conspiracy of choice" -- usually things like faces on Mars, which will be a different episode.

This is also a "Part 1" episode, with "Part 2" getting into some more detail about image processing and analysis, and I'll say now that it'll use a subject that's been under significant discussion lately as a case in point. So, to get some idea of what will likely go into Part 2, check out my blog post responding to Mike Bara's criticism of my video on the lunar ziggurat.

This episode is going to be divided into a few parts, where I'm first going to talk about the basic process of taking and processing a black-and-white image with a telescope with a modern CCD, then how some different spacecraft cameras work, and then get into color processing. I'll end the episode with a few clips and then explain, based on what I've just covered, why they're wrong. The finer details of contrast, sharpening, filters, detail, noise, and dynamic range will be discussed in Part 2. That's also why a video companion merging these two episodes is in the works, and HOPEFULLY it will be completed in time for the next episode's debut near the end of next week.

This is all going to be focused on visible light, but the same basic idea works for nearby wavelengths like infrared and ultraviolet. The more extremes like radio and gamma rays are more complicated, but I have never seen pseudoscience related to them with astronomy ... though I'm sure they're out there.

Black & White Astronomical Ground Photography

The history of ground-based astrophotography starts with photographic plates over 100 years ago. Many major observatories that were around then still have these plates archived.

Moving forward, let's talk about the modern CCD. CCD stands for "Charge-Coupled Device." It's an array of light-sensitive parts that are often called "pixels" which is short for "picture element." I know that because I was reading a paper from a few decades ago that felt the need to define a pixel.

I'm not going to go into exactly how CCDs work, but I'll provide a link or two in the shownotes.

Each pixel in a black-and-white camera is supposed to be the same, and for visible light, they're generally sensitive to wavelengths from around 400 to 800 nm, which extends a bit into the ultraviolet and a bit into the infrared. But their sensitivity over those wavelengths varies, and they're almost always more sensitive toward the red end and less sensitive toward the blue end. The better the quality of the CCD, the more even the sensitivity is. And the better the quality of the CCD, the more sensitive it is.

It may surprise you to know that your consumer camera will only record, generally speaking, anywhere from 5% or so of the light that hits it for a cheap-o camera up to around 15-20% for professional ones. The fraction of light actually recorded is called the "quantum efficiency," often abbreviated "QE"; everything else is lost. The reason that amateur astronomers may spend $5000 on a single CCD chip is that it will record closer to 30-40% of the light that hits it. The reason that professional astronomers will spend $50,000 to over $100,000 on a single CCD chip is that it will record closer to 80-90% of the light that hits it. This means that, everything else being equal, if you have to wait 2 seconds to record something with your camera, a good astronomy CCD will record it in closer to 1/10 second. When astronomers are taking exposures that last for hours on end, you can start to understand why they shell out the big bucks. That's for black and white; I'll get into color in maybe 10-15 minutes.
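To see how those QE numbers translate into exposure times, here's a toy calculation; the QE figures are just the rough ones quoted above:

```python
# Everything else being equal, exposure time scales inversely with
# quantum efficiency (QE): a detector that keeps twice the light
# needs half the time to collect the same number of photons.
def exposure_needed(reference_time_s, reference_qe, new_qe):
    """Time a detector with new_qe needs to match a reference exposure."""
    return reference_time_s * (reference_qe / new_qe)

# A cheap consumer sensor at ~5% QE needing a 2-second exposure,
# versus a professional astronomy CCD at ~85% QE:
print(exposure_needed(2.0, 0.05, 0.85))  # ~0.12 s, i.e. roughly 1/10 second
```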

What I want to focus on is what we DO with the pictures we take. To do that, you first have to start to think about an image not as a mysterious pretty picture, but as a series of numbers. Each pixel has a numerical value that tells you how much light hit it, and therefore how bright it should be. If you can think of images like that, then everything else will make MUCH more sense. In other words, a "photograph" really is just that: a "graph," or display, of "photons," or light.
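If it helps, here's literally what "an image is a series of numbers" means, with a made-up 5x5 frame containing a single "star":

```python
# A tiny 5x5 "image" as nothing but numbers: each value is how much
# light a pixel recorded. The bright blob near the middle is a "star."
image = [
    [ 3,  2,  4,  3,  2],
    [ 2,  5, 18,  6,  3],
    [ 4, 20, 97, 22,  2],
    [ 3,  6, 19,  5,  4],
    [ 2,  3,  2,  4,  3],
]

# "How bright is the star?" is then just arithmetic on those numbers:
peak = max(max(row) for row in image)
total = sum(sum(row) for row in image)
print(f"brightest pixel: {peak}, total light in frame: {total}")
```

Once you see an image this way, "processing" it is just doing math on a grid of numbers.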

With that in mind, astronomers are very picky. If a group of pixels has recorded light from a star in one location on the CCD, and another group elsewhere on the CCD has recorded light from a different star, then astronomers want to be sure that ANY difference between the two is real. After all, if they publish a paper saying that Star A is half as bright as Star B and then they find out the reason was a hair got on the telescope and blocked part of Star B, then they're going to look pretty stupid.

To deal with that, we do two things. The first is called "darks" and the second is called "flats."

Darks are slowly making their way into some consumer cameras, especially the prosumer and pro-level Nikon cameras. This is the "Noise Reduction" setting, and the reason that it takes twice as long to take a picture in this setting is that it takes the photo you want and then it takes a photo with the shutter closed. It then subtracts the latter from the former.

Again, think of the pixels just as numeric values, and this will make more sense. The purpose of the darks is to determine what the camera sensor records when it's not supposed to record anything at all. There will be an inherent level of noise due to the basic laws of thermodynamics and the underlying quantum mechanics. There may also be some unevenness in the sensitivity of the detector, and there may be some hot and some cold pixels -- individual pixels that record things much brighter or much darker than they should. If you subtract this inherent level of "what the CCD is recording when it's supposed to be recording nothing" out of the photo you WANT to take, then you get rid of that stuff, mostly. Like if 10 photons of light were supposed to hit the pixel but it recorded 12, and the photo you took with the shutter closed read 2, then you subtract 2 from 12 and get 10, the real value.

Flats are another form of calibration image, and it's done with the shutter open through the exact optics that you're going to use for your real photos. Flats are taken of a purely evenly illuminated surface. For example, the twilight sky. Or, if you've ever been to a professional observatory, you may have noticed a big white circle with lights around it on the dome ceiling. Those are for "dome flats" which are the same thing.

The purpose here is to take a photo of something whose appearance you already know (after it's been dark-subtracted): a perfectly evenly illuminated field. If the photo you get back is not perfectly even, you know that there is a flaw in the optics, like a fingerprint, or a hair, or a speck of dust. To get rid of these, we divide the photo we wanted by the flat.
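Putting darks and flats together, the whole calibration boils down to one line of arithmetic per pixel: subtract the dark, then divide by the normalized flat. Here's a toy sketch with made-up numbers; real pipelines also dark-subtract the flats themselves and average many of them, but the idea is the same:

```python
# Minimal sketch of standard CCD calibration: subtract the dark frame,
# then divide by a normalized flat field. Toy 1-D "images" for clarity.
raw  = [12.0, 14.0, 6.0, 11.0]   # what the CCD actually read out
dark = [ 2.0,  2.0, 2.0,  2.0]   # shutter-closed frame: bias/noise floor
flat = [ 1.0,  1.0, 0.5,  1.0]   # evenly lit frame, normalized so a clean
                                 # pixel reads 1.0; pixel 2 is half-blocked
                                 # (say, by a speck of dust)

calibrated = [(r - d) / f for r, d, f in zip(raw, dark, flat)]
print(calibrated)  # [10.0, 12.0, 8.0, 9.0]
```

Notice the dust-dimmed third pixel gets restored to its true brightness by the flat division, which is exactly the point.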

Often during a single night of observing, more calibration images are taken than images of the object of interest.

I'm not going to get into why we subtract some and divide others, but trust me, the math works out. The important part here, if you get nothing else out of this discussion, is that astronomers don't just go willy-nilly and take a photo of something and say "This is it!" There are several steps involved to go from the original image to the final one, and all of these processing and calibration steps are in place for the sole purpose of best representing what the scene really looks like, and taking into account all of the individual minuscule problems that may be going on with your equipment.

Different Spacecraft Camera Types

Early Days: Faxes

In the early days of spacecraft imaging, the camera system onboard was literally a film camera, an on-board dark room, and then a scanner that read the image and converted it line by line to an analog signal that was transmitted back to Earth.

Now, I fully recognize that that was the state of the art, and people back then weren't stupid, but let's be serious here. There's a lot that could go wrong and interfere with getting a "true" representation of what the object looks like. First, you're using film, and every piece of film is a bit different, and every grain of film is going to be a bit different. You can't do darks for film to figure out what the biases might be.

Next, you're developing it onboard and then scanning it with an analog sensor. Data can drop out, something may go wonky with the sensor, etc. And then you're transmitting it back to Earth, pretty much the same way we do it today, and so we know that there are sometimes problems with receivers and transmitters.

This was a particular issue with the Viking orbiter that returned some of the first mapping images from Mars: lots of data drop-outs. These manifested as small black dots scattered throughout the images in random locations. One happened to make a feature look like a nostril, but that's another episode. Another example is the Lunar Orbiter imagery -- if you ever see photos of the Moon that have some obvious dark, parallel bands running through them that are evenly spaced, these are scan lines from Lunar Orbiter.

In sending the images back to Earth, the data are almost always compressed, meaning that some of the information is usually sacrificed in order to make the overall size MUCH smaller so that it can get back to Earth more cheaply and in less time. Just like with your internet, it costs more for NASA to have a faster speed, and so if you can download a 360x240 pixel video as opposed to one that's HD quality (1920x1080 px), it will sap up MUCH less bandwidth and hard drive space.
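The savings are easy to quantify: just compare the raw pixel counts of the two frame sizes mentioned above, before any compression is even applied:

```python
# Raw pixel counts for the two video sizes mentioned above:
small = 360 * 240        # 86,400 pixels per frame
hd    = 1920 * 1080      # 2,073,600 pixels per frame
print(hd / small)        # 24.0 -- each HD frame carries 24x the pixels
```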

Once the images got here, the signal was recorded to magnetic tape and then these were played back for imaging onto film positives, usually blown up, which meant you had to combine several of these so-called "framelets" to get the original image. I was just reading a paper from 1970 where the authors describe doing this for Lunar Orbiter images.

Again, the process here isn't as important as the idea that there's A LOT that goes into these and even a "raw" image from a spacecraft isn't the same as saying "original," nor the same as saying, "this is exactly what the scene looks like."

Modern: CCD Array

That's not to say that modern systems aren't prone to their own problems. The first type of modern spacecraft camera is the same as an astronomical CCD camera that I talked about earlier -- it's an array of pixels, usually in a square, that records light, is read out, and transmitted to Earth. Just like we do with telescopes on Earth, extensive dark frames and flats are taken to correct for different sensitivities across the array and for abnormalities in the optics.

But there are more steps, at least for imaging planetary surfaces. The main one is that after all those corrections are made, a geometric correction needs to be made. Let's say, for example, you photograph your kitchen sink. You're standing in front of your sink, and you snap a photo.

Problem is, that photo doesn't really represent the layout of the sink. If you were to draw a grid all over your kitchen, including into the sink, and you took that photo, you'd find that the grid lines are not perfectly parallel and perpendicular; there would be some wavers as you go over the topography of your sink.

The same goes for photos of planetary surfaces. When we photograph over mountains and craters and the camera is not looking directly, exactly, straight down, you get the same issue. We use models of the topography and some complicated mathematics that luckily someone else besides me has figured out to correct for these. If you've ever seen non-square modern photos of planetary surfaces, and the borders look kinda wavery and there's an uneven black border, that's why. That image has been geometrically corrected so that it fits exactly right into a perfect latitude/longitude grid. Or at least as close as we can get it.
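In code, a geometric correction is basically a lookup: for every pixel of the corrected, map-projected output grid, figure out which pixel of the raw image belongs there and copy it over. Here's a toy nearest-neighbor sketch where the "distortion model" is just a made-up one-pixel-per-row shear; real corrections derive the mapping from camera pointing geometry and a topography model:

```python
# Toy sketch of a geometric correction via nearest-neighbor resampling.
raw = [
    [1, 2, 3, 4],
    [5, 6, 7, 8],
    [9, 10, 11, 12],
]
rows, cols = len(raw), len(raw[0])

def source_pixel(out_row, out_col):
    """Hypothetical distortion model: each row is sheared one pixel right."""
    return out_row, out_col - out_row

corrected = []
for r in range(rows):
    new_row = []
    for c in range(cols):
        sr, sc = source_pixel(r, c)
        if 0 <= sr < rows and 0 <= sc < cols:
            new_row.append(raw[sr][sc])
        else:
            new_row.append(0)  # no data here -> the black border you see
    corrected.append(new_row)

for row in corrected:
    print(row)
```

The zeros that pile up along one side are the same kind of uneven black border you see around real map-projected images.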

Modern: Push-Broom

The second main type of spacecraft camera is not a square nor rectangular array like we've been talking about so far, but it's something called a "push-broom" detector or array. This is usually just a single line of pixels, and as the spacecraft moves over the surface, the pixels continuously read in the data to generate a very long image. Several of the cameras in orbit of Mars work this way, and the narrow angle camera on the Lunar Reconnaissance Orbiter works this way, too.

That means that to reconstruct the image, you have to figure out exactly where everything was when each line of data was taken; otherwise you start to smear things, and the geometric corrections will then smear them even more.
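Here's a toy sketch of why pointing knowledge matters for a push-broom camera; the detector and the position error are both made up:

```python
# Sketch of push-broom imaging: a single line of pixels reads out over
# and over as the spacecraft flies, and the lines are stacked into a
# long image. Mislocate a line during reconstruction and features smear.
def read_line(t):
    """Pretend 6-pixel detector line; the scene (a bright feature at
    column 3) doesn't change during the pass, so t is unused here."""
    return [10 if col == 3 else 1 for col in range(6)]

# Correct reconstruction: just stack the lines in order.
image = [read_line(t) for t in range(4)]

# Reconstruction with a position error on line 2: that line is placed
# one column off, so the straight feature gets a jog (a smear/offset).
bad = [row[:] for row in image]
bad[2] = bad[2][1:] + [1]   # shift line 2 left by one pixel

column_of_feature = [row.index(10) for row in bad]
print(column_of_feature)  # [3, 3, 2, 3] -- the feature jogs on the bad line
```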

Again, we've worked all this stuff out, so we pretty much know how to do it right, but the point in mentioning all this is to give you an idea of all of the work that goes into creating the final image that you see. And we're still talking black-and-white photos, here. So any time you hear someone claim, "But they processed the photo and that's why it doesn't show a face!" at the very least, by this point, you can understand that the statement is a 100% non sequitur.

Modern: Scanning Old Film

A final part of modern imagery is old imagery. I'm working on a paper where I'm attempting to do some comparison between my crater counts near the Soviets' Luna 24 landing site with some that were published back in 1975. They list the Apollo metric camera photographs they used to do crater counts. I was able to find those, online, in digitized format.

Scanning these old photos in is, I think, of huge importance for both historic and current research purposes, such as the project I just mentioned. In doing so, though, you're always going to introduce some other problems.

For example, despite their best efforts I'm sure, there are hairs and lint and dust that are on these scans. If I were conspiracy-minded, I'd say that it's a big giant worm on the Moon that's 2 miles long. In reality, I know that you're always going to have these and I ignore them as a known anomaly that has a mundane explanation. We'll get into some conspiracy with a certain hair in another episode, though. *cough*ApolloCMoonRock*cough*

It may be obvious to most of you that you'll run into this problem, especially if any of you have worked to scan in your own family photos and found that no matter how much you try, you will always have some hair/dust/lint on the photo that you put into the scanner. I'm dealing with this now, too, in a project for my parents' upcoming 40th wedding anniversary.

Releasing to the Community and to the Public

A final step in modern image processing is of course to release the images to the community and to the public. I'm not familiar with all the repositories for space telescopes, but at least for spacecraft images, planetary scientists today access most of the NASA images through a centralized service we call the PDS, or Planetary Data System. We go in and get the data, and it's usually in a raw, somewhat compressed, and almost always unprocessed-almost-as-it-was-sent-back-to-Earth state.

From there, we use tools such as ISIS, or Integrated Software for Imagers and Spectrometers, maintained by the USGS (U.S. Geological Survey), to process them as I described above. ISIS has scripts that convert the image to the ISIS standard format, and that can dark-subtract and flat-field and map-project and do all this other fancy stuff. And then it outputs to your normal image format of choice. And to all you astrophysicists out there, it's MUCH easier to use than IRAF, so ¡ha! For you non-astrophysicists, IRAF is the Image Reduction and Analysis Facility, which is what you generally use for processing astrophysics images from telescopes. And it is a huge pain to use. FLPR! FLPR! Okay, enough with the inside jokes ... and to those who don't know, do a web search for "IRAF FLPR".

Anyway, that's how most professional planetary scientists who know what they're doing get access to the raw data. I do this all the time, in fact, I'm running some ISIS processing scripts as I write and record this episode to process some images from the Lunar Reconnaissance Orbiter.

Another way that professionals access data is by what we call "derived" data products, and this is stuff like a mosaic of the Moon that was generated from thousands of images and put together by the imaging team. These are often made to try to be geometrically correct and not necessarily to look pretty. For example, if you look at the lunar mosaic put out by the Lunar Reconnaissance Orbiter Camera team, it's made up of thousands of images and is probably accurate to within about 10 pixels, but there are banding issues and shadows don't always match up. The THEMIS mosaic of Mars prior to the latest one had some ghosting due to poorly aligned images, meaning that some features appeared doubled, offset by a few kilometers. If I were conspiracy-minded, I'm sure I could think of something that was hiding.

But to a planetary scientist, we care about where the features are, not really if the shadows of a hill are consistent between the two images that make it up. But, that's a kind of anomaly that some will point to and say that we're trying to hide something.

And that brings us to releasing images to the public. This is often very different from releasing images for scientists. The public, as I've discussed numerous times with Apollo Hoax episodes, generally likes to see pretty pictures without all of the image anomalies and imperfections and shadows not quite lining up that the professionals couldn't really care less about. To do this, to make the images for press, we will often sacrifice some of the exactness for prettiness. I'm willing to go into an image and use the clone stamp or paint brush on a piece of hair or shadow to get it to look better for press. I did that recently to remove a seam where two images were mosaicked together slightly wrong.

And finally, the images ARE released with the press release or on someone's website, and just as with anything else, they're usually saved in a lossy file format like JPG, often with high compression, meaning that a significant amount of information is lost. This introduces artifacts when software, like a web browser, tries to reconstruct the original image from the compressed version to display it to you. JPG artifacts are the most common, and these appear as blocks several pixels on a side with seemingly odd colors or brightnesses.
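You can get a feel for this with a crude stand-in for block compression. Real JPG quantizes 8x8 blocks of a cosine transform; here each 2x2 block is just replaced by its rounded average, which is much simpler but produces the same kind of blockiness. A deep-shadow region that looks pure black comes back subtly blocky, and cranking up the brightness makes the blocks leap out as crisp, geometric-looking shapes:

```python
# Crude stand-in for lossy block compression (real JPG quantizes an
# 8x8 DCT; here each 2x2 block is replaced by its rounded mean).
def block_compress(img, block=2):
    out = [row[:] for row in img]
    for r0 in range(0, len(img), block):
        for c0 in range(0, len(img[0]), block):
            vals = [img[r][c] for r in range(r0, r0 + block)
                              for c in range(c0, c0 + block)]
            mean = round(sum(vals) / len(vals))
            for r in range(r0, r0 + block):
                for c in range(c0, c0 + block):
                    out[r][c] = mean
    return out

# A deep-shadow region: values 0-3 out of 255, invisible to the eye.
shadow = [
    [0, 1, 2, 3],
    [1, 0, 3, 2],
    [3, 2, 0, 1],
    [2, 3, 1, 0],
]
compressed = block_compress(shadow)
brightened = [[min(255, v * 60) for v in row] for row in compressed]
for row in brightened:
    print(row)
```

The random-looking shadow noise comes back as clean rectangular blocks, and brightening turns those blocks into obvious "structure" that was never in the scene.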

Most people would dismiss these as normal image compression artifacts, but anomaly hunters do not, and I'll specifically address this in a bit with an example from none other than Richard Hoagland.

Color Processing

One of the final steps to this already long and still only Part 1 episode is how we get color.

Your Camera

Your consumer camera - the camera in your cell phone or watch or pin or actual mono-tasker camera - is a color sensor. Each group of four pixels is made up of three colors, a red pixel in the top left, a green pixel in the top right, a green pixel in the bottom left, and a blue pixel in the bottom right. This is called a Bayer pattern. When your camera takes a photo, it then automatically interpolates all those alternating reds, greens, and blues together to give you what it THINKS the red was at EVERY pixel, the green was at EVERY pixel, and the blue was at EVERY pixel.

Usually it does a good job 'cause things aren't discontinuous at the pixel level for most applications. Your great Aunt Steph's cousin's mother's former college roommate's maid of honor's wedding dress is generally that ugly robin egg blue all over, and so your camera can approximate it well without knowing the exact values at every pixel.
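Here's the idea of demosaicing in miniature, with a single made-up 2x2 Bayer cell. Real algorithms interpolate across neighboring cells and are much cleverer; this just shows what "interpolating the missing colors" means:

```python
# Toy sketch of a Bayer (RGGB) cell. Each sensor pixel records only one
# color; software then estimates the two missing colors at every pixel.
mosaic = [  # values each pixel recorded, laid out R G / G B
    [100, 60],   # R  G
    [ 62, 30],   # G  B
]

red   = mosaic[0][0]
green = (mosaic[0][1] + mosaic[1][0]) / 2  # average the two green pixels
blue  = mosaic[1][1]

# In this crude version, every pixel in the cell gets the same (R, G, B):
rgb_cell = (red, green, blue)
print(rgb_cell)  # (100, 61.0, 30)
```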

Astronomical Cameras: Filters

But astronomers need to do better. We're taking photos of stars or asteroids or objects in general where the information we want may only be one pixel across -- or really less than one pixel, but it just gets recorded on one pixel. If we want color, then we use a black-and-white detector, and we use filters.

So I might take a CCD and use five different filters in order to get what I want. I might use a broad blue, broad green, broad red filter, and then a very very narrow filter designed just to record the transition of Oxygen in green light, and another designed for sodium. And going back to the beginning, because I'm changing the optics with filters, I'd need to do flats for each one.

With spacecraft that do color around planets, the colors are generally infrared, red, green, and blue. Some have more, some have less, some have different ones. For example, the HiRISE camera on the Mars Reconnaissance Orbiter, which takes photos at around 25 cm/px, has a red filter that's actually red through near-infrared, a near-infrared filter that just cuts out everything visible and below, and a blue-green filter that cuts out everything yellow and above. Meanwhile, the wide-angle camera on the Lunar Reconnaissance Orbiter takes photos with 7 different color filters, two of them UV.

How you actually combine these all can lead to a whole host of problems with conspiracists ...

Combining the Colors

When we combine them, astronomers are most interested in getting useful information out of the image. If I take a photo of a galaxy with two filters - say, a blue one and a very narrow green one for oxygen - then when I combine the two to look at visually, I may assign the blue filter to a blue color and the green filter to a red color. That way, where I see bright red standing out from the blue, I'll know that I'm looking at a planetary nebula, because that's what that filter is used for.
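In code, "assigning a filter to a color" is nothing more than choosing which display channel each filter's numbers go into. A toy sketch, with made-up values:

```python
# Keying filters to display colors: a broad blue filter goes to the
# blue channel, and a narrow oxygen filter is deliberately shown as
# red so that oxygen-bright regions pop out.
blue_filter   = [[10, 12], [11, 90]]   # broad blue-light image
oxygen_filter = [[ 1,  2], [ 2, 80]]   # narrow oxygen-line image

# Build an (R, G, B) display pixel from the two filter images:
display = [
    [(oxygen_filter[r][c], 0, blue_filter[r][c]) for c in range(2)]
    for r in range(2)
]
print(display[1][1])  # (80, 0, 90): bright in BOTH filters, so it pops
```

Nothing about that mapping is "faking" the image; it's a deliberate choice to make the interesting physics visible.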

If I'm looking at a color image of asteroid 4 Vesta, the Vesta team has keyed the colors for that particular image to make the different mineral types they're interested in stand out. The same goes for MESSENGER at Mercury.

The same goes for photos of Mars and really for anything else.

It is ONLY when we release these photos to the public that we try to key the colors more to what the human eye would actually see, and even then only if they're anywhere close to what the eye would see in the first place. Hubble does this a lot. The wide-angle camera from the Lunar Reconnaissance Orbiter tries to do this sometimes.

HiRISE very rarely tries to do this.

The Mars rover team really tried to do this a lot. The new rover, Curiosity, actually has a Bayer pattern built in, just like your camera. After calibration with known objects, like a flag or color guide on the craft itself, we'll get true-color images of Mars, though there are people out there, like Mike Bara, who say the red is fake. The Curiosity rover also has a few other filters it can use, including some narrow filters that let in just a single color of light, a near-infrared filter, and a sun filter.

For astronomers taking photos of moving objects in the sky, colors get more complicated. Say you're the Cassini craft taking a color image of Saturn. You put your red filter in, take the image, green, take the image, blue, take the image. During that time, Saturn hasn't moved in your field of view, but a moon has. How do you combine the colors correctly for Saturn AND the moon? You can't unless you go into something like Photoshop.

This very thing happened in late 2010. A UFO guy accused the Cassini team of hiding a UFO because of a released image of two of Saturn's moons, each showing a seemingly simple crescent of shadow. But if you took the released press image into Photoshop and boosted the brightness, you saw an area of the moon that was in shadow where someone had taken a brush in Photoshop or some other image program and painted pure black. Thus, they were accused of hiding cities on the night side of the moon, or something like that. What actually happened was that, in the time Cassini took to take the red, green, and blue images, the moons moved relative to each other; so to get them to look right, you have to take the frames into an image-processing program and get rid of the color ghosting. Nothing sinister, nothing anyone would do if they're using the images for science, but something to do before putting out a press release.
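A toy sketch of that ghosting, with a made-up one-dimensional "image" strip where the moon is just a single bright pixel that drifts between exposures:

```python
# Filter-sequence ghosting: Saturn stays put between the red, green,
# and blue exposures, but a moon drifts one pixel per exposure. Stack
# the channels naively and the moon shows up in three different
# places, one per color -- the ghosting you must clean up by hand.
width = 8
def frame(moon_column):
    """1-D strip: the moon is a bright pixel at moon_column."""
    return [100 if col == moon_column else 0 for col in range(width)]

red, green, blue = frame(2), frame(3), frame(4)  # moon drifts rightward

ghosted = [(red[c], green[c], blue[c]) for c in range(width)]
moon_positions = [c for c, px in enumerate(ghosted) if any(px)]
print(moon_positions)  # [2, 3, 4]: three offset colored copies of one moon
```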

Quick Recap

Before moving into the two simple conspiracy claims, I want to recap the important points that I've talked about:

First, astronomers try to take into account EVERYTHING going on with their detectors and optics and the viewing geometry. This means that every photo is "processed" and that any claim that a photo has been processed and "if we just saw the raw unprocessed version my anomaly of choice would show up!" is false. An unprocessed photo looks like crap. Usually.

Second, every processing step can result in new anomalies popping up. There are many steps where something weird can happen, and these days where we have literally tens of terabytes of data -- that's millions of images -- everything is automated. You don't have someone checking every image in detail to make sure that it was done correctly.

Third, older generation images are especially prone to anomalies that most people would dismiss as exactly what they are, things like data dropouts, image noise, dust/scratches/lint/etc.

Fourth, every photo that you see that's color is a reconstruction from either color filters or specific color-sensitive pixels. We can try to approximate it to what the human eye saw by loads of calibration steps, but everything is to some extent false color. That said, photos that are said to be close to what the human eye would see, really are. Mars really is red.

Fifth, when we release images through press release, we don't care quite as much about the scientific integrity. If there were a giant hair in a scan of an image I used for science, before that went into my press release, I would go into Photoshop and use the Content-Aware editing tool and remove that hair. The result might show a faint outline of something, or have weird noise properties that then someone searching for a conspiracy would go in and find. That doesn't mean I'm trying to hide a UFO nor a glass tube on Mars, it means that I don't want a hair in an image plastered across the front of the New York Times. Not that any of my stuff has ever been in the New York Times, but a budding young scientist can dream ...

Similarly, when I release that image, I'm going to release it in a format that news organizations want to deal with, namely a relatively small, compressed, JPG. If you blow up that JPG, anomalies will be found.

Crazy Claims

Which really now brings us to two examples of claims related to what I've just talked about.

Hoagland and LCROSS Cities

The first has to do with NASA's LCROSS mission, which impacted the lunar south pole back in 2009 to try to throw up a plume of material so we could determine whether any water molecules were present.

Richard Hoagland made a rather large, dramatic lead-up to the event where he claimed this was the real space program who found out about the secret space program's cities at the lunar south pole, and we were mad and so this was us literally nuking them and the whole water thing was a cover. Even after it was over and there was no visible explosion, Richard still managed to find apparent evidence of his claim in the small NASA images that were released.

[Clip from Coast to Coast AM, October 16, 2009, Hour 2, starting around 13:45: Richard C. Hoagland talks about the geometry and structure ... of compression artifacts (see the above links to his website).]

Or, to quote from his website, and I'm going to do my best impression of how he speaks:

The newly-released NASA frames (above) looked "way over-exposed," yes ... but also, after being properly reversed--

They contained regions which were definitely ... intensely ... geometric.

And, when brightened just enough in the computer to allow one to see "down into the shadows" ... clearly, there was definitely "something" there; like, the outlines of some kind of "mega-scale geometric engineering ... dimly-lit, deep under 'an over-exposed, lunar surface layer' ...."

What is clear is that this is a case of JPG compression, followed by brightening the perfectly black shadowed region until the compression artifacts show. It's that simple, and there goes all of Richard's evidence for this. Either he doesn't know about JPG artifacts, or he's hoping his audience doesn't.
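For the curious, the whole failure mode is easy to reproduce. The following is a toy sketch, assuming the Pillow imaging library is installed; the image, quality setting, and brightening factor are all invented for illustration:

```python
from io import BytesIO
from PIL import Image, ImageDraw

# Toy stand-in for the released frames: a bright "over-exposed" disk
# against a shadow that is genuinely, perfectly black (every pixel 0).
img = Image.new("L", (64, 64), 0)
ImageDraw.Draw(img).ellipse((4, 4, 36, 36), fill=255)

# Round-trip through lossy JPEG, as a small press-release image would be.
buf = BytesIO()
img.save(buf, format="JPEG", quality=30)
jpg = Image.open(BytesIO(buf.getvalue()))

# "Brighten the shadows": stretch the low pixel values hard.
bright = jpg.point(lambda v: min(255, v * 8))

# Every pixel that was exactly 0 before compression ...
shadow = [(x, y) for x in range(64) for y in range(64)
          if img.getpixel((x, y)) == 0]
# ... and how many of them are no longer 0 after the JPEG round-trip.
artifacts = sum(jpg.getpixel(p) != 0 for p in shadow)
print(artifacts > 0)  # True: block/ringing artifacts now live in the "empty" shadow
```

The compressed shadow is no longer empty: JPEG's 8x8 block transform leaks faint, geometric-looking structure into the black regions near bright edges, and the brightening step drags that structure up into plain view.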

To quote from reporter Dwayne Day, "There's an old saying that when you hear hooves, think horses, not zebras -- and certainly not unicorns or centaurs. Hoagland sees dust specks and thinks ancient giant extraterrestrial ruins." Or, in this case, he sees JPG artifacts and thinks "secret space program giant lunar infrastructure."

Since I've already talked about the basic claim of false color on other planets, I'm going to go into a short case study of a claim that deals with things I haven't specifically talked about in this episode:

Andy Basiago and Mars Anomalies

I've written a few blog posts about this over the past few years, so if any of you are long-time readers, you may recognize the name of Andy Basiago. Most recently, he's the guy who claimed that he went to Mars along with a young Barry Soetoro a few decades ago, and young Barry turned to him and told him that he would be president one day.

No, I'm not going to address that stuff in this episode, but before the whole chronovisor and Project Pegasus stuff that Basiago came up with, he made headlines in late 2008 as a lawyer who claimed that National Geographic wouldn't publish his ground-breaking analysis of one of the photos from the Mars Exploration Rover "Spirit" where Basiago claimed to find numerous living animals, including the famous "Bigfoot" on Mars.

Besides the obvious pareidolia - seeing familiar patterns in randomness - what this also points out is another aspect of image processing that many pseudoscientists tend to rely on: they blow the image up in size and then read into the pixelated noise. This is something familiar to anyone who has watched any Star Trek series, or CSI: Picard or Kirk or one of the other two captains who killed the franchise tells their conn officer to put something on the view screen and then magnify, and magnify, and magnify. The image is crystal-clear at each magnification level.

With real images that we take, 100% is 100%. There is no extra, hidden data that can be gathered between the pixels. And I'm aware that Richard Hoagland, Mike Bara, and others think otherwise ... I'll discuss this more in detail in the companion video and probably next episode.

But, if you insist, image software will be happy to comply with your request and make the image bigger. It does this through various interpolation schemes, but that always means it is making up the missing data based on the real data: it takes two points and tries to fit some sort of curve through them based on those points and the ones around them. It guesses. It is not ADDING new information to what was already there.
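To make the "guessing" concrete, here is a minimal pure-Python sketch of linear interpolation on a one-dimensional row of pixels. Real software typically uses fancier bilinear or bicubic fits, but the principle is identical:

```python
def upscale_linear(samples, factor):
    """Upscale a 1-D row of pixels by linear interpolation: every new
    value between two real samples is just a weighted average of them --
    a guess, not recovered detail."""
    out = []
    for i in range(len(samples) - 1):
        a, b = samples[i], samples[i + 1]
        for step in range(factor):
            t = step / factor
            out.append(round(a * (1 - t) + b * t))
    out.append(samples[-1])
    return out

# Two real measurements, blown up 4x: the three values in between
# were never observed by any camera.
print(upscale_linear([10, 200], 4))  # [10, 58, 105, 152, 200]
```

The 58, 105, and 152 look like perfectly plausible pixel values, but they are pure invention; the camera only ever recorded the 10 and the 200.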

Inevitably, when you do this, you will get an image that looks blocky because even though the interpolation tried to smooth things out, you're still fundamentally blowing up individual pixels.

So not only are you making up intermediate information between pixels, but you're also increasing all that noise that we talked about earlier, all those potential tiny defects that couldn't exactly be accounted for. That one hot pixel that we didn't dark-subtract becomes a giant glowing eyeball in the newly blown-up image. That data dropout becomes a nostril in a giant face. That scan line becomes an underground subway tunnel that collapsed.
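The hot-pixel case is easy to see with the crudest scheme of all, nearest-neighbour, where each original pixel simply becomes a solid square. This is a toy sketch; the 3x3 patch and its values are invented:

```python
def upscale_nearest(img, factor):
    """Nearest-neighbour blow-up: every original pixel becomes a
    solid factor-by-factor block, so the result is inherently blocky."""
    return [[img[y // factor][x // factor]
             for x in range(len(img[0]) * factor)]
            for y in range(len(img) * factor)]

# A 3x3 patch of dark sky with one hot pixel the dark-subtraction missed.
patch = [[0, 0, 0],
         [0, 255, 0],
         [0, 0, 0]]

big = upscale_nearest(patch, 8)

# That single bad pixel is now a solid 8x8 "glowing" square: 64 pixels of
# apparent structure manufactured from one detector defect.
print(sum(v == 255 for row in big for v in row))  # 64
```

One defective detector element has become a crisp, bright, suspiciously square "object" sixty-four pixels in size.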

That's what Basiago did in this case, and it's what a lot of people tend to do.

That's not to say you can never blow an image up a bit. If your cropped photo of your daughter's horseback competition is only suitable for a 4x6" print, you can increase the size a bit and get a 6x9. No one is really going to notice. But when you blow an image up 4000% and then say you've found staircases on Mercury, when you're really just looking at individual pixels, that's a problem.

How to Spot a Potential Fake or Forced Anomaly

I'm going to end this episode's long main segment with an eye on Part 2, leaving you with some advice on ways to spot a potential fake or forced anomaly. First, does it actually pass the so-called "smell test?" As in, does it look fake? The ziggurat on the moon from my recent "discussions," for example, looks fake at first glance. It does not even pass this most basic of qualitative checks.

But beyond that, there are a few other questions. Such as, does the image have a scale bar? If not, and they're claiming that this is a 50-mile-wide feature, you have no way of knowing if that's true or not. That was the case with the "bigfoot on Mars" -- the feature was actually on the order of centimeters tall.

Another is whether they tell you what the original image is and what the original source is. Again with this lunar ziggurat thing, the original image was not described, but it was fortunately in the file name. But we were also not told where in the original the anomaly was, nor where the version with the ziggurat came from; Mike Bara has now come out and stated that he got it from a guy on the "Call of Duty Zombies" forum. Not exactly the most credible of locations. That should set off some red flags, but he believes that one is 100% real and the NASA one is fake.

More red flags go up if any of the other stuff I've mentioned throughout this episode crops up: claims of color fakery, or ANY claims of geometric features or patterns, which are very often attributable to pixelation and/or compression artifacts. Also check for lint, or look elsewhere in the image for things like data drop-outs. If the feature is a single pixel and the claim relies on it being pure black or pure white, it's probably just a standard hot or cold pixel.
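As a rough illustration of that last check, here is a toy heuristic for flagging a lone saturated pixel. The function name, threshold, and patches are all invented for this sketch, not any mission pipeline's actual test:

```python
def looks_like_hot_pixel(img, x, y, saturated=255):
    """Toy heuristic: a lone pixel at full scale whose eight neighbours
    are all much darker is more likely a detector defect than a real,
    resolved feature (which would spread over several pixels)."""
    neighbours = [img[y + dy][x + dx]
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                  if (dx, dy) != (0, 0)]
    return img[y][x] >= saturated and max(neighbours) < saturated // 2

# One blazing pixel in an otherwise dark patch: suspicious.
dark_patch = [[3, 5, 2],
              [4, 255, 6],
              [1, 2, 3]]
print(looks_like_hot_pixel(dark_patch, 1, 1))  # True

# The same bright value next to another bright pixel: could be real structure.
bright_patch = [[3, 250, 2],
                [4, 255, 6],
                [1, 2, 3]]
print(looks_like_hot_pixel(bright_patch, 1, 1))  # False
```

The point isn't the specific threshold; it's that a real feature imaged by a real optical system almost never lives in exactly one pixel.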

In the end, every claim of an anomaly that I've seen has always had a more mundane explanation. But, we'll keep looking.

Comments to date: 1.

expat   California

8:14am on Thursday, August 9th, 2012

Very good. I'm bookmarking this as a resource to return to when I'm writing about space imagery.
