
Episode 48: Image Processing and Anomalies, Part 2

Download the Episode

Recap: So many pseudoscientific claims rely on photographic evidence, and astronomy is no exception. In this Part 2 episode, I build on the basics from Part 1 and talk about dynamic range, noise, rotation, resizing, and adjusting the range of colors displayed. In almost every case, the computer is estimating the final result based on the original image, and anomalies can easily form.

Episode 47's puzzler was an open-ended discussion.

There was no puzzler for this episode.

Q&A: This episode's question comes from Belgarath who asked on my blog: "I've been wondering ... there are several of the pictures coming back from [the new Mars rover] Curiosity which are "White Balanced." What exactly is that and why do they do it?"

I wasn't initially going to use this for a Q&A, but it's timely: in the past few days I've come across a lot of people asking about it on news websites and conspiracy sites, both for the rover images and for HiRISE. So, since I'm assuming that at least the majority of you have seen these images, and it fits right in with image processing, here we go.

This gets back to the discussion in the last episode where I talked about color processing, and how, when you have more than one filter in front of the detector, you can assign any display color you want to each filter's image when you build the final composite. If you record red, ultraviolet, and yellow, then when you make a color composite in the computer you can make those blue, green, and red if you want. Whatever works to bring out the important features for your purpose.

With HiRISE, which is the High Resolution Imaging Science Experiment on board the Mars Reconnaissance Orbiter, or MRO (yes, astronomers have lots of acronyms), there are three filters. The entire detector is covered by a red-through-near-infrared filter that lets in 570-830 nm light. In the middle, it also has a near-infrared filter that cuts out all wavelengths below 790 nm, and what they call a "blue-green" filter that cuts out everything above 580 nm.

So HiRISE cannot do "true-color" as we would think of it, and most of the images released are black-and-white from the red-infrared pixels. But, since it does have three different filters, it can do color strips down the middle of the black-and-white image, and a lot of what you've seen over the last week or two from HiRISE of the Curiosity landing site are these three-color composites.

I don't know for certain, but I think most of them assign the blue-green filter to blue, the red-infrared to green, and the infrared to red when doing three-color composites. The way to interpret this is that anything that then appears bright blue in the images is much more visible in the light that humans can see, while anything that's bright red is invisible to humans but brighter in the infrared.
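For those who like to see this concretely, here's a minimal sketch in Python (using NumPy) of the kind of channel assignment described above. The arrays and the filter-to-channel mapping are illustrative assumptions on my part, not real HiRISE data or the HiRISE team's actual pipeline:

```python
import numpy as np

# Stand-in single-band images (2-D arrays of brightness values), one per filter.
# These are random placeholders, not real HiRISE frames.
bluegreen = np.random.rand(200, 200)   # "blue-green" filter
red_ir    = np.random.rand(200, 200)   # broad red/near-infrared filter
near_ir   = np.random.rand(200, 200)   # near-infrared filter

# Assign each filter to a display channel: near-IR -> red, red/near-IR -> green,
# blue-green -> blue. The assignment is a choice, made to bring out features.
composite = np.dstack([near_ir, red_ir, bluegreen])   # shape (200, 200, 3)
```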

Mars has a thin layer of rusty red dust across its surface, but below that, in most places, is basalt, which is volcanic rock. Basalt is generally neutral in color, a blackish rock, as opposed to the red dust. So when you're color-combining and you boost the blue, anything that has recently disturbed the surface, like a fresh impact crater or, say, a rover landing, is going to appear bright blue in these composites as opposed to the more neutral orange.

Similarly, if any of you have seen the strip that shows one possible path of Curiosity up Mount Sharp, it goes from orange rock to bright blue sand dunes. That rock is more sedimentary rock with the dust veneer, while the blue areas are sand dunes made out of basaltic material.

Similarly, there are the images from Curiosity. As I mentioned last time, Curiosity does have a camera with a Bayer pattern built in, so we can get true red-green-blue images sent back just like from your consumer camera. Mars has a layer of dust not only on the ground but also in the atmosphere. It's like constantly shooting photos through a haze, so the photos are going to have a red cast to them.

When NASA refers to the photos as "white balanced," that's adjusting the colors so that something that's supposed to be pure white - like a sticker on the rover - looks white in the photo. That's NOT how it would look on Mars, but it IS how it would look if it were on Earth. That helps geologists to identify rocks. When I took a geology lab way back a few years ago in undergrad, one of the first ways we learned to identify different rock types was simply by color.
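As a rough illustration of the idea only (this is not NASA's actual calibration procedure), here's one very simple way to white-balance in Python, assuming a floating-point RGB image and a patch of pixels covering something that should be pure white:

```python
import numpy as np

def white_balance(img, patch):
    """Scale each color channel so a reference patch that should be white
    comes out neutral. img: float array (rows, cols, 3) with values 0-1;
    patch: (row_slice, col_slice) locating the white reference target."""
    target = img[patch].reshape(-1, 3).mean(axis=0)   # average R, G, B over the patch
    scale = target.max() / target                     # boost the dimmer channels
    return np.clip(img * scale, 0.0, 1.0)

# Example with a made-up reddish-cast image and a made-up patch location:
rng = np.random.default_rng(0)
photo = rng.random((50, 50, 3)) * np.array([1.0, 0.7, 0.5])   # dusty red cast
balanced = white_balance(photo, (slice(10, 15), slice(10, 15)))
```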

In the end, this gets back to just a simple understanding of what "true-color" means, and that it's fake. Mars is red. But when you take a photo, you can change the colors to be whatever you want to bring out the features you want.

Additional Materials:

Transcript

Recap

I know that Part 1 just came out, but I think it's important for this episode to give a bit of a recap.

That episode focused on explaining the basic process that MUST be gone through to produce the final images that we all know and love from telescopes and spacecraft. I covered things like detector sensitivity, color processing, a teensy bit of how spacecraft get images back to Earth, and releasing images to the public, and I talked about a few crazy claims along the way.

The purpose of all of that, as far as pseudoastronomy goes, gets back to a fundamental point: an image that scientists produce is, in terms of brightness and geometry, the BEST approximation we can make of what the feature actually looks like; the color is often false; and along every step of the way, some anomalies are removed while others can crop up.

This episode is going to focus on some of the finer points of image processing, misunderstandings that people have about them, and how these misunderstandings have resulted in claims of artificial features on, usually, the Moon and Mars.

I should also mention that there is a companion video in the works for this, but as I expected, it's taking a bit longer to make than I'd hoped, and my day job is sapping my time. Hopefully it will be released relatively soon. The script is written, the voice recorded, it's just a matter of compositing everything to make some sort of sense. And while it's a companion video, it's meant to be independent of both these episodes and to demonstrate some of the more visual things.

Finally, before we get started, I want to remind you that this episode will make a lot more sense if you think of images as a bunch of numbers. Every pixel of every image has a numerical value to it, and that value tells you how bright it should be. You can think of it as how many photons it recorded, or how we stretched it for display, but think of it as numbers. For color, if you're working in red, green, and blue, then every pixel instead of ONE number has THREE, each one telling you how much red is at that pixel, green is at that pixel, and blue is at that pixel. It's all numbers, and if you can think that way, this episode will be A LOT easier to understand.
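If it helps to see that in code, here's a tiny illustrative example (Python with NumPy, with values invented purely for the demonstration):

```python
import numpy as np

# A grayscale image is just a grid of numbers. In 8-bit, 0 is black, 255 is white.
gray = np.array([[  0,  64, 128],
                 [191, 255,  32]], dtype=np.uint8)   # a 2x3-pixel "image"

# A color image has THREE numbers per pixel: how much red, green, and blue.
color = np.zeros((2, 3, 3), dtype=np.uint8)
color[0, 0] = [255, 0, 0]   # top-left pixel: pure red
color[1, 2] = [0, 0, 255]   # bottom-right pixel: pure blue
```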

Dynamic Range

And so, a natural first thing to talk about is dynamic range, which is the range of those numbers in an image.

In this sense, the dynamic range can be thought of as two different things: First, how much light is recorded, and second, how much light is displayed.

In terms of recording, if you had an ideal medium, you could record anything from zero to nearly infinity. So your image would span, literally, a nearly infinite range where very dim things could be displayed next to very bright things and all would have a non-zero and non-infinite value.

In reality, we can't do that. These days, digital detectors are limited by what we call a "well depth" which, if you go back to episode 35, can be thought of as how deep your bucket for light is. Since our computing systems are based on binary, recorded values come in powers of 2. The most common is what we call "8-bit," which can express values between 0 and 2^8-1. The reason for the minus one is so that you can include zero. This means that the dynamic range of ANY 8-bit black-and-white image spans 256 shades of grey, from 0 through 255. On a computer screen, any 8-bit image pixel with a value of 0 will be black, and one with a value of 255 will be white.

The next stage up is usually 16-bit, which can express values between 0 and 65,535. Obviously, 65,535 is much larger than 255, and so a 16-bit detector, and a 16-bit image has a MUCH larger dynamic range. Most astronomy CCDs are 16-bit. Most professional cameras used by normal photographers are actually 14-bit, but the software fakes it and scales up to 16-bit for output. Another name for this is "bit-depth," so we could just say that we're working in 8-bit, 16-bit, or a 14-bit-depth image.

32-bit images can display 2^32 shades of grey, or between 0 and 4,294,967,295. Many modern graphics programs, such as Photoshop, can handle 32-bit images, but they don't like them and you generally need to down-sample into 16- or 8-bit to really deal with them. The new-fangled photography craze of HDR (high-dynamic range) processing deals with 32-bits because, well, it's a high dynamic range.

Of course in this discussion, we're talking about modern equipment, digital devices. Film and photographic plates have a higher dynamic range than even 16-bit. When they're scanned today, that range is generally compressed into 16-bit, though, and usually into 8-bit for web.

So the next question could be: what happens when you convert an image from one bit depth to another?

When you up-scale, say from 8-bit to 16-bit, you are asking the computer to create information that is not there. Depending on the exact algorithm, it will do one of two-ish things. The first possibility is that it will simply take every pixel and multiply it by 256. So a pixel that was 0 before would stay 0, but a value that was 1 before will now be 256. A value that was 2 before will now be 512. And so on. The other method is that it will guess. It'll first do what I just said - multiply by 256, but then it will examine the surrounding pixels and adjust them up or down in brightness, the exact amount depending on the exact algorithm.

In other words, it is CHANGING the information that was there because you are asking it to. For real science, this should never ever be done.

Going the other direction, such as from a 16-bit to an 8-bit image, you are telling the computer to delete information. Remember, pixels can only have an integer value, a whole number. They can't be 1.3 or 16.7. So when you go from 16-bit to 8-bit, the computer divides every pixel by 256 and then rounds, usually down. So ANY pixel that was between 0 and 255 before will be converted to 0. Any pixel that was 256 to 511 will be converted to 1. And so on.

This means that you are LOSING information and can no longer tell as much about the variation from one part of an image to the other in terms of brightness. This is why professional photographers work in 16-bit, and this is why astronomers work in the highest dynamic range of the original data.
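Here's what those two conversions look like in code, a minimal sketch of the simple multiply-or-divide approach (real software may also interpolate, as described above):

```python
import numpy as np

# 8-bit -> 16-bit: the simplest up-conversion multiplies every value by 256.
# No new information appears; the same values are just spread over a bigger range.
img8  = np.array([0, 1, 2, 255], dtype=np.uint8)
img16 = img8.astype(np.uint16) * 256              # -> [0, 256, 512, 65280]

# 16-bit -> 8-bit: divide by 256 and keep the whole-number part.
# Everything from 0-255 collapses to 0, 256-511 collapses to 1, and so on.
data16 = np.array([0, 200, 255, 256, 511, 65535], dtype=np.uint16)
data8  = (data16 // 256).astype(np.uint8)         # -> [0, 0, 0, 1, 1, 255]
```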

This is also why scans of old photographic plates and film that are displayed as 8-bit images on the internet, such as the region containing this alleged lunar ziggurat I've been dealing with lately, cannot display the original dynamic range of the film. Claiming that, because the shadows all have a value of 0, someone at NASA must have taken a black paintbrush in Photoshop to those regions to remove evidence of ancient aliens demonstrates a complete lack of understanding of dynamic range.

One final thing to say about dynamic range is that an image need not take full advantage of the dynamic range that it is afforded. What I mean by this is, say I have an 8-bit image, so it could display values between 0 and 255. But, it may only have pixels with values between 20 and 180. Its dynamic range does not actually take advantage of the 8-bit space it's in, but that doesn't mean there's anything wrong with that. In fact, that's usually a good thing because it means that the darkest regions of the image have SOME data in them that's non-zero, and it means that the brightest areas weren't saturated at the high end, 255, so there's data in them, too.

But, that doesn't mean that that image looked that way originally. Someone could have compressed it down to, effectively, 7-bit but saved it in 8. Nor does it mean that it has to stay looking that way. I'll discuss how you can change that in a bit when I talk about levels, curves, and contrast.

Noise

But now, to continue at the pixel-level of detail, we need to talk about noise.

All photographs have an inherent level of noise because of very basic laws of thermodynamics -- in other words, the fact that atoms and molecules are moving around means that you don't know exactly how much of the recorded data is real. The colder you can get your detector, the less noise there will be, which is why astronomers will sometimes cool their CCDs with liquid nitrogen or even liquid helium.

That said, I haven't really explained what noise is, and I'm going to do so again from the digital perspective. There are two sources of noise. The first is what I just mentioned: atoms and electrons moving around will sometimes be recorded as a photon when there really wasn't one. The cooler the detector, the less they'll move around and the less often they'll be detected. This is purely random, so it will appear in some pixels more than others, and you can't know exactly what's really going on.

The other kind of noise is purely statistical. The recording of photons by digital detectors is a statistical process, and it is governed by what we call "Poisson Statistics." That means that there is an inherent, underlying uncertainty where you don't know how many photons hit that pixel even though you have a real number that was recorded. The uncertainty is the square-root of the number that was recorded.

To make the numbers easy, let's say you record only 9 photons. The square root of 9 is 3, so even though you recorded 9 photons there, your uncertainty in how bright that pixel should be is ±3, or 33%. Now let's say you record 100 photons. Your uncertainty is ±10, a larger number, but that's only 10% of 100, so your uncertainty is smaller in a relative sense. Now let's say you record 10,000 photons. Your uncertainty is ±100, which is only 1%. This is why we always want to record more photons, more light: the relative uncertainty in how bright each pixel should be goes down the more light we record.
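The arithmetic is simple enough to put in a few lines:

```python
import math

# Relative (fractional) uncertainty of a photon count N is sqrt(N)/N = 1/sqrt(N).
for photons in (9, 100, 10_000):
    sigma = math.sqrt(photons)
    print(f"{photons} photons -> +/-{sigma:.0f}, or {100 * sigma / photons:.0f}%")
# 9 photons     -> +/-3,   or 33%
# 100 photons   -> +/-10,  or 10%
# 10000 photons -> +/-100, or 1%
```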

What's the effect of noise when you don't have a lot of light recorded? Well, the vast majority of you out there listening to this probably already know because you've taken those low-light photos that turn out like crap. They're fuzzy, the color probably looks like it has tiny dots of red or green or blue all over it, and there's little dynamic range. That's a noisy image because of the inherent uncertainty in the light hitting every pixel in your camera, but so that it wasn't completely dark, your camera multiplied all the light - the noise included - in order to make something visible.

With the idea of noise in mind, after an image is taken, there is only one way to scientifically reduce the noise without any guesswork based on a computer algorithm: Shrink it. When you bin the pixels, as in doing something like combining a 2x2 set of four pixels into one, you are effectively adding together the light that was there, averaging it, and so reducing the amount of noise by a factor of 2. For example, if you have a set of four pixels that recorded 110, 92, 84, and 103 photons each, the relative noise levels were 9.5%, 10.4%, 10.9%, and 9.9%. After you've combined them to an average of 97, the relative noise is only 5.1%. If you bin 3x3, you reduce the noise by a factor of 3 as opposed to 2.
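Those same numbers, worked through in code:

```python
import numpy as np

# Four neighboring pixels (photon counts) and their individual relative noise:
pixels = np.array([110, 92, 84, 103], dtype=float)
rel_noise = np.sqrt(pixels) / pixels          # ~9.5%, 10.4%, 10.9%, 9.9%

# Bin the 2x2 block: the combined count is the sum (389 photons, average ~97),
# its uncertainty is sqrt(389), so the relative noise drops to ~5.1% --
# a factor of sqrt(4) = 2 better than any single pixel.
total = pixels.sum()
binned_rel_noise = np.sqrt(total) / total     # ~0.051
```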

Noise is random across the whole image, and it makes the image look grainy. A perfectly smooth, white surface could look like a technicolor dust storm if you photograph it under low light.

But, people who make a supposed living out of searching for anomalies in lunar and martian photography will take what is in reality a perfectly smooth, evenly toned surface, look at the difference between one pixel and the next that's purely due to noise, and say it's an anomaly once they do other things to it.

Resizing - Reducing and Enlarging (and Rotating)

One class of those other things you can do to an image is resize and rotate.

I did mention this briefly in the last episode with some of Andy Basiago's work. The basic question here is what happens when you change the size or rotation of an image.

First, rotation. When you rotate an image, you are ALWAYS changing the raw information that was there UNLESS you are rotating by multiples of 90°. And that's because we deal with square pixels. If we had hexagonal pixels, then you could rotate in multiples of 60°.

So if you flip your photo from landscape to portrait mode, you're fine. If you rotate that vacation photo of the Kremlin by 2.4° clockwise, you are telling the computer to make up the information at every single pixel. Now, it does a really, really good job of that. But because pixels are square, and because you are rotating by an amount that isn't a multiple of 90°, the computer has to use mathematical formulas to figure out what the value of each pixel would have been IF, when you took that photo, it had been rotated by 2.4°.

An implication of this is that if you rotate an image by some amount, and then rotate it again by another amount, it is NOT the same as if you rotated by the sum of those two amounts originally. It'll be really really really close, but not exactly the same. This means that if an image has been rotated many times, some anomalies can crop up, though I honestly have not seen anyone point to those specific kinds of anomalies as their ancient artifacts.
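You can check that for yourself with a few lines of Python; here I'm using SciPy's rotation routine as a generic stand-in for whatever a given image editor does under the hood:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
img = rng.random((200, 200))

# Rotate by 10 degrees twice, versus 20 degrees once.
# reshape=False keeps the output the same size so the two results are comparable.
twice = ndimage.rotate(ndimage.rotate(img, 10, reshape=False), 10, reshape=False)
once  = ndimage.rotate(img, 20, reshape=False)

# Close, but not identical: every non-90-degree rotation interpolates, and
# interpolating an already-interpolated image is not the same as doing it once.
print(np.abs(twice - once).max())   # small, but not zero
```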

What I have seen, though, is when people point to anomalies caused by resizing. Mike Bara, Richard Hoagland's one-time co-author, recently wrote when talking about a particular image I'm now intimately familiar with, "[The] upsampling process would have the effect of actually making the NASA image better."

In fact, I'd say that a good half or more of the images I've seen in which people claim to have found ancient architecture on the Moon and Mars are the result of scaling the image up, combined with an extreme misunderstanding of what that actually does, as Mike Bara demonstrated.

Scaling an image up in size is very much like trying to increase the bit-depth of an image, as I talked about with dynamic range. Remember, when you scale from 8-bit to 16-bit, you multiply every pixel value by 256, and the software may or may not do some extra math to make the result smoother and more like what the computer thinks is realistic.

The exact same thing goes on when you increase an image in size: You are telling the computer, yet again, to guess on data that is not actually there. People who anomaly hunt and do this seem to think that if I take a photograph from space of a parking lot and it covers 1 pixel, that I can then take that 1 pixel in the computer, scale it up in size, and it will show individual cars.

You might be accusing me of a reductio ad absurdum fallacy, but I'm not exaggerating here: These people really think you can recreate data that is not there. The misunderstanding is that that data is somehow hidden within the average light that hit that pixel and was recorded. They think that if you increase the size, the computer can then use the surrounding pixels and somehow extract detail that was lost from the inherent pixelation of the image.

The problem for them is that this is not true. A pixel is a pixel, and 100% is 100%. When you tell the computer to increase the image size, it will, just like when increasing bit depth, use one of two-ish algorithms. Let's start by saying you tell it to increase to 200%. The first method is that it will simply tile each pixel into a 2x2 block of 4 pixels. So if you increase instead to 1000%, you'll just have a blocky, larger image. But the default of almost every image program out there is the second method, where the software will use any number of different algorithms to guess at what the missing pixels are.

So let's say that at pixel location (0,0) there is a value of 10, at pixel location (0,1) there's a value of 20, at (1,0) there's a value of 15, and at (1,1), there's a value of 25. If we scale this up to 200% the original, the code will start by keeping position (0,0) the same. The pixel value of 20 that was at (0,1) will now be moved to (0,2). The one at (1,0) will be at (2,0), and the one that was at (1,1) will be at (2,2). But now we're missing information at 5 pixels in-between those.

The computer will then guess. It will use those original pixel values and guess at what the missing information was. Well, by "guess" I mean it will use a pre-determined algorithm to figure it out. And, just like rotation, this is not commutative -- if you increase by 123%, then increase by 123% again, you will get a slightly different result than if you had originally increased in size by 151.29%.

If you increase by an amount that is NOT a multiple of 100%, then you are telling it to guess even more because it can't use all of the original pixels in the new image. Say you increase the image size to 150%. The pixel that was at (0,0) stays the same, the one at (0,1) moves to (0,1.5), and the one from (0,2) moves to (0,3). So the only original pixels that the computer can be absolutely certain about are no longer 3 out of 4, but 2 out of 4. True, the pixel that moved to position (0,1.5) will be used in whatever algorithm the computer is using to figure out the missing pixels, but it will no longer be in the new image.
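Here's a sketch of those two kinds of enlargement in Python, again using SciPy's zoom as a generic stand-in for an image editor's resize:

```python
import numpy as np
from scipy import ndimage

block = np.array([[10, 20],
                  [15, 25]], dtype=float)

# Method 1: nearest neighbor (order=0) just tiles each original pixel into a
# 2x2 block. Nothing is invented; the image simply gets blocky.
nearest = ndimage.zoom(block, 2, order=0)

# Method 2 (the usual default): interpolation (here bilinear, order=1).
# Most of the pixels in the result are the computer's estimates, not data.
smooth = ndimage.zoom(block, 2, order=1)
```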

The algorithms that the computer uses to guess vary, but they almost always try to make things smooth and look pretty to the human eye.

So let's say that you have an image of Mars' surface that's smooth and even. But there's some variation in brightness at the individual pixel level due to the inherent noise. Now you increase the size by a factor of 5. That means that only 1 out of every 25 pixels in the new image is real; all the others are estimates by the computer, and the computer wants to make things look smooth. The result will be blobs and circles and other features, and if you look at enough of them, you will find something that you think looks artificial.

In fact, I just did this experiment myself in Photoshop. I created a 100x100 pixel image, painted it black, and then used the Add Noise filter to add in random noise. I then increased the size to 5000x5000 pixels, and I saw streets, several faces, and a very large Rorschach test. I'll have this example in the companion video. If you add some JPG compression artifacts on top of that, you have a veritable gold mine of pareidolia.
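If you don't have Photoshop, you can reproduce the gist of that experiment with a few lines of Python (the sizes and random seed here are arbitrary choices of mine):

```python
import numpy as np
from scipy import ndimage

# Start from pure random noise -- no real structure at all.
rng = np.random.default_rng(42)
noise = rng.integers(0, 256, size=(100, 100)).astype(np.float32)

# Blow it up by a factor of 50 with smooth (cubic) interpolation,
# roughly what a photo editor does by default.
big = ndimage.zoom(noise, 50, order=3)        # now 5000 x 5000 pixels

# Only 1 in every 2,500 pixels of the result is an original value; the rest
# are interpolated, and the smoothing turns random speckle into soft blobs
# and streaks that the eye happily reads as "features."
```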

To reiterate the point of this particular section of the main segment, though, whenever you scale an image up in size, you are not gaining new data, you are telling the computer to make stuff up, and in a real photograph, especially one where you have variations at the original pixel level, you are almost guaranteed to get anomalies that look artificial because, well, by definition they are artificial.

Levels, Curves, Contrast

One of the last things I want to talk about is levels, curves, and contrast.

The best one to start with is levels - at least this is what it's called in Photoshop and a few other image processing programs that I've used before. The best way to think about levels is to return to the example earlier where I talked about dynamic range, and that if you have an 8-bit image with shades that COULD span between 0 and 255, it may not, it may only go from, say, 64 to 191. That means when it's displayed on the computer, it will look grey overall, with no solid blacks and no whites.

What you can do with levels is to change that. The basic idea is that it will stretch things out. So you could tell the computer that you want a brightness of 64 to actually be 0, and you want anything that's 191 to be 255. And it will stretch things out. In this particular case, it stretches the range by a factor of 2. So any pixel before that had a value of 64 will now be 0, 65 will be 2, 66 will be 4, and so on. Anything that was 191 will be 255, 190 will be 253, 189 will be 251, etc.

So your image will span the whole range now. But, you have gaps in the brightnesses. You have nothing with a brightness of 1, 3, 5, and so on. As with changing bit-depth with dynamic range, depending on the algorithm, the computer may choose to change some of the pixels based on the ones around it. Changing the information that's there. So when someone says that the image is completely original and all they did is adjust the levels, it's not really original anymore.
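Here's that stretch in code, along with a check for the gaps it leaves behind (the numbers match the example above):

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.integers(64, 192, size=(100, 100))          # only uses values 64-191

# Linear "levels" stretch: map 64 -> 0 and 191 -> 255.
stretched = ((img - 64) * 255.0 / (191 - 64)).round().astype(np.uint8)

# The result spans the full 0-255 range, but roughly every other brightness
# value is now empty: the stretch created gaps in the histogram.
counts = np.bincount(stretched.ravel(), minlength=256)
print((counts == 0).sum())                            # many empty levels
```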

But you can do more with levels: you can clip things off. Usually, there are just a very few pixels that would have a value of 0, and just a very few that have the highest value, either 255 or 65,535. Many programs, when you hit the Auto Levels button, will look at the brightest 0.1% of pixels and assign them all to white, and look at the darkest 0.1% of pixels and assign them all to black. They will then scale everything in between. In other words, this REMOVES information, because the darkest 0.1% may have had values up to 5, and the brightest may have had values down to 220. But those are now 0 and 255, removing some of the subtle variation.
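A rough sketch of what an Auto Levels button does; the 0.1% figure and the exact behavior vary from program to program, so treat this as illustrative:

```python
import numpy as np

def auto_levels(img, clip_percent=0.1):
    """Send the darkest clip_percent of pixels to 0, the brightest to 255,
    and linearly stretch everything in between. Information at both
    extremes is thrown away in the process."""
    lo, hi = np.percentile(img, [clip_percent, 100 - clip_percent])
    out = (img.astype(float) - lo) * 255.0 / (hi - lo)
    return np.clip(out, 0, 255).astype(np.uint8)
```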

And so, yet again, conspiracists do not understand this. To quote from Mike Bara: "What the histograms show us is that while the image produced by NASA has a wide dynamic range, the areas of shadow, where the details that make the Ziggurat stand out as artificial might be found, have virtually no dynamic range. They’re absolute black. And that can only mean one thing; they were painted over by someone at NASA with a black paintbrush tool."

No, it means that either the original was not exposed long enough to capture any sort of variation within a very black shadow, or the levels were simply clipped in very basic processing. Something that most scanner software DOESN'T tell you is that it does this auto levels function automatically unless you tell it explicitly not to. I can very easily see a summer intern tasked with scanning bunches of old photos and just setting everything on auto.

Moving on to contrast adjustment, we start to get away from basic linear stretching. Increasing contrast increases the number of dark pixels and bright pixels. It does this by stretching brightnesses away from neutral grey, which is 127 if we're using 8 bits. The amount of contrast enhancement determines exactly how many pixels get pushed darker and how many brighter, but the point is that you are again losing the original information that was there.
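A simplified, linear version of a contrast boost looks like this (real tools often use an S-shaped curve instead, but the effect on the data is the same in spirit):

```python
import numpy as np

def add_contrast(img, amount=1.5):
    """Push every pixel away from middle grey (127 in 8-bit).
    amount > 1 increases contrast; values that get pushed past 0 or 255
    are clipped, so information at the extremes is lost."""
    out = (img.astype(float) - 127.0) * amount + 127.0
    return np.clip(out, 0, 255).astype(np.uint8)
```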

So say you have a photo of a planetary surface with a bit of noise in it. You've already blown it up in size, rotated it a few times, and now you increase the contrast. The question you should now ask is: How much of that is original information, and how much of it is computer estimation based on what was there originally?

Which brings us to curves, which is the hardest of the three to understand. Curves can be thought of as levels, but with more control. Say you have an image, like one I was recently working with, that shows the Moon as a generally even grey surface, but there's a small part of it with a fresh crater and very, very bright ejecta emanating from it. To properly fit the entire dynamic range in the image, most of the pixels have to be dark - most of the surface is dark relative to the bright ejecta. Just a few of the pixels are bright. If you look at a histogram of this, you'd see a large spike of dark-valued pixels, a small spike of bright ones, and almost nothing in between.

Curves can be used to change this. With curves, you can do things like stretch the dark values, mapping them to a broader range, while compressing the bright values to just a few in the upper end of the dynamic range. As before, the computer is happy to change the information that is there to let you do this, filling in gaps in shading at the dark end when you stretch them, and decreasing the dynamic range of the high end to get all the bright ones in a narrow range of shades.
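Under the hood, a curves adjustment is basically a lookup table: every possible input brightness is mapped to a new output brightness along the curve you draw. Here's a sketch using a gamma-style curve that stretches the dark end and compresses the bright end, as a stand-in for the crater-ejecta example (the exact curve shape is my own choice for illustration):

```python
import numpy as np

# Build the lookup table: 256 input levels -> 256 output levels.
x = np.arange(256) / 255.0
lut = (np.power(x, 0.4) * 255).round().astype(np.uint8)   # gamma < 1 lifts the shadows

def apply_curve(img, lut):
    """Apply a curves adjustment to an 8-bit image by table lookup."""
    return lut[img]            # each pixel's value indexes into the table

# After this, the dark values are spread over many more display levels and
# the bright ejecta gets squeezed into a narrow range near the top.
```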

Bottom-Line

I was going to also talk about filters and sharpening, but I think I'll leave that for a Face on Mars episode. This one was dense and long enough as it was.

By now, I think you've figured out the basic theme of this episode: There are a lot of seemingly innocuous ways to fiddle with an image, but almost all of them will result in the computer creating information that wasn't actually there. It's BASED on the data that was there, but it's not 100% real. If you have a photo on which you've performed a lot of these operations, especially if it's low bit-depth like 8-bit, a lot of artificial-looking features can show up. Because, literally, they are artificial. Not because it's an ancient city on Neptune's moon Triton, but because you have told the computer to add it.

As scientists, we do this as little as possible, and we have to document every step of it because journals require that. The journal Science specifically states, "Science does not allow certain electronic enhancements or manipulations of micrographs, gels, or other digital images. ... Linear adjustment of contrast, brightness, or color must be applied to an entire image or plate equally. Nonlinear adjustments must be specified in the figure legend."

But for a photographer, all of this stuff is fair game. The goal of a photographer is to capture an event and make it look attractive. I always adjust the colors, the curves, the saturation, and other things in photos I take, and I always do it in 16-bit space before converting to 8-bit for display. For photography for art, that's fine. For science, it can be fraud.

This seems to be lost on the pseudoscientists.
