Filmscanners mailing list archive (filmscanners@halftone.co.uk)
[filmscanners] RE: 8bits vs. 16bits/channel: can the eye see the difference
> From: Austin Franklin
>
> > No, it's dithering, not aliasing. Aliasing is the translation of
> > frequencies by sampling.
>
> Paul, that IS what you described, the translation of frequencies by
> sampling! You said:
>
> "If you have an area of blue sky whose actual analog color levels
> are, say,
> R=85, G=110 and B=182.75"
>
> Whether the 182.75 gets picked up as 182 or 183 is a form of aliasing, not
> dithering. If, in fact, the original image has that kind of tonal
> resolution, then it is not being sampled at a high enough frequency to
> maintain that level of detail. It's being undersampled, and as such, the
> result of the sampling of the higher frequency components is aliasing.
No, it's not aliasing. There is no translation of spatial frequency, because
there is no change in sample rate (i.e., spatial resolution). Aliasing
happens when you take something with one sample rate (possibly infinite,
i.e., analog) and translate it to a different sample rate, so that
frequencies present in the original that exceed half the new sample rate
(the Nyquist frequency) get folded into new, usually unwanted, frequencies.
That has nothing to do with what happens when 16 bits are reduced to eight.
> But it does NOT contain "more than 8 bits of useful information". You can
> not consider noise valid data. I understand the effect you are talking
> about, and it is not dithering. The effect is caused by post
> data filtering
> as done by your optical system, which is sampling at a lower frequency.
It isn't the noise that's the useful data. It's the average across many
pixels that's the useful data. The presence of noise is what allows this
information to leak through the truncation process.
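A quick sketch of what I mean (Python/NumPy assumed; the noise level is just
illustrative):

import numpy as np

rng = np.random.default_rng(0)
true_level = 182.75      # the "analog" blue level from the earlier example
n = 10000                # pixels in the smooth patch of sky

# Without noise, rounding to 8 bits loses the 0.75 forever.
noiseless = np.round(np.full(n, true_level))
# With about one LSB of grain/electrical noise, each pixel is still only
# 8 bits, but the average over the patch preserves the sub-LSB level.
noisy = np.round(true_level + rng.normal(0, 1.0, n))

print(noiseless.mean())  # 183.0
print(noisy.mean())      # ~182.75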
> Not with a DSP definition of dithering. With a misapplied
> definition of it,
> perhaps...but the correct term is aliasing. Dithering, in a signal
> processing definition, is ADDING random noise (there are other uses of the
> word dithering, as used in astronomy, but that is different from what we are
> discussing here). There is always noise inherent in sampling that is
> +-1/2LSB, which is the result of what is called aliasing.
Austin, sometimes I despair at your peculiar ideas. Digitizing with a finite
resolution is not aliasing, it's merely quantization. If you quantize the
values, you obviously introduce quantization noise, whose high frequency
components will be aliased, but the aliasing isn't what we're discussing
here, and it isn't significant either. We're talking about smooth areas of
an image such as blue sky, whose frequency content is nil. Quantizing can
create posterization, which can be eliminated by the addition of noise
before the quantizing. That ain't aliasing.
Let me give an example. If an area of sky (let's stick to B&W for
simplicity) has a gradient whose values range from 100 to 120, in the
absence of noise there will be 21 distinct stripes. If the area in question
is 1000 pixels wide, then each stripe will be 47 or 48 pixels wide. The only
aliasing going on is the fact that the widths of the stripes will follow
some sequence like 47,48,48,47,48,48,47,48,48, etc., meaning that there is
some utterly insignificant modulation introduced at one cycle per 47+48+48
pixels, which would have been a very different frequency had the width of
the gradient been slightly different. That is, the aliasing is only the tiny
invisible variations in the widths of the stripes, due to the spatial
sampling, not the very visible stripes themselves.
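To put numbers on it (Python/NumPy assumed; the exact widths depend on where
the endpoints fall and on the rounding rule, but the picture is the same):

import numpy as np

ramp = np.linspace(100, 121, 1000, endpoint=False)  # smooth, noiseless gradient
stripes = np.floor(ramp).astype(int)                # truncate to integer levels

levels, widths = np.unique(stripes, return_counts=True)
print(levels.size)   # 21 distinct stripes (levels 100..120)
print(widths)        # about 1000/21 pixels each, e.g. 48,48,47,48,48,47,...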
However, if noise is added to the analog (or 16bpc) representation before
converting to 8bpc, the posterization disappears, and averaging over a small
area of pixels recovers the original higher-resolution data--although this
of course only works for low-spatial-frequency data. It doesn't matter
whether the noise is intentionally added numerically or merely contributed
fortuitously by film grain or thermal electrical noise; it serves the same
purpose. Which was my original point, back in the second message of this
long thread.
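Here's the same idea in miniature (Python/NumPy assumed; the shallow
182-to-183 ramp and the noise level are only illustrative):

import numpy as np

rng = np.random.default_rng(0)
ramp = np.linspace(182.0, 183.0, 1000)   # "analog"/16bpc gradient, one 8-bit step

posterized = np.round(ramp)                                 # two flat bands, hard edge
dithered = np.round(ramp + rng.normal(0, 0.7, ramp.size))   # noise added, then 8-bit

# A small local average of the noisy 8-bit data tracks the original slope
# (this only works for low-spatial-frequency content, as noted above).
kernel = np.ones(101) / 101
recovered = np.convolve(dithered, kernel, mode='valid')
print(np.unique(posterized))                    # [182. 183.] -- the gradient is gone
print(np.abs(recovered - ramp[50:-50]).max())   # a small fraction of a step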
You make much of the definition of dithering, and how what goes on in a
delta-sigma coder is something different. Well, it's mathematically
equivalent, and the term "dithering" is indeed applied to that process. The
equivalent of delta-sigma is used to represent an image with a very small
number of colors. When reducing an image to an indexed color format like
GIF, the most common algorithm is called Floyd-Steinberg Dithering.
Representing a pixel using the nearest available color, and then
distributing the resulting error into nearby as-yet-unprocessed pixels is
equivalent to doing 2D delta-sigma coding with a FIR filter. The same
process is used by inkjet printers to reduce an image to a much smaller
number of values (1+2*2*2=9 with a 4-color printer, 1+3*3*3=28 with a
6-color printer), and is indeed commonly referred to as dithering. (See the
GIMP-Print document, Appendix A, "Dithering".)
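A bare-bones sketch of that error-diffusion idea (Python/NumPy assumed;
these are the textbook Floyd-Steinberg weights, not GIMP-Print's actual
code):

import numpy as np

def floyd_steinberg(img, levels=2):
    """Quantize a float image in [0, 1] to `levels` values by error diffusion."""
    out = img.astype(float).copy()
    h, w = out.shape
    step = 1.0 / (levels - 1)
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = np.clip(round(old / step), 0, levels - 1) * step  # nearest available level
            out[y, x] = new
            err = old - new       # push the rounding error into unprocessed neighbours
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out

# A mid-grey patch becomes a pattern of pure 0s and 1s whose local average
# is still ~0.5: the low-frequency content survives the 1-bit output.
grey = np.full((64, 64), 0.5)
print(floyd_steinberg(grey).mean())   # ~0.5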
Although the algorithm doesn't work in the same manner, the result is the
same: a quantized noisy signal which, when averaged out (filtering out the
high frequencies that the ear or eye can't detect), reconstructs the original
higher-resolution data. The only difference is that one algorithm adds
independent noise, while the other uses chaotic feedback to generate noise.
Indeed, delta-sigma has a problem with simple signals, because the resulting
"noise" consists of discrete tones, or "birdies", so even delta-sigma is
usually performed on data that have had some noise added to break up the
birdies.
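A toy illustration of the birdie effect (Python/NumPy assumed; this is a
first-order loop, far simpler than a real converter):

import numpy as np

def delta_sigma(x):
    """First-order 1-bit delta-sigma modulator for inputs in [0, 1]."""
    acc, out = 0.0, []
    for v in x:
        acc += v                            # integrate the input
        bit = 1.0 if acc >= 0.5 else 0.0    # 1-bit quantizer
        acc -= bit                          # feed the output back
        out.append(bit)
    return np.array(out)

rng = np.random.default_rng(0)
dc = np.full(32, 0.25)
print(delta_sigma(dc))                            # 0 1 0 0 0 1 0 0 ... a pure periodic "birdie"
print(delta_sigma(dc + rng.normal(0, 0.05, 32)))  # same ~0.25 average, but the pattern is broken up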
--
Ciao, Paul D. DeRocco
Paul mailto:pderocco@ix.netcom.com