Filmscanners mailing list archive (filmscanners@halftone.co.uk)
Re: filmscanners: Re: Hello, thanks, and more.
On Mon, 22 Oct 2001 00:16:38 -0400, you wrote:
>> 1) Bits. I need some clarification on what the significance of all
>> the different bit-rates are about for color.
>
>You don't mean "rates". Rate is a measure of speed (or periodicity)...and
>doesn't apply here.
>
Right! I think I meant "rating." Thanks for clarifying.
>> For example, one person
>> mentioned that there is no real advantage in 16-bit over 8-bit color
>> for printing. Someone else mentioned "editing in 16-bit." The Canon
>> software only offers scanning in 24-bit color,
>
>24 bit color is three 8 bit channels (8 bits for red, 8 bits for
>green and 8 bits for blue)...so that's considered 8 bit mode. You are
>confusing per-channel depth with overall color depth. 36 bit color is
>three 12 bit channels (12 for red, 12 for green and 12 for blue).
>
>> 2) Sizing. Now this is just specifying the pixel dimension of the
>> image, correct? Without changing the resolution.
>
>A bit of background... Resolution really only has meaning when you are
>scanning, or outputting (as in printing or seeing the image on the screen),
>but nothing to do with the physical image in memory...since it is just an X
>by Y number of pixels, with no unit of measure associated with it. It's
>only when you output it that you give it a number of pixels per unit of
>measurement...so...when you resize an image, what you are doing is assigning
>the number of pixels per unit measurement (usually per inch), and that
>changes the resolution...
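A back-of-the-envelope way to see this (my own sketch, using the pixel
width from the scan discussed below): the pixel count never changes, only
the printed size, because size = pixels / resolution:

```python
# Changing resolution (ppi) without resampling: the pixel count stays
# fixed; only the physical output size changes.
def print_size_inches(pixels, ppi):
    return pixels / ppi

width_px = 3889  # pixel width of the example scan
for ppi in (72, 150, 300):
    size = print_size_inches(width_px, ppi)
    print(f"{width_px} px at {ppi} ppi -> {size:.1f} inches wide")
```

The image in memory is identical in all three cases; the ppi tag just
tells the printer how densely to lay the pixels down.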
How do you resize an image without losing/adding pixels? Just by
specifying the inch dimension? That's something I've never been clear
about - whether choosing inch, cm, pixels or whatever in the size
dialogue does anything different. It seems to this neophyte that they
are all terms for the same thing. Does the choice actually result in
different processes?
>unless you interpolate/decimate the data...which
>means to add or remove pixels. What this does is allow you to keep the same
>output resolution and get a larger or smaller image. Sorry if that confuses
>you, but in an email, it's hard to describe...
>
>> For example, I scan a slide at
>> 2720 dpi, and I get a 28.9 MB TIFF file that measures something like
>> 3889x2550 pixels. After adjusting color and brightness, etc, and
>> saving, I go into the properties dialogue and specify a web-based
>> size, i.e. about 750 pixels in the longest dimension. Is that
>> "downsampling?"
>
>Yes.
>
>> Is that process in itself "lossy?"
>
>Yes, very much so. Think about it. You are converting an image that WAS
>3889 pixels across, now to 750 pixels across. That's called "decimation",
>technically. What it is doing is throwing out (or averaging, or applying
>some more advanced algorithm to) roughly 4 out of every 5 pixels. That's
>a LOT of loss.
Ok, that is a lot of good info. Does it follow then, that your
original scan should be done with the eventual output in mind? For
example scan a slide at a lower resolution for the web than for
printing? Then you would have no decimation in resizing a large 2720
dpi TIFF to display on a monitor. Seems like people are saying scan
at the highest res possible, save the raw file and work from that.
But that would involve a lot of this information loss when resizing,
or is the information lost not essential?
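To put numbers on the "scan with the output in mind" question, here's a
rough calculation of my own (assuming a 35 mm slide is about 1.42 inches
on the long side; adjust for your actual frame):

```python
# Scanning with the eventual output in mind: how much resolution does
# each target actually need?
slide_long_edge_in = 1.42   # approx. long side of a 35 mm frame (assumption)

# Web target: 750 px on the long side.
web_target_px = 750
web_scan_dpi = web_target_px / slide_long_edge_in
print(f"web: ~{web_scan_dpi:.0f} dpi at the scanner is enough "
      f"for {web_target_px} px")

# Print target: what does a full 2720 dpi scan buy you at 300 ppi output?
scan_dpi = 2720
print_ppi = 300
print_width_in = scan_dpi * slide_long_edge_in / print_ppi
print(f"print: a {scan_dpi} dpi scan gives ~{print_width_in:.1f} inches "
      f"wide at {print_ppi} ppi")
```

So a web-only scan needs only a few hundred dpi, while the full 2720 dpi
scan is what supports a large print.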
Thanks for the time
Ken