Filmscanners mailing list archive (filmscanners@halftone.co.uk)
Re: filmscanners: RE: filmscanners: RE: filmscanners: Pixels per inch vs DPI
Dye clouds are a double-edged sword.
On the one hand, because of their random positioning and their transparent
nature, they can produce very fine apparent resolution: they overlap in all
sorts of random patterns, forming areas much smaller than a fixed array of
pixels could resolve, since a sensor reads its R, G and B (or C, M and Y)
components for each pixel from one and the same location (all three color
separations come from identical positions). A single cyan dye cloud, for
instance, might partially overlap a magenta one in one area, a yellow in
another, and perhaps both or neither in yet another. Defining all of this
with a pixel array would require very, very small pixels. Dye clouds,
being randomly positioned and shaped, allow all sorts of irregular
"information" to form, much of it smaller than a single dye cloud itself,
although it may not be accurate in either color or location. In this sense
the grain, or dye clouds, contain a certain level of "noise" (errors), but
at that scale our eye would rather see randomized, inaccurate information
than non-random, geometric forms or a total lack of this "filler".
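To make that concrete, here is a minimal sketch in Python (the edge position,
cloud count and grid size are all made up for illustration; nothing here comes
from real film or sensor data) of the difference between randomly scattered
"clouds" and a fixed sampling grid when both try to record a straight edge:

import random

random.seed(1)

EDGE = 0.53        # vertical edge: dark left of x = 0.53, light to the right
N_CLOUDS = 2000    # hypothetical number of clouds scattered over a unit square
GRID = 20          # hypothetical fixed grid of 20 x 20 "pixels" over the same area

def scene(x):
    # Ideal image: 0 (dark) left of the edge, 1 (light) right of it.
    return 0.0 if x < EDGE else 1.0

# Film-like capture: each cloud's position is fixed at "manufacture";
# it simply records whatever light happens to fall on its own centre.
clouds = [(random.random(), random.random()) for _ in range(N_CLOUDS)]
film = [(x, y, scene(x)) for x, y in clouds]

# Sensor-like capture: every value is read at the same fixed grid sites,
# so the edge can only be placed at a column boundary.
sensor_row = [scene((i + 0.5) / GRID) for i in range(GRID)]
grid_edge = min(i for i, v in enumerate(sensor_row) if v == 1.0) / GRID

# The grid quantises the edge to 0.55; the nearest exposed cloud centre
# sits just past 0.53 -- finer placement, but noisy and irregular.
first_cloud = min(x for x, y, v in film if v == 1.0)
print(f"fixed grid places the edge at x = {grid_edge:.3f}")
print(f"nearest exposed cloud centre at x = {first_cloud:.3f}")

The point of the print-out is only the geometry: the scattered clouds straddle
the edge more finely than the grid can, but any individual cloud is still
"wrong", since its position has nothing to do with the picture content.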
Dye clouds are, in effect, positioned at the point of manufacture, when
their progenitor (a silver grain) is laid down in the specific color
emulsion layer, long before anyone knows the image content.
The grain that eventually allows the dye cloud to form in color films is
completely randomized in its position, to a lesser extent in its shape,
and, within manufacturing limits, in its size.
The only reason a specifically shaped and positioned dye cloud does not
become an impediment or degrade the image is that each one is so small and
so jumbled around that, in most cases, one can rarely see an individual dye
cloud. They are very small, and they are clumped with parts of other dye
clouds, both of the same color and, because of their transparency, from
other emulsion layers containing differently colored dye clouds.
Now, if some "brilliant engineer" (is that an oxymoron? (that's a
joke!)) figures out a way to let silver grains literally migrate within
the emulsion, or change shape, as the image is formed or during
development, and do nice things like, say, all line up perfectly when I'm
taking a picture of something with a straight line, well then, yes, dye
clouds will have something going for them that would not even
theoretically be possible with very tiny pixels (although I actually
imagine those "brilliant engineers" will come up with a way to make
pixels mobile before they make grain do so ;-)...
Very simply, grain, or dye clouds, are predetermined in their location
and shape and are not relocated by picture content. Pixels would have to
be very small to reproduce the "perception" of current film technologies.
Fuji has a hexagonal/star-shaped pixel array, which might reduce the
rectangular character of the elements, but we are still faced with the
fact that the red, green and blue separations are all taken from an
identically positioned array.
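As a toy illustration of that last point (again just a sketch with invented
sizes, not a model of any real sensor or film stock), compare where the color
information comes from in the two cases:

import random

random.seed(2)

PITCH = 0.1     # assumed pixel pitch for a small patch of sensor
N_CLOUDS = 4    # a handful of clouds per color layer, just for printing

# Sensor: one site per pixel; R, G and B are all read from that same site.
sites = [(i * PITCH, 0.0) for i in range(3)]   # one row of sites is enough
for x, y in sites:
    print(f"sensor site ({x:.2f}, {y:.2f}): R, G and B all sampled here")

# Film: the cyan, magenta and yellow layers each get their own independently
# scattered cloud centres, so a color boundary need not fall in the same
# place in all three layers.
for layer in ("cyan", "magenta", "yellow"):
    centres = [(random.random(), random.random()) for _ in range(N_CLOUDS)]
    pts = ", ".join(f"({x:.2f}, {y:.2f})" for x, y in centres)
    print(f"{layer:>7} layer clouds at {pts}")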
As to the future of digital capture technologies, who knows what might
be stumbled upon. Both film- and pixel-based capture have inherent
errors built into the process, and for now pixel-based capture has many
more limitations, but that could change.
Art