From: Ray Fischer on 27 Apr 2010 02:42
David J Taylor <david-taylor(a)blueyonder.co.uk> wrote:
>"nospam" <nospam(a)nospam.invalid> wrote in message
>>> We start with 4Mp of red and blue, we end up with 12Mp of red and blue.
>>> Irrespective of what else is used in the interpolation that is *STILL*
>>> interpolation and upsizing.
>> interpolation yes. no upsizing since it's already the correct size.
>Each sensitive element is the same physical size as the resulting RGB
>pixel, yes. But the spacing of the pixels is not. For each of the 12M
>output pixels, there are only 3M red sensitive locations, at twice the
>physical spacing of the 12M pixels. Therefore the red information in
>between those red-sensitive pixels is obtained by spatial interpolation.
No. No pixels are being added to the image. Additional information
is being added to existing pixels. No additional pixels are being
created.
From: nospam on 27 Apr 2010 02:48
In article <hr610n$15j$1(a)news.eternal-september.org>, David J Taylor
> "Pixel" may have a number of meanings - there is the element in a JPEG
> file which has three components (R, G & B), and there is the region on a
> sensor which received light and turns it into an electrical signal.
they're all spatial elements of an image.
> latter are sometimes called sensels, although that's not a term I tend to
> use a lot.
very few people do, but the number is the same.
From: nospam on 27 Apr 2010 02:50
In article <4bd6875d$0$1616$742ec2ed(a)news.sonic.net>, Ray Fischer
> No pixels are being added to the image. Additional information
> is being added to existing pixels. No additional pixels are being
> created.
From: David J Taylor on 27 Apr 2010 02:50
"nospam" <nospam(a)nospam.invalid> wrote in message
> the spatial location of any given pixel is the same, it's the contents
> that is interpolated. think sparse matrix. it's not upsizing a small
> 3mp matrix into a bigger 12mp matrix.
I prefer to think of the process as spatial interpolation of missing
information, as "upsizing" may have a physical connotation, which could be
misleading.
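The "sparse matrix" view above can be sketched in code. The following is a minimal bilinear demosaic, not anything from the thread itself: it assumes a hypothetical RGGB layout, uses numpy and scipy, and the function name and kernels are mine. The point it illustrates is that the output grid has exactly as many pixel locations as the sensor has photosites; only the missing colour values at each location are filled in by spatial interpolation.

```python
# Minimal bilinear Bayer demosaic sketch (assumed RGGB layout).
import numpy as np
from scipy.signal import convolve2d

def demosaic_bilinear(raw):
    """raw: 2-D array of sensor values in an RGGB Bayer pattern."""
    h, w = raw.shape
    # Masks marking which photosite carries which colour filter.
    r_mask = np.zeros((h, w), dtype=bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), dtype=bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)
    # Bilinear kernels: green samples sit on a quincunx, red/blue on a
    # rectangular grid at twice the pixel pitch.
    k_g  = np.array([[0., .25, 0.], [.25, 1., .25], [0., .25, 0.]])
    k_rb = np.array([[.25, .5, .25], [.5, 1., .5], [.25, .5, .25]])
    rgb = np.empty((h, w, 3))
    for ch, (mask, k) in enumerate([(r_mask, k_rb), (g_mask, k_g), (b_mask, k_rb)]):
        # Normalised convolution: known samples stay exact, the gaps in
        # between are filled from neighbours.  No pixels are added or removed.
        num = convolve2d(np.where(mask, raw, 0.), k, mode='same', boundary='symm')
        den = convolve2d(mask.astype(float), k, mode='same', boundary='symm')
        rgb[..., ch] = num / den
    return rgb
```

A 12 Mp mosaic in gives a 12 Mp RGB image out: the spatial grid never changes, which is the sense in which there is interpolation but no upsizing.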
From: Martin Brown on 27 Apr 2010 02:51
Ray Fischer wrote:
> Martin Brown <|||email@example.com> wrote:
>> Ray Fischer wrote:
>>> Kennedy McEwen <rkm(a)kennedym.demon.co.uk> wrote:
>>>> Ray Fischer
>>>>> David J Taylor <david-taylor(a)blueyonder.co.uk> wrote:
>>>>>> "Ray Fischer" <rfischer(a)sonic.net> wrote in message
>>>>>>> David J Taylor <david-taylor(a)blueyonder.co.uk> wrote:
>>>>>>>>> however, the total number doesn't change. there are 12 million on the
>>>>>>>>> sensor and 12 million in the image, or however many the sensor has.
>>>>>>>> There are 12 million monochrome pixels on the sensor,
>>>>>>> No, there are 4.6 million pixels. Any other claim is a lie.
>>>>>>>> interpolated to 12
>>>>>>>> million colour pixels. The sensor only has 3 million red pixels, but
>>>>>>> Learn what "pixel" means.
>> I suggest that you learn what a pixel means - that would help.
> You're too much of a stupid asshole to be condescending.
Are you another sock puppet of the P&S troll?
>> The Bayer grid contains filtered photosensor sites. It takes the data
>> from several of these to construct any pixel in the final image.
> An outright lie.
You are *too* stupid for words. Clueless! I hope that my explanation is
clear to anyone who actually wants to learn about the Bayer mask.
The final image in a DSLR camera is always in RGB format. There might be
one somewhere that will do luminance as monochrome, but it still requires
data from multiple sensor sites to reconstruct that image.
> You're "confused" because you think that a pixel must have a certain
> amount of color information. In fact it need contain no color information
> at all.
>>>>>> This part of the thread had evolved to being about Bayer in a hypothetical
>>>>>> 12MP DSLR, and whether or not spatial interpolation was involved.
>>>>> A pixel is a picture element. You cannot split up the color
>>>>> components of a pixel in some arbitrary way and then claim that a
>>>>> single pixel is really three, four, or a thousand pixels.
>>>> So a Bayer CFA sensel, being an incomplete picture element,
>>> It's not incomplete. There's no such thing as an "incomplete" pixel.
>>> A monochrome pixel is still a pixel.
>> A monochrome pixel is a pixel, but only if the image can be interpreted
> No "if".
You really need to go back to basics. Whoever told you what you so
fervently believe is completely out of step with imaging conventions.
>> A Bayer mask image looks pretty strange if
> Non sequitur. The definition of pixel has nothing to do with any
> sensor type.
>> By any reasonable definition
> You're not reasonable.
>> There are 12M sensor sites,
>> In common computer imaging usage a pixel is generally taken to mean a
>> monochrome image of 8, 16 or 32 bits, or a colour pixel with either
>> paletted 8 bits, 16 bits (R,G,B = 5,6,5), 24 bits (8 bits for each of
>> R,G,B), or 48 bits (16 bits each for R,G,B).
> Tell us: Where did you get your degree in computer science? Where
> did you get your education in computer graphics? I got mine from
> Stanford and Cal Poly and from working in the graphics business.
You are not a good advert for the quality of their teaching then.
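For reference, the 16-bit (R,G,B = 5,6,5) packing in the quoted list of common pixel formats can be sketched as follows. This is a hypothetical illustration of the convention, and the function names are mine; it shows that "pixel" here names a storage format for one picture element, independent of any sensor layout.

```python
# Sketch of 16-bit R5G6B5 pixel packing: three colour components
# stored in a single 16-bit word.
def pack_rgb565(r, g, b):
    """Pack 8-bit-per-channel RGB into one 16-bit 5-6-5 word."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def unpack_rgb565(word):
    """Unpack, replicating the high bits to refill the 8-bit range."""
    r = (word >> 11) & 0x1F
    g = (word >> 5) & 0x3F
    b = word & 0x1F
    return ((r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2))
```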