MF Digital, myths or facts? - Collecting more photons



An MF sensor is larger than a full-frame sensor

The larger size of the sensor leads to at least two advantages: first, it collects more photons; second, it makes less demand on the lenses.


A larger sensor can potentially collect more photons. Each pixel works like a small capacitor. Light (which consists of photons) strikes the sensor and excites electrons, setting them free. The charge of the free electrons is collected in the capacitor. It can be assumed that each captured photon yields a single electron charge. The capacitor can hold a given number of electrons, and its capacitance is proportional to its size, so if we double the surface of the pixel it will have double the capacity. If we compare a full-frame sensor of 24x36 mm with an MF sensor of 36x48 mm, the latter has twice the area, so it will collect about twice the number of photons, assuming exposure is chosen so that the sensor is fully utilized. The number of electrons a pixel can hold is called the Full Well Capacity (FWC). (Note: the 36x48 mm size is purely hypothetical. Please also note that MF sensors come in many different sizes.)
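
As a small sketch of the arithmetic (using the purely hypothetical 36x48 mm size from above):

    # Sketch, in Python: area comparison between a 24x36 mm full-frame sensor
    # and a hypothetical 36x48 mm MF sensor.
    ff_area = 24 * 36   # 864 mm^2
    mf_area = 36 * 48   # 1728 mm^2

    print(mf_area / ff_area)   # 2.0 -> about twice the photons at the same exposure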


The number of photons collected depends on exposure. An MFD pixel may hold, say, 50000 electrons. So if we expose an image fully "to the right", the lightest parts of the image will receive 50000 electrons. Even if a pixel were fully dark it would still hold a few electrons. Reading out the electrons always involves some variation; technically this is called noise. Readout noise for the CCDs used in MFD backs is about 15 electron charges. This means that the least number of electrons needed for a discernible signal is around 15. This is called a signal to noise ratio of one, SNR = 1. FWC divided by readout noise is the dynamic range of the sensor. Our theoretical sensor would have a dynamic range of 50000/15 ≈ 3333. Normally we would translate this to exposure stops, which gives 11.7 stops. This says that the darkest part of the image that contains discernible information is 11.7 stops below maximum exposure. (Note: I changed readout noise from 12 to 15 in order to be more consistent with real data, thus DR is 11.7 stops while it was 12 stops in the older version of the article.)
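
As a minimal sketch of that calculation, assuming the hypothetical FWC of 50000 electrons and readout noise of 15 electrons:

    import math

    # Dynamic range in stops = log2(FWC / readout noise)
    fwc = 50000        # full well capacity, electrons (hypothetical)
    read_noise = 15    # readout noise, electrons (typical MFD CCD figure from the text)

    dr_linear = fwc / read_noise       # ~3333
    dr_stops = math.log2(dr_linear)    # ~11.7 stops
    print(f"DR: {dr_linear:.0f}:1 = {dr_stops:.1f} stops")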


Data on readout noise can be found in sensor specifications, but the readout electronics in the camera add some noise of their own. For instance, the data sheet for the Dalsa FTF9168C (60 MP sensor) gives a readout noise of 11 electrons. Sensorgen estimates the readout noise for the P65+ camera at 17.6 electron charges, while for the Pentax 645D it estimates 13.3 ( http://www.sensorgen.info/ ).


Obviously, there are two ways to improve DR: either make the pixels larger, thereby increasing the Full Well Capacity (FWC), or reduce readout noise. The only feasible way to increase the FWC is to make the sensor larger, even if some progress can probably be made in circuit design to improve the capacity of each pixel.


DSLRs using CMOS technology with on-chip AD conversion can reduce readout noise to just a few electrons, so these sensors can keep more detail in the darks.


Now, light comes in quanta, called photons, and there is a natural variation in the number of incident photons. So there will be a random variation of photons on each pixel. This is called shot noise. Shot noise is the major limiting factor in digital photography and it is a property of light. There is not much we can do about it.


Shot noise has a Poisson distribution, which has the nice property that the signal to noise ratio is the square root of the signal. So with 50000 photons in our example, the signal to noise ratio would be around 224. Now, let's assume that medium grey is three stops below saturation. Three stops corresponds to a factor of eight (2*2*2 = 8). So if we expose for saturation at the highlights we would have 6250 photons on a midtone pixel, giving an SNR of 79 (which is still good). Increasing ISO essentially means that we underexpose: doubling ISO means halving the number of photons. If we assume base ISO is 100, going to ISO 400 would reduce the number of photons per pixel to 12500 for highlights and 1562 for medium gray. So SNR in the grays would be about 39.5 (not so good). Since noise is the square root of the number of photons, the noise itself would also be around 39.5 photons per pixel.
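
The same numbers can be reproduced with a short sketch, assuming saturation at 50000 photons, medium grey three stops down, and base ISO 100 as above:

    import math

    # Shot noise: for a Poisson process, SNR = sqrt(number of photons).
    saturation = 50000   # photons at full exposure (hypothetical, as above)

    for iso in (100, 200, 400):
        highlights = saturation / (iso / 100)   # halve the photons per ISO doubling
        midtone = highlights / 8                # three stops below saturation
        print(f"ISO {iso}: midtone photons = {midtone:.0f}, SNR = {math.sqrt(midtone):.1f}")
    # ISO 100: 6250 photons, SNR ~79; ISO 400: 1562 photons, SNR ~39.5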


We mentioned earlier that readout noise with traditional readout is around 15 electrons per pixel. We use electrons and photons as interchangeable quantities here, so that means that at ISO 400 our theoretical sensor would still be dominated by shot noise (the natural variation of light) in medium gray. Readout noise would probably start to show up in the darkest areas.

So what does the reasoning above tell us?

  • There is something called shot noise, which is a property of light
  • A larger sensor will collect more photons and therefore have less shot noise
  • The darkest parts of the image may be dominated by readout noise (and will be dominated by readout noise at high ISO)
  • Dynamic range is FWC / readout noise


So, let's try to look at some real data:


Make             FWC (e-)   Readout Noise (e-)   SNR at 12.5%   DR
Nikon D800       44972      2.7                  75             14 EV
Phase One P65+   53019      17.6                 81             11.5 EV
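
The SNR and DR columns follow from FWC and readout noise just as before (SNR at 12.5% is the shot-noise SNR at a midtone three stops below saturation); a quick check in the same style:

    import math

    # Per-pixel figures from the table above (electrons).
    cameras = {
        "Nikon D800": (44972, 2.7),
        "Phase One P65+": (53019, 17.6),
    }

    for name, (fwc, read_noise) in cameras.items():
        snr_midtone = math.sqrt(fwc / 8)         # shot-noise SNR at 12.5% of FWC
        dr_stops = math.log2(fwc / read_noise)   # engineering DR in stops
        print(f"{name}: SNR ~{snr_midtone:.0f}, DR ~{dr_stops:.1f} EV")
    # Nikon D800: SNR ~75, DR ~14.0 EV
    # Phase One P65+: SNR ~81, DR ~11.6 EV (rounded to 11.5 in the table)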


The above data is per pixel. So what we can say is that the P65+ has a very small advantage in SNR in the grays, but the Nikon can go much farther down into the darks.


Now, the P65+ has 60 MP while the Nikon has 36 MP. If we print at a given size, any part of the image will hold more pixels on the P65+. So we can simply scale the FWC of the P65+ with the number of pixels; read noise will scale with the square root of the number of pixels. We can then see that the P65+ now has a real advantage in the grays. We could call this binning in software. Binning in software improves the grays but has less effect on the darks.


Make             FWC scaled (e-)   Readout Noise (e-)   SNR at 12.5%   DR
Nikon D800       44972             2.7                  75             14 EV
Phase One P65+   84000             22.7                 102            11.8 EV
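
A sketch of that scaling, under the assumption that signal scales with the pixel-count ratio and read noise with its square root (a straight 60/36 scaling gives a FWC of roughly 88000 rather than the 84000 used in the table, so the exact factor used there appears to be slightly different):

    import math

    # Software-binning sketch: scale the P65+ per-pixel figures to the D800's
    # pixel count. Assumption: signal adds linearly, read noise adds in quadrature.
    fwc, read_noise = 53019, 17.6   # P65+ per-pixel values from the first table
    pixel_ratio = 60 / 36           # 60 MP vs 36 MP

    fwc_scaled = fwc * pixel_ratio                       # ~88000
    noise_scaled = read_noise * math.sqrt(pixel_ratio)   # ~22.7

    snr_midtone = math.sqrt(fwc_scaled / 8)              # ~105
    dr_stops = math.log2(fwc_scaled / noise_scaled)      # ~11.9 EV
    print(f"SNR ~{snr_midtone:.0f}, DR ~{dr_stops:.1f} EV")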


Conclusion on noise in highlights & shadows

  • Unless some dramatic technological improvement dramatically increases the FWC of small sensors, larger sensors will have an advantage in midtone noise.
  • Unless some dramatic technological improvement reduces the readout noise of large sensors, modern CMOS will have an advantage in the darks.

On the CMOS side, it seems that Sony has a significant lead in keeping readout noise down. Canon uses a different technology that does not allow for low readout noise. Nikon uses Sony sensors for some cameras, like the D800; other Nikon cameras use sensors developed by Nikon and produced by an independent foundry.

The weak point of Canon's cameras is DR at low ISO, that is, there is significant noise in the darks. That noise is kept down when increasing ISO by applying preamplification before the analogue to digital conversion. So Canon cameras perform like other cameras at high ISO but lose out on shadow noise at base ISO.
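
To illustrate the last point, here is a toy model with purely hypothetical numbers: some read noise arises before the ISO gain stage (on the sensor) and some after it (in the downstream ADC electronics), and noise added after the gain is divided by the gain when referred back to the sensor.

    import math

    # Toy model (illustrative numbers only, not measured data):
    pre_gain_noise = 3.0     # electrons, added before amplification (hypothetical)
    post_gain_noise = 25.0   # electrons at unity gain, added after amplification (hypothetical)

    for iso in (100, 400, 1600, 6400):
        gain = iso / 100
        total = math.sqrt(pre_gain_noise**2 + (post_gain_noise / gain)**2)
        print(f"ISO {iso:5d}: input-referred read noise ~{total:.1f} e-")
    # At base ISO the downstream noise dominates (poor DR in the darks);
    # at high ISO it is amplified away and only the on-sensor noise remains.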



Last Updated on Thursday, 13 December 2012 11:56  
