
MF Digital, myths or facts?


Note: This is an article in progress.




Many photographers find MF digital superior to smaller formats. Many explanations are given; some are obviously right and some are obviously wrong. In this article I'll look at some of the arguments and try to sort out whether they are fact or myth.


Some of the statements often encountered:


  • MFD has better tonality
  • MFD has better color
  • MFD has better DR
  • MFD has 16 bits
  • MFDs have CCD sensors and DSLRs have CMOS
  • CMOS sensors have AA filtering and CCDs do not
  • CCD has better color than CMOS





An MF sensor is larger than a full-frame sensor

The larger size of the sensor leads to at least two advantages: first, it collects more photons; second, it makes less demand on the lenses.


A larger sensor can potentially collect more photons. Each pixel works like a small capacitor. Light (which consists of photons) strikes the sensor and excites electrons, setting them free. The charge from the free electrons is collected in the capacitor. It can be assumed that each captured photon yields a single electron charge. The capacitor can hold a given number of electrons, and its capacitance is proportional to its size, so if we double the surface of the pixel it will have twice the capacity. If we compare a full-frame sensor of 24x36 mm with an MF sensor of 36x48 mm, the latter has twice the area, so it will collect about twice the number of photons, assuming exposure is chosen so the sensor is fully utilized. The number of electrons a pixel can hold is called the Full Well Capacity (FWC). (Note: the 36x48 mm size is purely hypothetical. Please also note that MF sensors come in many different sizes.)


The number of photons collected depends on exposure. An MFD pixel may hold, say, 50000 electrons, so if we expose an image fully "to the right" the lightest parts of the image will receive 50000 electrons. Even if a pixel were fully dark it would still hold a few electrons. Reading out the electrons always involves some variation; technically this is called noise. Readout noise for the CCDs used in MF digital is about 15 electron charges. This means that the least number of electrons needed for a discernible signal is around 15; this is called a signal to noise ratio of one, SNR = 1. FWC divided by readout noise is the dynamic range of the sensor. Our theoretical sensor would have a maximum signal to noise ratio of 50000/15 = 3333. Normally we would translate this to exposure stops, which gives 11.7 stops. This says that the darkest part of the image that contains discernible information is 11.7 stops below maximum exposure. (Note: I changed readout noise from 12 to 15 in order to be more consistent with real data, thus DR is 11.7 stops while it was 12 stops in the older version of the article.)
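As a quick sketch, the arithmetic can be written out in a few lines of Python, using the hypothetical figures from the text:

    import math

    def dynamic_range_stops(fwc, read_noise):
        # dynamic range in EV (stops): base-2 logarithm of FWC over readout noise
        return math.log2(fwc / read_noise)

    # hypothetical MFD sensor from the text: 50000 e- FWC, 15 e- readout noise
    print(dynamic_range_stops(50_000, 15))   # ~11.7 stops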


Data on readout noise can be found in sensor specifications, but the readout electronics in the camera add some noise of their own. For instance, the data sheet for the Dalsa FTF9168C (60 MP sensor) gives a readout noise of 11 electrons. Sensorgen estimates the readout noise of the P65+ at 17.6 electron charges and that of the Pentax 645D at 13.3 ( http://www.sensorgen.info/ ).


Obviously, there are two ways to improve DR: either make the pixels larger, thereby increasing the Full Well Capacity (FWC), or reduce readout noise. The only feasible way to increase the FWC is to make the sensor larger, even if some progress can probably be made in circuit design to improve the capacity of each pixel.


DSLRs using CMOS technology with on-chip AD conversion can reduce readout noise to just a few electrons, so these sensors can keep more detail in the darks.


Now, light comes in quanta, called photons, and there is a natural variation in the number of incident photons, so there will be a random variation of photons on each pixel. This is called shot noise. Shot noise is the major limiting factor in digital photography, and it is a property of light; there is not much we can do about it.


Shot noise has a Poisson distribution, which has the nice property that the signal to noise ratio is the square root of the signal. So with 50000 photons in our example, the signal to noise ratio would be around 224. Now, let's assume that medium grey is three stops below saturation. Three stops corresponds to a factor of eight (2*2*2 = 8). So if we expose for saturation in the highlights we would have 6250 photons on a midtone pixel, giving an SNR of 79 (which is still good). Increasing ISO essentially means that we underexpose; doubling the ISO means halving the number of photons. If we assume base ISO is 100, going to ISO 400 would reduce the number of photons per pixel to 12500 for the highlights and 1562 for medium gray. Since noise is the square root of the number of photons, SNR in the grays would be around 39.5 (not so good).
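A minimal sketch of the shot noise arithmetic above, again using the hypothetical 50000 electron FWC:

    import math

    def shot_noise_snr(photons):
        # shot noise is Poisson distributed: SNR is the square root of the signal
        return math.sqrt(photons)

    fwc = 50_000                         # saturation level, photons
    midgray = fwc / 8                    # three stops below saturation
    print(shot_noise_snr(fwc))           # ~224 at saturation
    print(shot_noise_snr(midgray))       # ~79 at medium gray, base ISO
    # raising ISO underexposes: ISO 400 collects a quarter of the ISO 100 photons
    print(shot_noise_snr(midgray / 4))   # ~39.5 at medium gray, ISO 400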


We mentioned earlier that readout noise with traditional readout is around 15 electrons/pixel. We use electrons and photons as interchangeable quantities here, so at ISO 400 our theoretical sensor would still be dominated by shot noise (the natural variation of light) in medium gray. Readout noise would probably start to show up in the darkest areas.

So what does the reasoning above tell us?

  • There is something called shot noise, which is a property of light
  • A larger sensor will collect more photons and therefore have less shot noise
  • The darkest parts of the image may be dominated by readout noise (and will be dominated by readout noise at high ISO)
  • Dynamic range is FWC / readout noise


So, let's look at some real data:


Make             FWC     Readout Noise   SNR 12.5%   DR
Nikon D800       44972   2.7             75          14 EV
Phase One P65+   53019   17.6            81          11.5 EV


The above data is per pixel. So what we can say is that the P65+ has a very small advantage in SNR in the grays, but the Nikon can go much farther down into the darks.


Now, the P65+ has 60 MP while the Nikon has 36 MP. If we print at a given size, any part of the image will hold more pixels on the P65+. So we can simply scale the FWC of the P65+ with the number of pixels; read noise will scale with the square root of the number of pixels. Now we can see that the P65+ has a real advantage in the grays. We could call this binning in software. Binning in software improves the grays but has less effect on the darks.


Make             FWC (scaled)   Readout Noise   SNR 12.5%   DR
Nikon D800       44972          2.7             75          14 EV
Phase One P65+   84000          22.7            102         11.8 EV
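The scaled table can be reproduced along these lines. This is a sketch of the method rather than of the exact table values; the figures come out slightly differently depending on how the pixel-count ratio is rounded:

    import math

    def bin_in_software(fwc, read_noise, pixel_ratio):
        # signal (FWC) scales with the number of pixels; readout noise adds in
        # quadrature, so it scales with the square root of the number of pixels
        fwc_s = fwc * pixel_ratio
        rn_s = read_noise * math.sqrt(pixel_ratio)
        snr_midgray = math.sqrt(fwc_s / 8)   # shot noise SNR at 12.5% gray
        dr = math.log2(fwc_s / rn_s)         # dynamic range in EV
        return fwc_s, rn_s, snr_midgray, dr

    # P65+ (60 MP) scaled to the D800's 36 MP output size
    print(bin_in_software(53_019, 17.6, 60 / 36))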


Conclusion on noise in highlights & shadows

  • Unless some dramatic improvement happens in technology, dramatically increasing the FWC on small sensors, larger sensors will have an advantage in midtone noise.
  • Unless some dramatic improvement happens in technology, reducing the readout noise of large sensors, modern CMOS will have an advantage in the darks.
On the CMOS side, it seems that Sony has a significant lead in keeping readout noise down. Canon uses a different technology that does not allow for low readout noise. Nikon uses Sony sensors for some cameras, like the D800; other Nikon cameras use sensors developed by Nikon and produced by an independent foundry.
The weak point of Canon's cameras is DR at low ISO; that is, there is significant noise in the darks. That noise is kept down when increasing ISO by applying preamplification before analog to digital conversion. So Canons perform like other cameras at high ISO but lose out on shadow noise at base ISO.


Comparing the top of the line MF back with the top of the line DSLR

The images below are used by the kind permission of Tim Ashley, who published them on getDPI in this article.

The images can be downloaded here; please only download if you are really interested, we don't want to blow Tim Ashley's Dropbox limit!
The first image is an actual-pixels view of shadow detail from both the IQ180 (top) and the D800 (bottom). In my view it is pretty clear that the IQ180 is a bit noisier.

Image reduced to D800 height


Findings

The IQ180 image has noisier shadows compared with the Nikon D800E image. The difference is visible in both actual pixels view and with the IQ180 image downscaled to D800E image dimensions.


It is possible that better results would have been achieved with Capture One, but I'm confident the comparison is relevant as far as the raw image data is concerned.



Is Capture One better on IQ180 files?



I made a test with C1 version 7 (evaluation). I don't know much about C1. In LR 4.2 I increased sharpening to the "Landscape" preset and increased luminance noise reduction to 26; these settings were chosen entirely arbitrarily. The images below show the same detail, scaled to Nikon D800 height; the first image is normal while the other is brightened so shadow detail is more obvious. The LR 4.2 processing on the Nikon and IQ180 images is the same.



The brightened image is below:



My take is that Capture One does a better job on the IQ180 image, but the Nikon image still holds its own. I would be inclined to say that the Nikon image may be less noisy, even when the IQ180 has the advantage of a proprietary raw processor.


Please note that the raw images are available for download, so you can do your own tests!



A larger format is less demanding of the lens

A lens transfers an image of the subject to the sensor. The image will be deformed in different ways: small details will be smeared out. If we think of a small point of light, it will be spread over a larger area. Imaging is about light, and darkness is lack of light, so it is always light that diffuses into darkness; this happens to be a law of nature.
For small details this diffusion leads to loss of sharpness: contours will be fuzzy instead of sharp. There is a measurable quantity for this diffusion, called the Modulation Transfer Function (MTF). MTF says how much of the contrast between two nearby objects will be lost. MTF is essentially 100% for large patterns and goes to zero for small patterns. MTF is a very practical measure; lens designers produce lots of MTF data, for different light, different focus and so on. It probably takes dozens of MTF graphs to describe a single lens.
The example below was made with two DSLRs having different sensor sizes but using the same lens. Both cameras were 24 MP; one was APS-C and the other full frame. MTF shows how well a lens can transfer contrast. The curves shown here illustrate how much contrast the lens/sensor combination can transfer at different frequencies.

What we can see in the images is that the sensor on the left can transfer more contrast at each spatial frequency (cycles/mm). For instance, at 40 lp/mm it would transfer about 58% on the APS-C camera, while on the full frame it would transfer around 50%. But because the APS-C sensor is smaller, the relevant comparison is 40 lp/mm on full frame against 60 lp/mm on the cropped frame. Here we see that the cropped sensor would transfer about 36% contrast at 60 lp/mm, compared with 50% contrast at 40 lp/mm on the full frame. So, using the same lens, the full-frame camera has a significant advantage in sharpness.
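The conversion between the formats is simple: the same detail in the final image sits at a spatial frequency on the sensor that is higher by the crop factor. A small sketch:

    def equivalent_frequency(freq_ff_lpmm, crop_factor):
        # the same print detail sits at a higher frequency on a smaller sensor
        return freq_ff_lpmm * crop_factor

    print(equivalent_frequency(40, 1.5))   # 40 lp/mm on FF ~ 60 lp/mm on APS-C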


Now, the upper dotted line shows the diffraction limit. An optically perfect lens would have an MTF matching the diffraction limit; on APS-C such a lens would reach around 75% MTF. It is thus theoretically possible to compensate for the format size by building a better lens, but in practice that would be very difficult.
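For reference, the diffraction-limited MTF of an ideal lens with a circular aperture can be computed with the standard formula. The wavelength (green light, 550 nm) and the f/5.6, 60 lp/mm example are my assumptions, not values taken from the graphs:

    import math

    def diffraction_mtf(freq_lpmm, f_number, wavelength_mm=550e-6):
        # cutoff frequency for incoherent light: 1 / (wavelength * f-number)
        cutoff = 1.0 / (wavelength_mm * f_number)
        s = freq_lpmm / cutoff
        if s >= 1.0:
            return 0.0
        return (2.0 / math.pi) * (math.acos(s) - s * math.sqrt(1.0 - s * s))

    print(diffraction_mtf(60, 5.6))   # ~0.77, in the region of the ~75% above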


The images below demonstrate this difference:

With no sharpening (full frame to the left and crop frame to the right):


Or properly sharpened (full frame to the left, crop frame to the right):



Sharpening has a major impact on digital images. I have not looked into this, at least for now.


Real world MTF comparison based on test images from Imaging Resource:


The image samples above were made using different small-format Sony cameras with essentially the same lens. Below is an MTF-based comparison of the Pentax 645D and the Nikon D800E. These MTF curves are based on test images shot by Imaging Resource. In this case the Pentax 645D was tested with its standard lens, while the Sigma 70/2.8 Macro was used on the D800E.


Carefully comparing the MTF curves below, I would say that the Pentax 645D does have a very small advantage, but the difference is more like an Imperial Quibble (that is, small enough not to be significant).





The Really Right Stuff

The Really Right Stuff may be a top-of-the-line MFDB, like a Phase One IQ180, on a very well built technical camera like the Alpa (details coming later), with a Rodenstock Apo Digitar HR. You can buy a Mercedes, or two, for the price, but a Merc shoots no pictures.
The images below were taken by Marc McCalmont, who kindly allowed me to use them in this article. Marc has also shot a comparison image with his Nikon D800E and a Leitax-adapted Leica lens.
The first image shows the Nikon image scaled to the width of the IQ180, both at actual pixels:
The IQ180 image is on the left. It is very obvious that the Nikon image on the right falls apart. The resolution advantage of the IQ180 is real!
The next sample is the IQ180 downsampled to the width of the Nikon image:
In my view, the IQ180 image still holds better detail.
A high-end back still has a definitive edge in detail compared with the best DSLR when both are optimally used. No great surprise, I think.
Tim Parkin, the nice photographer behind On Landscape, was friendly enough to allow me to use some of his test images from the Big Camera Comparison article.


The images I got were an IQ180 sample with a 40mm Rodenstock Digaron W at f/5.6 and a Nikon D800E with a Zeiss 25/2 at f/4.0. The images were developed in LR 4.2 with sharpening set to zero. Even without sharpening, that Zeiss was hideously sharp! The MTF plots are shown below:


In this case the horizontal axis is in LW/PH (Line Widths per Picture Height). This measure can be compared directly between formats. If we look at 4000 LW/PH, which is pretty fine detail, we see that the IQ180 has an MTF around 55% while the D800E yields perhaps 45%. The right-hand edge of the plot is the Nyquist limit. The Nikon D800E/Zeiss combo has an MTF of 27% at Nyquist, which leads to excessive aliasing and color artifacts. As usual, Tim Parkin has a couple of excellent articles on the issue here and here.
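LW/PH relates directly to the pixel count: at Nyquist there is one line width per pixel row, so the Nyquist limit in LW/PH equals the number of pixel rows. A small sketch; the IQ180 row count is my assumption:

    def nyquist_lwph(pixel_rows):
        # Nyquist = 0.5 cycles/pixel = 1 line width per pixel row
        return pixel_rows

    print(nyquist_lwph(4912))   # Nikon D800/D800E: 4912 pixel rows
    print(nyquist_lwph(7760))   # IQ180: ~7760 pixel rows (assumed)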


What we see here indicates that a high-resolving MF sensor with an excellent lens will outperform even the Nikon D800E with an excellent lens, the same observation we can draw from Marc McCalmont's images shown before.




OLP filtering

(This section is under construction)
In general, it could be said that MF backs don't have an optical low pass (OLP) filter, also known as an AA (anti-aliasing) filter, while DSLRs do have OLP filtering. There are exceptions to this "rule". For instance:
  • The Mamiya ZD had an optional OLP filter
  • The Leica M series (M8, M9, ME, M), the Nikon D800E, Kodak ?? and the Sigma SD series lack OLP filters.
What is an OLP filter? The OLP filter is intended to suppress fine-detail contrast that the sensor cannot resolve. If the resolution of the lens exceeds the resolution of the sensor, fake resolution will arise. This fake resolution can actually enhance the visual impression of the image, but it is nevertheless fake.
With Bayer sensors the problem is compounded, as the artifacts often show up as color moiré.
To reduce these problems most cameras have a so-called OLP filter. This filter normally consists of two wafers of birefringent material, normally lithium niobate. These crystals have a different index of refraction for the polarized components of light, so light passing through the two wafers is split into four parallel beams.
Normally the thickness of these wafers is chosen so they reduce color moiré to reasonable levels. As a consequence they also reduce fine-detail contrast. Much of that contrast can be regained through adequate sharpening.
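What aliasing does can also be shown numerically: a frequency above Nyquist, once sampled, is indistinguishable from a lower one. A minimal sketch with made-up numbers:

    import math

    fs = 100.0                # samples per mm, so Nyquist is 50 cycles/mm
    f_real = 70.0             # pattern above Nyquist
    f_alias = fs - f_real     # the fake 30 cycles/mm pattern it turns into
    real = [math.sin(2 * math.pi * f_real * n / fs) for n in range(8)]
    fake = [-math.sin(2 * math.pi * f_alias * n / fs) for n in range(8)]
    # the sampled values are identical, so no processing can tell them apart
    print(all(abs(a - b) < 1e-9 for a, b in zip(real, fake)))   # True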
The image on the left below is the best example I have seen of aliasing. If you look carefully, the line pattern disappears at the 18 mark and returns at 20. At the 24 mark we still have a line pattern, but it is obviously fake as we only see three line pairs. On the other hand, if we don't know about the test target the image looks natural, and the fake resolution can actually add to the perception of sharpness. This image comes from DPReview's test of the Sigma DP2, one of the few cameras having a "Foveon" sensor. The image on the right, coming from a traditional "Bayer-filtered" camera, has some color artifacts in spite of OLP filtering.
Tim Parkin has investigated the difference between the D800 and the D800E in this excellent article. He essentially found that the studio test images from the D800E could take essentially no sharpening, while the D800 images sharpened well in LR/ACR (Amount = 100, Radius = 0.6, Detail = 0). I measured MTF on the unsharpened D800E image and the sharpened D800 image.
The sharpened D800 image is below:
Here we can see that the sharpened D800 image is actually sharper than the unsharpened D800E image. On the other hand, a different subject could allow more sharpening on the D800E image. Sharpening is, to a significant extent, a matter of taste.
The dotted lines in the figures above show the theoretical MTF of a perfect lens at f/4.
Interestingly enough, the IQ180 image shows much fewer color artifacts. This image was sharpened similarly to the Nikon D800 image. The unsharpened image would have around 17% MTF at Nyquist. The dotted line here indicates the diffraction limit.
If we check the slanted-edge test target for aliasing artifacts, we can see that the Nikon D800 shows the least color artifacts, followed by the IQ180, with the D800E being worst.
IQ180 below:
Nikon D800E below:
My take on the question? The OLP filter is quite expensive and also has a significant thickness; it would probably simply not fit in the M-series Leicas. On the other hand, I don't think DSLR vendors use OLP filters for pleasure. A few DSLRs have lacked OLP filtering and had problems with aliasing artifacts; Kodak DSLRs and the Nikon D70 come to mind. According to rumor, Sony tested the Alpha 99 without an AA filter and found that there were a lot of moiré issues.
Reducing pixel pitch and stopping down also reduce aliasing. Aliasing is mostly a problem with very good lenses at their best aperture. I guess that MF cameras are often used stopped down. Still, it seems that under studio conditions, shooting textiles, color moiré (the most visible form of aliasing artifacts) is quite problematic.


The 16 bit issue, truth or myth?

One of the claims often made for MF is the use of 16 bits. The simple truth is that this is pure nonsense. The number of useful bits is the same as the DR. As we have seen above, DR for MFD is around 11.5 EV, while for DSLRs it can approach 14 EV. It is quite possible that MF backs use a 16-bit pipeline. This may be due to the fact that standard components are normally 8-bit, 16-bit, 32-bit or 64-bit designs; a 12-bit pipeline cannot be built with standard components. DSLR vendors use ASICs (Application Specific Integrated Circuits) optimized for the available data. MFDBs use a 16-bit pipeline to transport 11-12 bits worth of data. So talking about 16-bitness is a marketing truth, in common speak known as a lie.
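The claim that 11-12 bits of real data travel in the 16-bit pipeline can be checked directly: the number of distinguishable levels is FWC divided by readout noise, and the bit depth needed is its base-2 logarithm. Using the P65+ figures from earlier:

    import math

    fwc, read_noise = 53_019, 17.6
    levels = fwc / read_noise               # distinguishable levels
    print(levels)                           # ~3012
    print(math.ceil(math.log2(levels)))     # 12 bits are enough to carry them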



Do MFDBs have better DR than DSLRs?

The 16-bit issue is related to DR. DR essentially describes the distance from the maximum undistorted signal to the noise floor. In photography it is practical to measure it in EV (f-stops): DR = log2(FWC / ReadoutNoise).


Here, FWC is full well capacity, the number of electron charges each pixel can hold, and readout noise is measured in electron charges.


We have seen before that DR for the best DSLRs approaches 14 EV while it is more like 11.5 EV for the leading digital back. The reason is that today's best CMOS sensors have incredibly low readout noise. On the other hand, Canon cameras especially fail to make use of the full dynamic range of their sensors, due to off-chip AD conversion.


Dynamic range is obvious if we expose to the right (that is, fully utilize the FWC) and look at shadow detail. A sensor with a wide dynamic range will be able to produce good detail in the darkest darks. If we reduce exposure, hiking up the ISO, we would still be able to produce high quality images. A sensor with excellent dynamic range would excel at high ISO.


Now, Canons have a weakness in dynamic range but excel at high ISO. How do they do it? The answer seems to be that Canon's CMOS sensors have excellent dynamic range, but the camera electronics cannot fully exploit it. So when a high ISO is set, the signal is preamplified so that the effect of readout noise is reduced, but the FWC can no longer be fully utilized.


Another mythical property of MFDBs is better handling of highlight clipping. I have significant doubts in this regard. Both CCD and CMOS are linear devices and they should behave pretty much identically when going into saturation.


I'm not really sure about the importance of DR. It is important for sure, but I don't see it as the most important parameter.








MFDBs have better color, truth or myth?

Tim Parkin has tested a lot of MFDBs and also DSLRs. What he says is that MFDBs have different colors. He finds that the IQ180 is near the top and that he wouldn't touch a P45 with a "barge pole".
This is a P45 image:
While this is from the IQ180:
Tim regards the Sony Alpha 900 as the one with the best color, see below:
Color is a complex issue that is more about perception than about quantifiable data. All raw converters have widely variable and adjustable color rendition. The figures presented below may be of some interest, but I don't think there is sufficient information to draw any conclusion on whether MF having better color is myth or reality.
With the exception of the Foveon (owned by Sigma), all pixels are monochrome. Color is added by a color filter array (CFA) in front of the sensor. So color has nothing to do with sensor design. The raw data contains monochrome data for each pixel and also information on the placement of colors in the CFA. Now, the CFA normally has a "Bayer pattern" with RGBG pixels. The spectral response of the R, G and B pixels may vary. The raw file also contains information on how to convert the monochrome information into color.
The CFA implementation may vary between vendors. It is feasible that some vendors optimize the CFA design for high ISO while others opt for good color separation. On the other hand, a CFA optimized for good color separation may be bad at separating subtle color nuances. Compare this to Velvia in the old days: Velvia had excellent color separation and low ISO, but was not very good at properly handling subtle colors.
The two spectral diagrams below are from an excellent article on camera color from DxOMark, available here.
To the left we have the Canon EOS 500D, to the right the Nikon D5000.


The image below shows the color response of the Dalsa FTF9168C, which may or may not be used in the IQ160.


On the Dalsa chip the red channel is the most sensitive one, but the overlap between red and green is more similar to the Nikon camera shown above than to the Canon. Neither is even close to the sensitivity of the human eye (linked from Norman Koren's webpage):


So, let's just conclude that sensors are different. In human vision, color really arises in the brain as a result of complex processing. In digital cameras, color is created in raw conversion and the subsequent processing.


Good color, accurate color and pleasant color


We don't know how many colors human vision can tell apart; I have seen figures between 200 000 and 20 000 000. The normal standard in the photographic industry is the X-Rite ColorChecker, which contains 24 color patches. We can check how two cameras reproduce these 24 colors:


In the figures below the outer frame is camera color, while the inner frame is the reference color, with and without luminance correction.

Hasselblad H4DIII50 below:



Nikon D800E below


Another representation is the a*b* diagram given below:



With the corresponding data for the Nikon D800E below


The measured data above actually indicates that the D800E is better at reproducing a ColorChecker card under a given set of conditions. The main difference between the Hassy and the D800E was that the Hassy image processed in LR 4.2 was significantly oversaturated; when processing in LR 4.2 I pulled back 13 units of saturation on the Hassy and 4 units on the Nikon. Delta E is about half on the Nikon.


I have no first-hand experience with MF; what I write is a drill-down based on reading and looking at files available on the net. Still, I have seen a couple of direct comparisons of digital MF and DSLRs and got the impression that the digital MF camera had better color. This thread on the LuLa forums is a good example.




Flexibility

Once you have a digital back you can put it on different kinds of equipment, like medium format SLRs, technical cameras and view cameras. So it is a very flexible device.


Summary and conclusions

It is quite possible that MF can offer better image quality than smaller formats. This does not depend on some magic properties of the sensors, CCD or CMOS, or even 16-bitness. The obvious reason for MF having advantages is the larger size of the sensor. The larger size of the sensor helps in:
  • Capturing more photons
  • Making less demand on the lens
Regarding color reproduction, I don't know. I have seen direct comparisons where I would argue that MF has better color, but that may depend on a lot of factors.

Are large pixels better?

The way I see it, small pixels are preferable to large pixels in most respects.
Regarding noise, the only factor that really matters is the total number of photons collected. Smaller pixels collect fewer photons each, but there are more pixels. It matters very little whether you collect 24 000 000 x 1000 photons or 6 000 000 x 4000 photons; you still end up with 24 000 000 000 photons. If you print the image at 8x10" at 360 PPI you end up with about 2300 photons per print pixel in both cases.

Once the print size is fixed, photons per sensor pixel is irrelevant.
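The arithmetic above, spelled out (the 8x10" print at 360 PPI is the example used in the text):

    total_photons = 24_000_000 * 1_000      # identical to 6_000_000 * 4_000
    print_pixels = (8 * 360) * (10 * 360)   # 8x10 inch print at 360 PPI
    print(total_photons / print_pixels)     # ~2315 photons per print pixel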

Pixel size has some relevance to DR, but none at all with regard to shot noise.

Now, the above reasoning is not exactly correct, as there are electronic components and wiring associated with each pixel. Making the pixels smaller means that more of the chip estate will be used for wiring and components. The optimal size of pixels shrinks when narrower design rules are used. Here is a good article on the issue.

Smaller pixels allow for higher resolution, and they will also reduce aliasing artifacts. A sensor with smaller pixels has less need of OLP filtering, will show fewer jaggies, and will respond better to both interpolation and deconvolution type sharpening.

Noise characteristics are much less dependent on pixel size than commonly believed; here is an excellent article discussing the issue.
Phase One has a technology called Sensor+ where four pixels are binned into one at high ISO. So the IQ180 goes from small pixels to large pixels when increasing ISO. The DxO data shows this quite clearly.
In the data normalized for a given print size, Sensor+ gives a small increase in DR.
On "tonal range", which is dominated by shot noise, the effect of increasing pixel size is negligible.


Read also

More will be coming...


Experts' comments

Doug Peterson, guru of Medium Format at Digital Transitions, responded by saying:

"It is often too easy to analyze specific components of a system and lose the forest for the trees. No manufacturer selects any component based on it's individual spec, but rather how it will contribute to the image quality chain, speed, size, weight, power drain, and production cost of the entire system. 

For instance the selection of a specific A/D convertor is not made based on a label of "8 bit" or "16 bit" but rather for how well, how quickly, and how reliably it converts from analog to digital and how good the resulting digital file is. Marketing it as a "16-bit product" is short hand for referring to great tonality and elasticity-under-adjustment that the resulting raw files have.

Another component in this image quality chain is the software. The team all works together: Team Phase One engineer that lead the team that designed the Dalsa IQ180 sensor, the Team P1 engineer who lead the team that selected the a/d convertor and other electronics (including the very well performing dark-frame calibration hardware), the Team P1 engineer who is head of the team that handles color rendering and profiling, and the Team P1 engineers who work on the underlying algorithms of Capture One to pull the most detail with the least noise from the raw file. These team members live and breath image quality, and work together - their decisions for any given component in the image quality chain are deeply guided by this team work. 

For instance the dark frame calibration technology is designed with the sensor, a/d convertor, and raw processing algorithms in mind. This will be most glaringly obvious when opening a long exposure in both Capture One (which takes advantage of the dark frame data) and ACR (which does not) but the difference is present in all images. 

There are many, many aspects of image quality which cannot be fully wrapped into a single numerical value. Dynamic range for instance is very poorly represented by a simple statement of "XX stops." If a detail in the shadows can be seen, but is rendered in an ugly way - say for instance with blob-like (low frequency) color noise, or with clumpy or rectangular noise artifacts then it is not photographically useful. If a highlight can be recovered but it has odd stepping patterns or poor color accuracy it is of minimal photographic use. In this regard there is no single answer for "how many stops of dynamic range does this camera have" when the motivation for the question is photographic and not scientific - no matter what manufacturers or testing agencies like dXo would have you believe.

Noise, likewise, cannot be quantified entirely in one numerical value. Nor can it be measured in isolation from the effect of the raw processor. Open an ISO3200 file from an IQ160 in Capture One 6 and Capture One 7 (as we have illustrated on our Phase One Facebook page) and you will find a drastically more photographically useful image in Capture One 7. There is less noise, it is more easily removed without compromising detail, and the noise that is present is "pretty" (gaussian in nature, evenly distributed, without signs of clumping or rectangular blockiness). Notably the improvement in processing for Phase/Leaf files in C1v7 vs C1v6 is greater than the (still notable) improvement for dSLR files. This is because the software team can consult directly with the guys who make the back (and literally design the sensor) and have taken enormous effort at making the end-result of the image quality chain as good as possible.

When analyzing camera systems it makes sense to analyze the entire chain as a whole. I genuinely believe the IQ180 sensor, when coupled with Capture One and a good set of lenses, is the best system on the market for the purpose of producing raw files for photographic use (with the standard caveats that other cameras will exceed it for longer exposures and very high ISOs)."


Tim Parkin of On Landscape sent this comment, mostly regarding color:

Hi Erik, Just taken a look at the article and the section on colour probably needs something explaining that colour accuracy with a colourchecker card is far from the main concern in terms of getting accurate colour. The biggest issue is metameric failure of the dyes used in the bayer array. I've done some analysis of this and looked into the visual quality of the cameras in the big camera comparison (feel free to link to the images directly if you like)

http://static.timparkin.co.uk/static/tmp/cameratest-2/800px.html

Here's the article talking about colour

http://www.onlandscape.co.uk/2012/02/the-myth-of-universal-colour/

In my opinion the quality of MFDB's varies hugely and is something I'll be testing more closely in the future. In our Big Camera Comparison I would say that we have a split from worst to best in the DSLR side with Canon generally pretty poor and Sony typically very good. With MFDBs the split was with the P45 being atrocious and the IQ180 being very nice indeed. However the best of all was the Sony A900 with the IQ180 and D800 close behind.. I wouldn't touch a P45 with a barge pole though (at least for landscape). Tim



Thanks!

This article would not be possible without the folks sharing experience and images on the net. The article uses images from:
Tim Ashley (IQ180 and Nikon D800E shadow detail)
Marc McCalmont (The Really Right Stuff)
Alex Koloskov (Color Checker shots of Hasselblad H4D40 vs Nikon D800E)
Imaging Resource (Test images from Nikon D800E and Pentax 645D used to generate MTF data using Imatest)
Norman Koren, the founder of Imatest, has many excellent articles on image quality and MTF.
Tim Parkin, the gentleman behind On Landscape has been most helpful providing insights, comments and slanted edge images for MTF calculations.



Last Updated on Thursday, 13 December 2012 11:56  
