Making Sense of Sensors - The pixel war is becoming more and more confusing. Ian Andrews sheds some light on the difference between image sensors and what you can expect from each.
Over the past few years there have been myriad advances in digital camera sensor technology, with new developments still being announced at regular intervals. On top of that, sensor sizes from different manufacturers vary from absolutely tiny through to offerings for medium-format backs and beyond, with pixel counts seeming to climb weekly. Each manufacturer will supply you with reams of blurb explaining why theirs is better than the rest, why more is better, why bigger is better, and so on. Here, we present a layman's overview of the types available and the advantages and disadvantages of each to help you choose.
Types of image sensor
The two basic types of image sensor are CCD and CMOS. Within these two are various technologies developed by different companies. Digital camera manufacturers produce products with both systems and have even been known to market products with both systems in the same camera! So how are we, the consumers, supposed to make a choice?
CCD (Charge Coupled Device) was the first system to become commercially viable and has been around since the 1960s. Continuous development over that time has brought it to the standard we see today. Many believe that, with the law of diminishing returns, CCD has reached its peak and further large strides in its development are unlikely. Having said that, it is still the most widely used type of sensor in digital cameras today and, with big companies still using and working on CCD chip development, who is to say that the technology is old hat? The system is inherently less noisy than its major competitor, though at the expense of being more power hungry.
The younger contender is CMOS (Complementary Metal Oxide Semiconductor). Despite being the newer idea, CMOS has been around for a number of years and its development has been gathering pace. One big advantage of CMOS is a much lower power requirement, but a disadvantage is that it is inherently noisier than CCD. In recent years, manufacturers have paid CMOS more attention and have overcome this, in some cases quite spectacularly. Many are now turning their full attention to CMOS in their higher-end cameras. The lower power consumption and greater processing capabilities, it seems, are winning the day.
How many million pixels does your camera have? This is probably the most asked question about any camera, but pixel counts are becoming confusing. With the advances in sensor technology, a simple count of the number of pixels is no longer enough to quantify a sensor's ability to record detail. It is complicated by the size of the pixel, the construction of the pixel, the way that pixel records the information it receives and the way that information is processed. Another spanner in the works is the direction from which the pixel receives the light that gives it that information.
What is happening is that, as sensor development improves, the individual pixels are becoming more sophisticated. It would now be more appropriate to call them photo-sites, as some now have more than one detector in them. Examples include Fuji's Super CCD, which has one large detector and one smaller detector alongside each other in the same photo-site, and the Sigma/Foveon X3F sensor, which has three detectors stacked on top of each other in the same photo-site. These vagaries have caused confusion, not least amongst the manufacturers and especially their marketing departments!
With the exception of the Sigma/Foveon sensor, all of these types record only light/dark at each photo-site and therefore need a filter over each one to allow them to record colour. This filter is one of three colours, Red, Green or Blue, depending on where it is on the sensor. The pattern these colours make up is known as the Bayer Pattern, where each group of four photo-sites has two green, one red and one blue filter over it. From the information these sites and their neighbours receive, the software then estimates, or 'demosaics', the colour each site should really be.
Conventional image sensor technology diagram
A conventional image sensor features colour filters applied to a single layer of pixel sensors in a tiled mosaic pattern. The filters let only one wavelength of light - R (red), G (green) or B (blue) - pass through to any given pixel location. The pixel location records only that one colour, so the other information at that site is lost. As a result, conventional mosaic image sensors capture only 25% of the R (red), 25% of the B (blue) and 50% of the G (green) light.
Distribution of colours in a Bayer pattern sensor
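The colour split in the Bayer pattern described above can be sketched in a few lines of Python. This is purely illustrative (the function name is our own, and real in-camera demosaicing is far more sophisticated); it simply lays out an RGGB mosaic and counts how much of each colour the sensor actually records.

```python
def bayer_colour(row, col):
    """Return the filter colour ('R', 'G' or 'B') at a photo-site in an
    RGGB Bayer mosaic: each 2x2 group has two greens on one diagonal,
    with red and blue on the other."""
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'
    return 'G' if col % 2 == 0 else 'B'

# Count the colours recorded across a 4x4 patch of sensor.
counts = {'R': 0, 'G': 0, 'B': 0}
for r in range(4):
    for c in range(4):
        counts[bayer_colour(r, c)] += 1

total = sum(counts.values())
for colour in 'RGB':
    print(colour, counts[colour] / total)  # R 0.25, G 0.5, B 0.25
```

The fractions match the figures in the caption above: only a quarter of the red light, a quarter of the blue and half of the green is captured, and the rest must be estimated from neighbouring sites.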
The exception, the X3F sensor from Sigma/Foveon, has a detector for each colour stacked one on top of the other at the same photo-site, taking advantage of the different wavelengths of each colour and their ability to penetrate silicon to differing depths. It is still, however, a CMOS sensor. The Fuji Super CCD sensor, despite having two detectors at each site, still has a Bayer pattern filter over the sensor.
FOVEON X3® direct image sensor technology diagram
The FOVEON X3 direct image sensor comprises three layers of pixel sensors embedded in silicon. Silicon absorbs different colours, or wavelengths, of light at different depths, so each layer captures a different colour among R (red), G (green) and B (blue). Full colour is directly captured at each pixel location. Only the FOVEON X3 image sensor captures 100% of the R (red), G (green) and B (blue) light at every pixel location.
So now we have two technologies, CCD and CMOS, and two methods of recording colours, Bayer and X3F. The next variable is sensor size or, more importantly, pixel (photo-site) size. Sensor size quotations use measurements that hark back to early television technology and are not helpful for our purpose, so we will look at sizes in relation to what is known as 'full frame' from the days of 35mm film stock (36x24mm). This again can be confusing, but as most manufacturers of digital SLR cameras publish the 'crop factor' of their sensors, most people will have an idea of the concept. Suffice it to say that if a camera has a 'crop factor' greater than one, the sensor will be smaller than the full-frame 36x24mm. (e.g. a factor of 1.5x means a sensor roughly 2/3rds the linear size, such as 23.7x15.6mm.)
At present, DSLR cameras can be bought with crop factors of 1x, 1.3x, 1.5x, 1.6x, 1.7x and 2x. It doesn't take much of a mathematical brain to work out that a camera with a 1.3x factor and 8 mega-pixels has much larger pixels than a 2x factor camera with the same number of pixels. And, theoretically, larger pixels will gather more light, so they must be better! Yes, but!!
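That pixel-size comparison is easy to put into rough numbers. The sketch below is our own back-of-the-envelope calculation, assuming square pixels and a 3:2 frame derived from the 35mm crop factor; real sensors differ slightly in exact dimensions.

```python
import math

def pixel_pitch_microns(crop_factor, megapixels):
    """Approximate pixel pitch (in microns) for a 3:2 sensor defined by
    its 35mm crop factor and total pixel count, assuming square pixels.
    Illustrative only; actual sensor dimensions vary by model."""
    width_mm = 36.0 / crop_factor
    height_mm = 24.0 / crop_factor
    area_mm2 = width_mm * height_mm
    pixels = megapixels * 1e6
    return math.sqrt(area_mm2 / pixels) * 1000.0  # mm -> microns

# The same 8 million pixels on two different sensor sizes:
print(round(pixel_pitch_microns(1.3, 8), 1))  # 8.0 (microns)
print(round(pixel_pitch_microns(2.0, 8), 1))  # 5.2 (microns)
```

At 8 mega-pixels, the 1.3x sensor's photo-sites are roughly 8 microns across against about 5.2 microns for the 2x sensor, so each site on the larger sensor collects well over twice the light.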
The larger the sensor, the greater the angle at which light strikes the photo-sites at the edges of the frame. As the pixels, or photo-sites, are in fact slightly cup shaped, this causes shading in those furthest from the centre point. So, theoretically, the smaller the sensor the better!
By now, you should be totally confused. And we haven't even mentioned the obvious one, that the more pixels the better the sensor, or so we are led to believe! To an extent this is true, but the more pixels you put into a given space, the smaller they must be. Smaller pixels gather less light and therefore require more amplification. More amplification means more noise, which requires more filtering. Again, the more information that is gathered, the larger the file size of the picture. The bigger the file size, the more processing power required to realise the final product.
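The file-size point is easy to quantify roughly. This sketch (our own illustrative arithmetic, assuming uncompressed data) estimates the size of a single-sample-per-site RAW file and a demosaiced three-channel TIFF.

```python
def raw_file_size_mb(megapixels, bits_per_sample=12):
    """Rough uncompressed RAW size: one colour sample per photo-site,
    as a Bayer sensor records only one value at each location."""
    bytes_total = megapixels * 1e6 * bits_per_sample / 8
    return bytes_total / (1024 * 1024)

def rgb_tiff_size_mb(megapixels, bits_per_channel=8):
    """Rough uncompressed TIFF size: three full channels per pixel
    after demosaicing."""
    bytes_total = megapixels * 1e6 * 3 * bits_per_channel / 8
    return bytes_total / (1024 * 1024)

print(round(raw_file_size_mb(8), 1))   # 11.4 (MB, 12-bit 8mp RAW)
print(round(rgb_tiff_size_mb(8), 1))   # 22.9 (MB, 8-bit 8mp TIFF)
```

Double the pixel count and both figures double with it, which is exactly why more pixels demand more storage and more processing power.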
Then there is the recording format: JPEG, TIFF, RAW, Hi, Fine, Good, Standard, etc. In-camera processing parameters also vary from one manufacturer to another. And then there is aspect ratio, where most DSLRs are 3:2 but the new Four Thirds system is 4:3, the same as many point & shoot cameras. Where does it all end?
We have tried to put together a few examples of the different types of sensor results for you to check out. As far as possible, they have all had the minimum amount of post processing in order to give you an idea of what the sensor is producing, rather than what the post processing software is turning it into.
The photos below are displayed at 100% and are cropped versions of this scene. You can click on any of the small cropped photos to display the original full size photo taken using each camera.
- Nikon D70, 6mp CCD, Sigma EX f/2.8, 70mm @ f/8
- Olympus E-300, 8mp CCD, Zuiko 14-45mm f/3.5-5.6, 45mm @ f/8
- Canon 1D, 4.2mp CCD, Canon 70-200mm L IS f/2.8 @ f/8
My thanks to Sigma Imaging for providing lenses in the three fittings and for the use of some of their graphics.
- Canon 10D, 6mp CMOS, Sigma 70-200mm EX f/2.8, 70mm @ f/8
- Sigma SD10, 3.4mp CMOS/X3F Foveon, 70-200mm EX f/2.8, 70mm @ f/8
There is no real answer to the question “Which is the best?” There is, however, probably an answer to “Which is the best for you?” but only you can come up with it. If you are only ever going to use your pictures for web use, display on your TV through the DVD player or have 6x4inch prints done at the local shop, then any of the available sensors are perfectly adequate. Just pick one that produces colours that are pleasing to you on a camera that you feel comfortable handling.
If you put your pictures to a more demanding use, such as club competitions, magazine submissions or the occasional library submission, where image quality and file size are more of a concern, then the higher count sensors will pay dividends over interpolating smaller files up.
If you are expecting to produce large prints, over 16in, consistently or plan to work in the upper ends of the photography market, then only the really top end, high count sensors will do.