Monday, June 12, 2006

Moore's Law and Digital Cameras

Digital technology has taken the world by storm, so much so that it might be easy to think the revolution is over.

In photography, for example, it is tempting to think that once everybody has a digital camera, the transition will be complete and things will settle down, right? Wrong. The revolution is taking off; it is only the boring part that's nearly over.

The reason is Moore's Law, the principle behind advances in the computer industry for the past 40 years. Gordon Moore, a founder of Intel, observed in 1965 that the number of transistors on a chip was doubling at a steady pace, roughly every two years by his later estimate, and that is pretty much still true. More transistors in a computer mean more features and more bang for the consumer's buck. For digital cameras, the bang has meant sensors with more megapixels and bigger memory cards.
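To make the compounding concrete, here is a minimal sketch in Python. The roughly-two-year doubling period and the 1971 starting point (the Intel 4004's 2,300 transistors) are illustrative assumptions, not figures from the post.

```python
# A minimal sketch of the arithmetic behind Moore's Law: a quantity that
# doubles every N years grows by a factor of 2 ** (years / N).
# The 2,300-transistor 4004 (1971) and the ~2-year doubling period are
# assumptions used here purely for illustration.

def moores_law_projection(start_count, start_year, end_year, doubling_years=2.0):
    """Project a transistor count forward assuming steady doubling."""
    doublings = (end_year - start_year) / doubling_years
    return start_count * 2 ** doublings

# Roughly 35 years of doubling turns a few thousand transistors
# into hundreds of millions.
print(round(moores_law_projection(2300, 1971, 2006)))  # ~430 million
```

That relentless compounding, not any one camera feature, is what keeps the revolution going.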

Some say the megapixel race will stop, just as people used to think that 8-bit, 16-bit, or 32-bit computers would be enough. The problem with that reasoning is all the smart engineers who wake up every day looking for a competitive edge by turning computing power into something worth buying. I'm betting that they will succeed, because there are so many opportunities.

So far, camera designers have focused on vital but mundane tasks, like producing picture quality equal to that of film. For my professional work, my 16-megapixel Canon is already vastly better than film.

Yet why stop at film? I'm eagerly awaiting Canon's next move, probably to 25-plus megapixels. I'm what marketing people call an early adopter, but mark my words - you'll own a 16- or even a 25-megapixel point-and-shoot in a few years, and it will not stop there. By some estimates, your eyes have an effective resolution of more than 500 megapixels. If you can see it, why shouldn't a camera record it?
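For a rough sense of scale, here is a back-of-the-envelope sketch, assuming, purely for illustration, that sensor resolution keeps doubling every couple of years, of how long the jump from 16 megapixels to the eye's estimated 500 would take.

```python
import math

# Back-of-the-envelope: how many doublings separate a 16-megapixel sensor
# from the ~500-megapixel figure cited for the eye, and how long would that
# take at an assumed Moore's-Law-style pace of one doubling every two years?

current_mp = 16
target_mp = 500
doubling_years = 2.0

doublings = math.log2(target_mp / current_mp)   # ~5 doublings
years = doublings * doubling_years              # ~10 years at that pace
print(f"{doublings:.1f} doublings, roughly {years:.0f} years")
```

Even if that pace is optimistic, the gap is a handful of doublings, not an unbridgeable chasm.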

Source
