At its most basic, noise reduction normally uses pixel averaging. The problem, of course, is that simple averaging loses detail. Averaging more pixels reduces noise more, but loses more detail; averaging fewer pixels loses less detail, but reduces the noise less. Something like NeatImage or Noise Ninja will do its pixel averaging adaptively - for example, it'll start with a scan for changes that occur over enough pixels that they're unlikely to be noise, and where it sees those, do the averaging over fewer pixels.

They will also take the channels of the picture into account. A normal digital camera has a filter in front of each sensel. The normal arrangement is something like g-r-g-b (aka a Bayer pattern). In a typical case, the green filter transmits more light than the red or (especially) the blue, so to maintain the color balance in the final picture, the brightness of the blues has to be "boosted" to compensate. This, however, tends to increase the noise in the blue channel. To compensate for that, the noise reducer will normally do rather minimal averaging in the green channel, somewhat more in the red channel, and more still in the blue channel.

An advanced noise reducer will normally start with a model of the noise for an individual sensor, and apply the noise reduction based on that model. IIRC, NeatImage also allows you to take "dark frames" (e.g. a 30-second exposure with the lens cap on) to get a better map of the exact noise characteristics of your exact sensor, and take that into account (I know Noise Ninja allows that, and if memory serves NeatImage does as well). Normally, for this to work at its best, you want to start with something like five dark frames.
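To make the adaptive-averaging idea concrete, here is a minimal sketch in Python. It is a toy illustration of the general technique, not NeatImage's or Noise Ninja's actual algorithm: where a window's variation exceeds what the noise model predicts, the code assumes it has found real detail and shrinks the averaging window. The function name, the `noise_sigma` parameter, and the 2-sigma threshold are all my own assumptions for illustration.

```python
import numpy as np

def adaptive_denoise(channel, noise_sigma, radius=2):
    """Toy adaptive averager (illustrative only, not any product's algorithm).

    Averages each pixel with its neighbors, but where the local window
    varies much more than the noise model (noise_sigma) predicts, it
    assumes real detail and averages over fewer pixels instead.
    """
    h, w = channel.shape
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            # Full averaging window around the pixel.
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = channel[y0:y1, x0:x1]
            # Variation well beyond the noise model: likely detail,
            # so fall back to a small 3x3 window (less detail lost).
            if patch.std() > 2 * noise_sigma:
                y0, y1 = max(0, y - 1), min(h, y + 2)
                x0, x1 = max(0, x - 1), min(w, x + 2)
                patch = channel[y0:y1, x0:x1]
            out[y, x] = patch.mean()
    return out
```

The per-channel behavior described above would then fall out naturally: call this with a small `radius` on the green channel and progressively larger ones on red and blue.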
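The dark-frame idea can also be sketched briefly. The assumption here is that part of the sensor's noise is a fixed pattern (hot pixels, bias) that shows up even with the lens cap on; averaging several dark frames estimates that pattern, which can then be subtracted from real exposures. The function names are hypothetical, not the API of NeatImage or Noise Ninja.

```python
import numpy as np

def build_dark_master(dark_frames):
    """Average several lens-cap exposures to estimate the sensor's
    fixed-pattern noise. Averaging also suppresses the random noise
    in the dark frames themselves, which is why several (e.g. five)
    work better than one."""
    return np.mean(np.stack(dark_frames), axis=0)

def subtract_dark(image, dark_master):
    """Remove the estimated fixed pattern; clip so no pixel goes negative."""
    return np.clip(image - dark_master, 0, None)
```

Usage would be: shoot about five dark frames at the same exposure settings, feed them to `build_dark_master`, then apply `subtract_dark` to the real image.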