By AaronBurns
#6078
All cameras have a limited amount of definition, clarity, and detail in their images, because detail in some light ranges disappears in bright or dim light. In dim light all highlight detail is lost, and in bright light you can't see any detail in the dark areas. I propose that we create cameras (still or movie) that can capture full definition, clarity, and detail across all light ranges and conditions (dark, mid-range, and bright). This would beat plasma screens in resolution and even beat the detail picked up and captured by the human eye, even with the best vision.

This full light-range capture is easy to produce. All cameras are now computerized, using charge-coupled devices (CCD chips) to capture light, detail, resolution, and clarity instead of the mechanical parts of older cameras, but the image is captured by only one chip. If we add three of these CCD chips (one for each light range), then we capture the entire image instead of just one light range. Right now even your most technologically advanced cameras capture one range at a time. I did research with camera companies, and they are only working on resolution, not light ranges. In other words, they are not addressing this issue directly.
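The combining step this paragraph hand-waves over can be sketched in code. The following is a toy "exposure fusion" in plain Python, an editor's illustration rather than any camera's real pipeline: three captures of the same scene at different exposures are blended per pixel, with each capture weighted by how close its value is to mid-gray (where it holds usable detail). Pixel values, weights, and the sample data are all invented for the example.

```python
# Toy sketch: fuse three exposures (dim/mid/bright captures) of the same
# scene into one image with wider usable range. Pixels are floats in [0, 1].

def fusion_weight(v):
    """Weight a pixel by how close it is to mid-gray (0.5)."""
    return max(1e-6, 1.0 - abs(v - 0.5) * 2.0)

def fuse_exposures(captures):
    """captures: list of equal-length pixel lists, one per exposure."""
    fused = []
    for pixels in zip(*captures):
        weights = [fusion_weight(v) for v in pixels]
        total = sum(weights)
        fused.append(sum(w * v for w, v in zip(weights, pixels)) / total)
    return fused

dark  = [0.02, 0.10, 0.45]   # underexposed capture: keeps highlight detail
mid   = [0.20, 0.50, 0.95]   # normal exposure
light = [0.60, 0.90, 1.00]   # overexposed capture: keeps shadow detail
print(fuse_exposures([dark, mid, light]))
```

Each fused pixel lands between the darkest and brightest of its three source values, leaning toward whichever capture was best exposed at that spot.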

Reward: Credit
#6916
Having three sensors (which are mostly CMOS these days, not always CCD as you claim) would introduce problems of high cost, size, optical alignment/fragility, data transfer rate, etc., and I doubt consumers would put up with these disadvantages to reap the slim benefit. I also disagree that it's "easy to produce". What you're describing is not simple "dynamic range", since sensors can vary their clock rates to adjust for a wide range of general light conditions, but *intra-image* dynamic range: the ability to have both bright and dim content in the same image.

A better solution might be to put multiple pixel sizes on a single sensor die, and Fuji has already done that (their "third-generation" sensor), but it's not very popular. Why? Everything comes at some cost, and it's just not clear that dynamic range is a sensor problem.

1. For example, you can also increase dynamic range by using a standard sensor with an A/D converter that has more resolution, to get more raw bit depth (rather than a fancy sensor). But in all these cases, collecting more bit depth means more data. Using 54-bit color (18 bits per channel) could accurately reproduce most scenes (>200000:1, or >50 dB), but that depth on a 5-Mpixel camera is about 33 MB per shot. This slows the time required between shots, eats memory, slows data transfer, etc. Most people would find these things irritating.
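The storage figure above checks out; the arithmetic is just bits per pixel times pixel count:

```python
# Storage cost of 18 bits per color channel, three channels per pixel,
# on a 5-megapixel sensor, stored uncompressed.
bits_per_pixel = 18 * 3                              # 54 bits per pixel
pixels = 5_000_000
megabytes = pixels * bits_per_pixel / 8 / 1_000_000  # bits -> bytes -> MB
levels_per_channel = 2 ** 18                         # 262144 distinguishable levels
print(megabytes, levels_per_channel)                 # 33.75 262144
```

That is roughly 33.75 MB per shot, and 2^18 = 262144 levels per channel comfortably exceeds the >200000:1 scene contrast quoted.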

2. By the way, some artifacts you see in high-contrast images are "flare" from the optics, not the sensor itself, or a result of poor composition (user error).

3. There are many methods photographers use to mitigate contrasty scenes: use the "histogram" display on your digital camera, use synchro or fill flash, or upload the data in raw format (again, large files) rather than doing the JPEG conversion inside the camera, which gives you more flexibility.

4. Dynamic range is much more limited by the printers and displays we use. Digital cameras usually have much better dynamic range than photo prints, inkjet printers, computer monitors, and often better than plasma displays. They are also better than 35mm film in nearly all cases (8-12 stops vs. around 7 stops for film). So, again, the sensor is often not the bottleneck.
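To make the stops comparison concrete: each photographic stop doubles the captured light, so a range of n stops spans a 2^n contrast ratio. Using the figures quoted above (the stop counts themselves are the post's estimates, not measurements):

```python
# A "stop" doubles the light, so n stops cover a 2**n contrast ratio.
digital_stops = 12            # upper end of the quoted 8-12 stop range
film_stops = 7                # quoted estimate for 35mm film
digital_ratio = 2 ** digital_stops   # 4096:1
film_ratio = 2 ** film_stops         # 128:1
print(digital_ratio, film_ratio)     # 4096 128
```

A 12-stop sensor spans a contrast ratio 32 times wider than 7-stop film by this measure.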

5. A topic generating a lot of interest lately is the data conversion in the camera. Traditional 2D spatial compression (GIF, JPG) dramatically reduces image size (with some loss), but there are some advanced image-processing techniques that put their attention on the Z-axis (intensity), in particular capturing and preserving dynamic range. You might think of this as a "Dolby" for images. In this way, you can preserve some dynamic range without resorting to the brute-force method of large bit depth and thus huge file sizes. Again, though, everything has a tradeoff, in this case the cost of the time/space to execute this fancy image-processing code, and the engineering effort required to make it work elegantly in all conditions.
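The simplest instance of this intensity-axis idea is a global logarithmic tone map: compress a huge linear luminance range into display range nonlinearly instead of storing more linear bits. This is a minimal sketch, assuming invented scene luminances; real HDR encodings are far more sophisticated.

```python
import math

def log_tonemap(luminance, max_luminance):
    """Compress a linear luminance in [0, max_luminance] into [0, 1]
    logarithmically, so shadow detail isn't crushed to zero."""
    return math.log1p(luminance) / math.log1p(max_luminance)

# Linear scene luminances spanning roughly a 200000:1 contrast range.
scene = [0.5, 10.0, 5_000.0, 100_000.0]
mapped = [log_tonemap(v, 100_000.0) for v in scene]
print(mapped)
```

The mapping is monotonic (brighter stays brighter) but spends far more of the output range on shadows and midtones than a linear scale would, which is exactly the "preserve dynamic range without huge bit depth" trade described above.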
By yanivger
#6918
There is a much better solution than buying a high-tech camera: you can change your picture's brightness, contrast, and whatever else with Photoshop, and make changes to different parts of the image. The result is what you see, not what the camera decides for you.
By AaronBurns
#6928
You are right about the images captured by digital cameras, which is what we are talking about here. They could not be displayed by even the best plasma screen if we were talking about pixels and frame rates, but you seem to have gone off course. We are talking about light ranges that would increase the quality, not the quantity, of the captured image.
You should study the differences between film and digital cameras, and why moviemakers still use 35 mm film instead of digital cameras. It's the ability to capture true light across the light-temperature range, which a digital camera cannot capture.
Basically, to calm your nerves: images in all three light ranges can be captured and displayed as a high-definition picture by showing the full contrast in clear detail, unlike the cameras of today, which can display only one light range because they drop any detail in the other light ranges.
To dispel your myth and illogic: you are not getting the point here. Think of it as putting three lenses on your camera, each set to capture a different range of light. We are talking about the three that can actually be adjusted on any camera: bright, mid-range, and dim. Today's cameras are set to capture pixels and make higher-quality images by adding together smaller bits and assembling them into a detailed image, but light adds detail in an entirely different way and has nothing to do with pixels. It would simply capture the whole image instead of part of it. Somewhat more information to capture, but not much.
It would be the same pixel rate and the same image, but both at the same time: an image without any bright spots or dropouts caused by the dark.
You would simply be able to view the WHOLE picture instead of the part that digital cameras pick up as the standard image, which they seem to think is the range we need to see, adjusted by the camera's ability to change. We do not want any light or dark areas to change, as you seem to think; we would like to see as if your iris never had to shrink or expand. Sort of like an eye that is in all stages of accepting the viewable picture at once.
If you want me to give a better explanation of the invention: add three lenses to a 35 mm camera, and then you can understand it.
I took film classes in college.
Also, look up multiple-lens cameras from the past. You could capture the same image in the same shot; they simply placed the images in the same light range. A slight adjustment is to capture light in three ways (all light ranges), giving us an overlay of three or more images that, when added together, produce a TOTAL picture.
I don't think that movie producers are concerned with cost, and while the technology we are discussing does not exist yet, I greatly doubt that it would be immensely expensive. Thus we would have a better consumer product, one that improves on, rather than lessens, the value of the images we would like to preserve. Many people could benefit from this.
I would like to hear your analysis of the discussion now that you've heard my retort to your educated (opinion).
By the way; do you work for Kodak? lol ;-D ;-D ;-D