How to understand stereopsis and stereoscopy
First, we should distinguish between the definitions of stereopsis and stereoscopy before trying to understand them clearly. The basic meanings are as follows:
Stereopsis (from stereo- meaning "solid" or "three-dimensional", and opsis meaning appearance or sight) is the impression of depth that is perceived when a scene is viewed with both eyes by someone with normal binocular vision. Binocular viewing of a scene creates two slightly different images of the scene in the two eyes due to the eyes' different positions on the head. These differences, referred to as binocular disparity, provide information that the brain can use to calculate depth in the visual scene, providing a major means of depth perception. The term stereopsis is often used as shorthand for 'binocular vision', 'binocular depth perception' or 'stereoscopic depth perception', though strictly speaking, the impression of depth associated with stereopsis can also be obtained under other conditions, such as when an observer views a scene with only one eye while moving. Observer motion creates differences in the single retinal image over time similar to binocular disparity; this is referred to as motion parallax. Importantly, stereopsis is not usually present when viewing a scene with one eye, when viewing a picture of a scene with both eyes, or when someone with abnormal binocular vision (strabismus) views a scene with both eyes. This is despite the fact that in all three of these cases humans can still perceive depth relations.
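To make the disparity-to-depth relationship concrete, here is a small sketch using a simplified pinhole model; the eye separation, focal length, and disparity values are illustrative assumptions, not measurements from the definition above.

    # Sketch: depth from binocular disparity under a simplified pinhole model.
    # Assumed values are illustrative only, not measurements of the human visual system.

    def depth_from_disparity(disparity_m, baseline_m=0.064, focal_length_m=0.017):
        """Estimate distance Z from the horizontal disparity between the two retinal images.

        baseline_m     -- separation between the eyes (assumed ~6.4 cm)
        focal_length_m -- nodal distance of the eye's optics (assumed ~17 mm)
        disparity_m    -- horizontal offset of the object between the two images
        """
        return baseline_m * focal_length_m / disparity_m

    # A larger disparity corresponds to a nearer object:
    print(depth_from_disparity(0.0005))   # ~2.2 m away
    print(depth_from_disparity(0.00005))  # ~21.8 m away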
Stereoscopy (also called stereoscopics or 3D imaging) is a technique for creating or enhancing the illusion of depth in an image by means of stereopsis for binocular vision. The word stereoscopy derives from the Greek "στερεός" (stereos), "firm, solid"[2] + "σκοπέω" (skopeō), "to look", "to see".[3]
Most stereoscopic methods present two offset images separately to the left and right eye of the viewer. These two-dimensional images are then combined in the brain to give the perception of 3D depth. This technique is distinguished from 3D displays that display an image in three full dimensions, allowing the observer to increase information about the 3-dimensional objects being displayed by head and eye movements.
To understand what stereoscopic 3D is, it's necessary to understand perceived depth. There are many cues that help us perceive depth. Objects in perspective, occlusion, and relative size are good indicators of depth. An object that is farther away is interpreted as such by our brains if it is much smaller than another object next to it: our brain already knows how big those objects should be in relation to one another. If two objects are roughly the same size in our field of view, and one is occluded by or is occluding the other, our brain infers that one of those objects is in front of the other. (Occlusion means that one object lies on top of the other and obscures it.) Paintings or games can appear 3D because they obey these rules. After Effects also obeys these rules when you create a 3D composition with a camera.
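As a rough numerical sketch of the relative-size cue, the snippet below uses a simple pinhole projection: a familiar object that projects smaller is read as farther away. The focal length and object size are assumed values for illustration.

    # Sketch: the relative-size depth cue under a pinhole projection.
    # Apparent size shrinks in proportion to distance, which the brain can invert
    # for objects whose real size it already knows.

    def apparent_size(real_size_m, distance_m, focal_length_m=0.017):
        return real_size_m * focal_length_m / distance_m

    person_height = 1.8  # metres (an assumed, familiar object size)
    print(apparent_size(person_height, 2.0))   # nearby person: large projection
    print(apparent_size(person_height, 20.0))  # distant person: 10x smaller projection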
Another important depth cue is lens blur. If our eyes (or a camera lens) focus on a specific object, and another object next to it appears blurred, our brain knows that the other object is either in front of or behind the focused object. If there is no blur, our brain assumes that the two are at a similar distance. You can clearly see this phenomenon as your eyes focus on different objects and the out-of-focus objects in the background blur on your retinas. Our brain interprets this as a depth cue without us realizing it. The phenomenon is subtle because our brain filters it seamlessly into our perception, so it usually goes unnoticed by the average person. However, it is possible to train our eyes and brain to experience and be conscious of depth of field by relaxing the eye muscles and using the following (or a similar) technique. Look through a windshield with water droplets on it at night. When you focus beyond the windshield, the water droplets turn into little halos of color called bokeh. Similarly, when you focus on the droplets, the streetlights in the background turn into bokeh. This effect can be observed with one eye closed; it therefore has nothing to do with stereopsis, but instead with the focusing of the eye's lens, much as a camera lens focuses. Understanding how depth of field relates to depth perception is important when attempting to create realistic images, and it works hand-in-hand with stereoscopic 3D in After Effects, especially with the new and improved Camera Lens Blur effect and related features in After Effects CS5.5.
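The strength of this blur can be approximated with the thin-lens circle-of-confusion formula. The sketch below uses assumed camera-like values and is not the exact math behind the Camera Lens Blur effect.

    # Sketch: defocus blur (circle of confusion) from a thin-lens model.
    # Objects far from the focused distance produce a larger blur circle,
    # which the brain reads as a relative-depth cue.

    def circle_of_confusion(subject_dist_m, focus_dist_m, focal_length_m=0.05, f_number=2.0):
        aperture_m = focal_length_m / f_number
        # Thin-lens approximation of the blur-circle diameter on the sensor.
        return (aperture_m * focal_length_m * abs(subject_dist_m - focus_dist_m)
                / (subject_dist_m * (focus_dist_m - focal_length_m)))

    focus = 1.0  # focused on droplets about 1 m away (assumed)
    print(circle_of_confusion(1.0, focus))   # 0.0 -> in focus, sharp
    print(circle_of_confusion(20.0, focus))  # ~1.25 mm -> distant streetlights render as bokeh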
Finally, arguably the most powerful depth cue is stereopsis. Stereopsis is the ability of our brain to take two input images from different perspectives and gain an understanding of how far away two different objects are in relation to each other. The key point is that because our eyes are spaced apart on our heads, each eye sees a slightly different perspective of the world in front of us. Look at an object nearby and close one eye, then switch eyes back and forth several times. Then try the same exercise on an object that is far away. You will notice that the nearby object jumps from side to side in your field of view much more drastically than the faraway object. If the close object is in the same general direction as the faraway object, the close object switches sides of the faraway object. This is the basis of how stereopsis works: your brain takes the relative horizontal distances between objects in your field of view and compares them to understand where those objects are in relation to each other in terms of depth. It is theorized that pigeons bob their heads in order to gain depth perception (since their eyes are on opposite sides of their heads and they can't see depth otherwise). If you look through only one eye, you lose your stereopsis depth cue. However, if you bob your head from side to side with that eye still closed, you can get a sense of depth again. This separation between the eyes, providing different perspectives, is the key to stereopsis.
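The eye-switching exercise can also be put into numbers: the angular shift between the two eyes' views falls off with distance, which is why the nearby object appears to jump so much more. The eye separation used below is an assumed average.

    # Sketch: how much an object appears to shift when you alternate eyes.
    # The angular parallax is roughly eye_separation / distance (small-angle approximation),
    # so near objects jump far more than distant ones.
    import math

    def parallax_degrees(distance_m, eye_separation_m=0.064):
        return math.degrees(math.atan2(eye_separation_m, distance_m))

    print(parallax_degrees(0.3))   # object at 30 cm: ~12 degrees of shift
    print(parallax_degrees(50.0))  # object at 50 m:  ~0.07 degrees, barely noticeable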
It is important to keep all of these depth cues in mind when constructing a stereoscopic 3D composition in After Effects. In the real world, it is possible to give contrary information to the brain and trick it. Optical illusions like the Ames Room, the Infinite Staircase, or tilt-shift photography are all examples of how depth cues can be manipulated and our brains tricked. (Tilt-shift photography is a method in which a post-process depth-of-field blur is added to an image to give a broad landscape the feeling of a miniature.) Since After Effects gives you control over all of these depth cues, it's important to maintain control over their interaction and make sure they are not giving our brains too many contradictory depth cues. In real life, one can manipulate one's surroundings in clever ways to create optical illusions, but more often than not, inconsistencies in the digital realm come across as unnatural and can even cause eyestrain or headaches.
Stereopsis, being the most powerful depth cue, is no exception. It's important to make sure that the stereoscopic result is not painful to look at on different screens. One's viewing experience can change depending on how big the screen is and how far away the viewer is from it.
Stereoscopy is a technique for letting our brain experience stereopsis by tricking it. This is done by presenting each eye with a different image. The left eye is presented with a view of a scene from some virtual or real camera that shows the left perspective; similarly, the right eye is presented with an image from the right perspective. In this way, each eye independently receives a different image, our brain puts them together, and we perceive depth. When viewing a stereoscopic 3D scene on a monitor, the elements in the scene tend to pop out of or sink into the screen: stereopsis is telling us that an object is closer to or farther from us than the monitor actually is.
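One way to reason about pop-out versus sink-in is to look at the sign of the on-screen parallax between the left-eye and right-eye images. The sketch below uses an assumed sign convention and illustrative pixel values; it is not tied to any particular After Effects setting.

    # Sketch: screen parallax decides whether an element pops out of or sinks into the screen.
    # Parallax = horizontal position of the element in the right-eye image
    #          - its position in the left-eye image (in pixels; convention assumed).

    def perceived_placement(left_x_px, right_x_px):
        parallax = right_x_px - left_x_px
        if parallax < 0:
            return "negative parallax: the element appears in front of the screen (pops out)"
        if parallax > 0:
            return "positive parallax: the element appears behind the screen (sinks in)"
        return "zero parallax: the element appears at the screen plane"

    print(perceived_placement(980, 960))  # right image shifted left  -> pops out
    print(perceived_placement(960, 980))  # right image shifted right -> sinks in
    print(perceived_placement(960, 960))  # identical position        -> at screen depth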
Many different devices and systems exist for delivering stereopsis to our brains, but in general the principle behind all of them is the same: get one eye to see one view, and the other eye to see a different perspective of the same scene. Anaglyph glasses are the oldest method, and by far the cheapest. Differently colored lenses filter each eye's view: red-blue glasses filter out blue for the left eye and red for the right eye. On the display side, the left image is colored red and the right is colored blue, and the two images are overlapped so that each eye sees only its associated image. Because of the inherent color distortion, it is difficult to see all the colors accurately with anaglyph, but the setup is very easy and works well for judging depth and convergence. Polarized glasses work on a simple principle: two images are displayed on a screen, one emitting only horizontally polarized light and the other only vertically polarized light, and the glasses have polarized lenses such that each lens only lets through light polarized in one direction. Active shutter glasses work by blocking one eye at a time at a high rate (usually 60 fps) while switching the left and right images every frame, synchronized with the monitor. Some TVs use no glasses at all, such as those from Alioscopy. Alioscopy uses lenticular technology, in which a lens on the monitor itself refracts the light in different directions so that each eye gets a different perspective simply by being in a different location relative to the TV. There are many more methods of stereoscopy.
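As a minimal sketch of how a red-blue anaglyph can be composed from a stereo pair, assuming the two views are available as RGB images (the file names below are placeholders):

    # Sketch: composing a simple red-blue anaglyph from a stereo pair.
    # The red channel comes from the left-eye view and the blue channel from the
    # right-eye view, so each colored lens passes only its own image.
    import numpy as np
    from PIL import Image

    left = np.asarray(Image.open("left_eye.png").convert("RGB"))    # placeholder file name
    right = np.asarray(Image.open("right_eye.png").convert("RGB"))  # placeholder file name

    anaglyph = np.zeros_like(left)
    anaglyph[..., 0] = left[..., 0]   # red channel  <- left view (seen through the red lens)
    anaglyph[..., 2] = right[..., 2]  # blue channel <- right view (seen through the blue lens)
    # Green stays zero for a pure red-blue anaglyph; red-cyan variants also copy right's green.

    Image.fromarray(anaglyph).save("anaglyph.png")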
When dealing with stereopsis in the real world, the only things that can vary are the positions of objects in front of you, and the perspective from each eye changes only as a result. The only way to make an object look closer through stereopsis is to actually place it closer. You can't easily change the distance between your eyes, your field of view, or the aperture of your eyes (at least not voluntarily) to modify the depth of field you perceive. In the digital realm, however, there are many more variables, since all of these things can be changed. It is therefore very easy to introduce confusing, contradictory depth cues that cause pain when viewing.
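To illustrate how many more variables the digital realm exposes, here is a hypothetical parameter set for a virtual stereo rig; the field names and default values are assumptions for illustration, not After Effects properties.

    # Sketch: parameters a virtual stereo rig can expose that are fixed (or nearly so) for human eyes.
    # Changing them independently is what makes contradictory depth cues easy to introduce.
    from dataclasses import dataclass

    @dataclass
    class StereoRig:
        interaxial_m: float = 0.064     # camera separation; for human eyes this is fixed
        convergence_m: float = 2.0      # distance at which the two views align (zero parallax)
        field_of_view_deg: float = 50.0 # can be animated digitally, unlike our eyes
        f_number: float = 2.0           # controls depth of field / lens blur

    rig = StereoRig()
    exaggerated = StereoRig(interaxial_m=0.3)  # wider separation exaggerates perceived depth
    print(rig, exaggerated, sep="\n")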