Perception: Law of Proximity

1. Perception involves recognizing and interpreting sensory information from our environment and using that information to interact with our environment. It allows us to take in sensory data and make it meaningful.
2. There are two types of processing involved in perception: bottom-up processing, which builds perception from basic sensory information, and top-down processing, which is driven by cognition and applies prior knowledge and expectations.
3. Depth perception arises from both binocular cues, which use information from both eyes, and monocular cues, which can provide depth from a single eye through factors like motion parallax, depth from motion, and occlusion.

Perception can be defined as our recognition and interpretation of sensory information. Perception
also includes how we respond to the information. We can think of perception as a process where we
take in sensory information from our environment and use that information in order to interact with
our environment. Perception allows us to take the sensory information in and make it into something
meaningful.
For example, let's look at our perception of words. Each letter of the alphabet is in itself a singular
letter. When we perceive words, we think of them as one singular unit that is made up of smaller
parts called letters. It is through this organization of letters into words that we are able to make
something meaningful. That is, we perceive an entire word, and this word has a specific meaning
that can be found in the dictionary.

[Figures: Law of proximity, Law of similarity, Law of closure]
1. Law of Proximity—The law of proximity states that when an individual perceives an
assortment of objects, they perceive objects that are close to each other as forming a group.
For example, in the figure that illustrates the Law of proximity, there are 72 circles, but we
perceive the collection of circles in groups. Specifically, we perceive that there is a group of
36 circles on the left side of the image, and three groups of 12 circles on the right side of the
image. This law is often used in advertising logos to emphasize which aspects of events are
associated.[17][18]
2. Law of Similarity—The law of similarity states that elements within an assortment of objects
are perceptually grouped together if they are similar to each other. This similarity can occur
in the form of shape, colour, shading or other qualities. For example, the figure illustrating
the law of similarity portrays 36 circles, all an equal distance apart from one another, forming a
square. In this depiction, 18 of the circles are shaded dark and 18 of the circles are shaded
light. We perceive the dark circles as grouped together, and the light circles as grouped
together forming six horizontal lines within the square of circles. This perception of lines is
due to the law of similarity.[18]
3. Law of Closure—The law of closure states that individuals perceive objects such as shapes,
letters, pictures, etc., as being whole when they are not complete. Specifically, when parts of
a whole picture are missing, our perception fills in the visual gap. Research shows that the
reason the mind completes a regular figure that is not perceived through sensation is to
increase the regularity of surrounding stimuli. For example, the figure that depicts the law of
closure portrays what we perceive as a circle on the left side of the image and a rectangle
on the right side of the image. However, gaps are present in the shapes. If the law of closure
did not exist, the image would depict an assortment of different lines with different lengths,
rotations, and curvatures—but with the law of closure, we perceptually combine the lines
into whole shapes.[17][18][19]
4. Law of Symmetry—The law of symmetry states that the mind perceives objects as being
symmetrical and forming around a center point. It is perceptually pleasing to divide objects
into an even number of symmetrical parts. Therefore, when two symmetrical elements are
unconnected the mind perceptually connects them to form a coherent shape. Similarities
between symmetrical objects increase the likelihood that objects are grouped to form a
combined symmetrical object. For example, the figure depicting the law of symmetry shows
a configuration of square and curled brackets. When the image is perceived, we tend to
observe three pairs of symmetrical brackets rather than six individual brackets.[17][18]
5. Law of Common Fate—The law of common fate states that elements which move together,
sharing the same direction and speed of motion, are perceived as a single group. Experiments
using the visual sensory modality found that the movement of elements of an object produces
paths that individuals perceive the objects to be on. We perceive elements of objects to have
trends of motion, which indicate the path that the object is on; elements that share the same
trend of motion are grouped together as being on the same path. For example, if there is
an array of dots and half the dots are moving upward while the other half are moving
downward, we would perceive the upward moving dots and the downward moving dots as
two distinct units.[20]
6. Law of Continuity—The law of continuity states that elements of objects tend to be grouped
together, and therefore integrated into perceptual wholes if they are aligned within an object.
In cases where there is an intersection between objects, individuals tend to perceive the two
objects as two single uninterrupted entities. Stimuli remain distinct even with overlap. We
are less likely to group elements with sharp abrupt directional changes as being one
object.[17]
7. Law of Good Gestalt—The law of good gestalt explains that elements of objects tend to be
perceptually grouped together if they form a pattern that is regular, simple, and orderly. This
law implies that as individuals perceive the world, they eliminate complexity and unfamiliarity
so they can observe reality in its simplest form. Eliminating extraneous stimuli helps
the mind create meaning. This meaning created by perception implies a global regularity,
which is often mentally prioritized over spatial relations. The law of good gestalt focuses on
the idea of conciseness, which is what all of gestalt theory is based on. This law has also
been called the law of Prägnanz.[17] Prägnanz is a German word that directly translates to
mean "pithiness" and implies the ideas of salience, conciseness and orderliness.[20]
8. Law of Past Experience—The law of past experience implies that under some
circumstances visual stimuli are categorized according to past experience. If two objects
tend to be observed within close proximity, or small temporal intervals, the objects are more
likely to be perceived together. For example, the English language contains 26 letters that
are grouped to form words using a set of rules. If an individual reads an English word they
have never seen, they use the law of past experience to interpret the letters "L" and "I" as
two letters beside each other, rather than using the law of closure to combine the letters and
interpret the object as an uppercase U.[20]

Bottom-up vs. Top-down Processing

There are two general processes involved in sensation and perception. Bottom-up
processing refers to processing sensory information as it is coming in. In other words, if I flash a
random picture on the screen, your eyes detect the features, your brain pieces it together, and you
perceive a picture of an eagle. What you see is based only on the sensory information coming in.
Bottom-up refers to the way perception is built up from the smallest pieces of sensory information.

Top-down processing, on the other hand, refers to perception that is driven by cognition. Your
brain applies what it knows and what it expects to perceive and fills in the blanks, so to speak. First,
let us look at a visual example:

Look at the shape in the box to the right. Seen alone, your brain engages in bottom-up processing.
There are two thick vertical lines and three thin horizontal lines. There is no context to give it a
specific meaning, so there is no top-down processing involved.

Now, look at the same shape in two different contexts.

Surrounded by sequential letters, your brain expects the shape to be a letter and to complete the
sequence. In that context, you perceive the lines to form the shape of the letter “B.” Surrounded by
numbers, the same shape now looks like the number “13.” When given a context, your perception is
driven by your cognitive expectations. Now you are processing the shape in a top-down fashion.

Depth perception arises from a variety of depth cues. These are typically classified
into binocular cues that are based on the receipt of sensory information in three dimensions from
both eyes and monocular cues that can be represented in just two dimensions and observed with
just one eye.

Monocular cues
Monocular cues provide depth information when viewing a scene with one eye.
Motion parallax
When an observer moves, the apparent relative motion of several stationary objects against
a background gives hints about their relative distance. If information about the direction and
velocity of movement is known, motion parallax can provide absolute depth
information.[5] This effect can be seen clearly when driving in a car. Nearby things pass
quickly, while far off objects appear stationary. Some animals that lack binocular vision due
to their eyes having little common field-of-view employ motion parallax more explicitly than
humans for depth cueing (e.g., some types of birds, which bob their heads to achieve motion
parallax, and squirrels, which move in lines orthogonal to an object of interest to do the
same[6]).
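To make the geometry concrete, here is a minimal numeric sketch (not from the source) of how motion parallax can yield absolute distance, assuming the stationary object lies roughly at a right angle to the observer's direction of travel; the function name and numbers are illustrative.

```python
def parallax_distance(observer_speed_mps, angular_velocity_rad_s):
    """For a stationary object seen at roughly 90 degrees to the direction
    of travel, its angular velocity across the visual field is v / d,
    so distance d is approximately observer speed / angular velocity."""
    return observer_speed_mps / angular_velocity_rad_s

# From a car moving at 20 m/s, a roadside post sweeping past at 0.5 rad/s
# is about 40 m away, while a hill drifting at 0.005 rad/s is about 4 km away.
print(parallax_distance(20.0, 0.5))    # 40.0
print(parallax_distance(20.0, 0.005))  # 4000.0
```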
Depth from motion
When an object moves toward the observer, the retinal projection of an object expands over
a period of time, which leads to the perception of movement in a line toward the observer.
Another name for this phenomenon is depth from optical expansion.[7] The dynamic
stimulus change enables the observer not only to see the object as moving, but to perceive
the distance of the moving object. Thus, in this context, the changing size serves as a
distance cue.[8] A related phenomenon is the visual system’s capacity to calculate time-to-
contact (TTC) of an approaching object from the rate of optical expansion – a useful ability in
contexts ranging from driving a car to playing a ball game. However, calculation of TTC is,
strictly speaking, perception of velocity rather than depth.
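A common way to express the time-to-contact idea is the "tau" approximation: the current angular size of the approaching object divided by its rate of optical expansion. The sketch below illustrates that approximation rather than the visual system's actual computation; the numbers are made up.

```python
def time_to_contact_s(angular_size_rad, expansion_rate_rad_per_s):
    """Tau approximation: TTC ~ current retinal angle / rate of expansion,
    assuming the object approaches at constant speed along the line of sight."""
    return angular_size_rad / expansion_rate_rad_per_s

# A ball subtending 0.02 rad whose image is growing at 0.01 rad/s
# will arrive in roughly 2 seconds.
print(time_to_contact_s(0.02, 0.01))  # 2.0
```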
Kinetic depth effect
If a stationary rigid figure (for example, a wire cube) is placed in front of a point source of
light so that its shadow falls on a translucent screen, an observer on the other side of the
screen will see a two-dimensional pattern of lines. But if the cube rotates, the visual system
will extract the necessary information for perception of the third dimension from the
movements of the lines, and a cube is seen. This is an example of the kinetic depth
effect.[9] The effect also occurs when the rotating object is solid (rather than an outline figure),
provided that the projected shadow consists of lines which have definite corners or end
points, and that these lines change in both length and orientation during the rotation.[10]
Perspective
The property of parallel lines converging in the distance, at infinity, allows us to reconstruct
the relative distance of two parts of an object, or of landscape features. An example would
be standing on a straight road, looking down the road, and noticing the road narrows as it
goes off in the distance.
Relative size
If two objects are known to be the same size (e.g., two trees) but their absolute size is
unknown, relative size cues can provide information about the relative depth of the two
objects. If one subtends a larger visual angle on the retina than the other, the object which
subtends the larger visual angle appears closer.
Familiar size
Since the visual angle of an object projected onto the retina decreases with distance, this
information can be combined with previous knowledge of the object's size to determine the
absolute depth of the object. For example, people are generally familiar with the size of an
average automobile. This prior knowledge can be combined with information about the angle
it subtends on the retina to determine the absolute depth of an automobile in a scene.
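As a rough illustration of how a known size combined with visual angle yields absolute distance (a sketch with illustrative numbers, not taken from the source), the relation is distance = (known size / 2) / tan(visual angle / 2):

```python
import math

def distance_from_familiar_size(known_size_m, visual_angle_rad):
    """Absolute distance from a familiar physical size and the visual
    angle it subtends: distance = (size / 2) / tan(angle / 2)."""
    return (known_size_m / 2.0) / math.tan(visual_angle_rad / 2.0)

# A car known to be about 4.5 m long that subtends roughly 0.09 rad
# (about 5 degrees) is close to 50 m away.
print(round(distance_from_familiar_size(4.5, 0.09), 1))  # 50.0
```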
Absolute size
Even if the actual size of the object is unknown and there is only one object visible, a smaller
object seems further away than a larger object presented at the same location.[11]
Aerial perspective
Due to light scattering by the atmosphere, objects that are a great distance away have lower
luminance contrast and lower color saturation. Due to this, images seem hazy the farther
they are away from a person's point of view. In computer graphics, this is often called
"distance fog." The foreground has high contrast; the background has low contrast. Objects
differing only in their contrast with a background appear to be at different depths.[12] The color
of distant objects is also shifted toward the blue end of the spectrum (e.g., distant
mountains). Some painters, notably Cézanne, employ "warm" pigments (red, yellow and
orange) to bring features forward towards the viewer, and "cool" ones (blue, violet, and blue-
green) to indicate the part of a form that curves away from the picture plane.
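The "distance fog" mentioned above is typically modelled as an exponential falloff of contrast with distance. The sketch below shows that falloff with an arbitrary extinction coefficient; the coefficient and distances are illustrative, not values from the source.

```python
import math

def apparent_contrast(intrinsic_contrast, distance_m, extinction_per_m=0.0005):
    """Exponential attenuation of luminance contrast with viewing distance,
    the same falloff commonly used for distance fog in computer graphics."""
    return intrinsic_contrast * math.exp(-extinction_per_m * distance_m)

# The same surface viewed at 100 m versus 5 km: the distant one looks washed out.
print(round(apparent_contrast(1.0, 100), 3))   # 0.951
print(round(apparent_contrast(1.0, 5000), 3))  # 0.082
```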
Accommodation
This is an oculomotor cue for depth perception. When we focus on far-away objects, the
ciliary muscles relax, allowing the lens to be stretched thinner and hence changing the focal
length. The kinesthetic sensations of the contracting and relaxing ciliary muscles (intraocular
muscles) are sent to the visual cortex, where they are used for interpreting distance/depth.
Accommodation is only effective for distances less than 2 meters.
Occlusion
Occlusion (also referred to as interposition) happens when near surfaces overlap far
surfaces.[13] If one object partially blocks the view of another object, humans perceive it as
closer. However, this information only allows the observer to create a "ranking" of relative
nearness. Monocular occlusion cues consist of the object's texture and geometry, and they
can reduce depth-perception latency in both natural and artificial stimuli.[14][15]
Curvilinear perspective
At the outer extremes of the visual field, parallel lines become curved, as in a photo taken
through a fisheye lens. This effect, although it is usually eliminated from both art and photos
by the cropping or framing of a picture, greatly enhances the viewer's sense of being
positioned within a real, three-dimensional space. (Classical perspective has no use for this
so-called "distortion," although in fact the "distortions" strictly obey optical laws and provide
perfectly valid visual information, just as classical perspective does for the part of the field of
vision that falls within its frame.)
Texture gradient
Fine details on nearby objects can be seen clearly, whereas such details are not visible on
faraway objects. A texture gradient is the gradual change in the apparent grain of a surface
with distance. For example, on a long gravel road, the shape, size and colour of the gravel
near the observer can be seen clearly. In the distance, the road's texture cannot be clearly
differentiated.
Lighting and shading
The way that light falls on an object and reflects off its surfaces, and the shadows that are
cast by objects provide an effective cue for the brain to determine the shape of objects and
their position in space.[16]
Defocus blur
Selective image blurring is very commonly used in photography and video to establish
the impression of depth. This can act as a monocular cue even when all other cues are
removed. It may contribute to the depth perception in natural retinal images, because the
depth of focus of the human eye is limited. In addition, there are several depth estimation
algorithms based on defocus and blurring.[17] Some jumping spiders are known to use image
defocus to judge depth.[18]
Elevation
When an object is visible relative to the horizon, we tend to perceive objects which are closer
to the horizon as being farther away from us, and objects which are farther from the horizon
as being closer to us.[19] In addition, if an object moves from a position close to the horizon to a
position higher or lower than the horizon, it will appear to move closer to the viewer.

Binocular cues
Binocular cues provide depth information when viewing a scene with both eyes.
Stereopsis, or retinal (binocular) disparity, or binocular parallax
Animals that have their eyes placed frontally can also use information derived from the
different projection of objects onto each retina to judge depth. By using two images of the
same scene obtained from slightly different angles, it is possible to triangulate the distance to
an object with a high degree of accuracy. Each eye views the object from a slightly different
angle because of the horizontal separation (parallax) of the eyes. If an object is far away, the
disparity of the image falling on the two retinas will be small; if the object is close, the
disparity will be large. It is stereopsis that
tricks people into thinking they perceive depth when viewing Magic
Eyes, Autostereograms, 3-D movies, and stereoscopic photos.
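For a simple two-camera (or two-eye) model, the triangulation described here is often written as depth = baseline × focal length / disparity. The sketch below uses that textbook relation with illustrative numbers; it is not a description of how the brain computes stereopsis.

```python
def depth_from_disparity(baseline_m, focal_length_px, disparity_px):
    """Simple stereo triangulation: depth = baseline * focal length / disparity.
    Large disparity means the object is near; small disparity means it is far."""
    return baseline_m * focal_length_px / disparity_px

# Two viewpoints 6.5 cm apart with a focal length of 1000 pixels:
print(depth_from_disparity(0.065, 1000, 65.0))  # 1.0  (near object, large disparity)
print(depth_from_disparity(0.065, 1000, 6.5))   # 10.0 (far object, small disparity)
```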
Convergence
This is a binocular oculomotor cue for distance/depth perception. Because of stereopsis the
two eyeballs focus on the same object. In doing so they converge. The convergence will
stretch the extraocular muscles. As happens with the monocular accommodation cue,
kinesthetic sensations from these extraocular muscles also help in depth/distance
perception. The angle of convergence is smaller when the eye is fixating on far away objects.
Convergence is effective for distances less than 10 meters.[20]
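The geometry behind the convergence cue can be sketched the same way: given the interocular separation and the vergence angle between the two lines of sight, the fixation distance is roughly (interocular / 2) / tan(angle / 2). The numbers below are illustrative only.

```python
import math

def distance_from_convergence(interocular_m, vergence_angle_rad):
    """Distance to the fixated point from the angle between the two lines
    of sight: distance = (interocular / 2) / tan(angle / 2)."""
    return (interocular_m / 2.0) / math.tan(vergence_angle_rad / 2.0)

# With eyes 6.5 cm apart, a vergence angle of about 0.065 rad corresponds
# to fixating roughly 1 m away; about 0.0065 rad corresponds to roughly 10 m.
print(round(distance_from_convergence(0.065, 0.065), 2))   # 1.0
print(round(distance_from_convergence(0.065, 0.0065), 2))  # 10.0
```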
Shadow Stereopsis
A. Medina Puerta demonstrated that retinal images with no parallax disparity but with
different shadows are fused stereoscopically, imparting depth perception to the imaged
scene. He named the phenomenon "shadow stereopsis". Shadows are therefore an
important stereoscopic cue for depth perception.[21]
