Unit 2 Foundations For Visualization

• Foundations for visualization:

• Visualization stages
• Semiology of Graphical Symbols
• The Eight Visual Variables
• Historical Perspective
• Taxonomies
• Experimental Semiotics based on Perception: Gibson's Affordance theory
• A Model of Perceptual Processing
• Mappings for visualization. Choose the most appropriate visualization from the options within your toolset, e.g., a scatter plot versus a bar chart (this depends on the data types of the axes). Before creating visualizations, you should thoroughly analyze your data to understand its structure, relationships, and any patterns or trends.

Foundations for visualization refer to the fundamental principles, concepts, and techniques that underlie the creation and effective use of visual representations of data and information. Visualization is a powerful tool for understanding complex data, identifying patterns, and communicating insights. We have now covered the start and the end of the visualization pipeline, namely getting data into the computer and, on the human side, how perception and cognition help us interpret images. We have looked at one fundamental visualization, namely the scatter plot. There are many other visualization techniques and systems, so it is necessary for us to structure our study of the field.
• user interaction ideally takes place at any point in this pipeline.
• each link is a many-to-many mapping.
• many visualization systems have multiple visualizations at the same time on the screen, and thus
have multiple representation mappings and corresponding renderings.
The Visualization Process
Data preprocessing and transformation:

• The starting point is to process the raw data into something useable by the visualization system.

• Make sure that the data are mapped to fundamental data types, and deal with specific application data issues such as missing values, errors in input, and data too large for processing. Missing data may require interpolation; large data may require sampling, filtering, aggregation, or partitioning (see the sketch below).
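As a rough illustration of these steps, here is a minimal Python sketch (assuming pandas and NumPy, with an entirely made-up DataFrame containing hypothetical "time" and "value" columns) that interpolates missing values and reduces a large table by sampling and aggregation:

```python
# Minimal preprocessing sketch; the DataFrame and its columns are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "time": pd.date_range("2024-01-01", periods=1000, freq="h"),
    "value": rng.normal(size=1000),
})
df.loc[df.sample(frac=0.05, random_state=0).index, "value"] = np.nan  # simulate missing values

clean = df.copy()
clean["value"] = clean["value"].interpolate()         # missing data: fill by interpolation
sampled = clean.sample(frac=0.1, random_state=0)      # large data: random sampling
daily = clean.set_index("time").resample("D").mean()  # large data: aggregation to daily means
print(daily.head())
```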

I. Visualization stages
• The process of creating effective data visualizations typically involves several stages, each with its own objectives and considerations.
• These stages help ensure that the visualization effectively communicates insights and
information.
Stages
• Mapping for visualizations
• Rendering transformations
• Expressiveness
• Effectiveness
Mapping for visualizations
• Once the data are clean, we can decide on a specific visual representation. This requires representation mappings: geometry, color, and sound, for example. It is easy to simply develop a nonsense visualization, or one that conveys the wrong information.
• Crucial influences on the visualization of data sets are expressiveness and effectiveness.

1. Mapping for visualizations: Mapping refers to the process of establishing a correspondence between data variables and visual properties or encodings. It involves assigning data values to visual attributes such as position, size, color, shape, or texture. Effective mapping is crucial for creating meaningful and interpretable visualizations.

For example, in a scatter plot, the data variables for the x and y axes are mapped to the horizontal and
vertical positions, respectively. In a bar chart, the data values are mapped to the lengths or heights of
the bars.
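As a concrete illustration, here is a minimal matplotlib sketch of these two mappings (the data values are invented):

```python
# Two representation mappings: position (scatter plot) and length (bar chart).
import matplotlib.pyplot as plt

x = [1, 2, 3, 4, 5]              # data variable mapped to horizontal position
y = [2.1, 4.0, 3.5, 5.2, 4.8]    # data variable mapped to vertical position

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.scatter(x, y)                      # scatter plot: both variables map to position
ax2.bar(["A", "B", "C"], [3, 7, 5])    # bar chart: values map to bar heights
plt.show()
```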

2. Rendering transformations: Rendering transformations are operations that are applied to the visual representations of data during the rendering process. These transformations can enhance the visual clarity, emphasize specific aspects of the data, or adapt the visualization to different viewing conditions or constraints.

Common rendering transformations include scaling, translating, rotating, clipping, and projecting. For instance, scaling transformations can be used to adjust the size of a visualization to fit a specific layout or canvas. Projection transformations are used to convert 3D data into a 2D representation for display on a flat surface.
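The sketch below illustrates two of these transformations with NumPy, using a tiny made-up set of 3D points: a uniform scaling followed by a simple orthographic projection that drops the z coordinate:

```python
# Scaling and projection as matrix operations on a small set of 3D points.
import numpy as np

points_3d = np.array([[1.0, 2.0, 3.0],
                      [4.0, 0.5, 1.5],
                      [2.5, 3.0, 0.2]])

scale = np.diag([2.0, 2.0, 2.0])     # scaling transformation (uniform, factor 2)
scaled = points_3d @ scale

project = np.array([[1.0, 0.0],      # orthographic projection: keep x and y,
                    [0.0, 1.0],      # discard z for display on a flat surface
                    [0.0, 0.0]])
points_2d = scaled @ project
print(points_2d)                     # 2D coordinates ready for rendering
```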

3. Expressiveness: Expressiveness refers to the ability of a visualization technique or system to effectively represent a wide range of data types and characteristics. An expressive visualization technique should be capable of accurately and clearly conveying various patterns, relationships, and structures present in the data.

• For example, scatter plots are expressive in representing bivariate relationships, while parallel
coordinates are expressive in visualizing high-dimensional data. The choice of an expressive
visualization technique depends on the complexity and nature of the data being analyzed. An
expressive visualization presents all the information, and only the information.
• Expressiveness thus measures the concentration of information: ideally, the visualization displays exactly the information we wish to present, no less and no more.
• We have 0 ≤ M_exp ≤ 1, where M_exp = 1 is ideal expressiveness. If the information displayed is less than that desired to be presented, then M_exp < 1; if M_exp > 1, we are presenting too much information.
4. Effectiveness: Effectiveness in data visualization refers to the degree to which a visualization
achieves its intended purpose or communicates the desired information effectively. An effective
visualization should accurately convey the relevant patterns, trends, or insights present in the
data while minimizing potential misinterpretations or cognitive load.
The effectiveness can be quantified as M_eff = 1 / (1 + interpret + render), where interpret is the time needed to interpret the visualization and render is the time needed to render it. We then have 0 < M_eff ≤ 1. The larger M_eff is, the greater the visualization's effectiveness. If M_eff is small, then either the interpretation time or the rendering time is very large; if M_eff is large (close to 1), then both the interpretation and rendering times are very small.
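A small Python sketch of both measures, assuming M_exp is read as the ratio of the information displayed to the information desired (consistent with the inequalities above) and that interpret and render are expressed in the same time units:

```python
# Illustrative only: the ratio form of M_exp is an assumption based on the
# inequalities given in the text; M_eff follows the formula above.
def expressiveness(displayed_info: float, desired_info: float) -> float:
    """M_exp: information displayed relative to the information desired."""
    return displayed_info / desired_info

def effectiveness(interpret: float, render: float) -> float:
    """M_eff = 1 / (1 + interpret + render); larger values mean greater effectiveness."""
    return 1.0 / (1.0 + interpret + render)

print(expressiveness(8, 10))    # 0.8   -> some desired information is not shown
print(effectiveness(0.5, 0.1))  # 0.625 -> moderate interpretation/rendering cost
```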

The effectiveness of a visualization can be evaluated based on criteria such as:

• Accuracy: How faithfully the visualization represents the underlying data.

• Clarity: How easy it is to perceive and understand the information being presented.

• Interpretability: How well the visualization facilitates the extraction of insights and decision-making.

• Aesthetics: How visually appealing and engaging the visualization is, which can impact user engagement and memorability.

Effectiveness is often assessed through user studies, expert evaluations, or analytical methods that
measure the performance of users in comprehending and utilizing the information presented in the
visualization.

These are essential in the design and implementation of effective data visualizations. Proper mapping
ensures that the data is accurately represented visually. Rendering transformations enhance the visual
clarity and adapt the visualization to different viewing conditions. Expressiveness ensures that the
visualization technique can handle the complexity of the data, while effectiveness measures how well
the visualization achieves its intended purpose and communicates the desired information.

II.Semiology of Graphical Symbols:

• A visual object that conveys meaning is called a graphical symbol.


• A symbol must be easily recognized. E.g., a red octagon is universally understood as a stop sign. If you use a symbol that requires a key-to-meaning mapping or reference, interpretation will be slowed and therefore efficiency is reduced.
• The science of graphical symbols and marks is called semiology.
• Semiology addresses how a graphical object or representation can be well designed, and how it is perceived.
• Every possible construction in the Euclidean plane is a graphical representation made up of
graphical symbols.
• This includes diagrams, networks, maps, plots, and other common visualizations.
• Semiology uses the qualities of the plane and objects on the plane to produce similarity features,
ordering features, and proportionality features of the data that is visible for human consumption.
• There are numerous characteristics of visualizations, of images, or of graphics made up of
symbols.

1.Symbols and Visualizations:

Figure (a) contains an image that is universally recognizable.

• Such images become preattentively recognizable with experience.


• Figure (b), on the other hand, requires a great deal of attention to understand; the first step is to recognize patterns within that figure.
• Pattern recognition proceeds in two steps: the first identifies the major elements of the image, and the second identifies the various relationships between these elements.
• With attentive effort, the symbols are perceived (transferred from long-term memory).
• Important: Without external (cognitive) identification, a graphic is unusable.
• The external identification must be directly readable and understandable.
• Since much of our perception is driven by physical interpretations, meaningful images must
have easily interpretable x-, y-, and z-dimensions and the graphics elements of the image must
be clear.
• Discovery of relations or patterns occurs through two main steps.
• The first is a mapping between any relationship of the graphic symbols and the data that these
symbols represent.
• Symbols are abstract or conventional signs that can help you emphasize and annotate important
or interesting aspects of your data visualization.
• For example, you can use symbols to highlight outliers, trends, or patterns in your data, such as
arrows, lines, dots, or asterisks.

(a) Symbol with obvious meaning. (b) Representation with complex meaning.
In other words, any pattern on the screen must imply a pattern in the data. If it does not, then it is an artifact of the selected representation.
• any perceived pattern variation in the graphic or symbol cognitively implies such a similar
variation in the data.
• Any perceived order in graphic symbols is directly correlated with a perceived corresponding
order between the data, and vice versa.
• similarity in data structure ⇐⇒ visual similarity of corresponding symbols;
• order between data items ⇐⇒ visual order between corresponding symbols.
2. Features of Graphics:
• Matrix representation of a set of relationships between nodes in a graph; the size represents the strength of the relationship.

• This produces a one-to-one correspondence between a 3D view with height and a 2D view with
size, thus different interpretations for the z value.
• The set of all points either in the 2D or 3D image represents the totality of the relations among
the three dimensions x, y, and z, and any patterns present imply a pattern in the data.
• We identify the tree as the dominant feature of this image, rather than the individual parts that
make up the tree.
• When looking at Figure Below, we immediately see two tree branches.

• The eye sees either branch independently of the number of its leaves.
• The graphic can contain a very large number of single data items, themselves graphics, with the
only possible limitations being technical ones, such as making sure that the various graphic
symbols are distinguishable from each other.
• But even then, perhaps the texture resulting from the en masse number of symbols may produce
an object of interest.
• The eye can preattentively see the various distributions of symbols.
Rules of a graphic:
• All graphics are represented on the screen. All objects will be interpreted as flat (in 2D) or as physical objects (in 3D). So 3D is the medium by which we need to interpret the graphic.
• We can identify some fundamental rules:
1. The aim of a graphic is to discover groups or orders in x, and groups or orders in y, that are formed on z-values;
2. (x, y, z)-construction enables in all cases the discovery of these groups;
3. Within the (x, y, z)-construction, permutations and classifications solve the problem of the upper level of information;
4. Every graphic with more than three factors that differs from the (x, y, z)-construction destroys the unity of the graphic and the upper level of information; and
5. Pictures must be read and understood by the human.


• Analysis of a graphic:
• When analyzing a graphic, we first perceive groups of objects.
• Finally, we examine special cases not within the groups, or relationships between the groups (or a combination of both).
• This process can be done at many levels and with many different visualizations.
• Supporting analysis plays a significant role.
III. The Eight Visual Variables
• The application of graphics to communicate information requires an understanding of graphic
primitives and their properties.
• For the most part, all graphic primitives will be termed marks.
• Marks can vary in size, can be displayed using different colors, and can be mapped to different
orientations, all of which can be driven by data to convey information.
• In total there are eight ways in which graphical objects can encode information, i.e., eight visual
variables.
• 1. Position
• 2. Shape
• 3. Size
• 4. Brightness
• 5. Color
• 6. Orientation
• 7. Texture
• 8. Motion
Eight variables can be adjusted as necessary to maximize the effectiveness of a visualization to
convey information.
• Position: The first and most important visual variable is that of position, the placement of representative graphics within some display space, be it one-, two-, or three-dimensional.
Position has the greatest impact on the display of information, because the spatial arrangement of
graphics is the first step in reading a visualization.

• The maximization of the spread of representational graphics throughout the display space
maximizes the amount of information communicated, to some degree.
• The visualization display with the worst case positioning scheme maps all graphics to the exact
same position; consequently, only the last-drawn graphic is seen, and little information is
exchanged.
• The best positioning scheme maps each graphic to unique positions, such that all the graphics can be seen with no overlaps.
Example: minimum price versus maximum price, encoded using position.
Mark: The second visual variable is the mark or shape: points, lines, areas, volumes, and their
compositions. Marks are graphic primitives that represent data.

• Any graphical object can be used as a mark, including symbols, letters, and words

• Several examples of different marks or glyphs that can be used.

• When using marks, it is important to consider how well one mark can be differentiated from other
marks.

• Within a single visualization there can be hundreds or thousands of marks to observe; therefore, we try
not to select marks that are too similar.


Size (Length, Area, and Volume) :

• The previous two visual variables, position and marks, are required to define a visualization.

• The third visual variable and first graphic property is size.

• Size easily maps to interval and continuous data variables, because that property supports gradual
increments over some range.

• And while size can also be applied to categorical data, it is more difficult to distinguish between marks
of near similar size, and thus size can only support categories with very small cardinality.

• When marks are represented with graphics that contain sufficient area, the quantitative aspects of size fall off, and the differences between marks become more qualitative.

• Example sizes to encode data.
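A minimal matplotlib sketch of size as a visual variable, where a made-up third variable is mapped to marker area:

```python
# Size encoding: marker area (s=) carries a quantitative variable.
import matplotlib.pyplot as plt

x = [1, 2, 3, 4]
y = [3, 1, 4, 2]
magnitude = [20, 60, 120, 240]      # quantitative variable to be encoded

plt.scatter(x, y, s=magnitude)      # s= maps the variable to marker area
plt.show()
```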

• Brightness :
• The fourth visual variable is brightness or luminance.

• Brightness is the second visual variable used to modify marks to encode additional data variables.

• While it is possible to use the complete numerical range of brightness values, human perception cannot distinguish between all pairs of brightness values.

Brightness scale for mapping values to the display.

• Brightness can therefore be used to provide relative difference for large interval and continuous data variables, or for accurate mark distinction for marks drawn using a reduced sampled brightness scale.

Color : The fifth visual variable is color;

• While brightness affects how white or black colors are displayed, it is not actually color.

• Color can be defined by two parameters, hue and saturation.

• Hue provides what most think of as color: the dominant wavelength from the visual spectrum.

• Saturation is the level of hue relative to gray, and drives the purity of the color to be displayed.

Microsoft hue/saturation color selector.

• The use of color to display information requires mapping data values to individual colors.
• The mapping of color usually entails defining color maps that specify the relationship between value ranges and color values.
• Color maps are useful for handling both interval and continuous data variables, since a color map is generally defined as a continuous range of hue and saturation values.
Example colormap that can be used to encode a data variable
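A minimal matplotlib sketch of a color map, using random illustrative data; a continuous variable is mapped through the named colormap "viridis" (any continuous colormap would serve):

```python
# Color encoding: a continuous variable is mapped to colors via a colormap.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.random(100), rng.random(100)
value = rng.random(100)                           # continuous data variable

sc = plt.scatter(x, y, c=value, cmap="viridis")   # value range -> color values
plt.colorbar(sc, label="value")                   # shows the value-to-color mapping
plt.show()
```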

• Orientation :

• The sixth visual variable is orientation or direction. Orientation is a principal graphic component behind iconographic stick figure displays, and is tied directly to preattentive vision.

• This graphic property describes how a mark is rotated in connection with a data variable.

• Clearly, orientation cannot be used with all marks; for instance, a circle looks the same under any
rotation.

• The best marks for using orientation are those with a natural single axis; the graphic exhibits symmetry
about a major axis.
• These marks can display the entire range of orientations.

• Example orientations of a representation graphic, where the lowest value maps to the mark pointing
upward and increasing values rotate the mark in a clockwise rotation.
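A small sketch of orientation encoding using a matplotlib quiver plot, where a made-up data value determines each arrow's rotation:

```python
# Orientation encoding: each mark is an arrow rotated according to a data value.
import matplotlib.pyplot as plt
import numpy as np

x, y = np.meshgrid(np.arange(5), np.arange(5))
angle = (x + y) / 8.0 * np.pi                    # data value mapped to a rotation angle

plt.quiver(x, y, np.cos(angle), np.sin(angle))   # arrow direction encodes the value
plt.show()
```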



Texture: The seventh visual variable is texture.

• Texture can be considered as a combination of many of the other visual variables, including marks, color, and orientation.

• Dashed and dotted lines, which constitute some of the textures of linear features, can be readily
differentiated, as long as only a modest number of distinct types exist.

• example: Six possible example textures that could be used to identify different data values.

• geometric textures can be readily emulated with color textures, with color variations similar to those
obtained via lighting effects.

• Finally, the distribution and orientation of marks themselves can form regions of texture.
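A minimal sketch of texture for linear features: dashed and dotted matplotlib line styles distinguish a small number of made-up series:

```python
# Texture for lines: a few distinct line styles remain easy to tell apart.
import matplotlib.pyplot as plt

x = range(10)
plt.plot(x, [v * 1.0 for v in x], linestyle="-",  label="series A (solid)")
plt.plot(x, [v * 1.5 for v in x], linestyle="--", label="series B (dashed)")
plt.plot(x, [v * 2.0 for v in x], linestyle=":",  label="series C (dotted)")
plt.legend()
plt.show()
```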

• Motion:

• The eighth visual variable is motion. In fact, motion can be associated with any of the other visual variables, since the way a variable changes over time can convey more information.

• One common use of motion is in varying the speed at which a change is occurring.

• The eye will be drawn to graphical entities based not only on similarities in behavior, but also on
outliers.

Effects of Visual Variables:

• Different visual variables can serve different purposes.

• We can categorize these purposes in a variety of ways.

1. Selective visual variables
2. Associative visual variables
3. Ordinal visual variables
4. Proportional visual variables
5. Separating visual variables

1. Selective Visual Variables: Selective visual variables are used to distinguish or categorize data
elements into different groups or categories. These variables allow viewers to quickly identify
which data points belong to which group or category. Examples of selective visual variables
include color, shape, and texture.
2. Associative Visual Variables: Associative visual variables are used to establish connections or
associations between data elements. They help viewers perceive relationships, patterns, or
groupings within the data. Examples of associative visual variables include proximity,
alignment, and connectedness (e.g., lines or curves connecting data points).
3. Ordinal Visual Variables: Ordinal visual variables are used to represent ordered or ranked data.
These variables convey the relative order or ranking of data elements, but not necessarily the
precise numerical differences between them. Examples of ordinal visual variables include
saturation, size, and texture density.
4. Proportional Visual Variables: Proportional visual variables are used to represent quantitative or
numerical data values. These variables accurately convey the proportional or relative
magnitudes of data elements. Examples of proportional visual variables include length, area,
volume, and angle.
5. Separating Visual Variables: Separating visual variables are used to visually separate or
distinguish different layers or aspects of the data. They help viewers perceive the boundaries or
distinctions between different components or segments of the visualization. Examples of
separating visual variables include spatial separation, depth cues (e.g., occlusion or perspective),
and layering.

By understanding the different purposes of these visual variables, designers can make informed choices about which variables to use for encoding specific aspects of their data.
By carefully choosing and combining these visual variables, designers can create effective and
expressive visualizations that accurately represent the underlying data and facilitate comprehension and
insight generation.

• Selective visual variables.

• After coding with such variables, different data values are spontaneously divided by the human into
distinguished groups (e.g., for visualizing nominal values).

• Size (length, area/volume);

• Brightness;

• Sample/texture;

• Color (only primary colors): varies with the brightness value;

• Direction/orientation.

• Associative visual variables.

• All factors have the same visibility (e.g., for visualizing nominal values).

• Sample/texture

• Color

• Direction/orientation

• Shape

• Ordinal visual variables.

• After coding with such variables, different data values are spontaneously ordered by the human (e.g.,
for visualizing ordinal and quantitative data).

• Sample/texture;

• Size;

• Brightness.

Example associative variables: (a) textures; (b) colors; (c) direction; (d) shape

Example of separating texture:


• Proportional visual variables.

• In addition, these variables provide a direct association with relative size (e.g., for visualizing ordinal and quantitative data).

• Size (length, area/volume);

• Direction/orientation;

• Brightness.

• Separating visual variables.

• All elements are visible (the rest are not visible).

• Sample/texture ;

• Color;

• Direction/orientation;

• Shape.

IV. Historical Perspective:

• The art of visualization, the principles of graphics and their comprehension, is generally understood.
But as a science, we have yet to define a consistent formalism for general visualizations, or even just
for the class of data visualizations.

• Researchers are now starting to look into such an idea through the development of various models;

• Robertson first proposed the need for formal models as a foundation for visualization systems, and there have been a number of efforts over the years to formalize the field of visualization.

• The following section contains descriptions of some taxonomies of visualization techniques.


V. Taxonomies:
• Taxonomies in data visualization refer to systematic frameworks or classifications that organize and
categorize different visualization techniques, methods, or approaches based on their characteristics,
purposes, or underlying data structures. These taxonomies serve as conceptual models that help us
understand and navigate the diverse landscape of data visualization techniques.

• Some commonly used taxonomies in data visualization include:

• 1. Data Type Taxonomy:

• This taxonomy categorizes visualization techniques based on the type of data they are designed to
represent, such as spatial data, hierarchical data, multivariate data, temporal data, or network data.
For example, techniques like scatter plots and parallel coordinates are suitable for multivariate data,
while treemaps and node-link diagrams are appropriate for hierarchical data.

• 2. Analytical Task Taxonomy:

• This taxonomy organizes visualization techniques according to the analytical tasks they support, such
as identifying patterns, understanding relationships, comparing values, or detecting outliers. For
instance, scatter plots are effective for identifying correlations and patterns, while bar charts are useful
for comparing values across different categories.

• 3. Visual Encoding Taxonomy:

• This taxonomy classifies visualization techniques based on the visual encoding channels they employ,
such as position, size, color, shape, or texture. It helps designers understand how different visual
variables can be effectively used to represent different aspects of the data.

• 4. Interaction Taxonomy:

• This taxonomy categorizes visualization techniques based on the types of interactions they support,
such as zooming, panning, filtering, brushing, or linking. It helps designers understand how different
interaction techniques can facilitate data exploration and analysis.

• 5. Display Medium Taxonomy:

• This taxonomy organizes visualization techniques based on the display medium or platform they are
designed for, such as desktop computers, mobile devices, large displays, or virtual/augmented reality
environments. Different display media may have different constraints and affordances that influence
the choice of visualization techniques.

• 6. Application Domain Taxonomy:


• This taxonomy categorizes visualization techniques based on the specific application domains or fields
in which they are commonly used, such as finance, biology, transportation, or social sciences. Different
domains may have unique data structures or analytical requirements that necessitate specialized
visualization approaches.

• These taxonomies provide a structured way of thinking about data visualization techniques and can
help researchers, designers, and practitioners in selecting appropriate techniques for their specific data
and analysis needs. They also facilitate communication and knowledge sharing within the data
visualization community, enabling a common understanding of the strengths, limitations, and trade-
offs associated with different visualization approaches.

• It's important to note that these taxonomies are not mutually exclusive, and some visualization
techniques may fall into multiple categories depending on the perspective or criteria used for
classification.

Taxonomy of Visualization Goals (Keller and Keller):

• Keller and Keller classify visualization techniques based on the type of data being analyzed and the
user’s task(s).

• The data types they consider are:

• scalar (or scalar field);

• nominal;

• direction (or direction field);

• shape;

• position;

• spatially extended region or object (SERO).

• The authors also define a number of tasks that a visualization user might be interested in performing.

• While some of the tasks seem interrelated, their list is a useful starting position for someone setting
out to design a visualization for a particular application. Their task list consists of:

• identify—establish characteristics by which an object is recognizable;

• locate—ascertain the position (absolute or relative);

• distinguish—recognize as distinct or different (identification is not needed);

• categorize—place into divisions or classes;

• cluster—group similar objects;

• rank—assign an order or position relative to other objects;

• compare—notice similarities and differences;

• associate—link or join in a relationship that may or may not be of the same type;

• correlate—establish a direct connection, such as causal or reciprocal.


Data Type by Task Taxonomy (Shneiderman, 1996):

• His list of data types was somewhat different from Keller and Keller’s, and included more types from
the information visualization field.

• List of data types consisted of

• one-dimensional linear;

• two-dimensional map;

• three-dimensional world;

• temporal;

• multidimensional;

• tree;

• network.

• The task set consisted of the following:

• Overview. Gain an overview of the entire collection, e.g., using a fisheye strategy for network browsing.

• Zoom. Zoom in on items of interest to gain a more detailed view, e.g., holding down a mouse button to enlarge a region of the display.

• Filter. Filter out uninteresting items to allow the user to reduce the size of a search, e.g., dynamic queries that can be invoked via sliders.

• Details on demand. Select an item or group and get details when needed, e.g., a pop-up window can show more attributes of a specific object on the screen.

• Relate. View relationships among items, e.g., select a particular object that can then show all other objects related to it.

• History. Keep a history to allow undo, replay, and progressive refinement, such as allowing a mistake to be undone, or a series of steps to be replayed.

• Extract. Extract the items or data in a format that would facilitate other uses, i.e., saving to file, sending via email, printing, or dragging into another application (statistical or presentation package).

• Shneiderman suggested that an effective visual exploration tool should support most or all of these tasks in an easy-to-use manner.
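As a rough sketch, a few of these tasks can be mimicked with basic pandas/matplotlib operations (the DataFrame and its "price" and "rating" columns are purely illustrative):

```python
# Overview, zoom, filter, and details-on-demand expressed as simple operations.
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame({"price": [10, 25, 40, 70, 95],
                   "rating": [2.1, 3.4, 4.0, 4.6, 3.9]})

ax = df.plot.scatter(x="price", y="rating")     # Overview: plot the entire collection
ax.set_xlim(20, 60)                             # Zoom: focus on a region of interest
subset = df[df["rating"] > 3.5]                 # Filter: drop uninteresting items
print(subset.loc[subset["rating"].idxmax()])    # Details on demand: inspect one item
plt.show()
```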

Keim (2002) Information Visualization Classification:

• Keim designed a classification scheme for visualization systems based on three dimensions: data types,
visualization techniques, and interaction/distortion methods.

• Classification of Data Types. Six types of data exist:

• 1. One-dimensional data—e.g., temporal data, news data, stock prices, text documents;

• 2. Two-dimensional data—e.g., maps, charts, floor plans, newspaper layouts;

• 3. Multidimensional data—e.g., spreadsheets, relational tables;

• 4. Text and hypertext—e.g., news articles, web documents;

• 5. Hierarchies and graphs—e.g., telephone/network traffic, system dynamics models;

• 6. Algorithms and software—e.g., software, execution traces, memory dumps.

Classification of Visualization Techniques

• Five classes of visualization techniques exist:

1. Standard 2D/3D displays—e.g., x,y- or x,y,z-plots, bar charts, line graphs;

2. Geometrically transformed displays—e.g., landscapes, scatterplot matrices, projection pursuit techniques, prosection views, hyperslice, parallel coordinates;

3. Iconic displays—e.g., Chernoff faces, needle icons, star icons, stick figure icons, color icons, tile bars;

4. Dense pixel displays—e.g., recursive pattern, circle segments, graph sketches;

5. Stacked displays—e.g., dimensional stacking, hierarchical axes, worlds-within-worlds, treemaps, cone trees.

Classification of Interaction & Distortion Techniques

• Five classes of interaction techniques exist:

1. Dynamic projection—e.g., the grand tour system, XGobi, XLispStat, ExplorN;

2. Interactive filtering—e.g., Magic Lenses, InfoCrystal, dynamic queries, Polaris;

3. Interactive zooming—e.g., TableLens, PAD++, IVEE/Spotfire, DataSpace, MGV, and scalable framework;

4. Interactive distortion—e.g., hyperbolic and spherical distortions, bifocal displays, perspective wall, graphical fisheye views, hyperbolic visualization, hyperbox;

5. Interactive linking and brushing—e.g., multiple scatterplots, bar charts, parallel coordinates, pixel displays and maps, Polaris, scalable framework, S-Plus, XGobi, XmdvTool, DataDesk.

VI. Experimental Semiotics Based on Perception: Gibson's Affordance Theory:

"The theory of affordances implies that to see things is to see how to get about among them and what to do or not do with them. If this is true, visual perception serves behavior, and behavior is controlled by perception." (Gibson)
Affordance theory, proposed by Gibson (2014) and brought to technology research by Norman (2008),
proposes that the use of an object is intrinsically determined by its physical shape. However, when
translated to digital objects, affordance theory loses explanatory power, as the same physical affordances,
for example, screens, can have many socially constructed meanings and can be used in many ways.
Furthermore, Gibson’s core idea that physical affordances have intrinsic, pre-cognitive meaning cannot
be sustained for the highly symbolic nature of digital affordances, which gain meaning through social
learning and use. A possible way to solve this issue is to think about on-screen affordances as symbols
and affordance research as a semiotic and linguistic enterprise.
Gibson, proposed that the form of the objects surrounding us shape the perception of what is possible to
do with them (Gibson, 2014). As we live in the physical world, we acquire perceptions of how to use the
objects and features of that world. These perceptions spring out of reality as potentialities inviting us to
take advantage of an object when we want to accomplish a task; they are experiential and relational,
rather than carefully thought out and discrete cognitions (Jones, 2003). The process is similar to the older
concept of phenomenological “intuition,” or felt meaning. According to Gibson, a flat, solid surface
invites us experientially to stand or lay on by mere interaction with our feet, balance organs, and vision
(Gibson, 2014). We perceive the use of that surface just as immediately as any other animal would
through our sense of balance and sight; no higher cognition is involved. We do not think propositionally
“there is a flat surface, let me walk on it;” we perceive experientially that the surface will support us with
our feet, partly by touch and partly by the signals our inner ear sends us, telling us whether it is possible to
stand upright on it or not.
An oblong object of sufficient length to provide leverage and narrow enough to be grasped invites us to
use it as a club. If threatened by an aggressor, we would grab it without any forethought. A chimpanzee
would use a stick in just the same way and with just as little forethought to defend itself. The affordance
of “wield-ability” and amplifying force is perceived at the most fundamental level, that of immediate
response to a stimulus.
Gibson’s psychology of affordances is non-conceptual, relational, and ecological (Jones, 2003). Objects
compel use, and people are conditioned at the level of perception by the form, substance, or texture of
the objects. In other words, objects have intrinsic, pre-cognitive meanings; they speak a language of their
own, shaped by what they can do for us. Humans recognize those meanings in use, rather than adding to
them meanings demanded by thought-out plans. Created at the end of the 1930s in the narrow field of
visual perception with applicability to car safety, affordance theory remained of limited interest for
several decades, until the late 1960s. Only those that followed Gibson’s ecological psychology as applied
to visual perceptions were interested in it. Also, the theory was limited to explaining perceptions rather
than to inspire broader applications. Gibson’s own intention was to provide a Gestalt and
phenomenological explanation of how we experience the world (Oliver, 2005). His main point was that
when encountering the world, our minds do not work synthetically. They do not combine features and
properties observed and qualified individually to provide a conscious plan for using them in a certain
way. Our perception of what things and features of the world are or may be used for emerges in use;
features are directly perceived as we interact with the world. Gibson uses the term "information pick up,"
a variation of the term “intuition,” to describe the moment when our perception starts (Gibson, 2014).
Gibson’s theory also remained an outer province of psychology because his idea that perceptions are
direct and refer to the intrinsic meanings (Oliver, 2005) of objects clashed with core tenets of cognitive
psychology, which claim that perceptions are cognitive processes that involve some reasoning (Tacca,
2011). It took a former engineer converted to the psychological study of human-technology interactions,
Donald Norman (Norman, 2013), to bring Gibson’s relational and experiential perception psychology to
the public attention, but only by mitigating the anti-cognitivist claims of Gibsonism. Working outside the
realm of experimental psychology while engaged in applied research of human interactions with
technology, Norman realized that Gibson’s insights could be directly validated and used when designing
the physical shape of products. He realized that current design practices already used affordance-thinking.
His example of the push-bar door opening mechanism, which invites the user to press a door to both
unlock and open it, is a canonical example. He took these examples, Gibson’s psychology, and some of
his insights to produce a theory of affordance-driven design. However, Norman rightly understood that
a theory created for understanding and shaping physical objects could not be used universally as such
(Norman, 2008). For example, when designing graphic user interfaces for computer applications, websites, or apps, the number of physical affordances is dramatically reduced to "looking at," "clicking on," "tapping on," or "dragging around" actions. The physical actions themselves, including something as simple as manipulating a mouse with a cursor, double-clicking, and anticipating the results, can be challenging and, more importantly, carry significant cognitive loads and reasoning demands. In fact, direct perception of the meaning of very broad on-screen affordances is nearly impossible. The same on-screen feature can
suggest many affordances, which need to be sorted out cognitively. A click on a computer screen can
produce many and different outcomes. Norman realized this very well when he proposed that on-screen
interfaces propose “perceived affordances” (Norman, 2013), which are subjective, involve some learning,
and can be quite numerous. However, this is a poor choice of terms, which did not consider the rigorous
definition of perception proposed by Gibson, according to whom, affordance perception is immediate
and direct. If according to Gibson, affordances are direct perceptions (Gibson, 1977), the “perceived”
modifier added by Norman to “affordances” is both redundant and confusing. Replacing the terms in the
equation “online affordances = perceived affordances,” with those suggested by the equation
“affordances = perceptions,” we obtain “online affordances = perceived perceptions,” which is rather
nonsensical. However, Norman uses the terms “perceived” in a far less prescriptive manner than Gibson.
For him, perceived is akin to “supposed” or “inferred.” “Perceived” is a mere synonym for any modifier
that would turn “affordances” from “something-that-is-demanded-by-use” into “something-that-I-infer-
this-thing-can-do.” Norman shifts the term from essential characteristics to assumed potential. An
affordance does not have intrinsic meaning; the meaning is constructed cognitively by the user. For
Gibson, affordances are “invariant characteristics,” for Norman “reasoned possibilities for action.”
Furthermore, Norman emphasizes that affordances should be visible and understandable, while for
Gibson, affordances may exist in situations where visibility is not necessary.
VII. A Model of Perceptual Processing:

• The "Model of Perceptual Processing" is a crucial concept in data visualization, as it helps us


understand how humans perceive and process visual information. This model provides insights into
how we can design more effective and intuitive visualizations.
• The Model of Perceptual Processing is based on the principles of visual perception and cognitive
psychology. It describes the various stages involved in the human visual system's interpretation of
visual stimuli. The model typically consists of the following stages:

• Visual Encoding:

• This stage involves the initial detection and encoding of visual features such as color, shape, size,
orientation, and motion.

• The human eye and early visual cortex perform low-level processing to extract these basic visual
features from the input stimuli.

• Pattern Perception:

• In this stage, the extracted visual features are combined and organized into patterns, shapes, and
objects.

• Gestalt principles of perception, such as similarity, proximity, continuity, and closure, play a crucial role
in this stage.

• The visual system groups and organizes the low-level features into meaningful patterns and forms.

• Object Recognition and Identification:

• At this stage, the perceived patterns and forms are matched against existing knowledge and
experiences stored in memory.

• The brain recognizes and identifies familiar objects, scenes, or concepts based on the perceived visual
patterns.

• This stage involves higher-level cognitive processes and prior knowledge.

• Semantic Processing:

• Once objects or concepts are recognized, their meaning and associated information are processed.

• This stage involves extracting semantic information, such as the relationships between objects, their
attributes, and their contextual meaning.

• Semantic processing relies on knowledge, experience, and contextual cues.

• Decision and Action:

• Based on the processed semantic information, decisions can be made, and appropriate actions or
responses can be triggered.

• This stage involves cognitive processes such as reasoning, problem-solving, and decision-making.

1. Preattentive Processing:

Perception can be divided into two types: automatic and uncontrolled (pre-attentive), or controlled (attentive).

Pre-attentive (Automatic) Perception:

This type of perception is fast and occurs in parallel, often within 250 milliseconds.

Certain visual effects or patterns "pop out" and are the result of preconscious visual processes.

Pre-attentive perception is automatic and does not require conscious effort.


Attentive (Controlled) Perception:

• Attentive perception is slower and involves the use of short-term memory.

• It is selective and often represents aggregates or structured objects present in the scene.

• Attentive perception transforms the early vision effects from pre-attentive processing into
structured objects.

• The process works as follows:


• Low-level visual attributes (e.g., color, shape, orientation) are rapidly perceived through pre-
attentive processes.
• These low-level attributes are then converted into higher-level structured objects through
attentive perception.
• The higher-level structured objects are used to perform various tasks, such as finding a door in
an emergency situation.
• Example (searching for a friend in a crowd): Pre-attentive (Automatic) Perception:
• Your eyes rapidly scan the crowd, automatically detecting low-level visual attributes such as
colors, shapes, and movements.
• Within a split second, certain visual features "pop out" to you without conscious effort. For
instance, you might immediately notice a bright red jacket or a person waving their hand.
• This pre-attentive processing occurs in parallel and is very fast, often within 250 milliseconds.
• Attentive (Controlled) Perception:
• After the initial pre-attentive detection of salient visual features, your attentive perception kicks
in.
• You start to consciously focus on specific areas or individuals in the crowd, utilizing your short-
term memory to remember what your friend looks like.
• Your attentive perception combines the low-level visual attributes (e.g., hair color, height,
clothing) into higher-level structured objects, allowing you to recognize familiar faces or body
shapes.
• This attentive processing is slower and more selective, as you actively search for your friend
among the crowd, ignoring irrelevant information.
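A small matplotlib sketch of this pop-out effect, using randomly generated distractor points and one differently colored target:

```python
# Preattentive pop-out: the single red marker is found almost instantly.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.random(200), rng.random(200)

plt.scatter(x, y, color="steelblue", s=30)                  # distractor marks
plt.scatter(rng.random(), rng.random(), color="red", s=80)  # the target "pops out"
plt.show()
```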
