
Ministry of Higher Education

Canadian International College


Architecture Engineering Department

“Principles of Urban Design”

Assignment 2

Presented by:
Yahya Islam 20196188
Doha Ibrahim 20196126
Salma Mohamed 20196069
Reem Mahmoud 20196076
Heba Salah 20196004

Presented to:
Dr. Yasmeen Bakeer
Chapter Two

Data Collection
• Panel members had to have knowledge of urban design concepts as well as expertise in urban design or related fields.
• Professional networks were tapped, and an effort was made to achieve a balance between urban designers and other planning professionals, and between those with a new urbanist bent and those without.
• Ten experts were ultimately chosen for the panel.
• A fixed protocol governed the filming of the street videos so that results would be comparable and would not differ from place to place because of imaging technique.
• A sample of twenty-two cities was drawn, all broadly urban in character and offering suitable pedestrian spaces, sidewalks, and landscaping; despite these cities' success, however, not all of their streets accommodate pedestrians well.
• The visual assessment survey consisted of distributing video clips to panel members and recording their evaluations.
• The three investigators were divided into two leads, with the rest serving as their assistants.

Measuring Urban Design provides operational definitions and measurement protocols for five
intangible qualities of urban design: imageability, visual enclosure, human scale, transparency, and
complexity. To help disseminate these measures, this book also provides a field survey instrument
that has been tested and refined for use by lay observers.
A further goal was to identify the physical features of scenes that give rise to high or low ratings on the urban design quality scales.
Chapter Three

Analysis and Final Steps

Cross-Classified Random Effects Models

• When ratings vary systematically by scene and by viewer, and random effects are present,
the resulting data structure is best represented by a cross-classified random effects model.
• The dependent variable in this analysis is the urban design quality rating assigned by an
individual panelist to an individual street scene.
• Ratings varied from scene to scene because of differences in the qualities of the street itself and its edges, and from panelist to panelist because of differences in judgment.
• A particular scene may have evoked a particularly positive or negative reaction in a
particular panelist. Such unique reactions are measurement errors.
• In statistical parlance, the “scene effect” gives rise to “scene variance.”
• Again, in statistical parlance, the “viewer effect” gives rise to “viewer variance.” The
unique reactions of individual panelists produce “measurement error variance.”
• In order to bring into focus the interesting variation across street scenes, it helps
statistically to separate the scene variance from viewer variance and measurement error
variance.
• Our analysis began by partitioning the total variance in urban design quality ratings among
the three sources of variation—scenes, viewers, and measurement errors.
• The model consisted of two parts:

actual rating = predicted rating + measurement error

where the actual rating is the sum of the predicted score for a given scene by a given viewer
plus the measurement error;

and predicted rating = constant + viewer effect + scene effect

where the predicted rating is just the sum of a constant plus a viewer effect and a scene
effect.
• For each urban design quality, table 3.4 shows the total variance in ratings and the portions
attributable to each source.
• As an example for the urban design quality of imageability:

➢ The scene variance was 0.67,
➢ The viewer variance was 0.16, and
➢ The measurement error variance was 0.50.
➢ The total variance (1.33) was thus split into roughly 50 percent scene variance, 12 percent viewer variance, and 38 percent measurement error variance.

Our analysis showed that all urban design qualities exhibit more variance across scenes
than across viewers.
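To make the two-part model above concrete, the sketch below simulates ratings with the structure actual rating = constant + scene effect + viewer effect + measurement error and then recovers the three variance components. It is a minimal illustration only: it assumes a balanced scene-by-viewer layout, uses a simple method-of-moments (two-way ANOVA) estimate rather than the maximum-likelihood cross-classified model the authors fit, and borrows the imageability variances from table 3.4 while the panel sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not the study's); variances are the imageability values
# reported in table 3.4.
n_scenes, n_viewers = 48, 10
scene_var, viewer_var, error_var = 0.67, 0.16, 0.50
constant = 3.0

# actual rating = constant + scene effect + viewer effect + measurement error
scene_eff = rng.normal(0.0, np.sqrt(scene_var), n_scenes)
viewer_eff = rng.normal(0.0, np.sqrt(viewer_var), n_viewers)
error = rng.normal(0.0, np.sqrt(error_var), (n_scenes, n_viewers))
ratings = constant + scene_eff[:, None] + viewer_eff[None, :] + error

# Method-of-moments recovery of the three components (a rough stand-in for
# the maximum-likelihood cross-classified model used in the book).
grand = ratings.mean()
scene_means = ratings.mean(axis=1)
viewer_means = ratings.mean(axis=0)
resid = ratings - scene_means[:, None] - viewer_means[None, :] + grand

ms_scene = n_viewers * scene_means.var(ddof=1)
ms_viewer = n_scenes * viewer_means.var(ddof=1)
ms_error = (resid ** 2).sum() / ((n_scenes - 1) * (n_viewers - 1))

est_error = ms_error
est_scene = (ms_scene - ms_error) / n_viewers
est_viewer = (ms_viewer - ms_error) / n_scenes

total = est_scene + est_viewer + est_error
for name, v in [("scene", est_scene), ("viewer", est_viewer), ("error", est_error)]:
    print(f"{name:>6}: variance ~ {v:.2f}, share ~ {v / total:.0%}")
# With the table 3.4 inputs the shares come out near 50 / 12 / 38 percent.
```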

• We estimated additional models in order to reduce the unexplained variance in urban design quality ratings. These models included characteristics of viewers and scenes (a fitting sketch follows the equations below):

➢ actual rating = predicted rating + measurement error, exactly as before; and
➢ predicted rating = constant + viewer random effect + scene random effect + a*viewer variables + b*scene variables
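The extended model can be sketched in Python with statsmodels, treating the full dataset as a single group so that scene and viewer enter as crossed variance components. Everything below is a hedged illustration on synthetic data: the column names (urban_designer, long_sight_lines, and so on) and the effect sizes are invented stand-ins, not the study's actual variables or estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Synthetic long-format data: one row per (viewer, scene) rating.
n_scenes, n_viewers = 48, 10
scene = np.repeat(np.arange(n_scenes), n_viewers)
viewer = np.tile(np.arange(n_viewers), n_scenes)
long_sight_lines = rng.integers(0, 4, n_scenes)[scene]   # scene variable
urban_designer = rng.integers(0, 2, n_viewers)[viewer]   # viewer dummy
rating = (3.0
          - 0.3 * long_sight_lines                        # b * scene variable
          + rng.normal(0, 0.8, n_scenes)[scene]           # scene random effect
          + rng.normal(0, 0.4, n_viewers)[viewer]         # viewer random effect
          + rng.normal(0, 0.7, scene.size))               # measurement error

df = pd.DataFrame({"rating": rating, "scene": scene, "viewer": viewer,
                   "long_sight_lines": long_sight_lines,
                   "urban_designer": urban_designer})
df["one_group"] = 1   # single group: scene and viewer become crossed components

vc = {"scene": "0 + C(scene)", "viewer": "0 + C(viewer)"}
model = smf.mixedlm("rating ~ urban_designer + long_sight_lines",
                    data=df, groups="one_group",
                    re_formula="0",          # no extra per-group random intercept
                    vc_formula=vc)
result = model.fit()
print(result.summary())   # fixed effects plus scene and viewer variance estimates
```

If the viewer dummies carry no explanatory power, as the panel data suggested, their coefficients will be near zero while the scene variables absorb part of the scene variance.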

Results of Statistical Analysis

• We tested many combinations of viewer and scene variables. The only available variables
characterizing viewers—urban designer or not (1 or 0 dummy) and new urbanist or not (1
or 0 dummy)—proved to have no explanatory power in most analyses.
• Apparently, urban designers and others, and new urbanists (a subset of the designers) and
others, react similarly to street scenes. This is consistent with earlier visual assessment
literature revealing common environmental preferences across professions.
• The models that reduced the unexplained variance of scores to the greatest degree, and for
which all variables had the expected signs and were significant at the 0.10 level or beyond,
are presented in tables 3.5 through 3.13.
• Most of the independent variables in these tables are object counts. In all, thirty-seven
physical features proved significant in one or more models. Six features were significant
in two models: long sight lines, number of buildings with identifiers, proportion of first
floor facade with windows, proportion of active uses, proportion of street wall—same side,
and number of pieces of public art. Two features were significant in three models: number
of moving pedestrians and presence of outdoor dining. The models for each quality are
presented and discussed below.
Imageability

• The estimated model left the measurement error variance unchanged at 0.50, reduced the
unexplained viewer variance only slightly from 0.16 to 0.15, but reduced the unexplained
scene variance substantially, from 0.67 to 0.19. Altogether, 72 percent of the variation
across scenes, and 37 percent of the overall variation in imageability scores (including
variation across viewers and measurement errors), were explained by the significant scene
variables (table 3.5).
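The reported percentages can be checked directly from the variance figures, assuming they are computed as reductions in unexplained variance relative to the baseline partition in table 3.4; a quick check for imageability:

```python
# Imageability: unexplained variance components before (table 3.4) and after
# (table 3.5) adding the significant scene variables.
scene_before, scene_after = 0.67, 0.19
viewer_before, viewer_after = 0.16, 0.15
error_before, error_after = 0.50, 0.50

scene_explained = (scene_before - scene_after) / scene_before
total_before = scene_before + viewer_before + error_before
total_explained = ((scene_before - scene_after)
                   + (viewer_before - viewer_after)
                   + (error_before - error_after)) / total_before

print(f"scene variance explained: {scene_explained:.0%}")   # about 72%
print(f"total variance explained: {total_explained:.0%}")   # about 37%
```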
Enclosure

• The estimated model left the measurement error variance unchanged, reduced the
unexplained viewer variance slightly, from 0.10 to 0.09, and reduced the unexplained scene
variance from 0.83 to 0.23. This is the largest absolute reduction in unexplained scene
variance. With just five variables, the model for enclosure explains 72 percent of the scene
variance and 43 percent of the total variance (table 3.6). All of the significant variables
have high levels of inter-rater reliability, with ICCs above 0.59.
Human Scale
• For human scale, the estimated model left the measurement error variance unchanged,
reduced the unexplained viewer variance from 0.11 to 0.08, and reduced the unexplained
scene variance from 0.68 to 0.26. Seven variables explain 62 percent of the scene
variance and 35 percent of the total variance in human scale (table 3.7).
Transparency
• For transparency, the estimated model left the measurement error variance and unexplained viewer variance unchanged and reduced the unexplained scene variance from 0.77 to 0.29. Just three variables explain 62 percent of the scene variance and 32 percent of the total variance in transparency.

Complexity
• For complexity, the estimated model left the measurement error variance and viewer
variance unchanged while reducing unexplained scene variance from 0.67 to 0.19. Six
variables explain 73 percent of scene variance and 38 percent of total variance for
complexity (table 3.9).
Legibility
• For legibility, the estimated model left the measurement error variance and unexplained
viewer variance unchanged (table 3.11). It reduced the unexplained scene variance from
0.46 to 0.21, accounting for only 54 percent of the scene variance and 21 percent of the
total variance, the lowest percentages among the nine urban design qualities studied
(refer back to table 3.4).
Linkage
• For linkage, the estimated model left the measurement error variance and unexplained
viewer variance unchanged and reduced the unexplained scene variance from 0.51 to
0.20. The model for linkage, with five variables, explains 61 percent of scene variance
but only 21 percent of total variance (table 3.12).
Tidiness
• For tidiness, the estimated model left the measurement error variance and unexplained
viewer variance unchanged while reducing the unexplained scene variance from 0.46 to
0.14. The model for tidiness explained 70 percent of scene variance and 30 percent of
total variance with just four variables (table 3.13).

Final Steps
• To decide which urban design qualities would be operationalized in the field survey instrument, five criteria were established:
1. The urban design quality was rated reliably by the expert panel, judged against the benchmarks suggested by Landis and Koch (1977).
2. The total variance in ratings of the urban design quality was explained to at least a moderate degree.
3. The portion of total variance in ratings attributable to scenes was explained to a substantial degree.
4. All physical features related to ratings of a particular urban design quality were measured by the research team.
5. The urban design quality as judged by the expert panel had a statistically significant relationship to the measured physical features.
• According to our criteria, the qualities of imageability, enclosure, human scale, and transparency had great potential for operationalization.
• The qualities of legibility, linkage, and coherence had very little potential for
operationalization.
• A draft field survey instrument was prepared for the six remaining urban design
qualities:
1. Imageability
2. Enclosure
3. Human scale
4. Transparency
5. Complexity
6. Tidiness
• In the field, we measured all physical features that proved significant contributors to the remaining six urban design qualities.
• Field observations and video clips were compared for the following (a comparison sketch follows this list):
1. inter-rater reliability of individual measurements
2. inter-rater reliability of urban design quality scores
3. rank-order correlations of individual measurements
4. rank-order correlations of urban design quality scores
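As a hypothetical illustration of the comparison above, the snippet below computes a Spearman rank-order correlation and the average difference between paired lab (video-clip) and field counts of a single physical feature; the numbers are invented, not the study's measurements.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical paired counts of one feature (e.g., buildings with identifiers)
# for the same segments, measured once from video clips and once in the field.
lab   = np.array([2, 5, 1, 0, 3, 4, 2, 6])
field = np.array([3, 6, 1, 1, 3, 5, 2, 8])   # field counts tend to run higher

rho, p_value = spearmanr(lab, field)
print(f"rank-order correlation: {rho:.2f} (p = {p_value:.3f})")
print(f"mean field minus lab difference: {(field - lab).mean():.2f}")
```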
• Discrepancies were significant for the following qualities and contributing features:
1. imageability (number of buildings with identifiers and noise level)
2. enclosure (number of long sight lines and proportion of sky across the street)
3. human scale (number of long sight lines)
4. complexity (number of primary building colors and number of accent colors)
• The time between the filming of video clips and the field validation accounted
for some of the discrepancies.
• Count totals tended to be higher in the field than in the lab because the field survey protocol took observers farther down the block.
• Shadows, glare, and panning limited what could be seen in the clips, particularly on the
opposite side of the street and ahead in the distance.
• To deal with the discrepancies, we had four options:
1. ignore them, on the assumption that the relationships between urban design qualities and physical features would still hold
2. refine field measurement protocols to more closely approximate measurements based on clips
3. drop physical features that could not be measured consistently in the field and re-estimate the models without these features
4. drop urban design qualities that could not be estimated consistently
• For two features, field experience taught us to combine elements treated separately in our original analysis.
• The distinction between people walking, standing, and sitting struck us as artificial when
we walked the same stretch of street several times.
• The distinction among the different categories of street furniture and miscellaneous
street items was difficult to keep straight. Parking meters and trash cans were in one
category; hydrants and ATMs, in another. Tables, seating, and streetlights were in a
third, fourth, and fifth category.
• The combined variable “people” within a scene was substituted for “moving pedestrians” in the models of imageability and complexity.
• The combined variable “all street furniture and other street items” was substituted for “miscellaneous street items” in the model of human scale.
• The field survey instrument was revised to reflect these changes.
• The protocol for the classroom training was as follows:
Step 1. Review the field survey instrument. For each urban design quality, the component
physical features were reviewed and the measurement protocols were described.
Step 2. Review the gold standard measurements for the test clips. The clip was replayed as many
times as required to review all physical features on the scoring sheet.
Step 3. Make independent measurements for additional test clips. Each clip was then reviewed to reconcile differences in measurements.
• These segments were chosen to achieve as much variation as possible in the measured
qualities.
• ICC and Cronbach’s alpha values for physical features and for urban design quality scores are presented in table 3.15 (a computation sketch follows the table).
• Long sight lines. There was very little variation in the measurements of this feature. Only
four values are possible, and none of the test segments had more than two long sight
lines.
• Street wall. The poor results for street wall were influenced by two segments in
particular. On one segment, a five-story parking garage abutted the sidewalk, although
the ground floor was set back several feet.
• Sky ahead and sky across. Raters found these measurements difficult to estimate in the
field, although they were relatively easy to estimate from the video clips. Differences
between the raters stemmed from differences in the choice of exact location from which
to estimate proportion of sky and difficulty in judging the extent of their field of vision.
• Active uses. The poor results for this feature are related to the poor results for street wall, because raters made different judgments about which buildings fronted on the street.
• Number of buildings. The results for this feature were fair. Large differences for one
segment can be explained by differences in the interpretation of which buildings were
visible enough to be counted.

Table 3.15. Field Test Results
Quality / physical feature    Alpha    ICC
Imageability 0.927 0.863
1. Number of courtyards, plazas, and parks (both sides) 0.845 0.584
2. Number of major landscape features (both sides, beyond study area) n/a n/a
3. Proportion of historic building frontage (both sides, within study area) 0.750 0.864
4. Number of buildings with identifiers (both sides, within study area) 0.875 0.769
5. Number of buildings with nonrectangular shapes (both sides) 0.899 0.818
6. Presence of outdoor dining (your side, within study area) 0.887 0.809
7. Number of pedestrians (your side, within study area) 0.960 0.913
8. Noise level (both sides, within study area) 0.618 0.432
Enclosure 0.232 0.033
1. Number of long sight lines (both sides, beyond study area) –0.208 –0.238
2a. Proportion of street wall (your side, beyond study area) 0.517 0.373
2b. Proportion of street wall (opposite side, beyond study area) 0.863 0.725
3a. Proportion of sky (ahead, beyond study area) 0.330 0.157
3b. Proportion of sky (across, beyond study area) –0.568 –0.208
Human Scale 0.768 0.491
1. Number of long sight lines (both sides, beyond study area) –0.208 –0.238
2. Proportion of windows at street level (your side, within study area) 0.798 0.663
3. Proportion of active uses (your side, within study area) 0.422 0.239
4. Average building heights (your side, within study area) 0.956 0.912
5. Number of small planters (your side, within study area) 0.786 0.622
6. Number of miscellaneous street items (your side, within study area) 0.547 0.422
Transparency 0.817 0.708
1. Proportion of windows at street level (your side, within study area) 0.798 0.663
2. Proportion of street wall (your side, beyond study area) 0.517 0.373
3. Proportion of active uses (your side, within study area) 0.422 0.239
Complexity 0.868 0.780
1. Number of buildings (both sides, beyond study area) 0.592 0.388
2a. Number of primary building colors (both sides, beyond study area) 0.279 0.188
2b. Number of accent colors (both sides, beyond study area) 0.551 0.331
3. Presence of outdoor dining (your side, within study area) 0.887 0.809
4. Number of pieces of public art (both sides, within study area) 0.677 0.528
5. Number of pedestrians (your side, within study area) 0.836 0.700
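The alpha and ICC values in table 3.15 summarize agreement among trained observers. As a rough guide to how such statistics are obtained, the sketch below computes Cronbach's alpha and a one-way ICC from a segments-by-raters matrix of synthetic measurements; the text does not state which ICC form was used, so the one-way ICC(1) here is an assumption for illustration.

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: segments x raters matrix; raters are treated as the 'items'."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)        # variance of each rater
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def icc_oneway(scores):
    """One-way random-effects ICC(1); other ICC forms exist and may differ."""
    n, k = scores.shape
    row_means = scores.mean(axis=1)
    ms_between = k * row_means.var(ddof=1)
    ms_within = ((scores - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical example: 6 street segments each rated by 4 trained observers.
rng = np.random.default_rng(2)
true_score = rng.normal(3, 1, 6)
ratings = true_score[:, None] + rng.normal(0, 0.5, (6, 4))

print(f"Cronbach's alpha: {cronbach_alpha(ratings):.3f}")
print(f"ICC(1):           {icc_oneway(ratings):.3f}")
```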
