
A RULE-BASED SYSTEM FOR DOCUMENT IMAGE SEGMENTATION

James L. Fisher, Stuart C. Hinds,¹ and Donald P. D'Amato

The MITRE Corporation / Civil Systems Division
Systems Engineering and Applied Technology
7525 Colshire Drive
McLean, Virginia 22102-3481, USA

ABSTRACT

There is a pressing need for the capability of automatically converting large volumes of paper and microfilm documents, containing an unconstrained mixture of text and graphics, to computer-searchable forms while retaining the non-text portions in image form. In this paper, we present a rule-based system for automatically segmenting a document image into regions of text and non-text. The initial stages of the system perform image enhancement functions such as adaptive thresholding, morphological processing, and skew detection and correction. The image segmentation process consists of smearing the original image via the run length smoothing algorithm, calculating the connected components' locations and statistics, and filtering (segmenting) the image based on these statistics. The text regions can be converted (via an optical character reader) to a computer-searchable form, and the non-text regions can be extracted and preserved. The rule-based structure allows easy fine tuning of the algorithmic steps to produce robust rules, to incorporate additional tools (as they become available), and to handle special segmentation needs.

1. INTRODUCTION

There is a growing trend in the federal government towards the use of electronic document input, storage, and dissemination systems [1]. Towards this end, many government agencies desire to convert their large volumes of paper and microfilm document databases to an electronically searchable, and efficiently stored, digital form. Their documents usually contain a mixture of text and graphics (line drawings or continuous tone or halftone regions) which must be separated or segmented for subsequent processing and use. After segmentation, text regions may be sent to an optical character recognition (OCR) system for conversion to a computer-searchable form (ASCII), and the non-text regions may be retained via a lossless compression algorithm (e.g., CCITT Groups 3 and 4). OCR system performance (in terms of the percentage of characters correctly recognized and recognition speed) is greatly improved if only the properly segmented and oriented text regions are submitted to the OCR system. The motivation for automatic segmentation is that manual identification and manipulation of text and non-text regions is prohibitive by virtue of labor costs and data volumes.

In this paper, we describe a rule-based system that we have developed for automatic document image segmentation. Because of the wide variability in document styles, sizes, content, etc., we do not assume any a priori knowledge of document structure. The functional steps of this system include: document image enhancement (and, when necessary, binarization), document image segmentation, text region OCR conversion, non-text region extraction and compression, and document storage. In part, we have modified, integrated, and automated portions of selected published image segmentation algorithms such that our resulting system is adaptive in that necessary parameters are dynamically determined for each document image under consideration. This system has been tested against numerous document formats and examples are presented at the end of this document.

2. BACKGROUND

As discussed by Srihari [2], document segmentation methods can be categorized as being either top-down, in which large regions are recursively segmented into subregions (e.g., the run length smoothing algorithm or RLSA [3], and projection profile cuts [4, 5]), or bottom-up, in which pixels are initially grouped together as connected components and progressively merged into larger regions (e.g., the neighborhood line density method [4, 6], and connected components analysis [7-10]). Although top-down approaches are generally faster, bottom-up approaches are more resistant to noise and skew within a document.

Experimentation with both projection profile cuts and the RLSA has shown that their performance is severely degraded by document skew and the presence of noise. It has also been noted [11] that the RLSA is more useful in obtaining small blocks, such as text lines, while projection profile cuts are better for obtaining larger (e.g., paragraph-sized) blocks.

A block diagram depicting the processing stages of the document image segmentation algorithm is shown in Figure 1. The functions represented in the upper row of Figure 1 comprise the document image enhancement process, and those functions in the lower row pertain to document image segmentation. Each stage is discussed in the following sections.

3. DOCUMENT IMAGE CAPTURE

Our document image database consists of both grey-scale and binary images. In our laboratory, documents are captured using a (variable) high-resolution grey-scale digitizing camera (which is also capable of color capture) or a binary scanner with auto-feed. Similarly, in order to meet our governmental clients' needs, we must be able to process both binary and grey-scale document images. Some government agencies have already contracted service organizations to digitize (and binarize) documents and to store the resulting bit-mapped images for future use (e.g., OCR conversion), and thus we must be able to process these archived binary images. On the other hand, some government agencies are just now beginning the digitization process, and therefore grey-scale images would be available if part or all of this segmentation system were implemented at the time of capture. We do, however, make the assumption that we do not have control of the digitization process, and therefore we must accept the given document images.

¹ Current address: University of California at San Diego, Neuropsychology Research Lab., Children's Hospital, 8001 Frost St., San Diego, CA 92123.

CH2898-5/90/0000/0567$01.00 © 1990 IEEE

We digitize the documents at the de facto standard spatial resolution of 300 pixels per inch (ppi), which is also the resolution of
the binary document images we have received from various agencies. The first two image enhancement operations (thresholding and morphological processing; see Section 4) are performed on the full (300 ppi) resolution images. However, all subsequent processing is performed on a binarized pixel-averaged reduced resolution image of 75 ppi. After the regions have been classified, the coordinates are then mapped back onto the (possibly image-enhanced) full resolution image and the appropriate regions are extracted.

4. DOCUMENT IMAGE ENHANCEMENT

Many factors affect the quality of captured binary and grey-scale document images, including the quality of the original physical image and the method of digitization. We therefore want to enhance the document image (i.e., to correct for these undesirable conditions and artifacts) to improve OCR performance and to increase compaction and storage efficiency. Additionally, subsequent processing steps assume a "clean" binary image, especially one free of noise and not skewed. In our document segmentation system, we have implemented three image enhancement techniques: adaptive thresholding for correcting nonuniform illumination, morphological processing for removing salt-and-pepper noise, and a modified Hough transform for skew detection and correction.

When performing grey-scale capture, there are often spatial variations in illumination or original document quality, and therefore we binarize the document by applying an adaptive thresholding algorithm designed specifically for text document images [12].

Binarized images often contain large amounts of salt-and-pepper noise (both random and patterned). Each noise spot typically consists of a small number (e.g., 1 to 4) of errant pixels. Our research has shown that such noise adversely affects image compression efficiency (e.g., a run-length based compression algorithm generates a new run-length value for each pixel value transition along the scan axis), and it degrades OCR performance (in terms of both correct recognition percentage and processing speed) even when the noise is not readily perceived by the human eye [13]. Small noise areas are easily and quickly removed through a series of morphological processing operations [13-15]. A closing operation removes salt noise and a subsequent opening removes pepper noise; these processes also smooth character edges.

Another scanning artifact of concern is document skew. Experimentation has shown that OCR recognition performance degrades significantly with even small angles of skew (e.g., less than 3° of rotation). The performance degradation takes the form of misrecognized characters and line tracking errors, which render the final ASCII equivalent unintelligible. We thus developed an automatic technique for skew detection and correction which is based on a modified Hough transform. The Hough transform has gained popular use in detecting straight lines of any orientation [16]. The underlying principle is to map the original image from Cartesian (x,y) space into (ρ,θ) space, and detect the θ column containing the absolute maximum value (or, alternatively, the maximum number of transitions); that value of θ is the angle of document rotation (skew). The Hough transform has been applied to document images directly [17]. However, we have developed a more computationally efficient approach by applying a modified Hough transform to a "burst" image (a reduced data set of the original image). This skew detection algorithm is detailed in [18]. After the skew angle is determined (by locating the θ column containing the maximum accum[ρ,θ] value), the original image is then rotated −θ degrees using bilinear interpolation, and the deskewed document image is re-binarized via the appropriate threshold algorithm. (When a binary image is deskewed via the bilinear rotation algorithm, the result is a grey-scale image; however, to rebinarize we need to apply only a global thresholding algorithm.) This detection and correction method has been shown to work quite well for a wide variety of document types and skew angles [18].

The accumulator array accum[ρ,θ] is also used to dynamically determine the interline spacing V, which is different for each document image. The interline spacing is obtained from the 1-D Fourier transform of the data array accum[*,θ] (θ fixed). This value is necessary for proper image smearing (Section 5).

This completes the document image enhancement stage, and we are now ready to begin the segmentation process.

5. DOCUMENT IMAGE SEGMENTATION

In segmenting a document image into regions of text and non-text, we dynamically divide the document image into "regions of interest," calculate a variety of statistics describing each region, and classify each region based on these statistics.

To segment the document image into regions for further processing, we begin with the run length smoothing algorithm (RLSA) [3], which consists of four steps: a horizontal smoothing (smearing), a vertical smoothing, a logical AND operation, and an additional horizontal smoothing. In the first horizontal smoothing operation, if the distance between two adjacent black pixels (on the same horizontal scan line) is less than the threshold H, then the two pixels are "joined" by changing to black all of the intervening white pixels, and the resulting image is stored. The same original image is then smoothed in the vertical direction, joining together vertically adjacent black pixels whose distance is less than the threshold V. This vertically smoothed image is then logically ANDed with the horizontally smoothed image, and the resulting image is smoothed horizontally one more time, again using the threshold H, to produce what we term the RLSA image.

Different values of H and V yield different types of RLSA images. Very small H values simply "close" or "color in" individual characters (and we term this processing at the character level [7]). Slightly larger values of H smooth together individual characters in a word (processing at the word level), but are not large enough to bridge the interword space, and even larger H values smooth together all characters in a sentence (processing at the sentence level). Too large a value of H often causes sentences to be joined with non-text regions, or adjacent columns to be connected. In addition, horizontally smoothing a skewed document can result in merging vertically adjacent text lines. Similar comments hold for the magnitude of V. Therefore, we must carefully (and dynamically) select the values of the thresholding parameters H and V, as we now explain.

The vertical smoothing parameter V is chosen to be equal to the interline spacing determined in the Hough transform computations. We achieve processing at the word level by computing a histogram of the horizontal distances separating consecutive white-to-black transitions and setting H equal to the most populated distance. Selecting these parameters in this data-dependent manner has yielded very good results, with minimal merging of words with graphics regions.

We then establish boundaries around, and calculate statistics of, each of these smoothed regions using connected component analysis [7, 19]. A connected component is a set of black pixels such that there exists an arbitrary path of 8-connected black pixels between any two black pixels in the set. Connected components are detected in the RLSA image via the row or run tracking method [7, 19] (as opposed to the edge tracking method [19]). From the RLSA image we initially compute the aspect ratios (width-to-height ratios) and coordinates of the bounding convex (rectangular) hulls. Using these coordinates, the unsmoothed binary image (i.e.,

the image input to the RLSA process) is analyzed to obtain additional connected component statistical and topological information such as black pixel densities, Euler numbers (in the unsmoothed image, the number of individual connected components minus the number of holes), perimeter lengths, perimeter-to-width ratios, perimeter-squared-to-area ratios, etc. After calculating these connected component values, the document image can be segmented.

Based upon the assumption that the document is composed mostly of text, we identify the average word height Vw as being the minimum of the most populous connected component height and 2R, i.e., a ceiling of two inches. Although the need for the latter limit is rarely encountered, it is present to remove the effects of large frames which surround entire columns or pages of text.

Connected components are classified as being text or non-text on the basis of characteristic connected component values. (This is discussed in detail in Section 6.) An image (or, in the case of multi-column documents, multiple images) composed of only text regions is formed and passed to the OCR server for conversion to ASCII. A composite image of all text regions is subtracted from the binarized deskewed document image to form a temporary image from which the non-text regions are extracted and compacted via a lossless image compression algorithm for efficient storage. This approach is taken so as to properly remove those text regions falling within the circumscribing rectangles of non-text regions.

The resulting computer-searchable text and compressed non-text regions can then be stored. To achieve a more efficiently stored, searchable, and structured format, we are also researching methods for automatically determining the document's logical structure.

6. RULE STRUCTURE

In Section 5, we outlined the image processing steps necessary to segment a document image into regions of text and non-text. In this section, we describe the control and rule structures for realizing these steps. This segmentation system is executed on a Sun 4/280 system running the UNIX operating system (SunOS 4.0).

The control structure is implemented in the C language to allow the reading and interpretation of image file headers, and to control the order and types of image enhancement rules executed. The control structure invokes child processes (system calls) to execute the proper rules, i.e., the image processing and decision processes.

The rules are implemented using the UNIX Makefile syntax. The document image being processed is identified by a root name, and each image and data file produced by each function block in Figure 1 has a unique file name suffix corresponding to the function just completed. These suffixes are made known to the Makefile utility via the Makefile .SUFFIXES: special function, and the rules to execute the next image processing command (i.e., to produce the next image or data file) are expressed by the Makefile suffix rule syntax, i.e.,

.Ds.Ts:
        rule

where Ts is the suffix of the target file (i.e., the image file produced), Ds is the suffix of the dependency file (i.e., the input image file), and rule is the image processing function performed to produce file.Ts from file.Ds.

Separating the control structure and the rules allows testing, refining, adding, and executing individual rules without needing to recompile the control structure program.

In the explanation of the segmentation rules, we use the following process model and syntax. The connected component (CC) data can appear in one of three streams: the TEXT stream, the NONTEXT stream, or the UNKNOWN stream. Initially, each CC datum begins in the UNKNOWN stream. A specific aspect of the CC datum (e.g., black pixel density, aspect ratio, height, etc.) is compared with dynamic and static parameters in selected segmentation tests (detailed below), and if the test is successful the CC is filtered into either the TEXT or NONTEXT stream (which we denote as "CC→STREAM"); otherwise the CC remains in the UNKNOWN stream. (The input to each segmentation test is always the UNKNOWN stream.)

The rules are listed in Table 1, and the parameters are listed in Table 2. We begin by filtering text regions with rule 1. This is desirable as the first test because (assuming that the majority of the document page is text) the number of CCs remaining in the UNKNOWN stream after this rule, and therefore tested in subsequent rules, is greatly reduced, thereby minimizing execution time.

Next, a series of rules categorizes many non-text regions. Tall vertical line strokes are filtered by rule 2A1, long horizontal line strokes are filtered by rule 2A2, additional tall non-text regions are filtered by rule 2B, and additional text regions are captured by rule 2C.

Short words such as "an", "the", "was", etc., are filtered by rule 3. Rules 4A and 4B filter non-text blotches, non-text blocks with very high densities (e.g., detailed graphics), and non-text blocks with very low densities (e.g., frames surrounding text), and rule 5 filters most longer larger-font words typically found in titles and headings.

Only after removing non-text lines, blotches, and low density non-text blocks can we filter additional text blocks with the generalized rule 6.

The above segmentation rules filter the majority of the CCs using the easily calculated CC attributes aspect ratio, height, and black pixel density. To filter the remaining CCs, we must use more sophisticated CC attribute measures, namely the perimeter-to-width ratio and the perimeter-squared-to-area ratio. Due to computational complexity, we wait until this point in the segmentation process (i.e., when the UNKNOWN stream population is smaller) to calculate perimeter values. (The perimeter values, i.e., the numbers of pixel edges, are calculated from the unsmoothed or pre-RLSA image.) Rules 7A and 7B filter most of the remaining "difficult" text. The remaining two segmentation rules are special-case text filters. Rule 7C captures isolated larger-font characters in a heading or title; rule 7D is rarely used, but it filters very long words or sets of connected words. Any CCs remaining in the UNKNOWN stream are transferred to the NONTEXT stream.

7. RESULTS

Three results of our document image segmentation system are shown in Figure 2. In each example, the original document appears on the left, and the CCs of the segmented document appear on the right, with text regions represented by the lighter grey rectangles and non-text regions represented by the darker grey rectangles.

In the U.S. patent document of Figure 2a, the horizontal line below the header and the high-density line art in the lower half of the page were correctly identified as non-text regions. Most of the characters in the graphics region smoothed to their adjacent lines in the RLSA process, and thus were considered to be part of the graphics region. This was desirable since the particular application required all stray markings within graphics to remain as part of the graphics region. Similarly, in the patent document in Figure 2b, many of the chemical formulas were smoothed with the chemical ring drawings. Note that Figure 2b also contains much patterned

salt and pepper noise, and that the text line numbers (in the thin center column) were correctly recognized as text.

In the technical journal page of Figure 2c, the majority of the text regions were correctly classified, as were all of the non-text regions. Only two words in the main text body were misclassified; we currently use this as an indication of damaged text which should undergo further image enhancement before being rejoined with the other text blocks. The large dark grey block on the left half of the segmented page represents the large frame surrounding the text, pictures, and schematic.

The Chinese document of Figure 2d illustrates that the segmentation process is independent of language (font). Document skew is detected and corrected, and nearly all text and non-text regions are correctly identified. The large dark grey rectangle is the connected component surrounding the text table border. This document demonstrates skew correction, and illustrates the need for subtracting text fields before extracting non-text fields.

8. SUMMARY AND CONCLUSIONS

The segmentation rules are adaptive in that most parameters are dynamically determined for each document. The remaining static parameters (e.g., density, aspect ratio, perimeter-to-width ratio, and perimeter-squared-to-area ratio thresholds) are dimensionless and therefore are invariant to type and style. However, perimeter filter parameters may need to be adjusted when processing at different document resolutions. This may be necessary since pixel-averaging resolution reduction operations affect character perimeter values.

The first 5 rules are very specific (selective) rules and are therefore order independent. However, because rules 6 and 7 are very general rules, they must be executed only after the first five rules.

In conclusion, the automatic document image segmentation method just described is capable of handling a wide variety of documents, and is amenable to faster implementation through algorithm optimization and dedicated hardware. The rule-based structure allows for customization of the segmentation rules to fit specific client needs.

ACKNOWLEDGEMENTS

This work was made possible by a MITRE-Sponsored Research grant. We thank the U.S. Patent and Trademark Office for permission to use Figures 2a and 2b, and we thank the Institute of Electrical and Electronics Engineers, Inc. (IEEE) for permission to use Figure 2c. We also gratefully acknowledge Mr. Bartley C. Conrath for his patience in refining the rule structure and parameters.

9. REFERENCES

[1] U.S. Congress, Office of Technology Assessment, Informing the Nation: Federal Information Dissemination in an Electronic Age. Washington, DC: U.S. Government Printing Office, OTA-CIT-396, Oct. 1988. LC 88-600567.

[2] S.N. Srihari, "Document image understanding," Proc. IEEE Comput. Soc. Fall Joint Computer Conf., Dallas, TX, Nov. 2-6, 1986.

[3] K.Y. Wong, R.G. Casey, and F.M. Wahl, "Document analysis system," IBM J. Res. Develop., vol. 26, no. 6, Nov. 1982, pp. 647-656.

[4] O. Iwaki, H. Kida, and H. Arakawa, "A segmentation method based on office document hierarchical structure," Proc. IEEE Int. Conf. Syst., Man, Cybern., Alexandria, VA, Oct. 20-23, 1987, pp. 759-763.

[5] G. Nagy, S.C. Seth, and S.D. Stoddard, "Document analysis with an expert system," Proc. ACM Conf. Document Processing Systems, Santa Fe, NM, Dec. 5-9, 1988, pp. 169-176.

[6] K. Kubota, O. Iwaki, and H. Arakawa, "Document understanding system," Proc. 7th Int. Conf. Pattern Recognition, Montreal, 1984, pp. 612-614.

[7] L.A. Fletcher and R. Kasturi, "A robust algorithm for text string separation from mixed text/graphics images," IEEE Trans. Pattern Anal. Machine Intell., vol. 10, no. 6, Nov. 1988, pp. 910-918.

[8] J.P. Bixler, "Tracking text in mixed-mode documents," Proc. ACM Conf. Document Processing Systems, Santa Fe, NM, Dec. 5-9, 1988, pp. 177-185.

[9] H. Makino, "Representation and segmentation of document images," Proc. IEEE Comput. Soc. Conf. Pattern Recognition and Image Processing, 1983, pp. 291-296.

[10] S.N. Srihari, C-H. Wang, P.W. Palumbo, and J.J. Hull, "Recognizing address blocks on mail pieces: specialized tools and problem-solving architecture," AI Mag., vol. 8, no. 4, Winter 1987, pp. 25-40.

[11] D. Wang and S.N. Srihari, "Classification of newspaper image blocks using texture analysis," Dept. Comput. Sci., SUNY Buffalo, NY, Tech. Rep., Oct. 1988.

[12] M.A. Forrester, M.E. Glenn, and A.D. Smith, "Evaluation of potential approaches to improve digitized image quality at the Patent and Trademark Office," MITRE Corp., McLean, VA, Working Paper WP-87W00277, July 1987.

[13] V.P. Concepcion, M.P. Grzech, and D.P. D'Amato, "Morphological processing of patent document images," MITRE Corp., McLean, VA, Tech. Rep. MTR-89W00092, Sept. 1989.

[14] J. Serra, Image Analysis and Mathematical Morphology. New York: Academic Press, 1983.

[15] C.R. Giardina and E.R. Dougherty, Morphological Methods in Image and Signal Processing. Englewood Cliffs, NJ: Prentice Hall, 1988. ISBN 0-13-601295-7.

[16] R.C. Gonzalez and P. Wintz, Digital Image Processing, 2nd Ed. Reading, MA: Addison-Wesley, 1987. ISBN 0-201-11026-1.

[17] A. Rastogi and S.N. Srihari, "Recognizing textual blocks in document images using the Hough transform," Dept. Comput. Sci., SUNY Buffalo, NY, Tech. Rep. 86-01, Jan. 1986.

[18] S.C. Hinds, J.L. Fisher, and D.P. D'Amato, "A document skew detection method using run-length encoding and the Hough transform," Proc. 10th Int. Conf. Pattern Recognition, Atlantic City, NJ, June 1990.

[19] C. Ronse and P.A. Devijver, Connected Components in Binary Images: The Detection Problem. New York: John Wiley and Sons, 1984. ISBN 0-471-90456-2.
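For illustration, the four RLSA steps of Section 5 can be sketched in C as follows. This fragment is a reconstruction for exposition only, not the system's implementation; the row-major 0/1 bitmap layout, the function and parameter names, and the definition of "gap" as the count of intervening white pixels are our assumptions.

```c
#include <string.h>

/* One horizontal run-length smoothing pass: a run of white pixels shorter
   than the threshold, bounded by black pixels on the same scan line, is
   changed to black. */
static void smear_rows(const unsigned char *in, unsigned char *out,
                       int w, int h, int thr)
{
    memcpy(out, in, (size_t)w * (size_t)h);
    for (int y = 0; y < h; y++) {
        int last = -1;                          /* x of previous black pixel */
        for (int x = 0; x < w; x++)
            if (in[y * w + x]) {
                if (last >= 0 && x - last - 1 < thr)   /* short white gap */
                    for (int i = last + 1; i < x; i++)
                        out[y * w + i] = 1;            /* fill it in */
                last = x;
            }
    }
}

/* The same smoothing applied down each column. */
static void smear_cols(const unsigned char *in, unsigned char *out,
                       int w, int h, int thr)
{
    memcpy(out, in, (size_t)w * (size_t)h);
    for (int x = 0; x < w; x++) {
        int last = -1;                          /* y of previous black pixel */
        for (int y = 0; y < h; y++)
            if (in[y * w + x]) {
                if (last >= 0 && y - last - 1 < thr)
                    for (int i = last + 1; i < y; i++)
                        out[i * w + x] = 1;
                last = y;
            }
    }
}

/* Full RLSA as described by Wong et al. [3]: horizontal smear, vertical
   smear of the SAME original image, logical AND, second horizontal smear. */
void rlsa(const unsigned char *in, unsigned char *hbuf,
          unsigned char *vbuf, unsigned char *out,
          int w, int h, int h_thr, int v_thr)
{
    smear_rows(in, hbuf, w, h, h_thr);   /* step 1: horizontal smoothing */
    smear_cols(in, vbuf, w, h, v_thr);   /* step 2: vertical smoothing   */
    for (int i = 0; i < w * h; i++)      /* step 3: logical AND          */
        vbuf[i] = (unsigned char)(hbuf[i] & vbuf[i]);
    smear_rows(vbuf, out, w, h, h_thr);  /* step 4: final horizontal smear */
}
```

In the real system the thresholds passed as h_thr and v_thr would be the dynamically chosen H (most populated white-to-black transition distance) and V (Hough-derived interline spacing).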

Figure 1. Block diagram of the document image segmentation system showing image enhancement
(upper row) and image segmentation (lower row) operations.

Table 1: Segmentation Rules

1.   if ((H_T,min < height < H_T,max) AND (A_T,min < aspect ratio < A_T,max)) CC→TEXT
2A1. if (aspect ratio < A_L,min) CC→NONTEXT
2A2. if ((A_L,max ≤ aspect ratio) AND (D_L < density)) CC→NONTEXT
2B.  if (NOT(H_L1 < height < H_L3)) CC→NONTEXT
2C.  if (H_L1 < height < H_L2) CC→TEXT
3.   if ((A_S,min ≤ aspect ratio < A_S,max) AND (H_S,min < height < H_S,max) AND (D_S < density)) CC→TEXT
4A.  if ((aspect ratio ≤ A_B) AND (D_B1 < density)) CC→NONTEXT
4B.  if (NOT(D_B2 ≤ density < D_B3)) CC→NONTEXT
5.   if ((H_H,min < height < H_H,max) AND (D_H,min ≤ density < D_H,max) AND (A_H < aspect ratio)) CC→TEXT
6.   if ((H_T2,min < height < H_T2,max) AND (D_T2 ≤ density)) CC→TEXT
7A.  if ((P_AD1 ≤ perimeter²/area) AND (P_WD1,min ≤ perimeter/width < P_WD1,max) AND (density < D_D1)) CC→TEXT
7B.  if (…) CC→TEXT
7C.  if (…) CC→TEXT
7D.  if (…) CC→TEXT

Table 2: Segmentation Rule Parameters

A_B = 0.65            A_D2,min = 3.4
A_H = 1.77            A_D2,max = 6.3
A_L,min = 0.4         A_L,max = 25.8
A_S,min = 1.0         A_S,max = 3.4
A_T,min = 0.63        A_T,max = 25.0
D_B1 = 0.57           D_B2 = 0.16
D_B3 = 0.97           D_D1 = 0.88
D_D2,min = 0.325      D_D2,max = 0.544
D_H,min = 0.315       D_H,max = 0.82
D_L = 0.38            D_S = 0.33
D_T2 = 0.425
H_H,min = int(1.67 Vw)       H_H,max = ∞
H_L1 = int(0.46 Vw)          H_L2 = int(3.0 Vw)
H_L3 = int(4.25 Vw)          H_T3,min = int(1.8 Vw)
H_S,min = int(0.75 H_T,min)  H_S,max = 2 H_T,max
H_T,min = int(0.875 Vw)      H_T,max = int(1.1 Vw)
H_T2,min = int(0.72 Vw)      H_T2,max = int(1.5 Vw)
P_AD1,max = 40.0      P_AD2,min = 9.8
P_AD2,max = 50.0      P_AD3,min = 1.75
P_AD4,min = 9.5       P_AD4,max = 35.0
P_WD1,min = 2.31      P_WD1,max = 6.68
P_WD3,min = 4.0       P_WD4,max = 2.36
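As a concrete reading of Table 2, most height thresholds are integer multiples of the dynamically measured average word height Vw; for example, the rule-1 band uses H_T,min = int(0.875 Vw) and H_T,max = int(1.1 Vw). A minimal sketch (the function name is ours):

```c
/* Rule-1 height band derived from the average word height Vw (pixels),
   per Table 2: H_T,min = int(0.875 Vw), H_T,max = int(1.1 Vw). */
void rule1_height_band(int vw, int *ht_min, int *ht_max)
{
    *ht_min = (int)(0.875 * vw);   /* shortest height accepted as a word */
    *ht_max = (int)(1.1 * vw);     /* tallest height accepted as a word  */
}
```

For an illustrative average word height of Vw = 20 pixels, this would give the band (17, 22), so rule 1 would pass only components between 17 and 22 pixels tall on to its aspect-ratio test.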

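The three-stream process model of Section 6 can be sketched as follows. Only rules 1 and 2A1 are shown, and the threshold constants here are illustrative stand-ins chosen for the example, not the Table 2 parameters (which the system derives per document):

```c
#include <stddef.h>

typedef enum { UNKNOWN, TEXT, NONTEXT } stream_t;

typedef struct {
    double height;        /* bounding-rectangle height (pixels)     */
    double aspect_ratio;  /* bounding-rectangle width / height      */
    double density;       /* black pixels / bounding-rectangle area */
    stream_t stream;      /* every CC starts in the UNKNOWN stream  */
} cc_t;

/* Illustrative thresholds only; the real H-parameters are computed
   dynamically from the average word height Vw (see Table 2). */
#define H_MIN   15.0
#define H_MAX   25.0
#define A_MIN    0.6
#define A_MAX   26.0
#define A_LINE   0.4

void classify(cc_t *cc, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (cc[i].stream != UNKNOWN)
            continue;                      /* tests read only the UNKNOWN stream */
        /* Rule 1: word-sized, word-shaped components are text. */
        if (cc[i].height > H_MIN && cc[i].height < H_MAX &&
            cc[i].aspect_ratio > A_MIN && cc[i].aspect_ratio < A_MAX) {
            cc[i].stream = TEXT;           /* CC -> TEXT */
            continue;
        }
        /* Rule 2A1: very thin, tall components are vertical line strokes. */
        if (cc[i].aspect_ratio < A_LINE)
            cc[i].stream = NONTEXT;        /* CC -> NONTEXT */
    }
    /* Anything never claimed by a rule defaults to the NONTEXT stream. */
    for (size_t i = 0; i < n; i++)
        if (cc[i].stream == UNKNOWN)
            cc[i].stream = NONTEXT;
}
```

The ordering mirrors the paper's design: the cheap, selective text filter runs first so that later, more expensive tests see a smaller UNKNOWN population.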
