Profile Front Minimization
Printed in Great Britain. © 1989 Pergamon Press plc
TECHNICAL NOTE
Abstract - An updated version of the Profile Front Minimization (PFM) nodal renumbering algorithm
is presented. The new version is an order of magnitude faster, and the resulting numbering produces
improved profile and wavefront characteristics. The algorithm is compared to other commonly used
algorithms. The comparisons are based on a set of standard test problems proposed by Everstine. The
results indicate that PFM is competitive and a suitable alternative to these other methods. In addition,
the new algorithm is efficient in terms of storage requirements and code size, making it easy to implement
in existing finite element codes.
INTRODUCTION

Finite element analyses require the solution of large, sparsely populated systems of linear equations. The form of these equations can generally be expressed as:

Ku = r.

Direct solution techniques based on Gaussian elimination are the most common approach to solving these equations. Due to the sparse nature of the coefficient matrix, K, a great deal of computer storage and time can be saved in the solution process if the equation solver used can exploit various properties of the coefficient matrix such as bandwidth, profile or wavefront. In addition, the equations can be ordered in such a way that the bandwidth, profile or wavefront is minimal, thus further reducing the cost of solving the system of equations.

The ordering of the unknowns in the coefficient matrix is directly related to the numbering of the nodes in a finite element mesh. Coupling two degrees-of-freedom gives rise to off-diagonal terms. A reduction in the storage requirements for the coefficient matrix can be achieved by choosing a node ordering which clusters the non-zero terms near the diagonal of the matrix. The node numbering which achieves this is difficult to establish, particularly in large, complicated problems. For this reason, many node renumbering algorithms have been developed [1-8] and integrated into finite element programs. These algorithms usually find a good node numbering sequence which results in faster solution times and reduced storage requirements.

The two most common and efficient direct solution methods currently used by most finite element programs are the active column and frontal solution methods. Banded solvers can easily exploit the features of vector computers; however, active column and frontal solvers can also utilize the parallel or vectorization features. In addition, active column and front methods require fewer storage locations than banded solvers. Therefore, we will concentrate on the wavefront and profile characteristics of the coefficient matrix rather than the bandwidth.

It has been shown that the numerical effort for solution is almost identical for both the frontal and active column methods [9]. The profile indicates the storage requirements for active column schemes, and the wavefront shows the storage needed for frontal methods. Computer time required for complete solution is also related to these matrix properties. For active column solvers, solution time is O(NB²), and for frontal solvers the solution time is O(NW²), where N is the number of equations, B is the rms bandwidth and W is the rms wavefront. Clearly, the effectiveness of the algorithms can be directly compared by evaluating these matrix properties.

Several equation reordering algorithms have been presented and compared in the literature [1-6, 10-13]. Empirical evaluation of the Profile Front Minimization algorithm indicates that this implementation yields profile and wavefront results comparable to other algorithms. Therefore, the purpose of this paper is twofold. First, the Profile Front Minimization algorithm will be discussed. Improvements have made the algorithm faster, and a smaller profile and wavefront result. A detailed description of the new version will be provided, focusing on the storage scheme and heuristics used. Second, empirical results will be presented which allow for direct comparison of the Profile Front Minimization algorithm to some of the most successful algorithms. A brief description of general renumbering methods is discussed next.

RENUMBERING METHODOLOGIES

Marro [7] identified two distinct renumbering methodologies. These are what has been termed the classical and front minimization methods. Both the Cuthill-McKee and Gibbs-Poole-Stockmeyer algorithms are considered classical methods. Front minimization methods are more numerous. The Gibbs-King, Levy, Front Increase Minimization and Profile Front Minimization strategies fall into this category.

Since many methodologies use concepts from graph theory, a brief discussion of graph theory concepts is given. For a more comprehensive description see Gibbs et al. [2], George and Liu [14] and Marro [7]. Graph theory provides a convenient way in which to organize and manipulate the large number of equations and organizational structure which results from the connectivity of the degrees-of-freedom. Each degree-of-freedom equation corresponds to a single node or vertex on the graph. An edge represents the coupling of vertices; in matrix terms this is an off-diagonal term in the matrix. Vertices are considered adjacent if they are coupled. The degree of a node is the number of adjacent nodes. A level structure of a graph is a partition of the nodes such that all adjacent nodes are within one level.
This provides a powerful means of representing the complex connectivity of nodes. Both classical and front minimization renumbering methods usually form level structures and use these to determine a starting node and in later node ordering.

Renumbering algorithms initially developed concentrated on reducing the bandwidth of coefficient matrices. The first generally successful and widely used method was the Cuthill-McKee (CM) algorithm. The objective here is to reduce the bandwidth of the coefficient matrix. It was later shown by George and Liu [14] and proved by Liu and Sherman [13] that by reversing the ordering, the profile could be further reduced without increasing the bandwidth. This implementation is referred to as the Reverse-Cuthill-McKee method (RCM). The RCM algorithm [1, 10] follows.

1. Calculate the nodal degrees of each node and generate level structures for each node of low degree.
2. For each level structure:
(a) Generate a numbering by labeling the starting node 1 and sequentially numbering the adjacent nodes in order of increasing degree.
(b) Ties are broken arbitrarily, and the next lowest degree node which has not been numbered is then numbered as detailed above.
3. Reverse each numbering generated.
4. Calculate the bandwidth, profile and wavefront data for each numbering and choose the one that gives the best desired matrix properties.

The RCM method shows the importance of finding a good starting node, or root for a level structure, since the numberings generated for each level structure can exhibit widely varying bandwidths, profiles and wavefront data [2, 14, 15].

Work done by Gibbs et al. [2] noted several deficiencies in the RCM algorithm. The Gibbs-Poole-Stockmeyer algorithm (GPS) extended the RCM technique by noting several characteristics of sparse matrices and exploiting these in determining a starting node from which to root a level structure.

The GPS method uses the heuristic that increasing the number of levels always decreases the average number of nodes in each level of the level structure, and it also tends to reduce the width of the level structure. Second, the GPS method minimizes the width of the level structure since the resulting bandwidth will always be greater than or equal to the width of the level structure. The GPS algorithm can be summarized as follows.

1. Find the endpoints of a pseudo-diameter of the graph and root a level structure at each endpoint.
2. Combine the two rooted level structures into a single level structure of small width.
3. Number the nodes level by level, within each level in order of increasing degree.

King's algorithm [5] has since received several refinements, most notably by Gibbs [3] and by Lewis [6, 12]. Because of the work done by Gibbs, it is generally referred to as the Gibbs-King algorithm (GK). The Gibbs variant of the King algorithm is identical to the GPS method in that a level structure of minimum width is generated, and nodes are numbered level by level. The criteria for choosing the next node to number, however, are what differentiate the GK and GPS algorithms.

1. Generate a starting node and associated rooted level structure of minimum width as in the GPS method.
2. Number the nodes on a level by level basis by choosing the next node to number as the one that completes the connectivity of as many nodes as possible, or one that is least connected to nodes that have not yet been considered for numbering. In addition, King suggested that the longer a node remained on the front, a higher priority should be given to remove it (i.e. complete the associated equation and number it).

The GK algorithm restricts the search for the next node to 'add to the front' to those on the currently considered level of the level structure. Resulting profile and wavefront characteristics were consistently the smallest of all the renumbering methods; however, until Lewis' additions to this algorithm, the stability was questionable, and the storage requirements and execution speed were excessive. Lewis employed more appropriate data structures, which resulted in a reduction in storage requirements and increased the execution speed. His results indicated that the GK algorithm was almost as fast as the GPS algorithm [12].

Gibbs' explanations [3] of the better performance of the GK and GPS methods show that these two methods are not constrained by a local search to reduce the matrix properties; by using the minimum level width, the entire graph structure is taken into account. Therefore, the GK and GPS algorithms can not only number well locally in the connectivity tree but also operate globally.

Levy's algorithm [10] was introduced in the early seventies. The principle is similar to the GK algorithm in that the node to number next is one that minimizes the increase in front size; however, the search for this node is not limited to a level structure but is a global search [7]. As shown by Everstine [10], the time required can be prohibitive. In addition, no starting node strategy is specified.

A fine generalization of the front minimization method has been done by Marro [7]. In this paper, the key features of front minimization are detailed with an emphasis on reducing the execution time. Appropriate data structures are illustrated; however, a starting node strategy is not detailed.
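As a concrete illustration of the classical methods, the level-by-level numbering and final reversal used by RCM can be sketched as follows. This is an illustrative Python rendering for a single root, not code from the paper; the published algorithm [1, 10] tries several low-degree roots and keeps the best numbering.

```python
from collections import deque

def rcm_order(adjacency, root):
    """Reverse Cuthill-McKee numbering from a given root.
    adjacency[i] = iterable of neighbours of node i (0-based).
    Returns a list whose k-th entry is the node numbered k+1."""
    n = len(adjacency)
    degree = [len(adjacency[i]) for i in range(n)]
    visited = [False] * n
    order = []
    queue = deque([root])
    visited[root] = True
    while queue:
        v = queue.popleft()
        order.append(v)
        # number unvisited neighbours in order of increasing degree,
        # breaking ties arbitrarily (here: by node index)
        for w in sorted(adjacency[v], key=lambda u: (degree[u], u)):
            if not visited[w]:
                visited[w] = True
                queue.append(w)
    order.reverse()  # the reversal that reduces the profile [13, 14]
    return order
```

The forward pass is exactly the Cuthill-McKee breadth-first numbering; only the final reversal distinguishes RCM from CM.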
PROFILE FRONT MINIMIZATION ALGORITHM

In the PFM algorithm, the weighted nodal degree of a node is computed from the weights of the nodes adjacent to the node, and the weighted element degree is the sum of the weighted nodal degrees of all nodes contained in that element.

As stated in Hoit and Wilson [4], the goals of the PFM algorithm are twofold. First, required storage must be as small as possible. Second, the resultant node ordering should yield an equation number sequence with small profile and wavefront. It should be noted that the PFM algorithm does not attempt to reduce the bandwidth. In fact, on certain problems a larger bandwidth results (see next section).

Because in essence the frontal solution method is an element based concept [16], data structures are used which contain the element-node connectivity. The inverse relationship is also required. That is, all elements which contain a given node are also needed. This gives rise to two sets of arrays which constitute the main data organization of the PFM algorithm. The first set of arrays is a list of all nodes contained in each element and a pointer array into this array. The element connectivity array is stored in compact fashion. The second set is a list of the elements connected to each node. This is also stored in compact fashion with a pointer array. For details on this method, see Hoit and Wilson [4].

There are three major changes to the original PFM algorithm. They consist of a change in the weighting process, the elimination of some loops replaced by pointers and flags, and a simplification of the starting node strategy. These constitute the majority of the changes to the algorithm and will be detailed next. It is worth mentioning here that provisions for substructuring were also added, but they will not be discussed in this paper.

In every renumbering method, once a starting node is found, some criterion must be used to choose the next node to number. The original implementation of the PFM algorithm initiated the concept of weighted nodes and elements which reflect their connectivity complexity. The original version, however, normalized these weights. The result was to smear the weightings and make the decision process less distinct. The new version removes the weight normalization process. This was done in the NODEW routine, which has been simplified.

After inspection of the old algorithm, it was noted that several interior loops could be eliminated or changed. One of the most time consuming loops that was removed is in the main driving routine, OPNUM. Here there was a four deep loop: first on the nodes on the front, next on the elements connected to those nodes, third on the nodes in each element, and fourth another loop on all nodes on the front. The innermost loop was eliminated by flagging those nodes that are on the front in the new node number array, 'nd'. Thus, nodes not yet numbered are flagged as zero, numbered nodes are flagged as a positive integer, and nodes on the front are flagged with negative one. Instead of having to loop over the nodes on the front in order to determine if a node is on the front, by checking the node flag this can be determined in an easier fashion. The flagging also permitted the removal of a loop over all the nodes on the front in the routine FRONT. The resulting reduction in run time was large.

Additionally, a heuristic was added which prevents nodes from remaining on the front longer than necessary. Originally, elements were added to the front by compiling a value for each element. The element with the lowest value was added to the front and the completed node equations were numbered. The element value was compiled by checking each node on the element. If adding this element would complete the node's equation, negative one was added to the value. If the node was not on the front, positive one was added.

In the new version, the value is also changed if the node associated with an element is on the front but by adding the element the equation is still not complete. This makes the elements which remain on the front longer more appealing to complete and remove from the front. The value is negative three if the node can be removed, positive three if the node is not on the front, and negative one if the node is on the front.

The last change to the PFM algorithm involved the selection of the starting node. The original implementation used three starting nodes and generated three numberings, choosing the one that gave the best profile. The first starting node was based on minimum node weight. The second starting node was one of minimum node weight from the rooted level structure generated from the first starting node. The third starting node was one of minimum nodal weight generated from the rooted level structure from the second starting node.

The construction of the level structures, which are used only for finding a starting node, and the generation of three numberings took too much time, so currently a minimum weighted node is used to generate one numbering. Comparison of this starting node to that which is selected by GPS and GK is made in the next section. See the Appendix for the main PFM listings.

EMPIRICAL RESULTS

The test problems which Everstine collected and used in his paper [10] were used to evaluate the PFM algorithm. The use of these problems provides a means for directly comparing resequencing strategies. Everstine [10], Lewis [12] and Marro [7] used these problems in evaluating RCM, GPS, GK and Levy. Since there is no theoretical basis for evaluating renumbering methods, these empirical results provide a common ground for direct comparison of the different algorithms. The problems range in size from 59 to 2680 nodes. Several finite element types are used. The test problems are diverse, and since they are from many engineering environments, they do not bias any particular problem type or configuration.

Two sets of problems were run using PFM. The first set uses the last found minimum weighted node as the starting node. These results are given in Table 1. The second set uses the GPS starting node. Table 2 shows these results. A comparison with the best profile and rms results from the other algorithms is shown in Tables 3 and 4. PFM yields the best results on several of the problems, but on average GK gives the smallest profile and wavefront.

The use of the GPS starting nodes generally gives similar results to the minimum weight node that PFM uses. However, a major improvement can be seen in the two largest problems using the GPS starting node. This would suggest that further work should be done concerning starting node selection in the PFM algorithm. It also shows that starting node selection is fundamental to a good renumbering strategy.

The storage requirements for each of the problems are given in Table 5 and are compared with Lewis' versions of GPS and GK using unpacked integers. Even though PFM does have redundant information in the two array sets, often it requires fewer storage locations than GPS and GK. The poor results occur in problems where a large number of two node elements exist. The data structures currently used by PFM are adequate, but more efficient data structures may be possible. In constructing improved data structures, consideration should be given not only to minimal storage but also to efficiency in the algorithm.

A comparison of run times is shown in Table 6. As shown, the PFM algorithm is slower than the BANDIT version of GPS [17]. However, the run times are still acceptable considering the reduction in equation solution effort, but GPS and GK are still faster on average and give better results. The search for the next element to consider adding to the front occupies most of the time in PFM.
Table 1. Bandwidth, profile and wavefront computed by the PFM algorithm, PFM-1

                                        Maximum     Average
    n   Bandwidth   Profile      rms   wavefront   wavefront
   59          18       286    4.981†          7       4.847
   66           5       194    2.959           4       2.939
   72          14       235    3.352†          4       3.264
   87          25       563    6.730†          9       6.471‡
  162          26      1551    9.910†         13       9.574
  193          61      4728   25.652          35      24.497
  198          14      1312    6.978†         10       6.626
  209         169      4832   24.830          38      23.120
  221         104      2489   12.007          20      11.262
  234          37      1240    5.686†          9       5.299
  245         116      4179   18.481          30      17.057
  310          29      2975    9.741†         14       9.597
  346          72      6947   20.868†         29      20.078‡
  361          42      5394   15.425          24      14.942
  419         255     10737   27.401          44      25.625
  492          54      4632    9.931†         19       9.415‡
  503         446     20556   44.733          68      40.867
  512          43      5067   12.043          24       9.896
  592          50     10636   18.870†         28      17.966
  607         329     17798   34.221          53      29.306
  758          68      8683   13.144          28      11.455
  878         334     21251   25.312          39      24.204
  918         439     40907   50.053          84      44.561
  992         114     37288   39.088          62      37.589
 1005         898     59865   63.098         108      59.561
 1007          82     23528   24.108          34      23.364
 1242         602     69457   58.554          85      55.924
 2680         542    125079   53.119         135      46.298
Table 2. Bandwidth, profile and wavefront computed by the PFM algorithm using GPS starting node strategy, PFM-2

                                        Maximum     Average
      n   Bandwidth   Profile      rms   wavefront   wavefront
     59          16       292    5.099†          7       4.949
     66           4       194    2.959           4       2.939
     72          15       242    3.440†          4       3.361
     87          22       524    6.270†          9       6.023‡
    162          26      1551    9.910†         13       9.574
    193          72      4864   26.257          35      25.202
    198          17      1319    7.021†         10       6.662
    209          99      3834   19.192†         31      18.344
    221         104      2487   12.003          20      11.253
    234          43      1374    6.370          10       5.872
    245         101      3672   16.894          30      14.988
    310          29      3006    9.853†         16       9.691
    346          79      6863   20.700†         29      19.835‡
    361          51      5445   15.379          25      15.083
    419         255     10837   27.686          44      25.864
    492          45      4586    9.837†         17       9.321‡
    503         446     20556   44.733          68      40.867
    512          43      4960   11.785          24       9.688
    592          50     10636   18.870†         28      17.966
    607         328     17970   34.541          53      29.605
    758          68      8683   13.144          28      11.455
    878         326     22246   26.687          39      25.687
    918         441     41368   50.411          85      45.063
    992         108     36512   38.125          60      36.806
   1005         748     49956   54.680          99      49.707
   1007          49     24665   25.333          38      24.494
   1242         485     41092   34.843†         61      33.085‡
   2680         447     97576   37.979†         69      36.409
Average       161.3   15261.1   21.071        34.1      19.636

† Better or equal rms results when compared to the RCM, GPS and GK algorithms.
‡ Better or equal profile results when compared to the RCM, GPS and GK algorithms.
Table 5. Storage requirements for PFM and GPS/GK, unpacked integers, 32 bit words

    n     PFM   GPS/GK
   59     893      607
   66    1027      152
   72     739      651
   87    1737     1021
  162    1863     2064
  193    2573     4485
  198    2203     2544
  209    4681     2830
  221    3367     2821
  234    2737     2208
  245    5241     2743
  310    4793     4121
  346    9759     5300
  361    6551     4854
  419    7153     5730
  492    6829     5859
  503    7205     8617
  512    7493     6542
  592    7914     8181
  607    8945     8718
  758    9547    10105
  878   11909    11964
  918   12551    12106
  992   12069    21800
 1005   22313    13754
 1007   13415    13757
 1242   17755    16735
 2680   58755    38657

Table 6. Relative run times

    n   PFM/GPS
   59       2.3
   66       1.8
   72       1.0
   87
  162       1.6
  193
  198       1.2
  209      18.4
  221       3.3
  234       2.6
  245      15.1
  310       7.4
  346      20.0
  361
  419       9.9
  492
  503
  512       5.0
  592
  607
  758       1.4
  878       2.5
  918       5.7
  992       1.1
 1005      33.9
 1007       2.1
 1242
 2680      20.9
CONCLUSIONS

The PFM algorithm consistently produces node numberings which provide good profile and wavefront characteristics of the coefficient matrix. Other algorithms can produce better results, most notably the GK implementation of Lewis. PFM produces profiles within 22% of those produced by GK, and rms wavefronts within 13% of GK. From close examination of Tables 3 and 4, it is evident that these numbers are adversely affected by the PFM results on two large problems, those of 918 nodes and 1005 nodes. Further investigation is being done on why PFM performed so poorly on these two large problems.

The storage requirements are modest and are comparable to those of GK and GPS. The major drawback of the current PFM algorithm is that it is relatively slow in comparison with GPS and GK, even though it is an order of magnitude faster than its previous version. The largest advantage that the PFM algorithm has over all others is that of implementation. PFM can easily be incorporated into existing finite element programs with a minimum of work and still result in good profile and wavefront reductions. Four routines make up the algorithm for a total of 410 lines of commented FORTRAN source code. This contrasts with the seventeen subroutines and 4400 lines of code for Lewis' GPS and GK algorithms. Therefore, the PFM algorithm is more suitable for limited memory installations, or when a relatively uncomplicated renumbering method is desired.

Acknowledgements - The authors wish to thank G. C. Everstine for making available the BANDIT program and his collection of test problems, and J. Campbell for his helpful suggestions. We would also like to thank E. L. Wilson for his contributions to the original PFM work.

REFERENCES

1. H. L. Crane Jr, N. E. Gibbs, W. G. Poole Jr and P. K. Stockmeyer, Algorithm 508: matrix bandwidth and profile reduction. ACM Trans. Math. Softw. 2, 375-377 (1976).
2. N. E. Gibbs, W. G. Poole Jr and P. K. Stockmeyer, An algorithm for reducing the bandwidth and profile of a sparse matrix. SIAM J. Numer. Anal. 13, 236-250 (1976).
3. N. E. Gibbs, Algorithm 509: a hybrid profile reduction algorithm. ACM Trans. Math. Softw. 2, 378-387 (1976).
4. M. Hoit and E. L. Wilson, An equation numbering algorithm based on minimum front criteria. Comput. Struct. 16, 225-239 (1983).
5. I. P. King, An automatic reordering scheme for simultaneous equations derived from network systems. Int. J. Numer. Meth. Engng 2, 523-533 (1970).
6. J. G. Lewis, Algorithm 582: the Gibbs-Poole-Stockmeyer and Gibbs-King algorithms for reordering sparse matrices. ACM Trans. Math. Softw. 8, 190-194 (1982).
7. L. Marro, A linear time implementation of profile reduction algorithms for sparse matrices. SIAM J. Sci. Stat. Comput. 7, 1212-1231 (1986).
8. W. F. Smyth, Algorithms for the reduction of matrix bandwidth and profile. J. Comput. Appl. Math. 12/13, 551-561 (1985).
9. R. L. Taylor, E. L. Wilson and S. J. Sackett, Direct solution of equations by frontal and variable band, active column methods. In Nonlinear Finite Element Analysis in Structural Mechanics (Edited by W. Wunderlich, E. Stein and K. J. Bathe), pp. 521-552. Springer, Berlin (1981).
10. G. C. Everstine, A comparison of three resequencing algorithms for the reduction of matrix profile and
wavefront. Int. J. Numer. Meth. Engng 14, 837-853 (1979).
11. N. E. Gibbs, W. G. Poole Jr and P. K. Stockmeyer, A comparison of several bandwidth and profile reduction algorithms. ACM Trans. Math. Softw. 2, 322-330 (1976).
12. J. G. Lewis, Implementation of the Gibbs-Poole-Stockmeyer and Gibbs-King algorithms. ACM Trans. Math. Softw. 8, 180-189 (1982).
13. J. W.-H. Liu and A. Sherman, Comparative analysis of the Cuthill-McKee and the reverse Cuthill-McKee ordering algorithms for sparse matrices. SIAM J. Numer. Anal. 13 (1976).
14. A. George and J. W.-H. Liu, Computer Solution of Large Sparse Positive Definite Systems. Prentice-Hall, Englewood Cliffs, NJ (1981).
15. A. George and J. W.-H. Liu, An implementation of a pseudoperipheral node finder. ACM Trans. Math. Softw. 5, 284-295 (1979).
16. B. M. Irons, A frontal solution program for finite element analysis. Int. J. Numer. Meth. Engng 2, 5-32 (1970).
17. G. C. Everstine, BANDIT Users' Guide. TM-184-77-03, David W. Taylor Naval Ship Research and Development Center, Bethesda, MD (1986).
18. A. George, Computer implementation of the finite element method. Technical Report STAN-CS-71-208, Computer Science Department, Stanford University, CA (1971).
APPENDIX

      subroutine opnum (le, nc, nd, ln, ne, ndw, msum, nfrnt,
     *                  nnp, nel, ntrm, nsm)
c---------------------------------------------------------------opnum
   20 nstart = 0
c
c-------form node and element weights and flag retained nodes
c
      call nodew (ndw, msum, le, nc, numnp, nterm, nstart, numel)
      if (nstart.eq.0) go to 50
      ncomp = ncomp + 1
c
c-------check for user supplied starting nodes from SSTART card
c
      if (istart(ncomp).ne.0) then
        nstart = istart(ncomp)
        write (not,2005) nstart
 2005   format(20x,'User Supplied Starting Node Number =',i7)
      else
        write (not,2000) nstart
 2000   format(20x,'PFM Starting Node Number =',i7)
      endif
c
c-------put starting node on front
c
      nfrnt(1) = nstart
      numb = 1
c
c-------mark starting node as being on the front
c-------(-1 for normal, -3 for flagged nodes)
c-------"nd" is set to -2 for retained nodes for substructuring
c
      if (nd(nstart).eq.0) then
        nd(nstart) = -1
      else
        if (nd(nstart).eq.-2) then
          nd(nstart) = -3
        endif
      endif
c
c-------find next element connected to front to eliminate
c-------choose element as one with the least element degree
c
   35 ie = 0
      min = 32000
      do 45 l=ll,lh
        n = nc(l)
        ndw(n) = ndw(n) - msum(m)
        call front (nfrnt, nd, n, numb, 1, numnp)
        if (ndw(n).eq.0) then
c
c-------------don't number retained nodes (nd(n) = -3)
c
          if (nd(n).eq.-1) then
c
c----------------node is on the front and not a retained node
c
            node = node + 1
            nd(n) = node
          endif
          call front (nfrnt, nd, n, numb, 2, numnp)
        endif
   45 continue
      msum(m) = 0
c
c-------check if all nodes numbered (numb = 0)
c
      if (numb.eq.0) go to 20
      go to 35
c
   50 return
      end
      min = 1
c
c-------loop over elements to evaluate joint factors
c
      do 60 kkk=1,4
c
c----------compute element weighting factors
c
        ll = 1
        do 20 m=1,numel
          lh = le(m)
          if (msum(m).eq.0) go to 20
          isum = 0
          do 18 l=ll,lh
            n = nc(l)
            isum = isum + ndw(n)
   18     continue
c
c-------------set the element weight
c
          msum(m) = isum
   20   ll = lh + 1
c
c----------compute joint factors
c----------equal to weight of all elements attached to that node
c
        do 30 n=1,numnp
          ndw(n) = 0
   30   continue
        ll = 1
        do 40 m=1,numel
          lh = le(m)
          if (msum(m).eq.0) go to 40
          do 38 l=ll,lh
            n = nc(l)
            ndw(n) = ndw(n) + msum(m)
   38     continue
   40   ll = lh + 1
   60 continue
c
c-------find minimum value
c
      min = 2000000000
      do 50 n=1,numnp
        ndwn = ndw(n)
        if (ndwn.le.0) go to 50
        if (ndwn.gt.min) go to 50
        min = ndwn
        nstart = n
   50 continue
c
      return
      end
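In outline, the NODEW pass above alternates between element weights (the sum of the weights of an element's nodes) and node weights (the sum of the weights of a node's attached elements) for four passes, then takes a node of minimum weight as the start of the front. The following Python sketch is illustrative only; in particular, the initial node weights are assumed here to be element-attachment counts, which the FORTRAN listing receives from its caller.

```python
def pfm_start_node(elements, numnp, passes=4):
    """Minimum-weight starting node, following the NODEW scheme.
    elements: list of node-index lists, one per element (0-based)."""
    # elements attached to each node (formed by NODEL in the paper)
    node_elems = [[] for _ in range(numnp)]
    for m, nodes in enumerate(elements):
        for n in nodes:
            node_elems[n].append(m)
    # assumed initial weights: number of attached elements
    ndw = [len(node_elems[n]) for n in range(numnp)]
    for _ in range(passes):
        # element weight = sum of the weights of its nodes
        msum = [sum(ndw[n] for n in nodes) for nodes in elements]
        # node weight = sum of the weights of its attached elements
        ndw = [sum(msum[m] for m in node_elems[n])
               for n in range(numnp)]
    return min(range(numnp), key=lambda n: ndw[n])
```

Because the weights are not normalized, repeated passes amplify the contrast between interior and peripheral nodes, so the minimum tends to fall on the boundary of the mesh.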
c
c-------------node not on front and not numbered
c
        numb = numb + 1
        nfrnt(numb) = n
        nd(n) = -1
      else
c
c-------------if substructure node, add to front but flag so not numbered
c
        if (ndn.eq.-2) then
          numb = numb + 1
          nfrnt(numb) = n
          nd(n) = -3
        endif
      endif
      else
c
c----------remove node from front
c
      do 40 m=1,numb
        if (n.eq.nfrnt(m)) go to 50
   40 continue
   50 numb = numb - 1
      if (m.gt.numb) go to 90
c
      subroutine nodel (le, nc, nd, ln, ne, numnp, numel, nterm,
     *                  nsum, irest)
c---------------------------------------------------------------nodel
c     Routine to form node-element connectivity
c
c     le    = pointers into "nc" array
c     nc    = nodes connected to each element
c     nd    = temporary array to count elements attached to each node
c     ln    = pointers into "ne" array (calculated here)
c     ne    = elements connected to each node (calculated here)
c     numnp = number of nodes in problem
c     numel = number of elements in problem
c     nterm = last location used in "nc" array
c     nsum  = last location used in "ne" array
c     irest = all available unused storage
c--------------------------------------------------------------------
      common /iolist/ ntm, ntr, nin, not, nt7, nt8, nt9, nt10, nt11,
     *                nt14, nt15, nt16
      dimension nd(numnp), le(numel), nc(nterm), ln(numnp), ne(irest)
c
c-------zero element count array
c
      do 10 n=1,numnp
        nd(n) = 0
   10 continue
c
c-------count elements attached to nodes
c
      ll = 1
      do 20 m=1,numel
        lh = le(m)
        do 15 l=ll,lh
          n = nc(l)
          if (n.gt.numnp) then
            write (not,2000) m, n
 2000       format(/'*** FATAL ERROR - - - Element ',i5,
     *             ' refers to node number ',i8,' which is ',
     *             'greater than the maximum number of nodes.')
            stop
          endif
          nd(n) = nd(n) + 1
   15   continue
        ll = lh + 1
   20 continue
c
c-------form location array
c
      nsum = 0
      do 30 n=1,numnp
        if (nd(n).le.0) then
          write (not,2010) n
 2010     format('0*** WARNING - - - No elements attached to node ',
     *           'number ',i6)
        endif
        nsum = nsum + nd(n)
        ln(n) = nsum
   30 continue
c
c-------form node-element array
c
      ll = 1
      do 40 m=1,numel
        lh = le(m)
        do 35 l=ll,lh
          n = nc(l)
          nn = ln(n) - nd(n) + 1
          if (nn.gt.irest) then
            write (not,2030) nn, irest
 2030       format('0*** FATAL ERROR - - - In NODEL...requested ',
     *             'storage location:',i8,' available:',i8)
            stop
          endif
          ne(nn) = m
          nd(n) = nd(n) - 1
   35   continue
        ll = lh + 1
   40 continue
c
      return
      end
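The pointer arithmetic in NODEL packs, for each node, the list of attached elements into one flat array. A 0-based Python equivalent of that two-pass construction is sketched below; it is illustrative only, with `le` and `nc` playing the same roles as the FORTRAN arrays.

```python
def node_element_map(le, nc, numnp):
    """Compact node-to-element map, mirroring NODEL.
    le[m] = index one past the last entry of element m in the flat
    node list nc (0-based).  Returns (ln, ne): the elements attached
    to node n occupy ne[start:ln[n]], where start is ln[n-1]
    (0 for the first node)."""
    # pass 1: count elements attached to each node (nd in NODEL)
    count = [0] * numnp
    for n in nc:
        count[n] += 1
    # running sum gives the pointer array ln
    ln, total = [], 0
    for n in range(numnp):
        total += count[n]
        ln.append(total)
    # pass 2: fill each node's slot front-to-back, decrementing the
    # counts exactly as NODEL does with nn = ln(n) - nd(n) + 1
    ne = [0] * total
    ll = 0
    for m, lh in enumerate(le):
        for l in range(ll, lh):
            n = nc[l]
            ne[ln[n] - count[n]] = m
            count[n] -= 1
        ll = lh
    return ln, ne
```

For a two-element mesh with elements {0,1} and {1,2}, le = [2, 4] and nc = [0, 1, 1, 2] yield ln = [1, 3, 4] and ne = [0, 0, 1, 1], i.e. node 1 is attached to elements 0 and 1.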