
VALID 2014 : The Sixth International Conference on Advances in System Testing and Validation Lifecycle

Performance Impact of Correctable Errors on High Speed Buses

Daniel Ballegeer, David Blankenbeckler, Subhasish Chakraborty, Tal Israeli


Intel Corporation
Santa Clara, CA, USA
Emails: {dan.g.ballegeer, david.blankenbeckler, subhasish.chakraborty, tal.israeli}@intel.com

Abstract—Modern high speed serial buses are generally required by specification to achieve a maximum bit error ratio. Are these requirements too restrictive? This paper will look at a series of studies on Peripheral Component Interconnect Express and Serial AT Attachment, investigating the impact of bit error ratio on bus performance. The results of these studies suggest that typical bit error ratio requirements may be conservative. The findings suggest that alternative bus performance specifications should be considered that would open new possibilities for design, validation and manufacturing test tradeoffs.

Keywords-bit error ratio; BER; electrical validation; high speed interconnect; high speed bus; I/O.

I. INTRODUCTION

Modern high speed serial bus specifications generally have a requirement for maximum Bit Error Ratio (BER) [1][2][3][4]. In this context, bit error ratio is defined as the fraction of bits transmitted over the high speed interconnect that are interpreted incorrectly at the receiving device—i.e., a bit originally transmitted as a "1" is interpreted as a "0" or vice-versa. Table I summarizes these requirements for a variety of buses: Third Generation Peripheral Component Interconnect Express (PCIe Gen 3), 10 Gigabit Ethernet, Serial AT Attachment (SATA), and Universal Serial Bus (USB). Note that there is no inherent need or expectation that each interface type has the same BER requirement, but the table illustrates that 10−12 is quite commonly used.

TABLE I. BER SPECIFICATIONS FOR SOME HIGH SPEED BUSES

  Link                  BER Spec
  PCI Express Gen 3     10−12 [1]
  10 Gigabit Ethernet   10−12 [2]
  SATA 3.x              10−12 [3]
  USB 3.x               10−12 [4]

Many high speed buses such as the ones listed in Table I utilize error detection schemes such as a Cyclical Redundancy Check (CRC) at the receiving device to detect any signal integrity-induced bit errors that could have occurred over the interconnect. In such a scheme, in the event of a detected error, a request is sent to the transmitting device to send the data again (a retry). Ideally, a target BER level on an interconnect that employs a CRC check must take into account both the effectiveness of the CRC scheme with respect to the protected data packet size as well as the performance losses that result from error-induced retries on the bus. Although studies and publications on the effectiveness of CRC error detection have occurred for multiple decades [5][6], as far as the authors know, there have been few, if any, studies done on real world performance impact at various error rates. Some theoretical calculations of latency impact vs. error percentage have been presented [7], but these would not take into account other factors that interact with the error retries and contribute to the overall performance impact on a true workload. This paper will outline the results of several studies conducted to better understand the real world performance impact of increasing error rates beyond the specification level.

These studies are interesting in that they provide some data justifying room for design tradeoffs. For example, there may be significant cost savings opportunities, trading off slight performance impact for lower cost material. Consider the example of a system design with long Peripheral Component Interconnect Express (PCIe) bus routing lengths. Instead of using more expensive low loss Printed Circuit Board (PCB) material, it may make sense to sacrifice error rate and realize a cost savings with standard FR4 PCBs. Likewise, there may be power reduction opportunities for low power devices, trading off performance for lower power operation, without sacrificing data integrity.

It is easy to show through either empirical measurements or theoretical arguments that the bus BER of a product is a distribution when measured across multiple instances of that product. Factors that induce this distribution include, among other things, the variations in the characteristics of the board interconnects, receiver circuitry, and transmitter circuitry. For example, in the voltage domain of the bus signal, these factors lead to a distribution of the electrical margin, Vm, where Vm represents the amount of voltage swing at the receiving device beyond the minimum required voltage detection threshold.

To simplify the example, consider an ideal case where there is no noise in the system when Vm is measured, i.e., Vm is the noise-free voltage signal margin. In a real system, bit errors result from noise adding to or subtracting from this margin. In a zero-mean additive white Gaussian noise model such as that described in [8], the Vm distribution may be mapped into a BER distribution via the following relationship:

  BER = \frac{1}{\sigma\sqrt{2\pi}} \int_{V_m}^{\infty} e^{-x^2 / 2\sigma^2} \, dx ,    (1)

where σ is the standard deviation of the Gaussian noise.
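To make (1) concrete: the integral is the standard Gaussian tail probability Q(Vm/σ), which can be evaluated numerically with the complementary error function. The sketch below is illustrative only; the helper name and example margins are assumptions for demonstration, not taken from the paper.

    import math

    def ber_from_margin(v_margin: float, sigma: float) -> float:
        """Evaluate (1): the probability that zero-mean Gaussian noise with
        standard deviation sigma exceeds the noise-free margin v_margin."""
        # Tail integral equals Q(v_margin/sigma) = 0.5*erfc(v_margin/(sigma*sqrt(2))).
        return 0.5 * math.erfc(v_margin / (sigma * math.sqrt(2.0)))

    # Illustrative margins in units of sigma; roughly 7 sigma of margin
    # corresponds to the commonly specified 1e-12 level.
    for n in (5.0, 6.0, 7.0):
        print(f"Vm = {n:.0f} sigma -> BER ~ {ber_from_margin(n, 1.0):.1e}")

Under this model, the unit-to-unit spread in Vm translates directly into a spread of BER across product instances, which is the distribution discussed next.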


[Figure 1. Conceptual illustration of bus BER distribution across different units]

Fig. 1 is an example illustration of a BER distribution that could result from a Gaussian distribution of Vm. The specifications are a hard cut-off at a maximum BER limit, such as 10−12. In reality, the BER performance of every lane on every channel on every board is different. Is it acceptable that a small portion of lanes are slightly above the BER spec if the performance impact is negligible? Are these systems really considered bad if there is no noticeable performance impact to the end user? If it were acceptable to ship some portion of systems at a higher BER, there may be substantial benefit, such as the opportunity to reduce silicon test time requirements.

II. PERFORMANCE IMPACT EXPERIMENT OVERVIEW

In order to measure the performance impact of BER levels above specification, four different high speed serial bus usage scenarios were studied:

• a PCIe Gen 3 bus used with a graphics add-in card
• a PCIe Gen 3 test add-in card utilized for easy measurement of data mismatch errors
• PCIe Gen 3 used as an interconnect between a CPU and Platform Control Hub (PCH)
• a Serial ATA (SATA) 6 Gb/s interconnect attached to a hard disk drive.

In all experiments, techniques were used to induce different BER levels on the link, either by

• changing the voltage or timing sampling at the receiving device to be offset with respect to the data eye center,
• error injection at the receiver, or
• voltage swing attenuation at the transmitting device.

Then, with this induced BER present, performance benchmarks were run that specifically focused on the I/O being studied. In some cases, the BER could be monitored at the same time as the performance, whereas in other cases, BER had to be measured first in a loopback scheme before running the performance benchmark at the same settings. In all cases, experiments were re-run at least one time to confirm the quoted performance results.

III. PERFORMANCE DATA COLLECTION AND RESULTS

A. PCIe Gen 3 used with a graphics add-in card

In the first experiment, the PCIe Gen 3 bus studied was the interconnect between a 3rd Generation Intel Core i7 Processor and a PCIe graphics Add-In Card (AIC). Four different high-performance graphics cards were included in the experiment, spanning three different vendors. Graphics card settings were set to produce maximum performance; future studies will also include the scenario where hardware acceleration is turned off. Three different commercially available graphics-intensive benchmarks were run during the experiment: Codemasters Dirt 3, a graphics-intensive racing game; Unigine Heaven, a graphics-intensive benchmark designed to stress graphics AICs; and 3DMark Fire Strike, a real-time graphics rendering benchmark. In this experiment, the degradation of performance vs. BER was measured in both directions: in one set-up, the CPU receiver experienced the bit errors, and in a separate set of measurements, the graphics AIC receiver experienced the bit errors.

The CPU PCIe Receiver (Rx) circuitry had built-in validation test hooks that allowed changing the location of the sampling point in the time domain with respect to the data eye center. By offsetting the sampling point away from the data eye center, bit errors could be induced at the receiver.

In the first step, the BER vs. time sampling offset was established by sending a known random bit sequence out the CPU transmitter and receiving the same bit sequence at the CPU receiver. This was accomplished via the far-end digital loopback mode supported by all PCIe spec-compliant components. In this mode, the AIC received and interpreted the data transmitted by the CPU and then retransmitted the same data back to the CPU. Received data at the CPU was compared to the CPU transmitted data to detect the level of bit errors at each sampling offset point. Note that as the sampling offset moved closer to the nominal data eye center, more transmitted bits were necessary to detect bit errors. The BER vs. sampling offset slope was checked to ensure the relationship agreed with an additive white Gaussian noise model indicative of Random Jitter (RJ).

In the second step, the identical set of time sampling offset values was used in the same system setup, this time allowing the three benchmarks to run while measuring performance. In this way, a correlation of performance vs. BER at the CPU receiver could be established.
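The note above that more transmitted bits were needed near the eye center follows from simple error statistics: a loopback comparison that observes zero mismatches only bounds the BER, and the number of bits required grows as the inverse of the BER being resolved. A minimal sketch, assuming a Poisson error-arrival model (the same model the paper later uses for equation (2)); the helper names are hypothetical:

    import math

    def ber_point_estimate(mismatches: int, bits_compared: int) -> float:
        """BER estimate at one sampling offset from a loopback comparison."""
        return mismatches / bits_compared

    def bits_for_confidence(target_ber: float, confidence: float = 0.95) -> float:
        """Bits to compare in order to see at least one error with the given
        probability, when the true BER equals target_ber (Poisson model)."""
        return -math.log(1.0 - confidence) / target_ber

    print(f"{bits_for_confidence(1e-6):.1e} bits to resolve a BER near 1e-6")
    print(f"{bits_for_confidence(1e-12):.1e} bits to resolve a BER near 1e-12")

This is why the sweep is fast at large offsets (high BER) but increasingly expensive as the sampling point approaches the nominal eye center.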


To establish the performance impact of bit errors received by the Graphics PCIe Rx, a slightly different approach was used to induce bit errors: the CPU transmit voltage swing was reduced incrementally to produce different levels of bit errors experienced at the receiver of the graphics card. The relationship of BER level vs. transmit swing was first established by a loopback testing mode on the system similar to the one described above. Then, using the same transmit swing settings, the performance of each of the three benchmarks was measured. Using this approach, performance vs. BER experienced at the graphics card receiver could be characterized.

TABLE II. MINIMUM BER LEVELS AT GRAPHICS RX THAT INDUCED 3% AND 50% PERFORMANCE LOSS ON WORST-CASE BENCHMARK

  Graphics Card   BER for 3% loss   BER for 50% loss
  A               1×10−8            1×10−6
  B               3×10−6            6×10−5
  C               N/A               N/A
  D               N/A               N/A

Table II summarizes the performance loss observed as a function of BER at the Graphics Rx. Table III lists similar information as a function of BER at the CPU Rx. Values are reported for both 3% performance loss and 50% performance loss. Note that cards C and D were extremely robust to low CPU Tx voltage swing and did not encounter bit errors even at the lowest swing settings. Therefore, it was impossible to characterize performance vs. Graphics AIC Rx BER on cards C and D using this technique.

TABLE III. MINIMUM BER LEVELS AT CPU RX THAT INDUCED 3% AND 50% PERFORMANCE LOSS ON WORST-CASE BENCHMARK

  Graphics Card   BER for 3% loss   BER for 50% loss
  A               1×10−7            3×10−6
  B               1×10−4            2×10−4
  C               5×10−7            2×10−6
  D               4×10−8            3×10−7

The first notable point is that even a 3% performance loss was not observed until a BER of at least 10−8, which is four orders of magnitude above the PCIe BER spec of 10−12.

Second, although the table values represent the worst-case benchmark, there was not a large difference in the behavior of different benchmarks in terms of relative performance loss. This is shown in Fig. 2, which depicts the BER vs. performance loss at the CPU receiver when using PCIe card D. Also evident in Fig. 2 is the typical number of BER sample points and intervals that produced the data summarized in Tables II and III. This card showed the most difference between benchmarks, but as can be seen, even on this card, the relative performance loss is roughly equivalent across all three benchmarks at a given BER.

[Figure 2. PCIe Card D performance vs. BER on each of three benchmarks]

Another observation is that once performance starts to degrade on the order of 3%, it does not require a much greater BER to degrade the performance significantly further. This can be seen in Tables II and III, or graphically in Fig. 2. The latter graph illustrates that each benchmark performance metric degrades by 50% at a BER only 1 to 1.5 orders of magnitude above the 3% degradation point.

However, there were some differences between CPU Rx BER and Graphics Rx BER in this regard. Fig. 3 depicts this finding for Card A running Unigine Heaven. Graphics Rx BER starts to produce performance problems at a level roughly two orders of magnitude below CPU Rx BER, but the performance decrease after that point is more gradual than for CPU Rx, such that at BER levels in the vicinity of 10−6, the performance penalties are similar.

It should also be mentioned that some card-to-card differences were observed. This is shown in Fig. 4, which separately delineates the Unigine performance vs. CPU Rx BER for each card. Although card B had the lowest performance, it proved to be the least affected by bit errors, with little degradation all the way up to 10−4 BER. It could be speculated that the lower performance of this card resulted in lower utilization of the maximum available bandwidth on the PCIe link, thus preserving some additional bandwidth to compensate for the error retries on the link. However, this would not explain why card A, the highest performing card, showed the second-most resilience to bit errors in terms of performance impact. This suggests there are other factors that create these differences from card to card.

B. PCIe Gen 3 link between CPU and test add-in card

Another round of experiments was designed with a PCIe Gen 3 test add-in card to understand the BER levels associated with serious performance degradation. Voltage sampling and timing sampling points on the CPU PCIe receiver were offset from nominal values to induce a bit error ratio in the digital loopback mode described in the previous section, in order to establish the relationship of BER to the margin offsets.

Next, a PCIe functional test mode was utilized, in which the CPU wrote pre-defined data to the add-in card with all sampling points at nominally trained values. While reading back the data from the card, the CPU receiver margin hooks were operating to test at different sampling offset points, and error reporting was enabled to give visibility into detected receiver errors such as bad packets and CRC errors. In addition, data mismatch errors escaping the PCIe error detection mechanisms were identified by comparing the received bits against the transmitted bits in the CPU memory. In this way, the BER level creating normally undetected data mismatch errors could be empirically measured.
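Conceptually, that escape check is a bitwise comparison of the read-back data against the pre-defined pattern held in CPU memory; any differences that survive the link's CRC and retry machinery are the undetected errors of interest. A minimal sketch of the comparison (a hypothetical stand-in, not the actual test content):

    def undetected_mismatch_bits(expected: bytes, received: bytes) -> int:
        """Count bit positions where read-back data differs from the
        pre-defined pattern, i.e., errors that escaped PCIe error detection."""
        assert len(expected) == len(received)
        return sum(bin(a ^ b).count("1") for a, b in zip(expected, received))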


The experiment was performed once with the timing sampling offset used to induce BER, and again with the voltage sampling offset used to induce BER at the CPU receiver.

[Figure 3. Unigine performance vs. BER at either the CPU Rx or the PCIe graphics Card A Rx]

[Figure 4. Unigine performance vs. CPU Rx BER on each of four PCIe graphics cards]

Table IV shows the comparison of BER levels resulting in data mismatch errors during the PCIe functional test versus the BER levels causing a 100% performance loss. It should be mentioned that with the PCIe functional test content running in this part of the experiment, the 100% performance loss in actuality resulted in a crash or link hang requiring a reboot.

TABLE IV. AVERAGE BER AT WHICH 100% PERFORMANCE LOSS OR DATA MISMATCH ERRORS OCCURRED ON PCIE GEN3 ADD-IN CARD

  Method used to induce BER           Avg BER for 100%   Avg BER for data
                                      performance loss   mismatch error
  Timing sampling offset (120 runs)   3.0×10−7           Not measurable
  Voltage sampling offset (120 runs)  4.8×10−7           5.6×10−7 (measurable on 8 of 120 runs)

When BER was induced by changing the timing sampling point, the resolution was not sufficient to distinguish any data mismatch errors before reaching a BER that caused 100% performance loss. When using the voltage sampling offset, on 8 of the 120 runs, data mismatch errors were distinguishable before a crash occurred. On the other 112 runs, the high level of BER created a crash before any mismatch problems occurred.

Evident from these results is that any data mismatch issues escaping the built-in error detection mechanisms on PCIe Gen 3 occur at a BER very close to or higher than the BER that causes catastrophic performance problems. This is supporting evidence that as BER is increased, the main area of concern for an end user is in fact performance degradation rather than undetected data mismatch issues.

C. PCIe Gen 3 link between CPU and PCH

In this experiment, a 2nd generation Intel Xeon E5 processor was connected to an Intel BD82C606 Server Chipset Platform Control Hub (PCH) via a PCIe Gen 3 uplink. The intent was to study the impact of PCH Rx BER on the performance of the uplink.

First, in order to monitor the performance, a benchmark test was run that was known to exercise the bandwidth of the PCIe link. While this was done, jitter of various amplitudes was injected at the receiver to induce a BER at the PCH Rx. While the jitter was injected, error logs were utilized to monitor the rate of CRC and link recovery errors with respect to the total number of bits transmitted, in order to calculate the effective BER at that jitter amplitude setting. It was found that the jitter injection provided only coarse control over the effective BER. Finer granularity was achieved by complementing the jitter injection with voltage and temperature adjustments, which provided a finer adjustment to the receiver BER level. In this way, the performance penalty vs. PCH PCIe uplink receiver BER could be characterized.

The jitter injection technique plus voltage and temperature adjustment did not provide as fine a control over the BER as the sampling point adjustment technique used in part A. However, this technique did have the advantage of being able to monitor the actual bit errors occurring during the performance test runs themselves.

Fig. 5 displays the results of this experiment. The region of >3% performance penalty was witnessed to be in the vicinity of 10−10 BER, again implying there is some buffer between a performance issue and the 10−12 BER specification. Because of the lack of precise control over the BER with the jitter injection, there was a clear absence of data points in the BER range between 10−9 and 10−4. Somewhere in this range, and by the time 3×10−4 BER is reached, the part is not able to function, which is represented by the 100% performance penalty on the graph in Fig. 5. Because of the sparseness of the data points, it is not known at exactly what BER this occurs. Based on the slope of the points at or around ~10−10, it appears that 50% degradation would occur in the low 10−9 range. This agrees with the PCIe graphics card experiment in section A, which also showed a performance degradation from 3% to 50% occurring within approximately 1-1.5 orders of magnitude change in Rx BER.
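For reference, the effective-BER bookkeeping used in this experiment (logged errors divided by bits transmitted during the logging window) reduces to a one-line calculation. The sketch below treats every logged CRC or recovery event as at least one errored bit, so it is a lower-bound estimate; the lane count and window length are assumed example values, not from the paper:

    def effective_ber(crc_errors: int, recovery_errors: int,
                      window_seconds: float, lanes: int = 4,
                      lane_rate_bps: float = 8.0e9) -> float:
        """Lower-bound effective BER from logged PCIe link errors.
        PCIe Gen 3 signals at 8 Gb/s per lane; lanes=4 is an assumed example."""
        bits_transmitted = window_seconds * lane_rate_bps * lanes
        return (crc_errors + recovery_errors) / bits_transmitted

    # Example: 12 CRC errors and 3 recovery events in a 10 s window on a x4 link.
    print(f"{effective_ber(12, 3, 10.0):.1e}")  # ~4.7e-11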


D. SATA 6 Gb/s link between PCH and hard disk drive

For this measurement, an Intel BD82C606 Server PCH SATA 6 Gb/s link attached to a hard drive was studied. Similar to the experiment in the previous section, jitter injection was used at the PCH Rx to induce a BER. Jitter frequency and amplitude changes were made to vary the BER, and for finer adjustments, temperature adjustments were made in addition to a validation test hook that provided some level of control of the PCH Rx voltage sampling point with respect to the center of the data eye. As these adjustments were being made, the BER could be calculated by logging the disparity and CRC errors occurring on the SATA link and dividing by the total number of bits transmitted.

While errors were being induced in this manner, a performance benchmark involving continuous reads and writes to the hard drive was utilized to stress the SATA I/O as well as monitor the performance at various levels of BER. With the combination of jitter injection and the data eye margining hook, a reasonable level of accuracy was achieved in inducing different levels of BER on the SATA link. Similar to the PCH PCIe uplink experiment, errors were induced and monitored while the performance monitor itself was being run.

Fig. 6 shows the outcome of the experimental measurements. As BER was increased above the spec of 10−12, minimal overall performance degradation was witnessed until a BER level of approximately 3×10−10 was reached. At this level, a 3% performance penalty was observed, but from that point on, the rate of performance degradation with respect to BER increased dramatically. As was so often witnessed in the experiments reported in this whitepaper, 50% performance degradation occurred at a BER level only one order of magnitude higher, at approximately 3×10−9.

[Figure 5. PCH performance penalty vs. PCH Rx BER on the PCIe uplink to the CPU]

[Figure 6. Performance loss vs. PCH Rx BER on the SATA 6 Gb/s link to a hard drive]

IV. IMPLICATIONS AND FUTURE WORK

The data presented here suggests that, for the high speed serial bus types studied, there are at least two orders of magnitude of margin above the maximum BER specification before a user would experience any noticeable performance loss from replaying data after an error is detected. It is worth mentioning, however, that the empirical data sometimes showed lower margin than a simple latency-based theoretical projection would predict. Murali et al. [7] speculate that, based on error retry-related latency penalties, average observed latency would not show degradation until a packet or flit error ratio in the range of 0.1%-1%. In this context, latency refers to the amount of additional delay in the data packet, or "flit," that is created by the receiving device notifying the transmitting device of the CRC error as well as the resend of the correct data by the transmitting device. Projecting this value onto PCIe Gen 3, for example, with a typical CRC-protected packet payload size of 1200-2200 bits for the products measured in this paper, one would predict no performance concern until the BER elevates to the range of ~10−7 to 10−6. Yet in some of the experiments, performance began to show measurable decrease in the neighborhood of 10−8 or even 10−10.
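The projection from [7] can be reproduced with a two-line model: under independent bit errors, a packet of N bits is corrupted with probability 1 − (1 − BER)^N, and the 0.1%-1% flit error ratio threshold can be inverted to find the corresponding BER. A sketch under those assumptions (illustrative only):

    def packet_error_ratio(ber: float, packet_bits: int) -> float:
        """Probability that a CRC-protected packet of packet_bits contains
        at least one bit error, assuming independent bit errors."""
        return 1.0 - (1.0 - ber) ** packet_bits

    def ber_for_packet_error_ratio(per: float, packet_bits: int) -> float:
        """Invert the model: the BER at which the packet error ratio is per."""
        return 1.0 - (1.0 - per) ** (1.0 / packet_bits)

    # 0.1% and 1% packet error ratios on 2200-bit packets bracket the
    # ~1e-7 to 1e-6 BER range cited above.
    print(f"{ber_for_packet_error_ratio(1e-3, 2200):.1e}")  # ~4.5e-07
    print(f"{ber_for_packet_error_ratio(1e-2, 2200):.1e}")  # ~4.6e-06

The gap between this predicted onset and the measured onset points is examined next.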


This suggests that true effective latencies with real modern-day products and workloads, taking into account the error profiles (for example, the number of consecutive packets with errors), are sometimes greater than the penalties assumed in [7]. While error ratio profiles could differ by scenario and would not always match those in the reported experiments, the fundamental sources of bit errors in the experiments (jitter, elevated temperature, reduced transmit voltage swing, and a non-centered data sampling point) are all sources that could be experienced in a real-world system.

To minimize the impact of extended test runs and of using more expensive design solutions to ensure parts meet the BER spec, an alternative to a simple spec value would be to architect in the right validation hooks and capabilities to measure performance changes as data eye margins decrease or, alternatively, as BER increases. Validation activities can then concentrate on checking that the vast majority of parts and systems will not experience noticeable performance penalties—for example, no more than 3% performance loss—from resending data across the link as a result of error detection. When needed, test content such as the PCIe Gen 3 functional test used in this paper can be used to confirm that undetected data mismatch errors happen at or above BER levels that create severe performance degradation or hangs.

By structuring validation targets with respect to performance, product validation teams can have confidence they are truly validating for a quality end user experience, rather than a generic BER level. To illustrate, the BER requirement of 10−12 is prevalent in specs for high speed serial buses, despite different levels of error detection and different retry time penalties on these various buses, not to mention product-level architectural differences that could create different retry penalties from product to product on the same serial bus type. By forcing products to abide by one generic BER spec that is not explicitly tied to end user impact, the spec level must be overly conservative to account for all possible factors across all possible systems, implying that most products are over-designed and over-validated.

In contrast, by aligning to a performance-based requirement, this conservatism can be avoided, resulting in additional design margin and shorter validation time. Design margin benefit is extremely difficult to quantify even on a single I/O type because of the enormous variety of Si circuit designs, fabrication processes, and board designs. Validation time benefit is more straightforward to quantify, however. To empirically confirm that a given link is at or below a certain BER at a certain confidence level, one must test for a sufficient time. The Poisson probability distribution may be used to calculate the required test time to validate against a certain BER at a level of 95% confidence, assuming no errors are encountered during the test:

  Min\_Test\_Time = \frac{-\ln(1 - 0.95)}{BER \times dataRate} .    (2)
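Equation (2) and the Table V values that follow can be checked directly. Assuming the line rates implied by the table's entries (8 Gb/s per PCIe Gen 3 lane, 6 Gb/s for SATA), a minimal sketch:

    import math

    def min_test_time_s(ber: float, data_rate_bps: float,
                        confidence: float = 0.95) -> float:
        """Equation (2): minimum error-free test time needed to claim the
        link meets `ber` at the given confidence level (Poisson model)."""
        return -math.log(1.0 - confidence) / (ber * data_rate_bps)

    for ber in (1e-12, 1e-10):
        pcie = min_test_time_s(ber, 8.0e9)  # PCIe Gen 3: 8 Gb/s per lane
        sata = min_test_time_s(ber, 6.0e9)  # SATA: 6 Gb/s
        print(f"BER {ber:.0e}: PCIe {pcie:.2f} s, SATA {sata:.2f} s")
    # -> about 374/499 s at 1e-12 and 3.74/4.99 s at 1e-10, matching Table V.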
TABLE V. TEST TIME DIFFERENCES AT DIFFERENT BER LEVELS

  BER           Min test time for 95%        Min test time for 95%
  requirement   confidence, PCIe G3 (s)      confidence, SATA 6 Gb/s (s)
  10−12         374                          499
  10−10         3.74                         4.99

Table V shows the test time improvement for PCIe Gen 3 and SATA 6 Gb/s using this approach. If an empirical validation test of this nature were implemented in a manufacturing test, for example, this would imply a 99% reduction in test time if it were confirmed that only a BER of 10−10 was needed as opposed to 10−12. This is immediately evident from (2): test time is inversely proportional to BER.

One challenge encountered in this study was that, as far as the authors were able to discern, there is no published experimental data on performance penalties vs. BER for modern high-speed interconnects to which a comparison could be made. All previous investigations on this subject appear to be purely theoretical ([7][9]) and did not analyze a specific existing high speed interconnect type. Because this appears to be an area not previously explored, future studies will include other high speed interconnect types besides PCIe and SATA, as well as other scenarios for PCIe, including a graphics AIC with hardware acceleration turned off. It is also the hope that this work will motivate others in the industry to perform studies on their platform architectures.

V. SUMMARY

In this paper, four experiments were conducted to study the impact of increasing levels of BER on the performance of high speed serial buses. On a PCIe Gen 3 link running between a CPU and a graphics add-in card, it was found that although there were some card-to-card differences, performance did not start to decrease from error-induced retries until a BER of 10−8 at the lowest. On a PCH PCIe Gen 3 uplink to a CPU, as well as a SATA 6 Gb/s I/O running from a PCH to a hard disk, performance did not appreciably decline until a BER of 10−10 or higher. Finally, the PCIe Gen 3 functional test between a CPU and test add-in card showed that catastrophic performance issues arose at a BER of ~10−7, but that undetected mismatch errors do not occur until the same level of BER or worse.

The data suggests that many products have additional margin above the 10−12 BER spec before any user impact would occur. If new standards and practices were adopted to validate against performance impact instead of a generic BER specification level, the conservatism leading to costly over-design and over-validation could be avoided.

ACKNOWLEDGMENT

The authors would like to thank Alejandro Cardenas, Federico Hernandez Reyes, David Steele, and Dale Robbins for performance measurements on the PCH PCIe uplink and SATA. We would also like to thank Tsafrir Waller for the CPU/AIC PCIe performance test runs and analysis.

REFERENCES

[1] PCI Express® Base 3.0 specification, www.pcisig.com [retrieved: Aug. 2014].
[2] IEEE 802.3™-2012 Section 5, standards.ieee.org [retrieved: Aug. 2014].
[3] Serial ATA Revision 3.1 specification, www.sata-io.org [retrieved: Aug. 2014].
[4] Universal Serial Bus 3.1 specification, www.usb.org/developers/docs [retrieved: Aug. 2014].
[5] W. W. Peterson and D. T. Brown, "Cyclic codes for error detection," Proc. IRE, vol. 49, Jan. 1961, pp. 228-235.
[6] G. Castagnoli, S. Bräuer, and M. Herrmann, "Optimization of cyclic redundancy-check codes with 24 and 32 parity bits," IEEE Transactions on Communications, vol. 41, June 1993, pp. 883-892.
[7] S. Murali, T. Theocharides, N. Vijaykrishnan, M. J. Irwin, L. Benini, and G. De Micheli, "Analysis of error recovery schemes for networks on chips," IEEE Design and Test of Computers, Sep.-Oct. 2005, pp. 435-442.
[8] W. Liu and W. Lin, "Additive white Gaussian noise level estimation in SVD domain for images," IEEE Transactions on Image Processing, vol. 22, pp. 872-883.
[9] S. Wang, S. Sheu, H. Lee, and T. O, "CPR: A CRC-based packet recovery mechanism for wireless networks," 2013 IEEE Wireless Communications and Networking Conference, April 2013, pp. 321-326.

Copyright (c) IARIA, 2014. ISBN: 978-1-61208-370-4
