
Advanced Computer Architecture
Unit - 6
Introduction to TCMP (Tiled Chip Multicore Processor)
A Tiled Chip Multicore Processor (TCMP) is an advanced architectural design
approach for multicore processors, where the processor is divided into multiple
identical tiles. Each tile consists of a processing core, a portion of the cache, and
a communication interface, organized in a grid-like structure on a single chip. This
architecture is designed to provide scalability, high performance, and energy
efficiency.

Key Features of TCMP


1. Tiled Architecture:

The processor is partitioned into multiple identical tiles arranged in a grid or mesh layout.

Each tile operates as an independent processing unit with its own resources.

2. Modular Design:

Tiles are modular and reusable, simplifying design and testing.

The architecture can scale easily by adding more tiles without significant
redesign.

3. On-Chip Communication via Network on Chip (NoC):

Tiles communicate with each other using a Network on Chip (NoC).

NoC provides efficient and scalable communication with low latency.

4. Cache Coherence:

TCMP architectures implement protocols like MESI or MOESI to maintain consistency between the caches of different tiles (a simplified state sketch follows this list).

5. Energy Efficiency:

Distributed power management strategies allow efficient energy usage, reducing overall consumption.

6. Scalability:

The modular nature allows TCMP to scale to hundreds or thousands of cores, making it suitable for high-performance computing.
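As an illustration of the cache coherence point above, the following is a minimal, simplified sketch of the four MESI states and how a cache line in one tile might transition on local and remote accesses. It is not the full protocol, and the event names are assumptions made only for this example.

```python
from enum import Enum

class MESI(Enum):
    MODIFIED = "M"   # line is dirty and exclusive to this tile's cache
    EXCLUSIVE = "E"  # line is clean and present only in this tile's cache
    SHARED = "S"     # line is clean and may also exist in other tiles' caches
    INVALID = "I"    # line is not valid in this tile's cache

def next_state(state: MESI, event: str) -> MESI:
    """Very simplified MESI transitions for one cache line in one tile.

    Events (hypothetical names for this sketch):
      local_read / local_write   - accesses by this tile's core
      remote_read / remote_write - snooped accesses by other tiles
    """
    if event == "local_read":
        # On a miss this sketch loads into SHARED; a real protocol would
        # load into EXCLUSIVE when no other tile holds a copy.
        return MESI.SHARED if state == MESI.INVALID else state
    if event == "local_write":
        return MESI.MODIFIED          # gain ownership, other copies are invalidated
    if event == "remote_read":
        # A dirty or exclusive line is downgraded (with write-back if MODIFIED).
        return MESI.SHARED if state in (MESI.MODIFIED, MESI.EXCLUSIVE) else state
    if event == "remote_write":
        return MESI.INVALID           # another tile takes ownership
    raise ValueError(f"unknown event: {event}")

# Example: this tile writes a line, then a remote tile reads it.
s = MESI.INVALID
for ev in ("local_write", "remote_read"):
    s = next_state(s, ev)
print(s)  # MESI.SHARED
```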

Single Core Processor


In the figure, the red box marks the core of the processor (the CPU), which is connected to the other components on the chip.

Multi-Core Processor
A multi-core processor is a single computing component with two or more
independent processing units, called "cores," integrated onto one chip. Each core
can execute instructions independently, enabling the processor to perform
multiple tasks simultaneously, improving performance, efficiency, and parallel
processing capabilities.

Applications of Multi-Core Processors


1. Personal Computing:

Enhances performance in everyday tasks, gaming, and multimedia
applications.

2. High-Performance Computing (HPC):

Powers simulations, data analysis, and scientific research.

3. Cloud Computing and Data Centers:

Handles concurrent requests from users in virtualized environments.

4. Embedded Systems:

Found in mobile devices, automotive systems, and IoT devices for optimized performance.

5. Artificial Intelligence (AI) and Machine Learning:

Processes large datasets and complex algorithms efficiently.

TCMP Architecture
16-core Processing Element

Network on Chip (NoC)


A Network on Chip (NoC) is a communication framework designed to
interconnect multiple components, such as processors, memory blocks, and
accelerators, within a System on Chip (SoC) or a multicore processor. Unlike
traditional bus-based architectures, NoC employs network-like principles to
provide scalable, efficient, and high-performance communication among the
components.

Processing units are interconnected via a packet-based network.

Each resource is called a 'tile'.

All resources are organized as rectangular tiles on the chip.

Each tile has an address (X, Y).

Tiles are interconnected by a network of routers.

Communication takes place by packet transmission.
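To make the points above concrete, here is a purely illustrative sketch (the field names are assumptions, not taken from any specific NoC) of a tile address as an (X, Y) pair and a packet carrying source, destination, and payload.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TileAddress:
    x: int  # column of the tile in the rectangular grid
    y: int  # row of the tile in the rectangular grid

@dataclass
class Packet:
    src: TileAddress   # tile that injected the packet
    dst: TileAddress   # tile that should receive the packet
    payload: bytes     # data being transmitted

# A packet sent from tile (0, 0) to tile (3, 2):
pkt = Packet(src=TileAddress(0, 0), dst=TileAddress(3, 2), payload=b"cache line")
print(pkt.dst)  # TileAddress(x=3, y=2)
```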

Building Blocks of NoC


Topology

Routing

Flow Control

Router Micro-Architecture


Key Features of NoC
1. Scalability:

NoC efficiently supports a growing number of cores by providing a scalable communication mechanism.

Unlike buses, it avoids contention and bottlenecks as cores increase.

2. Topology:

NoC uses well-defined topologies to organize interconnections between cores. Common topologies include:

Mesh: Regular grid-like layout.

Torus: Similar to a mesh with wrap-around connections.

Tree: Hierarchical arrangement for structured communication.

Ring: Simple circular structure.

Crossbar: Provides dedicated links between cores.

3. Packet-Based Communication:

Data is transmitted in packets, enabling efficient and flexible communication between components.

4. Routing Algorithms:

Determines the path taken by packets through the network. Examples include:

Deterministic Routing: Fixed path for source-destination pairs.

Adaptive Routing: Dynamically adjusts based on network conditions.

5. Flow Control:

Manages data packet movement to prevent congestion or buffer overflow.

6. Energy Efficiency:

NoC designs prioritize low-power communication by optimizing routing and reducing redundant data transfers.


Topology
Topology in a Network on Chip (NoC) refers to the physical or logical
arrangement of nodes (cores, memory units, or processing elements) and
communication links in the network. It defines how components are
interconnected, determines the routing paths, impacts performance metrics like
latency, bandwidth, and energy consumption, and plays a crucial role in the
scalability of the NoC.

1. 2D-Mesh Topology

Structure: Cores are arranged in a 2D grid, and each core is connected to its four
neighbors (north, south, east, west).

2. Torus Topology

Structure: Similar to mesh, but with wrap-around links that connect edges of the
grid (e.g., top connects to bottom, left to right).
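The difference between a mesh and a torus can be seen in a small neighbor-computation sketch, assuming a WIDTH x HEIGHT grid of tiles addressed by (x, y); the names and grid size here are illustrative.

```python
WIDTH, HEIGHT = 4, 4  # assumed 4x4 grid of tiles

def mesh_neighbors(x, y):
    """Neighbors in a 2D mesh: tiles on the edge have fewer than four links."""
    candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(nx, ny) for nx, ny in candidates
            if 0 <= nx < WIDTH and 0 <= ny < HEIGHT]

def torus_neighbors(x, y):
    """Neighbors in a torus: wrap-around links give every tile exactly four."""
    return [((x + 1) % WIDTH, y), ((x - 1) % WIDTH, y),
            (x, (y + 1) % HEIGHT), (x, (y - 1) % HEIGHT)]

print(mesh_neighbors(0, 0))   # [(1, 0), (0, 1)]                  -> corner tile, 2 links
print(torus_neighbors(0, 0))  # [(1, 0), (3, 0), (0, 1), (0, 3)]  -> wrap-around, 4 links
```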


3. Ring Topology

Structure: Cores are connected in a circular structure, with each core connected
to two neighbors.

4. Star Topology

Structure: All cores are connected to a central hub.


5. Tree Topology

Structure: Nodes are arranged in a hierarchical tree structure, where parent nodes connect to child nodes.

6. Crossbar Topology

Structure: Each node has a direct connection to every other node through a
crossbar switch.


7. Hybrid Topologies

Structure: Combines features of two or more topologies (e.g., mesh + ring) to optimize for specific use cases.

Routing
Routing in a Network on Chip (NoC) refers to the process of determining the path
a data packet takes to travel from a source node to a destination node. Effective
routing is critical for achieving low latency, high throughput, and efficient
utilization of network resources in NoC systems.

Key Objectives of Routing in NoC


1. Minimize Latency:

Reduce the time taken for packets to reach their destination.

2. Maximize Throughput:

Ensure high data transfer rates without congestion.

3. Avoid Deadlocks and Livelocks:

Ensure packets do not get stuck or enter infinite loops in the network.

4. Energy Efficiency:

Reduce power consumption during data transmission.

5. Fault Tolerance:

Provide alternative paths in case of failures or congestion.

Types of Routing Algorithms in NoC


1. Deterministic Routing Algorithms
Description: In deterministic routing, the path between source and destination
is pre-defined and does not change dynamically based on network conditions.
This routing method follows a fixed, predetermined strategy.

Examples:

XY Routing:

In this algorithm, packets are first routed in the X direction (horizontal axis) and then in the Y direction (vertical axis); a minimal code sketch follows this section.

Pros: Simple, deadlock-free, and easy to implement.

Cons: Poor performance under network congestion because it always follows the same fixed path.

Source Routing:

The source node computes the entire route and embeds it in the packet header, so intermediate routers simply follow the specified path.

Pros: Simple and efficient for low-traffic systems.

Cons: Not suitable for dynamic or congested networks.

Advantages:

Simple and easy to implement.

Deadlock-free for simple topologies.

Predictable and easy to analyze.

Disadvantages:

Lacks flexibility in handling network congestion.

May not be optimal under dynamic network conditions.
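Here is a minimal sketch of the XY routing decision mentioned above, assuming a 2D mesh with (x, y) tile addresses; the direction names and the hop-by-hop driver loop are illustrative assumptions, not part of any specific router.

```python
def xy_route(cur, dst):
    """Deterministic XY routing: resolve X (east/west) first, then Y (north/south)."""
    cx, cy = cur
    dx, dy = dst
    if cx < dx:
        return "EAST"
    if cx > dx:
        return "WEST"
    if cy < dy:
        return "NORTH"
    if cy > dy:
        return "SOUTH"
    return "LOCAL"  # packet has arrived; deliver to this tile's core

# Route a packet hop by hop from (0, 0) to (2, 1):
cur, dst = (0, 0), (2, 1)
step = {"EAST": (1, 0), "WEST": (-1, 0), "NORTH": (0, 1), "SOUTH": (0, -1)}
hops = []
while (d := xy_route(cur, dst)) != "LOCAL":
    hops.append(d)
    cur = (cur[0] + step[d][0], cur[1] + step[d][1])
print(hops)  # ['EAST', 'EAST', 'NORTH']
```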

2. Adaptive Routing Algorithms


Description: Adaptive routing algorithms dynamically adjust the routing path
based on current network conditions, such as congestion, link availability, or
network failures. These algorithms aim to optimize communication based on
the changing state of the network.

Examples:

West-First Routing:

In this algorithm, a packet completes all of its westward hops first, before proceeding in any other direction (such as north, south, or east). This approach avoids deadlocks by restricting certain turns (a sketch follows this section).

Pros: Avoids deadlocks and supports dynamic changes.

Cons: Slightly increases latency as packets take detours.

Odd-Even Turn Model:

In this model, certain turns are allowed or disallowed depending on whether the column of the current router is odd or even, which breaks the cyclic dependencies that cause deadlock.

Pros: Deadlock-free with a simple and efficient design.

Cons: Routing flexibility is limited in certain cases.

Adaptive XY Routing:

Uses a routing strategy that adapts dynamically based on congestion or network load. In cases of congestion in one direction, packets can be rerouted to alternate paths.

Pros: Dynamic and efficient under varying traffic conditions.

Cons: Increased complexity due to the dynamic nature of routing decisions.

Advantages:

More flexible and adaptive to network conditions.

Can avoid congestion and balance load more effectively.

Increased fault tolerance.

Disadvantages:

More complex and resource-intensive compared to deterministic
algorithms.

Potential for longer routing paths in congested networks.
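As a sketch of the turn restriction behind west-first routing referred to above (illustrative only; the direction names and the congestion dictionary are assumptions), the router first exhausts all westward hops, and only afterwards may it choose adaptively among the remaining productive directions.

```python
def west_first_candidates(cur, dst):
    """West-first routing: if the destination lies to the west, the only legal
    choice is WEST; otherwise any productive direction may be chosen adaptively."""
    cx, cy = cur
    dx, dy = dst
    if dx < cx:
        return ["WEST"]              # must finish all westward hops first
    options = []
    if dx > cx:
        options.append("EAST")
    if dy > cy:
        options.append("NORTH")
    if dy < cy:
        options.append("SOUTH")
    return options or ["LOCAL"]

def pick_least_congested(options, congestion):
    """Adaptive choice among legal directions, e.g. by a per-port congestion estimate."""
    return min(options, key=lambda d: congestion.get(d, 0))

# Destination is to the north-east; suppose the EAST port is congested:
legal = west_first_candidates((1, 1), (3, 3))
print(pick_least_congested(legal, congestion={"EAST": 5, "NORTH": 1}))  # NORTH
```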

3. Hybrid Routing Algorithms


Description: Hybrid routing combines both deterministic and adaptive
strategies, selecting the routing method based on the current state of the
network. It aims to achieve the best of both worlds: deterministic routing's
simplicity and adaptive routing's flexibility.

Examples:

Region-Based Routing:

Combines deterministic routing within specific regions of the network and adaptive routing between regions. For intra-region communication, packets follow a simple deterministic path, and for inter-region communication, adaptive routing is used.

Pros: Balances simplicity and flexibility, efficiently handles different traffic patterns.

Cons: More complex to implement and manage.

Hybrid XY/Adaptive Routing:

This algorithm uses XY routing under normal conditions, but if congestion is detected, it switches to an adaptive routing strategy (a sketch follows this section).

Pros: Provides a fallback mechanism in case of congestion.

Cons: Increased overhead due to monitoring and adaptive decisions.

Advantages:

Offers flexibility to handle both low and high-traffic conditions effectively.

Combines the best features of deterministic and adaptive routing.

Disadvantages:

Increased design complexity.

Requires a decision-making mechanism to choose between routing
strategies.
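A minimal sketch of the hybrid idea described above follows; the congestion threshold, its value, and the function name are assumptions made for illustration. The router uses plain XY routing while the preferred port is lightly loaded and falls back to an adaptive choice once the threshold is crossed.

```python
CONGESTION_THRESHOLD = 4  # assumed threshold (e.g., flits waiting on an output port)

def hybrid_route(cur, dst, congestion):
    """XY routing in normal conditions; adaptive detour once the XY port is congested."""
    cx, cy = cur
    dx, dy = dst
    # Deterministic XY choice: X first, then Y.
    if cx != dx:
        preferred = "EAST" if dx > cx else "WEST"
    elif cy != dy:
        preferred = "NORTH" if dy > cy else "SOUTH"
    else:
        return "LOCAL"
    if congestion.get(preferred, 0) <= CONGESTION_THRESHOLD:
        return preferred
    # Fallback: choose the least congested among all productive directions.
    productive = []
    if dx > cx: productive.append("EAST")
    if dx < cx: productive.append("WEST")
    if dy > cy: productive.append("NORTH")
    if dy < cy: productive.append("SOUTH")
    return min(productive, key=lambda d: congestion.get(d, 0))

# EAST is heavily congested on the way to (2, 2), so the router detours via NORTH:
print(hybrid_route((0, 0), (2, 2), congestion={"EAST": 9, "NORTH": 0}))  # NORTH
```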

Flow Control
Flow control in Network on Chip (NoC) refers to the management of data
transmission between nodes in a network to ensure that packets are delivered
efficiently without overwhelming the network resources. It plays a crucial role in
regulating the flow of data, preventing congestion, and ensuring that packets are
delivered in an orderly and reliable manner.

Effective flow control helps avoid issues like packet loss, delays, or deadlocks,
which can arise when multiple packets are competing for the same resources or
when a node or link becomes overloaded.

Types of Flow Control in NoC


Flow control mechanisms are generally categorized based on how they handle
packet transmission and how they regulate resource allocation in the network. The
main types are credit-based flow control, acknowledgment-based flow control,
and buffer-based flow control.

1. Credit-Based Flow Control


Description: In credit-based flow control, the sender is allowed to send a
packet only if the receiver has enough available space to store it. The receiver
provides "credits" to the sender, indicating how many packets it can accept. If
the receiver’s buffer space becomes full, it will stop granting credits until
space becomes available again.

How It Works:

The receiver sends a credit message to the sender to indicate how much
buffer space is available.

The sender can send a packet only if it has received an available credit.
After sending a packet, the sender will wait for further credits before
sending additional packets.

Advantages:


Prevents Overload: Avoids overwhelming the receiver by ensuring that the sender transmits packets only when the receiver is ready to accept them.

Efficient Resource Management: Ensures that buffer space is used efficiently, reducing the risk of packet loss due to buffer overflow.

Disadvantages:

Increased Overhead: The credit message requires additional communication, which can increase the overhead, especially in networks with high data traffic.

Latency: Waiting for credits before sending data can introduce additional
delays, particularly if the network is congested.
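The credit exchange described above can be sketched as a toy model (class and method names are assumptions, and this is not cycle-accurate): the sender starts with as many credits as the receiver has buffer slots, consumes one credit per packet, and gets a credit back each time the receiver frees a slot.

```python
from collections import deque

class CreditLink:
    """Toy credit-based link: one sender, one receiver buffer of fixed depth."""

    def __init__(self, buffer_slots: int):
        self.credits = buffer_slots      # credits currently held by the sender
        self.rx_buffer = deque()         # receiver's input buffer

    def send(self, packet) -> bool:
        """Sender side: transmit only if a credit is available."""
        if self.credits == 0:
            return False                 # must wait for a credit from the receiver
        self.credits -= 1
        self.rx_buffer.append(packet)
        return True

    def consume(self):
        """Receiver side: process one packet and return a credit to the sender."""
        packet = self.rx_buffer.popleft()
        self.credits += 1                # credit flows back upstream
        return packet

link = CreditLink(buffer_slots=2)
print(link.send("p0"), link.send("p1"), link.send("p2"))  # True True False (out of credits)
link.consume()                                            # frees a slot, returns a credit
print(link.send("p2"))                                    # True
```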

2. Acknowledgment-Based Flow Control


Description: In acknowledgment-based flow control, the sender waits for an
acknowledgment (ACK) from the receiver after sending a packet. The sender
can only transmit a new packet after receiving an ACK for the previous one,
thus ensuring that the receiver has processed the data before more is sent.

How It Works:

The sender sends a packet and waits for an acknowledgment from the
receiver. Once the acknowledgment is received, the sender can send the
next packet.

The receiver confirms the receipt of the packet, allowing the sender to
proceed with the transmission of additional data.

Advantages:

Reliability: Guarantees that each packet is received and processed before the next one is transmitted, ensuring reliability in data delivery.

Prevents Buffer Overflow: By waiting for an acknowledgment before sending more packets, the sender ensures that the receiver's buffer is not overwhelmed.

Disadvantages:


High Latency: The need to wait for an acknowledgment for each
packet can result in significant delays, especially in high-speed networks
or networks with long communication paths.

Reduced Throughput: Since the sender can only send one packet at a
time and must wait for the acknowledgment, the overall throughput of the
network may be reduced.
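The stop-and-wait behaviour described above can be sketched as follows (illustrative only; the class and method names are assumptions): the sender may transmit a new packet only after the previous one has been acknowledged.

```python
class AckBasedSender:
    """Toy acknowledgment-based sender: one outstanding packet at a time."""

    def __init__(self):
        self.waiting_for_ack = False

    def send(self, packet) -> bool:
        if self.waiting_for_ack:
            return False                 # blocked until the previous packet is ACKed
        print(f"sending {packet}")
        self.waiting_for_ack = True
        return True

    def on_ack(self):
        """Called when the receiver acknowledges the outstanding packet."""
        self.waiting_for_ack = False

sender = AckBasedSender()
print(sender.send("p0"))   # True  - transmitted
print(sender.send("p1"))   # False - must wait for the ACK of p0
sender.on_ack()
print(sender.send("p1"))   # True
```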

3. Buffer-Based Flow Control


Description: In buffer-based flow control, the network buffers are used to
manage the flow of data. Each node (router or core) has a finite amount of
buffer space, and the flow control mechanism ensures that packets are only
sent if there is enough space in the network buffers to accommodate them.

How It Works:

The sender checks whether there is enough available buffer space at the
next hop or destination. If space is available, the packet is transmitted. If
space is unavailable, the sender must wait until the buffer space is
cleared.

Backpressure: If a buffer at an intermediate node or destination becomes full, the sender is notified, and it stops sending data until the buffer has space again.

Advantages:

Prevents Congestion: By using buffer space efficiently and notifying the sender when space is unavailable, buffer-based flow control helps avoid congestion in the network.

Simple and Effective: It is a straightforward and widely used method for managing the flow of packets, and it works well in many types of NoC architectures.

Disadvantages:

Potential for Deadlock: If the buffers are not managed properly, deadlocks
can occur, where packets are stuck waiting for space that is never freed.


Buffer Size Limitations: If the buffer sizes are too small, they can fill up
quickly and cause delays in packet transmission, especially during high
network traffic.
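A small sketch of the backpressure idea above (a toy model; the names are assumptions): each hop exposes how many buffer slots are free, and an upstream node forwards a packet only when the downstream buffer has room.

```python
from collections import deque

class RouterBuffer:
    """Toy input buffer of a downstream router with a fixed number of slots."""

    def __init__(self, slots: int):
        self.slots = slots
        self.queue = deque()

    def has_space(self) -> bool:
        return len(self.queue) < self.slots

    def accept(self, packet):
        self.queue.append(packet)

def forward(packet, downstream: RouterBuffer) -> bool:
    """Upstream node: forward only if the next hop signals free buffer space;
    otherwise hold the packet (backpressure propagates towards the source)."""
    if not downstream.has_space():
        return False
    downstream.accept(packet)
    return True

next_hop = RouterBuffer(slots=1)
print(forward("p0", next_hop))  # True  - buffer had a free slot
print(forward("p1", next_hop))  # False - the full buffer exerts backpressure
```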

Flow Control Techniques


In NoCs, flow control can be implemented in various ways to optimize packet
delivery and resource usage:

1. Hop-by-Hop Flow Control


Description: In hop-by-hop flow control, each intermediate router makes flow
control decisions independently. It checks if it can forward the packet based
on the availability of buffer space at the next hop.

Advantages:

Localized Decisions: Each router makes independent decisions, which simplifies the flow control process and avoids bottlenecks at a central location.

Low Latency: Reduces the need for global coordination or acknowledgments, leading to lower latency in packet forwarding.

Disadvantages:

Limited Coordination: Since each router operates independently, there might be inefficiencies in global network resource management, especially in networks with dynamic traffic patterns.

2. End-to-End Flow Control


Description: In end-to-end flow control, the sender and receiver work
together to regulate packet transmission. The sender can only transmit a new
packet once the receiver has processed the previous one.

Advantages:

Global Coordination: Ensures that the entire communication path from the
sender to the receiver is considered for flow control.


Higher Throughput: As there is global coordination, it can maximize
throughput by optimizing the flow control process.

Disadvantages:

Higher Latency: It introduces more delays because the sender has to wait
for an acknowledgment or status update from the receiver before
transmitting more data.

More Complex: End-to-end flow control is more complex as it requires coordination between the sender and receiver, potentially increasing the overhead.

3. Virtual Channel Flow Control


Description: In virtual channel flow control, multiple virtual channels are used
to send packets. This allows for better isolation between different types of
traffic and prevents interference or congestion between different classes of
traffic.

Advantages:

Traffic Isolation: Different types of traffic (e.g., high-priority and low-priority) can be managed independently, improving QoS (Quality of Service).

Improved Resource Utilization: Virtual channels allow for better distribution of network resources across multiple traffic streams.

Disadvantages:

Increased Complexity: Virtual channel flow control requires managing multiple channels, increasing the complexity of the router design.

Overhead: The use of multiple channels increases the overhead due to the
need for additional buffer space and control logic.
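A minimal sketch of virtual channel flow control as described above (illustrative; the number of VCs, buffer depths, and class names are assumptions): one physical link is shared by several independently buffered virtual channels, so a blocked traffic class does not stall the others.

```python
from collections import deque

class VirtualChannelLink:
    """Toy link with several virtual channels multiplexed over one physical channel."""

    def __init__(self, num_vcs: int, slots_per_vc: int):
        self.vcs = [deque() for _ in range(num_vcs)]
        self.slots_per_vc = slots_per_vc

    def enqueue(self, vc: int, packet) -> bool:
        """Each VC has its own buffer, so one full VC does not block the others."""
        if len(self.vcs[vc]) >= self.slots_per_vc:
            return False
        self.vcs[vc].append(packet)
        return True

    def transmit_one(self):
        """Send one packet from the first non-empty VC (a real router would use
        round-robin or priority arbitration across the VCs)."""
        for vc, queue in enumerate(self.vcs):
            if queue:
                return vc, queue.popleft()
        return None

link = VirtualChannelLink(num_vcs=2, slots_per_vc=1)
print(link.enqueue(0, "low-priority"))    # True
print(link.enqueue(0, "low-priority-2"))  # False - VC 0 is full...
print(link.enqueue(1, "high-priority"))   # True  - ...but VC 1 still accepts traffic
print(link.transmit_one())                # (0, 'low-priority')
```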
