HSUPA Parameter Description


Copyright Huawei Technologies Co., Ltd. 2009. All rights reserved.
No part of this document may be reproduced or transmitted in any form or by any means without prior written consent of Huawei Technologies Co., Ltd.

Trademarks and Permissions


Huawei and other Huawei trademarks are trademarks of Huawei Technologies Co., Ltd. All other trademarks and trade names mentioned in this document are the property of their respective holders.

Notice
The information in this document is subject to change without notice. Every effort has been made in the preparation of this document to ensure accuracy of the contents, but all statements, information, and recommendations in this document do not constitute a warranty of any kind, express or implied.


Contents
1 Change History
2 HSUPA Introduction
3 HSUPA Overview
  3.1 HSUPA Network Structure
    3.1.1 Channel Introduction
    3.1.2 Network Elements Involved
  3.2 HSUPA Network Functions
    3.2.1 Overview
    3.2.2 Fast Scheduling
    3.2.3 Fast HARQ
    3.2.4 Flow and Congestion Control on the Iub Interface
    3.2.5 CE Resource Management
    3.2.6 Mobility Management
    3.2.7 Power Control
    3.2.8 Load Control
4 HSUPA Algorithms
  4.1 Overview of HSUPA Related Algorithms
  4.2 HSUPA Scheduling
    4.2.1 Overview of HSUPA Scheduling
    4.2.2 Process of the Scheduling Algorithm
    4.2.3 User Queuing
    4.2.4 AG UP Processing
    4.2.5 RG UP Processing
    4.2.6 GBR Processing
    4.2.7 MBR Processing
  4.3 Dynamic CE Resource Management
  4.4 Mapping of Service to HSUPA
  4.5 HSUPA over Iur
  4.6 HSUPA DCCC
  4.7 HSUPA Adaptive Retransmission
  4.8 Uplink Macro Diversity Intelligent Receiving
5 HSUPA Parameters
  5.1 Description
  5.2 Values and Ranges
6 Reference Documents


1 Change History
The change history provides information on the changes in different document versions.

Document and Product Versions

Document Version      RAN Version
01 (2009-03-30)       11.0
Draft (2009-03-10)    11.0
Draft (2009-01-15)    11.0

This document is based on the BSC6810 and 3900 series NodeBs. The available time of each feature is subject to the RAN product roadmap. There are two types of changes, which are defined as follows:
- Feature change: refers to a change in the HSUPA feature.
- Editorial change: refers to a change in information that was inappropriately described, or the addition of information that was not described in the earlier version.

01 (2009-03-30)
This is the document for the first commercial release of RAN11.0. Compared with draft (2009-03-10), this issue optimizes the description.

Draft (2009-03-10)
This is the draft of the document for RAN11.0. Compared with draft (2009-01-15), draft (2009-03-10) optimizes the description.

Draft (2009-01-15)
This is the draft of the document for RAN11.0. Compared with 03 (2008-11-30) of RAN10.0, draft (2009-01-15) incorporates the following changes:

Change Type        Change Description                                                      Parameter Change
Feature change     Section 4.8 "Uplink Macro Diversity Intelligent Receiving" is added.    None
Feature change     Section 4.4 "Mapping of Service to HSUPA" is added.                     None
Feature change     Section 4.3 "Dynamic CE Resource Management" is optimized.              None
Feature change     Section 4.2 "HSUPA Scheduling" is optimized.                            None
Editorial change   The document is reorganized.                                            None


2 HSUPA Introduction
HSUPA is an important feature of 3GPP R6. As an uplink (UL) high-speed data transmission solution, HSUPA provides a theoretical maximum rate of 5.73 Mbit/s on the Uu interface. The HSUPA specifications are as follows:

- 20 HSUPA users per cell
- 60 HSUPA users per cell
- 96 HSUPA users per cell

All three specifications are optional.

Intended Audience
This document is intended for:

- System operators who need a general understanding of HSUPA.
- Personnel working on Huawei products or systems.

Impact
- Impact on System Performance
  HSUPA greatly increases transmission rates and cell throughput, and reduces transmission delay.
- Impact on Other Features
  HSUPA does not affect other features. HSUPA requires the support of power control, load control, admission control, and mobility management.

Network Elements Involved


Table 2-1 lists the network elements (NEs) involved in HSUPA.

Table 2-1 NEs involved in HSUPA: UE, NodeB, RNC, MSC Server, MGW, SGSN, GGSN, HLR

UE = User Equipment, RNC = Radio Network Controller, MSC Server = Mobile Service Switching Center Server, MGW = Media Gateway, SGSN = Serving GPRS Support Node, GGSN = Gateway GPRS Support Node, HLR = Home Location Register


3 HSUPA Overview
After the introduction of HSDPA, the DL transmission rate has been greatly enhanced. To better meet the rapidly growing demand for data services, 3GPP introduced HSUPA in Release 6. By applying fast scheduling, fast HARQ, and a shorter TTI, HSUPA provides a higher UL capacity for the WCDMA network, greatly increases the transmission rate of user data, and reduces the transmission delay, thus improving user experience.

The main features of HSUPA are as follows:

- Fast Scheduling: In HSUPA, fast scheduling allocates system resources in the NodeB through signaling at the physical layer. By exploiting the burstiness of fast transmission, the scheduler performs rapid resource allocation between UEs to adapt to cell interference variations. This improves user experience and increases system capacity.
- Fast HARQ: Similar to HSDPA, HSUPA introduces HARQ, which allows the NodeB to rapidly request retransmission of erroneously received data. HARQ reduces retransmission at the RLC layer and shortens the transmission delay. The NodeB performs soft combining of the erroneously received data and the retransmitted data before decoding. The combining makes full use of the information transmitted each time and thus increases the decoding success rate.
- Shorter TTI: HSUPA allows a minimum TTI of 2 ms, which further reduces the transmission delay and scheduling delay.
- Macro Diversity Combining: HSUPA supports soft handover. The cells in the active set can receive data from the UE. Macro diversity combining (MDC) increases the probability of proper data reception, improves the quality of data transmission, and greatly enhances the service stability of users at the cell border.

To support these features, new MAC entities (MAC-e/es) are introduced in the UE, NodeB, and RNC. The MAC-e is located in the NodeB, and the MAC-es is located in the RNC.

On the WCDMA network, signals of different users in the UL are distinguished through scrambling codes. The signals are not orthogonal to each other, thus producing interference. Signals of different users in the DL are distinguished through channelization codes. The signals in a cell are orthogonal to each other, so there is little interference between users. In HSUPA, data transmission of each user may increase the UL interference and thus the UL load. The NodeB needs to allocate UL resources among users. The more resources the NodeB allocates to a user, the higher the transmission rate of the user, and the more this user contributes to the increase of the UL load.

HSUPA provides high transmission rates only by allocating more power, to increase the signal-to-interference ratio of the received signals, and by allocating more codes for transmission, to enable the physical layer to carry more bits. Unlike HSDPA, which introduces the AMC function, HSUPA uses the R99 closed-loop power control of the DPCH/DPCCH. This avoids the impact of the near-far effect on data transmission of users at the cell border.

3.1 HSUPA Network Structure


3.1.1 Channel Introduction
To support the HSUPA technology, 3GPP defines a transport channel (E-DCH) and five physical channels (E-DPCCH, E-DPDCH, E-HICH, E-RGCH, and E-AGCH).

Figure 3-1 HSUPA physical channels (uplink from the UE to the NodeB: DPCCH, E-DPCCH, E-DPDCH; downlink from the NodeB to the UE: DPCH/F-DPCH, E-AGCH, E-RGCH, E-HICH)

The TTI of the E-DCH can be 10 ms or 2 ms. The E-DCH is mapped onto the E-DPDCH or E-DPCCH. When the TTI is 10 ms, the E-DCH provides better UL coverage performance; when the TTI is 2 ms, the E-DCH provides a higher transmission rate.

The E-DPDCH carries the data transmitted in the UL. The SF of the E-DPDCH varies between SF256 and SF2 with the data transmission rate. A maximum of four E-DPDCHs can be used for parallel transmission; in this case, the SF of two E-DPDCHs is SF2, and that of the other two E-DPDCHs is SF4.

The E-DPCCH carries control information related to data transmission in the UL. The control information consists of the E-DCH transport format combination indicator (E-TFCI), the retransmission sequence number (RSN), and the Happy Bit. The SF of the E-DPCCH is fixed to 256.

To implement the HARQ function, the E-HICH is introduced in the DL. The E-HICH carries the retransmission requests sent by the NodeB.

The HSUPA scheduling control information is carried by the DL E-AGCH and E-RGCH. The E-AGCH is a shared channel, which carries the maximum permissible E-DPDCH/DPCCH power ratio, that is, the HSUPA grant information. The E-RGCH is a dedicated channel, which is used to indicate relative grants that increase or decrease the maximum permissible E-DPDCH/DPCCH power ratio.

3.1.2 Network Elements Involved


HSUPA affects the RNC, NodeB, and UE.

On the control plane, the RNC processes the signaling of HSUPA cell configuration, E-DCH related channel configuration, and mobility management. On the user plane, the RLC and MAC-d layers of the RNC remain unchanged. Under the MAC-d, the MAC-es is added to implement MDC, reordering, and decapsulation of MAC-es PDUs. In the NodeB, the MAC-e is added to implement HSUPA scheduling and HARQ management.

To support HSUPA, 3GPP defines six UE categories. These UEs support different peak rates at the MAC layer, ranging from 711 kbit/s to 5.74 Mbit/s. Only a UE of category 6 supports the peak rate of 5.74 Mbit/s.

E-DCH Category   Maximum Number of E-DCH Codes Transmitted   Peak Rate (Mbit/s) in 10 ms E-DCH TTI   Peak Rate (Mbit/s) in 2 ms E-DCH TTI
Category 1       1 x SF4                                     0.7                                     Not supported
Category 2       2 x SF4                                     1.4                                     1.399
Category 3       2 x SF4                                     1.4                                     Not supported
Category 4       2 x SF2                                     2.0                                     2.886
Category 5       2 x SF2                                     2.0                                     Not supported
Category 6       2 x SF2 + 2 x SF4                           2.0                                     5.74

NOTE: When four codes are transmitted in parallel, two codes shall be transmitted with SF2 and two with SF4.


For details, see the 3GPP TS 25.306 protocol. Huawei RAN supports all the UE categories. HSUPA 2ms TTI and parallel transmission of four E-DPDCHs are optional.
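The UE category table above can be kept as a small lookup structure for quick reference. The following Python sketch is illustrative only: the values are those listed above (from 3GPP TS 25.306), and the dictionary and helper names are ours.

```python
from typing import Optional

# Illustrative lookup of E-DCH UE categories; values copied from the table above.
# None means the TTI is not supported by that category.
EDCH_CATEGORIES = {
    1: {"codes": "1 x SF4",           "peak_10ms": 0.7, "peak_2ms": None},
    2: {"codes": "2 x SF4",           "peak_10ms": 1.4, "peak_2ms": 1.399},
    3: {"codes": "2 x SF4",           "peak_10ms": 1.4, "peak_2ms": None},
    4: {"codes": "2 x SF2",           "peak_10ms": 2.0, "peak_2ms": 2.886},
    5: {"codes": "2 x SF2",           "peak_10ms": 2.0, "peak_2ms": None},
    6: {"codes": "2 x SF2 + 2 x SF4", "peak_10ms": 2.0, "peak_2ms": 5.74},
}

def peak_rate_mbps(category: int, tti_ms: int) -> Optional[float]:
    """Return the MAC-layer peak rate (Mbit/s) for a UE category and TTI."""
    entry = EDCH_CATEGORIES[category]
    return entry["peak_2ms"] if tti_ms == 2 else entry["peak_10ms"]

print(peak_rate_mbps(6, 2))   # 5.74
print(peak_rate_mbps(1, 2))   # None (not supported)
```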

3.2 HSUPA Network Functions


3.2.1 Overview
HSUPA algorithms consist of control plane algorithms and user plane algorithms. The control plane algorithms are responsible for setting up and maintaining E-DCH connections and managing cell resources. The user plane algorithms are responsible for transmitting service data streams and allocating related resources. Figure 3-2 shows the structure of HSUPA control plane algorithms based on the service connection setup and maintenance procedure. Figure 3-2 Structure of HSUPA control plane algorithms

The bearer scheme is used by the network side to configure the RAB during the setup of a service connection in the cell. It configures bearer channels for the UE based on the requested service type, service rate, UE capability, and cell capability.

The access control algorithm, a sub-algorithm of the load control algorithm, checks whether the current resources of the cell are sufficient for this service connection. If the resources are insufficient, the algorithm triggers intelligent access control and takes the corresponding actions. If the resources are sufficient, the service connection can be set up.

The mobility management algorithm is applicable to the established E-DCH connection. Based on the channel quality of the UE, the algorithm decides whether cells need to be added to the E-DCH active set or whether the established E-DCH connection needs to be switched from the serving cell to another cell to provide better services.

The load control algorithm adjusts the resources configured for the established radio connections when the cell load increases, so as to avoid cell overload.

The power control algorithm is responsible for outer loop power control of the E-DCH and power control of the DL physical layer signaling channels related to the E-DCH.

The HSUPA user plane algorithms are responsible for dynamic resource allocation based on data transmission. Figure 3-3 shows the structure of HSUPA user plane algorithms based on the E-DCH data processing procedure.

Figure 3-3 Structure of HSUPA user plane algorithms

The service data carried on the E-DCH is passed to the RLC and MAC-d layers on the UE side for processing and encapsulation. This process remains unchanged after the E-DCH is introduced. The newly added MAC-es and MAC-e then encapsulate the MAC-d PDUs. The MAC-e performs the E-TFC selection function to calculate the grants required for different transmission rates and selects an appropriate transmission rate according to the grant given by NodeB scheduling. The MAC-e performs the HARQ function to transmit and retransmit the MAC-e PDUs. Each HARQ process carries one PDU and retransmits the PDU according to the retransmission request sent by the NodeB.

After the NodeB receives the MAC-e PDUs from the UE over the air interface, the MAC-e performs the HARQ function to implement data retransmission and soft combining in conjunction with the UE. The MAC-e decapsulates the properly received MAC-e PDUs and then sends them to the RNC over the Iub interface. The E-DCH flow control algorithm performs flow control on the Iub interface to avoid congestion. The MAC-e in the NodeB performs the CE resource management function to manage the hardware resources for UL data reception in the NodeB.

For UEs in the soft handover state, the MAC-es in the RNC performs the MDC function to process the MAC-es PDUs properly received in multiple cells in the active set and then sorts and decapsulates the MAC-es PDUs.

The CE management algorithm in the MAC-e dynamically allocates and controls the hardware resources for demodulation and reception of UL data in the NodeB. The scheduling algorithm determines the UEs for data transmission and allocates air interface resources for UL transmission. The Iub bandwidth and the quantity of available hardware resources may affect transmission rates.

Figure 3-4 Relations of HSUPA algorithms


(Figure 3-4 shows dynamic CE resource management and flow control providing the MAC-e scheduler with the CE resources, SGmax, and Iub bandwidth for UEs.)

3.2.2 Fast Scheduling


On the network, owing to the randomness of UE activities and services, the transmission rate required by each UE constantly changes. If resource allocation is slow (for example, in R99, resource configuration is performed through signaling at the RLC layer in the RNC), the resources reserved for high-speed transmission are still allocated to a UE even though the transmission rate required by the UE has decreased. In this case, when other UEs request high transmission rates, resources cannot be allocated to them in time, and the utilization of system resources is reduced. As a result, UEs requesting high transmission rates cannot obtain resources in time, while UEs at low transmission rates occupy a large share of UL air interface resources even though their data transmission is complete. Fast scheduling rapidly adjusts the UL resource allocation and makes good use of available resources. This increases the system capacity and improves user experience.

UL scheduling is intended to allocate UL resources to UEs according to their demands for data transmission under a specific cell load. The UL load is measured according to the amount of UL interference. In the UL, interference is generated between UEs during data transmission, and cells need to keep the amount of interference generated by data transmission of all UEs within a specific range. The air interface resources allocated by UL scheduling are therefore the interference, or UL load, allowed during data transmission. Under given channel conditions, the amount of interference allowed by the scheduler is positively correlated with the transmission rate: the higher the UL transmission rate, the higher the signal-to-interference ratio required by the NodeB. Therefore, the higher the power of the signals received by the NodeB from a UE, the higher the cell UL load (signals of different UEs are distinguished through scrambling codes and are not orthogonal to each other, thus producing interference).

HSUPA fast scheduling implements UL resource allocation mainly by controlling the maximum permissible signal-to-interference ratio on the NodeB side. The grant for an HSUPA user is the maximum power ratio of the data channel to the dedicated control channel. The scheduler in the NodeB estimates this maximum power ratio based on the amount of interference allowed for the UE and the signal-to-interference ratio of the DPCCH, and assigns the power ratio as a grant to the UE. HSUPA fast scheduling rapidly adjusts the grants to UEs through signaling at the physical layer and controls the UL interference caused by data transmission of each UE. When some UEs stop transmitting data, the scheduler reclaims the resources of these UEs and rapidly allocates them to other UEs. In this way, UEs make full use of UL resources, thus increasing the system capacity and improving user experience. In comparison, UL interference control in R99 is implemented through signaling at the RLC layer in the RNC. In that case, the delay is large and a larger UL interference margin needs to be reserved, so system performance is restricted.

HSUPA also allows UEs to transmit UL data in non-scheduled mode. The network side performs configuration through signaling at the RLC layer to allow UEs to transmit data at any rate that does not exceed the peak rate, without the restriction of resource allocation based on fast scheduling. Non-scheduled transmission is preferred for services with a stable traffic volume, a low rate, and a high requirement on transmission delay (such as VoIP). The UL transmission rate of these services is stable, so the air interface resources required are relatively stable. These services, however, are delay sensitive; if the allocated air interface resources change frequently, the delay or delay variation increases. The non-scheduled mode ensures the QoS of such services and reduces the signaling flow of fast scheduling at the physical layer. For details, see section 4.2 HSUPA Scheduling.

3.2.3 Fast HARQ


The HARQ principles of HSUPA are the same as those of HSDPA. Fast HARQ of HSUPA enables fast retransmission at the physical layer, which reduces the retransmission delay. Fast HARQ also enables combining of the retransmitted data, making full use of the information transmitted each time, which improves transmission efficiency.

In each TTI, after receiving data from a UE, the NodeB sends a success message or a retransmission request to the UE. The UE retransmits the data upon receiving the retransmission request. When receiving the retransmitted data, the NodeB combines the retransmitted data with the previously received data before decoding, thus making full use of the information transmitted each time and increasing the probability of proper decoding. The only soft combining mode supported by HSUPA HARQ is incremental redundancy.

HSUPA supports multiple HARQ processes for transmission. The number of HARQ processes depends on the length of the E-DCH TTI. When the TTI is 10 ms, there are four HARQ processes, and thus the round-trip time (RTT) of HARQ is 40 ms. When the TTI is 2 ms, there are eight HARQ processes, and thus the RTT of HARQ is 16 ms.
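As a quick sanity check on the figures above, the HARQ round-trip time is simply the number of HARQ processes multiplied by the TTI length; the following snippet is purely illustrative.

```python
# HARQ RTT = number of HARQ processes x TTI length (figures quoted above).
for tti_ms, processes in ((10, 4), (2, 8)):
    print(f"TTI {tti_ms} ms, {processes} HARQ processes -> HARQ RTT {processes * tti_ms} ms")
# TTI 10 ms, 4 HARQ processes -> HARQ RTT 40 ms
# TTI 2 ms, 8 HARQ processes -> HARQ RTT 16 ms
```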

3.2.4 Flow and Congestion Control on the Iub Interface


HSUPA significantly increases data transmission rates on the air interface. If the NodeB cannot forward the data received on the air interface to the RNC through the Iub interface, a large amount of data accumulates, leading to packet loss and performance deterioration. Thus, it is necessary to perform flow and congestion control on the Iub interface and to adjust the data rates on the air interface according to the Iub bandwidth.

Flow and congestion control on the Iub interface estimates the available bandwidth on the Iub interface by monitoring the data transmission delay and the packet loss rate on the Iub interface. It also estimates the load on the Iub interface by monitoring, on the NodeB side, the amount of data waiting to be transmitted on the Iub interface. When the amount of data to be transmitted on the Iub interface and the load on the Iub interface increase, Iub flow control limits the UL transmission rates through HSUPA fast scheduling and reduces the amount of data received by the NodeB on the air interface.

For details, see Transmission Resource Management Parameter Description.

3.2.5 CE Resource Management


CE resource management manages the hardware resources used for UL data reception in the NodeB. The NodeB requires specific hardware resources for UL data reception. The resources used for UL transmission on the E-DCH are related to the SF and the quantity of physical channels. On the DCH, the quantity and SF of physical channels are assigned through signaling at the RLC layer and do not change in each TTI, so the required hardware resources are relatively stable. On the E-DCH, the quantity and SF of physical channels change with the amount of resources allocated during scheduling and with the data rates in each TTI.

The NodeB needs to allocate appropriate hardware resources for UL data reception on each E-DCH connection. If the hardware resources allocated by the NodeB are insufficient, data reception fails, thus affecting system capacity and user experience. If the NodeB allocates excessive hardware resources, the utilization of NodeB hardware resources decreases, thus affecting system performance.

The Huawei NodeB allocates appropriate CE resources for UL data reception on the E-DCH according to the maximum E-DCH transmission rate granted by the cell during scheduling. This meets the requirement for UL data reception and avoids a waste of CE resources. For details, see section 4.3 Dynamic CE Resource Management.

3.2.6 Mobility Management


The E-DCH supports soft handover. The UE sets up connections with the cells in the E-DCH active set, and these cells receive UL data from the UE. Multiple cells can simultaneously receive UL data from the UE, thus increasing the probability of proper data reception. The RNC performs the MDC function to combine the data received by the cells in the active set. The more cells in the E-DCH active set, the greater the probability of proper UL data reception, but also the more NodeB hardware resources consumed, because each cell in the active set receives and demodulates the UL data. Operators can adjust the number of cells in the E-DCH active set through parameter settings.

Huawei RAN supports handover between an HSUPA cell and an R99 cell and inter-RAT handover between an HSUPA cell and a 2G cell. For details, see Handover Parameter Description.

3.2.7 Power Control


The power of the HSUPA UL channels E-DPCCH and E-DPDCH is based on the power of the DPCCH: it has a power offset relative to the DPCCH power. The power control mode of the DPCCH is the same as that in R99. The DPCCH adjusts the transmission power through fast closed loop power control so that the signal-to-interference ratio of the DPCCH approaches the target value.

Like the DCH, the E-DCH uses outer loop power control. The DCH adjusts the target value of the signal-to-interference ratio of the DPCCH by estimating the block error rate of transport blocks so as to improve the QoS. The E-DCH also adjusts the target value of the signal-to-interference ratio of the DPCCH to improve the QoS. The purpose of E-DCH outer loop power control, however, is to ensure the target number of retransmissions and the retransmission block error rate (RBLER).

Real-time services use the RBLER as the target of E-DCH outer loop power control. Real-time services are delay sensitive: after several retransmissions at the physical layer, the maximum permissible delay of the transport blocks is reached, and if the transport blocks still cannot be properly received, the QoS of real-time services is affected. Controlling the RBLER controls the proportion of transport blocks whose transmission delay exceeds the maximum permissible delay, thus ensuring the QoS of services. BE services use the number of HARQ retransmissions as the target of E-DCH outer loop power control.

HSUPA power control also includes the power control of the DL physical layer signaling and indication channels: the E-RGCH, E-AGCH, and E-HICH. For details, see Power Control Parameter Description.

3.2.8 Load Control


Access control determines whether an E-DCH connection can be set up under the precondition that the service quality is guaranteed. The determination is based on the status of cell resources and the Iub/Iur congestion situation. When the resources are insufficient, the E-DCH is switched to the DCH; when the resources are sufficient, the DCH is switched to the E-DCH. The implementation of this function requires the help of the channel switching algorithm.

When the cell is congested, the congestion control algorithm selects some users (including E-DCH users) for congestion relief. The selection is based on the integrated priority, which considers the allocation retention priority (ARP), traffic class (TC), traffic handling priority (THP), and bearer type.

When the cell load is high, the basic congestion control algorithm selects some HSUPA users for handover to an inter-frequency neighboring cell with the same coverage and a lower load, or to an inter-RAT cell. When the cell load is excessively high, the overload congestion control algorithm selects some HSUPA BE users for migration to a common channel or releases some HSUPA services.

For details, see the Load Control Parameter Description.


4 HSUPA Algorithms
4.1 Overview of HSUPA Related Algorithms
With the introduction of HSUPA, the NodeB uses three algorithms, namely, HSUPA fast scheduling algorithm, flow control algorithm, and CE scheduling algorithm. These algorithms respectively consider the Uu resources, Iub resources, and CE resources on the NodeB. Flow control and CE resource management do not directly control the rate of the UE. Instead, the MAC-e entity controls the rate of the UE based on the received algorithm results. Figure 4-1 Relations of HSUPA algorithms

HSUPA CE resource management provides the MAC-e with the CE resources allocated to each UE and the maximum SG (SGmax) that can be used by the UE. HSUPA flow control provides the MAC-e with the Iub bandwidth allocated to each UE and the Iub bandwidth available for HSUPA, and provides the MAC-e scheduler with the congestion indicator. The MAC-e then assigns the AG or RG to the UE based on the allocated Iub bandwidth, flow control results, CE resource allocation results, and Uu resources.
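One way to picture how the MAC-e combines these inputs is as taking the most restrictive of the per-UE limits. The sketch below is our own simplification under that assumption; the function, parameter names, and the congestion back-off factor are illustrative and are not Huawei interfaces.

```python
def schedulable_rate_kbps(uu_limit_kbps: float,
                          iub_bandwidth_kbps: float,
                          ce_limit_kbps: float,
                          iub_congested: bool) -> float:
    """Illustrative MAC-e view: the rate granted to a UE cannot exceed any single
    resource limit (Uu load budget, Iub bandwidth from flow control, CE allocation /
    SGmax), and a congestion indication from flow control further caps the rate."""
    rate = min(uu_limit_kbps, iub_bandwidth_kbps, ce_limit_kbps)
    if iub_congested:
        rate = min(rate, 0.5 * iub_bandwidth_kbps)  # assumed back-off policy, illustration only
    return rate
```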

4.2 HSUPA Scheduling


4.2.1 Overview of HSUPA Scheduling
The MAC-d flows are configured in scheduled transmission mode or non-scheduled transmission mode. For details on the configuration of scheduling mode, see the Radio Bearers Parameter Description.

Non-Scheduled Transmission Mode


In non-scheduled transmission mode, the UE can transmit data at the rate specified by the RNC, without a grant from the NodeB. The non-scheduled transmission mode is suitable for services that require low delay and a steady data rate. In this mode, the E-DCH can be considered a DCH with fast retransmission.

If a MAC-d flow is configured with the non-scheduled transmission mode, the MAC-d PDUs belonging to this MAC-d flow shall not exceed the size specified by the IE "Max MAC-e PDU contents size". The value of "Max MAC-e PDU contents size" is calculated in the RNC by the following formula:

MaxMACePDUSize = [Ceil(MBR x TTILen / RLCPDUpayload) x MACdPDUSize + 18] x MaxRateUpScale

Where:

- MaxMACePDUSize: Max MAC-e PDU contents size
- Ceil(): rounds a value up
- MBR: maximum bit rate specified by the Iu message RAB ASSIGNMENT REQUEST
- TTILen: TTI length
- RLCPDUpayload: RLC PDU payload, that is, the RLC PDU size minus the RLC PDU header
- MACdPDUSize: MAC-d PDU size
- 18: sum of bits for the Transmission Sequence Number (TSN), Data Description Indicator (DDI), and N (number of MAC-d PDUs) fields
- MaxRateUpScale: used to multiply the UL MBR in the RAB assignment to achieve the peak bit rate for the service bearers on the E-DCH. The default value of MaxRateUpScale is 1.01 for each RAB and 5 for each SRB.
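The formula can be transcribed directly into code. The sketch below is a literal transcription for illustration; the example input values (384 kbit/s MBR, 10 ms TTI, 336-bit RLC and MAC-d PDUs with a 16-bit RLC header) are ours.

```python
import math

def max_mace_pdu_size_bits(mbr_bps: float, tti_s: float, rlc_pdu_payload_bits: int,
                           macd_pdu_size_bits: int, max_rate_up_scale: float = 1.01) -> float:
    """MaxMACePDUSize = [Ceil(MBR x TTILen / RLCPDUpayload) x MACdPDUSize + 18] x MaxRateUpScale."""
    n_pdus = math.ceil(mbr_bps * tti_s / rlc_pdu_payload_bits)
    return (n_pdus * macd_pdu_size_bits + 18) * max_rate_up_scale

# Hypothetical example: 384 kbit/s MBR, 10 ms TTI, 336-bit PDUs with a 16-bit RLC header.
print(max_mace_pdu_size_bits(384_000, 0.010, 336 - 16, 336))  # about 4090.5 bits
```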

Scheduled Transmission Mode


By queuing users and assigning the scheduling grant (AG or RG), the NodeB can control the rate of the UE. The scheduling procedure takes into account such factors as the Scheduling Priority Indicator (SPI), Guaranteed Bit Rate (GBR), Iub flow control information, and CE resources. In scheduled transmission mode:

- The UE sends resource requests with the Scheduling Information (SI) on the E-DPDCH and the Happy Bit on the E-DPCCH.
- The scheduling algorithm considers the UL load factor, the available uplink Iub bandwidth, and CE resources. It uses the DL control channels (E-AGCH or E-RGCH) to affect the E-TFCs used by the UEs. Thus, the algorithm can control the UL interference on the Uu interface and avoid congestion on the Iub interface.

The scheduling algorithm ensures:

- Efficient use of uplink resources: The algorithm maximizes the throughput of a cell under the condition that the QoS requirements of all the UEs are met.
- Fairness of services: If some UEs have the same SPI, the algorithm allocates the same resources to these UEs.
- Differentiated services: If a user has a higher SPI, the user can obtain more uplink resources.

The scheduling algorithm performs the following operations:

- Assigning the AG based on the SI and Happy Bit sent by the UE, to control the maximum rate that can be used by the UE.
- Assigning the RG according to the Happy Bit.
- If the user is configured with the GBR by the RNC and the HSUPAOLSCHSWITCH parameter is set to OPEN, ensuring that the rate is not lower than the GBR.

4.2.2 Process of the Scheduling Algorithm


When the scheduling period (equal to one TTI) arrives, the scheduling algorithm works as follows:

1. Calculating the uplink Uu load resource of each cell

   Uu load resource of a cell = MaxTargetUlLoadFactor - actual load

2. Limiting the UE rates according to the CE resources

   Based on SGmax and CE preemption, the algorithm sends AG DOWN. For detailed information, see 4.3 Dynamic CE Resource Management. Based on the Iub bandwidth allocated to the user, the algorithm sends RG DOWN.

3. Limiting the UE rates according to the MBR

   The algorithm directly sends RG DOWN to the UEs whose rates need to be downsized because of the MBR limitation and updates the UL load based on the current UL load. For detailed information, see 4.2.7 MBR Processing.

   The algorithm then arranges all the users that are not granted within the NodeB based on the Happy Bit, thus obtaining a sequence of happy queues and a sequence of unhappy queues. The factors considered include the SI, SPI, GBR, and effective data rate.

4. Updating the remaining resources

   The algorithm calculates the maximum resources that can be released by the happy users for the unhappy users. The algorithm does not send RG DOWN to the happy users.

5. Scheduling the unhappy queues in reverse order

   a. If the conditions for sending AG UP are met, the algorithm assigns the AG to the UE based on the available load resource of the cell and updates the remaining resources. For detailed information, see 4.2.4 AG UP Processing.

   b. If the conditions for sending RG UP are met, the algorithm assigns the RG to the UE based on the available load resource of the cell and updates the remaining resources. For details, see 4.2.5 RG UP Processing.

6. Scheduling the happy queues and the unhappy queues in turn

   If the available load resource of the cell is smaller than zero, the algorithm sends RG DOWN to the UE and updates the remaining resources. In the process, if HSUPAOLSCHSWITCH is set to OPEN and the Reff of an unhappy user is smaller than the GBR, the algorithm performs GBR processing. For detailed information, see 4.2.6 GBR Processing.

The remaining UL load resource is updated after the AG and RG are sent to the UEs. The NodeB does not send the RG DOWN command to a non-serving RL unless both of the following criteria are met:

- Experienced RTWP of the NodeB > target RTWP sent from the CRNC
- Non-serving E-DCH to total E-DCH power ratio > NonServToTotalEdchPwrRatio sent from the CRNC
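A highly simplified per-TTI view of steps 1 to 6 might look as follows. This is a reading aid only: the data model, the load units, and the grant policy are all assumptions, not the Huawei implementation (steps 2 to 4, the CE/Iub/MBR limiting and the resource release by happy users, are omitted or folded into the load budget).

```python
from dataclasses import dataclass

@dataclass
class UeState:
    happy: bool              # Happy Bit reported on the E-DPCCH
    priority: float          # queuing priority, e.g. Reff/SPI (see 4.2.3)
    step_cost: float = 0.01  # assumed load cost of one grant step (abstract units)

def schedule_one_tti(max_target_ul_load: float, actual_load: float, users: list) -> list:
    """Toy walk-through of the per-TTI scheduling steps; returns (user, command) pairs."""
    commands = []
    # 1. Uplink Uu load resource of the cell.
    load_resource = max_target_ul_load - actual_load
    # Queue users by the Happy Bit; within each queue, order by priority.
    happy = sorted((u for u in users if u.happy), key=lambda u: u.priority)
    unhappy = sorted((u for u in users if not u.happy), key=lambda u: u.priority)
    # 5. Serve the unhappy queue in reverse order while load resource remains.
    for ue in reversed(unhappy):
        if load_resource > 0:
            commands.append((ue, "UP"))        # AG or RG UP, per 4.2.4 / 4.2.5
            load_resource -= ue.step_cost
    # 6. If the cell is over budget, send RG DOWN across both queues in turn.
    for ue in happy + unhappy:
        if load_resource < 0:
            commands.append((ue, "RG DOWN"))
            load_resource += ue.step_cost
    return commands
```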

4.2.3 User Queuing


Regardless of whether the AG or RG is assigned, the users must be queued first. When the scheduling period arrives and the NodeB correctly receives the data or SI, the scheduling algorithm puts the users whose data or SI is correctly received into a happy sequence or an unhappy sequence according to the Happy Bit. During the queuing, the algorithm also considers the SPI, GBR, and effective data rate of each user.

Queuing Happy Users


Regardless of whether the requirements of the users for the GBRs are met, the algorithm queues all the happy users in descending order of Priorityn:

Priorityn = Reff / SPI

Where:

- Priorityn is the priority value of user n.
- SPI is assigned by the RNC and is used to provide different scheduling opportunities according to the scheduling priority. The SPI and SPI (FACTOR) are the same as those used for HSDPA. The value of Priorityn has a negative correlation with the SPI. During the scheduling, the rate of the user with a greater Priorityn is decreased before that of the user with a smaller Priorityn.
- Reff is calculated according to the formula described in Calculating the Effective Data Rate (Reff).

Queuing Unhappy Users


When queuing unhappy users, the algorithm considers the effective data rate, the SPI, and the GBR satisfaction degree. The rate of a user earlier in the queue is decreased before, and increased after, that of a user later in the queue.

1. For zero_grant users

   The algorithm arranges zero_grant users in descending order of Priorityn and puts them at the end of the unhappy sequence. Priorityn is calculated by using the following formula:

   Priorityn = 1 / (SPI x Rreq)

   Where:

   - Priorityn is the priority value of user n.
   - SPI is assigned by the RNC and is used to provide different scheduling opportunities according to the scheduling priority.
   - Rreq is calculated according to the formula described in Calculating the Requested Data Rate (Rreq).

2. For non-zero_grant users

   - If HSUPAOLSCHSWITCH is set to OPEN, the algorithm queues the users according to the following principles:
     - For the users whose requirements for the GBRs are not met, the algorithm arranges them in descending order of Priorityn = Reff / (SPI x RGBR) and puts them before the zero_grant users. RGBR is the GBR of the user.
     - For the users whose requirements for the GBRs are met, the algorithm arranges them in descending order of Priorityn = Reff / SPI and puts them before the users whose requirements for the GBRs are not met.
   - If HSUPAOLSCHSWITCH is set to CLOSE, the algorithm queues the non-zero_grant users in descending order of Priorityn = Reff / SPI and puts them before the zero_grant users.
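The queuing rules above boil down to a handful of priority formulas. The sketch below encodes them for illustration; the function names and the boolean flags are ours.

```python
from typing import Optional

def happy_priority(r_eff: float, spi: float) -> float:
    """Happy users: Priority_n = Reff / SPI (a larger value is de-prioritized first)."""
    return r_eff / spi

def unhappy_priority(r_eff: float, r_req: float, spi: float,
                     gbr: Optional[float], zero_grant: bool,
                     gbr_scheduling_open: bool) -> float:
    """Unhappy users, per the principles above."""
    if zero_grant:
        return 1.0 / (spi * r_req)            # Priority_n = 1 / (SPI x Rreq)
    if gbr_scheduling_open and gbr is not None and r_eff < gbr:
        return r_eff / (spi * gbr)            # Priority_n = Reff / (SPI x RGBR)
    return r_eff / spi                        # Priority_n = Reff / SPI
```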

Calculating the Effective Data Rate (Reff)


Reff is the effective data rate, which is a filtered value of the successfully received data rate obtained with an α-filter:

Reff(n, k) = (1 - αeff) x Reff(n, k - 1) + αeff x R(n, k)

Where:

- (n, k) means user n and TTI k.
- If the data is received correctly, R(n, k) is equal to the total size of all the MAC-es PDUs (which are from the same MAC-e PDU) divided by the TTI length. Otherwise, R(n, k) is equal to zero.
- Reff(n, 1) is the initial value and is zero.
- αeff is the effective rate smoothing factor and is fixed to 0.6%.
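The α-filter above is a standard exponentially weighted moving average. A minimal sketch with αeff = 0.006 (0.6%), as stated above:

```python
ALPHA_EFF = 0.006  # effective rate smoothing factor (0.6%)

def update_reff(prev_reff: float, rx_bits: int, tti_s: float, received_ok: bool) -> float:
    """Reff(n, k) = (1 - a_eff) * Reff(n, k-1) + a_eff * R(n, k), where R(n, k) is the
    correctly received MAC-es payload divided by the TTI length, or zero otherwise."""
    r_instant = (rx_bits / tti_s) if received_ok else 0.0
    return (1.0 - ALPHA_EFF) * prev_reff + ALPHA_EFF * r_instant
```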

Calculating the Requested Data Rate (Rreq)


The NodeB determines the requested data rate (Rreq) based on the available data amount obtained from the TEBS. Rreq cannot exceed the maximum data rate configured by the RNC, and the power cannot exceed the available power obtained from the UE Power Headroom (UPH). The formula for calculating Rreq is as follows:

Rreq(n, k) = min( Rmax(UPH), argmax{R | Q(k) ≥ R x T}, Rsupport )

where (n, k) means user n and TTI k.

1. Calculate Rmax(UPH).

   a. Calculate (βed/βc)²UPH according to the UPH.

      Assume that UPH = (βed/βc)²UPH + (βec/βc)² + 1, where 1 stands for (βc/βc)², the power of the DPCCH. Because (βec/βc)² is known, (βed/βc)²UPH can be obtained from the equation.

   b. Calculate all (βed/βc)² values according to the 3GPP protocols.

      Calculate the quantized βed,j for each E-TFCI based on the TB table configured by the RNC, by using the method presented in HSUPA Power Control.

   c. Select Rmax(UPH).

      The maximum (βed/βc)² is (βed/βc)²UPH. From the TB table, select the E-TFCI whose (βed/βc)² is the closest to but smaller than (βed/βc)²UPH. Based on the TB size indicated by this E-TFCI and the TTI length, Rmax(UPH) is obtained.

2. Calculate argmax{R | Q(k) ≥ R x T}.

   argmax{R | Q(k) ≥ R x T}: find the maximum value R that meets the condition Q(k) ≥ R x T, where:

   - Q(k) is the buffer size.
   - R for each E-TFC is equal to the TB size divided by T.
   - T is the length of HBDelaycnd.

3. Calculate Rsupport.

   Rsupport = min{ R(maximum set of E-DPDCHs), R(E-DCH MBR) }
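Reading Rreq as the most restrictive of the power-limited rate, the buffer-limited rate, and the supported rate gives the compact sketch below. The E-TFC table handling and the βed/βc quantization are abstracted into a plain list of candidate rates; all names are ours.

```python
def r_req_bps(r_max_uph_bps: float, buffer_bits: float, t_s: float,
              etfc_rates_bps: list, r_support_bps: float) -> float:
    """Rreq = min( Rmax(UPH), argmax{R | Q(k) >= R x T}, Rsupport )."""
    # Largest candidate rate whose transport block (R x T) still fits in the buffer Q(k).
    fitting = [r for r in etfc_rates_bps if buffer_bits >= r * t_s]
    r_buffer = max(fitting) if fitting else 0.0
    return min(r_max_uph_bps, r_buffer, r_support_bps)
```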

4.2.4 AG UP Processing
After the serving E-DCH cell of the UE receives the SI of the UE, the NodeB calculates the requested rate. For a user in the unhappy sequence, the algorithm determines whether to assign AG UP to the user based on whether a request for the SI is received from the UE, whether the AGCH code is idle, and whether the Iub bandwidth and CE resource are available. If the conditions for sending AG UP are met, the algorithm calculates the grant that can be assigned to the user based on the requested rate, Iub bandwidth, and Uu bandwidth.

Conditions for Sending AG UP


When the user meets all of the following conditions, the NodeB schedules this user through the AG:

- The user is in the Unhappy state, and the SI sent from the user is received.
- The AGCH code allocated to the user is not used by other users.
- The user meets the requirement SGIndexreq - SGIndexcur > AG Threshold. SGIndexreq and SGIndexcur are obtained from Rreq and Rcur. Rcur is the current bit rate of the UE, which is calculated on the basis of the E-TFCI carried on the E-DPCCH: Rcur is equal to the MAC-e PDU size (obtained from the E-TFCI) divided by the TTI length. Rreq is calculated according to the formula described in section 4.2.3 "User Queuing." The AG threshold is adjusted dynamically according to the traffic volume. For details, see Dynamically Setting the AG Threshold.
- The rate of the user is not decreased because of MBR processing, Iub bandwidth limitation, or CE resource limitation.
- The data on the E-DPDCH of the user is correctly demodulated.

If the user meets all these conditions, the scheduling algorithm calculates the AG to be assigned to the user based on the Uu bandwidth and Iub bandwidth.

Dynamically Setting the AG Threshold


When a non-zero_grant user sends an SI request, the NodeB can schedule the user through the AG if SGIndexreq - SGIndexcur > AG Threshold. Compared with the RG, which increases or decreases the UE scheduling grant step by step, the AG can change the data rate faster. If the AG threshold is too low, however, the AG causes larger fluctuation of the uplink load because of large UE data rate changes. Dynamically setting the AG threshold avoids this disadvantage.
- When the rate of a service source is small, the AG threshold is set to 3 so that the user can quickly obtain sufficient resources to send data. This helps improve user experience through smaller latency.
- When the rate of a service source is large, the AG threshold is set to 37 to avoid the use of the AG; instead, the RG is used to provide a steady cell uplink load and a steady rate for each user.

When an SI is received by the NodeB, the scheduler checks a Flag to decide the AG threshold:

- If the Flag is TRUE, the AG threshold is 3, and the scheduler assigns the AG to this UE when SGIndexreq - SGIndexcur > AG Threshold.
- If the Flag is FALSE, the AG threshold is 37 and only the RG can be used.

The scheduler in the NodeB maintains the Flag for each user periodically. The Flag is decided as follows:

- The initial value of the Flag is TRUE.
- Within a period, the Flag is set to FALSE when either of the following requirements is met:
  - The total amount of received data is greater than 2 kB, or the data rate is greater than 4 kbyte/s. In this case, the AG is not used except at the beginning of transmission.
  - Any TEBS index in an SI received in this period is greater than 20 (index 20 corresponds to 1,658 bytes < TEBS ≤ 2,202 bytes).
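The Flag and threshold logic above can be summarised in a few lines. The sketch below assumes the per-period bookkeeping (received bytes, data rate, TEBS indices) is done elsewhere; the 3/37 thresholds and the 2 kB / 4 kbyte/s / TEBS-index-20 triggers are the values quoted above, and treating 2 kB as 2,048 bytes is our assumption.

```python
def update_ag_flag(total_rx_bytes: int, data_rate_bytes_per_s: float,
                   max_tebs_index_in_period: int) -> bool:
    """Return the Flag for the next period; FALSE disables the AG (threshold 37)."""
    if total_rx_bytes > 2 * 1024 or data_rate_bytes_per_s > 4 * 1024:
        return False
    if max_tebs_index_in_period > 20:   # index 20: 1,658 bytes < TEBS <= 2,202 bytes
        return False
    return True

def ag_threshold(flag: bool) -> int:
    return 3 if flag else 37

def may_send_ag(sg_index_req: int, sg_index_cur: int, flag: bool) -> bool:
    return (sg_index_req - sg_index_cur) > ag_threshold(flag)
```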

SI Transmission
The SI is attached to the end of the MAC-e PDU and is used to notify the serving NodeB of the amount of system resources required by the UE. Figure 4-2 SI transmission

Figure 4-3 SI structure

Where:

- UPH: UE Power Headroom, which indicates the ratio of the maximum UE transmission power to the corresponding DPCCH code power.
- TEBS: Total E-DCH Buffer Status, which identifies the total amount of data available across all logical channels and indicates the amount of data in bytes available for transmission and retransmission at the RLC layer.
- HLBS: Highest priority Logical channel Buffer Status, which indicates the amount of data available from the logical channel identified by the HLID.
- HLID: Highest priority Logical channel ID, which identifies the highest-priority logical channel with available data.
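The four SI fields can be carried around as a simple record; the sketch below is purely illustrative and ignores the bit-level quantization tables of 3GPP TS 25.321.

```python
from dataclasses import dataclass

@dataclass
class SchedulingInformation:
    """Fields of the SI as described above (the real MAC header carries quantized indices)."""
    uph: int    # UE Power Headroom: max UE tx power relative to the DPCCH code power
    tebs: int   # Total E-DCH Buffer Status: bytes buffered across all logical channels
    hlbs: int   # Highest priority Logical channel Buffer Status (for the channel in hlid)
    hlid: int   # Highest priority Logical channel ID
```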

The transmission of the SI is initiated by the quantization of the transport block sizes or by the triggering conditions. The reporting of the SI is triggered according to the SG after the SG is updated. The triggering of a report is indicated to the E-TFC selection function at the first new transmission. This process may be delayed if the HARQ processes are occupied by retransmissions. The SI transmission can be triggered by the following conditions (see 3GPP TS 25.321 for details):


- Triggered by events

  At each TTI, the UE checks the SG and the buffer status. If the SG has the value Zero_Grant, or all processes are deactivated and the TEBS becomes greater than zero, the SI transmission is triggered. If the serving E-DCH cell changes and the new serving E-DCH cell is not in the previous serving E-DCH RLS, the SI transmission is triggered.

- Triggered periodically

  Triggered by the timer T_SIG or T_SING. T_SIG is configured on the RNC through the parameters Hsupa10msSchPrdForGrant and Hsupa2msSchPrdForGrant; T_SING is configured on the RNC through the parameters Hsupa10msSchPrdForNonGrant and Hsupa2msSchPrdForNonGrant.

- If the HARQ process fails to deliver a MAC-e PDU that contains an SI to the RLS that contains the serving cell, and the SI was transmitted together with higher-layer data multiplexed into the same MAC-e PDU, the transmission of a new SI is triggered. If the SI transmission is not triggered under the previous conditions, but the size of the data plus the header is smaller than or equal to the TB size of the UE-selected E-TFC minus 18 bits, the SI is concatenated into this MAC-e PDU. In this case, no new SI is triggered if the HARQ process fails to deliver the MAC-e PDU.

4.2.5 RG UP Processing
The prerequisites for the algorithm to send RG UP to a user are as follows:

- The user is in the Unhappy state.
- The user does not meet the conditions for sending AG UP.
- The rate of the user is not decreased because of MBR processing, Iub bandwidth limitation, or CE resource limitation.
- The data on the E-DPDCH of the user is correctly demodulated.

If all these conditions are met and the Uu bandwidth, Iub bandwidth, and CE resources allow an increase in the user rate, the algorithm sends RG UP to the user.

4.2.6 GBR Processing


If a UE is configured with the GBR by the RNC and the HSUPAOLSCHSWITCH parameter is set to OPEN, the scheduling algorithm compares the effective data rate with the GBR to determine whether the GBR is met. The GBR is transmitted from the RNC to the NodeB as follows:

- If the RAB ASSIGNMENT REQUEST message from the CN carries the GBR when the RAB is set up, the RNC sends this GBR to the NodeB.
- Otherwise, the GBR configured on the RNC is sent to the NodeB when the RAB is carried on HSUPA. The GBR can be configured according to user priorities on the RNC.

GBR processing is as follows:

- If the load on the Uu interface exceeds the value of MaxTargetUlLoadFactor, the algorithm does not send AG UP or RG UP to those users whose requirements for the GBRs are already met.
- If the load on the Uu interface exceeds the value of MaxTargetUlLoadFactor but does not exceed the load congestion threshold, the algorithm meets the requirements of the users for the GBRs.
- If the load on the Uu interface exceeds the load congestion threshold, the algorithm does not meet the requirements of the users for the GBRs.

When the user meets the conditions for sending AG UP:

- If Rreq is smaller than the GBR, only Rreq needs to be assigned to the user.
- If Rreq is larger than the GBR:
  - If the estimated load does not exceed the load congestion threshold after the GBR is reached, at least the GBR is assigned to the user.
  - Otherwise, the algorithm calculates the maximum grant that can be assigned to the user according to the load congestion threshold.

When the user does not meet the conditions for sending AG UP but meets the conditions for sending RG UP:

- If the estimated load does not exceed the load congestion threshold after RG UP is sent, RG UP is sent to the user.
- Otherwise, RG UP is not sent to the user.

The load congestion threshold is 0.95. If the estimated load does not exceed the load congestion threshold, neither the serving RLS nor the non-serving RL sends RG DOWN to those users whose Reff is smaller than the GBR. In addition, regardless of whether the requirement for the GBR can be met, the grant assigned to the user cannot cause the throughput of the user to exceed the bandwidth available for the HSUPA users in the NodeB.
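The grant decision for a user that qualifies for AG UP can be condensed as follows. The 0.95 congestion threshold is the value quoted above; the max_rate_under_congestion argument stands in for the scheduler's internal load estimate and is an assumption of this sketch.

```python
def ag_grant_rate(r_req: float, gbr: float, max_rate_under_congestion: float) -> float:
    """Rate granted to an AG UP candidate, following the rules above.
    max_rate_under_congestion is assumed to be the highest rate whose estimated load
    stays at or below the 0.95 load congestion threshold."""
    if r_req < gbr:
        return r_req                     # only Rreq needs to be assigned
    if gbr <= max_rate_under_congestion:
        # At least the GBR is assigned; grant more only while the estimated load
        # stays at or below the congestion threshold.
        return max(gbr, min(r_req, max_rate_under_congestion))
    return max_rate_under_congestion     # the most the congestion threshold allows
```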

4.2.7 MBR Processing


At each TTI, if both Rcur and Ravg of a user are greater than the E-DCH MBR, RG DOWN is sent to this user. The E-DCH MBR is transmitted by the RNC to the NodeB. For detailed information, see 3GPP TS 25.433, section 9.2.2.13T.

Ravg is the average data rate of the UE:

Ravg(n, k) = (1 - αavg) x Ravg(n, k - 1) + αavg x Rcur(n, k)

Where:

- (n, k) indicates user n and TTI k.
- αavg is set to 0.6%.
- Ravg(n, 1) is the Average Rate Initial Value, which is set to 0 kbit/s.
- Rcur(n, k) is the current rate of the UE, which is equal to the MAC-e PDU size divided by the TTI length. The MAC-e PDU size can be obtained according to the E-TFCI.

The smoothing time is about 1.6 s, about 10 times the period of the fast fading that occurs at 3 km/h. The purpose is to reflect the impact of the channel fading and to smooth it out.
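The MBR check combines the instantaneous rate with the same kind of α-filtered average used for Reff. A minimal sketch with αavg = 0.006:

```python
ALPHA_AVG = 0.006  # average rate smoothing factor (0.6%)

def update_ravg(prev_ravg: float, r_cur: float) -> float:
    """Ravg(n, k) = (1 - a_avg) * Ravg(n, k-1) + a_avg * Rcur(n, k)."""
    return (1.0 - ALPHA_AVG) * prev_ravg + ALPHA_AVG * r_cur

def mbr_requires_rg_down(r_cur: float, r_avg: float, edch_mbr: float) -> bool:
    """RG DOWN is sent only when both the current and the average rate exceed the E-DCH MBR."""
    return r_cur > edch_mbr and r_avg > edch_mbr
```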

4.3 Dynamic CE Resource Management


Overview of Dynamic CE Resource Management
A CE is defined as the baseband resources required in the NodeB for the 12.2 kbit/s AMR speech service and the 3.4 kbit/s SRB service. HSUPA shares CE resources with the R99 services. HSUPA improves the uplink delay and rate performance, but it consumes a large amount of CE resources.

If dynamic CE resource management is unavailable, the NodeB allocates CE resources according to the maximum rate of the UE even if the actual traffic volume is very low. In this case, the utilization of CE resources is low. Thus, dynamic CE resource management is necessary. Considering that the rate of HSUPA users changes fast, the algorithm periodically adjusts the CE resources available to the users. When a new user is admitted, the algorithm also adjusts the CE resources.

Dynamic CE resource management minimizes the failures in demodulation and decoding due to insufficient CE resources. Meanwhile, it maximizes the CE usage and UL throughput.

Figure 4-4 Overview of CE resource management


The MAC-e scheduler always takes CE resources into account. CE resource adjustment is performed periodically in the NodeB and notifies the MAC-e scheduler of the CE allocation. For AG UP users, the MAC-e scheduler requests CE resources from the CE scheduler, and then the CE scheduler increases CE resources based on the request message.

Periodical CE Resource Adjustment


In RAN11.0, 2 ms fast CE scheduling is introduced. When HSUPACESCHSWITCH is set to OPEN, the NodeB performs dynamic CE resource adjustment by following the procedure shown in Figure 4-5. Figure 4-5 Procedure for periodical CE resource adjustment

When each adjustment period arrives, the algorithm performs the following operations:

1. Calling back the CE resources of the serving RLS

   If the NodeB detects that the allocated CE resources have not been fully used in the recent period (100 ms for 10 ms TTI users and 20 ms for 2 ms TTI users), it calls back the unused CE resources. For users whose CE resources are called back, the algorithm notifies the MAC-e scheduler of the SGmax.

2. Allocating CE resources for the new RL

   After a new RL is admitted, the new RL requests CE resources based on CEinit. If CE resources are insufficient, CE preemption is triggered. During the preemption procedure, the algorithm preempts the CE resources of the non-serving RLs. If the CE resources of the non-serving RLs are also insufficient, the algorithm preempts the CE resources of the serving RLs. The CE resources of the serving RLs are preempted until the rate decreases to the GBR. The algorithm takes the GBR and SPI weight (FACTOR) into comprehensive consideration. For the admitted non-serving RLs, if no CE resources corresponding to CEinit are available, the algorithm allocates to them the CE resources corresponding to CEmin.

   - CEinit: initial number of CEs, which is calculated on the basis of the configured GBR. If the user is not configured with the GBR, CEinit is the CE resources required for transmitting one RLC PDU.
   - CEmin: minimum number of CEs required for demodulation of the E-DPCCH and DPCH.

3. Processing CE resources among serving RLS for fairness

   Table 4-1 HSUPA CE consumption rules

   MinSF             HSUPA Phase 1    HSUPA Phase 2
   SF64              1+1+1            1
   SF32              1+1+1.5          1
   SF16              1+1+3            2
   SF8               1+1+5            4
   SF4               1+1+10           8
   2xSF4             1+1+20           16
   2xSF2             Not supported    32
   2xSF2 + 2xSF4     Not supported    48

   The difference in CE consumption rules between HSUPA phase 1 and HSUPA phase 2 is caused by different hardware versions.

   The algorithm performs a fairness judgment every 160 ms and takes the GBR and SPI weight (FACTOR) into comprehensive consideration. The algorithm selects the user of the serving RLS with the largest priority value and calculates the number of required CEs. When the available CE resources for the user of the serving RLS do not meet the requirement of the user, the algorithm performs fairness processing according to the following rules:


   - For the users whose Reff is smaller than the GBR, the CE resources are not reduced.
   - For the users whose Reff is greater than or equal to the GBR, or the users whose GBR is not configured, the algorithm reduces the CE resources of the user with the highest priority based on Priority = Reff / SPI.
   - The algorithm allocates the reduced CE resources to the user with the highest priority.

4. Allocating CE resources for the AG or RG UP user of the serving RLS

   If the AG or RG UP user of the serving RLS requests CE resources, the CE scheduler allocates the required CE resources. The GBR and SPI weight (FACTOR) are considered in the allocation. When allocating CE resources to the user of the serving RLS, the CE scheduler can preempt the CE resources of the user of the non-serving RLS.

5. Allocating CE resources to the RG UP user of the non-serving RLS

   If the RG UP user of the non-serving RLS requests CE resources, the CE scheduler allocates the required CE resources, considering the GBR and SPI weight (FACTOR).
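Table 4-1 lends itself to a simple lookup keyed by MinSF. The sketch below is illustrative only; phase-1 entries are kept as the "a+b+c" strings shown in the table, and None marks combinations that phase-1 hardware does not support.

```python
from typing import Optional, Union

# CE consumption per MinSF configuration, copied from Table 4-1.
CE_CONSUMPTION = {
    "SF64":          {"phase1": "1+1+1",   "phase2": 1},
    "SF32":          {"phase1": "1+1+1.5", "phase2": 1},
    "SF16":          {"phase1": "1+1+3",   "phase2": 2},
    "SF8":           {"phase1": "1+1+5",   "phase2": 4},
    "SF4":           {"phase1": "1+1+10",  "phase2": 8},
    "2xSF4":         {"phase1": "1+1+20",  "phase2": 16},
    "2xSF2":         {"phase1": None,      "phase2": 32},
    "2xSF2 + 2xSF4": {"phase1": None,      "phase2": 48},
}

def ce_cost(min_sf: str, phase: str = "phase2") -> Optional[Union[int, str]]:
    """Return the CE consumption entry for a MinSF on the given hardware phase."""
    return CE_CONSUMPTION[min_sf][phase]

print(ce_cost("2xSF2"))   # 32
```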

4.4 Mapping of Service to HSUPA


SRB over HSUPA
This feature provides a higher signaling rate and reduces the call processing delay. When the SRB is carried on HSUPA, transmission resources can be saved compared with carrying it on the DCH. The signaling over the SRB is delay sensitive and irregular, so it is more appropriate to set up the SRB over HSUPA than over the DCH.

The SRB over HSUPA can be applied during the RRC connection setup or other procedures such as mobility management. If the SRB is set up over the DCH, it can be reconfigured to be mapped onto HSUPA in some cases, for example, when the target cell of a handover supports HSUPA while the source cell does not. Inversely, the SRB mapped onto HSUPA can be reconfigured to be mapped onto the DCH if the target cell of a handover does not support HSUPA.

The SRB over HSUPA is configurable. For details, refer to Radio Bearers Description.

VoIP over HSPA


In the fixed network, Voice over IP (VoIP) has turned out to be an attractive and cost-effective solution for PS conversational services. The rapid growth of VoIP users urges cellular operators to introduce this feature to make their networks more profitable. Moreover, from the evolution point of view, it also helps converge the operator's network into one all-IP network and accordingly decrease the total operational cost.

In the WCDMA system, VoIP can, on the one hand, provide a lower-cost voice service compared with traditional CS voice; on the other hand, it makes it simpler to support rich services such as real-time video sharing or messaging, because they are all carried in the PS domain, which also benefits the end user.

The VoIP service can be carried on the DCH or on HSPA. When it is set up on the DCH, the capacity is not competitive because of the higher resource consumption. Therefore, VoIP over HSPA is a better solution, and Robust Header Compression (RoHC) should also be supported to improve the overhead efficiency. The following features provide the VoIP over HSPA solution:

- RAB Mapping: refer to Radio Bearers Description.
- RoHC: refer to PDCP Header Compression Description.

IMS Signaling over HSPA


The IP Multimedia Subsystem (IMS) is an open and standardized architectural framework for delivering Internet Protocol (IP) multimedia to mobile users. With this feature, operators provide network-controlled multimedia services by combining voice and data in a single packet switched network. IMS Signaling over HSPA improves the utilization of code resources and transmission resources, compared with carrying the signaling on the DCH.

IMS uses the Session Initiation Protocol (SIP) as the key control protocol and implements service management in the UTRAN. Such SIP signaling is indicated by the CN in the RAB ASSIGNMENT REQUEST message. The RAB should be an interactive QoS class service. Before RAN10.0, such IMS signaling could only be carried on the DCH. With the F-DPCH supported in RAN10.0, the service can be carried on HSPA, which brings better performance for the IMS service.

The type of channel carrying IMS signaling is configurable separately for the downlink and uplink at the cell level. When HSPA is chosen as the bearer with high priority, IMS signaling is set up on it as far as possible. If the setup is not successful, for example, because of admission control, a periodical timer is started to trigger the reconfiguration to HSPA. The IMS signaling can be mapped onto the DCH, HS-DSCH, or E-DCH. For details, refer to Radio Bearers Description.

CS Voice over HSPA/HSPA+


Generally, circuit switched voice is carried on the DCH. From 3GPP Release 8, Circuit Switched Voice over HSPA (CS Voice over HSPA) is introduced: the UL CS voice packets are borne on the E-DCH and the DL CS voice packets on the HS-DSCH.

Basically, CS Voice over HSPA carries the ordinary mobile circuit voice service, using the ordinary dialer on the phone and the circuit core switches in the network, so the operator does not need to have IMS deployed. The traffic flow paths of CS Voice over HSPA and VoIP over HSPA differ as shown below.

To deploy CS Voice over HSPA, only the RNC needs to be updated to map the CS service to HSPA. There is no impact on the MSC or the NodeB. Introducing HSPA technology for CS voice not only improves frequency efficiency and cell capacity, but also gives the user longer talk time, because the DTX/DRX technology in HSPA+ can also be applied.

4.5 HSUPA over Iur


HSUPA over Iur refers to the scenario in which a DRNC cell is in the E-DCH active set. The feature comprises HSUPA service management over the Iur and HSUPA mobility management over the Iur.

HSUPA service management over Iur


HSUPA service management over the Iur includes HSUPA service setup, modification, release, and dynamic channel configuration control (DCCC). The HSUPA service can be set up, modified, and released over the Iur when the UE is in the CELL_DCH state and a DRNC cell is in the E-DCH active set, or when the UE is in the CELL_FACH state and camps on a DRNC cell. The following scenario (from TS 25.931) shows an example of E-DCH configuration; TTI reconfiguration is shown in the same scenario. It is assumed in this example that a DCH was established beforehand.

Figure 4-6 E-DCH Establishment and E-DCH TTI Reconfiguration


HSUPA mobility management over Iur


HSUPA mobility management over the Iur includes soft handover, hard handover, cell update (because of radio link failure), and serving cell change. The process is similar to the corresponding normal HSUPA mobility management; the difference is that the involved cells belong to different RNCs. The following example (from TS 25.931) shows the establishment of a radio link via a NodeB controlled by an RNC other than the serving RNC.

Figure 4-7 Soft Handover - Radio Link Addition (Branch Addition)

HSUPA static relocation


If the HSUPA service is carried over the Iur and the radio links are provided only by the target RNC, static relocation can be triggered.

4.6 HSUPA DCCC


HSUPA DCCC involves the following functions when the uplink channel is the E-DCH:
- Rate Reallocation Based on Throughput
- UL BE Rate Downsizing and Recovery Based on UL Basic Congestion
- UE State Transition Algorithm

For detailed information, see the descriptions of DCCC in Rate Control Parameter Description.
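As an illustration of the second function listed above, the following sketch shows congestion-driven downsizing and throughput-based recovery of the UL BE rate. The thresholds, step factors, and rate limits are assumptions for illustration; the actual algorithm and parameters are given in Rate Control Parameter Description.

```python
# Minimal sketch of UL BE rate downsizing on uplink basic congestion and
# recovery when the user fills its current grant. All numeric values are
# illustrative assumptions, not the product parameters.

MIN_RATE_KBPS = 64
MAX_RATE_KBPS = 1440
DOWN_STEP = 0.5   # assumed downsizing factor on congestion
UP_STEP = 2.0     # assumed recovery factor when congestion clears


def adjust_be_rate(current_rate_kbps: float, ul_congested: bool,
                   throughput_kbps: float) -> float:
    """Return the next UL BE rate for an E-DCH user."""
    if ul_congested:
        # Downsize to relieve uplink basic congestion.
        return max(MIN_RATE_KBPS, current_rate_kbps * DOWN_STEP)
    if throughput_kbps > 0.9 * current_rate_kbps:
        # The user is filling its current rate; upsize toward the maximum.
        return min(MAX_RATE_KBPS, current_rate_kbps * UP_STEP)
    return current_rate_kbps
```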


4.7 HSUPA Adaptive Retransmission


When the coverage or load is limited, a larger target number of retransmissions can improve the user throughput and the cell capacity. When a user moves to the edge of a cell, the uplink transmit power is limited and the user throughput is low; in this case, a larger target number of retransmissions can be used to increase the user throughput and the cell coverage. When the uplink load of a cell is limited, a larger target number of retransmissions can be used to increase the uplink capacity and the cell throughput. When neither coverage (the HSUPA user is at the center of a cell) nor load is limited, a smaller target number of retransmissions can be used to decrease the delay.

HSUPA adaptive retransmission changes the target number of retransmissions from a small value (0.1 for the 2 ms TTI and 0.01 for the 10 ms TTI) to a large value (1.1), or from a large value back to a small value, according to the coverage and load conditions. The purpose is to increase the user throughput and the cell throughput and to improve the user experience. HSUPA adaptive retransmission is applicable to the best effort (BE) service. It takes the impact of CE resources, the principle of fairness differentiation, and the user priority into account:
- A small target number of retransmissions can be changed to a large one only when the CE resources are sufficient.
- If the throughput with a large target number of retransmissions does not increase owing to the limitation of physical capability, the large target is changed back to a small one. In this way, the user throughput can be improved and the fairness differentiation requirement can be met.
- High-priority users preferentially use a small target number of retransmissions to decrease the delay.

HSUPA adaptive retransmission is controlled by PC_HSUPA_HARQNUM_AUTO_ADJUST_SWITCH. When this switch is selected (SET CORRMALGOSWITCH: PcSwitch=PC_HSUPA_HARQNUM_AUTO_ADJUST_SWITCH-1):

- The HSUPA service can use the smaller target number of retransmissions if the uplink is not congested.
- The HSUPA service can use the typical target number of retransmissions if the uplink is congested.
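The decision can be summarized with the following Python sketch, which follows the rules stated above. The function and flag names are illustrative assumptions, and the target values are the ones quoted in this section (0.1 for the 2 ms TTI, 0.01 for the 10 ms TTI, and 1.1 as the large target); the actual algorithm runs inside the RAN and is not exposed in this form.

```python
# Illustrative sketch of the HSUPA adaptive retransmission decision for a BE
# user. Names and structure are assumptions for explanation only; the target
# retransmission values are those quoted in this section.

SMALL_TARGET = {"2ms": 0.1, "10ms": 0.01}
LARGE_TARGET = 1.1


def select_target_retransmissions(tti: str,
                                  coverage_limited: bool,
                                  load_limited: bool,
                                  ce_sufficient: bool,
                                  high_priority_user: bool,
                                  throughput_gain_with_large: bool) -> float:
    """Choose the HARQ target number of retransmissions for a BE user."""
    # High-priority users keep the small target to minimize delay.
    if high_priority_user:
        return SMALL_TARGET[tti]

    # Switch to the large target only when coverage or load is limited,
    # the CE resources are sufficient, and the large target actually
    # yields a throughput gain; otherwise fall back to the small target.
    if (coverage_limited or load_limited) and ce_sufficient and throughput_gain_with_large:
        return LARGE_TARGET

    return SMALL_TARGET[tti]
```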

4.8 Uplink Macro Diversity Intelligent Receiving


Through soft handover, the WCDMA system implements power control and provides Macro Diversity Combining (MDC) gains for UEs in the overlapping area of handover. However, a certain amount of resources is required for receive processing and transmission; in particular, when the UL service data rate is increased by HSUPA, more resources are consumed. Enhanced uplink MDC optimizes the soft handover gains while maximizing resource utilization.

During access admission, based on factors such as transmission resources, service characteristics, and related measurement reports, the SRNC decides whether a traffic MAC-d flow on the E-DCH applies uplink MDC, that is, whether the corresponding Iub/Iur transport bearers need to be established. In addition, the SRNC can decide that MDC-supporting uplink data and non-MDC-supporting uplink data shall not be multiplexed in the same MAC-d flow, that is, they shall not be multiplexed in the same Uu transport block.

In general, when the cell load is high, MDC is not applied to high-speed non-real-time uplink services, because they consume more transmission resources while the MDC gains are not significant. In this case, the data channels on the Iub/Iur interface and the Uu interface are not established. For low-speed real-time services, such as SRB and VoIP, MDC is applied as usual unless it is rejected by access admission. On the other hand, if MDC is applied to a high-speed non-real-time uplink service, the resources it occupies in the non-serving cell may be preempted when the load of that cell is high. Such resources include CE resources and Iub transmission resources.

With the selective UL MDC solution, the non-serving NodeB can choose to demodulate only part of the uplink data on the air interface. When CE resources are insufficient, the non-serving NodeB need not demodulate the large transport blocks of high-speed non-real-time services; instead, it demodulates only the small transport blocks of low-speed real-time services and forwards the data to the SRNC for MDC. The control channels are still established and used for uplink transmission power adjustment: the demodulation resources of the uplink DPCCH and E-DPCCH and the modulation resources of the downlink E-RGCH and F-DPCH are always allocated, regardless of the service type. This controls the neighboring cell interference while saving the demodulation resources of the uplink data channels on the non-serving NodeB.
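The per-flow decision on whether uplink MDC is applied can be illustrated with a minimal sketch, assuming a simple load threshold and a binary service classification; both are illustrative assumptions, not product parameters.

```python
# Sketch of the SRNC decision on whether to apply uplink MDC to an E-DCH
# MAC-d flow, i.e. whether Iub/Iur transport bearers are established for the
# non-serving radio links. The threshold and classification are assumptions.

HIGH_LOAD_THRESHOLD = 0.8  # assumed fraction of the uplink load target


def apply_ul_mdc(flow_is_low_rate_realtime: bool,
                 cell_load: float,
                 admission_ok: bool) -> bool:
    """Return True if uplink MDC (and the corresponding transport bearers)
    should be applied for this MAC-d flow."""
    if flow_is_low_rate_realtime:
        # SRB, VoIP and similar flows get MDC as usual, unless rejected
        # by access admission.
        return admission_ok

    # High-speed non-real-time flows skip MDC when the cell load is high,
    # because the MDC gain does not justify the extra transmission resources.
    return admission_ok and cell_load < HIGH_LOAD_THRESHOLD
```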

5 HSUPA Parameters
5.1 Description
Table 5-1 HSUPA parameter description

FACTOR: This parameter specifies the factor associated with the scheduling priority indicator. This factor is used to calculate the step of rate upsizing.

HBDelaycnd: This parameter specifies the time used for the decision of the HSUPA happy bit. The decision is based on whether all the buffered user data can be transmitted at the current rate during the time specified by this parameter.

HSUPACESCHSWITCH: HSUPA CE scheduling switch.

HSUPAOLSCHSWITCH: HSUPA overload scheduling switch.

Hsupa10msSchPrdForGrant: This parameter specifies the time interval of sending HSUPA scheduling information for the 10 ms TTI when the user has a Scheduling Grant.

Hsupa10msSchPrdForNonGrant: This parameter specifies the time interval of sending HSUPA scheduling information alone for the 10 ms TTI when the user has no Scheduling Grant and its buffer length is greater than zero.

Hsupa2msSchPrdForGrant: This parameter specifies the time interval of sending HSUPA scheduling information for the 2 ms TTI when the user has a Scheduling Grant.

Hsupa2msSchPrdForNonGrant: This parameter specifies the time interval of sending HSUPA scheduling information alone for the 2 ms TTI when the user has no Scheduling Grant and its buffer length is greater than zero.

MaxTargetUlLoadFactor: Maximum target uplink load factor. It is the target load defined by the RNC and obtained by the NodeB HSUPA power control from the uplink load. This parameter is based on network planning. The cell throughput varies with the value of this parameter; the greater the value, the more interference exists. For details about this parameter, refer to 3GPP TS 25.433.

NonServToTotalEdchPwrRatio: Ratio of the non-serving E-DCH RX power to the total E-DCH RX power. This parameter indicates whether the non-serving NodeB sends RG to the UE. If the value of this parameter is too small, so is the power of the non-serving RL, which impacts the rate of a UE in the soft handover state. If the value of this parameter is too great, the non-serving RL cannot send RG to the UE even in an overload scenario. For details about this parameter, refer to 3GPP TS 25.433.
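The happy-bit decision that HBDelaycnd controls can be illustrated with the following minimal sketch. Only the delay condition described above is modeled, and the variable names are illustrative assumptions; the normative rule is specified in 3GPP TS 25.321.

```python
# Minimal sketch of the happy-bit decision controlled by HBDelaycnd: the UE
# reports "unhappy" when the buffered data cannot be transmitted at the
# current rate within the configured delay condition. Variable names are
# illustrative; the normative rule is defined in 3GPP TS 25.321.

def happy_bit(buffer_bits: int, current_rate_bps: float, hb_delay_cond_ms: int) -> bool:
    """Return True ("happy") if all buffered data can be sent at the current
    rate within hb_delay_cond_ms, otherwise False ("unhappy")."""
    if current_rate_bps <= 0:
        return False  # no usable grant: the UE cannot empty its buffer
    transmit_time_ms = buffer_bits / current_rate_bps * 1000.0
    return transmit_time_ms <= hb_delay_cond_ms
```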

5.2 Values and Ranges


Table 5-2 HSUPA parameter values and parameter ranges

FACTOR: GUI value range 0~100; actual value range 0~100; unit: per cent; MML command: SET SPIFACTOR (Mandatory); NE: RNC.

HBDelaycnd: default value D50; GUI value range D2, D10, D20, D50, D100, D200, D500, D1000; actual value range 2, 10, 20, 50, 100, 200, 500, 1000; unit: ms; MML command: ADD TYPRABHSPA (Optional); NE: RNC.

HSUPACESCHSWITCH: GUI value range OPEN, CLOSE; actual value range 0, 1; unit: none; MML command: SET MACEPARA (Optional); NE: NodeB.

HSUPAOLSCHSWITCH: GUI value range OPEN, CLOSE; actual value range 0, 1; unit: none; MML command: SET MACEPARA (Optional); NE: NodeB.

Hsupa10msSchPrdForGrant: GUI value range D10, D20, D50, D100, D200, D500, D1000; actual value range 10, 20, 50, 100, 200, 500, 1000; unit: ms; MML command: SET FRC (Optional); NE: RNC.

Hsupa10msSchPrdForNonGrant: GUI value range D10, D20, D50, D100, D200, D500, D1000; actual value range 10, 20, 50, 100, 200, 500, 1000; unit: ms; MML command: SET FRC (Optional); NE: RNC.

Hsupa2msSchPrdForGrant: GUI value range D2, D10, D20, D50, D100, D200, D500, D1000; actual value range 2, 10, 20, 50, 100, 200, 500, 1000; unit: ms; MML command: SET FRC (Optional); NE: RNC.

Hsupa2msSchPrdForNonGrant: GUI value range D2, D10, D20, D50, D100, D200, D500, D1000; actual value range 2, 10, 20, 50, 100, 200, 500, 1000; unit: ms; MML command: SET FRC (Optional); NE: RNC.

MaxTargetUlLoadFactor: default value 75; GUI value range 0~100; actual value range 0~1, step 0.01; unit: per cent; MML command: ADD CELLHSUPA (Optional); NE: RNC.

NonServToTotalEdchPwrRatio: default value 0; GUI value range 0~100; actual value range 0~1, step 0.01; unit: per cent; MML command: ADD CELLHSUPA (Optional); NE: RNC.

The default value is listed only for the optional parameters. The "-" symbol indicates no default value.

6 Reference Documents
The following lists the reference documents related to the feature:

1. 3GPP TS 25.101, "User Equipment (UE) radio transmission and reception (FDD)"
2. 3GPP TS 25.211, "Physical channels and mapping of transport channels onto physical channels (FDD)"
3. 3GPP TS 25.212, "Multiplexing and channel coding (FDD)"
4. 3GPP TS 25.213, "Spreading and modulation (FDD)"
5. 3GPP TS 25.214, "Physical layer procedures (FDD)"
6. 3GPP TS 25.877, "High Speed Downlink Packet Access (HSDPA) - Iub/Iur Protocol Aspects"
7. 3GPP TS 25.858, "Physical layer aspects of UTRA High Speed Downlink Packet Access"
8. 3GPP TS 25.301, "Radio Interface Protocol Architecture"
9. 3GPP TS 25.302, "Services provided by the physical layer"
10. 3GPP TS 25.308, "UTRA High Speed Downlink Packet Access (HSDPA); Overall description"
11. 3GPP TS 25.309, "FDD Enhanced Uplink"
12. 3GPP TS 25.321, "Medium Access Control (MAC) protocol specification"
13. 3GPP TS 25.420, "UTRAN Iur interface general aspects and principles"
14. 3GPP TS 25.423, "UTRAN Iur interface RNSAP signaling"
15. 3GPP TS 25.425, "UTRAN Iur interface user plane protocols for CCH data flows"
16. 3GPP TS 25.430, "UTRAN Iub interface: general aspects and principles"
17. 3GPP TS 25.433, "UTRAN Iub interface NBAP signaling"
18. 3GPP TS 25.435, "UTRAN Iub interface user plane protocols for CCH data flows"
19. Transmission Resource Management Parameter Description
20. Load Control Parameter Description
21. Radio Bearers Parameter Description
22. Rate Control Parameter Description
23. Power Control Parameter Description
24. Handover Parameter Description
25. Channel Parameter Description
26. Basic Feature Description of Huawei UMTS RAN11.0 V1.5
27. Optional Feature Description of Huawei UMTS RAN11.0 V1.5
