Developing Library of Internet Protocol Suite On CUDA Platform
Volume: 3, Issue: 5
ISSN: 2321-8169
2741 - 2744
Rahul Bhivare
Technical Consultant
CDAC-ACTS
Pune, India
[email protected]
Abstract - The computational power of modern Graphics Processing Units (GPUs) has made them an attractive platform for general-purpose applications, delivering significant speedups through CUDA. CUDA is a parallel computing platform and programming model capable of high performance on parallel workloads. In networking, protocol parsing is complex because it relies heavily on bit-wise operations, and packets must be parsed at each stage of the network to support packet classification and protocol implementation. A conventional CPU with few cores is not sufficient for such packet parsing. We therefore propose building a networking library for protocol parsing that offloads this compute-intensive task to CUDA-enabled GPUs, optimizing CPU usage and improving overall system performance.
Keywords - CUDA, GPGPU, Packet Parsing.
__________________________________________________*****_________________________________________________
I. INTRODUCTION
IJRITCC | May 2015, Available @ http://www.ijritcc.org
We propose building a library suite for protocol parsing on the CUDA platform. By exploiting the parallel computing capabilities of CUDA, we can improve system performance: a CUDA-enabled GPU acts as a streaming processor, so the compute-intensive protocol parsing task can be offloaded to it.
The work is divided into three phases:
Phase 1: parsing performed by an Intel CPU
Phase 2: parsing performed by a CUDA GPU
Phase 3: possible test cases
II. PROPOSED SYSTEM
Memory is allocated on the GPU using the cudaMalloc() API. Since each GPU has its own dedicated memory, the programmer must decide how much memory is required and allocate it. The data is then copied from the CPU to the GPU using cudaMemcpy() with the cudaMemcpyHostToDevice flag. Once the packets have been transferred to the CUDA-enabled GPU, each packet is processed by one thread, following the SIMT (Single Instruction, Multiple Threads) model, which determines the type of each packet. After the type is determined, the packet is processed accordingly and the result is sent back to the host. Figure 5 shows the flow of the system and how each packet is processed by the GPU.
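The allocate-copy-launch-copy-back flow just described can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the fixed packet slot size, packet count, kernel name, and field offsets are all assumptions, and error checking is omitted for brevity.

```cuda
#include <cuda_runtime.h>
#include <stdlib.h>

#define PKT_SIZE 64        /* assumed fixed slot per packet */
#define NUM_PKTS 1024      /* assumed batch size */

/* One thread classifies one packet (SIMT): it reads the EtherType field
 * at the illustrative offset and writes a type code to the result array. */
__global__ void classify_kernel(const unsigned char *pkts, int *types, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    const unsigned char *p = pkts + i * PKT_SIZE;
    unsigned short ethertype = (unsigned short)(p[12] << 8) | p[13];
    types[i] = (ethertype == 0x0800) ? 1 : 0;   /* 1 = IPv4, 0 = other */
}

int main(void)
{
    size_t bytes = (size_t)NUM_PKTS * PKT_SIZE;
    unsigned char *h_pkts = (unsigned char *)calloc(bytes, 1);
    int *h_types = (int *)calloc(NUM_PKTS, sizeof(int));

    /* Allocate dedicated GPU memory for packets and results. */
    unsigned char *d_pkts; int *d_types;
    cudaMalloc(&d_pkts, bytes);
    cudaMalloc(&d_types, NUM_PKTS * sizeof(int));

    /* Copy the packet batch from host (CPU) to device (GPU). */
    cudaMemcpy(d_pkts, h_pkts, bytes, cudaMemcpyHostToDevice);

    /* One thread per packet, 256 threads per block. */
    classify_kernel<<<(NUM_PKTS + 255) / 256, 256>>>(d_pkts, d_types, NUM_PKTS);

    /* Copy the classification results back to the host. */
    cudaMemcpy(h_types, d_types, NUM_PKTS * sizeof(int), cudaMemcpyDeviceToHost);

    cudaFree(d_pkts); cudaFree(d_types);
    free(h_pkts); free(h_types);
    return 0;
}
```

Because every packet slot is processed by its own thread with no inter-thread dependencies, the batch size, not the per-packet work, determines how much of the GPU is kept busy.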
[Figure: bar chart of results across Image, Audio, Video, and PDF data types (y-axis 0-70)]
III. RESULTS

IV. CONCLUSION