
Title of your Report

Given Name Surname
Faculty of Computer Science & Engg.
AI Research Group
GIK Institute of Engg. Sciences & Tech.
Topi, Khyber Pakhtunkhwa, Pakistan.
email address or ORCID

Given Name Surname
Faculty of Computer Science & Engg.
GIK Institute of Engg. Sciences & Tech.
Topi, Khyber Pakhtunkhwa, Pakistan.
email address or ORCID

Abstract—*CRITICAL: Do Not Use Symbols, Special Characters, Footnotes, or Math in Report Title or Abstract.
Two sentences of background and the importance of this topic.
Two sentences defining the gap analysis (work not done in the field) and your problem statement.
Two sentences of results and your findings.
One sentence for the significance of the results and your contribution.
In total, this section must consist of 180 words.

Index Terms—keyword1, keyword2, keyword3, keyword4

I. GENERAL INSTRUCTIONS

Comment out this section after completing your write-up.
1) You must have exactly the mentioned sections with the mentioned number of paragraphs.
2) DO NOT DELETE ANY TEXT FROM THIS REPORT. JUST COMMENT OUT AND WRITE THE RELEVANT TEXT UNDER EACH COMMENT.
3) Observe the report page limits: a minimum of 5 pages and a maximum of 7 pages of text, both without references.
4) In the case of figures, keep your raw data table also stored as an Excel file in this repository.
5) Each paragraph must consist of 7-10 sentences.
6) Add references where required.
7) Total references should be between 20 and 30.
8) Use recent references, with 80% of references later than 2019.
9) Use Google Scholar for finding references, not Google.

II. INTRODUCTION

One paragraph introducing the field and the topic of interest.
One paragraph on the importance of the selected topic.
One paragraph on why it is significant to work on this field/topic today.

A. Related Work
One paragraph defining the work that has been done in this field, with a table summarizing the work done in the literature, as shown below in Table I.

B. Gap Analysis
One paragraph defining what has not been done or what is still missing in the field (gap analysis).

C. Problem Statement
The following are the main research questions addressed in this study.
1) Research Question 1.
2) Research Question 2.
3) Research Question 3.

D. Novelty of Our Work
One paragraph explaining your approach and the novelty/contributions of your work.

E. Our Solutions
One paragraph on what you are doing in this report (your contributions) and a short one-to-two line summary of your results.

III. METHODOLOGY

A. Dataset
One paragraph and one figure representing your dataset; also give a reference (citation) for where the dataset is available, and the labels/ground truth, as shown in Figure 2 or in Figure 1.

B. Overall Workflow
One paragraph defining your methodology through a flow diagram of your work, as shown in Figure 4 or in Figure 3.

C. Experimental Settings
(Optional) One paragraph for hyper-parameter settings and network architecture, as shown in Table II.
(Optional) One paragraph for the experimental settings of your method and the competing methods (if any).

IV. RESULTS

Three (or more) paragraphs explaining your results, with at least one paragraph targeting each research question and at least one figure (preferably) or table (where a figure is not possible). This section must contain only results and nothing else (not your own opinion or any sort of discussion of the quality of the results). A sample figure is shown in Figure 5.
TABLE I
LITERATURE REVIEW TABLE SHOWING THE CONTRIBUTIONS OF VARIOUS AUTHORS FOR QUANTIZATION OF NETWORKS.

| Paper Name | Conv. Layer | Skip Layer | Trans. Layer | Fully Conn. Layer | FCNs Used | L2 Error Minim. | Signal Quantized | Dataset used | No. of bits | Layerwise sens. Analysis | Sem. segm. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Vanhoucke et al. [1] | ✓ | | | ✓ | | × | | | | | |
| Courbariaux et al. [2] | ✓ | | | ✓ | | | | MNIST, SVHN, CIFAR-10 | 10 | | |
| Gupta et al. [3] | ✓ | | | ✓ | | | | MNIST, CIFAR-10 | 12 | | |
| Proposed Approach | ✓ | ✓ | ✓ | ✓ | ✓ | × | × | Pascal VOC 2012 | 2, 3, 4, 5 | ✓ | ✓ |

Fig. 1. Sample images present in the dataset, their pixel-wise labels, and the resulting pixel labels from the floating-point network, the hybrid quantized network, and two configurations of quantized networks. The legend displays the color and class (name) of the object to be identified in each image. Five sample images containing an aeroplane, dogs, a person, and a chair are shown along with their classification. The data and the pixel labels (ground truth) are taken from the Pascal VOC 2012 dataset.

TABLE II
CONFIGURATION TABLE SHOWING THE NETWORK CONFIGURATION OF THE FCN USED IN THIS STUDY. THE TABLE SHOWS THE VARIOUS CONFIGURATION SETTINGS USED FOR FCN8.

Network Configuration
Epochs: 50
Learning rate: 0.0001
Mini batch size: 20
Optimizer: SGD
Momentum: 0.9
Weight decay: 0.0002
L2 Regularization: None
Samples in training set: 8498
Samples in validation set: 786

V. DISCUSSION

Three to four paragraphs discussing the results (at least one paragraph for each research question). Your opinion on how good or bad the results are. Draw inferences from the results here. Explain the novelty of your contributions and what was missing that you have explored here. Any other point you would like to discuss related to this study.

A. Future Directions
One paragraph on what, in your opinion, the future directions are for continuing this study.

VI. CONCLUSION

One paragraph related to the conclusions drawn from your whole experimentation.
In total, this section must consist of 240-260 words.
References will be added automatically by using the following lines. Add the relevant citations in the attached bibliogrpahy.bib file. Get help from me where you want to work on citations.

REFERENCES

[1] V. Vanhoucke, A. Senior, and M. Z. Mao, "Improving the speed of neural networks on CPUs," in Deep Learning and Unsupervised Feature Learning Workshop, NIPS 2011, 2011.
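The optimizer settings in Table II correspond to the standard SGD-with-momentum update. As a minimal sketch of one update on a single weight, using the table's values (plain Python with a hypothetical helper, not the study's actual training code, which would use a deep-learning framework's optimizer):

```python
# Sketch of one SGD step with momentum and weight decay, using the
# hyper-parameters from Table II. Illustrative only.
LR = 0.0001            # learning rate (Table II)
MOMENTUM = 0.9         # momentum (Table II)
WEIGHT_DECAY = 0.0002  # weight decay (Table II)

def sgd_step(w, grad, velocity):
    """Return the updated (weight, velocity) pair for one parameter."""
    g = grad + WEIGHT_DECAY * w          # add the weight-decay term
    velocity = MOMENTUM * velocity + g   # accumulate momentum
    w = w - LR * velocity                # descend along the velocity
    return w, velocity

w, v = 0.5, 0.0
w, v = sgd_step(w, grad=0.2, velocity=v)
```

In a real training loop this step would be applied to every parameter for each of the 20-sample mini-batches, over 50 epochs.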
[Fig. 2 content: columns show sample RGB images from the dataset, ground truth, the floating point network (no quantization), the hybrid quantized network, and quantized networks (config 1 and config 4); color coding legend for objects: Aeroplane, Background, Bike, Bird, Boat, Bottle, Bus, Car, Cat, Chair, Cow, Dog, Horse, Motor Bike, Person, Plant, Sheep, Sofa, Train, Television, Table.]

Fig. 2. Sample images present in the dataset, their pixel-wise labels, and the resulting pixel labels from the floating-point network, the hybrid quantized network, and two configurations of quantized networks. The legend displays the color and class (name) of the object to be identified in each image. Five sample images containing an aeroplane, dogs, a person, and a chair are shown along with their classification. The data and the pixel labels (ground truth) are taken from the Pascal VOC 2012 dataset.

[2] M. Courbariaux, Y. Bengio, and J.-P. David, "Training deep neural networks with low precision multiplications," arXiv preprint arXiv:1412.7024, 2014.
[3] S. Gupta, A. Agrawal, K. Gopalakrishnan, and P. Narayanan, "Deep learning with limited numerical precision," in International Conference on Machine Learning, 2015, pp. 1737–1746.
Fig. 3. Flowchart proposed for FCN-8 quantization and the comparison pipeline followed (for the quantization techniques, i.e., Direct Quantization, Lloyd's Quantizer, and L2 error minimization) in the current study, based on pixel accuracy, mean IoU, and mean accuracy.
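Lloyd's quantizer named in the caption can be sketched as a 1-D k-means-style iteration: alternate between assigning each weight to its nearest codeword and moving each codeword to the mean of its assigned weights. This is an illustrative sketch of the textbook Lloyd-Max procedure with a hypothetical helper name, not the study's implementation:

```python
# Sketch of a 1-D Lloyd (Lloyd-Max) quantizer for a weight vector.
def lloyd_quantizer(weights, codebook, iters=20):
    """Refine a codebook of quantization levels toward minimum distortion."""
    for _ in range(iters):
        # Assignment step: index of the nearest codeword for each weight.
        assign = [min(range(len(codebook)), key=lambda k: abs(w - codebook[k]))
                  for w in weights]
        # Update step: each codeword moves to the mean of its cluster.
        for k in range(len(codebook)):
            cluster = [w for w, a in zip(weights, assign) if a == k]
            if cluster:
                codebook[k] = sum(cluster) / len(cluster)
    return codebook

cb = lloyd_quantizer([0.0, 0.1, 0.9, 1.0], codebook=[0.2, 0.8])
```

For m quantization levels the codebook would hold m entries; the pipeline's lookup table would then map each weight to its final codeword.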

[Fig. 4 content: flow diagram from images of the Pascal VOC 2012 dataset through training and validation data to an untrained and then trained FCN; quantization levels (m = 3, 7, 15 or 31) feed direct quantization, a lookup-table-based LQ codebook (layer-wise codebook partitions), and L2 error minimization, each producing a quantized network at level M; layer-wise sensitivity analysis selects the best m for each layer, the resulting configurations are retrained with a modified backpropagation algorithm, and the algorithms are compared on accuracy measures (pixel accuracy, mean IoU, mean accuracy) and induced sparsity (size of the quantized network).]
Fig. 4. Flowchart proposed for FCN-8 quantization and the comparison pipeline followed (for the quantization techniques, i.e., Direct Quantization, Lloyd's Quantizer, and L2 error minimization) in the current study, based on pixel accuracy, mean IoU, and mean accuracy.
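Of the techniques the pipeline compares, direct quantization is the simplest: snap each weight to the nearest of m uniformly spaced levels spanning the weight range. A minimal sketch (hypothetical helper, not the study's code; m = 3, 7, 15 or 31 matches the levels named in the flow diagram):

```python
# Sketch of direct (uniform) quantization of a weight vector to m levels.
def direct_quantize(weights, m):
    """Map each weight to the nearest of m uniformly spaced levels."""
    lo, hi = min(weights), max(weights)
    if m < 2 or hi == lo:
        return [lo] * len(weights)
    step = (hi - lo) / (m - 1)
    # Uniformly spaced codebook of m levels over the weight range.
    levels = [lo + i * step for i in range(m)]
    # Nearest-level assignment for each weight.
    return [min(levels, key=lambda c: abs(w - c)) for w in weights]

q = direct_quantize([-1.0, -0.4, 0.1, 0.9, 1.0], m=3)
```

Lloyd's quantizer and L2 error minimization differ from this only in how the codebook levels are chosen; the nearest-level assignment is the same.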
[Fig. 5 content: panels A) Mean IoU, B) Pixel accuracy, and C) Mean accuracy, each split into Encoder and Decoder subplots; x-axis: # of bits per weight (2-5); y-axis: percentage value (0-100); series: FP, LQ, L2.]
Fig. 5. Comparison of the three quantization techniques, Fixed Point (FP), Lloyd's quantizer (LQ), and L2 error minimization (L2), on the three performance metrics, divided into encoder and decoder layers. Mean IoU is shown for the three techniques in Panel A), pixel accuracy in Panel B), and mean accuracy in Panel C). Note that FP is consistently worse than both LQ and L2, while L2 and LQ are of comparable accuracy. Also, FP is most sensitive to the number of bits in all metrics, while L2 and LQ are relatively insensitive.
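The three metrics plotted in Figure 5 are standard confusion-matrix summaries for semantic segmentation. As a minimal sketch, assuming a per-class confusion matrix rather than the study's actual evaluation code:

```python
# Sketch of pixel accuracy, mean accuracy, and mean IoU computed from a
# per-class confusion matrix C, where C[i][j] counts pixels of true
# class i predicted as class j. Illustrative only.
def segmentation_metrics(C):
    n = len(C)
    total = sum(sum(row) for row in C)
    tp = [C[i][i] for i in range(n)]                           # correct pixels per class
    gt = [sum(C[i]) for i in range(n)]                         # ground-truth pixels per class
    pred = [sum(C[j][i] for j in range(n)) for i in range(n)]  # predicted pixels per class
    pixel_acc = sum(tp) / total
    mean_acc = sum(tp[i] / gt[i] for i in range(n)) / n
    mean_iou = sum(tp[i] / (gt[i] + pred[i] - tp[i]) for i in range(n)) / n
    return pixel_acc, mean_acc, mean_iou

C = [[8, 2],
     [1, 9]]
pa, ma, miou = segmentation_metrics(C)
```

For Pascal VOC 2012 the matrix would be 21 x 21 (20 object classes plus background), accumulated over all validation pixels.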
