ABSTRACT
With the growth of remote sensing applications, hyperspectral images have been used in a large number of domains, and much work has been done on extracting features from remote sensing data and on accurate learning for classifying the classes. The spectral and spatial information of the images allows the results to be classified with improved accuracy. Fusion of spatial and spectral data is an effective way of improving the accuracy of hyperspectral image classification. In this work, we propose a hyperspectral image classification method based on spectral and spatial details, using neural network classifiers; a multi-neuron learning approach is used to classify the remote sensing images with specific class labels. The spectral and spatial features are extracted from boundary values using Decision Boundary Feature Extraction (DBFE). These extracted features are used to train convolutional neural networks (CNNs) to improve the accuracy of labelling the classes. The methodology entails training with an embedding regularizer added to the loss function used to train the neural networks. Training is done using several layers with additional balancing constraints to avoid falling into local minima. In the testing phase, each remote sensing image is classified while avoiding false ground-truth maps. Experimental results show improved accuracy in class assignment compared with other state-of-the-art algorithms.
CHAPTER 1
1. INTRODUCTION
Land cover is an elementary variable that impacts on and links many parts of the human
and physical environments. Thus, information on the spatial distribution of the land cover classes
is of vital importance for the investigation of environmental processes. Satellite remote sensing
techniques are widely used for environmental monitoring. Hyperspectral imagery is a
valuable source from which one can extract detailed information about earth surface phenomena
and objects. In fact, the sensors are characterized by a very high spectral resolution that usually
results in hundreds of narrow spectral channels. Classification of land cover hyperspectral
images is a very challenging task due to the unfavourable ratio between the number of spectral
bands and the number of training samples. The focus in many applications is to investigate an
effective classifier in terms of accuracy. The conventional multiclass classifiers have the ability to map the class of interest, but considerable effort and large training sets are required to fully describe the classes spectrally. The Support Vector Machine (SVM) is suggested in this paper to deal with the multiclass problem of hyperspectral imagery. The attraction of this method is that it locates the optimal hyperplane between the class of interest and the rest of the classes, separating them in a new high-dimensional feature space by taking into account only the training samples that lie on the edge of the class distributions, known as support vectors. The use of kernel functions makes the classifier more flexible by making it robust against outliers.
1.2.1 DISADVANTAGES
The proposed work focuses on providing a feature learning and sparse representation based approach to handle irregular class boundaries in hyperspectral image classification. We therefore implement the system using neural networks with feature representation. In the proposed framework, supervised Feature Extraction (FE) is first executed on the input data, and the first features, in order of cumulative eigenvalues, are retained. The features are extracted from the Decision Boundary Feature Matrix (DBFM). In order to attain the same classification accuracy as in the original space, it is essential to retain the eigenvectors of the decision boundary feature matrix corresponding to nonzero eigenvalues. The performance of this technique does not deteriorate even if there is no difference in the mean vectors or covariance matrices. The efficiency of DBFE is, however, highly dependent on the quality and number of training samples. We also implement a convolutional neural network algorithm to classify the pixels with improved accuracy. CNNs are feed-forward neural networks consisting of various combinations of convolutional layers, max pooling layers, and fully connected layers; they take advantage of spatially local correlation by enforcing a local connectivity pattern between neurons of adjacent layers. Convolutional layers alternate with max pooling layers, mimicking the behaviour of complex and simple cells in the mammalian visual cortex. A CNN consists of one or more pairs of convolution and max pooling layers and finally ends with a fully connected neural network. The hierarchical structure of CNNs has gradually been proved to be the most efficient and successful way to learn visual representations.
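In outline, the proposed pipeline can be sketched as below; every function here is a hypothetical placeholder for a stage described above, not an implementation of it.

using System.Drawing;

static class ProposedPipeline
{
    // Hypothetical stage signatures for the proposed system.
    public static int[] Run(Bitmap satelliteImage)
    {
        Bitmap gray = Preprocess(satelliteImage);      // grayscale conversion + median filtering
        double[][] features = ExtractDbfe(gray);       // DBFE spectral-spatial features
        return ClassifyWithCnn(features);              // CNN assigns a class label per pixel
    }

    static Bitmap Preprocess(Bitmap img) { /* elided */ return img; }
    static double[][] ExtractDbfe(Bitmap img) { /* elided */ return new double[0][]; }
    static int[] ClassifyWithCnn(double[][] features) { /* elided */ return new int[0]; }
}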
1.3.1 ADVANTAGES
• Image Acquisition
• Preprocessing – In this module, we convert the RGB image into a grayscale image.
• Features Extraction – In this module, we perform feature extraction steps to extract low-level and high-level features.
• Classification
1.5.1 TITLE: SALIENT BAND SELECTION FOR HYPERSPECTRAL IMAGE CLASSIFICATION VIA MANIFOLD RANKING
AUTHOR: QI WANG
In this paper, we propose a novel method of MR-based band selection. Instead of rating the similarities in the Euclidean space, the manifold structure is taken into consideration to properly assess the hyperspectral data structure. The associated measurement is input to a ranking operation, and a subsequent band selection is based on the obtained ranking score. This is a novel alternative that reformulates hyperspectral band selection as a ranking problem. We estimate the inter-band distance in a batch manner. Most existing techniques for band selection compute the distance between two individual bands, and the calculated results then serve as guidance for band selection. However, this strategy is not suitable for sequential selection, because the band selected at the current step might resemble one selected at a previous step. In our implementation, we treat the already selected batch of bands as the query, and the examined band is compared with the whole batch. This ensures that each further selected band is distinct from the previously selected ones. We provide a thorough comparison using different band selection methods and classifiers. In order to validate the effectiveness of the proposed method, we compare it with several recently presented methods. Besides, we also test these methods on typical classifiers that are frequently used for HSI classification.
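To make the batch-query idea concrete, the following is a minimal sketch that greedily selects the band most distinct from the already selected batch. It is not the paper's algorithm: plain Euclidean distance stands in for manifold ranking, and the data layout and names are illustrative assumptions.

using System;
using System.Collections.Generic;
using System.Linq;

static class BandSelection
{
    // bands[i] holds band i as a flattened pixel vector. At each step the already
    // selected batch acts as the query; we pick the candidate band whose minimum
    // distance to the batch is largest, i.e. the most distinct band.
    public static List<int> Select(double[][] bands, int count)
    {
        var selected = new List<int> { 0 };          // seed with the first band
        while (selected.Count < count)
        {
            int best = -1;
            double bestScore = double.MinValue;
            for (int i = 0; i < bands.Length; i++)
            {
                if (selected.Contains(i)) continue;
                double score = selected.Min(j => Euclidean(bands[i], bands[j]));
                if (score > bestScore) { bestScore = score; best = i; }
            }
            selected.Add(best);
        }
        return selected;
    }

    static double Euclidean(double[] a, double[] b)
    {
        double s = 0;
        for (int k = 0; k < a.Length; k++) s += (a[k] - b[k]) * (a[k] - b[k]);
        return Math.Sqrt(s);
    }
}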
1.5.2 TITLE: ADVANCES IN HYPERSPECTRAL IMAGE CLASSIFICATION
AUTHOR: GUSTAVO CAMPS-VALLS
The technological evolution of optical sensors over the last few decades has provided
remote sensing analysts with rich spatial, spectral, and temporal information. In particular, the
increase in spectral resolution of hyperspectral images and infrared sounders opens the doors to
new application domains and poses new methodological challenges in data analysis.
Hyperspectral images (HSI) allow one to characterize the objects of interest (for example, land-cover
classes) with unprecedented accuracy, and to keep inventories up-to-date. Improvements in
spectral resolution have called for advances in signal processing and exploitation algorithms.
This paper focuses on the challenging problem of hyperspectral image classification, which has
recently gained in popularity and attracted the interest of other scientific disciplines such as
machine learning, image processing and computer vision. In the remote sensing community, the
term 'classification' is used to denote the process that assigns single pixels to a set of classes,
while the term 'segmentation' is used for methods aggregating pixels into objects, then assigned
to a class. Despite all these commonalities, the analysis of hyperspectral images turns out to be
more difficult, especially because of the high dimensionality of the pixels, the particular noise
and uncertainty sources observed, the high spatial and spectral redundancy, and their potential
non-linear nature. Such nonlinearities can be related to a plethora of factors, including the multi-
scattering in the acquisition process, the heterogeneities at subpixel level, as well as the impact
of atmospheric and geometric distortions.
1.5.3 TITLE: HYPERSPECTRAL IMAGE CLASSIFICATION USING DEEP PIXEL-
PAIR FEATURES
AUTHOR: WEI LI
Hyperspectral image (HSI) classification, which aims at categorizing pixels into one of
several land-use/land-cover classes, is an important application in the remote sensing field. To
date, numerous HSI classification techniques have been proposed. Among these approaches, the
support vector machine (SVM) is capable of discriminating two classes by fitting an optimal separating hyperplane to the training data within a multidimensional feature space, and has
shown excellent performance in HSI classification even with limited training samples. An
improved SVM exploited the properties of Mercer's conditions to construct a composite kernel (CK) for the combination of both spectral and spatial information, which is referred to as SVM-CK.
1.5.4 TITLE: HYPERSPECTRAL IMAGE CLASSIFICATION USING WEIGHTED JOINT COLLABORATIVE REPRESENTATION
AUTHOR: MINGMING XIONG
However, we notice that joint collaborative representation (JCR) takes the surrounding pixels with the same weights, which is
suboptimal, particularly in heterogeneous regions where the central pixel and neighbouring
pixels do not belong to the same class. In such a case, only those neighbouring pixels that are associated with the central pixel should be taken into consideration. Nevertheless, removing the irrelevant pixels is not easy and may increase the computational complexity.
Therefore, in this letter, we propose a simple but effective method to describe the contribution from a neighbouring pixel with adaptive weights. In the resulting WJCR,
more appropriate weights are determined by using a Gaussian kernel function. The WJCR
provides the benefit of efficiently extracting more accurate spectral–spatial features, which is
particularly useful to data with a heterogeneous image scene.
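As a simple illustration of the adaptive-weight idea, the sketch below computes Gaussian-kernel weights for the pixels of a neighbourhood window from their spectral distance to the central pixel. The bandwidth h and the normalization are illustrative assumptions, not the exact WJCR formulation.

using System;

static class AdaptiveWeights
{
    // weight_i = exp(-||x_c - x_i||^2 / (2 h^2)), normalized to sum to one:
    // spectrally similar neighbours contribute more than dissimilar ones.
    public static double[] NeighbourWeights(double[] center, double[][] neighbours, double h)
    {
        double[] w = new double[neighbours.Length];
        double sum = 0;
        for (int i = 0; i < neighbours.Length; i++)
        {
            double d2 = 0;
            for (int k = 0; k < center.Length; k++)
            {
                double diff = center[k] - neighbours[i][k];
                d2 += diff * diff;
            }
            w[i] = Math.Exp(-d2 / (2 * h * h));
            sum += w[i];
        }
        for (int i = 0; i < w.Length; i++) w[i] /= sum;   // normalize the weights
        return w;
    }
}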
1.5.5 TITLE: SUBSPACE-BASED SUPPORT VECTOR MACHINES FOR HYPERSPECTRAL IMAGE CLASSIFICATION
AUTHOR: LIANRU GAO
Given a training set mapped into a space by some mapping, the SVM separates the data
by an optimal hyperplane. If the data are linearly separable, we can select two hyperplanes in a
way that they separate the data and there are no points between them, and then try to maximize
their distance. The region bounded by them is called the margin. If the data are not linearly
separable, soft margin classification with slack variables can be used to allow misclassification
of difficult or noisy cases. However, the most widely used approach in SVM classification is to
combine soft margin classification with a kernel trick that allows separation of the classes in a
higher dimensional space by means of a nonlinear transformation. In other words, the SVM used
with a kernel function is a nonlinear classifier, where the nonlinear ability is included in the
kernel and different kernels lead to different types of SVMs. The extension of SVMs to
multiclass problems is usually done by combining several binary classifiers [20]. In this letter,
our main contribution is to incorporate a subspace-projection-based approach to the classic SVM
formulation, with the ultimate goal of having a more consistent estimation of the class
distributions. The resulting classification technique, called SVMsub, is shown in this work to be
robust to the presence of noise, mixed pixels, and limited training samples. In this letter, we
extend the subspace-projection-based concept to support vector machines (SVMs), a very
popular technique for remote sensing image classification. For that purpose, we construct the
SVM nonlinear functions using the subspaces associated to each class. The resulting approach,
called SVMsub, is experimentally validated using a real hyperspectral data set collected using
the National Aeronautics and Space Administration's Airborne Visible/Infrared Imaging
Spectrometer. The obtained results indicate that the proposed algorithm exhibits good
performance in the presence of very limited training samples.
CHAPTER 3
3.1 INTRODUCTION
3.1.1 Purpose
The purpose of this document is to present a detailed description of the hyperspectral image processing system, which classifies pixels to identify the type of land cover, such as grass, building, and tree.
Document conventions:
Heading: font size 16
Sub-heading: font size 14
Content: font size 12
The document is intended for users and administrators. The SRS document also contains some information about the system, such as its scope, system features, assumptions and dependencies, and other useful information. The reader is advised to read the document carefully to understand the goal of the system, its advantages, and how the system will work.
3.1.4 Scope
This project is aimed at the classification of satellite images with improved accuracy by using a deep learning algorithm.
In various places, hyperspectral image processing can be used to identify the various land cover types.
The user can access the system and can only make changes in the dataset.
Our system limits itself to .NET as the programming language and Visual Studio as the IDE.
A simple video of how the system works will be included in the package in recorded document format.
The algorithm used is efficient enough to analyse the various types of hyperspectral pixels. The size of the dataset is limited according to the RAM of the system.
The hyperspectral image processing project is required to handle the following:
3.4.1.2 PREPROCESSING
Pre-processing is a common name for operations on images at the lowest level of abstraction; both input and output are intensity images. The aim of pre-processing is an improvement of the image data that suppresses unwanted distortions or enhances some image features important for further processing. In this pre-processing step, we convert the RGB image into a grayscale image and also reduce noise using a median filter.
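Using the AForge.NET filters already referenced in the coding section (Section 6.2), this step can be sketched as follows; the file name and variable names are illustrative:

using System.Drawing;
using AForge.Imaging.Filters;

// Convert the uploaded RGB image to grayscale, then suppress impulse noise
// with a 3x3 median filter.
Bitmap rgb = (Bitmap)Image.FromFile("input.png");
Bitmap gray = Grayscale.CommonAlgorithms.BT709.Apply(rgb);   // BT.709 luminance weights
Bitmap denoised = new Median(3).Apply(gray);                 // 3x3 median window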
3.4.1.4 CLASSIFICATION
The executing time or CPU time of a given task is defined as the time spent by the system executing that task, including the time spent executing run-time services on its behalf. Hence the system maintains the starting time and ending time of each process, which can be helpful in finding the total execution time.
A use case diagram depicts the specification of a use case. It can portray the different types of users of a system and the use cases, and will often be accompanied by other types of diagrams as well.
[Use case diagram: the User interacts with the System through the use cases Image Acquisition, Preprocessing, Features Extraction, Classification, and Type of Land.]
[Sequence diagram messages: 1: Upload image(), 3: Noise filtering(), 4: Color features(), 5: Shape Feature(), 6: CNN algorithm().]
The diagram shows the behaviour of individual objects as well as the overall operation of the system in real time. Objects are shown as rectangles with naming labels inside. These labels are preceded by colons and may be underlined. The relationships between the objects are shown as lines connecting the rectangles, and the messages between objects are shown as arrows connecting the relevant rectangles. An activity diagram represents the workflow of the system; it is basically a flowchart representing the flow from one activity to another.
[Activity diagram: Upload image → Preprocessing → Features Extraction → Classification.]
In software engineering, a class diagram in UML is a type of static structure diagram that describes the structure of a system by showing the system's classes, their attributes, operations, and the relationships among objects.
[Class diagram: Image_Acquisition (+Satellite image; +Upload image(); +Preprocessing()); Features_Extraction (+Features; +Features Extraction(); +Color and Shape features()); Image_classification (+Features Values; +CNN classification(); +Type of Land()).]
Decide X ∈ ω1 if h(X) < t; otherwise decide X ∈ ω2, where h(X) = −ln( p(X|ω1) / p(X|ω2) ) and t = ln( P(ω1) / P(ω2) ).
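For intuition, a univariate numerical sketch of this rule with Gaussian class-conditional densities follows; all names and parameter choices here are illustrative, not part of the referenced method.

using System;

static class DecisionRule
{
    // Decide ω1 if h(x) < t, with h(x) = -ln(p(x|ω1)/p(x|ω2)) and t = ln(P(ω1)/P(ω2)).
    public static int Classify(double x,
                               double mean1, double var1, double prior1,
                               double mean2, double var2, double prior2)
    {
        double logP1 = -0.5 * Math.Log(2 * Math.PI * var1) - (x - mean1) * (x - mean1) / (2 * var1);
        double logP2 = -0.5 * Math.Log(2 * Math.PI * var2) - (x - mean2) * (x - mean2) / (2 * var2);
        double h = logP2 - logP1;                 // h(x) = -ln(p(x|ω1)/p(x|ω2))
        double t = Math.Log(prior1 / prior2);
        return h < t ? 1 : 2;                     // class decided by the rule above
    }
}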
Fig. 2 shows examples of the discriminantly informative and discriminantly redundant features. It was shown that discriminantly informative features and discriminantly redundant features are related to the decision boundary and can be extracted from the decision boundary. It was additionally shown that discriminantly informative feature vectors have a component that is normal to the decision boundary at at least one point on the decision boundary, and that discriminantly redundant feature vectors are orthogonal to the vector normal to the decision boundary at every point on the decision boundary.
with CC(f) being the connected components of the generic image f. There is an inclusion relationship among the connected components extracted via either the upper or lower level sets (belonging to U(f) or L(f), respectively). This property permits the association of a node in the tree with each connected component and thus represents the image as a hierarchical structure: the max-tree and min-tree structures represent, respectively, the components in U(f) and L(f) with the inclusion relations obtained through the thresholding operations. Attribute filters are shape preserving, since they never introduce new edges in an image, and they operate on regions according to the result of a binary predicate P. In particular, the filtering criteria usually decide whether the value of an attribute α of a given connected component CC verifies a predicate: P = α(CC) ≥ λ with α(CC), λ ∈ R or Z, where λ is a threshold value. When attribute filters are applied to the tree representation of the image, the operator leads to a pruning of the tree by removing the nodes whose associated regions do not satisfy P. Two different filtering approaches have been proposed: pruning the tree by removing entire branches, and pruning without removing entire branches.
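As a concrete, deliberately simplified example of such a filter, the sketch below performs an area opening on a binary image: the attribute α is the component area, the predicate is P = area(CC) ≥ λ, and components failing P are removed. A plain flood fill stands in for the max-tree machinery described above.

using System.Collections.Generic;

static class AttributeFilter
{
    // Keep only the connected components whose area is at least lambda.
    public static bool[,] AreaOpen(bool[,] img, int lambda)
    {
        int h = img.GetLength(0), w = img.GetLength(1);
        var result = new bool[h, w];
        var seen = new bool[h, w];
        int[] dy = { -1, 1, 0, 0 }, dx = { 0, 0, -1, 1 };   // 4-connectivity
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
            {
                if (!img[y, x] || seen[y, x]) continue;
                // Flood-fill one connected component and record its pixels.
                var pixels = new List<(int, int)>();
                var stack = new Stack<(int, int)>();
                stack.Push((y, x)); seen[y, x] = true;
                while (stack.Count > 0)
                {
                    var (cy, cx) = stack.Pop();
                    pixels.Add((cy, cx));
                    for (int k = 0; k < 4; k++)
                    {
                        int ny = cy + dy[k], nx = cx + dx[k];
                        if (ny >= 0 && ny < h && nx >= 0 && nx < w &&
                            img[ny, nx] && !seen[ny, nx])
                        { seen[ny, nx] = true; stack.Push((ny, nx)); }
                    }
                }
                if (pixels.Count >= lambda)                 // the predicate P holds
                    foreach (var (py, px) in pixels) result[py, px] = true;
            }
        return result;
    }
}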
6.1.2 CLASSIFICATION
A CNN consists of one or more pairs of convolution and max pooling layers and finally ends with a fully connected neural network. The hierarchical structure of CNNs has gradually been proved to be the most efficient and successful way to learn visual representations. The fundamental challenge in such visual tasks is to model the intra-class appearance and shape variation of objects. Hyperspectral data with hundreds of spectral channels can be illustrated as 2D curves. We can see that the curve of every class has its own visual shape, different from the other classes, although it is relatively difficult to distinguish some classes with the human eye (e.g., gravel and self-blocking bricks). We know that CNNs can achieve competitive and even better performance than human beings in some visual problems, and this capability inspires us to study the possibility of applying CNNs to HSI classification using the spectral signatures. CNNs vary in how the convolutional and max pooling layers are realized and how the nets are trained.
As illustrated in Fig. 4, the net contains five layers with weights, including the input layer, the
convolutional layer C1, the max pooling layer M2, the full connection layer F3, and the output layer.
Assuming 𝜃 represents all the trainable parameters (weight values), 𝜃 = {𝜃𝑖} and 𝑖 = 1, 2, 3, 4, where 𝜃𝑖 is
the parameter set between the (𝑖−1)th and the 𝑖th layer. In HSI, each HSI pixel sample can be regarded as
a 2D image whose height is equal to 1 (as 1D audio inputs in speech recognition). Therefore, the size of
the input layer is just (𝑛1, 1), and 𝑛1 is the number of bands. The first hidden convolutional layer C1
filters the 𝑛1 × 1 input data with 20 kernels of size 𝑘1 × 1. Layer C1 contains 20 × 𝑛2 × 1 nodes, and 𝑛2 =
𝑛1 − 𝑘1 + 1. There are 20 × (𝑘1 + 1) trainable parameters between layer C1 and the input layer. The max
pooling layer M2 is the second hidden layer, and the kernel size is (𝑘2, 1). Layer M2 contains 20 × 𝑛3 × 1
nodes, and 𝑛3 = 𝑛2/𝑘2. There is no parameter in this layer. The fully connected layer F3 has 𝑛4 nodes and
there are (20 × 𝑛3 + 1) × 𝑛4 trainable parameters between this layer and layer M2. The output layer has
𝑛5 nodes, and there are (𝑛4 + 1) × 𝑛5 trainable parameters between this layer and layer F3. Consequently,
the architecture of our proposed CNN classifier totally has 20 × (𝑘1 + 1) + (20 × 𝑛3 + 1) × 𝑛4 + (𝑛4 + 1)
× 𝑛5 trainable parameters. Classifying a specified HSI pixel requires the corresponding CNN with the aforementioned parameters, where 𝑛1 and 𝑛5 are the spectral channel size and the number of output classes of the data set, respectively. In our experiments, 𝑘1 is best set to ⌈𝑛1/9⌉, and 𝑛2 = 𝑛1 − 𝑘1 + 1. 𝑛3 can be any number between 30 and 40, and 𝑘2 = ⌈𝑛2/𝑛3⌉. 𝑛4 is set to 100. These choices might not be the best, but they are effective for general HSI data. In our architecture, layers C1 and M2 can be viewed as a trainable feature extractor for the input HSI data, and layer F3 is a trainable classifier on top of this feature extractor. The output of subsampling is the actual feature of the original data. In our proposed CNN structure, 20 features can be extracted from each original hyperspectral pixel, and each feature has 𝑛3 dimensions.
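To make this arithmetic concrete, the sketch below computes the layer sizes and the total number of trainable parameters from n1 and n5 using exactly the rules stated above; choosing n3 near 35 is one value from the stated range, and we assume k2 divides n2 evenly.

using System;

static class CnnShape
{
    public static void Print(int n1, int n5)
    {
        int k1 = (int)Math.Ceiling(n1 / 9.0);        // C1 kernel height, k1 = ceil(n1/9)
        int n2 = n1 - k1 + 1;                        // C1 output length
        int k2 = (int)Math.Ceiling(n2 / 35.0);       // pooling size for a target n3 near 35
        int n3 = n2 / k2;                            // M2 output length, n3 = n2 / k2
        int n4 = 100;                                // F3 width
        int total = 20 * (k1 + 1)                    // input -> C1
                  + (20 * n3 + 1) * n4               // M2 -> F3
                  + (n4 + 1) * n5;                   // F3 -> output
        Console.WriteLine($"k1={k1}, n2={n2}, k2={k2}, n3={n3}, parameters={total}");
    }
}

// e.g. CnnShape.Print(103, 9) for a 103-band scene with nine classes,
// such as the Pavia University data set used later in this report.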
In pseudocode, the model is built layer by layer (the surrounding function header is a reconstruction of what this listing elides):

function model = buildModel(layerType, 𝑛, 𝜃)
    for 𝑖 = 1 to number of layers do
        layer.type = layerType[𝑖];
        layer.inputSize = 𝑛𝑖;
        layer.params = 𝜃𝑖;
        model.addLayer(layer);
    end for
    return model;
end function
Training then follows a standard mini-batch gradient descent loop (the loop condition is a reconstruction of what this listing elides):

Initialize learning rate 𝛼, number of maximum iterations ITERmax, minimum error ERRmin, training batches BATCHEStraining, batch size SIZEbatch, and so on;
iter = 0; err = 0;
while iter < ITERmax do
    [∇𝜃𝐽(𝜃), 𝐽(𝜃)] = cnnModel.train(TrainingDatas, TrainingLabels), as (4) and (8);
    Update 𝜃 using (7);
    iter++;
end while
This network varies according to the spectral channel size and the number of output classes of the input HSI data. Thus our proposed work overcomes irregular boundary separation in hyperspectral image classification with spectral and spatial feature extraction.
6.2 CODING
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.Windows.Forms.DataVisualization.Charting;
using Histogram = AForge.Math.Histogram;
using ImageStatistics = AForge.Imaging.ImageStatistics;
// Fragment from the histogram form: record the current operation, show the
// image, and refresh the three per-channel histogram charts below.
klm = "Histogram";
pictureBox1.Image = bmp;
kk();
kk1();
kk2();
}
int i = 0;
foreach (int val in activeHistogram.Values)
{
    i++;
    DataResults1.Rows.Add(i, val);   // one row per intensity level
}
// Configure the chart once, after the data table is filled.
chart2.Series["Red"].XValueMember = "Name";
chart2.Series["Red"].YValueMembers = "Values";
this.chart2.Titles.Add("Histogram Of Red Plane");
chart2.Series["Red"].ChartType = SeriesChartType.Column;
chart2.DataSource = DataResults1;
int i = 0;
foreach (int val in activeHistogram.Values)
{
    i++;
    DataResults2.Rows.Add(i, val);   // one row per intensity level
}
// Configure the chart once, after the data table is filled.
chart3.Series["Green"].XValueMember = "Name";
chart3.Series["Green"].YValueMembers = "Values";
this.chart3.Titles.Add("Histogram Of Green Plane");
chart3.Series["Green"].ChartType = SeriesChartType.Column;
chart3.DataSource = DataResults2;
int i = 0;
foreach (int val in activeHistogram.Values)
{
    i++;
    DataResults.Rows.Add(i, val);    // one row per intensity level
}
// Configure the chart once, after the data table is filled.
chart1.Series["Blue"].XValueMember = "Name1";
chart1.Series["Blue"].YValueMembers = "Values1";
this.chart1.Titles.Add("Histogram Of Blue Plane");
chart1.Series["Blue"].ChartType = SeriesChartType.Column;
chart1.DataSource = DataResults;
private void panel2_Paint(object sender, PaintEventArgs e)
{
}
}
}
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
//using Accord.Imaging.Filters;
//using Accord.Math;
namespace HYPERSPECTRAL
{
    extern alias Acc;   // aliases the Accord assembly (see the commented usings above)

    // The class declaration is elided in this listing; Feature_Extract is a
    // Windows Form whose designer supplies InitializeComponent().
    public partial class Feature_Extract : Form
    {
        public decimal a, b, c;

        public Feature_Extract()
        {
            InitializeComponent();
        }

        // Fragment: SURF keypoint extraction (surf and lenna are defined elsewhere
        // in the source; an extern alias is qualified with :: rather than a dot).
        List<Acc::Accord.Imaging.SpeededUpRobustFeaturePoint> points =
            surf.ProcessImage(lenna);
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Drawing.Imaging;
using System.Drawing;
using System.Runtime.InteropServices;
namespace HYPERSPECTRAL
{
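// 3x3 convolution kernel: nine cell weights plus a normalization Factor
// and an additive Offset, consumed by the Conv3x3 helper used below.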
public class ConvMatrix
{
public int TopLeft = 0, TopMid = 0, TopRight = 0;
public int MidLeft = 0, Pixel = 1, MidRight = 0;
public int BottomLeft = 0, BottomMid = 0, BottomRight = 0;
public int Factor = 1;
public int Offset = 0;
public void SetAll(int nVal)
{
TopLeft = TopMid = TopRight = MidLeft = Pixel = MidRight = BottomLeft =
BottomMid = BottomRight = nVal;
}
}
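The filter fragments that follow all route through a BitmapFilter.Conv3x3 helper whose body is elided in this listing. The sketch below illustrates the same arithmetic in safe code (the original uses faster unsafe LockBits pointer loops); it is a reconstruction under stated assumptions, not the original helper.

static class BitmapFilterSketch
{
    // Convolve a 24-bpp bitmap with the 3x3 kernel in m, dividing by m.Factor
    // and adding m.Offset, with results clamped to [0, 255].
    public static bool Conv3x3(Bitmap b, ConvMatrix m)
    {
        if (m.Factor == 0) return false;
        int[,] w = {
            { m.TopLeft,    m.TopMid,    m.TopRight    },
            { m.MidLeft,    m.Pixel,     m.MidRight    },
            { m.BottomLeft, m.BottomMid, m.BottomRight }
        };
        Bitmap src = (Bitmap)b.Clone();          // read the original, write into b
        for (int y = 1; y < b.Height - 1; y++)
        {
            for (int x = 1; x < b.Width - 1; x++)
            {
                int r = 0, g = 0, bl = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                    {
                        Color c = src.GetPixel(x + dx, y + dy);
                        int wt = w[dy + 1, dx + 1];
                        r += c.R * wt; g += c.G * wt; bl += c.B * wt;
                    }
                b.SetPixel(x, y, Color.FromArgb(Clamp(r / m.Factor + m.Offset),
                                                Clamp(g / m.Factor + m.Offset),
                                                Clamp(bl / m.Factor + m.Offset)));
            }
        }
        return true;
    }

    static int Clamp(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }
}

The excerpts below are fragments of the individual filters built on this helper; each locks the bitmap bits, walks the pixel buffer through byte* p, and unlocks before returning.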
unsafe
{
byte* p = (byte*)(void*)Scan0;
b.UnlockBits(bmData);
return true;
}
int nVal = 0;
unsafe
{
byte* p = (byte*)(void*)Scan0;
p[0] = (byte)nVal;
++p;
}
p += nOffset;
}
}
b.UnlockBits(bmData);
return true;
}
contrast *= contrast;
int red, green, blue;
unsafe
{
byte* p = (byte*)(void*)Scan0;
p += 3;
}
p += nOffset;
}
}
b.UnlockBits(bmData);
return true;
}
public static bool Gamma(Bitmap b, double red, double green, double blue)
{
if (red < .2 || red > 5) return false;
if (green < .2 || green > 5) return false;
if (blue < .2 || blue > 5) return false;
unsafe
{
byte* p = (byte*)(void*)Scan0;
int nOffset = stride - b.Width * 3;
p += 3;
}
p += nOffset;
}
}
b.UnlockBits(bmData);
return true;
}
public static bool Color(Bitmap b, int red, int green, int blue)
{
if (red < -255 || red > 255) return false;
if (green < -255 || green > 255) return false;
if (blue < -255 || blue > 255) return false;
unsafe
{
byte* p = (byte*)(void*)Scan0;
p += 3;
}
p += nOffset;
}
}
b.UnlockBits(bmData);
return true;
}
unsafe
{
byte* p = (byte*)(void*)Scan0;
byte* pSrc = (byte*)(void*)SrcScan0;
int nPixel;
p += 3;
pSrc += 3;
}
p += nOffset;
pSrc += nOffset;
}
}
b.UnlockBits(bmData);
bSrc.UnlockBits(bmSrc);
return true;
}
public static bool Smooth(Bitmap b, int nWeight /* default to 1 */)
{
ConvMatrix m = new ConvMatrix();
m.SetAll(1);
m.Pixel = nWeight;
m.Factor = nWeight + 8;
m.Offset = 127;
return BitmapFilter.Conv3x3(b, m);
}

// Fragment: kernel setup from an edge-detection convolution routine, where
// nType selects among the Sobel, Prewitt and Kirsh kernels below.
switch (nType)
{
case EDGE_DETECT_SOBEL:
m.SetAll(0);
m.TopLeft = m.TopRight = 1;
m.BottomLeft = m.BottomRight = -1;
m.TopMid = 2;
m.BottomMid = -2;
m.Offset = 0;
break;
case EDGE_DETECT_PREWITT:
m.SetAll(0);
m.BottomLeft = m.BottomMid = m.BottomRight = -1;
m.TopLeft = m.TopMid = m.TopRight = 1;
m.Offset = 0;
break;
case EDGE_DETECT_KIRSH:
m.SetAll(-3);
m.Pixel = 0;
m.BottomLeft = m.BottomMid = m.BottomRight = 5;
m.Offset = 0;
break;
}
BitmapFilter.Conv3x3(bTemp, m);
int nPixel = 0;
b.UnlockBits(bmData);
bTemp.UnlockBits(bmData2);
return true;
}
unsafe
{
byte* p = (byte*)(void*)Scan0;
byte* p2 = (byte*)(void*)Scan02;
int nPixel = 0;
p += stride;
p2 += stride;
(p + stride)[0] = (byte)nPixel;
++p;
++p2;
}
p += 9 + nOffset;
p2 += 9 + nOffset;
}
}
b.UnlockBits(bmData);
bmTemp.UnlockBits(bmData2);
return true;
}
unsafe
{
byte* p = (byte*)(void*)Scan0;
byte* p2 = (byte*)(void*)Scan02;
p += nStride3;
p2 += nStride3;
p[0] = (byte)nPixel;
++p;
++p2;
}
p += 3 + nOffset;
p2 += 3 + nOffset;
}
}
b.UnlockBits(bmData);
bmTemp.UnlockBits(bmData2);
return true;
}
unsafe
{
byte* p = (byte*)(void*)Scan0;
byte* p2 = (byte*)(void*)Scan02;
p += stride;
p2 += stride;
p[0] = (byte)nPixelMax;
++p;
++p2;
}
p += 3 + nOffset;
p2 += 3 + nOffset;
}
}
b.UnlockBits(bmData);
b2.UnlockBits(bmData2);
return true;
}
public static bool EdgeDetectDifference(Bitmap b, byte nThreshold)
{
// This one works by finding the greatest difference between a pixel and its
// eight neighbours. The threshold forces softer edges down to black; use 0
// to negate its effect.
Bitmap b2 = (Bitmap)b.Clone();
unsafe
{
byte* p = (byte*)(void*)Scan0;
byte* p2 = (byte*)(void*)Scan02;
p[0] = (byte)nPixelMax;
++p;
++p2;
}
p += 3 + nOffset;
p2 += 3 + nOffset;
}
}
b.UnlockBits(bmData);
b2.UnlockBits(bmData2);
return true;
unsafe
{
byte* p = (byte*)(void*)Scan0;
byte* p2 = (byte*)(void*)Scan02;
int nOffset = stride - b.Width * 3;
int nWidth = b.Width * 3;
p += stride;
p2 += stride;
p += nOffset + 3;
p2 += nOffset + 3;
}
}
b.UnlockBits(bmData);
b2.UnlockBits(bmData2);
return true;
}
byteBuffer[k] = pixelBuffer[0];
byteBuffer[k + 1] = pixelBuffer[1];
byteBuffer[k + 2] = pixelBuffer[2];
}
bmpNew.UnlockBits(bmpData);
bmpData = null;
byteBuffer = null;
//return bmpNew;
return true;
}
return bmpNew;
}
}
}
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
namespace HYPERSPECTRAL
{
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
}
}
}
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
namespace HYPERSPECTRAL
{
public partial class Home : Form
{
public Home()
{
InitializeComponent();
}
public string name;
string ff;
}
}
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
namespace HYPERSPECTRAL
{
public partial class Neural : Form
{
public Neural()
{
InitializeComponent();
}
// Fragment: a handler elsewhere in this form runs the classification...
Neuralclassfication();
}
// Fragment: ...and displays the classified image.
pictureBox1.Image = pic;
}
}
}
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.Drawing.Imaging;
using AForge;
using AForge.Imaging.Filters;
namespace HYPERSPECTRAL
{
public partial class Preprocessing : Form
{
public Preprocessing()
{
InitializeComponent();
}
}
public static Bitmap AdjustBrightness(Bitmap Image, int Value)
{
    // Builds a 5x5 ColorMatrix whose translation row shifts R, G and B by
    // Value/255; the matrix rows and the NewBitmap/NewGraphics/Attributes
    // setup are elided in this listing.
    float[][] FloatColorMatrix = { /* matrix rows elided in the source */ };
    Attributes.SetColorMatrix(NewColorMatrix);
    Attributes.Dispose();
    NewGraphics.Dispose();
    return NewBitmap;
}
private void button4_Click(object sender, EventArgs e)
{
Feature_Extract f = new Feature_Extract();
f.orginal = original;
f.ff = ff;
f.Show();
}
7.2.1 CLASSIFICATION
Test ID: IT 01
Unit tested: Hyperspectral data classification
Purpose: Hyperspectral image classification
Prerequisite: Pre-processed data
Test data: Hyperspectral values
Expected output: Name of the land cover, such as grass, building, or tree
Test result: Pass
CHAPTER 8
8. IMPLEMENTATION
While developing this project, we gained several insights:
1. Before starting the project, we must have a proper plan for it.
2. Unit testing is very important, as bugs are identified then and there.
3. We should not jump into coding directly; the project should first be analysed thoroughly.
4. Larger pieces of code should be split into many smaller units so that the work is more efficient.
CHAPTER 9
9. CONCLUSION
We developed a novel framework for hyperspectral classification that extracts spectral and spatial information. Features are extracted as multi-attribute profiles, and we reduced the dimensionality by using the supervised feature extraction method DBFE. CNN classification is implemented to improve the accuracy of the results. The proposed framework is extensively examined on a widely used hyperspectral data set, the Pavia University scene. Different strategies have been used to implement the presented framework, and the results obtained were compared in terms of classification accuracies. Good classification accuracies were obtained with the provided framework. In addition, the new approach achieves better classification accuracies than other widely used classification strategies, with acceptable CPU processing time. We emphasize that the proposed system is fully automatic, which is a highly desirable characteristic.
FUTURE WORK
In the future, we can extend the framework to improve the accuracy on various kinds of datasets, analyse a parallel processing approach, and include other performance metrics.
REFERENCES
[1] Q. Wang, J. Lin, and Y. Yuan, "Salient band selection for hyperspectral image classification via manifold ranking," IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 6, pp. 1279–1289, 2016.
[2] G. Camps-Valls et al., "Advances in hyperspectral image classification: Earth monitoring with statistical learning methods," IEEE Signal Processing Magazine, vol. 31, no. 1, pp. 45–54, 2013.
[3] W. Li et al., "Hyperspectral image classification using deep pixel-pair features," IEEE Transactions on Geoscience and Remote Sensing, vol. 55, no. 2, pp. 844–853, 2016.
[4] M. Xiong et al., "Hyperspectral image classification using weighted joint collaborative representation," IEEE Geoscience and Remote Sensing Letters, vol. 12, no. 6, pp. 1209–1213, 2015.
[5] L. Gao et al., "Subspace-based support vector machines for hyperspectral image classification," IEEE Geoscience and Remote Sensing Letters, vol. 12, no. 2, pp. 349–353, 2014.
[6] M. Pesaresi and J. A. Benediktsson, "A new approach for the morphological segmentation of high-resolution satellite imagery," IEEE Transactions on Geoscience and Remote Sensing, vol. 39, no. 2, pp. 309–320, Feb. 2001.
[7] E. J. Breen and R. Jones, "Attribute openings, thinnings, and granulometries," Computer Vision and Image Understanding, vol. 64, no. 3, pp. 377–389, Nov. 1996.
[8] M. Chini, N. Pierdicca, and W. Emery, "Exploiting SAR and VHR optical images to quantify damage caused by the 2003 Bam earthquake," IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 1, pp. 145–152, Jan. 2009.
[9] M. K. D. Tuia, F. Pacifici, and W. Emery, "Classification of very high spatial resolution imagery using mathematical morphology and support vector machines," IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 11, pp. 3866–3879, Nov. 2009.
[10] P. Soille, Morphological Image Analysis: Principles and Applications, 2nd ed. New York, NY, USA: Springer-Verlag, 2003.