
ABSTRACT

With the advancement of remote sensing applications, hyperspectral images are being used in a large
number of applications, and a great deal of work has been done on extracting features from remote
sensing data and on accurate learning for classifying the classes. The spectral and spatial information
of the images allows the results to be classified with improved accuracy; fusing spatial and spectral
data is an effective way to improve the accuracy of hyperspectral image classification. In this work,
we propose a hyperspectral image classification method based on spectral and spatial details that uses
neural network classifiers, where a multi-neuron learning approach is used to classify the remote
sensing images with specific class labels. The spectral and spatial features are extracted from
boundary values using Decision Boundary Feature Extraction (DBFE). These extracted features are trained
using a convolutional neural network (CNN) to improve the accuracy of labelling the classes. The
methodology consists of training with an embedding regularizer added to the loss function used to train
the neural network. Training is done using various layers with additional balancing constraints to
avoid falling into local minima. In the testing phase, each remote sensing image is classified while
avoiding a false ground-truth map. Experimental results show improved accuracy in class specification
compared with other state-of-the-art algorithms.
CHAPTER 1

1. INTRODUCTION

1.1 PROBLEM STATEMENT:

Due to the rapid development and proliferation of hyperspectral remote sensing


technology, hundreds of narrow spectral wavelengths for each image pixel can be easily acquired
by spaceborne or airborne sensors, such as AVIRIS, HyMap, HYDICE, and Hyperion. This
detailed spectral reflectance signature makes accurately discriminating materials of interest
possible. Because of the numerous demands in ecological science, ecology management,
precision agriculture, and military applications, a large number of hyperspectral image
classification algorithms have appeared, exploiting spectral similarity and spectral-spatial
features. These methods can be divided into two categories: supervised and unsupervised. The
latter are generally based on clustering first and then manually determining the classes. By
incorporating label information, supervised methods leverage powerful machine learning
algorithms to train a decision rule that predicts the labels of the testing pixels. In this
project, we mainly focus on supervised hyperspectral image classification
techniques. In the past decade, the remote sensing community has introduced intensive works to
establish an accurate hyperspectral image classifier. A number of supervised hyperspectral image
classification methods have been proposed, such as Bayesian models, neural networks, random
forest, support vector machine (SVM), sparse representation classification, extreme learning
machine (ELM), and their variants. Benefiting from elaborately established hyperspectral image
databases, these well-trained classifiers have achieved remarkably good results in terms of
classification accuracy.
1.2 EXISTING WORK

Land cover is an elementary variable that impacts on and links many parts of the human
and physical environments. Thus, information on the spatial distribution of the land cover classes
is of vital importance for the investigation of environmental processes. Satellite remote sensing
techniques are widely used for environmental monitoring. Hyperspectral imagery is a
valuable source from which one can extract detailed information about earth surface phenomena
and objects. In fact, the sensors are characterized by a very high spectral resolution that usually
results in hundreds of narrow spectral channels. Classification of land cover hyperspectral
images is a very challenging task due to the unfavourable ratio between the number of spectral
bands and the number of training samples. The focus in many applications is to investigate an
effective classifier in terms of accuracy. The conventional multiclass classifiers have the ability
to map the class of interest but the considerable efforts and large training sets are required to
fully describe the classes spectrally. Support Vector Machine (SVM) classifiers have been suggested
to deal with the multiclass problem of hyperspectral imagery. The attraction of this method is that
it locates the optimal hyperplane between the class of interest and the rest of the classes,
separating them in a new high-dimensional feature space while taking into account only the training
samples that lie on the edge of the class distributions, known as support vectors; the use of
kernel functions makes the classifier more flexible and robust against outliers.

1.2.1 DISADVANTAGES

• Computational complexity is high
• Irrelevant features are extracted
• Dimensionality can be high
• Classification accuracy is low
1.3 PROPOSED SYSTEM

The proposed work focuses on providing a feature learning and sparse representation based
approach to handle irregular class boundaries in hyperspectral image classification, so we
implement the system using neural networks with feature representation. In the proposed
framework, supervised Feature Extraction (FE) is first performed on the input data, and the
leading features are selected according to their cumulative eigenvalues. The features are
extracted from the Decision Boundary Feature Matrix (DBFM). In order to attain the same
classification accuracy as in the original space, it is essential to retain the eigenvectors of
the decision boundary feature matrix corresponding to nonzero eigenvalues. The performance of
this technique does not deteriorate even when there is no difference between the mean vectors or
covariance matrices. The efficiency of DBFE is rather dependent on the quality and number of
training samples, which is a practical limitation. We also implement a convolutional neural
network algorithm to classify the pixels with improved accuracy. CNNs are feed-forward neural
networks consisting of various combinations of convolutional layers, max pooling layers and
fully connected layers, and they take advantage of spatially local correlation by enforcing a
local connectivity pattern between neurons of adjacent layers. Convolutional layers alternate
with max pooling layers, mimicking the behaviour of complex and simple cells in the mammalian
visual cortex. A CNN consists of one or more pairs of convolution and max pooling layers and
ultimately ends with fully connected layers. The hierarchical structure of CNNs has gradually
proved to be one of the most efficient and successful ways to learn visual representations.

1.3.1 ADVANTAGES

• Reduce complexity in classification


• Relevant features are extracted
• Dimensionality can be reduced
• Parallel processing in classification
1.4 MODULES

• Image Acquisition
• Pre-processing
• Features extraction
• Classification

1.4.1 MODULES DESCRIPTION

• Image Acquisition

– In this module, the user uploads a hyperspectral image.

– The image may be of any type and any size.

• Preprocessing

– In this module, grayscale conversion is performed to convert the RGB image into a grayscale image.

– A noise filtering algorithm, namely median filtering, is then applied to remove noise from the image.

• Features Extraction

– In this module, feature extraction is performed to obtain low level and high level features.

– The features include colour, shape and texture features.

– These features are assembled into feature vectors.

• Classification

– In this module, a convolutional neural network algorithm classifies the pixels.

– Finally, the data is labelled as water, trees or land.

– The performance of pixel classification is improved in terms of accuracy.


1.5 LITERATURE SURVEY

1.5.1 TITLE: SALIENT BAND SELECTION FOR HYPERSPECTRAL IMAGE CLASSIFICATION VIA MANIFOLD RANKING

AUTHOR: QI WANG

In this paper, a novel method of MR-based band selection is proposed. Instead of rating the
similarities in the Euclidean space, the manifold structure is taken into consideration to properly
assess the hyper spectral data structure. The associated measurement is input to a ranking
operation and a subsequent band selection is based on the obtained ranking score. This is a novel
alternative that reformulates hyperspectral band selection as a ranking problem. The interband
distance is estimated in a batch manner. Most existing techniques for band selection always
compute the distance between two individual bands. The calculated results then serve as
guidance for band selection. However, this strategy is not suitable for the sequential selection
because the selected band at this time might resemble the one selected at previous time. In our
implementation, we treat the already selected batch of bands as the query, and the examined
band is compared with the whole batch. This ensures that each newly selected band is distinct from
the previously selected ones. A thorough comparison is provided using different band selection
methods and classifiers. In order to validate the effectiveness of the proposed method, we
compare it with several recently presented methods. Besides, we also test these methods on
typical classifiers that are frequently used for HSI classification.
1.5.2. TITLE: ADVANCES IN HYPERSPECTRAL IMAGE CLASSIFICATION

AUTHOR: GUSTAVO CAMPS

The technological evolution of optical sensors over the last few decades has provided
remote sensing analysts with rich spatial, spectral, and temporal information. In particular, the
increase in spectral resolution of hyperspectral images and infrared sounders opens the doors to
new application domains and poses new methodological challenges in data analysis.
Hyperspectral images (HSI) make it possible to characterize the objects of interest (for example land-cover
classes) with unprecedented accuracy, and to keep inventories up-to-date. Improvements in
spectral resolution have called for advances in signal processing and exploitation algorithms.
This paper focuses on the challenging problem of hyperspectral image classification, which has
recently gained in popularity and attracted the interest of other scientific disciplines such as
machine learning, image processing and computer vision. In the remote sensing community, the
term 'classification' is used to denote the process that assigns single pixels to a set of classes,
while the term 'segmentation' is used for methods aggregating pixels into objects, which are then assigned
to a class. Despite all these commonalities, the analysis of hyperspectral images turns out to be
more difficult, especially because of the high dimensionality of the pixels, the particular noise
and uncertainty sources observed, the high spatial and spectral redundancy, and their potential
non-linear nature. Such nonlinearities can be related to a plethora of factors, including the multi-
scattering in the acquisition process, the heterogeneities at subpixel level, as well as the impact
of atmospheric and geometric distortions.
1.5.3 TITLE: HYPERSPECTRAL IMAGE CLASSIFICATION USING DEEP PIXEL-
PAIR FEATURES

AUTHOR: WEI LI

Hyperspectral imagery consists of hundreds of narrow contiguous wavelength bands


carrying a wealth of spectral information. Taking advantage of the rich spectral information,
classification using hyperspectral data has been developed for a variety of applications, such as
land use land-cover mapping, mineral exploration, water pollution detection, etc. In this paper, a
novel classification framework based on pixel-pair features (PPFs) learned by deep CNN is
proposed. In the proposed method, training samples are first paired, any two selected samples
being labelled using the following criterion: a pair of samples from the same class keeps that
class label, while a pair of samples selected from different classes is labelled as 0. For the training
procedure, paired samples with new labels are fed into deep CNN, whose architecture is well
designed; during the testing process, for each testing pixel, neighbouring pixel-pairs constructed
using its surroundings are classified by the trained CNN, and the final label is then determined
via a voting strategy based on joint classification results. The reason we chose deep CNN is due
to the fact that CNN has been proved to effectively classify hyperspectral data after building
appropriate layer architecture. To solve this issue, the proposed framework operates on pixel-pair
model where a new data combination is constructed via pairing with any two selected samples
from the available labeled data and the data entry is relabelled. In doing so, the amount of input
data for training exhibits quadratic growth, ensuring the setting of well-tuned parameters.
Furthermore, the proposed method fully utilizes the internal correlation of neighbours in
hyperspectral imagery, which is ignored by the original CNN.
1.5.4 TITLE: HYPERSPECTRAL IMAGE CLASSIFICATION USING WEIGHTED
JOINT COLLABORATIVE REPRESENTATION

AUTHOR: MINGMING XIONG

Hyperspectral image (HSI) classification, which aims at categorizing pixels into one of
several land-use/land-cover classes, is an important application in the remote sensing field. To
date, numerous HSI classification techniques have been proposed. Among these approaches, the
support vector machine (SVM) is capable of discriminating two classes by fitting an optimal
separating hyper plane to the training data within a multidimensional feature space, and has
shown excellent performance in HSI classification even with limited training samples. An
improved SVM exploited the properties of Mercer's conditions to construct a composite kernel
(CK) for the combination of both spectral and spatial information, which is referred to as SVM-CK.
However, we notice that joint collaborative representation (JCR) takes the surrounding pixels with the same weights, which is
suboptimal, particularly in heterogeneous regions where the central pixel and neighbouring
pixels do not belong to the same class. Under such a case, only these neighboring pixels that are
associated with the central pixels should be taken into consideration. Nevertheless, removal of
the irrelevant pixels is not easy, which may increase additional computational complexity.
Therefore, in this letter, we propose a simple but effective method to describe the contribution
from a neighbouring pixel with adaptive weights. In the resulting weighted JCR (WJCR),
more appropriate weights are determined by using a Gaussian kernel function. The WJCR
provides the benefit of efficiently extracting more accurate spectral–spatial features, which is
particularly useful to data with a heterogeneous image scene.
1.5.5 TITLE: SUBSPACE-BASED SUPPORT VECTOR MACHINES FOR
HYPERSPECTRAL IMAGE CLASSIFICATION

AUTHOR: LIANRU GAO

Given a training set mapped into a space by some mapping, the SVM separates the data
by an optimal hyper plane. If the data are linearly separable, we can select two hyper planes in a
way that they separate the data and there are no points between them, and then try to maximize
their distance. The region bounded by them is called the margin. If the data are not linearly
separable, soft margin classification with slack variables can be used to allow misclassification
of difficult or noisy cases. However, the most widely used approach in SVM classification is to
combine soft margin classification with a kernel trick that allows separation of the classes in a
higher dimensional space by means of a nonlinear transformation. In other words, the SVM used
with a kernel function is a nonlinear classifier, where the nonlinear ability is included in the
kernel and different kernels lead to different types of SVMs. The extension of SVMs to
multiclass problems is usually done by combining several binary classifiers [20]. In this letter,
our main contribution is to incorporate a subspace-projection-based approach to the classic SVM
formulation, with the ultimate goal of having a more consistent estimation of the class
distributions. The resulting classification technique, called SVMsub, is shown in this work to be
robust to the presence of noise, mixed pixels, and limited training samples. In this letter, we
extend the subspace-projection-based concept to support vector machines (SVMs), a very
popular technique for remote sensing image classification. For that purpose, we construct the
SVM nonlinear functions using the subspaces associated to each class. The resulting approach,
called SVMsub, is experimentally validated using a real hyperspectral data set collected using
the National Aeronautics and Space Administration's Airborne Visible/Infrared Imaging
Spectrometer. The obtained results indicate that the proposed algorithm exhibits good
performance in the presence of very limited training samples.
CHAPTER 2

2. SOFTWARE PROJECT PLAN

No | Plan | Description | Time Bound | Deliverable | Remarks
1 | Process Initialization | Identification of project activity and responsible person | 23rd January 2020 | Abstract | Identification of project mentor and technology familiarization
2 | Software Requirements | Software requirements are documented | 25th January 2020 | SRS, functional & non-functional requirements | Various kinds of requirements are identified
3 | Software Architecture Description | Convert analysis model into design model | 1st February 2020 | User interface | Minor changes are to be completed
4 | Software Development | Designs are converted into functional modules and integrated | 5th February 2020 | Code and module generation | Modification in initial module
5 | Software Deployment | The software is implemented in the C# (.NET) language | 10th February 2020 | Software installation | The software has been installed and new features added
6 | Software Testing | Test plans, various testing methods and tools | 20th March 2020 | Test report and feedback | Various flaws are identified
7 | Software Maintenance | Formation of maintenance team, problem repository and risk repository | 11th May 2020 | Problems (reporting solution, distribution, defect prevention) | The record required for proper maintenance in PDF format
CHAPTER 3

3.1 INTRODUCTION

3.1.1 Purpose

The purpose of this document is to present a detailed description of the hyperspectral image
processing system, which classifies pixels to identify land cover types such as grass, buildings,
water bodies and so on.

3.1.2 Document Conventions

The document is prepared according to the IEEE format rules.

Heading:

Font Size:16

Font Style: Bold

Font type: Times New Roman

Sub heading:

Font Size:14

Font Style: Bold

Font type: Times New Roman

Content:
Font Size:12

Font Style: Normal

Font type: Times New Roman

3.1.3 Intended Audience and Reading Suggestions

The document is intended for users and administrators. The SRS also contains information about the
system such as its scope, system features, assumptions and dependencies, and other useful details.
The reader is advised to read the document thoroughly to understand the goal of the system, its
advantages and how the system works.

3.1.4 Scope

This project aims at classifying satellite images with improved accuracy using a deep learning algorithm.

3.2 OVERALL DESCRIPTIONS

3.2.1 Product Perspective

Hyperspectral image processing can be used in various settings to identify different land covers
based on spatial mining.

3.2.2 User Classes and Characteristics

The user can access the system. Users may only make changes to the dataset; they are responsible
solely for updating and uploading the dataset.


3.2.3 Design and Implementation Constraints

Hardware limitations include the memory requirement (a large volume of data must be stored).
The system is restricted to .NET (C#) as the programming platform and Visual Studio as the IDE.

3.2.4 User Documentation

A short video demonstrating how the system works will be included in the package in recorded document format.

3.2.5 Assumptions and Dependencies

The algorithm used is efficient enough to analyse the various types of hyper spectral pixels. The

size of the dataset is limited by the available RAM of the system. The user should have some basic
knowledge of computers.

3.3 EXTERNAL INTERFACE REQUIREMENTS

3.3.1 Hardware Interfaces

Standard hardware such as a keyboard, mouse and monitor is needed.

3.3.2 Software Interfaces

The .NET Framework (pronounced dot net) is a software framework developed by


Microsoft that runs primarily on Microsoft Windows. It includes a large library and provides
language interoperability (each language can use code written in other languages) across several
programming languages. Programs written for the .NET Framework execute in a software
environment (as contrasted to hardware environment), known as the Common Language
Runtime (CLR), an application virtual machine that provides services such as security, memory
management, and exception handling. The class library and the CLR together constitute the
.NET Framework.

3.4 SYSTEM FEATURES

The hyperspectral image processing project is required to handle the following:

3.4.1 Functional Requirements

3.4.1.1 IMAGE ACQUISITION

Digital image classification (and mapping) is a process of abstraction and generalization


from image data to produce categories of interest depending on the application. Hyperspectral
images are obtained by earth orbiting imaging spectrometers of high spatial resolution. To
classify and map hyperspectral images, a field engineer may be sent to the corresponding area to
collect some class information, i.e., which pixel belongs to what category, such as water, bare
soil, vegetation, etc. Since collecting field information can be very time consuming, only a very
small portion of the concerned area can be examined by the field engineer. In this module, the
user can upload satellite images in an image format. The image can be of any type and any size.

3.4.1.2 PREPROCESSING

Pre-processing is a common name for operations with images at the lowest level of
abstraction -- both input and output are intensity images. The aim of pre-processing is an
improvement of the image data that suppresses unwanted distortions or enhances some image
features important for further processing. In this pre-processing step, the RGB image is converted
into a grayscale image, and noise is reduced using a median filter.
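
As a minimal sketch of the two pre-processing steps just described (grayscale conversion and median filtering), the following C# fragment assumes a System.Drawing bitmap and uses GetPixel/SetPixel for clarity; it is illustrative only and is not the project's optimized BitmapFilter code.

using System;
using System.Drawing;

static class PreprocessingSketch
{
    // Convert an RGB bitmap to grayscale using the usual luminance weights.
    public static Bitmap ToGrayscale(Bitmap src)
    {
        var dst = new Bitmap(src.Width, src.Height);
        for (int y = 0; y < src.Height; y++)
            for (int x = 0; x < src.Width; x++)
            {
                Color c = src.GetPixel(x, y);
                int g = (int)(0.299 * c.R + 0.587 * c.G + 0.114 * c.B);
                dst.SetPixel(x, y, Color.FromArgb(g, g, g));
            }
        return dst;
    }

    // 3x3 median filter on a grayscale bitmap: each pixel is replaced by the
    // median of its neighbourhood, which removes salt-and-pepper noise.
    // Border pixels are left untouched in this sketch.
    public static Bitmap Median3x3(Bitmap gray)
    {
        var dst = new Bitmap(gray.Width, gray.Height);
        var window = new int[9];
        for (int y = 1; y < gray.Height - 1; y++)
            for (int x = 1; x < gray.Width - 1; x++)
            {
                int k = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        window[k++] = gray.GetPixel(x + dx, y + dy).R;
                Array.Sort(window);
                int m = window[4];   // median of the nine values
                dst.SetPixel(x, y, Color.FromArgb(m, m, m));
            }
        return dst;
    }
}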

3.4.1.3 FEATURES EXTRACTION

In pattern recognition and in image processing, feature extraction is a special form of


dimensional reduction. Transforming the input data into the set of features is called feature
extraction. If the features extracted are carefully chosen it is expected that the features set will
extract the relevant information from the input data in order to perform the desired task using this
reduced representation instead of the full size input. The meaning of the word "feature" is in
general highly application dependent. A feature is the result of some calculations performed on the
input data stream. Extracted feature is then matched with the stored feature data to complete the
object recognition task accurately. There are so many techniques developed for feature
extraction, to make the shape based object recognition easier as well as accurate.

3.4.1.4 CLASSIFICATION

Classification includes a broad range of decision-theoretic approaches to the


identification of images (or parts thereof). All classification algorithms are based on the
assumption that the image in question depicts one or more features (e.g., geometric parts in the
case of a manufacturing classification system, or spectral regions in the case of remote sensing,
as shown in the examples below) and that each of these features belongs to one of several
distinct and exclusive classes. The classes may be specified a priori by an analyst (as
in supervised classification) or automatically clustered (i.e. as in unsupervised classification) into
sets of prototype classes, where the analyst merely specifies the number of desired categories.
(Classification and segmentation have closely related objectives, as the former is another form
of component labelling that can result in segmentation of various features in a scene.) Image
classification analyses the numerical properties of various image features and organizes data into
categories. A deep neural network (DNN) is an artificial neural network (ANN) with multiple
layers between the input and output layers. The DNN finds the correct mathematical
manipulation to turn the input into the output, whether it be a linear relationship or a non-linear
relationship. The network moves through the layers calculating the probability of each output.
For example, a DNN that is trained to recognize dog breeds will go over the given image and
calculate the probability that the dog in the image is a certain breed. The user can review the
results and select which probabilities the network should display (above a certain threshold, etc.)
and return the proposed label. Each mathematical manipulation as such is considered a layer, and
complex DNNs have many layers, hence the name "deep" networks.
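
As a small illustration of the last step described above (turning the network's outputs into class probabilities and keeping only labels above a chosen threshold), the following C# sketch applies a softmax to a vector of raw scores; the scores, class names and threshold are placeholders, not values produced by the proposed system.

using System;
using System.Linq;

static class SoftmaxSketch
{
    // Softmax: convert raw scores into probabilities that sum to 1.
    public static double[] Softmax(double[] scores)
    {
        double max = scores.Max();                         // subtract max for numerical stability
        double[] exps = scores.Select(s => Math.Exp(s - max)).ToArray();
        double sum = exps.Sum();
        return exps.Select(e => e / sum).ToArray();
    }

    public static void Main()
    {
        string[] classes = { "Water", "Trees", "Land" };   // placeholder class names
        double[] scores = { 2.1, 0.4, 1.3 };               // hypothetical network outputs
        double[] probs = Softmax(scores);
        double threshold = 0.30;                           // only report confident labels
        for (int i = 0; i < classes.Length; i++)
            if (probs[i] >= threshold)
                Console.WriteLine($"{classes[i]}: {probs[i]:P1}");
    }
}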
3.4.2 Non-Functional Requirements

3.4.2.1 Performance Requirements

3.4.2.1.1 Execution Time

The execution time or CPU time of a given task is defined as the time spent by the system
executing that task, including the time spent executing run-time services on its behalf. Hence the
system records the starting time and ending time of each process, which can be used to compute the
execution time of that particular process.
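
A minimal C# sketch of how the start and end times of a process could be recorded to obtain its execution time is shown below; ProcessImage is a hypothetical placeholder for any module of the system.

using System;
using System.Diagnostics;

static class TimingSketch
{
    static void ProcessImage()
    {
        // Placeholder workload standing in for preprocessing / feature extraction / classification.
        System.Threading.Thread.Sleep(250);
    }

    public static void Main()
    {
        var watch = Stopwatch.StartNew();   // records the starting time
        ProcessImage();
        watch.Stop();                       // records the ending time
        Console.WriteLine($"Execution time: {watch.ElapsedMilliseconds} ms");
    }
}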

3.4.2.2 Resource Requirements

3.4.2.2.1 Hardware requirements:

• Hard Disk : 40 GB.

• Floppy Drive : 1.44 MB.

• Monitor : 15 VGA Colour.

• Mouse : Logitech.

• RAM : 256 MB.

3.4.2.2.2 Software requirements:

• Operating system : Windows 7/8.

• Front End : .NET (C#)

• Tools : Visual studio


CHAPTER 4
4. SYSTEM ANALYSIS

4.1 ARCHITECTURE DIAGRAM

Fig 4.1 System Architecture


4.2 USE CASE DIAGRAM
A use case diagram, at its simplest, is a representation of a user's interaction with the system,
depicting the specifications of a use case. A use case diagram can portray the different types of
users of a system and the different use cases, and is often accompanied by other types of diagrams as well.

[Use case diagram: the User actor interacts with the system through the use cases Image Acquisition, Preprocessing, Features Extraction and Classification, producing the Type of Land as the result.]

Fig 4.2 Use Case Diagram


4.3 SEQUENCE DIAGRAM:
A sequence diagram in Unified Modelling Language (UML) is a kind of interaction
diagram that shows how processes operate with one another and in what order. It is a construct of
a Message Sequence Chart. Sequence diagrams are sometimes called event diagrams, event
scenarios, and timing diagrams.

[Sequence diagram: the objects Image Acquisition, Preprocessing, Features Extraction and Classification exchange the messages 1: Upload image(), 2: Gray scale conversion(), 3: Noise filtering(), 4: Color features(), 5: Shape Feature(), 6: CNN algorithm() and 7: Land cover details().]

Fig 4.3 Sequence Diagram


4.4 Collaboration diagram:
A collaboration diagram resembles a flowchart that portrays the roles, functionality and

behaviour of individual objects as well as the overall operation of the system in real time.

Objects are shown as rectangles with naming labels inside. These labels are preceded by colons

and may be underlined. The relationships between the objects are shown as lines connecting the

rectangles. The messages between objects are shown as arrows connecting the relevant

rectangles along with labels that define the message sequencing.

Fig 4.4 Collaboration Diagram


4.5 Activity Diagram:
Activity diagram is another important diagram in UML to describe the dynamic aspects

of the system. Activity diagram is basically a flowchart to represent the flow from one activity to

another activity. The activity can be described as an operation of the system.

[Activity diagram: Upload image → Preprocessing → Features Extraction → Classification.]

Fig 4.5 Activity Diagram


4.6 Class Diagram

In software engineering, a class diagram in UML is a type of static structure diagram that
describes the structure of a system by showing the system's classes, their attributes, operations,
and the relationships among objects.

[Class diagram: Image_Acquisition (+Satellite image; +Upload image(), +Preprocessing()), Features_Extraction (+Features; +Features Extraction(), +Color and Shape features()) and Image_classification (+Features Values; +CNN classification(), +Type of Land()).]

Fig 4.6 Class Diagram


CHAPTER 5
5. DESIGN
5. 1 FRONT END DESIGN
CHAPTER 6
6. CODING
6.1 ALGORITHM
6.1.1 FEATURE EXTRACTION
The features are extracted from the Decision Boundary Feature Matrix (DBFM). In
order to attain the same classification accuracy as in the original space, it is essential to
retain the eigenvectors of the decision boundary feature matrix corresponding to nonzero
eigenvalues. The performance of this technique does not deteriorate even when there is no
difference between the mean vectors or covariance matrices. One shortcoming is that the efficiency
of DBFE is rather dependent on the quality and number of training samples; another is that it can
be computationally intensive. Let X be an observation in the N-dimensional Euclidean space E^N
under hypothesis H_i: X ∈ ω_i, i = 1, 2. Decisions are made according to the following rule:

Decide ω_1 if h(X) < t, else decide ω_2,

where h(X) = −ln[ p(X | ω_1) / p(X | ω_2) ] and t = ln[ P(ω_1) / P(ω_2) ].

Fig. 2 shows examples of discriminantly informative and discriminantly redundant features. It has
been shown that discriminantly informative features and discriminantly redundant features are
related to the decision boundary and can be extracted from it. It has also been shown that
discriminantly informative feature vectors have a component normal to the decision boundary at one
or more points on the decision boundary, while discriminantly redundant feature vectors are
orthogonal to the vector normal to the decision boundary at every point on the decision boundary.
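
As a small illustrative example (not taken from the source), consider two classes in E^2 whose decision boundary is the vertical line x_1 = c. The unit normal to the boundary is the same at every boundary point, N(X) = (1, 0)^T. Hence e_1 = (1, 0)^T has a nonzero component along N(X) and is discriminantly informative, while e_2 = (0, 1)^T satisfies e_2^T N(X) = 0 everywhere on the boundary and is discriminantly redundant: moving along e_2 never changes the classification result.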

Improvements in the capability of modelling spatial information are achievable because these
operators are not based on fixed structuring elements and the image transformation is computed only
by merging its connected components. The idea is to extract different kinds of information,
represented by the attributes, from different flat zones, i.e., parts of the scene with the same
gray level. Attribute filters are efficiently implemented with an equivalent representation of the
image as a tree. In particular, a thresholding operation over all the values present in the image f
results in upper and lower level sets, which are connected components (i.e., flat zones) that can
be grouped in the following sets:

U(f) = {X : X ∈ CC([f ≥ λ])}, λ ∈ Z,

L(f) = {X : X ∈ CC([f < λ])}, λ ∈ Z,

with CC(f) being the connected components of the generic image f. There is an inclusion
relationship among the connected components extracted from either the upper or the lower level sets
(belonging to U(f) or L(f), respectively). This property allows a node in the tree to be associated
with each connected component and thus represents the image as a hierarchical structure: the
max-tree and min-tree structures represent, respectively, the components in U(f) and L(f) with the
inclusion relations induced by the thresholding operations. Attribute filters are shape preserving,
since they never introduce new edges in an image, and operate on regions according to the result of
a binary predicate P. In particular, the filtering criterion usually decides whether the value of an
attribute α of a given connected component CC verifies the predicate P = α(CC) ≥ λ, with α(CC),
λ ∈ R or Z, where λ is a threshold value. When attribute filters are applied to the tree
representation of the image, the operator prunes the tree by removing the nodes whose associated
regions do not satisfy P. Two different filtering approaches have been proposed: pruning the tree by
removing entire branches, and pruning by not removing all of the branches.
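
As a rough illustration of the upper level sets U(f) defined above, the following C# sketch thresholds a gray-level image at a level λ and labels the 4-connected components of the resulting binary mask with a breadth-first search; this is only an assumed, simplified view of the construction, not the max-tree implementation itself.

using System;
using System.Collections.Generic;

static class LevelSetSketch
{
    // Label the 4-connected components of the upper level set [f >= lambda].
    // Returns a label image: 0 = below the threshold, 1..n = component index.
    public static int[,] UpperLevelSetComponents(byte[,] f, byte lambda)
    {
        int h = f.GetLength(0), w = f.GetLength(1);
        var labels = new int[h, w];
        int next = 0;
        var queue = new Queue<int>();                 // pixels encoded as y * w + x
        int[] dy = { -1, 1, 0, 0 }, dx = { 0, 0, -1, 1 };

        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
            {
                if (f[y, x] < lambda || labels[y, x] != 0) continue;
                labels[y, x] = ++next;                // start a new connected component
                queue.Enqueue(y * w + x);
                while (queue.Count > 0)
                {
                    int cur = queue.Dequeue();
                    int cy = cur / w, cx = cur % w;
                    for (int k = 0; k < 4; k++)
                    {
                        int ny = cy + dy[k], nx = cx + dx[k];
                        if (ny < 0 || ny >= h || nx < 0 || nx >= w) continue;
                        if (f[ny, nx] < lambda || labels[ny, nx] != 0) continue;
                        labels[ny, nx] = next;
                        queue.Enqueue(ny * w + nx);
                    }
                }
            }
        return labels;
    }
}

The lower level sets L(f) would be obtained in the same way with the test f < λ, and repeating this for every λ while recording the inclusion relations between components is what the max-tree and min-tree structures encode.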

6.1.2 CLASSIFICATION
A CNN consists of one or more pairs of convolution and max pooling layers and ultimately ends with fully
connected layers. The hierarchical structure of CNNs has gradually proved to be one of the most efficient
and successful ways to learn visual representations. The fundamental challenge in such visual tasks is to
model the intra-class appearance and shape variation of objects. Hyperspectral data with hundreds of
spectral channels can be illustrated as 2D curves. We can see that the curve of every class has its own
visual shape, different from the other classes, although it is relatively difficult to distinguish some
classes with the human eye (e.g., gravel and self-blocking bricks). We know that CNNs can achieve
competitive and even better performance than humans in some visual problems, and this capability inspires
us to study the possibility of applying CNNs to HSI classification using the spectral signatures. CNNs
vary in how the convolutional and max pooling layers are realized and how the nets are trained.

As illustrated in Fig. 4, the net contains five layers with weights, including the input layer, the
convolutional layer C1, the max pooling layer M2, the fully connected layer F3, and the output layer.
Assuming 𝜃 represents all the trainable parameters (weight values), 𝜃 = {𝜃𝑖} and 𝑖 = 1, 2, 3, 4, where 𝜃𝑖 is
the parameter set between the (𝑖−1)th and the 𝑖th layer. In HSI, each HSI pixel sample can be regarded as
a 2D image whose height is equal to 1 (as 1D audio inputs in speech recognition). Therefore, the size of
the input layer is just (𝑛1, 1), and 𝑛1 is the number of bands. The first hidden convolutional layer C1
filters the 𝑛1 × 1 input data with 20 kernels of size 𝑘1 × 1. Layer C1 contains 20 × 𝑛2 × 1 nodes, and 𝑛2 =
𝑛1 − 𝑘1 + 1. There are 20 × (𝑘1 + 1) trainable parameters between layer C1 and the input layer. The max
pooling layer M2 is the second hidden layer, and the kernel size is (𝑘2, 1). Layer M2 contains 20 × 𝑛3 × 1
nodes, and 𝑛3 = 𝑛2/𝑘2. There is no parameter in this layer. The fully connected layer F3 has 𝑛4 nodes and
there are (20 × 𝑛3 + 1) × 𝑛4 trainable parameters between this layer and layer M2. The output layer has
𝑛5 nodes, and there are (𝑛4 + 1) × 𝑛5 trainable parameters between this layer and layer F3. Consequently,
the architecture of our proposed CNN classifier totally has 20 × (𝑘1 + 1) + (20 × 𝑛3 + 1) × 𝑛4 + (𝑛4 + 1)
× 𝑛5 trainable parameters. Classifying a specified HSI pixel requires the corresponding CNN with the
aforementioned parameters, where 𝑛1 and 𝑛5 are the spectral channel size and the number of output
classes of the data set, respectively. In our experiments, 𝑘1 is best set to ⌈𝑛1/9⌉, and 𝑛2 = 𝑛1−𝑘1+1. 𝑛3
can be any number between 30 and 40, and 𝑘2 = ⌈𝑛2/𝑛3⌉. 𝑛4 is set to 100. These choices might not be
optimal but work well for general HSI data. In our architecture, layers C1 and M2 can be viewed as a
trainable feature extractor for the input HSI data, and layer F3 as a trainable classifier on top of that
feature extractor. The output of the subsampling is the real feature of the original data. In our proposed
CNN structure, 20 features can be extracted from each original hyperspectral pixel, and each feature has 𝑛3
dimensions.
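
The layer sizes and parameter counts above are fully determined by 𝑛1 (the number of spectral bands) and 𝑛5 (the number of classes). The following is a minimal C# sketch of that bookkeeping, assuming the rules stated in this section; the band and class counts in the example call are placeholders, not project data.

using System;

static class CnnSizeSketch
{
    // Compute the layer sizes and the total number of trainable parameters of the
    // 1D CNN described above, given n1 spectral bands and n5 output classes.
    public static void Describe(int n1, int n5)
    {
        int k1 = (int)Math.Ceiling(n1 / 9.0);              // convolution kernel length, k1 = ceil(n1 / 9)
        int n2 = n1 - k1 + 1;                              // C1 output length
        int n3 = 35;                                       // target value between 30 and 40
        int k2 = (int)Math.Ceiling((double)n2 / n3);       // pooling kernel, k2 = ceil(n2 / n3)
        n3 = n2 / k2;                                      // M2 output length, n3 = n2 / k2
        int n4 = 100;                                      // fully connected layer F3 size

        int paramsC1 = 20 * (k1 + 1);                      // between the input layer and C1
        int paramsF3 = (20 * n3 + 1) * n4;                 // between M2 and F3
        int paramsOut = (n4 + 1) * n5;                     // between F3 and the output layer
        int total = paramsC1 + paramsF3 + paramsOut;

        Console.WriteLine($"k1={k1}, n2={n2}, k2={k2}, n3={n3}, n4={n4}");
        Console.WriteLine($"trainable parameters: {paramsC1} + {paramsF3} + {paramsOut} = {total}");
    }

    public static void Main()
    {
        // e.g. a 103-band image with 9 classes (placeholder values)
        Describe(103, 9);
    }
}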

Constructing the CNN Model

function INITCNNMODEL (𝜃, [𝑛1–5])

layerType = [convolution, max-pooling, fully-connected, fully-connected];

layerActivation = [tanh(), max(), tanh(), softmax()];

model = new Model();


for 𝑖=1 to 4 do

layer = new Layer();

layer.type = layerType[𝑖];

layer.inputSize = 𝑛𝑖

layer.neurons = new Neuron [𝑛𝑖+1];

layer.params = 𝜃𝑖;

model.addLayer(layer);

end for

return model;

end function

Training the CNN Model

Initialize learning rate 𝛼, number of maximum iteration ITERmax, minimum error ERRmin,
training batches BATCHEStraining, batch size SIZEbatch, and so on;

Compute 𝑛2, 𝑛3, 𝑛4, 𝑘1, 𝑘2, according to 𝑛1 and 𝑛5;

Generate random weights 𝜃 of the CNN;

cnnModel = InitCNNModel(𝜃, [𝑛1–5]);

iter = 0; err = +inf;

while err > ERRmin and iter < ITERmax do

err = 0;

for batch = 1 to BATCHEStraining do

[∇𝜃𝐽(𝜃), 𝐽(𝜃)] = cnnModel.train (TrainingDatas, TrainingLabels), as (4) and (8); Update 𝜃 using
(7);

err = err + mean(𝐽(𝜃));


end for
err = err / BATCHEStraining;

iter++;

end while

Save parameters 𝜃 of the CNN
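
Equations (4), (7) and (8) referred to in the pseudocode are not reproduced in this document. Assuming that the update in (7) is a plain gradient-descent step, a minimal C# sketch of that single step is shown below; the gradient itself would come from back-propagation through the CNN.

using System;

static class TrainingStepSketch
{
    // One gradient-descent parameter update, theta <- theta - alpha * grad J(theta).
    // This assumes the update rule in (7) is standard gradient descent.
    public static void UpdateParameters(double[] theta, double[] gradient, double alpha)
    {
        for (int i = 0; i < theta.Length; i++)
            theta[i] -= alpha * gradient[i];
    }
}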

This network varies according to the spectral channel size and the number of output classes of the input
HSI data. Thus our proposed work overcomes irregular boundary separation in hyperspectral image
classification through spectral and spatial feature extraction.
6.2 CODING

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.Windows.Forms.DataVisualization.Charting;
//using AForge.Math.Histogram;
using Histogram = AForge.Math;
using ImageStatistics = AForge.Imaging;

using image = System.Drawing.Image;


using points = System.Drawing.Point;
namespace HYPERSPECTRAL
{
public partial class Colorfeature : Form
{
public Colorfeature()
{
InitializeComponent();
}
public Bitmap bmp;
public Bitmap bmp1;
public string klm;
public Bitmap bmmp;
public string imagename;

public decimal ac1, ac2, ac3, er1, er2, er3;


public DataTable DataResults = new DataTable("Items");
public DataTable DataResults1 = new DataTable("Items1");
public DataTable DataResults2 = new DataTable("Items2");

private void Colorfeature_Load(object sender, EventArgs e)


{
bmp = new Bitmap(imagename);
bmp1 = new Bitmap(imagename);

klm = "Histogram";

pictureBox1.Image = bmp;

// AForge.Math.Histogram activeHistogram = null;


// AForge.Imaging.ImageStatistics stat =
//new AForge.Imaging.ImageStatistics(bmp);
// if (stat != null)
// {
// //Do if the pic is gray
// if (stat.IsGrayscale)
// {
// activeHistogram = stat.Red;
// }
// //Do if the pic is colourful
// if (!stat.IsGrayscale)
// {
// activeHistogram = stat.Red;
// }
// }

kk();
kk1();
kk2();
}

public void kk()


{

AForge.Math.Histogram activeHistogram = null;


AForge.Imaging.ImageStatistics stat =
new AForge.Imaging.ImageStatistics(bmp1);
if (stat != null)
{
//Do if the pic is gray
if (stat.IsGrayscale)
{
activeHistogram = stat.Red;
}
//Do if the pic is colourful
if (!stat.IsGrayscale)
{
activeHistogram = stat.Red;
}
}

DataColumn dcItemValue = new DataColumn("Name");


DataColumn dcItemN1 = new DataColumn("Values");
//dcItemN1.DataType = System.Type.GetType("System.Int32");
DataResults1.Columns.Add(dcItemValue);
DataResults1.Columns.Add(dcItemN1);
//System .Int32[]
//DataResults.Rows.Add("K-Means", ac1);
//DataResults.Rows.Add("Fuzzy K-Means", ac2);
// DataResults.Rows.Add("Adaptive Fuzzy K-Means", activeHistogram.Values);

int i = 0;
foreach (int val in activeHistogram.Values)
{

// Console.WriteLine(val);
i++;
DataResults1.Rows.Add(i, val);
}

// Bind the red-plane histogram values to the chart
//chart1.DataSource = DataResults.Tables["salary"];
chart2.Series["Red"].XValueMember = "Name";
chart2.Series["Red"].YValueMembers = "Values";
this.chart2.Titles.Add("Histogram Of Red Plane");
chart2.Series["Red"].ChartType = SeriesChartType.Column;
//chart1.Series["accuracy"].IsValueShownAsLabel = true;

chart2.DataSource = DataResults1;
}

public void kk1()


{
AForge.Math.Histogram activeHistogram = null;
AForge.Imaging.ImageStatistics stat =
new AForge.Imaging.ImageStatistics(bmp1);
if (stat != null)
{
//Do if the pic is gray
if (stat.IsGrayscale)
{
activeHistogram = stat.Green;
}
//Do if the pic is colourful
if (!stat.IsGrayscale)
{
activeHistogram = stat.Green;
}

}
//histogram1.Values = activeHistogram.Values;
// histogram1.Color = System.Drawing.Color.Green;
//oo.Dispose();
//histogram1.Refresh();

DataColumn dcItemValue = new DataColumn("Name");


DataColumn dcItemN1 = new DataColumn("Values");
//dcItemN1.DataType = System.Type.GetType("System.Int32");
DataResults2.Columns.Add(dcItemValue);
DataResults2.Columns.Add(dcItemN1);
//System .Int32[]
//DataResults.Rows.Add("K-Means", ac1);
//DataResults.Rows.Add("Fuzzy K-Means", ac2);
// DataResults.Rows.Add("Adaptive Fuzzy K-Means", activeHistogram.Values);

int i = 0;
foreach (int val in activeHistogram.Values)
{

// Console.WriteLine(val);
i++;
DataResults2.Rows.Add(i, val);
}

// Bind the green-plane histogram values to the chart
//chart1.DataSource = DataResults.Tables["salary"];
chart3.Series["Green"].XValueMember = "Name";
chart3.Series["Green"].YValueMembers = "Values";
this.chart3.Titles.Add("Histogram Of Green Plane");
chart3.Series["Green"].ChartType = SeriesChartType.Column;
//chart1.Series["accuracy"].IsValueShownAsLabel = true;

chart3.DataSource = DataResults2;
}

public void kk2()


{
AForge.Math.Histogram activeHistogram = null;
AForge.Imaging.ImageStatistics stat =
new AForge.Imaging.ImageStatistics(bmp1);
if (stat != null)
{
//Do if the pic is gray
if (stat.IsGrayscale)
{
activeHistogram = stat.Blue;
}
//Do if the pic is colourful
if (!stat.IsGrayscale)
{
activeHistogram = stat.Blue;
}

}
// histogram3.Values = activeHistogram.Values;
// histogram3.Color = System.Drawing.Color.Blue;
// histogram3.Values
//oo.Dispose();
// histogram3.Refresh();

DataColumn dcItemValue1 = new DataColumn("Name1");


DataColumn dcItemN11 = new DataColumn("Values1");
//dcItemN1.DataType = System.Type.GetType("System.Int32");
DataResults.Columns.Add(dcItemValue1);
DataResults.Columns.Add(dcItemN11);
//System .Int32[]
//DataResults.Rows.Add("K-Means", ac1);
//DataResults.Rows.Add("Fuzzy K-Means", ac2);
// DataResults.Rows.Add("Adaptive Fuzzy K-Means", activeHistogram.Values);

int i=0;
foreach (int val in activeHistogram.Values)
{
// Console.WriteLine(val);
i++;
DataResults.Rows.Add(i, val);
}

// Bind the blue-plane histogram values to the chart
//chart1.DataSource = DataResults.Tables["salary"];
chart1.Series["Blue"].XValueMember = "Name1";
chart1.Series["Blue"].YValueMembers = "Values1";
this.chart1.Titles.Add("Histogram Of Blue Plane");
chart1.Series["Blue"].ChartType = SeriesChartType.Column;
//chart1.Series["accuracy"].IsValueShownAsLabel = true;

chart1.DataSource = DataResults;
}

private void button1_Click(object sender, EventArgs e)


{

}
private void panel2_Paint(object sender, PaintEventArgs e)
{

}
}
}

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;

//using Accord.Imaging.Filters;
//using Accord.Math;

namespace HYPERSPECTRAL
{
extern alias Acc;

public partial class Feature_Extract : Form


{

static int[] BallX, BallY;


public string orginal, ff;

public decimal a, b, c;

public Feature_Extract()
{
InitializeComponent();
}

private void Feature_Extract_Load(object sender, EventArgs e)


{
pictureBox1.Image = new Bitmap(orginal);
}

private void button1_Click(object sender, EventArgs e)


{

Bitmap lenna = new Bitmap(pictureBox1.Image);

float threshold = (float)0.000200;


int octaves = (int)5;
int initial = (int)2;

// Create a new SURF Features Detector using the given parameters


Acc.Accord.Imaging.SpeededUpRobustFeaturesDetector surf =
new Acc.Accord.Imaging.SpeededUpRobustFeaturesDetector(threshold, octaves,
initial);

List<Acc.Accord.Imaging.SpeededUpRobustFeaturePoint> points =
surf.ProcessImage(lenna);

// Create a new AForge's Corner Marker Filter


Acc.Accord.Imaging.Filters.FeaturesMarker features = new
Acc.Accord.Imaging.Filters.FeaturesMarker(points);

// Apply the filter and display it on a picturebox


pictureBox1.Image = features.Apply(lenna);
}

private void button2_Click(object sender, EventArgs e)


{

Neural cc = new Neural();


cc.imagename = orginal;
cc.Show();
}

private void button3_Click(object sender, EventArgs e)


{
Colorfeature cc = new Colorfeature();
cc.imagename = orginal;
cc.Show();
}
}
}

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Drawing.Imaging;
using System.Drawing;
using System.Runtime.InteropServices;
namespace HYPERSPECTRAL
{
public class ConvMatrix
{
public int TopLeft = 0, TopMid = 0, TopRight = 0;
public int MidLeft = 0, Pixel = 1, MidRight = 0;
public int BottomLeft = 0, BottomMid = 0, BottomRight = 0;
public int Factor = 1;
public int Offset = 0;
public void SetAll(int nVal)
{
TopLeft = TopMid = TopRight = MidLeft = Pixel = MidRight = BottomLeft =
BottomMid = BottomRight = nVal;
}
}

public class BitmapFilter


{
public const short EDGE_DETECT_KIRSH = 1;
public const short EDGE_DETECT_PREWITT = 2;
public const short EDGE_DETECT_SOBEL = 3;

public static bool Invert(Bitmap b)


{
// GDI+ still lies to us - the return format is BGR, NOT RGB.
BitmapData bmData = b.LockBits(new Rectangle(0, 0, b.Width, b.Height),
ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);

int stride = bmData.Stride;


System.IntPtr Scan0 = bmData.Scan0;

unsafe
{
byte* p = (byte*)(void*)Scan0;

int nOffset = stride - b.Width * 3;


int nWidth = b.Width * 3;

for (int y = 0; y < b.Height; ++y)


{
for (int x = 0; x < nWidth; ++x)
{
p[0] = (byte)(255 - p[0]);
++p;
}
p += nOffset;
}
}

b.UnlockBits(bmData);
return true;
}

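// Convert the image to grayscale in place, weighting the channels as 0.299 R + 0.587 G + 0.114 B.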
public static bool GrayScale(Bitmap b)


{
// GDI+ still lies to us - the return format is BGR, NOT RGB.
BitmapData bmData = b.LockBits(new Rectangle(0, 0, b.Width, b.Height),
ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);

int stride = bmData.Stride;


System.IntPtr Scan0 = bmData.Scan0;

unsafe
{
byte* p = (byte*)(void*)Scan0;

int nOffset = stride - b.Width * 3;

byte red, green, blue;

for (int y = 0; y < b.Height; ++y)


{
for (int x = 0; x < b.Width; ++x)
{
blue = p[0];
green = p[1];
red = p[2];

p[0] = p[1] = p[2] = (byte)(.299 * red + .587 * green + .114 * blue);


p += 3;
}
p += nOffset;
}
}

b.UnlockBits(bmData);

return true;
}

public static bool Brightness(Bitmap b, int nBrightness)


{
if (nBrightness < -255 || nBrightness > 255)
return false;

// GDI+ still lies to us - the return format is BGR, NOT RGB.


BitmapData bmData = b.LockBits(new Rectangle(0, 0, b.Width, b.Height),
ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);

int stride = bmData.Stride;


System.IntPtr Scan0 = bmData.Scan0;

int nVal = 0;

unsafe
{
byte* p = (byte*)(void*)Scan0;

int nOffset = stride - b.Width * 3;


int nWidth = b.Width * 3;
for (int y = 0; y < b.Height; ++y)
{
for (int x = 0; x < nWidth; ++x)
{
nVal = (int)(p[0] + nBrightness);

if (nVal < 0) nVal = 0;


if (nVal > 255) nVal = 255;

p[0] = (byte)nVal;

++p;
}
p += nOffset;
}
}

b.UnlockBits(bmData);

return true;
}

public static bool Contrast(Bitmap b, sbyte nContrast)


{
if (nContrast < -100) return false;
if (nContrast > 100) return false;

double pixel = 0, contrast = (100.0 + nContrast) / 100.0;

contrast *= contrast;
int red, green, blue;

// GDI+ still lies to us - the return format is BGR, NOT RGB.


BitmapData bmData = b.LockBits(new Rectangle(0, 0, b.Width, b.Height),
ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);

int stride = bmData.Stride;


System.IntPtr Scan0 = bmData.Scan0;

unsafe
{
byte* p = (byte*)(void*)Scan0;

int nOffset = stride - b.Width * 3;

for (int y = 0; y < b.Height; ++y)


{
for (int x = 0; x < b.Width; ++x)
{
blue = p[0];
green = p[1];
red = p[2];

pixel = red / 255.0;


pixel -= 0.5;
pixel *= contrast;
pixel += 0.5;
pixel *= 255;
if (pixel < 0) pixel = 0;
if (pixel > 255) pixel = 255;
p[2] = (byte)pixel;

pixel = green / 255.0;


pixel -= 0.5;
pixel *= contrast;
pixel += 0.5;
pixel *= 255;
if (pixel < 0) pixel = 0;
if (pixel > 255) pixel = 255;
p[1] = (byte)pixel;

pixel = blue / 255.0;


pixel -= 0.5;
pixel *= contrast;
pixel += 0.5;
pixel *= 255;
if (pixel < 0) pixel = 0;
if (pixel > 255) pixel = 255;
p[0] = (byte)pixel;

p += 3;
}
p += nOffset;
}
}

b.UnlockBits(bmData);

return true;
}
public static bool Gamma(Bitmap b, double red, double green, double blue)
{
if (red < .2 || red > 5) return false;
if (green < .2 || green > 5) return false;
if (blue < .2 || blue > 5) return false;

byte[] redGamma = new byte[256];


byte[] greenGamma = new byte[256];
byte[] blueGamma = new byte[256];

for (int i = 0; i < 256; ++i)


{
redGamma[i] = (byte)Math.Min(255, (int)((255.0 * Math.Pow(i / 255.0, 1.0 / red)) +
0.5));
greenGamma[i] = (byte)Math.Min(255, (int)((255.0 * Math.Pow(i / 255.0, 1.0 / green))
+ 0.5));
blueGamma[i] = (byte)Math.Min(255, (int)((255.0 * Math.Pow(i / 255.0, 1.0 / blue)) +
0.5));
}

// GDI+ still lies to us - the return format is BGR, NOT RGB.


BitmapData bmData = b.LockBits(new Rectangle(0, 0, b.Width, b.Height),
ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);

int stride = bmData.Stride;


System.IntPtr Scan0 = bmData.Scan0;

unsafe
{
byte* p = (byte*)(void*)Scan0;
int nOffset = stride - b.Width * 3;

for (int y = 0; y < b.Height; ++y)


{
for (int x = 0; x < b.Width; ++x)
{
p[2] = redGamma[p[2]];
p[1] = greenGamma[p[1]];
p[0] = blueGamma[p[0]];

p += 3;
}
p += nOffset;
}
}

b.UnlockBits(bmData);

return true;
}

public static bool Color(Bitmap b, int red, int green, int blue)
{
if (red < -255 || red > 255) return false;
if (green < -255 || green > 255) return false;
if (blue < -255 || blue > 255) return false;

// GDI+ still lies to us - the return format is BGR, NOT RGB.


BitmapData bmData = b.LockBits(new Rectangle(0, 0, b.Width, b.Height),
ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
int stride = bmData.Stride;
System.IntPtr Scan0 = bmData.Scan0;

unsafe
{
byte* p = (byte*)(void*)Scan0;

int nOffset = stride - b.Width * 3;


int nPixel;

for (int y = 0; y < b.Height; ++y)


{
for (int x = 0; x < b.Width; ++x)
{
nPixel = p[2] + red;
nPixel = Math.Max(nPixel, 0);
p[2] = (byte)Math.Min(255, nPixel);

nPixel = p[1] + green;


nPixel = Math.Max(nPixel, 0);
p[1] = (byte)Math.Min(255, nPixel);

nPixel = p[0] + blue;


nPixel = Math.Max(nPixel, 0);
p[0] = (byte)Math.Min(255, nPixel);

p += 3;
}
p += nOffset;
}
}
b.UnlockBits(bmData);

return true;
}

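// Apply the 3x3 convolution kernel described by ConvMatrix m to every interior pixel of b,
// dividing by m.Factor, adding m.Offset and clamping the result to the 0-255 range.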
public static bool Conv3x3(Bitmap b, ConvMatrix m)


{
// Avoid divide by zero errors
if (0 == m.Factor) return false;

Bitmap bSrc = (Bitmap)b.Clone();

// GDI+ still lies to us - the return format is BGR, NOT RGB.


BitmapData bmData = b.LockBits(new Rectangle(0, 0, b.Width, b.Height),
ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
BitmapData bmSrc = bSrc.LockBits(new Rectangle(0, 0, bSrc.Width, bSrc.Height),
ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);

int stride = bmData.Stride;


int stride2 = stride * 2;
System.IntPtr Scan0 = bmData.Scan0;
System.IntPtr SrcScan0 = bmSrc.Scan0;

unsafe
{
byte* p = (byte*)(void*)Scan0;
byte* pSrc = (byte*)(void*)SrcScan0;

int nOffset = stride - b.Width * 3;


int nWidth = b.Width - 2;
int nHeight = b.Height - 2;

int nPixel;

for (int y = 0; y < nHeight; ++y)


{
for (int x = 0; x < nWidth; ++x)
{
nPixel = ((((pSrc[2] * m.TopLeft) + (pSrc[5] * m.TopMid) + (pSrc[8] *
m.TopRight) +
(pSrc[2 + stride] * m.MidLeft) + (pSrc[5 + stride] * m.Pixel) + (pSrc[8 +
stride] * m.MidRight) +
(pSrc[2 + stride2] * m.BottomLeft) + (pSrc[5 + stride2] * m.BottomMid) +
(pSrc[8 + stride2] * m.BottomRight)) / m.Factor) + m.Offset);

if (nPixel < 0) nPixel = 0;


if (nPixel > 255) nPixel = 255;

p[5 + stride] = (byte)nPixel;

nPixel = ((((pSrc[1] * m.TopLeft) + (pSrc[4] * m.TopMid) + (pSrc[7] *


m.TopRight) +
(pSrc[1 + stride] * m.MidLeft) + (pSrc[4 + stride] * m.Pixel) + (pSrc[7 +
stride] * m.MidRight) +
(pSrc[1 + stride2] * m.BottomLeft) + (pSrc[4 + stride2] * m.BottomMid) +
(pSrc[7 + stride2] * m.BottomRight)) / m.Factor) + m.Offset);

if (nPixel < 0) nPixel = 0;


if (nPixel > 255) nPixel = 255;

p[4 + stride] = (byte)nPixel;


nPixel = ((((pSrc[0] * m.TopLeft) + (pSrc[3] * m.TopMid) + (pSrc[6] *
m.TopRight) +
(pSrc[0 + stride] * m.MidLeft) + (pSrc[3 + stride] * m.Pixel) + (pSrc[6 +
stride] * m.MidRight) +
(pSrc[0 + stride2] * m.BottomLeft) + (pSrc[3 + stride2] * m.BottomMid) +
(pSrc[6 + stride2] * m.BottomRight)) / m.Factor) + m.Offset);

if (nPixel < 0) nPixel = 0;


if (nPixel > 255) nPixel = 255;

p[3 + stride] = (byte)nPixel;

p += 3;
pSrc += 3;
}
p += nOffset;
pSrc += nOffset;
}
}

b.UnlockBits(bmData);
bSrc.UnlockBits(bmSrc);

return true;
}
public static bool Smooth(Bitmap b, int nWeight /* default to 1 */)
{
ConvMatrix m = new ConvMatrix();
m.SetAll(1);
m.Pixel = nWeight;
m.Factor = nWeight + 8;

return BitmapFilter.Conv3x3(b, m);


}

public static bool GaussianBlur(Bitmap b, int nWeight /* default to 4*/)


{
ConvMatrix m = new ConvMatrix();
m.SetAll(1);
m.Pixel = nWeight;
m.TopMid = m.MidLeft = m.MidRight = m.BottomMid = 2;
m.Factor = nWeight + 12;

return BitmapFilter.Conv3x3(b, m);


}
public static bool MeanRemoval(Bitmap b, int nWeight /* default to 9*/ )
{
ConvMatrix m = new ConvMatrix();
m.SetAll(-1);
m.Pixel = nWeight;
m.Factor = nWeight - 8;

return BitmapFilter.Conv3x3(b, m);


}
public static bool Sharpen(Bitmap b, int nWeight /* default to 11*/ )
{
ConvMatrix m = new ConvMatrix();
m.SetAll(0);
m.Pixel = nWeight;
m.TopMid = m.MidLeft = m.MidRight = m.BottomMid = -2;
m.Factor = nWeight - 8;
return BitmapFilter.Conv3x3(b, m);
}
public static bool EmbossLaplacian(Bitmap b)
{
ConvMatrix m = new ConvMatrix();
m.SetAll(-1);
m.TopMid = m.MidLeft = m.MidRight = m.BottomMid = 0;
m.Pixel = 4;
m.Offset = 127;

return BitmapFilter.Conv3x3(b, m);


}
public static bool EdgeDetectQuick(Bitmap b)
{
ConvMatrix m = new ConvMatrix();
m.TopLeft = m.TopMid = m.TopRight = -1;
m.MidLeft = m.Pixel = m.MidRight = 0;
m.BottomLeft = m.BottomMid = m.BottomRight = 1;

m.Offset = 127;

return BitmapFilter.Conv3x3(b, m);


}

public static bool EdgeDetectConvolution(Bitmap b, short nType, byte nThreshold)


{
ConvMatrix m = new ConvMatrix();

// I need to make a copy of this bitmap BEFORE I alter it 80)


Bitmap bTemp = (Bitmap)b.Clone();
switch (nType)
{
case EDGE_DETECT_SOBEL:
m.SetAll(0);
m.TopLeft = m.BottomLeft = 1;
m.TopRight = m.BottomRight = -1;
m.MidLeft = 2;
m.MidRight = -2;
m.Offset = 0;
break;
case EDGE_DETECT_PREWITT:
m.SetAll(0);
m.TopLeft = m.MidLeft = m.BottomLeft = -1;
m.TopRight = m.MidRight = m.BottomRight = 1;
m.Offset = 0;
break;
case EDGE_DETECT_KIRSH:
m.SetAll(-3);
m.Pixel = 0;
m.TopLeft = m.MidLeft = m.BottomLeft = 5;
m.Offset = 0;
break;
}

BitmapFilter.Conv3x3(b, m);

switch (nType)
{
case EDGE_DETECT_SOBEL:
m.SetAll(0);
m.TopLeft = m.TopRight = 1;
m.BottomLeft = m.BottomRight = -1;
m.TopMid = 2;
m.BottomMid = -2;
m.Offset = 0;
break;
case EDGE_DETECT_PREWITT:
m.SetAll(0);
m.BottomLeft = m.BottomMid = m.BottomRight = -1;
m.TopLeft = m.TopMid = m.TopRight = 1;
m.Offset = 0;
break;
case EDGE_DETECT_KIRSH:
m.SetAll(-3);
m.Pixel = 0;
m.BottomLeft = m.BottomMid = m.BottomRight = 5;
m.Offset = 0;
break;
}

BitmapFilter.Conv3x3(bTemp, m);

// GDI+ still lies to us - the return format is BGR, NOT RGB.


BitmapData bmData = b.LockBits(new Rectangle(0, 0, b.Width, b.Height),
ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
BitmapData bmData2 = bTemp.LockBits(new Rectangle(0, 0, b.Width, b.Height),
ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);

int stride = bmData.Stride;


System.IntPtr Scan0 = bmData.Scan0;
System.IntPtr Scan02 = bmData2.Scan0;
unsafe
{
byte* p = (byte*)(void*)Scan0;
byte* p2 = (byte*)(void*)Scan02;

int nOffset = stride - b.Width * 3;


int nWidth = b.Width * 3;

int nPixel = 0;

for (int y = 0; y < b.Height; ++y)


{
for (int x = 0; x < nWidth; ++x)
{
nPixel = (int)Math.Sqrt((p[0] * p[0]) + (p2[0] * p2[0]));
if (nPixel < nThreshold) nPixel = nThreshold;
if (nPixel > 255) nPixel = 255;
p[0] = (byte)nPixel;
++p;
++p2;
}
p += nOffset;
p2 += nOffset;
}
}

b.UnlockBits(bmData);
bTemp.UnlockBits(bmData2);

return true;
}

public static bool EdgeDetectHorizontal(Bitmap b)


{
Bitmap bmTemp = (Bitmap)b.Clone();

// GDI+ still lies to us - the return format is BGR, NOT RGB.


BitmapData bmData = b.LockBits(new Rectangle(0, 0, b.Width, b.Height),
ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
BitmapData bmData2 = bmTemp.LockBits(new Rectangle(0, 0, b.Width, b.Height),
ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);

int stride = bmData.Stride;


System.IntPtr Scan0 = bmData.Scan0;
System.IntPtr Scan02 = bmData2.Scan0;

unsafe
{
byte* p = (byte*)(void*)Scan0;
byte* p2 = (byte*)(void*)Scan02;

int nOffset = stride - b.Width * 3;


int nWidth = b.Width * 3;

int nPixel = 0;

p += stride;
p2 += stride;

for (int y = 1; y < b.Height - 1; ++y)


{
p += 9;
p2 += 9;

for (int x = 9; x < nWidth - 9; ++x)


{
nPixel = ((p2 + stride - 9)[0] +
(p2 + stride - 6)[0] +
(p2 + stride - 3)[0] +
(p2 + stride)[0] +
(p2 + stride + 3)[0] +
(p2 + stride + 6)[0] +
(p2 + stride + 9)[0] -
(p2 - stride - 9)[0] -
(p2 - stride - 6)[0] -
(p2 - stride - 3)[0] -
(p2 - stride)[0] -
(p2 - stride + 3)[0] -
(p2 - stride + 6)[0] -
(p2 - stride + 9)[0]);

if (nPixel < 0) nPixel = 0;


if (nPixel > 255) nPixel = 255;

(p + stride)[0] = (byte)nPixel;

++p;
++p2;
}

p += 9 + nOffset;
p2 += 9 + nOffset;
}
}

b.UnlockBits(bmData);
bmTemp.UnlockBits(bmData2);

return true;
}

public static bool EdgeDetectVertical(Bitmap b)


{
Bitmap bmTemp = (Bitmap)b.Clone();

// GDI+ still lies to us - the return format is BGR, NOT RGB.


BitmapData bmData = b.LockBits(new Rectangle(0, 0, b.Width, b.Height),
ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
BitmapData bmData2 = bmTemp.LockBits(new Rectangle(0, 0, b.Width, b.Height),
ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);

int stride = bmData.Stride;


System.IntPtr Scan0 = bmData.Scan0;
System.IntPtr Scan02 = bmData2.Scan0;

unsafe
{
byte* p = (byte*)(void*)Scan0;
byte* p2 = (byte*)(void*)Scan02;

int nOffset = stride - b.Width * 3;


int nWidth = b.Width * 3;
int nPixel = 0;

int nStride2 = stride * 2;


int nStride3 = stride * 3;

p += nStride3;
p2 += nStride3;

for (int y = 3; y < b.Height - 3; ++y)


{
p += 3;
p2 += 3;

for (int x = 3; x < nWidth - 3; ++x)


{
nPixel = ((p2 + nStride3 + 3)[0] +
(p2 + nStride2 + 3)[0] +
(p2 + stride + 3)[0] +
(p2 + 3)[0] +
(p2 - stride + 3)[0] +
(p2 - nStride2 + 3)[0] +
(p2 - nStride3 + 3)[0] -
(p2 + nStride3 - 3)[0] -
(p2 + nStride2 - 3)[0] -
(p2 + stride - 3)[0] -
(p2 - 3)[0] -
(p2 - stride - 3)[0] -
(p2 - nStride2 - 3)[0] -
(p2 - nStride3 - 3)[0]);

if (nPixel < 0) nPixel = 0;


if (nPixel > 255) nPixel = 255;

p[0] = (byte)nPixel;

++p;
++p2;
}

p += 3 + nOffset;
p2 += 3 + nOffset;
}
}

b.UnlockBits(bmData);
bmTemp.UnlockBits(bmData2);

return true;
}

public static bool EdgeDetectHomogenity(Bitmap b, byte nThreshold)


{
// This one works by finding the greatest difference between a pixel and its eight neighbours.
// The threshold allows softer edges to be forced down to black; use 0 to negate its effect.
Bitmap b2 = (Bitmap)b.Clone();

// GDI+ still lies to us - the return format is BGR, NOT RGB.


BitmapData bmData = b.LockBits(new Rectangle(0, 0, b.Width, b.Height),
ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
BitmapData bmData2 = b2.LockBits(new Rectangle(0, 0, b.Width, b.Height),
ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
int stride = bmData.Stride;
System.IntPtr Scan0 = bmData.Scan0;
System.IntPtr Scan02 = bmData2.Scan0;

unsafe
{
byte* p = (byte*)(void*)Scan0;
byte* p2 = (byte*)(void*)Scan02;

int nOffset = stride - b.Width * 3;


int nWidth = b.Width * 3;

int nPixel = 0, nPixelMax = 0;

p += stride;
p2 += stride;

for (int y = 1; y < b.Height - 1; ++y)


{
p += 3;
p2 += 3;

for (int x = 3; x < nWidth - 3; ++x)


{
// compare the pixel with each of its eight neighbours and keep the largest difference
nPixelMax = Math.Abs(p2[0] - (p2 + stride - 3)[0]);

nPixel = Math.Abs(p2[0] - (p2 + stride)[0]);
if (nPixel > nPixelMax) nPixelMax = nPixel;

nPixel = Math.Abs(p2[0] - (p2 + stride + 3)[0]);
if (nPixel > nPixelMax) nPixelMax = nPixel;

nPixel = Math.Abs(p2[0] - (p2 - 3)[0]);
if (nPixel > nPixelMax) nPixelMax = nPixel;

nPixel = Math.Abs(p2[0] - (p2 + 3)[0]);
if (nPixel > nPixelMax) nPixelMax = nPixel;

nPixel = Math.Abs(p2[0] - (p2 - stride - 3)[0]);
if (nPixel > nPixelMax) nPixelMax = nPixel;

nPixel = Math.Abs(p2[0] - (p2 - stride)[0]);
if (nPixel > nPixelMax) nPixelMax = nPixel;

nPixel = Math.Abs(p2[0] - (p2 - stride + 3)[0]);
if (nPixel > nPixelMax) nPixelMax = nPixel;

if (nPixelMax < nThreshold) nPixelMax = 0;

p[0] = (byte)nPixelMax;

++p;
++p2;
}

p += 3 + nOffset;
p2 += 3 + nOffset;
}
}

b.UnlockBits(bmData);
b2.UnlockBits(bmData2);
return true;

}
public static bool EdgeDetectDifference(Bitmap b, byte nThreshold)
{
// This one works by finding the greatest difference between a pixel and its eight neighbours.
// The threshold allows softer edges to be forced down to black; use 0 to negate its effect.
Bitmap b2 = (Bitmap)b.Clone();

// GDI+ still lies to us - the return format is BGR, NOT RGB.


BitmapData bmData = b.LockBits(new Rectangle(0, 0, b.Width, b.Height),
ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
BitmapData bmData2 = b2.LockBits(new Rectangle(0, 0, b.Width, b.Height),
ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);

int stride = bmData.Stride;


System.IntPtr Scan0 = bmData.Scan0;
System.IntPtr Scan02 = bmData2.Scan0;

unsafe
{
byte* p = (byte*)(void*)Scan0;
byte* p2 = (byte*)(void*)Scan02;

int nOffset = stride - b.Width * 3;


int nWidth = b.Width * 3;

int nPixel = 0, nPixelMax = 0;


p += stride;
p2 += stride;

for (int y = 1; y < b.Height - 1; ++y)


{
p += 3;
p2 += 3;

for (int x = 3; x < nWidth - 3; ++x)


{
nPixelMax = Math.Abs((p2 - stride + 3)[0] - (p2 + stride - 3)[0]);
nPixel = Math.Abs((p2 + stride + 3)[0] - (p2 - stride - 3)[0]);
if (nPixel > nPixelMax) nPixelMax = nPixel;

nPixel = Math.Abs((p2 - stride)[0] - (p2 + stride)[0]);


if (nPixel > nPixelMax) nPixelMax = nPixel;

nPixel = Math.Abs((p2 + 3)[0] - (p2 - 3)[0]);


if (nPixel > nPixelMax) nPixelMax = nPixel;

if (nPixelMax < nThreshold) nPixelMax = 0;

p[0] = (byte)nPixelMax;

++p;
++p2;
}

p += 3 + nOffset;
p2 += 3 + nOffset;
}
}

b.UnlockBits(bmData);
b2.UnlockBits(bmData2);

return true;
}

public static bool EdgeEnhance(Bitmap b, byte nThreshold)


{
// This one works by finding the greatest difference between a pixel and its eight neighbours.
// The threshold allows softer edges to be forced down to black; use 0 to negate its effect.
Bitmap b2 = (Bitmap)b.Clone();

// GDI+ still lies to us - the return format is BGR, NOT RGB.


BitmapData bmData = b.LockBits(new Rectangle(0, 0, b.Width, b.Height),
ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
BitmapData bmData2 = b2.LockBits(new Rectangle(0, 0, b.Width, b.Height),
ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);

int stride = bmData.Stride;


System.IntPtr Scan0 = bmData.Scan0;
System.IntPtr Scan02 = bmData2.Scan0;

unsafe
{
byte* p = (byte*)(void*)Scan0;
byte* p2 = (byte*)(void*)Scan02;
int nOffset = stride - b.Width * 3;
int nWidth = b.Width * 3;

int nPixel = 0, nPixelMax = 0;

p += stride;
p2 += stride;

for (int y = 1; y < b.Height - 1; ++y)


{
p += 3;
p2 += 3;

for (int x = 3; x < nWidth - 3; ++x)


{
nPixelMax = Math.Abs((p2 - stride + 3)[0] - (p2 + stride - 3)[0]);

nPixel = Math.Abs((p2 + stride + 3)[0] - (p2 - stride - 3)[0]);

if (nPixel > nPixelMax) nPixelMax = nPixel;

nPixel = Math.Abs((p2 - stride)[0] - (p2 + stride)[0]);

if (nPixel > nPixelMax) nPixelMax = nPixel;

nPixel = Math.Abs((p2 + 3)[0] - (p2 - 3)[0]);

if (nPixel > nPixelMax) nPixelMax = nPixel;

if (nPixelMax > nThreshold && nPixelMax > p[0])


p[0] = (byte)Math.Max(p[0], nPixelMax);
++p;
++p2;
}

p += nOffset + 3;
p2 += nOffset + 3;
}
}

b.UnlockBits(bmData);
b2.UnlockBits(bmData2);

return true;
}

public static bool CopyAsNegative(Image sourceImage)


{
Bitmap bmpNew = GetArgbCopy(sourceImage);
// lock for read/write, since the negated buffer is copied back before UnlockBits
BitmapData bmpData = bmpNew.LockBits(new Rectangle(0, 0,
sourceImage.Width, sourceImage.Height),
ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);

IntPtr ptr = bmpData.Scan0;

byte[] byteBuffer = new byte[bmpData.Stride * bmpNew.Height];

Marshal.Copy(ptr, byteBuffer, 0, byteBuffer.Length);


byte[] pixelBuffer = null;
int pixel = 0;

for (int k = 0; k < byteBuffer.Length; k += 4)


{
pixel = ~BitConverter.ToInt32(byteBuffer, k);
pixelBuffer = BitConverter.GetBytes(pixel);

byteBuffer[k] = pixelBuffer[0];
byteBuffer[k + 1] = pixelBuffer[1];
byteBuffer[k + 2] = pixelBuffer[2];
}

Marshal.Copy(byteBuffer, 0, ptr, byteBuffer.Length);

bmpNew.UnlockBits(bmpData);

bmpData = null;
byteBuffer = null;

//return bmpNew;
return true;
}

private static Bitmap GetArgbCopy(Image sourceImage)


{
Bitmap bmpNew = new Bitmap(sourceImage.Width, sourceImage.Height,
PixelFormat.Format32bppArgb);

using (Graphics graphics = Graphics.FromImage(bmpNew))


{
graphics.DrawImage(sourceImage, new Rectangle(0, 0,
bmpNew.Width, bmpNew.Height), new Rectangle(0, 0,
bmpNew.Width, bmpNew.Height), GraphicsUnit.Pixel);
graphics.Flush();
}

return bmpNew;
}

}
}

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;

namespace HYPERSPECTRAL
{
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
}

private void button1_Click(object sender, EventArgs e)
{
// open the main Home form
Home hh = new Home();
hh.Show();
}

private void button2_Click(object sender, EventArgs e)
{
}

private void label1_Click(object sender, EventArgs e)
{
}

private void button2_Click_1(object sender, EventArgs e)
{
}
}
}

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
namespace HYPERSPECTRAL
{
public partial class Home : Form
{
public Home()
{
InitializeComponent();
}
public string name;

string ff;

private void button1_Click(object sender, EventArgs e)


{
OpenFileDialog op = new OpenFileDialog();
op.ShowDialog();
if (op.FileName == "")
{
MessageBox.Show("Please Choose Image");
}
else
{
pictureBox1.Image = new Bitmap(op.FileName);
name = op.FileName;
ff = System.IO.Path.GetFileName(op.FileName);

}
}

private void button3_Click(object sender, EventArgs e)


{
if (pictureBox1.Image == null)
{
MessageBox.Show("Please Choose Cbir Image");
}
else
{
Preprocessing p = new Preprocessing();
p.bmp = new Bitmap(pictureBox1.Image);
p.original = name;
p.ff = ff;
p.Show();
}
}
}
}

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;

namespace HYPERSPECTRAL
{
public partial class Neural : Form
{
public Neural()
{
InitializeComponent();
}

public string imagename;

private void Neural_Load(object sender, EventArgs e)
{
pictureBox1.Image = new Bitmap(imagename);

// run the intensity-based labelling as soon as the form is shown
Neuralclassfication();
}

public void Neuralclassfication()


{
Bitmap img = new Bitmap(pictureBox1.Image);
Bitmap pic = new Bitmap(img, img.Width, img.Height);
int a1 = img.Width;
int a2 = img.Height;
System.Drawing.Color[,] pixels = new System.Drawing.Color[a1, a2];

for (int i = 0; i < img.Width; i++)


{
for (int j = 0; j < img.Height; j++)
{

System.Drawing.Color pxl = img.GetPixel(i, j);


// luminance of the pixel (standard 0.30/0.59/0.11 weighting)
int intensity = (int)(0.3f * pxl.R + 0.59f * pxl.G + 0.11f * pxl.B);

// map the intensity to a class colour; the bands are made contiguous
// so that boundary values (30, 60, 100) are not skipped
if (intensity >= 0 && intensity < 30)
{
pic.SetPixel(i, j, System.Drawing.Color.Blue);
}
else if (intensity >= 30 && intensity < 60)
{
pic.SetPixel(i, j, System.Drawing.Color.DarkGreen);
}
else if (intensity >= 60 && intensity < 100)
{
pic.SetPixel(i, j, System.Drawing.Color.LightGreen);
}
else if (intensity >= 100 && intensity < 200)
{
pic.SetPixel(i, j, System.Drawing.Color.Red);
}
//if (intensity > 120 & intensity < 200)
//{
// pic.SetPixel(i, j, System.Drawing.Color.Silver);
//}
}

}
pictureBox1.Image = pic;
}
}
}

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.Drawing.Imaging;
using AForge;
using AForge.Imaging.Filters;

namespace HYPERSPECTRAL
{
public partial class Preprocessing : Form
{

private System.Drawing.Bitmap m_Bitmap;


private System.Drawing.Bitmap m_Bitmap1;
private System.Drawing.Bitmap m_Bitmap2;
private System.Drawing.Bitmap m_Bitmap3;
private System.Drawing.Bitmap m_Undo;
private System.Drawing.Bitmap m_Undo1;
private System.Drawing.Bitmap m_Undo2;
public Bitmap bmp;
public string original;
public decimal a, b, c;

public string ff;

public Preprocessing()
{
InitializeComponent();
}

private void button1_Click(object sender, EventArgs e)


{
m_Undo = (Bitmap)m_Bitmap.Clone();
if (BitmapFilter.GrayScale(m_Bitmap))
pictureBox1.Image = m_Bitmap;
}

private void button2_Click(object sender, EventArgs e)


{
m_Undo1 = (Bitmap)m_Bitmap1.Clone();
// invert the copy reserved for this operation so the other previews are preserved
if (BitmapFilter.Invert(m_Bitmap1))
pictureBox1.Image = m_Bitmap1;
}

private void button3_Click(object sender, EventArgs e)


{
m_Undo2 = (Bitmap)m_Bitmap2.Clone();
if (BitmapFilter.GrayScale(m_Bitmap2))
this.Invalidate();

// AdjustBrightness returns a new bitmap, so the result must be kept
m_Bitmap2 = AdjustBrightness(m_Bitmap2, 60);
pictureBox1.Image = m_Bitmap2;

}
public static Bitmap AdjustBrightness(Bitmap Image, int Value)
{

Bitmap TempBitmap = Image;

Bitmap NewBitmap = new Bitmap(TempBitmap.Width, TempBitmap.Height);

Graphics NewGraphics = Graphics.FromImage(NewBitmap);

float FinalValue = (float)Value / 255.0f;

float[][] FloatColorMatrix =
{
new float[] {1, 0, 0, 0, 0},
new float[] {0, 1, 0, 0, 0},
new float[] {0, 0, 1, 0, 0},
new float[] {0, 0, 0, 1, 0},
new float[] {FinalValue, FinalValue, FinalValue, 1, 1}
};

ColorMatrix NewColorMatrix = new ColorMatrix(FloatColorMatrix);


ImageAttributes Attributes = new ImageAttributes();

Attributes.SetColorMatrix(NewColorMatrix);

NewGraphics.DrawImage(TempBitmap, new Rectangle(0, 0, TempBitmap.Width,


TempBitmap.Height), 0, 0, TempBitmap.Width, TempBitmap.Height, GraphicsUnit.Pixel,
Attributes);

Attributes.Dispose();

NewGraphics.Dispose();

return NewBitmap;
}
private void button4_Click(object sender, EventArgs e)
{
Feature_Extract f = new Feature_Extract();
f.orginal = original;
f.ff = ff;
f.Show();
}

private void Preprocessing_Load(object sender, EventArgs e)


{
pictureBox1.Image = Bitmap.FromFile(original);
pictureBox1.SizeMode = PictureBoxSizeMode.StretchImage;
m_Bitmap = (Bitmap)Bitmap.FromFile(original, false);
m_Bitmap1 = (Bitmap)Bitmap.FromFile(original, false);
m_Bitmap2 = (Bitmap)Bitmap.FromFile(original, false);
m_Bitmap3 = (Bitmap)Bitmap.FromFile(original, false);
}
private void button2_Click_1(object sender, EventArgs e)
{
m_Undo1 = (Bitmap)m_Bitmap1.Clone();

// convert the working copy to grayscale before smoothing
BitmapFilter.GrayScale(m_Undo1);

// apply AForge's 3x3 median filter to suppress salt-and-pepper noise
Median filter = new Median();
filter.ApplyInPlace(m_Undo1);

pictureBox1.Image = new Bitmap(m_Undo1);


}
}
}
CHAPTER 7
7. TESTING
Testing is vital to the success of the system. System testing makes the logical assumption
that if all parts of the system are correct, the overall goal will be achieved. The system is
therefore tested module by module.
7.1 UNIT TESTING
Unit testing focuses the verification effort on the smallest unit of the software design, the
module. Unit testing is white-box oriented, and the steps can be conducted in parallel for all
modules.
7.1.1 FEATURE EXTRACTION
Test ID: UT01
Unit tested: Feature extraction from the image data
Purpose: To extract the features
Pre-requirement: Hyperspectral data
Test data: Pre-processed hyperspectral data
Expected output: Color and shape features of the image data
Test result: Pass
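The UT01 case can also be automated. The following is only a sketch, assuming an NUnit test
project and a hypothetical FeatureExtractor wrapper around the feature extraction module
(neither is part of the delivered code); the test image name is a placeholder.

using System.Drawing;
using NUnit.Framework;

namespace HYPERSPECTRAL.Tests
{
    [TestFixture]
    public class FeatureExtractionTests
    {
        // UT01: color and shape features should be produced from a
        // pre-processed hyperspectral image.
        [Test]
        public void Extract_ReturnsNonEmptyFeatureVector()
        {
            Bitmap preprocessed = new Bitmap("pavia_sample.bmp");   // placeholder test image
            var extractor = new FeatureExtractor();                 // hypothetical wrapper around the DBFE step

            double[] features = extractor.Extract(preprocessed);

            Assert.IsNotNull(features);
            Assert.Greater(features.Length, 0);
        }
    }
}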

7.1.2 CLASSIFICATION

Test ID: UT02
Unit tested: Classification of the hyperspectral data
Purpose: To classify the type of land
Pre-requirement: Feature values
Test data: Color and shape features
Expected output: Name of the land-cover class
Test result: Pass
7.2 INTEGRATION TESTING
Integration tests are designed to test integrated software components to determine whether they
actually run as one program. This level of testing is event driven and is more concerned with the
basic outcomes of screens or fields. Integration tests demonstrate that, although the components
were individually satisfactory, their combination is also correct and consistent. Integration
testing is specifically aimed at exposing the problems that arise from the combination of components.

7.2.1 CLASSIFICATION

Test ID: IT01
Unit tested: The complete hyperspectral processing chain
Purpose: Hyperspectral image classification
Pre-requirement: Pre-processed data
Test data: Hyperspectral values
Expected output: Name of the land cover, such as grass, building or tree
Test result: Pass
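The IT01 case can likewise be scripted end to end. The sketch below is illustrative only:
BitmapFilter.GrayScale comes from the project code, while DbfeFeatureExtractor and
CnnClassifier are hypothetical wrappers for the feature extraction and classification modules,
and the test image name is a placeholder.

using System.Drawing;
using HYPERSPECTRAL;
using NUnit.Framework;

namespace HYPERSPECTRAL.Tests
{
    [TestFixture]
    public class ClassificationIntegrationTests
    {
        // IT01: run preprocessing, feature extraction and classification as one flow.
        [Test]
        public void Pipeline_AssignsAKnownLandCoverLabel()
        {
            Bitmap image = new Bitmap("pavia_sample.bmp");                    // placeholder input
            BitmapFilter.GrayScale(image);                                    // preprocessing step from the project

            double[] features = new DbfeFeatureExtractor().Extract(image);    // hypothetical
            string label = new CnnClassifier().Classify(features);            // hypothetical

            CollectionAssert.Contains(new[] { "grass", "building", "tree" }, label);
        }
    }
}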
CHAPTER 8

8. IMPLEMENTATION

8.1 PROBLEMS FACED

1. The system does not compile and run without the .NET Framework installed.
2. Input image files in unsupported formats are not accepted (see the sketch below).
3. Exceptions can occur because of memory shortages when large images are loaded (see the sketch below).
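Items 2 and 3 can be mitigated by guarding the image load. The helper below is a sketch of how
Home.button1_Click could be hardened; TryLoadImage is our own name and not part of the delivered
forms, and it assumes the System.Drawing and System.Windows.Forms imports already present in the form.

// A defensive replacement for the plain "new Bitmap(op.FileName)" call.
private Bitmap TryLoadImage(string fileName)
{
    try
    {
        // GDI+ rejects files it cannot decode with an ArgumentException
        return new Bitmap(fileName);
    }
    catch (ArgumentException)
    {
        MessageBox.Show("The selected file is not a supported image format.");
    }
    catch (OutOfMemoryException)
    {
        MessageBox.Show("Not enough memory to load the selected image.");
    }
    return null;
}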

8.2 LESSONS LEARNT

While developing this project, we gained several useful experiences:
1. Before starting the project, we must have a proper plan for it.
2. Unit testing is very important, as bugs are identified then and there.
3. We should not jump into coding directly; the project should first be analysed thoroughly.
4. Large pieces of code should be split into many smaller units so that they are easier to manage and more efficient.
CHAPTER 9

9. CONCLUSION AND FUTURE WORK

We have developed a novel framework for hyperspectral classification that exploits both spectral
and spatial information. Features are extracted as multi-attribute profiles, and their
dimensionality is reduced with the supervised decision boundary feature extraction (DBFE) method.
CNN classification is then applied to improve the accuracy of the results. The proposed framework
was thoroughly evaluated on a widely used hyperspectral data set, the Pavia University scene.
Different strategies were used to implement the framework, and the results were compared in terms
of classification accuracy. Good classification accuracies were obtained with the proposed
framework. In addition, the new approach achieves better classification accuracies than other
widely used classification strategies, with acceptable CPU processing time. We emphasize that the
proposed system is fully automated, which is a highly desirable characteristic.
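In outline, the processing chain summarized above can be expressed as follows. This is a
conceptual sketch only: BitmapFilter.GrayScale is taken from the project code, while
DbfeFeatureExtractor and CnnClassifier are illustrative names standing in for the
attribute-profile/DBFE and CNN stages, and the input file name is a placeholder.

using System;
using System.Drawing;

// Conceptual flow of the proposed framework (class names are illustrative only).
class PipelineSketch
{
    static void Main()
    {
        Bitmap scene = new Bitmap("pavia_university.bmp");                // placeholder input scene

        BitmapFilter.GrayScale(scene);                                    // preprocessing
        double[] features = new DbfeFeatureExtractor().Extract(scene);    // attribute profiles + DBFE
        string landCover = new CnnClassifier().Classify(features);        // CNN labelling

        Console.WriteLine("Predicted class: " + landCover);
    }
}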

FUTURE WORK

In the future, we plan to extend the framework to improve accuracy on various kinds of datasets,
to analyse a parallel processing approach, and to include other performance metrics.
REFERENCES
[1] Q. Wang, J. Lin, and Y. Yuan, "Salient band selection for hyperspectral image classification
via manifold ranking," IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 6,
pp. 1279–1289, 2016.
[2] G. Camps-Valls et al., "Advances in hyperspectral image classification: Earth monitoring with
statistical learning methods," IEEE Signal Processing Magazine, vol. 31, no. 1, pp. 45–54, 2013.
[3] W. Li et al., "Hyperspectral image classification using deep pixel-pair features," IEEE
Transactions on Geoscience and Remote Sensing, vol. 55, no. 2, pp. 844–853, 2016.
[4] M. Xiong et al., "Hyperspectral image classification using weighted joint collaborative
representation," IEEE Geoscience and Remote Sensing Letters, vol. 12, no. 6, pp. 1209–1213, 2015.
[5] L. Gao et al., "Subspace-based support vector machines for hyperspectral image
classification," IEEE Geoscience and Remote Sensing Letters, vol. 12, no. 2, pp. 349–353, 2014.
[6] M. Pesaresi and J. A. Benediktsson, "A new approach for the morphological segmentation of
high-resolution satellite imagery," IEEE Transactions on Geoscience and Remote Sensing, vol. 39,
no. 2, pp. 309–320, Feb. 2001.
[7] E. J. Breen and R. Jones, "Attribute openings, thinnings, and granulometries," Computer
Vision and Image Understanding, vol. 64, no. 3, pp. 377–389, Nov. 1996.
[8] M. Chini, N. Pierdicca, and W. Emery, "Exploiting SAR and VHR optical images to quantify
damage caused by the 2003 Bam earthquake," IEEE Transactions on Geoscience and Remote Sensing,
vol. 47, no. 1, pp. 145–152, Jan. 2009.
[9] M. K. D. Tuia, F. Pacifici, and W. Emery, "Classification of very high spatial resolution
imagery using mathematical morphology and support vector machines," IEEE Transactions on
Geoscience and Remote Sensing, vol. 47, no. 11, pp. 3866–3879, Nov. 2009.
[10] P. Soille, Morphological Image Analysis: Principles and Applications, 2nd ed. New York, NY,
USA: Springer-Verlag, 2003.
WEBSITE REFERENCES

 https://www.tutorialspoint.com/csharp/index.htm
 https://en.wikipedia.org/wiki/C_Sharp_(programming_language)
 http://csharp.net-tutorials.com/
 http://csharp.net-tutorials.com/basics/introduction/
 https://softwareengineering.stackexchange.com/questions/44810/relationship-between-c-net-asp-asp-net-etc
