AGR-205 Notes - Final by Svs

The document outlines the course AGR-205, focusing on Geoinformatics and Nanotechnology for Precision Farming, covering concepts, tools, and techniques essential for modern agriculture. It highlights the differences between traditional and precision farming, emphasizing the use of technologies like GPS, GIS, and sensors to enhance crop production efficiency and reduce environmental impact. The document also discusses the challenges of implementing precision agriculture in India, particularly due to small land holdings and lack of technical expertise.


AGR-205: Geoinformatics and Nano-technology for Precision Farming (1+1)


Weeks | Lecture Outlines
1 & 2 | Precision agriculture: concepts and techniques; their issues and concerns for Indian agriculture. Definition of precision agriculture, difference between conventional agriculture and precision agriculture, different terminologies of precision agriculture.
3 | Geoinformatics: definition, concepts, tools and techniques; their use in precision agriculture.
4 & 5 | Components of precision agriculture; Global Positioning System (GPS), its components and functions; use of GPS in agriculture.
6 & 7 | Geographical Information Systems (GIS): definitions, concepts, components; spatial data, attribute data; vector and raster data/models.
8 | Sensors, use of sensors in agriculture; satellite remote sensing: definitions, concepts, principles; LISS-3 and LISS-4 images; spectral signatures of soil, water and vegetation.
9 | Crop discrimination, application in agriculture, image processing and interpretation.
10 | Variable rate technology; yield mapping/monitoring: definition, working principles, advantages.
11 | Geodesy and its basic principles.
12 | Soil mapping; fertiliser recommendation using geospatial technologies.
13 | System simulation: concepts and principles; introduction to crop simulation models and their uses for optimisation of agricultural inputs.
14 & 15 | STCR approach for precision agriculture: definition of STCR, principles, calculation of nutrients.
16 | Nanotechnology: definition, concepts and techniques; history of nanotechnology; instruments used for nanotechnologies.
17 & 18 | Brief introduction to nanoscale effects, nano-particles, nano-pesticides, nano-fertilisers, nano-sensors.
19 | Use of nanotechnology in tillage, seed, water, fertiliser and plant protection for scaling up farm productivity.

UNIVERSITY OF AGRICULTURAL SCIENCES RAICHUR
Prepared by: Dr. Vishwanath, S., Dr. Shwetha, B. N. and Dr. Shrinivas, C. S.

Chapter 1
*Geoinformatics and Nanotechnology for Precision Farming*

Geoinformatics examines ways in which powerful spatial technologies can help farmers enhance crop production with more efficiency and at lower costs. Recent advances in geoinformatics have created new opportunities and challenges in applying geoinformatics to agriculture in the form of precision agriculture.
The use of nanomaterials in agriculture aims at reducing the amount of sprayed chemical products through smart delivery of active ingredients, minimising nutrient losses in fertilisation and increasing yields through optimised water and nutrient management.
Precision agriculture is one of many modern farming practices that make production
more efficient. Farmers use precision agriculture practices to apply nutrients, water, seed and
other agricultural inputs to grow more crops in a wide range of soil environments. Precision
agriculture can help farmers know how much and when to apply these inputs. It reduces the
misapplication of inputs and increases crop productivity and farm efficiency.

1.1. PRECISION AGRICULTURE

Precision agriculture (PA) or precision farming (PF) aims at optimising profitability and
protecting environment through efficient use of inputs based on temporal and spatial variability
of soils and crops. Both sensor based and satellite image based technologies have been
developed and are being promoted in the developed world. Economic analyses of the adoption of precision farming have indicated marginal profitability over already existing best management practices (BMPs) and higher productivity levels. The wide gap between potential and actual yield levels in the developing world necessitates promotion of PF to achieve the intended benefits. Differences between traditional farming and precision farming are given below:

Traditional farming | Precision farming
Unit of treatment and organisation: the field is viewed as a homogeneous site | The arable site is regarded as different from one point to the other
Nutrient management is based on the average for the entire field | Nutrient management is based on GPS and point sampling
Plant protection is based on the average of damaged samples | Plant protection is based on GPS point-based sampling
Uniform rate of input application | Variable rates of application based on sampling
Low yield with high inputs | High yield with low inputs


1.1.1. CONCEPTS OF PRECISION FARMING

Precision farming basically depends on measurement and understanding of variability. The main components of a precision farming system must address this variability. Precision farming is a farm management concept based on modern information technologies. Components (enabling technologies) of precision farming include:

➢ Remote sensing (RS).


➢ Geographical information system (GIS).
➢ Global positioning system (GPS).
➢ Soil testing.
➢ Yield monitors.
➢ Variable rate technology (VRT).

Precision agriculture is a phrase that captures the imagination of many concerned with the
production of food, feed and fiber. The concept of precision agriculture offers the promise of
increasing productivity while decreasing production cost and minimising environmental impacts.
Precision agriculture conjures up images of farmers overcoming the elements with computerised
machinery that is precisely controlled via satellites and local sensors and using planning
software that accurately predicts crop development. This image has been called the future of
agriculture.

In the Indian context, precision farming may be defined as an accurate application of agricultural inputs for crop growth, considering relevant factors such as soil, weather and crop management practices. It is actually an information and technology based farming system where inputs are managed and distributed on a site-specific basis for long-term benefits.

Precision farming system (PFS) is based on the recognition of spatial and temporal variability in crop production. Variability is accounted for in farm management with the aim of increasing productivity and reducing environmental risks. In developed countries, farms are often large (sometimes 1000 ha or more) and comprise several fields. The spatial variability in large farms, therefore, has two components: within-field variability and between-field variability. The concepts of PF can be presented as shown in Fig. 1.1.

Precision farming system within a field is also referred to as site-specific crop management
(SSCM). According to the Second International Conference on Site-Specific Management for
Agricultural Systems, held in Minneapolis, Minnesota, in March 1994, precision farming or
SSCM refers to a developing agricultural management system that promotes variable
management practices within a field according to site or soil conditions (National Research
Council 1997).


Fig. 1.1. Components of precision agriculture.

However, according to Batte and Van Buren (1999), SSCM is not a single technology, but an integration of technologies permitting:

1. Collection of data on an appropriate scale at a suitable time.
2. Interpretation and analysis of data to support a range of management decisions.
3. Implementation of management response on an appropriate scale and at a suitable time.

Precision farming is the concept of using new technologies and collected field information. Precision farming provides farmers with a tool to apply fertiliser according to the need of a particular sub-field rather than based on the average of the field. The savings made with this variable application can be fairly large. Precision farming technology would be a viable alternative to improve profitability and productivity (Fig. 1.2).

Production of food, feed and fiber is dependent on the quantity and quality of soil, plant, water and air. No matter what agricultural systems are used, without protecting the natural resources, yields will decrease until the point of no return. The concept that precision agriculture is a system (Webster: interrelated, interacting, interdependent elements forming a complex whole) provides a more useful foundation for understanding precision agriculture. It is an agricultural system that can be used for:
1. Land preparation.
2. Seeding.
3. Chemical application.
4. Fertiliser application.
5. Crop monitoring.
6. Nutrient auditing.
7. Soil and leaf testing.


8. Pest management.
9. Conservation practices.
10. Gross margin analysis.

Fig. 1.2. Precision agriculture : a comprehensive approach.

In other words, precision agriculture (PA) can loosely be defined as the application of technologies and principles to manage spatial and temporal variability associated with all aspects of agricultural production for improving production and environmental quality.

1.1.2. TOOLS AND TECHNIQUES

In addition to mechanisation, other tools and equipment (techniques) used in PF are briefly presented below.


Global Positioning System (GPS)

The GPS is a navigation system based on a network of satellites that helps users to record positional information (latitude, longitude and elevation) with an accuracy of between 100 m and 0.01 m. GPS allows farmers to locate the exact position of field features, such as soil type, pest occurrence, weed invasion, water holes, boundaries and obstructions. There is an automatic controlling system, with a light or sound guiding panel (DGPS), antenna and receiver. GPS satellites broadcast signals that allow GPS receivers to calculate their position. In many developed countries, GPS is commonly used as a navigator to guide drivers to a specific location (Fig. 1.3).

Fig. 1.3. GPS for precision agriculture.

The GPS provides the same precise guidance for field operations. The system allows users to reliably identify field locations so that inputs (seeds, fertilisers, pesticides, herbicides and irrigation water) can be applied to an individual field, based on performance criteria and previous input applications. Specific advantages of GPS in farm operations include:

1. Farm machines are guided along a track, hundreds of meters long, making only centimeter-scale deviations.
2. Rows are not forgotten and overlaps are not made.
3. The number of rows can be counted during work.
4. Tools and equipment can be operated in the same way year after year.
5. It is possible to work at night or in dusty conditions with precision.
6. The system is not affected by wind.
7. An additional recorder can store field information to be used in making a map.

Readers are advised to refer Chapter 1 (1.2.2) for further information on GPS.
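As an illustration of how recorded GPS fixes can be put to use, the short Python sketch below converts two latitude/longitude readings into a ground distance with the haversine formula. The coordinates, and the use of Python, are illustrative assumptions and not part of the original notes.

```python
# A minimal sketch: turning two GPS fixes recorded in a field into a ground
# distance using the haversine formula. The coordinates are hypothetical.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude fixes."""
    r = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Two hypothetical fixes marking, say, a weed patch and a water hole in one field.
print(round(haversine_m(16.2010, 77.3550, 16.2024, 77.3568), 1), "m apart")
```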

Sensor Technologies

Various technologies (electromagnetic, conductivity, photo-electricity, ultrasound) are used to measure humidity, vegetation, temperature, vapour, air etc. Remote sensing data are used to distinguish crop species, locate stress conditions, discover pests and weeds and monitor drought, soil and plant conditions. Sensors enable the collection of immense quantities of data without laboratory analysis. The specific uses of sensor technologies in farm operations are as follows:

1. Sense soil characteristics : Texture, structure, physical character, humidity, nutrient level and presence of clay.
2. Sense colours to understand conditions relating to : Plant population, water shortage
and plant nutrients.
3. Monitor yield : Crop yield and crop humidity.
4. Variable rate system : To monitor the migration of fertilisers and discover weed
invasion.

Geographic Information System (GIS)


Use of GIS began in the 1960s. This system comprises hardware, software and procedures designed to support the compilation, storage, retrieval and analysis of feature attributes and location data to produce maps. GIS links information in one place so that it can be extrapolated when needed. Computerised GIS maps are different from conventional maps and contain various layers of information (yield, soil survey maps, rainfall, crops, soil nutrient levels and pests). GIS helps convert digital information to a form that can be recognised and used. Digital images are analysed to produce a digital information map of the land use and vegetation cover. GIS is a kind of computerised map, but its real role is using statistics and spatial methods to analyse characteristics and geography. Further information is extrapolated from the analysis. A farming GIS database can provide information on : field topography, soil types, surface drainage, subsurface drainage, soil testing, irrigation, chemical application rates and crop yield. Once analysed, this information is used to understand the relationships between the various elements affecting a crop on a specific site. To sum up, GIS technologies support people working in agriculture by providing :
1. Greater analytical support for precision farming.
2. Better understanding of risk factors.
3. Higher revenue generation and cost recovery.
4. Greater efficiency through task automation.

5. More accurate support for decision making.


6. Greater insight to policy making applications.
7. Easier reporting for government applications and regulatory compliance.
8. Better resource management.

Readers may refer to Sections 1.2.2 and 1.3.3 for further information on GIS.
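The overlay idea described above can be illustrated with a small sketch. The two raster layers, their values and the zoning rule below are hypothetical; the point is only to show how co-registered GIS layers are combined cell by cell.

```python
# A minimal sketch (hypothetical layers, not the notes' own data) of GIS overlay:
# two co-registered raster layers are combined cell-by-cell into management zones.
import numpy as np

# 3 x 4 raster layers for the same field, same cell size and origin
soil_ph = np.array([[6.1, 6.4, 7.2, 7.8],
                    [5.9, 6.5, 7.1, 7.6],
                    [5.8, 6.3, 6.9, 7.4]])
yield_t_ha = np.array([[2.1, 2.8, 3.9, 4.2],
                       [2.0, 2.9, 3.8, 4.0],
                       [1.8, 2.7, 3.5, 3.9]])

# Overlay rule (toy example): zone 2 = acid soil and low yield (priority cells),
# zone 1 = low yield only, zone 0 = no action.
zones = np.zeros_like(soil_ph, dtype=int)
zones[yield_t_ha < 3.0] = 1
zones[(yield_t_ha < 3.0) & (soil_ph < 6.5)] = 2
print(zones)
```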

Variable-Rate Technologies (VRT)


Variable rate technologies (VRT) are automatic and may be applied to numerous farming operations, varying the delivery of farm inputs depending on the soil type noted in a soil map. Information extrapolated from the GIS can control processes, such as seeding, fertiliser and pesticide application and herbicide selection and application, at a variable (appropriate) rate in the right place at the right time. The VRT is perhaps the most widely used PFS technology.
Chapter 1.2.3 may be referred for further details in this regard.
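A minimal sketch of the variable-rate idea is given below: the controller looks up the rate prescribed for the management zone containing the current GPS fix. The zone names, rates and coordinates are hypothetical illustration values, not an actual prescription.

```python
# A minimal sketch, not the notes' own system: a variable-rate controller that
# looks up the prescribed rate for the zone the machine is currently in.

# Prescription map: zone id -> fertiliser rate (kg/ha); hypothetical values
prescription = {"zone_A": 40, "zone_B": 60, "zone_C": 25}

def zone_of(lat, lon):
    """Toy zone lookup; a real system would query a GIS polygon layer."""
    return "zone_B" if lon >= 77.356 else "zone_A"

def commanded_rate(lat, lon, default_rate=50):
    """Rate sent to the actuator for the current GPS position."""
    return prescription.get(zone_of(lat, lon), default_rate)

print(commanded_rate(16.202, 77.357))  # -> 60 kg/ha in this sketch
```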

Grain Yield Monitors for Mapping


A monitor mounted on a combine continuously measures and records the flow of grain in the grain elevator. When linked with a GPS receiver, yield monitors can provide data for a yield map that helps farmers to determine the sound management of inputs, such as fertiliser, lime, seed, pesticides, tillage and irrigation.
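The conversion performed by a yield monitor can be sketched as follows: grain mass flow divided by the area harvested per unit time gives a point yield that, once georeferenced, feeds the yield map. The sensor readings used below are hypothetical.

```python
# A minimal sketch (illustrative values, not from the notes) of how a yield
# monitor turns grain mass flow plus ground speed into a point yield.

def point_yield_t_ha(mass_flow_kg_s, speed_m_s, swath_width_m):
    """Instantaneous yield in t/ha from combine sensor readings."""
    harvested_area_m2_per_s = speed_m_s * swath_width_m   # area covered each second
    kg_per_m2 = mass_flow_kg_s / harvested_area_m2_per_s  # grain per square metre
    return kg_per_m2 * 10.0                               # 1 kg/m2 = 10 t/ha

# e.g. 2.4 kg/s of grain at 1.5 m/s with a 6 m header
print(round(point_yield_t_ha(2.4, 1.5, 6.0), 2), "t/ha")
```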

1.1.3. PRECISION FARMING CONCERNS FOR INDIAN AGRICULTURE


Farmers in developed countries typically own large farms (10-1000 ha or more) and crop
production systems are highly mechanised in most cases. Large farms may comprise several
fields in differing conditions. Even within a relatively small field (<30 ha) the degree of pest
infestation, disease infection and weed competition may differ from one area to another.

In conventional agriculture, although a soil map of the region may exist, farmers still tend
to practice the same crop management throughout their fields: crop varieties, land preparation,
fertilisers, pesticides and herbicides are uniformly applied in spite of variation. Optimum growth
and development are thus not achieved. Furthermore, there is inefficient use of inputs and labour.
Availability of information technology since the 1980s provides farmers with new tools and
approaches to characterise the nature and extent of variation in the fields, enabling them to
develop the most appropriate management strategy for a specific location, increasing the
efficiency of input application.

Practical Problems in Indian Agriculture

Precision agriculture has been mostly confined to developed countries. Limitations for its
implementation in developing countries like India are :
1. Small land holdings.
2. Heterogeneity of cropping systems and market imperfections.
3. Complexity of tools and techniques requiring new skills.

4. Lack of technical expertise, knowledge and technology (India spends only 0.3 per cent of its agricultural GDP on research and development).
5. Infrastructure and institutional constraints including market imperfections.
6. High cost.

In India, the major problem is small field size. More than 58 per cent of operational holdings in the country are smaller than 1 ha. Only in the states of Punjab, Rajasthan, Haryana and Gujarat do more than 20 per cent of agricultural lands have an operational holding size of more than 4 ha. There is scope for implementing precision agriculture for crops like rice and wheat, especially in the states of Punjab and Haryana. Commercial as well as horticultural crops also show scope for precision agriculture.

1.2. GEOINFORMATICS

Natural resource management activities seek to increase agricultural productivity through adoption of practices that maintain the long-term ecological and biological integrity of natural resources.

Hence, towards achieving the goal of livelihood security, it is important to conserve the natural resource base and improve the economic viability of farming. Geoinformatics deals with handling digital geoinformation, such as collecting (mainly through remote sensing and field investigation), processing, storing, archiving, preservation, retrieving, transmitting, accessing, visualisation, analysing, synthesising, presenting and disseminating geoinformation.

Definition
Geoinformatics is a modern technology that provides accurate means of measuring the extent and pattern of changes and other related information about the environment (Boakye et al. 2008). The term geoinformation consists of two main words: geo means the earth's surface or the environment and informatics stands for information about something. Thus, geoinformation is the science and technology of communicating evidence about the state of the earth's surface. It is known for its technological robustness in assessing spatial and temporal change occurring on the earth's surface.
Application of geoinformatics includes land use mapping and farm planning, assessing
crop variability and performance tracking, plant nutrition assessments, in-field plant vigour zone
delineation, irrigation and drainage assessments, storm, frost or fire crop damage insurance
assessments, crop yield management, monitoring and prediction, impacts of soil compaction,
pest and disease management, spatial management systems and databases, sustainable
agricultural engineering and many more. Therefore, geoinformatics is playing an increasing
role in agriculture throughout the world by helping farmers increase production, reduce costs and
manage their land more efficiently. While natural inputs in farming cannot be controlled, they
can be better understood and managed with geoinformatics tools.


Agrogeoinformation, the agriculture-related geoinformation, is the key information in the agricultural decision making and policy formulation process. Agrogeoinformatics, a branch of geoinformatics, is the science and technology of handling digital agrogeoinformation, such as collecting (mainly through remote sensing and field investigation), processing, storing, archiving, preservation, retrieving, transmitting, accessing, visualisation, analysing, synthesising, presenting and disseminating agrogeoinformation. Recent advances in geoinformatics have created new opportunities and challenges in applying agrogeoinformatics to agriculture monitoring, assessment and decision making.

1.2.1. GEOINFORMATIC CONCEPT, TOOLS AND PRINCIPLES

Geoinformatics is a new discipline concerned with the modelling of spatial data and the techniques in spatial information systems. It is a multidisciplinary science that integrates the technologies and principles of digital cartography, remote sensing, photogrammetry, global positioning systems (GPS), geographic information systems (GIS) and automated data capture systems using high-resolution geo-referenced spatial information from aerospace remote sensing platforms.

Thus, geoinformatics provides tools that allow for the processing, manipulation and analysis of spatial data into information tied explicitly to, and used to make decisions about, portions of the earth and environmental problems. The techniques can include all stages of data collection, data processing, database management, data analysis and modelling and data presentation to end use in the creation of maps and spatial information products. We can understand the concepts better when we consider the principles of the following component sub-fields:

Cartographic principles involve the map, map design and map visualisation and production in an analogue or digital computer environment.

Remote sensing involves the acquisition of spatial data of the environment without
physical contact with the objects or features sensed by using electromagnetic energy radiation,
interaction and detection principles in analogue or digital formats.

Photogrammetric principles involve the art and scientific processes of obtaining reliable information about the physical environment by interpreting remotely sensed aerospace data (aerial photographs and satellite imageries) in analogue or digital formats.

Surveying principles involve the adroit use of fundamental methods (processes) and
technologies (instruments) to determine the precise position and dimensions of points (features)
on the earth's surface and the presentation of the results in analogue or digital format.

Global positioning systems (GPS) involve precise surveying (determination of the position and dimensions of points) by applying resection and satellite constellation principles and the presentation of the results in analogue (maps, tables) or digital formats.


Geographic information systems (GIS) principles involve data gathering, data processing, database management, data modelling and visualisation in a digital environment.

Automated data capture systems include multi-spectral remote sensing processes, GPS
data, map digitisation and scanning and computer input and output technologies.

1.2.2. RELEVANCE OF GEOINFORMATICS IN AGRICULTURE

Geoinformatics, and in particular remote sensing, geographic information systems and global positioning systems technologies, have become indispensable in modern agriculture. Advances in remote sensing have revolutionised the gathering of information on agricultural activities, including land use, soil condition, weather condition etc., which is essential for site characterisation and consequent site selection for farming.

Remote Sensing System

Since remote sensing techniques have the unique capability of recording data in visible as well as invisible (including ultraviolet, reflected infrared, thermal infrared and microwave) parts of the electromagnetic spectrum, they enable us to see beyond the capability of the human eye. For instance, trees or plants which are affected by diseases or insect attack can be detected by remote sensing techniques much before the human eye sees them. Such early detection is vital for the application of remedial measures.

Detection, identification, measurement and monitoring of agricultural phenomena are predicated on the assumption that agricultural landscape features (such as crops, livestock, crop infestation and soil anomalies) have consistently identifiable signatures in the remote sensing data used. These identifiable signatures are a reflection of crop type, state of maturity, crop density, crop geometry, crop vigour, crop moisture, crop temperature and soil moisture as well as soil temperature. Areas of specific application of remote sensing in agricultural surveys include:

Applicable to crop survey : Crop identification, area under crop, crop vigour, crop density, stage of crop growth, crop growth rates, yield forecasting, actual yield, soil fertility, effects of fertilisers, toxicity on crops, water quality, irrigation requirement, pests and diseases incidence, water availability and location of canals.

Applicable to range survey : Delineation of forest types, condition of range, carrying capacity, forage, time of seasonal change, location of water, water quality, soil fertility, soil moisture, insect infestations, wildlife inventory.

Applicable to livestock survey : Cattle population, sheep population, pig population, poultry population, age and sex distribution, distribution of animals, animal behaviour, disease identification, types of farm buildings.


Geographic Information System

Geographic information system is another geoinformatic technique that is quite relevant in agricultural development. There are numerous definitions of geographic information
systems in the literature. For our purpose geographic information system can be defined as a
system for capturing, storing, checking, manipulating, analysing and displaying data, which are
spatially referenced to the earth. Thus, a true geographic information system is designed to
accept, organise, statistically analyse and display diverse types of spatial information that are
geographically referenced to a common coordinate system of a particular projection and scale.

Geographic information system comprises five major components and three main
subsystems. Main components of geographic information systems are :

1. The hardware which include a host computer, data acquisition device(s) such as digitiser,
scanner, digital image processing system, digital theodolite, analytical and digital
photogrammetric plotter and output device(s) such as plotter, printer, high resolution screen
among others.
2. The spatial database, containing the objects of interest, including the objects' geometric
(position and spatial relationships) and thematic data in structured form.
3. Software for the acquisition, manipulation and management of data in the database.
4. Procedures (conventions and algorithms to guide its operations).
5. Expertise in terms of skilled human operators.

The main subsystems of a geographic information system are:

1. Data acquisition subsystem for collecting and/or processing spatial data from existing maps, remotely sensed data, aerial photography, land survey, among others.
2. Database management subsystem for the storage, retrieval, manipulation and analysis of data.
3. Visualisation and reporting subsystem for displaying database query results in graphic and/or alphanumeric form.

Geographic information systems are needed for the collection, analysis and management of agricultural data for the purpose of timely decision-making. The database will contain layers of spatial data from remote sensors, existing maps or field surveys. The information system ensures compatibility of the various data sources.

Geographic information system analysis basically includes rectification for geometrical correction of digital image data, spatial and spectral enhancements, classification and visualisation of digital images. In this way, information on vegetation and soil types, plant stress, crop intensification etc. can be harnessed. Geographic information system modelling capability, through analytical functions like overlay, cluster analysis, clumping functions, reclassification, indexing and searching, provides information on which agricultural land-use planning can be based. Readers may refer Chapter 1 (1.3.3) also for further information on GIS.


Global Positioning System

The global positioning system is yet another geoinformatic tool required for agricultural
development. Global positioning system enables accuracy in the location of terrain features. It
receives signals and positioning information from a series of satellites in space. The global positioning system is basically for the georeferencing of terrain attributes. Georeferencing is done because
location serves as a means to link terrain data collected by different mapping disciplines through
overlay analysis. Therefore, global positioning system capability is necessary in the integration
of the diverse agricultural data sets from diverse sources, in geographic information system
environment. Spatial data collected by global positioning system can be automatically recorded
with the geographic information system programme. In addition, the use of global positioning
system allows for the accurate location of soil sample points within a field and hence the
determination of physical, chemical and biological characteristics of the soil at different
locations. Consequently, fertility levels can be mapped across the field to serve as a basis for the
application of farm inputs. The global positioning system is also required to establish the
accurate location of yield data collected. It is thus needed for the production of yield maps and
for yield monitoring. Readers may refer Chapter 1 (1.3.1) for further information on GPS.

1.2.3. GEOINFORMATICS AND PRECISION AGRICULTURE

Integration of remote sensing, geographic information system and global positioning system technologies has taken agriculture to the space age. This has given rise to a new concept
variously termed precision agriculture (PA), precision farming (PF) or site specific crop
management (SSCM). Precision farming is an information and technology-based agricultural management system that identifies, analyses and manages site-soil spatial and temporal variability within paddocks (farm fields) for optimum yield or productivity, profitability, sustainability and
protection of the environment. The concept identifies the agricultural suitability of land parcels
from the spatial variability of soil fertility status and other land qualities (water and oxygen
availability and retention capacity, plant root conditions, salt hazards and topographic
conditions). The concept recognises that variations occur within agricultural fields and thus seeks
to identify the spatial location and extent of such variations. The objective is to assess the causes
of such variations to ensure that the right decision is taken with regards to type of crops to be
cultivated, time to cultivate and management practices required. Precision farming implies doing
the right thing, the right way, at the right place and at the right time.

Components of precision farming are remote sensing, geographic information systems (GIS), differential global positioning system (DGPS) and variable rate applicator (VRA).
Remote sensing techniques play a pivotal role in precision farming by providing continuous data
on spatial and temporal variability in agricultural fields. Sensors provide data on soil properties,
crop condition and yield, fertiliser flow, as well as weed detection. GIS is a potential tool for handling voluminous remotely sensed data and has the capability to support spatial statistical analysis, presentation of spatial data in the form of a map, as well as storage, management, modelling of input data and presentation of model results.

Differential global positioning system (DGPS) is used for precise location of activities. A global positioning system (GPS) device makes use of a series of military satellites to identify the location of farm equipment within a meter of an actual site in the field. This is required to accurately link the location and results of soil samples to a soil map, prescribe farm inputs to fit soil properties, adjust tillage to suit field conditions and determine yield data across the field.

A variable rate applicator is used to operationalise precision farming at the farm level. Variable rate technology (VRT) consists of the machines and systems used to apply a desired rate of crop production materials at a specific time and, by implication, a specific location, as discussed already. The components are a control computer, a locator and an actuator. The control computer coordinates field operations with the aid of a database in its memory. Based on the desired activity, the computer receives the current location of the equipment from the locator (which holds a GPS receiver) and issues commands to the actuator, which performs the input application.

The operational procedure of precision farming requires georeferenced point data obtained at a grid spacing with a minimal number of observations. Patterns are obtained by geostatistical interpolation of these point data. The techniques involve the integration of databases or sensors that provide the information needed to develop input responses to site-specific conditions, positioning capabilities to know where equipment is located and real-time mechanisms for controlling crop production inputs. This new paradigm has been adopted in the United States and Europe since the mid-1990s.
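The geostatistical interpolation step can be illustrated with a simple sketch. Inverse distance weighting is used below as a stand-in for kriging; the sample locations and soil-test values are hypothetical.

```python
# A minimal sketch of interpolating georeferenced point observations onto grid
# nodes. Inverse distance weighting stands in for geostatistical (kriging)
# interpolation; sample points and values are hypothetical.
import numpy as np

def idw(xy_samples, values, xy_targets, power=2.0):
    """Inverse-distance-weighted estimate of `values` at each target point."""
    xy_samples = np.asarray(xy_samples, float)
    values = np.asarray(values, float)
    out = []
    for tx, ty in xy_targets:
        d = np.hypot(xy_samples[:, 0] - tx, xy_samples[:, 1] - ty)
        if np.any(d < 1e-9):                 # target coincides with a sample
            out.append(values[np.argmin(d)])
            continue
        w = 1.0 / d ** power
        out.append(np.sum(w * values) / np.sum(w))
    return np.array(out)

# Soil-test K (kg/ha) at four sample points, estimated at two grid nodes
samples = [(0, 0), (100, 0), (0, 100), (100, 100)]
k_values = [180, 220, 150, 260]
print(idw(samples, k_values, [(50, 50), (90, 10)]).round(1))
```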

1.2.4. CROP DISCRIMINATION

Currently, computers are being used for automation and to expand decision support systems (DSS) for agricultural research. Recently, geographic information systems (GIS) and remote sensing technology have come to play a capable role in agricultural research, predominantly in crop yield prediction, in addition to crop suitability studies and site-specific resource allocation. The role of geoinformatics in discriminating different crops at various levels of classification, monitoring crop growth and predicting crop yield is briefly presented below.

Remote sensing is an efficient technology and a worthy source of earth surface information, as it can capture images of reasonably large areas of the earth. Due to advancement in sensor technologies, imagery of high spatial as well as spectral resolution is available, along with non-imaging spectroradiometer data. With the use of these imaging and non-imaging data, we can easily characterise the different species.

Different crops show distinct phenological characteristics and timings according to their nature of germination, tillering, flowering, boll formation (cotton), ripening etc. Even for the same crop and growing season, the duration and magnitude of each phenological stage can differ between varieties, which introduces data variability for crop type discrimination with imaging systems. Agricultural crops are significantly better characterised, classified, modelled and mapped using hyperspectral data.


Feature Extraction
Feature extraction is the process of defining image characteristics or features which effectively provide meaningful information for image interpretation or classification. The ultimate goals of feature extraction are :

1. Effectiveness and efficiency in classification.


2. Avoiding redundancy of data.
3. Identifying useful spatial as well as spectral features.
4. Maximising the pattern discrimination.

For crop type discrimination, spatial features are useful. Crops are planted in rows, either multiple or single rows, as per the crop type, for convenience and to maximise yields. Different spatial arrangements of crops give better spatial information, but this requires high spatial resolution images. In spatial image classification, spatial image elements are combined with spectral properties in reaching a classification decision. The most commonly used elements are texture, context and geometry (shape). Due to the availability of commercial high resolution multispectral satellite imagery such as Geoeye-1, IKONOS-2 and QuickBird-2, with less than 4 m spatial resolution, it has become possible to identify small-scale features in complex environments.

Role of Texture in Classification

In general, it is possible to distinguish the regular textures manifested by man-made objects from the irregular textures exhibited by natural objects. Hence, the texture characteristic can be used to discriminate between divergent objects and thereby support their segmentation from remotely sensed data. Both conventional texture analysis and the grey level co-occurrence matrix (GLCM) method describe the grey value relationships in the neighbourhood of the current pixel. However, in the GLCM method, this is analysed within the GLCM space and not from the original grey values, as is the case in the former method.

Grey Level Co-Occurrence Matrix (GLCM)

The GLCM can be viewed as a two-dimensional histogram of the frequency with which pairs of grey level pixels occur in a given spatial relationship, defined by a specific inter-pixel distance and a given pixel orientation. Hence, in the segmentation of urban objects, texture analysis is usually performed within a GLCM matrix space. A variety of texture measures can be extracted from the GLCM. Four useful measures that can be derived from the probability density are energy, variance, dissimilarity and homogeneity: energy measures the uniformity of the texture; variance measures the heterogeneity of the pixel values; dissimilarity, similar to contrast, measures the difference between adjoining pixels; and homogeneity measures the tonal uniformity.
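A small sketch of the GLCM and two of the measures named above (energy and homogeneity) is given below; the 4-level image patch is hypothetical and the code is illustrative rather than a standard library implementation.

```python
# A minimal sketch: build a symmetric, normalised grey level co-occurrence
# matrix for one inter-pixel offset and derive energy and homogeneity from it.
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Symmetric, normalised GLCM for the offset (dx, dy)."""
    m = np.zeros((levels, levels), float)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[img[r, c], img[r2, c2]] += 1
                m[img[r2, c2], img[r, c]] += 1   # make it symmetric
    return m / m.sum()

patch = np.array([[0, 0, 1, 1],        # hypothetical 4-level image patch
                  [0, 0, 1, 1],
                  [0, 2, 2, 2],
                  [2, 2, 3, 3]])
p = glcm(patch, levels=4)
i, j = np.indices(p.shape)
energy = np.sum(p ** 2)                         # uniformity of the texture
homogeneity = np.sum(p / (1.0 + (i - j) ** 2))  # tonal uniformity
print(round(energy, 3), round(homogeneity, 3))
```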

Local Binary Pattern (LBP)

It is a simple yet very efficient texture operator which labels the pixels of an image by thresholding the neighbourhood of each pixel and considering the result as a binary number. Due to its discriminative power and computational simplicity, the LBP texture operator has become a popular approach in various applications. It can be seen as a unifying approach to the traditionally divergent statistical and structural models of texture analysis. Possibly the most important asset of the LBP operator in real-world applications is its robustness to monotonic grey-scale changes caused, for example, by illumination differences. Another important property is its computational simplicity, which makes it possible to analyse images in challenging real-time settings. Spatial feature extraction for crop type discrimination works well if we have high spatial resolution satellite imagery.
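The basic 8-neighbour LBP operator described above can be sketched as follows; the pixel values are hypothetical.

```python
# A minimal sketch of the basic LBP operator: each neighbour is thresholded
# against the centre pixel and the resulting bits are packed into one code.
import numpy as np

def lbp_code(patch3x3):
    """LBP code of the centre pixel of a 3x3 patch."""
    centre = patch3x3[1, 1]
    # clockwise neighbour order starting at the top-left pixel
    neighbours = [patch3x3[0, 0], patch3x3[0, 1], patch3x3[0, 2],
                  patch3x3[1, 2], patch3x3[2, 2], patch3x3[2, 1],
                  patch3x3[2, 0], patch3x3[1, 0]]
    bits = [1 if n >= centre else 0 for n in neighbours]
    return sum(b << k for k, b in enumerate(bits))

patch = np.array([[52, 60, 61],    # hypothetical grey values
                  [55, 58, 70],
                  [40, 30, 66]])
print(lbp_code(patch))             # one 0-255 texture label for the centre pixel
```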

1.2.5 SPECTRAL FEATURES FOR CROP DISCRIMINATION

Spectral characteristics of green vegetation have very noticeable features. Two valleys in the visible portion of the spectrum are determined by the pigments contained in the plant. Chlorophyll absorbs strongly in the blue (0.4-0.5 µm) and red (0.68 µm) regions, also known as the chlorophyll absorption bands. Chlorophyll is the primary photosynthetic pigment in green plants. This is the reason the human eye perceives healthy vegetation as green. When the plant is subjected to stress that hinders normal growth and chlorophyll production, there is less absorption in the red and blue regions and the amount of reflection in the red waveband increases.

The spectral reflectance signature shows a dramatic increase in reflection for healthy vegetation at around 0.7 µm. In the near infrared (NIR), between 0.7 µm and 1.3 µm, a plant leaf will typically reflect between 40 and 60 per cent of the incident energy; most of the rest is transmitted, with only about 5 per cent being absorbed. For comparison, the reflectance in the green range reaches 15-20 per cent. This high reflectance in the NIR is due to scattering of the light in the intercellular volume of the leaf mesophyll. Structural variability of leaves in this range allows one to differentiate between species, even though they might look the same in the visible region. Beyond 1.3 µm, the incident energy is largely absorbed or reflected, with very little transmittance of energy. Three strong water absorption bands are noted at around 1.4, 1.9 and 2.7 µm and can be used for plant-water content estimation.

Band Selection
Band selection is one of the important steps in hyperspectral remote sensing. There are two conceptually different approaches to band selection: unsupervised and supervised. With hundreds of spectral bands available, several bands may carry nearly the same values, which increases data redundancy. To avoid this redundancy and to get distinct features from the available hundreds of bands, we have to choose specific bands by studying the reflectance behaviour of crops.

Narrowband Vegetation Indices

Spectral indices assume that the combined interaction between a small number of wavelengths is adequate to describe the biochemical or biophysical interaction between light and matter. The simplest form of index is a simple ratio (SR). A potentially greater contribution of hyperspectral systems is their ability to create new indices that integrate wavelengths not sampled by any broadband system and to quantify absorptions that are specific to important biochemical and biophysical quantities of vegetation. Examples include most of the pigment-oriented indices, all indices formulated for the red edge, several water absorption indices and indices that use three or more wavelengths. Vegetation properties measured with hyperspectral vegetation indices (HVIs) can be divided into three main categories : (1) structure, (2) biochemistry and (3) plant physiology/stress.

Structural properties : These properties include fractional cover, green leaf biomass, leaf area index (LAI), senesced biomass and fraction of absorbed photosynthetically active radiation (FPAR). The majority of the indices developed for structural analysis were formulated for broadband systems and have narrowband, hyperspectral equivalents.

Biochemical properties : These include water, pigments (chlorophyll, carotenoids, anthocyanins), other nitrogen-rich compounds (proteins) and plant structural materials (lignin and cellulose).

Physiological and stress indices : These measure delicate changes due to a stress-induced change in the state of xanthophylls, changes in chlorophyll content, fluorescence or changes in leaf moisture. In general, biochemical and physiological/stress indices were formulated using laboratory or field instruments (spectral sampling of about 10 nm or finer) and are targeted at very fine spectral features.

Narrowband vegetation indices can be used as potential variables for crop type discrimination. The best vegetation indices of the different categories (Table 1.1) for discriminating the seven crop types are greenness/leaf pigment indices (ARVI, EVI, NDVI and SGI), chlorophyll red edge indices (RENDVI and VOG-1), light use efficiency indices (SIPI and PRI) and leaf water indices (DWSI and NDWI).

TABLE 1.1. Narrowband vegetation indices.

Sl. No. | Index | Acronym | Formula
1 | Normalised difference vegetation index | NDVI | (p864 - p671)/(p864 + p671)
2 | Simple ratio | SR | p864/p671
3 | Enhanced vegetation index | EVI | 2.5(p864 - p671)/(p864 + 6 x p671 - 7.5 x p467 + 1)
4 | Atmospherically resistant vegetation index | ARVI | (p864 - (2 x p671 - p467))/(p864 + (2 x p671 - p467))
5 | Sum green index | SGI | (p508 + p518 + p528 + p538 + p549 + p559 + p569 + p579 + p590 + p600)/10
6 | Red edge normalised difference vegetation index | RENDVI | (p752 - p701)/(p752 + p701)
7 | Vogelmann red edge index | VREI | p743/p722
8 | Structure insensitive pigment index | SIPI | (p803 - p467)/(p803 + p681)
9 | Photochemical reflectance index | PRI | (p529 - p569)/(p529 + p569)
10 | Disease water stress index | DWSI | p803/p1598

p = Reflectance of the closest Hyperion bands to the original wavelength formulations.
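Two of the indices in Table 1.1 can be computed directly from narrowband reflectances, as in the short sketch below; the reflectance values are hypothetical.

```python
# A minimal sketch of computing NDVI and RENDVI from narrowband reflectances.
# Band keys follow the table's wavelength convention (e.g. 864 = ~864 nm);
# the reflectance values themselves are hypothetical.
refl = {864: 0.46, 671: 0.05, 752: 0.44, 701: 0.12}

ndvi = (refl[864] - refl[671]) / (refl[864] + refl[671])
rendvi = (refl[752] - refl[701]) / (refl[752] + refl[701])

print(f"NDVI   = {ndvi:.2f}")    # ~0.80 for this healthy-canopy example
print(f"RENDVI = {rendvi:.2f}")
```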


1.2.6. YIELD MONITORING

Estimation of crop yield well before the harvest at regional and national scales is imperative for planning at the micro level and predominantly for meeting the demand for crop insurance. Currently, it is being done by extensive field surveys and crop cutting experimentation. In most of the developing countries, crop yield estimation is generally based on traditional methods of data collection relying on ground-based field surveys.

Conventional methods have been found to be expensive and time consuming and are prone to large errors due to incomplete and inaccurate ground observations, leading to poor crop area estimation and crop yield assessment. In most of the developing countries, the required data is, generally, available too late for any appropriate decision making. Different approaches and technologies used for crop inventory are briefly presented below.

Aerial Photography

To obtain crop yield information, one must be able to recognise tone, pattern, texture and other features. Crop yield information is used in conjunction with crop area statistics to obtain crop production. There are two distinct aspects of yield determination: (1) forecast of yield based on characteristics of the plant or crop and relationships based on experience in prior years and (2) estimates of the yield known from the actual weight of the harvested crop for the current year. After World War II, various researchers used the emerging concept of aerial photography for optimised use of resources for agriculture and crop inventory.

Multispectral Scanners

Multispectral scanners (MSS) have certain advantages and disadvantages compared to photography. The ability to differentiate wheat from other agricultural crops using multispectral data in a computer format with pattern recognition techniques has been demonstrated. An important consideration in the task of species identification is the stage of growth of the crop.

Radar

Advantages and limitations of using either air-borne or space-borne radars for crop identification have been discussed in the past. It has been pointed out that many of the radar studies have concentrated on seasonal change between crops and that numerous variables must be considered in making even the simplest determinations.

Satellite Data

Remote sensing data have proved effective in predicting crop yield and provide representative and spatially exhaustive information for developing models for crop growth monitoring. In India, a remarkable spurt in remote sensing activities started with the launch of IRS-1A (Indian Remote Sensing Satellite) in 1988. India has since launched a variety of satellites devoted to particular areas of relevance, such as ResourceSat, CartoSat and OceanSat.


Various indices based on remote sensing have been employed to estimate the yield of several types of crops. The normalised difference vegetation index (NDVI) has been used to estimate the yield of rice. However, yield estimation with remote sensing has limitations, mainly due to the indirect nature of the link between NDVI and biomass, but also due to the sensor spatial resolution or insufficient repeat coverage.

The current stage of technological development involves the amalgamation of remote sensing, GIS and GPS. Due to theoretical and scientific achievements in yield estimation using remote sensing, researchers and application scientists frequently use multi-date high resolution satellite and meteorological data with the support of GIS to estimate yield, and they also use coarse resolution data as a sampling tool to improve precision. The chronological growth of crop inventory using geoinformatics is shown in Table 1.2.

TABLE 1.2. Chronological growth of crop inventory using geoinformatics.

Period | Technology | Features
Before 1940 | Crop cutting experiments | Qualitative analysis.
1950-1970 | Aerial photography and computers | Regression models based on statistical data.
1970-1990 | Satellite imagery | Crop yield at global scale.
1990-2000 | High resolution satellite imagery | Statistical as well as vegetation indices.
2000 onwards | Amalgamation of remote sensing, GIS and GPS | Crop inventory based on crop simulation models and crop growth models.

1.2.7. SOIL MAPPING

Soil maps are required on different scales, varying from 1:1 million to 1:4,000, to meet the requirements of planning at various levels. As the scale of a soil map has a direct correlation with the information content and the field investigations that are carried out, small scale soil maps of 1:1 million are needed for macro-level planning at the national level. Soil maps at 1:250,000 scale provide information for planning at the regional or state level, with generalised interpretation of soil information for determining the suitability and limitations for several agricultural uses, and require less intensity of soil observations and time. Soil maps at 1:50,000 scale, where associations of soil series are depicted, serve the purpose of planning resource conservation and optimum land use at the district level and require moderate intensity of observations in the field. Large scale soil maps at 1:8,000 or 1:4,000 scale are specific purpose maps which can be generated through high intensity of field observations based on maps at 1:50,000 scale, large scale aerial photographs or very high resolution satellite data. Similarly, information on degraded lands like salt affected soils, eroded soils, waterlogged areas, jhum lands (shifting cultivation) etc. is required at different scales for planning strategies for reclamation and conservation of degraded lands.

Remote Sensing for Soil and Land Degradation Mapping

Though conventional soil surveys provide information on soils, they are subjective, time consuming and laborious. Remote sensing techniques have significantly contributed to speeding up conventional soil survey programmes. In the conventional approach, approximately 80 per cent of the total work requires extensive field traverses in identification of soil
types and mapping their boundaries and 20 per cent in studying soil profiles, topographical
features and for other works. In the case of soil surveys with aerial photographs or satellite data,
considerable field work with respect to locating soil types and boundaries is reduced owing to
synoptic view. Remote sensing techniques have reduced field work to a considerable extent and
soil boundaries are more precisely delineated than in conventional methods. Satellite data have been utilised in preparing small scale soil resource maps showing soil subgroups and their associations for about three decades. Remote sensing data from Landsat MSS have been used for mapping soils and degraded lands like eroded lands, ravine lands, salt-affected soils and shifting cultivation areas. Landsat TM, SPOT and IRS satellites have enabled mapping of soils at 1:50,000 scale at the level of association of soil series due to their higher spatial and spectral resolutions.

Soil Mapping Methods

Soil surveyors consider topographic variation as a base for depicting soil variability. Even with aerial photographs, only physiographic variation in terms of slope, aspect and land cover is used for delineating soil boundaries. Multispectral satellite data are being used for mapping soils up to the family association level (1:50,000). The methodology in most cases involves visual interpretation. However, computer aided digital image processing techniques have also been used for mapping soils and are advocated as a potential tool.

Visual Image Interpretation

Visual interpretation is based on shape, size, tone, shadow, texture, pattern, site and
association. This has the advantage of being relatively simple and inexpensive. Soil mapping
needs identification of a number of elements. The elements which are of major importance for
soil survey are land type, vegetation, land-use, slope and relief. Soils are surveyed and mapped,
following a three-tier approach, comprising interpretation of remote sensing imagery and/or
aerial photograph, field survey (including laboratory analysis of soil samples) and cartography.
Several workers have concluded that the technology of remote sensing provides better efficiency
than the conventional soil survey methods at the reconnaissance (1:50,000) and detailed
(1:10,000) scale of mapping.

Computer-Aided Approach

Numerical analysis of remote sensing data utilising computers has been developed because of the requirement to analyse data faster and extract information from large quantities of data. The computer-aided approach utilises spectral variations for classification. Pattern recognition in remote sensing assists in the identification of homogeneous areas, which can be used as a base for carrying out detailed field investigations and generating models between remote sensing and field parameters. A major problem with conventional soil survey and soil cartography is accurate delineation of boundaries. Field observations based on conventional soil survey are tedious and time consuming. Remote sensing data in conjunction with ancillary data provide the best alternative, with a better delineation of soil mapping units.


1.3. FERTILISER RECOMMENDATION USING GEOSPATIAL TECHNOLOGIES

Geospatial technologies for precision agriculture (PA) involve an integration of technologies such as GPS, GIS, remote sensing, variable rate technology (VRT), crop models, yield monitors and precision irrigation. Various configurations of these technologies are suitable for different precision farming operations. Information technology such as the internet is a good means for some agribusiness companies to deliver their services and products.

The site specific nutrient management (SSNM) approach, a relatively new approach to nutrient recommendations, is mainly based on the indigenous nutrient supply from the soil and the nutrient demand of the crop for achieving a targeted yield. SSNM recommendations can be evolved on the basis of plant analysis alone or soil-cum-plant analysis.

1.3.1. PLANT ANALYSIS BASED SSNM

It is considered that the nutrient status of the crop is the best indicator of soil nutrient supplies as well as the nutrient demand of the crop. Thus, the approach is built around plant analysis. Initially, SSNM was tried for lowland rice, but subsequently it proved advantageous over several contemporary approaches of fertiliser recommendation in rice, wheat and other rice-based production systems prevalent in Asian countries. Five key steps for developing field-specific fertiliser NPK recommendations have been developed for rice, though the basic principles remain the same for other crops as well.

1. Selection of the yield goal.
2. Assessment of crop nutrient requirement.
3. Estimation of indigenous nutrient supplies.
4. Computation of fertiliser nutrient rates.
5. Dynamic adjustment of rates.

Selection of the Yield Goal

A yield goal of about 70-80 per cent of the variety-specific potential yield (Ymax) has
to be chosen. Ymax is defined as the maximum possible grain yield limited only by the climatic
conditions of the site, where there are no other factors limiting crop growth. The logic behind
selecting the yield goal at 70-80 per cent of Ymax is that the internal nutrient use efficiencies (NUEs)
decrease at very high yield levels near Ymax. Crop growth models (e.g. DSSAT) can be used to
work out the Ymax of a crop variety under particular climatic conditions.

Assessment of Crop Nutrient Requirement

The nutrient uptake requirements of a crop depend both on the yield goal and Ymax. In
SSNM, nutrient requirements are estimated with the help of the quantitative evaluation of fertility of
tropical soils (QUEFTS) model. Nutrient requirements for a particular yield goal of a crop
variety may be smaller in a high-yielding season than in a low-yielding one.


Estimation of Indigenous Nutrient Supplies

Indigenous nutrient supply (INS) is defined as the total amount of a particular nutrient
that is available to the crop from the soil during the cropping cycle, when other nutrients are non-
limiting. The INS is derived from the soil, incorporated crop residues, water and atmospheric
deposition. It is estimated by measuring plant nutrient uptake in an omission plot embedded in
the farmer's field, wherein all nutrients except the one (N, P or K) in question are applied
in sufficient amounts.

Computation of Fertiliser Nutrient Rates

Field-specific fertiliser N, P or K recommendations are calculated on the basis of the above
steps (1-3) and the expected fertiliser recovery efficiency (RE, kg of fertiliser nutrient taken up
by the crop per kg of the applied nutrient). Studies indicate RE values of 40-60 per cent for N,
20-30 per cent for P and 40-50 per cent for K in rice under normal growing conditions, when the
nutrients are applied as water-soluble fertiliser sources.
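
As a rough, worked illustration of steps 1-4, the short Python sketch below computes a field-specific fertiliser N rate from a yield goal, an assumed nutrient uptake per tonne of grain, an omission-plot estimate of the indigenous supply and an assumed recovery efficiency. All numbers are hypothetical placeholders chosen only to show the arithmetic, not recommendations.

# Illustrative sketch of the SSNM fertiliser-rate calculation (steps 1-4).
# All numerical values are hypothetical placeholders.

def fertiliser_rate(yield_goal_t_ha, uptake_per_t, indigenous_supply, recovery_efficiency):
    """Return the fertiliser nutrient rate (kg/ha) needed to meet the yield goal.

    yield_goal_t_ha     : target grain yield (t/ha), typically 70-80 per cent of Ymax
    uptake_per_t        : nutrient uptake per tonne of grain (kg nutrient/t)
    indigenous_supply   : INS estimated from the omission plot (kg nutrient/ha)
    recovery_efficiency : fraction of applied nutrient recovered by the crop (0-1)
    """
    total_requirement = yield_goal_t_ha * uptake_per_t          # step 2
    deficit = max(total_requirement - indigenous_supply, 0.0)   # step 3
    return deficit / recovery_efficiency                        # step 4

y_max = 8.0                  # assumed climatic potential yield, t/ha
yield_goal = 0.75 * y_max    # step 1: 75 per cent of Ymax
n_rate = fertiliser_rate(yield_goal, uptake_per_t=15.0,
                         indigenous_supply=45.0, recovery_efficiency=0.5)
print(f"Fertiliser N rate: {n_rate:.0f} kg N/ha")

With these assumed inputs the sketch returns 90 kg N/ha; in practice the uptake requirement would come from the QUEFTS model and the INS from an actual omission plot.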

Dynamic Adjustment of N Rates

Whereas fertiliser P and K, as computed above, are applied basally (at the time of sowing/
planting), the N rates and application schedules can be further adjusted as per crop demand
using a chlorophyll meter (popularly known as SPAD: Soil Plant Analysis Development) or a leaf
colour chart (LCC). Recent on-farm studies in India and elsewhere have revealed a significant
advantage of SPAD/LCC-based N management schedules in rice and wheat in terms of yield
gain, N use efficiency and economic returns over the conventionally recommended N
application involving 2 or 3 splits during crop growth, irrespective of the N-supplying capacity of the
soils. In winter-season maize, SPAD-based (threshold of about 37 or below) N application resulted in a saving of 55
kg N ha-1 compared with soil test crop response equation-based N application, without any yield
reduction. Agronomic efficiency was also higher in the crop. In wheat, timing N application
at a SPAD value of 42 or below resulted in 9 per cent higher wheat yield along with a 20 kg ha-1 N saving,
compared with the recommended soil-based N supply.
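
A minimal decision-rule sketch of such SPAD-guided top-dressing is given below, assuming the threshold of 37 cited above for maize and a purely hypothetical dose per application; real thresholds and doses are crop- and region-specific.

# Sketch of threshold-based in-season N management with a SPAD meter.
# Threshold, readings and dose are illustrative assumptions only.

def n_topdress_needed(spad_reading, threshold=37.0):
    """Recommend a N top-dressing only when the SPAD reading falls to or below the threshold."""
    return spad_reading <= threshold

weekly_spad = [41.2, 39.5, 36.8, 38.0]   # hypothetical readings through the season
dose_kg_ha = 30                          # hypothetical dose per application

for week, spad in enumerate(weekly_spad, start=1):
    if n_topdress_needed(spad):
        print(f"Week {week}: SPAD {spad} <= 37, apply {dose_kg_ha} kg N/ha")
    else:
        print(f"Week {week}: SPAD {spad} > 37, no N application needed")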

1.3.2. SOIL-CUM-PLANT ANALYSIS BASED SSNM

In this case, nutrient availability in the soil, plant nutrient demand for a higher target
yield (not less than 80% of Ymax) and the RE of applied nutrients are considered for developing a
fertiliser use schedule to achieve the maximum economic yield of a crop variety. In order to
ensure desired crop growth, not limited by apparent or hidden hunger of nutrients, the soil is
analysed for all macro- and micronutrients well before sowing/planting. The total nutrient
requirement for the targeted yield is estimated with the help of documented information
available for similar crop-growing environments. Field-specific fertiliser rates are then suggested to
meet the nutrient demand of the crop (variety) without depleting soil reserves. These soil-test crop
response (STCR) based recommendations are now in practice to achieve desired yield targets in
many field crops. Studies with intensive cropping systems have shown that fertiliser
recommendations based on this approach offer greater economic gains than the NPK
fertiliser schedules conventionally prescribed by soil testing laboratories.


Decision Support Systems

Nutrient Expert® (NE) is an easy-to-use, interactive and computer-based decision support tool
that can rapidly provide nutrient recommendations for an individual farmer field in the presence
or absence of soil testing data. NE is nutrient decision support software that uses the principles of
SSNM and enables farm advisors to develop fertiliser recommendations tailored to a specific
field or growing environment. NE allows users to draw required information from their own
experience, farmers' knowledge of the local region and farmers' practices. NE can use
experimental data but it can also estimate the required SSNM parameters using existing site
information. The algorithm for calculating fertiliser requirements in NE is determined from a set
of on-farm trial data using SSNM guidelines. The parameters needed in SSNM are usually
measured in nutrient omission trials conducted in farmers' fields, which require at least one crop
season. With NE, parameters can be estimated using proxy information, which allows farm
advisors to develop fertiliser guidelines for a location without data from field trials.

Decision Rules to Estimate Site-Specific Nutrient Management Parameters

The NE estimates attainable yield and yield response to fertiliser from site information
using decision rules developed from on-farm trials. Specifically, NE uses characteristics of the
growing environment: water availability (irrigated, fully rainfed or rainfed with supplemental
irrigation) and any occurrence of flooding or drought; soil fertility indicators such as soil texture, soil
colour and organic matter content, soil test values for P or K (if available), historical use of organic
materials (if any) and problem soils (if any); the crop sequence in the farmer's cropping pattern;
crop residue management and fertiliser inputs for the previous crop; and the farmer's current yields. Data
for specific crops and specific geographic regions are required in developing the decision rules
for NE. The datasets must represent diverse conditions in the growing environment, characterised
by variations in the amount and distribution of rainfall, crop cultivars and growth durations, soils
and cropping systems.

Current Versions of Nutrient Expert

The NE has been developed for specific crops and geographic regions. Nutrient Expert
hybrid maize (NEHM) for favourable tropical environments (South-East Asia) was developed in
late 2009 and underwent field evaluation in Indonesia and the Philippines. Using NEHM as a
model, the NE concept has been adapted to other crops and geographic regions or countries. In
2011, beta versions of NE for maize were developed for South Asia, China, Kenya and
Zimbabwe. Likewise, beta versions of NE for wheat were developed for South Asia as well as
China. In 2013, field-validated versions of NE maize and NE wheat were released for public
use in South Asia and China.

1.3.3. SPATIAL DATA AND THEIR MANAGEMENT IN GIS

Most data management professionals are more experienced with classical tabular data in
Cartesian (rows and columns) structures as found in most business, government and scientific
databases.


Geospatial data have a significantly different structure and function. They include structured
data about objects in the spatial universe: their identity, location, shape and orientation, and
other things we may know about them. Geographical data describe an incredibly wide range of
objects or business assets: roads, buildings, property lines, terrain, infrastructure, hydrology and
ecosystems. All these objects can be described in terms of points, lines and polygons, and tables
of these objects constitute the tabular portion of geospatial data.
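
As a small, hedged illustration of the point/line/polygon idea, the Python dictionaries below mimic a GeoJSON-like structure in which each feature carries a geometry and an attribute table; all field names and values are made up for demonstration.

# Minimal sketch of vector geospatial objects (point, line, polygon) with attributes.
# The structure loosely follows GeoJSON; names and values are hypothetical.

farm_boundary = {
    "type": "Polygon",
    "coordinates": [[(77.35, 16.20), (77.36, 16.20), (77.36, 16.21),
                     (77.35, 16.21), (77.35, 16.20)]],      # closed ring of (lon, lat)
    "attributes": {"owner": "Farmer A", "area_ha": 1.2},
}

irrigation_channel = {
    "type": "LineString",
    "coordinates": [(77.350, 16.200), (77.355, 16.205), (77.360, 16.207)],
    "attributes": {"lined": True},
}

soil_sample = {
    "type": "Point",
    "coordinates": (77.352, 16.203),
    "attributes": {"pH": 7.8, "organic_carbon_pct": 0.45},
}

for feature in (farm_boundary, irrigation_channel, soil_sample):
    print(feature["type"], feature["attributes"])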

Geographical information system (GIS) technology also accommodates some kinds of
unstructured data (usually raster imagery) that can be tagged and geocoded (given precise
positional characteristics) and integrated by GIS software with the other kinds of map data.

Thus, the management of GIS data and metadata is somewhat different. Whereas
traditional tabular data could be understood by a human looking at any printed expression of the
data (usually in rows and columns, even on paper), raw GIS data is generally meaningless to the
human eye until converted into a map. This is what GIS software does.

What is a GIS?

A geographic information system (GIS) is a computer-based tool for mapping and
analysing things that exist and events that happen on earth. GIS technology integrates common
database operations such as query and statistical analysis with the unique visualisation and
geographic analysis benefits offered by maps. These abilities distinguish GIS from other
information systems and make it valuable to a wide range of public and private enterprises for
explaining events, predicting outcomes and planning strategies. Chapter 1.2.2 may also be
referred for further information in this regard.

Whether siting a new business, finding the best soil for growing crops or figuring out the
best route for an emergency vehicle, local problems also have a geographical component. GIS
will give you the power to create maps, integrate information, visualise scenarios, solve
complicated problems, present powerful ideas and develop effective solutions like never before.
GIS is a tool used by individuals and organisations, schools, governments and businesses seeking
innovative ways to solve their problems.

Mapmaking and geographic analysis are not new, but a GIS performs these tasks better
and faster than do the old manual methods. Before GIS technology, only a few people had the
skills necessary to use geographic information to help with decision making and problem
solving.

Today, GIS is a multibillion-dollar industry employing hundreds of thousands of people
worldwide. GIS is taught in schools, colleges and universities throughout the world.
Professionals in every field are increasingly aware of the advantages of thinking and working
geographically.


Importance of GIS
1. Perform geographic queries and analysis.
2. Improve organisational integration.
3. Making maps.
4. Make better decisions.

Perform geographic queries and analysis: The ability of GISs to search databases and
perform geographic queries has saved many companies literally millions of dollars. GISs have
helped reduce costs by :

• Streamlining customer service.


• Reducing land acquisition costs through better analysis.
• Reducing fleet maintenance costs through better logistics.
• Analysing data quickly.

Improve organisational integration: Many organisations that have implemented GIS
have found that one of its main benefits is improved management of their own organisation and
resources. Because GISs have the ability to link data sets together by geography, they facilitate
interdepartmental information sharing and communication. By creating a shared database, one
department can benefit from the work of another; data can be collected once and used many
times.

Make better decisions : The old adage "better information leads to better decisions” is
as true for GIS as it is for other information systems. A GIS, however, is not an automated
decision making system but a tool to query, analyse and map data in support of the decision
making process. The GIS technology has been used to assist in tasks such as presenting
information at planning inquiries, helping resolve territorial disputes and siting pylons in such a
way as to minimise visual intrusion.

The GIS can be used to help reach a decision about the location of a new housing
development that has minimal environmental impact, is located in a low-risk area and is close to
a population centre. The information can be presented succinctly and clearly in the form of a
map and accompanying report, allowing decision makers to focus on the real issues rather than
trying to understand the data. Because GIS products can be produced quickly, multiple scenarios
can be evaluated efficiently and effectively.

Making maps : Maps have a special place in GIS. The process of making maps with GIS
is much more flexible than are traditional manual or automated cartography approaches. It
begins with database creation. Existing paper maps can be digitised and computer-compatible
information can be translated into the GIS. The GIS-based cartographic database can be both
continuous and scale free. This allows the creation of map products which are centred on any
location, at any scale and showing selected information symbolised effectively to highlight
specific characteristics.


1.3.4. GEODESY AND ITS BASIC PRINCIPLES

Geodesy, also known as geodetics, geodetic engineering or geodetics engineering, is a branch of
applied mathematics and earth sciences; it is the scientific discipline that deals with the
measurement and representation of the earth (or any planet), including its gravitational field, in
a three-dimensional, time-varying space. Geodesists also study geodynamical phenomena such
as crustal motion, tides and polar motion. For this, they design global and national control
networks, using space and terrestrial techniques while relying on datums and coordinate
systems. Basic principles of geodesy are briefly presented below.

Coordinates and Coordinate Reference Systems

Coordinates belong to a coordinate system. A coordinate system (CS) describes the
mathematical rules governing the coordinate space, including the number of axes, their names,
their directions, their units and their order. When coordinates are used to describe position on the
earth, they belong to a coordinate reference system. A coordinate reference system (CRS) is a
coordinate system which is referenced to the earth. The referencing is achieved through a datum
(details to follow).

Surface of the earth is irregular and is therefore difficult to calculate on directly. Instead,
surveyors use a model of the earth for their calculations. Numerous models exist and any one
model may have several variations in position or orientation relative to the earth. Each variation
leads to a different CRS. In general, if the coordinate reference system is changed then the
coordinates of a point change. Consequently, coordinates describe location unambiguously only
when the CRS to which they are referenced has been fully identified. The International
Association of Oil & Gas Producers (OGP) has created a database of these CRSs to regulate and
minimise the risk of applying the wrong systems or component parameters.

The open geospatial consortium (OGC) is an international, not for profit organisation,
committed to making quality open standards for the global geospatial community. These
standards are made through a consensus process and are freely available for anyone to use to
improve sharing of the world's geospatial data. The OGC standards are used in a wide variety of
domains including environment, defense, health, agriculture, meteorology, sustainable
development and many more.

Earth and the Geoid

Geodesy is defined as the science of measurement and mapping of the earth's surface. As
most of the earth's surface is shaped by gravity, determination of geometric aspects of the earth's
external gravity field, the geoid, is a key element of geodesy as a science.

Surface of the earth with its topography is far too irregular to be a convenient basis for
computing position. Surveyors reduce their observations to the gravitational surface, which
approximates mean sea level. This equipotential surface is known as the geoid. It is
approximately spherical but, because of the rotation of the earth, there is a slight bulge at the
equator and flattening at the poles. In addition, because of the variations in rock density that
impact the gravitational field, there are many local irregularities. These factors make the geoid a
complex surface.

Ellipsoids (Spheroids)

To simplify the computation of position, the geoid is approximated by the nearest
mathematically definable figure, the ellipsoid. The ellipsoid is effectively a 'best fit' to the geoid.
However, there are numerous ellipsoids available, each of them uniquely named and defined
either by their semi-major axis and semi-minor axis or, more usually, by a ratio of these axes
called 'inverse flattening'.

Approximation of the geoid by a reference ellipsoid could traditionally only be done
locally, not globally, and this limitation led to the existence of many ellipsoids, each with a
different size and shape. Some of these ellipsoids approximated different parts of the surface of
the geoid, whereas others expressed the increasing knowledge about the earth's shape and size
over time. Now many of these ellipsoids have become obsolete, whilst others have become
enshrined in national mapping systems, such as the Airy ellipsoid from 1830, which still forms
the basis of the British National Grid.

In summary, ellipsoids determine shape and provide a best fit of the geoid. The
importance of ellipsoids is discussed in more detail in "Geodetic Datums" below.

Geodetic Datums

A geodetic datum defines the position and orientation of the reference ellipsoid relative
to the centre of the earth and the meridian used as zero longitude, the prime meridian. The size and
shape of the ellipsoid are traditionally chosen to best fit the geoid in the area of interest. A local best
fit will attempt to align the minor axis of the ellipsoid with the earth's rotational axis. It will also
ensure that the zero longitude of the ellipsoid coincides with a defined prime meridian. The
prime meridian is usually that through Greenwich, England, but historically, countries used the
meridian through their national astronomic observatory. The best fit is centred on a position on
the earth's surface within the area of interest, e.g. the Helmert Tower at Potsdam, near Berlin,
was used for the European Datum 1950 (ED 50). A geodetic datum is inextricably linked to the
generation of geographical coordinates.

Geographical Coordinates (Latitude and Longitude)

The position of a point relative to a geographical coordinate reference system is described on
the CRS ellipsoid and is generally expressed by means of geographical coordinates: latitude (φ)
and longitude (λ). These are angular expressions related to the equator and the prime meridian,
usually, but not always, the meridian passing through Greenwich, London (these being the 0°
references for the N-S and E-W directions respectively). For example, a typical position would be
expressed as Latitude 57°30'15"N, Longitude 3°40'20"W. Note that a position with a latitude
south of the equator or a longitude west of the prime meridian is sometimes shown as negative,
e.g. -3°40'20". It is very important to appreciate that latitude and longitude are not unique and are
therefore entirely dependent on the chosen geodetic datum (see following section). Conversely,
any given values of latitude and longitude can refer to any geodetic datum. Heights above the
ellipsoid are not of much practical use because it is easier to measure height from or to the geoid.
In summary, as with any other type of coordinate, geographical coordinates in themselves do not
describe position unambiguously: the associated CRS must be identified.
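
A small sketch of the arithmetic behind these angular expressions, converting the degrees-minutes-seconds values quoted above into signed decimal degrees (south latitudes and west longitudes negative):

# Convert degrees-minutes-seconds to signed decimal degrees.

def dms_to_decimal(degrees, minutes, seconds, hemisphere):
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if hemisphere in ("S", "W") else value

lat = dms_to_decimal(57, 30, 15, "N")    # 57 deg 30 min 15 sec N
lon = dms_to_decimal(3, 40, 20, "W")     # 3 deg 40 min 20 sec W
print(f"Latitude  {lat:.6f}")            # approximately  57.504167
print(f"Longitude {lon:.6f}")            # approximately -3.672222

The decimal values remain ambiguous in exactly the same way as the original figures until the CRS is stated.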

Without knowledge of the geodetic datum, the latitude and longitude of a point will have
an inherent ambiguity of up to 1500 meters, this being the maximum positional effect caused by
the irregular shape of the geoid. An ambiguity of this magnitude may be disastrous for achieving
E&P business goals; it may even lead to very severe HSE (health, safety and environment)
incidents, e.g. by drilling into a shallow gas pocket believed to be hundreds of meters away.

Each of the many models (ellipsoids) may have several determinations of its reference to
the earth, each resulting in a different geodetic datum. For example, the International 1924
ellipsoid is referenced to the earth at Potsdam for the European Datum 1950, but is also referenced
to the earth near Rome for the Monte Mario 1940 datum used in Italy. Because of the
irregularities of the geoid, a point has coordinates referenced to the European datum that differ by
several meters from coordinates referenced to the Monte Mario datum, despite both datums using the
same ellipsoid. Similarly, if the model is changed (a different ellipsoid adopted), even when the
reference point is retained, coordinates of positions away from the reference point will differ.

Latitude and Longitude are not Unique

Although the WGS 84 (World Geodetic System 1984) and ED 50 (European Datum 1950)
coordinates of the Eiffel Tower in Paris may share the exact same latitude and longitude values
(48°51'29"N, 2°17'40"E), they do not represent the same physical point on the earth's surface. In
this example, the difference between the two coordinate reference system positions is approximately
140 meters. This demonstrates that latitude and longitude are not unique unless the associated
CRS is identified.

Global Positioning System (GPS)

Use of GPS is now widespread within the exploration and production (E&P) industry
and its applications are far ranging. The GPS is a worldwide navigation system operated by the
US Department of Defense and formed by a constellation of 24 satellites and their ground
stations. Its receivers use these satellites as reference points to calculate positions accurate to a
matter of meters, on or above the earth's surface. These "black box" units generate a 3D
coordinate, which can be used for navigation (amongst numerous other purposes) and ultimately
determine your position in terms of a latitude and longitude. In addition, they compute a height
above the ellipsoid for that position.

The coordinate reference system used by the GPS system is known as WGS 84. The
WGS 84 CRS has its own ellipsoid, confusingly also known as WGS 84. There is no single
datum origin point for the WGS 84 datum and geographic coordinates are derived from a world
adjustment of several geodetic markers surveyed by GPS.


Coordinate Transformations

In order to merge points such as surface well locations (whose geographical coordinates
are referenced to one particular CRS) with other points based on a different CRS, one of the two
datasets must be transformed. It is possible to measure and calculate the displacements, rotations
and scale differences between them. There are numerous different methods of transforming
coordinates. Various E&P companies adopt different CRS to store georeferenced data in their
corporate databases. It is therefore quite common to have to transform data sets to suit the
recipient's prescribed CRS, prior to sharing data with other operators or submitting information
to regulatory bodies.
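
One widely used family of methods applies a seven-parameter (Helmert) similarity transformation to geocentric Cartesian coordinates: three translations, three small rotations and a scale change. The sketch below shows the standard small-angle form of that computation; the parameter values and the input point are purely illustrative and do not correspond to any official transformation between real datums.

import numpy as np

# Seven-parameter (Helmert) similarity transformation on geocentric (X, Y, Z)
# coordinates. All parameter values below are illustrative placeholders.

def helmert_transform(xyz, tx, ty, tz, rx, ry, rz, scale_ppm):
    """Small-angle Helmert transform: translations in metres, rotations in radians,
    scale change in parts per million."""
    s = 1.0 + scale_ppm * 1e-6
    rotation = np.array([[1.0, -rz,  ry],
                         [ rz, 1.0, -rx],
                         [-ry,  rx, 1.0]])
    return np.array([tx, ty, tz]) + s * rotation @ np.asarray(xyz)

arcsec = np.deg2rad(1.0 / 3600.0)
point_crs_a = np.array([3900000.0, 300000.0, 5000000.0])      # hypothetical point, metres
point_crs_b = helmert_transform(point_crs_a,
                                tx=-90.0, ty=-100.0, tz=-120.0,   # assumed translations
                                rx=0.0, ry=0.0, rz=0.5 * arcsec,  # assumed rotations
                                scale_ppm=1.0)                    # assumed scale change
print(point_crs_b)

Converting between geographical (latitude, longitude, height) and geocentric coordinates, and the choice of parameter values, would in practice follow the recipient's prescribed CRS and a published transformation.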

1.4. REMOTE SENSING, IMAGE PROCESSING AND GLOBAL POSITIONING SYSTEM

Remote sensing techniques play an important role in crop identification, crop area and
production estimation, disease and stress detection, soil and water resources etc. Remote sensing
applications have become very important for making macroeconomic decisions related to food
security, poverty alleviation and sustainable development in the country.

1.4.1. WHAT IS REMOTE SENSING?

Remote sensing is the science (and to some extent, art) of acquiring information about the
earth's surface without actually being in contact with it. This is done by sensing and recording
reflected or emitted energy and processing, analysing and applying that information. In much
remote sensing, the process involves an interaction between incident radiation and the targets of
interest. This is exemplified by the use of imaging systems where the following seven elements
are involved. Remote sensing also involves the sensing of emitted energy and the use of non-
imaging sensors.

1. Energy source or illumination (A) : The first requirement for remote sensing is to have
an energy source which illuminates or provides electromagnetic energy to the target of
interest.

2. Radiation and the atmosphere (B) : As the energy travels from its source to the target,
it will come in contact with and interact with the atmosphere it passes through. This
interaction may take place a second time as the energy travels from the target to the
sensor.

3. Interaction with the target (C) : Once the energy makes its way to the target through
the atmosphere, it interacts with the target depending on the properties of both the target
and the radiation.

4. Recording of energy by the sensor (D) : After the energy has been scattered by or
emitted from the target, we require a sensor (remote-not in contact with the target) to
collect and record the electromagnetic radiation.


5. Transmission, reception and processing (E) : The energy recorded by the sensor has to
be transmitted, often in electronic form, to a receiving and processing station where the
data are processed into an image (hardcopy and/or digital).

6. Interpretation and analysis (F) : The processed image is interpreted, visually and/or
digitally or electronically, to extract information about the target which was illuminated.

7. Application (G): The final element of the remote sensing process is achieved when we
apply the information we have been able to extract from the imagery about the target in
order to better understand it, reveal some new information or assist in solving a particular
problem.

1.4.2. REMOTE SENSING APPLICATION IN AGRICULTURE

Satellite and airborne images are used as mapping tools to classify crops, examine their health
and viability and monitor farming practices. Agricultural applications of remote sensing include
the following:

1. Soil properties sensing : Soil texture, structure and physical condition, soil moisture and
soil nutrients.

2. Crop sensing : Plant population, crop stress and nutrient status.

3. Yield monitoring systems : Crop yield, harvest swath width and moisture content of
grain.

4. Variable rate technology systems : Fertiliser flow, weed detection etc.

Crop Type Mapping

Identifying and mapping crops is important for a number of reasons. Maps of crop type
are created by national and multinational agricultural agencies, insurance agencies and regional
agricultural boards to prepare an inventory of what was grown in certain areas and when. This
serves the purpose of forecasting grain supplies (yield prediction), collecting crop production
statistics, facilitating crop rotation records, mapping soil productivity, identification of factors
influencing crop stress, assessment of crop damage due to storms and drought and monitoring
farming activity.

Key activities include identifying the crop types and delineating their extent (often
measured in acres). Traditional methods of obtaining this information are census and ground
surveying. In order to standardise measurements however, particularly for multinational agencies
and consortiums, remote sensing can provide common data collection and information extraction
strategies.

Remote sensing offers an efficient and reliable means of collecting the information
required, in order to map crop type and area. Besides providing a synoptic view, remote sensing
can provide structure information about the health of the vegetation. The spectral reflection of a
field will vary with respect to changes in the phenology (growth), stage type and crop health,
and thus can be measured and monitored by multispectral sensors. Radar is sensitive to the
structure, alignment and moisture content of the crop and thus can provide complementary
information to the optical data. Combining the information from these two types of sensors
increases the information available for distinguishing each target class and its respective
signature and thus there is a better chance of performing a more accurate classification.
Interpretations from remotely sensed data can be input to a geographic information system (GIS)
and crop rotation systems and combined with ancillary data, to provide information of
ownership, management practices etc.

Crop identification and mapping benefit from the use of multitemporal imagery to
facilitate classification by taking into account changes in reflectance as a function of plant
phenology (stage of growth). This in turn requires calibrated sensors and frequent repeat imaging
throughout the growing season. For example, crops like canola may be easier to identify when
they are flowering, because of both the spectral reflectance change and the timing of the
flowering.

Multisensor data are also valuable for increasing classification accuracies by
contributing more information than a sole sensor could provide. The VIR sensing contributes
information relating to the chlorophyll content of the plants and the canopy structure, while radar
provides information relating to plant structure and moisture. In areas of persistent cloud cover
or haze, radar is an excellent tool for observing and distinguishing crop type due to its active
sensing capabilities and long wavelengths, capable of penetrating through atmospheric water
vapour.

Crop Monitoring and Damage

Early detection and assessment of crop pest and pathogen infestations and soil moisture
stress is critical in effective plant protection leading to optimum yield. This process requires that
remote sensing imagery be provided on a frequent basis (at a minimum, weekly) and be
delivered to the farmer quickly, usually within a day or two.

Also, crops do not generally grow evenly across the field and consequently crop yield
can vary greatly from one spot in the field to another. These growth differences may be due to
soil nutrient deficiencies or other forms of stress. Remote sensing allows the farmer to identify
problem areas (nutrient deficiencies, pesticide and herbicide needs, etc.) within a field, so that the
farmer can take up timely remedial measures for improving the productivity of crops.

Remote sensing has a number of attributes that lend themselves to monitoring the health
of crops. One advantage of optical (VIR) sensing is that it can see beyond the visible
wavelengths into the infrared, where wavelengths are highly sensitive to crop vigour as well as
crop stress and crop damage. Remote sensing imagery also gives the required spatial overview of
the land. Recent advances in communication and technology allow a farmer to observe images of
his fields and make timely decisions about managing the crops. Remote sensing can aid in
identifying crops affected by conditions that are too dry or wet, affected by insect, weed or
fungal infestations or weather related damage. Images can be obtained throughout the growing
season to not only detect problems, but also to monitor the success of the treatment.


Healthy vegetation contains large quantities of chlorophyll, the substance that gives most
vegetation its distinctive green colour. In referring to healthy crops, reflectance in the blue and
red parts of the spectrum is low since chlorophyll absorbs this energy. In contrast, reflectance in
the green and near-infrared spectral regions is high. Stressed or damaged crops experience a
decrease in chlorophyll content and changes to the internal leaf structure. The reduction in
chlorophyll content decreases reflectance in the green region, and internal leaf damage results in a
decrease in near-infrared reflectance. These reductions in green and infrared reflectance provide
early detection of crop stress. Examining the ratio of reflected infrared to red wavelengths is an
excellent measure of vegetation health. This is the premise behind some vegetation indices,
such as the normalised difference vegetation index (NDVI). Healthy plants have a high
NDVI value because of their high reflectance of infrared light and relatively low reflectance of
red light. Phenology and vigour are the main factors in affecting NDVI. An excellent example is
the difference between irrigated crops and non-irrigated land. Irrigated crops appear bright green
in a real-colour simulated image. Darker areas are dry rangeland with minimal vegetation. In a
CIR (colour infrared simulated) image, where infrared reflectance is displayed in red, the healthy
vegetation appears bright red, while the rangeland remains quite low in reflectance.
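
The ratio mentioned above is usually written as NDVI = (NIR - Red) / (NIR + Red). A minimal sketch, assuming small arrays stand in for the red and near-infrared reflectance bands of an image:

import numpy as np

# NDVI = (NIR - Red) / (NIR + Red); the two arrays are stand-ins for image bands.

red = np.array([[0.08, 0.10, 0.30],
                [0.09, 0.25, 0.28]])     # vigorous vegetation reflects little red
nir = np.array([[0.50, 0.55, 0.32],
                [0.52, 0.30, 0.31]])     # vigorous vegetation reflects strongly in NIR

ndvi = (nir - red) / (nir + red + 1e-9)  # small constant guards against division by zero
print(np.round(ndvi, 2))                 # values near +0.7 suggest healthy vegetation,
                                         # values near 0 suggest bare soil or stressed cover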

Examining variations in crop growth within one field is possible. Areas of consistently
healthy and vigorous crop would appear uniformly bright. Stressed vegetation would appear dark
amongst the brighter, healthier crop areas. If the data are georeferenced and the farmer has a
GPS (global positioning system) unit, he can find the exact area of the problem very quickly by
matching the coordinates of his location to those on the image.

Detecting damage and monitoring crop health requires high-resolution imagery and
multispectral imaging capabilities. One of the most critical factors in making imagery useful to
farmers is a quick turnaround time from data acquisition to distribution of crop information.
Receiving an image that reflects crop conditions of two weeks earlier does not help real time
management or damage mitigation. Images are also required at specific times during the growing
season and on a frequent basis.

Remote sensing does not replace the field work performed by farmers to monitor their
fields, but it does direct them to the areas in need of immediate attention.

1.4.3. IMAGE PROCESSING AND INTERPRETATION

Image processing and interpretation/analysis can be defined as the "act of examining
images for the purpose of identifying objects and judging their significance". Image analysts study
the remotely sensed data and attempt, through a logical process, to detect, identify,
classify, measure and evaluate the significance of physical and cultural objects, their
patterns and spatial relationships. Some procedures commonly used in analysing/interpreting
remote sensing images are briefly presented below:

1. Pre-processing.
2. Image enhancement.
3. Image classification.
4. Spatial feature extraction.


5. Measurement of biogeophysical parameters.


6. Geographical information system (GIS).

Pre-Processing

Prior to data analysis, initial processing on the raw data is usually carried out to correct
for any distortion due to the characteristics of the imaging system and imaging conditions.
Depending on the user's requirement, some standard correction procedures may be carried out by
the ground station operators before the data is delivered to the end-user. These procedures
include radiometric correction to correct for uneven sensor response over the whole image and
geometric correction to correct for geometric distortion due to earth’s rotation and other
imaging conditions (such as oblique viewing). The image may also be transformed to conform to
a specific map projection system. Furthermore, if the accurate geographical location of an area on
the image needs to be known, ground control points (GCPs) are used to register the image to a
precise map (georeferencing).
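
As a hedged sketch of the georeferencing step, the code below fits a simple affine transformation from image (column, row) coordinates to map coordinates by least squares using a handful of ground control points; the GCP values are hypothetical and a production workflow would use purpose-built image-processing software.

import numpy as np

# Fit an affine transform mapping pixel (col, row) to map (E, N) from GCPs (hypothetical values).
gcp_pixel = np.array([[10, 20], [500, 30], [480, 400], [15, 390]], dtype=float)
gcp_map = np.array([[440000.0, 1795000.0],
                    [452250.0, 1794750.0],
                    [451750.0, 1785500.0],
                    [440125.0, 1785750.0]])

# Design matrix [col, row, 1] so that E = a*col + b*row + c and N = d*col + e*row + f
A = np.hstack([gcp_pixel, np.ones((len(gcp_pixel), 1))])
coeff_E, *_ = np.linalg.lstsq(A, gcp_map[:, 0], rcond=None)
coeff_N, *_ = np.linalg.lstsq(A, gcp_map[:, 1], rcond=None)

def pixel_to_map(col, row):
    p = np.array([col, row, 1.0])
    return float(p @ coeff_E), float(p @ coeff_N)

print(pixel_to_map(250, 200))   # estimated map coordinates of an arbitrary pixel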

Image Enhancement

In order to aid visual interpretation, the visual appearance of the objects in the image can be
improved by image enhancement techniques such as grey-level stretching to improve the
contrast and spatial filtering for enhancing the edges. A bluish tint all over the image, producing
a hazy appearance, is due to the scattering of sunlight by the atmosphere into the field of view of the
sensor. This effect also degrades the contrast between different land covers.

The image can be enhanced by a simple linear grey-level stretch. In this method, a
lower threshold value is chosen so that all pixel values below this threshold are mapped to zero.
An upper threshold value is also chosen so that all pixel values above this threshold are mapped
to 255. All other pixel values are linearly interpolated to lie between 0 and 255. The lower and
upper thresholds are usually chosen to be values close to the minimum and maximum pixel
values of the image.

The result of applying the linear stretch is that the hazy appearance will be removed,
except for some parts near to the top of the image. The contrast between different features will be
improved.
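
A minimal sketch of this linear grey-level stretch, assuming an 8-bit single-band image held as a NumPy array and illustrative threshold values:

import numpy as np

# Linear grey-level stretch: values at or below the lower threshold map to 0, values at
# or above the upper threshold map to 255, everything else is linearly interpolated.

def linear_stretch(band, lower, upper):
    band = band.astype(float)
    stretched = (band - lower) / (upper - lower) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)

hazy = np.array([[60, 75, 90],
                 [120, 150, 180]], dtype=np.uint8)   # hypothetical low-contrast band
print(linear_stretch(hazy, lower=60, upper=180))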

Image Classification

Different land cover types in an image can be discriminated using image
classification algorithms based on spectral features, i.e. the brightness and "colour" information
contained in each pixel. Classification procedures can be "supervised" or "unsupervised".

In supervised classification, the spectral features of some areas of known land cover
types are extracted from the image. These areas are known as training areas. Every pixel in
the whole image is then classified as belonging to one of the classes depending on how close its
spectral features are to the spectral features of the training areas.


In unsupervised classification, the computer programme automatically groups the pixels in the
image into separate clusters, depending on their spectral features. Each cluster is then
assigned a land cover type by the analyst.
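
As a hedged sketch of the supervised case, the snippet below classifies pixels by minimum spectral distance to class means taken from training areas; the two-band reflectance values and class statistics are invented for illustration, and operational work would use full image-processing packages and richer classifiers.

import numpy as np

# Minimum-distance-to-means supervised classification on two spectral features
# (e.g. red and near-infrared reflectance). Training means and pixels are hypothetical.

training_means = {
    "water":      np.array([0.04, 0.02]),
    "vegetation": np.array([0.08, 0.50]),
    "bare soil":  np.array([0.25, 0.30]),
}

def classify(pixel):
    """Assign the pixel to the class whose training mean is spectrally closest."""
    return min(training_means, key=lambda cls: np.linalg.norm(pixel - training_means[cls]))

image_pixels = np.array([[0.05, 0.03], [0.09, 0.48], [0.22, 0.28]])
for px in image_pixels:
    print(px, "->", classify(px))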

Each class of land cover is referred to as a theme and the product of classification is
known as a thematic map.

Spatial Feature Extraction

In high spatial resolution imagery, details such as buildings and roads can be seen. In
order to fully exploit the spatial information contained in the imagery, image processing and
analysis algorithms utilising textural, contextual and geometrical properties are required.
Such algorithms make use of the relationship between neighbouring pixels for information
extraction. Incorporation of a priori information is sometimes required. A multi-resolution
approach (analysis at different spatial scales and combining the results) is also a useful strategy
when dealing with very high resolution imagery. In this case, a pixel-based method can be used at
the lower resolution and merged with the contextual and textural methods at higher
resolutions.

Measurement of Biogeophysical Parameters

Specific instruments carried on board satellites can be used to make measurements of
the biogeophysical parameters of the earth. Some examples are: atmospheric water vapour
content, stratospheric ozone, land and sea surface temperature, sea water chlorophyll
concentration, forest biomass, sea surface wind field, tropospheric aerosols, etc. Specific satellite
missions have been launched to continuously monitor the global variations of these
environmental parameters that may show the causes or the effects of global climate change and
the impacts of human activities on the environment.

Geographical Information System (GIS)

Different forms of imagery such as optical and radar images provide complementary
information about the landcovers. More detailed information can be derived by combining
several different types of images. For example, radar image can form one of the layers in
combination with the visible and near infrared layers when performing classification. Chapter
1.2.2 and 1.3.3 may also be referred for further information on GIS.

Thematic information derived from the remote sensing images is often combined with
other auxiliary data to form the basis for a geographic information system (GIS). A GIS is a
database of different layers, where each layer contains information about a specific aspect of the
same area and is used for analysis by resource scientists.

1.4.4. GLOBAL POSITIONING SYSTEM (GPS) COMPONENTS AND ITS FUNCTIONS

Global positioning system (GPS) is a satellite-based navigation system, consisting of more than
24 satellites and several supporting ground facilities, which provides accurate, three-dimensional
position, velocity and time, 24 hours a day, everywhere in the world and in all weather
conditions. The global positioning system consists of three main components.

GPS Components

Basic components of global positioning include:

I. GPS ground control stations.
II. GPS satellites.
III. GPS receivers.

GPS Ground Control Stations

The ground control component includes the master control station at Falcon Air Force Base,
Colorado Springs, Colorado, and monitor stations at Falcon AFB, Hawaii, Ascension Island in
the Atlantic, Diego Garcia in the Indian Ocean and Kwajalein Island in the Pacific. The
control segment uses measurements collected by the monitor stations to predict the behaviour of
each satellite's orbit and atomic clocks. The prediction data is uploaded to the satellites for
transmission to users. The control segment also ensures that GPS satellite orbits remain within
limits and that the satellites do not drift too far from nominal orbits.

GPS Satellites

The space segment includes the satellites and the Delta rockets that launch the satellites from
Cape Canaveral in Florida, United States. GPS satellites fly in circular orbits at approximately
20,200 km altitude, each orbit lasting about 12 hours. The orbits are tilted to the equator by 55° to
ensure coverage of the polar regions. The satellites are powered by solar cells and continually
orientate themselves to point their solar panels towards the sun and their antennas towards the
earth. Each satellite contains four atomic clocks.

GPS Receivers

The ground stations send control signals to the GPS satellites. The GPS satellites transmit
radio signals, and the GPS receivers receive these signals and use them to calculate their position.

The calculations used to determine a GPS receiver's position are based on very small time
differences between when the satellite transmitted the signal and when the GPS receiver received
the signal. These small differences are then used to calculate the distance from the receiver to the
satellite. However, when receiving only one signal, we can only calculate how far away from the
satellite we are. When receiving two signals, we can determine two likely positions where we
are. We need three satellite signals to determine our exact position on the earth's surface (2D/two-
dimensional positioning). When more than three satellites are visible to the GPS receiver, it will
also calculate the altitude of the receiver (3D/three-dimensional positioning).

The GPS receiver requires signals from at least three satellites to determine a unique
position on the earth's surface. With a fourth signal, altitude can also be determined. By receiving
signals from more than four different satellites, the position of the GPS receiver can be
determined even more accurately.


The GPS satellite constellation is designed in such a manner as to guarantee that at least
four satellites are visible from any place on earth at any moment in time. Most of the time
(>95%), however, at least six satellites should be visible. Many commercial GPS receivers
can receive and process signals from 12 satellites for increased reliability and accuracy.

The GPS satellites carry atomic clocks that measure time to a high degree of accuracy.
Time information is placed in the codes broadcast by the satellite so that a receiver can
continuously determine the time the signal was broadcast. The signal contains data that a
receiver uses to compute the locations of the satellites and to make other adjustments needed for
accurate positioning. Receiver uses the time difference between the time of signal reception and
the cast time to compute the range to the satellite. Receiver must account for propagation delays
caused by the ionosphere and the troposphere. With three ranges to three satellites and knowing
the location of the satellite when the signal was sent, the receiver can compute its three-
dimensional position.

To compute ranges directly, however, the user must have an atomic clock synchronised
to the global positioning system. By taking a measurement from an additional satellite, the
receiver is the need for an atomic clock. The result is that the receiver uses four satellites to
compute latitude, longitude, altitude and time.
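
A hedged numerical sketch of that idea is given below: given four satellite positions and simulated pseudoranges, an iterative least-squares (Gauss-Newton) solution recovers the receiver position and the clock bias. Satellite and receiver coordinates are hypothetical, and atmospheric delays are ignored.

import numpy as np

# Solve for receiver position and clock bias from four pseudoranges (illustrative values only).
C = 299792458.0   # speed of light, m/s

sat_pos = np.array([[15600e3,  7540e3, 20140e3],
                    [18760e3,  2750e3, 18610e3],
                    [17610e3, 14630e3, 13480e3],
                    [19170e3,   610e3, 18390e3]])
true_receiver = np.array([1113e3, 6370e3, 1000e3])   # assumed true position, metres
true_clock_bias_s = 1e-4                             # assumed receiver clock error, seconds

# Simulated pseudoranges = geometric range + clock-bias term (in metres)
pseudoranges = np.linalg.norm(sat_pos - true_receiver, axis=1) + C * true_clock_bias_s

estimate = np.zeros(4)            # unknowns: x, y, z (m) and clock bias expressed in metres
for _ in range(10):               # Gauss-Newton iterations
    ranges = np.linalg.norm(sat_pos - estimate[:3], axis=1)
    predicted = ranges + estimate[3]
    # Jacobian: unit vectors from satellites towards the estimate, plus 1 for the bias term
    H = np.hstack([(estimate[:3] - sat_pos) / ranges[:, None],
                   np.ones((len(sat_pos), 1))])
    correction, *_ = np.linalg.lstsq(H, pseudoranges - predicted, rcond=None)
    estimate += correction

print("Estimated position (m):", np.round(estimate[:3]))
print("Estimated clock bias (s):", estimate[3] / C)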

GPS Functions

The GPS functions include :

1. Giving a location: This is the whole point of a navigation system: its ability to
accurately triangulate your position based on the data transmissions from multiple
satellites. It will give your location in coordinates, either latitude and longitude or
Universal Transverse Mercators (UTMs). Developed by the military, UTMs are used
to pinpoint a location on a map. Most topographical maps have UTM gridlines printed on
them.
2. Point-to-point navigation: This GPS navigation feature allows you to add waypoints to
your trips. By using a map, the coordinates of a trailhead or road or the point where you
are standing, you can create a point-to-point route to the place where you are headed.
You will have the trip mapped out, including any stops you add in.
3. Plot navigation: This feature in a global positioning system allows you to combine
multiple waypoints and move point-to-point. Once you reach the first waypoint, the GPS
can automatically point you on your way to the next one. The waypoint management
software comes with most handheld GPS units for easy database management.
4. Keeping track of your track: Tracks are some of the most useful functions of portable
navigation systems. You can map where you have already been. This virtual map is
called a track and you can program the GPS system to automatically drop track-points as
you travel, either over intervals of time or distance. This can be done on land or in a
nautical setting and allows you to retrace your steps.


GPS Applications

The GPS applications include:


Guidance
• Point guidance.
• Swath guidance.
Control
• Variable rate application.
• Variable tillage depth.
• Variable irrigation.
Mapping
• Soil properties.
• Chemical application.
• Chemical prescriptions.
• Tillage maps.
• Yield mapping.
• Pest mapping.
• Topographic maps.
• Planting maps.

1.5. SIMULATION AND MODELLING

Modelling and simulation is a discipline for developing a level of understanding of the
interaction of the parts of a system and of the system as a whole. The level of understanding
which may be developed via this discipline is seldom achievable via any other discipline.

Although modelling and simulation is a discipline, it is also very much an art form. One can learn
about riding a bicycle from reading a book, but to really learn to ride a bicycle one must become
actively engaged with a bicycle. Modelling and simulation follows much the same reality. You
can learn much about modelling and simulation from reading books and talking with other
people, but skill and talent in developing models and performing simulations is only developed
through building models and simulating them.

Simulation can be broadly defined as a technique for studying real-world dynamical
systems by imitating their behaviour using a mathematical model of the system implemented on
a digital computer.


Simulation can also be viewed as a numerical technique for solving complicated
probability models, ordinary differential equations and partial differential equations, analogously
to the way in which we can use a computer to numerically evaluate the integral of a complicated
function. That is why the science of simulation is considered an interdisciplinary subject.

Computer simulation is a powerful methodology for the design and analysis of complex
systems. The overall approach in computer simulation is to represent the dynamic characteristics of a
real-world system in a computer model. The model is subjected to experiments to obtain
predictive information useful for informed decision making about the characteristics of
the real system. Simulations are suitable for problems in which there are no closed-form
analytical solutions. Since most dynamic problems in practice cannot be represented and solved
fully using mathematical equations, computer simulation is a powerful and flexible methodology
for complex systems analysis.

Simulations can be classified into continuous and discrete simulations. In continuous
simulations, the state variables, i.e. the collection of variables needed to describe the system,
change continuously over time and the behaviour of the system is typically described by
differential equations. Examples of continuous systems include the modelling of thermal or
hydraulic systems. Discrete simulations are event-driven, where the state variables change at
discrete time points. Examples of discrete-event simulations include service industry applications
such as queues in a grocery store and manufacturing applications involving material flow
analysis.

1.5.1. WHEN TO USE SIMULATION

There are several situations in which simulations can be used as indicated below :

1. Study internals of a complex system e.g. biological system.


2. Optimise an existing design e.g. routing algorithms, assembly line.
3. Examine effect of environmental changes e.g. weather forecasting.
4. System is dangerous or destructive e.g. atom bomb, atomic reactor, missile launching.
5. Study importance of variables.
6. Verify analytic solutions (theories).
7. Test new designs or policies.
8. Impossible to observe/influence/build the system.
9. When it allows inspection of system internals that might not otherwise be observable.
10. Observation of the simulation gives insights into system behaviour.
11. System parameters can be adjusted in the simulation model allowing assessment of their
sensitivity (scale of impact on overall system behaviour).
12. Simulation verifies analysis of a complex system or can be used as a teaching tool to provide
insight into analytical techniques.
13. A simulator can be used for instruction, avoiding tying up or damaging an expensive, actual
system (e.g. a flight simulation vs. use of multimillion dollar aircraft).


1.5.2. MODELLING CONCEPTS

There are several concepts underlying simulation. These include system and model, events,
system state variables, entities and attributes, list processing, activities and delays, and finally the
definition of discrete-event simulation.

The process of making and testing hypotheses about models and then revising designs or
theories has its foundation in the experimental sciences. Similarly, computational scientists use
modelling to analyse complex, real-world problems in order to predict what might happen with
some course of action. For example, Dr Julianne Collins, a genetic epidemiologist (statistical
genetics) at the Greenwood Genetic Center, runs genetic analysis programmes and analyses
epidemiological studies using the Statistical Analysis Software (SAS). Scientists are using a
combination of mathematics, signal processing and scientific visualisation to model, image and
discover land mines.

System, Model and Events

A model is a representation of an actual system. Modelling and simulation concepts, as
introduced by Zeigler, are:

• A model is an abstraction of the real system.
• Simplifying assumptions are used to capture (only) important behaviours.
• Linearisation, time-bound behaviours etc. may make analysis tractable.

Modelling is defined as the application of methods to analyse complex, real-world problems
in order to make predictions about what might happen with various actions.

Object : It is some entity in the real-world. Such an object can exhibit widely varying
behaviour depending on the context in which it is studied, as well as the aspects of its behaviour
which are under study.

Base model : It is the hypothetical, abstract representation of the object's properties, in
particular its behaviour, which is valid in all possible contexts and describes all the object's
facets. A base model is hypothetical as we will never, in practice, be able to construct/represent
such a total model. The question whether a base model exists at all is a philosophical one.

System : System is a well-defined object in the real-world under specific conditions, only
considering specific aspects of its structure and behaviour.

Experimental frame : When one studies a system in the real-world, the experimental frame
(EF) describes experimental conditions (context), aspects, within which that system and
corresponding models will be used. As such, the experimental frame reflects the objectives of the
experimenter who performs experiments on a real system or through simulation on a model.

Immediately, there is a concern about the limits or boundaries of the model that
supposedly represents the system. The model should be complex enough to answer the questions
raised, but not too complex. Consider an event as an occurrence that changes the state of the
system. In the example, events include the arrival of a customer for service at the bank, the
beginning of service for a customer and the completion of a service. There are both internal and
external events, also called endogenous and exogenous events, respectively. For example, an
endogenous event in the example is the beginning of service of the customer since that is within
the system being simulated. An exogenous event is the arrival of a customer for service since
that occurrence is outside of the simulation. However, the arrival of a customer for service
impinges on the system and must be taken into consideration.

Discrete-event simulation models are contrasted with other types of models such as
mathematical models, descriptive models, statistical models and input-output models. A discrete
event model attempts to represent the components of a system and their interactions to such an
extent that the objectives of the study are met. Most mathematical, statistical and input output
models represent a system's inputs and outputs explicitly, but represent the internals of the model
with mathematical or statistical relationships. An example is the mathematical model from
physics,
Force = Mass × Acceleration

based on theory. Discrete-event simulation models, in contrast, include a detailed representation
of the actual internals.

Discrete-event models are dynamic, i.e. the passage of time plays a crucial role. Most
mathematical and statistical models are static in that they represent a system at a fixed point of
time. Consider the annual budget of a firm. This budget resides in a spreadsheet. Changes can be
made in the budget and the spreadsheet can be recalculated, but the passage of time is usually not
a critical issue. Further comments will be made about discrete-event models after several
additional concepts are presented.

Models have many uses, typically :

• To understand the behaviour of an existing system (why does my network performance die when more than 10 people are at work?).
• To predict the effect of changes or upgrades to the system (will spending 100,000 on a new switch cure the problem?).
• To study new or imaginary systems (let's bin the ethernet and design our own scalable custom routing network).

System State Variables

System state variables are the collection of all information needed to define what is
happening within the system to a sufficient level (i.e. to attain the desired output) at a given point
in time. Determination of system state variables is a function of the purposes of the investigation,
so what may be the system state variables in one case may not be the same in another case even
though the physical system is the same.

Determining the system state variables is as much an art as a science. However, during
the modelling process, any omissions will readily come to light. (And, on the other hand,
unnecessary state variables may be eliminated.) Having defined system state variables, a contrast can be made between discrete-event models and continuous models based on the variables needed to track the system state: in a discrete-event model, the state variables remain constant over intervals of time and change value only at certain well-defined points called event times. Continuous models have system state variables defined by differential or difference equations, giving rise to variables that may change continuously over time.

Some models are mixed discrete-event and continuous. There are also continuous models
that are treated as discrete-event models after some re-interpretation of system state variables and
vice versa.

Entities and attributes: An entity represents an object that requires explicit definition.
An entity can be dynamic in that it “moves” through the system or it can be static in that it serves
other entities. In the example, the customer is a dynamic entity, whereas the bank teller is a static
entity.

An entity may have attributes that pertain to that entity alone. Thus, attributes should be
considered as local values. In the example, an attribute of the entity could be the time of arrival.
Attributes of interest in one investigation may not be of interest in another investigation. Thus, if
red parts and blue parts are being manufactured, the colour could be an attribute. However, if the
time in the system for all parts is of concern, the attribute of colour may not be of importance.
From this example, it can be seen that many entities can have the same attribute or attributes (i.e.
more than one part may have the attribute “red”).

Resources : A resource is an entity that provides service to dynamic entities. The resource
can serve one or more than one dynamic entity at the same time i.e. operates as a parallel server.
A dynamic entity can request one or more units of a resource. If denied, the requesting entity
joins a queue or takes some other action (i.e. diverted to another resource, ejected from the
system). Other terms for queues include files, chains, buffers and waiting lines. If permitted to
capture the resource, the entity remains for a time and then releases the resource.

There are many possible states of the resource. Minimally, these states are idle and busy.
But other possibilities exist including failed, blocked or starved.

List processing: Entities are managed by allocating them to resources that provide
service, by attaching them to event notices thereby suspending their activity into the future or by
placing them into an ordered list. Lists are used to represent queues. Lists are often processed
according to FIFO (first-in first-out), but there are many other possibilities. For example, the list
could be processed by LIFO (last-in first-out), according to the value of an attribute or randomly, to name a few. An example where the value of an attribute may be important is in SPT
(shortest process time) scheduling. In this case, the processing time may be stored as an attribute
of each entity. The entities are ordered according to the value of that attribute with the lowest
value at the head or front of the queue.
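
As a small illustration of these queue disciplines, the Python sketch below (the part names and processing times are hypothetical) orders the same set of entities under FIFO, LIFO and SPT rules:

from collections import deque
import heapq

# Entities carry attributes; here each part has an id and a processing time.
parts = [("p1", 7.0), ("p2", 3.0), ("p3", 5.0)]

# FIFO: parts are served in order of arrival.
fifo = deque(parts)
fifo_order = [fifo.popleft()[0] for _ in range(len(parts))]     # ['p1', 'p2', 'p3']

# LIFO: the most recently arrived part is served first.
lifo = list(parts)
lifo_order = [lifo.pop()[0] for _ in range(len(parts))]         # ['p3', 'p2', 'p1']

# SPT: order by the processing-time attribute, lowest value at the head of the queue.
spt = [(t, name) for name, t in parts]
heapq.heapify(spt)
spt_order = [heapq.heappop(spt)[1] for _ in range(len(parts))]  # ['p2', 'p3', 'p1']

print(fifo_order, lifo_order, spt_order)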

Activities and delays: An activity is a duration of time whose length is known prior to commencement of the activity. Thus, when the duration begins, its end can be scheduled. The duration can be a constant, a random value from a statistical distribution, the result of an equation, input from a file or computed based on the event state. For example, a service time may
be a constant 10 minutes for each entity, it may be a random value from an exponential
distribution with a mean of 10 minutes, it could be 0.9 times a constant value from clock time 0
to clock time 4 hours, and 1.1 times the standard value after clock time 4 hours or it could be 10
minutes when the preceding queue contains at most four entities and 8 minutes when there are
five or more in the preceding queue. A delay is an indefinite duration that is caused by some
combination of system conditions. When an entity joins a queue for a resource, the time that it
will remain in the queue may be unknown initially since that time may depend on other events
that may occur. An example of another event would be the arrival of a rush order that pre-empts
the resource. When the pre-empt occurs, the entity using the resource relinquishes its control
instantaneously. Another example is a failure necessitating repair of the resource. Discrete-event
simulations contain activities that cause time to advance. Most discrete-event simulations also
contain delays as entities wait. The beginning and ending of an activity or delay is an event.
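
To make these concepts concrete, the following minimal Python sketch simulates the single-teller bank example used above: customer arrivals are exogenous events, service completions are endogenous events, the teller is a resource with idle/busy states and waiting customers form a FIFO queue. The arrival and service parameters are illustrative only, not taken from the text.

import heapq
import random

random.seed(1)

def bank_simulation(n_customers=5, mean_interarrival=4.0, mean_service=3.0):
    # System state: teller status (idle/busy) and the FIFO waiting line.
    clock = 0.0
    event_list = []      # time-ordered event list: (event_time, event_type, customer_id)
    queue = []           # FIFO queue of waiting customer ids
    teller_busy = False  # resource state

    # Exogenous events: schedule all customer arrivals up front.
    t = 0.0
    for cid in range(n_customers):
        t += random.expovariate(1.0 / mean_interarrival)
        heapq.heappush(event_list, (t, "arrival", cid))

    while event_list:
        clock, kind, cid = heapq.heappop(event_list)
        if kind == "arrival":
            if teller_busy:
                queue.append(cid)                  # delay of unknown length begins
            else:
                teller_busy = True                 # activity: service duration is known now
                service = random.expovariate(1.0 / mean_service)
                heapq.heappush(event_list, (clock + service, "departure", cid))
        else:                                      # endogenous end-of-service event
            print(f"t = {clock:5.2f}: customer {cid} completes service")
            if queue:
                nxt = queue.pop(0)
                service = random.expovariate(1.0 / mean_service)
                heapq.heappush(event_list, (clock + service, "departure", nxt))
            else:
                teller_busy = False

bank_simulation()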

Principles of Successful Simulation

Ten principles (10 commandments) for building a successful simulation product are :

1. Simplicity.
2. Learn from the past.
3. Create conceptual model.
4. Build a prototype.
5. Push the user's desire.
6. Model to data available.
7. Separate data from software.
8. Trust your creative juices.
9. Fit universal constraints.
10. Distill your own principles.

1.5.3. MODELS IN AGRICULTURE AND OPTIMISATION OF AGRICULTURAL


INPUTS

Efficient crop production technology is based on a right decision, at the right time, in the right way. Traditionally, crop production functions used in agricultural decision making were derived from conventional, experience-based agronomic research, in which crop yields were related to some defined variables through correlation and regression analysis. Crop yields were expressed as polynomial or exponential mathematical functions of the defined variables, with regression coefficients. Application of correlation and regression analysis has provided some qualitative understanding of the variables and their interactions involved in cropping systems and has contributed to the progress of agricultural sciences. However, quantitative information obtained from this type of analysis is very site specific. It can only be reliably applied to other sites where climate, soil parameters and crop management are similar to those used in developing the original functions. Thus, the quantitative application of regression-based crop models for decision-making is severely limited.

Models in Agriculture

Agricultural models are mathematical equations that represent the reactions that occur
within the plant and the interactions between the plant and its environment. Owing to the
complexity of the system and the incomplete status of present knowledge, it becomes impossible
to completely represent the system in mathematical terms and hence, agricultural models are
images of the reality. Unlike in the fields of physics and engineering, universal models do not
exist within the agricultural sector. Models are built for specific purposes and the level of
complexity is accordingly adopted. Inevitably, different models are built for different subsystems
and several models may be built to simulate a particular crop or a particular aspect of the
production system.

Agricultural models are, however, only crude representations of the real systems because of the incomplete knowledge resulting from the inherent complexity of the systems. Judicious use of such models is possible only if the user has a sound understanding of model structure, scope and limitations. Crop modelling is a relatively young discipline and background literature is scarce.

Input Data for Crop Modelling

Crop modelling requires data related to weather, crop, soil, management practices and
insect pests as indicated below :

Weather data : Maximum and minimum temperature, rainfall, relative humidity, solar
radiation and wind speed. Weather data is required at daily time step to assess daily crop growth
processes.

Crop data : Crop, variety, crop phenology (days to anthesis, days to maturity etc), leaf area index, grain yield, above-ground biomass, 1000-grain weight.

Soil data : Thickness of soil layer, pH, EC, N, P, K, soil organic carbon, soil texture,
sand and clay per cent (soil moisture, saturation, field capacity and wilting point), bulk density
etc.

Crop management data : Date of sowing of the crop is required to initiate the simulation process. Generally, the sowing date is taken as the start time for the simulation. In case of transplanted rice, the date of transplanting is used instead of the sowing date. Seed rate and depth of seeding are also required. Use of inputs in the crop field, namely irrigation, fertiliser, manure and crop residues, needs to be mentioned. Amounts of these inputs are specified along with their time of application and depth of placement. If crop residues or organic nutrient sources are applied, the C : N ratio of those sources has to be quantified.

Pest data : Name and type of the pest, their mode of attack and pest population at different crop growth stages. Data on insects or pests are included only in those models which contain a pest module.
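
To illustrate how these input categories are typically organised before being passed to a simulation run, the sketch below groups them into simple Python data classes. The field names and units are illustrative assumptions and do not follow the input file format of any particular model (e.g. DSSAT or APSIM).

from dataclasses import dataclass, field
from typing import List

@dataclass
class DailyWeather:
    date: str
    t_max: float            # maximum temperature, deg C
    t_min: float            # minimum temperature, deg C
    rainfall: float         # mm
    solar_radiation: float  # MJ m-2 day-1
    rel_humidity: float     # per cent
    wind_speed: float       # m s-1

@dataclass
class SoilLayer:
    thickness_cm: float
    ph: float
    ec: float
    organic_carbon_pct: float
    field_capacity: float
    wilting_point: float
    bulk_density: float

@dataclass
class Management:
    sowing_date: str             # simulation usually starts here
    seed_rate: float             # kg ha-1
    seeding_depth_cm: float
    irrigation_mm: List[float] = field(default_factory=list)
    fertiliser_kg_ha: List[float] = field(default_factory=list)

@dataclass
class CropModelInputs:
    weather: List[DailyWeather]      # daily time step
    soil_profile: List[SoilLayer]
    management: Management
    variety: str = "unknown"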

Optimisation of Agricultural Inputs

Optimising models have the specific objective of devising the best option in terms of
management inputs for practical operation of the system. For deriving solutions, they use
decision rules that are consistent with some optimising algorithm. An algorithm is a procedure
or formula for solving a problem based on conducting a sequence of specified actions. This
forces some rigidity into their structure resulting in restrictions in representing stochastic and
dynamic aspects of agricultural systems. Linear and non-linear programming was used
initially at farm level for enterprise selection and resource allocation. Later, applications to
assess long-term adjustments in agriculture, regional competition, transportation studies,
integrated production and distribution systems as well as policy issues in the adoption of
technology, industry re-structuring and natural resources have been developed. Optimising
models do not allow the incorporation of many biological details and may be poor
representations of reality. Using the simulation approach to identify a restricted set of
management options that are then evaluated with the optimising models has been reported as a
useful option.
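
As a minimal, hypothetical illustration of the linear-programming style of optimisation mentioned above, the sketch below allocates land between two crops subject to land and water limits using scipy.optimize.linprog. All coefficients are invented for the example and are not taken from the text.

# Maximise gross margin = 30000*xA + 45000*xB (Rs), subject to:
#   land:  xA + xB <= 10 ha
#   water: 4*xA + 7*xB <= 55 (thousand m3)
# linprog minimises, so the margins are negated.
from scipy.optimize import linprog

margins = [-30000.0, -45000.0]
A_ub = [[1.0, 1.0],
        [4.0, 7.0]]
b_ub = [10.0, 55.0]

res = linprog(c=margins, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)   # optimal areas (ha) and total gross margin (Rs)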

Some crop models reported in recent literature include :

Software Details
SLAM II Forage harvesting operation
SPICE Whole plant water flow
REALSOY Soybean
MODVEX Model development and validation system
IRRIGATE Irrigation scheduling model
COTTAM Cotton
APSIM Modelling framework for a range of crops
GWM General weed model in row crops
MPTGro Acacia sp and Leucaena sp.
GOSSYM-COMAX Cotton
CropSyst Wheat and other crops
SIMCOM Crop (CERES crop modules) and economics
LUPINMOD Lupin
TUBERPRO Potato and disease
SIMPOTATO Potato
WOFOST Wheat and maize, Water and nutrient
WAVE Water and agrochemicals
SUCROS Crop models
ORYZA1 Rice, water
SIMRIW Rice, water
SIMCOY Corn
CERES-Rice Rice, water
GRAZPLAN Pasture, water, lamb
EPIC Erosion Productivity Impact Calculator
CERES Series of crop simulation models

DSSAT Framework of crop simulation models including modules of CERES, CROPGRO and CROPSIM
PERFECT
QCANE Sugarcane, potential conditions
AUSCANE Sugarcane, potential and water stress conds., erosion
CANEGRO Sugarcane, potential and water stress conds.
APSIM-Sugarcane Sugarcane, potential growth, water and nitrogen stress
NTKenaf Kenaf, potential growth, water stress

1.5.4. AGRICULTURAL MODEL USES AND LIMITATIONS

Models are developed by agricultural scientists, but the user group includes these scientists as well as breeders, agronomists, extension workers, policy makers and farmers. As different users possess varying degrees of expertise in the modelling field, misuse of models may occur. Since crop models are not universal, the user has to choose the most appropriate model according to his objectives. Even when a judicious choice is made, it is important that aspects of model limitations be borne in mind such that modelling studies are put in the proper perspective and successful applications are achieved.

Model Uses

Simulation modelling is increasingly being applied in research, teaching, farm and resource management, policy analysis and production forecasts. These models can be grouped into three application areas, namely, research tools, crop system management tools and policy analysis tools. A summary of some specific applications within the different groups follows:

A. As research tools

Research understanding : Model development ensures the integration of research understanding acquired through discrete disciplinary research, allows the identification of the major factors that drive the system and can highlight areas where knowledge is insufficient. Thus, adopting a modelling approach could contribute towards more targeted and efficient research planning. For example, changing the plant density in a sugar beet model resulted in model failure. This failure stimulated studies that gave additional information concerning biomass partitioning in the sugar beet.

Integration of knowledge across disciplines : Adoption of a modular approach in model


coding allows the scientist to pursue his discipline-oriented research in an independent manner
and at a later stage to integrate the acquired knowledge into a model. For example, the modular
aspect of the APSIM software allows the integration of knowledge across crops as well as across
disciplines for a particular crop. Adoption of a modular framework also allows for the integration
of basic research that is carried out in different regions, countries and continents. This ensures a
reduction of research costs (e.g. through a reduction in duplication of research) as well as the
collaboration between researchers at an international level.

Improvement in experiment documentation and data organisation : Simulation model development, testing and application demand the use of large amounts of technical and

observational data supplied in given units and in a particular order. Data handling forces the modeler to resort to formal data organisation and database systems. The systematic organisation of data enhances the efficiency of data manipulation in other research areas (e.g. productivity analysis, soil fertility status over time).

Genetic improvement : As simulation models become more detailed and mechanistic, they can mimic the system more closely. More precise information can be obtained regarding the impact of different genetic traits on economic yields and these can be integrated in genetic improvement programs, e.g. the NTKenaf model. Researchers have used the modelling approach to design crop ideotypes for specific environments.

Yield analysis : When a model with a sound physiological background is adopted, it is


possible to extrapolate to other environments. The CANEGRO model has been used along the
same lines in the South African sugar industry. Through the modelling approach, quantification
of yield reductions caused by non-climatic causes (e.g. delayed sowing, soil fertility, pests and
diseases) becomes possible. Almost all simulation models have been used for such purposes.
Simulation models have also been reported as useful in separating yield gain into components
due to changing weather trends, genetic improvements and improved technology.

B. As crop system management tools

Cultural and input management : Management decisions regarding cultural practices


and inputs have major impact on yield. Simulation models, that allow the specification of
management options, offer a relatively inexpensive means of evaluating a large number of
strategies that would rapidly become too expensive if the traditional experimentation approach
were to be adopted. Many publications are available describing the use of simulation models
with respect to cultural management (planting and harvest date, irrigation, spacing, selection of
variety type) and input application (water and fertiliser).

Risk assessment and investment support : Using a combination of simulated yields and gross margins, economic risks and weather-related variability can be assessed. These data can then be used as an investment decision support tool.

Site-specific farming : Profit maximisation may be achieved by managing farms as sets


of sub-units and providing the required inputs at the optimum level to match variation in soil
properties across the farm. Such an endeavour is attainable by coupling simulation models with
geographic information systems (GIS) to produce maps of predicted yield over the farm. But,
one of the prerequisites is a systematic characterisation of units that may prove costly.

C. As policy analysis tools

Best management practices : Models having chemical leaching or erosion components


can be used to determine the best practices over the long-term. The EPIC model has been used
to evaluate erosion risks due to cropping practices and tillage.

Yield forecasting : Yield forecasting for industries over large areas is important to the
producer (harvesting and transport), the processing agent (milling period) as well as the
marketing agency. The technique uses weather records together with forecast data to estimate
yield across the industry.

Introduction of a new crop : Agricultural research is linked to the prevailing cropping


system in a particular region. Hence, data concerning the growth and development of a new crop
in that region would be lacking. Developing a simulation model based on scientific data
collected elsewhere and a few datasets collected in the new environment helps in the assessment
of temporal variability in yield using long-term climatic data. Running the simulations with
meteorological data in a balanced network of locations also helps in locating the industry.

Global climate change and crop production : Increased levels of CO2 and other greenhouse gases are contributing to global warming, with associated changes in rainfall pattern. Assessing the effects of these changes on crop yield is important at the producer as well as at the government level for planning purposes.

Model Limitations

Agricultural systems are characterised by high levels of interaction between the


components that are not completely understood. Models are, therefore, crude representations of
reality. Wherever knowledge is lacking, the modeler usually adopts a simplified equation to
describe an extensive subsystem. Simplifications are adopted according to the model purpose
and / or the developer's views and therefore constitute some degree of subjectivity. Models that
do not result from strong interdisciplinary collaboration are often good in the area of the
developer’s expertise but are weak in other areas. Model quality is related to the quality of
scientific data used in model development, calibration and validation.

When a model is applied in a new situation (e.g. switching to a new variety), the calibration and validation steps are crucial for correct simulations. The need for model verification arises because all processes are not fully understood and even the best mechanistic model still contains some empiricism, making parameter adjustments vital in a new situation. Model performance is limited by the quality of input data. It is common in cropping systems to have large volumes of data relating to above-ground crop growth and development, but data relating to root growth and soil characteristics are generally not as extensive. Using approximations may lead to erroneous results.

Most simulation models require that meteorological data be reliable and complete.
Meteorological sites may not fully represent the weather at a chosen location. In some cases, data
may be available for only one (usually rainfall) or a few (rainfall and temperature) parameters
but data for solar radiation, which is important in the estimation of photosynthesis and biomass
accumulation, may not be available. In such cases, the user would rely on generated data. At
times, records may be incomplete and gaps have to be filled. Using approximations would have
an impact on model performance.

Model users need to understand the structure of the chosen model, its assumptions, its limitations and its requirements before any application is initiated. For example, using a model like QCANE, developed for cane growth under non-limiting conditions, would lead to erroneous output and analysis if it were used to simulate growth under water or nitrogen stress conditions. At times, model developers may raise the expectations of model users beyond model capabilities. Users, therefore, need to judiciously assess model capabilities and limitations before a model is adopted for application and decision-making purposes. Generally, crop models are developed by crop scientists and, if interdisciplinary collaboration is not strong, the coding may not be well-structured and model documentation may be poor. This makes alteration and adaptation to simulate new situations difficult, especially for users with limited expertise. Finally, using a model for an objective for which it had not been designed, or using a model in a situation that is drastically different from that for which it had been developed, would lead to model failure.

1.5.5. STCR APPROACH FOR FERTILISER RECOMMENDATIONS

Earlier work on soil testing in India was based on the approach of correlating test values with yield, nutrient uptake or response to graded levels of applied nutrients. The approach has many limitations and its validity under field conditions has always been questioned. Currently, the targeted yield approach has become the mainstay of Coordinated Soil Test Crop Response (STCR) correlation work in India, as applied to a number of crops both on research farms and in cultivators' fields in different agroclimatic regions of the country.

What is Soil Test Crop Response Approach?

Efficient crop fertilisation programme to meet the crop nutrient needs is key to
sustainable agriculture. Efficient crop fertilisation means optimising crop yields, while
minimising nutrient losses to the environment, which is important economically and
environmentally. Efficient nutrient application necessitates balanced fertiliser use and sound
management decisions and practices.

Soil’s nutrient supplying capacity (soil fertility) can be easily determined in laboratories.
However, soil fertility assessment of specific locations at a countrywide scale requires
systematic soil sampling, delivery and feedback reporting. Crop responses to added nutrients can
be tested in field experiments; nevertheless, results are site-specific and often not applicable to
other locations with different soils or climate. Recognising the lack of correlation between soil
tests and crop responses to fertiliser in multi-location fertiliser-rate trials in the past and the
frequent need for site-specific refinements of fertiliser prescriptions, a novel and unique field
experimentation methodology was designed for soil test crop response (STCR) correlation
studies at IARI (Ramamoorthy 1968).

The STCR approach takes into account nutrient contribution from three measurable
sources: 1. Soil fertility (available nutrients, based on chemical soil tests), 2. Added fertilisers
and 3. Added organic manure. Over 2,000 demonstration trials in farmers' fields have validated
the concept, realising the yield targets within a 10 per cent deviation. This novel approach has
become a useful strategy to increase fertiliser use efficiency and boost food production in India.

The objective of STCR is to prescribe (recommend) fertiliser doses for a given crop based on soil test values to achieve targeted yields in a specific agroclimatic region under irrigated or protective irrigation conditions, using mathematical equations developed separately for different crops and different agroclimatic zones. This takes into consideration the efficiency of utilisation of soil and added fertiliser nutrients by the crops and the crop's nutrient requirement for a "desired yield level".

The concept of STCR is that this approach aims at obtaining a basis for precise quantitative adjustment of fertiliser doses under varying soil test values and response conditions of the farmers, and for targeted levels of crop production. These prescriptions are tested in follow-up verification field trials to back up soil testing laboratories for their advisory purpose under specific soil, crop and agroclimatic conditions. Fertilisers can also be recommended, based on regression analysis, for a certain per cent of maximum yield. The STCR methodology takes into account three factors: nutrient requirement of the produce, percentage contribution from soil available nutrients and percentage contribution from added fertilisers, as indicated above.

The adjustment equation derived from these parameters takes the form :

F(N, P, K) = a(N, P, K) × T - b(N, P, K) × S(N, P, K)

where, F = fertiliser nutrients to be applied (kg ha-1),
S = soil available nutrients (kg ha-1),
T = target yield (q ha-1), and
a, b = crop- and region-specific coefficients derived from the nutrient requirement and the efficiencies of nutrient contribution from fertiliser and soil, respectively.

Nutrient requirement (NR), fertiliser efficiency (FE) and soil efficiency (SE) can be
calculated from the experimental data as given below :

Using the above parameters, adjustment equations have been developed for different crops in
different agroclimatic regions on various soils as indicated below:

Fertiliser adjustment equations for rice :
FN = 2.83 T - 0.32 SN
FP2O5 = 2.29 T - 2.98 SP
FK2O = 1.34 T - 0.17 SK

Fertiliser adjustment equations for wheat :
FN = 7.54 T - 0.74 SN
FP2O5 = 1.90 T - 2.88 SP
FK2O = 4.49 T - 0.22 SK

Fertiliser adjustment equations for sorghum :
FN = 4.04 T - 0.22 SN
FP2O5 = 2.72 T - 8.26 SP
FK2O = 3.80 T - 0.17 SK

Fertiliser adjustment equations for sugarcane :
FN = 5.40 T - 1.08 SN
FP2O5 = 6.83 T - 6.51 SP
FK2O = 1.90 T - 0.15 SK

Based on the fertiliser adjustment equations, ready reckoners of fertiliser doses at varying soil test values for specific yield targets have been prepared for different crops under different agroclimatic conditions. As an example, a ready reckoner of fertiliser doses for rice at varying soil test values for a specific yield target is presented in Table 1.3.

Table 1.3. Ready reckoner of fertiliser doses at varying soil test values for specific yield target in
Nandyal region (AP).
Fertiliser adjustment equations for rice in Nandyal region :
FN = 2.83 T - 0.32 SN
FP2O5 = 2.29 T - 2.98 SP
FK2O = 1.34 T - 0.17 SK

TABLE 1.3
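
Where the full ready reckoner table is not at hand, doses can be computed directly from the adjustment equations given above. The minimal Python sketch below does this for the Nandyal rice equations; the yield target and soil test values used in the example call are illustrative, and clipping negative results to zero is an assumption not stated in the text.

# T = yield target (q ha-1); SN, SP, SK = soil test values (kg ha-1).
def rice_fertiliser_dose(T, SN, SP, SK):
    FN    = 2.83 * T - 0.32 * SN
    FP2O5 = 2.29 * T - 2.98 * SP
    FK2O  = 1.34 * T - 0.17 * SK
    # Clip negative prescriptions to zero and round for presentation.
    return {k: max(0.0, round(v, 1)) for k, v in
            {"N": FN, "P2O5": FP2O5, "K2O": FK2O}.items()}

# Example: 50 q/ha target with soil test values of 250 kg N, 20 kg P and 150 kg K per ha.
print(rice_fertiliser_dose(T=50, SN=250, SP=20, SK=150))
# {'N': 61.5, 'P2O5': 54.9, 'K2O': 41.5}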

STCR Approach for Precision Agriculture

Agricultural production is the outcome of a complex interaction of seed, soil, water, fertilisers and other agrochemicals. Therefore, judicious management of all the inputs is essential for sustainability of such a complex system. The focus on enhancing productivity during the green revolution, coupled with disregard for proper management of inputs and for the ecological impacts, has resulted in environmental degradation. The only alternative left to enhance productivity in a sustainable manner from the limited natural resources is to maximise resource input use efficiency. Precision farming is an integrated crop management system that attempts to match the kind and amount of inputs with the actual crop needs for small areas within the field.

In order to collect and utilise information effectively, it is important to be familiar with the modern technological tools available, as indicated below:

• Global positioning system (GPS) receivers.


• Yield monitoring and mapping.
• Grid sampling and variable-rate (VRT) fertiliser.
• Remote sensing.
• Crop scouting.
• Geographic information system (GIS).
• Information management.
• Quantifying on-farm variability.

Important tools, among the above, have been briefly discussed in Chapter 1.3 Precision
Agriculture. Grid sampling and variable-rate fertiliser application for precision agriculture is
given below :

Grid Soil Sampling and Variable-Rate Technology (VRT) for Fertiliser Application

Under normal conditions, the recommended soil sampling procedure is to take samples from portions of fields that are no more than 20 acres in area. Soil cores taken from random locations in the sampling area are combined and sent to a laboratory to be tested. Crop advisors make fertiliser application recommendations from the soil test information for the 20-acre area. Grid soil sampling uses the same principles of soil sampling but increases the intensity of sampling. For example, a 20-acre sampling area would have 10 samples using a 2-acre grid sampling system (samples spaced about 300 feet from each other) compared to one sample in the traditional recommendation. Soil samples collected in a systematic grid also have location information that allows the data to be mapped. The goal of grid soil sampling is to generate a map of nutrient requirement, called an application map. Grid soil samples are analysed in the laboratory and an interpretation of crop nutrient needs is made for each soil sample. The fertiliser application map is plotted using the entire set of soil samples. The application map is loaded into a computer mounted on a variable-rate fertiliser spreader. The computer uses the application map and a GPS receiver to direct a product-delivery controller that changes the amount and/or kind of fertiliser product according to the application map.
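
As an illustration of how grid-sample data can be turned into an application map, the sketch below interpolates soil test N onto field cells with inverse-distance weighting (one common interpolation choice, not prescribed by the text) and converts each cell value to a nitrogen rate, reusing the rice adjustment equation from the STCR section purely as an example. The sample coordinates and values are hypothetical.

import numpy as np

# Grid samples: (x_m, y_m, soil_test_N_kg_ha)
samples = np.array([[0, 0, 180.0], [300, 0, 240.0], [0, 300, 210.0], [300, 300, 260.0]])

def idw(x, y, pts, power=2.0):
    # Inverse-distance weighted estimate of the sampled value at (x, y).
    d = np.hypot(pts[:, 0] - x, pts[:, 1] - y)
    if np.any(d < 1e-9):
        return float(pts[np.argmin(d), 2])
    w = 1.0 / d ** power
    return float(np.sum(w * pts[:, 2]) / np.sum(w))

# Build a coarse application map: N rate falls as the interpolated soil test rises.
cell = 100  # cell size in metres
for y in range(0, 400, cell):
    row = []
    for x in range(0, 400, cell):
        soil_n = idw(x, y, samples)
        rate = max(0.0, 2.83 * 50 - 0.32 * soil_n)  # 50 q/ha target, rice equation as example
        row.append(round(rate, 1))
    print(row)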

1.6. NANOTECHNOLOGY

The word “nano” comes from Greek and means “dwarf”. A nanometer (nm) is a billionth of a meter (10-9 m). One nanometer is about 60,000 times smaller than a human hair in diameter or the size of a virus; a typical sheet of paper is about 100,000 nm thick, a red blood cell is about 2,000 to 5,000 nm in size and the diameter of DNA is in the range of 2.5 nm. The background of nanotechnology is given below:

2000 years ago Sulphide nanocrystals used by Greeks and Romans to dye hair
1000 years ago Gold nanoparticles of different sizes used to produce different colours in stained glass windows
1959 "There is plenty of room at the bottom" lecture by R Feynman
1961 Scanning confocal microscope developed by Marvin Minsky
1974 Taniguchi used the term nanotechnology for the first time
1981 IBM develops the Scanning Tunnelling Microscope
1985 "Buckyball" - scientists at Rice University and University of Sussex discover C60
1986 "Engines of Creation", the first book on nanotechnology, by Eric Drexler; Atomic Force Microscope invented by Binnig, Quate and Gerber
1989 IBM logo made with individual atoms
1991 Carbon nanotube discovered by S Iijima
1999 "Nanomedicine", the first nanomedicine book, by R Freitas
2000 "National Nanotechnology Initiative" launched

1.6.1. DEFINITION, CONCEPTS AND TECHNIQUES

Definition

Nanoscience is the study of phenomena and manipulation of materials at atomic,


molecular and macromolecular scales, where properties differ significantly from those at larger
scale.

Nanotechnologies are the design, characterisation, production and application of structures, devices and systems by controlling shape and size at the nanometre scale. Nanotechnology is a process that builds, controls and restructures materials that are the size of atoms and molecules.

The US National Nanotechnology Initiative (NNI) provides the following definition :


Nanotechnology is the understanding and control of matter at dimensions between
approximately 1 and 100 nanometers (nm), where unique phenomena enable novel
applications. Encompassing nanoscale science, engineering and technology, nanotechnology
involves imaging, measuring, modelling and manipulating matter at this length scale.

Concepts

The idea of nanotechnology was first introduced in 1959 by Richard Feynman, a physicist at Caltech. He never mentioned "nanotechnology", but he suggested that it would
eventually be possible to precisely manipulate atoms and molecules. The term nanotechnology was first used in 1974 by the late Norio Taniguchi and referred to the ability to engineer materials precisely at the scale of nanometres. This is in fact its current meaning: "engineering materials" is usually taken to comprise the design, characterisation, production and application of materials, and the scope has nowadays been widened to include devices and systems, rather than just materials.

A nanoparticle is defined as a small object that acts as a whole unit in terms of its transport and properties.

A nanoparticle is an ultrafine unit with dimensions measured in nanometers (nm; 1 nm = 10-9 meter). Nanoparticles exist in the natural world and are also created as a result of human activities. Because of their submicroscopic size, they have unique material characteristics, and manufactured nanoparticles may find practical applications in a variety of areas including agriculture, medicine, engineering, catalysis and environmental remediation.

A nanoparticle is a natural, incidental or manufactured material containing particles, in an unbound state or as an aggregate or agglomerate, where, for 50 per cent or more of the particles in the number size distribution, one or more external dimensions is in the size range 1 nm-100 nm. There are three major physical properties of nanoparticles and all are interrelated:

1. They are highly mobile in the free state.


2. They have enormous specific surface areas.
3. They may exhibit what are known as quantum effects.

Two main approaches are used in nanotechnology. In the bottom-up approach, materials and devices are built from molecular components which assemble themselves chemically by principles of molecular recognition. In the top-down approach, nano-objects are constructed from larger entities without atomic-level control (Fig. 1.4).

Fig. 1.4. Top-down and bottom-up approaches for the production of nanocapsules.

A number of physical phenomena become pronounced as the size of the system decreases. These include statistical mechanical effects as well as quantum mechanical effects, for example the "quantum size effect", where the electronic properties of solids are altered with reduction in particle size. This effect does not come into play in going from macro to micro dimensions. However, it becomes dominant when the nanometer size range is reached. Additionally, a number of physical (mechanical, electrical, optical etc) properties change when compared to macroscopic systems. One example is the increase in surface area to volume ratio, altering the mechanical, thermal and electrolytic properties of materials. Novel mechanical properties of nanosystems are of interest in nanomechanics research. The catalytic activity of nanomaterials also opens potential risks in their interaction with biomaterials.

Materials reduced to the nanoscale can show different properties compared to what they exhibit on a macroscale, enabling unique applications. For instance, opaque substances become transparent (copper), stable materials turn combustible (aluminium), solids turn into liquids at room temperature (gold) and insulators become conductors (silicon). A material such as gold, which is chemically inert at normal scale, can serve as a potent chemical catalyst at the nanoscale. Much of the fascination with nanotechnology stems from these quantum and surface phenomena that matter exhibits at the nanoscale.

Modern synthetic chemistry has reached the point where it is possible to prepare small
molecules to almost any structure. These methods are used today to produce a wide variety of
useful chemicals such as pharmaceuticals or commercial polymers. This ability raises the
question of expanding this kind of control to the next-larger level, seeking methods to assemble
these single molecules into supramolecular assemblies consisting of many molecules arranged in
a well-defined manner.

These approaches utilise the concepts of molecular self-assembly and/or supramolecular chemistry to automatically arrange molecules into useful conformations through the bottom-up approach. The concept of molecular recognition is especially important: molecules can be designed so that a specific conformation or arrangement is favoured due to non-covalent intermolecular forces.

Tools and Techniques

There are traditional techniques, developed during the 20th century in interface and colloid science, for characterising nanomaterials. These are widely used for first-generation passive nanomaterials.

These methods include several different techniques for characterising particle size distribution. This characterisation is imperative because many materials that are expected to be nanosized are actually aggregated in solutions. Some of the methods are based on light scattering. Others apply ultrasound, such as ultrasound attenuation spectroscopy for testing concentrated nanodispersions and micro-emulsions.

There is also a group of traditional techniques for characterising the surface charge or zeta potential of nanoparticles in solutions. This information is required for proper system stabilisation, preventing its aggregation or flocculation. These methods include microelectrophoresis, electrophoretic light scattering and electroacoustics. The last one, for instance the colloid vibration current method, is suitable for characterising concentrated systems.

The next group of nanotechnological techniques includes those used for the fabrication of nanowires, those used in semiconductor fabrication such as deep ultraviolet lithography,
electron beam lithography, focused ion beam machining, nanoimprint lithography, atomic layer
deposition and molecular vapour deposition and further including molecular self-assembly
techniques such as those employing di-block copolymers. However, all of these techniques
preceded the nanotech era and are extensions in the development of scientific advancements
rather than techniques which were devised with the sole purpose of creating nanotechnology and
which were results of nanotechnology research.

There are several important modern developments. The atomic force microscope
(AFM) and the scanning tunneling microscope (STM) are two early versions of scanning
probes that launched nanotechnology. There are other types of scanning probe microscopy, all
flowing from the ideas of the scanning confocal microscope developed by Marvin Minsky in
1961 and the scanning acoustic microscope (SAM) developed by Calvin Quate and coworkers in
the 1970s, that made it possible to see structures at the nanoscale. The tip of a scanning probe

can also be used to manipulate nanostructures (a process called positional assembly). Feature-
oriented scanning-positioning methodology suggested by Rostislav Lapshin appears to be a
promising way to implement these nanomanipulations in automatic mode. However, this is still a
slow process because of low scanning velocity of the microscope. Various techniques of
nanolithography such as dip pen nanolithography, electron beam lithography or
nanoimprint lithography were also developed. Lithography is a top-down fabrication
technique where a bulk material is reduced in size to nanoscale pattern.

The top-down approach anticipates nanodevices that must be built piece by piece in stages, much as manufactured items are made. Scanning probe microscopy is an important technique for the characterisation and synthesis of nanomaterials. Atomic force microscopes and scanning tunneling microscopes can be used to look at surfaces and to move atoms around. By designing different tips for these microscopes, they can be used for carving out structures on surfaces and to help guide self-assembling structures. By using, for example, the feature-oriented scanning positioning approach, atoms can be moved around on a surface with scanning probe microscopy techniques. At present, this is expensive and time-consuming for mass production but very suitable for laboratory experimentation.

In contrast, bottom-up techniques build or grow larger structures atom by atom or molecule by molecule. These techniques include chemical synthesis, self-assembly and positional assembly. Another variation of the bottom-up approach is molecular beam epitaxy or MBE. Researchers at Bell Telephone Laboratories, such as John R Arthur, Alfred Y Cho and Art C Gossard, developed and implemented MBE as a research tool in the late 1960s and 1970s. Samples made by MBE were key to the discovery of the fractional quantum Hall effect, for which the 1998 Nobel Prize in Physics was awarded. MBE allows scientists to lay down atomically precise layers of atoms and, in the process, build up complex structures. Important for research on semiconductors, MBE is also widely used to make samples and devices for the newly emerging field of spintronics.

Newer techniques such as dual polarisation interferometry are enabling scientists to measure quantitatively the molecular interactions that take place at the nanoscale.

However, new therapeutic products based on responsive nanomaterials, such as the ultradeformable, stress-sensitive transfersome vesicles, are under development and already approved for human use in some countries.

1.6.2. NANOSCALE EFFECTS

Two principal factors cause the properties of nanomaterials to differ significantly from other materials: increased relative surface area and quantum effects. Morphology (aspect ratio/size), hydrophobicity, solubility (release of toxic species), surface area/roughness, surface species (contamination/adsorption during synthesis and history), capacity to produce reactive oxygen species (ROS), structure/composition, competitive binding sites with receptors and dispersion/aggregation are the important properties of nanoparticles.

When particle sizes of solid matter in the visible scale are compared to what can be seen
in regular optical microscope, there is little difference in the properties of the particles. But when
particles are created with dimensions of about 1-100 nanometers (where the particles can be
“seen” only with powerful specialised microscopes), the materials' properties change
significantly from those at larger scales. This is the size scale where so-called quantum effects
rule the behaviour and properties of particles. Properties of materials are size-dependent in this
scale range. Thus, when particle size is made to be nanoscale, properties such as melting point,
fluorescence, electrical conductivity, magnetic permeability, and chemical reactivity change as a
function of the size of the particle.

Nanoscale gold illustrates the unique properties that occur at the nanoscale. Nanoscale
gold particles are not the yellow color with which we are familiar; nanoscale gold can appear red
or purple. At the nanoscale, motion of gold’s electrons is confined. Because this movement is
restricted, gold nanoparticles react differently with light compared to larger-scale gold particles.
Their size and optical properties can be put to practical use: nanoscale gold particles selectively
accumulate in tumors, where they can enable both precise imaging and targeted laser destruction
of the tumor by means that avoid harming healthy cells.

A fascinating and powerful result of the quantum effects of the nanoscale is the concept
of tunability of properties. That is, by changing the size of the particle, a scientist can literally
fine tune a material property of interest (changing fluorescence color; in turn, the fluorescence
color of a particle can be used to identify the particle and various materials can be “labeled" with
fluorescent markers for various purposes). Another potent quantum effect of the nanoscale is
known as tunnelling, which is a phenomenon that enables the scanning tunneling microscope
and flash memory for computing.

Nanoscale materials have far larger surface areas than similar masses of larger-scale
materials (Table 1.4). As surface area per mass of a material increases, a greater amount of the
material can come into contact with surrounding materials, thus affecting reactivity.

TABLE 1.4. Increase in surface area when a 1 m3 cube is subdivided into smaller cubes down to 1 nm.

Size of cube side Number of cubes Collective surface area

1.0 m 1 6 m2
0.1 m 103 = 1000 60 m2
0.01 m = 1 cm 106 = 1 million 600 m2
0.001 m = 1 mm 109 = 1 billion 6,000 m2
10-9 m = 1 nm 1027 6 x 109 m2 = 6,000 km2
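
The scaling in Table 1.4 follows directly from the geometry: a 1 m cube cut into cubes of side s gives (1/s)^3 cubes with a collective surface area of 6/s m2. A short Python sketch reproducing the table values:

for s, label in [(1.0, "1 m"), (0.1, "0.1 m"), (0.01, "1 cm"),
                 (0.001, "1 mm"), (1e-9, "1 nm")]:
    n_cubes = (1.0 / s) ** 3       # number of small cubes
    area_m2 = 6.0 / s              # collective surface area in m2
    print(f"{label:>5}: {n_cubes:.0e} cubes, {area_m2:.0e} m^2 "
          f"({area_m2 / 1e6:.0e} km^2)")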

1.6.3. NANOPESTICIDES

Nanopesticides, or nano plant protection products, represent an emerging technological development that, in relation to pesticide use, could offer a range of benefits including increased efficacy, durability and a reduction in the amounts of active ingredients that need to be used. A number of formulation types have been suggested, including emulsions (nanoemulsions), nanocapsules (with polymers) and products containing pristine engineered nanoparticles, such as

metals, metal oxides and nanoclays. These products, which are at different stages in the product
development cycle, can be used to improve the efficacy of existing pesticide active ingredients or
to enhance their environmental safety profiles or both.

Overall, the hypothesis that smaller means more reactive and thus more potent has not been substantiated for agrochemicals. The majority of nanopesticides described as "nano" in the literature greatly exceed the 100 nm size boundary that has been recommended for regulatory purposes. There are considerable issues relating to the definition of nanoparticles and how the criteria proposed could apply to nanopesticides. Most importantly, a definition based on size alone would exclude many recent so-called nanoformulations and, on the other hand, include products that have been on the market for decades without posing particular problems (microemulsions, formulants such as clays and polymers). In this context, it may be more useful to speak about nano-enabled formulation technology, rather than focusing only on the nanoparticles and how they should be defined (Fig. 1.5).

Fig. 1.5. Examples of nanopesticides: nano-emulsion, nano-capsule and metal nanoparticles in a polymer formulation.

When a commercial formulation for practical field application is desired, it is very important to employ materials that are compatible with the proposed application: environment friendly, readily biodegradable, not generating toxic degradation by-products and low cost. Common polymers (synthetic and natural ones) used in controlled release formulations (CRFs) of insecticides are listed in Table 1.5.

TABLE 1.5. Examples of polymers often used in nanoparticle production.

Polymer | Active compound | Nanomaterial
Lignin-polyethyleneglycol-ethylcellulose | Imidacloprid | Capsule
Polyethylene | Piperonyl butoxide, Deltamethrin | Capsule
Carboxymethylcellulose | Carbaryl | Capsule
Alginate-glutaraldehyde | Neem seed oil, Imidacloprid, Cyromazine | Clay
Polyamide | Pheromones | Fiber
Lignin | Aldicarb, Imidacloprid, Cyromazine | Granules
Polyethyleneglycol-dimethyl esters | Carbofuran | Micelle
Poly(methyl methacrylate)-poly(ethylene glycol) | Carbofuran, Imidacloprid | Suspension
Chitosan-poly(lactide), polyvinylchloride | Chlorpyrifos | Particle

If nanopesticides are defined as any formulation that intentionally includes elements in the nm size range and/or claims novel properties associated with this small size range, it would appear that some nanopesticides have already been on the market for several years. Nanopesticides encompass a great variety of products and cannot be considered as a single category. Nanopesticides can consist of organic ingredients (e.g. polymers) and/or inorganic ingredients in various forms (particles and micelles). The aims of nanoformulations are, generally, common to other pesticide formulations and consist in:

• Increasing the apparent solubility of poorly soluble active ingredient.


• Releasing the active ingredient in a slow/targeted manner and/or protecting the active
ingredient against premature degradation.

Nanoformulations are expected to :

1. Have significant impacts on the fate of the active ingredient.
2. Introduce new ingredients whose environmental fate is still poorly understood (e.g. nanosilver).

However, the current level of knowledge does not appear to allow a fair assessment of the advantages and disadvantages that will result from the use of some nanopesticides.

It is clear that a great deal of work will be required to successfully combine analytical
techniques that can detect, characterise (through size, size range, shape or nature, surface

properties) and quantify the active ingredient and adjuvants emanating from nanoformulations
and also to understand how their characteristics evolve with time, under realistic conditions.

1.6.4. NANOFERTILISERS

Nanofertilisers are nutrient carriers of nano-dimensions, typically ranging from 30 to 40 nm (1 nm = 10-9 m, or one billionth of a meter), capable of holding an abundance of nutrient ions due to their high surface area and of releasing them slowly and steadily, commensurate with crop demand. These fertilisers can be used to control the release of nutrients from the fertiliser granules so as to improve nutrient use efficiency (NUE) while preventing the nutrient ions from either getting fixed or lost in the environment.

As per the literature, it appears that nanofertilisers are more beneficial as compared to
chemical fertilisers :

• Three-times increase in nutrient use efficiency (NUE).
• Around 80-100 times lower requirement compared to chemical fertilisers.
• Nearly 10 times more stress tolerance by the crops.
• Complete bio-source, so eco-friendly.
• More nutrient mobilisation by the plants.
• About 17-54 per cent improvement in the crop yield.
• Improvement in soil aggregation, moisture retention and carbon build-up.

Nowadays, it appears that nanotechnology is progressively moving from the experimental into the practical arena. For example, the development of slow/controlled release fertilisers and the conditional release of pesticides and herbicides on the basis of nanotechnology has become critically important for promoting environment-friendly and sustainable agriculture. Nanotechnology may provide the feasibility of exploiting nanoscale or nanostructured materials as fertiliser carriers or controlled-release vectors for building so-called smart fertilisers as new facilities to enhance nutrient use efficiency and reduce the costs of environmental protection.

Encapsulation (Fig. 1.5) of fertilisers within a nanoparticle is one of these new facilities, which can be done in three ways :

1. The nutrient can be encapsulated inside nanoporous materials.


2. Coated with thin polymer film.
3. Delivered as particle or emulsions of nanoscale dimensions.

In addition, nanofertilisers will combine nanodevices in order to synchronise release of


fertiliser N and P with crop uptake, thus preventing nutrient losses to soil, water and air via direct
internalisation by crops and avoiding the interaction of nutrients with soil, microorganisms,
water and air.

Nanoporous Zeolites

Nanoclays and zeolites, naturally occurring minerals with a honeycomb-like layered crystal structure, offer another strategy for increasing fertiliser use efficiency. Their networks can be filled with nitrogen, potassium, phosphorus, calcium and a complete set of minor and trace nutrients for slow release on demand. Application of soluble N fertilisers is one of the major reasons for groundwater contamination. The nitrogen release dynamics of the absorbed form (in zeolites) is much slower than for the ionic form.

Urea-fertilised zeolite chips can be used as slow-release nitrogen fertilisers. Ammonium-charged zeolites have shown their capacity to raise the solubilisation of phosphate minerals and thus lead to improved phosphorus uptake and yield of crops. Studies conducted to check solubility and cation exchange in mixtures of rock phosphate and NH4- and K-saturated clinoptilolite showed that mixtures of zeolite and phosphate rock have the potential to provide slow-release fertilisation in synthetic soils by dissolution and ion-exchange reactions.

Slow/Controlled Release Nanofertilisers

Coating and binding of nano- and subnano-composites are able to regulate the release of nutrients from the fertiliser capsule. It has been shown that application of a nano-composite consisting of N, P, K, micronutrients and amino acids enhances the uptake and use of nutrients by grain crops. Moreover, nanotechnology could supply tools and mechanisms to synchronise the nitrogen release from fertilisers with crop requirements. This will be accomplished only when the nutrients can be directly internalised by the plants. Zinc-aluminium layered double-hydroxide nanocomposites have been employed for the controlled release of chemical compounds which act as plant growth regulators.

More recent strategies have focused on technologies to provide nanofertiliser delivery systems which can react to environmental changes. The final goal is the production of nanofertilisers that will release their payload in a controlled manner (slowly or quickly) in reaction to different signals such as heat, moisture etc. Furthermore, it is known that under nutrient limitation, crops secrete carbonaceous compounds into the rhizosphere to enable biotic mineralisation of N and/or P from soil organic matter and of P associated with soil inorganic colloids. These root exudates can be considered as environmental signals and can be selected to prepare nanobiosensors that could be incorporated into novel nanofertilisers. Some of the advantages related to transformed formulation of conventional fertilisers using nanotechnology are presented in Table 1.6.


TABLE 1.6. Advantages related to the transformed formulation of conventional fertilisers using nanotechnology (desirable property followed by examples of nanofertiliser-enabled technologies).

Controlled-release fertiliser formulation: So-called smart fertilisers might become reality through the transformed formulation of conventional products using nanotechnology. Nanostructured formulation might permit the fertiliser to intelligently control the release speed of nutrients to match crop uptake.

Solubility and dispersion of mineral micronutrients: Nano-sized formulation of mineral micronutrients may improve the solubility and dispersion of insoluble nutrients in soil, reduce soil absorption and fixation and increase their bio-availability.

Nutrient uptake efficiency: Nanostructured formulation might increase fertiliser efficiency and the uptake ratio of soil nutrients in crop production and so save fertiliser.

Controlled release modes: Both the release rate and the release pattern of nutrients from water-soluble fertilisers might be precisely controlled through encapsulation in envelopes of semi-permeable membranes coated with resin-polymer, waxes and sulphur.

Effective duration of nutrient release: Nanostructured formulation can extend the effective duration of nutrient supply of the fertiliser in the soil.

Loss rate of fertiliser nutrients: Nanostructured formulation can reduce the loss of fertiliser nutrients into the soil by leaching and/or leaking.

Since fertilisers, particularly synthetic fertilisers, have major potential to pollute soil, water and air, many efforts have been made in recent years to minimise these problems through agricultural practices and the design of new, improved fertilisers. Nanotechnology opens up potential novel applications in different fields of agriculture and biotechnology. Nanostructured formulations, through mechanisms such as targeted delivery, slow/controlled release and conditional release, could deliver their active ingredients in response to environmental triggers and biological demands more precisely. These mechanisms can be used in the design and construction of nanofertilisers. Use of such nanofertilisers increases their efficiency, reduces soil toxicity, minimises the potential negative effects associated with over-dosage and reduces the frequency of application. Nanofertilisers mainly delay the release of nutrients and extend the period of fertiliser effect. Clearly, there is an opportunity for nanotechnology to have a significant influence on energy, the economy and the environment by improving fertilisers. Hence, nanotechnology has a high potential for achieving sustainable agriculture, especially in developing countries.

1.6.5. NANOSENSORS

In the recent past, the development of sensing devices has grown rapidly. Laboratory assays can accurately detect a particular analyte causing disturbance in the field, but they have the drawbacks of being time consuming and costly to perform. Sensors, in contrast, give results from the live conditions of the field. Sensors monitor changes or effects caused by various pesticides, fertilisers and herbicides, besides physical conditions of the soil such as pH, soil moisture and crop growth. With wireless technology, sensor nodes are installed that allow a person to monitor what is happening in the field; all the nodes can be controlled at the same time through cloud computing or even through over-the-air programming.
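
As a minimal sketch of how such a sensor node might package its readings for transmission, the Python example below simulates one node reporting soil pH and moisture as a JSON payload. The node identifier, field names and the transport step (e.g. publishing over MQTT or HTTP to a cloud dashboard) are assumptions for illustration, not a description of any particular hardware or platform.

import json
import random
import time

def read_sensors():
    # Simulate one reading cycle of a field sensor node; a real node would
    # query actual pH and soil-moisture probes instead of random numbers.
    return {
        "node_id": "field-07",                      # hypothetical node identifier
        "timestamp": time.time(),
        "soil_ph": round(random.uniform(5.5, 8.0), 2),
        "soil_moisture_pct": round(random.uniform(10, 45), 1),
    }

def build_payload(reading):
    # Serialise the reading as JSON, a common format for sending node data
    # to a cloud service over MQTT or HTTP.
    return json.dumps(reading)

if __name__ == "__main__":
    payload = build_payload(read_sensors())
    print(payload)   # in practice this string would be published to the cloud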

Nanosensors are any biological, chemical or surgical sensory points used to convey information about nanoparticles to the macroscopic world. Their uses mainly include various medicinal purposes and serving as gateways to building other nanoproducts, such as computer chips that work at the nanoscale and nanorobots.

Sensors Using Semiconductor Nanowire Detection Elements

These sensors are capable of detecting a range of chemical vapours. When molecules bond to nanowires made from semiconducting materials such as zinc oxide, the conductance of the wire changes. The amount by which the conductance changes, and in which direction, depends on the molecule bonded to the nanowire.

For example, nitrogen dioxide gas reduces how much current the wire conducts and
carbon monoxide increases the conductivity. Researchers can calibrate a sensor to determine
which chemical is present in the air by measuring how the current changes when a voltage is
applied across the nanowires.
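
The calibration logic described above can be pictured with a short sketch: the sign and magnitude of the fractional conductance change are compared against previously measured reference responses to decide which gas is present. The reference values and tolerance below are hypothetical illustrative numbers, not calibration data from any real device.

# Illustrative calibration lookup for a nanowire gas sensor.
# Reference responses (fractional conductance change) are assumed values that
# would in practice come from calibration against known gas samples.
REFERENCE_RESPONSES = {
    "nitrogen dioxide": -0.30,   # NO2 reduces conductance
    "carbon monoxide": +0.20,    # CO increases conductance
}

def identify_gas(baseline_conductance, measured_conductance, tolerance=0.05):
    # Return the reference gas whose fractional conductance change best
    # matches the measurement, or None if nothing is within tolerance.
    change = (measured_conductance - baseline_conductance) / baseline_conductance
    best_gas, best_diff = None, tolerance
    for gas, reference_change in REFERENCE_RESPONSES.items():
        diff = abs(change - reference_change)
        if diff < best_diff:
            best_gas, best_diff = gas, diff
    return best_gas

# Example: conductance drops from 1.00 to 0.72 (arbitrary units), about -28 %:
print(identify_gas(1.00, 0.72))   # -> nitrogen dioxide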

Semiconducting Carbon Nanotubes

To detect chemical vapours, carbon nanotubes can first be functionalised by bonding them with molecules of a metal, such as gold. Molecules of chemicals then bond to the metal, changing the conductance of the carbon nanotube. As with semiconducting nanowires, the amount and direction of the conductance change depend on the molecule that bonds to the nanotube. This type of sensor is now commercially available.

Nanotubes and Nanowires that Detect Bacteria or Viruses

These also utilise changes in electrical conductivity, in this case that of carbon nanotubes to which an antibody is bonded. When a matching bacterium or virus attaches to the antibody, a change in conductivity can be measured.

In this process the nanotubes are attached to metal contacts in the detector and a voltage is applied across the nanotube. When a bacterium or virus bonds to the nanotube, the current changes and generates a detection signal. Researchers believe that this method should provide a fast way to detect bacteria and viruses.

One promising application of this technique is checking for bacteria in hospitals. If hospital personnel can spot contaminating bacteria, they may be able to reduce the number of patients who develop complications such as staph infections.


Nanocantilevers

These devices are being used to develop sensors that can detect single molecules. Such sensors take advantage of the fact that a nanocantilever oscillates at a resonance frequency which changes if a molecule lands on the cantilever and alters its mass. Coating the cantilever with molecules, such as antibodies, that bond to a particular bacterium or virus determines which bacterium or virus will bond to the cantilever.
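
The mass-sensing principle can be made concrete with the standard harmonic-oscillator relation f = (1/2π)√(k/m): adding mass lowers the resonance frequency, and the measured shift reveals the added mass. The spring constant and masses in the sketch below are arbitrary illustrative numbers, not data for any real cantilever.

import math

def resonance_frequency(spring_constant, mass):
    # Cantilever modelled as a simple harmonic oscillator:
    # f = (1 / (2*pi)) * sqrt(k / m)
    return (1.0 / (2.0 * math.pi)) * math.sqrt(spring_constant / mass)

# Illustrative numbers only (not a real device):
k = 0.05               # spring constant, N/m
m_cantilever = 1e-15   # effective cantilever mass, kg
m_adsorbed = 1e-18     # mass of adsorbed particles, kg

f_before = resonance_frequency(k, m_cantilever)
f_after = resonance_frequency(k, m_cantilever + m_adsorbed)

print(f"frequency shift = {f_before - f_after:.3e} Hz")  # frequency drops when mass lands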

One example of nanoparticles used in sensors is a hydrogen sensor that contains a layer of closely spaced palladium nanoparticles formed by a beading action similar to water droplets collecting on a windshield. When hydrogen is absorbed, the palladium nanoparticles swell, which causes shorts between nanoparticles and lowers the resistance of the palladium layer.

Another use of nanoparticles is in the detection of volatile organic compounds (VOCs). Researchers have found that embedding metal nanoparticles made of substances such as gold in a polymer film creates a VOC nanosensor.

Nanotechnology applications are being developed to improve soil fertility and crop production. Nanosensors could also monitor crop and animal health, and magnetic nanoparticles could remove soil contaminants. Lab-on-a-chip technology could also have significant impacts on developing nations.

1.6.6. NANOBIOSENSORS

A nanobiosensor is a modified version of a biosensor, which may be defined as a compact analytical device or unit incorporating a biological or biologically derived sensitised element linked to a physico-chemical transducer.

With the progression of science, miniature nanobiosensors have been designed and developed in the 21st century based on the ideas of nanotechnology. Recently, researchers have used an integrated approach, combining nanoscience, electronics, computing and biology, to create biosensors with extraordinary sensing capabilities that show unprecedented spatial and temporal resolution and reliability. Nanosensors with immobilised bioreceptor probes that are selective for target analyte molecules are called nanobiosensors. A nanobiosensor is usually built on the nanoscale to obtain, process and analyse data at the atomic scale. Nanobiosensors open up new opportunities for basic research and provide tools for real bioanalytical applications that were impossible in the past. They can be integrated into other technologies such as lab-on-a-chip to facilitate molecular analysis. Their applications include detection of analytes like urea, glucose and pesticides, monitoring of metabolites and detection of various microorganisms/pathogens.

Characteristics for an Ideal Nanobiosensor

• Highly specific for the purpose of the analysis, i.e. the sensor must be able to distinguish between the analyte and any other material.
• Stable under normal storage conditions.
• The specific interaction with the analyte should be independent of physical parameters such as stirring, pH and temperature.
• Reaction time should be minimal.
• Responses obtained should be accurate, precise, reproducible and linear over the useful analytical range, and free from electrical noise.
• The nanobiosensor must be tiny, biocompatible, non-toxic and non-antigenic.
• It should be cheap, portable and capable of being used by semi-skilled operators.

Role of Nanobiosensors in Agriculture

Presently, nanomaterial-based biosensors exhibit fascinating prospects over traditional biosensors. Nanobiosensors have marked advantages such as enhanced detection sensitivity and specificity and possess great potential for applications in different fields including environmental and bioprocess control, quality control of food, agriculture, biodefence and, particularly, medicine. Here, however, we are concerned with the role of nanobiosensors in agriculture and agroproducts. Some of their potential applications are listed below:

As a diagnostic tool for soil quality and disease assessment: Nanosensors may be used to diagnose soil-borne diseases (caused by infecting soil microorganisms such as viruses, bacteria and fungi) via the quantitative measurement of differential oxygen consumption in the respiration (relative activity) of "good microbes" and "bad microbes" in the soil. Measurement proceeds through the following steps: two sensors impregnated with "good microbes" and "bad microbes", respectively, are immersed in a suspension of the soil sample in buffer solution and the oxygen consumption of the two microbe groups is detected. By comparing the two readings, we can easily decide which microbes the soil favours. Beyond that, we can also predict beforehand whether or not a soil-borne disease is about to break out in the tested soil. The biosensor thus offers an innovative, semi-quantitative technique for diagnosing soil condition.
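
A minimal sketch of the comparison step described above, assuming each sensor reports an oxygen consumption rate for its immobilised microbe group; the readings and the decision margin are illustrative only, not measured values.

def diagnose_soil(o2_good_microbes, o2_bad_microbes, margin=1.2):
    # Compare oxygen consumption (relative activity) of the two sensors.
    # If the pathogenic ("bad") microbes are markedly more active, flag a risk
    # of soil-borne disease; the margin factor is an assumed threshold.
    if o2_bad_microbes > margin * o2_good_microbes:
        return "soil favours pathogens - disease outbreak risk"
    if o2_good_microbes > margin * o2_bad_microbes:
        return "soil favours beneficial microbes"
    return "no clear dominance - monitor further"

# Illustrative readings (arbitrary units of O2 consumed per minute):
print(diagnose_soil(o2_good_microbes=3.1, o2_bad_microbes=7.8))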

As an agent to promote sustainable agriculture: A nanofertiliser refers to a product that delivers nutrients to crops encapsulated within a nanoparticle. There are three ways of encapsulation, as discussed under nanofertilisers (1.6):

1. The nutrient can be encapsulated inside nanomaterials such as nanotubes or nanoporous materials.
2. The nutrient can be coated with a thin protective polymer film.
3. The nutrient can be delivered as particles or emulsions of nanoscale dimensions.

Nanofertilisers could be used to reduce nitrogen loss due to leaching, emissions and long-term assimilation by soil microorganisms. Recently, carbon nanotubes were shown to penetrate tomato seeds and zinc oxide nanoparticles were shown to enter the root tissue of ryegrass. This suggests that new nutrient delivery systems exploiting the nanoscale porous domains on plant surfaces can be developed. However, nanofertilisers should show sustained release of nutrients on demand while preventing them from prematurely converting into chemical or gaseous forms that cannot be absorbed by plants. To achieve this, a biosensor could be attached to the nanofertiliser that allows selective nitrogen release linked to time, environmental conditions and soil nutrient status.


Slow/controlled release of fertilisers may also improve the soil by decreasing the toxic effects associated with fertiliser over-application.

Zeolites are naturally occurring crystalline aluminium silicates that can:

1. Enable better plant growth.
2. Improve the efficiency and value of fertiliser.
3. Improve water infiltration and retention.
4. Improve yield.
5. Retain nutrients for use by plants.
6. Improve long-term soil quality.
7. Reduce the loss of nutrients from the soil.

Zeolite holds nutrients in the root zone for plants to use when required. This leads to more efficient use of N and K fertilisers: either less fertiliser for the same yield, or the same amount of fertiliser lasting longer and producing higher yields. An added benefit of zeolite application is that, unlike other soil amendments (gypsum and lime), it does not break down over time but remains in the soil to help improve nutrient and water retention permanently. With subsequent applications, the zeolite will further improve the soil's ability to retain nutrients and produce improved yields. Zeolites linked to a nanobiosensor can modernise agriculture in the sense that the biosensor can sense a deficiency in either plant or soil and control the release of the water and nutrients retained in the zeolite.

Pesticides encapsulated inside nanoparticles are being developed that can be time-released or have their release linked to an environmental trigger. Also, combined with a smart delivery system, herbicides could be applied only when necessary, resulting in greater crop production and less injury to agricultural workers.

As a device to detect contaminants and other molecules: Several nanobiosensors are designed to detect contaminants, pests, nutrient content and plant stress due to drought, temperature or pressure. They also help farmers to improve efficiency by applying inputs only when necessary. Organophosphorus pesticides such as dichlorvos and paraoxon can be monitored at very low levels by liposome-based biosensors. When bacteria are resistant to phages (uninfected bacteria), small voltage fluctuations are observed in the nanowell, displayed as a power spectral density (PSD). Biosensors developed using photosystem II (PS II), which is known to bind several groups of herbicides and is isolated from photosynthetic organisms, may have the potential to monitor polluting chemicals, leading to the setup of a low-cost, easy-to-use apparatus able to reveal specific herbicides and, eventually, a wide range of organic compounds present in industrial and urban effluents, sewage sludge, landfill leak-water, groundwater and irrigation water.

As a tool for effective detection of DNA and protein: There are several nanosensors that detect specific kinds of DNA oligonucleotides. The first nanowire field-effect transistor based biosensor achieves simple and ultra-sensitive electronic detection of DNA methylation and avoids complicated bisulphite treatment and PCR amplification.


Similarly, using protein-ligand (antigen) interaction properties, protein-nanoparticle based biosensors can achieve ultra-sensitive detection of specific protein molecules. These DNA- and protein-detecting biosensors might play a vital role in the detection of plant pathogens, abnormalities in plants linked to mineral deficiency and biomarkers, and in discriminating one plant species from another.

1.6.7. USE OF NANOTECHNOLOGY IN AGRICULTURE

Agriculture has always been the backbone of developing countries. It not only feeds the people but also fuels the economy. With the challenge of providing food to an ever-increasing population, a new technology is needed that gives higher yields in a short period without polluting the environment, enabling sustainable crop production. Nanotechnology appears to be a smarter way towards sustainable agriculture (Fig. 1.6).

As indicated in Fig. 1.6, some of the potential applications of nanotechnology in agriculture include:

A. Increasing productivity using nanopesticides and nanofertilisers.
B. Improving soil quality using nanozeolites and hydrogels.
C. Stimulating crop growth using nanomaterials (SiO2, TiO2 and carbon nanotubes).
D. Providing smart monitoring using nanosensors linked by wireless communication devices.

Nanotechnology can be exploited across the value chain of the entire agricultural production system. It is emerging as the sixth revolutionary technology of the current era, after the industrial revolution of the mid-1700s, the nuclear energy revolution of the 1940s, the green revolution of the 1960s, the information technology revolution of the 1980s and the biotechnology revolution of the 1990s. Nanotechnology is now an emerging and fast-growing field of science being exploited over a wide range of scientific disciplines, including agriculture.


Fig. 1.6. Potential applications of nanotechnology in agriculture.

Nanotechnology in Tillage

Mechanical tillage practices improve soil structure and increase porosity, leading to better distribution of soil aggregates and eventually modifying the physical properties of the soil. Literature on the effect of nanoparticles on tilth and tillage is limited.

Use of nanomaterials increases soil pH and improves soil structure. Nanomaterials also reduce the mobility, availability and toxicity of heavy metals, besides reducing soil erosion. Nanoparticles improve soil quality by increasing water-holding capacity and nutrient availability.

Nanoparticles in soil reduce cohesion and internal friction besides reducing the shear
strength of the soil. Reduction in adhesion of soil particles allows easy crushing of lumps with
less energy.

Nanotechnology in Seed Science

Seed is nature's nano-gift to man. It is a self-perpetuating biological entity that is able to survive in harsh environments on its own. Nanotechnology can be used to harness the full potential of seed. Seed production is a tedious process, especially in wind-pollinated crops, and detecting the pollen load that will cause contamination is a sure method of ensuring genetic purity.


Pollen flight is determined by air temperature, humidity, wind velocity and the pollen production of the crop. Use of bionanosensors specific to contaminating pollen can alert growers to possible contamination and thus reduce it. The same method can also be used to prevent pollen from genetically modified crops from contaminating field crops. Novel genes are being incorporated into seeds and sold in the market; tracking of sold seeds could be done with the help of nanobarcodes, which are encodable, machine-readable, durable and sub-micron sized taggants. Diseases spread through seeds, and stored seeds are often killed by pathogens. Nanocoating of seeds using elemental forms of Zn, Mn, Pd, Pt, Au and Ag will not only protect seeds but also use far smaller quantities than are applied today. Technologies such as encapsulation and controlled-release methods have revolutionised the use of pesticides. Seeds can also be imbibed with nanoencapsulated formulations of specific bacterial strains, termed smart seeds. This will reduce seed rate, ensure the right field stand and improve crop performance. A smart seed can be programmed to germinate when adequate moisture is available and can, for instance, be dispersed over a mountain range for reforestation. Coating seeds with a nanomembrane that senses the availability of water and allows seeds to imbibe only when the time is right for germination, aerial broadcasting of seeds embedded with magnetic particles, detecting moisture content during storage so that appropriate measures can be taken to reduce damage, and the use of bioanalytical nanosensors to determine seed ageing are some possible thrust areas of research. Metal oxide nanoparticles and carbon nanotubes can improve the germination of rainfed crops. Carbon nanotubes (CNTs) serve as new pores for water permeation by penetrating the seed coat and act as a passage to channel water from the substrate into the seeds. These processes facilitate germination, which can be exploited in rainfed agricultural systems.
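
The moisture-gated germination idea can be illustrated with a simple threshold rule, assuming the sensing nanomembrane behaves like a switch that only admits water once soil moisture stays above a critical level for long enough; the threshold and duration below are hypothetical values chosen for illustration.

def should_allow_imbibition(moisture_readings, threshold_pct=20.0, required_hours=24):
    # Conceptual "smart seed" rule: permit water uptake only after soil moisture
    # has stayed above the threshold for a sustained period, so the seed does
    # not germinate on a brief shower.
    consecutive = 0
    for reading in moisture_readings:          # one reading per hour
        consecutive = consecutive + 1 if reading >= threshold_pct else 0
        if consecutive >= required_hours:
            return True
    return False

# Illustrative hourly soil-moisture series (%) after a rainfall event:
series = [12, 15, 22, 25, 26] + [27] * 30
print(should_allow_imbibition(series))   # True once moisture stays high long enough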

Nanotechnology in Water Use

Water purification using nanotechnology exploits nanoscopic materials such as carbon nanotubes and alumina fibres for nanofiltration. It also utilises nanoscopic pores in zeolite filtration membranes, as well as nanocatalysts and magnetic nanoparticles. Nanosensors, such as those based on titanium oxide nanowires or palladium nanoparticles, are used for analytical detection of contaminants in water samples.

The impurities that nanotechnology can tackle depend on the stage of water purification at which the technique is applied. It can be used for the removal of sediments, chemical effluents, charged particles, bacteria and other pathogens. Toxic trace elements such as arsenic and viscous liquid impurities such as oil can also be removed using nanotechnology.

The main advantages of nanofilters over conventional systems are that less pressure is required to pass water across the filter, they are more efficient, they have very large surface areas and they can be cleaned more easily by back-flushing.

For instance, carbon nanotube membranes can remove almost all kinds of water contaminants, including turbidity, oil, bacteria, viruses and organic contaminants. Although their pores are significantly smaller, carbon nanotube membranes have been shown to have an equal or faster flow rate compared with larger-pore membranes, possibly because of the smooth interior of the nanotubes. Nanofibrous alumina filters and other nanofibre materials also remove negatively charged contaminants such as viruses, bacteria and organic and inorganic colloids at a faster rate than conventional filters.

Researchers point out that several fundamental aspects of nanotechnology have raised concerns among the public and activist groups. They concede that the risks associated with nanomaterials may not be the same as the risks associated with the bulk versions of the same materials, because the much greater surface area to volume ratio of nanoparticles can make them more reactive than bulk materials and lead to so far unrecognised and untested interactions with biological surfaces. Water purification based on nanotechnology has not yet led to any human health or environmental problems, but researchers echo the sentiment of others that further research into the biological interactions of nanoparticles should be carried out.

Nanotechnology in Fertilisers

A nanofertiliser refers to a product in the nanometre regime that delivers nutrients to crops, for example by encapsulation inside nanomaterials, coating with a thin protective polymer film, or delivery as particles or emulsions of nanoscale dimensions. Surface coatings of nanomaterials on fertiliser particles hold the material more strongly than conventional surfaces, owing to higher surface tension, and thus help in controlled release. Delivery of agrochemical substances such as fertilisers supplying macro- and micronutrients to plants is an important aspect of the application of nanotechnology in agriculture.

Conventional fertilisers are generally applied to crops by either spraying or broadcasting. However, one of the major factors that decide the mode of application is the final concentration of the fertiliser reaching the plant. In a practical scenario, very little of the applied fertiliser reaches the targeted site, owing to leaching of chemicals, drift, runoff, evaporation, hydrolysis by soil moisture and photolytic and microbial degradation. It has been estimated that around 40-70 per cent of the nitrogen, 80-90 per cent of the phosphorus and 50-90 per cent of the potassium content of applied fertilisers are lost to the environment and never reach the plant, which causes economic and environmental losses. These problems have led to the repeated use of fertiliser and pesticide, which adversely affects the inherent nutrient balance of the soil. Hence, it is very important to optimise the use of chemical fertilisation to fulfil crop nutrient requirements and to minimise the risk of environmental pollution.
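
To make the quoted loss ranges concrete, the sketch below computes how much applied nutrient would remain available to the crop at the mid-points of those ranges; the application rates are arbitrary example figures, not recommendations.

# Mid-points of the loss ranges quoted above (fraction lost to the environment).
LOSS_FRACTION = {"N": 0.55, "P": 0.85, "K": 0.70}

def nutrient_reaching_crop(applied_kg_per_ha):
    # Estimate the nutrient actually available to the crop after typical losses.
    return {nutrient: round(applied * (1.0 - LOSS_FRACTION[nutrient]), 1)
            for nutrient, applied in applied_kg_per_ha.items()}

# Example application (kg/ha), chosen only for illustration:
print(nutrient_reaching_crop({"N": 120, "P": 60, "K": 40}))
# -> roughly {'N': 54.0, 'P': 9.0, 'K': 12.0} kg/ha actually reaching the crop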

Nanotechnology has made it feasible to explore nanoscale or nanostructured materials as fertiliser carriers or controlled-release vectors for building the so-called smart fertilisers as new facilities to enhance nutrient use efficiency and reduce the costs of environmental pollution.

Localised application of large amounts of fertiliser, in the form of ammonium salts, urea and nitrate or phosphate compounds, is harmful. Besides, much of the fertiliser is unavailable to plants as it is lost through runoff and leaching, causing pollution. Nanomaterials have potential contributions to make in the slow release of fertilisers. Nanocoating, or surface coating of nanomaterials on fertiliser particles, holds the material more strongly than conventional surfaces due to higher surface tension. Moreover, nanocoating provides surface protection for larger particles.


Fertilisers with a sulphur nanocoating (100 nm layer) are useful slow-release fertilisers, as the sulphur content is beneficial, especially for sulphur-deficient soils. The stability of the coating reduces the rate of dissolution of the fertiliser and allows slow, sustained release of the sulphur-coated fertiliser. In addition to sulphur nanocoatings, encapsulation of urea and phosphate and their controlled release will be beneficial in meeting soil and crop demands. Other nanomaterials with potential application include kaolin and polymeric biocompatible nanoparticles; biodegradable polymeric chitosan nanoparticles (~78 nm) have been used for the controlled release of NPK fertiliser sources such as urea, calcium phosphate and potassium chloride.

Slow-release nanofertilisers and nanocomposites are excellent alternatives to soluble fertilisers. Because nutrients are released at a slower rate throughout crop growth, plants are able to take up most of the nutrients without waste. Slow release of nutrients into the environment can be achieved by using zeolites, a group of naturally occurring minerals with a honeycomb-like layered crystal structure. Their network of interconnected tunnels and cages can be loaded with nitrogen and potassium, combined with other slowly dissolving ingredients containing phosphorus, calcium and a complete suite of minor and trace nutrients. Zeolite acts as a reservoir for nutrients that are slowly released "on demand". Fertiliser particles can also be coated with nanomembranes that facilitate slow and steady release of nutrients.
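
One hedged way to picture "slow, steady release" is a first-order release model: the released fraction grows as 1 - exp(-kt), and a nano-coated or zeolite-loaded carrier corresponds to a much smaller rate constant than a readily soluble fertiliser. The rate constants below are assumptions chosen purely for comparison, not measured values.

import math

def fraction_released(rate_constant_per_day, days):
    # First-order release model: released fraction = 1 - exp(-k * t).
    return 1.0 - math.exp(-rate_constant_per_day * days)

# Assumed rate constants (per day), illustrative only:
K_CONVENTIONAL = 0.30   # readily soluble fertiliser dissolves within days
K_SLOW_RELEASE = 0.03   # nano-coated / zeolite-loaded carrier releases slowly

for day in (1, 7, 30, 60):
    conventional = fraction_released(K_CONVENTIONAL, day)
    slow = fraction_released(K_SLOW_RELEASE, day)
    print(f"day {day:2d}: conventional {conventional:5.1%}  slow-release {slow:5.1%}")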

Nanotechnology in Plant Protection

Currently, spraying of pesticides involves either knapsack sprayers that deliver large droplets (9-66 µm) associated with splash losses, or ultra-low-volume sprayers for controlled droplet application (CDA) with smaller droplets (3-28 µm) that cause spray drift. Constraints due to droplet size may be overcome by using nanoparticle-encapsulated or nano-sized pesticides that will contribute to efficient spraying and a reduction in spray drift and splash losses.

Another practical problem faced during pesticide application in the field is the settling of formulation components in the spray tank and clogging of spray nozzles. A recent nano-sized fungicide (~100 nm, Banner MAXX, Syngenta) prevented spray tank filters from clogging, did not require mixing and did not settle in the spray tank owing to its smaller particle size. Furthermore, this fungicide did not separate from water for up to one year because of its nano size, whereas fungicides containing larger-particle ingredients typically required agitation every two hours to prevent clogging in the tank.

Persistence of pesticides in the initial stage of crop growth helps to bring the pest population below the economic threshold level and provides effective control for a longer period. Hence, the use of active ingredients on the applied surface remains one of the most cost-effective and versatile means of controlling insect pests. In order to protect the active ingredient from adverse environmental conditions and to promote persistence, a nanotechnology approach, namely nanoencapsulation, can be used to improve the insecticidal value.

Nanoencapsulation (Fig. 1.5) comprises nano-sized particles of the active ingredient sealed within a thin-walled sac or shell (protective coating). Nanoencapsulation of insecticides, fungicides or nematicides will help in producing a formulation which offers effective control of pests while preventing the accumulation of residues in soil. In order to protect the active ingredient from degradation and to increase persistence, a nanotechnology approach of controlled release of the active ingredient may be used to improve the effectiveness of the formulation, which may greatly decrease the amount of pesticide input and the associated environmental hazards.

Nanopesticides will reduce the rate of application because the quantity of product actually being effective is at least 10-15 times smaller than that applied with classical formulations; hence an amount much smaller than normal could be required to achieve better and more prolonged management. Several pesticide manufacturers are developing pesticides encapsulated in nanoparticles. These pesticides may be time-released or released upon the occurrence of an environmental trigger (for example, temperature, humidity or light). It is unclear whether these pesticide products will be commercially available in the short term. Clay nanotubes (halloysite) have been developed as low-cost carriers of pesticides, for extended release and better contact with plants; they can reduce the amount of pesticide used by 70-80 per cent, thereby reducing the cost of pesticide application with minimum impact on water streams.
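
The claimed dose reductions translate into simple arithmetic, sketched below with an illustrative field rate; the calculation simply takes the text's "10-15 times smaller effective quantity" and "70-80 per cent reduction" figures at face value.

def reduced_dose(conventional_dose, reduction_factor):
    # Dose needed if the nano-formulation is 'reduction_factor' times
    # more effective than the conventional formulation.
    return round(conventional_dose / reduction_factor, 2)

conventional_rate = 2.0   # illustrative conventional pesticide rate, kg a.i./ha

# Encapsulated formulation assumed 10-15x more effective (range quoted above):
print(reduced_dose(conventional_rate, 10), reduced_dose(conventional_rate, 15))   # 0.2 0.13
# Halloysite carrier claimed to cut pesticide use by 70-80 per cent:
print(round(conventional_rate * (1 - 0.70), 2), round(conventional_rate * (1 - 0.80), 2))   # 0.6 0.4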

Nanotechnology in Weed Management

A multi-species weed flora treated with a single herbicide in the cropped environment results in poor control and herbicide resistance. Continuous exposure of a plant community having mild susceptibility to one herbicide in one season and a different herbicide in another season develops resistance in due course, and the weeds become uncontrollable through chemicals. Developing a target-specific herbicide molecule encapsulated in a nanoparticle is aimed at a specific receptor in the roots of target weeds; the molecule enters the root system and is translocated to parts where it inhibits glycolysis of the food reserve in the root system. This will make the specific weed starve for food and ultimately be killed.

In rainfed areas, application of herbicides under insufficient soil moisture may lead to loss as vapour. We are still unable to predict rainfall very precisely, and herbicides cannot be applied in advance in anticipation of rainfall. Controlled release of encapsulated herbicides is expected to take care of weeds competing with crops. Adjuvants for herbicide application are currently available that claim to include nanomaterials. One nanosurfactant based on soybean micelles has been reported to make glyphosate-resistant crops susceptible to glyphosate when it is applied with the 'nanotechnology-derived surfactant'.

Excessive use of herbicides leaves residues in the soil and causes damage to succeeding crops. Continuous use of a single herbicide leads to the evolution of herbicide-resistant weed species and a shift in weed flora. Atrazine, an s-triazine-ring herbicide used globally for the pre- and post-emergence control of broadleaf and grassy weeds, has high persistence (half-life of 125 days) and mobility in some types of soils. Residual problems due to the application of atrazine pose a threat to the widespread use of the herbicide and limit the choice of crops in rotation. It appears that application of magnetite nanoparticles modified with silver and stabilised with carboxymethyl cellulose (CMC) can degrade atrazine residues.
