Temperature and Rainfall Change Analysis With R Program
I. RCLIMDEX
The data are imported into R with a loader script that opens the RClimDex window. The RClimDex run under R returns four sets of documents: the climate indices, the quality-control results, the graphics, and the trends. The indices are 27 indicators divided among precipitation, maximum temperature and minimum temperature, including:
Temperature indices based on the daily minimum and maximum temperature (SU25, TXx, TNx, TMaxMean, TNn, TMinMean, DTR, etc.).
Precipitation indices describing wetness and rainfall (PRCPTOT, SDII, CDD, CWD, RX1day, RX5day, R10, R20, R95p, R99p, etc.).
The log document provides the graphs of temperature and precipitation variations, as well as other illustrations, in Excel format for the tables and PDF for the images.
The plot file gathers the figures of the climate change indices together with their respective slopes, each subjected to the Mann-Kendall trend test in order to assess the significance of the trend (sign of the slope, Student significance, regression line, correlation, coefficient of determination). The last document, Trend, summarizes the information of the previous file in graphic form.
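As an illustration, some of these indices can be reproduced directly in R from a daily station file. The sketch below is only an assumption-based example: it supposes a data frame named daily with columns Year, tmax (daily maximum temperature, in degrees Celsius) and prcp (daily precipitation, in mm), and computes SU25, TXx and RX1day for each year.
# Hedged sketch of three ETCCDI-style indices, assuming a data frame 'daily'
# with columns Year, tmax (daily max temperature) and prcp (daily precipitation)
su25   <- aggregate(tmax ~ Year, data = daily, FUN = function(x) sum(x > 25))  # SU25: days with TX > 25 degrees C
txx    <- aggregate(tmax ~ Year, data = daily, FUN = max)                      # TXx: hottest day of each year
rx1day <- aggregate(prcp ~ Year, data = daily, FUN = max)                      # RX1day: wettest day of each year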
II. RClimTool
The import is done under R with a loader script that opens the RClimTool window. From the RClimTool outputs, three files have to be extracted and used: homogeneity, Missing_data and Quality_Control. The first file produced is Quality_Control: for each variable it examines the atypical (outlier) values present in the daily, monthly and annual observations. The second is the missing-data file; the processing is carried out separately with and without the missing values. The last document, homogeneity, presents the various tests of normality, Mann-Kendall significance and correlation between sites, with the associated graphs. Missing values have to be dealt with because they prevent the calculation loop from running normally. Deterministic statistics such as the mean or the median, or probabilistic techniques (Amelia, MICE, PPCA, etc.), are used to fill in the missing values provided they do not exceed certain thresholds.
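For example, the gap filling mentioned above can be sketched in a few lines; the data frame tmin_daily used here is an assumed example, not an RClimTool object.
# Hedged sketch: two ways of filling missing values ('tmin_daily' is an assumed data frame with NAs)
# (a) simple deterministic fill of one column by its median
# tmin_daily$tmin[is.na(tmin_daily$tmin)] <- median(tmin_daily$tmin, na.rm = TRUE)
# (b) probabilistic multiple imputation with the mice package
library(mice)
imp <- mice(tmin_daily, m = 5, method = "pmm", seed = 123)   # predictive mean matching
tmin_filled <- complete(imp)                                 # one completed data set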
III. RCLIMTREND
Several statistical methods for climate science are provided by this program: absolute homogeneity tests (absolute SNHT, SNHT for different standard deviations, break tests, Buishand, Pettitt, the von Neumann ratio and rank ratio, Worsley and Craddock), relative homogeneity tests (absolute SNHT, SD breaks), correlation analysis, and outlier detection.
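Most of these tests are also available directly in R; the following lines are a minimal sketch with the trend package, applied to the built-in Nile annual series purely for illustration.
# Hedged sketch: absolute homogeneity tests from the 'trend' package on a sample annual series
library(trend)
x <- as.numeric(Nile)
snh.test(x)        # standard normal homogeneity test (SNHT)
bu.test(x)         # Buishand U test
pettitt.test(x)    # Pettitt change-point test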
IV. ClimPact
The ClimPact program runs in the same way as RClimDex and its outputs are similar, but this time the study is more detailed than the previous one, with an evaluation of extreme variations. In addition, it provides supplementary outputs on the threshold, as well as various descriptive statistics (boxplot, histogram, etc.). Comparing the respective Mann-Kendall test values for the retained indices, the results are found to be nearly identical.
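A comparable quick check (descriptive plots plus a Mann-Kendall test on one retained index) can be written in a few lines; index_series below is a synthetic placeholder, not a ClimPact output.
# Hedged sketch: descriptive statistics and a Mann-Kendall test on one annual index
library(trend)
set.seed(1)
index_series <- rnorm(50, mean = 20, sd = 3)   # placeholder for an annual index series
boxplot(index_series, main = "Distribution of the index")
hist(index_series, main = "Histogram of the index")
mk.test(index_series)                          # Mann-Kendall trend test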
V. RHtest
With this program it is possible to detect change points in the studied series and to indicate what needs adjustment. Geoclim, Anclim, EdGCM, MASH, ProClimDB, etc. are spatial analysis tools designed for climatology. They provide scientists and non-scientists with decision support and make it easier to fill observation gaps (precipitation and temperature) using a mix of field observations, readily available satellite data and station networks.
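RHtest is driven from its own scripts, but the underlying idea, locating a break in the mean of a series and deciding what to adjust, can be sketched with the strucchange package on the built-in Nile series.
# Hedged sketch (not the RHtest interface): locating a break in the mean of a series
library(strucchange)
bp <- breakpoints(Nile ~ 1)   # optimal break point(s) in the mean
summary(bp)
confint(bp)                   # confidence interval for the break date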
Quality Control
The preliminary study of data quality detects extreme values by computing the maximum and minimum statistics. Missing data disturb the analysis of the observation series used to calculate the indices under the RClimDex program (Zhang and Yang, 2004). A month containing more than 3 missing days is not taken into account in its year, and an incomplete year is likewise excluded from the index calculations. Statistical indicators such as the median make it possible to fill in the missing values when the proportion of such values is negligible.
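The 3-missing-days rule can be verified with a few lines of R; the sketch below assumes a daily data frame named daily with columns Year, Month and prcp, and lists the months that would be excluded.
# Hedged sketch: flag months with more than 3 missing daily values
# 'daily' is an assumed data frame with columns Year, Month and prcp
na_per_month <- aggregate(prcp ~ Year + Month, data = daily,
                          FUN = function(x) sum(is.na(x)), na.action = na.pass)
names(na_per_month)[3] <- "n_missing"
subset(na_per_month, n_missing > 3)   # months excluded from the index calculation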
Homogenization
Several tests make it possible to detect the homogenization of the series observed. The
bivariate case like the Craddock test (1979) or the univariate case like the PETTITT test
(1979). The programs of RclimDex, Rclimtool, Rclimpact (Zhang and Yang, 2004). allow the
analysis of the homogenization of temperature and Precipitation series.
# TREND ANALYSIS
#################
# read a tab-separated table from the clipboard (first column = years)
data=read.csv(file("clipboard"),header=T, sep="\t", row.names=1)
data
str(data)
#install.packages("trend")
require(trend)
# Mann-Kendall trend test on the annual precipitation totals
prectot <- data[,"prectot"]
mk.test(prectot)
# seasonal Mann-Kendall test (needs a seasonal, e.g. monthly, ts object)
#smk.test(monthly_prectot_ts)
# example series shipped with the 'trend' package
data(maxau)
s <- maxau[,"s"]
Q <- maxau[,"Q"]
cor.test(s,Q, meth="spearman")
partial.mk.test(s,Q)
partial.cor.trend.test(s,Q, "spearman")
# Sen's slope estimator of the trend magnitude
sens.slope(prectot)
# Change-point detection
require(trend)
# standard normal homogeneity test on the built-in Nile series
(res <- snh.test(Nile))
# Buishand U test on the precipitation series
(res <- bu.test(prectot))
require(graphics)
x <- rnorm(50)
y <- runif(30)
# Do x and y come from the same distribution?
ks.test(x, y)
# Does x come from a shifted gamma distribution with shape 3 and rate 2?
ks.test(x+2, "pgamma", 3, 2) # two-sided, exact
ks.test(x+2, "pgamma", 3, 2, exact = FALSE)
ks.test(x+2, "pgamma", 3, 2, alternative = "gr")
# Load Data (from a csv file or from the clipboard)
data <- read.csv(file = 'c:/Users/pooya/Downloads/Torbat Heydariyeh - Daily.csv',
                 header = TRUE)
data = read.csv(file("clipboard"), header=T, sep="\t", row.names=1)
# Select Variables (requires dplyr)
library(dplyr)
Tave <- data %>%
  mutate(date = as.Date(x = paste0(Year, '-', Month, '-', Day), '%Y-%m-%d')) %>%
  select(date, t, prectot)   # keep temperature and precipitation
# Tave Summaries
summary(object = Tave)
# NA Remove
Tave <- na.omit(Tave)
# Plot
require(ggplot2)
ggplot(data = Tave, mapping = aes(x = date, y = prectot)) +
  geom_line()
# Pettitt's test for a single change point
pettittTest <- trend::pettitt.test(x = Tave[['prectot']])
print(pettittTest)
print(Tave[['date']][pettittTest$estimate])
# Plot with the detected change point
ggplot(data = Tave, mapping = aes(x = date, y = t)) +
  geom_line() +
  geom_vline(mapping = aes(xintercept = as.numeric(Tave[['date']][pettittTest$estimate])),
             linetype = 2,
             colour = "red",
             size = 2)
# Break Point detection with the strucchange package
ry = read.table(file("clipboard"), header=T, sep="\t", row.names=1)
attach(ry)
str(ry)
prectot = ts(prectot, start=1961, end=2017)
plot(prectot, ylab="Precipitation (mm)", type="o", xlab="Year")
abline(h = mean(prectot), col='blue')
require(strucchange)
# empirical fluctuation processes (CUSUM / MOSUM of OLS and recursive residuals)
ocus.prectot <- efp(prectot ~ 1, type = "OLS-CUSUM")
omus.prectot <- efp(prectot ~ 1, type = "OLS-MOSUM")
rocus.prectot <- efp(prectot ~ 1, type = "Rec-CUSUM")
plot(ocus.prectot)
# F statistics (requires a data frame of monthly values, here called rm)
# fs.days = Fstats(year ~ Dec+Feb+Mar+Apr+May+Jun+Jul+Aug+Sep+Oct+Nov, data=rm)
# plot(fs.days)
plot(prectot)
## or: optimal breakpoints in the mean
bp.prectot <- breakpoints(prectot ~ 1)
summary(bp.prectot)
## confidence interval around the break date
ci.prectot <- confint(bp.prectot)
ci.prectot
lines(ci.prectot)
## confidence intervals for a two-break model
ci.seat2 <- confint(bp.prectot, breaks = 2)
ci.seat2
lines(ci.seat2)
#rm(list=ls())
data = read.csv(file("clipboard"), header=T, sep="\t")
str(data)
if(!require(mblm)){install.packages("mblm")}
if(!require(ggplot2)){install.packages("ggplot2")}
# Theil-Sen (median-based) regression of precipitation on time
data$Year = round(data$Year)
data$Year = data$Year - 1961
library(mblm)
model = mblm(prectot ~ Year, data=data)
summary(model)
Sum = summary(model)$coefficients
library(ggplot2)
ggplot(data, aes(x=Year, y=prectot)) +
  geom_point() +
  geom_abline(intercept = Sum[1], slope = Sum[2], color="blue", size=1.2) +
  labs(x = "Years after 1961")
# Median (quantile) regression with the quantreg package
library(quantreg)
model.q    = rq(prectot ~ Year, data = data, tau = 0.5)
model.null = rq(prectot ~ 1,    data = data, tau = 0.5)
anova(model.q, model.null)
Sumq = summary(model.q)$coefficients
ggplot(data, aes(x=Year, y=prectot)) +
  geom_point() +
  geom_abline(intercept = Sumq[1], slope = Sumq[2], color="red", size=1.2) +
  labs(x = "Years after 1961")
# Seasonality
###############
r=read.table(file("clipboard"),header=T, sep="\t", dec=".")
attach(r)
reg=lm(JFM~year)
summary(reg)
# fitted trend: y = -0.25x + 519.54
# p-value = 0.09054
rm=read.table(file("clipboard"),header=T, sep="\t")
str(rm)
attach(rm)
#plot(0,0,xlim=c(1961,2017), ylim=c(0,300))
DJF=ts(DJF, start=1961,end=2017)
Y=ts(Y, start=1961, end=2017)
#par(new=TRUE)
#plot(DJF, type="o", xlab="", ylab="",axes=FALSE)
#par(new=TRUE)
#plot(Y, type="l", xlab="", ylab="", add=TRUE, axes=FALSE)
names(r)
reg=lm(SNO.tav~X,data=r)
summary(reg)
str(r)
library(climatol) # load the functions of the package
homogen("r", 1980, 2017, expl=TRUE)
#Missing Value
library(naniar)
vis_miss(r[,-1])
#dd2m("r", 1981, 2000, homog=TRUE)
#dahstat('Ttest', 1981, 2000, stat='series')
library(naniar)
require("UpSetR")
gg_miss_upset(r)
library(mice)
md.pattern(r)
library(reshape2)
library(ggplot2)
# Homogeneity checks
####################
boxplot(data)
hist(data$prectot)              # histogram of one numeric column
# median adjustment
pairs(data, col="blue", main="Scatterplots")
# quantile regression example (LRY, LRV, LRC, INT are columns of the attached data)
Y = cbind(LRY)
X = cbind(LRV, LRC, INT)
#
hist(Y, prob=TRUE, col = "blue", border = "black")
lines(density(Y))
# ordinary least squares
OLSreg = lm(Y ~ X)
summary(OLSreg)
# quantile regressions (quantreg package)
library(quantreg)
Qreg25 = rq(Y ~ X, tau=0.25)
summary(Qreg25)
Qreg75 = rq(Y ~ X, tau=0.75)
summary(Qreg75)
# compare the two conditional quantiles
anova(Qreg25, Qreg75)
# a sequence of quantiles
QR = rq(Y ~ X, tau=seq(0.2, 0.8, by=0.1))
sumQR = summary(QR)
plot(sumQR)
# CUSUM / MOSUM fluctuation processes of the Nile series (needs zoo and strucchange)
library(zoo); library(strucchange)
plot(merge(
Nile = as.zoo(Nile),
zoo(mean(Nile), time(Nile)),
CUSUM = cumsum(Nile - mean(Nile)),
zoo(0, time(Nile)),
MOSUM = rollapply(Nile - mean(Nile), 15, sum),
zoo(0, time(Nile))
), screen = c(1, 1, 2, 2, 3, 3), main = "", xlab = "Time",
col = c(1, 4, 1, 4, 1, 4)
)
plot(merge(
Nile = as.zoo(Nile),
zoo(c(NA, cumsum(head(Nile, -1))/1:99), time(Nile)),
CUSUM = cumsum(c(0, recresid(lm(Nile ~ 1)))),
zoo(0, time(Nile))
), screen = c(1, 1, 2, 2), main = "", xlab = "Time",
col = c(1, 4, 1, 4)
)
#
# Export data for AnClim
at = read.table(file("clipboard"), header=T, sep="\t")
write.table(at, file = "data.txt", sep = " ", row.names = FALSE)
# APPENDIX
########
require(ggplot2)
# density of earthquake magnitudes from the built-in quakes dataset
ggplot(data = quakes) +
  geom_density(mapping = aes(x = mag), alpha = 0.5)
library("RColorBrewer")
display.brewer.all()
# Prediction
###########
# Import Data
data = read.table(file("clipboard"), header=T, sep="\t", dec=".")
str(data)
summary(data)
attach(data)
library(forecast)
ts_prectot = ts(prectot, start=1961, end=2017, frequency=1)
plot(ts_prectot, xlab="", ylab="")
# exponential smoothing state-space model
m_ets = ets(ts_prectot)
f_ets = forecast(m_ets, h=24) # forecast 24 years ahead
plot(f_ets, type="o", xlab="Year", ylab="Precipitation (mm)")
# TBATS model
m_tbats = tbats(ts_prectot)
f_tbats = forecast(m_tbats, h=24)
plot(f_tbats)
# automatic ARIMA selection
m_aa = auto.arima(ts_prectot)
f_aa = forecast(m_aa, h=24)
plot(f_aa)
#https://a-little-book-of-r-for-time-series.readthedocs.io/en/latest/src/timeseries.html
#https://otexts.com/fpp2/arima-r.html
library("fUnitRoots")
tsData = ts_prectot                      # series to model
urkpssTest(tsData, type = c("tau"), lags = c("short"), use.lag = NULL, doplot = TRUE)
tsstationary = diff(tsData, differences=1)
plot(tsstationary)
acf(tsstationary, lag.max=34)
pacf(tsstationary, lag.max=34)
fitARIMA = auto.arima(tsData, trace=TRUE)
acf(fitARIMA$residuals)
library(FitAR)
LjungBoxTest(fitARIMA$residuals, k=2, StartLag=1)
qqnorm(fitARIMA$residuals)
qqline(fitARIMA$residuals)
predict(fitARIMA, n.ahead = 20)
require(forecast)
futurVal <- forecast(fitARIMA, h=10, level=c(99.5))
plot(futurVal)
A = auto.arima(prectot)
autoplot(forecast(A))
autoplot(forecast(prectot, h=100))
library(forecast)
AutoArimaModel = auto.arima(prectot)
AutoArimaModel
plot(AutoArimaModel)
require(tseries); require(astsa)
acf(prectot)
pacf(prectot)
predict(AutoArimaModel, n.ahead = 6)
# myts: a monthly time series, e.g. myts <- ts(monthly_values, start=c(1961,1), frequency=12)
# plot series
plot(myts)
# Seasonal decomposition
fit <- stl(myts, s.window="period")
plot(fit)
# additional plots
monthplot(myts)
library(forecast)
seasonplot(myts)
# Automated forecasting using an exponential model
fit <- ets(myts)
# predictive accuracy
accuracy(fit)
# miscellaneous example vectors (not used below)
x <- c(32,64,96,118,126,144,152.5,158)
y <- c(99.5,104.8,108.5,100,86,64,35.3,15)
x <- 1:10
y <- x + c(-0.5,0.5)
library(MASS)
library(nnet)
# training (regression of O3obs on all predictors of the learning sample datappr)
nnet.reg = nnet(O3obs~., data=datappr, size=5, decay=1, linout=TRUE, maxit=500)
summary(nnet.reg)
library(e1071)
plot(tune.nnet(O3obs~., data=datappr, size=c(2,3,4), decay=c(1,2,3), maxit=200, linout=TRUE))
plot(tune.nnet(O3obs~., data=datappr, size=4:5, decay=1:10))
nnet.reg = nnet(O3obs~., data=datappr, size=3, decay=2, linout=TRUE, maxit=200)
# compute and plot the residuals (plot.res is a user-defined helper)
fit.nnetr = predict(nnet.reg, newdata=datappr)
res.nnetr = fit.nnetr - datappr[,"O3obs"]
plot.res(fit.nnetr, res.nnetr)
# training of the classification network
nnet.dis = nnet(DepSeuil~., data=datappq, size=5, decay=1)
summary(nnet.dis)
# confusion matrix
table(nnet.dis$fitted.values>0.5, datappq$DepSeuil)
CVnn(DepSeuil~., data=datappq, size=7, decay=0)   # CVnn: user-defined cross-validation helper
...
# run for different values of the decay parameter
pred.nnetr = predict(nnet.reg, newdata=datestr)
pred.nnetq = predict(nnet.dis, newdata=datestq)
# mean squared prediction error
sum((pred.nnetr-datestr[,"O3obs"])^2)/nrow(datestr)
# confusion matrix for the prediction of threshold exceedance (regression)
table(pred.nnetr>150, datestr[,"O3obs"]>150)
# same thing for the discrimination
table(pred.nnetq>0.5, datestq[,"DepSeuil"])
library(ROCR)
rocnnetr = pred.nnetr/300
prednnetr = prediction(rocnnetr, datestq$DepSeuil)
perfnnetr = performance(prednnetr, "tpr", "fpr")
rocnnetq = pred.nnetq
prednnetq = prediction(rocnnetq, datestq$DepSeuil)
perfnnetq = performance(prednnetq, "tpr", "fpr")
# overlay the ROC curves for comparison (perflogit comes from an earlier logistic model)
plot(perflogit, col=1)
plot(perfnnetr, col=2, add=TRUE)
plot(perfnnetq, col=3, add=TRUE)
# empirical GEV / GP density sketches (moy = mean, sigma = standard deviation of the series)
moy = mean(precipitation)
sigma = sqrt(sum((precipitation-moy)^2)/(length(precipitation)-1))
k = (sigma/moy)^-1.086          # shape parameter estimate
x = precipitation
GEV = (1/sigma)*exp(-(1+(k*(x-moy)/sigma))^(-1/k))*(1+(k*(x-moy)/sigma))^(-1-(1/k))
GEV
barplot(GEV)
GP = (1/sigma)*(1+(k*(x-moy)/sigma))^(-1-(1/k))
GP
plot(GP, type="o")
GEV/GP
# load packages
library(extRemes)
library(xts)
# L-moments estimation of a Generalized Pareto fit above the threshold th
# (precipitation_xts: an xts series of daily precipitation; th: chosen threshold)
pot_lmom <- fevd(as.vector(precipitation_xts), method = "Lmoments", type="GP", threshold=th)
# diagnostic plots
plot(pot_lmom)
rl_lmom <- return.level(pot_lmom, conf = 0.05, return.period=c(2,5,10,20,50,100))
library(extRemes)
library(ismev)
# compare the AIC of several fevd fits (B1.fit, B1.fit1, ... fitted beforehand)
fit.AIC  = summary(B1.fit,  silent=TRUE)$AIC
fit1.AIC = summary(B1.fit1, silent=TRUE)$AIC
fit2.AIC = summary(B1.fit2, silent=TRUE)$AIC
fit3.AIC = summary(B1.fit3, silent=TRUE)$AIC
fit.AIC
# [1] 39976258
fit1.AIC
# [1] 466351.5
fit2.AIC
# [1] 13934878
fit3.AIC
# [1] 466330.8
plot(B1.fit)
plot(B1.fit1)
plot(B1.fit2)
plot(B1.fit3)
# multiple imputation of missing values with mice ('Greenwich' is an assumed data frame containing NAs)
library(mice)
Mousey = mice(Greenwich)
Greenwich = complete(Mousey)
r = read.table(file("clipboard"), header=T, sep="\t", dec=",", row.names=1)
str(r)
attach(r)
names(r)
dim(r)
summary(r)
# descriptive statistics (describe with the num.desc argument is from the prettyR package)
library(prettyR)
describe(r, num.desc=c("mean","median","var","sd","valid.n"), horizontal=TRUE)
sapply(r, sd)
sapply(r, range)
t = na.omit(r)            # note: this masks the base transpose function t()
round(cor(t),3)
library(corrplot)
mydata.cor = cor(t, method = c("pearson"))
corrplot(mydata.cor, method = "number", type = "lower")
# method = "number", "ellipse", "square", "shade", "color", "pie"
# type   = "full", "upper", "lower"
library(RColorBrewer)
corrplot(mydata.cor, type = "upper", order = "hclust", col = brewer.pal(n = 8, name = "PuOr"))
# simple regressions of the observed series (OB) on each satellite product
reg1=lm(OB~CH,data=t)
reg2=lm(OB~PE,data=t)
reg3=lm(OB~TA,data=t)
reg4=lm(OB~AR,data=t)
reg5=lm(OB~ER,data=t)
# limx, limy: common axis limits for the overlaid scatterplots, e.g. limx = limy = c(0, 300)
plot(OB~CH,xlim=limx,ylim=limy,pch=20,xaxt="n",yaxt="n",col="red",xlab="",ylab="");abline(reg1,col="red",lwd=2)
par(new=TRUE)
plot(OB~PE,xlim=limx,ylim=limy,pch=20,xaxt="n",yaxt="n",col="blue",xlab="",ylab="");abline(reg2,col="blue",lwd=2)
par(new=TRUE)
plot(OB~TA,xlim=limx,ylim=limy,pch=20,xaxt="n",yaxt="n",col="brown",xlab="",ylab="");abline(reg3,col="brown",lwd=2)
par(new=TRUE)
plot(OB~AR,xlim=limx,ylim=limy,pch=20,xaxt="n",yaxt="n",col="orange",xlab="",ylab="");abline(reg4,col="yellow",lwd=2)
par(new=TRUE)
plot(OB~ER,xlim=limx,ylim=limy,pch=20,xaxt="n",yaxt="n",col="darkgreen",xlab="",ylab="");abline(reg5,col="turquoise1",lwd=2)
abline(0,1.33,col="black",lwd=2,lty=2)
summary(reg1)
summary(reg2)
summary(reg3)
summary(reg4)
summary(reg5)
require(hydroGOF)
names(r); attach(r)
# RMSE of each product against the observed series OB
rmsec=sqrt(sum((OB-CH)^2)/length(OB));round(rmsec,3)
rmsec=sqrt(sum((OB-PE)^2)/length(OB));round(rmsec,3)
rmsec=sqrt(sum((OB-TA)^2)/length(OB));round(rmsec,3)
rmsec=sqrt(sum((OB-AR)^2)/length(OB));round(rmsec,3)
rmsec=sqrt(sum((OB-ER)^2)/length(OB));round(rmsec,3)
#rmse(sim=Data_Mon_11.325, obs=Datat_Obser)
require(hydroGOF)
# percent bias of each product
PBIAS = 100 * (abs(sum(OB-CH)) / sum(OB));round(PBIAS,3)
PBIAS = 100 * (abs(sum(OB-PE)) / sum(OB));round(PBIAS,3)
PBIAS = 100 * (abs(sum(OB-TA)) / sum(OB));round(PBIAS,3)
PBIAS = 100 * (abs(sum(OB-AR)) / sum(OB));round(PBIAS,3)
PBIAS = 100 * (abs(sum(OB-ER)) / sum(OB));round(PBIAS,3)
#pbias(sim=CH, obs=OB, na.rm=TRUE)
require(hydroGOF)
# mse(CH, OB)
# mae(CH, OB)
gof(CH, OB)
ggof(sim=CH, obs=OB, ftype="dm", FUN=mean)
attach(r)
names(r)
# "OB" "CH" "PE" "TA" "AR" "ER"
round(sd(PE),3)
reg=lm(OB~ER,data=r);summary(reg)
round(cor(r),3)
BIAS = sum(OB-TA)/length(OB);round(BIAS,3)
RMSE=sqrt(sum((OB-CH)^2)/length(OB));round(RMSE,3)
RMSE=sqrt(sum((OB-PE)^2)/length(OB));round(RMSE,3)
RMSE=sqrt(sum((OB-TA)^2)/length(OB));round(RMSE,3)
RMSE=sqrt(sum((OB-AR)^2)/length(OB));round(RMSE,3)
RMSE=sqrt(sum((OB-ER)^2)/length(OB));round(RMSE,3)
RMSD = sqrt(sum((r$OB-r$PE)^2)/length(r$OB));round(RMSD,3)
names(n)
#########
# [1]  "JF.OB"   "JF.CH"   "JF.PE"   "JF.TA"   "JF.AR"   "JF.ER"   "MAM.OB"
# [8]  "MAM.CH"  "MAM.PE"  "MAM.TA"  "MAM.AR"  "MAM.ER"  "JJAS.OB" "JJAS.CH"
# [15] "JJAS.PE" "JJAS.TA" "JJAS.AR" "JJAS.ER" "OND.OB"  "OND.CH"  "OND.PE"
# [22] "OND.TA"  "OND.AR"  "OND.ER"
##########
RSD = abs(sd(CH)/mean(CH)); round(RSD, digits = 3)
###########################################
# display the diagram with the better model
#oldpar<-taylor.diagram(OB,CH,pch=19)
#taylor.diagram.modified(OB,CH, text="Model 1")
s=read.table(file("clipboard"),header=T, sep="\t",dec=",",row.names=1)
attach(s)
names(s)
str(s)
cbind(sapply(s, sd, na.rm=TRUE))
CHrmse=sqrt(sum((OB-CH)^2)/length(OB));round(CHrmse,3)
PErmse=sqrt(sum((OB-PE)^2)/length(OB));round(PErmse,3)
TArmse=sqrt(sum((OB-TA)^2)/length(OB));round(TArmse,3)
ARrmse=sqrt(sum((OB-AR)^2)/length(OB));round(ARrmse,3)
ERrmse=sqrt(sum((OB-ER)^2)/length(OB));round(ERrmse,3)
rbind(CHrmse,PErmse,TArmse,ARrmse,ERrmse)
# short aliases for the observed and satellite rainfall series
OB=AIRPORT
CH=CHIRPS
PE=PERSIANN
TA=TAMSATV3
AR=ARCV2
ER=ERA5
#
require(plotrix)
oldpar<-taylor.diagram(OB,CH,add=F, pch=19,pos.cor=TRUE,
  xlab="Standard deviation",ylab="Standard Deviation",main="",
  show.gamma=TRUE,ngamma=10,gamma.col="green",sd.arcs=1,
  ref.sd=TRUE,sd.method="sample",grad.corr.lines=c(0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,0.95,0.99),
  col="blue",pcex=1.5,cex.axis=1.1,normalize=F)
# legend for the Taylor diagram (labels assumed from the product aliases above)
legend("topright",legend=c("OBSERVATION","CHIRPS","PERSIANN","TAMSAT","ARC","ERA5"),
  pch=c(15,19,19,19,19,19),col=c("darkgreen","red","blue","brown","orange","darkgreen"), cex=0.7)
# SEASONALITY
#par(mfrow=c(1,2))
#"red","blue","brown","orange","darkgreen"
s=read.table(file("clipboard"),header=T, sep="\t",dec=",",row.names=1)
names(s)
attach(s)
cor(s)
str(s)
# Taylor diagrams by season (MAM, JJAS, OND)
require(plotrix)
taylor.diagram(MAM.OB, MAM.CH, col="blue", pos.cor=F,pcex=1.5,normalize=F)
taylor.diagram(MAM.OB, MAM.PE,add=TRUE, col="black", pcex=1.5,normalize=F)
taylor.diagram(MAM.OB, MAM.TA,add=TRUE, col="pink", pcex=1.5,normalize=F)
taylor.diagram(MAM.OB, MAM.AR,add=TRUE, col="brown", pcex=1.5,normalize=F)
taylor.diagram(MAM.OB, MAM.ER,add=TRUE, col="green", pcex=1.5,normalize=F)
require(plotrix)
taylor.diagram(JJAS.OB, JJAS.CH, col="blue", pos.cor=F,pcex=1.5,normalize=F)
taylor.diagram(JJAS.OB, JJAS.PE,add=TRUE, col="black", pcex=1.5,normalize=F)
taylor.diagram(JJAS.OB, JJAS.TA,add=TRUE, col="pink", pcex=1.5,normalize=F)
taylor.diagram(JJAS.OB, JJAS.AR,add=TRUE, col="brown", pcex=1.5,normalize=F)
taylor.diagram(JJAS.OB, JJAS.ER,add=TRUE, col="green", pcex=1.5,normalize=F)
require(plotrix)
taylor.diagram(OND.OB, OND.CH, col="blue", pos.cor=F,pcex=1.5,normalize=F)
taylor.diagram(OND.OB, OND.PE,add=TRUE, col="black", pcex=1.5,normalize=F)
taylor.diagram(OND.OB, OND.TA,add=TRUE, col="pink", pcex=1.5,normalize=F)
taylor.diagram(OND.OB, OND.AR,add=TRUE, col="brown", pcex=1.5,normalize=F)
taylor.diagram(OND.OB, OND.ER,add=TRUE, col="green", pcex=1.5,normalize=F)
legend(30,53,legend=c("OND.OBSERVATION","OND.CHIRPS","OND.PERSIANNCDR","OND.TAMSAT","OND.ARC","OND.ERA"),horiz=FALSE,
  pch=c(15,19,19,19,19,19),col=c("darkgreen","red","blue","brown","orange","darkgreen"), cex=0.7)
CHrmse=sqrt(sum((OND.OB-OND.CH)^2)/length(OND.OB))
PErmse=sqrt(sum((OND.OB-OND.PE)^2)/length(OND.OB))
TArmse=sqrt(sum((OND.OB-OND.TA)^2)/length(OND.OB))
ARrmse=sqrt(sum((OND.OB-OND.AR)^2)/length(OND.OB))
ERrmse=sqrt(sum((OND.OB-OND.ER)^2)/length(OND.OB))
round(rbind(CHrmse,PErmse,TArmse,ARrmse,ERrmse),3)
#RMS_Diff = sum(((OND.CH-mean(OND.CH))-(OND.OB-mean(OND.OB)))^2)/length(OND.OB)
OBsd=sd(OND.OB)
CHsd=sd(OND.CH)
PEsd=sd(OND.PE)
TAsd=sd(OND.TA)
ARsd=sd(OND.AR)
ERsd=sd(OND.ER)
round(rbind(CHsd,PEsd,TAsd,ARsd,ERsd),3)
CHBIAS = sum(OND.OB-OND.CH)/length(OND.OB)
PEBIAS = sum(OND.OB-OND.PE)/length(OND.OB)
TABIAS = sum(OND.OB-OND.TA)/length(OND.OB)
ARBIAS = sum(OND.OB-OND.AR)/length(OND.OB)
ERBIAS = sum(OND.OB-OND.ER)/length(OND.OB)
round(rbind(CHBIAS,PEBIAS,TABIAS,ARBIAS,ERBIAS),3)
R=read.table(file("clipboard"),header=T, sep="\t",dec=".",row.names=1)
library(caret)
preproc1 <- preProcess(R, method=c("center"))
#"center", "scale"
norm1 <- predict(preproc1,R)
print(norm1)
require(openair)
## in the examples below, most effort goes into making some artificial data
## the function itself can be run very simply
## dummy model data for 2003
dat <- selectByDate(mydata, year = 2003)
dat <- data.frame(date = mydata$date, obs = mydata$nox, mod = mydata$nox)
## now make mod worse by adding bias and noise according to the month
## do this for 3 different models
dat <- transform(dat, month = as.numeric(format(date, "%m")))
mod1 <- transform(dat, mod = mod + 10 * month + 10 * month * rnorm(nrow(dat)),
                  model = "model 1")
## lag the results for mod1 to make the correlation coefficient worse
## without affecting the sd
mod1 <- transform(mod1, mod = c(mod[5:length(mod)], mod[(length(mod) - 3):length(mod)]))
## model 2
mod2 <- transform(dat, mod = mod + 7 * month + 7 * month * rnorm(nrow(dat)),
                  model = "model 2")
## model 3
mod3 <- transform(dat, mod = mod + 3 * month + 3 * month * rnorm(nrow(dat)),
                  model = "model 3")
## combine the three models into one data frame
mod.dat <- rbind(mod1, mod2, mod3)
## all models, by season
TaylorDiagram(mod.dat, obs = "obs", mod = "mod", group = "model",
              type = "season")
# APPENDIX
####################################################
# Taylor, K.E. (2001) Summarizing multiple aspects of model performance in a single diagram.
# Journal of Geophysical Research, 106: 7183-7192.
library(datasets)
library(ncdf4)
library(plotrix)
taylor.diagram(r,r,add=FALSE,col="red",pch=4,pos.cor=TRUE,
  xlab="MERRA SD (Normalised)",ylab="RCA4 runs SD (normalised)",main="Taylor Diagram",
  show.gamma=TRUE,ngamma=3,sd.arcs=1,ref.sd=TRUE,grad.corr.lines=c(0.2,0.4,0.6,0.8,0.9),
  pcex=1,cex.axis=1,normalize=TRUE,mar=c(5,4,6,6),lwd=10,font=5,lty=3)
# Data_Mon_11.275: monthly series used to place the legend
lpos<-1.5*sd(Data_Mon_11.275)
legend(1.5,1.5,cex=1.2,pt.cex=1.2,legend=c("volcano"),pch=4,col=c("red"))
taylor.diagram(data,data,normalize=TRUE)
legend(lpos,lpos,legend=c("Better","Worse"),pch=19,col=c("red","blue"))
# now restore par values
par(oldpar)
# show the "all correlation" display (ref, model1, model2: example vectors as in the plotrix documentation)
ref    <- rnorm(30, sd=2)
model1 <- ref + rnorm(30)/2
model2 <- ref + rnorm(30)
taylor.diagram(ref,model1,pos.cor=FALSE)
taylor.diagram(ref,model2,add=TRUE,col="blue")
END OF PROGRAM
Enjoy it!