If you do not have one or more of these libraries installed, you can add them with the install.packages("x") function, where "x" is the name of the package you want to install. Start by loading all of the libraries:
library(burnr)
library(dendroTools)
library(DendroSync)
library(dplR)
library(ggplot2)
library(TRADER)
library(treeclim)
library(utils)
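If any of these packages are missing, they can all be installed in one call before loading them (utils ships with base R and does not need installing); for example:

```r
# One-time setup: install any missing packages from CRAN before loading them
install.packages(c("burnr", "dendroTools", "DendroSync", "dplR",
                   "ggplot2", "TRADER", "treeclim"))
```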
All of these packages are cited in the README file available in the rNADEF Workbook repository.
Using the script below, you will be able to read in raw ring width files for COFECHA-like analysis. To run your own data, change the fname argument in data <- read.rwl(fname = ..., format = "auto") to point to your file. You can also examine summary statistics of your ring width file using the rwl.stats(x) function. Alternatively, you can use the RWL file included in this project.
grow.rwl <- read.tucson(fname = "Yellowstone_PSME_format.txt")
There does not appear to be a header in the rwl file
There are 32 series
1 BTK05A 1741 2016 0.001
2 BTK05B 1727 2016 0.001
3 BTK6A 1649 2016 0.001
4 BTK6B 1618 2016 0.001
5 BTK06C 1680 2016 0.001
6 BTK07A 1614 2016 0.001
7 BTK07B 1640 2016 0.001
8 BTK08A 1840 2016 0.001
9 BTK8B 1437 2016 0.001
10 BTK09B 1681 2016 0.001
11 BTK09C 1493 1679 0.001
12 BTK10C 1691 2016 0.001
13 BTK11A 1618 2016 0.001
14 BTK11B 1692 2016 0.001
15 BTK12A 1551 2016 0.001
16 BTK12B 1551 1918 0.001
17 BTK13A 1637 2016 0.001
18 BTK13B 1637 1998 0.001
19 BTK13C 1652 2016 0.001
20 BTK14A 1719 2016 0.001
21 BTK14B 1716 2016 0.001
22 BTK15A 1599 2016 0.001
23 BTK 17A 1758 2016 0.001
24 BTK 17B 1769 2016 0.001
25 BTK19B 1503 2016 0.001
26 BTK19A 1473 2016 0.001
27 BTK20A 1665 2016 0.001
28 BTK20B 1685 2016 0.001
29 BTK 33A 1765 2016 0.001
30 BTK 33B 1751 2016 0.001
31 BTK 34A 1653 2016 0.001
32 BTK 34B 1655 2016 0.001
rwl.stats(grow.rwl)
You can plot this information to view the time series and/or skeleton plot of a series. To see the time series, use the function seg.plot(x)
. To view the skeleton plot of an individual series, use skel.plot(x[#])
where # is the series/core you would like to view.
seg.plot(grow.rwl)
skel.plot(grow.rwl[1])
You can also examine the radii (mm) for each series, common time interval, and mean sensitivity of the rwl. While there isn’t a direct function in dplR to calculate radii, you can use colSums()
from base R to calculate the values. For the common interval and mean sensitivity you can use common.interval()
and sens1()
respectively.
colSums(grow.rwl, na.rm = TRUE, dims = 1)
BTK05A BTK05B BTK6A BTK6B BTK06C BTK07A BTK07B BTK08A BTK8B
125.448 130.678 236.437 288.138 160.378 147.127 141.698 35.854 226.280
BTK09B BTK09C BTK10C BTK11A BTK11B BTK12A BTK12B BTK13A BTK13B
56.100 68.566 128.930 232.076 248.110 110.557 90.246 222.705 168.850
BTK13C BTK14A BTK14B BTK15A BTK 17A BTK 17B BTK19B BTK19A BTK20A
151.969 174.576 172.746 183.356 202.596 186.586 194.488 243.716 216.798
BTK20B BTK 33A BTK 33B BTK 34A BTK 34B
221.785 250.081 281.806 225.388 221.292
crn.common <- common.interval(grow.rwl, type = c("years"), make.plot = TRUE)
sens1(grow.rwl)
[1] 0.2660017
Using functions from dplR you can also obtain a COFECHA-like output with the corr.rwl.seg()
function. You can also examine an individual series with series.rwl.plot()
. Please note that the options used in the examples below are specific to this project and might be different from your specific analysis.
corr.rwl.seg(rwl = grow.rwl, seg.length = 50, bin.floor = 100, n = NULL, prewhiten = TRUE, pcrit = 0.05, biweight = TRUE, method = c("spearman"), make.plot = TRUE, label.cex = 1, floor.plus1 = FALSE, master = NULL)
To examine an individual series you will need to be able to identify the specific series in the function.
series.rwl.plot(grow.rwl, series = "BTK05A", series.yrs = as.numeric(names(series)), seg.length = 50, bin.floor = 100, n = NULL, prewhiten = TRUE, biweight = TRUE, floor.plus1 = FALSE)
Using interseries.cor
will allow you to calculate the interseries correlation between each series and the master for the ring width file.
interseries.cor(grow.rwl, n = NULL, prewhiten = TRUE, biweight = TRUE, method = "spearman")
To facilitate crossdating you can also use dplR to calculate marker rings with the pointer()
function.
markers <- pointer(grow.rwl)
markers
Frequently you will see a plot of the ring widths of all series on the same graph. These spaghetti plots can be created using the spag.plot()
function in dplR.
spag.plot(rwl = grow.rwl, zfac = 1, useRaster = FALSE, res = 300)
The standalone ARSTAN program allows you to develop site chronologies and interactively detrend or standardize a series. These options are also baked into dplR. To run interactive detrending you can use the i.detrend()
function. This allows you to explore curve fits for each tree ring series.
grow.rwi.int <- i.detrend(rwl = grow.rwl, nyrs = NULL, f = 0.5, pos.slope = FALSE) #allows you to see a variety of fits
Detrend series 1 of 32
Choose a detrending method for this series BTK05A.
Methods are:
1: Spline
2: ModNegExp
3: Mean
4: Ar
5: Friedman
6: ModHugershoff
1
Detrend series 2 of 32
Choose a detrending method for this series BTK05B.
Methods are:
1: Spline
2: ModNegExp
3: Mean
4: Ar
5: Friedman
6: ModHugershoff
You will need to select a detrending method for each series.
To view the spaghetti plot for the detrended series you can once again use the spag.plot()
function similar to above, except substituting the detrended data for the raw ring width data.
spag.plot(rwl = grow.rwi.int, zfac = 1, useRaster = FALSE, res = 300)
If you want to utilize a singular detrending method for all of your series, or you used interactive detrending to determine the best fit for your data, you can use the detrend()
function instead of i.detrend()
. In this function you will choose one of the following detrending options with method = : “Spline”, “ModNegExp”, “Mean”, “Ar”, “Friedman”, or “ModHugershoff”.
grow.rwi <- detrend(rwl = grow.rwl, method = c("Spline"), nyrs = NULL, f = 0.5, pos.slope = FALSE)
grow.rwi
To examine the statistics for the entire chronology you can use rwi.stats()
or rwi.stats.running()
to use running statistics in order to adjust the time periods. See ?rwi.stats.running
for help on this function.
rwi.stats(grow.rwi)
rwi.stats.running(grow.rwi)
The ARSTAN program generates a standardized chronology, an ARSTAN chronology, and a residual chronology. In dplR you can use the chron()
function to build a mean value chronology from detrended ring widths produced from a detrend()
function.
Standardized chronology:
grow.crn <- chron(x = grow.rwi, prefix = "BTK", biweight = TRUE, prewhiten = FALSE)
grow.crn
Standard and Residual chronologies:
grow.crn <- chron(x = grow.rwi, prefix = "BTK", biweight = TRUE, prewhiten = TRUE)
grow.crn
You can then plot the chronology using crn.plot()
to view chronologies developed with chron()
.
crn.plot(crn = grow.crn, add.spline = TRUE, nyrs = NULL, f = 0.5, crn.line.col='grey50', spline.line.col='red', samp.depth.col='grey90', samp.depth.border.col='grey80', crn.lwd=1, spline.lwd=2.0, abline.pos=1, abline.col='black', abline.lty=1,abline.lwd=1, xlab="Time", ylab="RWI")
For daily and monthly dendroclimatological analysis in the dendroTools library you can save the developed chronologies as .csv files using the write.csv(x)
function:
btk_std <- grow.crn[1] #subset only year (already as.numeric) and index columns
write.csv(btk_std, file = "BTK_std.csv")
The wavelet transform allows you to look at frequencies or temporal patterns in your chronology for paleoclimatological analysis.
Years <- as.numeric(rownames(grow.crn))
rings <- grow.crn[, 1]
tubular <- morlet(y1 = rings, x1 = Years, p2 = 9, dj = 0.1, siglvl = 0.99)
wavelet.plot(tubular, useRaster = NA)
The treeclim package allows for the assessment of growth-climate relationships similar to the older DendroClim2002 software. To begin, you will need to load a chronology. You can use your own chronology or the chronology developed in the previous steps. With the script below, you can view a summary of the chronology you will be using for this analysis. Alternatively, you can change the name of the chronology to one you have generated.
#summary of the chronology
summary(grow.crn)
BTKstd samp.depth
Min. :0.08791 Min. : 1.00
1st Qu.:0.83354 1st Qu.: 6.00
Median :0.98508 Median :24.50
Mean :0.98300 Mean :19.36
3rd Qu.:1.13387 3rd Qu.:30.00
Max. :1.99612 Max. :31.00
Next you will need to load climate data. There are a number of methods for adding climate data but this example has one as a csv file in the project.
climate <- read.csv("WY_yellowstone_climate.csv", header = TRUE)
names(climate)
[1] "Year" "Month" "PCP" "TAVG" "PDSI" "PHDI" "ZNDX"
[8] "PMDI" "CDD" "HDD" "SP01" "SP02" "SP03" "SP06"
[15] "SP09" "SP12" "SP24" "TMIN" "TMAX"
summary(climate)
Year Month PCP TAVG
Min. :1900 Min. : 1.00 Min. :0.540 Min. :21.90
1st Qu.:1929 1st Qu.: 3.75 1st Qu.:2.140 1st Qu.:38.06
Median :1958 Median : 6.50 Median :2.510 Median :52.86
Mean :1958 Mean : 6.50 Mean :2.504 Mean :52.22
3rd Qu.:1987 3rd Qu.: 9.25 3rd Qu.:2.880 3rd Qu.:66.75
Max. :2016 Max. :12.00 Max. :4.440 Max. :76.80
PDSI PHDI ZNDX
Min. :-8.3900 Min. :-8.3900 Min. :-7.1300
1st Qu.:-1.5200 1st Qu.:-1.9800 1st Qu.:-1.1000
Median : 0.7400 Median : 1.1000 Median : 0.1050
Mean : 0.3093 Mean : 0.2772 Mean : 0.0788
3rd Qu.: 2.3150 3rd Qu.: 2.4400 3rd Qu.: 1.2600
Max. : 6.5100 Max. : 6.5100 Max. : 6.9000
PMDI CDD HDD
Min. :-8.3900 Min. : 1.0 Min. : 3.0
1st Qu.:-1.5550 1st Qu.: 10.0 1st Qu.: 56.0
Median : 0.7050 Median : 41.0 Median : 309.5
Mean : 0.2636 Mean :101.1 Mean : 384.7
3rd Qu.: 2.2625 3rd Qu.:184.2 3rd Qu.: 690.0
Max. : 6.5100 Max. :405.0 Max. :1184.0
SP01 SP02 SP03
Min. :-3.090000 Min. :-3.090000 Min. :-3.090000
1st Qu.:-0.660000 1st Qu.:-0.640000 1st Qu.:-0.630000
Median :-0.005000 Median : 0.030000 Median :-0.020000
Mean : 0.004701 Mean : 0.006125 Mean : 0.006432
3rd Qu.: 0.680000 3rd Qu.: 0.610000 3rd Qu.: 0.652500
Max. : 3.090000 Max. : 3.090000 Max. : 3.090000
SP06 SP09 SP12
Min. :-3.090000 Min. :-2.420000 Min. :-2.390000
1st Qu.:-0.640000 1st Qu.:-0.670000 1st Qu.:-0.710000
Median : 0.020000 Median : 0.035000 Median : 0.080000
Mean : 0.006147 Mean : 0.007407 Mean : 0.005705
3rd Qu.: 0.682500 3rd Qu.: 0.670000 3rd Qu.: 0.645000
Max. : 2.960000 Max. : 3.090000 Max. : 2.820000
SP24 TMIN TMAX
Min. :-2.420000 Min. :12.52 Min. :31.26
1st Qu.:-0.650000 1st Qu.:27.11 1st Qu.:48.73
Median : 0.040000 Median :40.29 Median :65.40
Mean : 0.006645 Mean :40.25 Mean :64.19
3rd Qu.: 0.680000 3rd Qu.:54.03 3rd Qu.:79.78
Max. : 2.650000 Max. :63.55 Max. :90.84
ym <- climate[,1:2] #pull year and month columns
var1 <- climate[3] #pull climate variables, 1 or 2 at a time to avoid N problems
clim <- data.frame(c(ym, var1)) #build climate data frame
summary(clim)
Year Month PCP
Min. :1900 Min. : 1.00 Min. :0.540
1st Qu.:1929 1st Qu.: 3.75 1st Qu.:2.140
Median :1958 Median : 6.50 Median :2.510
Mean :1958 Mean : 6.50 Mean :2.504
3rd Qu.:1987 3rd Qu.: 9.25 3rd Qu.:2.880
Max. :2016 Max. :12.00 Max. :4.440
Modeled after DendroClim2002, the dcc() function can produce "static", "moving", or "evolving" analyses using the dynamic = argument.
resp <- dcc(chrono = grow.crn, climate = clim, selection = -5:10, method = "response", dynamic = "evolving", win_size = 35, win_offset = 1, start_last = TRUE, timespan = NULL, var_names = NULL, ci = 0.05, boot = "std", sb = FALSE) #this is the main function in treeclim
Error in dcc(chrono = grow.crn, climate = clim, selection = -5:10, method = "response", :
Overlapping time span of chrono and climate records is smaller than number of parameters! Consider adapting the number of parameters to a maximum of 35.
You can save the output using the following script: write.csv(coef(resp), file = "pcp_coef.csv")
Additionally you can write the plot to a file:
tiff("resp_coef.tiff", width = 8, height = 4, units = 'in', res = 300)
plot.new()
plot(resp)
title(main = "Climate", xlab = "Month")
dev.off()
recon <- dcc(chrono = grow.crn, climate = clim, selection = 6:7, #use a selection with the variable of interest - modifiers like .mean or .sum can be used to average across months
             method = "response", dynamic = "static", win_size = 35,
             win_offset = 1, start_last = TRUE, timespan = NULL,
             var_names = NULL, ci = 0.05, boot = "std", sb = FALSE)
Running for timespan 1900 - 2016...
recon_coef <- coef(recon)
plot(recon)
recon #show model results
Coefficients (significance flags correspond to p < 0.05):
id varname month coef significant ci_lower ci_upper
PCP.curr.jun 1 PCP JUN 0.180 FALSE -0.026 0.373
PCP.curr.jul 2 PCP JUL 0.090 FALSE -0.029 0.210
TAVG.curr.jun 3 TAVG JUN -0.096 FALSE -0.242 0.052
TAVG.curr.jul 4 TAVG JUL -0.065 FALSE -0.178 0.076
#this does the evaluation
skillz <- skills(object = recon, target = .mean(1:2), model = "ols", calibration = "50%", timespan = NULL)
plot(skillz)
skillz #show model results
Call:
skills(object = recon, target = .mean(1:2), model = "ols", calibration = "50%",
timespan = NULL)
Using climate target PCP.mean.curr.jan.curr.feb and calibration model with 50 percent (= 59 years) of data starting at recent end as calibration period:
r p-value
0.09830632 0.19801980
Coefficients:
intercept slope
2.0058845 0.1608999
Verification statistics:
RE CE prediction RMSE
0.030 -0.003 0.051
Durbin-Watson Test: 1.756 (p = 0.1587905)
Model for whole period:
r p-value
0.13144548 0.04950495
Coefficients:
intercept slope
2.0115633 0.1801164
Is your model time stable and does it verify?
recon_dlm <- dlm(chrono = grow.crn, climate = clim, selection = 6, timespan = NULL, var_names = NULL,
                 param_names = NULL, intercept = TRUE, scale = FALSE)
Running for timespan 1900 - 2016...
recon_coef <- coef(recon_dlm)
plot(recon_dlm)
recon_dlm #show model results
Call:
lm(formula = tree ~ ., data = design_df)
Coefficients:
(Intercept) PCP.curr.jun TAVG.curr.jun
2.17262 0.11500 -0.02185
#this does the evaluation
skillz <- skills(object = recon_dlm, target = 6, model = "ols", calibration = "50%", timespan = NULL)
plot(skillz)
skillz #show model results
Call:
skills(object = recon_dlm, target = 6, model = "ols", calibration = "50%",
timespan = NULL)
Using climate target PCP.curr.jun and calibration model with 50 percent (= 59 years) of data starting at recent end as calibration period:
r p-value
0.0225227 0.4257426
Coefficients:
intercept slope
2.93700516 0.04379646
Verification statistics:
RE CE prediction RMSE
0.020 -0.005 0.080
Durbin-Watson Test: 2.082 (p = 0.6025759)
Model for whole period:
r p-value
0.27200435 0.00990099
Coefficients:
intercept slope
2.4299891 0.5067075
Original code developed by Dave Meko in MATLAB
climate <- read.csv("WY_yellowstone_climate.csv", header = TRUE)
names(climate)
[1] "Year" "Month" "PCP" "TAVG" "PDSI" "PHDI" "ZNDX" "PMDI"
[9] "CDD" "HDD" "SP01" "SP02" "SP03" "SP06" "SP09" "SP12"
[17] "SP24" "TMIN" "TMAX"
summary(climate)
Year Month PCP TAVG
Min. :1900 Min. : 1.00 Min. :0.540 Min. :21.90
1st Qu.:1929 1st Qu.: 3.75 1st Qu.:2.140 1st Qu.:38.06
Median :1958 Median : 6.50 Median :2.510 Median :52.86
Mean :1958 Mean : 6.50 Mean :2.504 Mean :52.22
3rd Qu.:1987 3rd Qu.: 9.25 3rd Qu.:2.880 3rd Qu.:66.75
Max. :2016 Max. :12.00 Max. :4.440 Max. :76.80
PDSI PHDI ZNDX PMDI
Min. :-8.3900 Min. :-8.3900 Min. :-7.1300 Min. :-8.3900
1st Qu.:-1.5200 1st Qu.:-1.9800 1st Qu.:-1.1000 1st Qu.:-1.5550
Median : 0.7400 Median : 1.1000 Median : 0.1050 Median : 0.7050
Mean : 0.3093 Mean : 0.2772 Mean : 0.0788 Mean : 0.2636
3rd Qu.: 2.3150 3rd Qu.: 2.4400 3rd Qu.: 1.2600 3rd Qu.: 2.2625
Max. : 6.5100 Max. : 6.5100 Max. : 6.9000 Max. : 6.5100
CDD HDD SP01 SP02
Min. : 1.0 Min. : 3.0 Min. :-3.090000 Min. :-3.090000
1st Qu.: 10.0 1st Qu.: 56.0 1st Qu.:-0.660000 1st Qu.:-0.640000
Median : 41.0 Median : 309.5 Median :-0.005000 Median : 0.030000
Mean :101.1 Mean : 384.7 Mean : 0.004701 Mean : 0.006125
3rd Qu.:184.2 3rd Qu.: 690.0 3rd Qu.: 0.680000 3rd Qu.: 0.610000
Max. :405.0 Max. :1184.0 Max. : 3.090000 Max. : 3.090000
SP03 SP06 SP09
Min. :-3.090000 Min. :-3.090000 Min. :-2.420000
1st Qu.:-0.630000 1st Qu.:-0.640000 1st Qu.:-0.670000
Median :-0.020000 Median : 0.020000 Median : 0.035000
Mean : 0.006432 Mean : 0.006147 Mean : 0.007407
3rd Qu.: 0.652500 3rd Qu.: 0.682500 3rd Qu.: 0.670000
Max. : 3.090000 Max. : 2.960000 Max. : 3.090000
SP12 SP24 TMIN TMAX
Min. :-2.390000 Min. :-2.420000 Min. :12.52 Min. :31.26
1st Qu.:-0.710000 1st Qu.:-0.650000 1st Qu.:27.11 1st Qu.:48.73
Median : 0.080000 Median : 0.040000 Median :40.29 Median :65.40
Mean : 0.005705 Mean : 0.006645 Mean :40.25 Mean :64.19
3rd Qu.: 0.645000 3rd Qu.: 0.680000 3rd Qu.:54.03 3rd Qu.:79.78
Max. : 2.820000 Max. : 2.650000 Max. :63.55 Max. :90.84
ym <- climate[,1:2] #pull year and month columns
var1 <- climate[3:4] #pull climate variables, 1 or 2 at a time to avoid N problems
clim <- data.frame(c(ym, var1)) #build climate data frame
summary(clim)
Year Month PCP TAVG
Min. :1900 Min. : 1.00 Min. :0.540 Min. :21.90
1st Qu.:1929 1st Qu.: 3.75 1st Qu.:2.140 1st Qu.:38.06
Median :1958 Median : 6.50 Median :2.510 Median :52.86
Mean :1958 Mean : 6.50 Mean :2.504 Mean :52.22
3rd Qu.:1987 3rd Qu.: 9.25 3rd Qu.:2.880 3rd Qu.:66.75
Max. :2016 Max. :12.00 Max. :4.440 Max. :76.80
seas <- seascorr(grow.crn, climate, var_names = NULL, timespan = NULL, complete = 9,
                 season_lengths = c(1, 3, 6), primary = 1, secondary = 2, ci = 0.05) #this is the main function
Running for timespan 1901 - 2016...
plot(seas)
seas
Results for a season length of 1 month:
Results for a season length of 3 months:
Results for a season length of 6 months:
TRADER, Tree Ring Analysis of Disturbance Events in R, is a package for disturbance reconstruction from tree-ring data. Analyses include Absolute Increase (Fraver & White 2005), Growth Averaging (Nowacki & Abrams 1997), Boundary Line (Black & Abrams 2003), and Splechtna (Splechtna, Gratzer & Black 2005) which combines growth averaging and boundary line techniques.
To start you need to read in a rwl file to begin the analysis.
For this example the Growth Averaging technique, growthAveragingALL(), will be used with the following parameters: years averaged prior to release = 10 (m1), years averaged after release = 10 (m2), number of years between release events = 10, moderate release = 0.25, major release = 0.50, with 5 years of exceeded growth required to be considered a release. Note that some of the analyses will create a large number of files in your working directory.
This script was not run as part of the workbook to avoid creating a large number of files in the GitHub repository.
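As a sketch (not run here), the read-in and release detection described above might look like the following. The file name is a placeholder, and the argument names should be checked against ?growthAveragingALL before running:

```r
library(dplR)
library(TRADER)

# Hypothetical file name - substitute your own raw ring width file
thedata <- read.rwl("my_site.rwl", format = "auto")

# Growth averaging (Nowacki & Abrams 1997) with the parameters described above:
# 10-yr pre/post windows (m1, m2), 10 yr between events (buffer),
# 0.25 moderate / 0.50 major release thresholds, 5 yr of sustained change.
# Writes its plots and tables into the working directory.
growthAveragingALL(thedata, releases = NULL, m1 = 10, m2 = 10, buffer = 10,
                   criteria = 0.25, criteria2 = 0.5, length = 5,
                   prefix = "ga", drawing = TRUE, gfun = mean)
```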
Similar to the script in a previous section, you can create a spaghetti plot of the data.
spag.plot(thedata, zfac = 1, useRaster = FALSE, res = 300)
thedata.raw.crn <- chron(thedata, prefix = "BTK", prewhiten = FALSE)
plot(thedata.raw.crn, abline.pos = NULL, ylab = 'mm', xlab = 'Year')
Using the bai.out
function you can calculate basal area increment from the raw ring width file.
grow.rwl <- read.rwl(fname = "wa091.rwl", format = "auto")
Attempting to automatically detect format.
Assuming a Tucson format file.
There appears to be a header in the rwl file
There are 44 series
1 TCT190A 1609 1987 0.01
2 TCT190B 1584 1987 0.01
3 TCT191A 1608 1987 0.01
4 TCT191B 1638 1987 0.01
5 TCT192A 1556 1987 0.01
6 TCT192B 1596 1987 0.01
7 TCT193A 1592 1986 0.01
8 TCT194A 1693 1986 0.01
9 TCT194B 1691 1987 0.01
10 TCT195A 1608 1987 0.01
11 TCT195B 1601 1987 0.01
12 TCT196A 1581 1987 0.01
13 TCT196B 1564 1987 0.01
14 TCT197A 1589 1987 0.01
15 TCT197B 1569 1987 0.01
16 TCT198A 1708 1987 0.01
17 TCT198B 1680 1987 0.01
18 TCT199A 1561 1987 0.01
19 TCT199B 1585 1987 0.01
20 TCT201A 1410 1986 0.01
21 TCT201B 1489 1986 0.01
22 TCT202A 1664 1987 0.01
23 TCT202B 1657 1987 0.01
24 TCT203A 1810 1987 0.01
25 TCT203B 1410 1987 0.01
26 TCT204A 1655 1987 0.01
27 TCT204B 1550 1987 0.01
28 TCT205A 1629 1987 0.01
29 TCT205B 1707 1986 0.01
30 TCT206B 1681 1982 0.01
31 TCT207A 1611 1987 0.01
32 TCT207B 1586 1987 0.01
33 TCT208A 1643 1987 0.01
34 TCT208B 1778 1987 0.01
35 TCT209A 1410 1987 0.01
36 TCT209B 1410 1987 0.01
37 TCT210A 1637 1987 0.01
38 TCT210B 1714 1987 0.01
39 TCT211A 1700 1987 0.01
40 TCT211B 1703 1987 0.01
41 TCT212A 1702 1987 0.01
42 TCT212B 1660 1987 0.01
43 TCT213A 1693 1987 0.01
44 TCT213B 1597 1987 0.01
basal <- bai.out(grow.rwl, diam = NULL)
basal_p <- print(basal)
spag.plot(basal[1], zfac = 1, useRaster = FALSE, res = 300)
Similar to before, you can export the data using write.csv(basal_p, "basal_bai.csv").
The burnr package creates composite fire history plots similar to the FHX2 and FHAES programs. It uses FHX formatted files. An example of an FHX file is included in the repository.
After loading the burnr
library, you can use read_fhx()
to load data already formatted into the FHX format.
Zion <- read_fhx('Zion.fhx')
Zion
Although each of the samples has an ID, we can add site information in order to facet our plots. This will sort each of the samples into the appropriate category when faceted with the plot_demograph
function.
Sites <- read.csv('ZionSiteIDs.csv')
Sites
facetplot <- plot_demograph(Zion, facet_group = Sites$SiteID, facet_id = Sites$series, plot_legend = TRUE)
print(facetplot)
Another option available in plot_demograph
is to create a plot of all samples with a composite plot beneath the plot of individuals. In addition to the plot and composite, you can also add annotations to highlight common fire dates.
rugplot <- plot_demograph(Zion, composite_rug = TRUE, plot_legend = TRUE)
compositerug <- rugplot +
annotate('rect', xmin = 1721, xmax = 1723, ymin = 0, ymax = 21, alpha = 0.4) +
annotate('rect', xmin = 1734, xmax = 1736, ymin = 0, ymax = 21, alpha = 0.4) +
annotate('rect', xmin = 1748, xmax = 1750, ymin = 0, ymax = 21, alpha = 0.4) +
annotate('rect', xmin = 1777, xmax = 1779, ymin = 0, ymax = 21, alpha = 0.4) +
annotate('rect', xmin = 1793, xmax = 1795, ymin = 0, ymax = 21, alpha = 0.4) +
scale_x_continuous(limits=c(1450, 2005), breaks = seq(1450,2005,25)) +
theme(axis.text.x = element_text(angle = -90, hjust = 0, vjust = 0.2),
panel.grid.major = element_blank(), panel.grid.minor = element_blank())
print(compositerug)
Alright, let’s step up the game! Let’s try some climate response with daily climate data because trees don’t know what a month is. You can get daily climate data for the US from the PRISM Climate Group.
Load and format the data prior to analysis.
pcp_d <- read.csv("BTK_daily_pcp.csv", header = TRUE)
row.names(pcp_d) <- as.numeric(pcp_d$Year)
pcp_d <- pcp_d[,2:367] #subset only 1 to 366 days, not year column
crn_d <- read.csv("Btk_std.csv")
row.names(crn_d) <- as.numeric(crn_d$X) #set the years as row names
crn_d <- crn_d[2] #subset only growth index
head(crn_d)
Take a look at the daily climate data
glimpse_daily_data(env_data = pcp_d, tidy_env_data = FALSE, na.color = "white")
Analyze growth vs climate with a fixed window width. This is set in the script using the fixed_width =
argument.
fixed_width <- daily_response(response = crn_d, env_data = pcp_d,
method = "cor", fixed_width = 60, row_names_subset = TRUE,
remove_insignificant = TRUE, alpha = 0.05)
fixed_width$plot_extreme #creates a plot showing best correlated period
You can compare the response across two periods of analysis to assess time stability, or you can leave the subset to all the years.
btk_past <- daily_response(response = crn_d, env_data = pcp_d,
method = "cor", lower_limit = 50, upper_limit = 70,
row_names_subset = TRUE, previous_year = TRUE,
remove_insignificant = TRUE, alpha = 0.05,
plot_specific_window = 60, subset_years = c(1981, 1998))
btk_present <- daily_response(response = crn_d, env_data = pcp_d,
method = "cor", lower_limit = 50, upper_limit = 70,
row_names_subset = TRUE, previous_year = TRUE,
remove_insignificant = TRUE, alpha = 0.05,
plot_specific_window = 60, subset_years = c(1999, 2016))
Now you can plot the results.
btk_past$plot_heatmap
btk_present$plot_heatmap
btk_past$plot_specific #choose a specific window length to plot if you set this above
btk_present$plot_specific
Load data
data(example_proxies_individual)
data(LJ_daily_temperatures)
Example PCA - just create a data frame with multiple crns
example_PCA <- daily_response(response = example_proxies_individual,
env_data = LJ_daily_temperatures, method = "lm",
lower_limit = 60, upper_limit = 70,
row_names_subset = TRUE, remove_insignificant =
TRUE, alpha = 0.01, PCA_transformation = TRUE,
components_selection = "manual", N_components = 2)
Summary output and plot for the PCA analysis.
summary(example_PCA$PCA_output)
Importance of components:
Comp.1 Comp.2 Comp.3 Comp.4 Comp.5
Standard deviation 1.8691406 1.5007125 1.1867201 0.87168877 0.79432916
Proportion of Variance 0.3493686 0.2252138 0.1408305 0.07598413 0.06309588
Cumulative Proportion 0.3493686 0.5745824 0.7154129 0.79139704 0.85449292
Comp.6 Comp.7 Comp.8 Comp.9
Standard deviation 0.68818504 0.61467322 0.57233447 0.49611738
Proportion of Variance 0.04735987 0.03778232 0.03275667 0.02461325
Cumulative Proportion 0.90185279 0.93963510 0.97239178 0.99700502
Comp.10
Standard deviation 0.173060035
Proportion of Variance 0.002994978
Cumulative Proportion 1.000000000
example_PCA$plot_heatmap
Load the sample data.
data(data_TRW)
data(KRE_daily_temperatures)
Reconstruction
example_reconstruction_lin <- daily_response(response = data_TRW,
env_data = KRE_daily_temperatures, method = "lm",
metric = "r.squared", lower_limit = 30,
upper_limit = 40, row_names_subset = TRUE,
temporal_stability_check = "progressive",
cross_validation_type = "randomized", k = 3)
Reconstruction Plots
example_reconstruction_lin$plot_extreme
example_reconstruction_lin$temporal_stability
example_reconstruction_lin$cross_validation
example_reconstruction_lin$transfer_function
linear_model <- lm(Optimized_return ~ TRW, data = example_reconstruction_lin$optimized_return)
reconstruction <- data.frame(predictions = predict(linear_model, newdata = data_TRW))
plot(row.names(data_TRW), reconstruction$predictions, type = "l", xlab = "Year", ylab = "Mean temperature May 15 - Jun 27 [°C]")
Load data
flow_d <- read.csv("BighornXavier_r.csv", header = TRUE)
row.names(flow_d) <- as.numeric(flow_d$year)
flow_d <- flow_d[,2:13] #subset only months 1 to 12, not year column
Run analysis with split period
flow_past <- monthly_response(response = crn_d, env_data = flow_d,
method = "cor", row_names_subset = TRUE, previous_year = TRUE,
remove_insignificant = TRUE, alpha = 0.05, subset_years = c(1935,
1976), aggregate_function = 'mean')
flow_present <- monthly_response(response = crn_d, env_data =
    flow_d, method = "cor", row_names_subset =
    TRUE, alpha = 0.05, previous_year = TRUE, remove_insignificant =
    TRUE, subset_years = c(1977, 2016), aggregate_function = 'mean')
flow_past$plot_heatmap
flow_present$plot_heatmap
flow_past$plot_extreme
flow_present$plot_extreme
You still need to get crns in column format
flow_PCA <- monthly_response(response = example_proxies_individual,
env_data = flow_d, method = "lm", row_names_subset = TRUE,
remove_insignificant = TRUE, alpha = 0.01, PCA_transformation =
TRUE, previous_year = TRUE, components_selection = "manual",
N_components = 2)
summary(flow_PCA$PCA_output)
Importance of components:
Comp.1 Comp.2 Comp.3 Comp.4 Comp.5
Standard deviation 1.8658357 1.5006020 1.1926249 0.87691842 0.80005480
Proportion of Variance 0.3481343 0.2251806 0.1422354 0.07689859 0.06400877
Cumulative Proportion 0.3481343 0.5733149 0.7155503 0.79244893 0.85645770
Comp.6 Comp.7 Comp.8 Comp.9
Standard deviation 0.69546274 0.6144111 0.54823741 0.49391586
Proportion of Variance 0.04836684 0.0377501 0.03005643 0.02439529
Cumulative Proportion 0.90482454 0.9425746 0.97263107 0.99702636
Comp.10
Standard deviation 0.172442402
Proportion of Variance 0.002973638
Cumulative Proportion 1.000000000
flow_PCA$plot_heatmap
flow_PCA$plot_extreme
This package provides functions for the calculation and plotting of synchrony in tree growth from tree-ring width chronologies (TRW index).
Load Data
data(conifersIP) #note the format of the data if you choose to use this analysis
head(conifersIP)
Calculate synchrony for null.model (broad evaluation, mBE) and homoscedastic variant of unstructured model (or full, mUN) for conifersIP data, and heteroscedastic variant for 1970-1999 period.
Fit the homoscedastic set of varcov models (mBE, mNE, mCS, mUN) using taxonomic grouping criteria (i.e. Species)
ModHm <- dendro.varcov(TRW ~ Code, varTime = "Year", varGroup = "Species",
                       data = conifersIP, homoscedastic = TRUE)
[1] "Please wait. I am fitting the models now :)"
summary(ModHm)# Class and length of list elements
Length Class Mode
mBE 18 lme list
mNE 18 lme list
mCS 18 lme list
mUN 18 lme list
Examine synchrony for mBE and mUN models
sync(ModHm, modname = "mBE")
$Within_Group_Synchrony
Modname a_all SE
1 mBE 0.309 0.043
attr(,"class")
[1] "sync"
sync(ModHm, modname = "mUN")
$Within_Group_Synchrony
Modname GroupName a_Group SE_Group
1 mUN ABAL 0.280 0.041
2 mUN PINI 0.469 0.050
3 mUN PISY 0.318 0.044
$Between_Group_Synchrony
Modname GroupName a_betw_Grp SE_betw_Grp
1 mUN ABAL/PINI 0.278 0.041
2 mUN ABAL/PISY 0.270 0.040
3 mUN PINI/PISY 0.298 0.042
attr(,"class")
[1] "sync"
Subset the data from 1970 to 1999.
conif.30 <- conifersIP[conifersIP$Year > 1969 & conifersIP$Year < 2000,]
summary(conif.30$Year)
Min. 1st Qu. Median Mean 3rd Qu. Max.
1970 1977 1984 1984 1991 1999
Fit the heteroscedastic set of variance-covariance mixed models (mBE, mHeNE, mHeCS, mHeUN) using taxonomic grouping criteria (i.e. Species)
ModHt30 <- dendro.varcov(TRW ~ Code, varTime = "Year", varGroup = "Species",
                         data = conif.30, homoscedastic = FALSE)
[1] "Please wait. I am fitting the models now :)"
sync(ModHt30, modname = "mBE")
$Within_Group_Synchrony
Modname a_all SE
1 mBE 0.319 0.058
attr(,"class")
[1] "sync"
sync(ModHt30, modname = "mHeUN")
$Within_Group_Synchrony
Modname GroupName a_Group SE_Group
1 mHeUN ABAL 0.505 0.066
2 mHeUN PINI 0.414 0.064
3 mHeUN PISY 0.297 0.055
$Between_Group_Synchrony
Modname GroupName a_betw_Grp SE_betw_Grp
1 mHeUN ABAL/PINI 0.321 0.058
2 mHeUN ABAL/PISY 0.325 0.058
3 mHeUN PINI/PISY 0.259 0.051
attr(,"class")
[1] "sync"
This workbook was created as an R Markdown Notebook. When you execute code within the notebook, the results appear beneath the code. The resulting html is used to display the static contents via GitHub.