Currently, there are 9 functions associated with the sample verb in the sgsR package:
Algorithm | Description | Reference |
---|---|---|
sample_srs() | Simple random | |
sample_systematic() | Systematic | |
sample_strat() | Stratified | Queinnec, White, & Coops (2021) |
sample_sys_strat() | Systematic Stratified | |
sample_nc() | Nearest centroid | Melville & Stone (2016) |
sample_clhs() | Conditioned Latin hypercube | Minasny & McBratney (2006) |
sample_balanced() | Balanced sampling | Grafström & Lisic (2018) |
sample_ahels() | Adapted hypercube evaluation of a legacy sample | Malone, Minasny, & Brungard (2019) |
sample_existing() | Sub-sampling an existing sample | |
sample_srs

We have demonstrated a simple example of using the sample_srs() function in vignette("sgsR"). We will demonstrate additional examples below.
The input required for sample_srs() is a raster. This means that both sraster and mraster are supported for this function.
#--- perform simple random sampling ---#
sample_srs(
  raster = sraster, # input sraster
  nSamp = 200, # number of desired sample units
  plot = TRUE # plot samples
)
#> Simple feature collection with 200 features and 0 fields
#> Geometry type: POINT
#> Dimension: XY
#> Bounding box: xmin: 431250 ymin: 5337710 xmax: 438510 ymax: 5343230
#> Projected CRS: UTM Zone 17, Northern Hemisphere
#> First 10 features:
#> geometry
#> 1 POINT (431770 5340890)
#> 2 POINT (434330 5341750)
#> 3 POINT (432750 5339510)
#> 4 POINT (431990 5339850)
#> 5 POINT (435050 5343030)
#> 6 POINT (433510 5341850)
#> 7 POINT (434630 5337830)
#> 8 POINT (435470 5343210)
#> 9 POINT (435390 5340170)
#> 10 POINT (433170 5340970)
sample_srs(
  raster = mraster, # input mraster
  nSamp = 200, # number of desired sample units
  access = access, # define access road network
  mindist = 200, # minimum distance sample units must be apart from one another
  buff_inner = 50, # inner buffer - no sample units within this distance from road
  buff_outer = 200, # outer buffer - no sample units further than this distance from road
  plot = TRUE # plot samples
)
#> Simple feature collection with 200 features and 0 fields
#> Geometry type: POINT
#> Dimension: XY
#> Bounding box: xmin: 431130 ymin: 5337710 xmax: 438550 ymax: 5343170
#> Projected CRS: UTM Zone 17, Northern Hemisphere
#> First 10 features:
#> geometry
#> 1 POINT (431450 5342850)
#> 2 POINT (433370 5340870)
#> 3 POINT (438550 5338070)
#> 4 POINT (438350 5338590)
#> 5 POINT (434410 5339990)
#> 6 POINT (437530 5342030)
#> 7 POINT (438330 5339470)
#> 8 POINT (436850 5338450)
#> 9 POINT (431450 5342490)
#> 10 POINT (434890 5342530)
sample_systematic

The sample_systematic() function applies systematic sampling across an area, with the cellsize parameter defining the resolution of the tessellation. The tessellation shape can be modified using the square parameter: assigning TRUE (the default) results in a regular grid, while assigning FALSE results in a hexagonal grid.
The location of sample units can also be adjusted using the location parameter, where "centers" takes the center, "corners" takes all corners, and "random" takes a random location within each tessellation. Random start points and translations are applied when the function is called.
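To make the random translation concrete, here is a minimal base-R sketch. This is illustrative only: the extent values are hypothetical and this is not the package's internal implementation, which builds true (optionally hexagonal) tessellations.

```r
# Sketch: a regular grid of sample centers with a random origin offset,
# mimicking the random translation applied by systematic sampling.
set.seed(42)

xmin <- 431100; xmax <- 438560 # hypothetical extent
ymin <- 5337700; ymax <- 5343240
cellsize <- 1000

# random translation so the grid is not anchored to the extent corner
off_x <- runif(1, 0, cellsize)
off_y <- runif(1, 0, cellsize)

centers <- expand.grid(
  x = seq(xmin + off_x, xmax, by = cellsize),
  y = seq(ymin + off_y, ymax, by = cellsize)
)
nrow(centers) # one potential sample unit per grid cell
```

Re-running without the fixed seed gives a different grid placement each call, which is the behaviour described above.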
#--- perform grid sampling ---#
sample_systematic(
  raster = sraster, # input sraster
  cellsize = 1000, # grid distance
  plot = TRUE # plot samples
)
#> Simple feature collection with 38 features and 0 fields
#> Geometry type: POINT
#> Dimension: XY
#> Bounding box: xmin: 431122.5 ymin: 5337766 xmax: 438445.6 ymax: 5343227
#> Projected CRS: UTM Zone 17, Northern Hemisphere
#> First 10 features:
#> geometry
#> 1 POINT (436297.3 5343227)
#> 2 POINT (435313.9 5343046)
#> 3 POINT (434330.5 5342864)
#> 4 POINT (433347.1 5342683)
#> 5 POINT (432363.7 5342501)
#> 6 POINT (431380.3 5342320)
#> 7 POINT (438445.6 5342606)
#> 8 POINT (437462.2 5342425)
#> 9 POINT (436478.7 5342244)
#> 10 POINT (435495.3 5342062)
#--- perform grid sampling ---#
sample_systematic(
  raster = sraster, # input sraster
  cellsize = 500, # grid distance
  square = FALSE, # hexagonal tessellation
  location = "random", # randomly sample within tessellation
  plot = TRUE # plot samples
)
#> Simple feature collection with 176 features and 0 fields
#> Geometry type: POINT
#> Dimension: XY
#> Bounding box: xmin: 431132.4 ymin: 5337742 xmax: 438535.9 ymax: 5343231
#> Projected CRS: UTM Zone 17, Northern Hemisphere
#> First 10 features:
#> geometry
#> 1 POINT (431379.6 5342970)
#> 2 POINT (431235.5 5342688)
#> 3 POINT (431575.5 5343041)
#> 4 POINT (431831.4 5342717)
#> 5 POINT (431331 5341504)
#> 6 POINT (431791.2 5342100)
#> 7 POINT (432072.2 5342952)
#> 8 POINT (431253.6 5340850)
#> 9 POINT (431865.8 5341872)
#> 10 POINT (432084.9 5342740)
sample_systematic(
  raster = sraster, # input sraster
  cellsize = 500, # grid distance
  access = access, # define access road network
  buff_outer = 200, # outer buffer - no sample units further than this distance from road
  square = FALSE, # hexagonal tessellation
  location = "corners", # take corners instead of centers
  plot = TRUE
)
#> Simple feature collection with 618 features and 0 fields
#> Geometry type: POINT
#> Dimension: XY
#> Bounding box: xmin: 431316.9 ymin: 5337742 xmax: 438393.9 ymax: 5343239
#> Projected CRS: UTM Zone 17, Northern Hemisphere
#> First 10 features:
#> geometry
#> 1 POINT (431380.7 5342697)
#> 2 POINT (431380.7 5342697)
#> 3 POINT (431517 5342443)
#> 4 POINT (431973.7 5343178)
#> 5 POINT (431821.5 5342933)
#> 6 POINT (431517 5342443)
#> 7 POINT (431380.7 5342697)
#> 8 POINT (431821.5 5342933)
#> 9 POINT (431957.8 5342679)
#> 10 POINT (431805.5 5342433)
sample_strat

The sample_strat() function contains two methods to perform sampling:

"Queinnec" - Hierarchical sampling using a focal window to isolate contiguous groups of stratum pixels, originally developed by Martin Queinnec.

"random" - Traditional stratified random sampling. This method ignores much of the functionality of the algorithm, allowing users to apply standard stratified random sampling approaches without the use of a focal window to locate contiguous stratum cells.
method = "Queinnec"
Queinnec, M., White, J. C., & Coops, N. C. (2021). Comparing airborne and spaceborne photon-counting LiDAR canopy structural estimates across different boreal forest types. Remote Sensing of Environment, 262(August 2020), 112510.
This algorithm uses a moving window (wrow and wcol parameters) to filter the input sraster and prioritize sample unit allocation where stratum pixels are spatially grouped, rather than dispersed individually across the landscape.
Sampling is performed using 2 rules:

Rule 1 - Sample within spatially grouped stratum pixels, using the moving window defined by wrow and wcol.

Rule 2 - If no additional sample units exist to satisfy the desired sample size (nSamp), individual stratum pixels are sampled.
The rule applied to select each sample unit is defined in the rule attribute of the output samples. We give a few examples below:
#--- perform stratified random sampling ---#
sample_strat(
  sraster = sraster, # input sraster
  nSamp = 200 # desired sample size
)
#> Simple feature collection with 200 features and 3 fields
#> Geometry type: POINT
#> Dimension: XY
#> Bounding box: xmin: 431150 ymin: 5337750 xmax: 438530 ymax: 5343170
#> Projected CRS: UTM Zone 17, Northern Hemisphere
#> First 10 features:
#> strata type rule geometry
#> x 1 new rule1 POINT (436050 5342330)
#> x1 1 new rule1 POINT (434670 5341170)
#> x2 1 new rule1 POINT (434390 5342610)
#> x3 1 new rule1 POINT (436950 5338070)
#> x4 1 new rule1 POINT (433910 5342950)
#> x5 1 new rule1 POINT (437790 5338290)
#> x6 1 new rule1 POINT (434690 5341670)
#> x7 1 new rule1 POINT (435690 5342470)
#> x8 1 new rule1 POINT (436990 5338230)
#> x9 1 new rule1 POINT (433790 5342990)
In some cases, users might want to include an existing sample within the algorithm. In order to adjust the total number of sample units needed per stratum to reflect those already present in existing, we can use the intermediate function extract_strata(). This function takes the sraster and existing sample units and extracts the stratum for each. These sample units can then be included within sample_strat(), which adjusts the total sample units required per class based on their representation in existing.
#--- extract strata values to existing samples ---#
e.sr <- extract_strata(
  sraster = sraster, # input sraster
  existing = existing # existing samples to add strata value to
)
TIP!

sample_strat() requires the sraster input to have an attribute named strata and will give an error if it doesn't.
sample_strat(
  sraster = sraster, # input sraster
  nSamp = 200, # desired sample size
  access = access, # define access road network
  existing = e.sr, # existing sample with strata values
  mindist = 200, # minimum distance sample units must be apart from one another
  buff_inner = 50, # inner buffer - no sample units within this distance from road
  buff_outer = 200, # outer buffer - no sample units further than this distance from road
  plot = TRUE # plot samples
)
#> Simple feature collection with 400 features and 3 fields
#> Geometry type: POINT
#> Dimension: XY
#> Bounding box: xmin: 431190 ymin: 5337730 xmax: 438530 ymax: 5343210
#> Projected CRS: UTM Zone 17, Northern Hemisphere
#> First 10 features:
#> strata type rule geometry
#> 1 1 existing existing POINT (437770 5337930)
#> 2 1 existing existing POINT (437830 5341230)
#> 3 1 existing existing POINT (435710 5341910)
#> 4 1 existing existing POINT (432990 5341830)
#> 5 1 existing existing POINT (435050 5339450)
#> 6 1 existing existing POINT (438070 5341070)
#> 7 1 existing existing POINT (434350 5341950)
#> 8 1 existing existing POINT (433010 5340570)
#> 9 1 existing existing POINT (436970 5338190)
#> 10 1 existing existing POINT (434730 5342250)
The code in the example above defined the mindist parameter, which specifies the minimum Euclidean distance that new sample units must be from one another.
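A minimum-distance constraint can be sketched in base R as a greedy filter: keep a candidate point only if it is at least mindist from every point already kept. This is an illustration of the concept with hypothetical coordinates, not sgsR's internal code.

```r
# Sketch of a greedy minimum-distance filter (illustrative only).
min_dist_filter <- function(xy, mindist) {
  keep <- 1 # always keep the first candidate
  for (i in seq_len(nrow(xy))[-1]) {
    # distances from candidate i to every point kept so far
    d <- sqrt((xy[keep, 1] - xy[i, 1])^2 + (xy[keep, 2] - xy[i, 2])^2)
    if (all(d >= mindist)) keep <- c(keep, i)
  }
  xy[keep, , drop = FALSE]
}

set.seed(1)
pts <- cbind(runif(50, 0, 1000), runif(50, 0, 1000)) # hypothetical candidates
filtered <- min_dist_filter(pts, mindist = 200)
```

By construction, every pair of retained points is at least 200 units apart, at the cost of discarding some candidates.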
Notice that the sample units have type and rule attributes which outline whether they are existing or new, and whether rule1 or rule2 was used to select them. If type is existing (a user-provided existing sample), rule will be existing as well, as seen above.
sample_strat(
  sraster = sraster, # input
  nSamp = 200, # desired sample size
  access = access, # define access road network
  existing = e.sr, # existing samples with strata values
  include = TRUE, # include existing sample in nSamp total
  buff_outer = 200, # outer buffer - no samples further than this distance from road
  plot = TRUE # plot samples
)
#> Simple feature collection with 200 features and 3 fields
#> Geometry type: POINT
#> Dimension: XY
#> Bounding box: xmin: 431230 ymin: 5337750 xmax: 438530 ymax: 5343170
#> Projected CRS: UTM Zone 17, Northern Hemisphere
#> First 10 features:
#> strata type rule geometry
#> 1 1 existing existing POINT (437770 5337930)
#> 2 1 existing existing POINT (437830 5341230)
#> 3 1 existing existing POINT (435710 5341910)
#> 4 1 existing existing POINT (432990 5341830)
#> 5 1 existing existing POINT (435050 5339450)
#> 6 1 existing existing POINT (438070 5341070)
#> 7 1 existing existing POINT (434350 5341950)
#> 8 1 existing existing POINT (433010 5340570)
#> 9 1 existing existing POINT (436970 5338190)
#> 10 1 existing existing POINT (434730 5342250)
The include parameter determines whether existing sample units should be included in the total sample size defined by nSamp. By default, the include parameter is set to FALSE.
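The accounting behind include = TRUE can be sketched in base R with hypothetical counts: the new units needed per stratum are the per-stratum targets minus the existing units already in that stratum. An equal allocation is assumed here purely for simplicity (sgsR supports several allocation schemes).

```r
# Illustrative sketch of include = TRUE accounting (hypothetical counts).
nSamp <- 200
n_strata <- 4
target <- rep(nSamp / n_strata, n_strata) # equal allocation: 50 per stratum
existing_counts <- c(10, 25, 5, 40) # hypothetical existing units per stratum

# new units still required per stratum (never negative)
new_needed <- pmax(target - existing_counts, 0)
new_needed
sum(existing_counts + new_needed) # totals nSamp when no stratum is over-represented
```

With include = FALSE, by contrast, nSamp new units would be drawn regardless of the 80 existing units already on hand.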
method = "random"

Stratified random sampling with equal probability for all cells (using default algorithm values for mindist and no use of access functionality). In essence, this method performs the sample_srs algorithm within each stratum separately to meet the specified sample size.
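Conceptually, that per-stratum behaviour amounts to the following base-R sketch (hypothetical stratum data, not sgsR internals): split the cell indices by stratum, then draw a simple random sample within each.

```r
# Illustrative sketch: stratified random sampling as per-stratum SRS.
set.seed(123)
strata <- sample(1:4, 1000, replace = TRUE) # hypothetical stratum id per cell
cells <- seq_along(strata)
n_per_stratum <- 50 # e.g. nSamp = 200 split over 4 strata

samples <- unlist(lapply(split(cells, strata), function(idx) {
  sample(idx, n_per_stratum) # SRS within this stratum
}))
table(strata[samples]) # 50 sample units per stratum
```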
#--- perform stratified random sampling ---#
sample_strat(
  sraster = sraster, # input sraster
  method = "random", # stratified random sampling
  nSamp = 200, # desired sample size
  plot = TRUE # plot samples
)
#> Simple feature collection with 200 features and 2 fields
#> Geometry type: POINT
#> Dimension: XY
#> Bounding box: xmin: 431110 ymin: 5337710 xmax: 438550 ymax: 5343210
#> Projected CRS: UTM Zone 17, Northern Hemisphere
#> First 10 features:
#> strata type geometry
#> x 1 new POINT (432750 5343210)
#> x1 1 new POINT (437850 5338650)
#> x2 1 new POINT (437690 5339490)
#> x3 1 new POINT (431190 5340570)
#> x4 1 new POINT (438210 5342410)
#> x5 1 new POINT (434510 5341570)
#> x6 1 new POINT (438270 5340950)
#> x7 1 new POINT (433150 5341710)
#> x8 1 new POINT (437530 5339230)
#> x9 1 new POINT (436590 5339570)
sample_sys_strat

The sample_sys_strat() function implements systematic stratified sampling on an sraster. This function uses the same functionality as sample_systematic() but takes an sraster as input and performs sampling on each stratum iteratively.
#--- perform grid sampling on each stratum separately ---#
sample_sys_strat(
  sraster = sraster, # input sraster with 4 strata
  cellsize = 1000, # grid size
  plot = TRUE # plot output
)
#> Warning: [readStart] source already open for reading
#> Processing strata : 1
#> Warning: [extract] source already open for reading
#> Processing strata : 2
#> Warning: [extract] source already open for reading
#> Processing strata : 3
#> Warning: [extract] source already open for reading
#> Processing strata : 4
#> Warning: [extract] source already open for reading
#> Simple feature collection with 40 features and 1 field
#> Geometry type: POINT
#> Dimension: XY
#> Bounding box: xmin: 431270.7 ymin: 5337911 xmax: 438370.5 ymax: 5343035
#> Projected CRS: UTM Zone 17, Northern Hemisphere
#> First 10 features:
#> strata geometry
#> 1 1 POINT (437747.6 5342834)
#> 2 1 POINT (438370.5 5338406)
#> 3 1 POINT (437803.1 5339229)
#> 4 1 POINT (436668.2 5340876)
#> 5 1 POINT (435533.3 5342523)
#> 6 1 POINT (434709.9 5341955)
#> 7 1 POINT (435588.8 5338918)
#> 8 1 POINT (435021.4 5339741)
#> 9 1 POINT (433886.5 5341388)
#> 10 1 POINT (432751.6 5343035)
Just like with sample_systematic(), we can specify where we want our samples to fall within our tessellations. We specify location = "corners" below. Note that the tessellations are all saved to a list when details = TRUE should the user want to save them.
sample_sys_strat(
  sraster = sraster, # input sraster with 4 strata
  cellsize = 500, # grid size
  square = FALSE, # hexagon tessellation
  location = "corners", # samples on tessellation corners
  plot = TRUE # plot output
)
#> Processing strata : 1
#> Warning: [extract] source already open for reading
#> Processing strata : 2
#> Warning: [extract] source already open for reading
#> Processing strata : 3
#> Warning: [extract] source already open for reading
#> Processing strata : 4
#> Warning: [extract] source already open for reading
#> Simple feature collection with 1187 features and 1 field
#> Geometry type: POINT
#> Dimension: XY
#> Bounding box: xmin: 431109.3 ymin: 5337702 xmax: 438510.8 ymax: 5343231
#> Projected CRS: UTM Zone 17, Northern Hemisphere
#> First 10 features:
#> strata geometry
#> 1 1 POINT (438138.6 5337796)
#> 2 1 POINT (438222 5338072)
#> 3 1 POINT (438274.3 5339112)
#> 4 1 POINT (438357.7 5339388)
#> 5 1 POINT (438222 5338072)
#> 6 1 POINT (438024.3 5338283)
#> 7 1 POINT (438107.6 5338559)
#> 8 1 POINT (438138.6 5337796)
#> 9 1 POINT (438138.6 5337796)
#> 10 1 POINT (438357.7 5339388)
This sampling approach could be especially useful in combination with strat_poly() to ensure consistency of sampling across specific management units.
#--- read polygon coverage ---#
poly <- system.file("extdata", "inventory_polygons.shp", package = "sgsR")
fri <- sf::st_read(poly)
#> Reading layer `inventory_polygons' from data source
#> `C:\Users\goodb\AppData\Local\Temp\RtmpKwkxG7\Rinst2ec85f71703f\sgsR\extdata\inventory_polygons.shp'
#> using driver `ESRI Shapefile'
#> Simple feature collection with 632 features and 3 fields
#> Geometry type: MULTIPOLYGON
#> Dimension: XY
#> Bounding box: xmin: 431100 ymin: 5337700 xmax: 438560 ymax: 5343240
#> Projected CRS: UTM_Zone_17_Northern_Hemisphere
#--- stratify polygon coverage ---#
#--- specify polygon attribute to stratify ---#
attribute <- "NUTRIENTS"
#--- specify features within attribute & how they should be grouped ---#
#--- as a single vector ---#
features <- c("poor", "rich", "medium")
#--- get polygon stratification ---#
srasterpoly <- strat_poly(
  poly = fri,
  attribute = attribute,
  features = features,
  raster = sraster
)
#--- systematic stratified sampling for each stratum ---#
sample_sys_strat(
  sraster = srasterpoly, # input sraster from strat_poly() with 3 strata
  cellsize = 500, # grid size
  square = FALSE, # hexagon tessellation
  location = "random", # randomize plot location
  plot = TRUE # plot output
)
#> Processing strata : 1
#> Warning: [extract] source already open for reading
#> Processing strata : 2
#> Warning: [extract] source already open for reading
#> Processing strata : 3
#> Warning: [extract] source already open for reading
#> Simple feature collection with 179 features and 1 field
#> Geometry type: POINT
#> Dimension: XY
#> Bounding box: xmin: 431115.5 ymin: 5337705 xmax: 438554.6 ymax: 5343240
#> Projected CRS: UTM Zone 17, Northern Hemisphere
#> First 10 features:
#> strata geometry
#> 1 1 POINT (432177.2 5343114)
#> 2 1 POINT (432546.4 5343207)
#> 3 1 POINT (432304.6 5342684)
#> 4 1 POINT (433067.1 5342912)
#> 5 1 POINT (432821.6 5342656)
#> 6 1 POINT (433560.7 5342583)
#> 7 1 POINT (435971.9 5343195)
#> 8 1 POINT (434004 5342693)
#> 9 1 POINT (434772.7 5342938)
#> 10 1 POINT (435483.1 5342823)
sample_nc

The sample_nc() function implements the Nearest Centroid sampling algorithm described in Melville & Stone (2016). The algorithm uses kmeans clustering where the number of clusters (centroids) is equal to the desired sample size (nSamp).
Cluster centers are located, which then prompts the nearest neighbour mraster pixel for each cluster to be selected (assuming the default k parameter). These nearest neighbours are the output sample units.
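The centroid-then-nearest-pixel idea can be sketched in base R with stats::kmeans on a toy covariate matrix. The data here are hypothetical; sgsR performs this on the actual mraster pixel values.

```r
# Illustrative sketch of nearest-centroid selection (k = 1).
set.seed(7)
X <- matrix(rnorm(500 * 3), ncol = 3) # 500 hypothetical pixels, 3 metrics
nSamp <- 25

km <- kmeans(X, centers = nSamp)

# for each cluster center, pick the single nearest pixel as the sample unit
nearest <- apply(km$centers, 1, function(cen) {
  d2 <- colSums((t(X) - cen)^2) # squared distance of every pixel to this center
  which.min(d2)
})
length(unique(nearest)) # up to nSamp distinct sample units
```

Because the centers are spread through covariate space by kmeans, the selected pixels inherit that spread, which is the point of the algorithm.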
#--- perform nearest centroid sampling ---#
sample_nc(
  mraster = mraster, # input
  nSamp = 25, # desired sample size
  plot = TRUE
)
#> K-means being performed on 3 layers with 25 centers.
#> Simple feature collection with 25 features and 4 fields
#> Geometry type: POINT
#> Dimension: XY
#> Bounding box: xmin: 431990 ymin: 5338010 xmax: 438450 ymax: 5343050
#> Projected CRS: UTM Zone 17, Northern Hemisphere
#> First 10 features:
#> zq90 pzabove2 zsd kcenter geometry
#> 15509 5.86 55.9 1.38 1 POINT (435410 5342410)
#> 27274 12.90 75.0 3.30 2 POINT (431990 5341770)
#> 66746 9.86 64.4 2.57 3 POINT (438130 5339670)
#> 10797 17.50 88.4 4.35 4 POINT (438150 5342670)
#> 36036 20.40 92.7 4.62 5 POINT (435650 5341310)
#> 3725 20.10 71.7 6.12 6 POINT (438450 5343050)
#> 32617 4.58 29.5 1.06 7 POINT (434410 5341490)
#> 65471 7.54 13.9 2.02 8 POINT (435010 5339730)
#> 47146 3.07 6.7 0.60 9 POINT (434050 5340710)
#> 27558 15.90 73.2 4.14 10 POINT (437670 5341770)
Altering the k
parameter leads to a multiplicative
increase in output sample units where total output samples = \(nSamp * k\).
#--- perform nearest centroid sampling with k = 2 ---#
samples <- sample_nc(
  mraster = mraster, # input
  k = 2, # number of nearest neighbours to take for each kmeans center
  nSamp = 25, # desired sample size
  plot = TRUE
)
#> K-means being performed on 3 layers with 25 centers.
Visualizing what the kmeans centers and sample units look like is possible when using details = TRUE. The $kplot output provides a quick visualization of where the centers are based on a scatter plot of the first 2 layers in mraster. Notice that the centers are well distributed in covariate space and the chosen sample units are the closest pixels to each center (nearest neighbours).
#--- perform nearest centroid sampling with details ---#
details <- sample_nc(
  mraster = mraster, # input
  nSamp = 25, # desired sample number
  details = TRUE
)
#> K-means being performed on 3 layers with 25 centers.
#--- plot ggplot output ---#
details$kplot
sample_clhs

The sample_clhs() function implements the conditioned Latin hypercube (clhs) sampling methodology from the clhs package.
TIP!

A number of other functions in the sgsR package help to provide guidance on clhs sampling, including calculate_pop() and calculate_lhsOpt(). Check out these functions to better understand how sample numbers could be optimized.
The syntax for this function is similar to others shown above, although parameters like iter, which defines the number of iterations within the Metropolis-Hastings process, are important to consider. In these examples we use a low iter value for efficiency. The default value for iter within the clhs package is 10,000.
sample_clhs(
  mraster = mraster, # input
  nSamp = 200, # desired sample size
  plot = TRUE, # plot
  iter = 100 # number of iterations
)
The cost parameter defines the mraster covariate which is used to constrain the clhs sampling. An example could be the distance a pixel is from road access (e.g. from calculate_distance(); see the example below), terrain slope, the output from calculate_coobs(), or many others.
#--- cost constrained examples ---#
#--- calculate distance to access layer for each pixel in mr ---#
mr.c <- calculate_distance(
  raster = mraster, # input
  access = access, # define access road network
  plot = TRUE # plot
)
#> |---------|---------|---------|---------|=========================================
sample_clhs(
  mraster = mr.c, # input
  nSamp = 250, # desired sample size
  iter = 100, # number of iterations
  cost = "dist2access", # cost parameter - name defined in calculate_distance()
  plot = TRUE # plot
)
sample_balanced

The sample_balanced() algorithm performs balanced sampling using methods from the BalancedSampling / SamplingBigData packages.
sample_balanced(
  mraster = mraster, # input
  nSamp = 200, # desired sample size
  plot = TRUE # plot
)
#> Simple feature collection with 200 features and 0 fields
#> Geometry type: POINT
#> Dimension: XY
#> Bounding box: xmin: 431110 ymin: 5337730 xmax: 438550 ymax: 5343230
#> Projected CRS: +proj=utm +zone=17 +ellps=GRS80 +units=m +no_defs
#> First 10 features:
#> geometry
#> 1 POINT (433050 5343230)
#> 2 POINT (433610 5343210)
#> 3 POINT (434450 5343210)
#> 4 POINT (435790 5343210)
#> 5 POINT (437410 5343150)
#> 6 POINT (431210 5343130)
#> 7 POINT (436190 5343130)
#> 8 POINT (437290 5343110)
#> 9 POINT (438310 5343110)
#> 10 POINT (438010 5343090)
sample_balanced(
  mraster = mraster, # input
  nSamp = 100, # desired sample size
  algorithm = "lcube", # algorithm type
  access = access, # define access road network
  buff_inner = 50, # inner buffer - no sample units within this distance from road
  buff_outer = 200 # outer buffer - no sample units further than this distance from road
)
#> Simple feature collection with 100 features and 0 fields
#> Geometry type: POINT
#> Dimension: XY
#> Bounding box: xmin: 431190 ymin: 5337750 xmax: 438510 ymax: 5343230
#> Projected CRS: +proj=utm +zone=17 +ellps=GRS80 +units=m +no_defs
#> First 10 features:
#> geometry
#> 1 POINT (434870 5343230)
#> 2 POINT (435650 5343210)
#> 3 POINT (435030 5343110)
#> 4 POINT (437370 5343030)
#> 5 POINT (432950 5343010)
#> 6 POINT (437870 5342950)
#> 7 POINT (434230 5342890)
#> 8 POINT (435950 5342870)
#> 9 POINT (433770 5342850)
#> 10 POINT (435050 5342850)
sample_ahels

The sample_ahels() function performs the adapted Hypercube Evaluation of a Legacy Sample (ahels) algorithm using existing sample data and an mraster. New sample units are allocated based on quantile ratios between the existing sample and the mraster covariate dataset.
This algorithm was adapted from that presented in the paper below, which we highly recommend.
Malone BP, Minasny B, Brungard C. 2019. Some methods to improve the utility of conditioned Latin hypercube sampling. PeerJ 7:e6451 DOI 10.7717/peerj.6451
This algorithm:

Determines the quantile distributions of existing sample units and mraster covariates.

Determines the quantiles where there is a disparity between sample units and covariates.

Prioritizes sampling within those quantiles to improve representation.
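The quantile-ratio idea behind those steps can be sketched in base R with hypothetical data (this is an illustration, not the package's exact computation): bin a covariate into nQuant quantiles, then compare the proportion of sample units in each bin to the proportion of pixels.

```r
# Illustrative sketch of quantile coverage ratios (ahels-style).
set.seed(99)
pop <- rnorm(10000) # hypothetical covariate values for all pixels
samp <- sample(pop, 200) # hypothetical existing sample
nQuant <- 10

breaks <- quantile(pop, probs = seq(0, 1, length.out = nQuant + 1))
pop_prop <- 1 / nQuant # each quantile bin holds 1/nQuant of pixels by construction
samp_bins <- cut(samp, breaks, include.lowest = TRUE)
ratios <- as.numeric(table(samp_bins)) / length(samp) / pop_prop
round(ratios, 2) # ~1 = representative; > 1 over-, < 1 under-represented
```

Bins with the lowest ratios are where an ahels-style approach would prioritize new sample units.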
To use this function, the user must first specify the number of quantiles (nQuant) followed by either the nSamp (total number of desired sample units to be added) or the threshold (sampling ratio vs. covariate coverage ratio for quantiles; default is 0.9) parameter.
#--- remove `type` variable from existing - causes plotting issues ---#
existing <- existing %>% select(-type)

sample_ahels(
  mraster = mraster,
  existing = existing, # existing sample
  plot = TRUE # plot
)
#> Simple feature collection with 339 features and 7 fields
#> Geometry type: POINT
#> Dimension: XY
#> Bounding box: xmin: 431150 ymin: 5337750 xmax: 438530 ymax: 5343210
#> Projected CRS: UTM Zone 17, Northern Hemisphere
#> First 10 features:
#> type.x zq90 pzabove2 zsd strata type.y rule geometry
#> 1 existing 5.34 63.9 1.20 1 new rule1 POINT (437770 5337930)
#> 2 existing 7.23 69.0 1.64 1 new rule1 POINT (437830 5341230)
#> 3 existing 9.57 78.1 2.51 1 new rule1 POINT (435710 5341910)
#> 4 existing 5.07 2.8 1.18 1 new rule1 POINT (432990 5341830)
#> 5 existing 3.94 7.6 0.79 1 new rule1 POINT (435050 5339450)
#> 6 existing 6.67 77.2 1.41 1 new rule1 POINT (438070 5341070)
#> 7 existing 3.22 17.1 0.59 1 new rule1 POINT (434350 5341950)
#> 8 existing 7.69 41.1 1.99 1 new rule1 POINT (433010 5340570)
#> 9 existing 8.22 89.4 1.68 1 new rule1 POINT (436970 5338190)
#> 10 existing 8.61 51.0 2.16 1 new rule1 POINT (434730 5342250)
TIP!

Notice that no threshold, nSamp, or nQuant were defined. That is because the default settings are threshold = 0.9 and nQuant = 10.
The first matrix output shows the quantile ratios between the sample and the covariates. A value of 1.0 indicates that the sample is representative of quantile coverage. Values > 1.0 indicate over-representation of sample units, while values < 1.0 indicate under-representation.
sample_ahels(
  mraster = mraster,
  existing = existing, # existing sample
  nQuant = 20, # define 20 quantiles
  nSamp = 300 # desired sample size
)
#> Simple feature collection with 500 features and 7 fields
#> Geometry type: POINT
#> Dimension: XY
#> Bounding box: xmin: 431110 ymin: 5337710 xmax: 438550 ymax: 5343210
#> Projected CRS: UTM Zone 17, Northern Hemisphere
#> First 10 features:
#> type.x zq90 pzabove2 zsd strata type.y rule geometry
#> 1 existing 5.34 63.9 1.20 1 new rule1 POINT (437770 5337930)
#> 2 existing 7.23 69.0 1.64 1 new rule1 POINT (437830 5341230)
#> 3 existing 9.57 78.1 2.51 1 new rule1 POINT (435710 5341910)
#> 4 existing 5.07 2.8 1.18 1 new rule1 POINT (432990 5341830)
#> 5 existing 3.94 7.6 0.79 1 new rule1 POINT (435050 5339450)
#> 6 existing 6.67 77.2 1.41 1 new rule1 POINT (438070 5341070)
#> 7 existing 3.22 17.1 0.59 1 new rule1 POINT (434350 5341950)
#> 8 existing 7.69 41.1 1.99 1 new rule1 POINT (433010 5340570)
#> 9 existing 8.22 89.4 1.68 1 new rule1 POINT (436970 5338190)
#> 10 existing 8.61 51.0 2.16 1 new rule1 POINT (434730 5342250)
Notice that the total number of samples is 500. This value is the sum of existing sample units (200) and the number of new sample units defined by nSamp = 300.
sample_existing

Acknowledging that existing sample networks are common is important. There is significant investment into these samples, and in order to keep inventories up-to-date, we often need to collect new data for sample units. The sample_existing algorithm provides the user with methods for sub-sampling an existing sample network should the financial / logistical resources not be available to collect data at all sample units. The function allows users to choose between algorithm types (type = "clhs" - default, type = "balanced", type = "srs", type = "strat"). Differences in type result in calling internal sample_existing_*() functions (sample_existing_clhs() (default), sample_existing_balanced(), sample_existing_srs(), sample_existing_strat()). These functions are not exported to be used stand-alone; however, they employ the same functionality as their sample_clhs() etc. counterparts.
While using sample_existing(), should the user wish to specify algorithm-specific parameters (e.g. algorithm = "lcube" in sample_balanced() or allocation = "equal" in sample_strat()), they can specify them within sample_existing() as if calling the function directly.
I give applied examples for all methods below that are based on the following scenario:

We have a systematic sample where sample units are 200 m apart.

We know we only have resources to sample 300 of them.

We have some ALS data available (mraster), which we can use to improve knowledge of the metric populations.

See our existing sample for the scenario below.
#--- generate existing samples and extract metrics ---#
existing <- sample_systematic(raster = mraster, cellsize = 200, plot = TRUE)
sample_existing(type = "clhs")

The algorithm is unique in that it has two fundamental approaches: sub-sampling using only existing and the attributes it contains, or guiding the sub-sample with raster distributions.

#--- sub sample using existing attributes only ---#
sample_existing(existing = existing, nSamp = 300, type = "clhs")
#> Simple feature collection with 300 features and 3 fields
#> Geometry type: POINT
#> Dimension: XY
#> Bounding box: xmin: 431166.7 ymin: 5337700 xmax: 438559.5 ymax: 5343229
#> Projected CRS: UTM Zone 17, Northern Hemisphere
#> First 10 features:
#> zq90 pzabove2 zsd geometry
#> 858 21.20 94.6 4.15 POINT (432611.9 5338202)
#> 733 21.10 95.4 3.72 POINT (432970.8 5339860)
#> 62 19.60 94.2 4.73 POINT (437390.7 5342232)
#> 354 22.20 92.2 5.65 POINT (435806.4 5340260)
#> 703 11.30 84.4 2.73 POINT (433572.2 5339296)
#> 463 7.19 6.0 1.94 POINT (434917.3 5340163)
#> 490 5.51 1.3 2.62 POINT (435074.1 5339550)
#> 646 26.30 95.8 6.59 POINT (433583.5 5340017)
#> 810 17.20 50.5 5.78 POINT (432358.1 5339704)
#> 9 13.40 90.1 2.96 POINT (438123.1 5342942)
raster distributions

Our systematic sample of ~900 plots is fairly comprehensive, however we can generate a true population distribution through the inclusion of the ALS metrics in the sampling process. The metrics will be included in internal Latin hypercube sampling to help guide sub-sampling of existing.
#--- sub sample using raster distributions ---#
sample_existing(
  existing = existing, # our existing sample
  nSamp = 300, # desired sample size
  raster = mraster, # include mraster metrics to guide sampling of existing
  plot = TRUE # plot
)
#> Simple feature collection with 300 features and 3 fields
#> Geometry type: POINT
#> Dimension: XY
#> Bounding box: xmin: 431129.5 ymin: 5337700 xmax: 438556.3 ymax: 5343237
#> Projected CRS: +proj=utm +zone=17 +ellps=GRS80 +units=m +no_defs
#> First 10 features:
#> zq90 pzabove2 zsd geometry
#> 91950 9.96 85.1 2.42 POINT (433669.2 5338407)
#> 91986 21.00 82.6 5.75 POINT (431984.7 5340653)
#> 91877 21.70 94.7 3.43 POINT (432765.5 5340918)
#> 91779 14.50 95.3 2.99 POINT (433811.5 5340402)
#> 91835 2.76 7.5 0.45 POINT (434233.4 5339008)
#> 91515 9.92 41.4 2.63 POINT (436732.7 5339191)
#> 91686 8.22 16.5 2.20 POINT (434965.7 5339718)
#> 91861 12.20 89.7 2.59 POINT (434823.4 5337723)
#> 91624 22.80 92.2 5.78 POINT (435843.6 5339094)
#> 91477 15.00 51.7 4.21 POINT (435227.7 5342266)
The sample distribution again mimics the population distribution quite well! Now let's try using a cost variable to constrain the sub-sample.
#--- create distance from roads metric ---#
dist <- calculate_distance(raster = mraster, access = access)
#> |---------|---------|---------|---------|=========================================
#--- sub sample using a cost constraint ---#
sample_existing(
  existing = existing, # our existing sample
  nSamp = 300, # desired sample size
  raster = dist, # include mraster metrics to guide sampling of existing
  cost = 4, # either provide the index (band number) or the name of the cost layer
  plot = TRUE # plot
)
#> Simple feature collection with 300 features and 4 fields
#> Geometry type: POINT
#> Dimension: XY
#> Bounding box: xmin: 431155.3 ymin: 5337723 xmax: 438488.4 ymax: 5343237
#> Projected CRS: +proj=utm +zone=17 +ellps=GRS80 +units=m +no_defs
#> First 10 features:
#> zq90 pzabove2 zsd dist2access geometry
#> 91929 20.20 95.4 3.29 171.4450 POINT (432862.5 5340028)
#> 91631 14.80 95.9 3.22 243.9676 POINT (435085.4 5340271)
#> 91485 12.20 80.1 3.13 187.7314 POINT (437659 5338122)
#> 91255 17.30 92.7 4.24 358.8087 POINT (437607.4 5341896)
#> 92002 18.40 91.6 4.38 195.5259 POINT (432683 5339199)
#> 91680 17.00 91.7 3.47 198.0760 POINT (435723.9 5338541)
#> 91629 22.00 92.6 4.92 101.3995 POINT (435302 5339935)
#> 91748 16.90 93.5 3.66 368.9830 POINT (434087.9 5340342)
#> 91975 17.00 92.1 3.97 579.6924 POINT (433284.4 5338635)
#> 91690 3.74 37.2 0.75 156.6058 POINT (434315.9 5340727)
Finally, should the user wish to further constrain the sample based on access, as with the other sampling approaches in sgsR, that is also possible.
#--- ensure access and existing are in the same CRS ---#
sf::st_crs(existing) <- sf::st_crs(access)
#--- sub-sample using the cost layer and access constraints ---#
sample_existing(
existing = existing, # our existing sample
nSamp = 300, # desired sample size
raster = dist, # mraster metrics plus distance-to-access cost layer to guide sampling
cost = 4, # either provide the index (band number) or the name of the cost layer
access = access, # roads layer
buff_inner = 50, # inner buffer - no sample units within this distance from road
buff_outer = 300, # outer buffer - no sample units further than this distance from road
plot = TRUE
) # plot
#> Simple feature collection with 300 features and 4 fields
#> Geometry type: POINT
#> Dimension: XY
#> Bounding box: xmin: 431155.3 ymin: 5337712 xmax: 438559.5 ymax: 5343237
#> Projected CRS: +proj=utm +zone=17 +ellps=GRS80 +units=m +no_defs
#> First 10 features:
#> zq90 pzabove2 zsd dist2access geometry
#> 91333 7.84 31.4 1.92 143.46549 POINT (435179.2 5342710)
#> 91607 13.40 79.9 3.69 282.74638 POINT (433187.4 5339524)
#> 91294 6.88 14.2 1.77 71.61624 POINT (438006.6 5339060)
#> 91647 12.30 94.9 3.43 63.13478 POINT (432623.2 5338923)
#> 91449 8.83 77.4 2.19 110.67972 POINT (434375.7 5341003)
#> 91355 4.33 10.1 1.03 67.79961 POINT (435167.9 5341989)
#> 91210 2.98 3.6 0.54 97.84857 POINT (437835.3 5342280)
#> 91340 17.40 88.7 3.56 115.09550 POINT (436419.1 5340416)
#> 91217 14.80 91.6 3.65 140.96187 POINT (438317.1 5341163)
#> 91228 10.60 96.0 1.95 80.54679 POINT (438040.6 5341223)
TIP!
The more constraints we add to sampling, the less likely we are to maintain strong correlations between the population and the sample, so it's always important to understand these limitations and plan accordingly.
sample_existing(type = "balanced")
When type = "balanced", users can define all parameters that are found within sample_balanced(). This means that one can change parameters such as algorithm and p.
sample_existing(existing = e, nSamp = 300, type = "balanced")
#> Simple feature collection with 300 features and 3 fields
#> Geometry type: POINT
#> Dimension: XY
#> Bounding box: xmin: 431106.9 ymin: 5337700 xmax: 438559.5 ymax: 5343237
#> Projected CRS: UTM Zone 17, Northern Hemisphere
#> First 10 features:
#> zq90 pzabove2 zsd geometry
#> 4 17.10 71.3 3.680000 POINT (438182.9 5343218)
#> 5 19.50 93.6 5.190000 POINT (438556.3 5342269)
#> 14 16.10 83.8 3.190000 POINT (438171.6 5342497)
#> 17 5.05 5.3 1.710000 POINT (437846.7 5343001)
#> 20 22.10 91.7 4.770000 POINT (438436.7 5341716)
#> 27 2.39 2.3 0.350000 POINT (437678.5 5342893)
#> 28 4.73 64.5 1.050000 POINT (437570.2 5343061)
#> 29 23.40 83.9 8.429999 POINT (437461.9 5343229)
#> 34 2.98 3.6 0.540000 POINT (437835.3 5342280)
#> 38 17.40 98.8 2.260000 POINT (437402.1 5342953)
sample_existing(existing = e, nSamp = 300, type = "balanced", algorithm = "lcube")
#> Simple feature collection with 300 features and 3 fields
#> Geometry type: POINT
#> Dimension: XY
#> Bounding box: xmin: 431110 ymin: 5337709 xmax: 438556.3 ymax: 5343226
#> Projected CRS: UTM Zone 17, Northern Hemisphere
#> First 10 features:
#> zq90 pzabove2 zsd geometry
#> 4 17.10 71.3 3.68 POINT (438182.9 5343218)
#> 5 19.50 93.6 5.19 POINT (438556.3 5342269)
#> 8 19.20 89.8 5.78 POINT (438231.4 5342773)
#> 19 21.30 81.5 5.96 POINT (438545 5341548)
#> 32 15.80 82.1 4.04 POINT (438051.9 5341944)
#> 33 11.00 55.0 2.80 POINT (437943.6 5342112)
#> 34 2.98 3.6 0.54 POINT (437835.3 5342280)
#> 39 24.40 96.8 7.55 POINT (437293.8 5343121)
#> 40 25.70 97.2 6.75 POINT (438533.7 5340827)
#> 43 11.30 76.3 3.01 POINT (438208.8 5341331)
sample_existing(type = "srs")
The simplest, type = "srs", randomly selects sample units.
sample_existing(existing = e, nSamp = 300, type = "srs")
#> Simple feature collection with 300 features and 3 fields
#> Geometry type: POINT
#> Dimension: XY
#> Bounding box: xmin: 431166.7 ymin: 5337723 xmax: 438522.4 ymax: 5343237
#> Projected CRS: UTM Zone 17, Northern Hemisphere
#> First 10 features:
#> zq90 pzabove2 zsd geometry
#> 1 20.4 65.6 7.06 POINT (432246.6 5343200)
#> 2 10.7 74.7 2.96 POINT (437610.5 5338567)
#> 3 13.5 23.1 4.44 POINT (435133.9 5339826)
#> 4 19.6 89.9 5.15 POINT (435817.7 5340981)
#> 5 19.8 94.7 5.40 POINT (433452.6 5338743)
#> 6 20.9 72.3 6.66 POINT (431465.8 5342935)
#> 7 12.0 77.0 3.18 POINT (437536.2 5340898)
#> 8 10.2 50.5 2.71 POINT (435339.2 5338769)
#> 9 17.4 84.3 3.74 POINT (433449.4 5342072)
#> 10 29.0 86.9 10.67 POINT (433344.2 5338911)
sample_existing(type = "strat")
When type = "strat", existing must have an attribute named strata (just like how sample_strat() requires a strata layer). If it doesn't exist, you will get an error. Let's define an sraster so that we are compliant.
sraster <- strat_kmeans(mraster = mraster, nStrata = 4)
e_strata <- extract_strata(sraster = sraster, existing = e)
When we do have a strata attribute, the function works very much the same as sample_strat() in that it allows the user to define the allocation method ("prop" - the default, "optim", "manual", "equal").
#--- proportional stratified sampling of existing ---#
sample_existing(existing = e_strata, nSamp = 300, type = "strat", allocation = "prop")
#> Simple feature collection with 299 features and 4 fields
#> Geometry type: POINT
#> Dimension: XY
#> Bounding box: xmin: 431110 ymin: 5337709 xmax: 438522.4 ymax: 5343226
#> Projected CRS: UTM Zone 17, Northern Hemisphere
#> First 10 features:
#> strata zq90 pzabove2 zsd geometry
#> 1 4 17.6 85.0 4.09 POINT (433400.9 5342516)
#> 2 4 19.0 93.0 4.09 POINT (436273.6 5341750)
#> 3 4 15.0 51.7 4.21 POINT (435227.7 5342266)
#> 4 4 18.9 82.6 4.03 POINT (432175.5 5342203)
#> 5 4 17.1 96.3 3.60 POINT (434162.2 5338011)
#> 6 4 19.2 87.3 4.31 POINT (437368.1 5340790)
#> 7 4 14.5 98.6 3.14 POINT (436068.3 5342807)
#> 8 4 21.5 93.8 4.03 POINT (435866.2 5340536)
#> 9 4 17.0 92.1 3.97 POINT (433284.4 5338635)
#> 10 4 21.6 97.3 4.28 POINT (435926 5340812)
TIP!
Remember that when allocation = "equal", the nSamp value will be allocated for each stratum. We get 400 sample units in our output below because we have 4 strata and nSamp = 100.
#--- equal stratified sampling of existing ---#
sample_existing(existing = e_strata, nSamp = 100, type = "strat", allocation = "equal")
#> Simple feature collection with 400 features and 4 fields
#> Geometry type: POINT
#> Dimension: XY
#> Bounding box: xmin: 431129.5 ymin: 5337709 xmax: 438548.2 ymax: 5343237
#> Projected CRS: UTM Zone 17, Northern Hemisphere
#> First 10 features:
#> strata zq90 pzabove2 zsd geometry
#> 1 4 15.9 93.6 4.28 POINT (435407.1 5343095)
#> 2 4 15.2 76.7 3.66 POINT (431671.1 5341878)
#> 3 4 12.9 91.1 3.19 POINT (433235.9 5339080)
#> 4 4 15.7 73.9 3.94 POINT (431443.1 5341493)
#> 5 4 14.7 93.5 3.11 POINT (431614.4 5338273)
#> 6 4 18.9 74.5 3.87 POINT (436826.5 5341630)
#> 7 4 20.3 95.8 4.19 POINT (436490.3 5341414)
#> 8 4 17.0 91.7 3.47 POINT (435723.9 5338541)
#> 9 4 17.9 89.7 4.04 POINT (431383.3 5341217)
#> 10 4 17.1 90.8 3.65 POINT (433680.5 5339128)
#--- manual stratified sampling of existing with user defined weights ---#
s <- sample_existing(existing = e_strata, nSamp = 100, type = "strat", allocation = "manual", weights = c(0.2, 0.6, 0.1, 0.1))
We can check the proportion of samples from each strata with:
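One quick way to do this is with base R's table() and prop.table() on the strata attribute. A minimal sketch, using a mock data frame to stand in for the s object returned above (so it runs without the sgsR example data):

```r
#--- check the proportion of sample units per stratum ---#
#--- mock sub-sample standing in for the `s` object from sample_existing() ---#
s <- data.frame(strata = c(1, 1, 2, 2, 2, 2, 2, 2, 3, 4))
prop.table(table(s$strata))
#>   1   2   3   4
#> 0.2 0.6 0.1 0.1
```

With manual allocation, these proportions should closely track the weights supplied (here mimicking weights = c(0.2, 0.6, 0.1, 0.1)).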
Finally, type = "optim" allows the user to define a raster metric to be used to optimize within-strata variances.
#--- optimal stratified sampling of existing using the zq90 metric ---#
sample_existing(existing = e_strata, nSamp = 100, type = "strat", allocation = "optim", raster = mraster, metric = "zq90")
#> Simple feature collection with 99 features and 4 fields
#> Geometry type: POINT
#> Dimension: XY
#> Bounding box: xmin: 431192.5 ymin: 5337780 xmax: 438559.5 ymax: 5343237
#> Projected CRS: UTM Zone 17, Northern Hemisphere
#> First 10 features:
#> strata zq90 pzabove2 zsd geometry
#> 1 4 16.8 91.9 4.61 POINT (437069 5339408)
#> 2 4 15.6 70.6 3.92 POINT (435171 5338661)
#> 3 4 17.5 89.3 4.35 POINT (436202.5 5340753)
#> 4 4 15.3 69.1 3.64 POINT (432634.5 5339644)
#> 5 4 15.5 82.8 3.31 POINT (437103 5341571)
#> 6 4 19.5 93.7 3.19 POINT (436034.4 5340644)
#> 7 4 18.8 95.9 3.20 POINT (431696.9 5339991)
#> 8 4 21.3 92.8 3.33 POINT (432443.8 5338093)
#> 9 4 16.5 97.2 2.07 POINT (431192.5 5339666)
#> 10 4 16.2 85.8 3.54 POINT (433016.1 5342744)
We see from the output that the requested number of sample units is returned as a sub-sample of existing. The plotted output shows cumulative frequency distributions of the population (all existing sample units) and the sub-sample we requested.