Recent activity

updates

Sandbox - Wed, 08/12/2020 - 00:48

dominant condition ecosite

Sandbox - Wed, 08/05/2020 - 13:30

NAIP overlay revisions

Sandbox - Wed, 08/05/2020 - 13:28

Attempts to build a decently-usable, but not quite fetchNASIS-level, gNATSGO SoilProfileCollection for stress testing the S4 methods

Gists - Mon, 08/03/2020 - 19:33
big-SPC-gNATSGO.R

library(aqp)
library(sf)
library(soilDB)
library(data.table)
library(microbenchmark)

# path to gNATSGO Geodatabase
dsn <- "~/geodata/gNATSGO/2020/gNATSGO_CONUS.gdb"

# This is one of the main bottlenecks (not the only one) to reading out of the GDB:
# chorizon is a pretty large amount of data. Let's save it as a flat file (2.1 GB uncompressed).
#
# In principle, I think all tables could be pre-read out of the GDB and then joined to a
# SPC "skeleton" based off of the chorizon data.
#
# system.time(h <- sf::read_sf(dsn = dsn, layer = "chorizon",
#                              stringsAsFactors = FALSE, as_tibble = FALSE))
# #    user  system elapsed
# # 216.374   5.678 222.533
#
# data.table::fwrite(h, file = "~/geodata/gNATSGO/2020/horizons.csv")

# read the data in from the flat file
# (TODO: consider SQLite storage of all tables and SELECT * FROM chorizon)
starttime <- Sys.time()
gnatsgo <- data.table::fread(file = "~/geodata/gNATSGO/2020/horizons.csv")
print(Sys.time() - starttime)
# Time difference of 9.053402 secs

system.time(depths(gnatsgo) <- cokey ~ hzdept_r + hzdepb_r)
#   user  system elapsed
# 78.714   1.057  77.307

system.time(l <- get_legend_from_GDB(dsn, "areasymbol != ''"))
#  user  system elapsed
# 0.196   0.000   0.195

system.time(m <- get_mapunit_from_GDB(dsn, "muname != ''"))
#  user  system elapsed
# 6.555   0.077   6.926

system.time(co <- get_component_from_GDB(dsn = dsn, "compname != ''"))
#   user  system elapsed
# 35.453   0.384  36.051

# need chunking
# cgd <- soilDB:::.get_cogeomordesc_from_GDB(dsn, co = co)
# pmg <- soilDB:::.get_copmgrp_from_GDB(gdb.path, co = co)

system.time(site(gnatsgo) <- co)
#  user  system elapsed
# 4.334   0.004   3.248

system.time(site(gnatsgo) <- m)
#  user  system elapsed
# 3.247   0.000   2.700

system.time(site(gnatsgo) <- l)
#  user  system elapsed
# 2.838   0.000   2.499

# one or two [random] pedons; one or two horizons [first, or first and second]
microbenchmark::microbenchmark(
  { a <- gnatsgo[round(runif(1, 1, length(gnatsgo))), 1] },
  { b <- gnatsgo[round(runif(1, 1, length(gnatsgo))), 1:2] },
  { c <- gnatsgo[round(runif(2, 1, length(gnatsgo))), 1] },
  { d <- gnatsgo[round(runif(2, 1, length(gnatsgo))), 1:2] },
  times = 10
)
# Unit: milliseconds
#                                                        expr      min       lq     mean   median       uq      max neval cld
#   { a <- gnatsgo[round(runif(1, 1, length(gnatsgo))), 1] } 116.9621 126.7610 131.7065 135.5488 136.5518 139.0037    10  a
# { b <- gnatsgo[round(runif(1, 1, length(gnatsgo))), 1:2] } 121.5565 134.4841 135.2277 137.6504 139.2550 141.0407    10  a
#   { c <- gnatsgo[round(runif(2, 1, length(gnatsgo))), 1] } 117.7989 137.6253 140.2133 138.9760 151.9303 156.8037    10  ab
# { d <- gnatsgo[round(runif(2, 1, length(gnatsgo))), 1:2] } 132.5007 135.7209 149.1425 154.1004 155.4409 161.5209    10  b

# open-ended j indexing (generally a pretty slow operation)
microbenchmark::microbenchmark({
  foo <- gnatsgo[, 1]
  foo <- NULL
}, {
  foo <- gnatsgo[, 2:3]
  foo <- NULL
}, times = 5)
# first horizon, or second and third horizons, all profiles
# Unit: seconds
#                                  expr      min       lq     mean   median       uq      max neval cld
#   { foo <- gnatsgo[, 1] foo <- NULL } 14.97619 15.46312 15.78757 16.09652 16.13384 16.26816     5  a
# { foo <- gnatsgo[, 2:3] foo <- NULL } 18.65325 18.68035 18.85315 18.83340 19.03189 19.06688     5  b
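Since assembling the SPC takes a couple of minutes of reads and joins, one option is to serialize the finished object and reload it in later sessions. A minimal caching sketch (the .rds path is an assumption, not part of the gist):

# cache the assembled SPC (hypothetical path) so later sessions can skip the GDB reads
saveRDS(gnatsgo, "~/geodata/gNATSGO/2020/gnatsgo-spc.rds")
gnatsgo <- readRDS("~/geodata/gNATSGO/2020/gnatsgo-spc.rds")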

region 2 lookup table and component AWC updates

Sandbox - Fri, 07/24/2020 - 20:26

Compare "mixing" methods -- using a simple depth-weighted average / dominant hue approach and the powerful, but somewhat costly, method in aqp::aggregateColo

Gists - Fri, 07/24/2020 - 15:58
mixed-colors-mollic.R

library(aqp)
library(soilDB)

## aqp_mixed_colors
#
# The goal here is to compare dry and moist colors when numerically "mixed" over certain depth intervals.
# `aqp_mixed_colors` is vectorized over profiles -- it performs color "mixing" by profile on the interval [ztop, zbot].
#
# It also compares "mixing" methods -- a simple depth-weighted average / dominant hue approach
# versus the powerful, but computationally costly, method in aqp::aggregateColor.

## Methods
## 1. depth-weighted average value and chroma + dominant hue in interval ("simple" aggregation)
## 2. conversion of Munsell -> RGB -> CIELAB2000 mix -> RGB -> Munsell (via aqp::aggregateColor)
#
# Results are returned in a three-part list reflecting the various methods and color data used.

aqp_mixed_colors <- function(p, ztop = 0, zbot = 18,
                             m_hue = "m_hue", m_value = "m_value", m_chroma = "m_chroma",
                             d_hue = "d_hue", d_value = "d_value", d_chroma = "d_chroma") {

  # trunc: new wrapper function around glomApply for constant top and bottom depth
  p.trc <- trunc(p, ztop, zbot)

  # calculate depth weights (horizon thickness)
  hzd <- horizonDepths(p)
  p.trc$wt <- p.trc[[hzd[2]]] - p.trc[[hzd[1]]]

  # iterate over profiles, calculating depth-weighted average value/chroma + dominant hues
  # frameify = TRUE means return a data.frame result for _each profile_
  mixed <- profileApply(p.trc, frameify = TRUE, function(p.sub) {
    dwt <- aggregate(p.sub[["wt"]], by = list(p.sub[[d_hue]]), sum)
    mwt <- aggregate(p.sub[["wt"]], by = list(p.sub[[m_hue]]), sum)
    if (!nrow(dwt)) dwt <- data.frame(Group = NA, x = 1)
    if (!nrow(mwt)) mwt <- data.frame(Group = NA, x = 1)

    # construct result
    res <- data.frame(
      id = profile_id(p.sub), # profile ID
      n_hz = nrow(p.sub),     # number of horizons
      dominant_d_hue = dwt[which(dwt$x == max(dwt$x, na.rm = TRUE))[1], 1],
      dominant_m_hue = mwt[which(mwt$x == max(mwt$x, na.rm = TRUE))[1], 1],
      wt_d_value = weighted.mean(p.sub[[d_value]], p.sub[["wt"]]),
      wt_m_value = weighted.mean(p.sub[[m_value]], p.sub[["wt"]]),
      wt_d_chroma = weighted.mean(p.sub[[d_chroma]], p.sub[["wt"]]),
      wt_m_chroma = weighted.mean(p.sub[[m_chroma]], p.sub[["wt"]]))

    # put idname into ID name slot in result
    names(res)[1] <- idname(p.sub)
    return(res)
  })

  ### calculate colors mixed in LAB space

  ## moist
  p.trc$m_lab <- aqp::munsell2rgb(p.trc[[m_hue]], p.trc[[m_value]], p.trc[[m_chroma]])
  m_res <- aqp::aggregateColor(p.trc, groups = idname(p.trc), k = 1, col = "m_lab")$aggregate.data
  m_res_missing <- which(!profile_id(p.trc) %in% m_res[[idname(p.trc)]])

  # deal with missing values
  if (length(m_res_missing)) {
    m_res.tmp <- m_res[NA, ][1:length(m_res_missing), ]
    m_res.tmp[[idname(p.trc)]] <- profile_id(p.trc)[m_res_missing]
    m_res <- rbind(m_res, m_res.tmp)
    # reapply original order
    m_res <- m_res[match(profile_id(p.trc), m_res[[idname(p.trc)]]), ]
  }

  ## dry
  p.trc$d_lab <- aqp::munsell2rgb(p.trc[[d_hue]], p.trc[[d_value]], p.trc[[d_chroma]])
  d_res <- aqp::aggregateColor(p.trc, groups = idname(p.trc), k = 1, col = "d_lab")$aggregate.data
  d_res_missing <- which(!profile_id(p.trc) %in% d_res[[idname(p.trc)]])

  # deal with missing values (template from d_res; original had a m_res copy-paste bug here)
  if (length(d_res_missing)) {
    d_res.tmp <- d_res[NA, ][1:length(d_res_missing), ]
    d_res.tmp[[idname(p.trc)]] <- profile_id(p.trc)[d_res_missing]
    d_res <- rbind(d_res, d_res.tmp)
    # reapply original order
    d_res <- d_res[match(profile_id(p.trc), d_res[[idname(p.trc)]]), ]
  }

  # construct final result
  res <- list()
  res$depth_weighted <- mixed
  res$m_aggregate <- m_res
  res$d_aggregate <- d_res
  return(res)
}

# a fake profile
dat0 <- data.frame(id = 1, top = 0, bottom = 50,
                   d_hue = "10YR", m_hue = "10YR",
                   d_value = 5, m_value = 3, d_chroma = 3, m_chroma = 3)

# a real profile
dat1 <- data.frame(id = 2,
                   top = c(0, 7, 27, 43), bottom = c(7, 27, 43, 59),
                   d_hue = c("10YR", "10YR", "10YR", "10YR"),
                   m_hue = c("10YR", "10YR", "10YR", "10YR"),
                   d_value = c(7, 4, 4, 5), m_value = c(6, 2.5, 2.5, 4),
                   d_chroma = c(2, 3, 3, 3), m_chroma = c(2, 2, 2, 3))

# same color data, depths shuffled a bit so more horizons fall in the surface 0-18 cm
dat2 <- data.frame(id = 3,
                   top = c(0, 5, 12, 16), bottom = c(5, 12, 16, 59),
                   d_hue = c("10YR", "10YR", "10YR", "10YR"),
                   m_hue = c("10YR", "10YR", "10YR", "10YR"),
                   d_value = c(7, 4, 4, 5), m_value = c(6, 2.5, 2.5, 4),
                   d_chroma = c(2, 3, 3, 3), m_chroma = c(2, 2, 2, 3))

dat <- rbind(dat0, dat1, dat2)
depths(dat) <- id ~ top + bottom

dat_test <- aqp_mixed_colors(dat)

data("loafercreek", package = "soilDB")
res <- aqp_mixed_colors(loafercreek)

# f <- fetchNASIS()
# res <- aqp_mixed_colors(f)

#' Construct a Munsell Hue Value/Chroma Code
#'
#' @param the_hue Character vector of hues (in Munsell notation, e.g. "10YR")
#' @param the_value Numeric vector of values
#' @param the_chroma Numeric vector of chromas
#' @param digits Number of digits to round value and chroma to (default: 0; integer values)
#' @param allow57chroma Allow chromas of 5 and 7 in the result? (default: FALSE; rounded to nearest even chroma)
#'
#' @return A character vector of Munsell color codes in the format \code{HUE VALUE/CHROMA}
#' @export hvc_to_munsell
#'
#' @examples
#'
#' hvc_to_munsell("10YR", 2, 2)
#' # [1] "10YR 2/2"
#'
hvc_to_munsell <- function(the_hue, the_value, the_chroma, digits = 0, allow57chroma = FALSE) {
  chroma <- the_chroma
  if (!allow57chroma) {
    # chromas of 5 and 7 are not chips available in the soil color book,
    # so divide input chromas in two, round, then
    # multiply by 2 to turn 5 -> [4 or 6] or 7 -> [6 or 8]
    idx <- which(round(chroma, digits) %in% c(5, 7))
    if (length(idx))
      chroma[idx] <- 2 * round(chroma[idx] / 2, digits)
  }
  res <- sprintf("%s %s/%s", the_hue, round(the_value, digits), round(chroma, digits))
  # allow for null chroma; convert other missing values to NA
  res[is.na(the_hue) | is.na(the_value)] <- NA
  return(res)
}

# this will round to the nearest integer as needed
d_depthweight <- hvc_to_munsell(res$depth_weighted$dominant_d_hue,
                                res$depth_weighted$wt_d_value,
                                res$depth_weighted$wt_d_chroma)
m_depthweight <- hvc_to_munsell(res$depth_weighted$dominant_m_hue,
                                res$depth_weighted$wt_m_value,
                                res$depth_weighted$wt_m_chroma)

good.m.idx <- which(!grepl("NA", m_depthweight) & !is.na(m_depthweight))
good.d.idx <- which(!grepl("NA", d_depthweight) & !is.na(d_depthweight))

# compare depth-weighted dry versus moist -- just as a general sanity check
idx <- good.m.idx[good.m.idx %in% good.d.idx][5:10]
# idx <- c(7, 23, 70, 78, 96) # nice test IDs for loafercreek that demonstrate some things

test <- data.frame(id = res$depth_weighted$peiid[idx],
                   dry = d_depthweight[idx],
                   moist = m_depthweight[idx])

# the moist colors (m2) are darker, as expected
aqp::colorContrastPlot(test$dry, test$moist, labels = c("Dry", "Moist"))

# there is a known "easter egg" in the loafercreek dataset, representing a
# fairly easy-to-commit data entry error (7.5R v.s. 7.5YR);
# note that the test indices include one bogus "7.5R" hue in dry, with prominent contrast

# get the aggregated Munsell chips from the 0-18 cm interval
d_aqp <- hvc_to_munsell(res$d_aggregate$munsell.hue,
                        res$d_aggregate$munsell.value,
                        res$d_aggregate$munsell.chroma)
m_aqp <- hvc_to_munsell(res$m_aggregate$munsell.hue,
                        res$m_aggregate$munsell.value,
                        res$m_aggregate$munsell.chroma)

# compare depth-weighted versus aqp-mixed colors
moist_data <- data.frame(id = res$depth_weighted$peiid,
                         dwt = m_depthweight, aqp = m_aqp,
                         ord = res$m_aggregate$munsell.sigma)
dry_data <- data.frame(id = res$depth_weighted$peiid,
                       dwt = d_depthweight, aqp = d_aqp,
                       ord = res$d_aggregate$munsell.sigma)

# remove NA values (mostly from weighted.mean missing values)
moist_data <- moist_data[good.m.idx, ]
dry_data <- dry_data[good.d.idx, ]

# order the data based on munsell.sigma
moist_data <- moist_data[order(moist_data$ord), ]
dry_data <- dry_data[order(dry_data$ord), ]

idx2 <- seq(1, nrow(moist_data), 8)
aqp::colorContrastPlot(moist_data$dwt[idx2], moist_data$aqp[idx2],
                       printMetrics = FALSE, col.cex = 0.0,
                       labels = c("nearest chip\ndepth-weighted mean V + C\nw/ dominant H",
                                  "aggregateColor"))
text(0.3, 0.42, "Comparison of 0-18cm Numerical Mixing Methods for the Mollic/Umbric Epipedon")

idx3 <- seq(1, nrow(dry_data), 8)
aqp::colorContrastPlot(dry_data$dwt[idx3], dry_data$aqp[idx3],
                       printMetrics = FALSE, col.cex = 0.0,
                       labels = c("nearest chip\ndepth-weighted mean V + C\nw/ dominant H",
                                  "aggregateColor"))
text(0.3, 0.42, "Comparison of 0-18cm Numerical Mixing Methods for the Mollic/Umbric Epipedon")
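Beyond the side-by-side plots, the disagreement between the two mixing methods could be quantified directly. A sketch (not part of the original gist) using aqp::colorContrast(), which returns hue/value/chroma differences, CIE dE00, and a soil-survey contrast class for each pair of chips:

# quantify method disagreement chip-by-chip (sketch; not in the original gist)
cc <- aqp::colorContrast(m1 = moist_data$dwt, m2 = moist_data$aqp)

# perceptual color difference and contrast class, per profile
summary(cc$dE00)
table(cc$cc)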

Example: a wrapper around munsell2rgb for the SoilProfileCollection object (should be dispatched via S4)

Gists - Wed, 07/15/2020 - 22:29
munsell2rgb-SPC.R

# I just named this "munsell2rgb2"... but there is no reason why it can't be
# "munsell2rgb,SoilProfileCollection-method" in aqp
munsell2rgb2 <- function(object, hue = "hue", value = "value", chroma = "chroma", as.spc = TRUE) {
  # makes a data.frame
  h <- horizons(object)
  drgb <- munsell2rgb(h[[hue]], h[[value]], h[[chroma]], returnLAB = TRUE)

  idn <- idname(object)
  hidn <- hzidname(object)

  # munsell2rgb does not return ID names (not inherently aware of the SPC)
  idcol <- data.frame(h[[idn]], h[[hidn]])
  colnames(idcol) <- c(idn, hidn)

  if (as.spc) {
    # horizons<- will ensure merge.data.table triggers if @horizons is a data.table
    horizons(object) <- cbind(idcol, drgb)
    return(object)
  } else {
    # TODO: remove :::
    return(aqp:::.as.data.frame.aqp(cbind(idcol, drgb), aqp_df_class(object)))
  }
}

library(aqp)
library(soilDB)
data(loafercreek)

# test with data.table
# loafercreek@horizons <- data.table::data.table(loafercreek@horizons)
# metadata(loafercreek)$aqp_df_class <- "data.table"

# test with tibble
# loafercreek@horizons <- tibble::as_tibble(loafercreek@horizons)
# metadata(loafercreek)$aqp_df_class <- "tbl_df"

# use the SPC interface
res <- munsell2rgb2(loafercreek, hue = "m_hue", value = "m_value", chroma = "m_chroma")

# inspect (it's the class we set after the join)
horizons(res)

# use the data.frame-like interface (uses aqp_df_class)
res2 <- munsell2rgb2(loafercreek, hue = "m_hue", value = "m_value", chroma = "m_chroma",
                     as.spc = FALSE)
res2
class(res2) # respects df class
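For reference, the S4 dispatch suggested in the comment might look something like the sketch below. The generic name spc_munsell2rgb is hypothetical (chosen so the real aqp::munsell2rgb is not masked); this is not how aqp is currently structured.

# sketch: register an S4 generic with a SoilProfileCollection method,
# approximating the "munsell2rgb,SoilProfileCollection-method" idea
# (spc_munsell2rgb is a hypothetical name, not part of aqp)
setGeneric("spc_munsell2rgb", function(object, ...) standardGeneric("spc_munsell2rgb"))

setMethod("spc_munsell2rgb", signature(object = "SoilProfileCollection"),
          function(object, hue = "hue", value = "value",
                   chroma = "chroma", as.spc = TRUE) {
            munsell2rgb2(object, hue = hue, value = value,
                         chroma = chroma, as.spc = as.spc)
          })

# dispatches on the SPC class
res3 <- spc_munsell2rgb(loafercreek, hue = "m_hue", value = "m_value", chroma = "m_chroma")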

Inspired by @smroecker + https://soil.copernicus.org/articles/6/269/2020/

Gists - Tue, 07/14/2020 - 17:02
oblique_coordinates.R

# oblique geographic coordinate: project (x, y) onto an axis rotated by angle theta
ogc_fun <- function(x, y, theta) {
  sqrt(x^2 + y^2) * cos(theta - atan(y / x))
}

x <- y <- -100:100
plot(x, y, type = "n", xlim = c(0, 200), ylim = c(-150, 150))
for (n in 1:8)
  lines(ogc_fun(x, y, (2 * pi / n)), col = n, lty = 2)
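As a sketch of how the oblique coordinates might be used in practice (everything here is hypothetical: the pts data, angles, and column names are made up for illustration), one could generate OGC predictors at several angles for a point dataset:

# sketch: OGC covariates at several angles for a (hypothetical) point dataset
pts <- data.frame(x = runif(100, 1, 1000), y = runif(100, 1, 1000))
angles <- pi * (1:6) / 6
ogc <- sapply(angles, function(a) ogc_fun(pts$x, pts$y, a))
colnames(ogc) <- paste0("ogc_", round(180 * angles / pi), "deg")
head(cbind(pts, ogc))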

Use aqp to handle missing data and typical pitfalls of field profile descriptions when doing I/O with the new R package mpspline2 by @obrl_soil

Gists - Sun, 07/12/2020 - 21:20
mpspline2-with-aqp.R

# making use of @obrl-soil's new mpspline2 package [soon to be on CRAN!]
#
# author: andrew g brown; 2020/07/12
#
# this is an attempt at creating a generic method for mass-preserving / equal-area splines
# on aqp SoilProfileCollection (SPC) data that may contain a large amount of missing values.
#
# it also provides an avenue to get 1cm continuous spline output back in a handy form
# (a 1cm sliced SPC).
#
# in the presence of missing values, getting the result back into the SPC is not always easy.
#
# the example code that follows should illustrate the major indexing pitfalls you need to
# be aware of.
#
# I have tried to cover corner cases, so please let me know if you find a way to break it!

# get development versions of soil packages + dependencies
# remotes::install_github(c("ncss-tech/aqp",
#                           "ncss-tech/soilDB",
#                           "obrl-soil/mpspline2"))

# load packages
library(aqp)       # SoilProfileCollection object + wrangling code
library(soilDB)    # sample data
library(mpspline2) # generic mass-preserving/equal-area spline implementation

#' Missing-data-safe, SPC-wide wrapper around mpspline2::mpspline "continuous" 1cm output
#'
#' @name do_aqp_mpspline
#'
#' @description Facilitate safe use of just about any numeric SPC horizon attribute,
#' from any SPC, with \code{mpspline2::mpspline}.
#'
#' This function will automatically filter profiles with \code{NA} in the attribute of interest.
#'
#' This may be more conservative filtering than you expect, with the intention of fitting
#' splines over a constant interpolation interval [with essentially no gaps].
#'
#' This is all with an eye toward aggregating many profile-level splines together, where
#' missing data are hard to reason over.
#'
#' Data completeness is assessed and the input SPC is filtered and truncated to create a
#' container for the 1cm results from \code{mpspline2::mpspline}.
#'
#' @param object A SoilProfileCollection
#' @param var_name Column name in \code{@horizons} slot of \code{object} containing numeric values to spline
#' @param pattern Regex pattern to match for bottom of profile (passed to estimateSoilDepth); default: "R|Cr|Cd|qm"
#' @param hzdesgn Column name in \code{@horizons} slot of \code{object} containing horizon designations; default: \code{aqp::guessHzDesgnName(object)}
#' @param ... Additional arguments to \code{mpspline2::mpspline}
#'
#' @author Andrew G. Brown
#'
#' @return A SoilProfileCollection with 1cm slices. Spline variables are in columns prefixed
#' with "spline_" and RMSE/RMSE_IQR are in columns prefixed with "rmse_". If any profiles were
#' removed from the collection, their profile IDs are stored in attr(result, 'removed').
#'
#' @export do_aqp_mpspline
#'
#' @examples
#'
#' library(aqp)
#' data(sp1)
#' depths(sp1) <- id ~ top + bottom
#'
#' res <- do_aqp_mpspline(sp1, "prop")
#' plotSPC(res[1:5, ], color = "spline_prop", divide.hz = FALSE)
#'
do_aqp_mpspline <- function(object, var_name = NULL,
                            pattern = "R|Cr|Cd|qm",
                            hzdesgn = guessHzDesgnName(object), ...) {
  if (is.null(var_name) | !(var_name %in% horizonNames(object)))
    stop("argument `var_name` must specify a single horizon-level variable", call. = FALSE)

  hztop <- horizonDepths(object)[1]
  hzbot <- horizonDepths(object)[2]

  # glom to "available interval" in each profile
  # NOTE: we will handle warnings (profiles with no data at all) at the end
  spc.sub <- suppressWarnings(glomApply(object, function(p) {
    i <- which(diff(c(0, cumsum(!is.na(p[[var_name]])))) == 1)
    h <- horizons(p)

    # basically this excludes NA values at the top and bottom of the profile
    # (O horizons, bedrock) but won't check for missing values in between

    # need at least two horizons to make a spline
    if (length(i) < 2)
      return(c(0, 0))

    top_depth <- h[[hztop]][i[1]]
    bot_depth <- h[[hzbot]][i[length(i)]]
    return(c(top_depth, bot_depth))
  }))

  # debug: inspect horizon values for var_name
  # plot(spc.sub[1:10, ], color = var_name)

  # only take profiles that have 100% data coverage in the above interval
  # i.e. if a single horizon is missing data, remove the whole profile
  spc.sub$nona <- profileApply(spc.sub, function(p) any(is.na(p[[var_name]])))
  spc.sub <- spc.sub[which(!spc.sub$nona), ]

  # calculate the deepest top depth and shallowest bottom depth
  mindepth <- max(profileApply(spc.sub, function(p) p[, 1][[hztop]]))
  maxdepth <- min(profileApply(spc.sub, estimateSoilDepth, p = pattern, name = hzdesgn))

  # we will only make interpolations that the "whole SPC supports";
  # the thought is that these 1cm slices will be further aggregated downstream
  spc.sub <- glomApply(spc.sub, function(p) c(mindepth, maxdepth), truncate = TRUE)

  # do the splines
  res <- mpspline2::mpspline(horizons(spc.sub)[c(idname(spc.sub), horizonDepths(spc.sub), var_name)],
                             var_name = var_name, ...)

  # concatenate results for re-insertion
  res2 <- do.call('c', lapply(profile_id(spc.sub), function(pid) {
    drange <- mindepth:maxdepth
    zero.idx <- drange == 0
    if (any(zero.idx))
      drange <- drange[-which(zero.idx)]

    # this returns the 1cm estimate, which conforms with the sliced SPC
    return(res[[pid]]$est_1cm[drange])

    # debug: prove that mass is preserved in output by returning block estimates
    # return(res[[pid]]$est_icm)
  }))

  # get the RMSE
  reserr <- do.call('c', lapply(profile_id(spc.sub), function(pid) {
    return(res[[pid]]$est_err)
  }))

  # make 1:1 with site
  reserr_iqr <- reserr[names(reserr) == "RMSE_IQR"]
  reserr <- reserr[names(reserr) == "RMSE"]

  # inspect
  # reserr_iqr
  # reserr

  # single-horizon results cannot be splined; filter those out
  spc.sub <- filter(spc.sub, profile_id(spc.sub) %in% names(res))

  # adjustment for aqp::slice index logic versus glom interval logic
  if (mindepth == 0) {
    maxdepth <- maxdepth - 1
  }

  # create slices 1cm thick to insert spline result
  spc.spl <- aqp::slice(spc.sub, formula(sprintf("%s:%s ~ %s", mindepth, maxdepth, var_name)))

  # create new "spline_" + var_name variable
  spc.spl[[paste0("spline_", var_name)]] <- res2

  # create new "rmse_" + var_name as site-level attributes
  spc.spl[[paste0("rmse_", var_name)]] <- reserr
  spc.spl[[paste0("rmse_iqr_", var_name)]] <- reserr_iqr

  # determine what profiles were removed
  removed <- profile_id(object)[!profile_id(object) %in% profile_id(spc.spl)]

  # add an attribute with removed profile IDs. there are three steps
  # that possibly remove data:
  #  - profiles removed by glomApply have no var_name data at all
  #  - the 100% coverage filtering step -- a conservative filter to keep from making splines from bad data
  #  - mpspline itself will remove profiles with e.g. just one horizon
  attr(spc.spl, "removed") <- unique(removed)

  return(spc.spl)
}

###
### DEMO CODE
###

### make a combined comparison profile plot:
### raw pedon data v.s. 1cm-slice + mass-preserving spline

# load sample dataset
data(loafercreek, package = "soilDB")

# set all O horizons to 10 percent clay and pH 5.5
# this isn't "necessary" but allows for interpolation to 0cm in loafercreek
loafercreek$clay[grep("O", loafercreek$hzname)] <- 10
loafercreek$phfield[grep("O", loafercreek$hzname)] <- 5.5

# use the aqp wrapper function for SPC input to mpspline
res1 <- do_aqp_mpspline(loafercreek, "clay")

# NOTE: 7 profiles are dropped from loafercreek -> res;
# all of them contain no clay data at all
attr(res1, 'removed')

# inspect distribution of RMSE (one for each profile)
plot(density(res1$rmse_clay))
abline(v = quantile(res1$rmse_clay))

# set graphics parameters to minimize whitespace / leave room for legend
par(mar = c(1, 1, 3, 1))

# make sure the removed profiles deserved to be removed
plot(filter(loafercreek, profile_id(loafercreek) %in% attr(res1, 'removed')), color = "clay")

# NOTE: you can run this on anything numeric... but with big datasets
# just a couple of pedons missing data at a few depths will limit your range
res2 <- do_aqp_mpspline(loafercreek, "phfield")
plot(res2, color = "spline_phfield", n.legend = 5, divide.hz = FALSE, print.id = FALSE)
attr(res2, 'removed')

# a more complete SPC gives a broader range of depths
res3 <- do_aqp_mpspline(loafercreek[20:50], "phfield")
plot(res3[7:14, ], color = "spline_phfield", n.legend = 5)
attr(res3, 'removed')

# take just a few profiles from the result for the plot
res.sub <- res1[3:8, ]

# get the original profiles
original <- filter(loafercreek, profile_id(loafercreek) %in% profile_id(res.sub))

# determine max depth in spline output (function of NA values)
max_d <- max(res.sub$hzdepb)

# use glom to truncate inputs to just the interval matching the spline output
orig.glom <- glomApply(original, function(p) {
  return(c(0, max_d))
}, truncate = TRUE)

# set new profile IDs for the splined profiles
profile_id(res.sub) <- paste0(profile_id(res.sub), "_mpspline")

# put spline results into a variable for plotting
orig.glom$clay_combined <- orig.glom$clay
res.sub$clay_combined <- res.sub$spline_clay

# inspect
plot(res.sub[, 48])

# use aqp::union to merge multiple SPCs
spc.combined <- aqp::union(list(orig.glom, res.sub))

# avoid warnings about duplicate phiid
hzidname(spc.combined) <- "hzID"

# make the comparison plot
plotSPC(spc.combined, color = "clay_combined",
        plot.order = c(1, 7, 2, 8, 3, 9, 4, 10, 5, 11, 6, 12))

# a different way of looking at things
dev.off()

# set up an xy plot
plot(y = spc.combined$hzdept, x = spc.combined$spline_clay, type = "n",
     xlab = "Total Clay Content (%)", ylab = "Depth (cm)",
     ylim = c(50, 0), xlim = c(10, 35))

# add lines for profiles #7 and #8
lapply(7:8, function(i) {
  lines(y = spc.combined[i, ]$hzdept, x = spc.combined[i, ]$clay, col = i, lwd = 2)
  lines(y = spc.combined[i, ]$hzdept, x = spc.combined[i, ]$spline_clay, col = i, lty = 2)
})
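Since the docs above note that the 1cm output is intended for downstream aggregation, here is a minimal sketch (not part of the gist) pooling all profiles into depth-wise quantiles of the splined clay values with aqp::slab():

# sketch: aggregate the 1cm spline output across all profiles with slab()
agg <- aqp::slab(res1, fm = ~ spline_clay)

# depth-wise quantiles of splined clay, in long format
head(agg)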

fix page numbers in english (offset 1 like SP) and footnotes

NCSS Standards - Sun, 07/05/2020 - 07:39

use a CSV rather than Rda

NCSS Standards - Sun, 07/05/2020 - 03:49

dupe

NCSS Standards - Sun, 07/05/2020 - 03:02

Merge branch 'httpdb'

NCSS Standards - Sun, 07/05/2020 - 02:50

move plumber internals to new directory

NCSS Standards - Sun, 07/05/2020 - 02:06

implement plumber API

NCSS Standards - Sun, 07/05/2020 - 01:40

stable commit before switching from .RDA to API

NCSS Standards - Sat, 07/04/2020 - 22:35

merge latest master, excluding app.R

NCSS Standards - Sat, 07/04/2020 - 22:24
Merge remote-tracking branch 'origin/master' into httpdb

# Conflicts:
#	KST/KSTLookup/app.R

plumber API and analogsea testing

NCSS Standards - Sat, 07/04/2020 - 22:16

Vitrigelands merge

NCSS Standards - Sat, 07/04/2020 - 22:15
Merge branch 'master' of http://github.com/brownag/ncss-standards

# Conflicts:
#	KST/SoilTaxonomy_PDF_Parser.R

httr request to obtain data from plumber API (works locally, but need hosting)

NCSS Standards - Sat, 07/04/2020 - 22:11
