This page provides runnable examples for every surrogate type in GeoSurrogates.jl.
## Setup
First, let’s load the packages and create some sample data to work with.
```julia
using GeoSurrogates
using Rasters
using DimensionalData
using Statistics
using Zygote      # Required for neural network training
using CairoMakie  # For plotting

# Create sample elevation data (a simple hill)
# Using 51 points so that 0.0 is exactly on the grid
xs = range(-1, 1, length=51)
ys = range(-1, 1, length=51)
elevation_data = [exp(-(x^2 + y^2)) for x in xs, y in ys]
elev = Raster(elevation_data, (X(xs), Y(ys)))

# Create sample categorical data (3 classes based on elevation)
cat_data = map(elevation_data) do v
    v < 0.3 ? 1 : v < 0.7 ? 2 : 3
end
veg = Raster(cat_data, (X(xs), Y(ys)))

# Create sample wind data (circular flow)
u_data = [-y for x in xs, y in ys]
v_data = [x for x in xs, y in ys]
u_wind = Raster(u_data, (X(xs), Y(ys)))
v_wind = Raster(v_data, (X(xs), Y(ys)))

println("Sample data created (51x51 grids with 0.0 on grid):")
println("  elev: $(size(elev)) elevation raster")
println("  veg: $(size(veg)) categorical raster (classes: $(sort(unique(veg))))")
println("  u_wind, v_wind: $(size(u_wind)) wind component rasters")
```
```
Sample data created (51x51 grids with 0.0 on grid):
  elev: (51, 51) elevation raster
  veg: (51, 51) categorical raster (classes: [1, 2, 3])
  u_wind, v_wind: (51, 51) wind component rasters
```
```julia
fig = Figure(size=(900, 300))

ax1 = Axis(fig[1, 1], title="Elevation", xlabel="X", ylabel="Y", aspect=1)
heatmap!(ax1, collect(xs), collect(ys), elevation_data, colormap=:viridis)

ax2 = Axis(fig[1, 2], title="Vegetation Classes", xlabel="X", ylabel="Y", aspect=1)
heatmap!(ax2, collect(xs), collect(ys), cat_data, colormap=:Set1_3)

ax3 = Axis(fig[1, 3], title="Wind Field", xlabel="X", ylabel="Y", aspect=1)

# Subsample for arrow visibility
step = 5
arrow_xs = xs[1:step:end]
arrow_ys = ys[1:step:end]
u_sub = u_data[1:step:end, 1:step:end]
v_sub = v_data[1:step:end, 1:step:end]
arrows2d!(ax3,
    [x for x in arrow_xs for _ in arrow_ys],
    [y for _ in arrow_xs for y in arrow_ys],
    vec(u_sub), vec(v_sub), lengthscale=0.15, color=:black)

fig
```
## LinReg

Linear regression surrogate for simple trend modeling.
```julia
using StatsModels

# Fit a quadratic model
linreg_model = LinReg(elev, @formula(layer1 ~ 1 + X + Y + X^2 + Y^2))

# Predict at a single point
val = predict(linreg_model, (0.0, 0.0))
println("Prediction at (0, 0): $val")

# Predict on the full raster
linreg_predicted = predict(linreg_model, elev)
error = mean(abs.(elev .- linreg_predicted))
println("Mean absolute error: $(round(error, digits=4))")
```
```
Prediction at (0, 0): 0.8845363740022153
Mean absolute error: 0.0407
```
Figure 3.2: LinReg: original elevation (left), quadratic fit (center), error (right)
## IDW (Inverse Distance Weighting)
Simple scattered-data interpolation that weights nearby points more heavily.
```julia
# Fit IDW with the default power=2
idw_model = IDW(elev)

# Predict at a single point
val = predict(idw_model, (0.0, 0.0))
println("IDW prediction at (0, 0): $(round(val, digits=4))")

# Predict on the full raster
idw_predicted = predict(idw_model, elev)
error = mean(abs.(elev .- idw_predicted))
println("Mean absolute error: $(round(error, digits=6))")
```
```
IDW prediction at (0, 0): 1.0
Mean absolute error: 0.0
```
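The zero error above is a property of the IDW formula itself: the weight `1/d^power` diverges as the distance `d` goes to zero, so every data site is reproduced exactly. A minimal standalone sketch of the formula in plain Julia (independent of GeoSurrogates, with hypothetical sample data):

```julia
# Inverse-distance-weighted average over scattered samples.
# At a sample point the weight would be infinite, so we return the
# sample value directly — this is why IDW is exact on the grid.
function idw_eval(points, values, q; power=2)
    for (p, v) in zip(points, values)
        p == q && return v  # exact at a data site
    end
    w = [1 / hypot(q[1] - p[1], q[2] - p[2])^power for p in points]
    sum(w .* values) / sum(w)
end

pts  = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
vals = [1.0, 0.0, 0.0]
idw_eval(pts, vals, (0.0, 0.0))  # 1.0: exact at a data point
idw_eval(pts, vals, (0.2, 0.2))  # pulled toward the nearby value 1.0
```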
Figure 3.3: IDW: Original elevation (left) vs IDW interpolation (center) and error (right)
## RBF (Radial Basis Function)
Kernel-based interpolation with several built-in kernels. Supports optional polynomial augmentation.
```julia
# Fit an RBF with a Gaussian kernel
rbf_model = RBF(elev; kernel=:gaussian, epsilon=1.0)

# Predict at a single point
val = predict(rbf_model, (0.0, 0.0))
println("RBF prediction at (0, 0): $(round(val, digits=4))")

# Predict on the full raster
rbf_predicted = predict(rbf_model, elev)
error = mean(abs.(elev .- rbf_predicted))
println("Mean absolute error: $(round(error, digits=6))")
```
```
RBF prediction at (0, 0): 1.0
Mean absolute error: 0.0
```
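Under the hood, RBF interpolation solves a linear system `K w = y` where `K[i,j] = phi(|x_i - x_j|)`, then evaluates the weighted kernel sum. A standalone 1D sketch with the Gaussian kernel (plain Julia, not the GeoSurrogates internals; here the target function is itself a Gaussian, so the interpolant recovers it exactly):

```julia
using LinearAlgebra

# Gaussian radial kernel with shape parameter eps
phi(r; eps=1.0) = exp(-(eps * r)^2)

xs = [-1.0, -0.5, 0.0, 0.5, 1.0]
ys = exp.(-xs .^ 2)                       # samples of the target function

# Solve K w = y for the kernel weights
K = [phi(abs(a - b)) for a in xs, b in xs]
w = K \ ys

# Evaluate the interpolant s(q) = sum_j w_j * phi(|q - x_j|)
rbf(q) = sum(w[j] * phi(abs(q - xs[j])) for j in eachindex(xs))

rbf(0.0)    # 1.0: reproduces the sample at a data site
rbf(0.25)   # ≈ exp(-0.25^2), since the target is itself a Gaussian
```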
Figure 3.4: RBF: original elevation (left), RBF interpolation (center), error (right)
## TPS (Thin Plate Spline)
Thin plate spline interpolation with optional regularization for smoothing.
```julia
# Fit a TPS (exact interpolation by default)
tps_model = TPS(elev)

# Predict at a single point
val = predict(tps_model, (0.0, 0.0))
println("TPS prediction at (0, 0): $(round(val, digits=4))")

# Predict on the full raster
tps_predicted = predict(tps_model, elev)
error = mean(abs.(elev .- tps_predicted))
println("Mean absolute error: $(round(error, digits=6))")

# With smoothing
tps_smooth = TPS(elev; regularization=0.01)
tps_smooth_predicted = predict(tps_smooth, elev)
smooth_error = mean(abs.(elev .- tps_smooth_predicted))
println("Smoothed TPS MAE: $(round(smooth_error, digits=4))")
```
```
TPS prediction at (0, 0): 1.0
Mean absolute error: 0.0
Smoothed TPS MAE: 0.0
```
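The effect of the `regularization` keyword can be sketched as kernel ridge regression: solving `(K + λI) w = y` instead of `K w = y`, so `λ = 0` interpolates exactly and `λ > 0` trades exactness for smoothness. The sketch below uses a Gaussian kernel for simplicity (the true thin-plate kernel `r² log r` additionally needs a polynomial term to be well-posed); it is an illustration of the mechanism, not the GeoSurrogates implementation:

```julia
using LinearAlgebra

# Kernel and hypothetical sample data
k(a, b) = exp(-hypot(a[1] - b[1], a[2] - b[2])^2)
pts = [(-0.5, 0.0), (0.0, 0.0), (0.5, 0.0), (0.0, 0.5)]
ys  = [exp(-(p[1]^2 + p[2]^2)) for p in pts]
K   = [k(a, b) for a in pts, b in pts]

# Evaluate the kernel expansion at a query point
s(q, w) = sum(w[j] * k(q, pts[j]) for j in eachindex(pts))

w_exact  = K \ ys            # lambda = 0: hits every sample exactly
w_smooth = (K + 0.1I) \ ys   # lambda = 0.1: smoothed, biased fit

abs(s(pts[2], w_exact)  - ys[2])   # ~0: exact interpolation
abs(s(pts[2], w_smooth) - ys[2])   # > 0: smoothing pulls off the data
```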
Figure 3.5: TPS: original elevation (left), exact TPS (center), smoothed TPS (right)
## RasterWrap
Interpolation-based surrogate using B-splines. Provides exact interpolation at grid points.
```julia
using Interpolations

# Create an interpolation wrapper (default: linear B-spline)
rw = RasterWrap(elev)

# Predict at grid points (exact)
original_val = elev[X=At(0.0), Y=At(0.0)]
predicted_val = predict(rw, (0.0, 0.0))
println("Original value at (0,0): $original_val")
println("Predicted value at (0,0): $predicted_val")

# Predict at an off-grid point (interpolated)
offgrid_val = predict(rw, (0.01, 0.01))
println("Interpolated value at (0.01, 0.01): $(round(offgrid_val, digits=4))")

# Predict on a finer grid
fine_xs = range(-1, 1, length=100)
fine_ys = range(-1, 1, length=100)
fine_raster = Raster(zeros(100, 100), (X(fine_xs), Y(fine_ys)))
upsampled = predict(rw, fine_raster)
println("Upsampled from $(size(elev)) to $(size(upsampled))")
```
```
Original value at (0,0): 1.0
Predicted value at (0,0): 1.0
Interpolated value at (0.01, 0.01): 0.9992
Upsampled from (51, 51) to (100, 100)
```
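Between grid points, a linear B-spline reduces to bilinear interpolation: a convex combination of the four surrounding grid values. A standalone sketch of that rule in plain Julia (independent of Interpolations.jl and RasterWrap):

```julia
# Bilinear interpolation on a rectilinear grid: find the cell that
# contains (x, y), then blend its four corner values.
function bilinear(xs, ys, A, x, y)
    i = clamp(searchsortedlast(xs, x), 1, length(xs) - 1)
    j = clamp(searchsortedlast(ys, y), 1, length(ys) - 1)
    tx = (x - xs[i]) / (xs[i+1] - xs[i])
    ty = (y - ys[j]) / (ys[j+1] - ys[j])
    (1 - tx) * (1 - ty) * A[i, j]   + tx * (1 - ty) * A[i+1, j] +
    (1 - tx) * ty       * A[i, j+1] + tx * ty       * A[i+1, j+1]
end

gx = 0.0:0.5:1.0
gy = 0.0:0.5:1.0
A  = [x + y for x in gx, y in gy]   # a plane, reproduced exactly
bilinear(gx, gy, A, 0.5, 0.5)    # 1.0: exact at a grid point
bilinear(gx, gy, A, 0.25, 0.25)  # 0.5: linear in between
```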
Figure 3.6: RasterWrap: Original 51x51 grid (left) upsampled to 100x100 (right)
## CategoricalRasterWrap
Kernel smoothing surrogate for categorical data. Returns probability distributions over classes.
```julia
# Create the categorical wrapper
crw = CategoricalRasterWrap(veg)

# Predict probabilities at the center (should favor class 3 - highest elevation)
probs = predict(crw, (0.0, 0.0))
println("Class probabilities at (0, 0):")
for (k, v) in sort(collect(probs), by=first)
    println("  Class $k: $(round(v, digits=3))")
end

# Predict at the edge (should favor class 1 - lowest elevation)
probs_edge = predict(crw, (0.9, 0.9))
println("\nClass probabilities at (0.9, 0.9):")
for (k, v) in sort(collect(probs_edge), by=first)
    println("  Class $k: $(round(v, digits=3))")
end
```
```
Class probabilities at (0, 0):
  Class 1: 0.0
  Class 2: 0.053
  Class 3: 0.947

Class probabilities at (0.9, 0.9):
  Class 1: 0.568
  Class 2: 0.418
  Class 3: 0.014
```
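The idea behind kernel smoothing of categorical data can be sketched in a few lines: each sample votes for its class with a distance weight, and the votes are normalized into a distribution. This is a standalone illustration with a hypothetical Gaussian weight and `bandwidth` parameter, not the CategoricalRasterWrap internals:

```julia
# Kernel-weighted class voting: nearby samples dominate the distribution.
function class_probs(pts, classes, q; bandwidth=0.2)
    w = [exp(-(hypot(q[1] - p[1], q[2] - p[2]) / bandwidth)^2) for p in pts]
    totals = Dict{Int,Float64}()
    for (c, wi) in zip(classes, w)
        totals[c] = get(totals, c, 0.0) + wi
    end
    s = sum(values(totals))
    Dict(c => t / s for (c, t) in totals)
end

pts     = [(0.0, 0.0), (0.1, 0.0), (0.9, 0.9)]
classes = [3, 3, 1]
probs = class_probs(pts, classes, (0.0, 0.0))
# Near the two class-3 samples, probs[3] ≈ 1 and probs[1] ≈ 0
```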
## GeomWrap

Geometry-based surrogate using distance kernels. Points near the geometry have values close to 1.
```julia
import GeoInterface as GI

# Create a point geometry at the origin
pt = GI.Point(0.0, 0.0)
gw = GeomWrap(pt)

# Predict at the point itself
val_at_point = predict(gw, (0.0, 0.0))
println("Value at geometry: $val_at_point")

# Predict at increasing distances
for dist in [0.1, 0.25, 0.5, 1.0]
    val = predict(gw, (dist, 0.0))
    println("Value at distance $dist: $(round(val, digits=4))")
end

# Create an influence raster
influence = predict(gw, elev)
println("\nInfluence raster range: $(round(minimum(influence), digits=4)) to $(round(maximum(influence), digits=4))")
```
```
Value at geometry: 1.0
Value at distance 0.1: 0.9231
Value at distance 0.25: 0.6065
Value at distance 0.5: 0.1353
Value at distance 1.0: 0.0003

Influence raster range: 0.0 to 1.0
```
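The printed values are consistent with a Gaussian distance kernel `exp(-d² / (2σ²))` with `σ = 0.25` (this is an inference from the numbers above, not a documented constant of GeomWrap). A standalone check:

```julia
# Gaussian distance kernel; sigma = 0.25 reproduces the values printed
# above (note: sigma inferred from the output, not a documented default).
kernel(d; sigma=0.25) = exp(-d^2 / (2 * sigma^2))

[round(kernel(d), digits=4) for d in (0.0, 0.1, 0.25, 0.5, 1.0)]
# [1.0, 0.9231, 0.6065, 0.1353, 0.0003]
```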
```julia
fig = Figure(size=(400, 350))
ax = Axis(fig[1, 1], title="Influence from Point at Origin", xlabel="X", ylabel="Y", aspect=1)
hm = heatmap!(ax, collect(xs), collect(ys), Matrix(influence), colormap=:YlOrRd)
scatter!(ax, [0.0], [0.0], color=:blue, markersize=15, marker=:star5)
Colorbar(fig[1, 2], hm, label="Influence")
fig
```
Figure 3.8: GeomWrap: Distance-based influence from a point at the origin
## Neural Network Surrogates
Neural network surrogates require normalized data (values in [-1, 1]).
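Mapping values into [-1, 1] is a min-max rescaling. A standalone sketch of what that means (the package's actual `normalize` may differ in details such as handling of constant rasters):

```julia
# Min-max rescaling to [-1, 1]: the minimum maps to -1, the maximum to 1.
minmax_norm(A) = 2 .* (A .- minimum(A)) ./ (maximum(A) - minimum(A)) .- 1

A = [0.0 0.5; 1.0 2.0]
extrema(minmax_norm(A))   # (-1.0, 1.0)
```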
### ImplicitTerrain.MLP
Single SIREN network for scalar field approximation.
```julia
# Normalize the elevation data
elev_norm = normalize(elev)
println("Normalized elevation range: $(extrema(elev_norm))")

# Create and train the MLP
mlp = ImplicitTerrain.MLP(hidden=64, n_hidden=2)  # Smaller network for the demo
fit!(mlp, elev_norm; steps=2000)

# Predict
mlp_predicted = predict(mlp, elev_norm)
error = mean(abs.(elev_norm .- mlp_predicted))
println("Mean absolute error after 2000 steps: $(round(error, digits=4))")
```
```
Normalized elevation range: (-1.0, 1.0)
Mean absolute error after 2000 steps: 0.0329
```
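What distinguishes a SIREN from a plain MLP is its sinusoidal activation with a frequency scale `omega0` (30 is the value used in the original SIREN paper; GeoSurrogates may choose differently). A standalone sketch of a single such layer, independent of the package:

```julia
# One SIREN layer: y = sin(omega0 * (W * x + b)).
# The sine keeps activations bounded in [-1, 1] and lets the network
# represent high-frequency detail.
struct SirenLayer
    W::Matrix{Float64}
    b::Vector{Float64}
    omega0::Float64
end
(l::SirenLayer)(x) = sin.(l.omega0 .* (l.W * x .+ l.b))

layer = SirenLayer(fill(0.1, 4, 2), zeros(4), 30.0)
y = layer([0.5, -0.5])   # 4 sinusoidal features of the 2D input
```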
### WindSurrogate.WindSIREN

SIREN network for 2D vector fields (u, v wind components).
```julia
# Normalize the wind components
u_norm = normalize(u_wind)
v_norm = normalize(v_wind)

# Create and train the WindSIREN
wind_model = WindSurrogate.WindSIREN(hidden=64, n_hidden=2)
fit!(wind_model, u_norm, v_norm; steps=2000)

# predict returns a tuple of (u_raster, v_raster)
u_pred, v_pred = predict(wind_model, u_norm)
u_error = mean(abs.(u_norm .- u_pred))
v_error = mean(abs.(v_norm .- v_pred))
println("U component MAE: $(round(u_error, digits=4))")
println("V component MAE: $(round(v_error, digits=4))")

# Predict at a single point
u_val, v_val = predict(wind_model, (0.0, 0.0))
println("\nWind at (0, 0): u=$(round(u_val, digits=3)), v=$(round(v_val, digits=3))")
println("Expected: u≈0, v≈0 (center of circular flow)")
```
```
U component MAE: 0.0392
V component MAE: 0.0375

Wind at (0, 0): u=0.037, v=0.02
Expected: u≈0, v≈0 (center of circular flow)
```
```julia
fig = Figure(size=(600, 300))

ax1 = Axis(fig[1, 1], title="Original Wind Field", xlabel="X", ylabel="Y", aspect=1)
arrows2d!(ax1,
    [x for x in arrow_xs for _ in arrow_ys],
    [y for _ in arrow_xs for y in arrow_ys],
    vec(u_sub), vec(v_sub), lengthscale=0.15, color=:black)

ax2 = Axis(fig[1, 2], title="Predicted Wind Field", xlabel="X", ylabel="Y", aspect=1)
u_pred_sub = Matrix(u_pred)[1:step:end, 1:step:end]
v_pred_sub = Matrix(v_pred)[1:step:end, 1:step:end]
arrows2d!(ax2,
    [x for x in arrow_xs for _ in arrow_ys],
    [y for _ in arrow_xs for y in arrow_ys],
    vec(u_pred_sub), vec(v_pred_sub), lengthscale=0.15, color=:black)

fig
```
Figure 3.11: WindSIREN: Original wind field (left) vs predicted wind field (right)
### CatSIREN.CatSIREN
SIREN network for categorical/classification tasks with softmax output.
```julia
# Create a CatSIREN from the raster (auto-detects classes)
cat_model = CatSIREN.CatSIREN(veg; hidden=64, n_hidden=2)
println("Detected classes: $(cat_model.classes)")

# Train
fit!(cat_model, veg; steps=1000)

# Predict probabilities at the center
probs = predict(cat_model, (0.0, 0.0))
println("\nPredicted probabilities at (0, 0):")
for (k, v) in sort(collect(probs), by=first)
    println("  Class $k: $(round(v, digits=3))")
end

# Predict the most likely class
predicted_class = CatSIREN.predict_class(cat_model, (0.0, 0.0))
actual_class = veg[X=At(0.0), Y=At(0.0)]
println("\nPredicted class at (0,0): $predicted_class")
println("Actual class at (0,0): $actual_class")

# Predict classes for the full raster
class_raster = CatSIREN.predict_class(cat_model, veg)
accuracy = mean(class_raster .== veg)
println("Overall accuracy: $(round(accuracy * 100, digits=1))%")
```
```
Detected classes: [1, 2, 3]

Predicted probabilities at (0, 0):
  Class 1: 0.027
  Class 2: 0.063
  Class 3: 0.91

Predicted class at (0,0): 3
Actual class at (0,0): 3

Overall accuracy: 99.6%
```
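The softmax output mentioned above turns raw per-class scores into a distribution, and `predict_class` then picks the highest-probability class. A standalone sketch with hypothetical logits:

```julia
# Numerically stable softmax: subtract the max before exponentiating.
softmax(z) = (e = exp.(z .- maximum(z)); e ./ sum(e))

scores = [0.2, 1.0, 3.5]   # hypothetical raw logits for classes 1..3
p = softmax(scores)
sum(p)        # 1.0: a valid probability distribution
argmax(p)     # 3: the class a predict_class-style helper would return
```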
Figure 3.12: CatSIREN: Original classes (left) vs predicted classes (right)
## Composite Surrogates
### AdditiveModel
A boosting-style composite that fits each model to the residuals of the previous models. This example combines a LinReg (captures the broad trend) with an ImplicitTerrain.MLP (captures the fine details the regression misses).
```julia
using StatsModels
using GeoSurrogates.Optimisers: Adam

# Build an AdditiveModel: LinReg first, then an MLP on the residual
additive = AdditiveModel([
    LinReg(elev_norm, @formula(layer1 ~ 1 + X + Y + X^2 + Y^2)),
    ImplicitTerrain.MLP(hidden=64, n_hidden=2, alg=Adam(0.001f0)),
])

# fit! trains each model sequentially on the residuals
fit!(additive, elev_norm; steps=2000)

# Predict on the full raster
additive_predicted = predict(additive, elev_norm)

# Compare errors
linreg_only = predict(additive.models[1], elev_norm)
linreg_error = mean(abs.(elev_norm .- linreg_only))
additive_error = mean(abs.(elev_norm .- additive_predicted))
println("LinReg-only MAE: $(round(linreg_error, digits=4))")
println("AdditiveModel MAE: $(round(additive_error, digits=4))")
```
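The residual-fitting idea itself can be shown in a few lines of plain Julia, independent of GeoSurrogates: model 2 is fit to `y - model1(x)`, and the composite prediction is the sum of the two. Here a least-squares quadratic plays the LinReg role and a nearest-neighbour residual lookup stands in for the MLP:

```julia
# Hypothetical 1D training data
grid   = range(-1, 1, length=21)
target = exp.(-grid .^ 2)

# Model 1: least-squares quadratic fit (the broad trend)
A = [x^p for x in grid, p in 0:2]
c = A \ target
m1(x) = c[1] + c[2] * x + c[3] * x^2

# Model 2: nearest-neighbour lookup of model 1's residual (fine detail)
resid = target .- m1.(grid)
m2(x) = resid[argmin(abs.(grid .- x))]

# The composite sums the stages, as an AdditiveModel does
composite(x) = m1(x) + m2(x)
maximum(abs.(composite.(grid) .- target))   # ≈ 0 at the training points
```

Because each stage only has to explain what the previous stages missed, the composite error at the training points collapses to zero here, mirroring how the MLP mops up the quadratic fit's residual above.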