“This method is computationally expensive and should be avoided.” - A statement made around the time Chumbawamba captured the zeitgeist of the 90s.
I was reading Twitter when I found a blog post (“How to make a shaded relief in R”, by Elio Campitelli). The post describes how to create the type of shaded relief maps used in cartography programmatically in R. Before I read the full article, I read the title and thought: “Cool project! Raytracing topographic maps to simulate light falling on mountains and shade the map appropriately.” After clicking the link and reading about the method used, I was less enthused; the method used was not at all what I thought it would be.

It's not that I didn't find what he did interesting--I just had one of those moments where you think you know how something is going to be done and then are sorely disappointed when, well, it wasn't done that way. His cartographer-tested-and-approved method calculates the local slope at each point (by taking the gradient of the data) and--depending on the direction of the slope compared to the light direction--shades the surface accordingly. Slopes descending towards the light source will be brighter, and slopes descending away from the light source will be darker. This method is much more computationally efficient than raytracing, but it's disconnected from the underlying physics of what makes real shadows occur: rays of light not hitting a surface due to another surface in the way.
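To make the local method concrete, here's a minimal sketch of slope-based shading in R--my own illustration, not Elio's code--assuming the elevation is a plain matrix with unit grid spacing, an arbitrary light direction, and the made-up name hillshade_local:

#Minimal slope-based (local) hillshade sketch. hillshade_local is a
#made-up name for illustration; azimuth/altitude set the light direction.
hillshade_local = function(heightmap, azimuth = 315, altitude = 45) {
  az = azimuth * pi / 180
  alt = altitude * pi / 180
  nr = nrow(heightmap)
  nc = ncol(heightmap)
  #Central-difference gradients along rows (x) and columns (y).
  dx = (heightmap[c(2:nr, nr), ] - heightmap[c(1, 1:(nr - 1)), ]) / 2
  dy = (heightmap[, c(2:nc, nc)] - heightmap[, c(1, 1:(nc - 1))]) / 2
  slope = atan(sqrt(dx^2 + dy^2))
  #Aspect conventions vary; this is one common choice.
  aspect = atan2(dy, dx)
  #Lambertian shading: brighter where the surface faces the light.
  shade = sin(alt) * cos(slope) + cos(alt) * sin(slope) * cos(az - aspect)
  pmax(shade, 0)
}
image(hillshade_local(volcano), col = grey.colors(256))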
This is one difference between a local and a global illumination model–a local model just looks at the surface to determine shading intensity, while a global model takes into account the surrounding landscape as well. There’s a great source on all of this that Elio links to, reliefshading.com, that says the following about global illumination:
“Global models, such as ray tracing or radiosity are not well suited for cartographic relief shading, since these models are very computation intensive.” - reliefshading.com, throwing shade.
And not a word more is said on the subject. All of the cited resources from this website focus on these various local illumination models as well, and it seems the field of cartography decided sometime in the mid-90s that raytracing just wasn’t appropriate for shading topographic maps. And mostly because it was computationally intensive.
Everything was computationally intensive in the 90s. Mashing your computer’s turbo button was an effective placebo, but it was no way to overcome real algorithmic challenges. Since the cool project I expected to see didn’t actually exist–and was being actively suppressed by the most-certainly-real cartographer illuminati–I had to do it myself.
There’s a nice little dataset that comes with R that we’ll be using before moving to actual USGS geographic data: the volcano dataset. It’s an 87x61 heightmap of Maunga Whau Volcano in NZ. Here’s an image of the actual location (thanks, Apple Maps!), and a contour/heightmap of the data:
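If you want a quick look at the data yourself, base R will draw a filled contour map in one line (a throwaway sketch with default settings, not the exact figure above):

filled.contour(volcano, color.palette = terrain.colors)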


And here’s a gif of the aforementioned cartographer-approved-but-not-what-I-imagined method, with the light source rotating around the volcano so you can get a better understanding of how this method shades the slopes. It does a pretty good job of representing the topography of the area, and in general, it's obvious that we're dealing with some sort of volcano here.
The volcano dataset shaded with a derivative-based method.
However, we see one of the pitfalls of this method: the otherworldly fluctuating flat areas in the top right and middle of the image. This is a result of using the derivative to shade the points; when you have a large flat area, the color is determined entirely by the boundary (which dictates the derivative), and there will be no variation across the surface. Areas that otherwise look like they should be in darkness--such as the flat steps in the volcano above when the light is coming from the west--appear to be in bright light.
Other than the fluctuating areas, this method seems perfectly fine for imaging topographic areas. That is, until we look at some more detailed, real data. The fluctuating effect gets even more pronounced when we look at real data with large flat areas, and we run into issues when dealing with irregular man-made objects. Here’s a view of the National Mall in Washington, DC through this algorithm:
It’s like someone coated the National Mall with a thin layer of aluminum foil. The reflecting pool's flat surface creates a fluctuating portal in the center of the image that is 180 degrees out of phase with the water on the edges. Since this real data has lots of low, relatively flat areas with some surface roughness, we also see a lot of “noise” from the small fluctuations in flat areas. Worst of all, there’s no sense of scale--the fact that there’s a large, tall obelisk in the center of the hill to the right is barely noticeable compared to the slight hill it rests on. It’s not that this method isn’t reasonably effective at conveying topography--it's just not that natural looking.
For natural looking shadows, let's simulate nature. Let’s build a raytracer.
Our data is an evenly spaced square grid, with an elevation value at each point. Our output will be a grid of light intensities hitting each of those points. We are going to implement what are called “shadow rays”: we cast a ray from each point in the direction of the light source for a certain distance, and if it intercepts a surface, we reduce the amount of light at that point. Specifically, we will start with a grid where every point has a value of 1 (indicating full illumination), and cast N rays out at slightly different angles. If a ray intercepts a surface, we subtract 1/N from that value. If all of the rays intersect a surface, that point on the grid will have an illumination value of 0 (indicating full darkness). By using multiple rays, we simulate a finite-sized light source, which gives us softer shadows than if we only used one ray.
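In pseudo-R, the bookkeeping for a single grid point (i, j) looks something like the sketch below, where ray_hits_surface() is a hypothetical helper that propagates one ray toward the light at elevation angle ang and reports whether it strikes the terrain:

#illumination starts at 1 (fully lit) and loses 1/N per blocked ray.
#ray_hits_surface() is hypothetical--the real test comes further down.
illumination = 1
for (ang in anglebreaks) {
  if (ray_hits_surface(heightmap, i, j, sunangle, ang)) {
    illumination = illumination - 1 / length(anglebreaks)
  }
}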





Now, I know there are some technical minded people who started this article thinking: "Cool project! Raytracing topographic maps to simulate light falling on mountains and shade the map appropriately... wait, that's not a real raytracer." I look forward to reading your blog post Throwing Shade at the Shade Throwers: Realistically Raytracing the Washington Monument... in Python. But for now, just bear with me--a full complexity raytracer isn't needed to do what we want to do.
The main complication of this method is that our elevation data is defined only at each grid point, while we want to send out rays in more than just the four grid directions. So we need to be able to check whether we are above or below the surface at an arbitrary point, even between grid points. This immediately presents a challenge from basic geometry: we want to check if we are above or below the surface, but a plane is defined by three points, and on a grid we have four. Thus, linear interpolation doesn’t work, as most of the time there isn’t going to be a single plane that intercepts all four of the points. Thankfully, there is a technique called bilinear interpolation, which allows us to interpolate a surface between four points. We propagate our rays along the grid in the direction of the light source, and then test at each step whether the ray is above or below the surface. If it’s below, we immediately stop propagating the ray and subtract one unit of brightness from the origin of the ray.
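As a standalone reference, bilinear interpolation of the height at a fractional coordinate (x, y) looks like this sketch (bilinear_height is a made-up name; the actual code below uses akima::bilinear instead):

#Interpolate the height at a fractional grid coordinate (x, y).
#bilinear_height is an illustrative helper, not from any package.
bilinear_height = function(heightmap, x, y) {
  #Integer corners of the grid cell containing (x, y), clamped to the map.
  x0 = floor(x); x1 = min(x0 + 1, nrow(heightmap))
  y0 = floor(y); y1 = min(y0 + 1, ncol(heightmap))
  tx = x - x0
  ty = y - y0
  #Weighted average of the four corner heights.
  heightmap[x0, y0] * (1 - tx) * (1 - ty) +
    heightmap[x1, y0] * tx * (1 - ty) +
    heightmap[x0, y1] * (1 - tx) * ty +
    heightmap[x1, y1] * tx * ty
}
bilinear_height(volcano, 10.5, 20.25) #height between four grid points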




Luckily, there’s an R function that already does bilinear interpolation for us (akima::bilinear), so here’s the basic R implementation:
library(ggplot2)
library(reshape2)
library(viridis)

#Start fully illuminated: every point begins with a brightness of 1.
volcanoshadow = matrix(1, ncol = ncol(volcano), nrow = nrow(volcano))

volc = list(x = 1:nrow(volcano), y = 1:ncol(volcano), z = volcano)

#Direction of the light source, and the spread of ray elevation angles.
sunangle = 45 / 180 * pi
angle = -90 / 180 * pi
diffangle = 90 / 180 * pi
numberangles = 25
anglebreaks = seq(angle, diffangle, length.out = numberangles)

#No ray needs to travel farther than the diagonal of the grid.
maxdistance = floor(sqrt(ncol(volcano)^2 + nrow(volcano)^2))

for (i in 1:nrow(volcano)) {
  for (j in 1:ncol(volcano)) {
    for (anglei in anglebreaks) {
      for (k in 1:maxdistance) {
        #Step the ray one unit at a time toward the light source.
        xcoord = (i + sin(sunangle) * k)
        ycoord = (j + cos(sunangle) * k)
        if (xcoord > nrow(volcano) ||
            ycoord > ncol(volcano) ||
            xcoord < 0 || ycoord < 0) {
          break
        } else {
          tanangheight = volcano[i, j] + tan(anglei) * k
          #If all four surrounding grid points are below the ray,
          #skip the interpolation and keep propagating.
          if (all(c(volcano[ceiling(xcoord), ceiling(ycoord)],
                    volcano[floor(xcoord), ceiling(ycoord)],
                    volcano[ceiling(xcoord), floor(ycoord)],
                    volcano[floor(xcoord), floor(ycoord)]) < tanangheight)) next
          #Otherwise, test the ray against the bilinearly interpolated surface.
          if (tanangheight < akima::bilinear(volc$x, volc$y, volc$z,
                                             x0 = xcoord, y0 = ycoord)$z) {
            volcanoshadow[i, j] = volcanoshadow[i, j] - 1 / length(anglebreaks)
            break
          }
        }
      }
    }
  }
}

ggplot() +
  geom_tile(data = melt(volcanoshadow), aes(x = Var1, y = Var2, fill = value)) +
  scale_x_continuous("X coord", expand = c(0, 0)) +
  scale_y_continuous("Y coord", expand = c(0, 0)) +
  scale_fill_viridis() +
  theme_void() +
  theme(legend.position = "none")
And here is the volcano dataset, shaded by this method. Note there are no odd fluctuating areas like there are with a derivative-based method--it looks much more realistic.
The volcano dataset shaded with a custom raytracing-based method. Featuring the viridis color palette.
This method works, but it’s unusably slow for all but small datasets like volcano. I’ve written an R package, rayshader, available at a link at the bottom of the page, that implements a much faster, C++-based version of this algorithm that can quickly shade much larger datasets. It's much faster than any implementation in R, and has built-in multicore support (activated simply by setting multicore=TRUE) to speed up the rendering process. Many R packages that deal with spatial data require you to convert to a special data format before you can start using them, but all rayshader needs is a base R matrix of elevation points. Feed in the matrix, tell it what angle(s) the light is coming from, and presto--you've got a globally raytraced relief map. Below is some sample code from the rayshader package.
library(rayshader)
library(magrittr)

usgs_data = raster::raster("usgs.img")

#Turn USGS data into a matrix.
dc_topographic_map = usgs_data %>%
  extract(., raster::extent(.), buffer = 1000) %>%
  matrix(ncol = 10012, nrow = 10012)

#Need to add Washington Monument into the data--load heightmap of
#monument and add it to the topographic map.
washington_monument <- read.csv("washmon.csv", header = FALSE)
lowvals = 3486:3501
highvals = 3519:3534
dc_topographic_map[lowvals, highvals] = dc_topographic_map[lowvals, highvals] +
  as.matrix(washington_monument)

#Crop out the National Mall.
national_mall_map = dc_topographic_map[1700:3700, 3000:4000]

#Light direction, with a 10-degree spread of ray elevation angles
#for soft shadows.
sunang = 45 * pi / 180
angle = 0 / 180 * pi
diffangle = 10 / 180 * pi
numberangles = 30
anglebreaks = seq(angle, diffangle, length.out = numberangles)

single_angle = rayshade(sunangle = sunang,
                        anglebreaks = anglebreaks,
                        heightmap = national_mall_map,
                        zscale = 1,
                        maxsearch = 200,
                        multicore = TRUE)

#The image function and ggplot are slow; save with writePNG
#from the png package.
png::writePNG(t(single_angle), "national_mall_shaded.png")
Amazing work and write-up! I really enjoyed reading and will reference it again in the future.
Really nice writeup. I wrote a shader which achieves the same effect faster in about 30 loc. You can see it here: https://www.shadertoy.com/view/Mscfzs
Thanks for a nice write-up and an interesting topic. I’ve been looking at the rayshader package’s code and I wonder why it declares the suncalc package as a dependency since I couldn’t find any usage of its public functions listed in https://cran.r-project.org/web/packages/suncalc/suncalc.pdf. Did I miss anything?
That dependency is due to a function I haven’t included in the public repository yet (because it isn’t finalized). It allows you to enter a date/time and latitude/longitude, and rayshader will use that information to calculate the location of the sun and shade the map appropriately.
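The idea is roughly the sketch below, built on suncalc’s getSunlightPosition() (the coordinates are the Washington Monument; the finished rayshader function may look different):

library(suncalc)
#Sun position for a given moment at the Washington Monument.
pos = getSunlightPosition(date = as.POSIXct("2018-06-21 14:00:00",
                                            tz = "America/New_York"),
                          lat = 38.8895, lon = -77.0353)
pos$altitude #angle of the sun above the horizon, in radians
pos$azimuth  #direction of the sun in radians (suncalc measures from south)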
Great work! It really looks much better than my lousy implementation. After that blog post I had also implemented a primitive raytracer to drop shadows, but yours is undeniably better.
I’m happy my humble post inspired you to make this great contribution!
Don’t be so hard on yourself! You’re doing great work with metR.
Looks awesome man.
Have you considered adding a blue ambient light into the shadows for the ambient sky light?
Check out my next post:
http://www.tylermw.com/making-beautiful-maps/
There’s certainly a lot of stuff you can do with color with the package now!
This is a very good library. I am from the natural resources world. It would be great if you could adapt your library to calculate direct-diffuse radiation hitting the ground. Several popular growth algorithms in forestry and agriculture rely on that to calculate photosynthesis on a landscape basis. In the past I have used a package in ArcGIS (Solar Analyst) to calculate direct sunlight that uses the topography, but it is tedious to have to leave R to run just that part of the analysis.
Hi Tyler,
Super cool package. It seems that since you wrote this blog post, the operation of the ray_shade (formerly rayshade) method has changed. It no longer produces a “light intensity” map that can be overlaid on a real (color texture) image. Instead it seems to produce a grayscale rendering with both shadows and shading, and the shadows are really “deep” (the resulting image is very dark). Take one of the plotting examples from the README, for instance, and use the ray_shade method on that same data set:
library(magrittr)
library(rayshader)
#Here, I load a map with the raster package.
loadzip = tempfile()
download.file("https://tylermw.com/data/dem_01.tif.zip", loadzip)
localtif = raster::raster(unzip(loadzip, "dem_01.tif"))
unlink(loadzip)
#And convert it to a matrix:
elmat = raster_to_matrix(localtif)
#We use another one of rayshader’s built-in textures:
elmat %>%
sphere_shade(texture = "desert") %>%
plot_map()
revx_elmat <- matrix(1, nrow = nrow(elmat), ncol = ncol(elmat))
for (y in 1:nrow(elmat)) {
for (x in 1:ncol(elmat)) {
revx_elmat[y, x] <- elmat[(nrow(elmat) - y) + 1, x]
}
}
sunang <- 45 * pi/180
angle <- -90 / 180 * pi
diffangle <- 90 / 180 * pi
numberangles <- 25
anglebreaks <- seq(angle, diffangle, length.out = numberangles)
single_angle <- ray_shade(sunangle = sunang,
anglebreaks = anglebreaks,
heightmap = revx_elmat,
zscale = 1,
maxsearch = floor(sqrt(nrow(revx_elmat)^2 + ncol(revx_elmat)^2)),
multicore=TRUE)
png::writePNG(t(single_angle), "rayshader_test.png")
# If I were to superimpose rayshader_test.png on an image of the original desert texture, I would end up with a very dark rendering, signaling a time of day when the sun is very low in the sky, like dusk. How do I get the same effect that you demonstrate above with the national monument using the latest rayshader?
Ok, I have just discovered the effect of the maxsearch parameter. If I set it to a lower value, like 1 or less than 1, I get much more useful results. Also, I missed the note about ‘lambert=FALSE’ and that the sunangle is now specified in degrees, rather than the radians you calculated in the original example. Could you perhaps add a little more explanation for the ray_shade method to the README?
Hi John, the package has changed very significantly since this post was written. Check out rayshader.com to read the documentation for the ray_shade function and see other examples.