Code here was written by Erica Krimmel. Please see Use Case: Download media based on specimen record search for context.
If you are running this code on your own computer, you may wish to create a new folder for the working directory and save this file to it. This code will create two new subdirectories and save media files to them.
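For example, here is a minimal sketch of pointing R at that folder; the path shown is a placeholder, not a real location:
# Optional: set the working directory to the folder where you saved this file
# (the path below is a placeholder; replace it with your own)
# setwd("~/path/to/your/working/directory")
# Confirm where new subdirectories and files will be created
getwd()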
# Load core libraries; install these packages if you have not already
library(ridigbio)
library(tidyverse)
# Load library for making nice HTML output
library(kableExtra)
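If any of these packages are not yet installed, you can install them first, for example:
# Install any missing packages before loading them (uncomment to run)
# install.packages(c("ridigbio", "tidyverse", "kableExtra"))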
First, you need to find all the media records whose media files you are interested in downloading. Do this using the `idig_search_media` function from the ridigbio package, which allows you to search for media records based on data contained in linked specimen records, such as species or collecting locality. You can learn more about this function from the iDigBio API documentation and the ridigbio documentation. In this example, we want to search for images of herbarium specimens of species in the genus Acer that were collected in the United States.
# Edit the fields (e.g. `genus`) and values (e.g. "acer") in `list()`
# to adjust your query and the fields (e.g. `uuid`) in `fields` to adjust the
# columns returned in your results; edit the number after `limit` to adjust the
# number of records you will retrieve images for
records <- idig_search_media(rq = list(genus = "acer",
                                       country = "united states"),
                             fields = c("uuid",
                                        "accessuri",
                                        "rights",
                                        "format",
                                        "records"),
                             limit = 10)
The result of the code above is a data frame called `records`:
uuid | accessuri | rights | format | records |
---|---|---|---|---|
00008f34-9a3a-4ecb-8477-17fad7c56441 | https://bisque.cyverse.org/image_service/image/00-ogmgjEJqzTLFVgViaw3tZS?resize=4000&format=jpeg | CC0 | image/jpeg | 96fa9d9a-7bb3-41d9-8ec8-110265d0b41c |
0000b146-6fd2-4a6a-bf78-9e709cc995e9 | http://mam.ansp.org/image/CM/Fullsize/345/CM345773.jpg | CC0 | image/jpeg | 8af932f6-24c1-4c9c-9963-10b9b584b632 |
0000d1cd-8211-45c6-8dfa-bd4a9a001aad | http://mam.ansp.org/image/TAWES/Fullsize/0005/TAWES0005954.jpg | CC0 | image/jpeg | 0a18cad9-2e85-4ae1-8274-68274b058b61 |
00023a90-46a8-4af4-95ef-be5b8a63fc44 | http://collections.nmnh.si.edu/media/index.php?irn=13725474 | NA | image/jpeg | 7e3e7561-042e-4216-800a-7873fe7c9a2e |
000277e9-659b-4e0c-a61b-c5262d33969b | http://www.pnwherbaria.org/images/jpeg.php?Image=WTU-V-023351.jpg | NA | image/jpeg | ac4d39f6-8775-48ea-b1d1-cd14c6f60e08 |
0003200a-261d-43d5-ae3e-ef39b6f5e2e9 | https://bisque.cyverse.org/image_service/image/00-GUe6j3wkiqQDHA2Q8XFmD7/resize:4000/format:jpeg | BY-NC | image/jpeg | 09157ffe-1a92-4f89-b90f-9c17c2f9dacc |
000495be-df5c-4c01-a951-5656a3fe5ef5 | http://bgbaseserver.eeb.uconn.edu/DATABASEIMAGES/CONN00075691.JPG | BY-NC-SA | image/jpeg | 2642a89e-bcda-4c8c-8b20-3753b37ab990 |
00056e02-50b6-4c62-a975-306cc870dd83 | http://www.mississippiplants.org/images/specimens/MISS0038464/MISS0038464.JPG | BY-NC | image/jpeg | 04764068-999e-4902-b30b-4e1d0d23d214 |
00068ad5-816c-425e-8e81-8a2f808043e8 | http://mam.ansp.org/image/PH/Fullsize/00436/PH00436926.jpg | BY-NC | image/jpeg | 1ac7a558-1450-4ea4-941e-7aed2d95c768 |
00082dc9-d98d-4ced-84b8-3ba5d9d2368a | https://bisque.cyverse.org/image_service/image/00-umMPpFdh5R2CtTUbESrMnJ/resize:4000/format:jpeg | BY-NC | image/jpeg | 14ebb959-fdc2-4788-800b-296fdb6991f6 |
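A preview table like the one above can be rendered with kableExtra, which we loaded earlier; here is a minimal sketch, assuming you are knitting to HTML:
# Preview `records` as a styled HTML table; this is just one way to inspect
# the results and assumes an HTML output format (e.g. R Markdown)
records %>%
  head(10) %>%
  knitr::kable() %>%
  kable_styling(bootstrap_options = c("striped", "condensed"))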
Now that we know which media records are of interest to us, we need to isolate the URLs that link to the actual media files so that we can download them. In this example, we will demonstrate how to download files that are cached on the iDigBio server, as well as the original files hosted externally by the data provider. You likely do not need to download both sets of images, so you can choose to comment out the steps related to either "_idigbio" or "_external" depending on your preference.
# Assemble a vector of iDigBio server download URLs from `records`
mediaurl_idigbio <- records %>%
  mutate(mediaURL = paste("https://api.idigbio.org/v2/media/", uuid, sep = "")) %>%
  select(mediaURL) %>%
  pull()
# Assemble a vector of external server download URLs from `records`
mediaurl_external <- records$accessuri %>%
  str_replace("\\?size=fullsize", "")
These vectors look like this:
mediaurl_idigbio
## [1] "https://api.idigbio.org/v2/media/00008f34-9a3a-4ecb-8477-17fad7c56441"
## [2] "https://api.idigbio.org/v2/media/0000b146-6fd2-4a6a-bf78-9e709cc995e9"
## [3] "https://api.idigbio.org/v2/media/0000d1cd-8211-45c6-8dfa-bd4a9a001aad"
## [4] "https://api.idigbio.org/v2/media/00023a90-46a8-4af4-95ef-be5b8a63fc44"
## [5] "https://api.idigbio.org/v2/media/000277e9-659b-4e0c-a61b-c5262d33969b"
## [6] "https://api.idigbio.org/v2/media/0003200a-261d-43d5-ae3e-ef39b6f5e2e9"
## [7] "https://api.idigbio.org/v2/media/000495be-df5c-4c01-a951-5656a3fe5ef5"
## [8] "https://api.idigbio.org/v2/media/00056e02-50b6-4c62-a975-306cc870dd83"
## [9] "https://api.idigbio.org/v2/media/00068ad5-816c-425e-8e81-8a2f808043e8"
## [10] "https://api.idigbio.org/v2/media/00082dc9-d98d-4ced-84b8-3ba5d9d2368a"
mediaurl_external
## [1] "https://bisque.cyverse.org/image_service/image/00-ogmgjEJqzTLFVgViaw3tZS?resize=4000&format=jpeg"
## [2] "http://mam.ansp.org/image/CM/Fullsize/345/CM345773.jpg"
## [3] "http://mam.ansp.org/image/TAWES/Fullsize/0005/TAWES0005954.jpg"
## [4] "http://collections.nmnh.si.edu/media/index.php?irn=13725474"
## [5] "http://www.pnwherbaria.org/images/jpeg.php?Image=WTU-V-023351.jpg"
## [6] "https://bisque.cyverse.org/image_service/image/00-GUe6j3wkiqQDHA2Q8XFmD7/resize:4000/format:jpeg"
## [7] "http://bgbaseserver.eeb.uconn.edu/DATABASEIMAGES/CONN00075691.JPG"
## [8] "http://www.mississippiplants.org/images/specimens/MISS0038464/MISS0038464.JPG"
## [9] "http://mam.ansp.org/image/PH/Fullsize/00436/PH00436926.jpg"
## [10] "https://bisque.cyverse.org/image_service/image/00-umMPpFdh5R2CtTUbESrMnJ/resize:4000/format:jpeg"
We can use the download URLs assembled in the step above to download each media file. For clarity, we will place the files in two different folders, based on whether we downloaded them from the iDigBio server or an external server. We will name each file based on its unique identifier.
# Create new directories to save media files in
dir.create("jpgs_idigbio")
dir.create("jpgs_external")
# Assemble another vector of file paths to use when saving media downloaded
# from iDigBio
mediapath_idigbio <- paste("jpgs_idigbio/", records$uuid, ".jpg", sep = "")
# Assemble another vector of file paths to use when saving media downloaded
# from external servers; please note that it's probably not a great idea to
# assume these files are all jpgs, as we're doing here...
mediapath_external <- paste("jpgs_external/", records$uuid, ".jpg", sep = "")
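# Optional sketch (not part of the workflow above): rather than assuming every
# file is a jpg, you could derive the extension from the MIME type reported in
# `records$format`; `ext_external` is an illustrative name
# ext_external <- case_when(records$format == "image/jpeg" ~ ".jpg",
#                           records$format == "image/png" ~ ".png",
#                           TRUE ~ ".jpg")
# mediapath_external <- paste("jpgs_external/", records$uuid, ext_external,
#                             sep = "")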
# Add a check to deal with URLs that are broken links
possibly_download.file <- purrr::possibly(download.file,
                                          otherwise = "cannot download")
# Iterate through the action of downloading whatever file is at each
# iDigBio URL
purrr::walk2(.x = mediaurl_idigbio,
             .y = mediapath_idigbio, possibly_download.file)
# Iterate through the action of downloading whatever file is at each
# external URL
purrr::walk2(.x = mediaurl_external,
             .y = mediapath_external, possibly_download.file)
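One caveat: on Windows, download.file can write binary files in text mode and corrupt them. If that happens, you can pass mode = "wb", which walk2 forwards to the download function; a sketch:
# On Windows, request binary transfer so image files are written correctly;
# the extra argument is passed through to download.file (uncomment to use)
# purrr::walk2(.x = mediaurl_idigbio,
#              .y = mediapath_idigbio, possibly_download.file, mode = "wb")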
You should now have two folders: one containing the ten images downloaded from the iDigBio server and one containing the ten downloaded from external servers. Note that we only downloaded ten images here for brevity's sake, but you can retrieve more by increasing the `limit` argument in the first step. Here is an example of one of the images we downloaded: